CN114470755A - Virtual environment picture display method, device, equipment, medium and program product - Google Patents

Virtual environment picture display method, device, equipment, medium and program product

Info

Publication number
CN114470755A
CN114470755A (application number CN202111654055.0A)
Authority
CN
China
Prior art keywords
virtual, virtual object, target, target limb, limb part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111654055.0A
Other languages
Chinese (zh)
Inventor
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Publication of CN114470755A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, an apparatus, a device, a medium, and a program product for displaying a virtual environment picture, belonging to the field of virtual environments. The method is applied to a client that controls a first virtual object and includes the following steps: displaying a second virtual object, the second virtual object being a virtual object having at least one limb part; controlling the first virtual object to shoot at the second virtual object; and, when the first virtual object successfully hits a target limb part of the second virtual object, controlling the target limb part to separate from the second virtual object. The scheme optimizes the simulation effect of the second virtual object when it is attacked.

Description

Virtual environment picture display method, device, equipment, medium and program product
The present application claims priority to Chinese Patent Application No. 202111265126.8, entitled "Method, Apparatus, Device, Medium, and Program Product for Displaying a Virtual Environment Picture", filed on October 28, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of virtual environments, and in particular, to a method, an apparatus, a device, a medium, and a program product for displaying a virtual environment screen.
Background
With the continuous development of the game field, many gameplay designs place monsters in the virtual environment to increase the difficulty of a player's match.
In the related art, the appearance of a monster is usually modeled on the typical appearance of a real-world animal; for example, a "cow monster" is a monster with horns, four limbs, a head, a trunk, and a tail. Monsters designed in this way have a single, fixed appearance and a poor animal simulation effect.
Disclosure of Invention
The application provides a method, an apparatus, a device, a medium, and a program product for displaying a virtual environment picture, which optimize the simulation effect of a second virtual object when the second virtual object is attacked. The technical scheme is as follows:
according to an aspect of the present application, there is provided a method for displaying a virtual environment screen, the method being applied to a client controlling a first virtual object, the method including:
displaying a second virtual object, the second virtual object being a virtual object having at least one limb portion;
controlling the first virtual object to shoot towards the second virtual object;
and controlling the target limb part to be separated from the second virtual object under the condition that the first virtual object successfully hits the target limb part of the second virtual object.
According to another aspect of the present application, there is provided a display apparatus of a virtual environment screen, the apparatus including:
a display module for displaying a second virtual object, the second virtual object being a virtual object having at least one limb portion;
the control module is used for controlling the first virtual object to shoot towards the second virtual object;
and the control module is also used for controlling the target limb part to be separated from the second virtual object under the condition that the first virtual object successfully hits the target limb part of the second virtual object.
According to an aspect of the present application, there is provided a computer device including: a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the display method of the virtual environment screen as above.
According to another aspect of the present application, there is provided a computer-readable storage medium storing a computer program loaded and executed by a processor to implement the display method of a virtual environment screen as above.
According to another aspect of the present application, a computer program product is provided, the computer program product comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the display method of the virtual environment screen provided by the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
by setting the second virtual object as a virtual object having at least one limb part and making the limb parts of the second virtual object separable, the target limb part separates from the second virtual object when the first virtual object hits only that target limb part. The method optimizes the simulation effect of the second virtual object when it is attacked. For example, when the tail of the second virtual object is seriously injured in the virtual environment, separating the tail is preferred in order to prolong its life; this simulates a real-world animal that, when its tail is seriously injured, often sheds the tail to survive.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 illustrates a block diagram of a computer system provided by an exemplary embodiment;
FIG. 2 illustrates a flow chart of a method for displaying a virtual environment screen provided by an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a positional relationship of a camera model to a first virtual object provided by an exemplary embodiment;
FIG. 4 illustrates a schematic diagram of a virtual environment screen provided by an exemplary embodiment;
FIG. 5 illustrates a flowchart of a method for displaying a virtual environment screen according to another exemplary embodiment;
FIG. 6 illustrates a flowchart of a method for displaying a virtual environment screen provided by another exemplary embodiment;
FIG. 7 is a diagram illustrating one frame in an animation of a target limb portion separating along a fly-away trajectory provided by an exemplary embodiment;
FIG. 8 illustrates a diagram of one frame in an animation of a target limb portion separating along a free-fall trajectory provided by an exemplary embodiment;
FIG. 9 illustrates a flowchart of a display method of a virtual environment screen provided by another exemplary embodiment;
FIG. 10 shows a schematic diagram of a target virtual prop provided by an exemplary embodiment;
FIG. 11 is a schematic diagram illustrating a first virtual object lacking a target virtual resource provided by an exemplary embodiment;
FIG. 12 is a flowchart illustrating a display method of a virtual environment screen according to another exemplary embodiment;
FIG. 13 illustrates a schematic diagram of the collision boxes on a first virtual object provided by an exemplary embodiment;
FIG. 14 is a flowchart illustrating a display method of a virtual environment screen according to another exemplary embodiment;
FIG. 15 is a block diagram showing the configuration of a display apparatus of a virtual environment screen provided by an exemplary embodiment;
FIG. 16 shows a block diagram of a computer device provided in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.
It is to be understood that "a number of" herein means one or more, and "a plurality of" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
First, the terms used in the embodiments of the present application are briefly introduced:
Virtual environment: the virtual environment displayed (or provided) when an application runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with a three-dimensional virtual environment.
Optionally, the virtual environment may provide a battle environment for virtual objects. Illustratively, in a battle royale game, at least one virtual object fights a single round in the virtual environment; a virtual object survives by avoiding attacks launched by enemy units and hazards present in the virtual environment (such as the poison circle or swamps), its life in the virtual environment ends when its life value drops to zero, and the virtual object that finally passes through the route in the level is the winner. As another example, in a level-breakthrough game, at least one virtual object fights a round in the virtual environment and obtains the clearance right of the current level by killing monsters, so as to enter the next level or end the current level.
Virtual object: a movable object in the virtual environment. The movable object may be a virtual character, a virtual animal, an animation character, or the like, such as a character or an animal displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
Virtual props: props that virtual objects can use in the virtual environment, including virtual weapons that can change the attribute values of other virtual objects, supply props such as bullets, defense props such as shields, armor, and armored vehicles, and virtual props such as virtual beams and virtual shock waves displayed from the hands when a virtual object releases a skill. Virtual weapons include long-range virtual props such as pistols, rifles, and sniper rifles; close-range virtual props such as daggers, knives, swords, and ropes; and throwing virtual props such as throwing axes, throwing knives, grenades, flash bombs, and smoke bombs.
FIG. 1 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a terminal 120 and a server 140.
The terminal 120 has a client installed and running that supports the virtual environment. The client may be any one of a three-dimensional map program, a side-scrolling shooter game, a side-scrolling adventure game, a side-scrolling level-passing game, a side-scrolling strategy game, a Virtual Reality (VR) application, and an Augmented Reality (AR) program. The terminal 120 is used by a first user, who uses it to control a first virtual object located in the virtual environment to perform activities including, but not limited to, at least one of adjusting body posture, walking, running, jumping, riding, driving, aiming, picking up, using throwing props, and attacking other virtual objects. Illustratively, the first virtual object is a first virtual character, such as a simulated character object or an animation character object. Illustratively, the first user controls the first virtual character to perform activities through UI controls on the virtual environment picture.
The terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142; the memory 142 includes a receiving module 1421, a control module 1422, and a sending module 1423. The receiving module 1421 is configured to receive requests sent by the client, such as a request to shoot at the second virtual object; the control module 1422 is configured to control rendering of the virtual environment picture; and the sending module 1423 is configured to send responses to the client, such as a response to the request to shoot at the second virtual object. The server 140 provides background services for clients that support a three-dimensional virtual environment. Optionally, the server 140 undertakes the primary computing work and the terminal 120 the secondary computing work; or the server 140 undertakes the secondary computing work and the terminal 120 the primary computing work; or the server 140 and the terminal 120 cooperate using a distributed computing architecture.
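As a rough illustration of this request/response exchange, the sketch below models the messages passing between the client and the server's receiving, control, and sending modules. All class, field, and function names here are hypothetical illustrations, not structures disclosed by the application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShootRequest:            # sent by the client on terminal 120
    shooter_id: int            # id of the first virtual object
    target_id: int             # id of the second virtual object
    origin: tuple              # (x, y, z) the shot is fired from
    direction: tuple           # normalized (x, y, z) firing direction

@dataclass
class ShootResponse:           # returned to the client by the sending module
    hit: bool                  # whether any collision box was hit
    hit_part: Optional[str]    # e.g. "virtual_left_hand", or None on a miss
    separated: bool            # whether the hit caused the part to separate

def resolve_hit(request: ShootRequest) -> Optional[str]:
    """Placeholder for the ray test against limb-part collision boxes (see FIG. 13)."""
    return "virtual_left_hand"

def apply_damage(target_id: int, part: str) -> bool:
    """Placeholder for updating the part's hit count / biological value."""
    return True

def handle_shoot(request: ShootRequest) -> ShootResponse:
    """Server-side flow: receiving module -> control module -> sending module."""
    part = resolve_hit(request)
    separated = part is not None and apply_damage(request.target_id, part)
    return ShootResponse(hit=part is not None, hit_part=part, separated=separated)
```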
Optionally, the client installed on the terminal 120 is a client on a different operating system platform (Android or iOS). The terminal 120 may generally refer to one of a plurality of terminals; this embodiment is illustrated only with the terminal 120. Device types of the terminal 120 include at least one of a smartphone, a vehicle-mounted terminal, a wearable device, a smart TV, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer. The following embodiments are illustrated with a terminal that is a smartphone.
Those skilled in the art will appreciate that the number of terminals may be greater or fewer; for example, there may be only one terminal, or dozens, hundreds, or more. The number of terminals and the device types are not limited in the embodiments of the present application.
To optimize the simulation effect of the second virtual object when it is attacked, FIG. 2 shows a method for displaying a virtual environment picture provided by an exemplary embodiment of the present application. The method is illustrated as applied to the terminal 120 shown in FIG. 1 (or the client installed on it), that is, the client controlling the first virtual object, and includes:
Step 220, displaying a second virtual object, the second virtual object being a virtual object having at least one limb part;
First virtual object: the virtual object controlled by the application that displays the virtual environment picture.
Second virtual object: a virtual object having at least one limb part. Illustratively, the second virtual object is a monster having a trunk, four limbs, and a head (the limb parts include the four limbs and the head); illustratively, the second virtual object is a robot having a mechanical trunk, mechanical limbs, and a mechanical head (the limb parts include the mechanical limbs and the mechanical head). Notably, the at least one limb part of the second virtual object is determined based on the creature the second virtual object mimics. For example, a second virtual object mimicking the shape of a snake has a head, a body, and a tail (the limb parts include the head and the tail), and a second virtual object mimicking the shape of a tree has roots, a trunk, and branches (the limb parts include the roots and the branches).
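As a rough illustration, the sketch below shows one way such a creature-dependent limb-part configuration could be represented; the class and field names are illustrative assumptions, not terms defined by the application.

```python
from dataclasses import dataclass, field

@dataclass
class SecondVirtualObject:
    kind: str                 # the creature the object mimics
    limb_parts: list          # the separable limb parts
    attached: set = field(default_factory=set)

    def __post_init__(self):
        self.attached = set(self.limb_parts)   # all parts are present at spawn

# The limb-part set follows the mimicked creature, as in the examples above.
monster = SecondVirtualObject("cow monster", ["left hand", "right hand", "left leg", "right leg", "head"])
snake = SecondVirtualObject("snake", ["head", "tail"])
tree = SecondVirtualObject("tree", ["roots", "branches"])
```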
In one embodiment, the second virtual object is a virtual object controlled by another application; optionally, the second virtual object and the first virtual object belong to two different camps, and the second virtual object has a hostile relationship with the first virtual object. In one embodiment, the second virtual object is a Non-Player Character (NPC), that is, a non-player virtual object set for the current match of the first virtual object in the virtual environment. Optionally, when the first virtual object completes the separation of a preset number of limb parts by killing second virtual objects, the first virtual object enters the next level or ends the current match; optionally, when the first virtual object successfully separates a limb part of the second virtual object, the first virtual object obtains a corresponding score; optionally, when the first virtual object successfully separates a limb part of the second virtual object, the first virtual object obtains a virtual prop.
In one embodiment, the second virtual object is observed from the perspective of the first virtual object. Optionally, a camera model is set in the virtual environment and used to observe the virtual environment from different perspectives, so as to obtain a virtual environment picture including the second virtual object. The perspective refers to the observation angle when the virtual environment is observed from the first-person or third-person perspective of the first virtual object. The embodiments of the present application take observing the virtual environment from the first-person perspective of the first virtual object as an example.
Optionally, the camera model automatically follows the first virtual object in the virtual environment; that is, when the position of the first virtual object in the virtual environment changes, the camera model changes along with it, and the camera model always stays within a preset distance of the first virtual object in the virtual environment. Optionally, during the automatic following, the relative positions of the camera model and the first virtual object may or may not change.
The camera model is a three-dimensional model located around the first virtual object in the virtual environment. When the first-person perspective is adopted, the camera model is located near or at the head of the first virtual object; when the third-person perspective is adopted, the camera model may be located behind the first virtual object and bound to it, or at any position a preset distance away from it, and the first virtual object in the virtual environment can be observed from different angles through the camera model. Optionally, the perspective includes other perspectives besides the first-person and third-person perspectives, such as a top-down perspective; when a top-down perspective is adopted, the camera model may be located above the head of the virtual object, the top-down perspective being a perspective that observes the virtual environment looking down from the air. Optionally, the camera model is not actually displayed in the virtual environment; that is, it does not appear in the virtual environment picture displayed on the user interface.
For example, the camera model is located at any position a preset distance away from the first virtual object. Optionally, one virtual object corresponds to one camera model, and the camera model can rotate with the first virtual object as the rotation center, for example, with any point of the first virtual object as the rotation center. During the rotation, the camera model not only rotates in angle but also shifts in displacement, while the distance between the camera model and the rotation center remains constant; that is, the camera model rotates on the surface of a sphere whose center is the rotation center. The point of the first virtual object may be the head, the trunk, or any point around the first virtual object, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual object, the center of its view angle points in the direction from the point on the sphere where the camera model is located toward the center of the sphere.
Optionally, the camera model may also observe the virtual object at a preset angle from different directions of the virtual object.
Schematically, referring to FIG. 3, a point in the first virtual object 301 is determined as the rotation center 302, and the camera model rotates around this rotation center 302. Optionally, the camera model is configured with an initial position, which is a position above and behind the first virtual object 301 (for example, behind the head). Illustratively, as shown in FIG. 3, the initial position is position 303, and when the camera model rotates to position 304 or position 305, the direction of the view angle of the camera model changes along with the rotation.
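As a rough illustration of this camera placement, the sketch below computes a camera position on a sphere of fixed radius around the rotation center, so that the camera-to-center distance never changes while yaw and pitch rotate; the function and parameter names are illustrative assumptions.

```python
import math

def camera_position(center, radius, yaw_deg, pitch_deg):
    """Point on the sphere around `center`; the view direction points from this point to `center`."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = center[0] + radius * math.cos(pitch) * math.cos(yaw)
    y = center[1] + radius * math.sin(pitch)   # positive pitch lifts the camera above the center
    z = center[2] + radius * math.cos(pitch) * math.sin(yaw)
    return (x, y, z)

# Rotating from an initial position (e.g. position 303 in FIG. 3) to positions 304
# or 305 changes yaw/pitch but keeps the distance to the rotation center at `radius`.
pos = camera_position(center=(0.0, 1.6, 0.0), radius=3.0, yaw_deg=90.0, pitch_deg=30.0)
```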
Schematically, FIG. 4 shows a virtual environment picture observed from the first-person perspective of the first virtual object; the virtual environment picture includes at least one second virtual object 401. The mutant shown in FIG. 4 is the second virtual object.
Step 240, controlling the first virtual object to shoot at the second virtual object;
In one embodiment, the terminal controls the first virtual object to shoot at the second virtual object with a virtual shooting weapon. Optionally, the virtual shooting weapon includes hot weapons, for example, firearms such as pistols, rifles, and sniper rifles; optionally, the virtual shooting weapon includes cold weapons, for example, thrown weapons such as throwing knives, throwing axes, and stones, and mechanical weapons such as slingshots, bows, and crossbows. It should be noted that a virtual shooting weapon is any weapon having a shooting function and is not limited to conventional firearms; with a virtual shooting weapon, the first virtual object can attack the second virtual object without contacting it.
Step 260, controlling the target limb part to separate from the second virtual object when the first virtual object successfully hits the target limb part of the second virtual object.
Target limb part: one of the at least one limb part of the second virtual object. Optionally, the target limb part includes at least one of a virtual left hand, a virtual right hand, a virtual left leg, a virtual right leg, a virtual head, and a virtual tail.
In one embodiment, which parts separate from the second virtual object is further controlled according to the position at which the first virtual object hits the target limb part. Illustratively, if the first virtual object hits the left forearm of the virtual left hand, both the left forearm and the left palm of the target limb part separate from the second virtual object; illustratively, if the first virtual object hits the left upper arm of the virtual left hand, the left upper arm, the left forearm, and the left palm of the target limb part all separate from the second virtual object. In one embodiment, the terminal controls the target limb part to separate from the second virtual object when the first virtual object successfully hits the target limb part of the second virtual object with a virtual shooting weapon. Illustratively, when a bullet fired by the first virtual object at the second virtual object through a virtual rifle successfully hits the target limb part of the second virtual object, the terminal controls the target limb part to separate from the second virtual object.
In one embodiment, when the first virtual object successfully hits the target limb part and the number of times the target limb part has been hit reaches a count threshold, the target limb part is controlled to separate from the second virtual object. Illustratively, the preset count threshold of the target limb part of the second virtual object is 10; in response to the first virtual object successfully hitting the target limb part so that the hit count of the target limb part reaches 10, the terminal controls the target limb part to separate from the second virtual object.
In one embodiment, when the first virtual object successfully hits the target limb part and the biological value of the target limb part reaches a biological value threshold, the target limb part is controlled to separate from the second virtual object; the biological value describes the wear degree of the target limb part. Illustratively, the initial biological value of the target limb part is 100 hp and the preset biological value threshold is 0 hp; in response to the first virtual object successfully hitting the target limb part so that the biological value of the target limb part drops to 0 hp, the terminal controls the target limb part to separate from the second virtual object.
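As a rough illustration, the sketch below combines the two separation conditions just described, the hit-count threshold and the biological-value threshold; the class name, default values, and damage model are illustrative assumptions.

```python
class LimbPart:
    def __init__(self, name, hit_threshold=10, bio_value=100):
        self.name = name
        self.hits = 0
        self.hit_threshold = hit_threshold   # e.g. separate after 10 hits
        self.bio_value = bio_value           # e.g. starts at 100 hp, threshold 0 hp
        self.separated = False

    def on_hit(self, damage):
        """Called when the first virtual object successfully hits this limb part."""
        self.hits += 1
        self.bio_value = max(0, self.bio_value - damage)
        # Either condition from the embodiments above triggers separation.
        if self.hits >= self.hit_threshold or self.bio_value <= 0:
            self.separated = True
        return self.separated

tail = LimbPart("virtual_tail")
while not tail.on_hit(damage=15):
    pass                                     # 7 hits of 15 damage reach 0 hp first
```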
In summary, by setting the second virtual object as a virtual object having at least one limb part and making the limb parts of the second virtual object separable, the target limb part separates from the second virtual object when the first virtual object hits only that target limb part. The method optimizes the simulation effect of the second virtual object when it is attacked; for example, when the tail of the second virtual object is seriously injured in the virtual environment, the virtual tail is preferentially separated to prolong its life, simulating a real-world animal that often sheds its seriously injured tail to survive.
To separate the target limb part from the second virtual object, based on the alternative embodiment shown in FIG. 2, FIG. 5 shows a method for displaying a virtual environment picture provided by an exemplary embodiment of the present application, in which step 260 may be replaced by step 560. The method is illustrated as applied to the terminal 120 shown in FIG. 1 (or the client installed on it) and includes:
Step 560, displaying the virtual environment picture after separation when the first virtual object successfully hits the target limb part of the second virtual object;
The virtual environment picture includes the target limb part and the second virtual object lacking the target limb part. Optionally, step 560 has the two possible implementations described below.
A first possible implementation: in the three-dimensional virtual environment where the first virtual object is located, the terminal replaces the first three-dimensional model of the second virtual object with a second three-dimensional model and adds a three-dimensional model of the target limb part; based on the second three-dimensional model and the three-dimensional model of the target limb part, the terminal displays the virtual environment picture after separation.
The first three-dimensional model is the complete three-dimensional model of the second virtual object, and the second three-dimensional model is a three-dimensional model of the second virtual object lacking the target limb part.
Illustratively, the first three-dimensional model is the complete three-dimensional model of monster A, which has a virtual left hand, a virtual right hand, a virtual left leg, a virtual right leg, and a virtual head; if the target limb part is the virtual left hand, the second three-dimensional model is the three-dimensional model of monster A lacking the virtual left hand compared with the first three-dimensional model.
Optionally, the developer prepares at least six related three-dimensional models of monster A in advance: the complete three-dimensional model, a three-dimensional model lacking the virtual left hand, a three-dimensional model lacking the virtual right hand, a three-dimensional model lacking the virtual left leg, a three-dimensional model lacking the virtual right leg, and a three-dimensional model lacking the virtual head. It is worth mentioning that more three-dimensional models can be derived from these six, such as a three-dimensional model lacking both the virtual left hand and the virtual head.
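As a rough illustration of this first implementation, the sketch below keys the prepared model variants by the set of missing limb parts and looks up the variant to display; the asset names are illustrative assumptions, not assets disclosed by the application.

```python
# Prepared in advance by the developer: one model variant per missing-part set.
MONSTER_A_MODELS = {
    frozenset(): "monster_a_full.mesh",
    frozenset({"left_hand"}): "monster_a_no_left_hand.mesh",
    frozenset({"right_hand"}): "monster_a_no_right_hand.mesh",
    frozenset({"left_leg"}): "monster_a_no_left_leg.mesh",
    frozenset({"right_leg"}): "monster_a_no_right_leg.mesh",
    frozenset({"head"}): "monster_a_no_head.mesh",
    # Combinations can also be prepared, e.g. left hand and head both missing:
    frozenset({"left_hand", "head"}): "monster_a_no_left_hand_no_head.mesh",
}

def model_for(missing_parts):
    """Return the second three-dimensional model for the currently missing parts."""
    return MONSTER_A_MODELS[frozenset(missing_parts)]

assert model_for({"left_hand"}) == "monster_a_no_left_hand.mesh"
```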
A second possible implementation: in the three-dimensional virtual environment where the first virtual object is located, the terminal sets the map of the target limb part on the first three-dimensional model of the second virtual object to a transparent map to obtain a third three-dimensional model, and adds a three-dimensional model of the target limb part; based on the third three-dimensional model and the three-dimensional model of the target limb part, the terminal displays the virtual environment picture after separation.
The first three-dimensional model is the complete three-dimensional model of the second virtual object.
Illustratively, the first three-dimensional model is provided with a map for each limb part of monster A, that is, the virtual left hand, the virtual right hand, the virtual left leg, the virtual right leg, and the virtual head; if the target limb part is the virtual left hand, the terminal sets the map of the virtual left hand on the first three-dimensional model to a transparent map to obtain the third three-dimensional model.
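As a rough illustration of this second implementation, the sketch below replaces only the target part's map (texture) with a fully transparent one, leaving the rest of the model untouched; the data layout and file names are illustrative assumptions.

```python
TRANSPARENT_MAP = "transparent.png"   # a fully transparent texture

def hide_limb_part(first_model_maps, target_part):
    """Return the third model's maps: the target part's map set to transparent."""
    third_model_maps = dict(first_model_maps)   # keep the first model unchanged
    third_model_maps[target_part] = TRANSPARENT_MAP
    return third_model_maps

first_model_maps = {"left_hand": "lh.png", "right_hand": "rh.png", "head": "head.png"}
third_model_maps = hide_limb_part(first_model_maps, "left_hand")
# A standalone three-dimensional model of the target limb part is then added
# at the same position, so the part appears to have come off the body.
```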
Optionally, in both the first and second possible implementations, after "the terminal adds the three-dimensional model of the target limb part", the method further includes: the terminal controls the three-dimensional model of the target limb part to move along a fly-away trajectory, the fly-away trajectory being determined based on the shooting angle of the first virtual object.
Illustratively, the shooting angle of the first virtual object is the angle between the ground and the line connecting the first virtual object and the target limb part: if the target limb part is directly in front of the first virtual object, the shooting angle is 0 degrees, and if the target limb part is obliquely above the first virtual object, the shooting angle is 45 degrees. The fly-away trajectory simulates the trajectory along which the target limb part flies out, according to the shooting angle, after being hit.
Optionally, in both the first and second possible implementations, after "the terminal adds the three-dimensional model of the target limb part", the method further includes: the terminal controls the three-dimensional model of the target limb part to move along a free-fall trajectory. Illustratively, the free-fall trajectory simulates the target limb part falling naturally under the influence of gravity.
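As a rough illustration, the sketch below contrasts the two trajectories in a vertical plane: a fly-away trajectory launched along the shooting angle and then pulled down by gravity, and a free-fall trajectory with no initial velocity; the launch speed and all names are illustrative assumptions.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def fly_away_position(start, shot_angle_deg, speed, t):
    """Projectile-style motion: initial velocity along the shooting angle, then gravity."""
    a = math.radians(shot_angle_deg)
    x = start[0] + speed * math.cos(a) * t
    y = start[1] + speed * math.sin(a) * t - 0.5 * G * t * t
    return (x, y)

def free_fall_position(start, t):
    """Natural fall under gravity with no initial velocity."""
    return (start[0], start[1] - 0.5 * G * t * t)

# Hit at a 45-degree shooting angle: the part flies up and away before falling.
print(fly_away_position(start=(0.0, 2.0), shot_angle_deg=45.0, speed=4.0, t=0.5))
print(free_fall_position(start=(0.0, 2.0), t=0.5))
```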
In summary, a concrete implementation of separating the target limb part is provided: in the three-dimensional virtual environment, the complete three-dimensional model of the second virtual object is replaced with a three-dimensional model of the second virtual object lacking the target limb part, or the map of the target limb part on the first three-dimensional model of the second virtual object is set to a transparent map.
The method also sets the movement trajectory of the separated target limb part to a fly-away trajectory or a free-fall trajectory, further optimizing the simulation effect of the second virtual object when it is attacked.
Based on the alternative embodiment shown in FIG. 2, FIG. 6 shows a method for displaying a virtual environment picture provided by an exemplary embodiment of the present application, in which step 260 is replaced by step 620, step 640, and step 660. The method is illustrated as applied to the terminal 120 shown in FIG. 1 (or the client installed on it) and includes:
Step 620, adding a separation visual effect to the target limb part when the first virtual object successfully hits the target limb part of the second virtual object;
Separation visual effect: a visual effect added to the target limb part to identify that the target limb part is about to separate. In one embodiment, the hit-count threshold of the target limb part is 10; in response to the first virtual object successfully hitting the target limb part of the second virtual object so that the hit count of the target limb part reaches 10, a crack-trace effect is added to the target limb part, identifying that the target limb part is about to separate. In one embodiment, the crack-trace effect is added to the target limb part when the first virtual object successfully hits the target limb part so that the biological value of the target limb part reaches the biological value threshold of 0 hp.
It should be noted that the separation visual effect here identifies that the target limb part is about to separate; before that, other visual effects may exist on the target limb part to indicate its degree of injury. For example, before the hit count of the target limb part reaches the count threshold (or before its biological value reaches the biological value threshold), a blood effect is added to the target limb part to simulate its bleeding.
In one embodiment, the terminal adds the separation visual effect to the target limb part by replacing the map of the target limb part on the first three-dimensional model. In one embodiment, the terminal adds the separation visual effect at the position of the target limb part by tracking that position on the user interface.
Step 640, playing an animation of the target limb part separating along the fly-away trajectory, or an animation of the target limb part separating along the free-fall trajectory, during the separation process;
Optionally, the fly-away trajectory is determined based on the shooting angle of the first virtual object. If the first virtual object shoots horizontally at a target limb part directly in front of it, the target limb part separates along a horizontal trajectory.
Schematically, FIG. 7 shows one frame in an animation in which the virtual right hand of the second virtual object 701 separates along the fly-away trajectory, and FIG. 8 shows one frame in an animation in which the virtual head of the second virtual object 801 separates along the free-fall trajectory.
Step 660, displaying the virtual environment picture after separation is completed.
In one embodiment, the terminal displays the virtual environment picture after separation is completed; the virtual environment picture includes the target limb part and the second virtual object lacking the target limb part.
Step 660 is similar to step 560; reference may be made to the description of step 560, which is not repeated here.
In summary, by adding the separation visual effect to the target limb part and playing an animation of the target limb part separating from the second virtual object during the separation process, another implementation of separating the target limb part from the second virtual object is provided, further optimizing the simulation effect of the second virtual object when it is attacked.
Based on the alternative embodiment shown in FIG. 2, FIG. 9 shows a method for displaying a virtual environment picture provided by an exemplary embodiment of the present application, in which step 281, step 282, and step 283 are further included after step 260. The method is illustrated as applied to the terminal 120 shown in FIG. 1 (or the client installed on it) and includes:
Step 281, displaying a target virtual prop;
Target virtual prop: a virtual prop used to change the force value of a virtual object in the virtual environment. Illustratively, target virtual props include virtual weapons, supply props such as bullets, defense props such as shields, armor, and armored vehicles, and virtual props such as virtual beams and virtual shock waves displayed from the hands when a virtual object releases a skill. Virtual props that can change the attribute values of other virtual objects include long-range virtual props such as pistols, rifles, and sniper rifles, close-range virtual props such as daggers, knives, swords, and ropes, and throwing virtual props such as throwing axes, throwing knives, grenades, flash bombs, and smoke bombs.
In one embodiment, the terminal displays a target virtual prop that matches the target limb part. Illustratively, if the target limb part is the virtual left hand, the target virtual prop is a magazine containing 12 rounds of 7.62 mm ammunition; if the target limb part is the virtual head, the target virtual prop is an AK rifle.
In one embodiment, the terminal switches the display of the target limb part to the target virtual prop. Optionally, the terminal switches the three-dimensional model of the target limb part to the three-dimensional model of the target virtual prop in the three-dimensional virtual environment, and displays the target virtual prop based on the three-dimensional model of the target virtual prop.
In one embodiment, the terminal displays a target virtual prop that the first virtual object currently lacks. Illustratively, the sniper rifle owned by the first virtual object lacks a scope attachment, and the target virtual prop is a 4x scope.
In one embodiment, the terminal displays a target virtual prop randomly determined from a virtual prop set, the virtual prop set including at least one virtual prop. Illustratively, the virtual prop set includes an AK rifle, a 7.62 mm magazine, an AK rifle stock, a scope, and an AK rifle grip, and the terminal randomly determines and displays a target virtual prop from the set, for example, the AK rifle grip.
Schematically, FIG. 10 shows a target virtual prop 1001, which is an AK rifle.
Step 282, controlling the first virtual object to obtain the clearance right of the current level even when the first virtual object lacks the target virtual resource;
In one embodiment, the first virtual object is a virtual object in a level-breakthrough game in which at least one level is set, and there are at least two ways to pass a level. First, the first virtual object can pass the level by using a target virtual resource, the target virtual resource being a consumable virtual resource used by the first virtual object to pass the current level; optionally, the target virtual resource includes one of gold coins, points, and an unconditional pass card. Second, when the first virtual object successfully separates a preset number of limb parts, for example 20 limb parts, the first virtual object obtains the clearance right of the current level; the preset number of limb parts may be limb parts of a preset number of second virtual objects, or limb parts of a preset number of other virtual objects (including second virtual objects).
Schematically, FIG. 11 shows a picture in which the first virtual object lacks the target virtual resource: the number of target virtual resources (gold coins) 1101 owned by the first virtual object is 955, lower than the 1000 required by the current level.
Step 283, increasing the score of the first virtual object in the current match.
The score is used to determine the reward of the first virtual object after it completes the current match.
In one embodiment, after the first virtual object successfully separates the target limb part from the second virtual object, the terminal increases the score of the first virtual object in the current match. After the current match ends, the terminal determines the reward of the first virtual object based on its score. Illustratively, if the score of the first virtual object after the current match is 1500, the reward of the first virtual object is determined to be a 200-point experience bonus.
In summary, after the first virtual object successfully separates the target limb part from the second virtual object, the target virtual prop is displayed on the virtual environment picture, or the first virtual object is controlled to obtain the clearance right, or the score of the first virtual object in the current match is increased. This rewards the first virtual object and further provides a scheme for what happens after the target limb part separates.
Based on the alternative embodiment shown in FIG. 2, FIG. 12 shows a method for displaying a virtual environment picture provided by an exemplary embodiment of the present application, in which step 250 is further included before step 260 and step 270 is further included after step 260. The method is illustrated as applied to the terminal 120 shown in FIG. 1 (or the client installed on it) and includes:
Step 250, determining that the first virtual object successfully hits the target limb part;
In one embodiment, after the first virtual object shoots at the second virtual object, the terminal casts a ray from the position of the first virtual object toward the position of the second virtual object; when the ray hits the collision box of the target limb part, it is determined that the first virtual object successfully hits the target limb part.
Schematically, FIG. 13 shows the collision boxes of the first virtual object 1301: the left hand, the right hand, the left leg, the right leg, and the head of the first virtual object 1301 are each provided with a collision box.
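As a rough illustration of this hit test, the sketch below intersects a ray with axis-aligned collision boxes using the standard slab method; the box layout and names are illustrative assumptions, not the application's actual collision system.

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t >= 0) passes through the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:               # ray parallel to this pair of faces
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:
            return False
    return True

def first_hit_part(origin, direction, part_boxes):
    """part_boxes maps a limb-part name to its (box_min, box_max); returns the hit part or None."""
    for name, (lo, hi) in part_boxes.items():
        if ray_hits_box(origin, direction, lo, hi):
            return name                  # the shot successfully hit this limb part
    return None

boxes = {"head": ((-0.2, 1.6, -0.2), (0.2, 2.0, 0.2))}
print(first_hit_part(origin=(0.0, 1.8, -5.0), direction=(0.0, 0.0, 1.0), part_boxes=boxes))
```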
Step 270, displaying a picture of the second virtual object lacking the target limb part performing activities.
In one embodiment, the second virtual object lacking the target limb part still has the ability to move within the virtual environment. Optionally, compared with the complete second virtual object, the second virtual object lacking the target limb part may lose part of its mobility; for example, a second virtual object lacking its left hand cannot shoot, and a second virtual object lacking its head loses the ability to track the first virtual object and only moves along the movement path it had before separation.
In one embodiment, the second virtual object lacking the target limb part enters a survival countdown mode; for example, a second virtual object lacking its head can only survive 10 s in the virtual environment. In one embodiment, the second virtual object lacking the target limb part automatically repairs the separated target limb part after a preset period; for example, a second virtual object lacking its left hand regrows the left hand after 30 seconds.
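As a rough illustration, the sketch below tracks these post-separation behaviors, the lost abilities, the survival countdown, and automatic regrowth, in one state object; all durations and ability names are illustrative assumptions taken from the examples above.

```python
class SecondObjectState:
    def __init__(self):
        self.missing = set()        # currently separated limb parts
        self.survival_left = None   # seconds left once the countdown starts
        self.regrow_timers = {}     # part -> seconds until it regrows

    def on_part_separated(self, part):
        self.missing.add(part)
        if part == "head":
            self.survival_left = 10.0    # e.g. survives only 10 s without a head
        self.regrow_timers[part] = 30.0  # e.g. the part regrows after 30 s

    def can_shoot(self):
        return "left_hand" not in self.missing

    def tick(self, dt):
        """Advance timers; returns False when the survival countdown expires."""
        for part in list(self.regrow_timers):
            self.regrow_timers[part] -= dt
            if self.regrow_timers[part] <= 0:
                self.missing.discard(part)   # automatic repair of the part
                del self.regrow_timers[part]
        if self.survival_left is not None:
            self.survival_left -= dt
            return self.survival_left > 0
        return True
```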
In summary, the above method provides a way to determine that the first virtual object successfully hits the target limb part of the second virtual object: a ray is used to detect the collision box set on the target limb part. Moreover, the second virtual object lacking the target limb part can still move in the virtual environment, further providing a scheme for what happens after the target limb part separates.
FIG. 14 is a flowchart illustrating a method for displaying a virtual environment picture according to an exemplary embodiment of the present application. The method is illustrated as applied to the terminal shown in FIG. 1 and includes:
Step 1401, start;
The terminal determines to start the process in which the client controls the first virtual object to shoot at zombies.
Step 1402, entering a zombie mode;
The terminal determines that the first virtual object enters a zombie mode, in which a zombie is a virtual object having at least one limb part and the limb parts of the zombie are separable. In one embodiment, the three-dimensional model of the zombie has a collision box on at least one limb part. For example, if the left hand, the right hand, and the head of the zombie are provided with collision boxes, the left hand, the right hand, and the head are separable, while the left leg and the right leg are not.
Step 1403, controlling the first virtual object to shoot at the zombie;
the terminal controls the first virtual object to shoot at the zombie through the virtual shooting weapon. And the terminal controls the first virtual object to send rays to the zombies.
Step 1404, the first virtual object hitting a zombie;
the terminal judges whether the first virtual object hits the zombie. If yes, go to step 1405; if not, step 1403 is entered. In the event that a ray issued by the first virtual object detects a collision box for a zombie (including limb portions and torso), the terminal determines that the first virtual object hits the zombie.
Step 1405, calculating a life value of the zombies;
the terminal calculates a life value of the zombie after being hit by the first virtual object.
Step 1406, the first virtual object hits the target limb part;
the terminal judges whether the first virtual object hits the target limb part, if so, the terminal goes to step 1407; if not, step 1403 is entered. In the case that the ray emitted by the first virtual object detects a collision box of the target limb part, the terminal determines that the first virtual object hits the target limb part.
Step 1407, separating the target limb part from the zombie;
the terminal detaches the target limb portion from the zombie. In one embodiment, the terminal acquires the position of the target limb part on the interface and starts playing the separation animation from the position.
1408, separating the obtained limb parts to reach a preset number;
and the terminal judges whether the separated limb parts reach the preset number. If yes, go to step 1409; if not, step 1403 is entered.
Step 1409, controlling the first virtual object to obtain the reward;
the terminal controls the first virtual object to obtain the reward. Optionally, the reward includes at least one of a virtual weapon and clearance rights.
And step 1410, ending.
The terminal determines to end the process of shooting the first virtual object towards the zombies.
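As a rough illustration, the self-contained sketch below ties the steps of FIG. 14 into one loop: shoot, test the hit, update the life value, separate the hit limb part, and grant the reward once the preset number of parts is reached. Every name and number here is an illustrative assumption.

```python
import random

def zombie_mode(preset_number=3, hit_threshold=10):
    parts = {"left_hand": 0, "right_hand": 0, "head": 0}  # separable parts and hit counts
    life_value = 100
    separated = 0                                  # step 1402: enter the zombie mode
    while separated < preset_number:               # step 1408's loop-back condition
        part = random.choice([*parts, None])       # steps 1403/1404/1406: shoot and hit-test
        if part is None or parts[part] is None:
            continue                               # miss, or part already separated
        life_value -= 5                            # step 1405: recalculate the life value
        parts[part] += 1
        if parts[part] >= hit_threshold:           # step 1407: the part separates
            parts[part] = None                     # a separation animation would play here
            separated += 1
    return "reward: virtual weapon or clearance right"   # steps 1409/1410

print(zombie_mode())
```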
FIG. 15 is a block diagram illustrating the configuration of a display apparatus for a virtual environment screen according to an exemplary embodiment of the present application; the apparatus includes:
a display module 1501, configured to display a second virtual object, where the second virtual object is a virtual object having at least one limb portion;
a control module 1502 for controlling a first virtual object to fire towards a second virtual object;
the control module 1502 is further configured to control the target limb part to be detached from the second virtual object if the first virtual object successfully hits the target limb part of the second virtual object.
In an alternative embodiment, the display module 1501 is further configured to display the virtual environment screen after the separation is completed, where the virtual environment screen includes the target limb portion and the second virtual object lacking the target limb portion.
In an alternative embodiment, the control module 1502 is further configured to replace the first three-dimensional model of the second virtual object with the second three-dimensional model in the three-dimensional virtual environment in which the first virtual object is located, and add the three-dimensional model of the target limb portion; the first three-dimensional model is a complete three-dimensional model of the second virtual object, and the second three-dimensional model is a three-dimensional model of the second virtual object lacking the target limb part.
In an alternative embodiment, the display module 1501 is further configured to display the separated virtual environment picture based on the second three-dimensional model and the three-dimensional model of the target limb portion.
In an optional embodiment, the control module 1502 is further configured to set, in the three-dimensional virtual environment where the first virtual object is located, a map of the target limb portion on the first three-dimensional model of the second virtual object as a transparent map, obtain a third three-dimensional model, and add a three-dimensional model of the target limb portion; and the first three-dimensional model is a complete three-dimensional model of the second virtual object.
In an alternative embodiment, the display module 1501 is further configured to display the separated virtual environment picture based on the third three-dimensional model and the three-dimensional model of the target limb portion.
In an alternative embodiment, the control module 1502 is further configured to control the three-dimensional model of the target limb portion to move along a fly-away trajectory, which is determined based on a firing angle of the first virtual object.
In an alternative embodiment, the control module 1502 is further configured to control the three-dimensional model of the target limb portion to move along the freefall trajectory.
In an alternative embodiment, display module 1501 is further configured to play an animation of the detachment of the target limb portion from the second virtual object during the detachment process.
In an alternative embodiment, the display module 1501 is further configured to play an animation of the target limb portion being separated along a fly-away trajectory during the separation process, the fly-away trajectory being determined based on a shooting angle of the first virtual object, the target limb portion including at least one of a virtual left hand, a virtual right hand, a virtual left leg, a virtual right leg, a virtual head, and a virtual tail.
In an alternative embodiment, display module 1501 is further configured to play an animation of a target limb portion being separated along a free-fall trajectory during the separation process, the target limb portion including at least one of a virtual left hand, a virtual right hand, a virtual left leg, a virtual right leg, a virtual head, and a virtual tail.
In an alternative embodiment, the display module 1501 is further configured to add a distraction visual effect to the target limb portion, where the distraction visual effect is used to identify that the target limb portion is in an imminent distraction state.
In an alternative embodiment, the control module 1502 is further configured to control the target limb portion to be detached from the second virtual object if the first virtual object successfully hits the target limb portion and the number of times the target limb portion is hit reaches a threshold number.
In an alternative embodiment, the control module 1502 is further configured to control the separation of the target limb portion from the second virtual object if the first virtual object successfully hits the target limb portion and the biological value of the target limb portion reaches a biological value threshold, where the biological value is used to describe the degree of wear of the target limb portion.
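The two detachment conditions above can be combined in a single per-hit check. The following sketch is illustrative only; the LimbState structure and the threshold values are assumptions, not part of this application.

```python
from dataclasses import dataclass

@dataclass
class LimbState:
    hit_count: int = 0
    biological_value: float = 100.0  # describes the limb's degree of wear

HIT_THRESHOLD = 3    # assumed tuning values
BIO_THRESHOLD = 0.0

def register_hit(limb: LimbState, damage: float) -> bool:
    """Return True when the target limb should separate from the second virtual object."""
    limb.hit_count += 1
    limb.biological_value -= damage
    return (limb.hit_count >= HIT_THRESHOLD
            or limb.biological_value <= BIO_THRESHOLD)

tail = LimbState()
for dmg in (40.0, 40.0, 40.0):
    if register_hit(tail, dmg):
        print("detach the tail")  # fires on the third hit
        break
```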
In an alternative embodiment, the display module 1501 is further configured to display a target virtual prop, where the target virtual prop is used to change the force value of a virtual object in the virtual environment.
In an alternative embodiment, the control module 1502 is further configured to control the first virtual object to acquire permission to pass the current level if the first virtual object lacks a target virtual resource, where the target virtual resource is a consumable virtual resource used by the first virtual object to pass the current level.
In an alternative embodiment, the control module 1502 is further configured to increase the score of the first virtual object in the current game, the score being used to determine the reward of the first virtual object after the current game is completed.
In an alternative embodiment, display module 1501 is further configured to display the target virtual prop matching the target limb portion.
In an alternative embodiment, display module 1501 is further configured to display the target virtual prop that is currently absent from the first virtual object.
In an alternative embodiment, display module 1501 is further configured to display target virtual props randomly determined in a set of virtual props, where the set of virtual props includes at least one virtual prop.
In an alternative embodiment, the display module 1501 is further configured to switch the display of the target limb portion to the target virtual prop.
In an alternative embodiment, control module 1502 is further configured to switch the three-dimensional model of the target limb portion to the three-dimensional model of the target virtual prop in the three-dimensional virtual environment.
In an alternative embodiment, display module 1501 is further configured to display the target virtual prop based on a three-dimensional model of the target virtual prop.
In an optional embodiment, the apparatus further comprises a determining module 1503, and the determining module 1503 is configured to determine that the first virtual object successfully hits the target limb portion.
In an alternative embodiment, the determination module 1503 is further configured to determine that the first virtual object successfully hits the target limb portion if the ray of the first virtual object issued towards the second virtual object detects a collision box of the target limb portion.
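A common way to implement the ray-versus-collision-box test in this embodiment is the slab method against an axis-aligned bounding box. The following sketch is one possible implementation under that assumption; it is not taken from this application.

```python
def ray_hits_aabb(origin, direction, box_min, box_max) -> bool:
    # Slab method: intersect the ray with each axis-aligned slab of the box
    # and keep the overlapping parameter interval [t_near, t_far].
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:   # ray parallel to the slab and outside it
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

# A shot from the first virtual object toward the limb's collision box:
print(ray_hits_aabb((0, 1, 0), (1, 0, 0), (5, 0.5, -0.5), (6, 1.5, 0.5)))  # True
```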
In an alternative embodiment, the display module 1501 is further configured to display a screen in which the second virtual object lacking the target limb portion performs an activity.
In summary, by configuring the second virtual object as a virtual object having at least one detachable limb portion, the apparatus described above enables the target limb portion to separate from the second virtual object merely when the first virtual object hits that limb portion. The apparatus thereby improves the realism of the second virtual object under attack: for example, when the tail of the second virtual object is severely injured in the virtual environment, the object preferentially sheds the tail to prolong its life, simulating how a severely injured animal in the real world may shed its tail to survive.
Fig. 16 shows a block diagram of a computer device 1600 provided in an exemplary embodiment of the present application. The computer device 1600 may be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The computer device 1600 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, the computer device 1600 includes a processor 1601 and a memory 1602.
The processor 1601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. The memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1602 is used to store at least one instruction for execution by the processor 1601 to implement a method for displaying a virtual environment screen provided by a method embodiment of the present application.
In some embodiments, computer device 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. By way of example, the peripheral device may include: at least one of a radio frequency circuit 1604, a display 1605, a camera assembly 1606, an audio circuit 1607, and a power supply 1609.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602 and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1604 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1605 is a touch display, it can also capture touch signals on or over its surface. The touch signal may be input to the processor 1601 as a control signal for processing. At this point, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1605, disposed on the front panel of the computer device 1600; in other embodiments, there may be at least two displays 1605, each disposed on a different surface of the computer device 1600 or in a folded design; in still other embodiments, the display 1605 may be a flexible display disposed on a curved or folded surface of the computer device 1600. The display 1605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 1605 may be an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode) display.
The camera assembly 1606 is used to capture images or video. Optionally, the camera assembly 1606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic and VR (Virtual Reality) shooting or other fused shooting functions. In some embodiments, the camera assembly 1606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 1601 for processing or to the radio frequency circuit 1604 for voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones located at different positions on the computer device 1600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1607 may also include a headphone jack.
The power supply 1609 is used to power the various components within the computer device 1600. The power supply 1609 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery charged through a wired line or a wireless rechargeable battery charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, computer device 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, optical sensor 1615, and proximity sensor 1616.
The acceleration sensor 1611 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the computer device 1600. For example, the acceleration sensor 1611 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1601 may control the display 1605 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 may also be used to collect motion data of a game or a user.
The gyro sensor 1612 can detect the body orientation and rotation angle of the computer device 1600, and can cooperate with the acceleration sensor 1611 to capture the user's 3D motion on the computer device 1600. Based on the data collected by the gyro sensor 1612, the processor 1601 can implement the following functions: motion sensing (such as changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1613 may be positioned on the side bezel of the computer device 1600 and/or underneath the display 1605. When the pressure sensor 1613 is disposed on the side bezel, it can detect the user's grip signal on the computer device 1600, and the processor 1601 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed underneath the display 1605, the processor 1601 controls operability controls on the UI according to the user's pressure operation on the display 1605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Illustratively, when the ambient light intensity is high, the display brightness of the display screen 1605 is adjusted up; when the ambient light intensity is low, the display brightness of the display screen 1605 is adjusted down. In another embodiment, the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
The proximity sensor 1616, also known as a distance sensor, is typically disposed on the front panel of the computer device 1600. The proximity sensor 1616 is used to measure the distance between the user and the front of the computer device 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the computer device 1600 is gradually decreasing, the processor 1601 controls the display 1605 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1616 detects that the distance is gradually increasing, the processor 1601 controls the display 1605 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 16 is not intended to be limiting of computer device 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The present application further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for displaying a virtual environment screen provided by the foregoing method embodiments.
A computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for displaying a virtual environment picture provided by the foregoing method embodiments.
The serial numbers of the above embodiments of the present application are for description only and do not imply any ranking of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended only to illustrate the alternative embodiments of the present application, and should not be construed as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (20)

1. A method for displaying a virtual environment picture, the method being applied to a client for controlling a first virtual object, the method comprising:
displaying a second virtual object, the second virtual object being a virtual object having at least one limb portion;
controlling the first virtual object to shoot towards the second virtual object;
and controlling the target limb part to be separated from the second virtual object under the condition that the first virtual object successfully hits the target limb part of the second virtual object.
2. The method of claim 1, wherein the controlling the detachment of the target limb portion from the second virtual object comprises:
displaying the virtual environment picture after the separation is finished, wherein the virtual environment picture comprises the target limb part and the second virtual object lacking the target limb part.
3. The method of claim 2, wherein the displaying the virtual environment screen after the separating is completed comprises:
replacing a first three-dimensional model of the second virtual object with a second three-dimensional model in the three-dimensional virtual environment in which the first virtual object is located, and adding the three-dimensional model of the target limb part;
displaying the virtual environment picture after the separation is finished based on the second three-dimensional model and the three-dimensional model of the target limb part;
wherein the first three-dimensional model is a complete three-dimensional model of the second virtual object, and the second three-dimensional model is a three-dimensional model of the second virtual object lacking the target limb portion.
4. The method of claim 2, wherein the displaying the virtual environment screen after the separating is completed comprises:
in the three-dimensional virtual environment where the first virtual object is located, setting the texture map of the target limb part on the first three-dimensional model of the second virtual object to a transparent map to obtain a third three-dimensional model, and adding the three-dimensional model of the target limb part;
displaying the virtual environment picture after the separation is finished based on the third three-dimensional model and the three-dimensional model of the target limb part;
wherein the first three-dimensional model is a complete three-dimensional model of the second virtual object.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
controlling the three-dimensional model of the target limb part to move along a fly-away trajectory, wherein the fly-away trajectory is determined based on the firing angle of the first virtual object.
6. The method of any of claims 2 to 4, further comprising:
playing an animation of the target limb part separating from the second virtual object during the separation process.
7. The method of claim 6, wherein the playing the animation of the separation of the target limb part from the second virtual object during the separation process comprises:
playing an animation of the target limb part separating along a fly-away trajectory during the separation process, wherein the fly-away trajectory is determined based on the firing angle of the first virtual object, and the target limb part comprises at least one of a virtual left hand, a virtual right hand, a virtual left leg, a virtual right leg, a virtual head, and a virtual tail.
8. The method of claim 6, wherein before playing the animation of the detachment of the target limb portion from the second virtual object in the detachment process, further comprising:
adding a separation visual effect to the target limb part, wherein the separation visual effect is used for indicating that the target limb part is in an imminent-separation state.
9. The method according to any one of claims 1 to 4, wherein the controlling the target limb portion to be detached from the second virtual object in case of successful hit of the target limb portion by the first virtual object comprises:
under the condition that the first virtual object successfully hits the target limb part and the number of times of hitting the target limb part reaches a threshold number of times, controlling the target limb part to be separated from the second virtual object;
or,
controlling the target limb part to be separated from the second virtual object under the condition that the first virtual object successfully hits the target limb part and the biological value of the target limb part reaches a biological value threshold, wherein the biological value is used for describing the degree of wear of the target limb part.
10. The method of any of claims 1 to 4, further comprising:
displaying a target virtual prop, wherein the target virtual prop is used for changing the force value of a virtual object in a virtual environment;
or,
under the condition that the first virtual object lacks target virtual resources, controlling the first virtual object to acquire permission to pass the current level, wherein the target virtual resources are consumable virtual resources used by the first virtual object to pass the current level;
or,
increasing the score of the first virtual object in the current game, wherein the score is used for determining the reward of the first virtual object after the current game is completed.
11. The method of claim 10, wherein said displaying a target virtual prop comprises:
displaying a target virtual prop matched with the target limb part;
or,
displaying a target virtual item currently lacking in the first virtual object;
or,
displaying a target virtual item randomly determined in a set of virtual items, the set of virtual items including at least one virtual item.
12. The method of claim 11, wherein displaying the target virtual prop matching the target limb portion comprises:
switching the display of the target limb part to the target virtual prop.
13. The method of claim 12, wherein switching the display of the target limb part to the target virtual prop comprises:
switching the three-dimensional model of the target limb part into a three-dimensional model of the target virtual prop in a three-dimensional virtual environment;
and displaying the target virtual prop based on the three-dimensional model of the target virtual prop.
14. The method of any of claims 1 to 4, further comprising:
determining that the first virtual object successfully hits the target limb portion.
15. The method of claim 14, wherein the determining that the first virtual object successfully hits the target limb portion comprises:
determining that the first virtual object successfully hits the target limb part in the case that a ray emitted by the first virtual object towards the second virtual object detects a collision box of the target limb part.
16. The method of any of claims 1 to 4, further comprising:
displaying a screen on which the second virtual object lacking the target limb part performs an activity.
17. An apparatus for displaying a virtual environment screen, the apparatus comprising:
a display module for displaying a second virtual object, the second virtual object being a virtual object having at least one limb portion;
a control module for controlling a first virtual object to fire towards the second virtual object;
the control module is further configured to control the target limb part to be separated from the second virtual object when the first virtual object successfully hits the target limb part of the second virtual object.
18. A computer device, characterized in that the computer device comprises: a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the method of displaying a virtual environment picture according to any one of claims 1 to 16.
19. A computer-readable storage medium storing a computer program which is loaded and executed by a processor to implement the display method of the virtual environment screen according to any one of claims 1 to 16.
20. A computer program product, characterized in that the computer program product comprises computer instructions stored in a computer-readable storage medium, from which a processor of a computer device reads the computer instructions, the processor executing the computer instructions causing the computer device to execute to implement the display method of the virtual environment picture according to any one of claims 1 to 16.
CN202111654055.0A 2021-10-28 2021-12-30 Virtual environment picture display method, device, equipment, medium and program product Pending CN114470755A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021112651268 2021-10-28
CN202111265126 2021-10-28

Publications (1)

Publication Number Publication Date
CN114470755A (en) 2022-05-13

Family

ID=81507972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111654055.0A Pending CN114470755A (en) 2021-10-28 2021-12-30 Virtual environment picture display method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN114470755A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination