WO2023130800A1 - Method, apparatus, device, and medium for generating a projection of a master control object

Info

Publication number
WO2023130800A1
Authority: WIPO (PCT)
Prior art keywords: model, bone, projection, control object, ontology
Application number: PCT/CN2022/126147
Other languages: English (en), French (fr)
Inventors: 汤捷, 刘舒畅
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2023566962A (patent JP2024518913A)
Priority to US18/221,812 (patent US20230360348A1)
Publication of WO2023130800A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 — Controlling the output signals based on the game progress
    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 — Special adaptations for executing a specific game genre or game mode
    • A63F13/837 — Shooting of targets
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 — Animation
    • G06T13/20 — 3D [Three Dimensional] animation
    • G06T13/40 — 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 — 3D [Three Dimensional] image rendering
    • G06T15/10 — Geometric effects
    • G06T15/20 — Perspective computation
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 — Methods for processing data by generating or executing the game program
    • A63F2300/66 — Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6646 — Methods for processing data by generating or executing the game program for rendering three dimensional images for the computation and display of the shadow of an object or character
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 — Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 — Indexing scheme for editing of 3D models
    • G06T2219/2016 — Rotation, translation, scaling

Definitions

  • The present application relates to the technical field of animation, and in particular to a method, apparatus, device, and medium for generating a projection of a master control object.
  • In the related technology, a projection is added for the user's master control object to improve the realism of the virtual environment.
  • The related technology provides two animation blueprints: one animation blueprint is used to generate the body of the master control object itself, and the other is used to generate the projection of the master control object.
  • In the first blueprint, animation pose matching, bone scaling, and model detail optimization are performed on the skeleton model of the master control object to generate the ontology model of the master control object.
  • In the second blueprint, animation pose matching and model detail optimization are performed on the model of the master control object to generate the projection model of the master control object.
  • The embodiments of the present application provide a method, apparatus, device, and medium for generating the projection of a master control object.
  • The method can generate the projection of the master control object relatively quickly.
  • The technical solution is as follows:
  • A method of generating a master control object projection, comprising:
  • extracting the original skeleton model of the master control object from the ontology animation blueprint of the master control object, where the master control object is an object that observes the virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the master control object in the virtual environment, and the original skeleton model is the model of the master control object without bone deformation;
  • obtaining a projection model based on the original skeleton model, where the projection model is a model for generating a projection of the master control object in the virtual environment; and
  • rendering the projection model to obtain the projection of the master control object.
  • An apparatus for generating a master control object projection, comprising:
  • an extraction module configured to extract the original skeleton model of the master control object from the ontology animation blueprint of the master control object, where the master control object is an object that observes the virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the master control object in the virtual environment, and the original skeleton model is the model of the master control object without bone deformation;
  • an adjustment module configured to obtain a projection model based on the original skeleton model, where the projection model is a model for generating a projection of the master control object in the virtual environment; and
  • a rendering module configured to render the projection model to obtain the projection of the master control object.
  • A computer device includes a processor and a memory. At least one instruction, at least one program, a code set, or an instruction set is stored in the memory and is loaded and executed by the processor to implement the method for generating a master control object projection as described above.
  • A computer-readable storage medium is provided. At least one piece of program code is stored in the storage medium, and the program code is loaded and executed by a processor to implement the method for generating a master control object projection as described in the above aspect.
  • A computer program product or computer program includes computer instructions stored in a computer-readable storage medium.
  • A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method for generating a master control object projection as described in the above aspect.
  • This method does not require an additional set of animation blueprints to obtain the projection of the master control object, which simplifies the process of obtaining the projection and avoids excessive computation, reducing the computational load on the computer device. Even if the computing power of the computer device is weak, the projection of the master control object can still be generated.
  • Fig. 1 is a schematic structural diagram of a computer system provided by an exemplary embodiment of the present application
  • Fig. 2 is a schematic diagram of related technologies provided by an exemplary embodiment of the present application.
  • Fig. 3 is a schematic diagram of an ontology of a master control object provided by an exemplary embodiment of the present application
  • Fig. 4 is a schematic diagram of a projection of a master control object provided by an exemplary embodiment of the present application.
  • Fig. 5 is a schematic diagram of related technologies provided by an exemplary embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a method for generating a master object projection provided by an exemplary embodiment of the present application
  • Fig. 7 is a schematic diagram of an ontology of a master control object provided by an exemplary embodiment of the present application.
  • Fig. 8 is a schematic diagram of a projection of a master control object provided by an exemplary embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for generating a master object projection provided by an exemplary embodiment of the present application.
  • FIG. 10 is a schematic diagram of a rendering projection model provided by an exemplary embodiment of the present application.
  • Fig. 11 is a schematic diagram of an integrated projection model provided by an exemplary embodiment of the present application.
  • Fig. 12 is a schematic diagram of a rendering projection model provided by an exemplary embodiment of the present application.
  • Fig. 13 is a schematic diagram of an integrated firearm model provided by an exemplary embodiment of the present application.
  • Fig. 14 is a schematic flowchart of a method for generating a master object projection provided by an exemplary embodiment of the present application
  • Fig. 15 is a schematic flowchart of a method for adjusting the posture of an original skeleton model provided by an exemplary embodiment of the present application
  • Fig. 16 is a schematic diagram of adjusting sub-skeletons provided by an exemplary embodiment of the present application.
  • Fig. 17 is a schematic diagram of adjusting the parent skeleton provided by an exemplary embodiment of the present application.
  • Fig. 18 is a schematic flowchart of a method for implementing an ontology animation blueprint provided by an exemplary embodiment of the present application
  • Fig. 19 is a schematic diagram of an implementation method of an ontology animation blueprint provided by an exemplary embodiment of the present application.
  • Fig. 20 is a schematic flowchart of a method for implementing an ontology animation blueprint provided by an exemplary embodiment of the present application
  • Fig. 21 is a schematic diagram of an implementation method of an ontology animation blueprint provided by an exemplary embodiment of the present application.
  • Fig. 22 is a schematic diagram of a method for generating a master object projection provided by an exemplary embodiment of the present application.
  • Fig. 23 is a schematic structural diagram of an apparatus for generating a master object projection provided by an exemplary embodiment of the present application.
  • Fig. 24 is a schematic structural diagram of a computer device provided by an exemplary embodiment of the present application.
  • Master control object: the movable object controlled by the user that observes the virtual environment from a first-person perspective.
  • The movable object may be a virtual character, a virtual animal, a cartoon character, and so on; virtual objects displayed in the virtual environment also include characters, animals, plants, oil barrels, walls, stones, and the like.
  • When the virtual environment is three-dimensional, the virtual object is a three-dimensional model created based on skeletal animation technology; each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of its space.
  • When the virtual environment is two-dimensional, the virtual object is a two-dimensional plane model created based on animation technology; each virtual object has its own shape and area in the two-dimensional virtual environment and occupies part of its area.
  • FPS (First-Person Shooting) game: a shooting game in which users play from a first-person perspective.
  • In an FPS game, users can be divided into two hostile camps; the virtual characters controlled by the users are scattered in the virtual world and compete with each other, and the victory condition is to eliminate all virtual characters of the enemy camp.
  • An FPS game is played in rounds; a round lasts from the moment the game starts until the moment the victory condition is met.
  • Fig. 1 shows a schematic structural diagram of a computer system 100 provided by an exemplary embodiment of the present application.
  • the computer system 100 includes: a terminal 120 and a server 140 .
  • An application program supporting a virtual environment is installed on the terminal 120 .
  • the application program can be any one of FPS games, racing games, MOBA (Multiplayer Online Battle Arena, multiplayer online tactical arena) games, virtual reality applications, three-dimensional map programs, and multiplayer gun battle survival games.
  • The user uses the terminal 120 to operate the master control object located in the virtual environment to carry out activities, including but not limited to at least one of: attacking, releasing skills, purchasing props, healing, adjusting body posture, crawling, walking, riding, flying, jumping, driving, picking up, shooting, and throwing.
  • Exemplarily, the master control object is a first virtual character.
  • the terminal 120 is at least one of a smart phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
  • the terminal 120 is connected to the server 140 through a wireless network or a wired network.
  • The server 140 can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms.
  • the server 140 is used to provide background services for applications supporting virtual environments.
  • Optionally, the server 140 undertakes the main computing work and the terminal 120 undertakes the secondary computing work; or the server 140 undertakes the secondary computing work and the terminal 120 undertakes the main computing work; or the server 140 and the terminal 120 perform collaborative computing using a distributed computing architecture.
  • Matching animation pose 201: determine the animation poses of different parts of the master control object.
  • Matching animation pose 201 is used to determine the animation pose of the master control object.
  • An animation pose of the master control object is determined according to its current activity.
  • The activity includes but is not limited to at least one of walking, running, jumping, standing, squatting, lying down, attacking, flying, driving, picking up, shooting, and throwing.
  • Moreover, matching animation pose 201 matches the animation poses of different parts of the master control object. For example, if the master control object runs while shooting, the upper body of the master control object needs to match the shooting animation pose, and the lower body matches the running animation pose.
  • Superimposing animation 202: superimpose the animation poses of different parts of the master control object to obtain the animation pose of the master control object.
  • Superimposing animation 202 is used to superimpose the animations of different parts of the master control object.
  • For example, the animation pose of the upper body of the master control object is superimposed on the animation pose of the lower body to obtain the animation pose of the master control object.
  • Skeleton deformation 203: adjust the skeleton model of the master control object to obtain a deformed skeleton model.
  • Skeleton deformation 203 is used to adjust the bone shape of the master control object.
  • The methods of bone deformation 203 include but are not limited to at least one of changing bone position, changing bone orientation, and scaling bone size.
  • For example, the bones of the head of the master control object are scaled.
  • For another example, when the master control object leans left or right, its skeleton is excessively twisted so that the master control object can complete the leaning action.
  • As shown in FIG. 3, when the master control object 301 is observed from a third-person perspective, the head of the master control object 301 has been scaled away (removed), and the body of the master control object 301 is in an excessively twisted state.
  • Adjust pose 204: adjust the pose of the deformed skeleton model based on the current state of the master control object. Adjust pose 204 is used to refine the pose of the master control object to improve precision. Exemplarily, the posture of the hand of the master control object is adjusted through inverse kinematics to improve the accuracy with which the master control object holds a prop.
  • Finally, an ontology model 205 of the master control object is output; the ontology model 205 is used to generate the body of the master control object.
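For orientation, the five stages above (201-205) can be read as a single pipeline. The following is a minimal sketch of that pipeline, assuming toy data shapes (a pose is just a dict of bone values); none of the function names, bone keys, or angle values come from the patent or from any engine API.

```python
# Minimal sketch of the ontology animation blueprint stages 201-205.
# All names, bone keys, and values are illustrative assumptions.

def match_animation_pose(state):                      # stage 201
    upper = {"arm": 90.0} if "shoot" in state else {"arm": 0.0}
    lower = {"leg": 30.0} if "run" in state else {"leg": 0.0}
    return upper, lower

def superimpose(upper, lower):                        # stage 202
    return {**lower, **upper}

def deform_bones(pose, state):                        # stage 203
    pose = dict(pose, head_scale=0.0)                 # scale the head away from the camera
    if "lean" in state:
        pose["spine"] = 80.0                          # excessive twist so the body can lean out
    return pose

def adjust_pose(pose, state):                         # stage 204
    if "hold_prop" in state:
        pose["hand"] = 15.0                           # e.g. hand adjustment against the held prop
    return pose

state = {"shoot", "run", "lean", "hold_prop"}
ontology_model = adjust_pose(
    deform_bones(superimpose(*match_animation_pose(state)), state), state)
print(ontology_model)                                 # stage 205: drives the body render
```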
  • However, this method of displaying the body of the master control object cannot be directly transplanted to the projection of the master control object.
  • If the above method is used to generate the projection of the master control object, the result is the projection 401 shown in Figure 4.
  • The shape of projection 401 obviously does not match reality. This is because the bone deformation 203 step deforms the skeleton of the master control object:
  • on the one hand, the head of the master control object is scaled away, and on the other hand, the body of the master control object is excessively twisted, so the skeleton of the master control object differs from the actual one.
  • However, the bone deformation 203 step cannot be skipped when generating the ontology model of the master control object.
  • If bone deformation 203 is skipped, then on the one hand the head is not scaled, so the head model and the camera model of the master control object interfere with each other, causing model clipping.
  • On the other hand, when the master control object leans left or right, if the skeleton is not twisted enough, it is difficult to achieve the leaning effect.
  • In the related technology, two different processes are run in the computer device to separately realize the display of the master control object's body and projection.
  • The first process 501 is used to generate the body of the master control object,
  • and the second process 502 is used to generate the projection of the master control object.
  • However, the inventors found that this causes other problems. Two different processes must be computed for a single master control object, and these two processes cannot be optimized by other methods such as frequency reduction. For computer devices with limited computing power, such as mobile phones, it is therefore impossible to display the body and the projection of the master control object simultaneously in this way. Moreover, even for computer devices with strong computing power, thread waiting easily makes the process handling time too long.
  • Fig. 6 shows a schematic flowchart of a method for generating a master object projection provided by an embodiment of the present application.
  • the method can be executed by the terminal 120 shown in FIG. 1, and the method includes the following steps:
  • Step 602: Extract the original skeleton model of the master control object from the ontology animation blueprint of the master control object.
  • The master control object is an object that observes the virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the master control object in the virtual environment, and the original skeleton model is the model of the master control object without bone deformation.
  • Optionally, the master control object may be at least one of a virtual character, a virtual animal, and a cartoon character; this application does not specifically limit it.
  • Exemplarily, the user manipulates the master control object in an FPS game and controls it from a first-person perspective.
  • The ontology animation blueprint includes bone deformation processing.
  • The bone deformation processing is used to prevent the master control object's model from clipping and to modify the skeleton of the master control object.
  • After the bone model of the master control object undergoes bone deformation,
  • for example after the head bone of the master control object is scaled away,
  • the model is no longer suitable for generating the projection of the master control object. Therefore, the bone model before bone deformation is used as the original bone model.
  • Optionally, the ontology animation blueprint includes the steps shown in the embodiment of FIG. 2.
  • The ontology animation blueprint is the process of generating the ontology model of the master control object in the virtual environment (or, equivalently, the process of generating the body of the master control object in the virtual environment).
  • The body of the master control object can be obtained through the ontology animation blueprint.
  • The body of the master control object refers to the master control object itself in the virtual environment.
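Concretely, step 602 amounts to tapping the blueprint's evaluation just before the deformation stage. A hedged sketch, reusing the illustrative stage functions from the pipeline sketch above (so all names remain assumptions):

```python
# Sketch of step 602: cache the pose before bone deformation and keep it
# as the original skeleton model for the projection path.

def evaluate_blueprint(state):
    pose = superimpose(*match_animation_pose(state))
    original_skeleton_model = dict(pose)      # extracted here, pre-deformation
    ontology_model = adjust_pose(deform_bones(pose, state), state)
    return ontology_model, original_skeleton_model

body, original = evaluate_blueprint({"shoot", "run", "lean"})
print(original)   # no head scaling and no twist: safe to use for the projection
```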
  • Step 604: Obtain a projection model based on the original skeleton model, where the projection model is used to generate a projection of the master control object in the virtual environment.
  • In one implementation, the original bone model is directly used as the projection model. It should be noted that, since the required accuracy of the master control object's projection is often not high, the original skeleton model can be used directly as the projection model when terminal performance is the priority.
  • In another implementation, the pose of the original skeleton model is adjusted to obtain the projection model. It should be noted that, if higher projection accuracy is pursued, the original skeleton model can be further adjusted according to the current state of the master control object.
  • The current state includes but is not limited to at least one of attacking, releasing skills, purchasing props, healing, adjusting body posture, crawling, walking, riding, flying, jumping, driving, picking up, shooting, throwing, running, and standing still.
  • For example, when the master control object is in a walking state, the pose of the original skeleton model is adjusted to a walking pose.
  • Optionally, when the state of the master control object changes, the pose of the original skeleton model is adjusted based on the changed state to obtain the projection model.
  • For example, when the master control object changes from a stationary state to a running state, its posture changes from a resting posture to a running posture.
  • Optionally, the pose of the original skeleton model is adjusted using inverse kinematics to obtain the projection model.
  • Since the required accuracy of the master control object's projection is low, pose adjustment operations that only improve projection accuracy can be cancelled.
  • For example, when the master control object turns around at a small angle, a footstep logic adjustment is applied to improve the delicacy of the performance; but because the projection accuracy of the master control object is low, the footstep adjustment would not be visible in the projection even if it were performed, so it can be cancelled to reduce the computation of the computer device.
  • That is, the steps for adjusting the pose of the original skeleton model can be reduced, for example by cancelling the above footstep logic adjustment or the hand detail adjustment, as sketched below.
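As one way to picture the trade-off, the projection path can simply run a shorter adjustment chain than the body path. A toy sketch, with hypothetical adjustment names (footstep logic and hand detail are the examples from the text; the bone keys and offsets are invented):

```python
# Toy sketch: full detail adjustment for the body, a reduced set for the
# projection. Bone keys and offsets are illustrative.

def footstep_logic(pose):      # fine foot placement on small-angle turns
    return dict(pose, foot=pose.get("foot", 0.0) + 2.0)

def hand_detail(pose):         # fine hand pose on the held prop
    return dict(pose, hand=pose.get("hand", 0.0) + 1.0)

def adjust_for_body(pose):
    return hand_detail(footstep_logic(pose))

def adjust_for_projection(pose):
    # projection accuracy is low: both fine adjustments are cancelled,
    # saving computation without visibly changing the shadow
    return pose

pose = {"leg": 30.0, "arm": 90.0}
print(adjust_for_body(pose))         # detailed pose for the body model
print(adjust_for_projection(pose))   # coarse pose is enough for the projection
```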
  • Step 606: Render the projection model to obtain the projection of the master control object.
  • Optionally, the projection of the master control object is the shadow cast by the body of the master control object on the unlit side in the virtual environment.
  • For example, the projection of the master control object is the projection on the ground, on a wall, or on the ceiling in the virtual environment.
  • Optionally, the projection of the master control object may also refer to an image on a reflective surface.
  • For example, the projection of the master control object refers to the image of the master control object on a mirror surface, or the image of the master control object on a water surface.
  • Exemplarily, FIG. 7 shows the projection 701 of the master control object on the ground when the master control object holds a virtual firearm.
  • FIG. 8 shows the projection 801 of the master control object on the ground when the master control object is switching virtual firearms. It can be clearly seen from FIG. 7 and FIG. 8 that the projections 701 and 801 of the master control object match the actual situation, and the projection of the master control object is well reproduced.
  • In a test, obtaining the projection of the master control object with the method of this embodiment of the present application consumed only 0.033 ms on the CPU, whereas obtaining it with the related technology consumed 4.133 ms on the CPU.
  • The method of this embodiment can therefore effectively reduce the computation time of the computer device and improve efficiency.
  • To sum up, when generating the projection of the master control object, only the original skeleton model needs to be extracted from the ontology animation blueprint, and the projection of the master control object can be obtained from the original skeleton model.
  • This method obtains the projection without setting up an additional set of animation blueprints, which simplifies the process of obtaining the projection and avoids excessive computation, reducing the computational load on the computer device; even if the computing power of the computer device is weak, the projection of the master control object can still be generated.
  • FIG. 9 shows a schematic flowchart of a method for generating a master object projection provided by an embodiment of the present application.
  • the method can be executed by the terminal 120 shown in FIG. 1, and the method includes the following steps:
  • Step 901: Before the first bone deformation processing of the ontology animation blueprint, extract the original skeleton model from the ontology animation blueprint.
  • The first bone deformation processing is used to scale the bones in the region where the camera model of the master control object overlaps the original skeleton model. Since the master control object observes the virtual environment from the first-person perspective, the camera model must be placed at the head of the master control object to achieve the first-person effect. However, the head of the master control object also has a model, so the two models would clip through each other. Therefore, the bones of the original skeleton model in the overlap region are scaled so that the camera model and the original skeleton model no longer overlap, avoiding model clipping.
  • Optionally, the first bone deformation processing is implemented through programming, for example as sketched below.
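One plausible form of that program is a pass over the skeleton that zeroes the scale of any bone whose region overlaps the camera volume. A minimal sketch, assuming toy positions and a spherical overlap test (neither is specified by the patent):

```python
# Sketch of the first bone deformation processing: scale away every bone
# that falls inside the first-person camera's clearance radius.

def first_bone_deformation(bone_centers, bone_scales, camera_pos, radius=0.3):
    for bone, center in bone_centers.items():
        dist = sum((c - p) ** 2 for c, p in zip(center, camera_pos)) ** 0.5
        if dist < radius:            # bone would clip through the camera
            bone_scales[bone] = 0.0  # scale it to zero to avoid model clipping
    return bone_scales

centers = {"head": (0.0, 1.7, 0.0), "spine": (0.0, 1.2, 0.0)}
scales = {"head": 1.0, "spine": 1.0}
print(first_bone_deformation(centers, scales, camera_pos=(0.0, 1.7, 0.05)))
# -> {'head': 0.0, 'spine': 1.0}: only the head bone overlaps the camera
```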
  • Step 902: Based on the current state of the master control object, adjust the pose of the original skeleton model to obtain a projection model.
  • Optionally, inverse kinematics is used to adjust the pose of the original skeleton model to obtain the projection model.
  • Optionally, when the state of the master control object changes, the pose of the original skeleton model is adjusted based on the changed state to obtain the projection model.
  • For example, when the master control object changes from a stationary state to a running state, its posture changes from a resting posture to a running posture.
  • Step 903: Replace the projection model with an integrated projection model.
  • Optionally, the projection model includes at least two component models, and the at least two component models include models of different parts of the master control object.
  • For example, the component models include at least one of a head model, a torso model, an upper-limb model, and a lower-limb model. Each component model generates a draw call when it is rendered, so multiple draw calls are generated in total, which puts greater rendering pressure on the GPU (Graphics Processing Unit) of the computer device.
  • For example, if the component models include a head model, a torso model, an upper-limb model, and a lower-limb model, at least four draw calls must be submitted, which greatly increases the burden on the GPU.
  • Therefore, when the projection model includes at least two component models covering different parts of the master control object, the projection model is replaced with an integrated projection model. The integrated projection model is a low-poly model obtained by merging the at least two component models;
  • the face count of the integrated projection model is lower than that of the projection model.
  • The integrated projection model does not consist of component models but of one complete model. Exemplarily, FIG. 11 shows the integrated projection model 1101 of the master control object.
  • In this way, the projection of the master control object can be obtained by submitting only one draw call, which reduces the burden on the GPU, as illustrated below.
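The draw-call arithmetic can be made concrete with a toy model in which a mesh is just a list of triangles and each submitted mesh costs one draw call; the triangle counts are made up for illustration:

```python
# Toy illustration of step 903: four component meshes cost four draw
# calls; one merged (integrated) mesh costs a single draw call.

def draw_calls(meshes):
    return len(meshes)  # one submission per mesh

head, torso, upper_limbs, lower_limbs = (["tri"] * n for n in (80, 200, 120, 120))
components = [head, torso, upper_limbs, lower_limbs]

# In practice the merge would also re-bake the mesh to a lower face count;
# here we only concatenate, to show the draw-call saving.
integrated = [t for mesh in components for t in mesh]

print(draw_calls(components))    # 4: one draw call per component model
print(draw_calls([integrated]))  # 1: a single draw call for the integrated model
```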
  • Optionally, when the master control object holds a virtual item,
  • the model of the virtual item held by the master control object can likewise be replaced with an integrated item model to reduce the rendering pressure on the GPU.
  • For example, a virtual firearm also includes at least two item component models (e.g., a stock model, a silencer model, a grip model), and the model of the virtual firearm needs to be replaced as well;
  • as shown in FIG. 13, the firearm model including at least two item component models is replaced with an integrated firearm model 1301.
  • Step 904: Render the integrated projection model to obtain the projection of the master control object.
  • Optionally, the projection of the master control object is the shadow on the unlit side in the virtual environment.
  • For example, the projection of the master control object is the projection on the ground, on a wall, or on the ceiling in the virtual environment.
  • Optionally, the projection of the master control object may also refer to an image on a reflective surface.
  • For example, the projection of the master control object refers to the image of the master control object on a mirror surface, or the image of the master control object on a water surface.
  • To sum up, when this embodiment of the present application generates the projection of the master control object, the original bone model is extracted from the animation blueprint before the first bone deformation processing, ensuring that the head is present in the generated projection.
  • This method does not require an additional set of animation blueprints to obtain the projection of the master control object, which simplifies the process of obtaining the projection
  • and avoids excessive computation, reducing the computational load on the computer device.
  • Moreover, replacing the projection model with the integrated projection model reduces the number of draw calls submitted by the computer device, which reduces the working pressure of the GPU.
  • Fig. 14 shows a schematic flowchart of a method for generating a master object projection provided by an embodiment of the present application.
  • the method can be executed by the terminal 120 shown in FIG. 1, and the method includes the following steps:
  • Step 1401: Before the second bone deformation processing of the ontology animation blueprint, extract the original skeleton model from the ontology animation blueprint.
  • The second bone deformation processing is used to twist the original bone model when the master control object is in a target state.
  • For example, when the master control object leans left or right, its skeleton model is excessively twisted. This is so that the master control object can lean out from behind an obstacle and observe other objects, which is why the second bone deformation processing is required.
  • If the projection were generated from the deformed model, the final projection would also be excessively twisted; as shown in Figure 4, the projection 401 of the master control object is in an excessively twisted state, which a normal human body cannot achieve. Therefore, in this application, the original skeleton model is extracted from the ontology animation blueprint before the second bone deformation processing.
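The second deformation can be pictured as distributing an exaggerated lean angle over the spine chain. A hedged sketch; the bone names, per-bone split, and gain factor are invented, since the patent only specifies that the twist is deliberately excessive:

```python
# Sketch of the second bone deformation processing: over-twist the spine
# when the master control object leans, so the first-person body clears
# the obstacle. This is exactly what makes the deformed model wrong for
# the projection.

def second_bone_deformation(pose, lean_angle_deg, gain=1.8):
    spine_chain = ["spine_01", "spine_02", "spine_03"]
    for bone in spine_chain:
        pose[bone] = pose.get(bone, 0.0) + gain * lean_angle_deg / len(spine_chain)
    return pose

print(second_bone_deformation({}, lean_angle_deg=30.0))
# each spine bone gets 18 degrees: 54 degrees in total for a 30-degree lean
```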
  • Step 1402: Based on the current state of the master control object, adjust the pose of the original skeleton model to obtain a projection model.
  • Optionally, the pose of the original skeleton model is adjusted using inverse kinematics to obtain the projection model.
  • Optionally, when the state of the master control object changes, the pose of the original skeleton model is adjusted based on the changed state to obtain the projection model.
  • For example, when the master control object changes from a stationary state to a running state, its posture changes from a resting posture to a running posture.
  • Step 1403: Replace the projection model with an integrated projection model.
  • Optionally, the projection model includes at least two component models, and the at least two component models include models of different parts of the master control object.
  • For example, the component models include at least one of a head model, a torso model, an upper-limb model, and a lower-limb model.
  • When the projection model includes at least two component models covering different parts of the master control object, the projection model is replaced with an integrated projection model.
  • The integrated projection model is a low-poly model obtained by merging the at least two component models, and its face count is lower than that of the projection model.
  • The integrated projection model does not consist of component models but of one complete model.
  • Exemplarily, FIG. 11 shows the integrated projection model 1101 of the master control object.
  • Optionally, when the master control object holds a virtual item, the model of the virtual item can be replaced with an integrated item model to reduce the rendering pressure on the GPU.
  • For example, a virtual firearm also includes at least two item component models, and the model of the virtual firearm needs to be replaced with an integrated firearm model in order to render the projection of the firearm; as shown in FIG. 13, the firearm model including at least two item component models is replaced with an integrated firearm model 1301.
  • Step 1404: Render the integrated projection model to obtain the projection of the master control object.
  • Optionally, the projection of the master control object is the shadow on the unlit side in the virtual environment.
  • For example, the projection of the master control object is the projection on the ground, on a wall, or on the ceiling in the virtual environment.
  • Optionally, the projection of the master control object may also refer to an image on a reflective surface.
  • For example, the projection of the master control object refers to the image of the master control object on a mirror surface, or the image of the master control object on a water surface.
  • To sum up, in this embodiment the original bone model is extracted from the animation blueprint before the second bone deformation processing, ensuring that the shape of the generated projection is correct.
  • This method does not require an additional set of animation blueprints to obtain the projection of the master control object, which simplifies the process of obtaining the projection
  • and avoids excessive computation, reducing the computational load on the computer device.
  • Moreover, replacing the projection model with the integrated projection model reduces the number of draw calls submitted by the computer device, which reduces the working pressure of the GPU.
  • The following briefly introduces the method for adjusting the pose of the original skeleton model involved in step 902 or step 1402 above.
  • The embodiments of the present application use inverse kinematics as an example for illustration.
  • In other embodiments, the pose of the original skeleton model can be adjusted by other methods, which this application does not specifically limit.
  • Fig. 15 shows a schematic flowchart of a method for adjusting the pose of an original skeleton model provided by an embodiment of the present application.
  • This method can be executed by the terminal 120 shown in FIG. 1, and the method shown in FIG. 15 is used to further fine-tune the original skeleton model to improve the accuracy of the projection model.
  • Exemplarily, the ontology model of the master control object needs to realize the coordinated linkage of the feet and legs, while the projection model of the master control object does not need to distinguish between the feet and legs when the accuracy requirements are not high.
  • When higher accuracy is pursued, the projection model does need to distinguish between the foot and the leg; in this application, the movement of the leg is determined by the movement of the foot.
  • The method shown in Figure 15 includes the following steps:
  • Step 1501: Based on the current state of the master control object, determine the target position of the child bone.
  • The original bone model includes child bones and parent bones corresponding to the child bones, and a child bone is at the end of a bone chain.
  • For example, the child bone can be the hand of a humanoid character,
  • and the parent bone is the arm of the humanoid character.
  • Or, a child bone can be a humanoid character's foot,
  • and the parent bone is the humanoid character's leg.
  • The current state includes but is not limited to at least one of attacking, releasing skills, purchasing props, healing, adjusting body posture, crawling, walking, riding, flying, jumping, driving, picking up, shooting, throwing, running, and standing still.
  • The target position is the position that the child bone needs to reach in the current state.
  • For example, when the master control object is in a walking state,
  • the child bone refers to the foot of the master control object; when the foot of the master control object is in a lifted state,
  • the target position is the landing point of the foot of the master control object.
  • For another example, when the master control object is walking up stairs holding the railing,
  • the child bone refers to the hand of the master control object; when the hand of the master control object is lifted, the target position is the landing point of the hand of the master control object.
  • Step 1502: Determine the position of the child bone according to the target position.
  • Exemplarily, determining the position of the child bone includes the following sub-steps.
  • For example, the master control object is in the walking state, which can be simply divided into a lifted state and a falling state;
  • step 1502 is then used to determine the position of the foot (child bone) in the falling state.
  • Sub-step 1: obtain the first vector. The first endpoint of a child bone is the endpoint of the child bone away from its tip.
  • The first vector is the vector from the first endpoint 1603 of the child bone 1602 to the tip 1605 of the child bone.
  • For example, if the child bone is the foot,
  • the first endpoint is the heel of the foot,
  • and the tip of the child bone is the toe of the foot.
  • Then the first vector, pointing from the heel to the toe in the lifted state, is obtained.
  • Sub-step 2: obtain the second vector. The second vector is the vector from the first endpoint 1603 to the target position 1604.
  • For example, the first endpoint is the heel of the foot,
  • and the target position is the target landing position of the toe.
  • Then the second vector, pointing from the heel to the target landing position in the lifted state, is obtained.
  • Sub-step 3: the child bone 1602 is rotated (for example, counterclockwise) by the angle between the first vector and the second vector, which gives the position of the child bone 1602.
  • Here, the first vector is the vector from the heel to the toe in the lifted state,
  • and the second vector is the vector from the heel to the target position in the lifted state.
  • That is, sub-step 3 rotates the foot (child bone) based on the angle between the first vector and the second vector, obtaining the rotated foot, i.e., the foot in the falling state.
  • Step 1503: Determine the position of the parent bone according to the target position and the position of the child bone.
  • Exemplarily, determining the position of the parent bone includes the following sub-steps.
  • Sub-step 1: obtain the third vector. The second endpoint of the parent bone is the endpoint of the parent bone away from the child bone.
  • The third vector is the vector from the second endpoint 1606 of the parent bone 1601 to the tip 1605 of the child bone.
  • For example, if the parent bone is the calf,
  • the second endpoint is the knee,
  • and the tip of the child bone is the toe.
  • Then sub-step 1 obtains the third vector pointing from the knee to the toe (before rotation) in the lifted state.
  • Sub-step 2: obtain the fourth vector. The fourth vector is the vector from the second endpoint 1606 to the target position 1604.
  • For example, the target position is the target landing position of the toe,
  • and the second endpoint is the knee;
  • then sub-step 2 obtains the fourth vector pointing from the knee to the target landing position.
  • Sub-step 3: the parent bone 1601 is rotated (for example, counterclockwise) by the angle between the third vector and the fourth vector, which gives the position of the parent bone 1601.
  • Here, the third vector is the vector from the knee to the toe in the lifted state,
  • and the fourth vector is the vector from the knee to the target position in the lifted state.
  • That is, sub-step 3 rotates the calf (parent bone) based on the angle between the third vector and the fourth vector, obtaining the rotated calf, i.e., the calf in the falling state.
  • Step 1504: Adjust the pose of the original bone model according to the positions of the child bone and the parent bone to obtain the projection model.
  • After the positions of the child bone and the parent bone are determined, the original bone model can be adjusted according to those positions to complete its pose adjustment. A minimal worked example of steps 1502 and 1503 follows below.
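To make steps 1502 and 1503 concrete, here is a minimal 2D worked example of the two rotations (foot first, then calf). The positions are invented and a real implementation would work on 3D bone transforms, but the vector-angle logic is the one described above:

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])

def rot(v, ang):  # rotate a 2D vector by ang radians
    c, s = math.cos(ang), math.sin(ang)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def signed_angle(a, b):  # angle that rotates vector a onto vector b
    return math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])

# Lifted state: knee->heel is the calf (parent bone 1601), heel->toe is
# the foot (child bone 1602); target 1604 is where the toe should land.
knee, heel, toe = (0.0, 0.6), (0.1, 0.1), (0.35, 0.15)
target = (0.4, 0.0)

# Step 1502: rotate the foot so the first vector (heel->toe) aligns with
# the second vector (heel->target).
a = signed_angle(sub(toe, heel), sub(target, heel))
toe = add(heel, rot(sub(toe, heel), a))

# Step 1503: rotate the calf so the third vector (knee->toe) aligns with
# the fourth vector (knee->target); the heel and toe move with the calf.
a = signed_angle(sub(toe, knee), sub(target, knee))
heel = add(knee, rot(sub(heel, knee), a))
toe = add(knee, rot(sub(toe, knee), a))

print("falling-state heel and toe:", heel, toe)
```

Both rotations preserve the bone lengths; only the angle between the current vector and the target vector is consumed, which is why the toe ends up on the knee-to-target ray.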
  • To sum up, this embodiment provides a method for adjusting the pose of the original skeleton model so that the action and pose of the master control object fit the current state; the pose of the original skeleton model thus comes closer to the actual situation, and the resulting projection model better reflects the state of the master control object.
  • FIG. 18 shows a schematic flowchart of an implementation method for an ontology animation blueprint provided by an embodiment of the present application.
  • the method can be executed by the terminal 120 shown in FIG. 1, and the method includes the following steps:
  • Step 1801: Determine the initial animation pose of the master control object.
  • Optionally, the initial animation pose is determined according to the current state of the master control object.
  • Optionally, the skeleton model of the master control object includes at least two component models, and the at least two component models include models of different parts of the master control object.
  • For example, the skeleton model of the master control object includes an upper-body model and a lower-body model.
  • To determine the initial animation pose, the animation poses of the at least two component models are determined in sequence and then superimposed to obtain the initial animation pose.
  • For example, if the master control object runs while shooting, the upper body of the master control object needs to match the shooting animation pose,
  • and the lower body of the master control object matches the running animation pose;
  • the animation poses of the upper body and lower body are superimposed to obtain the initial animation pose of the master control object, as sketched below.
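A toy sketch of this superimposition, in which the upper-body pose simply overrides the upper-body bones on top of the lower-body pose; the bone names, angles, and the set of upper-body bones are invented for illustration:

```python
# Sketch of step 1801: match a pose per component model, then superimpose.

UPPER_BODY = {"spine", "arm_l", "arm_r", "head"}

def superimpose_poses(lower_pose, upper_pose):
    pose = dict(lower_pose)
    pose.update({b: v for b, v in upper_pose.items() if b in UPPER_BODY})
    return pose

run_pose = {"leg_l": 35.0, "leg_r": -20.0, "spine": 5.0}     # lower body: running
shoot_pose = {"arm_l": 80.0, "arm_r": 85.0, "spine": 12.0}   # upper body: shooting

print(superimpose_poses(run_pose, shoot_pose))  # the initial animation pose
```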
  • Step 1802: Substitute the initial animation pose into the skeleton model of the master control object to obtain the original skeleton model.
  • That is, the skeleton model of the master control object is adjusted according to the initial animation pose to obtain the original skeleton model.
  • From the original skeleton model, a projection model 1903 can be obtained; the projection model 1903 is used to generate the projection of the master control object in the virtual environment.
  • Step 1803: Perform bone deformation processing on the original bone model to obtain the deformed bone model.
  • Optionally, the first bone deformation processing is performed on the original bone model.
  • The first bone deformation processing is used to scale the bones in the region where the camera model of the master control object overlaps the original skeleton model, for example the bones of the head region of the master control object.
  • Optionally, the second bone deformation processing is performed on the original bone model.
  • The second bone deformation processing is used to twist the original bone model when the master control object is in a target state; for example, when the master control object leans left or right, its skeleton model is excessively twisted.
  • Step 1804: Based on the current state, adjust the posture of the deformed bone model to obtain the ontology model of the master control object.
  • Optionally, the posture of the deformed bone model is adjusted using inverse kinematics to obtain the ontology model of the master control object.
  • IK (Inverse Kinematics): inverse kinematics solving.
  • AO (Aim Offset): aiming offset.
  • Some IK operations must be performed after the bone deformation processing, some must be performed before it, and some can be performed either before or after.
  • Whether IK and AO are processed before or after the bone deformation depends on the specific project.
  • Step 1805: Render the ontology model to obtain the body of the master control object.
  • Optionally, the ontology model includes at least two component models, and the at least two component models include models of different parts of the master control object.
  • For example, the ontology model includes a head model, a torso model, an upper-limb model, and a lower-limb model.
  • When rendering the ontology model, the at least two component models are rendered separately to obtain their respective rendering results, and the rendering results of the at least two component models are composited to obtain the body of the master control object.
  • For example, the head model, torso model, upper-limb model, and lower-limb model are rendered separately to obtain their corresponding rendering results, and these four rendering results are composited to obtain the body of the master control object.
  • To sum up, this embodiment provides an implementation of the ontology animation blueprint that obtains the body of the master control object; in addition, the original bone model in the ontology animation blueprint can be used to obtain the projection of the master control object.
  • In order to reduce repeated computation, the steps that come after the bone deformation processing in the ontology animation blueprint are performed before the bone deformation processing as far as possible.
  • Based on this, another implementation of the ontology animation blueprint is provided.
  • That is, steps that come after the bone deformation processing can be moved before it without affecting the body of the master control object.
  • Exemplarily, the operation corresponding to step 1804 is moved to be performed before step 1803.
  • Fig. 20 shows a schematic flowchart of a method for implementing an ontology animation blueprint provided by an embodiment of the present application.
  • the method can be executed by the terminal 120 shown in FIG. 1, and the method includes the following steps:
  • Step 2001: Determine the initial animation pose of the master control object.
  • Optionally, the initial animation pose is determined according to the current state of the master control object.
  • Optionally, the skeleton model of the master control object includes at least two component models, and the at least two component models include models of different parts of the master control object.
  • For example, the skeleton model of the master control object includes an upper-body model and a lower-body model.
  • To determine the initial animation pose, the animation poses of the at least two component models are determined in sequence and then superimposed to obtain the initial animation pose.
  • For example, if the master control object runs while shooting, the upper body of the master control object needs to match the shooting animation pose,
  • and the lower body of the master control object matches the running animation pose;
  • the animation poses of the upper body and lower body are superimposed to obtain the initial animation pose of the master control object.
  • Step 2002: Substitute the initial animation pose into the skeleton model of the master control object to obtain a skeleton pose model.
  • Step 2003: Based on the current state, adjust the pose of the skeleton pose model to obtain the original skeleton model.
  • Optionally, the pose of the skeleton pose model is adjusted using inverse kinematics to obtain the original skeleton model.
  • As shown in Fig. 21, the original skeleton model 2102 extracted from the ontology animation blueprint 2101 has already been pose-adjusted, so the original skeleton model 2102 can be used directly as the projection model 2103, which is used to generate the projection of the master control object in the virtual environment. A sketch of this reordering follows below.
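Compared with the first pipeline sketch, the only change in this variant is the order of the last two stages. A hedged sketch, again reusing the illustrative stage functions from the first pipeline sketch (all names remain assumptions):

```python
# Sketch of the Fig. 20/21 variant: pose adjustment (step 2003) runs
# before bone deformation (step 2004), so the tapped model is already
# adjusted and can serve directly as the projection model.

def evaluate_blueprint_reordered(state):
    pose = superimpose(*match_animation_pose(state))
    pose = adjust_pose(pose, state)               # step 2003: adjustment first
    projection_model = dict(pose)                 # used as-is for the projection
    ontology_model = deform_bones(pose, state)    # step 2004: deformation only for the body
    return ontology_model, projection_model

print(evaluate_blueprint_reordered({"shoot", "run", "hold_prop"})[1])
```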
  • Step 2004: Perform bone deformation processing on the original bone model to obtain the deformed bone model.
  • Optionally, the first bone deformation processing is performed on the original bone model.
  • The first bone deformation processing is used to scale the bones in the region where the camera model of the master control object overlaps the original skeleton model, for example the bones of the head region of the master control object.
  • Optionally, the second bone deformation processing is performed on the original bone model.
  • The second bone deformation processing is used to twist the original bone model when the master control object is in a target state; for example, when the master control object leans left or right, its skeleton model is excessively twisted.
  • Step 2005: Render the ontology model to obtain the body of the master control object.
  • Optionally, the ontology model includes at least two component models, and the at least two component models include models of different parts of the master control object.
  • For example, the ontology model includes a head model, a torso model, an upper-limb model, and a lower-limb model.
  • When rendering the ontology model, the at least two component models are rendered separately to obtain their respective rendering results, which are then composited.
  • For example, the head model, torso model, upper-limb model, and lower-limb model are rendered separately to obtain their corresponding rendering results, and these four rendering results are composited to obtain the body of the master control object.
  • In this embodiment, operations that come after the bone deformation processing are moved before it, but with one premise: moving them must not affect the body and/or the projection of the master control object.
  • Which post-deformation operations can be moved is determined by actual requirements. Exemplarily, under requirement A the pose adjustment operation can be moved before the bone deformation processing, while under requirement B it cannot. This application does not specifically limit which operations can be moved before the bone deformation processing.
  • To sum up, this embodiment provides an implementation of the ontology animation blueprint that obtains the body of the master control object; in addition, the original bone model in the ontology animation blueprint can be used to obtain the projection of the master control object.
  • Next, an application in a shooting game is used as an example for illustration.
  • Fig. 22 shows a schematic flowchart of a method for generating a master object projection provided by an embodiment of the present application.
  • the method can be executed by the terminal 120 shown in FIG. 1, and the method includes the following steps:
  • Step 2201: Extract the original skeleton model of the master game object from the ontology animation blueprint of the master game object.
  • Optionally, the master game object is at least one of a virtual character, a virtual animal, and a cartoon character; this application does not specifically limit it.
  • The original bone model is the model of the master game object without bone deformation.
  • The ontology animation blueprint is the process of generating the body of the master control object in the virtual environment.
  • The body of the master control object can be obtained through the ontology animation blueprint.
  • The body of the master control object refers to the master control object itself in the virtual environment.
  • Optionally, the ontology animation blueprint includes bone deformation processing, and the original skeleton model is extracted from the ontology animation blueprint before the bone deformation processing.
  • Optionally, the bone deformation processing includes a first bone deformation processing, which is used to scale the bones in the region where the camera model of the master control object overlaps the original bone model; the original bone model is extracted from the ontology animation blueprint before the first bone deformation processing.
  • Optionally, the bone deformation processing includes a second bone deformation processing, which is used to twist the original bone model when the master control object is in a target state; the original skeleton model is extracted from the ontology animation blueprint before the second bone deformation processing.
  • Step 2202: Render the original skeleton model to obtain the projection of the master game object.
  • Optionally, the projection of the master game object is the shadow on the unlit side in the virtual environment.
  • For example, the projection of the master game object is the projection on the ground, on a wall, or on the ceiling in the virtual environment.
  • Optionally, the projection of the master game object may also refer to an image on a reflective surface.
  • For example, the projection of the master game object refers to the image of the master game object on a mirror surface, or the image of the master game object on a water surface.
  • the pose of the original skeleton model is adjusted through inverse kinematics to obtain a pose-adjusted skeleton model, and the pose-adjusted skeleton model is rendered to obtain the projection of the main control game object.
  • the pose of the original skeleton model is adjusted based on the current state of the main control game object to obtain a projection model, and the pose-adjusted skeleton model is rendered to obtain the projection of the main control game object.
  • the original skeleton model includes a child bone and a parent bone corresponding to the child bone, and the child bone is located at the end of the bone chain; the target position of the child bone is determined based on the current state of the main control game object; the position of the child bone is determined according to the target position; the position of the parent bone is determined according to the target position and the position of the child bone; and the pose of the original skeleton model is adjusted according to the positions of the child bone and the parent bone to obtain the projection model.
  • the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control game object; the projection model is replaced with an integrated projection model, which is a low-poly model obtained by merging the at least two component models, and the face count of the integrated projection model is smaller than that of the projection model; the integrated projection model is rendered to obtain the projection of the main control game object.
  • the ontology animation blueprint includes the following steps:
  • the skeleton model of the main control game object includes at least two component models, and the at least two component models include models of different parts of the main control game object; determining the initial animation pose of the main control game object includes: determining the animation poses of the at least two component models in sequence, and superposing the animation poses of the at least two component models to obtain the initial animation pose.
  • the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control game object; rendering the ontology model to obtain the ontology of the main control game object includes: rendering the at least two component models separately to obtain rendering results of the at least two component models, and compositing the rendering results of the at least two component models to obtain the ontology of the main control game object.
  • when generating the projection of the main control game object in the embodiment of the present application, only the original skeleton model needs to be extracted from the ontology animation blueprint, and the projection of the main control game object can be obtained from the original skeleton model.
  • This method can obtain the projection of the main control object without setting up an additional set of animation blueprints, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device; even if the computing power of the computer device is weak, the projection of the main control object can be completed.
  • Fig. 23 shows a block diagram of an apparatus for generating a main control object projection provided by an embodiment of the present application.
  • the above functions may be implemented by hardware, or may be implemented by hardware executing corresponding software.
  • the device 2300 includes:
  • the extraction module 2301 is used to extract the original skeleton model of the main control object from the ontology animation blueprint of the main control object;
  • the main control object is an object that observes the virtual environment from a first-person perspective, and the ontology animation blueprint is used to generate the ontology model of the main control object in the virtual environment;
  • the original skeleton model is the model of the main control object without bone deformation;
  • An adjustment module 2302 configured to obtain a projection model based on the original skeletal model, where the projection model is a model used to generate a projection of the main control object in the virtual environment;
  • the rendering module 2303 is configured to render the projection model to obtain the projection of the main control object.
  • the ontology animation blueprint includes bone deformation processing; the extraction module 2301 is also used to extract the original skeleton model from the ontology animation blueprint before the bone deformation processing of the ontology animation blueprint.
  • the bone deformation processing includes a first bone deformation processing, and the first bone deformation processing is used to scale the bones in the overlapping area between the camera model of the main control object and the original skeleton model.
  • the extraction module 2301 is also used to extract the original skeleton model from the ontology animation blueprint before the first bone deformation processing of the ontology animation blueprint.
  • the bone deformation processing includes a second bone deformation processing, and the second bone deformation processing is used to distort the original skeleton model when the main control object is in the target current state; the extraction module 2301 is also used to extract the original skeleton model from the ontology animation blueprint before the second bone deformation processing of the ontology animation blueprint.
  • the adjustment module 2302 is configured to determine the original skeleton model as a projection model.
  • the adjustment module 2302 is configured to adjust the pose of the original skeleton model based on the current state of the main control object to obtain a projection model.
  • the original skeleton model includes a child bone and a parent bone corresponding to the child bone, and the child bone is located at the end of the bone chain; the adjustment module 2302 is also used to determine the target position of the child bone based on the current state of the main control object;
  • determine the position of the child bone according to the target position; determine the position of the parent bone according to the target position and the position of the child bone; and adjust the pose of the original skeleton model according to the positions of the child bone and the parent bone to obtain the projection model.
  • the adjustment module 2302 is also used to determine the first vector from the first end point of the child bone to the end of the child bone, where the first end point of the child bone is the end point on the child bone away from the end of the child bone; determine the second vector from the first end point to the target position; and rotate the child bone based on the angle between the first vector and the second vector to determine the position of the child bone.
  • the adjustment module 2302 is also used to determine a third vector from the second end point of the parent bone to the end of the child bone, where the second end point of the parent bone is the end point on the parent bone away from the child bone; determine the fourth vector from the second end point to the target position; and rotate the parent bone based on the angle between the third vector and the fourth vector to determine the position of the parent bone.
  • the ontology model includes at least two component models, the at least two component models include models of different parts of the main control object, and the rendering module 2303 is also used to replace the projection model with an integrated projection model, which is a low-poly model obtained by merging the at least two component models, where the face count of the integrated projection model is smaller than that of the projection model; the integrated projection model is rendered to obtain the projection of the main control object.
  • the device further includes a production module 2304;
  • the production module 2304 is used to determine the initial animation pose of the main control object; substitute the initial animation pose into the skeleton model of the main control object to obtain the original skeleton model; perform bone deformation processing on the original skeleton model to obtain the bone-deformed original skeleton model; adjust the posture of the bone-deformed original skeleton model based on the current state to obtain the ontology model of the main control object, where the ontology model is used to generate the ontology of the main control object; and render the ontology model to obtain the ontology of the main control object.
  • the skeleton model of the main control object includes at least two component models, and the at least two component models include models of different parts of the main control object; the production module 2304 is also used to determine the animation poses of the at least two component models in sequence, and superpose the animation poses of the at least two component models to obtain the initial animation pose.
  • the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control object; the production module 2304 is also used to render the at least two component models separately to obtain the rendering results of the at least two component models, and composite the rendering results of the at least two component models to obtain the ontology of the main control object.
  • when generating the projection of the main control object, the original skeleton model is extracted from the ontology animation blueprint, and the projection of the main control object is obtained from the original skeleton model.
  • This method can obtain the projection of the main control object without setting up an additional set of animation blueprints, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device.
  • Fig. 24 is a schematic structural diagram of a computer device according to an exemplary embodiment.
  • the computer device 2400 includes a central processing unit (Central Processing Unit, CPU) 2401, a system memory 2404 including a random access memory (Random Access Memory, RAM) 2402 and a read-only memory (Read-Only Memory, ROM) 2403, and a system bus 2405 that connects the system memory 2404 and the central processing unit 2401.
  • the computer device 2400 also includes a basic input/output system (I/O system) 2406 that helps to transfer information between the various components within the computer device, and a mass storage device 2407 for storing an operating system 2413, an application program 2414, and other program modules 2415.
  • the basic input/output system 2406 includes a display 2408 for displaying information and input devices 2409 such as a mouse and a keyboard for users to input information. Both the display 2408 and the input device 2409 are connected to the central processing unit 2401 through the input and output controller 2410 connected to the system bus 2405 .
  • the basic input/output system 2406 may also include an input-output controller 2410 for receiving and processing input from a keyboard, mouse, or electronic stylus and other devices. Similarly, input output controller 2410 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 2407 is connected to the central processing unit 2401 through a mass storage controller (not shown) connected to the system bus 2405 .
  • the mass storage device 2407 and its associated computer device readable media provide non-volatile storage for the computer device 2400 . That is to say, the mass storage device 2407 may include a computer-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
  • the computer device readable media may comprise computer device storage media and communication media.
  • Computer device storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-device-readable instructions, data structures, program modules, or other data.
  • Computer device storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), CD-ROM, Digital Video Disc (DVD) or other optical storage, cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices.
  • the storage medium of the computer device is not limited to the above-mentioned ones.
  • the above-mentioned system memory 2404 and mass storage device 2407 may be collectively referred to as memory.
  • the computer device 2400 may also operate through a remote computer device connected to a network such as the Internet. That is, the computer device 2400 can be connected to the network 2411 through the network interface unit 2412 connected to the system bus 2405; in other words, the network interface unit 2412 can also be used to connect to other types of networks or remote computer device systems (not shown).
  • the memory also includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 2401 implements all or part of the steps of the above-mentioned method for generating the projection of the main control object by executing the one or more programs.
  • a computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for generating the main control object projection provided by the above method embodiments.
  • the present application also provides a computer-readable storage medium, wherein at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for generating the main control object projection provided by the above method embodiments.
  • the present application also provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method for generating a main control object projection as provided in the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a method, apparatus, device, and medium for generating a projection of a main control object, relating to the field of animation technology. The method includes: extracting the original skeleton model of the main control object from the ontology animation blueprint of the main control object, where the main control object is an object that observes a virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation (602); adjusting the pose of the original skeleton model based on the current state of the main control object to obtain a projection model, where the projection model is a model used to generate the projection of the main control object in the virtual environment (604); and rendering the projection model to obtain the projection of the main control object (606). This application can reduce the computing pressure on computer devices.

Description

Method, apparatus, device, and medium for generating main control object projection
This application claims priority to Chinese Patent Application No. 202210015247.5, entitled "Method, apparatus, device, and medium for generating main control object projection", filed on January 7, 2022, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of animation technology, and in particular to a method, apparatus, device, and medium for generating a projection of a main control object.
BACKGROUND
To simulate a real environment within a virtual environment, a projection is added to the user's main control object to improve the realism of the virtual environment.
The related art provides two animation blueprints: one is used to generate the main control object itself, and the other is used to generate the projection of the main control object. When generating the virtual object itself, animation pose matching, bone scaling, and model detail refinement are performed in sequence on the skeleton model of the main control object to generate the ontology model of the main control object. When generating the projection of the main control object, animation pose matching and model detail refinement are performed on the main control object model to generate the projection model of the main control object.
The related art consumes a large amount of computation and places high demands on computer devices.
SUMMARY
Embodiments of this application provide a method, apparatus, device, and medium for generating a projection of a main control object; the method can generate the projection of the main control object relatively quickly. The technical solution is as follows:
According to one aspect of this application, a method for generating a projection of a main control object is provided, the method including:
extracting the original skeleton model of the main control object from the ontology animation blueprint of the main control object, where the main control object is an object that observes a virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation;
obtaining a projection model based on the original skeleton model, where the projection model is a model used to generate the projection of the main control object in the virtual environment;
rendering the projection model to obtain the projection of the main control object.
According to another aspect of this application, an apparatus for generating a projection of a main control object is provided, the apparatus including:
an extraction module, configured to extract the original skeleton model of the main control object from the ontology animation blueprint of the main control object, where the main control object is an object that observes a virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation;
an adjustment module, configured to obtain a projection model based on the original skeleton model, where the projection model is a model used to generate the projection of the main control object in the virtual environment;
a rendering module, configured to render the projection model to obtain the projection of the main control object.
According to another aspect of this application, a computer device is provided, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for generating a main control object projection described above.
According to another aspect of this application, a computer storage medium is provided, where the computer-readable storage medium stores at least one piece of program code, and the program code is loaded and executed by a processor to implement the method for generating a main control object projection described above.
According to another aspect of this application, a computer program product or computer program is provided, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, causing the computer device to perform the method for generating a main control object projection described above.
The beneficial effects brought by the technical solutions provided in the embodiments of this application include at least the following:
When generating the projection of the main control object, only the original skeleton model needs to be extracted from the ontology animation blueprint, and the projection of the main control object can be obtained from the original skeleton model. Compared with the related art, which uses two different sets of animation blueprints to obtain the ontology and the projection of the main control object at the same time, this method can obtain the projection of the main control object without setting up an additional animation blueprint, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device; even if the computing power of the computer device is weak, the projection of the main control object can be completed.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic structural diagram of a computer system provided by an exemplary embodiment of this application;
Fig. 2 is a schematic diagram of the related art provided by an exemplary embodiment of this application;
Fig. 3 is a schematic diagram of the ontology of a main control object provided by an exemplary embodiment of this application;
Fig. 4 is a schematic diagram of the projection of a main control object provided by an exemplary embodiment of this application;
Fig. 5 is a schematic diagram of the related art provided by an exemplary embodiment of this application;
Fig. 6 is a schematic flowchart of a method for generating a main control object projection provided by an exemplary embodiment of this application;
Fig. 7 is a schematic diagram of the ontology of a main control object provided by an exemplary embodiment of this application;
Fig. 8 is a schematic diagram of the projection of a main control object provided by an exemplary embodiment of this application;
Fig. 9 is a schematic flowchart of a method for generating a main control object projection provided by an exemplary embodiment of this application;
Fig. 10 is a schematic diagram of rendering a projection model provided by an exemplary embodiment of this application;
Fig. 11 is a schematic diagram of an integrated projection model provided by an exemplary embodiment of this application;
Fig. 12 is a schematic diagram of rendering a projection model provided by an exemplary embodiment of this application;
Fig. 13 is a schematic diagram of an integrated firearm model provided by an exemplary embodiment of this application;
Fig. 14 is a schematic flowchart of a method for generating a main control object projection provided by an exemplary embodiment of this application;
Fig. 15 is a schematic flowchart of a method for adjusting the pose of an original skeleton model provided by an exemplary embodiment of this application;
Fig. 16 is a schematic diagram of adjusting a child bone provided by an exemplary embodiment of this application;
Fig. 17 is a schematic diagram of adjusting a parent bone provided by an exemplary embodiment of this application;
Fig. 18 is a schematic flowchart of an implementation method of an ontology animation blueprint provided by an exemplary embodiment of this application;
Fig. 19 is a schematic diagram of an implementation method of an ontology animation blueprint provided by an exemplary embodiment of this application;
Fig. 20 is a schematic flowchart of an implementation method of an ontology animation blueprint provided by an exemplary embodiment of this application;
Fig. 21 is a schematic diagram of an implementation method of an ontology animation blueprint provided by an exemplary embodiment of this application;
Fig. 22 is a schematic diagram of a method for generating a main control object projection provided by an exemplary embodiment of this application;
Fig. 23 is a schematic structural diagram of an apparatus for generating a main control object projection provided by an exemplary embodiment of this application;
Fig. 24 is a schematic structural diagram of a computer device provided by an exemplary embodiment of this application.
DETAILED DESCRIPTION
First, the terms involved in the embodiments of this application are introduced:
Main control object: a movable object controlled by the user that observes a virtual environment from a first-person perspective. The movable object may be a virtual character, a virtual animal, an anime character, etc., such as a character, animal, plant, oil drum, wall, or rock displayed in the virtual environment. Optionally, when the virtual environment is a three-dimensional virtual environment, the virtual object is a three-dimensional model created based on animation skeleton technology; each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space of the three-dimensional virtual environment. Optionally, when the virtual environment is a two-dimensional virtual environment, the virtual object is a two-dimensional planar model created based on animation technology; each virtual object has its own shape and area in the two-dimensional virtual environment and occupies part of the area of the two-dimensional virtual environment.
FPS (First Person Shooting) game: a game that provides several strongholds in a virtual world, in which users in different camps control virtual characters to fight in the virtual world by occupying strongholds, destroying the strongholds of the opposing camp, or killing all or some of the characters of the opposing camp. Usually, users play an FPS game from a first-person perspective. For example, an FPS game may divide users into two opposing camps and scatter the virtual characters they control across the virtual world to compete with each other, with killing all enemy virtual characters as the victory condition. An FPS game is played in rounds; one round of an FPS game lasts from the moment the game starts to the moment the victory condition is met.
Fig. 1 shows a schematic structural diagram of a computer system 100 provided by an exemplary embodiment of this application. The computer system 100 includes a terminal 120 and a server 140.
An application supporting a virtual environment is installed on the terminal 120. The application may be any one of an FPS game, a racing game, a MOBA (Multiplayer Online Battle Arena) game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The user uses the terminal 120 to operate a main control object located in the virtual environment to perform activities, including but not limited to at least one of: attacking, casting skills, purchasing items, healing, adjusting body posture, crawling, walking, riding, flying, jumping, driving, picking up, shooting, and throwing. Illustratively, the first virtual character is a first virtual person. The terminal 120 is at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
The terminal 120 is connected to the server 140 through a wireless or wired network.
The server 140 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms. The server 140 is used to provide background services for the application supporting the virtual environment. Optionally, the server 140 undertakes the primary computing work and the terminal 120 the secondary computing work; or the server 140 undertakes the secondary computing work and the terminal 120 the primary computing work; or the server 140 and the terminal 120 perform collaborative computing using a distributed computing architecture.
When displaying the main control object, the related art proceeds as follows, as shown in Fig. 2:
Animation pose matching 201: determine the animation poses of different parts of the main control object. Animation pose matching 201 is used to determine the animation pose of the main control object. Optionally, the animation pose of the main control object is determined while the main control object is active. Illustratively, activities include but are not limited to at least one of walking, running, jumping, standing, crouching, lying prone, attacking, flying, driving, picking up, shooting, and throwing. Optionally, animation pose matching 201 matches the animation poses of different parts of the main control object; for example, if the main control object runs while shooting, a shooting animation pose is matched for its upper body and a running animation pose for its lower body.
Animation superposition 202: superpose the animation poses of different parts of the main control object to obtain the animation pose of the main control object. Animation superposition 202 is used to handle the superposition of animations of different parts of the main control object. Illustratively, the animation pose of the upper body of the main control object is superposed with that of the lower body to obtain the animation pose of the main control object.
Bone deformation 203: adjust the skeleton model of the main control object to obtain a deformed skeleton model. Bone deformation 203 is used to adjust the bone shapes of the main control object. Optionally, the bone deformation 203 includes but is not limited to at least one of changing bone positions, changing bone orientations, and scaling bone proportions. Illustratively, on the one hand, when displaying the main control object, the bones of its head are scaled to prevent the head from clipping through the model. On the other hand, when the main control object leans left or right, it is over-twisted so that the leaning action can be completed. As shown in Fig. 3, when the main control object 301 is observed from a third-person perspective, the head of the main control object 301 is scaled away (removed) and its body is in an over-twisted state.
Posture adjustment 204: adjust the posture of the deformed skeleton model based on the current state of the main control object. Posture adjustment 204 is used to adjust the posture of the main control object to improve precision. Illustratively, the posture of the hands of the main control object is adjusted through inverse kinematics to improve the precision with which the main control object holds items.
After posture adjustment 204 is completed, the ontology model 205 of the main control object is output; the ontology model 205 is used to generate the ontology of the main control object.
It should be noted that the method for displaying the ontology of the main control object cannot be directly transplanted to displaying the projection of the main control object. Illustratively, if the above method were used to generate the projection of the main control object, the result would be the projection 401 shown in Fig. 4, whose shape obviously does not match reality. This is because the bone deformation step 203 deforms the bones of the main control object: on the one hand it scales away the head, and on the other hand it twists the main control object, so that the bones of the main control object differ from reality. However, the bone deformation step 203 cannot be skipped when generating the ontology model of the main control object. If bone deformation 203 were skipped, then, on the one hand, without scaling the head in bone deformation 203, the head of the main control object and its camera model would interfere with each other, causing clipping; on the other hand, when the main control object leans left or right, without sufficient twisting it would be difficult to realize the lean.
Therefore, the related art runs two different processes on the computer device to display the ontology and the projection of the main control object separately. Illustratively, as shown in Fig. 5, a first process 501 is used to generate the ontology of the main control object and a second process 502 is used to generate the projection of the main control object. However, the inventors found that this raises other problems: since two different processes must be computed for one main control object, and the two processes cannot be optimized by frequency reduction or other means, computer devices with limited computing power, such as mobile phones, cannot display the ontology and the projection of the main control object simultaneously in this way. Moreover, even for computer devices with strong computing power, thread waiting easily leads to excessively long processing times.
The above problems to be solved are: 1. the implementations of the ontology of the main control object and of its projection are incompatible, so two different sets of methods are required to obtain them separately; 2. generating the ontology and the projection of the main control object simultaneously requires a computer device with strong computing power, which some terminals with weaker computing power cannot achieve. The following embodiments address these problems.
Fig. 6 shows a schematic flowchart of a method for generating a main control object projection provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1 and includes the following steps:
Step 602: Extract the original skeleton model of the main control object from the ontology animation blueprint of the main control object, where the main control object is an object that observes the virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation.
The main control object may be at least one of a virtual character, a virtual animal, and an anime character, which is not specifically limited in this application. Illustratively, the user controls the main control object in an FPS game from a first-person perspective.
Optionally, the ontology animation blueprint includes bone deformation processing, which modifies the bones of the main control object to avoid clipping. Illustratively, after the skeleton model of the main control object undergoes bone deformation, its head bones are scaled; the model at that point is unsuitable for generating the projection of the main control object, so the model that has not undergone bone deformation is used as the original skeleton model.
Optionally, the ontology animation blueprint includes the steps shown in the embodiment of Fig. 2.
The ontology animation blueprint is the process of generating the ontology model of the main control object in the virtual environment (or, equivalently, the process of generating the ontology of the main control object in the virtual environment). The ontology of the main control object can be obtained through the ontology animation blueprint. The ontology of the main control object refers to the main control object in the virtual environment.
Step 604: Obtain a projection model based on the original skeleton model, where the projection model is a model used to generate the projection of the main control object in the virtual environment.
In one embodiment, the original skeleton model is used directly as the projection model. It should be noted that, because the required precision of the projection of the main control object is usually low, when terminal performance is prioritized the original skeleton model can be used directly as the projection model.
In one embodiment, the pose of the original skeleton model is adjusted based on the current state of the main control object to obtain the projection model. It should be noted that, if higher precision of the projection of the main control object is pursued, the original skeleton model can be further adjusted according to the current state of the main control object.
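A minimal sketch of the two alternatives of step 604 follows. It is illustrative only: the dict-based model encoding, the adjust_pose placeholder, and the prefer_performance switch are assumptions for this sketch, not this application's actual code.
```python
def adjust_pose(model, state):
    """Placeholder for the pose refinement of Fig. 15 (e.g. inverse kinematics)."""
    return {**model, "pose": state}

def build_projection_model(original_skeleton, state, prefer_performance=True):
    if prefer_performance:
        return original_skeleton                    # direct reuse: no extra work
    return adjust_pose(original_skeleton, state)    # higher-precision projection

original = {"skeleton": "undeformed bones"}
print(build_projection_model(original, "walking"))
print(build_projection_model(original, "walking", prefer_performance=False))
```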
Optionally, the current state includes but is not limited to at least one of attacking, casting skills, purchasing items, healing, adjusting body posture, crawling, walking, riding, flying, jumping, driving, picking up, shooting, throwing, running, and standing still. Illustratively, if the main control object is in a walking state, the pose of the original skeleton model is adjusted to a walking pose.
Optionally, when the state of the main control object changes, the pose of the original skeleton model is adjusted based on the changed state to obtain the projection model. Illustratively, if the main control object changes from a stationary state to a running state, the main control object is changed from a stationary pose to a running pose.
Optionally, the pose of the original skeleton model is adjusted based on periodic changes. For example, when the main control object is walking, its feet move alternately, so the pose of the original skeleton model is changed based on the periodic alternating movement of the feet.
Illustratively, the pose of the original skeleton model is adjusted using inverse kinematics based on the current state of the main control object to obtain the projection model.
It should be noted that, by the nature of projections, the inherent precision of the projection of the main control object is low, so the posture adjustment operations on the original skeleton model, which serve to improve the precision of the projection, can be cancelled. For example, when the main control object turns through a small angle, a footstep logic adjustment is applied to refine the fineness of the presentation; but because the precision of the projection is low, the footstep logic adjustment would not be visible in the projection even if performed, so it can be cancelled to reduce the computation of the computer device.
Therefore, in some practical scenarios, the posture adjustment steps on the original skeleton model can be reduced, for example by cancelling the above footstep logic adjustment, or by cancelling hand detail adjustment.
Step 606: Render the projection model to obtain the projection of the main control object.
Optionally, the projection of the main control object is the projection of its ontology on the backlit side in the virtual environment. Illustratively, the projection of the main control object is a projection on the ground in the virtual environment, or a projection on a wall in the virtual environment, or a projection on the ceiling in the virtual environment.
Optionally, the projection of the main control object may also refer to an image on a reflective surface. For example, the projection of the main control object refers to its image in a mirror; for another example, it refers to its image on a water surface.
Illustratively, Fig. 7 shows the projection 701 of the main control object on the ground while the main control object holds a virtual firearm, and Fig. 8 shows the projection 801 of the main control object on the ground while the main control object switches virtual firearms. It is evident from Figs. 7 and 8 that the projections 701 and 801 match the actual situation and restore the projection of the main control object well.
In one specific implementation, obtaining the projection of the main control object using the method of the embodiments of this application consumes only 0.033 ms on the CPU, whereas obtaining it using the related art consumes 4.133 ms on the CPU. Clearly, the method of the embodiments of this application can effectively reduce the computation time of the computer device and improve efficiency.
In summary, when generating the projection of the main control object, the embodiments of this application only need to extract the original skeleton model from the ontology animation blueprint, and the projection of the main control object can be obtained from the original skeleton model. This method can obtain the projection without setting up an additional animation blueprint, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device; even if the computing power of the computer device is weak, the projection of the main control object can be completed.
Fig. 9 shows a schematic flowchart of a method for generating a main control object projection provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1 and includes the following steps:
Step 901: Before the first bone deformation processing of the ontology animation blueprint, extract the original skeleton model from the ontology animation blueprint.
The first bone deformation processing is used to scale the bones at the overlapping area between the camera model of the main control object and the original skeleton model. Since the main control object observes the virtual environment from a first-person perspective, the camera model must be placed at the head of the main control object to achieve the first-person effect. However, because the head of the main control object also has a model, this would cause clipping. Therefore, the bones of the original skeleton model in the area where the camera model and the original skeleton model overlap are scaled so that the camera model and the original skeleton model no longer overlap, avoiding clipping.
Optionally, the first bone deformation processing is implemented through programming.
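As a hedged sketch of such a programmed implementation (the sphere-shaped camera volume, the bone names, and the scale-to-zero policy are assumptions made for illustration, not details given by this application):
```python
from dataclasses import dataclass

@dataclass
class Bone:
    name: str
    position: tuple   # world-space position (x, y, z)
    scale: float = 1.0

def first_bone_deformation(bones, camera_pos, camera_radius):
    """Collapse bones that fall inside the camera volume (e.g. the head)."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    for bone in bones:
        if dist2(bone.position, camera_pos) <= camera_radius ** 2:
            bone.scale = 0.0  # scaled away so its mesh cannot clip the camera
    return bones

skeleton = [Bone("head", (0.0, 0.0, 1.7)), Bone("spine", (0.0, 0.0, 1.2))]
first_bone_deformation(skeleton, camera_pos=(0.0, 0.0, 1.7), camera_radius=0.3)
print([(b.name, b.scale) for b in skeleton])  # head collapsed, spine untouched
```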
Step 902: Adjust the pose of the original skeleton model based on the current state of the main control object to obtain a projection model.
Illustratively, the pose of the original skeleton model is adjusted using inverse kinematics based on the current state of the main control object to obtain the projection model.
Optionally, when the state of the main control object changes, the pose of the original skeleton model is adjusted based on the changed state to obtain the projection model. Illustratively, if the main control object changes from a stationary state to a running state, the main control object is changed from a stationary pose to a running pose.
Optionally, the pose of the original skeleton model is adjusted based on periodic changes. For example, when the main control object is walking, its feet move alternately, so the pose of the original skeleton model is changed based on the periodic alternating movement of the feet.
Step 903: Replace the projection model with an integrated projection model.
Optionally, the projection model includes at least two component models, and the at least two component models include models of different parts of the main control object; for example, the component models include at least one of a head model, a torso model, an upper-limb model, and a lower-limb model. Each component model invoked for rendering produces one draw call, so multiple draw calls are produced in total. Multiple draw calls place considerable rendering pressure on the GPU (Graphics Processing Unit) of the computer device. Illustratively, as shown in Fig. 10, when the component models include a head model, a torso model, an upper-limb model, and a lower-limb model, at least four kinds of render calls must be submitted, which greatly increases the burden on the GPU.
Therefore, in the embodiments of this application, owing to the characteristics of projections, the inherent precision of the projection of the main control object is not high, and even some deviation does not affect the precision of the projection itself. Accordingly, the ontology model includes at least two component models comprising models of different parts of the main control object, and the projection model is replaced with an integrated projection model, which is a low-poly model obtained by merging the at least two component models; the face count of the integrated projection model is smaller than that of the projection model. In one specific implementation, the integrated projection model includes no component models and consists of a single complete model. Illustratively, Fig. 11 shows the integrated projection model 1101 of the main control object.
That is, when rendering the ontology model, the at least two component models must each be rendered; however, when rendering the projection of the main control object, only the integrated projection model needs to be rendered, and the face count of the integrated model is smaller than that of the ontology model.
Illustratively, as shown in Fig. 12, after the projection model is replaced with the integrated projection model, the projection of the main control object can be obtained by submitting only one render, lightening the burden on the GPU.
In some cases the main control object holds a virtual item; likewise, to reduce the rendering pressure on the GPU, the model of the virtual item held by the main control object can be replaced with an integrated item model. Illustratively, if the main control object holds a virtual firearm, the virtual firearm also includes at least two item component models (for example, a stock model, a suppressor model, a grip model, etc.), and the firearm model must be replaced with an integrated firearm model for rendering the firearm's projection; as shown in Fig. 13, the firearm model including at least two item component models is replaced with the integrated firearm model 1301.
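A hedged sketch of the draw-call saving described above (the Renderer interface and the mesh names are hypothetical, invented only to make the counting concrete):
```python
class Renderer:
    def __init__(self):
        self.draw_calls = 0
    def draw(self, mesh):
        self.draw_calls += 1  # one submission per mesh

body_components = ["head", "torso", "upper_limbs", "lower_limbs"]

def render_projection_per_component(renderer):
    for mesh in body_components:       # four submissions, as in Fig. 10
        renderer.draw(mesh)

def render_projection_integrated(renderer):
    renderer.draw("merged_low_poly")   # one submission, as in Fig. 12

r1, r2 = Renderer(), Renderer()
render_projection_per_component(r1)
render_projection_integrated(r2)
print(r1.draw_calls, r2.draw_calls)    # 4 vs 1
```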
Step 904: Render the integrated projection model to obtain the projection of the main control object.
Optionally, the projection of the main control object is the projection on the backlit side in the virtual environment. Illustratively, the projection of the main control object is a projection on the ground in the virtual environment, or a projection on a wall in the virtual environment, or a projection on the ceiling in the virtual environment.
Optionally, the projection of the main control object may also refer to an image on a reflective surface. For example, the projection of the main control object refers to its image in a mirror; for another example, it refers to its image on a water surface.
In summary, when generating the projection of the main control object, the embodiments of this application extract the original skeleton model from the animation blueprint before the first bone deformation processing to obtain the projection of the main control object, ensuring that the generated projection has a head. Compared with the related art, which uses two different sets of animation blueprints to obtain the ontology and the projection of the main control object at the same time, this method can obtain the projection without setting up an additional animation blueprint, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device.
Moreover, replacing the projection model with the integrated projection model reduces the number of render submissions by the computer device, which can lighten the workload of the GPU.
Fig. 14 shows a schematic flowchart of a method for generating a main control object projection provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1 and includes the following steps:
Step 1401: Before the second bone deformation processing of the ontology animation blueprint, extract the original skeleton model from the ontology animation blueprint.
The second bone deformation processing is used to distort the original skeleton model when the main control object is in a target current state. In one specific example, when the main control object leans left or right, its skeleton model is over-twisted so that the main control object can lean out from behind an obstacle and, after leaning, observe other objects in the virtual environment; hence the second bone deformation processing is needed. However, if the skeleton model were extracted after the second bone deformation processing, the skeleton model would already be over-twisted, so the resulting projection would also be in an over-twisted state. As shown in Fig. 4, the projection 401 of the main control object is in an over-twisted state, which no normal human could achieve. Therefore, in this application, the original skeleton model must be extracted from the ontology animation blueprint before the second bone deformation processing of the ontology animation blueprint.
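A hedged sketch of such a second bone deformation: while the main control object is peeking (leaning) left or right, an exaggerated twist is distributed over the spine chain so the lean can actually be performed. The bone names, the 75-degree total, and the per-bone weights are illustrative assumptions, not values from this application.
```python
SPINE_CHAIN = ("spine_01", "spine_02", "spine_03")

def second_bone_deformation(lean_direction, is_peeking):
    """Return per-bone twist angles in degrees; zero when not peeking."""
    if not is_peeking:
        return {bone: 0.0 for bone in SPINE_CHAIN}
    sign = 1.0 if lean_direction == "right" else -1.0
    total = sign * 75.0  # deliberately more than a real torso could twist
    weights = (0.2, 0.35, 0.45)
    return {bone: total * w for bone, w in zip(SPINE_CHAIN, weights)}

print(second_bone_deformation("left", is_peeking=True))
```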
Step 1402: Adjust the pose of the original skeleton model based on the current state of the main control object to obtain a projection model.
Illustratively, the pose of the original skeleton model is adjusted using inverse kinematics based on the current state of the main control object to obtain the projection model.
Optionally, when the state of the main control object changes, the pose of the original skeleton model is adjusted based on the changed state to obtain the projection model. Illustratively, if the main control object changes from a stationary state to a running state, the main control object is changed from a stationary pose to a running pose.
Optionally, the pose of the original skeleton model is adjusted based on periodic changes. For example, when the main control object is walking, its feet move alternately, so the pose of the original skeleton model is changed based on the periodic alternating movement of the feet.
Step 1403: Replace the projection model with an integrated projection model.
Optionally, the projection model includes at least two component models, and the at least two component models include models of different parts of the main control object; for example, the component models include at least one of a head model, a torso model, an upper-limb model, and a lower-limb model.
The ontology model includes at least two component models, and the at least two component models include models of different parts of the main control object. The integrated projection model is a low-poly model obtained by merging the at least two component models; the face count of the integrated projection model is smaller than that of the projection model. In one specific implementation, the integrated projection model includes no component models and consists of a single complete model. Illustratively, Fig. 11 shows the integrated projection model 1101 of the main control object.
In some cases the main control object holds a virtual item; likewise, to reduce the rendering pressure on the GPU, the model of the virtual item held by the main control object can be replaced with an integrated item model. Illustratively, if the main control object holds a virtual firearm, the virtual firearm also includes at least two item component models, and the firearm model must be replaced with an integrated firearm model for rendering the firearm's projection; as shown in Fig. 13, the firearm model including at least two item component models is replaced with the integrated firearm model 1301.
Step 1404: Render the integrated projection model to obtain the projection of the main control object.
Optionally, the projection of the main control object is the projection on the backlit side in the virtual environment. Illustratively, the projection of the main control object is a projection on the ground in the virtual environment, or a projection on a wall in the virtual environment, or a projection on the ceiling in the virtual environment.
Optionally, the projection of the main control object may also refer to an image on a reflective surface. For example, the projection of the main control object refers to its image in a mirror; for another example, it refers to its image on a water surface.
In summary, when generating the projection of the main control object, the embodiments of this application extract the original skeleton model from the animation blueprint before the second bone deformation processing to obtain the projection of the main control object, ensuring that the shape of the generated projection is correct. Compared with the related art, which uses two different sets of animation blueprints to obtain the ontology and the projection of the main control object at the same time, this method can obtain the projection without setting up an additional animation blueprint, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device.
Moreover, replacing the projection model with the integrated projection model reduces the number of render submissions by the computer device, which can lighten the workload of the GPU.
In the following embodiment, the method for adjusting the pose of the original skeleton model involved in step 902 or step 1402 above is briefly introduced. The embodiments of this application take inverse kinematics as an example for description; it should be noted that the pose of the original skeleton model may also be adjusted by other methods, which this application does not specifically limit.
Fig. 15 shows a schematic flowchart of a method for adjusting the pose of the original skeleton model provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1; the method shown in Fig. 15 is used to further refine the adjustment of the original skeleton model to improve the precision of the projection model. Taking the main control object in a walking state as an example, the ontology model of the main control object must realize coordinated linkage of the feet and the legs, while the projection model of the main control object need not distinguish feet from legs when precision requirements are low; when precision requirements are high, the projection model must distinguish feet from legs, and in this application the movement of the legs is determined depending on the movement of the feet. The method shown in Fig. 15 includes the following steps:
Step 1501: Determine the target position of the child bone based on the current state of the main control object.
The original skeleton model includes a child bone and a parent bone corresponding to the child bone, the child bone being located at the end of the bone chain. Taking a humanoid main control character as an example, the child bone may be the character's hand and the parent bone its arm; or the child bone may be the character's foot and the parent bone its leg.
Optionally, the current state includes but is not limited to at least one of attacking, casting skills, purchasing items, healing, adjusting body posture, crawling, walking, riding, flying, jumping, driving, picking up, shooting, throwing, running, and standing still.
The target position is the position the child bone needs to reach in the current state. Illustratively, if the main control object is in a walking state, the child bone refers to its foot, and the foot is in a raised state, then the target position is the landing point of the foot of the main control object. Illustratively, if the main control object is walking up stairs holding a handrail, the child bone refers to its hand, and the hand is in a raised state, then the target position is the landing point of the hand of the main control object.
Step 1502: Determine the position of the child bone according to the target position.
Illustratively, determining the position of the child bone includes the following sub-steps:
Illustratively, the main control object is in a walking state, which can be roughly divided into a raised state and a lowered state; taking the raised state as the current example, step 1502 is used to determine the position of the foot (child bone) in the lowered state.
1. Determine the first vector from the first end point of the child bone to the end of the child bone.
The first end point of the child bone is the end point on the child bone away from the end of the child bone. Illustratively, as shown in Fig. 16, the first vector is the vector from the first end point 1603 of the child bone 1602 to the end 1605 of the child bone.
Illustratively, the child bone is the foot, the first end point is the heel, and the end of the child bone is the toe; sub-step 1 thus obtains the first vector pointing from the heel to the toe in the raised state.
2. Determine the second vector from the first end point to the target position.
Illustratively, as shown in Fig. 16, the second vector is the vector from the first end point 1603 to the target position 1604.
Illustratively, the first end point is the heel and the target position is the target landing point of the toe; sub-step 2 thus obtains the second vector pointing from the heel to the target landing point in the raised state.
3. Rotate the child bone based on the angle between the first vector and the second vector to determine the position of the child bone.
Illustratively, as shown in Fig. 16, the child bone 1602 is rotated counterclockwise based on the angle between the first vector and the second vector, the rotation angle being equal to that angle, to obtain the position of the child bone 1602.
Illustratively, the first vector points from the heel to the toe in the raised state, and the second vector points from the heel to the target position in the raised state; sub-step 3 rotates the foot (child bone) based on the angle between the first vector and the second vector, yielding the rotated foot, i.e. the foot in the lowered state.
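The text specifies the rotation in sub-step 3 only as "the angle between the first vector and the second vector"; as an illustrative supplement (not a formula given in this application), that angle can be computed from the two vectors as
$$\theta = \arccos\!\left(\frac{\vec{v}_1 \cdot \vec{v}_2}{\lVert \vec{v}_1 \rVert \, \lVert \vec{v}_2 \rVert}\right),$$
where $\vec{v}_1$ is the first vector and $\vec{v}_2$ is the second vector; in three dimensions the rotation axis can be taken as $\vec{v}_1 \times \vec{v}_2$. Rotating the child bone about its first end point by $\theta$ brings the end of the bone onto the ray from the first end point toward the target position.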
Step 1503: Determine the position of the parent bone according to the target position and the position of the child bone.
Illustratively, determining the position of the parent bone includes the following sub-steps:
1. Determine the third vector from the second end point of the parent bone to the end of the child bone.
The second end point of the parent bone is the end point on the parent bone away from the child bone. Illustratively, as shown in Fig. 17, the third vector is the vector from the second end point 1606 of the parent bone 1601 to the end 1605 of the child bone.
Illustratively, the parent bone is the lower leg, the second end point is the knee, and the end of the child bone is the toe; sub-step 1 thus obtains the third vector pointing from the knee to the toe (before rotation) in the raised state.
2. Determine the fourth vector from the second end point to the target position.
Illustratively, as shown in Fig. 17, the fourth vector is the vector from the second end point 1606 to the target position 1604.
Illustratively, the target position is the target landing point of the toe and the second end point is the knee; sub-step 2 thus obtains the fourth vector pointing from the knee to the target landing point.
3. Rotate the parent bone based on the angle between the third vector and the fourth vector to determine the position of the parent bone.
Illustratively, as shown in Fig. 17, the parent bone 1601 is rotated counterclockwise based on the angle between the third vector and the fourth vector, the rotation angle being equal to that angle, to obtain the position of the parent bone 1601.
Illustratively, the third vector points from the knee to the toe in the raised state, and the fourth vector points from the knee to the target position in the raised state; sub-step 3 rotates the lower leg (parent bone) based on the angle between the third vector and the fourth vector, yielding the rotated lower leg, i.e. the lower leg in the lowered state.
Step 1504: Adjust the pose of the original skeleton model according to the positions of the child bone and the parent bone to obtain the projection model.
With the positions of the child bone and the parent bone known, the original skeleton model can be adjusted according to those positions to complete the pose adjustment of the original skeleton model.
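A minimal two-dimensional sketch of steps 1501 to 1504 follows, assuming the geometry of Figs. 16 and 17 (knee, heel, toe); it illustrates the described vector-angle rotations and is not this application's actual implementation:
```python
import math

def angle(u, v):
    """Signed angle from vector u to vector v in the plane."""
    return math.atan2(u[0] * v[1] - u[1] * v[0], u[0] * v[0] + u[1] * v[1])

def rotate(p, pivot, theta):
    dx, dy = p[0] - pivot[0], p[1] - pivot[1]
    c, s = math.cos(theta), math.sin(theta)
    return (pivot[0] + c * dx - s * dy, pivot[1] + s * dx + c * dy)

def adjust_chain(parent_end2, child_end1, child_tip, target):
    # Step 1502: rotate the child bone about its first end point by the angle
    # between (first end point -> bone end) and (first end point -> target).
    theta_c = angle((child_tip[0] - child_end1[0], child_tip[1] - child_end1[1]),
                    (target[0] - child_end1[0], target[1] - child_end1[1]))
    child_tip = rotate(child_tip, child_end1, theta_c)
    # Step 1503: rotate the parent bone (carrying the child with it) about its
    # second end point by the angle between the third and fourth vectors.
    theta_p = angle((child_tip[0] - parent_end2[0], child_tip[1] - parent_end2[1]),
                    (target[0] - parent_end2[0], target[1] - parent_end2[1]))
    child_end1 = rotate(child_end1, parent_end2, theta_p)
    child_tip = rotate(child_tip, parent_end2, theta_p)
    return child_end1, child_tip   # step 1504 applies these back to the skeleton

# Knee at the origin, heel at (0, -1), toe at (0.3, -1), toe target at (0.5, -1.1):
print(adjust_chain((0.0, 0.0), (0.0, -1.0), (0.3, -1.0), (0.5, -1.1)))
```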
In summary, the embodiments of this application provide a method for adjusting the pose of the original skeleton model; the method can adjust the pose of the original skeleton model so that the actions and postures of the main control object fit its current state, making the pose of the original skeleton model closer to the actual situation, so that the resulting projection model can better reflect the state of the main control object.
In one specific implementation, an implementation method of the ontology animation blueprint is provided. Fig. 18 shows a schematic flowchart of an implementation method of the ontology animation blueprint provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1 and includes the following steps:
Step 1801: Determine the initial animation pose of the main control object.
Optionally, the initial animation pose is determined according to the current state of the main control object.
Optionally, the skeleton model of the main control object includes at least two component models, and the at least two component models include models of different parts of the main control object; for example, the skeleton model of the main control object includes an upper-body model and a lower-body model. When determining the initial animation pose, the animation poses of the at least two component models must be determined in sequence, and the animation poses of the at least two component models are superposed to obtain the initial animation pose. Illustratively, if the main control object runs while shooting, a shooting animation pose is matched for its upper body and a running animation pose for its lower body, and the animation poses of the upper and lower body are superposed to obtain the initial animation pose of the main control object.
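A hedged sketch of that superposition: per-bone poses for the upper body (shooting) and the lower body (running) are merged into one initial animation pose. The bone partitioning and the pose encoding are assumptions made for this sketch.
```python
UPPER_BODY = {"spine", "arm_l", "arm_r", "head"}

def superpose(upper_pose, lower_pose):
    """Each pose maps bone name -> rotation; the upper pose wins on its bones."""
    merged = dict(lower_pose)  # start from the locomotion (lower-body) pose
    merged.update({b: r for b, r in upper_pose.items() if b in UPPER_BODY})
    return merged

shoot = {"spine": 10.0, "arm_l": 45.0, "arm_r": 90.0, "head": 5.0}
run = {"spine": 0.0, "arm_l": 20.0, "arm_r": 20.0, "leg_l": 30.0, "leg_r": -30.0}
initial_pose = superpose(shoot, run)   # upper body shoots, lower body runs
print(initial_pose)
```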
Step 1802: Substitute the initial animation pose into the skeleton model of the main control object to obtain the original skeleton model.
Optionally, the skeleton model of the main control object is adjusted according to the initial animation pose to obtain the original skeleton model.
Illustratively, as shown in Fig. 19, the original skeleton model 1902 extracted in the ontology animation blueprint 1901 can, after pose adjustment, yield the projection model 1903, and the projection model 1903 is used to generate the projection of the main control object in the virtual environment.
Step 1803: Perform bone deformation processing on the original skeleton model to obtain a bone-deformed original skeleton model.
Illustratively, the first bone deformation processing is performed on the original skeleton model to obtain the bone-deformed original skeleton model. The first bone deformation processing is used to scale the bones at the overlapping area between the camera model of the main control object and the original skeleton model, for example scaling the bones of the head area of the main control object.
Illustratively, the second bone deformation processing is performed on the original skeleton model to obtain the bone-deformed original skeleton model. The second bone deformation processing is used to distort the original skeleton model when the main control object is in the target current state, for example over-twisting the skeleton model of the main control object when the main control object leans left or right.
Step 1804: Adjust the posture of the bone-deformed original skeleton model based on the current state to obtain the ontology model of the main control object.
Optionally, the posture of the bone-deformed original skeleton model is adjusted using inverse kinematics based on the current state to obtain the ontology model of the main control object.
It should be noted that IK (Inverse Kinematics) and AO (Aim Offset) both have timing dependencies on the bone deformation processing; for example, some IK operations must be performed after the bone deformation processing, another part must be performed before it, and yet another part may be performed either before or after the bone deformation processing. How IK and AO depend on the bone deformation processing is specific to the project.
Step 1805: Render the ontology model to obtain the ontology of the main control object.
Optionally, the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control object; for example, the ontology model includes a head model, a torso model, an upper-limb model, and a lower-limb model. When rendering the ontology model, the at least two component models must each be rendered to obtain the rendering results of the at least two component models, and the rendering results of the at least two component models are composited to obtain the ontology of the main control object. Illustratively, the head model, torso model, upper-limb model, and lower-limb model are rendered separately to obtain their corresponding rendering results, and the four rendering results are composited to obtain the ontology of the main control object.
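A condensed, runnable sketch of the blueprint of Fig. 18 together with the tap point this application adds before step 1803: the original skeleton model is copied out before bone deformation and reused for the projection. Every stage below is a placeholder dict transform, not real engine code.
```python
def ontology_animation_blueprint(state):
    pose = {"matched_for": state}                            # step 1801 (+ superposition)
    original = {"bones": "skeleton", "pose": pose}           # step 1802
    projection_source = dict(original)                       # tap: taken BEFORE deformation
    deformed = {**original, "head_scale": 0.0, "twist": 75}  # step 1803
    ontology_model = {**deformed, "ik_refined": True}        # step 1804
    return ontology_model, projection_source                 # step 1805 renders the ontology

body_model, shadow_model = ontology_animation_blueprint("running")
print("head_scale" in shadow_model)  # False: the projection keeps its head
```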
In summary, this embodiment provides a method for the ontology animation blueprint, which can obtain the ontology of the main control object. Moreover, the original skeleton model in the ontology animation blueprint can be used to obtain the projection of the main control object.
In the embodiments of this application, the steps after the bone deformation processing in the ontology animation blueprint are moved before the bone deformation processing as far as possible. In one specific implementation, another implementation method of the ontology animation blueprint is provided. In specific implementation, the steps after the bone deformation processing may be moved before the bone deformation processing provided this does not affect the ontology of the main control object. For example, in the embodiment shown in Fig. 18, the operation corresponding to step 1804 is moved before step 1803.
Fig. 20 shows a schematic flowchart of an implementation method of the ontology animation blueprint provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1 and includes the following steps:
Step 2001: Determine the initial animation pose of the main control object.
Optionally, the initial animation pose is determined according to the current state of the main control object.
Optionally, the skeleton model of the main control object includes at least two component models, and the at least two component models include models of different parts of the main control object; for example, the skeleton model of the main control object includes an upper-body model and a lower-body model. When determining the initial animation pose, the animation poses of the at least two component models must be determined in sequence, and the animation poses of the at least two component models are superposed to obtain the initial animation pose. Illustratively, if the main control object runs while shooting, a shooting animation pose is matched for its upper body and a running animation pose for its lower body, and the animation poses of the upper and lower body are superposed to obtain the initial animation pose of the main control object.
Step 2002: Substitute the initial animation pose into the skeleton model of the main control object to obtain a bone pose model.
Optionally, the bones of the main control object are adjusted according to the initial animation pose to obtain the bone pose model.
Step 2003: Adjust the pose of the bone pose model based on the current state to obtain the original skeleton model.
Optionally, the pose of the bone pose model is adjusted using inverse kinematics based on the current state to obtain the original skeleton model.
Illustratively, as shown in Fig. 21, the original skeleton model 2102 extracted in the ontology animation blueprint 2101 has already had its pose adjusted, so the original skeleton model 2102 can be used directly as the projection model 2103, and the projection model 2103 is used to generate the projection of the main control object in the virtual environment.
Step 2004: Perform bone deformation processing on the original skeleton model to obtain a bone-deformed original skeleton model.
Illustratively, the first bone deformation processing is performed on the original skeleton model to obtain the bone-deformed original skeleton model. The first bone deformation processing is used to scale the bones at the overlapping area between the camera model of the main control object and the original skeleton model, for example scaling the bones of the head area of the main control object.
Illustratively, the second bone deformation processing is performed on the original skeleton model to obtain the bone-deformed original skeleton model. The second bone deformation processing is used to distort the original skeleton model when the main control object is in the target current state, for example over-twisting the skeleton model of the main control object when the main control object leans left or right.
Step 2005: Render the ontology model to obtain the ontology of the main control object.
Optionally, the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control object; for example, the ontology model includes a head model, a torso model, an upper-limb model, and a lower-limb model. When rendering the ontology model, the at least two component models must each be rendered to obtain the rendering results of the at least two component models, and the rendering results of the at least two component models are composited to obtain the ontology of the main control object. Illustratively, the head model, torso model, upper-limb model, and lower-limb model are rendered separately to obtain their corresponding rendering results, and the four rendering results are composited to obtain the ontology of the main control object.
It should be noted that the embodiments of this application move operations that follow the bone deformation processing to before the bone deformation processing, but with one premise: moving an operation from after the bone deformation processing to before it must not affect the ontology and/or the projection of the main control object. Which post-deformation operations can be moved is determined by actual requirements. Illustratively, under actual requirement A the posture adjustment operation can be moved before the bone deformation processing, whereas under actual requirement B it cannot. This application does not specifically limit the operations that can be moved before the bone deformation processing.
In summary, this embodiment provides a method for the ontology animation blueprint, which can obtain the ontology of the main control object. Moreover, the original skeleton model in the ontology animation blueprint can be used to obtain the projection of the main control object.
In a specific example, an application in a shooting game is used for illustration.
Fig. 22 shows a schematic flowchart of a method for generating a main control object projection provided by an embodiment of this application. The method may be executed by the terminal 120 shown in Fig. 1 and includes the following steps:
Step 2201: Extract the original skeleton model of the main control game object from the ontology animation blueprint of the main control game object.
The main control game object is at least one of a virtual character, a virtual animal, and an anime character, which is not specifically limited in this application.
The original skeleton model is the model of the main control game object that has not undergone bone deformation.
The ontology animation blueprint is the process of generating the ontology of the main control object in the virtual environment. The ontology of the main control object can be obtained through the ontology animation blueprint. The ontology of the main control object refers to the main control object in the virtual environment.
In an optional embodiment, the ontology animation blueprint includes bone deformation processing; the original skeleton model is extracted from the ontology animation blueprint before the bone deformation processing of the ontology animation blueprint.
In an optional embodiment, the bone deformation processing includes a first bone deformation processing, which is used to scale the bones at the overlapping area between the camera model of the main control object and the original skeleton model; the original skeleton model is extracted from the ontology animation blueprint before the first bone deformation processing of the ontology animation blueprint.
In an optional embodiment, the bone deformation processing includes a second bone deformation processing, which is used to distort the original skeleton model when the main control object is in the target current state; the original skeleton model is extracted from the ontology animation blueprint before the second bone deformation processing of the ontology animation blueprint.
Step 2202: Render the original skeleton model to obtain the projection of the main control game object.
Optionally, the projection of the main control game object is the projection on the backlit side in the virtual environment. Illustratively, the projection of the main control game object is a projection on the ground in the virtual environment, or a projection on a wall in the virtual environment, or a projection on the ceiling in the virtual environment.
Optionally, the projection of the main control game object may also refer to an image on a reflective surface. For example, the projection of the main control game object refers to its image in a mirror; for another example, it refers to its image on a water surface.
Optionally, before rendering the original model, the pose of the original skeleton model is adjusted through inverse kinematics to obtain a pose-adjusted skeleton model, and the pose-adjusted skeleton model is rendered to obtain the projection of the main control game object. Optionally, before rendering the original model, the pose of the original skeleton model is adjusted based on the current state of the main control game object to obtain the projection model, and the pose-adjusted skeleton model is rendered to obtain the projection of the main control game object.
In an optional embodiment, the original skeleton model includes a child bone and a parent bone corresponding to the child bone, the child bone being located at the end of the bone chain; the target position of the child bone is determined based on the current state of the main control game object; the position of the child bone is determined according to the target position; the position of the parent bone is determined according to the target position and the position of the child bone; and the pose of the original skeleton model is adjusted according to the positions of the child bone and the parent bone to obtain the projection model.
In an optional embodiment, a first vector from the first end point of the child bone to the end of the child bone is determined, the first end point of the child bone being the end point on the child bone away from the end of the child bone; a second vector from the first end point to the target position is determined; and the child bone is rotated based on the angle between the first vector and the second vector to determine the position of the child bone.
In an optional embodiment, a third vector from the second end point of the parent bone to the end of the child bone is determined, the second end point of the parent bone being the end point on the parent bone away from the child bone; a fourth vector from the second end point to the target position is determined; and the parent bone is rotated based on the angle between the third vector and the fourth vector to determine the position of the parent bone.
In an optional embodiment, the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control game object; the projection model is replaced with an integrated projection model, which is a low-poly model obtained by merging the at least two component models, the face count of the integrated projection model being smaller than that of the projection model; and the integrated projection model is rendered to obtain the projection of the main control game object.
In an optional embodiment, the ontology animation blueprint includes the following steps:
1. Determine the initial animation pose of the main control game object;
2. Substitute the initial animation pose into the skeleton model of the main control game object to obtain the original skeleton model;
3. Perform bone deformation processing on the original skeleton model to obtain a bone-deformed original skeleton model;
4. Adjust the posture of the bone-deformed original skeleton model based on the current state to obtain the ontology model of the main control game object, the ontology model being the model used to generate the ontology of the main control game object;
5. Render the ontology model to obtain the ontology of the main control game object.
In an optional embodiment, the skeleton model of the main control game object includes at least two component models, and the at least two component models include models of different parts of the main control game object; determining the initial animation pose of the main control game object includes: determining the animation poses of the at least two component models in sequence, and superposing the animation poses of the at least two component models to obtain the initial animation pose.
In an optional embodiment, the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control game object; rendering the ontology model to obtain the ontology of the main control game object includes: rendering the at least two component models separately to obtain the rendering results of the at least two component models, and compositing the rendering results of the at least two component models to obtain the ontology of the main control game object.
In summary, when generating the projection of the main control game object, the embodiments of this application only need to extract the original skeleton model from the ontology animation blueprint, and the projection of the main control object can be obtained from the original skeleton model. This method can obtain the projection without setting up an additional animation blueprint, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device; even if the computing power of the computer device is weak, the projection of the main control object can be completed.
The following are apparatus embodiments of this application, which may be used to execute the method embodiments of this application. For details not disclosed in the apparatus embodiments of this application, please refer to the method embodiments of this application.
Please refer to Fig. 23, which shows a block diagram of an apparatus for generating a main control object projection provided by an embodiment of this application. The above functions may be implemented by hardware, or by hardware executing corresponding software. The apparatus 2300 includes:
an extraction module 2301, used to extract the original skeleton model of the main control object from the ontology animation blueprint of the main control object, where the main control object is an object that observes the virtual environment from a first-person perspective, the ontology animation blueprint is used to generate the ontology model of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation;
an adjustment module 2302, used to obtain a projection model based on the original skeleton model, where the projection model is a model used to generate the projection of the main control object in the virtual environment;
a rendering module 2303, used to render the projection model to obtain the projection of the main control object.
In an optional design of this application, the ontology animation blueprint includes bone deformation processing; the extraction module 2301 is also used to extract the original skeleton model from the ontology animation blueprint before the bone deformation processing of the ontology animation blueprint.
In an optional design of this application, the bone deformation processing includes a first bone deformation processing, which is used to scale the bones at the overlapping area between the camera model of the main control object and the original skeleton model. The extraction module 2301 is also used to extract the original skeleton model from the ontology animation blueprint before the first bone deformation processing of the ontology animation blueprint.
In an optional design of this application, the bone deformation processing includes a second bone deformation processing, which is used to distort the original skeleton model when the main control object is in the target current state; the extraction module 2301 is also used to extract the original skeleton model from the ontology animation blueprint before the second bone deformation processing of the ontology animation blueprint.
In an optional design of this application, the adjustment module 2302 is used to determine the original skeleton model as the projection model.
In an optional design of this application, the adjustment module 2302 is used to adjust the pose of the original skeleton model based on the current state of the main control object to obtain the projection model.
In an optional design of this application, the original skeleton model includes a child bone and a parent bone corresponding to the child bone, the child bone being located at the end of the bone chain; the adjustment module 2302 is also used to determine the target position of the child bone based on the current state of the main control object; determine the position of the child bone according to the target position; determine the position of the parent bone according to the target position and the position of the child bone; and adjust the pose of the original skeleton model according to the positions of the child bone and the parent bone to obtain the projection model.
In an optional design of this application, the adjustment module 2302 is also used to determine the first vector from the first end point of the child bone to the end of the child bone, the first end point of the child bone being the end point on the child bone away from the end of the child bone; determine the second vector from the first end point to the target position; and rotate the child bone based on the angle between the first vector and the second vector to determine the position of the child bone.
In an optional design of this application, the adjustment module 2302 is also used to determine the third vector from the second end point of the parent bone to the end of the child bone, the second end point of the parent bone being the end point on the parent bone away from the child bone; determine the fourth vector from the second end point to the target position; and rotate the parent bone based on the angle between the third vector and the fourth vector to determine the position of the parent bone.
In an optional design of this application, the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control object; the rendering module 2303 is also used to replace the projection model with an integrated projection model, which is a low-poly model obtained by merging the at least two component models, the face count of the integrated projection model being smaller than that of the projection model, and to render the integrated projection model to obtain the projection of the main control object.
In an optional design of this application, the apparatus further includes a production module 2304;
the production module 2304 is used to determine the initial animation pose of the main control object; substitute the initial animation pose into the skeleton model of the main control object to obtain the original skeleton model; perform bone deformation processing on the original skeleton model to obtain the bone-deformed original skeleton model; adjust the posture of the bone-deformed original skeleton model based on the current state to obtain the ontology model of the main control object, the ontology model being the model used to generate the ontology of the main control object; and render the ontology model to obtain the ontology of the main control object.
In an optional design of this application, the skeleton model of the main control object includes at least two component models, and the at least two component models include models of different parts of the main control object; the production module 2304 is also used to determine the animation poses of the at least two component models in sequence, and superpose the animation poses of the at least two component models to obtain the initial animation pose.
In an optional design of this application, the ontology model includes at least two component models, and the at least two component models include models of different parts of the main control object; the production module 2304 is also used to render the at least two component models separately to obtain the rendering results of the at least two component models, and composite the rendering results of the at least two component models to obtain the ontology of the main control object.
In summary, when generating the projection of the main control object, this embodiment extracts the original skeleton model from the ontology animation blueprint and obtains the projection of the main control object from the original skeleton model. This method can obtain the projection without setting up an additional animation blueprint, which simplifies the process of obtaining the projection, obtains the projection without excessive computation, and reduces the computing pressure on the computer device.
Fig. 24 is a schematic structural diagram of a computer device according to an exemplary embodiment. The computer device 2400 includes a central processing unit (Central Processing Unit, CPU) 2401, a system memory 2404 including a random access memory (Random Access Memory, RAM) 2402 and a read-only memory (Read-Only Memory, ROM) 2403, and a system bus 2405 connecting the system memory 2404 and the central processing unit 2401. The computer device 2400 also includes a basic input/output (I/O) system 2406 that helps transfer information between the components within the computer device, and a mass storage device 2407 for storing an operating system 2413, an application program 2414, and other program modules 2415.
The basic input/output system 2406 includes a display 2408 for displaying information and input devices 2409 such as a mouse and a keyboard for the user to input information. The display 2408 and the input devices 2409 are both connected to the central processing unit 2401 through an input/output controller 2410 connected to the system bus 2405. The basic input/output system 2406 may also include the input/output controller 2410 for receiving and processing input from a keyboard, a mouse, an electronic stylus, or other devices. Similarly, the input/output controller 2410 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 2407 is connected to the central processing unit 2401 through a mass storage controller (not shown) connected to the system bus 2405. The mass storage device 2407 and its associated computer-device-readable media provide non-volatile storage for the computer device 2400. That is, the mass storage device 2407 may include a computer-device-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-device-readable media may include computer device storage media and communication media. Computer device storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-device-readable instructions, data structures, program modules, or other data. Computer device storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), CD-ROM, Digital Video Disc (DVD) or other optical storage, cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will know that the computer device storage media are not limited to the above. The above system memory 2404 and mass storage device 2407 may be collectively referred to as memory.
According to various embodiments of the present disclosure, the computer device 2400 may also operate through a remote computer device connected to a network such as the Internet. That is, the computer device 2400 may be connected to the network 2411 through a network interface unit 2412 connected to the system bus 2405; in other words, the network interface unit 2412 may also be used to connect to other types of networks or remote computer device systems (not shown).
The memory further includes one or more programs stored in the memory; the central processing unit 2401 implements all or part of the steps of the above method for generating a main control object projection by executing the one or more programs.
In an exemplary embodiment, a computer-readable storage medium is also provided, storing at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for generating a main control object projection provided by the above method embodiments.
This application also provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for generating a main control object projection provided by the above method embodiments.
This application also provides a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, causing the computer device to perform the method for generating a main control object projection provided by the above embodiments.

Claims (20)

  1. A method for generating a projection of a main control object, wherein the method is executed by a terminal, and the method comprises:
    extracting an original skeleton model of the main control object from an ontology animation blueprint of the main control object, wherein the main control object is an object that observes a virtual environment from a first-person perspective, the ontology animation blueprint is used to generate an ontology model of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation;
    obtaining a projection model based on the original skeleton model, wherein the projection model is a model used to generate a projection of the main control object in the virtual environment;
    rendering the projection model to obtain the projection of the main control object.
  2. The method according to claim 1, wherein the ontology animation blueprint comprises bone deformation processing;
    the extracting an original skeleton model of the main control object from an ontology animation blueprint of the main control object comprises:
    before the bone deformation processing of the ontology animation blueprint, extracting the original skeleton model from the ontology animation blueprint.
  3. The method according to claim 2, wherein the bone deformation processing comprises a first bone deformation processing, the first bone deformation processing being used to scale bones at an overlapping area between a camera model of the main control object and the original skeleton model;
    the extracting, before the bone deformation processing of the ontology animation blueprint, the original skeleton model from the ontology animation blueprint comprises:
    before the first bone deformation processing of the ontology animation blueprint, extracting the original skeleton model from the ontology animation blueprint.
  4. The method according to claim 2, wherein the bone deformation processing comprises a second bone deformation processing, the second bone deformation processing being used to distort the original skeleton model when the main control object is in a target current state;
    the extracting, before the bone deformation processing of the ontology animation blueprint, the original skeleton model from the ontology animation blueprint comprises:
    before the second bone deformation processing of the ontology animation blueprint, extracting the original skeleton model from the ontology animation blueprint.
  5. The method according to any one of claims 1 to 4, wherein the obtaining a projection model based on the original skeleton model comprises:
    determining the original skeleton model as the projection model;
    or,
    adjusting a pose of the original skeleton model based on a current state of the main control object to obtain the projection model.
  6. The method according to claim 5, wherein the original skeleton model comprises a child bone and a parent bone corresponding to the child bone, the child bone being located at an end of a bone chain;
    the adjusting a pose of the original skeleton model based on a current state of the main control object to obtain the projection model comprises:
    determining a target position of the child bone based on the current state of the main control object;
    determining a position of the child bone according to the target position;
    determining a position of the parent bone according to the target position and the position of the child bone;
    adjusting the pose of the original skeleton model according to the positions of the child bone and the parent bone to obtain the projection model.
  7. The method according to claim 6, wherein the determining a position of the child bone according to the target position comprises:
    determining a first vector from a first end point of the child bone to an end of the child bone, the first end point of the child bone being an end point on the child bone away from the end of the child bone;
    determining a second vector from the first end point to the target position;
    rotating the child bone based on an angle between the first vector and the second vector to determine the position of the child bone.
  8. The method according to claim 6, wherein the determining a position of the parent bone according to the target position and the position of the child bone comprises:
    determining a third vector from a second end point of the parent bone to the end of the child bone, the second end point of the parent bone being an end point on the parent bone away from the child bone;
    determining a fourth vector from the second end point to the target position;
    rotating the parent bone based on an angle between the third vector and the fourth vector to determine the position of the parent bone.
  9. The method according to any one of claims 1 to 4, wherein the ontology model comprises at least two component models, the at least two component models comprising models of different parts of the main control object;
    the rendering the projection model to obtain the projection of the main control object comprises:
    replacing the projection model with an integrated projection model, the integrated projection model being a low-poly model obtained by merging the at least two component models, a face count of the integrated projection model being smaller than a face count of the projection model;
    rendering the integrated projection model to obtain the projection of the main control object.
  10. The method according to any one of claims 1 to 4, wherein the ontology animation blueprint comprises the following steps:
    determining an initial animation pose of the main control object;
    substituting the initial animation pose into a skeleton model of the main control object to obtain the original skeleton model;
    performing bone deformation processing on the original skeleton model to obtain a bone-deformed original skeleton model;
    adjusting a posture of the bone-deformed original skeleton model based on the current state to obtain the ontology model of the main control object, the ontology model being a model used to generate an ontology of the main control object;
    rendering the ontology model to obtain the ontology of the main control object.
  11. The method according to claim 10, wherein the skeleton model of the main control object comprises at least two component models, the at least two component models comprising models of different parts of the main control object;
    the determining an initial animation pose of the main control object comprises:
    determining animation poses of the at least two component models in sequence;
    superposing the animation poses of the at least two component models to obtain the initial animation pose.
  12. The method according to claim 9, wherein the ontology model comprises at least two component models, the at least two component models comprising models of different parts of the main control object;
    the rendering the ontology model to obtain the ontology of the main control object comprises:
    rendering the at least two component models separately to obtain rendering results of the at least two component models;
    compositing the rendering results of the at least two component models to obtain the ontology of the main control object.
  13. An apparatus for generating a projection of a main control object, wherein the apparatus comprises:
    an extraction module, used to extract an original skeleton model of the main control object from an ontology animation blueprint of the main control object, wherein the main control object is an object that observes a virtual environment from a first-person perspective, the ontology animation blueprint is used to generate an ontology of the main control object in the virtual environment, and the original skeleton model is a model of the main control object that has not undergone bone deformation;
    an adjustment module, used to obtain a projection model based on the original skeleton model, wherein the projection model is a model used to generate a projection of the main control object in the virtual environment;
    a rendering module, used to render the projection model to obtain the projection of the main control object.
  14. The apparatus according to claim 13, wherein the ontology animation blueprint comprises bone deformation processing;
    the extraction module is used to extract the original skeleton model from the ontology animation blueprint before the bone deformation processing of the ontology animation blueprint.
  15. The apparatus according to claim 14, wherein the bone deformation processing comprises a first bone deformation processing, the first bone deformation processing being used to scale bones at an overlapping area between a camera model of the main control object and the original skeleton model;
    the extraction module is further used to extract the original skeleton model from the ontology animation blueprint before the first bone deformation processing of the ontology animation blueprint.
  16. The apparatus according to claim 14, wherein the bone deformation processing comprises a second bone deformation processing, the second bone deformation processing being used to distort the original skeleton model when the main control object is in a target current state;
    the extraction module is further used to extract the original skeleton model from the ontology animation blueprint before the second bone deformation processing of the ontology animation blueprint.
  17. The apparatus according to any one of claims 13 to 16, wherein the adjustment module is further used to determine the original skeleton model as the projection model;
    or,
    the adjustment module is further used to adjust a pose of the original skeleton model based on a current state of the main control object to obtain the projection model.
  18. A computer device, wherein the computer device comprises a processor and a memory, the memory storing at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for generating a projection of a main control object according to any one of claims 1 to 12.
  19. A computer-readable storage medium, wherein the computer-readable storage medium stores at least one piece of program code, and the program code is loaded and executed by a processor to implement the method for generating a projection of a main control object according to any one of claims 1 to 12.
  20. A computer program product, comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the method for generating a projection of a main control object according to any one of claims 1 to 12.
PCT/CN2022/126147 2022-01-07 2022-10-19 Method, apparatus, device, and medium for generating main control object projection WO2023130800A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023566962A JP2024518913A (ja) 2022-01-07 2022-10-19 Projection generation method and apparatus for a main control object, computer device, and computer program
US18/221,812 US20230360348A1 (en) 2022-01-07 2023-07-13 Method and apparatus for generating main control object projection, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210015247.5A CN114359469B (zh) 2022-01-07 2022-01-07 Method, apparatus, device, and medium for generating main control object projection
CN202210015247.5 2022-01-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/221,812 Continuation US20230360348A1 (en) 2022-01-07 2023-07-13 Method and apparatus for generating main control object projection, device, and medium

Publications (1)

Publication Number Publication Date
WO2023130800A1 true WO2023130800A1 (zh) 2023-07-13

Family

ID=81106389

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126147 WO2023130800A1 (zh) 2022-01-07 2022-10-19 生成主控对象投影的方法、装置、设备及介质

Country Status (4)

Country Link
US (1) US20230360348A1 (zh)
JP (1) JP2024518913A (zh)
CN (1) CN114359469B (zh)
WO (1) WO2023130800A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359469B (zh) 2022-01-07 2023-06-09 腾讯科技(深圳)有限公司 Method, apparatus, device, and medium for generating main control object projection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112206525A (zh) * 2020-09-30 2021-01-12 深圳市瑞立视多媒体科技有限公司 Information processing method and apparatus for hand-twisting a virtual item in the UE4 engine
CN112927332A (zh) * 2021-04-02 2021-06-08 腾讯科技(深圳)有限公司 Skeletal animation update method, apparatus, device, and storage medium
CN114359469A (zh) * 2022-01-07 2022-04-15 腾讯科技(深圳)有限公司 Method, apparatus, device, and medium for generating main control object projection

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3241310B2 (ja) * 1996-11-19 2001-12-25 株式会社ナムコ Skeleton model shape deformation method, image synthesis device, and information storage medium
US20100277470A1 (en) * 2009-05-01 2010-11-04 Microsoft Corporation Systems And Methods For Applying Model Tracking To Motion Capture
US9473758B1 (en) * 2015-12-06 2016-10-18 Sliver VR Technologies, Inc. Methods and systems for game video recording and virtual reality replay
US11107183B2 (en) * 2017-06-09 2021-08-31 Sony Interactive Entertainment Inc. Adaptive mesh skinning in a foveated rendering system
CN108664231B (zh) * 2018-05-11 2021-02-09 腾讯科技(深圳)有限公司 Display method, apparatus, device, and storage medium for a 2.5-dimensional virtual environment
CN110755845B (zh) * 2019-10-21 2022-11-22 腾讯科技(深圳)有限公司 Picture display method, apparatus, device, and medium for a virtual world
US11175730B2 (en) * 2019-12-06 2021-11-16 Facebook Technologies, Llc Posture-based virtual space configurations
CN111249729B (zh) * 2020-02-18 2023-10-20 网易(杭州)网络有限公司 Game character display method and apparatus, electronic device, and storage medium
CN112221133A (zh) * 2020-10-21 2021-01-15 Oppo(重庆)智能科技有限公司 Game picture customization method, cloud server, terminal, and storage medium
CN113577774A (zh) * 2021-02-01 2021-11-02 腾讯科技(深圳)有限公司 Virtual object generation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
JP2024518913A (ja) 2024-05-08
US20230360348A1 (en) 2023-11-09
CN114359469A (zh) 2022-04-15
CN114359469B (zh) 2023-06-09

Similar Documents

Publication Publication Date Title
CN112691377B (zh) Virtual character control method and apparatus, electronic device, and storage medium
US10864446B2 (en) Automated player control takeover in a video game
KR20210003687A (ko) Custom model for imitating player gameplay in a video game
WO2022001652A1 (zh) Virtual character control method and apparatus, computer device, and storage medium
US11110353B2 (en) Distributed training for machine learning of AI controlled virtual entities on video game clients
US9208613B2 (en) Action modeling device, method, and program
US11305193B2 (en) Systems and methods for multi-user editing of virtual content
US11724191B2 (en) Network-based video game editing and modification distribution system
US20220212104A1 (en) Display method and apparatus for virtual environment picture, and device and storage medium
US11816772B2 (en) System for customizing in-game character animations by players
WO2021238870A1 (zh) Information display method, apparatus, device, and storage medium
CN113134233B (zh) Control display method and apparatus, computer device, and storage medium
WO2022184128A1 (zh) Skill release method and apparatus for a virtual object, device, and storage medium
Joselli et al. An architecture for game interaction using mobile
WO2023130800A1 (zh) Method, apparatus, device, and medium for generating main control object projection
WO2022127197A1 (zh) Method, system, device, and medium for converting speech to text
US20230016383A1 (en) Controlling a virtual object based on strength values
US20230410398A1 (en) System and method for animating an avatar in a virtual world
CN113018862B (zh) Virtual object control method and apparatus, electronic device, and storage medium
JP2023548922A (ja) Virtual object control method, apparatus, electronic device, and computer program
JP2019054986A (ja) Game system and program
CN114146414A (zh) Virtual skill control method, apparatus, device, storage medium, and program product
CN112933595B (zh) Method and apparatus for processing pop-up text display in a game, electronic device, and storage medium
WO2024109389A1 (zh) Virtual object control method, apparatus, device, medium, and program product
US20230051531A1 (en) Rapid target selection with priority zones

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22918252

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 11202306432P

Country of ref document: SG

WWE Wipo information: entry into national phase

Ref document number: 2023566962

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2401002586

Country of ref document: TH