WO2022001652A1 - Virtual character control method, apparatus, computer device and storage medium - Google Patents


Info

Publication number
WO2022001652A1
WO2022001652A1 · PCT/CN2021/100092 · CN2021100092W
Authority
WO
WIPO (PCT)
Prior art keywords
character
bone
target
target virtual
virtual character
Prior art date
Application number
PCT/CN2021/100092
Other languages
English (en)
French (fr)
Inventor
郭畅
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2022001652A1
Priority to US17/883,446 (published as US20230045852A1)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/833 Hand-to-hand fighting, e.g. martial arts competition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607 Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6692 Methods for processing data by generating or executing the game program for rendering three dimensional images using special effects, generally involving post-processing, e.g. blooming
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8029 Fighting without shooting

Definitions

  • the present application relates to the field of computer technology, and in particular, to a virtual character control method, apparatus, computer device and computer storage medium.
  • Embodiments of the present application provide a virtual character control method, apparatus, computer device, and storage medium.
  • a virtual character control method executed by a computer device, the method comprising:
  • the target virtual character is bound with a basic bone and a deformed bone
  • when the character action includes character movement, controlling the target virtual character to implement the character movement in the virtual scene through the movement of the basic bone associated with the character movement;
  • the target virtual character is controlled to implement the character local deformation in the virtual scene through the deformation of the deformed bones associated with the character local deformation.
  • a virtual character control device includes:
  • a display module for displaying at least a part of a target virtual character in a virtual scene; the target virtual character is bound with a basic bone and a deformed bone;
  • an action triggering module for triggering the character action of the target virtual character in the virtual scene
  • a control module configured to control the target virtual character to implement the character movement in the virtual scene through the movement of the basic bone associated with the character movement when the character action includes character movement;
  • the control module is configured to control the target virtual character to implement the character local deformation in the virtual scene through the deformation of the deformed bones associated with the character local deformation when the character action includes the character local deformation.
  • a computer device comprising a memory and one or more processors, the memory having computer-readable instructions stored therein which, when executed by the one or more processors, cause the one or more processors to execute the steps of the above virtual character control method.
  • One or more non-volatile readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the above virtual character control method.
  • FIG. 1 is an application environment diagram of a virtual character control method in one embodiment.
  • FIG. 2 is a schematic flowchart of a virtual character control method in one embodiment.
  • FIG. 3 is a schematic flowchart of generating a target virtual character in a virtual scene in one embodiment.
  • FIG. 4 is a schematic flowchart of a virtual character control method in another embodiment.
  • FIG. 5 is a schematic interface diagram of a virtual scene of character interaction in one embodiment.
  • FIG. 6 is a schematic interface diagram of a virtual scene of character interaction in another embodiment.
  • FIG. 7 is a schematic diagram of the deformation of the virtual character's hand in one embodiment.
  • FIG. 8 is a schematic diagram of a base skeleton corresponding to a target virtual character in one embodiment.
  • FIG. 9 is a schematic diagram of a base skeleton with deformed bones added in one embodiment.
  • FIG. 10 is a schematic diagram of a target virtual character after skinning processing in one embodiment.
  • FIG. 11 is a schematic diagram of a position interface of deformed bones in one embodiment.
  • FIG. 12 is a schematic diagram of controlling deformation of deformed bones in one embodiment.
  • FIG. 13 is a schematic diagram of a target avatar displayed in a 3D engine in one embodiment.
  • FIG. 14 is a schematic diagram of zooming in on the hand of the target avatar in one embodiment.
  • FIG. 15 is a structural block diagram of a virtual character control apparatus in one embodiment.
  • Figure 16 is a diagram of the internal structure of a computer device in one embodiment.
  • the virtual character control method provided by the present application can be applied to the application environment shown in FIG. 1 .
  • the terminal device 110 communicates with the server 120 through the network.
  • the terminal device 110 may be, but not limited to, smart terminals such as smart phones, tablet computers, notebook computers, desktop computers, and smart TVs.
  • a client is installed on the terminal device 110 and can be used for playing videos, for example through clients such as a video client, an instant messaging client, a browser client, and an educational client.
  • the server 120 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big-data and artificial-intelligence platforms. This application does not limit the number of terminal devices and servers.
  • the server 120 may be configured to send an action parameter value to the terminal device, where the action parameter value may be used to enable the terminal device 110 to control the target virtual character to complete the character action.
  • the terminal device 110 is used to display at least a part of the target virtual character in the virtual scene; the target virtual character is bound with a basic bone and a deformed bone; the character action of the target virtual character is triggered in the virtual scene; The movement of the basic bones associated with the character movement controls the target avatar to implement the character movement in the virtual scene; when the character action includes the local deformation of the character, the deformation of the deformed bones associated with the local deformation of the character is used to control the target avatar to implement in the virtual scene. Character deformation locally.
  • a virtual character control method is provided, and the method can be executed by a terminal device or a server, or can be executed jointly by a terminal device and a server.
  • execution by the terminal device 110 is described below as an example; the method includes the following operations:
  • the target virtual character is displayed in the virtual scene; the target virtual character is bound with a base bone and a deformed bone.
  • the virtual scene can be used to simulate a three-dimensional virtual space, and can also be used to simulate a two-dimensional virtual space.
  • the virtual scene may include sky, land, ocean, buildings, mountains and forests, target virtual characters, etc., but is not limited to this.
  • the virtual scene can be shown after opening the application, for example after opening the game.
  • the target avatar is an active character in the virtual scene.
  • the target avatar can be used to represent the user's avatar in the virtual scene.
  • the target virtual character may be a virtual human, a virtual animal, etc., but is not limited thereto.
  • the base skeleton is used to enable the character movement of the target avatar. Character movement includes forward, backward, turning, running and other actions.
  • the base bone can specifically be a CS (Character Studio) bone. Deformed bones are used to deform the target virtual character. Character deformation may include enlarging, shrinking, stretching, etc., but is not limited thereto.
  • the deformed bone can specifically be a dummy (virtual body) bone.
  • base bones and deformed bones may be collectively referred to as bones. Each bone has a unique corresponding bone identifier to distinguish different bones.
  • the terminal device displays at least a part of the target virtual character in the virtual scene.
  • At least a part of the target avatar may be the head of the target avatar, the upper body of the target avatar, or the side view of the target avatar, etc., but is not limited thereto.
  • a character action of the target virtual character is triggered in the virtual scene.
  • the character action is used to make the target virtual character move in the virtual scene.
  • the character action can be, for example, forward, backward, turn, run, zoom in, zoom out, release skills, etc., but is not limited thereto.
  • a character action of the target virtual character corresponding to the character control operation is triggered in the virtual scene.
  • w entered through the keyboard of the terminal device corresponds to forward
  • s corresponds to backward
  • a corresponds to move to the left
  • d corresponds to move to the right
  • when a control operation on the w key is detected, the forward action of the target virtual character corresponding to w is triggered in the virtual scene.
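The key-to-action mapping described above can be sketched as a simple lookup table. The following minimal Python sketch is illustrative only; the key bindings come from the example above, but the action names and function are hypothetical, not from the patent.

```python
# Hypothetical mapping from keyboard keys to character actions,
# following the w/s/a/d example above.
KEY_ACTIONS = {
    "w": "move_forward",
    "s": "move_backward",
    "a": "move_left",
    "d": "move_right",
}

def trigger_character_action(key):
    """Return the character action bound to a key press, or None."""
    return KEY_ACTIONS.get(key)
```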
  • the target virtual character is controlled to implement character motion in the virtual scene through the motion of the basic skeleton associated with the character motion.
  • each basic bone has a unique corresponding basic bone identifier.
  • the basic bones associated with the character movement may be, for example, the basic bones of the limbs of the character, or only the basic bones of the legs, etc., but are not limited thereto.
  • the terminal device can determine the basic skeleton associated with the character motion by using the basic skeleton identification associated with the character motion.
  • the terminal device controls the target virtual character to implement character movement in the virtual scene through the movement of the basic bones associated with the character movement. Taking the character movement as walking and the associated basic bones as the basic bones of the limbs as an example, the terminal device controls the target virtual character to walk in the virtual scene through the swing of the basic bones of the limbs.
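As an illustration of driving a character movement through its associated base bones, the sketch below swings the limb bones to animate walking. The bone identifiers, the `MOVEMENT_BONES` association table, and the single-axis rotation model are all assumptions for illustration, not an API from the patent.

```python
from dataclasses import dataclass

@dataclass
class BaseBone:
    bone_id: str              # unique identifier distinguishing bones
    rotation: float = 0.0     # rotation about a single axis, in degrees
    position: tuple = (0.0, 0.0, 0.0)

# Hypothetical association between a character movement and the
# identifiers of the base bones that drive it (here: the four limbs).
MOVEMENT_BONES = {"walk": ["arm_l", "arm_r", "leg_l", "leg_r"]}

def apply_movement(bones, movement, swing):
    """Swing the base bones associated with `movement` to animate it."""
    for bone_id in MOVEMENT_BONES.get(movement, []):
        bones[bone_id].rotation += swing
```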
  • the target virtual character is controlled to implement the character local deformation in the virtual scene through the deformation of the deformed bones associated with the character local deformation.
  • each deformed bone has a unique corresponding deformed bone identifier.
  • the local deformation of the character may be the deformation of a certain part of the character's body, or the deformation of the character's weapon, etc., but is not limited thereto.
  • the deformed bones associated with the local deformation of the character can be set as required, for example according to the type of the virtual character, but are not limited to this.
  • the deformed bones associated with the type A of the avatar are the deformed bones at the neck
  • the deformed bones associated with the type B of the avatar are the deformed bones at the limbs, and so on.
  • the terminal device can determine the deformed bone associated with the character movement through the identification of the deformed bone associated with the character movement.
  • the terminal device controls the target virtual character to implement the local deformation of the character in the virtual scene through the deformation of the deformed bones associated with the local deformation. For example, taking the local deformation as right-arm magnification and the associated deformed bone as the right-arm deformed bone, the terminal device controls the target virtual character to implement right-arm magnification through the enlargement of that deformed bone.
  • the terminal device controls the target virtual character to implement character movement in the virtual scene through the movement of the basic bones associated with the character movement, and controls the target virtual character to implement local deformation of the character in the virtual scene through the deformation of the deformed bones associated with the local deformation.
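A minimal sketch of driving a local deformation through the associated deformed bones follows; the bone identifiers, the `DEFORMATION_BONES` table, and the per-bone scale dictionary are hypothetical stand-ins for whatever representation the engine actually uses.

```python
# Hypothetical association between a local deformation and the
# identifiers of the deformed bones that implement it.
DEFORMATION_BONES = {"right_arm_magnify": ["arm_r_deform_0", "arm_r_deform_1"]}

def apply_local_deformation(scales, deformation, factor):
    """Scale the deformed bones associated with `deformation`.

    `scales` maps a deformed-bone identifier to its current scale factor.
    """
    for bone_id in DEFORMATION_BONES.get(deformation, []):
        scales[bone_id] *= factor
```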
  • the local deformation of the character includes X local deformation and Y local deformation.
  • the X local deformation can be triggered by the X key of the keyboard, and the deformed bones associated with it are the limb deformed bones; the Y local deformation can be triggered by the Y key of the keyboard, and the deformed bone associated with it is the head deformed bone.
  • the target virtual character is controlled to implement the character's limb deformation in the virtual scene through the deformation of the limb deformation bones associated with the X local deformation.
  • the target virtual character is controlled to implement the character head deformation in the virtual scene through the deformation of the head deformation bone associated with the Y local deformation.
  • for example, the character action that triggers the target virtual character in a game is releasing a skill.
  • when the skill includes character movement, e.g. a kick, the movement of the basic bones of the leg associated with the kick is used to control the target virtual character to implement the kick.
  • when the skill also includes thickening of the legs, i.e. the legs become thicker while kicking, the enlargement of the leg deformed bones associated with the thickening is used to control the target virtual character to implement the deformation.
  • here, the game is the virtual scene, releasing the skill is the character action, the kick is the character movement, and the thickening of the legs is the local deformation of the character.
  • the target virtual character is bound with a basic bone and a deformed bone, and when the character motion triggered in the virtual scene includes character motion, the target virtual character is controlled by the motion of the basic bone to implement the character motion.
  • the triggered character actions include local deformation of the character, and the local deformation of the character is implemented through the deformation of the deformed bones.
  • the motion of the base bone includes at least one of bone movement and bone rotation; the deformation of the deformed bone includes at least one of local scaling of the bone and overall scaling of the bone.
  • bone movement refers to the movement of bones from one position to another in the virtual scene.
  • a bone rotation is when the bone moves around an axis without changing its position.
  • Local scaling of bones refers to the lengthening or shortening of one end of a bone.
  • the overall scaling of the bone refers to the overall enlargement or reduction of the bone.
  • the motion of the base bone can include only bone movement, or only bone rotation, or both bone movement and bone rotation.
  • the deformation of deformed bones can include only local scaling of the bones, or only the overall scaling of the bones, or both local scaling and overall scaling of the bones.
  • the motion of the basic bone includes at least one of bone movement and bone rotation, and the deformation of the deformed bone includes at least one of local scaling and overall scaling of the bone; that is, the basic bones and the deformed bones can each implement different functions.
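The four operation types above (movement, rotation, local scaling, overall scaling) can be sketched as methods on a single bone object. The `Bone` class and its fields below are illustrative assumptions, not an API from the patent.

```python
class Bone:
    """Minimal sketch of the four bone operations described above."""

    def __init__(self, length=1.0):
        self.position = [0.0, 0.0, 0.0]
        self.rotation = 0.0
        self.length = length   # affected by local scaling (one end)
        self.scale = 1.0       # overall uniform scale of the whole bone

    def move(self, dx, dy, dz):
        """Bone movement: translate from one position to another."""
        x, y, z = self.position
        self.position = [x + dx, y + dy, z + dz]

    def rotate(self, degrees):
        """Bone rotation: turn about an axis without changing position."""
        self.rotation += degrees

    def scale_locally(self, factor):
        """Local scaling: lengthen or shorten one end of the bone."""
        self.length *= factor

    def scale_overall(self, factor):
        """Overall scaling: enlarge or reduce the bone as a whole."""
        self.scale *= factor
```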
  • the target virtual character is generated through a character construction operation, which includes: creating a basic skeleton of the target virtual character, the basic skeleton including more than one basic bone; adding at least one deformed bone on the basic skeleton; and skinning the basic skeleton with the deformed bones added to obtain the target virtual character.
  • the virtual character can move in the virtual scene through the basic skeleton.
  • the base skeleton includes at least one base bone. Skinning can be used to add skin to a skeleton.
  • the terminal device creates the basic skeleton of the target virtual character, and the basic skeleton includes more than one basic bone.
  • the terminal device adds at least one deformable bone on the basis of the base skeleton.
  • for example, the terminal device can add deformed bones to the limbs of the base skeleton.
  • the terminal device skins the basic skeleton with the deformed bones added to obtain the target virtual character.
  • the terminal device skins the basic skeleton to which the deformed bones are added, and then performs texture processing to obtain the target virtual character.
  • the terminal device can adjust the basic skeleton of the target virtual character according to actual needs, for example, adjust the length and size of each bone in the skeleton, or add or remove basic bones.
  • the basic skeleton of the target virtual character is created, at least one deformed bone is added to it, and the basic skeleton with the deformed bones added is skinned to obtain the target virtual character; the deformation of the target virtual character can be achieved through the erected deformed bones, which reduces the storage space occupied.
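The construction steps above (create base skeleton, add deformed bones on chosen base bones, skin the result) can be sketched as a single function. The function name, the dictionary representation, and the boolean "skinned" flag are all illustrative assumptions; real skinning binds mesh vertices to bones, which is only marked here.

```python
def build_target_character(base_bone_ids, deform_targets):
    """Sketch of the character construction operation described above.

    `base_bone_ids` names the base bones of the skeleton;
    `deform_targets` maps a base-bone id to the number of deformed
    bones to erect on it (connected in sequence).
    """
    skeleton = {"base": list(base_bone_ids), "deform": []}
    for target, count in deform_targets.items():
        skeleton["deform"] += [f"{target}_deform_{i}" for i in range(count)]
    skeleton["skinned"] = True   # stands in for the skinning step
    return skeleton
```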
  • FIG. 3 it is a schematic flowchart of generating a target virtual character in a virtual scene in one embodiment.
  • the virtual character control method further includes:
  • the skinned target virtual character is imported into a three-dimensional engine as a model.
  • the 3D engine can be used to develop stand-alone games on Windows, MacOS and Linux platforms, video games on console platforms such as PlayStation, XBox, Wii, 3DS and Nintendo Switch, or games on mobile devices such as iOS and Android.
  • the game platforms supported by Unity also extend to HTML5 web platforms based on WebGL technology, as well as new-generation multimedia platforms such as tvOS, Oculus Rift, and ARKit.
  • Unity is also a comprehensive creation tool that is widely used for interactive content such as architectural visualization and real-time 3D animation.
  • the terminal device imports the skinned target virtual character as a model into a three-dimensional engine.
  • the target avatar can then be displayed in the 3D engine.
  • the model is generated as a prefab by the three-dimensional engine.
  • a prefab can specifically refer to a prefabricated object, which can be regarded as a component template for batch application work.
  • models that need to be reused in virtual scenes, such as enemies, soldiers, weapons, bullets, or identical walls and bricks, can be generated from a prefab; each generated instance is like a clone whose position, angle, or some properties differ, similar to instances of a class in C++.
  • the terminal device generates the model as a prefab through a three-dimensional engine.
  • the animation configuration file of the prefab is imported into the 3D engine.
  • the animation configuration file may include animation parameters used to control the basic bones and deformed bones, or may include the target virtual character implementing the character action process.
  • the terminal device imports the animation configuration file of the prefab into the 3D engine.
  • the prefab is invoked to generate the target virtual character in the virtual scene through the three-dimensional engine, and the target virtual character is controlled to perform character actions through the action parameter value of the animation configuration file.
  • the action parameter value is used to represent at least one of the movement parameter value of the basic bone, the rotation parameter value of the basic bone, the local scaling parameter value of the deformed bone, and the overall scaling parameter value of the deformed bone in the target virtual character.
  • the action parameter value may be sent by the server to the terminal device, or may be stored by the terminal device.
  • the terminal device realizes the movement of the associated skeleton through the action parameter value, and controls the target virtual character to implement the character action.
  • the associated bone can be at least one of a base bone and a deformed bone.
  • the prefab is called to generate the target virtual character in the virtual scene, configuration is performed based on the parameter values of the imported animation configuration file, and the target virtual character is controlled to implement the character action through those parameter values.
  • for example, the terminal device generates an animation configuration file through the 3DS MAX software and imports the animation configuration file of the prefab into the 3D engine; the 3D engine can parse out the action parameter values of each bone in the animation configuration file and apply them to the basic bones and deformed bones, enabling the target virtual character to perform character actions.
  • the skinned target virtual character is imported into a 3D engine as a model, the model is generated into a prefab through the 3D engine, the animation configuration file is imported, the prefab is called to generate the target virtual character in the virtual scene, and the action parameter values of the animation configuration file control the target virtual character to perform character actions; deformation can thus be achieved through action parameter values without saving a large number of images, reducing the storage space occupied.
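To make the "parse the configuration file into per-bone action parameter values" step concrete, here is a sketch using a hypothetical JSON layout. The patent does not specify the file format; the field names (`bone`, `rotate`, `scale`) and the example action are invented for illustration.

```python
import json

# Hypothetical animation-configuration format: each character action
# maps to a list of per-bone action parameter values.
CONFIG = """
{
  "kick": [
    {"bone": "leg_r",          "rotate": 45.0},
    {"bone": "leg_r_deform_0", "scale": 1.5}
  ]
}
"""

def load_action_parameters(text, action):
    """Parse the configuration and return the per-bone parameter values
    for one character action (empty list if the action is unknown)."""
    return json.loads(text).get(action, [])
```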
  • FIG. 4 it is a schematic flowchart of a virtual character control method in another embodiment.
  • 3DS MAX, also known as 3D Studio Max and often referred to as 3d Max or 3ds MAX, is a three-dimensional animation rendering and production software based on the PC (Personal Computer) system.
  • the operation of erecting bones on the character includes: creating a basic skeleton of the target virtual character, where the basic skeleton includes more than one basic bone, and adding at least one deformation bone on the basic skeleton.
  • Skin binding includes: skin binding of the basic skeleton with deformed bones added to obtain the target virtual character.
  • Importing the model into unity includes: importing the skinned target virtual character as a model into unity.
  • unity is a three-dimensional engine.
  • Making a prefab includes: generating a model into a prefab through a 3D engine. That is, the prefab includes erected bones, skin bindings and models.
  • a prefab is a prefabricated object.
  • importing animation into Unity means importing the animation configuration file of the prefab into Unity. Parsing the animation configuration file yields the artistic effects and animation effects of the target virtual character. Configuration refers to configuring the prefab and the animation configuration file together in Unity.
  • the program call is to call the prefab to generate the target virtual character in the virtual scene, and control the target virtual character to implement the character action through the action parameter value of the animation configuration file.
  • adding at least one deformed bone on the base skeleton includes: determining a target base bone in the base skeleton; determining the number of deformed bones according to the length of the target base bone; and, at the position of the target base bone on the base skeleton, adding the determined number of deformed bones connected in sequence.
  • the position corresponding to the target base bone is the position where the deformed bones are to be added. There is no limit to the number of target base bones, and the target base bone is at least one preset base bone among the base bones.
  • the terminal device determines the target base skeleton in the base skeleton.
  • the target base bone can be a preset base bone, such as the base bone at the limbs.
  • the terminal device may determine the target base skeleton in the base skeleton according to the type of the virtual character. For example, if the virtual character is a virtual character, the target base bone can be an arm; if the virtual character is a virtual monster, then the target base bone can be a leg.
  • the terminal device determines the number of deformed bones according to the length of the target base bone.
  • for example, when the target virtual character is a virtual human character, the target base bones may be the four arm bones, and two deformed bones can be erected on each target base bone. Since the shape of the deformed bone is a cube, erecting only one deformed bone on a target base bone may make the character's arm very thick, so the number of deformed bones needs to be adjusted based on the designed arm size of the target virtual character.
  • the number of deformed bones is determined according to the length of the target base bone, and deformed bones of that number, connected in sequence, are added at the position of the target base bone on the base skeleton, so that the deformed bones better match the virtual character, improving the fidelity of the obtained target virtual character.
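One plausible rule for "determine the number of deformed bones according to the length of the target base bone" is sketched below. The patent only states that the count depends on the bone length and the cube-shaped deformed bone's size; the ceiling-division rule here is an illustrative assumption, not the patented formula.

```python
import math

def deformed_bone_count(target_bone_length, deform_bone_size):
    """Illustrative rule: since each deformed bone is a cube of a fixed
    size, erect enough of them, connected in sequence, to cover the
    target base bone's length (at least one)."""
    return max(1, math.ceil(target_bone_length / deform_bone_size))
```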
  • the virtual character control method further includes: displaying an abbreviated object of the target virtual character on the character action map; controlling the abbreviated object to move within the action range of the abbreviated object; and, when the movement of the abbreviated object satisfies the character interaction trigger condition, switching to the virtual scene for character interaction.
  • the character action map is used to display the range in which the target virtual character can move on the electronic map.
  • the virtual scene may specifically be an RPG (Role-Playing Game), an SRPG (Strategy Role-Playing Game), etc., but is not limited thereto.
  • the abbreviated object is the reduced image of the target avatar. Abbreviated objects can be used to perform character actions on the character action map.
  • the virtual scene for character interaction refers to that the virtual scene includes at least two virtual characters, one of which is a target virtual character, and the target virtual character can interact with another virtual character in the virtual scene.
  • the terminal device displays the abbreviated object of the target virtual character on the character action map.
  • the terminal device controls the abbreviated object to move within the behavior range of the abbreviated object.
  • the interaction trigger condition is satisfied, for example, when the target avatar meets another avatar in a different camp, when the target avatar falls within the attack range of another avatar in a different camp, when the target virtual character encounters a fighting NPC (Non-Player Character), or when a skill release operation or an ultimate-move play operation is triggered on the terminal device; the terminal device then switches to the virtual scene for character interaction.
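As a rough illustration of the trigger check just listed, the following sketch tests those conditions on a grid map. All names, the Manhattan-distance range test, and the data shapes are hypothetical simplifications, not the patent's implementation:

```python
def manhattan(a, b):
    """Grid distance between two (x, y) cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def interaction_triggered(thumb_pos, enemies, npc_positions, skill_requested):
    """Check the character-interaction trigger conditions described above:
    meeting an enemy avatar, entering an enemy's attack range, meeting a
    fighting NPC, or an explicit skill / ultimate-move release request.
    """
    for enemy in enemies:
        if thumb_pos == enemy["pos"]:
            return True  # met an avatar of another camp
        if manhattan(thumb_pos, enemy["pos"]) <= enemy["attack_range"]:
            return True  # fell within that avatar's attack range
    if thumb_pos in npc_positions:
        return True      # encountered a fighting NPC
    return skill_requested  # skill release / ultimate triggered on the device
```

Any one condition returning true would cause the terminal device to switch to the virtual scene for character interaction.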
  • the terminal device displays at least a part of the target virtual character in the virtual scene of character interaction, the target virtual character is bound with the basic bone and the deformed bone, and the character action of the target virtual character is triggered in the virtual scene.
  • when the character action includes character motion, the target virtual character is controlled to implement the character motion in the virtual scene of character interaction through the motion of the basic bones associated with the character motion;
  • when the character action includes character local deformation, the target virtual character is controlled to implement the character local deformation in this virtual scene through the deformation of the deformed bones associated with the character local deformation.
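The two control branches above can be sketched as a dispatch over a rig. The action schema and the bone fields below are assumptions made for illustration only; a real engine rig would carry full transforms:

```python
from dataclasses import dataclass

@dataclass
class Bone:
    name: str
    position: tuple = (0.0, 0.0, 0.0)  # basic bones move
    scale: float = 1.0                 # deformed bones deform via scaling

def apply_character_action(action, base_bones, deformed_bones):
    """Dispatch a character action (hypothetical schema) to the rig:
    'motion' entries translate the associated basic bones, while
    'local_deformation' entries scale the associated deformed bones.
    """
    for name, offset in action.get("motion", {}).items():
        bone = base_bones[name]
        bone.position = tuple(p + o for p, o in zip(bone.position, offset))
    for name, factor in action.get("local_deformation", {}).items():
        deformed_bones[name].scale *= factor
```

A single action may carry both kinds of entries, in which case character motion and character local deformation happen together, as described for the combined key-frame case later in the text.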
  • FIG. 5 is a schematic interface diagram of a virtual scene of character interaction in one embodiment.
  • FIG. 6 is a schematic interface diagram of a virtual scene of character interaction in another embodiment. A portion of Roy's legs, head, body, and hands is shown in the virtual scene of FIG. 5.
  • the character actions triggered in the war chess game include the hand becoming larger: through the enlargement of the bones associated with the enlarged hand, such as the bones in the palm, Roy is controlled to implement the hand enlargement during the interaction of the war chess game.
  • the virtual scene in FIG. 6 shows a part of Roy's right arm, a part of his left arm, his left hand, his upper body, and his left palm.
  • the character actions triggered in the war chess game include the arm becoming thicker: through the enlargement of the bones associated with the thickened arm, such as the bones on the arm, Roy is controlled to implement the arm thickening during the interaction of the war chess game.
  • the war chess game is a virtual scene
  • Roy is the target virtual character
  • Roy's abbreviated object moving to a specified trigger position serves as the trigger condition for character interaction
  • the range of displayed squares is the action range of the abbreviated object
  • the virtual scene is a war chess game.
  • the above virtual character control method controls the abbreviated object to move within the corresponding action range and, when the movement of the abbreviated object satisfies the character interaction trigger condition, switches to the character interaction scene, which can enhance the interactivity in the virtual scene.
  • FIG. 7 is a schematic diagram of the deformation of the virtual character's hand in one embodiment. It can be seen from the figure that the hand of the target avatar is enlarged, and the enlargement is an actual deformation rather than an effect of the viewing angle. The hand in FIG. 7 achieves the partial magnification effect of the virtual object by controlling the magnification of the deformed bones.
  • the virtual character control method further includes: in the virtual scene, loading a prefab through a three-dimensional engine, the prefab being obtained by erecting deformed bones on the basic skeleton of the target virtual character and then performing skinning processing; and creating an instance from the prefab to obtain the target virtual character.
  • the prefab is obtained by setting up deformed bones on the basic skeleton of the target virtual character and then performing skinning processing. Specifically, in the virtual scene, the terminal device loads the prefab through the 3D engine, creates an instance through the prefab, and obtains the target virtual character.
  • a 3D engine is used to load a prefab, and an instance is created through the prefab to obtain a target virtual character, which can improve the fidelity of the target virtual character and improve user experience.
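Loading a prefab and creating independent instances from it can be mimicked outside any real engine. The `PrefabRegistry` class below is a stand-in for illustration only; it is not a real 3D-engine API, and the path strings and data layout are invented:

```python
import copy

class PrefabRegistry:
    """Minimal stand-in for a 3D engine's prefab store."""

    def __init__(self):
        self._prefabs = {}

    def register(self, path, template):
        """Store a prefab template under a resource path."""
        self._prefabs[path] = template

    def load(self, path):
        """Load the stored prefab template (shared, not a copy)."""
        return self._prefabs[path]

    def instantiate(self, path):
        """Create an independent instance, like instantiating a prefab:
        mutating one instance must not affect the template or siblings."""
        return copy.deepcopy(self.load(path))
```

The deep copy captures the key property of prefab instantiation: each target virtual character generated from the prefab can be posed and deformed without altering the prefab itself.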
  • the virtual scene is an animation editing scene.
  • the virtual character control method further includes: in the virtual scene, when the target virtual character is performing the character action, recording key frames of the target virtual character in the process of implementing the character action; and generating video animation based on the key frames.
  • the animation editing scene may refer to a scene corresponding to an animation being produced.
  • the action of the target virtual character at the time point of the frame is recorded in the key frame.
  • for example, the hand swing and leg deformation of the target virtual character are recorded in the key frames, but the recorded content is not limited to this.
  • key frames can be distributed uniformly or non-uniformly. Uniform distribution: for example, the 1st frame, the 10th frame, the 20th frame, ..., and the 100th frame are key frames. Non-uniform distribution: for example, the 1st frame, the 15th frame, the 20th frame, ..., and the 100th frame are key frames.
  • the terminal device when the target avatar is performing character actions, the terminal device records the key frames of the target avatar during the performing process, and the key frames include the action posture of the target avatar.
  • the terminal device can automatically generate a video animation of consecutive frames based on the key frames. For example, if the target virtual character implements character motion, the key frames of the character motion are recorded; if the target virtual object implements character local deformation, the key frames of the character local deformation are recorded; and if the target virtual object implements both character motion and character local deformation, key frames containing both the character motion and the character local deformation are recorded.
  • by recording the key frames of the target virtual character while it implements the character action and generating the video animation based on those key frames, the video animation of the character action can be produced and easily modified, which improves the control efficiency of the virtual character.
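Recording key frames and generating the in-between frames can be sketched for a single scalar pose value. Uniform stride, linear interpolation, and a key frame at frame 0 are simplifying assumptions; a real pipeline interpolates full bone transforms:

```python
def record_keyframes(poses, stride):
    """Uniformly sample key frames from a per-frame pose sequence,
    always keeping the final frame as a key frame."""
    keys = {i: poses[i] for i in range(0, len(poses), stride)}
    keys[len(poses) - 1] = poses[-1]
    return keys

def generate_animation(keyframes, total_frames):
    """Fill in consecutive frames by linearly interpolating scalar
    pose values between the recorded key frames."""
    keys = sorted(keyframes)
    frames = []
    for f in range(total_frames):
        prev = max(k for k in keys if k <= f)
        nxt = min((k for k in keys if k >= f), default=prev)
        if nxt == prev:
            frames.append(keyframes[prev])
        else:
            t = (f - prev) / (nxt - prev)
            frames.append(keyframes[prev] + t * (keyframes[nxt] - keyframes[prev]))
    return frames
```

Non-uniform key frames work unchanged here, since interpolation only looks at the nearest recorded keys on either side of a frame.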
  • the virtual character control method further includes: displaying an abbreviated object of the target virtual character on the character action map; controlling the abbreviated object to move within the action range of the abbreviated object; when the movement of the abbreviated object satisfies the character interaction trigger condition, switching to the character interaction scene; and, when the character action of the target virtual character is triggered in the character interaction scene, playing the video animation.
  • the character interaction scene includes at least two virtual characters, one of which is a target virtual character, and the target virtual character can interact with another virtual character in the character interaction scene.
  • the terminal device displays the character action map, and displays the abbreviated object of the target virtual character on the character action map.
  • the terminal device controls the abbreviated object to move within the behavior range of the abbreviated object.
  • the interaction trigger condition is satisfied, for example, when the target avatar meets another avatar in a different camp, when the target avatar falls within the attack range of another avatar in a different camp, or when the target virtual character encounters a fighting NPC, but is not limited to this; the terminal device then switches to the character interaction scene.
  • the terminal device plays the video animation.
  • a thumbnail object of Roy is displayed on the map of the war chess game, and the abbreviated object is controlled to move within the action range of the displayed squares.
  • the character action of the target virtual character is triggered, and the video animation generated in the animation editing scene is played.
  • the war chess game is a virtual scene
  • the range of displayed squares is the action range of the abbreviated object
  • the skill release scene is a character interaction scene.
  • the above virtual character control method controls the abbreviated object to move within the corresponding action range, switches to the character interaction scene when the movement of the abbreviated object satisfies the character interaction trigger condition, and plays the video animation when the character action of the target virtual character is triggered in the character interaction scene, which enables the virtual character to realize the deformation effect through the deformed bones and improves the fidelity of the virtual character.
  • a virtual character control method includes the following operations:
  • Operation (a1) a base skeleton of the target virtual character is created, and the base skeleton includes more than one base bone.
  • Operation (a2) determines the target base bone in the base skeleton.
  • Operation (a3) determines the number of deformed bones according to the length of the target base bone.
  • Operation (a4) at the position of the target base bone on the base skeleton, add the deformed bones of the number of deformed bones connected in sequence.
  • the skinned target virtual character is imported into the 3D engine as a model.
  • the model is generated as a prefab by the 3D engine.
  • Operation (a8) import the animation profile of the prefab in the 3D engine.
  • the prefab is invoked to generate the target virtual character in the virtual scene, and the target virtual character is controlled to implement the character action through the action parameter value of the animation configuration file.
  • the abbreviated object of the target virtual character is displayed on the character action map.
  • the abbreviated object is controlled to move within the action range of the abbreviated object.
  • a 3D engine is used to load a prefab; the prefab is obtained by setting up deformed bones on the basic skeleton of the target virtual character and then performing skinning processing.
  • an instance is created through the prefab to obtain the target virtual character.
  • At least a part of the target avatar is displayed in the virtual scene.
  • the target avatar is rigged with base bones and deformed bones.
  • the target virtual character is controlled to implement character movement in the virtual scene through the movement of the basic skeleton associated with the character movement.
  • the target virtual character is controlled to implement the character local deformation in the virtual scene through the deformation of the deformed bones associated with the character local deformation.
  • the above virtual character control method creates the basic skeleton of the target virtual character and determines the number of deformed bones based on the target base bone, so that the number of deformed bones better matches the virtual character and the fidelity of the obtained target virtual character can be improved; invoking the prefab to generate the target virtual character and controlling the target virtual character to implement the character action through the action parameter values can improve the control efficiency of the virtual character compared with manually drawing the character animation in the traditional technology.
  • controlling the target virtual character through the skeleton also avoids saving a large number of images, which can reduce the storage space occupied.
  • a virtual character control method includes the following operations:
  • At least a part of the target virtual character is displayed in the animation editing scene, and the target virtual character is bound with a basic bone and a deformed bone.
  • the target virtual character is controlled to implement character motion in the animation editing scene through the motion of the basic skeleton associated with the character motion.
  • the target virtual character is controlled to implement character local deformation in the animation editing scene through the deformation of the deformed bones associated with the character local deformation.
  • a video animation is generated based on the key frames.
  • the thumbnail object is controlled to move within the action range of the thumbnail object.
  • by recording the key frames of the target virtual character while it implements the character action and generating the video animation based on those key frames, the video animation of the character action can be produced and easily modified, which improves the control efficiency of the virtual character.
  • the abbreviated object is controlled to move within the corresponding action range; when the movement of the abbreviated object satisfies the character interaction trigger condition, the terminal device switches to the character interaction scene; and, when the character action of the target virtual character is triggered in the character interaction scene, the video animation is played, so that the virtual character can realize the deformation effect through the deformed bones and the fidelity of the virtual character is improved.
  • 3DS MAX (3D Studio Max, often abbreviated as 3ds Max) is PC-based three-dimensional animation rendering and production software developed by Discreet (later acquired by Autodesk). Its predecessor was the 3D Studio series of software for the DOS operating system.
  • FIG. 8 is a schematic diagram of the basic skeleton corresponding to the target virtual character in one embodiment.
  • FIG. 9 is a schematic diagram of a basic skeleton with deformed bones added in one embodiment.
  • the block 902 in the figure is the deformed bone 902 .
  • the left arm of the target virtual character includes 4 deformation bones 902
  • the right arm also includes 4 deformation bones 902 .
  • the square 904 on the head of the target virtual character is the special effect hanging point 904, which is used to display a bubble box and the like at the corresponding position.
  • FIG. 9 also includes a weapon mounting point 906 , and a weapon can be mounted on the position corresponding to 906 .
  • FIG. 10 is a schematic diagram of a target virtual character after skinning processing in one embodiment.
  • the mesh in Figure 10 is the skin of the target avatar.
  • the target virtual object in Figure 10 holds a weapon in its right hand.
  • in the 3DS MAX software, skinning can be selected, and the dual-quaternion (DQ) skinning switch can be enabled under Dual Quaternion in the parameters.
  • bones can be added or removed in the software, and each bone has a corresponding bone ID. Envelope properties and the like can also be set in the software.
  • FIG. 11 is a schematic diagram of a position interface of deformed bones in one embodiment. In the max software, the deformed-bone establishment point is located, and then "Virtual Object" (English version: dummy) is selected to drag the deformed bone out; the size of the deformed bone is adjusted to match the virtual character.
  • the deformed bone can be configured at the position corresponding to the base bone and can be configured according to the muscle trend of the virtual character.
  • FIG. 12 is a schematic diagram of controlling the deformation of deformed bones in one embodiment. After the skin binding is completed, a deformed bone can be enlarged or reduced in the 3DS MAX software, which intuitively shows the left arm of the target virtual character becoming larger.
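The way an enlarged deformed bone visibly enlarges the skin-bound arm can be illustrated with a scale-only form of linear-blend skinning. The dictionary layout and the weight values below are hypothetical; real skinning blends full bone matrices, not just scale factors:

```python
def skin_vertex(vertex, bones, weights):
    """Scale-only linear-blend skinning: a skin vertex follows each
    influencing bone's scale in proportion to its skin weight."""
    blended_scale = sum(w * bones[name]["scale"] for name, w in weights.items())
    return tuple(c * blended_scale for c in vertex)
```

A vertex weighted entirely to an enlarged deformed bone scales by the full factor, while a vertex shared with an unchanged base bone scales only partially, which is what produces the smooth bulge seen at the arm.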
  • FIG. 13 is a schematic diagram of a target virtual character displayed in a three-dimensional engine in one embodiment.
  • FIG. 14 is a schematic diagram of enlarging the hand of the target virtual character in one embodiment. In FIG. 14, the palm part of the target virtual character is bound with a deformed bone, and by controlling the enlargement of the deformed bone, the hand of the target virtual character exhibits a magnification effect.
  • the present application also provides an application scenario applied to a war chess game, where the above-mentioned virtual character control method is applied to the application scenario.
  • the application of the virtual character control method in this application scenario is as follows: in the war chess game project, many characters fight, and during these battles it is sometimes necessary to play close-ups of ultimate moves, skill releases, big moves, and the like. In some special battles, the artistic animation needs a more exaggerated and powerful form of expression, which requires adding local magnification or deformation functions for arms, fingers, torso, feet, and other parts on the basis of traditional animation performance. A CS skeleton of the target virtual character is created, and the CS skeleton includes more than one CS bone.
  • a thumbnail avatar of the target virtual character is displayed on the character action map of the war chess game; the thumbnail avatar is the abbreviated object; the thumbnail avatar is controlled to move within the corresponding activity range; and when the movement of the abbreviated object satisfies the character interaction trigger condition, that is, when skills need to be released or ultimate moves need to be played, the scene switches to the battle scene.
  • At least a part of the target virtual character is displayed in the war chess game; the target virtual character is bound with a CS bone and a dummy bone.
  • taking the character action as releasing a skill and the virtual scene of character interaction as the confrontation scene as an example, the skill of the target virtual character is triggered in the confrontation scene; when the skill includes character motion, such as walking or jumping, the movement of the CS bones associated with the motion controls the target virtual character to implement the character motion in the war chess game scene.
  • when the skill includes character local deformation, the deformation of the dummy bones associated with the local deformation controls the target virtual character to implement the character local deformation in the war chess game scene.
  • the virtual scene is a war chess game scene
  • the CS skeleton is the basic skeleton
  • the target CS skeleton is the target basic skeleton
  • the dummy skeleton is the deformation skeleton
  • the unity is the 3D engine
  • the prefab in unity is the aforementioned prefabricated part
  • the abbreviated avatar is the abbreviated object
  • the character interaction scene is the confrontation scene.
  • the present application also provides an application scenario of animation editing, where the above-mentioned virtual character control method is applied to the application scenario.
  • when producing character animation, some shots in the game need to highlight special effects of the character, such as a hand stretching and becoming larger or the torso becoming larger; when a part of the character needs special deformation, the target virtual character is displayed in the animation editing scene.
  • the target virtual character is bound with CS bones and dummy bones; the character action of the target virtual character is triggered in the virtual scene; when the character action includes character motion, the target virtual character is controlled through the movement of the CS bone associated with the character motion.
  • the character movement is implemented in the animation editing scene; when the character action includes the local deformation of the character, the target virtual character is controlled to implement the local deformation of the character in the animation editing scene through the deformation of the dummy bone associated with the local deformation of the character.
  • when the target virtual character is in the process of implementing the character action, the key frames of the target virtual character in the process of implementing the character action are recorded; and a video animation is generated based on the key frames.
  • the virtual scene is the animation editing scene
  • the CS skeleton is the basic skeleton
  • the target CS skeleton is the target basic skeleton
  • the dummy skeleton is the deformation skeleton
  • the confrontation scene is the character interaction scene.
  • a virtual character control apparatus may adopt a software module or a hardware module, or a combination of the two, to become a part of a computer device, and the apparatus specifically includes: a display module 1502, an action trigger module 1504, and a control module 1506, wherein:
  • the display module 1502 is configured to display at least a part of the target virtual character in the virtual scene; the target virtual character is bound with a basic bone and a deformed bone.
  • the action triggering module 1504 is used for triggering the character action of the target virtual character in the virtual scene.
  • the control module 1506 is configured to, when the character action includes character motion, control the target virtual character to implement the character motion in the virtual scene through the motion of the basic bones associated with the character motion.
  • the control module 1506 is further configured to, when the character action includes character local deformation, control the target virtual character to implement the character local deformation in the virtual scene through the deformation of the deformed bones associated with the character local deformation.
  • the target virtual character is bound with a basic bone and a deformed bone, and when the character action triggered in the virtual scene includes character motion, the target virtual character is controlled through the motion of the basic bone to implement the character motion.
  • the triggered character actions include local deformation of the character, and the local deformation of the character is implemented through the deformation of the deformed bones.
  • the motion of the base bone includes at least one of bone movement and bone rotation; the deformation of the deformed bone includes at least one of local scaling of the bone and overall scaling of the bone.
  • the movement of the basic bone includes at least one of bone movement and bone rotation
  • the deformation of the deformed bone includes at least one of local scaling of the bone and overall scaling of the bone, that is, the basic bone and the deformed bone can respectively implement different functions.
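The distinct operations named above, bone movement and bone rotation for basic bones versus local or overall scaling for deformed bones, can be sketched as follows. The 2D rotation and axis-aligned scaling are simplifications chosen for illustration:

```python
import math

def move_bone(position, offset):
    """Bone movement: translate a bone's position by an offset."""
    return tuple(p + o for p, o in zip(position, offset))

def rotate_bone_z(point, degrees):
    """Bone rotation: rotate a 2D point about the origin."""
    r = math.radians(degrees)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

def scale_bone(vertices, factor, axis=None):
    """Deformed-bone deformation: overall scaling when axis is None,
    local scaling along one axis index otherwise."""
    if axis is None:
        return [tuple(c * factor for c in v) for v in vertices]
    return [tuple(c * factor if i == axis else c
                  for i, c in enumerate(v)) for v in vertices]
```

Keeping movement/rotation on basic bones and scaling on deformed bones matches the division of labor the text describes: the two kinds of bones implement different functions on the same rig.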
  • the virtual character control device further includes a character building module.
  • the character building module is used to create the basic skeleton of the target virtual character, the basic skeleton including more than one basic bone; add at least one deformed bone to the basic skeleton; and skin the basic skeleton with the deformed bone added to obtain the target virtual character.
  • the above virtual character control device creates the basic skeleton of the target virtual character, adds at least one deformed bone on the basic skeleton, and performs skinning processing on the basic skeleton with the deformed bone added to obtain the target virtual character; by erecting the deformed bones, the deformation of the target virtual character can be achieved, which reduces the storage space occupied.
  • the character building module is further configured to import the skinned target virtual character as a model into a 3D engine; generate the model as a prefab through the 3D engine; import the animation configuration file of the prefab in the 3D engine; and invoke the prefab through the three-dimensional engine to generate the target virtual character in the virtual scene.
  • the control module 1506 is used to control the target virtual character to implement character actions through the action parameter values of the animation configuration file.
  • the above virtual character control device imports the skinned target virtual character as a model into a three-dimensional engine, generates a prefab from the model through the three-dimensional engine, imports an animation configuration file, invokes the prefab to generate the target virtual character in the virtual scene, and controls the target virtual character to perform character actions through the action parameter values of the animation configuration file; deformation of the target virtual character can thus be achieved through the action parameter values without saving a large number of images, which reduces the storage space occupied.
  • the character building module is further used to determine the target base bone in the base skeleton; determine the number of deformed bones according to the length of the target base bone; and, at the position of the target base bone on the base skeleton, add that number of sequentially connected deformed bones.
  • the above virtual character control device determines the number of deformed bones according to the length of the target base bone and, at the position of the target base bone on the basic skeleton, adds that number of sequentially connected deformed bones, so that the deformed bones better match the virtual character, which can improve the fidelity of the obtained target virtual character.
  • the character building module is also used to load a prefab in the virtual scene through a 3D engine, the prefab being obtained by erecting deformed bones on the basic skeleton of the target virtual character and then performing skinning processing; and to create an instance from the prefab to obtain the target virtual character.
  • the above virtual character control device controls the movement of the abbreviated object within the corresponding action range, and when the movement of the abbreviated object satisfies the character interaction trigger condition, it switches to the character interaction scene, which can enhance the interactivity in the virtual scene.
  • the virtual object control device further includes an animation generation module, and the animation generation module is configured to, in the virtual scene, when the target avatar is performing the character action, record the key frames of the target avatar in the process of performing the character action, and generate the video animation based on the key frames.
  • the above-mentioned virtual character control device, in the virtual scene, records the key frames of the target virtual character while it implements the character action and generates the video animation based on the key frames, so that the video animation of the character action can be produced and easily modified, which improves the control efficiency of the virtual character.
  • the control module 1506 is further configured to display the abbreviated object of the target virtual character on the character action map; control the abbreviated object to move within the action range of the abbreviated object; when the movement of the abbreviated object satisfies the character interaction trigger condition, switch to the character interaction scene; and, when the character action of the target virtual character is triggered in the character interaction scene, play the video animation.
  • the above virtual character control device controls the abbreviated object to move within the corresponding action range, switches to the character interaction scene when the movement of the abbreviated object satisfies the character interaction trigger condition, and plays the video animation when the character action of the target virtual character is triggered in the character interaction scene, which enables the virtual character to realize the deformation effect through the deformed bones and improves the fidelity of the virtual character.
  • Each module in the above-mentioned virtual character control device can be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • in one embodiment, a computer device is provided; the computer device may be a terminal device, and its internal structure diagram may be as shown in FIG. 16.
  • the computer equipment includes a processor, memory, a communication interface, a display screen, and an input device connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the communication interface of the computer equipment is used for wired or wireless communication with external terminal equipment, and the wireless communication can be realized by WIFI, operator network, NFC (Near Field Communication) or other technologies.
  • the computer program implements a virtual character control method when executed by the processor.
  • the display screen of the computer equipment may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment may be a touch layer covering the display screen, or a button, trackball, or touchpad set on the shell of the computer equipment, or an external keyboard, trackpad, or mouse.
  • FIG. 16 is only a block diagram of a part of the structure related to the solution of the present application and does not constitute a limitation on the computer equipment to which the solution of the present application is applied. A specific computer device may include more or fewer components than shown in the figures, combine certain components, or have a different arrangement of components.
  • a computer device is provided, comprising a memory and one or more processors, the memory having computer-readable instructions stored therein which, when executed by the processor, cause the one or more processors to perform the operations in the above method embodiments.
  • one or more non-volatile readable storage media are provided that store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the operations in the above method embodiments.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the operations in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

一种虚拟角色控制方法,包括:在虚拟场景中显示目标虚拟角色的至少一部分;目标虚拟角色绑定有基础骨骼和变形骨骼;在虚拟场景中触发目标虚拟角色的角色动作;当角色动作包括角色运动时,通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在虚拟场景中实施角色运动;当角色动作包括角色局部变形时,通过角色局部变形所关联变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色局部变形。

Description

虚拟角色控制方法、装置、计算机设备和存储介质
本申请要求于2020年07月02日提交中国专利局,申请号为2020106246994、发明名称为“虚拟角色控制方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种虚拟角色控制方法、装置、计算机设备和计算机存储介质。
背景技术
在制作角色动画的时候,动画中的某些镜头需要突出此角色特殊效果。比如某角色的手的伸缩变大、躯干变大等需要特殊变形的时候,常规做法是通过手动绘制这一部分模型动画使得角色的相应部位发生变形。然而传统的虚拟角色控制方式,存在虚拟角色控制效率低下的问题。
发明内容
本申请各实施例提供了一种虚拟角色控制方法、装置、计算机设备和存储介质。
一种虚拟角色控制方法,由计算机设备执行,所述方法包括:
在虚拟场景中显示目标虚拟角色的至少一部分;所述目标虚拟角色绑定有基础骨骼和变形骨骼;
在所述虚拟场景中触发所述目标虚拟角色的角色动作;
当所述角色动作包括角色运动时,通过所述角色运动所关联基础骨骼的运动,控制所述目标虚拟角色在所述虚拟场景中实施所述角色运动;及
当所述角色动作包括角色局部变形时,通过所述角色局部变形所关联变形骨骼的变形,控制所述目标虚拟角色在所述虚拟场景中实施所述角色局部变形。
一种虚拟角色控制装置,所述装置包括:
显示模块,用于在虚拟场景中显示目标虚拟角色的至少一部分;所述目标虚拟角色绑定有基础骨骼和变形骨骼;
动作触发模块,用于在所述虚拟场景中触发所述目标虚拟角色的角色动作;
控制模块,用于当所述角色动作包括角色运动时,通过所述角色运动所关联基础骨骼的运动,控制所述目标虚拟角色在所述虚拟场景中实施所述角色运动;及
所述控制模块,用于当所述角色动作包括角色局部变形时,通过所述角色局部变形所关联变形骨骼的变形,控制所述目标虚拟角色在所述虚拟场景中实施所述角色局部变形。
一种计算机设备,包括存储器和一个或多个处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行上述虚拟角色控制方法的步骤。
一个或多个存储有计算机可读指令的非易失性可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行上述虚拟角色控制方法的步骤。
附图说明
图1为一个实施例中虚拟角色控制方法的应用环境图。
图2为一个实施例中虚拟角色控制方法的流程示意图。
图3为一个实施例中生成虚拟场景下的目标虚拟角色的流程示意图。
图4为另一个实施例中虚拟角色控制方法的流程示意图。
图5为一个实施例中角色交互的虚拟场景的界面示意图。
图6为另一个实施例中角色交互的虚拟场景的界面示意图。
图7为一个实施例中虚拟角色手部变形的示意图。
图8为一个实施例中目标虚拟角色对应的基础骨架的示意图。
图9为一个实施例中添加有变形骨骼的基础骨架的示意图。
图10为一个实施例中蒙皮处理后的目标虚拟角色的示意图。
图11为一个实施例中变形骨骼的位置界面示意图。
图12为一个实施例中控制变形骨骼变形的示意图。
图13为一个实施例中在三维引擎中显示的目标虚拟角色的示意图。
图14为一个实施例中放大目标虚拟角色的手的示意图。
图15为一个实施例中虚拟角色控制装置的结构框图。
图16为一个实施例中计算机设备的内部结构图。
具体实施方式
本申请提供的虚拟角色控制方法,可以应用于如图1所示的应用环境中。其中,终端设备110通过网络与服务器120进行通信。其中,终端设备110可以但不限于是:智能手机、平板电脑、笔记本电脑、桌上型电脑、智能电视等智能终端。终端设备110上设有客户端,该客户端可以用于播放视频等,如通过视频客户端、即时通信客户端、浏览器客户端、教育客户端等客户端播放视频。服务器120可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。本申请对终端设备和服务器的数量不做限制。其中,服务器120可用于向终端设备发送动作参数值,该动作参数值可用于使得终端设备110能够控制目标虚拟角色完成角色动作。终端设备110用于在虚拟场景中显示目标虚拟角色的至少一部分;目标虚拟角色绑定有基础骨骼和变形骨骼;在虚拟场景中触发目标虚拟角色的角色动作;当角色动作包括角色运动时,通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在虚拟场景中实施角色运动;当角色动作包括角色局部变形时,通过角色局部变形所关联变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色局部变形。
在一个实施例中,如图2所示,提供了一种虚拟角色控制方法,该方法可以由终端设备或服务器执行,也可以由终端设备和服务器共同执行,本方法实施例以该方法由终端设备110执行为例进行说明,包括以下操作:
操作202，在虚拟场景中显示目标虚拟角色的至少一部分；目标虚拟角色绑定有基础骨骼和变形骨骼。
其中,虚拟场景可用于模拟三维虚拟空间,也可以用于模拟二维虚拟空间。该虚拟场景中可以包括天空、陆地、海洋、建筑、山林、目标虚拟角色等不限于此。虚拟场景可在打开应用程序后展示,例如打开游戏后展示。目标虚拟角色是在虚拟场景中的可活动角色。目标虚拟角色可用于在虚拟场景中代表用户的虚拟形象。目标虚拟角色具体可以是目标虚拟人物、目标虚拟动物等不限于此。
基础骨骼用于使目标虚拟角色实现角色运动。角色运动包括前进、后退、转弯、跑步等动作。基础骨骼具体可以是CS(Character Studio,角色工作室)骨骼。变形骨骼用于使目标虚拟角色实现角色变形。角色变形可包括放大、缩小、伸缩等不限于此。变形骨骼具体可以是dummy(虚拟体)骨骼。在本申请中,基础骨骼和变形骨骼可统称为骨骼。每块骨骼均有唯一对应的骨骼标识,用以区分不同的骨骼。
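The rig described above — base bones for motion, deform bones for deformation, each with a unique bone ID — can be sketched as a data model. This is a minimal Python sketch under stated assumptions; the class names, fields, and bone IDs are illustrative and not part of the patent text:

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    bone_id: str          # unique identifier used to distinguish bones
    kind: str             # "base" drives character motion, "deform" drives local deformation
    scale: float = 1.0    # current uniform scale of the bone

@dataclass
class Character:
    bones: dict = field(default_factory=dict)

    def add_bone(self, bone):
        # every bone must carry a unique ID so it can be looked up later
        assert bone.bone_id not in self.bones, "bone IDs must be unique"
        self.bones[bone.bone_id] = bone

# a character bound with one base bone and one deform bone
roy = Character()
roy.add_bone(Bone("arm_upper_L", "base"))
roy.add_bone(Bone("arm_deform_L1", "deform"))
```

The single `bones` dictionary keyed by ID mirrors the text's point that base bones and deform bones can be referred to collectively as bones while remaining individually addressable.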
具体地,终端设备在虚拟场景中显示目标虚拟角色的至少一部分。目标虚拟角色的至少一部分可以是目标虚拟角色的头部、目标虚拟角色的头像或目标虚拟角色的侧视图等不限于此。
操作204,在虚拟场景中触发目标虚拟角色的角色动作。
其中,角色动作用于使目标虚拟角色在虚拟场景中活动。角色动作例如可以是前进、后退、转弯、跑步、放大、缩小、伸缩、释放技能等不限于此。
具体地，当检测到对该目标虚拟角色的角色控制操作时，在虚拟场景中触发与该角色控制操作相应的目标虚拟角色的角色动作。例如，通过终端设备的键盘输入的w对应前进，s对应后退，a对应向左方移动，d对应向右方移动，那么当检测到对目标虚拟角色的w键的控制操作时，在虚拟场景中触发与w相应的目标虚拟角色的前进动作。
操作206,当角色动作包括角色运动时,通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在虚拟场景中实施角色运动。
其中，每块基础骨骼均有唯一对应的基础骨骼标识。角色运动所关联基础骨骼例如可以是角色四肢的基础骨骼、或者仅包括腿部的基础骨骼等不限于此。
具体地,当在虚拟场景触发的角色动作包括角色运动时,终端设备可通过角色运动所关联基础骨骼标识,确定角色运动所关联基础骨骼。终端设备通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在虚拟场景中实施角色运动。以角色运动为行走、行走所关联的基础骨骼是四肢的基础骨骼为例进行说明,那么终端设备通过角色运动所关联的四肢的基础骨骼的摆动,控制目标虚拟角色在虚拟场景中实施角色行走。
操作208,当角色动作包括角色局部变形时,通过角色局部变形所关联变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色局部变形。
其中，每块变形骨骼均有唯一对应的变形骨骼标识。角色局部变形可以是角色的身体中某一个部分进行变形、也可以是角色的武器进行变形等不限于此。角色局部变形所关联变形骨骼可根据需要设定，具体可根据虚拟角色的类型设定等不限于此。例如，虚拟角色的类型A所关联的变形骨骼为脖子处的变形骨骼，虚拟角色的类型B所关联的变形骨骼为四肢处的变形骨骼等不限于此。
具体地，当在虚拟场景中触发的角色动作包括角色局部变形时，终端设备可通过角色局部变形所关联变形骨骼标识，确定角色局部变形所关联变形骨骼。终端设备通过角色局部变形所关联变形骨骼的变形，控制目标虚拟角色在虚拟场景中实施角色局部变形。例如，以角色局部变形为右手臂放大、右手臂放大所关联变形骨骼为右手臂变形骨骼为例，终端设备通过右手臂放大所关联的右手臂变形骨骼的放大，控制目标虚拟角色在虚拟场景中实施右手臂放大。
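The routing described in the two paragraphs above — motion components driving their associated base bones, deformation components driving their associated deform bones — can be sketched as a lookup-and-dispatch step. This is a hypothetical Python sketch; the association table, action names, and bone IDs are illustrative assumptions:

```python
# Hypothetical association table: which bone IDs each action component drives.
ACTION_BONES = {
    "walk":        {"type": "motion",      "bones": ["leg_L", "leg_R"]},
    "enlarge_arm": {"type": "deformation", "bones": ["arm_deform_R"]},
}

def trigger_action(action_components, log):
    """Route each component of a triggered character action to its bones."""
    for name in action_components:
        entry = ACTION_BONES[name]
        for bone_id in entry["bones"]:
            if entry["type"] == "motion":
                log.append((bone_id, "move"))      # base bone: move/rotate
            else:
                log.append((bone_id, "scale_up"))  # deform bone: deform

# an action that combines motion and a local deformation, as in the text
log = []
trigger_action(["walk", "enlarge_arm"], log)
```

The table is keyed by action component rather than by bone, matching the text's point that the associated bones can be configured per action or per character type.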
本实施例中,当角色动作包括角色运动和角色局部变形时,终端设备通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在虚拟场景中实施角色运动,终端设备通过角色局部变形所关联变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色局部变形。
本实施例中,角色局部变形的种类可为至少一种。例如角色局部变形包括X局部变形和Y局部变形,X局部变形可通过键盘的X键触发,X局部变形所关联变形骨骼为四肢变形骨骼;Y局部变形可通过键盘的Y键触发,Y局部变形所关联变形骨骼为头部变形骨骼。那么当角色动作包括X局部变形时,通过X局部变形所关联的四肢变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色四肢变形。当角色动作包括Y局部变形时,通过Y局部变形所关联的头部变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色头部变形。
本实施例中,例如,当在游戏中触发目标虚拟角色的角色动作是释放技能时,当技能中包括角色运动时,例如角色运动为踢腿,通过踢腿所关联的腿部基础骨骼的运动,控制目标虚拟角色在游戏中实施踢腿动作;当技能还包括腿部变粗时,即技能为一边踢腿一边腿部变粗时,通过腿部变粗所关联的腿部变形骨骼的放大,控制目标虚拟角色在游戏中实施腿部变形。在上述例子中,游戏为虚拟场景,释放技能为角色动作,踢腿为角色运动,腿部变粗为角色局部变形。
上述虚拟角色控制方法,目标虚拟角色绑定有基础骨骼和变形骨骼,并且在虚拟场景中触发的角色动作包括角色运动时,通过基础骨骼的运动控制目标虚拟角色实施角色运动,当在虚拟场景中触发的角色动作包括角色局部变形时,通过变形骨骼的变形实施角色局部变形,则相对于传统技术中手动绘制虚拟角色的动画的方式,能够提高虚拟角色控制效率,同时由于通过基础骨骼和变形骨骼控制目标虚拟角色,不需要保存太多图像,能够减少占用的存储空间。
在一个实施例中,基础骨骼的运动包括骨骼移动和骨骼旋转中至少一种;变形骨骼的变形包括骨骼局部伸缩和骨骼整体缩放中至少一种。
具体地,骨骼移动是指骨骼从虚拟场景中的一个位置移动到另一个位置。骨骼旋转是指该骨骼绕着轴运动但是位置不发生改变。骨骼局部伸缩是指将骨骼的一端伸长或者缩短。骨骼整体缩放是指该骨骼整体放大或者整体缩小。基础骨骼的运动可仅包括骨骼移动,或者仅包括骨骼旋转,或者包括骨骼移动和骨骼旋转。变形骨骼的变形可仅包括骨骼局部伸缩,或者仅包括骨骼整体缩放,或者包括骨骼局部伸缩和骨骼整体缩放。
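The four operations just defined can be distinguished in a minimal one-dimensional sketch (real engines use 4x4 transform matrices; the function names and the 1-D simplification are assumptions for illustration only):

```python
def move(pos, delta):
    """Bone movement: the bone's position changes."""
    return pos + delta

def rotate(angle, delta):
    """Bone rotation: orientation changes, position does not."""
    return (angle + delta) % 360

def stretch_end(length, factor):
    """Local stretch: one end of the bone lengthens or shortens."""
    return length * factor

def scale_whole(size, factor):
    """Overall scaling: the whole bone grows or shrinks."""
    return size * factor
```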
上述虚拟角色控制方法,基础骨骼的运动包括骨骼移动、骨骼旋转中至少一种,变形骨骼的变形包括骨骼局部伸缩和骨骼整体缩放中至少一种,即基础骨骼和变形骨骼能够分别实现不同的功能,使得目标虚拟角色实现不同的角色动作,提高虚拟角色的交互性。
在一个实施例中,目标虚拟角色通过角色构建操作生成,角色构建操作包括:创建目标虚拟角色的基础骨架,基础骨架包括多于一个的基础骨骼;在基础骨架上添加至少一个变形骨骼;对添加有变形骨骼的基础骨架进行蒙皮处理,得到目标虚拟角色。
其中，通过基础骨架使得虚拟角色能够在虚拟场景中活动。基础骨架包括多于一个的基础骨骼。通过蒙皮处理可为骨架添加皮肤。
具体地，终端设备创建目标虚拟角色的基础骨架，基础骨架包括多于一个的基础骨骼。终端设备在基础骨架的基础上添加至少一个变形骨骼。例如，终端设备可在基础骨架的四肢部位添加变形骨骼。终端设备对添加有变形骨骼的基础骨架进行蒙皮处理，得到目标虚拟角色。
本实施例中,终端设备对添加有变形骨骼的基础骨架进行蒙皮处理后,进行贴图处理,得到目标虚拟角色。
本实施例中，终端设备可根据实际需求调节目标虚拟角色的基础骨架。例如调节基础骨架中各骨骼的长度、大小等，或者增加或减少基础骨骼等。
上述虚拟角色控制方法,创建目标虚拟角色的基础骨架,在基础骨架上添加至少一个变形骨骼,对添加有变形骨骼的基础骨架进行蒙皮处理,得到目标虚拟角色,则能够通过架设变形骨骼实现目标虚拟角色的变形,减少占用的存储空间。
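The skinning step mentioned above binds mesh vertices to nearby bones with weights so the skin follows the skeleton. A one-dimensional linear-blend sketch of that binding follows; the patent only names skinning (蒙皮处理), so the linear-blend formulation, names, and numbers are assumptions:

```python
def skin_vertex(bone_positions, weights):
    """Linear-blend skinning in 1-D: a vertex follows the weighted
    average of the bones it is bound to. Weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(bone_positions[b] * w for b, w in weights.items())

# a vertex bound 25% to the arm bone and 75% to the hand bone
bones = {"arm": 2.0, "hand": 4.0}
v = skin_vertex(bones, {"arm": 0.25, "hand": 0.75})  # → 3.5
```

Because the deform bones are added before skinning, the same weighted binding lets a deform bone's scaling pull the skin with it.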
在一个实施例中,如图3所示,为一个实施例中生成虚拟场景下的目标虚拟角色的流程示意图。该虚拟角色控制方法还包括:
操作302,将蒙皮处理后的目标虚拟角色作为模型导入三维引擎。
其中，三维引擎（如Unity）可用于开发Windows、MacOS及Linux平台的单机游戏，PlayStation、XBox、Wii、3DS和任天堂Switch等游戏主机平台的视频游戏，或是iOS、Android等移动设备的游戏。Unity所支持的游戏平台还延伸到了基于WebGL技术的HTML5网页平台，以及tvOS、Oculus Rift、ARKit等新一代多媒体平台。除可以用于研发电子游戏之外，Unity还是被广泛用于建筑可视化、实时三维动画等类型互动内容的综合型创作工具。
具体地,终端设备将蒙皮处理后的目标虚拟角色作为模型导入三维引擎中。那么在三维引擎中可显示该目标虚拟角色。
操作304,通过三维引擎将模型生成为预制件。
其中,预制件具体可以是指prefab,可视为一个组件模板,用于批量的套用工作。例如在虚拟场景中需要重复使用的模型,如敌人、士兵、武器、子弹或者任意一个和砖块完全相同的墙体等。prefab像是克隆体,但生成的位置、角度或者一些属性不同,就好像c++里边的类一样。
具体地,终端设备通过三维引擎将模型生成为预制件。
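The prefab-as-template idea described above — one shared definition, many clones differing only in position, angle, or other per-instance properties — can be sketched in a few lines. This is an illustrative Python analogy, not engine code; the prefab fields are assumptions:

```python
import copy

# A prefab acts as a template; instances are deep clones with overrides.
soldier_prefab = {"mesh": "soldier", "hp": 100, "position": (0, 0)}

def instantiate(prefab, **overrides):
    """Clone the prefab, then apply per-instance properties."""
    inst = copy.deepcopy(prefab)
    inst.update(overrides)
    return inst

a = instantiate(soldier_prefab, position=(1, 0))
b = instantiate(soldier_prefab, position=(5, 2))
```

Deep-copying keeps each clone independent of the template, which is the property that makes prefabs safe for batch reuse.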
操作306,在三维引擎中导入预制件的动画配置文件。
其中，动画配置文件可以包括用于控制基础骨骼和变形骨骼的动画参数，也可以包括目标虚拟角色实施角色动作的过程。
具体地,终端设备将预制件的动画配置文件导入三维引擎中。
操作308,通过三维引擎,调用预制件以生成虚拟场景下的目标虚拟角色,并通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作。
其中，动作参数值用于表示目标虚拟角色中基础骨骼的移动参数值、基础骨骼的旋转参数值、变形骨骼的伸缩参数值、变形骨骼的缩放参数值中至少一种。动作参数值可以是服务器向终端设备发送的，也可以是终端设备存储的。终端设备通过动作参数值实现所关联骨骼的运动，控制目标虚拟角色实施角色动作。所关联骨骼可为基础骨骼、变形骨骼中至少一种。
具体地,通过该三维引擎,调用预制件如prefab以生成虚拟场景下的目标虚拟角色,并基于导入的动画配置文件的参数值进行配置,通过动画配置文件的参数值控制目标虚拟角色实施角色动作。例如,终端设备通过3DS max软件生成动画配置文件,在三维引擎中导入预制件的动画配置文件,通过三维引擎可解析出动画配置文件中各骨骼的动作参数值,那么基于动作参数值可控制基础骨骼和变形骨骼,使得目标虚拟角色实施角色动作。
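Parsing an animation configuration file into per-bone action parameter values and driving the skeleton with them, as described above, can be sketched as follows. The JSON layout, field names, and bone IDs are hypothetical — the actual file format produced by 3ds Max and consumed by the engine is not specified in the text:

```python
import json

# Hypothetical animation-config format: per frame, per bone ID, a parameter value.
config_text = json.dumps({
    "frames": [
        {"time": 0.0, "params": {"arm_deform_R": 1.0}},
        {"time": 0.5, "params": {"arm_deform_R": 2.0}},
    ]
})

def apply_frame(skeleton, frame):
    """Drive each associated bone by its action parameter value."""
    for bone_id, value in frame["params"].items():
        skeleton[bone_id] = value

skeleton = {"arm_deform_R": 1.0}
frames = json.loads(config_text)["frames"]
apply_frame(skeleton, frames[-1])  # after the last frame the deform bone is scaled up
```

Driving the character from parameter values rather than stored frames is what lets the method avoid saving many images, as the summary paragraph below notes.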
上述虚拟角色控制方法,将蒙皮处理后的目标虚拟角色作为模型导入三维引擎,通过三维引擎将模型生成预制件,并导入动画配置文件,调用预制件以生成虚拟场景下的目标虚拟角色,通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作,能够通过动作参数值控制目标虚拟角色实现变形,不需要保存很多图像,减少占用的存储空间。
在一个实施例中,如图4所示,为另一个实施例中虚拟角色控制方法的流程示意图。开始后,进入3DS MAX软件中。其中,3DS MAX又称3D Studio Max,常简称为3d Max或3ds MAX,是基于PC(Personal Computer,个人计算机)系统的三维动画渲染和制作软件。对角色架设骨骼的操作包括:创建目标虚拟角色的基础骨架,基础骨架包括多于一个的基础骨骼,在基础骨架上添加至少一个变形骨骼。蒙皮绑定包括:对添加有变形骨骼的基础骨架进行蒙皮绑定,得到目标虚拟角色。模型导入unity包括:将蒙皮处理后的目标虚拟角色作为模型导入unity。其中,unity是一种三维引擎。制作成prefab包括:通过三维引擎将模型生成为prefab。即prefab中包括架设的骨骼、蒙皮绑定和模型。其中,prefab是一种预制件。动画导入unity即在unity中导入预制件的动画配置文件。解析动画配置文件可显示出目标虚拟角色的美术效果和动画效果。配置是指在unity中将prefab和动画配置文件配置在一起。程序调用即调用预制件以生成虚拟场景下的目标虚拟角色,并通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作。
在一个实施例中,在基础骨架上添加至少一个变形骨骼,包括:确定基础骨架中的目标基础骨骼;按照目标基础骨骼的长度确定变形骨骼数量;在基础骨架上的目标基础骨骼的位置处,添加顺序连接的变形骨骼数量的变形骨骼。
其中,目标基础骨骼所对应的位置即为待添加的变形骨骼所对应的位置。目标基础骨骼的数量不限。并且目标基础骨骼为基础骨骼中预设的至少一个基础骨骼。
具体地,终端设备确定基础骨架中的目标基础骨骼。目标基础骨骼可以是预设的基础骨骼,例如四肢处的基础骨骼。或者终端设备可按照虚拟角色的类型确定基础骨架中的目标基础骨骼。例如虚拟角色为虚拟人物,那么目标基础骨骼可以是手臂;虚拟角色为虚拟怪物,那么目标基础骨骼可以是腿部。
终端设备按照目标基础骨骼的长度确定变形骨骼数量。在基础骨架上的目标基础骨骼的位置处，添加顺序连接的变形骨骼数量的变形骨骼。例如目标虚拟角色为虚拟人物，目标基础骨骼可以是手臂骨骼，且目标基础骨骼为四块。那么按照目标基础骨骼的长度，每节目标基础骨骼上可架设两块变形骨骼。由于变形骨骼的形状是正方体，若在一块目标基础骨骼上仅架设一块变形骨骼，该虚拟角色的手臂可能会很粗，因此需要基于目标虚拟角色的手臂设计尺寸调整变形骨骼数量。
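The sizing rule above — deform bones are cubes, so the count along a target base bone follows from the bone's length and the designed limb thickness (which fixes the cube edge) — can be sketched as a small formula. The function name and the example numbers are assumptions for illustration:

```python
import math

def deform_bone_count(bone_length, limb_thickness):
    """One cube per `limb_thickness` of length, at least one bone."""
    return max(1, math.ceil(bone_length / limb_thickness))

# e.g. a 10-unit arm bone with a 5-unit designed thickness → 2 cubes per bone,
# matching the text's example of two deform bones per target base bone
n = deform_bone_count(10.0, 5.0)
```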
上述虚拟角色控制方法，按照目标基础骨骼的长度确定变形骨骼数量，在基础骨架上的目标基础骨骼的位置处，添加顺序连接的变形骨骼数量的变形骨骼，使得变形骨骼的数量与虚拟角色更加匹配，能够提高得到的目标虚拟角色的逼真度。
在一个实施例中,该虚拟角色控制方法还包括:在角色行动地图上显示目标虚拟角色的缩略对象;控制缩略对象在缩略对象的行动范围内移动;当缩略对象的移动满足角色交互触发条件时,切换到进行角色交互的虚拟场景。
其中，角色行动地图用于在电子地图上显示目标虚拟角色能够行动的范围。虚拟场景具体可以是RPG（Role-playing game，角色扮演游戏）、SRPG（Strategy Role-Playing Game，策略角色扮演游戏）等不限于此。缩略对象即为目标虚拟角色缩小后的形象。缩略对象可用于在角色行动地图上执行角色动作。角色交互的虚拟场景是指虚拟场景中包括至少两个虚拟角色，其中一个是目标虚拟角色，且目标虚拟角色可在该虚拟场景中与另一虚拟角色产生交互。
具体地，终端设备在角色行动地图上显示目标虚拟角色的缩略对象。当检测到对缩略对象的移动操作时，终端设备控制缩略对象在缩略对象的行动范围内移动。当缩略对象的移动满足角色交互触发条件时，例如目标虚拟角色与位于其他阵营的其他虚拟角色相遇、或者目标虚拟角色落在位于其他阵营的其他虚拟角色的攻击范围内、或者目标虚拟角色与战斗的NPC（Non-Player Character，非玩家角色）相遇、或者在终端设备触发了技能释放操作或者播放奥义操作等不限于此，终端设备切换到进行角色交互的虚拟场景。则终端设备在角色交互的虚拟场景中显示目标虚拟角色的至少一部分，该目标虚拟角色绑定有基础骨骼和变形骨骼，在虚拟场景中触发目标虚拟角色的角色动作，当角色动作包括角色运动时，通过角色运动所关联基础骨骼的运动，控制目标虚拟角色在角色交互的虚拟场景中实施角色运动；当角色动作包括角色局部变形时，通过角色局部变形所关联变形骨骼的变形，控制目标虚拟角色在该虚拟场景中实施角色局部变形。
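The map flow above — a thumbnail object moving within its action range on a grid, then a scene switch once an interaction trigger condition is met — can be sketched minimally. The grid coordinates, the enemy-cell trigger, and all names are illustrative assumptions:

```python
def step(pos, delta, action_range):
    """Move the thumbnail only if the destination is inside its action range."""
    dest = (pos[0] + delta[0], pos[1] + delta[1])
    return dest if dest in action_range else pos

def scene_for(pos, enemy_cells):
    """Switch to the interaction scene when the trigger condition is met."""
    return "interaction" if pos in enemy_cells else "map"

action_range = {(0, 0), (1, 0), (2, 0)}
pos = step((0, 0), (1, 0), action_range)      # → (1, 0)
pos = step(pos, (1, 0), action_range)         # → (2, 0)
scene = scene_for(pos, enemy_cells={(2, 0)})  # → "interaction"
```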
例如，在战棋游戏中，在战棋游戏的地图上显示罗伊的缩略对象，控制该缩略对象在显示的方格的行动范围内移动。当罗伊的缩略对象移动到健壮的矿工所在位置时，则需要释放技能或播放奥义，此时需要夸张变形自己的身体，那么需要切换到如图5所示的角色交互的虚拟场景中。图5为一个实施例中角色交互的虚拟场景的界面示意图。图6为另一个实施例中角色交互的虚拟场景的界面示意图。图5的虚拟场景中显示罗伊的腿的一部分、头、身体和手。且在战棋游戏中触发的角色动作包括手部变大时，通过手部变大所关联骨骼例如手掌处的骨骼的变大，控制罗伊在战棋游戏的交互过程中实施手部变大。而图6的虚拟场景中显示罗伊的右手臂的一部分、左手臂的一部分、左手、上身和左手掌。且在战棋游戏中触发的角色动作包括手臂变粗时，通过手臂变粗所关联的骨骼例如手臂上的骨骼的变大，控制罗伊在战棋游戏的交互过程中实施手臂变粗。其中，在上述例子中，战棋游戏为虚拟场景，罗伊为目标虚拟角色，罗伊的缩略对象移动到健壮的矿工所在位置为角色交互触发条件，显示的方格的范围为缩略对象的行动范围，角色局部变形为手部变大和手臂变粗。
上述虚拟角色控制方法,控制缩略对象在对应的行动范围内移动,当缩略对象移动满足角色交互触发条件时,切换到角色交互场景,能够增强在虚拟场景中的交互性。
在一个实施例中，如图7所示，为一个实施例中虚拟角色手部变形的示意图。由图可知，目标虚拟角色的手部放大，且放大的效果比通过视场角的方式更大。图7中的手也是通过控制变形骨骼的放大实现虚拟对象的局部放大效果。
在一个实施例中,该虚拟角色控制方法还包括:在虚拟场景中,通过三维引擎,加载预制件;预制件是在目标虚拟角色的基础骨架上架设变形骨骼再进行蒙皮处理后得到的;通过预制件创建实例,得到目标虚拟角色。
其中,预制件是在目标虚拟角色的基础骨架上架设变形骨骼再进行蒙皮处理后得到的。具体地,在虚拟场景中,终端设备通过三维引擎,加载预制件,通过预制件创建实例,得到目标虚拟角色。
上述虚拟角色控制方法,通过三维引擎,加载预制件,通过预制件创建实例,得到目标虚拟角色,能够提高目标虚拟角色的逼真度,提高用户体验。
在一个实施例中,该虚拟场景为动画编辑场景。该虚拟角色控制方法还包括:在虚拟场景中,当目标虚拟角色在实施角色动作的过程中,记录目标虚拟角色在角色动作的实施过程中的关键帧;基于关键帧生成视频动画。
其中,动画编辑场景可以指制作动画对应的场景。关键帧中记录目标虚拟角色在该帧时间点的动作。例如关键帧中记录了目标虚拟角色的手部摆动、腿部变形等不限于此。关键帧可均匀分布,也可以不均匀分布。均匀分布例如,第1帧、第10帧、第20帧……第100帧为关键帧。不均匀分布例如第1帧、第15帧、第20帧……第100帧为关键帧。
具体地,在动画编辑场景中,当目标虚拟角色在实施角色动作的过程中,终端设备记录目标虚拟角色在实施过程中的关键帧,关键帧中包含目标虚拟角色的动作姿态。终端设备可基于关键帧自动生成连续帧的视频动画。例如,目标虚拟角色实施角色运动,那么记录角色运动的关键帧;目标虚拟对象实施角色局部变形,那么记录角色局部变形的关键帧;目标虚拟对象实施角色运动和角色局部变形,那么记录包含角色运动和角色局部变形的关键帧。
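The keyframe scheme above — recording poses at key frames while the action plays, then generating the continuous video animation from them — is commonly realized by interpolating between keys. A minimal sketch, assuming scalar pose values and linear interpolation (the patent does not specify the interpolation method):

```python
def record_keyframes(poses, key_times):
    """Keep only the poses recorded at the chosen key-frame times."""
    return {t: poses[t] for t in key_times}

def interpolate(keys, t):
    """Linearly interpolate a scalar pose value between surrounding keys."""
    times = sorted(keys)
    if t <= times[0]:
        return keys[times[0]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return keys[t0] * (1 - a) + keys[t1] * a
    return keys[times[-1]]

poses = {0: 1.0, 5: 1.5, 10: 2.0}   # e.g. hand-scale over time during a deformation
keys = record_keyframes(poses, [0, 10])
mid = interpolate(keys, 5)           # → 1.5, the intermediate frame is reconstructed
```

Storing only the keys and regenerating intermediate frames is also why keyframe animations are easy to modify, as the summary below notes.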
上述虚拟角色控制方法,在动画编辑场景中,当目标虚拟角色在实施角色动作的过程中,记录目标虚拟角色在实施过程中的关键帧,基于关键帧生成视频动画,能够制作得到目标虚拟角色在实施角色动作过程中的视频动画,并且易于修改虚拟角色的视频动画,提高虚拟角色控制效率。
在一个实施例中,该虚拟角色控制方法还包括:在角色行动地图上显示目标虚拟角色的缩略对象;控制缩略对象在缩略对象的行动范围内移动;当缩略对象的移动满足角色交互触发条件时,切换到角色交互场景;在角色交互场景下触发目标虚拟角色的角色动作时,播放视频动画。
其中，角色交互场景是指包括至少两个虚拟角色的场景，其中一个是目标虚拟角色，且目标虚拟角色可在该角色交互场景中与另一虚拟角色产生交互。
具体地，终端设备显示角色行动地图，并在角色行动地图上显示目标虚拟角色的缩略对象。当检测到对缩略对象的移动操作时，终端设备控制缩略对象在缩略对象的行动范围内移动。当缩略对象的移动满足角色交互触发条件时，例如目标虚拟角色与位于其他阵营的其他虚拟角色相遇、或者目标虚拟角色落在位于其他阵营的其他虚拟角色的攻击范围内、或者目标虚拟角色与战斗的NPC相遇等不限于此，终端设备切换到角色交互场景。当在角色交互场景下触发目标虚拟角色的角色动作时，终端设备播放该视频动画。
例如，在战棋游戏中，在战棋游戏的地图上显示罗伊的缩略对象，控制该缩略对象在显示的方格的行动范围内移动。当罗伊的缩略对象移动到健壮的矿工所在位置时，切换到技能释放场景。在技能释放场景下触发了目标虚拟角色的角色动作，则播放在动画编辑场景下生成的视频动画。在上述例子中，战棋游戏为虚拟场景，显示的方格的范围为缩略对象的行动范围，技能释放场景为角色交互场景。
上述虚拟角色控制方法,控制缩略对象在对应的行动范围内移动,当缩略对象的移动满足角色交互触发条件时,切换到角色交互场景,在角色交互场景下触发目标虚拟角色的角色动作时,播放视频动画,能够使得虚拟角色通过变形骨骼实现变形的效果,提高虚拟角色的逼真度。
在一个实施例中,一种虚拟角色控制方法,包括以下操作:
操作(a1),创建目标虚拟角色的基础骨架,基础骨架包括多于一个的基础骨骼。
操作(a2),确定基础骨架中的目标基础骨骼。
操作(a3),按照目标基础骨骼的长度确定变形骨骼数量。
操作(a4),在基础骨架上的目标基础骨骼的位置处,添加顺序连接的变形骨骼数量的变形骨骼。
操作(a5),对添加有变形骨骼的基础骨架进行蒙皮处理,得到目标虚拟角色。
操作(a6),将蒙皮处理后的目标虚拟角色作为模型导入三维引擎。
操作(a7),通过三维引擎将模型生成为预制件。
操作(a8),在三维引擎中导入预制件的动画配置文件。
操作(a9),通过三维引擎,调用预制件以生成虚拟场景下的目标虚拟角色,并通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作。
操作(a10),在角色行动地图上显示目标虚拟角色的缩略对象。
操作(a11),控制缩略对象在缩略对象的行动范围内移动。
操作(a12),当缩略对象的移动满足角色交互触发条件时,切换到进行角色交互的虚拟场景。
操作(a13),在虚拟场景中,通过三维引擎,加载预制件;预制件是在目标虚拟角色的基础骨架上架设变形骨骼再进行蒙皮处理后得到的。
操作(a14),通过预制件创建实例,得到目标虚拟角色。
操作(a15),在虚拟场景中显示目标虚拟角色的至少一部分。目标虚拟角色绑定有基础骨骼和变形骨骼。
操作(a16),在虚拟场景中触发目标虚拟角色的角色动作。
操作(a17),当角色动作包括角色运动时,通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在虚拟场景中实施角色运动。
操作(a18),当角色动作包括角色局部变形时,通过角色局部变形所关联变形骨骼的变形,控制目标虚拟角色在虚拟场景中实施角色局部变形。
上述虚拟角色控制方法，创建目标虚拟角色的基础骨架，并基于目标基础骨骼确定变形骨骼数量，能够使得变形骨骼的数量与虚拟角色更加匹配，能够提高得到的目标虚拟角色的逼真度；通过三维引擎调用预制件生成目标虚拟角色，并通过动作参数值控制目标虚拟角色实施角色动作，则相对于传统技术中手动绘制虚拟角色的动画的方式，能够提高虚拟角色控制效率，同时由于通过基础骨骼和变形骨骼控制目标虚拟角色，不需要保存太多图像，能够减少占用的存储空间。
在一个实施例中,一种虚拟角色控制方法,包括以下操作:
操作(b1),在动画编辑场景中显示目标虚拟角色的至少一部分,目标虚拟角色绑定有基础骨骼和变形骨骼。
操作(b2),在动画编辑场景中触发目标虚拟角色的角色动作。
操作(b3),当角色动作包括角色运动时,通过角色运动所关联基础骨骼的运动,控制目标虚拟角色在动画编辑场景中实施角色运动。
操作(b4),当角色动作包括角色局部变形时,通过角色局部变形所关联变形骨骼的变形,控制目标虚拟角色在动画编辑场景中实施角色局部变形。
操作(b5),在动画编辑场景中,当目标虚拟角色在实施角色动作的过程中,记录目标虚拟角色在实施角色动作的过程中的关键帧。
操作(b6),基于关键帧生成视频动画。
操作(b7),在角色行动地图上显示目标虚拟角色的缩略对象。
操作(b8)，控制缩略对象在缩略对象的行动范围内移动。
操作(b9)，当缩略对象的移动满足角色交互触发条件时，切换到角色交互场景。
操作(b10)，在角色交互场景下触发目标虚拟角色的角色动作时，播放视频动画。
上述虚拟角色控制方法,在动画编辑场景中,当目标虚拟角色在实施角色动作的过程中,记录目标虚拟角色在实施过程中的关键帧,基于关键帧生成视频动画,能够制作得到目标虚拟角色在实施角色动作过程中的视频动画,并且易于修改虚拟角色的视频动画,提高虚拟角色控制效率,控制缩略对象在对应的行动范围内移动,当缩略对象的移动满足角色交互触发条件时,切换到角色交互场景,在角色交互场景下触发目标虚拟角色的角色动作时,播放视频动画,能够使得虚拟角色通过变形骨骼实现变形的效果,提高虚拟角色的逼真度。
在一个实施例中,3DS MAX:3D Studio Max,常简称为3dMax或3ds MAX,是Discreet公司开发的(后被Autodesk公司合并)基于PC系统的三维动画渲染和制作软件。其前身是基于DOS操作系统的3D Studio系列软件。如图8所示,为一个实施例中目标虚拟角色对应的基础骨架的示意图。如图9所示,为一个实施例中添加有变形骨骼的基础骨架的示意图。其中,图中的方块902即为变形骨骼902。在图9中,目标虚拟角色的左臂包括4块变形骨骼902,右臂也包括4块变形骨骼902。目标虚拟角色头上的方块904为特效挂点904,用于在对应位置显示气泡框等。图9中还包括武器挂点906,在906对应的位置上能够挂载武器。
如图10所示，为一个实施例中蒙皮处理后的目标虚拟角色的示意图。图10中的网格即为目标虚拟角色的皮肤（skin）。图10中目标虚拟对象右手上持有武器。通过蒙皮处理，能够使目标虚拟角色更加逼真。可在3DS MAX软件中选择蒙皮，并在参数中的双四元数选择DQ蒙皮切换。可在软件中添加或者移除骨骼，并且每块骨骼均有对应的骨骼标识。并且在软件中可设置封套属性等。以及可选择显示的内容，例如图10中勾选色彩显示顶点权重、显示有色面、显示所有Gizmos（小装置）、不显示封套，并可勾选在顶部绘制横截面和封套，并且可设置高级参数。高级参数例如始终变形、回退变换顶点等。
如图11所示,为一个实施例中变形骨骼的位置界面示意图。其中,在max软件中找到变形骨骼建立点,然后选中“虚拟对象”(英文版:dummy)将变形骨骼拖出,调节变形骨骼的大小,与该虚拟角色匹配上即可。该变形骨骼配置在基础骨骼对应的位置并且按照虚拟角色的肌肉走势配置即可。
如图12所示，为一个实施例中控制变形骨骼变形的示意图。在蒙皮绑定结束后，可在3DS MAX软件中放大缩小该变形骨骼，则可直观地显示该目标虚拟角色的左手臂变大。如图13所示，为一个实施例中在三维引擎中显示的目标虚拟角色的示意图。如图14所示，为一个实施例中放大目标虚拟角色的手的示意图。在图14中，目标虚拟角色的手掌部分绑定有变形骨骼，通过控制变形骨骼放大，使得该目标虚拟角色的手呈现放大效果。
本申请还提供一种应用于战棋游戏的应用场景，该应用场景应用上述的虚拟角色控制方法。具体地，该虚拟角色控制方法在该应用场景的应用如下：在战棋游戏项目中，我们会有很多的角色进行战斗，在这些角色的战斗中，有时需要播放奥义、释放技能或者释放大招等的镜头特写。在某些特殊战斗时，美术动画的表现方法需要更夸张和有力，这就需要在传统动画表现的基础上，在手臂、手指、躯干、脚等部位增加局部放大或变形功能。创建目标虚拟角色的CS骨架，CS骨架包括多于一个的CS骨骼。确定CS骨架中的目标CS骨骼；按照目标CS骨骼的长度确定变形骨骼数量；在CS骨架上的目标CS骨骼的位置处，添加顺序连接的变形骨骼数量的dummy骨骼。对添加有dummy骨骼的CS骨架进行蒙皮处理，得到目标虚拟角色；将蒙皮处理后的目标虚拟角色作为模型导入unity。通过unity将模型生成为prefab；在unity中导入prefab的动画配置文件；通过unity，调用prefab以生成虚拟场景下的目标虚拟角色，并通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作。在战棋游戏的角色行动地图上显示目标虚拟角色的缩略头像；其中，缩略头像即为缩略对象；控制缩略头像在对应的活动范围内移动；当缩略对象的移动满足角色交互触发条件时，即需要释放技能或播放奥义等时，切换到对阵场景。在战棋游戏中显示目标虚拟角色的至少一部分；目标虚拟角色绑定有CS骨骼和dummy骨骼。以角色动作为释放技能、角色交互的虚拟场景为对阵场景为例，在对阵的虚拟场景中触发目标虚拟角色的技能；当技能包括角色运动时，例如行走、跳起等角色运动时，通过角色运动所关联CS骨骼的运动，控制目标虚拟角色在战棋游戏场景中实施角色运动。当技能包括角色局部变形时，例如手部放大等，通过角色局部变形所关联dummy骨骼的变形，控制目标虚拟角色在战棋游戏场景中实施角色局部变形。其中，虚拟场景是战棋游戏场景，CS骨架即为基础骨架，CS骨骼即为基础骨骼，目标CS骨骼即为目标基础骨骼，dummy骨骼即为变形骨骼，unity即为三维引擎，prefab是预制件，缩略头像是缩略对象，角色交互的场景是对阵场景。
本申请还提供一种动画编辑的应用场景，该应用场景应用上述的虚拟角色控制方法。当在制作角色动画的时候，游戏里某些镜头需要突出此角色特殊效果，比如手的伸缩变大，躯干变大，角色的局部需要特殊变形的时候，在动画编辑场景中显示目标虚拟角色的至少一部分；目标虚拟角色绑定有CS骨骼和dummy骨骼；在动画编辑场景中触发目标虚拟角色的角色动作；当角色动作包括角色运动时，通过角色运动所关联CS骨骼的运动，控制目标虚拟角色在动画编辑场景中实施角色运动；当角色动作包括角色局部变形时，通过角色局部变形所关联dummy骨骼的变形，控制目标虚拟角色在动画编辑场景中实施角色局部变形。在动画编辑场景中，当目标虚拟角色在实施角色动作的过程中，记录目标虚拟角色在实施角色动作的过程中的关键帧；基于关键帧生成视频动画。在角色行动地图上显示目标虚拟角色的缩略对象；控制缩略对象在缩略对象的行动范围内移动；当缩略对象的移动满足角色交互触发条件时，切换到对阵场景；在对阵场景下触发目标虚拟角色的角色动作时，播放基于目标虚拟角色实施角色动作过程中生成的视频动画。其中，虚拟场景是动画编辑场景，CS骨架即为基础骨架，CS骨骼即为基础骨骼，目标CS骨骼即为目标基础骨骼，dummy骨骼即为变形骨骼，对阵场景是角色交互场景。
应该理解的是,虽然图2至4的流程图中的各个操作按照箭头的指示依次显示,但是这些操作并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些操作的执行并没有严格的顺序限制,这些操作可以以其它的顺序执行。而且,图2至4中的至少一部分操作可以包括多个操作或者多个阶段,这些操作或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些操作或者阶段的执行顺序也不必然是依次进行,而是可以与其它操作或者其它操作中的操作或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图15所示,提供了一种虚拟角色控制装置,该装置可以采用软件模块或硬件模块,或者是二者的结合成为计算机设备的一部分,该装置具体包括:显示模块1502、动作触发模块1504和控制模块1506,其中:
显示模块1502,用于在虚拟场景中显示目标虚拟角色的至少一部分;目标虚拟角色绑定有基础骨骼和变形骨骼。
动作触发模块1504,用于在虚拟场景中触发目标虚拟角色的角色动作。
控制模块1506，用于当角色动作包括角色运动时，通过角色运动所关联基础骨骼的运动，控制目标虚拟角色在虚拟场景中实施角色运动。
控制模块1506，还用于当角色动作包括角色局部变形时，通过角色局部变形所关联变形骨骼的变形，控制目标虚拟角色在虚拟场景中实施角色局部变形。
上述虚拟角色控制装置,目标虚拟角色绑定有基础骨骼和变形骨骼,并且在虚拟场景中触发的角色动作包括角色运动时,通过基础骨骼的运动控制目标虚拟角色实施角色运动,当在虚拟场景中触发的角色动作包括角色局部变形时,通过变形骨骼的变形实施角色局部变形,则相对于传统技术中手动绘制虚拟角色的动画的方式,能够提高虚拟角色控制效率,同时由于通过基础骨骼和变形骨骼控制目标虚拟角色,不需要保存太多图像,能够减少占用的存储空间。
在一个实施例中,基础骨骼的运动包括骨骼移动和骨骼旋转中至少一种;变形骨骼的变形包括骨骼局部伸缩和骨骼整体缩放中至少一种。
上述虚拟角色控制装置,基础骨骼的运动包括骨骼移动、骨骼旋转中至少一种,变形骨骼的变形包括骨骼局部伸缩和骨骼整体缩放中至少一种,即基础骨骼和变形骨骼能够分别实现不同的功能,使得目标虚拟角色实现不同的角色动作,提高虚拟角色的交互性。
在一个实施例中，该虚拟角色控制装置还包括角色构建模块。角色构建模块用于创建目标虚拟角色的基础骨架，基础骨架包括多于一个的基础骨骼；在基础骨架上添加至少一个变形骨骼；对添加有变形骨骼的基础骨架进行蒙皮处理，得到目标虚拟角色。
上述虚拟角色控制装置,创建目标虚拟角色的基础骨架,在基础骨架上添加至少一个变形骨骼,对添加有变形骨骼的基础骨架进行蒙皮处理,得到目标虚拟角色,则能够通过架设变形骨骼实现目标虚拟角色的变形,减少占用的存储空间。
在一个实施例中,角色构建模块还用于将蒙皮处理后的目标虚拟角色作为模型导入三维引擎;通过三维引擎将模型生成为预制件;在三维引擎中导入预制件的动画配置文件;通过三维引擎,调用预制件以生成虚拟场景下的目标虚拟角色;控制模块1506用于通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作。
上述虚拟角色控制装置,将蒙皮处理后的目标虚拟角色作为模型导入三维引擎,通过三维引擎将模型生成预制件,并导入动画配置文件,调用预制件以生成虚拟场景下的目标虚拟角色,通过动画配置文件的动作参数值控制目标虚拟角色实施角色动作,能够通过动作参数值控制目标虚拟角色实现变形,不需要保存很多图像,减少占用的存储空间。
在一个实施例中,角色构建模块还用于确定基础骨架中的目标基础骨骼;按照目标基础骨骼的长度确定变形骨骼数量;在基础骨架上的目标基础骨骼的位置处,添加顺序连接的变形骨骼数量的变形骨骼。
上述虚拟角色控制装置,按照目标基础骨骼的长度确定变形骨骼数量,在基础骨架上的目标基础骨骼的位置处,添加顺序连接的变形骨骼数量的变形骨骼,使得变形骨骼的数量与虚拟角色更加匹配,能够提高得到的目标虚拟角色的逼真度。
在一个实施例中,角色构建模块还用于在虚拟场景中,通过三维引擎,加载预制件;预制件是在目标虚拟角色的基础骨架上架设变形骨骼再进行蒙皮处理后得到的;通过预制件创建实例,得到目标虚拟角色。
上述虚拟角色控制装置，通过三维引擎加载预制件，通过预制件创建实例，得到目标虚拟角色，能够提高目标虚拟角色的逼真度，提高用户体验。
在一个实施例中,虚拟对象控制装置还包括动画生成模块,动画生成模块用于在虚拟场景中,当目标虚拟角色在实施角色动作的过程中,记录目标虚拟角色在实施角色动作的过程中的关键帧;基于关键帧生成视频动画。
上述虚拟角色控制装置,在虚拟场景中,当目标虚拟角色在实施角色动作的过程中,记录目标虚拟角色在实施过程中的关键帧,基于关键帧生成视频动画,能够制作得到目标虚拟角色在实施角色动作过程中的视频动画,并且易于修改虚拟角色的视频动画,提高虚拟角色控制效率。
在一个实施例中,控制模块1506还用于在角色行动地图上显示目标虚拟角色的缩略对象;控制缩略对象在缩略对象的行动范围内移动;当缩略对象的移动满足角色交互触发条件时,切换到角色交互场景;在角色交互场景下触发目标虚拟角色的角色动作时,播放视频动画。
上述虚拟角色控制装置,控制缩略对象在对应的行动范围内移动,当缩略对象的移动满足角色交互触发条件时,切换到角色交互场景,在角色交互场景下触发目标虚拟角色的角色动作时,播放视频动画,能够使得虚拟角色通过变形骨骼实现变形的效果,提高虚拟角色的逼真度。
关于虚拟角色控制装置的具体限定可以参见上文中对于虚拟角色控制方法的限定，在此不再赘述。上述虚拟角色控制装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中，也可以以软件形式存储于计算机设备中的存储器中，以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端设备,其内部结构图可以如图16所示。该计算机设备包括通过系统总线连接的处理器、存储器、通信接口、显示屏和输入装置。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的通信接口用于与外部的终端设备进行有线或无线方式的通信,无线方式可通过WIFI、运营商网络、NFC(近场通信)或其他技术实现。该计算机程序被处理器执行时以实现一种虚拟角色控制方法。该计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图16中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,还提供了一种计算机设备,包括存储器和一个或多个处理器,存储器中存储有计算机可读指令,计算机可读指令被处理器执行时,使得一个或多个处理器执行上述各方法实施例中的操作。
在一个实施例中,提供了一个或多个存储有计算机可读指令的非易失性可读存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行上述各方法实施例中的操作。
在一个实施例中,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述各方法实施例中的操作。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。

Claims (20)

  1. 一种虚拟角色控制方法,由计算机设备执行,所述方法包括:
    在虚拟场景中显示目标虚拟角色的至少一部分;所述目标虚拟角色绑定有基础骨骼和变形骨骼;
    在所述虚拟场景中触发所述目标虚拟角色的角色动作;
    当所述角色动作包括角色运动时,通过所述角色运动所关联基础骨骼的运动,控制所述目标虚拟角色在所述虚拟场景中实施所述角色运动;及
    当所述角色动作包括角色局部变形时,通过所述角色局部变形所关联变形骨骼的变形,控制所述目标虚拟角色在所述虚拟场景中实施所述角色局部变形。
  2. 根据权利要求1所述的方法,其特征在于,所述基础骨骼的运动包括骨骼移动和骨骼旋转中至少一种;所述变形骨骼的变形包括骨骼局部伸缩和骨骼整体缩放中至少一种。
  3. 根据权利要求1所述的方法,其特征在于,所述目标虚拟角色通过角色构建步骤生成,所述角色构建步骤包括:
    创建目标虚拟角色的基础骨架,所述基础骨架包括多于一个的基础骨骼;
    在所述基础骨架上添加至少一个变形骨骼;及
    对添加有变形骨骼的所述基础骨架进行蒙皮处理,得到目标虚拟角色。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    将蒙皮处理后的目标虚拟角色作为模型导入三维引擎;
    通过所述三维引擎将所述模型生成为预制件;
    在三维引擎中导入所述预制件的动画配置文件;及
    通过所述三维引擎,调用所述预制件以生成虚拟场景下的目标虚拟角色,并通过所述动画配置文件的动作参数值控制所述目标虚拟角色实施角色动作。
  5. 根据权利要求3所述的方法,其特征在于,所述在所述基础骨架上添加至少一个变形骨骼,包括:
    确定所述基础骨架中的目标基础骨骼;
    按照所述目标基础骨骼的长度确定变形骨骼数量;及
    在所述基础骨架上的所述目标基础骨骼的位置处,添加顺序连接的所述变形骨骼数量的变形骨骼。
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在角色行动地图上显示目标虚拟角色的缩略对象;
    控制所述缩略对象在所述缩略对象的行动范围内移动;及
    当所述缩略对象的移动满足角色交互触发条件时,切换到进行角色交互的虚拟场景。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在虚拟场景中,通过三维引擎,加载预制件;所述预制件是在所述目标虚拟角色的基础骨架上架设变形骨骼再进行蒙皮处理后得到的;及
    通过所述预制件创建实例,得到目标虚拟角色。
  8. 根据权利要求1所述的方法,其特征在于,所述虚拟场景为动画编辑场景,所述方法还包括:
    在所述虚拟场景中,当所述目标虚拟角色在实施所述角色动作的过程中,记录所述目标虚拟角色在实施所述角色动作的过程中的关键帧;及
    基于所述关键帧生成视频动画。
  9. 根据权利要求8所述的方法,其特征在于,所述方法还包括:
    在角色行动地图上显示目标虚拟角色的缩略对象;
    控制所述缩略对象在所述缩略对象的行动范围内移动;
    当所述缩略对象的移动满足角色交互触发条件时,切换到角色交互场景;及
    在所述角色交互场景下触发所述目标虚拟角色的角色动作时,播放所述视频动画。
  10. 一种虚拟角色控制装置,所述装置包括:
    显示模块,用于在虚拟场景中显示目标虚拟角色的至少一部分;所述目标虚拟角色绑定基础骨骼和变形骨骼;
    动作触发模块,用于在所述虚拟场景中触发所述目标虚拟角色的角色动作;
    控制模块,用于当所述角色动作包括角色运动时,通过所述角色运动所关联基础骨骼的运动,控制所述目标虚拟角色在所述虚拟场景中实施所述角色运动;及
    所述控制模块,用于当所述角色动作包括角色局部变形时,通过所述角色局部变形所关联变形骨骼的变形,控制所述目标虚拟角色在所述虚拟场景中实施所述角色局部变形。
  11. 根据权利要求10所述的装置,其特征在于,所述基础骨骼的运动包括骨骼移动和骨骼旋转中至少一种;所述变形骨骼的变形包括骨骼局部伸缩和骨骼整体缩放中至少一种。
  12. 根据权利要求10所述的装置,其特征在于,所述装置还包括角色构建模块;所述角色构建模块,用于创建目标虚拟角色的基础骨架,所述基础骨架包括多于一个的基础骨骼;在所述基础骨架上添加至少一个变形骨骼;及对添加有变形骨骼的所述基础骨架进行蒙皮处理,得到目标虚拟角色。
  13. 根据权利要求12所述的装置,其特征在于,所述角色构建模块,还用于将蒙皮处理后的目标虚拟角色作为模型导入三维引擎;通过所述三维引擎将所述模型生成为预制件;在三维引擎中导入所述预制件的动画配置文件;及通过所述三维引擎,调用所述预制件以生成虚拟场景下的目标虚拟角色,并通过所述动画配置文件的动作参数值控制所述目标虚拟角色实施角色动作。
  14. 根据权利要求12所述的装置,其特征在于,所述角色构建模块,还用于确定所述基础骨架中的目标基础骨骼;按照所述目标基础骨骼的长度确定变形骨骼数量;及在所述基础骨架上的所述目标基础骨骼的位置处,添加顺序连接的所述变形骨骼数量的变形骨骼。
  15. 根据权利要求10所述的装置,其特征在于,所述角色构建模块,还用于在角色行动地图上显示目标虚拟角色的缩略对象;控制所述缩略对象在所述缩略对象的行动范围内移动;及当所述缩略对象的移动满足角色交互触发条件时,切换到进行角色交互的虚拟场景。
  16. 根据权利要求10所述的装置,其特征在于,所述角色构建模块,还用于在虚拟场景中,通过三维引擎,加载预制件;所述预制件是在所述目标虚拟角色的基础骨架上架设变形骨骼再进行蒙皮处理后得到的;及通过所述预制件创建实例,得到目标虚拟角色。
  17. 根据权利要求10所述的装置,其特征在于,所述装置还包括动画生成模块;所述动画生成模块,用于在所述虚拟场景中,当所述目标虚拟角色在实施所述角色动作的过程中,记录所述目标虚拟角色在实施所述角色动作的过程中的关键帧;及基于所述关键帧生成视频动画。
  18. 根据权利要求17所述的装置,其特征在于,所述控制模块,还用于在角色行动地图上显示目标虚拟角色的缩略对象;控制所述缩略对象在所述缩略对象的行动范围内移动;当所述缩略对象的移动满足角色交互触发条件时,切换到角色交互场景;及在所述角色交互场景下触发所述目标虚拟角色的角色动作时,播放所述视频动画。
  19. 一种计算机设备,包括存储器和一个或多个处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行权利要求1至9中任一项所述的方法的步骤。
  20. 一个或多个存储有计算机可读指令的非易失性可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行权利要求1至9中任一项所述的方法的步骤。
PCT/CN2021/100092 2020-07-02 2021-06-15 虚拟角色控制方法、装置、计算机设备和存储介质 WO2022001652A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/883,446 US20230045852A1 (en) 2020-07-02 2022-08-08 Method and apparatus for controlling virtual character, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010624699.4 2020-07-02
CN202010624699.4A CN111659115B (zh) 2020-07-02 2020-07-02 虚拟角色控制方法、装置、计算机设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/883,446 Continuation US20230045852A1 (en) 2020-07-02 2022-08-08 Method and apparatus for controlling virtual character, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022001652A1 true WO2022001652A1 (zh) 2022-01-06

Family

ID=72391158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100092 WO2022001652A1 (zh) 2020-07-02 2021-06-15 虚拟角色控制方法、装置、计算机设备和存储介质

Country Status (3)

Country Link
US (1) US20230045852A1 (zh)
CN (1) CN111659115B (zh)
WO (1) WO2022001652A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782645A (zh) * 2022-03-11 2022-07-22 科大讯飞(苏州)科技有限公司 虚拟数字人制作方法、相关设备及可读存储介质
CN115690282A (zh) * 2022-12-30 2023-02-03 海马云(天津)信息技术有限公司 虚拟角色的调整方法及装置

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111659115B (zh) * 2020-07-02 2022-03-11 腾讯科技(深圳)有限公司 虚拟角色控制方法、装置、计算机设备和存储介质
CN112184863B (zh) * 2020-10-21 2024-03-15 网易(杭州)网络有限公司 一种动画数据的处理方法和装置
CN112233211B (zh) * 2020-11-03 2024-04-09 网易(杭州)网络有限公司 动画制作的方法、装置、存储介质及计算机设备
CN112354186B (zh) * 2020-11-10 2024-07-16 网易(杭州)网络有限公司 游戏动画模型控制方法、装置、电子设备以及存储介质
CN112435323B (zh) * 2020-11-26 2023-08-22 网易(杭州)网络有限公司 虚拟模型中的光效处理方法、装置、终端及介质
CN112785668B (zh) * 2021-01-25 2021-11-09 旗林信息科技(杭州)有限公司 一种动漫三维角色动作轨迹融合系统
CN113610992B (zh) * 2021-08-04 2022-05-20 北京百度网讯科技有限公司 骨骼驱动系数确定方法、装置、电子设备及可读存储介质
CN114596393B (zh) * 2022-01-24 2024-06-07 深圳市大富网络技术有限公司 一种骨骼模型生成方法、装置、系统及存储介质
CN114602177A (zh) * 2022-03-28 2022-06-10 百果园技术(新加坡)有限公司 虚拟角色的动作控制方法、装置、设备和存储介质
CN116228942B (zh) * 2023-03-17 2024-02-06 北京优酷科技有限公司 角色动作提取方法、设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007103312A2 (en) * 2006-03-07 2007-09-13 Goma Systems Corp. User interface for controlling virtual characters
CN102509338A (zh) * 2011-09-20 2012-06-20 北京航空航天大学 一种基于轮廓骨架图的视频场景行为生成方法
CN108961367A (zh) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 三维虚拟偶像直播中角色形象变形的方法、系统及装置
CN110992495A (zh) * 2019-12-26 2020-04-10 珠海金山网络游戏科技有限公司 一种虚拟模型的变形方法及装置
CN111161427A (zh) * 2019-12-04 2020-05-15 北京代码乾坤科技有限公司 虚拟骨骼模型的自适应调节方法、装置及电子装置
CN111659115A (zh) * 2020-07-02 2020-09-15 腾讯科技(深圳)有限公司 虚拟角色控制方法、装置、计算机设备和存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10188028A (ja) * 1996-10-31 1998-07-21 Konami Co Ltd スケルトンによる動画像生成装置、該動画像を生成する方法、並びに該動画像を生成するプログラムを記憶した媒体
CN106296778B (zh) * 2016-07-29 2019-11-15 网易(杭州)网络有限公司 虚拟对象运动控制方法与装置
CN107294838B (zh) * 2017-05-24 2021-02-09 腾讯科技(深圳)有限公司 社交应用的动画生成方法、装置、系统以及终端
CN110298907B (zh) * 2019-07-04 2023-07-25 广州西山居网络科技有限公司 一种虚拟角色动作控制方法及装置、计算设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007103312A2 (en) * 2006-03-07 2007-09-13 Goma Systems Corp. User interface for controlling virtual characters
CN102509338A (zh) * 2011-09-20 2012-06-20 北京航空航天大学 一种基于轮廓骨架图的视频场景行为生成方法
CN108961367A (zh) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 三维虚拟偶像直播中角色形象变形的方法、系统及装置
CN111161427A (zh) * 2019-12-04 2020-05-15 北京代码乾坤科技有限公司 虚拟骨骼模型的自适应调节方法、装置及电子装置
CN110992495A (zh) * 2019-12-26 2020-04-10 珠海金山网络游戏科技有限公司 一种虚拟模型的变形方法及装置
CN111659115A (zh) * 2020-07-02 2020-09-15 腾讯科技(深圳)有限公司 虚拟角色控制方法、装置、计算机设备和存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782645A (zh) * 2022-03-11 2022-07-22 科大讯飞(苏州)科技有限公司 虚拟数字人制作方法、相关设备及可读存储介质
CN114782645B (zh) * 2022-03-11 2023-08-29 科大讯飞(苏州)科技有限公司 虚拟数字人制作方法、相关设备及可读存储介质
CN115690282A (zh) * 2022-12-30 2023-02-03 海马云(天津)信息技术有限公司 虚拟角色的调整方法及装置

Also Published As

Publication number Publication date
US20230045852A1 (en) 2023-02-16
CN111659115B (zh) 2022-03-11
CN111659115A (zh) 2020-09-15

Similar Documents

Publication Publication Date Title
WO2022001652A1 (zh) 虚拟角色控制方法、装置、计算机设备和存储介质
US11446582B2 (en) System and method for streaming game sessions to third party gaming consoles
US20230050933A1 (en) Two-dimensional figure display method and apparatus for virtual object, device, and storage medium
US20140078144A1 (en) Systems and methods for avatar creation
US10933327B2 (en) Network-based video game editing and modification distribution system
CN111714880B (zh) 画面的显示方法和装置、存储介质、电子装置
US11816772B2 (en) System for customizing in-game character animations by players
WO2022184128A1 (zh) 虚拟对象的技能释放方法、装置、设备及存储介质
US11238667B2 (en) Modification of animated characters
TWI831074B (zh) 虛擬場景中的信息處理方法、裝置、設備、媒體及程式產品
CN111899319B (zh) 动画对象的表情生成方法和装置、存储介质及电子设备
JP2023126292A (ja) 情報表示方法、装置、機器及びプログラム
JP2024518913A (ja) 主制御対象の投影生成方法、装置、コンピュータ機器及びコンピュータプログラム
CN113313796B (zh) 场景生成方法、装置、计算机设备和存储介质
CN115526967A (zh) 虚拟模型的动画生成方法、装置、计算机设备及存储介质
US20220172431A1 (en) Simulated face generation for rendering 3-d models of people that do not exist
CN113902881A (zh) 虚拟场景的适配显示方法、装置、设备、介质及程序产品
Qiu et al. Design and Implementation of “Winning Luding Bridge” Immersion FPS Game Based on Unity3D Technology
CN112933595A (zh) 游戏中处理跳字显示的方法、装置、电子设备及存储介质
Do Manh Multi-platform Multiplayer RPG Game
CN111078031A (zh) 一种虚拟人物的位置确定方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21834520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/05/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21834520

Country of ref document: EP

Kind code of ref document: A1