CN111063012A - Animation character display method and device, electronic equipment and storage medium


Info

Publication number
CN111063012A
Authority
CN
China
Prior art keywords
dimensional model
target
hitting
current
shaped posture
Prior art date
Legal status
Pending
Application number
CN201911302869.0A
Other languages
Chinese (zh)
Inventor
汪皓浩 (Wang Haohao)
Current Assignee
Mihoyo Technology Shanghai Co., Ltd.
Original Assignee
Mihoyo Technology Shanghai Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Mihoyo Technology Shanghai Co., Ltd.
Priority to CN201911302869.0A
Publication of CN111063012A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A63F 13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/833 Hand-to-hand fighting, e.g. martial arts competition
    • A63F 2250/00 Miscellaneous game characteristics
    • A63F 2250/30 Miscellaneous game characteristics with a three-dimensional image
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions specially adapted for executing a specific type of game
    • A63F 2300/8029 Fighting without shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a display method and device for an animated character, electronic equipment and a storage medium. The method comprises the following steps: after a hitting operation of a user on a target animated character is detected, acquiring operation information matched with the hitting operation; acquiring a current T-shaped posture (T-pose) three-dimensional model of the target animated character; generating a T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model; and updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated model. The embodiment of the invention can simulate the real hit state after the target animated character is hit, so that the user sees the scars produced by the hitting operation on the target animated character, which improves the reality of the user's interaction with the animated character and improves the user experience.

Description

Animation character display method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a display method and device of an animation character, electronic equipment and a storage medium.
Background
With the development of network technology, people have higher and higher requirements on the reality of interaction with animated characters in games. In a battle-type game, a user may attack an animated character whose character type is enemy. In order to improve user experience, when a user hits an animation character with a character type of enemy, a preset animation special effect is displayed to simulate the hit.
In the prior art, a preset animation special effect is displayed to simulate a hit only at the moment the animated character is hit. In the subsequent game process the hit character bears no trace of the hit, so a real hit state cannot be simulated after the character is hit, the user does not feel any genuine interaction with the animated character, and the user experience is poor.
Disclosure of Invention
The embodiment of the invention provides a display method and device for an animated character, electronic equipment and a storage medium, which optimize the existing display scheme for animated characters: a real hit state can be simulated after the animated character is hit, which improves the reality of the user's interaction with the animated character and improves the user experience.
In a first aspect, an embodiment of the present invention provides a method for displaying an animated character, including:
after the hitting operation of the user on the target animated character is detected, acquiring operation information matched with the hitting operation;
acquiring a current T-shaped posture three-dimensional model of the target animated character;
generating a T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model;
and updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
In a second aspect, an embodiment of the present invention further provides a display apparatus for an animated character, including:
an operation detection module, used for acquiring operation information matched with the hitting operation after the hitting operation of the user on the target animated character is detected;
a model acquisition module, used for acquiring a current T-shaped posture three-dimensional model of the target animated character;
a model generation module, used for generating a T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model;
and a model updating module, used for updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the display method for an animated character provided by the embodiments of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for displaying an animated character according to the embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, after a hitting operation of the user on a target animated character is detected, operation information matched with the hitting operation is acquired, and the current T-shaped posture three-dimensional model of the target animated character is acquired. A T-shaped posture three-dimensional model of the target animated character in the hit state is then generated according to the operation information and the current T-shaped posture three-dimensional model. Finally, the current T-shaped posture three-dimensional model of the target animated character is updated to the T-shaped posture three-dimensional model in the hit state, and an animation matched with the target animated character is generated for display according to the updated model. In this way, the real hit state is simulated after the target animated character is hit, so that in the subsequent game the user can see the scars produced by the hitting operation on the target animated character, which improves the reality of the user's interaction with the animated character and improves the user experience.
Drawings
FIG. 1 is a flowchart illustrating a method for displaying an animated character according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for displaying an animated character according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a display device for an animated character according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a method for displaying an animated character according to an embodiment of the present invention. The embodiment is applicable to controlling the display effect of an animated character. The method can be executed by the display device for an animated character provided by the embodiments of the invention; the device can be implemented in software and/or hardware and is generally integrated in electronic equipment. As shown in fig. 1, the method of this embodiment specifically includes:
Step 101, after detecting the hitting operation of the user on the target animated character, acquiring operation information matched with the hitting operation.
Optionally, in a battle-type game, the user may attack an animated character whose character type is enemy. The target animated character is the animated character hit by the user. After the hitting operation of the user on the target animated character is detected, operation information matched with the hitting operation is acquired.
In one specific example, a "monster" is an animated character in the game whose character type is enemy. The user may attack the "monster", and the "monster" hit by the user is the target animated character. After the hitting operation of the user on the target animated character is detected, operation information matched with the hitting operation is acquired.
Optionally, the operation information may include: the attack type, the hit position and the hitting direction.
The attack type is the type of attack of the hitting operation. The hit position is the position where the target animated character is hit by the user. The hitting direction is the direction of the hitting operation.
In one embodiment, the user hits the target animated character from the front using a tachi (a long sword among the game's virtual weapons); the attack type matched with the hitting operation is the tachi. The hit position is the position where the target animated character is struck by the tachi. The hitting direction is the direction of the tachi strike, here the forward direction.
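To make the data flow concrete, the operation information can be represented as a small record. The following Python sketch is illustrative only; the `AttackType` and `HitOperation` names are assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass
from enum import Enum

class AttackType(Enum):
    # The patent's example attack is a tachi (long-sword) strike;
    # other types would be added per business requirements.
    TACHI = "tachi"

@dataclass
class HitOperation:
    """Operation information matched with a detected hitting operation."""
    attack_type: AttackType                    # e.g. AttackType.TACHI
    hit_position: tuple[float, float, float]   # where the character was struck
    hit_direction: tuple[float, float, float]  # direction of the strike

# Example: a tachi strike from the front (the forward direction)
op = HitOperation(AttackType.TACHI,
                  hit_position=(0.0, 1.2, 0.1),
                  hit_direction=(0.0, 0.0, -1.0))
```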
Step 102, acquiring the current T-shaped posture three-dimensional model of the target animated character.
The T-shaped posture three-dimensional model of the target animated character is a three-dimensional model of the character in the T-pose. The current T-shaped posture three-dimensional model is the T-shaped posture three-dimensional model of the target animated character at the current moment.
Step 103, generating the T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model.
Optionally, the operation information may include: the attack type, the hit position and the hitting direction. Generating the T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model may include: determining a target hitting region on the current T-shaped posture three-dimensional model according to the hit position and hitting direction; generating a scar map matched with the target hitting region according to the attack type and a preset scar map generation rule; and applying the scar map onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction, to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state.
The target hitting region is a region matched with the hitting position on the current T-shaped posture three-dimensional model.
Optionally, determining the target hitting region on the current T-shaped posture three-dimensional model according to the hit position and hitting direction may include: cutting the current T-shaped posture three-dimensional model according to the hit position and hitting direction to obtain the target hitting region on the current T-shaped posture three-dimensional model.
Optionally, a model cutting toolkit is used to cut the current T-shaped posture three-dimensional model according to the hit position and hitting direction, so as to obtain the target hitting region on the current T-shaped posture three-dimensional model.
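The patent leaves the cutting algorithm to the model cutting toolkit; one plausible realization is to select the mesh faces near the hit position whose normals oppose the hitting direction. A minimal sketch under that assumption, using NumPy and a triangle-mesh representation (all names are illustrative):

```python
import numpy as np

def target_hit_region(vertices, faces, hit_position, hit_direction, radius=0.15):
    """Select mesh faces near the hit point that face the incoming strike.

    vertices: (V, 3) array of positions; faces: (F, 3) int array of vertex indices.
    Returns the indices of the faces forming the target hitting region.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    centers = v[f].mean(axis=1)  # (F, 3) face centroids
    # Face normals from the cross product of two edge vectors
    normals = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    normals = normals / (np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9)

    near = np.linalg.norm(centers - np.asarray(hit_position), axis=1) < radius
    # A face "faces" the strike if its normal opposes the hitting direction
    facing = normals @ np.asarray(hit_direction) < 0.0
    return np.nonzero(near & facing)[0]
```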
Optionally, generating the scar map matched with the target hitting region according to the attack type and the preset scar map generation rule may include: determining the scar map type matched with the target hitting region according to the attack type; and generating the scar map matched with the target hitting region according to the scar map type, the current map of the target hitting region and the preset scar map generation rule.
Optionally, determining the scar map type matched with the target hitting region according to the attack type includes: setting in advance, according to business requirements, at least one preset attack type and a scar map type corresponding to each preset attack type; acquiring, from all preset attack types, the target preset attack type matched with the attack type in the operation information; and taking the scar map type corresponding to the target preset attack type as the scar map type matched with the target hitting region.
Optionally, the scar map types may include: a knife-scar map.
In one embodiment, the user hits the target animated character from the front using a tachi among the game's virtual weapons, and the attack type matched with the hitting operation is the tachi. The scar map type corresponding to the tachi is the knife-scar map; that is, the scar map type matched with the target hitting region is the knife-scar map.
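The correspondence between preset attack types and scar map types is just a lookup table configured in advance. A sketch, reusing the hypothetical `AttackType` from the earlier example:

```python
# Preset attack types and the scar map type corresponding to each,
# configured in advance according to business requirements.
SCAR_MAP_TYPE_BY_ATTACK = {
    AttackType.TACHI: "knife_scar",  # a tachi hit corresponds to a knife-scar map
}

def scar_map_type_for(attack_type: AttackType) -> str:
    """Return the scar map type matched with the target hitting region."""
    return SCAR_MAP_TYPE_BY_ATTACK[attack_type]
```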
Optionally, generating the scar map matched with the target hitting region according to the scar map type, the current map of the target hitting region and the preset scar map generation rule includes: setting in advance, according to business requirements, at least one scar map generation rule and a scar map type corresponding to each scar map generation rule; querying, among all the scar map types, the target scar map type matched with the target hitting region; taking the scar map generation rule corresponding to the target scar map type as the scar map generation rule matched with the target hitting region; and generating the scar map matched with the target hitting region according to that scar map generation rule and the current map of the target hitting region.
A map is a picture that defines the color of each pixel point on the surface of the model, covering both vertex and non-vertex points. The current map is the map matched with the target hitting region at the current time, and is used for defining the current color of each pixel point in the target hitting region.
Optionally, the scar map generation rule is a rule for modifying the original color of each pixel point in the target hitting region. According to the scar map generation rule, the current color of each pixel point in the current map of the target hitting region is modified into the scar color matched with the scar map type, so as to obtain the scar map matched with the target hitting region.
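In texture terms, such a rule rewrites the texel colors of the current map inside the hit region. A minimal sketch, assuming the current map is an RGB array; the scar color values and blend factor are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative scar colors per scar map type (RGB); not specified by the patent.
SCAR_COLORS = {"knife_scar": np.array([120.0, 20.0, 20.0])}

def generate_scar_map(current_map, region_mask, scar_map_type, blend=0.7):
    """Modify each pixel of the current map inside the region toward the scar color.

    current_map: (H, W, 3) uint8 texture of the target hitting region.
    region_mask: (H, W) boolean mask of texels covered by the scar.
    """
    out = current_map.astype(np.float64)
    color = SCAR_COLORS[scar_map_type]
    out[region_mask] = (1.0 - blend) * out[region_mask] + blend * color
    return out.astype(np.uint8)
```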
After the scar map matched with the target hitting region is obtained, the scar map is applied onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction, so as to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state. The T-shaped posture three-dimensional model in the hit state is the T-shaped posture three-dimensional model bearing the wound.
Step 104, updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
By updating the current T-shaped posture three-dimensional model of the target animated character and generating the animation matched with the character for display according to the updated model, the scar follows the target animated character. The real hit state is thus simulated after the character is hit, and in the subsequent game the user can see the scars produced by the hitting operation on the target animated character, which improves the reality of the user's interaction with the animated character and improves the user experience.
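Assembling steps 101 to 104, the per-hit flow might look as follows. This is a non-authoritative outline: `current_t_pose_model`, `region_to_texel_mask`, and `play_animation_from` are assumed engine-side helpers that the patent does not name, and the other functions come from the earlier sketches.

```python
def on_hit(character, op: HitOperation):
    """Steps 101-104: react to a detected hitting operation."""
    model = character.current_t_pose_model                    # step 102
    region = target_hit_region(model.vertices, model.faces,   # step 103
                               op.hit_position, op.hit_direction)
    mask = model.region_to_texel_mask(region)                 # assumed helper
    scar_type = scar_map_type_for(op.attack_type)
    model.texture = generate_scar_map(model.texture, mask, scar_type)
    character.current_t_pose_model = model                    # step 104: update
    character.play_animation_from(model)                      # display animation
```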
Optionally, after the hitting operation of the user on the target animated character is detected, the method may further include: recording the operation time of the hitting operation.
Optionally, after the animation matched with the target animated character is generated for display according to the updated current T-shaped posture three-dimensional model, the method may further include: when it is detected that the time interval between the current time and the operation time reaches a preset time threshold, restoring the current T-shaped posture three-dimensional model of the target animated character to the current T-shaped posture three-dimensional model before updating; and generating an animation matched with the target animated character for display according to the restored current T-shaped posture three-dimensional model of the target animated character.
In this way, the effect of the hit character's scars healing after a certain time can be simulated, the hit state is rendered more truly, and the reality of the user's interaction with the animated character and the user experience are further improved.
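The healing behavior reduces to keeping a snapshot of the model taken before the update, then restoring it once the preset time threshold is reached. A sketch under the same assumptions as above; the 30-second threshold is an illustrative value, not one given by the patent:

```python
import time

HEAL_SECONDS = 30.0  # preset time threshold; value is an assumption

def on_hit_and_record(character, op):
    snapshot = character.current_t_pose_model.copy()      # model before updating
    character.hit_record = (time.monotonic(), snapshot)   # operation time of the hit
    on_hit(character, op)

def tick(character):
    """Called every frame: restore the pre-hit model after the threshold."""
    if getattr(character, "hit_record", None) is None:
        return
    hit_time, snapshot = character.hit_record
    if time.monotonic() - hit_time >= HEAL_SECONDS:
        character.current_t_pose_model = snapshot         # the scar "heals"
        character.play_animation_from(snapshot)           # regenerate the display
        character.hit_record = None
```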
The embodiment of the invention provides a display method for an animated character. After the hitting operation of the user on the target animated character is detected, operation information matched with the hitting operation is acquired, and the current T-shaped posture three-dimensional model of the target animated character is acquired. The T-shaped posture three-dimensional model of the target animated character in the hit state is generated according to the operation information and the current T-shaped posture three-dimensional model. The current T-shaped posture three-dimensional model is then updated to the T-shaped posture three-dimensional model in the hit state, and an animation matched with the target animated character is generated for display according to the updated model. The method can thus generate the T-shaped posture three-dimensional model in the hit state from the user's hitting operation and display an animation matched with it, so that the real hit state is simulated after the target animated character is hit, the user can see the scars produced by the hitting operation in the subsequent game, the reality of the user's interaction with the animated character is improved, and the user experience is improved.
Example two
Fig. 2 is a flowchart of a method for displaying an animated character according to a second embodiment of the present invention. This embodiment may be combined with the optional solutions in one or more of the above embodiments. In this embodiment, the operation information may include: the attack type, the hit position and the hitting direction. Generating the T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model may include: determining a target hitting region on the current T-shaped posture three-dimensional model according to the hit position and hitting direction; generating a scar map matched with the target hitting region according to the attack type and a preset scar map generation rule; and applying the scar map onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction, to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state.
After the hitting operation of the user on the target animated character is detected, the method may further include: recording the operation time of the hitting operation.
After the animation matched with the target animated character is generated for display according to the updated current T-shaped posture three-dimensional model, the method may further include: when it is detected that the time interval between the current time and the operation time reaches a preset time threshold, restoring the current T-shaped posture three-dimensional model of the target animated character to the current T-shaped posture three-dimensional model before updating; and generating an animation matched with the target animated character for display according to the restored current T-shaped posture three-dimensional model of the target animated character.
As shown in fig. 2, the method of this embodiment specifically includes:
Step 201, after detecting the hitting operation of the user on the target animated character, acquiring the attack type, hit position and hitting direction matched with the hitting operation.
Optionally, the attack type is the type of attack of the hitting operation. The hit position is the position where the target animated character is hit by the user. The hitting direction is the direction of the hitting operation.
In one embodiment, the user hits the target animated character from the front using a tachi (a long sword among the game's virtual weapons); the attack type matched with the hitting operation is the tachi. The hit position is the position where the target animated character is struck by the tachi. The hitting direction is the direction of the tachi strike, here the forward direction.
Step 202, recording the operation time of the hitting operation.
Optionally, recording the operation time of the hitting operation includes: when the user hits the target animated character, taking the current time as the operation time of the hitting operation.
Step 203, acquiring the current T-shaped posture three-dimensional model of the target animated character.
Step 204, determining the target hitting region on the current T-shaped posture three-dimensional model according to the hit position and hitting direction.
The target hitting region is a region matched with the hitting position on the current T-shaped posture three-dimensional model.
Optionally, determining the target hitting region on the current T-shaped posture three-dimensional model according to the hit position and hitting direction may include: cutting the current T-shaped posture three-dimensional model according to the hit position and hitting direction to obtain the target hitting region on the current T-shaped posture three-dimensional model.
Optionally, a model cutting toolkit is used to cut the current T-shaped posture three-dimensional model according to the hit position and hitting direction, so as to obtain the target hitting region on the current T-shaped posture three-dimensional model.
Step 205, generating the scar map matched with the target hitting region according to the attack type and the preset scar map generation rule.
Optionally, generating the scar map matched with the target hitting region according to the attack type and the preset scar map generation rule may include: determining the scar map type matched with the target hitting region according to the attack type; and generating the scar map matched with the target hitting region according to the scar map type, the current map of the target hitting region and the preset scar map generation rule.
Optionally, determining the scar map type matched with the target hitting region according to the attack type includes: setting in advance, according to business requirements, at least one preset attack type and a scar map type corresponding to each preset attack type; acquiring, from all preset attack types, the target preset attack type matched with the attack type in the operation information; and taking the scar map type corresponding to the target preset attack type as the scar map type matched with the target hitting region.
Optionally, the scar map types may include: a knife-scar map.
In one embodiment, the user hits the target animated character from the front using a tachi among the game's virtual weapons, and the attack type matched with the hitting operation is the tachi. The scar map type corresponding to the tachi is the knife-scar map; that is, the scar map type matched with the target hitting region is the knife-scar map.
Optionally, generating the scar map matched with the target hitting region according to the scar map type, the current map of the target hitting region and the preset scar map generation rule includes: setting in advance, according to business requirements, at least one scar map generation rule and a scar map type corresponding to each scar map generation rule; querying, among all the scar map types, the target scar map type matched with the target hitting region; taking the scar map generation rule corresponding to the target scar map type as the scar map generation rule matched with the target hitting region; and generating the scar map matched with the target hitting region according to that scar map generation rule and the current map of the target hitting region.
A map is a picture that defines the color of each pixel point on the surface of the model, covering both vertex and non-vertex points. The current map is the map matched with the target hitting region at the current time, and is used for defining the current color of each pixel point in the target hitting region.
Optionally, the scar map generation rule is a rule for modifying the original color of each pixel point in the target hitting region. According to the scar map generation rule, the current color of each pixel point in the current map of the target hitting region is modified into the scar color matched with the scar map type, so as to obtain the scar map matched with the target hitting region.
Step 206, applying the scar map onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state.
After the scar map matched with the target hitting region is obtained, the scar map is applied onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction, so as to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state. The T-shaped posture three-dimensional model in the hit state is the T-shaped posture three-dimensional model bearing the wound.
Step 207, updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
By updating the current T-shaped posture three-dimensional model of the target animated character and generating the animation matched with the character for display according to the updated model, the scar follows the target animated character. The real hit state is thus simulated after the character is hit, and in the subsequent game the user can see the scars produced by the hitting operation on the target animated character, which improves the reality of the user's interaction with the animated character and improves the user experience.
Step 208, when it is detected that the time interval between the current time and the operation time reaches the preset time threshold, restoring the current T-shaped posture three-dimensional model of the target animated character to the current T-shaped posture three-dimensional model before updating.
Optionally, the preset time threshold may be set according to business requirements. Restoring the current T-shaped posture three-dimensional model of the target animated character to the current T-shaped posture three-dimensional model before updating means restoring it to the state of the T-shaped posture three-dimensional model before the hit.
Step 209, generating an animation matched with the target animated character for display according to the restored current T-shaped posture three-dimensional model of the target animated character.
By generating and displaying the animation matched with the target animated character from the pre-hit T-shaped posture three-dimensional model, the effect of the hit character's scars healing after a certain time can be simulated, the hit state is rendered more truly, and the reality of the user's interaction with the animated character and the user experience are further improved.
The embodiment of the invention provides a display method for an animated character. A target hitting region is determined on the current T-shaped posture three-dimensional model according to the hit position and hitting direction; a scar map matched with the target hitting region is generated according to the attack type and a preset scar map generation rule; and the scar map is applied onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction, to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state. When it is detected that the time interval between the current time and the operation time reaches the preset time threshold, the current T-shaped posture three-dimensional model of the target animated character is restored to the current T-shaped posture three-dimensional model before updating, and an animation matched with the target animated character is generated for display according to the restored model. The method can thus generate the T-shaped posture three-dimensional model in the hit state from the attack type, hit position and hitting direction, and display an animation matched with it, so that the real hit state is simulated after the target animated character is hit. Once the preset time threshold is reached, the model is restored to its pre-hit state and the displayed animation follows, so the effect of the hit character's scars healing after a certain time can be simulated. This renders the hit state more truly and further improves the reality of the user's interaction with the animated character and the user experience.
Example three
Fig. 3 is a schematic structural diagram of a display device for an animated character according to a third embodiment of the present invention. As shown in fig. 3, the apparatus may be configured in an electronic device and includes: an operation detection module 301, a model acquisition module 302, a model generation module 303 and a model updating module 304.
The operation detection module 301 is used for acquiring operation information matched with the hitting operation after the hitting operation of the user on the target animated character is detected; the model acquisition module 302 is used for acquiring the current T-shaped posture three-dimensional model of the target animated character; the model generation module 303 is used for generating the T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model; and the model updating module 304 is used for updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
The embodiment of the invention provides a display device for an animated character. After a hitting operation of the user on a target animated character is detected, the device acquires operation information matched with the hitting operation and acquires the current T-shaped posture three-dimensional model of the target animated character. It then generates the T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model, updates the current T-shaped posture three-dimensional model to the T-shaped posture three-dimensional model in the hit state, and generates an animation matched with the target animated character for display according to the updated model. The real hit state is thus simulated after the target animated character is hit, and in the subsequent game the user can see the scars produced by the hitting operation on the target animated character, which improves the reality of the user's interaction with the animated character and improves the user experience.
On the basis of the above embodiments, the operation information includes: the attack type, the hit position and the hitting direction. The model generation module 303 may include: a region determining unit, used for determining the target hitting region on the current T-shaped posture three-dimensional model according to the hit position and hitting direction; a map generating unit, used for generating the scar map matched with the target hitting region according to the attack type and the preset scar map generation rule; and a map applying unit, used for applying the scar map onto the current T-shaped posture three-dimensional model according to the hit position and hitting direction to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state.
On the basis of the above embodiments, the region determining unit may include: a model cutting subunit, used for cutting the current T-shaped posture three-dimensional model according to the hit position and hitting direction to obtain the target hitting region on the current T-shaped posture three-dimensional model.
On the basis of the foregoing embodiments, the map generating unit may include: a type determining subunit, used for determining the scar map type matched with the target hitting region according to the attack type; and a map generating subunit, used for generating the scar map matched with the target hitting region according to the scar map type, the current map of the target hitting region and the preset scar map generation rule.
On the basis of the above embodiments, the scar map types may include: a knife-scar map.
In addition to the above embodiments, the display device for an animated character may further include: a time recording module, used for recording the operation time of the hitting operation.
In addition to the above embodiments, the display device for an animated character may further include: a model restoring module, used for restoring the current T-shaped posture three-dimensional model of the target animated character to the current T-shaped posture three-dimensional model before updating when it is detected that the time interval between the current time and the operation time reaches the preset time threshold; and an animation generation module, used for generating an animation matched with the target animated character for display according to the restored current T-shaped posture three-dimensional model of the target animated character.
The display device for an animated character can execute the display method for an animated character provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to that method.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 4 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in fig. 4, the electronic device 12 is represented in the form of a general-purpose electronic device. The components of electronic device 12 may include, but are not limited to: one or more processors 16, a memory 28, and a bus 18 that connects the various system components (including the memory 28 and the processors 16). The processor 16 includes, but is not limited to, an AI processor.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 16 of the electronic device 12 executes the program stored in the memory 28 to perform various functional applications and data processing, for example implementing the display method for an animated character provided by the embodiments of the present invention. The method specifically comprises the following steps: after the hitting operation of the user on the target animated character is detected, acquiring operation information matched with the hitting operation; acquiring a current T-shaped posture three-dimensional model of the target animated character; generating a T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model; and updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
Example five
The fifth embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the display method for an animated character provided by the embodiments of the present invention. The method specifically comprises the following steps: after the hitting operation of the user on the target animated character is detected, acquiring operation information matched with the hitting operation; acquiring a current T-shaped posture three-dimensional model of the target animated character; generating a T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model; and updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, Ruby or Go, conventional procedural programming languages such as the "C" programming language or similar programming languages, and computer languages for AI algorithms. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for displaying an animated character, comprising:
after a hitting operation of a user on a target animated character is detected, acquiring operation information matched with the hitting operation;
acquiring a current T-shaped posture three-dimensional model of the target animated character;
generating a T-shaped posture three-dimensional model of the target animated character in a hit state according to the operation information and the current T-shaped posture three-dimensional model;
and updating the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and generating an animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character.
2. The method of claim 1, wherein the operation information comprises: an attack type, a hit position and a hitting direction;
generating the T-shaped posture three-dimensional model of the target animated character in the hit state according to the operation information and the current T-shaped posture three-dimensional model comprises:
determining a target hitting region on the current T-shaped posture three-dimensional model according to the hit position and the hitting direction;
generating a scar map matched with the target hitting region according to the attack type and a preset scar map generation rule;
and applying the scar map onto the current T-shaped posture three-dimensional model according to the hit position and the hitting direction to obtain the T-shaped posture three-dimensional model of the target animated character in the hit state.
3. The method of claim 2, wherein determining the target hitting region on the current T-shaped posture three-dimensional model according to the hit position and the hitting direction comprises:
cutting the current T-shaped posture three-dimensional model according to the hit position and the hitting direction to obtain the target hitting region on the current T-shaped posture three-dimensional model.
4. The method according to claim 2, wherein generating the scar map matched with the target hitting region according to the attack type and the preset scar map generation rule comprises:
determining the scar map type matched with the target hitting region according to the attack type;
and generating the scar map matched with the target hitting region according to the scar map type, the current map of the target hitting region and the preset scar map generation rule.
5. The method of claim 4, wherein the scar map type comprises: a knife-scar map.
6. The method of claim 1, further comprising, after the hitting operation of the user on the target animated character is detected:
and recording the operation time of the hitting operation.
7. The method of claim 6, further comprising, after generating the animation matched with the target animated character for display according to the updated current T-shaped posture three-dimensional model of the target animated character:
when the interval between the current time and the operation time reaches a preset time threshold, restoring the current T-shaped posture three-dimensional model of the target animated character to the current T-shaped posture three-dimensional model as it was before the update; and
generating, according to the restored current T-shaped posture three-dimensional model of the target animated character, an animation matched with the target animated character for display.
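Claims 6 and 7 together describe a timed restoration: the hit time is recorded, and once a preset interval elapses the pre-update model is put back. A sketch using monotonic wall-clock time, with all names and the threshold value hypothetical:

```python
import time

RESTORE_AFTER_SECONDS = 5.0        # preset time threshold (illustrative value)

class CharacterState:
    def __init__(self, t_pose_model):
        self.t_pose_model = t_pose_model
        self._saved_model = None
        self._hit_time = None

    def apply_hit(self, hit_state_model):
        self._saved_model = self.t_pose_model    # keep the pre-update model (claim 7)
        self._hit_time = time.monotonic()        # record the operation time (claim 6)
        self.t_pose_model = hit_state_model

    def tick(self):
        """Called every frame: restore once the threshold has elapsed."""
        if self._hit_time is None:
            return
        if time.monotonic() - self._hit_time >= RESTORE_AFTER_SECONDS:
            self.t_pose_model = self._saved_model  # revert to the pre-update model
            self._saved_model = self._hit_time = None
            # The animation would then be regenerated from the restored model.

state = CharacterState("clean_model")
state.apply_hit("scarred_model")
state.tick()   # within the threshold: model remains "scarred_model"
```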
8. A display device for an animated character, comprising:
an operation detection module, configured to acquire, after a hitting operation performed by a user on a target animated character is detected, operation information matched with the hitting operation;
a model acquisition module, configured to acquire a current T-shaped posture three-dimensional model of the target animated character;
a model generation module, configured to generate a T-shaped posture three-dimensional model of the target animated character in a hit state according to the operation information and the current T-shaped posture three-dimensional model; and
a model update module, configured to update the current T-shaped posture three-dimensional model of the target animated character to the T-shaped posture three-dimensional model in the hit state, and to generate, according to the updated current T-shaped posture three-dimensional model of the target animated character, an animation matched with the target animated character for display.
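The device of claim 8 is the method of claim 1 partitioned into four modules. A skeletal rendering of that decomposition, with all class names invented and each module reduced to a placeholder:

```python
class OperationDetectionModule:
    def operation_info(self, raw_event):
        # Acquire operation information matched with the hitting operation.
        return {"attack_type": raw_event["type"],
                "position": raw_event["pos"],
                "direction": raw_event["dir"]}

class ModelAcquisitionModule:
    def current_model(self, character):
        return character["t_pose_model"]

class ModelGenerationModule:
    def hit_state_model(self, model, info):
        return f"{model}+scar_{info['attack_type']}"

class ModelUpdateModule:
    def update_and_display(self, character, hit_model):
        character["t_pose_model"] = hit_model
        print("display animation from:", hit_model)

# Wiring the four modules together mirrors the method of claim 1.
character = {"t_pose_model": "clean_model"}
event = {"type": "slash", "pos": (0.0, 1.2, 0.1), "dir": (0.0, 0.0, -1.0)}
info = OperationDetectionModule().operation_info(event)
model = ModelAcquisitionModule().current_model(character)
hit_model = ModelGenerationModule().hit_state_model(model, info)
ModelUpdateModule().update_and_display(character, hit_model)
```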
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for displaying an animated character according to any one of claims 1-7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method for displaying an animated character according to any one of claims 1-7.
CN201911302869.0A 2019-12-17 2019-12-17 Animation character display method and device, electronic equipment and storage medium Pending CN111063012A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911302869.0A CN111063012A (en) 2019-12-17 2019-12-17 Animation character display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111063012A true CN111063012A (en) 2020-04-24

Family

ID=70302002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911302869.0A Pending CN111063012A (en) 2019-12-17 2019-12-17 Animation character display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111063012A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004016752A (en) * 2002-06-20 2004-01-22 Konami Sports Life Corp Exercise assisting device and program used for exercise assisting device
US20080043042A1 (en) * 2006-08-15 2008-02-21 Scott Bassett Locality Based Morphing Between Less and More Deformed Models In A Computer Graphics System
CN110465097A (en) * 2019-09-09 2019-11-19 网易(杭州)网络有限公司 Role in game, which stands, draws display methods and device, electronic equipment, storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REN SHUAI; WANG ZHEN; SU DONGXU; ZHANG TAO; MU DEJUN: "Information hiding algorithm based on three-dimensional model texture maps and structural data" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113797548A (en) * 2021-09-18 2021-12-17 珠海金山网络游戏科技有限公司 Object processing method and device
CN113797548B (en) * 2021-09-18 2024-02-27 珠海金山数字网络科技有限公司 Object processing method and device

Similar Documents

Publication Publication Date Title
CN108683937B (en) Voice interaction feedback method and system for smart television and computer readable medium
CN110738737A (en) AR scene image processing method and device, electronic equipment and storage medium
CN110969687B (en) Collision detection method, device, equipment and medium
US20130010071A1 (en) Methods and systems for mapping pointing device on depth map
US20080231631A1 (en) Image processing apparatus and method of controlling operation of same
CN108874136B (en) Dynamic image generation method, device, terminal and storage medium
CN109509236B (en) Vehicle bounding box generation method and device in unmanned scene and storage medium
CN105159537A (en) Multiscreen-based real-time independent interaction system
US11995254B2 (en) Methods, devices, apparatuses, and storage media for mapping mouse models for computer mouses
CN112766027A (en) Image processing method, device, equipment and storage medium
CN111481923B (en) Rocker display method and device, computer storage medium and electronic equipment
CN110992453B (en) Scene object display method and device, electronic equipment and storage medium
CN111045777A (en) Rendering method, rendering device, storage medium and electronic equipment
CN111063012A (en) Animation character display method and device, electronic equipment and storage medium
CN110096134B (en) VR handle ray jitter correction method, device, terminal and medium
Rana et al. Augmented reality engine applications: a survey
CN109636888B (en) 2D special effect manufacturing method and device, electronic equipment and storage medium
JP7375149B2 (en) Positioning method, positioning device, visual map generation method and device
CN116012913A (en) Model training method, face key point detection method, medium and device
CN114092608B (en) Expression processing method and device, computer readable storage medium and electronic equipment
CN112788390B (en) Control method, device, equipment and storage medium based on man-machine interaction
CN112435318B (en) Anti-threading method and device in game, electronic equipment and storage medium
CN109815307B (en) Position determination method, apparatus, device, and medium
CN113343951A (en) Face recognition countermeasure sample generation method and related equipment
US20210375063A1 (en) System and method for user interaction in complex web 3d scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination