CN116363267A - Animation display method and device for action object - Google Patents

Animation display method and device for action object

Info

Publication number
CN116363267A
CN116363267A CN202111628131.0A
Authority
CN
China
Prior art keywords
depth
animation
latent
action object
latency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111628131.0A
Other languages
Chinese (zh)
Inventor
李鑫培
赵男
包炎
胡婷婷
林越浩
刘超
师锐
施一东
杨雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Miha Youhaiyuancheng Technology Co ltd
Original Assignee
Shanghai Miha Youhaiyuancheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Miha Youhaiyuancheng Technology Co ltd filed Critical Shanghai Miha Youhaiyuancheng Technology Co ltd
Priority to CN202111628131.0A
Publication of CN116363267A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of animation, and particularly discloses an animation display method and device for an action object. The method comprises the following steps: when the action object is detected to enter a water body area, calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area; and acquiring the latent animation corresponding to the latency depth according to the correspondence between depth data and depth animations, and playing the acquired latent animation. Because the latent animation is determined directly and quickly from the latency depth, without complex calculation by a physics engine, the smoothness of animation playing can be improved while saving computing power and reducing system resource consumption.

Description

Animation display method and device for action object
Technical Field
The embodiment of the invention relates to the field of animation, in particular to an animation display method and device of an action object.
Background
In a virtual reality scene, an action object is an object capable of exhibiting an action animation, such as a character object, an animal object, or the like. Action animations are typically played in one of two ways:
In the first playing mode, a fixed action animation is set in advance for the action object. For example, for character A, at least one action animation of character A is preset, and the action of character A is exhibited through that animation whenever character A is displayed. In this mode, the action animation cannot adapt to the scene and environment in which the action object is located, so realism is lacking.
In the second playing mode, a physics engine dynamically calculates data about the action object and the scene and environment in which it is located, and renders the animation data of the action object in real time. This mode offers good realism, but real-time rendering through a physics engine consumes a large amount of computing resources, causing heavy system resource consumption and easily leading to a stuttering animation interface and unsmooth playback.
Therefore, existing animation display modes cannot balance realism against resource consumption.
Disclosure of Invention
In view of the foregoing, the present invention has been made to provide an animation display method and apparatus for an action object that overcomes or at least partially solves the foregoing problems.
According to one aspect of the present invention, there is provided an animation display method of an action object, including:
under the condition that the action object is detected to enter a water body area, calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area;
and acquiring the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation, and playing the acquired latent animation.
Optionally, the correspondence between the depth data and the depth animation includes:
at least two depth intervals and depth animations corresponding to the respective depth intervals; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining a depth interval corresponding to the latent depth, and determining the latent animation corresponding to the latent depth according to the depth animation of the depth interval corresponding to the latent depth; or,
the corresponding relation between the depth data and the depth animation comprises the following steps: at least two depth values and a depth animation corresponding to each depth value; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining two depth values corresponding to the latent depth, and carrying out interpolation processing on the two depth animations corresponding to the two depth values to obtain the latent animation.
Optionally, the correspondence between the depth data and the depth animation includes:
the motion speed of the animation motion included in the depth animation has a first proportional relationship with the depth data, and/or the motion amplitude of the animation motion included in the depth animation has a second proportional relationship with the depth data.
Optionally, the calculating the latency depth of the moving object according to the latency position information of the moving object and the water surface information of the water body area includes:
determining a horizontal plane grid corresponding to first water surface information, transmitting a first ray between the action object and the horizontal plane grid, and determining the latency depth of the action object according to the first ray; and/or,
determining a water bottom grid corresponding to second water surface information, transmitting a second ray between the action object and the water bottom grid, and determining the latency depth of the action object according to the second ray.
Optionally, the obtaining the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation, and playing the obtained latent animation includes:
acquiring the latent depth through an object animation model of the action object, and determining and playing a latent animation corresponding to the latent depth;
The object animation model of the action object is used for displaying the animation action of the action object, and the corresponding relation between the depth data and the depth animation is stored in the object animation model of the action object.
Optionally, the detecting that the action object enters the water body area includes:
determining an action collision volume corresponding to the action object, and a grid collision volume corresponding to the water body region;
and when detecting that the action collision body and the grid collision body collide, determining that the action object enters a water body area.
Optionally, the acquiring the latent animation corresponding to the latent depth includes:
acquiring water feature data corresponding to the latency depth, and determining the latency animation by combining the water feature data;
wherein the water body characteristic data includes: water body attribute information, and the water body attribute information includes: water flow speed information, water flow direction information, water flow temperature information and water flow illumination information; and, the water body characteristic data further includes: object attribute information of the water body associated object; wherein, the water body associated object includes: a biological class object.
According to still another aspect of the present invention, there is provided an animation exhibiting apparatus of an action object, comprising:
the detection module is suitable for calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area under the condition that the action object is detected to enter the water body area;
the playing module is suitable for acquiring the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation and playing the acquired latent animation.
Optionally, the correspondence between the depth data and the depth animation includes:
at least two depth intervals and depth animations corresponding to the respective depth intervals; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining a depth interval corresponding to the latent depth, and determining the latent animation corresponding to the latent depth according to the depth animation of the depth interval corresponding to the latent depth; or,
the corresponding relation between the depth data and the depth animation comprises the following steps: at least two depth values and a depth animation corresponding to each depth value; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining two depth values corresponding to the latent depth, and carrying out interpolation processing on the two depth animations corresponding to the two depth values to obtain the latent animation.
Optionally, the correspondence between the depth data and the depth animation includes:
the motion speed of the animation motion included in the depth animation has a first proportional relationship with the depth data, and/or the motion amplitude of the animation motion included in the depth animation has a second proportional relationship with the depth data.
Optionally, the detection module is specifically adapted to:
determining a horizontal plane grid corresponding to first water surface information, transmitting a first ray between the action object and the horizontal plane grid, and determining the latency depth of the action object according to the first ray; and/or,
determining a water bottom grid corresponding to second water surface information, transmitting a second ray between the action object and the water bottom grid, and determining the latency depth of the action object according to the second ray.
Optionally, the playing module is specifically adapted to:
acquiring the latent depth through an object animation model of the action object, and determining and playing a latent animation corresponding to the latent depth;
the object animation model of the action object is used for displaying the animation action of the action object, and the corresponding relation between the depth data and the depth animation is stored in the object animation model of the action object.
Optionally, the detection module is specifically adapted to:
determining an action collision volume corresponding to the action object, and a grid collision volume corresponding to the water body region;
and when detecting that the action collision body and the grid collision body collide, determining that the action object enters a water body area.
Optionally, the playing module is specifically adapted to:
acquiring water feature data corresponding to the latency depth, and determining the latency animation by combining the water feature data;
wherein the water body characteristic data includes: water body attribute information, and the water body attribute information includes: water flow speed information, water flow direction information, water flow temperature information and water flow illumination information; and, the water body characteristic data further includes: object attribute information of the water body associated object; wherein, the water body associated object includes: a biological class object.
According to still another aspect of the present invention, there is provided an electronic apparatus including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the animation display method of the action object.
According to still another aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, where the executable instruction causes a processor to perform operations corresponding to the animation display method of an action object as described above.
According to the animation display method and device of the action object, when the action object is detected to enter the water body area, the latency depth of the action object is calculated according to the latency position information of the action object and the water surface information of the water body area; and the latent animation corresponding to the latency depth is obtained and played according to the correspondence between depth data and depth animations. The method can dynamically acquire the latency depth of the action object after it enters the water body area, and dynamically select the latent animation to play according to that depth, avoiding the lack of realism caused by a fixed action animation. In addition, the method determines the latent animation directly and quickly from the latency depth, without complex calculation by a physics engine, so the smoothness of animation playing can be improved while saving computing power and reducing system resource consumption.
The foregoing description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the present invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of an animation demonstration method of an action object according to one embodiment of the present invention;
FIG. 2 is a flowchart of an animation display method of an action object according to another embodiment of the present invention;
FIG. 3 shows a schematic diagram of the manner in which the latency percentage is calculated;
FIG. 4 is a block diagram showing an animation exhibiting apparatus of an action object according to still another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart of an animation display method of an action object according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step S110: when the action object is detected to enter the water body area, calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area.
The action object includes a virtual object with motion capability contained in a game application or a video application. The range of motion of the action object may include a land area, an air area, a water area, and the like. In this embodiment, when it is detected that the action object enters the water body region, the latency position information of the action object and the water surface information of the water body region are acquired, and the latency depth of the action object is calculated from them.
The latent position information describes the position of the action object in the water body area and can be represented by position coordinates, azimuth data, and the like. The water surface information of the water body area comprises water level information of the water body area, which describes the water level position. Accordingly, the latency depth of the action object can be calculated according to the latency position information of the action object and the water surface information of the water body area. The latency depth refers to the vertical distance from the position of the action object to the horizontal plane.
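By way of illustration only, the following minimal sketch computes such a latency depth from a position coordinate and a water level; the function and parameter names (latency_depth, water_level_y) are assumptions of this example, not identifiers from the application:

```python
# Illustrative sketch: latency depth as the vertical distance from the
# action object to the horizontal plane. All names here are hypothetical.

def latency_depth(object_position, water_level_y):
    """object_position is an (x, y, z) world coordinate with y vertical;
    water_level_y is the water level taken from the water surface info."""
    _, y, _ = object_position
    return max(0.0, water_level_y - y)  # 0 when at or above the surface

print(latency_depth((3.0, -12.5, 7.0), water_level_y=0.0))  # -> 12.5
```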
Step S120: and acquiring the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation, and playing the acquired latent animation.
In the process of realizing the invention, the inventor found that an action object in water is influenced by factors such as water pressure and buoyancy, so that the animation state of the action object in water differs from its animation state on land or in the air. In addition, since the water pressure differs at different underwater depths, the animation state of the same action object also differs at different underwater depths.
In order to accurately reflect the association between the animation state of the action object and the underwater depth, in this embodiment the correspondence between depth data and depth animations is preset. The depth data refers to the depth of the action object under water, namely the vertical distance between the action object and the horizontal plane. A depth animation is a video animation corresponding to certain depth data. On this basis, the preset correspondence between depth data and depth animations determines the latent animation corresponding to the latency depth.
The latent animation corresponding to the latent depth may be a depth animation or an interpolation animation obtained by performing interpolation processing according to the depth animation, which is not limited in the present invention.
Therefore, the method can dynamically acquire the latency depth of the action object after it enters the water body region, and dynamically select the latent animation to play according to the latency depth, avoiding the lack of realism caused by a fixed action animation. In addition, the method determines the latent animation directly and quickly from the latency depth, without complex calculation by a physics engine, so the smoothness of animation playing can be improved while saving computing power and reducing system resource consumption.
Fig. 2 is a flowchart of an animation display method of an action object according to another embodiment of the present invention. As shown in fig. 2, the method includes:
step S200: and detecting whether the action object enters the water body area.
The action object includes a virtual object with motion capability contained in a game application or a video application. Video applications include virtual reality video applications, augmented reality video applications, and the like. The range of motion of the action object may include a land area, an air area, a water area, and the like. Since the action object is subjected to the combined effects of water pressure and buoyancy in water, its underwater animation differs from its above-water animation; therefore, whether the action object enters the water body area needs to be detected. The detection can be realized in various ways:
In a first detection mode, detection is performed based on a collision between collision volumes. Specifically, an action collision volume corresponding to the action object and a grid collision volume corresponding to the water body region are determined; when a collision between the action collision volume and the grid collision volume is detected, it is determined that the action object has entered the water body area. The grid collision volume corresponding to the water body region may be a mesh collider used for rendering the water surface object. The collision can be detected through collision events, collision instructions, and the like; collision-volume detection catches the moment the action object first contacts the water body area, ensuring the real-time performance of animation adjustment.
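For illustration only, a minimal sketch of this collision-based check follows, with axis-aligned boxes standing in for the engine's collision volumes; the AABB class and entered_water function are assumptions of this example, and a real engine would instead raise a collision event between its own collider types:

```python
# Hedged sketch of the first detection mode: water entry is reported when
# the action object's collision volume overlaps the collider of the water
# surface. All class and field names here are illustrative.

class AABB:
    def __init__(self, min_pt, max_pt):
        self.min_pt, self.max_pt = min_pt, max_pt

    def intersects(self, other):
        # Overlap on every axis means the two volumes collide.
        return all(a_min <= b_max and b_min <= a_max
                   for a_min, a_max, b_min, b_max
                   in zip(self.min_pt, self.max_pt, other.min_pt, other.max_pt))

def entered_water(action_collider, water_mesh_collider):
    # A collision between the two volumes marks entry into the water body.
    return action_collider.intersects(water_mesh_collider)

character = AABB((0, -1, 0), (1, 1, 1))
water = AABB((-50, -30, -50), (50, 0, 50))
print(entered_water(character, water))  # True: the character touches the water
```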
In a second detection mode, the area range of the water body region is preset, the current position information of the action object is dynamically detected, and when the current position falls within the area range of the water body region, it is determined that the action object has entered the water body area. This mode quickly detects whether the action object has fallen into the water body area according to the region's area range, achieving efficient detection with little computing power.
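A corresponding sketch of this second mode, assuming the preset area range is stored as axis-aligned bounds (WATER_REGION and in_water_region are hypothetical names):

```python
# Sketch of the second detection mode: test the dynamically sampled
# position against the preset range of the water body region.

WATER_REGION = {"min": (-50.0, -30.0, -50.0), "max": (50.0, 0.0, 50.0)}

def in_water_region(position, region=WATER_REGION):
    return all(lo <= p <= hi
               for p, lo, hi in zip(position, region["min"], region["max"]))

print(in_water_region((10.0, -5.0, 3.0)))  # True: inside the region range
print(in_water_region((10.0, 12.0, 3.0)))  # False: above the water level
```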
Step S210: when the action object is detected to enter the water body area, calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area.
In this embodiment, when it is detected that the action object enters the water body region, the latency position information of the action object and the water surface information of the water body region are acquired, and the latency depth of the action object is calculated from them. The latency position information describes the position of the action object in the water body area and can be represented by position coordinates, azimuth data, and the like. The water surface information of the water body area comprises water level information of the water body area and water bottom information of the water body area, which together describe the water surface position. Accordingly, the latency depth of the action object can be calculated according to the latency position information of the action object and the water surface information of the water body area. The latency depth refers to the vertical distance from the position of the action object to the horizontal plane.
In specific implementation, the latency depth of the action object can be determined in a ray mode, and the method can be realized in at least one of the following modes:
In a first implementation, a horizontal plane grid corresponding to the first water surface information is determined, a first ray is emitted between the action object and the horizontal plane grid, and the latency depth of the action object is determined according to the first ray. Specifically, a horizontal plane and the horizontal plane grid corresponding to it are first defined. The horizontal plane grid is not flat but corrugated; accordingly, to simulate a disturbed real water surface, a disturbance value N is set for the water surface. Assuming the highest point of the water sheet is Mu and the lowest point is Md, the vertical coordinate of the horizontal plane grid is SW = (Mu - Md)/2 * N. In a specific implementation, a first emission ray (the first ray) is emitted vertically upward in world coordinates from the position of the action object, and the latency depth of the action object is determined from the distance at which the first emission ray touches the horizontal plane grid. In addition, to acquire the latency depth more accurately, after the first emission ray touches the horizontal plane grid, a first reflection ray obtained by reflecting the first emission ray off the horizontal plane grid may be further acquired; the first reflection ray is emitted vertically downward from the horizontal plane grid, and when it touches the action object, the total length of the first emission ray and the first reflection ray is calculated, and half of the total length is taken as the latency depth of the action object. Adding the length of the reflected ray integrates the length calculation over two rays, which is more accurate than calculating the latency depth from the emission ray alone.
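As an illustration of this first implementation, the sketch below queries a rippled surface as a height field and averages the emission-ray and reflection-ray lengths; surface_height() stands in for the engine's ray-versus-mesh query, and the disturbance parameters follow the SW = (Mu - Md)/2 * N expression above. All names are assumptions:

```python
import math

def surface_height(x, z, mu=0.6, md=-0.6, n=0.5):
    # Corrugated water sheet: highest point Mu, lowest point Md, scaled by
    # the disturbance value N, per SW = (Mu - Md) / 2 * N.
    sw = (mu - md) / 2 * n
    return sw * math.sin(x) * math.cos(z)

def latency_depth_by_surface_ray(object_pos):
    x, y, z = object_pos
    hit_y = surface_height(x, z)   # first emission ray touches the grid here
    up_length = hit_y - y          # emitted-ray length
    down_length = hit_y - y        # reflected-ray length back down
    # In an engine the two lengths can differ (the surface animates between
    # frames), which is why half of the total is taken as the latency depth.
    return (up_length + down_length) / 2

print(round(latency_depth_by_surface_ray((1.0, -8.0, 2.0)), 3))
```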
In a second implementation, a water bottom grid corresponding to the second water surface information is determined, a second ray is emitted between the action object and the water bottom grid, and the latency depth of the action object is determined according to the second ray. The water bottom grid corresponds to the bottom of the water; since the water bottom is generally uneven, the water bottom grid, unlike the corrugated water surface, may fluctuate greatly in form. The distance between the action object and the water bottom can therefore be calculated by the second ray, and combining this distance reflects the state of the action object in the water more accurately. In practice, divergent rays can be emitted vertically upward from a bottom scene (such as the water bottom grid) in the water area, and when a ray touches the action object, its length is acquired.
Because the motion of the action object is unpredictable, and the deeper the depth, the larger the influence on that motion, the emitted rays may be a group of rays in order to avoid an inaccurately calculated latency depth; the density of the rays within the group is then taken into account in the calculation. For example, when the ray density of the ray group is greater than a first preset threshold, the calculation is based on the longest-distance ray touched in the group; when the ray density of the ray group is smaller than a second preset threshold, the calculation is based on the shortest-distance ray touched in the group. The state attribute and the depth label of the action object can further be combined into the calculation.
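The following sketch illustrates the ray-group rule; the density measure, the thresholds, and the fallback average are all assumptions made for this example:

```python
# Sketch: choose which hit distance from a group of divergent rays to use,
# depending on the ray density of the group.

def depth_from_ray_group(hit_distances, ray_density,
                         first_threshold=0.8, second_threshold=0.2):
    hits = [d for d in hit_distances if d is not None]  # rays that touched
    if not hits:
        return None
    if ray_density > first_threshold:
        return max(hits)          # dense group: longest touched ray
    if ray_density < second_threshold:
        return min(hits)          # sparse group: shortest touched ray
    return sum(hits) / len(hits)  # otherwise a plain average (assumption)

print(depth_from_ray_group([4.8, 5.1, None, 5.4], ray_density=0.9))  # 5.4
print(depth_from_ray_group([4.8, 5.1, None, 5.4], ray_density=0.1))  # 4.8
```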
In addition, the latency depth may further include: the first latency depth is determined in a first mode and is used for describing the distance between the action object and the horizontal plane; the second latency depth determined by the second mode is used for describing the distance between the action object and the water bottom surface. The first latency depth is used for calculating data such as water pressure, buoyancy and the like. The second latency depth is used to determine the distance of the action object from the water bottom in order to determine whether the action object will touch the water bottom while performing the downward movement.
In order to comprehensively reflect the distance between the action object and both the water surface and the water bottom, in a further alternative implementation the latency depth is a value in percentage form, calculated by the following formula: position coordinate of the action object / (water bottom position coordinate - water surface position coordinate) * 100% = current depth percentage. A coordinate axis is set with the horizontal plane as the origin, so that the position coordinate of the action object reflects its distance from the horizontal plane, and the difference between the water bottom position coordinate and the water surface position coordinate reflects the overall depth of the water body region. The latency depth expressed as the current depth percentage reflects the relative underwater position of the action object more completely. When the latency depth is a latency depth percentage, the latency depth percentage is the ratio of the first latency depth determined in the first manner to the sum of the first latency depth and the second latency depth described above.
For ease of understanding, fig. 3 shows a schematic diagram of the manner in which the latency depth percentage is calculated. As shown in fig. 3, the position O between the horizontal plane grid 31 and the water bottom grid 32 is the current position of the action object. The horizontal plane grid 31 may in practice be a corrugated grid, and the shape of the water bottom grid 32 is set according to the topography of the river bottom. Line segment OE represents the first vertical distance from the action object to the horizontal plane grid, and line segment OF represents the second vertical distance from the action object to the water bottom grid. The sum of the first vertical distance and the second vertical distance is the total depth of the water body, and accordingly, the ratio of the first vertical distance to the total depth of the water body is the latency depth in percentage form.
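A worked sketch of the percentage computation, using the OE and OF distances of fig. 3 (the function name is assumed):

```python
# Sketch: latency depth percentage = OE / (OE + OF) * 100.

def latency_depth_percentage(dist_to_surface, dist_to_bottom):
    total_depth = dist_to_surface + dist_to_bottom  # total depth of the water
    return dist_to_surface / total_depth * 100.0

# An action object 6 m below the surface in 20 m of water:
print(latency_depth_percentage(6.0, 14.0))  # 30.0 (% of the total depth)
```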
Step S220: and acquiring the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation, and playing the acquired latent animation.
In the process of realizing the invention, the inventor found that an action object in water is influenced by factors such as water pressure and buoyancy, so that the animation state of the action object in water differs from its animation state on land or in the air. In addition, since the water pressure differs at different underwater depths, the animation state of the same action object also differs at different underwater depths. In order to accurately reflect the association between the animation state of the action object and the underwater depth, in this embodiment the correspondence between depth data and depth animations is preset. The depth data refers to the depth of the action object under water, namely the vertical distance between the action object and the horizontal plane. A depth animation is a video animation corresponding to certain depth data.
In practice, the latent animation may be obtained in a number of ways:
In a first mode, a plurality of depth intervals are divided in advance, and a depth animation is set for each depth interval. Correspondingly, the correspondence between depth data and depth animations comprises at least two depth intervals and the depth animation corresponding to each depth interval. The latent animation corresponding to the latency depth is obtained by determining the depth interval corresponding to the latency depth, and determining the latent animation according to the depth animation of that interval. For example, assume that 0-100 meters underwater is a first depth interval, corresponding to a first depth animation; 100-200 meters underwater is a second depth interval, corresponding to a second depth animation; and 300-500 meters underwater is a third depth interval, corresponding to a third depth animation. Accordingly, if the latency depth is 150 meters, the depth interval corresponding to the latency depth is determined to be the second depth interval, and the second depth animation is therefore taken as the latent animation corresponding to the latency depth. In addition, in order to avoid hard transitions of the latent animation, gradual transition fusion can be performed for adjacent depth intervals. For example, the intervals adjacent to the depth interval corresponding to the latency depth are determined, and animation fusion processing is performed in combination with the depth animations of the adjacent intervals to obtain a fused latent animation.
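For illustration, a minimal lookup over the intervals of this example (the animation identifiers are placeholders, not assets from the application):

```python
# Sketch of the first acquisition mode: the interval containing the
# latency depth selects the latent animation.

DEPTH_INTERVALS = [
    ((0.0, 100.0), "first_depth_animation"),
    ((100.0, 200.0), "second_depth_animation"),
    ((300.0, 500.0), "third_depth_animation"),
]

def latent_animation_by_interval(latency_depth):
    for (lo, hi), animation in DEPTH_INTERVALS:
        if lo <= latency_depth < hi:
            return animation
    return None  # outside every configured interval

print(latent_animation_by_interval(150.0))  # second_depth_animation
```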
In the second mode, a plurality of depth values are set in advance, and depth animations corresponding to the respective depth values are set, respectively. Correspondingly, the corresponding relation between the depth data and the depth animation comprises: at least two depth values and a depth animation corresponding to each depth value. Wherein the latency animation corresponding to the latency depth is obtained by: determining two depth values corresponding to the latent depth, and carrying out interpolation processing on the two depth animations corresponding to the two depth values to obtain the latent animation. For example, assume that the first depth value is 100 meters, which corresponds to the first depth animation; the second depth value is 500 meters, which corresponds to the second depth animation. If the latent depth is 300 meters, interpolation processing is carried out on the first depth animation and the second depth animation, and the obtained interpolation animation is taken as the latent animation.
In addition, when the latency depth is a value in percentage form, the above depth intervals and depth values may be replaced by depth percentages, and a corresponding depth animation may be set for each depth percentage. When the depth percentage is greater than a preset percentage threshold, the object size and the activity amplitude of the action object are further acquired, and whether the action object will touch the water bottom region is judged according to the object size and the activity amplitude; if so, collision feature data is further included in the depth animation.
In addition, the correspondence between the depth data and the depth animation includes: the motion speed of the animation action included in the depth animation has a first proportional relationship with the depth data, and/or the motion amplitude of the animation action included in the depth animation has a second proportional relationship with the depth data. The first proportional relationship may be a negative correlation; for example, the greater the depth data value, the slower the motion speed of the animation action (under greater water pressure the motion resistance is larger, so the motion is slower). The second proportional relationship may be a positive or a negative correlation; for example, the greater the depth data value, the larger the motion amplitude of the animation action (under greater water pressure the resistance is larger, so the action object has to make larger-amplitude motions in order to move).
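A sketch of how such proportional relationships might be applied to the animation parameters; the coefficients and the particular negative/positive correlation forms are illustrative assumptions:

```python
# Sketch: motion speed falls with depth (negative correlation), motion
# amplitude grows with depth (positive correlation).

def animation_parameters(depth, base_speed=1.0, base_amplitude=0.4,
                         speed_coeff=0.002, amplitude_coeff=0.001):
    speed = base_speed / (1.0 + speed_coeff * depth)              # slower when deeper
    amplitude = base_amplitude * (1.0 + amplitude_coeff * depth)  # wider strokes
    return speed, amplitude

for d in (0.0, 150.0, 400.0):
    s, a = animation_parameters(d)
    print(f"depth {d:>5.0f} m -> speed {s:.2f}, amplitude {a:.2f}")
```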
In addition, when the latent animation corresponding to the latent depth is acquired, it can be further realized by: and acquiring water feature data corresponding to the latency depth, and determining the latency animation by combining the water feature data. Wherein, the water characteristic data includes: water body attribute information, and the water body attribute information includes: water flow speed information, water flow direction information, water flow temperature information and water flow illumination information; and, the water body characteristic data further includes: object attribute information of the water body associated object; wherein, the water body associated object includes: a biological class object. Specifically, corresponding water feature data can be set in advance for different depth intervals, so that animation feature data of the latent animation can be determined by combining the water feature data.
For example, the water temperature, flow rate, illumination condition are different from one depth to another. Accordingly, corresponding water feature data may be set according to the depth data so as to determine animation feature data of the latent animation according to the water feature data. For example, the illumination feature data included in the animation feature data is related to the water flow illumination information, and the motion amplitude feature data included in the animation feature data is related to the water flow velocity information and the water flow direction information. As another example, the object types of the water body related objects in different water areas are different. The water body associated object can be various biological objects such as aquatic weed objects and the like. Since the biological class object has its own biological attribute, the animation feature data of the latent animation can be influenced by the biological attribute.
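By way of example only, per-interval water feature data could adjust the animation feature data as in the sketch below; every field name and adjustment rule here is a hypothetical stand-in:

```python
# Sketch: fold water body attribute information into the latent
# animation's feature data.

WATER_FEATURES_BY_INTERVAL = {
    (0.0, 100.0):   {"flow_speed": 1.2, "flow_dir": (1, 0), "light": 0.9},
    (100.0, 500.0): {"flow_speed": 0.3, "flow_dir": (0, 1), "light": 0.2},
}

def animation_features(latency_depth, base_amplitude=0.5):
    for (lo, hi), water in WATER_FEATURES_BY_INTERVAL.items():
        if lo <= latency_depth < hi:
            return {
                # pose drift follows water flow speed and direction
                "drift": tuple(c * water["flow_speed"] for c in water["flow_dir"]),
                # illumination feature data tracks the water flow illumination
                "brightness": water["light"],
                "amplitude": base_amplitude * (1.0 + water["flow_speed"] * 0.1),
            }
    return {"drift": (0.0, 0.0), "brightness": 1.0, "amplitude": base_amplitude}

print(animation_features(150.0))
```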
In the implementation, the animation playing can be realized by the object animation model of the action object. Correspondingly, the latent depth is acquired through the object animation model of the action object, and the latent animation corresponding to the latent depth is determined and played. Wherein an object animation model for displaying an animation action of an action object is set in advance for the action object, and a correspondence relationship between depth data and depth animation is stored in the object animation model of the action object.
According to this embodiment, the depth animations are stored in the object animation model of the action object (such as a character object), and quick playing of the animation is realized by binding the animation data corresponding to the depth information to the character, so that scenes of different depths can be flexibly adapted. At least two limit motion animations, or a plurality of animations influenced by depth, are set and respectively bound to depth information, and intermediate animation data is calculated through interpolation, so that the animation data to play is determined according to the input depth percentage information. For example, the action animation of a character object at the water surface is an easy swimming action, and the deeper the depth, the more laborious the swimming action becomes.
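A minimal sketch of such an object animation model, assuming two limit animations bound at 0% and 100% depth and resolved by interpolation from the input depth percentage; the class layout and the effort parameter are assumptions of this example:

```python
# Sketch: the correspondence between depth data and depth animations is
# stored inside the object animation model itself, so no external lookup
# is needed at playback time.

class ObjectAnimationModel:
    def __init__(self):
        self.keys = [(0.0, {"effort": 0.1}),    # easy surface swim
                     (100.0, {"effort": 1.0})]  # laborious deep swim

    def play(self, depth_percentage):
        (d1, a1), (d2, a2) = self.keys
        t = min(max((depth_percentage - d1) / (d2 - d1), 0.0), 1.0)
        effort = a1["effort"] + (a2["effort"] - a1["effort"]) * t
        print(f"playing swim animation, effort={effort:.2f}")

ObjectAnimationModel().play(depth_percentage=30.0)  # prints effort=0.37
```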
In summary, the method can dynamically acquire the latency depth of the action object after it enters the water body region, and dynamically select the latent animation to play according to the latency depth, thereby avoiding the lack of realism caused by a fixed action animation. In addition, the method determines the latent animation directly and quickly from the latency depth, without complex calculation by a physics engine, so the smoothness of animation playing can be improved while saving computing power and reducing system resource consumption. The problem of hard animation switching can be avoided through interpolation or gradual transition fusion.
In addition, in the embodiment, the correspondence between the depth data and the depth animation is stored in the object animation model of the action object, so that animation playing can be directly realized based on the object animation model of the action object, and related data is not required to be acquired from an external storage space, so that the processing time is saved, and the playing speed is improved.
Example III
Fig. 4 is a schematic structural diagram of an animation display device for an action object according to a third embodiment of the present invention, including:
the detection module 41 is adapted to calculate the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area when the action object is detected to enter the water body area;
the playing module 42 is adapted to obtain a latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation, and play the obtained latent animation.
Optionally, the correspondence between the depth data and the depth animation includes:
at least two depth intervals and depth animations corresponding to the respective depth intervals; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining a depth interval corresponding to the latent depth, and determining the latent animation corresponding to the latent depth according to the depth animation of the depth interval corresponding to the latent depth; or,
The corresponding relation between the depth data and the depth animation comprises the following steps: at least two depth values and a depth animation corresponding to each depth value; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining two depth values corresponding to the latent depth, and carrying out interpolation processing on the two depth animations corresponding to the two depth values to obtain the latent animation.
Optionally, the correspondence between the depth data and the depth animation includes:
the motion speed of the animation motion included in the depth animation has a first proportional relationship with the depth data, and/or the motion amplitude of the animation motion included in the depth animation has a second proportional relationship with the depth data.
Optionally, the detection module is specifically adapted to:
determining a horizontal plane grid corresponding to first water surface information, transmitting a first ray between the action object and the horizontal plane grid, and determining the latency depth of the action object according to the first ray; and/or,
determining a water bottom grid corresponding to second water surface information, transmitting a second ray between the action object and the water bottom grid, and determining the latency depth of the action object according to the second ray.
Optionally, the playing module is specifically adapted to:
acquiring the latent depth through an object animation model of the action object, and determining and playing a latent animation corresponding to the latent depth;
the object animation model of the action object is used for displaying the animation action of the action object, and the corresponding relation between the depth data and the depth animation is stored in the object animation model of the action object.
Optionally, the detection module is specifically adapted to:
determining an action collision volume corresponding to the action object, and a grid collision volume corresponding to the water body region;
and when detecting that the action collision body and the grid collision body collide, determining that the action object enters a water body area.
Optionally, the playing module is specifically adapted to:
acquiring water feature data corresponding to the latency depth, and determining the latency animation by combining the water feature data;
wherein the water body characteristic data includes: water body attribute information, and the water body attribute information includes: water flow speed information, water flow direction information, water flow temperature information and water flow illumination information; and, the water body characteristic data further includes: object attribute information of the water body associated object; wherein, the water body associated object includes: a biological class object.
The specific structure and working principle of each module may refer to the description of the corresponding parts of the method embodiment, and are not repeated here.
Yet another embodiment of the present invention provides a non-volatile computer storage medium storing at least one executable instruction that can perform the animation display method of an action object in any of the above method embodiments. The executable instruction may specifically be used to cause a processor to perform the operations corresponding to the method embodiments described above.
Fig. 5 shows a schematic structural diagram of an electronic device according to another embodiment of the present invention, and the specific embodiment of the present invention is not limited to the specific implementation of the electronic device.
As shown in fig. 5, the electronic device may include: a processor 502, a communication interface (Communications Interface) 506, a memory 504, and a communication bus 508.
Wherein:
processor 502, communication interface 506, and memory 504 communicate with each other via communication bus 508.
A communication interface 506 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in the above-described animation display method embodiment of the action object.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
Memory 504 for storing program 510. The memory 504 may comprise high-speed RAM memory or may further comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may be specifically configured to cause the processor 502 to perform the respective operations corresponding to the above-described method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for the construction of such devices is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.

Claims (10)

1. An animation display method of an action object, comprising:
under the condition that the action object is detected to enter the water body area, calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area;
And acquiring the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation, and playing the acquired latent animation.
2. The method of claim 1, wherein the correspondence between the depth data and depth animation comprises:
at least two depth intervals and depth animations corresponding to the respective depth intervals; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining a depth interval corresponding to the latent depth, and determining the latent animation corresponding to the latent depth according to the depth animation of the depth interval corresponding to the latent depth; or,
the corresponding relation between the depth data and the depth animation comprises the following steps: at least two depth values and a depth animation corresponding to each depth value; wherein, the latency animation corresponding to the latency depth is obtained by the following means: determining two depth values corresponding to the latent depth, and carrying out interpolation processing on the two depth animations corresponding to the two depth values to obtain the latent animation.
3. The method of claim 1 or 2, wherein the correspondence between the depth data and depth animation comprises:
The motion speed of the animation motion included in the depth animation has a first proportional relationship with the depth data, and/or the motion amplitude of the animation motion included in the depth animation has a second proportional relationship with the depth data.
4. A method according to any one of claims 1-3, wherein said calculating the latency depth of the action object based on the latency location information of the action object and the water surface information of the water body region comprises:
determining a horizontal plane grid corresponding to first water surface information, transmitting a first ray between the action object and the horizontal plane grid, and determining the latency depth of the action object according to the first ray; and/or,
determining a water bottom grid corresponding to second water surface information, transmitting a second ray between the action object and the water bottom grid, and determining the latency depth of the action object according to the second ray.
5. The method according to any one of claims 1-4, wherein the obtaining the latent animation corresponding to the latent depth according to the correspondence between the depth data and the depth animation, and playing the obtained latent animation comprises:
Acquiring the latent depth through an object animation model of the action object, and determining and playing a latent animation corresponding to the latent depth;
the object animation model of the action object is used for displaying the animation action of the action object, and the corresponding relation between the depth data and the depth animation is stored in the object animation model of the action object.
6. The method of any of claims 1-5, wherein the detecting that the action object enters a water body region comprises:
determining an action collision volume corresponding to the action object, and a grid collision volume corresponding to the water body region;
and when detecting that the action collision body and the grid collision body collide, determining that the action object enters a water body area.
7. The method of any of claims 1-6, wherein the acquiring a latent animation corresponding to the latent depth comprises:
acquiring water feature data corresponding to the latency depth, and determining the latency animation by combining the water feature data;
wherein the water body characteristic data includes: water body attribute information, and the water body attribute information includes: water flow speed information, water flow direction information, water flow temperature information and water flow illumination information; and, the water body characteristic data further includes: object attribute information of the water body associated object; wherein, the water body associated object includes: a biological class object.
8. An animation display device of an action object, comprising:
the detection module is suitable for calculating the latency depth of the action object according to the latency position information of the action object and the water surface information of the water body area under the condition that the action object is detected to enter the water body area;
the playing module is suitable for acquiring the latent animation corresponding to the latent depth according to the corresponding relation between the depth data and the depth animation and playing the acquired latent animation.
9. An electronic device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction, where the executable instruction causes the processor to perform operations corresponding to the animation display method of an action object according to any one of claims 1 to 7.
10. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the animation display method of an action object according to any one of claims 1-7.
CN202111628131.0A 2021-12-28 2021-12-28 Animation display method and device for action object Pending CN116363267A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111628131.0A | 2021-12-28 | 2021-12-28 | Animation display method and device for action object

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111628131.0A | 2021-12-28 | 2021-12-28 | Animation display method and device for action object

Publications (1)

Publication Number | Publication Date
CN116363267A | 2023-06-30

Family

ID=86925450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111628131.0A Pending CN116363267A (en) 2021-12-28 2021-12-28 Animation display method and device for action object

Country Status (1)

Country Link
CN (1) CN116363267A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination