CN114880053A - Animation generation method for object in interface, electronic equipment and storage medium


Info

Publication number
CN114880053A
Authority
CN
China
Prior art keywords
parameters
generating
motion
color
animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110169797.8A
Other languages
Chinese (zh)
Inventor
范振华
曹原
陈锋
张孟颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110169797.8A priority Critical patent/CN114880053A/en
Priority to PCT/CN2021/140952 priority patent/WO2022166456A1/en
Publication of CN114880053A publication Critical patent/CN114880053A/en
Pending legal-status Critical Current

Classifications

    • G06F 9/451 Execution arrangements for user interfaces
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T 13/00 Animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this application provide an animation generation method for objects in an interface, an electronic device, and a storage medium, relate to the technical field of human-computer interaction, and can generate differentiated object animations based on the appearance differences among objects in the interface, so as to improve the user's interaction experience. The method includes the following steps: displaying a first object and a second object on a user interface, where the first object and the second object are the same in type and different in appearance; generating a first motion animation of the first object in response to a first operation acting on the first object; and generating a second motion animation of the second object in response to a second operation acting on the second object, where the first operation and the second operation are the same and the first motion animation and the second motion animation are different.

Description

Animation generation method for object in interface, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the field of human-computer interaction, in particular to an animation generation method for an object in an interface, electronic equipment and a storage medium.
Background
With the development of intelligent terminals, human-computer interaction modes have become increasingly diverse. A touch screen of an electronic device may display a human-computer interaction interface (also referred to as a user interface); a user performs gesture operations on an object in the user interface, and the object makes an animation response based on the user's gesture operation. For example, the object may respond to the user's drag gesture with a deformation animation, or respond to the user's tap gesture with a fragmentation animation.
However, the animation response of an object in the user interface to the user's gesture operation is usually a preset, fixed animation; that is, the animation effect generated for different objects in the user interface is the same. The interaction experience provided by such object animations is poor.
Disclosure of Invention
The embodiment of the application provides an animation generation method for objects in an interface, electronic equipment and a storage medium, which can generate differentiated animations based on appearance differences among the objects and improve the interaction experience of users.
To achieve the above purpose, the following technical solutions are adopted:
in a first aspect, an embodiment of the present application provides an animation generation method for an object in an interface, including:
the user interface displays a first object and a second object, wherein the first object and the second object are the same in type and different in appearance;
generating a first motion animation of the first object in response to a first operation acting on the first object;
and generating a second motion animation of the second object in response to a second operation acting on the second object, wherein the first operation and the second operation are the same, and the first motion animation and the second motion animation are different.
In the embodiment of the present application, the first object and the second object are of the same type, which means that the first object and the second object belong to the same type of object. The appearance of the first object is different from that of the second object, which means that the first object and the second object are objects with different appearances.
For example, when the first object is an icon of APP1 and the second object is an icon of APP2, both objects are APP icons, but the icons of APP1 and APP2 look different; therefore the two icons are the same in type and different in appearance. When the first object is the business card of user 1 and the second object is the business card of user 2, both objects are user business cards, but the content of user 1's card differs from the content of user 2's card; therefore the two business cards are the same in type and different in appearance. In an instant messaging application, when the first object is the avatar of user 1 and the second object is the avatar of user 2, both objects are user avatars, but user 1 and user 2 use different pictures as their avatars; therefore the two avatars are the same in type and different in appearance.
In the embodiment of the present application, the first operation and the second operation being the same means that the first operation and the second operation belong to the same class of operation.
For example, if the first operation is a sliding operation over a first distance and the second operation is a sliding operation over a second distance, both are sliding operations, so the sliding operation over the first distance and the sliding operation over the second distance are the same operation. If the first operation is a pressing operation with a first pressing duration and the second operation is a pressing operation with a second pressing duration, both belong to pressing operations, so the pressing operation with the first duration and the pressing operation with the second duration are the same operation. If the first operation is a single-click operation and the second operation is also a single-click operation, the first operation and the second operation are the same.
In the embodiment of the application, the first motion animation and the second motion animation are different, which means that the first motion animation and the second motion animation have difference.
For example, suppose the motion displacement of the first object in the first motion animation is a third distance, and the motion displacement of the second object in the second motion animation is a fourth distance. If the third distance and the fourth distance are different, the first motion animation and the second motion animation are different. Or, in the first motion animation the first object rebounds m times after being pressed, in the second motion animation the second object rebounds n times after being pressed, and m and n are different, so the first motion animation and the second motion animation are different. Similarly, if the deformation area of the first object after being pressed is larger than that of the second object after being pressed, the first motion animation and the second motion animation are different.
From the above description it can be understood that, when the same operation is applied to objects of the same type but with different appearances, the objects generate differentiated motion animations, which improves the user's interaction experience.
In one possible implementation manner of the first aspect, generating a first motion animation of a first object in response to a first operation acting on the first object includes:
detecting a first operation acting on a first object;
acquiring appearance parameters of a first object, and endowing the first object with physical parameters according to the appearance parameters of the first object;
acquiring initial parameters of a first object;
based on the physical parameters of the first object and the initial parameters of the first object, a motion animation of the first object is generated.
In the embodiment of the present application, the first operation acting on the first object may be a click operation, a press operation (long-press operation), a slide operation, or the like performed by the user on the first object. The appearance parameters of the first object may include: the color of the pixel points in the first object, the coordinates of the pixel points in the first object, the area or volume of the first object, the theme color of the user interface where the first object is located, the transparency of the first object, the blurriness of the first object, and the like. As can be understood from this description, the appearance parameters of the first object are parameters by which the difference between the first object and other objects can be visually perceived by the user. The embodiments of the application assign physical parameters to the first object based on its appearance parameters, so that the physical parameters of the first object match the user's visual perception of objects in the real world. When the first operation is a pressing operation, the initial parameter of the first object may be the value of the pressing force applied to the first object (the value may be a fixed value, or may be obtained from a parameter such as the contact area of the pressing operation), or may be the position at which the pressing force is applied to the first object. When the first operation is a pushing operation, the initial parameter of the first object may be an initial speed set for the first object, or a pushing force generated according to the pushing distance of the pushing operation. The initial parameters of the first object may specifically refer to the description in the subsequent embodiments.
Motion parameters generated from the physical parameters of an object conform better to how the object would move in the real world. Therefore, the motion animations obtained by the embodiments of this application can provide differentiated animation effects between different objects; moreover, the generated motion animation of an object better matches the motion of that object in the real world, which improves the user's interaction experience.
In this embodiment of the application, for the process of generating the second motion animation of the second object in response to the second operation acting on the second object, reference may be made to the description of "generating the first motion animation of the first object in response to the first operation acting on the first object", which is not repeated here.
In a possible implementation manner of the first aspect, the appearance parameters of the first object include: the color of a pixel point in the first object, and the physical parameters of the first object include: a mass of the first object;
assigning physical parameters to the first object based on the appearance parameters of the first object includes:
generating the mass of the first object according to the color of the pixel points in the first object.
In a possible implementation manner of the first aspect, generating the mass of the first object according to the color of a pixel point in the first object includes:
generating a first mass value of the first object according to a first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located, and taking the first mass value of the first object as the mass of the first object.
In a possible implementation manner of the first aspect, generating the first mass value of the first object according to the first difference between the color of a pixel point in the first object and the theme color of the user interface where the first object is located includes:
generating the pixel mass of a pixel point in the first object according to the first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located;
generating the first mass value of the first object according to the pixel masses of the pixel points in the first object.
As an example, the pixel mass of each pixel point in the first object may be calculated from the first difference between the pixel color and the theme color, for example:
m_i = |a_i - b|;
or a similar function of that difference, where m_i represents the mass of the ith pixel point in the first object, a_i represents the color of the ith pixel point in the first object, and b represents the theme color of the user interface.
In practical applications, the calculated mass of each pixel point can be normalized:
m_i' = (m_i - μ)/(m_max - m_min);
where m_i' represents the normalized mass of the ith pixel point in the first object, μ is a self-defined constant, m_max represents the maximum value of the color and may be set to 0xFFFFFF, and m_min represents the minimum value of the color and may be set to 0x000000.
Finally, the first mass value of the first object (which may be taken as the mass of the first object) is calculated by summing the normalized pixel masses:
M = m_1' + m_2' + ... + m_n';
where M represents the mass of the first object, m_i' represents the normalized mass of the ith pixel point in the first object, and n represents the number of pixel points in the first object.
In a possible implementation manner of the first aspect, the appearance parameter of the first object includes: a transparency of the first object; before the first mass value of the first object is taken as the mass of the first object, the method further includes:
generating a second mass value of the first object according to the transparency of the first object and the first mass value of the first object, and taking the second mass value of the first object as the mass of the first object.
As an example, when considering the influence of the transparency of the first object on the mass, the second mass value of the first object (which may be taken as the mass of the first object) may be calculated by scaling the first mass value with a transparency coefficient, for example:
M' = K × M;
where K represents a transparency coefficient and may take a value in the range of 0 to 1, M is the first mass value, and M' is the second mass value. The more transparent the first object, the smaller K and the lighter the mass of the first object.
In a possible implementation manner of the first aspect, the appearance parameter of the first object includes: the color of the pixel point in the first object and the coordinate of the pixel point in the first object; the physical parameters of the first object include: barycentric coordinates of the first object;
assigning physical parameters to the first object based on the appearance parameters of the first object comprises:
generating the pixel mass of a pixel point in the first object according to the first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located;
calculating the barycentric coordinates of the first object according to the coordinates of the pixel points in the first object and the pixel masses of the pixel points in the first object.
As an example, in a two-dimensional space, a rectangular coordinate system O-XY is adopted. The first object can be divided into n particles (or n pixel points), the coordinate of the ith particle is (x_i, y_i), the mass of the ith particle is m_i', and the mass M of the first object is m_1' + ... + m_i' + ... + m_n'.
The coordinates of the center of gravity of the first object are G(x, y), calculated by the following formulas:
x = (x_1 m_1' + ... + x_i m_i' + ... + x_n m_n')/M;
y = (y_1 m_1' + ... + y_i m_i' + ... + y_n m_n')/M.
In a three-dimensional space, a rectangular coordinate system O-XYZ is adopted. The first object can be divided into n particles (or n pixel points), the coordinate of the ith particle is (x_i, y_i, z_i), the mass of the ith particle is m_i', and the mass M of the first object is m_1' + ... + m_i' + ... + m_n'.
The coordinates of the center of gravity of the first object are G(x, y, z), calculated by the following formulas:
x = (x_1 m_1' + ... + x_i m_i' + ... + x_n m_n')/M;
y = (y_1 m_1' + ... + y_i m_i' + ... + y_n m_n')/M;
z = (z_1 m_1' + ... + z_i m_i' + ... + z_n m_n')/M.
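A minimal Kotlin sketch of the barycenter computation for a two-dimensional object, following the weighted-average formulas above; the parallel-array data layout is an assumption, and the three-dimensional case adds a z term in the same way.

```kotlin
data class Barycenter(val x: Double, val y: Double)

/** Center of gravity G(x, y) from pixel coordinates and normalized pixel masses m_i'. */
fun barycenter2d(xs: DoubleArray, ys: DoubleArray, masses: DoubleArray): Barycenter {
    val total = masses.sum()          // M = m_1' + ... + m_n'
    var gx = 0.0
    var gy = 0.0
    for (i in masses.indices) {
        gx += xs[i] * masses[i]       // x_i * m_i'
        gy += ys[i] * masses[i]       // y_i * m_i'
    }
    return Barycenter(gx / total, gy / total)
}
```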
In a possible implementation manner of the first aspect, the appearance parameter of the first object further includes: the outline color of the first object, the area or volume of the first object; the physical parameters of the first object include: a stiffness of the first object;
assigning physical parameters to the first object based on the appearance parameters of the first object comprises:
calculating a second difference between the color of the outer border of the first object and the color of the theme of the user interface where the first object is located;
generating a unit mass of the first object based on the mass of the first object and the area or volume of the first object;
generating a first stiffness of the first object from the second difference and the unit mass of the first object, and taking the first stiffness of the first object as the stiffness of the first object.
The stiffness of the first object is:
G = |b - c| × k_G × M_s;
where G represents the stiffness of the first object, b represents the theme color of the user interface, c represents the color of the outer border of the first object, k_G represents a stiffness conversion factor, and M_s represents the unit mass of the first object. For convenience of description, the difference between the color of the outer border of the first object and the theme color of the user interface where the first object is located is denoted as the second difference. When the first object is a two-dimensional element, M_s = M/S, where S denotes the area of the first object; when the first object is a three-dimensional element, M_s = M/V, where V denotes the volume of the first object.
In a possible implementation manner of the first aspect, the appearance parameter of the first object further includes: a transparency of the first object; before the first stiffness of the first object is taken as the stiffness of the first object, the method further includes:
generating a second stiffness of the first object according to the first stiffness of the first object and the transparency of the first object, and taking the second stiffness of the first object as the stiffness of the first object.
G = |b - c| × K × k_G × M_s.
In a possible implementation manner of the first aspect, the appearance parameter of the first object further includes: a blurriness of the first object; before the first stiffness of the first object is taken as the stiffness of the first object, the method further includes:
generating a third stiffness of the first object according to the first stiffness of the first object and the blurriness of the first object, and taking the third stiffness of the first object as the stiffness of the first object.
G = |b - c| × k_G × M_s × A;
where A represents the blurriness of the first object.
Of course, in practical applications, the transparency and the blurriness of the first object can also be considered simultaneously:
G = |b - c| × K × k_G × M_s × A.
For convenience of description, the stiffness generated without considering the transparency and the blurriness of the first object may be referred to as the first stiffness, the stiffness generated considering the transparency of the first object may be referred to as the second stiffness, the stiffness generated considering the blurriness of the first object may be referred to as the third stiffness, and the stiffness generated considering both the transparency and the blurriness of the first object may be referred to as the fourth stiffness. In practical applications, any one of the first stiffness, the second stiffness, the third stiffness, and the fourth stiffness may be used as the stiffness of the first object.
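A Kotlin sketch of the stiffness assignment above, covering the first to fourth stiffness with one function; passing 1.0 for the transparency coefficient K and the blurriness A reproduces the first stiffness. The packed-integer color difference and the parameter names are assumptions.

```kotlin
import kotlin.math.abs

/** Stiffness G = |b - c| * K * k_G * M_s * A, following the formulas above. */
fun stiffness(
    themeColor: Int,           // b, theme color of the user interface
    borderColor: Int,          // c, color of the outer border of the object
    unitMass: Double,          // M_s = M / S (2-D) or M / V (3-D)
    stiffnessFactor: Double,   // k_G, stiffness conversion factor
    transparencyK: Double = 1.0,
    blurA: Double = 1.0
): Double = abs(themeColor - borderColor) * transparencyK * stiffnessFactor * unitMass * blurA
```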
In a possible implementation manner of the first aspect, the method further includes:
generating a relative friction coefficient between the first object and the background where the first object is located according to the base color of the first object and the color of the background where the first object is located;
and calculating a first friction force acting on the first object according to the mass and the relative friction coefficient of the first object, and taking the first friction force as the friction force acting on the first object during the movement of the first object.
As an example, F_f = M × O_BE;
where F_f is the friction force acting on the first object, M is the mass of the first object, and O_BE is the relative friction coefficient between the first object and the background where the first object is located.
In a possible implementation manner of the first aspect, generating a relative friction coefficient between the first object and a background of the first object according to a base color of the first object and a color of the background of the first object includes:
generating an object friction force of the first object according to the base color of the first object;
generating background friction of the background of the first object according to the color of the background of the first object;
and generating a relative friction coefficient between the first object and the background where the first object is located according to the object friction force and the background friction force.
As an example, O_BE = B_f × E_f;
B_f = e_1 + |0xFFFFFF - d| × k_fB;
where B_f is the background friction force, e_1 is a preset minimum background friction, d is the color of the background where the first object is located, and k_fB is a preset background friction conversion coefficient.
E_f = e_2 + |0xFFFFFF - a| × k_fE;
where E_f is the object friction force, e_2 is a preset minimum object friction, a is the color of the first object, and k_fE is a preset object friction conversion coefficient.
In a possible implementation manner of the first aspect, the appearance parameter of the first object further includes: a blurriness of the first object; before the first friction force is taken as the friction force acting on the first object during the motion of the first object, the method further includes:
generating a second friction force acting on the first object according to the first friction force and the blurriness of the first object, and taking the second friction force as the friction force acting on the first object during the motion of the first object.
As an example, the second friction force may be obtained by dividing the first friction force by the blurriness, for example:
F_f' = M × O_BE / A;
where A represents the blurriness of the first object and may take a value between 0 and 1. The more blurred the first object, the smaller the value of A and the larger the friction force, so the blurriness of the first object and the friction force are inversely related.
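The friction model above can be sketched in Kotlin as follows; e1/e2 are the preset minimum background and object friction values, kfB/kfE the preset conversion coefficients, and the division by the blurriness A for the second friction force follows the assumed reconstruction above.

```kotlin
import kotlin.math.abs

object FrictionModel {
    /** Background friction B_f = e_1 + |0xFFFFFF - d| * k_fB. */
    fun backgroundFriction(backgroundColor: Int, e1: Double, kfB: Double): Double =
        e1 + abs(0xFFFFFF - backgroundColor) * kfB

    /** Object friction E_f = e_2 + |0xFFFFFF - a| * k_fE. */
    fun objectFriction(objectColor: Int, e2: Double, kfE: Double): Double =
        e2 + abs(0xFFFFFF - objectColor) * kfE

    /** Relative friction coefficient O_BE = B_f * E_f. */
    fun relativeCoefficient(bf: Double, ef: Double): Double = bf * ef

    /** First friction F_f = M * O_BE; dividing by a blurriness A in (0, 1] increases it (assumed second friction). */
    fun friction(mass: Double, oBE: Double, blurA: Double = 1.0): Double = mass * oBE / blurA
}
```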
In a possible implementation manner of the first aspect, the method further includes:
the speed of the first object is acquired, and the air resistance acting on the first object during the movement of the first object is generated according to the speed of the first object.
In one possible implementation manner of the first aspect, generating a motion animation of the first object based on the physical parameter of the first object and the initial parameter of the first object includes:
generating a motion parameter of the first object based on the physical parameter of the first object and the initial parameter of the first object;
and generating the motion animation of the first object according to the motion parameters of the first object.
In one possible implementation manner of the first aspect, after the first operation acts on the first object, the first object and a third object collide;
the physical parameters of the first object include the mass of the first object, and the initial parameters of the first object include a first entry velocity of the first object;
generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object includes:
acquiring physical parameters of the third object and initial parameters of the third object, where the physical parameters of the third object include the mass of the third object, and the initial parameters of the third object include a second entry velocity of the third object;
calculating a first exit velocity of the first object and a second exit velocity of the third object from the mass of the first object, the mass of the third object, the first entry velocity and the second entry velocity, based on the momentum conservation law and the energy conservation law;
calculating the velocity and/or displacement of the first object over time after the collision based on the first exit velocity of the first object and the frictional force acting on the first object after the collision;
calculating the velocity and/or displacement of the third object over time after the collision based on the second exit velocity of the third object and the frictional force acting on the third object after the collision.
As an example, the first exit velocity of the first object and the second exit velocity of the third object are obtained by calculation through formula (1) and formula (2). Then, the velocity of the first object over time is obtained through formula (3) and formula (4).
(1) M_A × v_ruA + M_B × v_ruB = M_A × v_A0 + M_B × v_B0;
(2) 1/2 × M_A × v_ruA^2 + 1/2 × M_B × v_ruB^2 = 1/2 × M_A × v_A0^2 + 1/2 × M_B × v_B0^2;
(3) v_At = v_A0 + a_A × t;
(4) a_A = (F_fA + F_zA)/M_A;
where v_ruA represents the velocity of object A (the first object) at the moment of collision (the first entry velocity), v_ruB represents the velocity of object B (the third object) at the moment of collision (the second entry velocity), v_A0 represents the velocity of object A after the collision (the first exit velocity), and v_B0 represents the velocity of object B after the collision (the second exit velocity). v_At represents the velocity of object A over time, a_A is the acceleration of object A, and F_fA and F_zA are the friction force and the resistance (e.g. air resistance) acting on object A, respectively.
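A Kotlin sketch of formulas (1) to (4): the exit velocities are the closed-form solution of the one-dimensional momentum and kinetic-energy conservation equations, and the post-collision velocity over time follows the constant-acceleration model of formulas (3) and (4). Treating the forces as signed values is an assumption.

```kotlin
data class ExitVelocities(val vA0: Double, val vB0: Double)

/** Closed-form solution of formulas (1) and (2) for a one-dimensional elastic collision. */
fun elasticCollision(mA: Double, mB: Double, vRuA: Double, vRuB: Double): ExitVelocities {
    val vA0 = ((mA - mB) * vRuA + 2 * mB * vRuB) / (mA + mB)
    val vB0 = ((mB - mA) * vRuB + 2 * mA * vRuA) / (mA + mB)
    return ExitVelocities(vA0, vB0)
}

/** v_At = v_A0 + a_A * t, with a_A = (F_f + F_z) / M; forces opposing the motion are negative. */
fun velocityAt(v0: Double, friction: Double, airResistance: Double, mass: Double, t: Double): Double =
    v0 + (friction + airResistance) / mass * t
```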
In one possible implementation manner of the first aspect, after the first operation acts on the first object, the first object and a third object collide;
the physical parameters of the first object include the mass of the first object, and the initial parameters of the first object include a first entry velocity of the first object;
generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object includes:
acquiring physical parameters of the third object and initial parameters of the third object, where the physical parameters of the third object include the mass of the third object, and the initial parameters of the third object include a second entry velocity of the third object;
calculating a first exit velocity of the first object and a second exit velocity of the third object from the mass of the first object, the mass of the third object, the first entry velocity and the second entry velocity, based on the momentum conservation law and the energy conservation law;
calculating the velocity and/or displacement of the first object over time after the collision based on the first exit velocity of the first object and the frictional force acting on the first object after the collision;
calculating the velocity and/or displacement of the third object over time after the collision based on the second exit velocity of the third object and the frictional force acting on the third object after the collision.
(5) v_chu = v_ru × E_lose;
(6) E_lose = (G_A + G_B)/(2 × G_max), where G_max is a user-defined maximum stiffness constant.
When object B is an object whose position is fixed, the velocity assigned to object B is 0, and therefore v_chu is assigned entirely to object A, i.e. (7) v_A0 = v_chu, v_B0 = 0.
When object B is an object whose position is not fixed, formula (8) is used to distribute v_chu between object A and object B.
The sum v_chu of the third exit velocity of object A and the fourth exit velocity of object B is obtained through formula (5) and formula (6). Then, the third exit velocity of object A and the fourth exit velocity of object B are obtained through formula (7) or formula (8).
Finally, the velocity of the first object over time is obtained through formula (3) and formula (4).
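A Kotlin sketch of formulas (5) to (7); the reading of formula (6) as a normalized ratio and the mass-proportional split used in place of formula (8) for a movable object B are assumptions.

```kotlin
/** (5) v_chu = v_ru * E_lose, with E_lose = (G_A + G_B) / (2 * G_max) from (6) (assumed normalization). */
fun exitSpeedAfterLoss(vRu: Double, gA: Double, gB: Double, gMax: Double): Double =
    vRu * ((gA + gB) / (2 * gMax))

/** (7) fixed B gets 0; for a movable B the split by mass is an assumed stand-in for formula (8). */
fun splitExitSpeed(vChu: Double, mA: Double, mB: Double, bIsFixed: Boolean): Pair<Double, Double> =
    if (bIsFixed) vChu to 0.0
    else (vChu * mB / (mA + mB)) to (vChu * mA / (mA + mB))
```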
In one possible implementation of the first aspect, the first operation is a pressing operation acting on a first object, the first object generating press rebounding;
the physical parameters of the first object include: a mass of the first object; the initial parameters of the first object include: a degree of rebound acting on the first object and a coefficient of elasticity of the elastic member acting on the first object;
generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object, comprising:
generating a first resilience force acting on the first object according to the degree of resilience and the mass of the first object;
calculating the pressing displacement of the first object according to the first resilience force and the elastic coefficient;
calculating elastic potential energy in the scene where the first object is located according to the pressing displacement and the elastic coefficient of the first object;
obtaining the motion parameters of the first object based on a model in which the elastic potential energy equals the kinetic energy of the first object plus the work done by the air resistance acting on the first object.
(9) F_T = K_T/M;
(10) x = F_T/k, x_p = G/k, press displacement: x - x_p;
(11) the elastic potential energy equals the kinetic energy of the first object plus the work done by the air resistance, for example 1/2 × k × (x - x_p)^2 = 1/2 × M × v^2 + W_f, where W_f denotes the work done by the air resistance.
The first resilience force is obtained through formula (9), the press displacement is obtained through formula (10), and formula (11) expresses the relation among the elastic potential energy, the kinetic energy of the first object, and the work done by the air resistance.
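A Kotlin sketch of the press rebound calculation in formulas (9) to (11); treating G in formula (10) as a given force term and ignoring the work done by air resistance in the energy balance are assumptions.

```kotlin
import kotlin.math.sqrt

/** (9) F_T = K_T / M: rebound force from the rebound degree K_T and the mass. */
fun reboundForce(kT: Double, mass: Double): Double = kT / mass

/** (10) press displacement x - x_p, with x = F_T / k and x_p = G / k. */
fun pressDisplacement(fT: Double, gTerm: Double, k: Double): Double = fT / k - gTerm / k

/** Rebound launch speed from (1/2) * k * x^2 = (1/2) * M * v^2, air-resistance work ignored. */
fun reboundSpeed(k: Double, displacement: Double, mass: Double): Double =
    sqrt(k * displacement * displacement / mass)
```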
In a possible implementation manner of the first aspect, the physical parameter of the first object further includes: a stiffness of the first object;
according to the first resilience force and the elastic coefficient, calculating the pressing displacement of the first object comprises the following steps:
generating a second resilience force acting on the first object according to the first resilience force and the rigidity of the first object;
and generating the pressing displacement of the first object according to the second resilience force and the elastic coefficient.
In one possible implementation manner of the first aspect, the first operation is a pressing operation acting on the first object, and the first object generates a pressing tilt;
the physical parameters of the first object include: a center of gravity of the first object; the initial parameters of the first object include: a stress point of a pressing force acting on the first object;
when the first object generates the pressing tilt motion, the tilt axis of the first object is perpendicular to a first line which is a line between a point of application of a pressing force acting on the first object and the center of gravity of the first object.
In one possible implementation of the first aspect, the tilt axis of the first object passes through the center of gravity of the first object.
In one possible implementation manner of the first aspect, the first operation is a pressing operation acting on a first object, and the first object generates a pressing deformation;
the physical parameters of the first object include: a stiffness of the first object; the initial parameters of the first object comprise pressing force acting on the first object and stress points of the pressing force; the motion parameters of the first object comprise a deformation region of the first object;
generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object, comprising:
calculating a degree of deformation of the first object based on the pressing force acting on the first object and the rigidity of the first object;
calculating the deformation area of the first object according to the deformation degree of the first object and the area of the first object;
and generating a deformation area of the first object according to the deformation area of the first object, wherein the center of the deformation area is the stress point of the pressing force acting on the first object.
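A Kotlin sketch of the press deformation steps above; the force/stiffness ratio as the deformation degree and the circular deformation region centered on the force point are assumptions consistent with the description.

```kotlin
import kotlin.math.PI
import kotlin.math.sqrt

data class DeformationRegion(val centerX: Double, val centerY: Double, val radius: Double)

fun deformationRegion(
    pressForce: Double, stiffness: Double, objectArea: Double,
    pressX: Double, pressY: Double
): DeformationRegion {
    val degree = pressForce / stiffness      // assumed deformation degree
    val area = degree * objectArea           // deformation area of the first object
    val radius = sqrt(area / PI)             // circular region centered on the press point (assumed shape)
    return DeformationRegion(pressX, pressY, radius)
}
```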
In one possible implementation manner of the first aspect, when the first operation acts on the first object, a chain motion is generated between the first object and a fourth object arranged in a chain with the first object, where the chain force applied to the fourth object is the chain force applied to the object that exerts force on the fourth object, divided by the mass of the fourth object.
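A Kotlin sketch of the chain-motion rule stated above, treating it as a recurrence along the chain; the list layout of the chained objects is an assumption.

```kotlin
/**
 * Chain force propagation: each object receives the chain force applied to the object that
 * drives it, divided by its own mass. `masses` lists the chained objects in order.
 */
fun chainForces(initialForce: Double, masses: List<Double>): List<Double> {
    val forces = mutableListOf<Double>()
    var f = initialForce
    for (m in masses) {
        f /= m                // force on object k = force on its driver / mass of object k
        forces.add(f)
    }
    return forces
}
```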
In a second aspect, an embodiment of the present application provides an electronic device, including:
the object display module is used for displaying a first object and a second object on the user interface, wherein the first object and the second object are the same in type and different in appearance;
an animation generation module for generating a first motion animation of a first object in response to a first operation acting on the first object;
and the animation generation module is also used for responding to a second operation acted on the second object and generating a second motion animation of the second object, wherein the first operation and the second operation are the same, and the first motion animation and the second motion animation are different.
In a third aspect, an electronic device is provided, which includes a processor for executing a computer program stored in a memory, so as to enable the electronic device to implement the method of any one of the first aspect of the present application.
In a fourth aspect, a chip system is provided, which includes a processor coupled to a memory, the processor executing a computer program stored in the memory to cause an electronic device to implement the method of any one of the first aspect of the present application.
In a fifth aspect, there is provided a computer readable storage medium storing a computer program which, when executed by one or more processors, causes an electronic device to carry out the method of any of the first aspects of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, which when run on a device, causes the device to perform any one of the methods of the first aspect.
It is understood that the beneficial effects of the second to sixth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic view of an application scenario of a method for generating an animation of an object in an interface according to an embodiment of the present application;
fig. 2 is a schematic diagram of a hardware structure of an electronic device that executes a method for generating an animation of an object in an interface according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for generating an animation of an object in an interface according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating colors of outer borders of objects in a computing interface according to an embodiment of the present disclosure;
Figs. 5(a) to 5(d) are scene diagrams of collision animations of an object in an interface provided in an embodiment of the present application;
Figs. 6(a) to 6(d) are schematic diagrams of scenes of press rebound animations of an object in an interface provided in an embodiment of the present application;
Figs. 7(a) to 7(c) are schematic views of scenes of a pressing and tilting animation of an object in an interface provided by an embodiment of the present application;
Figs. 8(a) to 8(b) are force-bearing schematic diagrams of a pressing and tilting animation of an object in an interface provided by an embodiment of the present application;
Figs. 9(a) to 9(c) are schematic views of scenes of a pressing animation of an object in an interface provided by an embodiment of the present application;
fig. 10 is a scene schematic diagram of a stereoscopic animation and a planar animation of an object in an interface according to an embodiment of the present application;
FIG. 11 is a schematic view of a scene of a stereoscopic animation and a planar animation of an object in another interface provided in an embodiment of the present application;
FIG. 12 is a schematic structural diagram illustrating a chain arrangement of objects in an interface according to an embodiment of the present disclosure;
FIG. 13 is an animation diagram illustrating a chain animation of an object in an interface according to an embodiment of the present disclosure;
fig. 14 is a schematic block diagram of functional architecture modules of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes the association relationship of the associated object, and indicates that three relationships can exist; for example, a and/or B, may represent: a alone, both A and B, and B alone, where A, B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The animation generation method for objects in an interface provided by the embodiments of this application can be applied to the application scenario shown in Fig. 1. As shown in the first drawing of Fig. 1, a display screen of the electronic device displays a user interface; here a contacts interface is displayed, in which contact business cards exist, and the business card of each contact can be used as an object. The second drawing of Fig. 1 shows the main interface of the electronic device, and the icon of each application in the main interface may also be an object. When a user's gesture acts on an object in the interface, the object is caused to move in the interface, thereby generating a motion animation of the object. In order to make the motion animation of an object in the interface closer to how the object would move in the real world, some physical parameters, such as the mass of the object and the stiffness of the object, may be assigned to the object in the interface. Motion parameters of the object, e.g., the velocity of the object over time and the displacement of the object over time, are then calculated based on the physical parameters of the object and the initial parameters of the object (e.g., a thrust force acting on the object or an initial velocity of the object). Finally, the motion animation of the object is generated based on the motion parameters of the object.
As an example, when two objects in the interface collide, one of the objects may experience bouncing motion. However, for different objects, there may be differences in the velocity at which bounce occurs and the displacement of the bounce, which are generated based on the physical parameters of the object, rather than different animation effects that developers set in advance according to different objects. Since the motion animation of an object is related to the physical parameters of the object, in order to obtain differentiated motion animation between different objects and make the motion animation of the object more conform to the motion process of the object in the real world, it is necessary to assign personalized physical parameters to each object. The perception of the user to the object in the user interface comes from the appearance parameters of the object, for example, the color of the pixel points in the object, the transparency of the object, and the like, and therefore, the individualized physical parameters of each object can be generated based on the appearance parameters of each object.
In the animation generation method for the object in the interface provided by the embodiment of the application, firstly, physical parameters of the object are generated based on appearance parameters of the object; then generating the motion parameters of the object according to the physical parameters of the object and the initial parameters of the object, and finally generating the motion animation of the object according to the motion parameters of the object.
Based on the above understanding, at least two objects of the same type may be displayed in the user interface, for example application A (which may be the calendar icon shown in FIG. 4) and application C (which may be the calculator icon shown in FIG. 4) in the second user interface shown in FIG. 1. Since application A and application C are both APP icons, application A and application C are of the same type. Since application A and application C are icons of different APPs, application A and application C differ in appearance (the color-related appearance differences are not shown in FIG. 4).
When the user performs a pushing operation to the right on application A, application A moves to the right, hits application B, and is rebounded; application A then moves to the left while application B remains still. The rebound motion of application A can refer to the motion animation of object A shown in FIG. 5(b): after the collision, application A rebounds in the direction opposite to its original motion direction. The motion animation of the collided application B can refer to the motion animation of object B shown in FIG. 5(b): application B remains still.
When the user performs a pushing operation to the left on application C, application C moves to the left and hits application B, and then application C continues to move to the left and application B also moves to the left. The motion animation of application C may refer to the motion animation of object A shown in FIG. 5(c): after the collision, application C keeps moving in its original direction. The motion animation of the collided application B may refer to the motion animation of object B shown in FIG. 5(c): application B and application C move in the same direction. The operation of the user on application A is a pushing operation to the right, and the operation of the user on application C is a pushing operation to the left. Although the operation directions differ, both are pushing operations, so the rightward pushing operation acting on application A and the leftward pushing operation acting on application C are the same operation. The above operation process may be understood as giving application A and application C the same thrust value or the same initial velocity value.
As can be understood from the motion animations of object A and object B shown in FIG. 5(b) and FIG. 5(c), when the same operation acts on application A and application C and the objects collided with by application A and application C are the same, the motion animations generated by application A and application C are different.
The reason is that the appearance of application A makes application A look like a light object in the real world, while the appearance of application C makes application C look like a heavy object in the real world, and the collision of these two objects with the same object in the real world produces different motions. Therefore, the animation generation method for objects in an interface provided by the embodiments of this application can generate differentiated animations based on the appearance differences among objects, and the generated animation of an object more closely resembles the motion of an object in the real world.
How to obtain the mass or other physical parameters of an object according to the appearance of the object in the above embodiments can refer to the description of the following embodiments.
As another embodiment, application A and application D displayed in the second interface shown in Fig. 1 are both APP icons, i.e. their types are the same. Because application A and application D differ in appearance, when the user performs pressing operations on application A and application D respectively, application A may rebound 2 times after being pressed while application D may rebound 4 times after being pressed. That is, application A and application D, which have different appearances, generate different motion animations for the same operation. The generation process of the press rebound motion animation can refer to the description in the subsequent embodiments.
It should be noted that the application scenario shown in fig. 1 is only an example, and in an actual application, there may be other application scenarios.
The embodiments of this application provide an animation generation method for an object in an interface, and the method can be applied to an electronic device. The electronic device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, a smart speaker, a smart screen, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or another electronic device. The embodiments of this application do not limit the specific type of the electronic device.
Fig. 2 shows a schematic structural diagram of an electronic device. The electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, a motor 291, a camera 293, a display 294, and a Subscriber Identity Module (SIM) card interface 295, and so on. The sensor module 280 may include a pressure sensor 280A, a gyro sensor 280B, an acceleration sensor 280E, a touch sensor 280K, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. For example, the processor 210 is configured to execute an animation generation method for an object in an interface in an embodiment of the present application.
The controller may be, among other things, a neural center and a command center of the electronic device 200. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 210. If the processor 210 needs to reuse the instruction or data, it may be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interface may include a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, and/or the like.
The USB interface 230 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 230 may be used to connect a charger to charge the electronic device 200, and may also be used to transmit data between the electronic device 200 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 200. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a program storage area and a data storage area.
In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The charge management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 240 may receive charging input from a wired charger via the USB interface 230. In some wireless charging embodiments, the charging management module 240 may receive a wireless charging input through a wireless charging coil of the electronic device 200. The charging management module 240 may also supply power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used to connect the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charging management module 240, and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, and the wireless communication module 260. The power management module 241 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc.
In some other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charging management module 240 may be disposed in the same device.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 200. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 250 can receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 250 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 270A, the receiver 270B, etc.) or displays images or video through the display screen 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 210, and may be disposed in the same device as the mobile communication module 250 or other functional modules.
The wireless communication module 260 may provide a solution for wireless communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 200 is coupled to mobile communication module 250 and antenna 2 is coupled to wireless communication module 260 such that electronic device 200 may communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include Global Positioning System (GPS), global navigation satellite system (GLONASS), beidou satellite navigation system (BDS), quasi-zenith satellite system (QZSS), and/or Satellite Based Augmentation System (SBAS).
The electronic device 200 implements display functions via the GPU, the display screen 294, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 294 is used to display images, video, and the like. The display screen 294 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 200 may include 1 or N display screens 294, N being a positive integer greater than 1. As an example, a display screen of the electronic device may display a user interface.
Electronic device 200 may implement audio functions via audio module 270, speaker 270A, receiver 270B, microphone 270C, headset interface 270D, and an application processor, among others. Such as music playing, recording, etc.
Audio module 270 is used to convert digital audio signals to analog audio signal outputs and also to convert analog audio inputs to digital audio signals. Audio module 270 may also be used to encode and decode audio signals. In some embodiments, the audio module 270 may be disposed in the processor 210, or some functional modules of the audio module 270 may be disposed in the processor 210.
The speaker 270A, also called a "horn", is used to convert an audio electrical signal into an acoustic signal. The electronic apparatus 200 can listen to music through the speaker 270A or listen to a handsfree call.
The receiver 270B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 200 receives a call or voice information, it is possible to receive voice by placing the receiver 270B close to the human ear.
The microphone 270C, also referred to as a "microphone," is used to convert acoustic signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 270C by speaking with the mouth close to the microphone 270C. The electronic device 200 may be provided with at least one microphone 270C. In other embodiments, the electronic device 200 may be provided with two microphones 270C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 200 may further include three, four or more microphones 270C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on. For example, the microphone 270C may be used to capture audio signals related to embodiments of the present application.
The headphone interface 270D is used to connect wired headphones. The headset interface 270D may be the USB interface 230, or may be an open mobile electronic device platform (OMTP) standard interface of 3.5mm, or a Cellular Telecommunications Industry Association (CTIA) standard interface.
The pressure sensor 280A is used to sense a pressure signal, which can be converted into an electrical signal. In some embodiments, pressure sensor 280A may be disposed on display screen 294. The pressure sensor 280A can be of a wide variety of types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 280A, the capacitance between the electrodes changes. The electronic device 200 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 294, the electronic device 200 detects the intensity of the touch operation according to the pressure sensor 280A, so as to generate an external force applied to an object in the interface, and the object in the interface generates a corresponding motion based on the external force.
The gyro sensor 280B may be used to determine the motion pose of the electronic device 200. In some embodiments, the angular velocity of the electronic device 200 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 280B. The gyro sensor 280B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 280B detects a shake angle of the electronic device 200, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 200 through a reverse movement, thereby achieving anti-shake. The gyro sensor 280B may also be used for navigation, somatosensory gaming scenes. As an example, the user may adjust the pose of the electronic device 200 such that a freely moving object in the interface produces a corresponding motion based on its own weight.
The acceleration sensor 280E may detect the magnitude of acceleration of the electronic device 200 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 200 is stationary. As an example, the user may adjust an acceleration of the electronic device 200, apply a corresponding acceleration to the freely moving object in the interface, such that the freely moving object in the interface generates a corresponding motion based on the acceleration.
The touch sensor 280K is also referred to as a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, which is also called a "touch screen". The touch sensor 280K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display screen 294. As an example, the electronic device 200 may detect a position of an external force applied by the user on the object through the touch sensor, so that the object in the interface generates a corresponding motion based on the position of the external force.
The motor 291 may generate a vibration cue. The motor 291 can be used for both incoming call vibration prompting and touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 291 may also respond to different vibration feedback effects for touch operations on different areas of the display 294. As an example, the electronic device 200 may generate a corresponding vibration effect based on motion animation of objects in the interface, e.g., when two objects collide, a vibration feedback effect may be generated.
The SIM card interface 295 is used to connect a SIM card. The SIM card can be attached to and detached from the electronic apparatus 200 by being inserted into the SIM card interface 295 or being pulled out from the SIM card interface 295. The electronic device 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 295 at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 295 may also be compatible with different types of SIM cards. The SIM card interface 295 may also be compatible with external memory cards. The electronic device 200 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 200 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 200 and cannot be separated from the electronic device 200.
The embodiment of the present application does not particularly limit the specific structure of the execution subject of the animation generation method for an object in an interface, as long as the execution subject can run a program recording the code of the animation generation method for an object in an interface according to the embodiment of the present application, so as to perform the method. For example, the execution subject of the animation generation method for an object in an interface provided by the embodiment of the present application may be a functional module in an electronic device that is capable of calling and executing a program, or a communication apparatus, such as a chip, applied to the electronic device.
In the embodiment of the application, a scenario is considered in which, after an object on a user interface is operated, the electronic device generates a motion animation of the object on the user interface; the process of generating the motion animation of the object is described below.
Referring to fig. 3, a schematic flowchart of a method for generating an animation of an object in an interface according to an embodiment of the present application is shown, where as shown in the drawing, the method includes:
in step 301, an operation acting on an object is detected.
As shown in FIG. 1, the object in the user interface may be an icon of an application in the main interface or may be a business card of a contact in the contact interface. In practical applications, the object in the user interface may be an element in the user interface having color features and boundaries. Such as icons, text with bounding boxes, images, various controls, and the like.
The detection of the operation acting on the object may be detection of a gesture operation acting on the object, for example, detection of a drag operation, a press operation, or the like acting on the object.
In addition to detecting a user operation acting on the object, an operation exerted on the object by another object may also be detected. For example, if object A collides with object B during its movement, the collision of object A with object B may be detected as the operation acting on object B.
Step 302, obtaining the appearance parameters of the object, and giving physical parameters to the object according to the appearance parameters of the object.
In the embodiment of the application, a display screen of the electronic device can present a user interface, and an object in the user interface is regarded as an object in the real world. Objects in the real world have physical parameters such as mass, center of gravity, and stiffness. Therefore, if an object in the user interface of the electronic device is to have a motion process similar to that in the real world, some physical parameters need to be assigned to the object in the user interface.
To enhance the user's interaction experience, physical parameters may be assigned to the object from the perspective of the user's perception of the object. The perception of the user of the object in the user interface comes from the appearance of the object, e.g. the color of the object, the color distribution of the object, the transparency of the object, the blur of the object, etc. In order to enable a processor of the electronic device to obtain the appearance of the object, the color of the pixel points in the object, the coordinates of the pixel points in the object, the transparency of the object, the ambiguity of the object, and the like can be obtained. The embodiments of the present application refer to these parameters as appearance parameters of the object.
As an example, the darker the color of the object, the larger the unit mass of the object; for instance, the color of the object may be divided into multiple levels, with different levels corresponding to different unit masses. Alternatively, a functional relationship between the unit mass of the object and the color of the object may be set, and the unit mass of the object is generated through this functional relationship and the color of the object.
As another example, the more blurred the object, the greater its roughness, and the greater the friction coefficient when the object rubs against other objects. Alternatively, a functional relationship between the friction coefficient of the object against its background, the ambiguity of the object, and the ambiguity of the background may be set, and the friction coefficient between the object and the background is obtained from this functional relationship and the two ambiguities.
The above process of assigning physical parameters to an object according to appearance parameters of the object is only used as an example, and the process of assigning different physical parameters to an object according to different appearance parameters of the object may refer to the description in the following embodiments.
Step 303, obtaining initial parameters of the object.
In the real world, an object is subjected to external forces, which may produce motion. The external force acting on the object may be an initial parameter of the object.
As an example, when the operation acting on the object is a pressing operation, the initial parameter of the object may be a value of the pressing force exerted on the object (the value may be a fixed value or may be obtained according to a parameter such as a force-receiving area of the pressing operation), and the initial parameter of the object may also be a position of the pressing force exerted on the object. When the operation acting on the object is a push operation, the initial parameter of the object may be a thrust force generated according to a push distance of the push operation.
In some embodiments, to simplify the process of generating motion animation of an object, an initial velocity may be assigned to the object as an initial parameter of the object.
As an example, when the operation acting on the object is a pushing operation, the initial parameter of the object may be an initial velocity with which the object moves at a uniform speed after the user stops pushing, or with which the object decelerates according to the friction force it receives. When the object moves at a constant speed at this initial velocity and another object lies on its motion trajectory, the object collides with that other object.
The initial parameter of the object may also be other parameters, and specifically, refer to the description in the following embodiments.
Step 304, generating a motion animation of the object based on the physical parameters of the object and the initial parameters of the object.
The movement of the object is not limited to displacement in the narrow sense; the object may also be deformed. That is, the object is considered to be moving as soon as its position and/or shape changes.
Taking the motion of the object that generates displacement as an example, the object may also be subjected to air resistance, friction, and the like during the motion. In practical applications, parameters such as mass, center of gravity, and rigidity inherent to the object may be expressed as static physical parameters of the object, and friction and air resistance accompanying the object during a movement that generates a displacement may be expressed as dynamic physical parameters of the object. Of course, since the friction and the air resistance are considered only in the motion animation of the specific animation scene, parameters such as the weight, the center of gravity, and the rigidity inherent to the object may be regarded as physical parameters of the object, and the friction and the air resistance may be regarded as stress parameters in the specific animation scene. The embodiment of the present application does not limit the division of these parameters.
In practical applications, the motion models (each composed of one or more mathematical formulas) set for different animation scenes (e.g., a press-rebound scene of an object, a press-tilt scene of an object, a press-deformation scene of an object) may be different, so the required physical parameters may also be different. Even in the same animation scene, the physical parameters used may differ depending on the motion model adopted. Based on the above understanding, in some animation scene embodiments, some or all of the above physical parameters may be used, and physical parameters other than those above may also be used; this is not limited in this application.
Motion animation of objects is typically embodied in at least two aspects: location and shape. Therefore, in the case where a plurality of objects exist in one user interface, it is possible to set some objects to be objects whose positions and shapes are not fixed, some objects to be objects whose positions and shapes are fixed, and some objects to be objects whose shapes or positions are fixed. Applying an external force to an object whose position is not fixed, so that the object generates an animation related to the position change; applying an external force to an object with an unfixed shape, so that the object generates an animation related to the shape change; when a user applies an external force to the object with the fixed position and the fixed shape, the object with the fixed position and the fixed shape does not generate motion animation.
In addition, different animation scenes can be set for different objects, for example, if a spring is arranged below the contact name card, the set animation scene is a press rebound scene for the contact name card; and when the rigidity of the application icon is set to be low, aiming at the application icon, the set animation scene is a pressing deformation scene.
Of course, in practical applications, different animation scenes may also be set for the gesture of the user acting on the object, for example, a press-bounce scene is set for the click gesture of the user on the application icon, a scene moving along the sliding direction is set for the sliding gesture of the user on the application icon, and when there are other objects along the sliding direction, the application icon may collide with the other objects.
Of course, there may be other situations in which, when an object is affected by other objects, the motion scene of the object is determined based on the motion states of those other objects. By way of example, it may be provided that when one object (object A) is hit by another object (object B), the motion animation in which object A participates is set to a collision scene (scene 1 for short).
When the first operation acting on one object (object A) is a user gesture, the motion animation in which object A participates may be set to a press-tilt scene (scene 2 for short).
As can be understood from scene 1 and scene 2, in different animation scenes, different motion models (one or more mathematical formulas make up the motion models) are used. Meanwhile, because the motion of the object in the real world is a relatively complex motion process, in the process of generating the motion parameters of the object in the animation scene, the existing motion model may be improved or the motion model conforming to the animation scene may be reset, so as to reduce the calculation amount of the processor. The motion model adopted in the embodiment of the application is not limited.
As previously mentioned, the motion model of the object may be composed of one or more mathematical formulas, the input parameters in the motion model including physical parameters of the object (e.g. mass, stiffness, center of gravity, etc.) and initial parameters of the object (e.g. force acting on the object, initial velocity of the object, etc.), and the output parameters of the motion model including motion parameters of the object (e.g. velocity of the object over time, displacement over time, area of deformation of the object, etc.). The motion parameters of the object can be obtained according to a motion model formed by one or more mathematical formulas and the physical parameters and the initial parameters of the object. The process of generating the motion parameters of the object may refer to the description in the subsequent embodiments. After obtaining the motion parameters of the object, motion animation of the object can be obtained based on the motion parameter simulation of the object. During simulation, different animation engines may be used, corresponding motion parameters are set in the animation engines, and motion animation of the object is obtained through simulation by changing the motion parameters in the animation engines.
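As an illustrative sketch only (not the exact motion model of any particular animation scene in this application), the following Python snippet shows how physical parameters and initial parameters of an object might be mapped to per-frame motion parameters that an animation engine can consume; the linear deceleration under constant friction, the units, and the 60 Hz frame interval are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class PhysicalParams:
    mass: float      # assigned from appearance parameters (e.g. pixel colors)
    friction: float  # friction force acting on the object during the motion

@dataclass
class InitialParams:
    velocity: float  # initial velocity given to the object (pixels per second)

def motion_parameters(phys: PhysicalParams, init: InitialParams,
                      frame_dt: float = 1 / 60):
    """Sample velocity and displacement once per frame until the object stops.

    Deliberately simple: a constant friction force decelerates the object
    (a = F_f / M); integration is explicit Euler at the display frame rate.
    """
    samples = []
    v, s = init.velocity, 0.0
    deceleration = phys.friction / phys.mass
    while v > 0:
        v = max(0.0, v - deceleration * frame_dt)
        s += v * frame_dt
        samples.append((v, s))  # motion parameters for this frame
    return samples

# The animation engine would then drive the object's position frame by frame
# from the sampled displacements.
frames = motion_parameters(PhysicalParams(mass=2.0, friction=150.0),
                           InitialParams(velocity=300.0))
```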
In this embodiment of the present application, the appearance parameters of the object may include: the color of the pixel points in the object, the coordinates of the pixel points in the object, the area or volume of the object, the subject color in the user interface where the object is located, the transparency of the object, the ambiguity of the object, and the like. As can be understood from the description of the appearance parameters of the objects, the appearance parameters of the objects are parameters that can visually observe the differences between different objects from the visual angle of the user. The method and the device for giving the physical parameters to the object based on the appearance parameters of the object enable the physical parameters of the object to follow the visual perception of the user in the real world. The motion parameters of the object generated by the physical parameters of the object can better conform to the motion process of the object in the real world. Therefore, the motion animation of the object obtained by the embodiment of the application can provide differentiated animation effects among different objects; and the generated motion animation of the object can be more consistent with the motion process of the object in the real world, and the interaction experience of the user is improved.
The following describes an implementation process of assigning physical parameters to the object according to the appearance parameters of the object in step 302.
A method of assigning a quality of an object in a user interface based on an appearance parameter of the object is described, taking an example of the quality of the object.
From experience in life, the darker the color of an object, the greater the mass of the object tends to be. Therefore, the quality of the object can be calculated according to the color of the pixel points in the object.
In the interface where the object is located, there may be a bright theme background or a dark theme background. In a bright theme background, the greater the difference between the color of the object in the user interface and the color of the theme background, the greater the quality of the object; similarly, in a dark theme background, the greater the difference between the color of the object in the user interface and the color of the theme background, the greater the quality of the object. Thus, the quality of the object may be calculated based on a first difference between the color of each pixel point in the object and the color of the theme of the user interface in which the object is located. In practical applications, a function that can represent the difference between two values (the color value of a pixel point and the color value of the theme) can be adopted as the calculation model of the quality of the object.
For example, the quality of each pixel in the object may be calculated through the following functional relationship, and then the quality of the object may be generated according to the quality of each pixel.
m_i = |a_i - b|,
or another function of the difference between these two colors;
where m_i represents the quality of the i-th pixel point in the object, a_i represents the color of the i-th pixel point in the object, and b represents the color of the theme of the user interface.
In practical applications, the color of the theme of the user interface can be preset as one of two types, namely a bright theme and a dark theme, where the color value of the bright theme is a value representing white, e.g., 0xFFFFFF, and the color value of the dark theme is a value representing black, e.g., 0x000000.
When the theme in the user interface of the electronic device is a user-defined picture, a gray-level image can be generated from the theme picture and the mean gray level of its pixel points calculated. A gray-level threshold is set: the current theme is considered a bright theme when the mean gray level of the theme picture is greater than the threshold, and a dark theme when it is less than the threshold.
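A minimal sketch of the theme classification just described, assuming the theme picture is available as a sequence of RGB pixel tuples and that the gray-level threshold (128 here) is a configurable value:

```python
def classify_theme(rgb_pixels, gray_threshold=128):
    """Return 'bright' or 'dark' for a user-defined theme picture.

    rgb_pixels: iterable of (r, g, b) tuples with channels in 0..255.
    A gray-level image is formed with the usual luminance weights, and the
    mean gray level is compared against the threshold.
    """
    grays = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]
    mean_gray = sum(grays) / len(grays)
    return 'bright' if mean_gray > gray_threshold else 'dark'
```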
After the quality of each pixel point in the object is obtained through calculation, normalization processing can be further performed on the quality of each pixel point obtained through calculation:
m_i' = (m_i - μ) / (m_max - m_min);
where m_i' denotes the normalized quality of the i-th pixel point in the object, μ is a user-defined constant, m_max denotes the maximum color value and may be set to 0xFFFFFF, and m_min denotes the minimum color value and may be set to 0x000000.
If μ is 0, the normalized quality of each pixel is in the range of 0 to 1, and therefore, the normalized quality range of the pixels can be adjusted by the constant μ.
For real world objects, the larger the volume, the greater the mass of the object. Correspondingly, for a two-dimensional object, the larger the area, the greater the mass of the object. Objects in a user interface presented by a display screen of an electronic device are typically flat displays. It is understood that the object in the user interface is a two-dimensional object. Thus, the quality of an object in the user interface is related to the area of the object in the user interface. The mass of the object can be calculated in the following manner.
M = m_1' + … + m_i' + … + m_n';
where M represents the mass of the object, m_i' represents the normalized quality of the i-th pixel point in the object, and n represents the number of pixel points in the object.
The more transparent the object, the lighter the mass from the user's perspective. Thus, the quality of an object in a user interface is also related to the transparency of the object.
In view of the above analysis, it can be concluded that, when the impact of the transparency of the object on its quality is considered:
M = K × (m_1' + … + m_i' + … + m_n');
where K represents transparency. K may take a value in the range of 0 to 1. The more transparent the object, the smaller K, and the lighter the mass of the object. Of course, in practical applications, the positive relationship between the transparency and the quality of the object may also be expressed by other functional relationships.
The transparency of the object can be obtained by calling a transparency parameter inside the system. For example, there is an option in a setting interface of the electronic device to set transparency for the object, and the user can set transparency through the option. When quality needs to be given to the object and the influence of transparency on the quality needs to be considered, the transparency value set by the user in the transparency setting option can be obtained.
Of course, if the transparency of the object is not set by the system, the transparency of the object may be defaulted to a preset fixed value, for example, K may be 1.
For convenience of distinction, the quality of an object generated without considering the transparency of the object may be referred to as a first quality value, and the quality of an object generated with considering the transparency of the object may be referred to as a second quality value. In practical applications, the first quality value or the second quality value may be considered as a quality parameter of the object according to specific situations.
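The following sketch strings the above steps together, assuming colors are handled as integers in the range 0x000000 to 0xFFFFFF, that the per-pixel quality is taken as the absolute difference from the theme color, and that μ = 0; with the default K = 1 the result corresponds to the first quality value, otherwise to the second quality value.

```python
COLOR_MIN, COLOR_MAX = 0x000000, 0xFFFFFF

def pixel_quality(pixel_color: int, theme_color: int) -> float:
    # First difference between the pixel color and the theme color.
    return abs(pixel_color - theme_color)

def object_mass(pixel_colors, theme_color: int,
                transparency: float = 1.0, mu: float = 0.0) -> float:
    """Mass of an object: normalized per-pixel qualities summed over the
    object's pixel points (so the mass grows with the area) and scaled by
    the transparency K."""
    normalized = [(pixel_quality(c, theme_color) - mu) / (COLOR_MAX - COLOR_MIN)
                  for c in pixel_colors]
    return transparency * sum(normalized)
```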
A method of assigning a center of gravity of an object in a user interface based on appearance parameters of the object is described, taking the center of gravity of the object as an example.
As described above, the mass of each pixel point in the object can be obtained through calculation, and the coordinate of each pixel point in the object can be obtained, that is, the mass distribution of the object is known, so that the center of gravity of the object can be obtained through calculation according to the mass of each pixel point in the object and the mass distribution of the object.
When calculating the center of gravity of the object, an iterative method may be adopted, and the following process may be specifically referred to:
in a two-dimensional space, a rectangular coordinate system O-XY is adopted. The object can be discretized into n particles (or n pixel points), where the coordinate of the i-th particle is (x_i, y_i), the mass of the i-th particle is m_i', and the mass M of the object is m_1' + … + m_i' + … + m_n'.
The coordinates of the center of gravity of the object are G(x, y), calculated by the following formulas:
x = (x_1·m_1' + … + x_i·m_i' + … + x_n·m_n') / M;
y = (y_1·m_1' + … + y_i·m_i' + … + y_n·m_n') / M.
When the coordinate of the gravity center is calculated in the formula, the normalization quality of each pixel point is adopted, and in practical application, the quality of each pixel point before normalization can also be adopted.
Of course, in practical applications, when an animation effect of a three-dimensional object in three-dimensional space is to be achieved in a user interface presented on a display screen of the electronic device, a spatial rectangular coordinate system O-XYZ may be established in the space where the three-dimensional object is located. The object can be discretized into n particles (or n pixel points), where the coordinate of the i-th particle is (x_i, y_i, z_i), the mass of the i-th particle is m_i', and the mass M of the object is m_1' + … + m_i' + … + m_n'.
The coordinates of the center of gravity of the object are G(x, y, z), obtained by the following formulas:
x = (x_1·m_1' + … + x_i·m_i' + … + x_n·m_n') / M;
y = (y_1·m_1' + … + y_i·m_i' + … + y_n·m_n') / M;
z = (z_1·m_1' + … + z_i·m_i' + … + z_n·m_n') / M.
When the coordinate of the gravity center is calculated in the formula, the normalization quality of each pixel point is adopted, and in practical application, the quality of each pixel point before normalization can also be adopted.
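A sketch of the two-dimensional center-of-gravity computation, reusing the per-pixel (normalized or un-normalized) masses; the three-dimensional case adds a z coordinate handled in exactly the same way.

```python
def center_of_gravity(pixels):
    """pixels: iterable of (x, y, m) tuples, m being the pixel's mass.

    Returns the coordinates G(x, y) as the mass-weighted mean of the pixel
    coordinates, i.e. x = sum(x_i * m_i) / M and y = sum(y_i * m_i) / M.
    """
    pixels = list(pixels)
    total_mass = sum(m for _, _, m in pixels)
    gx = sum(x * m for x, _, m in pixels) / total_mass
    gy = sum(y * m for _, y, m in pixels) / total_mass
    return gx, gy
```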
A method of imparting stiffness to an object in a user interface based on an appearance parameter of the object is described, taking the stiffness of the object as an example.
Generally, stiffness is related to the unit mass of the object. For example, for a piece of cotton and an iron block of the same volume, the cotton has a light mass and a low stiffness, while the iron block has a heavy mass and a high stiffness. Therefore, in order to obtain an animation effect similar to the real world, it may be set that the larger the unit mass of an object, the greater its rigidity; conversely, the smaller the unit mass, the smaller the rigidity. That is, the stiffness of the object has a positive relationship with the unit mass of the object.
In addition, the color of the object may also be related to its rigidity: in the real world, the darker the color of an object, the more solid it feels to the user, and the greater the perceived rigidity. In a bright theme background, it can be set that the greater the difference between the outer border color of the object and the theme color of the user interface, the greater the rigidity of the object; in a dark theme background, it can likewise be set that the greater the difference between the outer border color of the object and the theme color of the user interface, the greater the rigidity of the object. That is, the stiffness of the object is positively related to the difference between the outer border color of the object and the theme color of the user interface.
Based on the above analysis, the stiffness of the object in the user interface can be derived as:
G = |b - c| × k_G × M_s;
where G represents the stiffness of the object, b represents the color of the user interface theme, c represents the outer border color of the object, k_G represents a stiffness conversion coefficient, and M_s represents the unit mass of the object. For convenience of description, the difference between the color of the outer border of the object and the color of the theme of the user interface where the object is located is denoted as the second difference.
The stiffness conversion coefficient k_G may be a preset positive number; it is set so that the calculated stiffness value of the object falls within a reasonable range. For example, the stiffness of metals commonly found in nature is on the order of 10^5 MPa, so the stiffness conversion coefficient ensures that the calculated stiffness value of the object is also on the order of 10^5 MPa.
As described above, the color of the theme of the user interface may be preset as one of two types, namely a bright theme and a dark theme, where the color value of the bright theme is a value representing white, e.g., 0xFFFFFF, and the color value of the dark theme is a value representing black, e.g., 0x000000. c may be the average color of the pixel points within the outer border region of the object (e.g., the outer border region of the calendar icon in fig. 4). If the object has no fixed outer border region, a band extending inward from the outermost boundary of the object by a preset width (e.g., the range between the outermost boundary of the calculator icon and the dotted line in fig. 4) may be used as the outer border of the object, and the average color of the pixel points within that band is taken as the outer border color; the preset width may be set as required.
When the object is a two-dimensional element, M_s = M/S, where S denotes the area of the object; when the object is a three-dimensional element, M_s = M/V, where V denotes the volume of the object. The stiffness conversion coefficient is used to adjust the stiffness value of the object; in practical applications, it can be set to k_G = 1.
In addition, the rigidity of the object can be set to be positively related to its transparency. With the transparency of the object denoted K, the rigidity of the object is:
G = |b - c| × K × k_G × M_s.
the value and the obtaining mode of the transparency K of the object can refer to the description of the transparency of the object when the quality value of the object is generated.
In addition, the stiffness of the object may also be related to the ambiguity (or sharpness) of the object in the user interface. When the object has a blur effect, it gives a feeling similar to energy-absorbing cotton, and the rigidity of the object decreases. Thus, taking the effect of the ambiguity into account, the stiffness of the object is expressed as follows:
G = |b - c| × K × k_G × M_s × A;
where A represents the ambiguity of the object.
In practical applications, A can be set to a value from 0 to 1: the more blurred the object, the smaller the value of A. The ambiguity of the object can also be understood in terms of the object's sharpness: the more blurred the object, the less sharp it is, and the lower its rigidity.
Of course, in practical applications, when a rigidity value is given to an object, the influence of the ambiguity of the object may be considered without considering the influence of the transparency of the object. That is, the stiffness of the object can also be expressed as follows:
G = |b - c| × k_G × M_s × A.
in practical applications, the ambiguity of an object can be obtained as follows: in the case where there is a setting interface of the degree of blur (or the degree of sharpness) of the object, the value of the degree of blur in the setting interface may be acquired. Under the condition that a setting interface is not available for a user to set the ambiguity of an object, the electronic equipment can capture an image of the object through screen capture and calculate the definition of an image area of the object in the screen capture.
For convenience of description, the rigidity of the object generated without considering the transparency and the blur degree of the object may be referred to as a first rigidity, the rigidity of the object generated with considering the transparency of the object may be referred to as a second rigidity, the rigidity of the object generated with considering the blur degree of the object may be referred to as a third rigidity, and the rigidity generated with considering both the transparency and the blur degree of the object may be referred to as a fourth rigidity.
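A sketch of the fourth-rigidity formula above for a two-dimensional object, using the second difference |b - c|, the transparency K, the ambiguity A, and the unit mass M_s = M/S; the default k_G = 1 follows the suggestion above, and all parameter names and defaults are illustrative.

```python
def object_stiffness(theme_color: int, border_color: int,
                     mass: float, area: float,
                     transparency: float = 1.0, blur: float = 1.0,
                     k_g: float = 1.0) -> float:
    """Fourth rigidity: G = |b - c| * K * k_G * M_s * A (two-dimensional case)."""
    second_difference = abs(theme_color - border_color)
    unit_mass = mass / area  # M_s = M / S for a two-dimensional element
    return second_difference * transparency * k_g * unit_mass * blur
```

Setting transparency = 1.0 and blur = 1.0 recovers the first rigidity; setting only one of them recovers the second or third rigidity, respectively.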
As mentioned before, some parameters are only generated during movement in which the position of the object changes, for example, friction experienced by the object during movement, air resistance experienced by the object during movement. These parameters may or may not be related to the appearance parameters of the object.
In an animation scene in which the position of an object changes, a method of assigning dynamic physical parameters (or force-receiving parameters as described above) to the object will be described.
The method of imparting friction to an object in a user interface is described, by way of example, in terms of friction acting on the object.
In real-world motion scenes, friction is essential. Therefore, in order to obtain an animation effect similar to motion in the real world, friction needs to be added to the animation scene. In an animation scene of an object, it is generally the sliding friction acting on the object that is considered. Therefore, the sliding friction force can be taken as one of the physical parameters of the object.
In a user interface of an electronic device, a friction force between an object and another object that generates a relative motion with the object is mainly considered. When an object is in contact with the background in the user interface and in a scene that is moving relative to the background in the user interface, the other object is the background in the user interface.
In addition, since the mass of the object generally affects the magnitude of the frictional force, the frictional force acting on the object can be found as follows:
F_f = M × O_BE;
where F_f is the friction force acting on the object, M is the mass of the object, and O_BE is the relative friction coefficient between the object and the background on which the object is placed.
The relative friction coefficient between an object and the background against which the object is placed can be obtained by:
the friction is first calculated from the perspective of the background on which the object is located; generally, the darker the color, the greater the friction. In addition, since there is no completely smooth object in the real world, the minimum friction value for the background perspective needs to be set greater than 0. Based on the above description, the friction from the perspective of the background on which the object is located can be obtained as:
B_f = e_1 + |0xFFFFFF - d| × k_fB;
where B_f is the background friction, e_1 is the set minimum background friction, d is the color of the background on which the object is placed, and k_fB is a preset background friction conversion coefficient.
Wherein 0XFFFFFF is a value corresponding to white. Since there is no background object with a friction of 0 in nature, a positive number may be set as the minimum background friction, for example, the minimum background friction may be 0.001, 0.05, 0.2, 1, 1.2, etc., and the specific value of the minimum background friction is not limited in the embodiments of the present application.
The background friction conversion coefficient can adjust the value range of the obtained background friction, so that the obtained background friction and the friction in the real world belong to the same magnitude.
In practical application, d is a mean value of colors of pixels in a background where the object is located, for example, the object slides in a blue sky scene, and at this time, the mean value of the colors of the pixels in the background where the object is located represents the mean value of the pixels in the blue sky scene. Or randomly selecting a plurality of pixel points from the background of the object, calculating the mean value of the selected pixel points, and taking the mean value as the mean value of the colors of the pixel points in the background of the object.
Following the calculation approach used for the background-side friction, the friction from the perspective of the object can be obtained as:
E_f = e_2 + |0xFFFFFF - a| × k_fE;
where E_f is the object friction, e_2 is the set minimum object friction, a is the color of the object, and k_fE is a preset object friction conversion coefficient.
Wherein 0XFFFFFF is a value corresponding to white. Since there is no object with a friction of 0 in nature, a positive number may be set as the minimum object friction, for example, the minimum object friction may be 0.001, 0.05, 0.2, 1, 1.2, and the like, and the specific value of the minimum object friction is not limited in the embodiments of the present application.
The value range of the obtained object friction force can be adjusted by the object friction force conversion coefficient, so that the obtained object friction force and the friction force in the real world belong to the same magnitude.
In practical application, a may be an average value of colors of pixel points in the object.
Wherein the background friction scaling factor and the object friction scaling factor may be equal.
The relative friction coefficient between an object and the background on which it is placed can be determined based on the background friction and the object friction. In practice, the relative friction coefficient O_BE between the object and the background may be the product of the background friction and the object friction (B_f × E_f), or another characteristic value derived from the two.
In the real world, the rougher an object is, the greater friction is generally. For an object in the user interface, the ambiguity of the object can also be understood as the roughness of the surface of the object in the real world, and thus the friction force is also related to the ambiguity of the object in the user interface.
Based on the above analysis, the friction force can be obtained as:
F_f = M × O_BE / A;
where A represents the ambiguity of the object, which may take a value between 0 and 1. The more blurred the object, the smaller the value of A and the larger the friction force; therefore, the ambiguity of the object and the friction force are in an inverse relationship.
In this example, the ambiguity of the object may refer to the above description, and the embodiments of the present application are not described herein again. For convenience of description, the friction force of the object generated without considering the degree of blur of the object may be referred to as a first friction force, and the friction force of the object generated with considering the degree of blur of the object may be referred to as a second friction force.
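A sketch combining the background friction, the object friction, the relative friction coefficient, and the second friction force; the minimum friction values and conversion coefficients are illustrative, and the product form of O_BE and the division by the ambiguity A follow the positive and inverse relationships stated above.

```python
WHITE = 0xFFFFFF

def background_friction(background_color: int, e1: float = 0.001,
                        k_fb: float = 1e-8) -> float:
    # B_f = e1 + |0xFFFFFF - d| * k_fB
    return e1 + abs(WHITE - background_color) * k_fb

def object_friction(object_color: int, e2: float = 0.001,
                    k_fe: float = 1e-8) -> float:
    # E_f = e2 + |0xFFFFFF - a| * k_fE
    return e2 + abs(WHITE - object_color) * k_fe

def friction_force(mass: float, background_color: int, object_color: int,
                   blur: float = 1.0) -> float:
    """Second friction force F_f = M * O_BE / A, with O_BE = B_f * E_f."""
    o_be = background_friction(background_color) * object_friction(object_color)
    return mass * o_be / blur
```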
During the movement of real world objects, air resistance is inevitably also present. In order to make the motion animation effect of the object in the user interface closer to the motion effect in the real world, air resistance can be added to the motion process of the object to simulate the air resistance in the real world. Since the air resistance is attached to the object during the movement of the object, the air resistance can be set as a physical parameter of the object. It is generally understood that the faster the speed of movement of the object, the greater the air resistance. For an object in the user interface, it may also be set that the faster the movement speed of the object, the greater the air resistance. Thus, a positive relationship function between air resistance and speed of movement can be set.
F_z = f(v);
where F_z represents the air resistance exerted on the moving object, and v represents the movement speed of the object.
Of course, in practical applications, a global resistance F_Q may also be provided. The global resistance is a fixed resistance and can be set to a relatively small value. Since the global resistance is small, its influence on the animation in which the object participates in the interface is small, and the global resistance may also be left unconfigured.
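As a sketch, the positive relationship F_z = f(v) may be taken to be linear in the movement speed; the coefficient and the optional small global resistance F_Q are assumed values used only for illustration.

```python
def air_resistance(speed: float, k_air: float = 0.01,
                   global_resistance: float = 0.0) -> float:
    # F_z = f(v): f is assumed linear here, plus an optional small fixed
    # global resistance F_Q (0 when the global resistance is not configured).
    return k_air * speed + global_resistance
```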
In order to match different animation scenes, other parameters may also be given to the object, or the physical parameters may be given to the object by using other motion models, which is not limited in the embodiment of the present application.
After some physical parameters are set for the object in the user interface, the motion parameters of the object can be generated for the motion model set in different animation scenes. The embodiment of the application is exemplified by a plurality of animation scenes in the following, and the motion parameters of the object can be obtained by adopting different motion models in different animation scenes, so that the motion animation of the object is generated. Of course, in the same animation scene, different motion models can be used to obtain the motion parameters of the object, so as to generate the motion animation of the object.
As an embodiment of the present application, in an animation in which an object in a user interface participates, there is often a process in which two or more objects collide with each other. In order to make the animation of the object in the user interface similar to the motion of two objects in the real world after collision, the physical parameters of the object can be generated according to the appearance parameters of the object, and then the motion parameters of the object can be generated based on the physical parameters of the object, so that the motion animation of the object can be generated according to the motion parameters of the object.
Taking the collision of object A and object B as an example, the law of conservation of momentum,
M_A·v_ruA + M_B·v_ruB = M_A·v_A0 + M_B·v_B0,
and the law of conservation of kinetic energy,
(1/2)·M_A·v_ruA² + (1/2)·M_B·v_ruB² = (1/2)·M_A·v_A0² + (1/2)·M_B·v_B0²,
can be used to calculate the initial velocities of object A and object B after the collision. Here, v_ruA denotes the velocity of object A at the time of collision (first entry velocity), v_ruB denotes the velocity of object B at the time of collision (second entry velocity), v_A0 denotes the velocity of object A after the collision (first exit velocity), and v_B0 denotes the velocity of object B after the collision (second exit velocity). After obtaining the velocity of object A after the collision, the instantaneous velocity of object A over time and the corresponding displacement may be obtained from the friction force experienced by object A and its post-collision velocity. The motion parameters of object B after the collision can be obtained by referring to the same calculation process as for object A, and are not described in detail here.
Of course, in practical applications, the above model may also be modified to form a new motion model for obtaining the motion parameters of object A and object B in the application scene.
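For the fully elastic case, the two conservation laws have the standard closed-form solution sketched below, assuming one-dimensional motion along the line of impact.

```python
def elastic_collision(m_a: float, v_ru_a: float,
                      m_b: float, v_ru_b: float):
    """Solve momentum and kinetic-energy conservation for the exit velocities.

    Returns (v_A0, v_B0), the first and second exit velocities of object A
    and object B after the collision.
    """
    v_a0 = ((m_a - m_b) * v_ru_a + 2 * m_b * v_ru_b) / (m_a + m_b)
    v_b0 = ((m_b - m_a) * v_ru_b + 2 * m_a * v_ru_a) / (m_a + m_b)
    return v_a0, v_b0
```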
As an example, referring to fig. 5(a), the gesture of the user applies a force to object A (the black ball) in the direction of object B (the icon of application D), so that object A moves towards object B and the two collide; the animation scene is thus a moving object A colliding with a stationary object B. The stationary object B may be an object whose position is not fixed, or an object whose position is fixed.
Taking the example in which the moving object A collides with the stationary object B, the process of generating the motion animation of object A and object B after their collision is described below.
1.1, generating physical parameters of an object A according to the appearance parameters of the object A, and generating physical parameters of an object B according to the appearance parameters of the object B;
in the animation scene, the physical parameters comprise: the mass of object a and the mass of object B, the stiffness of object a and the stiffness of object B. Reference may be made to the above description to assign mass and stiffness to the object.
1.2, obtaining the instantaneous velocity v_ru (third entry velocity) of object A before the collision between object A and object B;
in the embodiment of the present application, the movement from the current position of the object a to the collision with the object B may be divided into two stages:
In the first stage, before the finger of the user is lifted from the touch screen of the electronic device, the object A is pushed downwards and, while subjected to the upward friction force (the friction force given to the object as described above), starts moving from a rest state. Here, the thrust to which the object A is subjected may be set to a fixed value in advance, and the upward friction force may be determined by the method of giving friction force to the object described in the above embodiment.
In the second stage, after the finger of the user is lifted from the touch screen of the electronic device, the object a is subjected to upward friction force, and the object a starts to decelerate until colliding with the object B.
From the above description, the instantaneous velocity v_ru of the object A before the collision of the object A and the object B can be obtained.
1.3, calculating the initial velocity v_A0 of the object A after the collision between the object A and the object B (the third exit velocity) and the initial velocity v_B0 of the object B (the fourth exit velocity).
In the embodiment of the present application, the sum of the initial velocity v_A0 of the object A and the initial velocity v_B0 of the object B may be denoted by v_chu.
Wherein v_chu = v_ru × E_lose. E_lose represents the energy loss rate, i.e. the energy absorbed in the collision, which is usually related to the rigidity characteristic of the objects: it is precisely because an object is not perfectly rigid that energy is absorbed during the collision process. Therefore, it may be provided that:
E_lose = (G_A + G_B)/(2 × G_max), where G_max is a user-defined maximum stiffness constant, and G_A and G_B are the stiffness of the object A and the stiffness of the object B, respectively.
Of course, in practical applications, it is also possible to set the object A and the object B as completely rigid objects, in which case there is no energy loss and v_chu = v_ru.
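As an illustration only, the relation v_chu = v_ru × E_lose with E_lose = (G_A + G_B)/(2 × G_max) might be implemented as in the following Python sketch; clamping the loss rate to the range [0, 1] is an added assumption, not something stated in the embodiment.

def exit_speed_sum(v_ru, g_a, g_b, g_max):
    """Total exit speed of A and B after the collision, reduced by the energy-loss rate."""
    e_lose = (g_a + g_b) / (2 * g_max)      # stiffness-based energy-loss rate
    e_lose = max(0.0, min(1.0, e_lose))     # assumption: keep the rate within [0, 1]
    return v_ru * e_lose


# Completely rigid objects (g_a == g_b == g_max) give e_lose == 1, i.e. v_chu == v_ru.
print(exit_speed_sum(v_ru=2.0, g_a=80.0, g_b=80.0, g_max=80.0))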
After v_chu is calculated, v_chu needs to be assigned to the object A and the object B.
Referring to fig. 5(b), when the object B is a fixed-position object, the velocity assigned to the object B is 0, and therefore all of v_chu is assigned to the object A, i.e. v_A0 = v_chu and v_B0 = 0.
Referring to fig. 5(c), when the position of the object B is not fixed, v_chu is assigned to both the object A and the object B. In practical applications, v_chu can be apportioned between the object A and the object B according to the relationship between the mass of the object A and the mass of the object B.
As an example, v_chu may be apportioned between the object A and the object B in accordance with the ratio of their masses, for instance with each object receiving a share that is inversely related to its own mass.
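A minimal sketch of one possible apportionment rule is given below; because the exact split formula appears only as a formula image in the original, the inverse-mass split used here is an assumption for illustration.

def split_exit_speed(v_chu, m_a, m_b, b_fixed=False):
    """Apportion the total exit speed v_chu between object A and object B.

    Assumption: each object receives a share inversely related to its own mass,
    i.e. proportional to the other object's mass.
    """
    if b_fixed:                 # a fixed-position object B keeps velocity 0
        return v_chu, 0.0
    total = m_a + m_b
    v_a0 = v_chu * m_b / total
    v_b0 = v_chu * m_a / total
    return v_a0, v_b0


print(split_exit_speed(v_chu=1.5, m_a=1.0, m_b=3.0))  # the lighter A gets the larger share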
Of course, there may also be an application scenario as shown in fig. 5(d). In that application scenario, besides calculating the initial velocities of the object A and the object B after the collision through the law of conservation of momentum (M_A·v_ru = M_A·v_A0 + M_B·v_B0) and the law of conservation of kinetic energy (1/2·M_A·v_ru² = 1/2·M_A·v_A0² + 1/2·M_B·v_B0²), other motion models may also be used to calculate the initial velocities of the object A and the object B after the collision; the calculation process in that application scenario is not illustrated here.
1.4, calculating the friction force F_fA and the air resistance F_zA to which the object A is subjected, and the friction force F_fB and the air resistance F_zB to which the object B is subjected.
This step can be referred to the above description relating to imparting frictional force and air resistance to the subject.
When the position of the object B is not fixed, the friction force F_fB and the air resistance F_zB to which the object B is subjected need to be calculated.
In the case where the object B is a fixed-position object, it is not necessary to consider the movement of the object B after the collision, and therefore the friction force F_fB and the air resistance F_zB to which the object B is subjected need not be calculated.
1.5, calculating the time-varying velocity v_At of the object A by v_At = v_A0 + a_A·t, where the acceleration of the object A is a_A = (F_fA + F_zA)/M_A.
When the object B is an object whose position is not fixed, the time-varying velocity v_Bt of the object B is calculated by v_Bt = v_B0 + a_B·t, where the acceleration of the object B is a_B = (F_fB + F_zB)/M_B.
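The post-collision motion can then be sampled frame by frame. The following Python sketch assumes the friction force and the air resistance are given as constant magnitudes opposing the motion; the frame interval and stopping condition are illustrative.

def post_collision_motion(v0, friction, air_resistance, mass, dt=1/60, t_max=5.0):
    """Sample velocity and displacement over time until the object stops or t_max is reached.

    friction and air_resistance are magnitudes of forces opposing the motion.
    """
    decel = (friction + air_resistance) / mass
    samples, t, x, v = [], 0.0, 0.0, v0
    while v > 0 and t <= t_max:
        samples.append((t, v, x))
        x += v * dt
        v = max(0.0, v - decel * dt)   # v(t) = v0 - a*t, clipped at rest
        t += dt
    return samples


frames = post_collision_motion(v0=1.2, friction=0.3, air_resistance=0.1, mass=0.5)
print(frames[-1])   # last sampled (time, velocity, displacement) before the object stops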
Based on the calculated speed (or displacement) of the object a and the object B with respect to time, a motion animation closer to the real world after the collision can be constructed.
As can be understood from the above description, in the process of calculating the motion parameters after the collision between the object a and the object B, an appropriate motion model may be selected according to a specific scene.
As another embodiment of the present application, in the real world, when a spring exists under an object (or the support on which the object rests provides an elastic force), the object may rebound after a user applies a downward pressing force to it. The embodiment of the present application takes this application scenario as an example and describes how to simulate a press rebound animation close to the real world based on the appearance parameters of an object.
Referring to fig. 6(a) to 6(d), when a user presses an object in the user interface, if an elastic force is assumed to act below the object in the user interface, the object may rebound. Here, the press rebound animation is simulated with a contact card in the user interface. Before the rebound animation is generated, the physical parameters required by the rebound animation are first calculated.
2.1, calculating the quality of the object according to the appearance parameters of the object.
This step may refer to the process of imparting mass to the object as described above. Here, the object may be the business card of Lily in fig. 6(a) to 6(d).
And 2.2, calculating the resilience force acting on the object according to the rebound degree and the mass of the object.
In practical applications, it can be considered that a spring is disposed at the bottom end of the object, and the user applies a pressing force to the object by pressing on the touch screen of the electronic device. In the case where a pressure sensor is provided in the touch screen of the electronic device, the pressing force of the user on the screen acquired by the pressure sensor may be taken as the pressing force acting on the object. When the touch screen of the electronic device is not provided with a pressure sensor, the pressing force applied to the object can be obtained, according to a correspondence preset in the system, from the pressing time (contact duration) of the contact or the contact area of the contact detected by the touch screen. The longer the pressing time, or the larger the contact area between the contact point of the user's gesture and the touch screen, the larger the corresponding pressing force and the farther the object is pressed downwards (which can also be understood as the larger the compression distance of the spring).
Of course, it is also possible to set in the system that the pressing force exerted on the object is a fixed value. In a specific implementation, a suitable model for obtaining the pressing force to which the object is subjected may be selected based on the processing capability of the processor of the electronic device.
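A minimal sketch of such a model is given below, assuming a preset linear correspondence between the pressing force and the contact duration and contact area; the coefficients and the function name are illustrative, not part of the embodiment.

def pressing_force(contact_time_s, contact_area_px, k_time=2.0, k_area=0.01, f_base=1.0):
    """Estimate the pressing force from touch data when no pressure sensor exists.

    Assumption: the force grows linearly with contact duration and contact area,
    starting from a preset base force f_base.
    """
    return f_base + k_time * contact_time_s + k_area * contact_area_px


print(pressing_force(contact_time_s=0.4, contact_area_px=150))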
Assuming that the pressing force acting on the object is F_Y and the gravity of the object is G_g, then, according to the principle that the acting force and the reacting force are equal, the resilience force acting on the object can be obtained as F_T = F_Y + G_g.
As another embodiment of the present application, a rebound degree K_T opposite to the pressing direction can be set. Since the degree of rebound needs to overcome the weight of the object to make the object rebound, the resilience force acting on the object can be obtained from the degree of rebound and the mass of the object.

That is, F_T = K_T/M, where M is the mass of the object.
In practical application, the degree of rebound can be set to a fixed value, so that the resilience force of the object can be calculated; alternatively, a corresponding degree of rebound can be obtained based on the pressing force, the pressing time or the contact area of the user, and the resilience force F_T acting on the object is then calculated from the degree of rebound and the mass of the object.
As another embodiment of the present application, since the object may be a non-completely rigid object, the rigidity of the object may be calculated with reference to the above example.
In a specific implementation, when calculating the resilience force, the stiffness of the object is multiplied in, so that the resilience force exerted on the object is obtained.
For example, F_T = G × K_T/M.
For convenience of description, the resilience force generated without considering the rigidity of the object may be referred to as a first resilience force, and the resilience force generated considering the rigidity of the object may be referred to as a second resilience force.
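The variants of the resilience force described above might be sketched as follows; the function names are illustrative and the formulas follow F_T = F_Y + G_g, F_T = K_T/M and F_T = G × K_T/M.

def resilience_from_press(f_press, gravity):
    """Reaction-based variant: pressing force plus the object's weight."""
    return f_press + gravity


def first_resilience(k_rebound, mass):
    """First resilience force: rebound degree divided by the object's mass."""
    return k_rebound / mass


def second_resilience(k_rebound, mass, stiffness):
    """Second resilience force: additionally scaled by the object's stiffness."""
    return stiffness * k_rebound / mass


print(first_resilience(k_rebound=6.0, mass=2.0), second_resilience(6.0, 2.0, 0.8))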
And 2.3, calculating according to the resilience force to obtain the pressing displacement of the object.
In practical applications, it may be provided that the spring constant of the spring under the object is constant. It can be derived that the compression length of the spring (pressing displacement of the object) is:
x = F_T/k.
Of course, when the object and the spring under it are in an equilibrium state due to the gravity of the object, the spring is already in a compressed state. Therefore, the compression length x_p of the spring in that equilibrium state (the equilibrium displacement of the object) can be calculated as x_p = G_g/k, and x − x_p gives how much further the spring is compressed when pressed.
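A short sketch of the pressing displacement, assuming Hooke's law with a constant spring coefficient as described above; subtracting the equilibrium compression x_p is optional.

def pressing_displacement(f_resilience, k_spring, gravity=None):
    """Compression of the spring due to pressing, optionally relative to the equilibrium state.

    If gravity is given, the equilibrium compression x_p = gravity / k_spring is subtracted,
    leaving only the extra compression caused by the press.
    """
    x = f_resilience / k_spring
    if gravity is None:
        return x
    x_p = gravity / k_spring
    return x - x_p


print(pressing_displacement(f_resilience=4.0, k_spring=20.0, gravity=1.0))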
Referring to fig. 6(a), when a user presses down one business card, the business card may move down a certain distance. Objects that are further away appear smaller due to the user's perspective. Therefore, from the user's perspective, the object looks as if it is getting smaller, where the position indicated by the dotted line in fig. 6(a) is the position of the business card when no pressing force is applied.
As can be understood from steps 2.1 to 2.3, in the case where the pressing force acting on the object is not a constant value, the business card may become smaller as the degree of pressing force of the user increases, the contact time of the contact point in the gesture becomes longer, or the contact area of the contact point in the gesture increases.
2.4, under the condition that the spring coefficient of the spring, the compression length of the spring and the mass of the object above the spring are known, the motion parameters of the object after the user's gesture lifts off the touch screen of the electronic equipment can be calculated, and therefore motion animation of the object is generated.
After the gesture of the user lifts off the touch screen of the electronic device and before the object bounces to the position corresponding to the equilibrium state, the forces acting on the object are the downward gravity and the upward resilience force. The weight of the object is constant, and the upward resilience of the object is related to the compressed length of the spring.
In practical applications, according to the principle of conversion between the elastic potential energy of the spring and kinetic energy, the object may bounce upward under the action of the resilience force and gravity to the position corresponding to the equilibrium state (shown in fig. 6(b)), where the spring is still in a compressed state due to gravity; it then continues to bounce upward beyond the equilibrium position (fig. 6(c)), and after exceeding the position corresponding to the equilibrium state the business card becomes larger; finally, the card falls back to the position corresponding to the equilibrium state (fig. 6(d)), where its size is the same as in fig. 6(b). Of course, in practical applications, air resistance also acts, which lets the object and the spring return to equilibrium and stabilize. The principle of conversion between the elastic potential energy of the spring and kinetic energy is therefore considered together with the work done by air resistance.
That is, by the relation in which the elastic potential energy of the spring equals the kinetic energy of the object plus the work done against air resistance (for example, 1/2·k·x² = 1/2·M·v² + W_air), the velocity of the object at each position can be derived.
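As an illustration, the energy balance above can be evaluated at any height during the rebound. The sketch below folds gravity into the equilibrium position and treats the air resistance as a constant force; both are simplifying assumptions.

import math


def rebound_speed(k_spring, x_pressed, mass, f_air, s):
    """Speed of the object after it has risen a distance s from the pressed position.

    Energy balance (assumption): the released elastic potential energy equals the
    kinetic energy plus the work done against air resistance over the distance s.
    """
    x_now = max(0.0, x_pressed - s)                       # remaining spring compression
    released = 0.5 * k_spring * (x_pressed**2 - x_now**2)
    kinetic = released - f_air * s
    return math.sqrt(2 * kinetic / mass) if kinetic > 0 else 0.0


print(rebound_speed(k_spring=20.0, x_pressed=0.15, mass=0.5, f_air=0.2, s=0.1))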
Of course, in practical applications, when the object is light or the pressing force is large, the object may rebound several times before returning to the position corresponding to the equilibrium state; when the object is heavy or the pressing force is small, the object may rebound from the pressed position directly to the position corresponding to the equilibrium state.
It should be noted that, in practical applications, in the process of calculating the motion parameter of the object, some existing physical laws may be revised to reduce the amount of calculation, which is not limited in the embodiment of the present application.
As another embodiment of the present application, in the real world, when a pressing force is applied to an object, the object may also tilt, deform, or the like. The embodiment of the present application describes a method of generating a motion animation of an object when a pressing force is applied to the object in the interface.
3.1, calculating the mass and the gravity center of the object;
this step may be referred to as the method of assigning mass and center of gravity to the subject as previously described.
3.2, obtaining the pressing force.
In the embodiment of the present application, a correspondence between the pressing force and the pressing area (the contact area between the gesture of the user and the touch screen) may be preset; as an example, a positive functional relationship may be set between the contact area of the user's gesture on the touch screen and the pressing force.
Of course, in practical applications, the pressing force may also be obtained by the pressure sensor on the touch panel provided with the pressure sensor.
And 3.3, calculating the inclination displacement of the object according to the magnitude of the pressing force.
In this step, the principle that the center of gravity does not change after the object is pressed is followed (i.e., the tilt axis when the object tilts coincides with the center of gravity of the object). Referring to fig. 7(a) to 7(c), in fig. 7(a) the dot above the object indicates the position of the center of gravity of the object. When the user applies a downward pressing force at the position shown in fig. 7(b), the object tilts about the axis at the center of gravity (the straight line between the position of the pressing force and the position of the center of gravity is perpendicular to the tilt axis), forming the tilt shown in fig. 7(b), where the dotted line indicates the position of the object before the pressing force is applied. When the user applies a downward pressing force at the position shown in fig. 7(c), the object likewise tilts about the axis at the center of gravity, as shown in fig. 7(c), where the dotted line indicates the position of the object before the pressing force is applied.
In practical applications, the display interface of the electronic device needs to present not only the tilt effect, but also an animation process of the tilt. Therefore, it is necessary to calculate the inclination process of the object based on the position of the pressing force and the magnitude of the pressing force.
Fig. 7(a) to 7(c) are similar to a scene in which two persons play on a seesaw in the real world: because the fulcrum is at the center of gravity, the weights on the two sides of the seesaw are balanced, and after a downward force is applied to one side of the seesaw, the person on that side needs to push off with the legs to give an upward elastic force in order to be tilted up again. Therefore, in the scene shown in fig. 7(a), springs may be provided below the two sides of the object's center of gravity, the springs being used to give the object an upward elastic force. Specifically, reference may be made to fig. 8(a) and 8(b).
Referring to fig. 8(a), a spring is disposed under each side of the object. Each side is subjected to an upward elastic force, which can be represented by kx, where k represents the spring coefficient and x represents the stretching distance of the spring, and to a downward gravitational force determined by the mass of the object. Based on the theorem F = ma, the acceleration, and hence the motion, of each side of the object can then be derived from the net of the elastic force and the gravitational force.
Referring to fig. 8(b), the spring coefficient of the spring may be set to be determined by the distance between the position of the applied force and the center of gravity together with the moment, which may be represented, for example, by k = aL, where a is a moment coefficient and L represents the distance between the position of the applied force and the center of gravity.
The time-varying tilt displacement of the object can be obtained by the above two formulas.
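A minimal simulation sketch of the tilt displacement over time is given below, assuming k = aL for the effective spring coefficient and attributing half of the object's mass to the pressed side; both choices are illustrative assumptions.

def tilt_displacement(f_press, mass, dist_to_cog, a_torque=50.0, dt=1/60, steps=120):
    """Sample the downward tilt displacement of the pressed side over time.

    Assumptions: spring coefficient k = a_torque * dist_to_cog, half of the object's
    mass is attributed to the pressed side, and the motion starts at rest.
    """
    k = a_torque * dist_to_cog
    m_side = mass / 2
    x, v, track = 0.0, 0.0, []
    for i in range(steps):
        accel = (f_press - k * x) / m_side   # net force: press minus the spring's restoring force
        v += accel * dt
        x += v * dt
        track.append((i * dt, x))
    return track


print(tilt_displacement(f_press=2.0, mass=0.8, dist_to_cog=0.3)[-1])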
Of course, in practical applications, other forms of tilt effects are possible. As shown in fig. 9(a), the uppermost layer shows the state of the object when no pressing force is applied; the intermediate layer shows that when a pressing force is applied to one corner of the object, the opposite corner acts as a fulcrum and a tilting effect is produced (the dotted line represents the original state of the object); the lowermost layer shows that when a pressing force is applied to one of the sides of the object, the opposite side acts as the tilt axis and a tilting effect is produced (the dotted line indicates the original state of the object). Of course, other tilting effects can also be generated, and the embodiment of the present application is not limited thereto. As can be understood from fig. 9(a), the tilt axis is perpendicular to the first line (the line between the point of application of the pressing force acting on the object and the center of gravity of the object), but the tilt axis does not coincide with the center of gravity.
And 3.4, calculating the rigidity of the object, and calculating the degree of shape change of the object caused by pressing according to the rigidity and the pressing force.
In practical applications, it is preset that the larger the rigidity of the object, the smaller the strain, and the larger the pressing force, the larger the strain. That is, the degree of deformation is in an inverse relationship with the rigidity of the object and in a positive relationship with the pressing force.
By way of example, B_x = X × F_Y/G, where B_x represents the degree of deformation of the object, X represents a preset constant that allows the degree of deformation of the object to be adjusted, F_Y represents the pressing force, and G represents the stiffness of the object.
The degree of deformation indicates how strongly the object deforms. After the degree of deformation is determined, the deformation area can also be determined from the area of the object. For example, the deformation area produced by the object can be obtained by multiplying the area of the object by the degree of deformation. Finally, the deformation region of the object is generated according to the deformation area. The center of the deformation region may be set in advance to the stress point of the pressing force acting on the object.
Further, since the deformation region is centered on the stress point of the pressing force, the calculated deformation region may extend beyond the region of the object itself. In this case, the deformation animation is generated within the region of the object, and no deformation animation is generated in the region outside the object.
Referring to fig. 9(b), the uppermost layer shows a state of the object when no pressing force is applied; the middle layer shows that when a pressing force is applied to one corner of the object, the corner generates a deformation effect (the dotted line represents the original state of the object); the lowermost layer shows that when a pressing force is applied to one of the sides of the object, the side produces a deformation effect (the dotted line indicates the original state of the object).
As shown in fig. 9(b), when the user presses a corner of the object, the resulting deformation region may be centered at that corner. Since only 1/4 of the calculated deformation region lies within the region where the object is located and the other 3/4 lies outside it, the generated deformation animation of the object covers only the 1/4 of the deformation region that lies within the object. Of course, other deformation effects may also be generated, which is not limited in this application.
Of course, in addition to the deformation region, a deformation depth (depth of the depression) may also be generated, which is related to the degree of deformation: the greater the degree of deformation, the greater the deformation depth. The deformation depth is in a positive relationship with the degree of deformation, for example h_F = k_h × B_x, where h_F is the deformation depth and k_h is a preset depth coefficient used to adjust the deformation depth.
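The deformation degree, deformation area and deformation depth described in this step might be computed together as in the following sketch; the constant values are illustrative.

def press_deformation(f_press, stiffness, object_area, x_const=1.0, k_depth=0.5):
    """Deformation degree B_x = X * F_Y / G, deformation area, and deformation depth h_F = k_h * B_x."""
    b_x = x_const * f_press / stiffness      # degree of deformation
    area = object_area * b_x                 # deformed area, centered on the stress point
    depth = k_depth * b_x                    # depth of the depression
    return b_x, area, depth


print(press_deformation(f_press=3.0, stiffness=0.6, object_area=200.0))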
And 3.5, calculating the position and the elasticity change of the object caused by pressing according to the mass of the object.
Step 3.5 may refer to the process of generating the motion animation of press rebounding described in the previous embodiment, or may simplify the motion model adopted in the process of press rebounding described in the previous embodiment, which is not described herein again.
Referring to fig. 9(c), in each of the 3 diagrams the upper layer shows the state of the object when no pressing force is applied and the lower layer shows the effect of a pressing force. The 3 diagrams in fig. 9(c) show, from left to right, pressing forces from small to large: in the first diagram of fig. 9(c), the pressing force acting on the object is the smallest and the pressing displacement of the object is the smallest, and in the third diagram of fig. 9(c), the pressing force acting on the object is the largest and the pressing displacement of the object is the largest.
As can be understood from fig. 9(a) to 9(c), step 3.3 takes into account the inclination of the object caused by pressing in the pressing animation, step 3.4 takes into account the deformation of the object caused by pressing, and step 3.5 takes into account the displacement of the object caused by pressing and the press rebound. Of course, in practical application, the pressing animation of the object may be set to take into account any one, two, or three of the inclination, the deformation, and the displacement with rebound, which is not limited in the embodiment of the present application.
In the embodiment of the present application, in generating the pressing animation of the object, a stereoscopic pressing animation effect or a planar pressing animation effect may be generated.
Referring to fig. 10, the first diagram in fig. 10 is a plan view of the object, the second diagram is a three-dimensional pressing effect diagram of the object, and the 3 rd diagram is a planar pressing effect diagram of the object. The dotted line in fig. 10 represents a plan position view before the object is pressed.
Referring to fig. 11, the first diagram in fig. 11 is a plan view of an object, the second diagram is a three-dimensional pressing effect diagram of the object, and the 3 rd diagram is a planar pressing effect diagram of the object. The dotted line in fig. 11 represents a plan position view before the object is pressed.
In practical applications, the position corresponding to the finger of the user in the second and third diagrams of fig. 11 may also exhibit a pressing deformation effect, for example a concave deformation centered on the position corresponding to the finger. This concave deformation is not shown in fig. 11. It can be understood with reference to the pressing deformation effect shown in the third diagram of fig. 9(b): taking the straight line through the force point of the pressing force in fig. 11 as the section line, the concave deformation corresponds to a cross-sectional view of the pressing deformation effect shown in the third diagram of fig. 9(b).
As another embodiment of the present application, when a plurality of objects are arranged in a chain manner in the real world, an external force applied to one of the objects may cause the plurality of objects arranged in the chain manner to perform a chain movement. When a plurality of objects exist in the user interface and are arranged in a chain manner, if an external force is applied to one of the objects, the plurality of objects arranged in the chain manner can also generate animation which runs in a chain manner as in the real world.
By way of example, reference is made to fig. 12, which shows a perspective view (the right-hand diagram in fig. 12), a corresponding top view (the left-hand diagram in fig. 12), and a left view (the lower diagram in fig. 12) of a plurality of objects arranged in a chain. If an elastic connection exists among the plurality of objects, a motion animation of the chain motion can be generated as follows in a specific implementation.
(1) The mass of each object in the chain arrangement is calculated.
This step is described with reference to the above description of the quality assigned to the object and will not be described in detail here.
(2) Based on the applied external force, the current object is caused to move.
By way of example, referring to FIG. 13, when an upward pulling force is applied to object A, the current object will have an upward motion.
(3) The motion effect of the current element is passed to other objects in the neighborhood.
As an example, in the case where the object a generates an upward motion, the object B1 and the object B2 adjacent to the object a also generate an upward motion effect.
(4) Each object is subjected to the transmitted external force, so that the current element generates movement and continues to influence the adjacent elements.
Similarly, after the object B1 generates a corresponding motion, the object C1 adjacent to the object B1 also generates a corresponding motion; after the object B2 generates a corresponding motion, the object C2 adjacent to the object B2 also generates a corresponding motion, and correspondingly the objects D1 and D2 also generate corresponding motions. The force application object of the object B1 is the object A, the force application object of the object C1 is the object B1, and the force application object of the object D1 is the object C1. The force application object of the object B2 is the object A, the force application object of the object C2 is the object B2, and the force application object of the object D2 is the object C2.
Of course, in practical applications, the transmitted effect may differ according to the mass of the object. For example, if the transmitted chain force (the force transmitted by the force application object among the objects arranged in a chain) is F, then without considering the influence of mass the chain force received by the receiving object is F, whereas if the influence of mass is considered, the chain force received by the receiving object is F_ch = F/M, where M is the mass of the receiving object.
The mass factor is considered in the chain motion because, in general, the larger the mass of an object, the smaller the influence of dragging on it during the dragging process, and the smaller the mass of an object, the larger the influence of dragging on it.
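A sketch of propagating the chain force along chain-arranged objects is given below; it assumes that each object passes on the chain force it received, which is one possible reading of the transmission rule above.

def propagate_chain_force(masses, f_initial):
    """Chain force received by each object in a chain, starting from the force applier.

    masses[0] is the object the external force acts on; each later object receives the
    chain force of its force-application (previous) object divided by its own mass.
    """
    forces = [f_initial]
    for m in masses[1:]:
        forces.append(forces[-1] / m)
    return forces


# Object A (index 0) is pulled with force 4; heavier neighbours receive a smaller chain force.
print(propagate_chain_force([1.0, 2.0, 4.0, 8.0], f_initial=4.0))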
It should be noted that, since the object for which the motion animation is generated is not an object existing in the real world, some motion models are inevitably revised when calculating the motion parameters of the object, in order to reduce the amount of calculation, so the generated motion animation may differ somewhat from real-world motion. Alternatively, because the motion of an object in the real world is complex, the complex motion process may be divided into a plurality of motion stages, and different motion models may be adopted in different motion stages so as to reduce the amount of calculation. The embodiment of the application does not limit the motion model used to obtain the motion parameters of the object after the physical parameters of the object are obtained, so as to generate the motion animation of the object.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation. The following description will be given by taking the case of dividing each function module corresponding to each function:
referring to fig. 14, the electronic device 1400 includes:
an object display module 1401, configured to display a first object and a second object on a user interface, where the first object and the second object are of the same type and have different appearances;
an animation generation module 1402 for generating a first motion animation of a first object in response to a first operation acting on the first object;
the animation generation module 1402 is further configured to generate a second motion animation of the second object in response to a second operation acting on the second object, where the first operation and the second operation are the same, and the first motion animation and the second motion animation are different.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
detecting a first operation acting on a first object;
acquiring appearance parameters of a first object, and endowing the first object with physical parameters according to the appearance parameters of the first object;
acquiring initial parameters of a first object;
based on the physical parameters of the first object and the initial parameters of the first object, a motion animation of the first object is generated.
As another embodiment of the present application, the appearance parameters of the first object include: the color of a pixel point in the first object, and the physical parameters of the first object include: a mass of the first object;
the animation generation module 1402 is further configured to:
assigning physical parameters to the first object based on the appearance parameters of the first object comprises:
and generating the quality of the first object according to the color of the pixel point in the first object.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
and generating a first quality value of the first object according to a first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located, and taking the first quality value of the first object as the quality of the first object.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
generating pixel quality of the pixel points in the first object according to a first difference between the color of the pixel points in the first object and the theme color of the user interface where the first object is located;
and generating a first quality value of the first object according to the pixel quality of the pixel points in the first object.
As another embodiment of the present application, the appearance parameters of the first object include: a transparency of the first object; the animation generation module 1402 is further configured to:
before the first quality value of the first object is taken as the quality of the first object, a second quality value of the first object is generated according to the transparency of the first object and the first quality value of the first object, and the second quality value of the first object is taken as the quality of the first object.
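By way of illustration, the mass derived from pixel colors and transparency might be sketched as follows; the color-difference metric, the averaging, and the use of transparency as a direct scale factor are assumptions made for this example only.

def object_mass(pixel_colors, theme_color, transparency=1.0):
    """Quality (mass) of an object derived from its pixel colors.

    Assumptions: colors are (r, g, b) tuples, the per-pixel quality is the normalized
    absolute difference from the user-interface theme color, the object's first quality
    value is the mean pixel quality, and transparency (0..1) scales it into a second value.
    """
    def diff(c1, c2):
        return sum(abs(a - b) for a, b in zip(c1, c2)) / (3 * 255)

    pixel_quality = [diff(c, theme_color) for c in pixel_colors]
    first_quality = sum(pixel_quality) / len(pixel_quality)
    return first_quality * transparency        # second quality value used as the mass


print(object_mass([(10, 10, 10), (200, 30, 30)], theme_color=(255, 255, 255), transparency=0.8))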
As another embodiment of the present application, the appearance parameters of the first object include: the color of the pixel point in the first object and the coordinate of the pixel point in the first object; the physical parameters of the first object include: barycentric coordinates of the first object;
the animation generation module 1402 is further configured to:
generating pixel quality of a pixel point in the first object according to a first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located;
and calculating the barycentric coordinate of the first object according to the coordinate of the pixel point in the first object and the pixel quality of the pixel point in the first object.
As another embodiment of the present application, the appearance parameters of the first object further include: the outline color of the first object, the area or volume of the first object; the physical parameters of the first object include: a stiffness of the first object;
the animation generation module 1402 is further configured to:
calculating a second difference between the color of the outer border of the first object and the color of the theme of the user interface where the first object is located;
generating a unit mass of the first object based on the mass of the first object and the area or volume of the first object;
generating a first stiffness of the first object from the second difference and the unit mass of the first object, and taking the first stiffness of the first object as the stiffness of the first object.
As another embodiment of the present application, the appearance parameters of the first object further include: a transparency of the first object;
the animation generation module 1402 is further configured to:
generating a second stiffness of the first object according to the first stiffness of the first object and the transparency of the first object before the first stiffness of the first object is taken as the stiffness of the first object, and taking the second stiffness of the first object as the stiffness of the first object.
As another embodiment of the present application, the appearance parameters of the first object further include: an ambiguity of the first object;
the animation generation module 1402 is further configured to:
generating a third stiffness of the first object according to the first stiffness of the first object and the ambiguity of the first object before the first stiffness of the first object is taken as the stiffness of the first object, and taking the third stiffness of the first object as the stiffness of the first object.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
generating a relative friction coefficient between the first object and the background of the first object according to the background color of the first object and the color of the background of the first object;
and calculating a first friction force acting on the first object according to the mass and the relative friction coefficient of the first object, and taking the first friction force as the friction force acting on the first object during the movement of the first object.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
generating an object friction force of the first object according to the ground color of the first object;
generating background friction of the background of the first object according to the color of the background of the first object;
and generating a relative friction coefficient between the first object and the background where the first object is located according to the object friction force and the background friction force.
As another embodiment of the present application, the appearance parameters of the first object further include: an ambiguity of the first object;
the animation generation module 1402 is further configured to:
before the first frictional force is taken as the frictional force acting on the first object during the movement of the first object, a second frictional force acting on the first object is generated based on the first frictional force and the degree of blur of the first object, and the second frictional force is taken as the frictional force acting on the first object during the movement of the first object.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
the speed of the first object is acquired, and the air resistance acting on the first object during the movement of the first object is generated according to the speed of the first object.
As another embodiment of the present application, the animation generation module 1402 is further configured to:
generating a motion parameter of the first object based on the physical parameter of the first object and the initial parameter of the first object;
and generating the motion animation of the first object according to the motion parameters of the first object.

As another embodiment of the present application, after the first operation acts on the first object, the first object and the third object collide;
the physical parameter of the first object comprises the mass of the first object, and the initial parameter of the first object comprises the first entry velocity of the first object;
the animation generation module 1402 is further configured to:
acquiring physical parameters of a third object and initial parameters of the third object, wherein the physical parameters of the third object comprise the mass of the third object, and the initial parameters of the third object comprise a second entry velocity of the third object;
calculating a first exit velocity of the first object and a second exit velocity of the third object from the mass of the first object, the mass of the third object, the first entry velocity and the second entry velocity based on a momentum conservation law and an energy conservation law;
calculating the time-varying velocity and/or displacement of the first object after the collision based on the first exit velocity of the first object and the friction force acting on the first object after the collision;
the velocity and/or displacement of the third object over time after the collision is calculated from the second exit velocity of the third object and the frictional force acting on the third object after the collision.
As another embodiment of the present application, after the first operation acts on the first object, the first object collides with a stationary third object;
the physical parameters of the first object comprise the mass of the first object and the stiffness of the first object, and the initial parameters of the first object comprise the third entry velocity of the first object;
the animation generation module 1402 is further configured to:
acquiring physical parameters of a third object, wherein the physical parameters of the third object comprise the mass of the third object and the rigidity of the third object;
calculating the sum of a third exit velocity of the first object and a fourth exit velocity of the third object according to the third entry velocity of the first object, the stiffness of the first object and the stiffness of the third object;
calculating a third exit velocity of the first object and a fourth exit velocity of the third object according to the sum of the third exit velocity of the first object and the fourth exit velocity of the third object and the ratio of the mass of the first object and the mass of the third object;
calculating the time-varying velocity and/or displacement of the first object after the collision according to the third exit velocity of the first object and the friction force acting on the first object after the collision;
the velocity and/or displacement of the third object over time after the collision is calculated based on the fourth exit velocity of the third object and the frictional force acting on the third object after the collision.
As another embodiment of the present application, the first operation is a pressing operation acting on the first object, the first object generating press rebounding;
the physical parameters of the first object include: a mass of the first object; the initial parameters of the first object include: a degree of rebound acting on the first object and a spring constant of the elastic member acting on the first object;
the animation generation module 1402 is further configured to: generating a first resilience force acting on the first object according to the degree of resilience and the mass of the first object;
calculating the pressing displacement of the first object according to the first resilience force and the elastic coefficient;
calculating elastic potential energy in the scene where the first object is located according to the pressing displacement and the elastic coefficient of the first object;
the motion parameters of the first object are obtained based on a model in which elastic potential energy is equal to kinetic energy of the first object and air resistance acting on the first object does work.
As another embodiment of the present application, the physical parameters of the first object further include: a stiffness of the first object;
the animation generation module 1402 is further configured to:
generating a second resilience force acting on the first object according to the first resilience force and the rigidity of the first object;
and generating the pressing displacement of the first object according to the second resilience force and the elasticity coefficient.
As another embodiment of the present application, the first operation is a pressing operation used on a first object, the first object generating a pressing inclination;
the physical parameters of the first object include: a center of gravity of the first object; the initial parameters of the first object include: a stress point of a pressing force acting on the first object;
when the first object generates the pressing tilt motion, the tilt axis of the first object is perpendicular to a first line which is a line between a point of application of a pressing force acting on the first object and the center of gravity of the first object.
As another embodiment of the present application, the tilt axis of the first object coincides with the center of gravity of the first object.
As another embodiment of the present application, the first operation is a pressing operation acting on the first object, and the first object generates a pressing deformation;
the physical parameters of the first object include: a stiffness of the first object; the initial parameters of the first object comprise pressing force acting on the first object and stress points of the pressing force; the motion parameters of the first object comprise a deformation region of the first object;
the animation generation module 1402 is further configured to:
calculating a degree of deformation of the first object based on the pressing force acting on the first object and the rigidity of the first object;
calculating the deformation area of the first object according to the deformation degree of the first object and the area of the first object;
and generating a deformation area of the first object according to the deformation area of the first object, wherein the center of the deformation area is the stress point of the pressing force acting on the first object.
As another embodiment of the present application, when the first operation acts on the first object, a chain motion is generated between the first object and a fourth object arranged in a chain with the first object, wherein a chain force applied to the fourth object is a chain force applied to a force application object of the fourth object divided by a mass of the fourth object.
It should be noted that, because the contents of information interaction, execution process, and the like between the electronic devices/modules are based on the same concept as that of the method embodiment of the present application, specific functions and technical effects thereof may be referred to specifically in the method embodiment section, and are not described herein again.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the electronic device is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module, and the integrated module may be implemented in a form of hardware, or in a form of software functional module. In addition, specific names of the functional modules are only used for distinguishing one functional module from another, and are not used for limiting the protection scope of the application. For the specific working process of the module in the electronic device, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the electronic device may implement the steps in the above method embodiments.
Embodiments of the present application further provide a computer program product, which when run on a first device, enables the first device to implement the steps in the foregoing method embodiments.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or apparatus capable of carrying computer program code to a first device, including recording media, computer Memory, Read-Only Memory (ROM), Random-Access Memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
An embodiment of the present application further provides a chip system, where the chip system includes a processor, the processor is coupled to the memory, and the processor executes a computer program stored in the memory, so as to enable the electronic device to implement the steps of any of the method embodiments of the present application. The chip system may be a single chip or a chip module composed of a plurality of chips.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (25)

1. A method for generating animation of an object in an interface is characterized by comprising the following steps:
displaying a first object and a second object on a user interface, wherein the first object and the second object are the same in type and different in appearance;
generating a first motion animation of the first object in response to a first operation acting on the first object;
generating a second motion animation of the second object in response to a second operation acting on the second object, the first operation and the second operation being the same, the first motion animation and the second motion animation being different.
2. The method of claim 1, wherein generating a first motion animation of the first object in response to a first operation acting on the first object comprises:
detecting a first operation acting on the first object;
acquiring appearance parameters of the first object, and endowing physical parameters to the first object according to the appearance parameters of the first object;
acquiring initial parameters of the first object;
generating a motion animation of the first object based on the physical parameters of the first object and the initial parameters of the first object.
3. The method of claim 2, wherein the appearance parameters of the first object comprise: the color of a pixel point in the first object, and the physical parameters of the first object include: a mass of the first object;
said assigning physical parameters to the first object according to the appearance parameters of the first object comprises:
and generating the quality of the first object according to the color of the pixel point in the first object.
4. The method of claim 3, wherein said generating the quality of the first object based on the color of the pixel points in the first object comprises:
and generating a first quality value of the first object according to a first difference between the color of a pixel point in the first object and the color of the theme of the user interface where the first object is located, and taking the first quality value of the first object as the quality of the first object.
5. The method of claim 4, wherein generating the first quality value for the first object based on a first difference between a color of a pixel in the first object and a color of a subject matter of a user interface in which the first object is located comprises:
generating pixel quality of a pixel point in the first object according to a first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located;
and generating a first quality value of the first object according to the pixel quality of the pixel point in the first object.
6. The method of claim 4 or 5, wherein the appearance parameters of the first object comprise: a transparency of the first object; before the first quality value of the first object is taken as the quality of the first object, the method further comprises the following steps:
and generating a second quality value of the first object according to the transparency of the first object and the first quality value of the first object, and taking the second quality value of the first object as the quality of the first object.
7. The method of any of claims 2 to 6, wherein the appearance parameters of the first object comprise: the color of the pixel point in the first object and the coordinate of the pixel point in the first object; the physical parameters of the first object include: barycentric coordinates of the first object;
said assigning physical parameters to the first object according to the appearance parameters of the first object comprises:
generating pixel quality of a pixel point in the first object according to a first difference between the color of the pixel point in the first object and the theme color of the user interface where the first object is located;
and calculating the barycentric coordinate of the first object according to the coordinate of the pixel point in the first object and the pixel quality of the pixel point in the first object.
8. The method of any of claims 3 to 7, wherein the appearance parameters of the first object further comprise: the outline color of the first object, the area or volume of the first object; the physical parameters of the first object include: a stiffness of the first object;
said assigning physical parameters to the first object according to the appearance parameters of the first object comprises:
calculating a second difference between the color of the outer border of the first object and the color of the theme of the user interface where the first object is located;
generating a unit mass of the first object from the mass of the first object and an area or volume of the first object;
generating a first stiffness of the first object from the second difference and a unit mass of the first object, and taking the first stiffness of the first object as the stiffness of the first object.
9. The method of claim 8, wherein the appearance parameters of the first object further comprise: a transparency of the first object; before taking the first stiffness of the first object as the stiffness of the first object, further comprising:
and generating a second rigidity of the first object according to the first rigidity of the first object and the transparency of the first object, and taking the second rigidity of the first object as the rigidity of the first object.
10. The method of claim 8, wherein the appearance parameters of the first object further comprise: an ambiguity of the first object; before taking the first stiffness of the first object as the stiffness of the first object, further comprising:
and generating a third rigidity of the first object according to the first rigidity of the first object and the ambiguity of the first object, and taking the third rigidity of the first object as the rigidity of the first object.
11. The method of any of claims 3 to 10, further comprising:
generating a relative friction coefficient between the first object and the background where the first object is located according to the base color of the first object and the color of the background where the first object is located;
and calculating a first friction force acting on the first object according to the mass of the first object and the relative friction coefficient, and taking the first friction force as the friction force acting on the first object during the motion of the first object.
12. The method of claim 11, wherein generating the relative friction coefficient between the first object and the background where the first object is located based on the base color of the first object and the color of the background where the first object is located comprises:
generating an object friction force of the first object according to the base color of the first object;
generating a background friction force of the background where the first object is located according to the color of the background where the first object is located;
and generating the relative friction coefficient between the first object and the background where the first object is located according to the object friction force and the background friction force.
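
Illustrative sketch for claims 11 and 12: the color-to-friction mapping is not specified, so the example assumes darker colors produce larger friction values, averages the object and background contributions into a relative friction coefficient, and scales it by the object's mass to obtain the first friction force (names hypothetical).

    def color_friction(rgb):
        # Assumed mapping: darker colors yield more friction, in [0, 1].
        luminance = (0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]) / 255.0
        return 1.0 - luminance

    def relative_friction_coefficient(object_base_rgb, background_rgb):
        # Combine the object and background friction values (assumed: mean).
        return (color_friction(object_base_rgb) + color_friction(background_rgb)) / 2.0

    def first_friction_force(mass, coefficient, g=9.8):
        # Classical sliding friction: F = mu * m * g.
        return coefficient * mass * g
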
13. The method of claim 11 or 12, wherein the appearance parameters of the first object further comprise: a blur degree of the first object; before the first friction force is taken as the friction force acting on the first object during the motion of the first object, the method further comprises:
generating a second friction force acting on the first object according to the first friction force and the blur degree of the first object, and taking the second friction force as the friction force acting on the first object during the motion of the first object.
14. The method of any of claims 2 to 13, further comprising:
acquiring the velocity of the first object, and generating the air resistance acting on the first object during the motion of the first object according to the velocity of the first object.
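
Illustrative sketch for claim 14: the drag model is not specified; the example assumes a drag force proportional to the square of the speed and opposed to the motion (the coefficient k is hypothetical).

    def air_resistance(velocity, k=0.01):
        # Quadratic drag opposing the direction of motion (assumed model).
        return -k * abs(velocity) * velocity
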
15. The method of any of claims 2 to 14, wherein generating the motion animation of the first object based on the physical parameters of the first object and the initial parameters of the first object comprises:
generating a motion parameter of the first object based on a physical parameter of the first object and an initial parameter of the first object;
and generating the motion animation of the first object according to the motion parameters of the first object.
16. The method of claim 15, wherein the first object and a third object collide after the first operation acts on the first object;
the physical parameters of the first object comprise the mass of the first object, and the initial parameters of the first object comprise a first entry velocity of the first object;
the generating of the motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object comprises:
acquiring physical parameters of the third object and initial parameters of the third object, wherein the physical parameters of the third object comprise the mass of the third object, and the initial parameters of the third object comprise a second entry velocity of the third object;
calculating a first exit velocity of the first object and a second exit velocity of the third object from the mass of the first object, the mass of the third object, the first entry velocity, and the second entry velocity based on the law of conservation of momentum and the law of conservation of energy;
calculating a velocity and/or displacement of the first object over time after the collision from a first exit velocity of the first object and a frictional force acting on the first object after the collision;
calculating a velocity and/or displacement of the third object over time after the collision from a second exit velocity of the third object and a frictional force acting on the third object after the collision.
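
Illustrative sketch for claim 16: in one dimension, the laws of conservation of momentum and energy give the standard elastic-collision exit velocities; after the collision each object decelerates uniformly under its friction force (the post-collision step assumes the friction force of claim 11; names hypothetical).

    def elastic_exit_velocities(m1, v1, m3, v3):
        # Exit velocities of a 1D perfectly elastic collision, from
        # conservation of momentum and conservation of kinetic energy.
        u1 = ((m1 - m3) * v1 + 2.0 * m3 * v3) / (m1 + m3)
        u3 = ((m3 - m1) * v3 + 2.0 * m1 * v1) / (m1 + m3)
        return u1, u3

    def motion_after_collision(exit_velocity, friction_force, mass, t):
        # Velocity and displacement at time t under a constant friction force
        # that opposes the motion and eventually stops the object.
        a = -friction_force / mass if exit_velocity > 0 else friction_force / mass
        t_stop = -exit_velocity / a if a != 0 else float("inf")
        t = min(t, t_stop)
        return exit_velocity + a * t, exit_velocity * t + 0.5 * a * t * t
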
17. The method of claim 15, wherein upon the first operation acting on the first object, the first object collides with a stationary third object;
the physical parameters of the first object comprise the mass of the first object and the stiffness of the first object, and the initial parameters of the first object comprise the third entry velocity of the first object;
the generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object comprises:
acquiring physical parameters of the third object, wherein the physical parameters of the third object comprise the mass of the third object and the rigidity of the third object;
calculating the sum of a third exit velocity of the first object and a fourth exit velocity of the third object according to the third entry velocity of the first object, the stiffness of the first object and the stiffness of the third object;
calculating the third exit velocity of the first object and the fourth exit velocity of the third object according to the sum of the third exit velocity and the fourth exit velocity and the ratio of the mass of the first object to the mass of the third object;
calculating a velocity and/or displacement of the first object over time after the collision based on the third exit velocity of the first object and the frictional force acting on the first object after the collision;
calculating a velocity and/or displacement of the third object over time after the collision based on the fourth exit velocity of the third object and the frictional force acting on the third object after the collision.
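
Illustrative sketch for claim 17: the claims do not give the exact formulas, so the example assumes that the exit-velocity sum equals the entry velocity scaled by a stiffness ratio, and that the sum is then split according to the mass ratio so that the heavier object moves less. Both assumptions, and all names, are hypothetical.

    def stationary_collision_exit_velocities(m1, v_in, k1, m3, k3):
        # Step 1 (assumed): with equal stiffnesses the sum equals the entry velocity.
        velocity_sum = v_in * 2.0 * k1 / (k1 + k3)
        # Step 2 (assumed): split the sum by the mass ratio.
        u1 = velocity_sum * m3 / (m1 + m3)   # third exit velocity (first object)
        u3 = velocity_sum * m1 / (m1 + m3)   # fourth exit velocity (third object)
        return u1, u3
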
18. The method of claim 15, wherein the first operation is a pressing operation on the first object, and the first object produces a press rebound;
the physical parameters of the first object include: a mass of the first object; the initial parameters of the first object include: a degree of rebound acting on the first object and a spring constant of a spring member acting on the first object;
the generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object comprises:
generating a first resilience force acting on the first object according to the degree of rebound and the mass of the first object;
calculating the pressing displacement of the first object according to the first resilience force and the spring constant;
calculating the elastic potential energy in the scene where the first object is located according to the pressing displacement of the first object and the spring constant;
and obtaining the motion parameters of the first object based on a model in which the elastic potential energy is equal to the sum of the kinetic energy of the first object and the work done by the air resistance acting on the first object.
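
Illustrative sketch for claim 18: the press rebound behaves like a released spring. The example assumes the first resilience force is the rebound degree times the object's weight, the pressing displacement follows Hooke's law, the stored energy is the usual spring energy, and the work done by air resistance is neglected when solving the energy balance for the launch velocity (names hypothetical).

    import math

    def press_rebound_launch_velocity(mass, rebound_degree, spring_constant, g=9.8):
        force = rebound_degree * mass * g        # first resilience force (assumed)
        displacement = force / spring_constant   # pressing displacement: x = F / k
        potential_energy = 0.5 * spring_constant * displacement ** 2   # E = 1/2 * k * x^2
        # Energy balance E = 1/2 * m * v^2 (air-resistance work neglected here).
        return math.sqrt(2.0 * potential_energy / mass)
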
19. The method of claim 18, wherein the physical parameters of the first object further comprise: a stiffness of the first object;
the calculating the pressing displacement of the first object according to the first resilience force and the spring constant comprises:
generating a second resilience force acting on the first object according to the first resilience force and the stiffness of the first object;
and generating the pressing displacement of the first object according to the second resilience force and the spring constant.
20. The method of claim 15, wherein the first operation is a pressing operation on the first object, and the first object produces a press tilt;
the physical parameters of the first object include: a center of gravity of the first object; the initial parameters of the first object include: a point of application of a pressing force acting on the first object;
when the first object performs the press-tilt motion, the tilt axis of the first object is perpendicular to a first line, where the first line connects the point of application of the pressing force acting on the first object and the center of gravity of the first object.
21. The method of claim 20, wherein the tilt axis of the first object passes through the center of gravity of the first object.
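
Illustrative sketch for claims 20 and 21: the tilt axis passes through the center of gravity and is perpendicular to the line joining the press point and the center of gravity; the example returns a point on the axis and its in-plane direction (names hypothetical).

    def tilt_axis(press_point, center_of_gravity):
        # Vector of the first line, from the center of gravity to the press point.
        dx = press_point[0] - center_of_gravity[0]
        dy = press_point[1] - center_of_gravity[1]
        # Perpendicular direction; the axis passes through the center of gravity.
        return center_of_gravity, (-dy, dx)
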
22. The method of claim 15, wherein the first operation is a pressing operation on the first object, and the first object produces a press deformation;
the physical parameters of the first object include: a stiffness of the first object; the initial parameters of the first object comprise a pressing force acting on the first object and a point of application of the pressing force; the motion parameters of the first object comprise a deformation region of the first object;
the generating motion parameters of the first object based on the physical parameters of the first object and the initial parameters of the first object comprises:
calculating a degree of deformation of the first object based on the pressing force acting on the first object and the rigidity of the first object;
calculating the deformation area of the first object according to the deformation degree of the first object and the area of the first object;
and generating the deformation region of the first object according to the deformation area of the first object, wherein the center of the deformation region is the point of application of the pressing force acting on the first object.
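
Illustrative sketch for claim 22: the mapping from pressing force and stiffness to a deformation degree is not specified, so the example assumes degree = F / k clamped to [0, 1] and a circular deformation region centered at the press point (names hypothetical).

    import math

    def deformation_region(press_force, stiffness, object_area, press_point):
        degree = min(press_force / stiffness, 1.0)      # deformation degree (assumed)
        area = degree * object_area                     # deformation area
        radius = math.sqrt(area / math.pi)              # circular region (assumed shape)
        return {"center": press_point, "radius": radius}
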
23. The method of claim 15, wherein, after the first operation acts on the first object, a chain motion is generated between the first object and a fourth object that is arranged in a chain with the first object, and wherein the chain force experienced by the fourth object is the chain force experienced by the object that applies force to the fourth object divided by the mass of the fourth object.
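
Illustrative sketch for claim 23: the chain force is attenuated as it propagates, each object receiving the chain force of the object acting on it divided by its own mass, so heavier objects damp the chain more strongly (the initial force value is hypothetical).

    def propagate_chain_force(driving_force, chained_masses):
        # chained_masses lists the objects following the driven object, in order;
        # each receives the previous chain force divided by its own mass (claim 23).
        forces = []
        force = driving_force
        for mass in chained_masses:
            force = force / mass
            forces.append(force)
        return forces

    # Example: three chained objects of increasing mass.
    chain_forces = propagate_chain_force(driving_force=8.0, chained_masses=[1.0, 2.0, 4.0])
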
24. An electronic device, characterized in that the electronic device comprises a processor configured to execute a computer program stored in a memory, so as to cause the electronic device to perform the method according to any one of claims 1 to 23.
25. A computer-readable storage medium storing a computer program which, when run on a processor, causes an electronic device to perform the method of any one of claims 1 to 23.
CN202110169797.8A 2021-02-05 2021-02-05 Animation generation method for object in interface, electronic equipment and storage medium Pending CN114880053A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110169797.8A CN114880053A (en) 2021-02-05 2021-02-05 Animation generation method for object in interface, electronic equipment and storage medium
PCT/CN2021/140952 WO2022166456A1 (en) 2021-02-05 2021-12-23 Animation generation method for objects in interface, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110169797.8A CN114880053A (en) 2021-02-05 2021-02-05 Animation generation method for object in interface, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114880053A (en) 2022-08-09

Family

ID=82666949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110169797.8A Pending CN114880053A (en) 2021-02-05 2021-02-05 Animation generation method for object in interface, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114880053A (en)
WO (1) WO2022166456A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354630B (en) * 2008-09-12 2010-08-11 华为终端有限公司 Human-machine interaction method of terminal equipment and terminal equipment thereof
KR101638056B1 (en) * 2009-09-07 2016-07-11 삼성전자 주식회사 Method for providing user interface in mobile terminal
CN108920229A (en) * 2018-06-11 2018-11-30 网易(杭州)网络有限公司 Information processing method, device and storage medium and terminal
CN111714880B (en) * 2020-04-30 2023-10-20 完美世界(北京)软件科技发展有限公司 Picture display method and device, storage medium and electronic device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1692329A (en) * 2002-11-12 2005-11-02 索尼计算机娱乐公司 Method and apparatus for processing files utilizing a concept of weight so as to visually represent the files in terms of whether the weight thereof is heavy or light
CN103853423A (en) * 2012-11-28 2014-06-11 三星电子株式会社 Method for providing user interface based on physical engine and an electronic device thereof

Also Published As

Publication number Publication date
WO2022166456A1 (en) 2022-08-11

Similar Documents

Publication Publication Date Title
CN109993823B (en) Shadow rendering method, device, terminal and storage medium
CN115964106B (en) Graphic interface display method, electronic device, medium and program product
CN110488977B (en) Virtual reality display method, device and system and storage medium
WO2020216025A1 (en) Face display method and apparatus for virtual character, computer device and readable storage medium
KR101859312B1 (en) Image processing method and apparatus, and computer device
CN110139033B (en) Photographing control method and related product
CN107580209B (en) Photographing imaging method and device of mobile terminal
CN107707827A (en) A kind of high-dynamics image image pickup method and mobile terminal
CN108898068A (en) A kind for the treatment of method and apparatus and computer readable storage medium of facial image
CN110139028A (en) A kind of method and head-mounted display apparatus of image procossing
CN108269230A (en) Certificate photo generation method, mobile terminal and computer readable storage medium
CN113436301B (en) Method and device for generating anthropomorphic 3D model
CN110263617B (en) Three-dimensional face model obtaining method and device
CN109753892B (en) Face wrinkle generation method and device, computer storage medium and terminal
CN107103581B (en) Image reflection processing method and device and computer readable medium
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN110807769B (en) Image display control method and device
CN115398879A (en) Electronic device for communication with augmented reality and method thereof
CN110555815B (en) Image processing method and electronic equipment
CN113033341B (en) Image processing method, device, electronic equipment and storage medium
CN110189348A (en) Head portrait processing method, device, computer equipment and storage medium
CN109859115A (en) A kind of image processing method, terminal and computer readable storage medium
CN107913519B (en) Rendering method of 2D game and mobile terminal
CN114880053A (en) Animation generation method for object in interface, electronic equipment and storage medium
CN110660032A (en) Object shielding method, object shielding device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination