CN113808236A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents
Image processing method, image processing device, electronic equipment and storage medium
- Publication number
- CN113808236A (application number CN202010528831.1A)
- Authority
- CN
- China
- Prior art keywords
- bone
- image
- mass point
- special effect
- particle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses an image processing method, an image processing device, an electronic device and a storage medium, which are used for solving the problem in the related art that dynamic control of a special effect object lacks realism. The method comprises the following steps: collecting an image and a gravity direction; identifying a target object in the image; determining the initial position of the mass point of each bone of the special effect object in the image by taking the target object as a reference object; for the mass point of each bone, performing spring simulation on the bone according to the gravity direction and the initial position of the mass point, and correcting the initial position of the mass point to obtain a first position; and rendering the special effect object into the image according to the first position of the mass point of each bone. Because the spring simulation is performed on the special effect object based on the actually acquired gravity direction, the motion of the dynamic skeleton in the special effect object takes the gravity direction as its reference and conforms to the natural law of gravity, so the realism of controlling the motion of the special effect object under the action of gravity is improved.
Description
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Most current camera applications support special effect functions, for example, adding a head ornament, an amusing plant, and the like to a person on the screen.
In the field of image processing, physical simulation is often employed to make virtual objects in a picture more realistic. The physical simulation is one of the technical means for acquiring the animation posture through real-time calculation in computer animation, and the effect of dynamic bones can be obtained through modeling and physical calculation of the bones. In magic expression based on human faces, dynamic bones have irreplaceable effects that enhance the sense of reality of actions.
The most direct scheme for realizing dynamic bones at present is to create a rigid body for each bone using an existing rigid body physics engine (such as PhysX or Bullet), constrain the rigid bodies between the bones through joints, and, after the simulation calculation, synchronously assign the spatial positions and rotations of the rigid bodies to the corresponding bones to generate a real-time posture. However, this method is computationally complex, places high demands on device hardware, and is difficult to apply and popularize on common terminals. Therefore, in order to be applicable to common terminal equipment, the related art generally simplifies the modeling process: instead of simulating the bones with rigid bodies, the calculation is performed through mass points at the bone positions, and spring simulation is performed on the bones, which reduces the difficulty of parameter adjustment, reduces the amount of calculation, makes the calculation result more stable, and makes specific applications more convenient. However, the inventor found that this simplified method weakens the realism of the motion: because the calculation is relatively rough, the motion of the virtual object often visibly fails to conform to physical laws.
Disclosure of Invention
The application aims to provide an image processing method, an image processing device, an electronic device and a storage medium, which are used for solving the problem that, due to the relatively rough calculation, the sense of realism of the motion is weakened and the motion of the virtual object often visibly fails to conform to physical laws.
In a first aspect, an embodiment of the present application provides an image processing method, including:
collecting an image and a gravity direction;
identifying a target object in the image, and determining the initial position of a mass point of each bone of the special effect object in the image by taking the target object as a reference object; the special effect object is obtained by physical modeling in advance, the special effect object comprises a plurality of bones, and each bone is provided with a corresponding mass point;
aiming at mass points of each bone, performing spring simulation on the bone according to the gravity direction and the initial positions of the mass points, and correcting the initial positions of the mass points to obtain first positions;
rendering the special effect object into the image according to the first position of the mass point of each bone.
In one embodiment, the performing a spring simulation on the bone according to the gravity direction and the initial position of the mass point, and correcting the initial position of the mass point to obtain a first position includes:
determining a displacement between a reference position of the mass point and an initial position of the mass point, wherein if the image is a first frame image, the reference position is the position of the mass point when the special effect object is in an initial state; if the image is not the first frame image, the reference position is the position of the mass point in the previous frame image of the image;
determining a comprehensive acting force according to the gravity value of the gravity direction and a preset external force;
and performing spring simulation on the bone according to the displacement and the comprehensive acting force, and correcting the initial position of the mass point to obtain a first position.
In one embodiment, the performing a spring simulation on the bone according to the displacement and the combined acting force to correct the initial position of the mass point to obtain a first position includes:
the first position is obtained according to the following formula:
P1 = V*(a - damping)*t + F/(b*M)*(a - force_damping)*t²

wherein V = P - P'; F = F_ext + G;

wherein P1 represents the first position, V represents the displacement, P represents the initial position of the particle, P' represents the reference position of the particle, a and b are both constants, M represents the preset mass of the particle, F_ext represents the preset external force (F being the combined force), G represents the gravity, damping represents the spring damping, force_damping represents the attenuation coefficient of the force, and t represents the play duration of one frame of image of the video.
In one embodiment, the rendering the special effect object into the image according to the first position of the mass point of each bone includes:
determining a length between a parent node location of the particle and the first location; the parent node is connected with at least one child node, and the position of each child node changes along with the change of the position of the parent node; the particle belongs to a node of the at least one child node;
when the length is greater than the set length of the bone corresponding to the mass point, correcting the first position to a second position according to a preset stiffness coefficient, wherein the length between the second position and the parent node position of the mass point is equal to the set length;
and rendering the special effect object into the image after animation skinning is carried out by taking the second position of the mass point of each bone as a reference.
In one embodiment, when the length is greater than a set length of a bone corresponding to the mass point, correcting the first position according to a preset stiffness coefficient to obtain a second position includes:
correcting the first position according to the following formula to obtain the second position:
P2 = D*(len(D) - max_len)/len(D);

wherein D = Pstart - P1; max_len = rest_len*(1 - stiffness)*c;

wherein P2 represents the second position, Pstart represents the parent node position of the particle, P1 represents the first position, len(D) represents the length of the vector D, rest_len represents the length from the particle to Pstart when the special effect object is initialized, stiffness represents the preset stiffness coefficient, and c represents a constant.
In one embodiment, when the length is less than a set length of a bone to which the particle corresponds, the method further comprises:
and correcting the first position according to a preset elastic coefficient to obtain the second position.
In one embodiment, the correcting the first position according to the preset elastic coefficient to obtain the second position includes:
correcting the first position according to the following formula to obtain the second position:
P2 = (Pstart - P1)*elastic;

wherein P2 represents the second position, elastic represents the preset elastic coefficient, Pstart represents the parent node position of the particle, and P1 represents the first position.
In one embodiment, before rendering the special effect object into the image after the skinning is performed by animation with reference to the second position of the mass point of each bone, the method further includes:
when the mass point collides with the target object, acquiring a motion range of the special effect object constructed in the coordinate system of the target object, and when the second position is not within the motion range, correcting the second position to be within the motion range; and/or,

constructing spheres for the mass point and the target object by taking the mass point and the target object as sphere centers, respectively; when the mass point is within the sphere of the target object, determining the straight line formed by the center of the sphere of the mass point and the center of the sphere of the target object; and moving the mass point out of the sphere of the target object along the straight line in a direction away from the target object.
In one embodiment, the motion range is a preset motion plane of the special effect object in a coordinate system of the target object; said correcting said second position to within said range of motion comprises:
correcting the second position to be within the range of motion according to the following equation:
P21 = P2 - N*dis;

wherein dis = dot(N, P2) - d_plane; d_plane = dot(N, Pstart);

wherein P21 represents the corrected second position, P2 represents the second position, N is the normal vector of the plane, and Pstart represents the parent node position of the particle.
In one embodiment, after correcting the second position to be within the motion range and before rendering the special effect object into the image after performing the animated skinning with reference to the second position of the mass point of each bone, the method further comprises:
and when the length from the particle to the parent node position of the particle is greater than the set length, correcting the corrected second position again to obtain the final position of the second position, so that the length from the final position to the parent node position of the particle is equal to the set length.
In one embodiment, the correcting the corrected second position again to obtain a final position of the second position includes:
the final position of the second position is obtained according to the following formula:
P22 = D2*(len(D2) - rest_len)/len(D2)

wherein D2 = Pstart - P21;

wherein P22 represents the final position of the second position, P21 represents the corrected second position, len(D2) represents the length of the vector D2, Pstart represents the parent node position of the particle, and rest_len represents the length from the particle to Pstart when the special effect object is initialized.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including:
an acquisition module configured to perform acquiring an image and a gravity direction;
a target object identification module configured to identify a target object in the image, and determine an initial position of a particle of each bone of the special effect object in the image by taking the target object as a reference object; the special effect object is obtained by physical modeling in advance, the special effect object comprises a plurality of bones, and each bone is provided with a corresponding mass point;
the position processing module is configured to perform spring simulation on the bone according to the gravity direction and the initial position of the mass point, and correct the initial position of the mass point to obtain a first position;
a rendering module configured to perform rendering the special effect object into the image according to a first location of a mass point of each bone.
In one embodiment, the location processing module includes:
a displacement determining unit configured to perform determination of a displacement between a reference position of the mass point and an initial position of the mass point, wherein the reference position is the position of the mass point when the special effect object is in an initial state if the image is a first frame image; if the image is not the first frame image, the reference position is the position of the mass point in the previous frame image of the image;
an external force determination unit configured to perform determination of a comprehensive acting force according to a gravity value of the gravity direction and a preset external force;
and the first position correction unit is configured to perform spring simulation on the bone according to the displacement and the comprehensive acting force, and correct the initial position of the mass point to obtain a first position.
In one embodiment, the first position correction unit is configured to perform the obtaining of the first position according to the following formula:
P1 = V*(a - damping)*t + F/(b*M)*(a - force_damping)*t²

wherein V = P - P'; F = F_ext + G;

wherein P1 represents the first position, V represents the displacement, P represents the initial position of the mass point, P' represents the reference position of the mass point, a and b are both constants, M represents the preset mass of the mass point, F_ext represents the sum of the preset virtual external forces, G represents the gravity, damping represents the spring damping, force_damping represents the attenuation coefficient of the force, and t represents the play duration of one frame of image of the video.
In one embodiment, the rendering module includes:
a length determination unit configured to perform determining a length between a parent node location of the particle and the first location; the parent node is connected with at least one child node, and the position of each child node changes along with the change of the position of the parent node; the particle belongs to a node of the at least one child node;
a length correction unit configured to perform, when the length is greater than a set length of a bone corresponding to the mass point, correction of the first position to a second position having a length equal to the set length from a parent node position of the mass point according to a preset stiffness coefficient;
and the rendering unit is configured to render the special effect object into the image after animation skinning is performed by taking the second position of the mass point of each bone as a reference.
In one embodiment, the length correction unit is configured to correct the first position to obtain the second position according to the following formula:
P2 = D*(len(D) - max_len)/len(D);

wherein D = Pstart - P1; max_len = rest_len*(1 - stiffness)*c;

wherein P2 represents the second position, Pstart represents the parent node position of the particle, P1 represents the first position, len(D) represents the length of the vector D, rest_len represents the length from the particle to Pstart when the special effect object is initialized, stiffness represents the preset stiffness coefficient, and c represents a constant.
In an embodiment, when the length is smaller than a set length of a bone corresponding to the mass point, the length correction unit is further configured to perform correction on the first position according to a preset elastic coefficient to obtain the second position.
In one embodiment, the length correction unit is configured to correct the first position to obtain the second position according to the following formula:
P2 = (Pstart - P1)*elastic;

wherein P2 represents the second position, elastic represents the preset elastic coefficient, Pstart represents the parent node position of the particle, and P1 represents the first position.
In one embodiment, before rendering the special effect object into the image after the skinning of the animation is performed based on the second position of the mass point of each bone, the apparatus further includes:
a motion definition module configured to acquire a motion range of the special effect object constructed in the coordinate system of the target object when the mass point collides with the target object, and correct the second position to be within the motion range when the second position is not within the motion range; and/or,

construct spheres for the mass point and the target object by taking the mass point and the target object as sphere centers, respectively; when the mass point is within the sphere of the target object, determine the straight line formed by the center of the sphere of the mass point and the center of the sphere of the target object; and move the mass point out of the sphere of the target object along the straight line in a direction away from the target object.
In one embodiment, the motion range is a preset motion plane of the special effect object in a coordinate system of the target object; the motion definition module configured to perform the correction of the second position into the range of motion according to the following equation:
P21 = P2 - N*dis;

wherein dis = dot(N, P2) - d_plane; d_plane = dot(N, Pstart);

wherein P21 represents the corrected second position, P2 represents the second position, N is the normal vector of the plane, and Pstart represents the parent node position of the particle.
In one embodiment, after correcting the second position to be within the motion range and before rendering the special effect object into the image after performing the animated skinning with reference to the second position of the mass point of each bone, the apparatus further comprises:
and the length revision module is configured to revise the revised second position again to obtain a final position of the second position when the length of the particle to the parent node position of the particle is larger than the set length, so that the length of the final position to the parent node position of the particle is equal to the set length.
In one embodiment, the length revision module is configured to perform:
the final position of the second position is obtained according to the following formula:
P22 = D2*(len(D2) - rest_len)/len(D2)

wherein D2 = Pstart - P21;

wherein P22 represents the final position of the second position, P21 represents the corrected second position, len(D2) represents the length of the vector D2, Pstart represents the parent node position of the particle, and rest_len represents the length from the particle to Pstart when the special effect object is initialized.
In a third aspect, another embodiment of the present application further provides an electronic device, including at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any image processing method provided by the embodiment of the application.
In a fourth aspect, another embodiment of the present application further provides a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of the electronic device, cause the electronic device to perform any one of the image processing methods in the embodiments of the present application.
According to the embodiments of the application, the spring simulation is performed on the special effect object based on the actually acquired gravity direction, so that the motion of the dynamic skeleton in the special effect object takes the gravity direction as its reference and conforms to the natural law of gravity, and the sense of realism of controlling the motion of the special effect object under the action of gravity is thereby improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an application environment according to one embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing flow according to an embodiment of the present application;
FIGS. 3-5 are diagrams illustrating the effects of image processing according to one embodiment of the present application;
FIG. 6 is a schematic diagram illustrating the effect of moving a collider after collision according to one embodiment of the present application;
FIG. 7 is a schematic diagram of another image processing flow according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an image processing apparatus according to one embodiment of the present application;
FIG. 9 is a schematic view of an electronic device according to one embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
As described above, when the calculation is simplified to using only mass points at the bone positions and performing spring simulation on the bones, the relatively rough calculation weakens the sense of realism, and the motion of the virtual object often visibly fails to conform to physical laws. The inventor proposes a solution to this problem mentioned in the background art, so as to improve the realism of the motion.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It should be understood that, in the following description, the aspects of the present application are explained in detail by taking "magic expressions" as an example.
FIG. 1 is a schematic diagram of an application environment according to one embodiment of the present application.
As shown in fig. 1, the application environment may include, for example, at least one server 20 and a plurality of terminal devices 30. Each terminal device 30 may be any suitable electronic device used for network access, including but not limited to a computer, a laptop, a smart phone, a tablet, or another type of terminal. The server 20 is any server capable of providing information required for an interactive service through a network. The terminal device 30 can send information to and receive information from the server 20 via the network 40, for example, download a magic expression package from the server 20 for special effect processing of the captured image. The server 20 can acquire and provide contents required by the terminal device 30, such as a photographing-type application, a multimedia resource, and the like, by accessing the database 50. Terminal devices (e.g., 30_1 and 30_2 or 30_N) may also communicate with each other via the network 40. The network 40 may be a network for information transfer in a broad sense and may include one or more communication networks such as a wireless communication network, the internet, a private network, a local area network, a metropolitan area network, a wide area network, or a cellular data network.
In the following description, only a single server or terminal device is described in detail, but it should be understood by those skilled in the art that the single server 20, terminal device 30 and database 50 shown are intended to represent that the technical solution of the present application relates to the operation of the terminal device, server and database. The detailed description of a single terminal device and a single server and database is for convenience of description at least and does not imply limitations on the type or location of terminal devices and servers. It should be noted that the underlying concepts of the example embodiments of the present application may not be altered if additional modules are added or removed from the illustrated environments. In addition, although a bidirectional arrow from the database 50 to the server 20 is shown in the figure for convenience of explanation, it will be understood by those skilled in the art that the above-described data transmission and reception may be realized through the network 40.
In the embodiment of the disclosure, a particle modeling mode is adopted: each bone of the special effect object is assumed to have a corresponding mass point, and the connection relationships of the mass points are the same as those of the bones in the skeleton, where the skeleton carries no scaling. In the calculation of each frame of image, the positions and rotations of the particles are always kept consistent with those of the bones.
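A minimal illustrative sketch of this particle modeling mode is given below (Python with NumPy; the class and field names are assumptions, not terms of the disclosure):

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np

@dataclass
class BoneParticle:
    """One mass point per bone, linked parent-to-child like the skeleton."""
    position: np.ndarray                        # current particle position (3,)
    prev_position: np.ndarray                   # reference position: initial pose or previous frame
    rest_len: float                             # set length from this particle to its parent node
    parent: Optional["BoneParticle"] = None     # the node that drives this bone
    children: List["BoneParticle"] = field(default_factory=list)  # follow the parent's position
```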
The process flow of adding special effects to a video may include the following aspects:
1. and initializing the special effect object.
Taking the example of shooting a short video on a terminal device, the terminal device starts the shooting software according to the user operation and waits for further instructions from the user. For example, after receiving an operation instruction for the image acquisition interface, the terminal device enters the image acquisition interface and determines the special effect object to be used based on a special effect selection instruction. After the special effect object is determined, it may be initialized; for example, the initialization pose of each bone is assigned in full to its particle, as shown in formula (1):

Mc = Mbone (1)

In formula (1), Mc represents the matrix of the position and rotation of the particle, and Mbone represents the matrix of the corresponding bone. The matrix of the particle of each bone is not allowed to include scaling information, i.e., the scaling values may all default to 1.0.
For example, the 4 x 4 matrix of the particle corresponding to a bone can be expressed as:

First row: a00, a01, a02, a03
Second row: a10, a11, a12, a13
Third row: a20, a21, a22, a23
Fourth row: a30, a31, a32, a33

The first three of the four elements in the last row (i.e., the fourth row) are position information, and the last element is scaling information, which can be set to 1. The first to third rows are used to represent rotation and translation.
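A minimal sketch of this initialization, assuming row-major 4 x 4 NumPy matrices with the position stored in the fourth row (the function name is an assumption):

```python
import numpy as np

def init_particle_matrix(bone_matrix: np.ndarray) -> np.ndarray:
    """Formula (1), Mc = Mbone: copy the bone's pose matrix to its particle."""
    m = np.array(bone_matrix, dtype=float)  # rows 0-2 carry rotation and translation
    m[3, 3] = 1.0                           # last element of the fourth row: scaling fixed at 1.0
    return m                                # m[3, :3] keeps the position information
```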
The initialization of the special effect object may be performed after the special effect object is selected, or after the special effect object is downloaded but before it is selected. The present disclosure does not limit this, as long as the initialization is completed before the first frame image is processed.
2. Gravity-induced special effect optimization
As shown in fig. 2, after the special effect object is determined and video capture begins, in step 201, an image and a gravity direction are captured. The gravity direction can be obtained in real time by a gravity sensor of the terminal device. Then, in step 202, a target object in the image is identified, and the initial position of the mass point of each bone of the special effect object in the image is determined by taking the target object as a reference object; the special effect object is obtained through physical modeling in advance, the special effect object comprises a plurality of bones, and each bone is provided with a corresponding mass point. For example, if a tree seedling is added to the head of a person, the tree seedling is the special effect object and may sway as the head shakes; the head is thus the target object. Then, in step 203, for the mass point of each bone, spring simulation is performed on the bone according to the gravity direction and the initial position of the mass point, and the initial position of the mass point is corrected to obtain a first position. Then, in step 204, the special effect object is rendered into the image according to the first position of the mass point of each bone.
For example, as shown in fig. 3, a hairpin flower effect is added to the head of a person. When the actual gravity direction is parallel to the long edge of the screen, the bead curtain in the hairpin flower special effect sags along the gravity direction. When the person's head tilts, its direction is no longer parallel to the long edge of the screen; as shown in fig. 4, the bead curtain of the hairpin flower special effect still sags along the actual gravity direction, i.e., in line with the gravity direction collected by the gravity sensor. For comparison, fig. 5 shows a schematic diagram of the related art immediately after the face rotates: the bead curtain in fig. 5 only rotates along with the head and is not influenced by gravity, so it does not sag along the gravity direction and is inconsistent with the actual situation.
Therefore, in the embodiment of the application, the actual gravity direction is collected, and the special effect object can be adjusted accordingly to stay consistent with the gravity direction, which solves the problem of distorted dynamic skeleton motion under the effect of gravity.
In one embodiment, in order to make the dynamic motion of the special effect object more natural and realistic, the correction of the initial position according to the gravity direction to obtain the first position may be implemented as:
step A1: determining a displacement between a reference position of the mass point and an initial position of the mass point, wherein if the image is a first frame image, the reference position is the position of the mass point when the special effect object is in an initial state; if the image is not the first frame image, the reference position is the position of the mass point in the previous frame image of the image;
step A2: determining a comprehensive acting force according to the gravity value of the gravity direction and a preset external force;
step A3: performing spring simulation on the bone according to the displacement and the comprehensive acting force, and correcting the initial position of the mass point to obtain a first position.
That is, for the first frame image, the position of the bone obtained by initializing the special effect object is taken as the reference position, and for each subsequent frame image, the position of the same bone in the previous frame image is taken as the reference position. The position of the next state can thus be conveniently determined based on the previous state, the action of the special effect object has continuity, and the motion of the special effect object is more realistic.
In another embodiment, to simplify the calculation and reduce the calculation complexity, the performing a spring simulation on the bone according to the displacement and the combined acting force to correct the initial position of the mass point to obtain the first position may be implemented as obtaining the first position according to the following formula (2):
P1 = V*(a - damping)*t + F/(b*M)*(a - force_damping)*t² (2)

wherein V = P - P'; F = F_ext + G;

In formula (2), P1 represents the first position, V represents the displacement, P represents the initial position of the particle, P' represents the reference position of the particle, a and b are both constants (e.g., both may be 1), M represents the preset mass of the particle, F_ext represents the preset external force (F being the combined force), G represents the gravity, damping represents the spring damping, force_damping represents the attenuation coefficient of the force, and t represents the play duration of one frame of image of the video. For example, at a video frame rate of 30 frames per second, the play duration of one frame of image is 1/30 second.
In formula (2), an external force F_ext is introduced; the external force may be wind force or resistance, or the sum of several external forces, such as the combined force of wind and resistance, so that the dynamic bones can move with reference to more external forces and do not appear stiff. Performing the spring simulation with spring damping makes the movement more consistent with natural laws, and introducing the action of gravity better satisfies the physical effect. Meanwhile, the attenuation coefficient of the force introduces no complex parameters or calculation modes, so the calculation does not require an excessively complex process and places no strict demands on hardware performance. Gravity can therefore be introduced in a simple and convenient manner, while ensuring that the motion of the skeleton of the special effect object under the action of gravity conforms to the macroscopic visual effect.
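Formula (2) can be transcribed directly, for example as the following sketch (Python with NumPy; the function signature is an assumption):

```python
import numpy as np

def spring_simulate(P, P_prev, F_ext, G, M, t, damping, force_damping, a=1.0, b=1.0):
    """Direct transcription of formula (2) for one mass point."""
    V = P - P_prev                 # displacement from the reference position
    F = F_ext + G                  # combined force: preset external force plus gravity
    return V * (a - damping) * t + F / (b * M) * (a - force_damping) * t ** 2
```

For a 30-frames-per-second video, t would be 1/30 second, matching the play duration of one frame.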
3. Rigid constraints introduced for the spring simulation
In one embodiment, in order to make the dynamic motion effect of the special effect object more realistic while keeping the computational complexity low enough to be applied to devices of various performance levels, the embodiment of the disclosure provides a simplified rigid constraint condition.
For example, in one embodiment, dynamic bone visualization is optimized in a simplified manner to ensure that large contours are as undistorted as possible. Rendering the special effect object into the image according to the first position of the mass point of each bone may be performed as:
step B1: determining a length between a parent node location of the particle and the first location;
the parent node is a node that drives the bone to move, that is, the mass point of the bone moves along with the motion of the parent node. Each father node is connected with at least one child node, and the position of each child node changes along with the change of the position of the father node; the particle belongs to a node of the at least one child node.
Step B2: when the length is larger than the set length of the bone corresponding to the mass point, correcting the first position to be a second position, wherein the length between the first position and the parent node position of the mass point is equal to the set length, according to a preset stiffness coefficient;
step B3: and rendering the special effect object into the image after animation skinning is carried out by taking the second position of the mass point of each bone as a reference.
That is, the dynamic effect of the bone is corrected by comparing the bone length before and after the adjustment of the special effect with the bone length originally set. This is implemented such that the original length of the bone is as undistorted as possible. Thus, after dynamically adjusting the bone positions, the bone length is properly maintained as consistent as possible with the original length.
In another embodiment, a simple way to calculate the second position is to correct the first position according to the following formula (3):
P2 = D*(len(D) - max_len)/len(D), if len(D) > max_len; (3)

wherein D = Pstart - P1; max_len = rest_len*(1 - stiffness)*c;

In formula (3), P2 represents the second position, Pstart represents the parent node position of the particle, P1 represents the first position, len(D) represents the length of the vector D, rest_len represents the length from the particle to Pstart when the special effect object is initialized, stiffness represents the preset stiffness coefficient, and c represents a constant (e.g., c = 2).
The formula (3) can correct the bone length only by simple basic operation, has small calculation amount, and can be applied to terminal equipment with different processing performances.
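For illustration, formula (3) can be transcribed as the following sketch (names are assumptions; the correction applies only when len(D) exceeds max_len):

```python
import numpy as np

def stiffness_correct(P_start, P1, rest_len, stiffness, c=2.0):
    """Direct transcription of formula (3): rein in an overstretched bone."""
    D = P_start - P1
    L = np.linalg.norm(D)                       # len(D)
    max_len = rest_len * (1.0 - stiffness) * c
    if L > max_len:
        return D * (L - max_len) / L            # corrected second position P2
    return P1                                   # within the limit: no stiffness correction
```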
On the other hand, in another embodiment, when the length is smaller than the set length of the bone corresponding to the mass point, the first position may be corrected to obtain the second position by using a preset elastic coefficient according to a rigid constraint condition, so that the motion of the dynamic bone is more realistic.
One simple way to implement the correction according to the preset elastic coefficient is given by the following formula (4):

P2 = (Pstart - P1)*elastic; (4)

wherein P2 represents the second position, elastic represents the preset elastic coefficient, Pstart represents the parent node position of the particle, and P1 represents the first position.
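A matching sketch of formula (4) for the case where the bone is shorter than its set length (the inputs are assumed to be NumPy vectors):

```python
def elastic_correct(P_start, P1, elastic):
    """Direct transcription of formula (4)."""
    return (P_start - P1) * elastic             # corrected second position P2
```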
4. Handling of collisions
In order to make the dynamic bone motion of the special effect object conform better to natural motion laws, the embodiment of the disclosure introduces a simpler constraint mode for handling collisions that occur before the special effect object is rendered into the image after animation skinning is performed with reference to the second position of the mass point of each bone. For example, the motion direction of the dynamic bone after a collision can be made to follow natural laws by simply limiting the motion range or by a simple translation operation, without determining the motion according to the complex conditions of the real environment, such as the acting forces actually involved in the collision. This can be implemented in one or a combination of the following two ways:
Mode 1: when the mass point collides with the target object, acquiring the motion range of the special effect object constructed in the coordinate system of the target object; when the second position is not within the motion range, correcting the second position to be within the motion range.
The motion range is set according to the requirements of each bone of the actual special effect object, which is not limited in the present application. It should be noted that the motion range may be a three-dimensional space, or may be a motion plane defined for the special effect object relative to the target object (e.g., a human head). For example, the front surface of the hairpin flower special effect in figs. 3-4 is parallel to the front of the human face, that is, the front surface of the hairpin flower is constrained to the plane of the front of the face. In addition, in the disclosure, the motion range can be set as a rough range that merely keeps the special effect object from exceeding a certain extent, without constructing a complex model of a complex environment to calculate the real motion.
To simplify the computational complexity, in the embodiment of the present disclosure, the second position may be corrected to be within the motion range according to the following formula (5):
P21 = P2 - N*dis; (5)

wherein dis = dot(N, P2) - d_plane; d_plane = dot(N, Pstart);

In formula (5), P21 represents the corrected second position, P2 represents the second position, N is the normal vector of the plane, and Pstart represents the parent node position of the particle.
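As an illustrative sketch, formula (5) amounts to projecting the point back onto the motion plane through the parent node (N is assumed to be a unit normal):

```python
import numpy as np

def clamp_to_plane(P2, P_start, N):
    """Direct transcription of formula (5)."""
    d_plane = np.dot(N, P_start)    # the plane passes through the parent node position
    dis = np.dot(N, P2) - d_plane   # signed distance from P2 to the plane
    return P2 - N * dis             # corrected second position P21, back on the plane
```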
Mode 2 can be implemented through the following steps C1 to C3:

Step C1: constructing spheres for the mass point and the target object by taking the mass point and the target object as sphere centers, respectively.
Step C2: when the particle is inside the sphere of the target object, a line is determined that the center of the sphere of the particle and the center of the sphere of the target object make up.
Step C3: the particle is moved out of the sphere of the target object along the line and in a direction away from the target object.
For example, as shown in fig. 6, one sphere is constructed with the target object as the sphere center, and another sphere is constructed with the mass point of a bone of the special effect object as the sphere center. When a bone of the special effect object collides with the target object, the bone moves in the direction of the reaction force exerted by the target object, i.e., along the line through the centers of the two spheres, in the direction away from the target object.
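Steps C1 to C3 can be sketched as follows (one possible reading, assuming the spheres count as colliding when they overlap; all names are assumptions):

```python
import numpy as np

def push_out_of_sphere(particle_pos, target_center, target_radius, particle_radius):
    """Move the particle's sphere out of the target object's sphere (steps C1-C3)."""
    d = particle_pos - target_center            # line through the two sphere centers
    dist = np.linalg.norm(d)
    min_dist = target_radius + particle_radius
    if 1e-8 < dist < min_dist:                  # particle sphere is inside the target sphere
        return target_center + d / dist * min_dist  # moved along the line, away from the target
    return particle_pos
```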
The fusion of mode 1 and mode 2 means that the motion of the special effect object conforms to the constraints of both modes, i.e., the sphere of the special effect object is moved along the line of sphere centers toward the reaction force and is kept within the limited motion range. The movement restriction only constrains the moving direction and the motion range, without simulating every detail of the movement process, so the handling after a collision is simpler and more convenient.
Similarly, as described above, after the position information is corrected, bone distortion abnormalities such as an increased bone length may still exist. Therefore, in the embodiment of the present disclosure, after the second position is corrected to be within the motion range, and before the special effect object is rendered into the image after animation skinning is performed with reference to the second position of the particle of each bone, the bone position may be further corrected: for example, when the length from the particle to the parent node position of the particle is greater than the set length, the corrected second position is corrected again to obtain the final position of the second position, so that the length from the final position to the parent node position of the particle is equal to the set length.
A simple implementation obtains the final position of the second position according to the following formula (6):

P22 = D2*(len(D2) - rest_len)/len(D2) (6)

wherein D2 = Pstart - P21;

wherein P22 represents the final position of the second position, P21 represents the corrected second position, len(D2) represents the length of the vector D2, Pstart represents the parent node position of the particle, and rest_len represents the length from the particle to Pstart when the special effect object is initialized.
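Formula (6) can likewise be transcribed directly (a sketch; inputs assumed to be NumPy vectors):

```python
import numpy as np

def restore_bone_length(P_start, P21, rest_len):
    """Direct transcription of formula (6): return the bone to its set length."""
    D2 = P_start - P21
    L = np.linalg.norm(D2)                      # len(D2)
    return D2 * (L - rest_len) / L              # final position P22 of the second position
```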
To facilitate understanding of the system, the image processing method provided by the embodiment of the present disclosure is described below with reference to fig. 7, and includes the following steps:
step 701: images and the direction of gravity were collected.
Step 702: identifying a target object in the image; and determining the initial position of the mass point of each bone of the special effect object in the image by taking the target object as a reference object.
Step 703: for the particle of each bone, determining a displacement from a reference location of the particle to an initial location of the particle.
Step 704: and determining a comprehensive acting force according to the gravity value in the gravity direction and a preset external force, performing spring simulation on the bone according to the displacement and the comprehensive acting force, and correcting the initial position of the mass point to obtain a first position.
Step 705: determining a length between a parent node location of the particle and the first location.
Step 706: and when the length is larger than the set length of the bone corresponding to the mass point, correcting the first position according to a preset stiffness coefficient to obtain a second position.
Step 707: and when the length is smaller than the set length of the bone corresponding to the mass point, correcting the first position according to a preset elastic coefficient to obtain the second position.
Wherein the second position is such that the length between the particle and its parent node position is equal to the set length.
Step 708: when the mass point collides with the target object, acquiring the motion range of the special effect object constructed in the coordinate system of the target object; when the second position is not within the motion range, correcting the second position to be within the motion range.
Step 709: and when the length from the particle to the parent node position of the particle is larger than the set length, correcting the corrected second position again to obtain the final position of the second position.
Step 710: and rendering the special effect object into the image after animation skinning is carried out by taking the final position of the second position of the mass point of each bone as a reference.
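Putting steps 701-710 together, one frame of processing can be sketched as follows (detect_target_object, initial_position, collides and skin_and_render are hypothetical helpers, the per-particle attributes are assumptions, and the formula functions are the sketches given earlier):

```python
import numpy as np

def process_frame(image, gravity, particles, F_ext, t):
    """Illustrative walk-through of steps 701-710 for a single frame."""
    target = detect_target_object(image)                           # step 702 (hypothetical)
    for p in particles:
        P0 = initial_position(p, target)                           # step 702 (hypothetical)
        P1 = spring_simulate(P0, p.prev_position, F_ext, gravity,  # steps 703-704
                             p.mass, t, p.damping, p.force_damping)
        L = np.linalg.norm(p.parent.position - P1)                 # step 705
        if L > p.rest_len:                                         # step 706
            P2 = stiffness_correct(p.parent.position, P1, p.rest_len, p.stiffness)
        else:                                                      # step 707
            P2 = elastic_correct(p.parent.position, P1, p.elastic)
        if collides(P2, target):                                   # step 708 (hypothetical test)
            P2 = clamp_to_plane(P2, p.parent.position, target.plane_normal)
            if np.linalg.norm(p.parent.position - P2) > p.rest_len:   # step 709
                P2 = restore_bone_length(p.parent.position, P2, p.rest_len)
        p.prev_position, p.position = p.position, P2               # keep reference for next frame
    return skin_and_render(image, particles)                       # step 710 (hypothetical)
```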
Based on the same conception, the embodiment of the application also provides an image processing device.
Fig. 8 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 8, the image processing apparatus 800 may include:
an acquisition module 801 configured to perform acquiring an image and a gravity direction;
a target object recognition module 802 configured to perform recognition of a target object in the image, and determine an initial position of a particle of each bone of the special effect object in the image with the target object as a reference object; the special effect object is obtained by physical modeling in advance, the special effect object comprises a plurality of bones, and each bone is provided with a corresponding mass point;
a position processing module 803, configured to perform spring simulation on the bone according to the gravity direction and the initial position of the mass point, and correct the initial position of the mass point to obtain a first position;
a rendering module 804 configured to perform rendering the special effect object into the image according to a first location of a mass point of each bone.
In one embodiment, the location processing module includes:
a displacement determining unit configured to perform determination of a displacement between a reference position of the mass point and an initial position of the mass point, wherein the reference position is the position of the mass point when the special effect object is in an initial state if the image is a first frame image; if the image is not the first frame image, the reference position is the position of the mass point in the previous frame image of the image;
an external force determination unit configured to perform determination of a comprehensive acting force according to a gravity value of the gravity direction and a preset external force;
and the first position correction unit is configured to perform spring simulation on the bone according to the displacement and the comprehensive acting force, and correct the initial position of the mass point to obtain a first position.
In one embodiment, the first position correction unit is configured to perform the obtaining of the first position according to the following formula:
P1 = V*(a - damping)*t + F/(b*M)*(a - force_damping)*t²

wherein V = P - P'; F = F_ext + G;

wherein P1 represents the first position, V represents the displacement, P represents the initial position of the mass point, P' represents the reference position of the mass point, a and b are both constants, M represents the preset mass of the mass point, F_ext represents the sum of the preset virtual external forces, G represents the gravity, damping represents the spring damping, force_damping represents the attenuation coefficient of the force, and t represents the play duration of one frame of image of the video.
In one embodiment, the rendering module includes:
a length determination unit configured to perform determining a length between a parent node location of the particle and the first location; the parent node is connected with at least one child node, and the position of each child node changes along with the change of the position of the parent node; the particle belongs to a node of the at least one child node;
a length correction unit configured to perform, when the length is greater than a set length of a bone corresponding to the mass point, correction of the first position to a second position having a length equal to the set length from a parent node position of the mass point according to a preset stiffness coefficient;
and the rendering unit is configured to render the special effect object into the image after animation skinning is performed by taking the second position of the mass point of each bone as a reference.
In one embodiment, the length correction unit is configured to correct the first position to obtain the second position according to the following formula:
P2 = D*(len(D) - max_len)/len(D);

wherein D = Pstart - P1; max_len = rest_len*(1 - stiffness)*c;

wherein P2 represents the second position, Pstart represents the parent node position of the particle, P1 represents the first position, len(D) represents the length of the vector D, rest_len represents the length from the particle to Pstart when the special effect object is initialized, stiffness represents the preset stiffness coefficient, and c represents a constant.
In an embodiment, when the length is smaller than a set length of a bone corresponding to the mass point, the length correction unit is further configured to perform correction on the first position according to a preset elastic coefficient to obtain the second position.
In one embodiment, the length correction unit is configured to correct the first position to obtain the second position according to the following formula:
P2 = (Pstart - P1)*elastic;

wherein P2 represents the second position, elastic represents the preset elastic coefficient, Pstart represents the parent node position of the particle, and P1 represents the first position.
In one embodiment, before the special effect object is rendered into the image after animation skinning is performed with the second position of the mass point of each bone as a reference, the apparatus further includes:
a motion definition module configured to, when the mass point collides with the target object, acquire a motion range of the special effect object constructed in the coordinate system of the target object, and, when the second position is not within the motion range, correct the second position to be within the motion range; and/or,
construct spheres for the mass point and the target object respectively, with the mass point and the target object as the sphere centers; when the mass point is inside the sphere of the target object, determine the straight line formed by the center of the mass point's sphere and the center of the target object's sphere, and move the mass point out of the target object's sphere along that line, in the direction away from the target object.
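A hedged Python sketch of this sphere-based collision handling follows. The two radii and the fallback direction for coincident centers are illustrative assumptions, since the text does not specify how the sphere sizes are chosen:

```python
import numpy as np

def push_out_of_sphere(particle_pos, target_center, particle_radius, target_radius):
    line = particle_pos - target_center    # line of centers, pointing away from the target
    dist = np.linalg.norm(line)
    min_dist = particle_radius + target_radius
    if dist >= min_dist:                   # spheres do not intersect: no correction needed
        return particle_pos
    if dist < 1e-6:                        # centers coincide: pick an arbitrary push direction
        line, dist = np.array([0.0, 1.0, 0.0]), 1.0
    # move the particle out along the line of centers, away from the target object
    return target_center + line / dist * min_dist
```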
In one embodiment, the motion range is a preset motion plane of the special effect object in the coordinate system of the target object, and the motion definition module is configured to correct the second position into the motion range according to the following formula:
P21 = P2 - N*dis;
where dis = dot(N, P2) - d_plane; d_plane = dot(N, P_start);
where P21 represents the corrected second position, P2 represents the second position, N is the normal vector of the plane, and P_start represents the parent node position of the particle.
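This correction is a point-to-plane projection. A minimal Python sketch, assuming N is a unit normal vector (which the formula needs for dis to be a true signed distance):

```python
import numpy as np

def clamp_to_motion_plane(P2, P_start, N):
    d_plane = np.dot(N, P_start)           # plane offset: the plane passes through the parent node
    dis = np.dot(N, P2) - d_plane          # signed distance of the second position from the plane
    return P2 - N * dis                    # P21 = P2 - N*dis projects P2 back onto the plane
```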
In one embodiment, after the second position is corrected to be within the motion range and before the special effect object is rendered into the image after animation skinning is performed with the second position of the mass point of each bone as a reference, the apparatus further comprises:
a length revision module configured to, when the length from the particle to the parent node position of the particle is greater than the set length, revise the corrected second position again to obtain a final position of the second position, so that the length from the final position to the parent node position of the particle is equal to the set length.
In one embodiment, the length revision module is configured to:
obtain the final position of the second position according to the following formula:
P22 = D2*(len(D2) - rest_len)/len(D2);
where D2 = P_start - P21;
where P22 represents the final position of the second position, P21 represents the corrected second position, len(D2) represents the length of the vector D2, P_start represents the parent node position of the particle, and rest_len represents the length from the particle to P_start when the special effect object is initialized.
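A Python sketch of this final revision, with an added guard that skips the revision when the bone is already within the set length, matching the "larger than the set length" condition stated above:

```python
import numpy as np

def revise_to_rest_length(P_start, P21, rest_len):
    D2 = P_start - P21                     # vector from the corrected second position to the parent
    length = np.linalg.norm(D2)
    if length <= rest_len:                 # already within the set length: no further revision
        return P21
    # P22 = D2*(len(D2) - rest_len)/len(D2), per the formula above
    return D2 * (length - rest_len) / length
```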
For implementation and beneficial effects of the operations in the image processing apparatus, reference is made to the description in the foregoing method, and details are not repeated here.
Having described an image processing method and apparatus according to an exemplary embodiment of the present application, an electronic device according to another exemplary embodiment of the present application is described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device according to the present application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps in the image processing method according to various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform the steps shown in fig. 2 or fig. 7.
The electronic device 130 according to this embodiment of the present application is described below with reference to fig. 9. The electronic device 130 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 130 is represented in the form of a general electronic device. The components of the electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 130, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the electronic device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 136. As shown, the network adapter 136 communicates with the other modules of the electronic device 130 through the bus 133. It should be understood that although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, aspects of the image processing method provided by the present application may also be implemented in the form of a program product including program code; when the program product is run on a computer device, the program code causes the computer device to perform the steps of the image processing method according to the various exemplary embodiments of the present application described above in this specification. For example, the computer device may perform the steps shown in fig. 2 or fig. 7.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for image processing of the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. An image processing method, characterized in that the method comprises:
collecting an image and a gravity direction;
identifying a target object in the image, and determining the initial position of a mass point of each bone of the special effect object in the image by taking the target object as a reference object; the special effect object is obtained by physical modeling in advance, the special effect object comprises a plurality of bones, and each bone is provided with a corresponding mass point;
for the mass point of each bone, performing spring simulation on the bone according to the gravity direction and the initial position of the mass point, and correcting the initial position of the mass point to obtain a first position;
rendering the special effect object into the image according to the first position of the mass point of each bone.
2. The method of claim 1, wherein the spring simulation of the bone based on the gravity direction and the initial position of the mass point, and the correction of the initial position of the mass point to obtain the first position comprises:
determining a displacement between a reference position of the mass point and an initial position of the mass point, wherein, if the image is a first frame image, the reference position is the position of the mass point when the special effect object is in an initial state; if the image is not the first frame image, the reference position is the position of the mass point in the previous frame image;
determining a comprehensive acting force according to the gravity value of the gravity direction and a preset external force;
and performing spring simulation on the bone according to the displacement and the comprehensive acting force, and correcting the initial position of the mass point to obtain a first position.
3. The method of claim 2, wherein performing spring simulation on the bone according to the displacement and the comprehensive acting force, and correcting the initial position of the mass point to obtain the first position, comprises:
the first position is obtained according to the following formula:
P1 = V*(a - damping)*t + F/(b*M)*(a - force_damping)*t²
wherein V = P - P′; F = F_ext + G;
wherein P1 represents the first position, V represents the displacement, P represents the initial position of the mass point, P′ represents the reference position of the mass point, a and b are both constants, M represents the preset mass of the mass point, F represents the comprehensive acting force, F_ext represents the preset external force, G represents the gravity, damping represents the spring damping, force_damping represents the attenuation coefficient of the force, and t represents the playing time length of one frame of image of the video.
4. The method of any of claims 1-3, wherein rendering the special effect object into the image according to the first location of the mass point of each bone comprises:
determining a length between a parent node position of the particle and the first position, wherein the parent node is connected to at least one child node, the position of each child node changes as the position of the parent node changes, and the particle is one of the at least one child node;
when the length is greater than the set length of the bone corresponding to the mass point, correcting the first position, according to a preset stiffness coefficient, to a second position whose length from the parent node position of the mass point is equal to the set length;
and rendering the special effect object into the image after animation skinning is carried out by taking the second position of the mass point of each bone as a reference.
5. The method of claim 4, wherein when the length is greater than a set length of a bone corresponding to the particle, correcting the first position according to a predetermined stiffness coefficient to obtain a second position comprises:
correcting the first position according to the following formula to obtain the second position:
P2 = D*(len(D) - max_len)/len(D);
wherein D = P_start - P1; max_len = rest_len*(1 - stiffness)*c;
wherein P2 represents the second position, P_start represents the parent node position of the particle, P1 represents the first position, len(D) represents the length of the vector D, rest_len represents the length from the particle to P_start when the special effect object is initialized, stiffness represents the preset stiffness coefficient, and c represents a constant.
6. The method of claim 4, wherein when the length is less than a set length of bone to which the particle corresponds, the method further comprises:
and correcting the first position according to a preset elastic coefficient to obtain the second position.
7. The method of claim 6, wherein the modifying the first position to obtain the second position according to a predetermined elastic coefficient comprises:
correcting the first position according to the following formula to obtain the second position:
P2 = (P_start - P1)*elastic;
wherein P2 represents the second position, elastic represents the preset elastic coefficient, P_start represents the parent node position of the particle, and P1 represents the first position.
8. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module configured to perform acquiring an image and a gravity direction;
a target object identification module configured to identify a target object in the image, and determine an initial position of a particle of each bone of the special effect object in the image by taking the target object as a reference object; the special effect object is obtained by physical modeling in advance, the special effect object comprises a plurality of bones, and each bone is provided with a corresponding mass point;
a position processing module configured to, for the mass point of each bone, perform spring simulation on the bone according to the gravity direction and the initial position of the mass point, and correct the initial position of the mass point to obtain a first position;
a rendering module configured to perform rendering the special effect object into the image according to a first location of a mass point of each bone.
9. An electronic device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, cause the electronic device to perform the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010528831.1A CN113808236B (en) | 2020-06-11 | 2020-06-11 | Image processing method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113808236A true CN113808236A (en) | 2021-12-17 |
CN113808236B CN113808236B (en) | 2024-09-06 |
Family
ID=78943850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010528831.1A Active CN113808236B (en) | 2020-06-11 | 2020-06-11 | Image processing method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113808236B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1975784A (en) * | 2006-12-28 | 2007-06-06 | 上海交通大学 | Point particle spring deformation simulating method based on skeleton linear net |
CN101719284A (en) * | 2009-12-25 | 2010-06-02 | 北京航空航天大学 | Method for physically deforming skin of virtual human based on hierarchical model |
CN104463934A (en) * | 2014-11-05 | 2015-03-25 | 南京师范大学 | Automatic generation method for point set model animation driven by mass point-spring system |
US20160267664A1 (en) * | 2015-03-11 | 2016-09-15 | Massachusetts Institute Of Technology | Methods and apparatus for modeling deformations of an object |
CN105551072A (en) * | 2015-12-11 | 2016-05-04 | 网易(杭州)网络有限公司 | Method and system for realizing local real-time movement of role model |
CN107610210A (en) * | 2017-09-15 | 2018-01-19 | 苏州蜗牛数字科技股份有限公司 | Skeleton cartoon system optimization method, device and skeleton cartoon system |
CN110136232A (en) * | 2019-05-16 | 2019-08-16 | 北京迈格威科技有限公司 | Processing method, device, electronic equipment and the storage medium of Skeletal Skinned Animation |
Non-Patent Citations (1)
Title |
---|
丁维龙, 徐岩: "An improved algorithm for plant deformation simulation based on a mass point-spring system", Journal of Zhejiang University of Technology, pages 597-603 *
Also Published As
Publication number | Publication date |
---|---|
CN113808236B (en) | 2024-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210350630A1 (en) | Optimizing head mounted displays for augmented reality | |
US10867444B2 (en) | Synthetic data generation for training a machine learning model for dynamic object compositing in scenes | |
CN111260764B (en) | Method, device and storage medium for making animation | |
CN108335345B (en) | Control method and device of facial animation model and computing equipment | |
JP7299414B2 (en) | Image processing method, device, electronic device and computer program | |
EP3454302A1 (en) | Approximating mesh deformation for character rigs | |
US9892529B2 (en) | Constraint evaluation in directed acyclic graphs | |
CN112614213A (en) | Facial expression determination method, expression parameter determination model, medium and device | |
CN109144252B (en) | Object determination method, device, equipment and storage medium | |
CN114222076B (en) | Face changing video generation method, device, equipment and storage medium | |
WO2023240999A1 (en) | Virtual reality scene determination method and apparatus, and system | |
US20240062495A1 (en) | Deformable neural radiance field for editing facial pose and facial expression in neural 3d scenes | |
EP4148682A1 (en) | A method, an apparatus and a computer program product for computer graphics | |
CN117826989A (en) | Augmented reality immersive interaction method and device for electric power universe | |
CN114078181A (en) | Human body three-dimensional model establishing method and device, electronic equipment and storage medium | |
CN113808236B (en) | Image processing method, device, electronic equipment and storage medium | |
JP2024537259A (en) | Inferred Skeleton Structures for Practical 3D Assets | |
CN116188742A (en) | Virtual object control method, device, equipment and storage medium | |
CN111292234A (en) | Panoramic image generation method and device | |
CN115222854A (en) | Virtual image collision processing method and device, electronic equipment and storage medium | |
CN113761965A (en) | Motion capture method, motion capture device, electronic equipment and storage medium | |
Constantine et al. | Project Esky: an OpenSource Software Framework for High Fidelity Extended Reality | |
CN116805344B (en) | Digital human action redirection method and device | |
US11430184B1 (en) | Deformation joints | |
US11645797B1 (en) | Motion control for an object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||