WO2017092196A1 - Method and apparatus for generating three-dimensional animation - Google Patents

Method and apparatus for generating three-dimensional animation

Info

Publication number
WO2017092196A1
WO2017092196A1 · PCT/CN2016/076742
Authority
WO
WIPO (PCT)
Prior art keywords
model
dimensional
point
skin
depth image
Prior art date
Application number
PCT/CN2016/076742
Other languages
French (fr)
Chinese (zh)
Inventor
黄源浩
肖振中
许宏淮
Original Assignee
深圳奥比中光科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司 filed Critical 深圳奥比中光科技有限公司
Publication of WO2017092196A1 publication Critical patent/WO2017092196A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating a three-dimensional animation.
  • three-dimensional animation is increasingly popular because of its strong sense of space and realism.
  • a method for generating a three-dimensional animation comprising:
  • the first body depth image is a depth image in an RGBD image
  • the RGBD image further includes a corresponding color image
  • the animation is a colored three-dimensional animation
  • the first three-dimensional animation corresponding to the first three-dimensional model is generated according to the motion trajectory and the influence weight information
  • the changed accessory model is worn to a position corresponding to the first three-dimensional model.
  • before the step of acquiring the first body depth image, the method further includes:
  • the weight information of the influence of the model feature points on the skin points is determined according to the positional relationship between the skin point position and the three-dimensional animated bone.
  • the step of determining the influence weight information of the model feature point on the skin point according to the positional relationship between the skin point position and the three-dimensional animated bone comprises:
  • the weight coefficient of the influence of the model feature points on the skin points at different positions within the influence range is determined according to the body skeleton feature, wherein the influence of the model feature points on the skin points is inversely proportional to the distance between the two.
  • the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
  • determining whether the skin point of the three-dimensional animation to be generated is a skin point at the corresponding position on the depth image acquired by the camera; if so, directly generating the skin point of the first three-dimensional animation at that position according to the depth image; otherwise, generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • a method for generating a three-dimensional animation comprising:
  • before the step of acquiring the first body depth image, the method further includes:
  • the weight information of the influence of the model feature points on the skin points is determined according to the positional relationship between the skin point position and the three-dimensional animated bone.
  • the step of determining the influence weight information of the model feature point on the skin point according to the positional relationship between the skin point position and the three-dimensional animated bone comprises:
  • the weight coefficient of the influence of the model feature points on the skin points at different positions within the influence range is determined according to the body skeleton feature, wherein the influence of the model feature points on the skin points is inversely proportional to the distance between the two.
  • the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
  • the method further includes:
  • the changed accessory model is worn to a position corresponding to the first three-dimensional model.
  • the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
  • determining whether the skin point of the three-dimensional animation to be generated is a skin point at the corresponding position on the depth image acquired by the camera; if so, directly generating the skin point of the first three-dimensional animation at that position according to the depth image; otherwise, generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • the first body depth image is a depth image in an RGBD image, the RGBD image further comprising a corresponding color image, the first three-dimensional animation being a colored three-dimensional animation.
  • a device for generating a three-dimensional animation comprising:
  • a depth image and model acquisition module configured to acquire a first body depth image, and acquire a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;
  • a feature point and weight acquisition module configured to acquire first feature points matched by the first body depth image, map the first feature points to the first three-dimensional model to obtain corresponding model first feature points, and acquire the influence weight information of the model first feature points on the skin points;
  • a three-dimensional animation generating module configured to acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • the apparatus further includes:
  • a pre-processing module configured to acquire body depth images of different forms, establish different three-dimensional models for the body depth images of the different forms, set feature points corresponding to the body depth images of the different forms, map the feature points to the three-dimensional models to obtain corresponding model feature points, establish the three-dimensional animated bones of the three-dimensional models according to the model feature points, and determine the influence weight information of the model feature points on the skin points according to the positional relationship between the skin point positions and the three-dimensional animated bones.
  • the pre-processing module is further configured to determine the influence range of a three-dimensional animated bone according to the subject's bone features, and to determine, according to the subject's bone features, the weight coefficients of the model feature points' influence on skin points at different positions within the influence range, wherein, when the weight coefficients are determined, the influence of a model feature point on a skin point is inversely proportional to the distance between the two.
  • the three-dimensional animation generation module includes:
  • a feature point coordinate unit configured to map the feature points on the motion track to the first three-dimensional model according to the depth information to obtain spatial three-dimensional coordinates of the model feature points
  • a spatial relationship calculation unit configured to acquire the first influence range of a model feature point according to the influence weight information, acquire the first skin points within the first influence range, and calculate the spatial positional relationship between each first skin point and the model feature point according to the original spatial three-dimensional coordinates of the first skin point and the spatial three-dimensional coordinates of the model feature point;
  • an update unit configured to obtain the weight coefficient of the first skin point according to the spatial positional relationship, calculate the updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and move the first skin point from the original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
  • the apparatus further includes:
  • an accessory module configured to acquire an accessory model, obtain the accessory influence weight information of the model first feature points on the accessory model, change the shape of the accessory model according to the position information of the model first feature points and the accessory influence weight information, and wear the changed accessory model to the position corresponding to the first three-dimensional model.
  • the three-dimensional animation generation module includes:
  • a determining unit configured to determine whether a skin point of the three-dimensional animation to be generated is a skin point corresponding to a position on the depth image collected by the camera, and if yes, enter the first generating unit, and otherwise enter the second generating unit;
  • a first generating unit configured to directly generate a skin point of the first three-dimensional animation corresponding to the position according to the depth image
  • a second generating unit configured to generate, according to the motion trajectory and the influence weight information, other skin points of the first three-dimensional animation corresponding to the first three-dimensional model.
  • the first body depth image is a depth image in an RGBD image, the RGBD image further comprising a corresponding color image, the first three-dimensional animation being a colored three-dimensional animation.
  • in the method and device for generating a three-dimensional animation, a first body depth image is acquired, the body being a human body or an animal having bones; a pre-established first three-dimensional model corresponding to the first body depth image is acquired; the first feature points matching the first body depth image are acquired and mapped to the first three-dimensional model to obtain the corresponding model first feature points; the influence weight information of the model first feature points on the skin points is acquired; the motion trajectories of the first feature points are acquired according to the first body depth image; and the first three-dimensional animation corresponding to the first three-dimensional model is generated according to the motion trajectories and the influence weight information. Since the depth image carries depth information, it is three-dimensional spatial information, so the motion trajectories of the first feature points acquired from the depth image are three-dimensional motion trajectories, and the first three-dimensional animation corresponding to the first three-dimensional model can be generated automatically according to the motion trajectories and the influence weight information of the model first feature points on the skin points. No sensor needs to be worn to collect three-dimensional position information, which is simple and convenient.
  • FIG. 1 is a flow chart of a method for generating a three-dimensional animation in an embodiment
  • FIG. 2 is a flow chart of establishing a three-dimensional model and determining weight information in one embodiment
  • FIG. 4 is a flow chart of generating a first three-dimensional animation corresponding to a first three-dimensional model according to a motion trajectory and influence weight information in one embodiment
  • Figure 5 is a flow chart of wearing an accessory model in one embodiment
  • FIG. 6 is a structural block diagram of an apparatus for generating a three-dimensional animation in an embodiment
  • FIG. 7 is a structural block diagram of an apparatus for generating a three-dimensional animation in another embodiment
  • FIG. 8 is a structural block diagram of a three-dimensional animation generating module in an embodiment
  • FIG. 9 is a structural block diagram of an apparatus for generating a three-dimensional animation in still another embodiment
  • Figure 10 is a schematic view showing the generation of skin according to feature points in one embodiment
  • Figure 11 is a schematic diagram of a three-dimensional animated diagram in one embodiment
  • FIG. 12 is a schematic diagram of model feature points on a three-dimensional human body model in an embodiment
  • FIG. 13 is a schematic diagram of a three-dimensional skeleton of a human body half body established in an embodiment
  • FIG. 14 is a schematic diagram of a three-dimensional skeleton of a whole body of a human body established in an embodiment
  • Figure 15 is a schematic diagram of a three-dimensional skeleton of a dog established in an embodiment
  • FIG. 16 is a schematic diagram of a three-dimensional human body animation generated in one embodiment
  • FIG. 17 is a three-dimensional animation diagram of a dog generated in one embodiment
  • Figure 18 is a block diagram showing the structure of a three-dimensional animation generating module in one embodiment.
  • a method for generating a three-dimensional animation including the following steps:
  • Step S110 Acquire a first body depth image, and acquire a pre-established first three-dimensional model corresponding to the first body depth image, where the body is a human body or an animal having bones.
  • the body is a human body or an animal with bones, such as a dog. The depth image can be acquired by a depth camera, such as a binocular camera or multiple paired camera devices, where the depth image is obtained by averaging different depth images of the same scene.
  • the first body depth image is processed, such as removing the background to separate the body contour and the like.
  • the body contour is then analyzed; for example, if the body contour is a complete body, the corresponding complete three-dimensional model of the person is acquired, and if the body contour is only the head, the head is recognized and the corresponding three-dimensional head model is acquired.
  • if the body contour includes the head and the arms, or the two forelimbs of an animal, a half-length three-dimensional model corresponding to the head and the arms or the two forelimbs is acquired. Since the first three-dimensional model is pre-established, it can be quickly matched according to the first body depth image, which improves efficiency.
  • the pre-established first three-dimensional model may be adjusted in shape according to the first body depth image, such as adjusting the height or the length ratio of the limbs, so that the first three-dimensional model matches the first body depth image more closely.
  • in another embodiment, the first three-dimensional model is dynamically generated according to the first body depth image, giving an even closer match.
  • the first body color image corresponding to the first body depth image may be acquired, and the first three-dimensional model is established according to the chromaticity information in the first body color image, for example by adjusting the skin color or the animal's fur color, the clothing color, and the like. It can be understood that if the three-dimensional animation is generated only from the depth image, the generated three-dimensional animation has no color, whereas if it is generated according to the depth image and the corresponding color image, the generated three-dimensional animation is colored.
  • Step S120 Acquire a first feature point that is matched by the first body depth image, map the first feature point to the first three-dimensional model to obtain a corresponding first feature point of the model, and obtain information about the influence weight of the first feature point of the model on the skin point.
  • different depth images correspond to different first feature points
  • the position and number of the first feature points correspond to the depth image.
  • typically, the first feature points are the ends of the body's limbs, the joints, and the facial positions of the head.
  • if the depth image contains only the head, the first feature points are the facial feature positions.
  • the color image corresponding to the first body depth image may be acquired, and the feature point positions on the color image are obtained by image recognition, such as facial feature recognition; since the color image corresponds to the depth image, the feature point positions on the depth image can be matched according to the feature point positions on the color image.
  • the number of first feature points can be customized as needed; with an appropriate number of first feature points, the accuracy of the three-dimensional animation subsequently generated from the feature points is high.
  • the feature point density at key parts such as the face can be set high, so that the resulting three-dimensional animation is more accurate and the expressions more realistic. It should be noted that since the depth image carries depth information, that is, the pixel values in the depth map reflect the depth of field, the positions of the obtained first feature points are positions in three-dimensional space.
  • mapping is then performed to obtain, on the first three-dimensional model, the positions of the model first feature points corresponding to the first feature points.
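A minimal sketch of how a feature point detected at pixel (u, v) of the depth image can be lifted to the three-dimensional position described above. The patent does not specify a camera model; the pinhole intrinsics fx, fy, cx, cy used here are an illustrative assumption.

```python
def depth_pixel_to_3d(u, v, depth_map, fx, fy, cx, cy):
    """Back-project a depth pixel to camera-space 3D coordinates
    (assumed pinhole model; not specified by the patent)."""
    z = depth_map[v][u]           # the pixel value encodes depth of field
    x = (u - cx) * z / fx         # back-project along the camera ray
    y = (v - cy) * z / fy
    return (x, y, z)              # feature point position in 3D space
```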
  • the first feature point generally includes a plurality of feature points, and each of the feature points may be connected to form a corresponding bone according to the distribution of the body, and the first feature points of the model may be connected according to the distribution of the body to form a corresponding three-dimensional animated bone.
  • the first three-dimensional model is independent of the three-dimensional animated bone.
  • the first three-dimensional model is equivalent to the skin, comprising the individual skin points. Once the skin is bound to the three-dimensional animated bones, the skin can follow the movement of the bones accordingly. For the skin to follow the motion of the three-dimensional animated bones, the corresponding influence weight of the bones on each skin point of the first three-dimensional model must be set.
  • the influence weight of the three-dimensional animated bones on each skin point of the first three-dimensional model is converted into the influence weight of the model first feature points on the skin points. The influence weight information includes the range of skin points affected by a model first feature point and the weight coefficient of that feature point's influence on a skin point.
  • the size of the weight coefficient is related to the positions of the skin point and the model first feature point.
  • the weight coefficient of the first feature point of the model to the skin points at different positions is generally determined in combination with the skeleton characteristics of the subject.
  • Step S130 Acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • first body depth images are acquired at different times, and the motion trajectory of a first feature point is obtained from the change in coordinate position of the corresponding first feature point across the first body depth images at the different time points.
  • the motion trajectory of the first feature point is mapped to the motion trajectory of the model first feature point on the first three-dimensional model. Since the motion trajectory of the first feature point is obtained from the depth image, which carries depth information, it is a spatial three-dimensional motion trajectory, and the motion trajectory of the model first feature point is likewise a spatial three-dimensional motion trajectory.
  • the range of affected skin points and the weight coefficients for those skin points are determined, and the updated spatial coordinates of the skin points under the influence of the model first feature points are calculated according to the weight coefficients and the motion trajectories, yielding the updated skin; the continuous skin changes form the first three-dimensional animation. If the first body depth image captures the motion of the head, the three-dimensional animated expression corresponding to the expression change is generated from the feature points on the collected motion trajectories and the influence weight information. For example, when laughing, the corners of the mouth rise, and the feature points corresponding to the corners of the mouth form motion trajectories rising toward the left and right sides.
  • in the schematic diagram of generating the skin point at a corresponding position according to the motion trajectories of the feature points and their influence weights on the skin point, the position of each skin point can be calculated by a formula of the form d2 = d1 + g(α·f(a1, a2) + β·f(b1, b2) + γ·f(c1, c2)), where d1 denotes the starting skin point, d2 the skin point after motion, a1, b1 and c1 the starting feature points, a2, b2 and c2 the feature points after motion, α, β and γ the weight values of the respective feature points, f(·) the computed trajectory of a feature point, and g(·) the resulting trajectory of the skin point.
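The formula above is reconstructed from the symbol definitions given in the text, since the original expression was lost in extraction. The sketch below implements that reconstructed form, additionally assuming that f returns a feature point's displacement vector and that g simply applies the weighted displacement to the skin point.

```python
def feature_displacement(p_start, p_end):
    """f: the trajectory of a feature point, as a displacement vector."""
    return tuple(e - s for s, e in zip(p_start, p_end))

def move_skin_point(d1, features_start, features_end, weights):
    """Compute d2 = d1 + g(sum of weighted feature trajectories)."""
    disp = [0.0, 0.0, 0.0]
    for p1, p2, w in zip(features_start, features_end, weights):
        for i, v in enumerate(feature_displacement(p1, p2)):
            disp[i] += w * v      # alpha*f(a1,a2) + beta*f(b1,b2) + ...
    # g(.) is taken here as the identity on the weighted displacement
    return tuple(c + v for c, v in zip(d1, disp))
```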
  • FIG. 11 shows a three-dimensional animation generated after the skin positions are completely determined; FIG. 16 shows a generated three-dimensional human body animation, and FIG. 17 shows a generated three-dimensional animation of a dog.
  • by acquiring the first body depth image, acquiring the pre-established first three-dimensional model corresponding to the first body depth image, acquiring the first feature points matching the first body depth image, mapping the first feature points to the first three-dimensional model to obtain the corresponding model first feature points, acquiring the influence weight information of the model first feature points on the skin points, acquiring the motion trajectories of the first feature points according to the first body depth image, and generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectories and the influence weight information: since the depth image carries depth information, it is three-dimensional spatial information, so the motion trajectories of the first feature points acquired from the depth image are three-dimensional motion trajectories, and the first three-dimensional animation corresponding to the first three-dimensional model can be generated automatically from the motion trajectories and the influence weight information of the model first feature points on the skin points, without wearing sensors to collect three-dimensional position information, which is simple and convenient.
  • before step S110, the method further includes:
  • Step S210 acquiring body depth images of different forms, and establishing different three-dimensional models for body depth images of different forms.
  • the body depth image of different forms is processed, such as removing the background to separate the body contour.
  • the analysis is performed. If the body contour is a complete body, a corresponding complete personal three-dimensional model is established. If the body contour is only half-length and does not include the arm, the head is recognized, and a three-dimensional model of the head corresponding to the head is established. If the body contour includes the head and the arm, a half-length three-dimensional model corresponding to the head and the arm is established.
  • a body color image corresponding to the body depth image may be acquired, and a three-dimensional model may be established according to the chromaticity information in the body color image, such as adjusting skin color, clothing color, and the like.
  • in this way, the matching pre-established three-dimensional model can be looked up directly according to the body depth image acquired in real time, which speeds up the generation of the three-dimensional animation.
  • Step S220 setting feature points corresponding to the body depth images of different forms, and mapping the feature points to the three-dimensional model to obtain corresponding model feature points.
  • the position and the number of feature points corresponding to the body depth image of different forms may be customized.
  • typically, the feature points are the ends of the limbs, the joints, and the facial features of the head; for example, if the body contour corresponding to the depth image is the head, the feature points are the facial features.
  • the position of the feature point can be manually calibrated or automatically recognized.
  • the color image corresponding to the body depth image can be acquired, and the feature point positions are obtained by image recognition on the color image, such as facial feature recognition; since the color image corresponds to the depth image, the feature point positions on the depth image can be matched according to the feature point positions on the color image.
  • the mapping is performed to obtain the position of the model feature point corresponding to the feature point on the three-dimensional model.
  • FIG. 12 is a schematic diagram of model feature points on the three-dimensional human body model. It can be seen from the figure that the feature points of the head and hands are denser, so a more accurate three-dimensional animation of facial expressions and hand motions can be generated.
  • FIG. 10 is a schematic diagram of feature points on a three-dimensional model of a dog.
  • Step S230 the three-dimensional animated skeleton of the three-dimensional model is established according to the model feature points, and the weight information of the influence of the model feature points on the skin points is determined according to the positional relationship between the skin point position and the three-dimensional animated bone.
  • the influence range of a model feature point on skin points can be customized according to requirements; for example, a skin point is set to lie within the influence range of a feature point when its distance from the feature point is less than a preset threshold.
  • the weight coefficient can be customized according to the positional relationship between the skin point position and the three-dimensional animated bone, for example by calculating the vertical distance between the skin point and the three-dimensional animated bone and determining, according to that vertical distance, the weight coefficient of the feature points connected into the bone on the skin point, as in the sketch below.
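A minimal sketch of the vertical-distance rule just described. The linear falloff used for the weight is an assumption; the patent only requires that influence decrease with distance.

```python
import math

def point_to_bone_distance(p, joint_a, joint_b):
    """Perpendicular (vertical) distance from skin point p to the bone
    segment joining model feature points joint_a and joint_b."""
    ab = [b - a for a, b in zip(joint_a, joint_b)]
    ab_len2 = sum(c * c for c in ab)
    if ab_len2 == 0.0:
        return math.dist(p, joint_a)          # degenerate bone
    ap = [q - a for a, q in zip(joint_a, p)]
    t = sum(x * y for x, y in zip(ap, ab)) / ab_len2
    t = max(0.0, min(1.0, t))                 # clamp to the segment
    foot = [a + t * c for a, c in zip(joint_a, ab)]
    return math.dist(p, foot)

def distance_weight(p, joint_a, joint_b, influence_radius):
    d = point_to_bone_distance(p, joint_a, joint_b)
    if d >= influence_radius:
        return 0.0                            # outside the influence range
    return 1.0 - d / influence_radius         # closer skin points weigh more
```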
  • FIG. 13 is a schematic diagram of an established three-dimensional skeleton of a human half body, and FIG. 14 is a schematic diagram of an established three-dimensional skeleton of a whole human body.
  • FIG. 15 is a schematic diagram of an established three-dimensional skeleton of a dog.
  • step S230 includes:
  • Step S231 determining an influence range of the three-dimensional animated skeleton according to the skeleton feature of the subject.
  • the model feature points are connected according to the distribution of the subject's body to form the three-dimensional animated bones, and the range of skin points affected by each three-dimensional animated bone is determined according to the subject's bone features. Since a three-dimensional animated bone is formed by connecting model feature points, the range of skin points affected by the model feature points on the bone is the range of skin points affected by the bone itself.
  • Step S232 determining, according to the subject's bone features, the weight coefficients of the model feature points' influence on skin points at different positions within the influence range, wherein, when the weight coefficients are determined, the influence of a model feature point on a skin point is inversely proportional to the distance between the two.
  • the weight coefficient of a model feature point on a skin point is determined both by the distance between the model feature point and the skin point and by the subject's bone features, since muscles at different positions stretch to different degrees. For example, if the three-dimensional animated bone is an arm bone, whose movement has little effect on the stretching of the muscle, a smaller bone-feature weight coefficient a1 is set; a distance weight coefficient a2 is then obtained according to the distance between the skin point and the feature point on the three-dimensional animated bone, and a1 and a2 are multiplied to obtain the final weight coefficient. The final weight coefficient is thus expressed as a function of the distance between the skin point and the feature point, so that when the three-dimensional animation is subsequently generated the corresponding weight coefficient can be obtained directly from the distance, as sketched below.
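A sketch of the two-factor weighting just described: a bone-feature coefficient a1 (smaller for regions such as an arm, where bone movement stretches the muscle little) multiplied by a distance coefficient a2. The exact form of a2 is an assumption; the text only requires it to fall off with distance.

```python
def final_weight(distance, a1, influence_radius):
    """Return a1 * a2, where a2 shrinks as the distance grows."""
    if distance >= influence_radius:
        return 0.0                            # outside the influence range
    a2 = 1.0 - distance / influence_radius    # assumed linear falloff
    return a1 * a2

# Usage: a stiff arm region (small a1) versus a flexible face region.
w_arm = final_weight(distance=2.0, a1=0.3, influence_radius=10.0)
w_face = final_weight(distance=2.0, a1=0.9, influence_radius=10.0)
```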
  • step S130 includes:
  • Step S131 mapping feature points on the motion trajectory according to the depth information to the first three-dimensional model to obtain spatial three-dimensional coordinates of the model feature points.
  • since the depth image includes depth information, each feature point carries three-dimensional spatial information, so mapping a feature point to the first three-dimensional model yields its spatial three-dimensional coordinates.
  • Step S132 Acquire the first influence range of a model feature point according to the influence weight information, acquire the first skin points within the first influence range, and calculate the spatial positional relationship between each first skin point and the model feature point according to the original spatial three-dimensional coordinates of the first skin point and the spatial three-dimensional coordinates of the model feature point.
  • the influence weight information includes the range of skin points affected by each feature point; the first skin points within the first influence range are acquired, while the other skin points, being outside the influence range, are not affected by the feature point. For example, in a model with two arms, the movement of one arm does not affect the skin points on the other arm.
  • the spatial positional relationship can be calculated as needed. If the weight coefficient to be obtained is related to the distance between the skin point and the model feature point, the point-to-point distance between them is calculated directly from the spatial three-dimensional coordinates. If the weight coefficient is related to the distance between the skin point and the bone formed by different model feature points, the point-to-line distance between the skin point and the bone formed by the model feature points is calculated from the spatial three-dimensional coordinates.
  • Step S133 obtaining a weight coefficient of the first skin point according to the spatial positional relationship.
  • the weight coefficient corresponding to the calculated spatial distance may be taken as the weight coefficient of the first skin point. It can be understood that if multiple model feature points affect the first skin point, the weight coefficients of the first skin point with respect to each of those model feature points are obtained according to the spatial positional relationships, and these weight coefficients are further combined by weighting to obtain the final weight coefficient.
  • Step S134 calculating the updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and moving the first skin point from the original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
  • by mapping the motion trajectories of the feature points to the first three-dimensional model, the motion trajectory of each model feature point is obtained, and the updated spatial coordinates of the first skin point are calculated from the weight coefficients and the trend of each model feature point's motion trajectory. For example, if a model feature point moves upward with a motion distance b1 and the weight coefficient is b2, the updated spatial coordinates of the first skin point are obtained as a function of b1, b2 and the original spatial three-dimensional coordinates, and the updated spatial three-dimensional coordinates are calculated according to that function. The specific function formula can be customized as needed. The changes in the spatial coordinates of successive skin points form the three-dimensional animation.
  • if the captured body depth image corresponds to a head rotation, the three-dimensional animation also follows the head rotation; if the captured body depth image corresponds to a smiling face, the three-dimensional animation also smiles.
  • the method further includes:
  • Step S310 acquiring an accessory model, and acquiring the accessory influence weight information of the model first feature points on the accessory model.
  • only the target model feature points can affect the accessory model; for example, the target model feature points may be the head model feature points, or the target model feature point may be the nose. All other model feature points have an accessory weight coefficient of 0 on the accessory model.
  • the accessory influence weight information includes the weight coefficients and the influence range of the model first feature points on the accessory model; only the points on the accessory within the influence range are affected by the first feature points. For example, the points on the brim of a hat are affected by the model first feature points, while the points at the top of the hat are not.
  • the influence weight coefficients determine how strongly each model feature point affects the accessory model when the model first feature points move. For example, if the accessory model is a hat, the points on the hat closer to the head are influenced more strongly.
  • Step S320 changing the form of the accessory model according to the position information of the first feature point of the model and the accessory influence weight information, and wearing the changed accessory model to a position corresponding to the first three-dimensional model.
  • the position information of the model first feature points includes the distances between model feature points and the motion trajectories of the feature points; the form of the accessory model, such as its size and position, can be changed according to this position information and the accessory influence weight information.
  • the changed accessory model is matched with the first three-dimensional model so that it can be worn to the corresponding position on the first three-dimensional model. If user A's head is wider, the distance between two model feature points of user A's head is larger, so the original accessory model, such as a hat, is enlarged to match the user's head, as in the sketch below.
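A sketch of this fitting step, scaling a hat by the ratio of the measured head width (the distance between two head feature points) to the accessory's reference width. The reference width and the anchor point are illustrative assumptions; the text leaves the exact fitting rule open.

```python
import math

def fit_accessory(accessory_points, anchor, left_head_pt, right_head_pt,
                  reference_width):
    """Scale the accessory about its anchor to match the measured head."""
    scale = math.dist(left_head_pt, right_head_pt) / reference_width
    fitted = []
    for p in accessory_points:
        # enlarge or shrink each accessory point about the anchor, so the
        # accessory stays attached at the corresponding head position
        fitted.append(tuple(a + scale * (c - a) for a, c in zip(anchor, p)))
    return fitted
```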
  • the first three-dimensional model is a three-dimensional head model
  • the first three-dimensional animation is an avatar three-dimensional animation
  • the acquired influence weight information of the first feature points on the skin points matches the structure of the head, and the motion of the skin is obtained according to the motion trajectories of the head feature points.
  • in one embodiment, step S130 includes: determining whether a skin point of the three-dimensional animation to be generated is a skin point at a corresponding position on the depth image acquired by the camera; if so, directly generating the skin point of the first three-dimensional animation at that position according to the depth image; otherwise, generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • since the depth image carries depth information, it is three-dimensional spatial information, and the spatial position coordinates of each point on the depth image can be obtained directly from the depth information, thereby determining the position of each such skin point. For example, the skin points on the side facing the camera have corresponding points on the depth image, so their three-dimensional positions can be obtained directly and the corresponding three-dimensional animated skin points generated; the parts facing away from the camera have no corresponding points on the depth image, and those skin points need to be generated according to the motion trajectories and the influence weight information, as in the sketch below.
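A sketch of this branch: skin points visible to the depth camera are generated directly from the depth image, while occluded ones fall back to the trajectory-and-weights computation. project_to_pixel, depth_pixel_to_3d and move_skin_point are the hypothetical helpers from the earlier sketches; none of these names come from the patent itself.

```python
def generate_skin_point(skin_pt, depth_map, intrinsics, project_to_pixel,
                        features_start, features_end, weights):
    fx, fy, cx, cy = intrinsics               # assumed pinhole intrinsics
    uv = project_to_pixel(skin_pt)            # None if the point faces away
    if uv is not None:
        u, v = uv                             # camera-facing: read depth
        return depth_pixel_to_3d(u, v, depth_map, fx, fy, cx, cy)
    # facing away from the camera: use the weighted feature trajectories
    return move_skin_point(skin_pt, features_start, features_end, weights)
```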
  • for the points captured by the camera, the corresponding three-dimensional animation can be generated directly from the depth information, which speeds up the generation of the three-dimensional animation.
  • the first body depth image is a depth image in the RGBD image
  • the RGBD image further includes a corresponding color image
  • the first three-dimensional animation being a colored three-dimensional animation.
  • an RGBD image consists of a depth image and a color image collected synchronously by the camera. Since the collected information includes a color image whose points correspond one to one with the points of the depth image, the generated three-dimensional animation is colored, as in the sketch below.
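A small sketch of the coloring step implied above: because color and depth pixels correspond one to one, each generated skin point can take the color at its depth-image pixel. The pixel list is an illustrative assumption.

```python
def color_skin_points(skin_pixels, color_image):
    """skin_pixels: (u, v) depth-image coordinates of the skin points."""
    return [color_image[v][u] for (u, v) in skin_pixels]  # one RGB per point
```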
  • an apparatus for generating a three-dimensional animation including:
  • the depth image and model acquisition module 410 is configured to acquire a first body depth image, and acquire a pre-established first three-dimensional model corresponding to the first body depth image, where the body is a human body or an animal having bones.
  • the feature point and weight acquisition module 420 is configured to acquire the first feature points matching the first body depth image, map the first feature points to the first three-dimensional model to obtain the corresponding model first feature points, and acquire the influence weight information of the model first feature points on the skin points.
  • the three-dimensional animation generating module 430 is configured to acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • the apparatus further includes:
  • the pre-processing module 440 is configured to acquire body depth images of different forms, establish different three-dimensional models for the body depth images of different forms, set feature points corresponding to the body depth images of different forms, map the feature points to the three-dimensional models to obtain the corresponding model feature points, establish the three-dimensional animated skeletons of the three-dimensional models according to the model feature points, and determine the influence weight information of the model feature points on the skin points according to the positional relationship between the skin point positions and the three-dimensional animated bones.
  • the pre-processing module 440 is further configured to determine the influence range of a three-dimensional animated bone according to the subject's bone features, and to determine, according to the subject's bone features, the weight coefficients of the model feature points' influence on skin points at different positions within the influence range, wherein, when the weight coefficients are determined, the influence of a model feature point on a skin point is inversely proportional to the distance between the two.
  • the three-dimensional animation generating module 430 includes:
  • the feature point coordinate unit 431 is configured to map feature points on the motion track to the first three-dimensional model according to the depth information to obtain spatial three-dimensional coordinates of the model feature points.
  • the spatial relationship calculation unit 432 is configured to acquire the first influence range of a model feature point according to the influence weight information, acquire the first skin points within the first influence range, and calculate the spatial positional relationship between each first skin point and the model feature point according to the original spatial three-dimensional coordinates of the first skin point and the spatial three-dimensional coordinates of the model feature point.
  • the updating unit 433 is configured to obtain the weight coefficient of the first skin point according to the spatial positional relationship, calculate the updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and move the first skin point from the original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
  • the apparatus further includes:
  • the accessory module 450 is configured to acquire an accessory model, obtain the accessory influence weight information of the model first feature points on the accessory model, change the shape of the accessory model according to the position information of the model first feature points and the accessory influence weight information, and wear the changed accessory model to the position corresponding to the first three-dimensional model.
  • the first three-dimensional model is a three-dimensional head model
  • the first three-dimensional animation is an avatar three-dimensional animation
  • the three-dimensional animation generating module 430 includes:
  • the determining unit 434 is configured to determine whether the skin point of the three-dimensional animation to be generated is a skin point of a corresponding position on the depth image collected by the camera, and if yes, enter the first generating unit, and otherwise enter the second generating unit.
  • the first generating unit 435 is configured to directly generate a skin point of the first three-dimensional animation corresponding to the position according to the depth image.
  • the second generating unit 436 is configured to generate other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  • the first body depth image is a depth image in the RGBD image
  • the RGBD image further includes a corresponding color image
  • the first three-dimensional animation being a colored three-dimensional animation.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Abstract

A method for generating a three-dimensional animation, comprising: acquiring a depth image of a first main body, and acquiring a pre-established first three-dimensional model corresponding to the depth image of the first main body, wherein the main body is a human body or an animal with bones (S110); acquiring first feature points matching the depth image of the first main body, mapping the first feature points to the first three-dimensional model to obtain corresponding model first feature points, and acquiring influence weight information of the model first feature points on skin points (S120); and acquiring movement trajectories of the first feature points according to the depth image of the first main body, and generating a first three-dimensional animation corresponding to the first three-dimensional model according to the movement trajectories and the influence weight information (S130). With this method, it is not necessary to wear a sensor to collect three-dimensional position information, which is simple and convenient. An apparatus for generating a three-dimensional animation is also provided.

Description

三维动画生成的方法和装置Method and device for generating 3D animation 【技术领域】[Technical Field]
本发明涉及计算机技术领域,特别是涉及一种三维动画生成的方法和装置。The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating a three-dimensional animation.
【背景技术】【Background technique】
随着计算机技术的发展和多媒体技术的进步,二维动画已经不能满足人们的视觉需求,三维动画由于其强烈的空间感和逼真感越来越多的受到人们的欢迎。With the development of computer technology and the advancement of multimedia technology, two-dimensional animation can no longer meet people's visual needs. Three-dimensional animation is more and more popular because of its strong sense of space and realism.
现有的三维动画生成的方法为了生成与人或动物运动匹配的三维动画,需要在人或动物身体上佩戴传感器,通过拍摄传感器,获得人或动物动作过程,以此捕捉传感器的轨迹,然后添加到三维动画的模型用传统的三维动画制作技术来制作表情、骨骼等生成三维动画,此种方法需要佩戴传感器,复杂度高。Existing 3D animation generation methods In order to generate 3D animations that match human or animal movements, it is necessary to wear sensors on the human or animal body, capture the sensor, obtain the motion of the human or animal, capture the trajectory of the sensor, and then add Models to 3D animation use traditional 3D animation techniques to create expressions, bones, etc. to generate 3D animations. This method requires sensors and is highly complex.
【发明内容】[Summary of the Invention]
基于此,有必要针对上述技术问题,提供一种三维动画生成的方法,提高三维动画生成的便利性。Based on this, it is necessary to provide a method for generating a three-dimensional animation for the above technical problems, and to improve the convenience of three-dimensional animation generation.
一种三维动画生成的方法,其特征在于,所述方法包括:A method for generating a three-dimensional animation, the method comprising:
获取第一主体深度图像,获取与所述第一主体深度图像对应的预先建立的第一三维模型,所述主体为人体或具有骨骼的动物;Obtaining a first body depth image, and acquiring a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;
获取所述第一主体深度图像匹配的第一特征点,将所述第一特征点映射至第一三维模型得到对应的模型第一特征点;Obtaining a first feature point that is matched by the first body depth image, and mapping the first feature point to a first three-dimensional model to obtain a corresponding first feature point of the model;
获取所述模型第一特征点对皮肤点的影响权重信息;Obtaining weight information of the first feature point of the model on the skin point;
根据所述第一主体深度图像获取所述第一特征点的运动轨迹,所述第一主体深度图像为RGBD图像中的深度图像,所述RGBD图像还包括对应的彩色图像,所述第一三维动画为彩色的三维动画,根据所述运动轨迹和影响权重信息生成所述第一三维模型对应的第一三维动画;Obtaining a motion trajectory of the first feature point according to the first body depth image, the first body depth image is a depth image in an RGBD image, and the RGBD image further includes a corresponding color image, the first three-dimensional The animation is a colored three-dimensional animation, and the first three-dimensional animation corresponding to the first three-dimensional model is generated according to the motion trajectory and the influence weight information;
获取配饰模型,获取所述模型第一特征点对配饰模型的配饰影响权重信息; Obtaining an accessory model, and acquiring weight information of the accessory influence attribute of the first feature point of the model on the accessory model;
根据模型第一特征点的位置信息和所述配饰影响权重信息改变所述配饰模型的形态;Changing a form of the accessory model according to position information of the first feature point of the model and the accessory influence weight information;
将所述改变后的配饰模型佩戴至所述第一三维模型对应的位置。The changed accessory model is worn to a position corresponding to the first three-dimensional model.
在其中一个实施例中,所述获取第一主体深度图像的步骤之前,还包括:In one embodiment, before the step of acquiring the first body depth image, the method further includes:
获取不同形态的主体深度图像,对所述不同形态的主体深度图像建立不同的三维模型;Obtaining a body depth image of different forms, and establishing different three-dimensional models for the body depth images of the different shapes;
设置所述不同形态的主体深度图像对应的特征点,将所述特征点映射至三维模型得到对应的模型特征点;And setting feature points corresponding to the different depths of the body depth image, and mapping the feature points to the three-dimensional model to obtain corresponding model feature points;
根据所述模型特征点建立所述三维模型的三维动画骨骼;Establishing a three-dimensional animated skeleton of the three-dimensional model according to the model feature point;
根据皮肤点位置与所述三维动画骨骼的位置关系确定所述模型特征点对皮肤点的影响权重信息。The weight information of the influence of the model feature points on the skin points is determined according to the positional relationship between the skin point position and the three-dimensional animated bone.
在其中一个实施例中,所述根据皮肤点位置与所述三维动画骨骼的位置关系确定所述模型特征点对皮肤点的影响权重信息的步骤包括:In one embodiment, the step of determining the influence weight information of the model feature point on the skin point according to the positional relationship between the skin point position and the three-dimensional animated bone comprises:
根据主体骨骼特征确定所述三维动画骨骼的影响范围;Determining the influence range of the three-dimensional animated bone according to the skeleton characteristics of the subject;
根据主体骨骼特征确定模型特征点在所述影响范围内对不同位置的皮肤点影响的权重系数,其中在确定权重系数时模型特征点对皮肤点的影响大小与两者之间的距离成反比。The weight coefficient of the influence of the model feature points on the skin points at different positions within the influence range is determined according to the body skeleton feature, wherein the influence of the model feature points on the skin points is inversely proportional to the distance between the two.
在其中一个实施例中,所述根据所述运动轨迹和影响权重信息生成所述第一三维模型对应的第一三维动画的步骤包括:In one embodiment, the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
将所述运动轨迹上的特征点根据深度信息映射到第一三维模型得到模型特征点的空间三维坐标;Mapping the feature points on the motion trajectory according to the depth information to the first three-dimensional model to obtain spatial three-dimensional coordinates of the model feature points;
根据所述影响权重信息获取所述模型特征点的第一影响范围;Acquiring, according to the impact weight information, a first influence range of the model feature point;
获取所述第一影响范围内的第一皮肤点,根据第一皮肤点的原始空间三维坐标和模型特征点的空间三维坐标计算第一皮肤点与模型特征点的空间位置关系;Obtaining a first skin point in the first influence range, and calculating a spatial position relationship between the first skin point and the model feature point according to the original space three-dimensional coordinates of the first skin point and the spatial three-dimensional coordinates of the model feature point;
根据所述空间位置关系得到第一皮肤点的权重系数;Obtaining a weight coefficient of the first skin point according to the spatial position relationship;
根据所述权重系数计算所述第一皮肤点的更新空间三维坐标,将所述第一皮肤点由原始空间三维坐标移动至所述更新空间三维坐标。 Calculating an updated spatial three-dimensional coordinate of the first skin point according to the weight coefficient, and moving the first skin point from the original space three-dimensional coordinate to the update space three-dimensional coordinate.
在其中一个实施例中,所述根据所述运动轨迹和影响权重信息生成所述第一三维模型对应的第一三维动画的步骤包括:In one embodiment, the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
判断待生成的三维动画的皮肤点是否为摄像头采集的深度图像上对应位置的皮肤点,如果是,则直接根据所述深度图像生成对应位置的第一三维动画的皮肤点,否则根据所述运动轨迹和影响权重信息生成所述第一三维模型对应的第一三维动画的其它皮肤点。Determining whether the skin point of the three-dimensional animation to be generated is a skin point of a corresponding position on the depth image acquired by the camera, and if so, directly generating a skin point of the first three-dimensional animation corresponding to the position according to the depth image, otherwise according to the motion The trajectory and influence weight information generates other skin points of the first three-dimensional animation corresponding to the first three-dimensional model.
一种三维动画生成的方法,所述方法包括:A method for generating a three-dimensional animation, the method comprising:
获取第一主体深度图像,获取与所述第一主体深度图像对应的预先建立的第一三维模型,所述主体为人体或具有骨骼的动物;Obtaining a first body depth image, and acquiring a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;
获取所述第一主体深度图像匹配的第一特征点,将所述第一特征点映射至第一三维模型得到对应的模型第一特征点;Obtaining a first feature point that is matched by the first body depth image, and mapping the first feature point to a first three-dimensional model to obtain a corresponding first feature point of the model;
获取所述模型第一特征点对皮肤点的影响权重信息;Obtaining weight information of the first feature point of the model on the skin point;
根据所述第一主体深度图像获取所述第一特征点的运动轨迹,根据所述运动轨迹和影响权重信息生成所述第一三维模型对应的第一三维动画。Obtaining a motion trajectory of the first feature point according to the first body depth image, and generating a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
在其中一个实施例中,所述获取第一主体深度图像的步骤之前,还包括:In one embodiment, before the step of acquiring the first body depth image, the method further includes:
获取不同形态的主体深度图像,对所述不同形态的主体深度图像建立不同的三维模型;Obtaining a body depth image of different forms, and establishing different three-dimensional models for the body depth images of the different shapes;
设置所述不同形态的主体深度图像对应的特征点,将所述特征点映射至三维模型得到对应的模型特征点;And setting feature points corresponding to the different depths of the body depth image, and mapping the feature points to the three-dimensional model to obtain corresponding model feature points;
根据所述模型特征点建立所述三维模型的三维动画骨骼;Establishing a three-dimensional animated skeleton of the three-dimensional model according to the model feature point;
根据皮肤点位置与所述三维动画骨骼的位置关系确定所述模型特征点对皮肤点的影响权重信息。The weight information of the influence of the model feature points on the skin points is determined according to the positional relationship between the skin point position and the three-dimensional animated bone.
在其中一个实施例中,所述根据皮肤点位置与所述三维动画骨骼的位置关系确定所述模型特征点对皮肤点的影响权重信息的步骤包括:In one embodiment, the step of determining the influence weight information of the model feature point on the skin point according to the positional relationship between the skin point position and the three-dimensional animated bone comprises:
根据主体骨骼特征确定所述三维动画骨骼的影响范围;Determining the influence range of the three-dimensional animated bone according to the skeleton characteristics of the subject;
根据主体骨骼特征确定模型特征点在所述影响范围内对不同位置的皮肤点影响的权重系数,其中在确定权重系数时模型特征点对皮肤点的影响大小与两者之间的距离成反比。 The weight coefficient of the influence of the model feature points on the skin points at different positions within the influence range is determined according to the body skeleton feature, wherein the influence of the model feature points on the skin points is inversely proportional to the distance between the two.
In one embodiment, the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:

mapping the feature points on the motion trajectory to the first three-dimensional model according to the depth information to obtain spatial three-dimensional coordinates of the model feature points;

acquiring a first influence range of a model feature point according to the influence weight information;

acquiring a first skin point within the first influence range, and calculating the spatial positional relationship between the first skin point and the model feature point from the original spatial three-dimensional coordinates of the first skin point and the spatial three-dimensional coordinates of the model feature point;

obtaining a weight coefficient of the first skin point according to the spatial positional relationship;

calculating updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and moving the first skin point from its original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
In one embodiment, the method further comprises:

acquiring an accessory model, and acquiring accessory influence weight information of the model first feature point on the accessory model;

changing the form of the accessory model according to the position information of the model first feature point and the accessory influence weight information;

fitting the changed accessory model onto the corresponding position of the first three-dimensional model.
In one embodiment, the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:

determining whether a skin point of the three-dimensional animation to be generated is a skin point at the corresponding position in the depth image captured by the camera; if so, generating the skin point of the first three-dimensional animation at that position directly from the depth image; otherwise, generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
In one embodiment, the first body depth image is the depth image of an RGBD image, the RGBD image further comprises a corresponding color image, and the first three-dimensional animation is a colored three-dimensional animation.
An apparatus for generating a three-dimensional animation, the apparatus comprising:

a depth image and model acquisition module, configured to acquire a first body depth image and acquire a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;

a feature point and weight acquisition module, configured to acquire a first feature point matched to the first body depth image, map the first feature point to the first three-dimensional model to obtain a corresponding model first feature point, and acquire influence weight information of the model first feature point on skin points;

a three-dimensional animation generation module, configured to acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
In one embodiment, the apparatus further comprises:

a pre-processing module, configured to acquire body depth images of different forms, establish a different three-dimensional model for each of them, set feature points corresponding to the body depth images of different forms, map the feature points to the three-dimensional models to obtain corresponding model feature points, establish the three-dimensional animated skeletons of the three-dimensional models according to the model feature points, and determine influence weight information of the model feature points on skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeletons.

In one embodiment, the pre-processing module is further configured to determine the influence range of the three-dimensional animated bones according to the skeletal characteristics of the body, and to determine, according to those skeletal characteristics, the weight coefficients with which a model feature point influences skin points at different positions within the influence range, where the magnitude of a model feature point's influence on a skin point is inversely proportional to the distance between the two.
In one embodiment, the three-dimensional animation generation module comprises:

a feature point coordinate unit, configured to map the feature points on the motion trajectory to the first three-dimensional model according to the depth information to obtain the spatial three-dimensional coordinates of the model feature points;

a spatial relationship calculation unit, configured to acquire the first influence range of a model feature point according to the influence weight information, acquire the first skin points within the first influence range, and calculate the spatial positional relationship between each first skin point and the model feature point from the original spatial three-dimensional coordinates of the skin point and the spatial three-dimensional coordinates of the feature point;

an update unit, configured to obtain the weight coefficient of the first skin point according to the spatial positional relationship, calculate the updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and move the first skin point from its original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
In one embodiment, the apparatus further comprises:

an accessory module, configured to acquire an accessory model, acquire accessory influence weight information of the model first feature point on the accessory model, change the form of the accessory model according to the position information of the model first feature point and the accessory influence weight information, and fit the changed accessory model onto the corresponding position of the first three-dimensional model.
In one embodiment, the three-dimensional animation generation module comprises:

a determination unit, configured to determine whether a skin point of the three-dimensional animation to be generated is a skin point at the corresponding position in the depth image captured by the camera, and to invoke the first generation unit if so and the second generation unit otherwise;

a first generation unit, configured to generate the skin point of the first three-dimensional animation at that position directly from the depth image;

a second generation unit, configured to generate the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.

In one embodiment, the first body depth image is the depth image of an RGBD image, the RGBD image further comprises a corresponding color image, and the first three-dimensional animation is a colored three-dimensional animation.
In the above method and apparatus for generating a three-dimensional animation, a first body depth image is acquired, the body being a human body or an animal having bones; a pre-established first three-dimensional model corresponding to the first body depth image is acquired; a first feature point matched to the first body depth image is acquired and mapped to the first three-dimensional model to obtain a corresponding model first feature point; influence weight information of the model first feature point on skin points is acquired; a motion trajectory of the first feature point is acquired according to the first body depth image; and a first three-dimensional animation corresponding to the first three-dimensional model is generated according to the motion trajectory and the influence weight information. Because the depth image carries depth information, that is, three-dimensional spatial information, the motion trajectory of the first feature point obtained from the depth image is a three-dimensional trajectory, and the first three-dimensional animation corresponding to the first three-dimensional model can be generated automatically from that trajectory and the influence weight information of the model first feature point on the skin points. No wearable sensors are needed to capture three-dimensional position information, which makes the approach simple and convenient.
[Description of the Drawings]

FIG. 1 is a flowchart of a method for generating a three-dimensional animation in one embodiment;

FIG. 2 is a flowchart of establishing three-dimensional models and determining weight information in one embodiment;

FIG. 3 is a flowchart of determining weight information in one embodiment;

FIG. 4 is a flowchart of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information in one embodiment;

FIG. 5 is a flowchart of fitting an accessory model in one embodiment;

FIG. 6 is a structural block diagram of an apparatus for generating a three-dimensional animation in one embodiment;

FIG. 7 is a structural block diagram of an apparatus for generating a three-dimensional animation in another embodiment;

FIG. 8 is a structural block diagram of a three-dimensional animation generation module in one embodiment;

FIG. 9 is a structural block diagram of an apparatus for generating a three-dimensional animation in still another embodiment;

FIG. 10 is a schematic diagram of generating skin from feature points in one embodiment;

FIG. 11 is a schematic diagram of one frame of a three-dimensional animation in one embodiment;

FIG. 12 is a schematic diagram of model feature points on a three-dimensional human body model in one embodiment;

FIG. 13 is a schematic diagram of a half-body three-dimensional human skeleton established in one embodiment;

FIG. 14 is a schematic diagram of a full-body three-dimensional human skeleton established in one embodiment;

FIG. 15 is a schematic diagram of a three-dimensional dog skeleton established in one embodiment;

FIG. 16 is a schematic diagram of a three-dimensional human body animation generated in one embodiment;

FIG. 17 is a schematic diagram of a three-dimensional dog animation generated in one embodiment;

FIG. 18 is a structural block diagram of a three-dimensional animation generation module in one embodiment.
[Detailed Description]
In one embodiment, as shown in FIG. 1, a method for generating a three-dimensional animation is provided, comprising the following steps:

Step S110: acquire a first body depth image, and acquire a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones.
Specifically, the body is a human body or an animal having bones, such as a dog. The depth image may be captured by a depth camera, for example a binocular camera, or obtained by averaging different depth images of the same scene captured by multiple pairs of cameras. After the first body depth image is obtained, it is processed, for example by removing the background to segment out the body contour. The segmented contour is then analyzed: if it is a complete body, the corresponding complete three-dimensional body model is acquired; if it is only a head, the head is recognized and the corresponding three-dimensional head model is acquired; if it comprises the head and the arms, or the two forelimbs of an animal, the corresponding half-body three-dimensional model is acquired. Because the first three-dimensional model is pre-established, it can be matched quickly from the first body depth image, which improves efficiency. In one embodiment, the pre-established first three-dimensional model may be reshaped according to the first body depth image, for example by adjusting its height or the length ratios of the limbs, so that it matches the first body depth image more closely. In another embodiment, after the first body depth image is acquired, the first three-dimensional model is built from it in real time, so that the model is generated dynamically from the image and matches it even better. When the first three-dimensional model is built, a first body color image corresponding to the first body depth image may also be acquired, and the model may be refined with the chromaticity information of the color image, for example to set the skin tone, the color of an animal's fur, or the color of clothing. It will be understood that a three-dimensional animation generated from the depth image alone carries no color, whereas one generated from the depth image together with the corresponding color image is colored.
Step S120: acquire a first feature point matched to the first body depth image, map the first feature point to the first three-dimensional model to obtain a corresponding model first feature point, and acquire influence weight information of the model first feature point on skin points.
Specifically, different depth images correspond to different first feature points, whose positions and number depend on the depth image. If the body contour in the depth image is a complete body, the first feature points are the extremities of the limbs, the joints, and the facial features of the head; if the contour is only a head, the first feature points are the positions of the facial features. When acquiring the first feature points matched to the first body depth image, the color image corresponding to the depth image may be acquired and processed by image recognition, for example facial feature detection, to obtain the feature point positions on the color image; since the color image corresponds to the depth image, the feature point positions on the depth image can then be matched from their positions on the color image. The number of first feature points can be customized as needed; with a suitable number of feature points, the three-dimensional animation subsequently derived from them is more accurate. The density of feature points in key regions such as the face can be set higher, yielding more precise animation and more lifelike expressions. Note that because the depth image carries depth information, that is, the pixel values of the depth map reflect the distance of the scene, the position of a first feature point is in fact a position in three-dimensional space. Since the first body depth image corresponds to the first three-dimensional model, mapping according to this correspondence yields the positions of the model first feature points corresponding to the first feature points on the first three-dimensional model.
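For illustration, recovering the three-dimensional position of a matched feature point from its pixel location and depth value can be sketched with a standard pinhole back-projection. The Python sketch below is not part of the disclosure; the intrinsic parameters (fx, fy, cx, cy) and the millimetre depth scale are assumptions.

```python
import numpy as np

def back_project(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth value (assumed to be in
    millimetres) to a 3D point in camera coordinates (pinhole model)."""
    z = depth_mm / 1000.0            # millimetres to metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example: a mouth-corner feature detected at pixel (312, 240) with depth
# 850 mm, using hypothetical intrinsics for a 640x480 depth camera.
feature_3d = back_project(312, 240, 850.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```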
The first feature points generally comprise a plurality of feature points, which can be connected according to the body's layout to form the corresponding bones; connecting the model first feature points in the same way forms the corresponding three-dimensional animated skeleton. The first three-dimensional model and the three-dimensional animated skeleton are independent of each other: the first three-dimensional model acts as the skin and comprises the individual skin points. Once the skin is bound to the three-dimensional animated skeleton, it follows the skeleton's motion. For the skin to follow that motion realistically, the influence weight of the skeleton on every skin point of the first three-dimensional model must be set. This per-skin-point influence of the skeleton is expressed as the influence weight of the model first feature points on the skin points. The influence weight information comprises the range of skin points influenced by a model first feature point and the weight coefficient with which it influences each of them. The weight coefficient is related to the position of the skin point relative to the model first feature point, and the coefficients for skin points at different positions are generally determined in combination with the skeletal characteristics of the body.
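One way to picture this influence weight information is as a per-skin-point list of (feature point, coefficient) pairs, where a skin point with an empty list lies outside every influence range. The layout below is a hypothetical sketch, not the patent's data structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Influence:
    feature_id: int     # index of the model first feature point
    coefficient: float  # weight coefficient; larger means stronger influence

@dataclass
class SkinPoint:
    position: Tuple[float, float, float]                  # original 3D coordinates
    influences: List[Influence] = field(default_factory=list)

# A skin point near the left mouth corner, driven mostly by feature 17:
p = SkinPoint((0.03, -0.02, 0.85), [Influence(17, 0.7), Influence(18, 0.3)])
```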
Step S130: acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
Specifically, first body depth images captured at different times are acquired, and the motion trajectory of a first feature point is obtained from the changes in the coordinate position of the corresponding first feature point across the depth images at different time points. Mapping this trajectory onto the first three-dimensional model yields the motion trajectory of the model first feature point; because the trajectory is derived from depth images and therefore includes depth information, it is a trajectory in three-dimensional space, and so is the trajectory of the model first feature point. From the influence weight information of the model first feature points on the trajectory, the range of influenced skin points and their weight coefficients are determined, and the updated spatial coordinates of each skin point under the influence of the model first feature points are calculated from the weight coefficients and the trajectory, giving the updated skin; the succession of skin changes forms the first three-dimensional animation. For example, if the first body depth image captures the motion of the body's head, a three-dimensional animated expression is generated from the feature points on the trajectories corresponding to the captured expression changes and the influence weight information. When laughing, the corners of the mouth rise, and their feature points trace upward trajectories toward the left and right; from the weight coefficients of the mouth-corner feature points on skin at different positions, the changes of the skin points within the influence range are obtained, producing the effect of bulging muscles. FIG. 10 is a schematic diagram of generating the skin points at corresponding positions from the motion trajectories of the feature points and their influence weights on the skin points. The position of each skin point can be calculated according to the formula:

[Formula image PCTCN2016076742-appb-000001; the published document reproduces the formula only as an image]

where d1 denotes the starting skin point, d2 the skin point after motion, a1, b1 and c1 the starting feature points, a2, b2 and c2 the feature points after motion, α, β and λ the weight values of the respective feature points, f() the computation of a feature point's trajectory, and g() the resulting trajectory of the skin point. FIG. 11 shows one frame of a three-dimensional animation generated after the skin positions are fully determined; FIG. 16 shows a generated three-dimensional human body animation, and FIG. 17 a generated three-dimensional dog animation.
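Since the exact formula survives only as an image, it cannot be transcribed here. One common realization consistent with the surrounding description, in which each controlling feature point contributes its own displacement f() scaled by its weight and g() applies the blend to the skin point, is sketched below; the linear blend is an assumption, not the patent's verbatim formula.

```python
import numpy as np

def update_skin_point(d1, starts, ends, weights):
    """Move skin point d1 by the weighted sum of the displacements of its
    controlling feature points -- one plausible reading of the g(f(...))
    composition above.

    starts, ends : (k, 3) feature positions before/after motion
                   (e.g. a1, b1, c1 and a2, b2, c2 for k = 3)
    weights      : k weight values (e.g. alpha, beta, lambda)
    """
    starts, ends = np.asarray(starts, float), np.asarray(ends, float)
    w = np.asarray(weights, float).reshape(-1, 1)
    displacement = ((ends - starts) * w).sum(axis=0)   # blended f() trajectories
    return np.asarray(d1, float) + displacement        # g(): apply to the skin point

# Example with three controlling feature points weighted 0.5 / 0.3 / 0.2:
d2 = update_skin_point([0.0, 0.0, 1.0],
                       starts=[[0.10, 0.00, 1.0], [-0.10, 0.00, 1.0], [0.00, 0.10, 1.0]],
                       ends=[[0.12, 0.02, 1.0], [-0.08, 0.02, 1.0], [0.00, 0.12, 1.0]],
                       weights=[0.5, 0.3, 0.2])
```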
In this embodiment, a first body depth image is acquired; a pre-established first three-dimensional model corresponding to it is acquired; a first feature point matched to the first body depth image is acquired and mapped to the first three-dimensional model to obtain a corresponding model first feature point; influence weight information of the model first feature point on skin points is acquired; the motion trajectory of the first feature point is obtained from the first body depth image; and the first three-dimensional animation corresponding to the first three-dimensional model is generated from the trajectory and the influence weight information. Because the depth image carries depth information, that is, three-dimensional spatial information, the trajectory obtained from it is a three-dimensional trajectory, and the first three-dimensional animation can be generated automatically from the trajectory and the influence weights of the model first feature point on the skin points, without wearing sensors to capture three-dimensional positions, which is simple and convenient.
In one embodiment, as shown in FIG. 2, before step S110 the method further comprises:

Step S210: acquire body depth images of different forms, and establish a different three-dimensional model for each of the body depth images of different forms.
Specifically, the body depth images of different forms are processed, for example by removing the background to segment out the body contour. The segmented contour is then analyzed: if it is a complete body, a corresponding complete three-dimensional body model is established; if it is only the upper body without arms, the head is recognized and a corresponding three-dimensional head model is established; if it comprises the head and the arms, a corresponding half-body three-dimensional model is established. When a three-dimensional model is established, the body color image corresponding to the body depth image may also be acquired, and the model may be refined with the chromaticity information of the color image, for example to set skin tone or clothing color. Establishing in advance the three-dimensional models corresponding to body depth images of different forms means that, when a three-dimensional animation is generated, a matching pre-built model can be looked up directly from the body depth image captured in real time, which speeds up animation generation.
Step S220: set feature points corresponding to the body depth images of different forms, and map the feature points to the three-dimensional models to obtain corresponding model feature points.
Specifically, the positions and number of the feature points corresponding to the body depth images of different forms can be customized: if the body contour in the depth image is a complete body, the feature points are the extremities of the limbs, the joints, and the facial features of the head; if the contour is a head, the feature points are the positions of the facial features. Feature point positions may be marked manually or recognized automatically. For automatic recognition, the color image corresponding to the body depth image may be acquired and processed by image recognition, for example facial feature detection, to obtain the feature point positions on the color image; since the color image corresponds to the depth image, the feature point positions on the depth image can be matched from their positions on the color image. Because the body depth image corresponds to the three-dimensional model, mapping according to this correspondence yields the positions of the model feature points on the three-dimensional model. FIG. 12 is a schematic diagram of the model feature points on a three-dimensional human body model; it can be seen that the feature points of the head and hands are denser, allowing more precise three-dimensional animation of facial expressions and hand motion. FIG. 10 shows the feature points on a three-dimensional model of a dog.
Step S230: establish the three-dimensional animated skeleton of the three-dimensional model according to the model feature points, and determine the influence weight information of the model feature points on the skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeleton.
Specifically, the influence range of a model feature point on skin points can be customized as needed; for example, skin points whose distance to the feature point is below a preset threshold fall within its influence range. The magnitude of the weight coefficient can be defined by a custom algorithm based on the positional relationship between the skin point and the three-dimensional animated skeleton, for example by computing the perpendicular distance from the skin point to a bone and determining from that distance the weight coefficients of the feature points connected by the bone. FIG. 13 is a schematic diagram of an established half-body three-dimensional human skeleton, FIG. 14 of an established full-body three-dimensional human skeleton, and FIG. 15 of an established three-dimensional dog skeleton.
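The perpendicular-distance rule can be sketched as follows. Treating a bone as a line segment between two joint feature points and clamping to its ends, as well as the radius parameter, are implementation assumptions.

```python
import numpy as np

def point_to_bone_distance(p, j0, j1):
    """Distance from skin point p to the bone segment joining joints j0, j1."""
    p, j0, j1 = (np.asarray(v, float) for v in (p, j0, j1))
    axis = j1 - j0
    t = np.clip(np.dot(p - j0, axis) / np.dot(axis, axis), 0.0, 1.0)
    return float(np.linalg.norm(p - (j0 + t * axis)))

def in_influence_range(p, j0, j1, radius):
    """A skin point falls in a bone's influence range when its distance is
    below a preset threshold (the radius here stands in for that threshold)."""
    return point_to_bone_distance(p, j0, j1) < radius
```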
In one embodiment, as shown in FIG. 3, step S230 comprises:

Step S231: determine the influence range of the three-dimensional animated bones according to the skeletal characteristics of the body.
Specifically, according to the skeletal characteristics of the body, the model feature points are connected following the body's layout to form the corresponding three-dimensional animated bones, and the range of skin points influenced by each bone is determined from those skeletal characteristics. Since a three-dimensional animated bone is formed by connecting model feature points, the range of skin points influenced by a model feature point on a bone is the range of skin points influenced by that bone.
Step S232: determine, according to the skeletal characteristics of the body, the weight coefficients with which the model feature points influence skin points at different positions within the influence range, where the magnitude of a model feature point's influence on a skin point is inversely proportional to the distance between the two.
Specifically, the weight coefficient of a model feature point's influence on a skin point is determined from the distance between them in combination with the skeletal characteristics of the body, since muscles stretch differently at different locations. For example, if the three-dimensional animated bone is an arm, the bone's motion has little effect on muscle stretching, so a small body-characteristic weight coefficient a1 is set; a distance weight coefficient a2 is then obtained from the distance between the skin point and the feature point on the bone, and a1 is multiplied by a2 to give the final weight coefficient. Expressing the final weight coefficient as a function of the distance between the skin point and the feature point allows the coefficient to be obtained directly from that distance when the three-dimensional animation is subsequently generated.
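The a1 × a2 combination can be written directly as a function of distance, as the paragraph suggests. The reciprocal falloff form and both constants below are assumptions chosen only to make the inverse relationship concrete.

```python
def final_weight(distance, a1=0.3, falloff=0.05):
    """Final weight = body-part coefficient a1 * distance coefficient a2.
    a1 is small for stiff parts (e.g. an arm bone); a2 decays with distance
    so that influence is inversely related to it."""
    a2 = 1.0 / (1.0 + distance / falloff)
    return a1 * a2

# The same skin point is influenced less the farther it sits from the bone:
weights = [final_weight(d) for d in (0.00, 0.05, 0.20)]  # 0.30, 0.15, 0.06
```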
In one embodiment, as shown in FIG. 4, step S130 comprises:

Step S131: map the feature points on the motion trajectory to the first three-dimensional model according to the depth information to obtain the spatial three-dimensional coordinates of the model feature points.
Specifically, because the motion trajectory is derived from depth images, which include depth information, the feature points carry three-dimensional spatial information, and mapping a feature point to the first three-dimensional model therefore yields a spatial three-dimensional coordinate.
Step S132: acquire the first influence range of a model feature point according to the influence weight information, acquire the first skin points within the first influence range, and calculate the spatial positional relationship between each first skin point and the model feature point from the original spatial three-dimensional coordinates of the skin point and the spatial three-dimensional coordinates of the feature point.
Specifically, the influence weight information includes the range of skin points influenced by each feature point; the first skin points within the first influence range are acquired, while the other skin points, lying outside the influence range, are not affected by the feature point. For example, in a model with two arms, the motion of one arm does not affect the skin points on the other arm. The spatial positional relationship is calculated as required: if the obtained weight coefficient is related to the distance between the skin point and the model feature point, the point-to-point distance between them is computed directly from the spatial three-dimensional coordinates; if the weight coefficient is related to the distance between the skin point and the bone formed by different model feature points, the point-to-line distance from the skin point to that bone is computed from the spatial three-dimensional coordinates.
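Selecting the first skin points inside a feature point's influence range then reduces to a distance filter. The sketch below assumes the point-to-point case described in this paragraph.

```python
import numpy as np

def first_skin_points(skin_positions, feature_pt, radius):
    """Return the skin points inside the feature point's influence range,
    together with their distances (used later for the weight lookup)."""
    pts = np.asarray(skin_positions, float)
    dists = np.linalg.norm(pts - np.asarray(feature_pt, float), axis=1)
    mask = dists < radius
    return pts[mask], dists[mask]
```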
Step S133: obtain the weight coefficient of the first skin point according to the spatial positional relationship.
Specifically, the weight coefficient corresponding to the computed spatial distance may be taken as the weight coefficient of the first skin point. It will be understood that if several model feature points influence the first skin point, a weight coefficient is obtained for each of them from the respective spatial positional relationship, and the different coefficients are then combined by weighting to give the final weight coefficient.
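When several model feature points influence the same skin point, one standard way to combine their coefficients, consistent with the weighting described here, is to normalize them so they sum to one; the normalization itself is an assumption, as the text only says the coefficients are weighted.

```python
def combine_coefficients(raw):
    """Blend per-feature weight coefficients for one skin point by
    normalising them to sum to 1; a skin point with no influences
    keeps an empty list and is left unmoved."""
    total = sum(raw)
    return [w / total for w in raw] if total > 0 else list(raw)

print(combine_coefficients([0.30, 0.15, 0.06]))  # -> [0.588..., 0.294..., 0.117...]
```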
Step S134: calculate the updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and move the first skin point from its original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
Specifically, mapping the feature point trajectories to the first three-dimensional model yields the trajectories of the individual model feature points, and the updated spatial coordinates of the first skin point are calculated from the weight coefficient and the tendency of those trajectories. For example, if a model feature point moves upward by a distance b1 and the weight coefficient is b2, the updated spatial coordinates of the first skin point are a function of b1, b2 and the original spatial three-dimensional coordinates, and are computed from that function; the specific formula can be customized as needed. The succession of changes in skin point coordinates forms the three-dimensional animation. For a head model, for example, when the captured body depth image corresponds to a head turn, the three-dimensional animation turns with it, and when the captured image corresponds to a smiling face, the animation smiles as well.
In one embodiment, as shown in FIG. 5, the method further comprises:

Step S310: acquire an accessory model, and acquire the accessory influence weight information of the model first feature points on the accessory model.
Specifically, the target model feature points that can influence the accessory model are matched according to the type of accessory: for a hat, the target feature points are those of the head model; for glasses, they are those of the nose. Apart from the target feature points, the accessory influence weight coefficient of all other model feature points on the accessory is 0. The accessory influence weight information includes the weight coefficients of the model first feature points on the accessory model and their influence range. Only the points of the accessory within the influence range are affected by the first feature points: on a hat, for example, only the points along the brim are affected by the model first feature points, while the points at the top of the hat are not. The weight coefficients determine how strongly each model feature point affects the accessory model when the model first feature points move; for a hat, points of the hat closer to the head have larger influence weight coefficients.
Step S320: change the form of the accessory model according to the position information of the model first feature points and the accessory influence weight information, and fit the changed accessory model onto the corresponding position of the first three-dimensional model.
Specifically, the position information of the model first feature points includes the distance between two model feature points and the movement trajectories of the feature points; the form of the accessory model, such as its size and position, can be changed according to this position information and the accessory influence weight information, so that the changed accessory model matches the first three-dimensional model and can be fitted onto the corresponding position. For example, if user A has a wide head, the distance between the two model feature points of user A's head is large, so the original accessory model, such as a hat, is enlarged to match the user's head.
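Resizing an accessory from feature-point distances can be sketched as below; uniform scaling about the accessory's centroid and the single anchor point are assumptions made for the sake of the example.

```python
import numpy as np

def fit_accessory(vertices, ref_width, left_pt, right_pt, anchor):
    """Scale an accessory (e.g. a hat) by the ratio of the wearer's
    feature-point distance (e.g. head width) to the accessory's reference
    width, then place it at the anchor feature point."""
    v = np.asarray(vertices, float)
    width = np.linalg.norm(np.asarray(left_pt, float) - np.asarray(right_pt, float))
    scale = width / ref_width
    centre = v.mean(axis=0)
    return (v - centre) * scale + np.asarray(anchor, float)
```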
In one embodiment, the first three-dimensional model is a three-dimensional head model, and the first three-dimensional animation is a three-dimensional avatar animation.
Specifically, when the first three-dimensional model is a three-dimensional head model, the acquired influence weight information of the first feature points on the skin points matches the structure of the head, and the motion of the skin is derived from the trajectories of the head feature points, producing different expressions and a dynamic three-dimensional avatar animation. This can be used in video calls between users to generate, in real time, three-dimensional animations that follow the users' changing expressions, making communication more engaging.
In one embodiment, step S130 comprises: determining whether a skin point of the three-dimensional animation to be generated is a skin point at the corresponding position in the depth image captured by the camera; if so, generating the skin point of the first three-dimensional animation at that position directly from the depth image; otherwise, generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
Specifically, because the depth image carries depth information, that is, three-dimensional spatial information, the spatial position coordinates of each point in the depth image can be obtained directly from the depth information, which determines the positions of the corresponding skin points. Skin points on the side facing the camera have corresponding points in the depth image, so their three-dimensional positions can be read off directly to generate the corresponding skin points of the three-dimensional animation. The parts the camera cannot capture, facing away from it, have no corresponding points in the depth image, so those skin points must be generated from the motion trajectory and the influence weight information. Generating the animation directly from the depth information for the skin points that have it speeds up the generation of the three-dimensional animation.
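This visible/occluded split can be expressed as a per-point choice between a direct depth read-out and the weighted update; every name in the sketch below is illustrative rather than the patent's API.

```python
def skin_point_position(u, v, depth_mm, intrinsics, fallback):
    """Use the measured depth for a camera-facing skin point; fall back to
    the trajectory/weight computation (passed in as `fallback`) when the
    pixel carries no valid measurement (occluded or out of view)."""
    fx, fy, cx, cy = intrinsics
    if depth_mm is not None and depth_mm > 0:
        z = depth_mm / 1000.0
        return ((u - cx) * z / fx, (v - cy) * z / fy, z)
    return fallback()
```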
In one embodiment, the first body depth image is the depth image of an RGBD image, the RGBD image further comprises a corresponding color image, and the first three-dimensional animation is a colored three-dimensional animation.
Specifically, an RGBD image consists of a synchronized depth image and color image captured by the camera. Because the captured information includes a color image whose points correspond one-to-one with those of the depth image, the generated three-dimensional animation is colored.
In one embodiment, as shown in FIG. 6, an apparatus for generating a three-dimensional animation is provided, comprising:

a depth image and model acquisition module 410, configured to acquire a first body depth image and acquire a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;

a feature point and weight acquisition module 420, configured to acquire a first feature point matched to the first body depth image, map the first feature point to the first three-dimensional model to obtain a corresponding model first feature point, and acquire influence weight information of the model first feature point on skin points;

a three-dimensional animation generation module 430, configured to acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
In one embodiment, as shown in FIG. 7, the apparatus further comprises:

a pre-processing module 440, configured to acquire body depth images of different forms, establish a different three-dimensional model for each of them, set feature points corresponding to the body depth images of different forms, map the feature points to the three-dimensional models to obtain corresponding model feature points, establish the three-dimensional animated skeletons of the three-dimensional models according to the model feature points, and determine the influence weight information of the model feature points on the skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeletons.

In one embodiment, the pre-processing module 440 is further configured to determine the influence range of the three-dimensional animated bones according to the skeletal characteristics of the body, and to determine, according to those characteristics, the weight coefficients with which the model feature points influence skin points at different positions within the influence range, where the magnitude of a model feature point's influence on a skin point is inversely proportional to the distance between the two.
In one embodiment, as shown in FIG. 8, the three-dimensional animation generation module 430 comprises:

a feature point coordinate unit 431, configured to map the feature points on the motion trajectory to the first three-dimensional model according to the depth information to obtain the spatial three-dimensional coordinates of the model feature points;

a spatial relationship calculation unit 432, configured to acquire the first influence range of a model feature point according to the influence weight information, acquire the first skin points within the first influence range, and calculate the spatial positional relationship between each first skin point and the model feature point from the original spatial three-dimensional coordinates of the skin point and the spatial three-dimensional coordinates of the feature point;

an update unit 433, configured to obtain the weight coefficient of the first skin point according to the spatial positional relationship, calculate the updated spatial three-dimensional coordinates of the first skin point according to the weight coefficient, and move the first skin point from its original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
In one embodiment, as shown in FIG. 9, the apparatus further comprises:

an accessory module 450, configured to acquire an accessory model, acquire the accessory influence weight information of the model first feature points on the accessory model, change the form of the accessory model according to the position information of the model first feature points and the accessory influence weight information, and fit the changed accessory model onto the corresponding position of the first three-dimensional model.
In one embodiment, the first three-dimensional model is a three-dimensional head model, and the first three-dimensional animation is a three-dimensional avatar animation.
In one embodiment, as shown in FIG. 18, the three-dimensional animation generation module 430 comprises:

a determination unit 434, configured to determine whether a skin point of the three-dimensional animation to be generated is a skin point at the corresponding position in the depth image captured by the camera, and to invoke the first generation unit 435 if so and the second generation unit 436 otherwise;

a first generation unit 435, configured to generate the skin point of the first three-dimensional animation at that position directly from the depth image;

a second generation unit 436, configured to generate the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
In one embodiment, the first body depth image is the depth image of an RGBD image, the RGBD image further comprises a corresponding color image, and the first three-dimensional animation is a colored three-dimensional animation.
A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium; in the embodiments of the present invention, it may be stored in a storage medium of a computer system and executed by at least one processor of that system to implement the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these features have been described; nevertheless, any combination of them that involves no contradiction should be considered within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the invention. The protection scope of this patent shall therefore be defined by the appended claims.

Claims (19)

  1. A method for generating a three-dimensional animation, characterized in that the method comprises:
    acquiring a first body depth image, and acquiring a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;
    acquiring a first feature point matched to the first body depth image, and mapping the first feature point to the first three-dimensional model to obtain a corresponding model first feature point;
    acquiring influence weight information of the model first feature point on skin points;
    acquiring a motion trajectory of the first feature point according to the first body depth image, the first body depth image being a depth image in an RGBD image, the RGBD image further comprising a corresponding color image, and the first three-dimensional animation being a colored three-dimensional animation, and generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information;
    acquiring an accessory model, and acquiring accessory influence weight information of the model first feature point on the accessory model;
    changing the form of the accessory model according to position information of the model first feature point and the accessory influence weight information; and
    wearing the changed accessory model at the corresponding position on the first three-dimensional model.
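By way of non-limiting illustration, the accessory deformation step of claim 1 (which recurs in claims 10 and 17) can be sketched as a weighted blend of feature-point displacements; the binding-weight matrix and all names below are assumptions rather than the claimed implementation:

```python
import numpy as np

def wear_accessory(acc_vertices, feat_rest, feat_now, acc_weights):
    """Deform an accessory so each vertex follows the displacement of the
    model first feature points it is bound to. acc_weights has shape
    (n_vertices, n_features), with each row summing to 1."""
    displacement = feat_now - feat_rest              # (n_features, 3) motion
    return acc_vertices + acc_weights @ displacement  # (n_vertices, 3)
```

Rigid placement at the corresponding position on the model would then reduce to a translation derived from an anchor feature point.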
  2. The method according to claim 1, characterized in that, before the step of acquiring the first body depth image, the method further comprises:
    acquiring body depth images of different forms, and establishing different three-dimensional models for the body depth images of the different forms;
    setting feature points corresponding to the body depth images of the different forms, and mapping the feature points to the three-dimensional models to obtain corresponding model feature points;
    establishing a three-dimensional animated skeleton of the three-dimensional model according to the model feature points; and
    determining influence weight information of the model feature points on skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeleton.
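The bone structure itself is not spelled out in claim 2; one plausible reading, shown for illustration only, is a fixed connectivity table over the model feature points (the indices below are invented):

```python
# Hypothetical connectivity over model feature points, e.g.
# 0: shoulder, 1: elbow, 2: wrist (indices are illustrative only).
BONES = [(0, 1), (1, 2)]

def build_skeleton(model_feature_points, bones=BONES):
    """Return each bone of the animated skeleton as the segment between
    two model feature points (a pair of 3-D coordinates)."""
    return [(model_feature_points[a], model_feature_points[b]) for a, b in bones]
```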
  3. The method according to claim 2, characterized in that the step of determining the influence weight information of the model feature points on the skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeleton comprises:
    determining the influence range of the three-dimensional animated skeleton according to body skeleton features; and
    determining, according to the body skeleton features, the weight coefficients with which the model feature points influence skin points at different positions within the influence range, wherein, when the weight coefficients are determined, the magnitude of a model feature point's influence on a skin point is inversely proportional to the distance between the two.
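Claim 3 (and its counterparts, claims 8 and 15) fixes only two properties: a cutoff range derived from the body skeleton features and an influence inversely proportional to distance. A minimal sketch satisfying both follows; the normalization and the epsilon guard are added assumptions:

```python
import numpy as np

def influence_weights(feature_points, skin_point, radius):
    """Inverse-distance weight coefficients, zero outside the influence
    range, normalized over the feature points that do reach the skin point.
    The cutoff radius would come from the body skeleton features."""
    d = np.linalg.norm(feature_points - skin_point, axis=1)   # (n_features,)
    w = np.where(d < radius, 1.0 / np.maximum(d, 1e-6), 0.0)  # inverse distance
    total = w.sum()
    return w / total if total > 0 else w
```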
  4. The method according to claim 1, characterized in that the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
    mapping the feature points on the motion trajectory to the first three-dimensional model according to depth information to obtain spatial three-dimensional coordinates of the model feature points;
    acquiring a first influence range of the model feature points according to the influence weight information;
    acquiring first skin points within the first influence range, and calculating the spatial positional relationship between the first skin points and the model feature points according to the original spatial three-dimensional coordinates of the first skin points and the spatial three-dimensional coordinates of the model feature points;
    obtaining weight coefficients of the first skin points according to the spatial positional relationship; and
    calculating updated spatial three-dimensional coordinates of the first skin points according to the weight coefficients, and moving the first skin points from the original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
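One way to read the final two steps of claim 4 (and of claims 9 and 16), shown for illustration only: the updated coordinate is the original coordinate plus the weight-blended displacement of the influencing model feature points. The linear-blend form is an assumption, not mandated by the claim:

```python
import numpy as np

def update_skin_point(skin_rest, feat_rest, feat_now, weights):
    """Move a first skin point from its original spatial coordinates to its
    updated coordinates using the weight coefficients of claim 4
    (linear-blend assumption). weights has shape (n_features,)."""
    offset = (weights[:, None] * (feat_now - feat_rest)).sum(axis=0)
    return skin_rest + offset
```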
  5. The method according to claim 1, characterized in that the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
    determining whether a skin point of the three-dimensional animation to be generated corresponds to a skin point at the matching position on the depth image captured by the camera; if so, generating the skin point of the first three-dimensional animation at the corresponding position directly from the depth image, and otherwise generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  6. A method for generating a three-dimensional animation, the method comprising:
    acquiring a first body depth image, and acquiring a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;
    acquiring a first feature point matched to the first body depth image, and mapping the first feature point to the first three-dimensional model to obtain a corresponding model first feature point;
    acquiring influence weight information of the model first feature point on skin points; and
    acquiring a motion trajectory of the first feature point according to the first body depth image, and generating a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  7. The method according to claim 6, characterized in that, before the step of acquiring the first body depth image, the method further comprises:
    acquiring body depth images of different forms, and establishing different three-dimensional models for the body depth images of the different forms;
    setting feature points corresponding to the body depth images of the different forms, and mapping the feature points to the three-dimensional models to obtain corresponding model feature points;
    establishing a three-dimensional animated skeleton of the three-dimensional model according to the model feature points; and
    determining influence weight information of the model feature points on skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeleton.
  8. The method according to claim 7, characterized in that the step of determining the influence weight information of the model feature points on the skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeleton comprises:
    determining the influence range of the three-dimensional animated skeleton according to body skeleton features; and
    determining, according to the body skeleton features, the weight coefficients with which the model feature points influence skin points at different positions within the influence range, wherein, when the weight coefficients are determined, the magnitude of a model feature point's influence on a skin point is inversely proportional to the distance between the two.
  9. The method according to claim 6, characterized in that the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
    mapping the feature points on the motion trajectory to the first three-dimensional model according to depth information to obtain spatial three-dimensional coordinates of the model feature points;
    acquiring a first influence range of the model feature points according to the influence weight information;
    acquiring first skin points within the first influence range, and calculating the spatial positional relationship between the first skin points and the model feature points according to the original spatial three-dimensional coordinates of the first skin points and the spatial three-dimensional coordinates of the model feature points;
    obtaining weight coefficients of the first skin points according to the spatial positional relationship; and
    calculating updated spatial three-dimensional coordinates of the first skin points according to the weight coefficients, and moving the first skin points from the original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
  10. The method according to claim 6, characterized in that the method further comprises:
    acquiring an accessory model, and acquiring accessory influence weight information of the model first feature point on the accessory model;
    changing the form of the accessory model according to position information of the model first feature point and the accessory influence weight information; and
    wearing the changed accessory model at the corresponding position on the first three-dimensional model.
  11. The method according to claim 6, characterized in that the step of generating the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information comprises:
    determining whether a skin point of the three-dimensional animation to be generated corresponds to a skin point at the matching position on the depth image captured by the camera; if so, generating the skin point of the first three-dimensional animation at the corresponding position directly from the depth image, and otherwise generating the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  12. The method according to claim 6, characterized in that the first body depth image is a depth image in an RGBD image, the RGBD image further comprises a corresponding color image, and the first three-dimensional animation is a colored three-dimensional animation.
  13. An apparatus for generating a three-dimensional animation, characterized in that the apparatus comprises:
    a depth image and model acquisition module, configured to acquire a first body depth image and acquire a pre-established first three-dimensional model corresponding to the first body depth image, the body being a human body or an animal having bones;
    a feature point and weight acquisition module, configured to acquire a first feature point matched to the first body depth image, map the first feature point to the first three-dimensional model to obtain a corresponding model first feature point, and acquire influence weight information of the model first feature point on skin points; and
    a three-dimensional animation generating module, configured to acquire a motion trajectory of the first feature point according to the first body depth image, and generate a first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  14. The apparatus according to claim 13, characterized in that the apparatus further comprises:
    a pre-processing module, configured to acquire body depth images of different forms, establish different three-dimensional models for the body depth images of the different forms, set feature points corresponding to the body depth images of the different forms, map the feature points to the three-dimensional models to obtain corresponding model feature points, establish a three-dimensional animated skeleton of the three-dimensional model according to the model feature points, and determine influence weight information of the model feature points on skin points according to the positional relationship between the skin point positions and the three-dimensional animated skeleton.
  15. The apparatus according to claim 14, characterized in that the pre-processing module is further configured to determine the influence range of the three-dimensional animated skeleton according to body skeleton features, and to determine, according to the body skeleton features, the weight coefficients with which the model feature points influence skin points at different positions within the influence range, wherein, when the weight coefficients are determined, the magnitude of a model feature point's influence on a skin point is inversely proportional to the distance between the two.
  16. The apparatus according to claim 13, characterized in that the three-dimensional animation generating module comprises:
    a feature point coordinate unit, configured to map the feature points on the motion trajectory to the first three-dimensional model according to depth information to obtain spatial three-dimensional coordinates of the model feature points;
    a spatial relationship calculation unit, configured to acquire a first influence range of the model feature points according to the influence weight information, acquire first skin points within the first influence range, and calculate the spatial positional relationship between the first skin points and the model feature points according to the original spatial three-dimensional coordinates of the first skin points and the spatial three-dimensional coordinates of the model feature points; and
    an update unit, configured to obtain weight coefficients of the first skin points according to the spatial positional relationship, calculate updated spatial three-dimensional coordinates of the first skin points according to the weight coefficients, and move the first skin points from the original spatial three-dimensional coordinates to the updated spatial three-dimensional coordinates.
  17. The apparatus according to claim 13, characterized in that the apparatus further comprises:
    an accessory module, configured to acquire an accessory model, acquire accessory influence weight information of the model first feature point on the accessory model, change the form of the accessory model according to position information of the model first feature point and the accessory influence weight information, and wear the changed accessory model at the corresponding position on the first three-dimensional model.
  18. The apparatus according to claim 13, characterized in that the three-dimensional animation generating module comprises:
    a determining unit, configured to determine whether a skin point of the three-dimensional animation to be generated corresponds to a skin point at the matching position on the depth image captured by the camera; if so, processing passes to the first generating unit, and otherwise to the second generating unit;
    a first generating unit, configured to generate the skin point of the first three-dimensional animation at the corresponding position directly from the depth image; and
    a second generating unit, configured to generate the other skin points of the first three-dimensional animation corresponding to the first three-dimensional model according to the motion trajectory and the influence weight information.
  19. The apparatus according to claim 13, characterized in that the first body depth image is a depth image in an RGBD image, the RGBD image further comprises a corresponding color image, and the first three-dimensional animation is a colored three-dimensional animation.
PCT/CN2016/076742 2015-12-01 2016-03-18 Method and apparatus for generating three-dimensional animation WO2017092196A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510876008.9 2015-12-01
CN201510876008.9A CN105513114B (en) 2015-12-01 2015-12-01 The method and apparatus of three-dimensional animation generation

Publications (1)

Publication Number Publication Date
WO2017092196A1 true WO2017092196A1 (en) 2017-06-08

Family

ID=55721070

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076742 WO2017092196A1 (en) 2015-12-01 2016-03-18 Method and apparatus for generating three-dimensional animation

Country Status (2)

Country Link
CN (1) CN105513114B (en)
WO (1) WO2017092196A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765529A (en) * 2018-05-04 2018-11-06 北京比特智学科技有限公司 Video generation method and device
CN111105494A (en) * 2019-12-31 2020-05-05 长城汽车股份有限公司 Method and system for generating three-dimensional dynamic head portrait
CN111210495A (en) * 2019-12-31 2020-05-29 深圳市商汤科技有限公司 Three-dimensional model driving method, device, terminal and computer readable storage medium
CN111968169A (en) * 2020-08-19 2020-11-20 北京拙河科技有限公司 Dynamic human body three-dimensional reconstruction method, device, equipment and medium
US20210383605A1 (en) * 2020-10-30 2021-12-09 Beijing Baidu Netcom Science And Technology Co., Ltd. Driving method and apparatus of an avatar, device and medium

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023287B (en) * 2016-05-31 2019-06-18 中国科学院计算技术研究所 A kind of the interactive three-dimensional animation synthesizing method and system of data-driven
CN106611158A (en) * 2016-11-14 2017-05-03 深圳奥比中光科技有限公司 Method and equipment for obtaining human body 3D characteristic information
CN107066095B (en) * 2017-03-31 2020-09-25 联想(北京)有限公司 Information processing method and electronic equipment
CN107507269A (en) * 2017-07-31 2017-12-22 广东欧珀移动通信有限公司 Personalized three-dimensional model generating method, device and terminal device
CN109102559B (en) * 2018-08-16 2021-03-23 Oppo广东移动通信有限公司 Three-dimensional model processing method and device
CN109064551B (en) * 2018-08-17 2022-03-25 联想(北京)有限公司 Information processing method and device for electronic equipment
CN109389665B (en) * 2018-08-24 2021-10-22 先临三维科技股份有限公司 Texture obtaining method, device and equipment of three-dimensional model and storage medium
CN110312144B (en) * 2019-08-05 2022-05-24 广州方硅信息技术有限公司 Live broadcast method, device, terminal and storage medium
CN111613222A (en) * 2020-05-25 2020-09-01 广东电网有限责任公司 Transformer substation inspection system
WO2022168428A1 (en) * 2021-02-02 2022-08-11 ソニーグループ株式会社 Information processing method, information processing device, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622774A (en) * 2011-01-31 2012-08-01 微软公司 Living room movie creation
CN103679783A (en) * 2013-10-18 2014-03-26 中国科学院自动化研究所 Geometric deformation based skin deformation method for three-dimensional animated character model
CN104008557A (en) * 2014-06-23 2014-08-27 中国科学院自动化研究所 Three-dimensional matching method of garment and human body models

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622774A (en) * 2011-01-31 2012-08-01 微软公司 Living room movie creation
CN103679783A (en) * 2013-10-18 2014-03-26 中国科学院自动化研究所 Geometric deformation based skin deformation method for three-dimensional animated character model
CN104008557A (en) * 2014-06-23 2014-08-27 中国科学院自动化研究所 Three-dimensional matching method of garment and human body models

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765529A (en) * 2018-05-04 2018-11-06 北京比特智学科技有限公司 Video generation method and device
CN111105494A (en) * 2019-12-31 2020-05-05 长城汽车股份有限公司 Method and system for generating three-dimensional dynamic head portrait
CN111210495A (en) * 2019-12-31 2020-05-29 深圳市商汤科技有限公司 Three-dimensional model driving method, device, terminal and computer readable storage medium
CN111105494B (en) * 2019-12-31 2023-10-24 长城汽车股份有限公司 Three-dimensional dynamic head portrait generation method and system
CN111968169A (en) * 2020-08-19 2020-11-20 北京拙河科技有限公司 Dynamic human body three-dimensional reconstruction method, device, equipment and medium
CN111968169B (en) * 2020-08-19 2024-01-19 北京拙河科技有限公司 Dynamic human body three-dimensional reconstruction method, device, equipment and medium
US20210383605A1 (en) * 2020-10-30 2021-12-09 Beijing Baidu Netcom Science And Technology Co., Ltd. Driving method and apparatus of an avatar, device and medium

Also Published As

Publication number Publication date
CN105513114B (en) 2018-05-18
CN105513114A (en) 2016-04-20

Similar Documents

Publication Publication Date Title
WO2017092196A1 (en) Method and apparatus for generating three-dimensional animation
US20230351663A1 (en) System and method for generating an avatar that expresses a state of a user
US10846903B2 (en) Single shot capture to animated VR avatar
US9348950B2 (en) Perceptually guided capture and stylization of 3D human figures
CN103999126B (en) Method and device for estimating a pose
CN109427007B (en) Virtual fitting method based on multiple visual angles
CN106600626B (en) Three-dimensional human motion capture method and system
KR20160121379A (en) Apparatus and method for analyzing golf motion
CN108513089B (en) Method and device for group video session
CN109377557A (en) Real-time three-dimensional facial reconstruction method based on single frames facial image
Tulyakov et al. Robust real-time extreme head pose estimation
CN110363867A (en) Virtual dress up system, method, equipment and medium
CN106952335B (en) Method and system for establishing human body model library
CN108460398A (en) Image processing method, device, cloud processing equipment and computer program product
WO2021240848A1 (en) Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program
JP6775669B2 (en) Information processing device
CN109903360A (en) 3 D human face animation control system and its control method
CA3204613A1 (en) Volumetric video from an image source
Wang et al. Hierarchical facial expression animation by motion capture data
JP7044846B2 (en) Information processing equipment
Chen et al. Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail
Zhang et al. Visual Error Correction Method for VR Image of Continuous Aerobics
Han et al. Intelligent Action Recognition and Dance Motion Optimization Based on Multi-Threshold Image Segmentation
CN112416124A (en) Dance posture feedback method and device
CN117496409A (en) Fine granularity dance action scoring method based on multi-view three-dimensional human body reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16869510

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16869510

Country of ref document: EP

Kind code of ref document: A1