CN114783039A - Motion migration method driven by 3D human body model - Google Patents

Motion migration method driven by 3D human body model Download PDF

Info

Publication number
CN114783039A
CN114783039A (application CN202210708260.9A)
Authority
CN
China
Prior art keywords
model
human body
motion
posture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210708260.9A
Other languages
Chinese (zh)
Other versions
CN114783039B (en)
Inventor
罗冬
夏贵羽
张泽远
马芙蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202210708260.9A priority Critical patent/CN114783039B/en
Publication of CN114783039A publication Critical patent/CN114783039A/en
Application granted granted Critical
Publication of CN114783039B publication Critical patent/CN114783039B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a motion migration method driven by a 3D human body model. The method converts training data into UV space and constructs and optimizes a 3D human body model using complementary information between adjacent video frames; it then projects the optimized 3D human body model onto a 2D plane so that the 3D information of the original motion is retained, and drives the optimized model with the target pose; the 2D projection and the pose of the training data are used as the input of a pre-trained motion image generation model, and the trained model is saved; the pose of the target person is then normalized; finally, the 2D projection of the optimized 3D human body model driven by the target person's pose and the normalized target pose are fed into the trained motion image generation model to perform the final motion migration. The method alleviates problems such as blurring and shape distortion in 2D image generation and ensures that the generated motion images have reliable depth information, accurate shape and a clear face.

Description

Motion migration method driven by 3D human body model
Technical Field
The invention belongs to the technical field of motion migration, and particularly relates to a motion migration method driven by a 3D human body model.
Background
Human motion migration aims to synthesize human motion images that combine the texture of the person in the training images with a target pose. It is currently used in film production, game design and medical rehabilitation. Based on human motion migration, the character in the training images can be freely animated to perform user-defined actions. Traditional motion migration methods based on computer graphics require complex rendering operations to generate appearance texture; they are time-consuming and computationally expensive, and ordinary users or small organizations cannot afford such high computation and time costs.
Human motion is a complex natural phenomenon: all real motion occurs in 3D space, and real motion images look natural because they are 2D projections of the original 3D motion and therefore naturally inherit its 3D information. Existing motion migration studies are mostly based on 2D motion data such as images and videos, which are merely 2D projections of the true motion; the motion images generated by such methods commonly suffer from problems such as blurring and shape distortion.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a motion migration method driven by a 3D human body model, which not only alleviates problems such as blurring and shape distortion in 2D image generation, but also ensures that the generated motion images have reliable depth information, accurate shape and a clear face.
In order to achieve this purpose, the invention adopts the following technical scheme. A motion migration method driven by a 3D human body model comprises: constructing a training data set from pre-recorded video frames used as training data, and extracting the poses of the training data; converting the training data into UV space to generate UV maps, and constructing and optimizing a 3D human body model using complementary information between adjacent video frames; projecting the optimized 3D human body model onto a 2D plane to obtain a 2D projection that retains the 3D information of the original motion, and driving the optimized 3D human body model with the pose of the target person; using the 2D projection carrying the 3D information of the original motion together with the pose of the training data as the input of a motion image generation model, and saving the trained motion image generation model; normalizing the pose of the target person; and finally feeding the 2D projection of the optimized 3D human body model driven by the target person's pose, together with the normalized target pose, into the trained motion image generation model to perform the final motion migration.
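For orientation, the overall flow described above can be summarized in the following Python-style sketch; every callable passed in (extract_poses, build_uv_maps, optimize_body_model, project_to_2d, train_image_generator, normalize_pose) is a placeholder standing for one of the steps in the text, not an actual API of the invention.

```python
def motion_migration(training_frames, target_pose_sequence,
                     extract_poses, build_uv_maps, optimize_body_model,
                     project_to_2d, train_image_generator, normalize_pose):
    """High-level outline of the pipeline; all callables are supplied by the
    caller and stand for the steps described in the text."""
    # Step 1: 2D poses of the training person (e.g. OpenPose)
    train_poses = extract_poses(training_frames)

    # Step 2: UV maps (DensePose-style) and 3D body-model optimization
    uv_maps = build_uv_maps(training_frames)
    body_model = optimize_body_model(training_frames, uv_maps)

    # Step 3: 2D projections that keep the 3D information of the motion
    train_proj = [project_to_2d(body_model, p) for p in train_poses]

    # Step 4: train the motion image generation model (Face-Attention GAN)
    generator = train_image_generator(train_proj, train_poses, training_frames)

    # Steps 5-6: normalize target poses, drive the model, generate frames
    outputs = []
    for pose in target_pose_sequence:
        norm_pose = normalize_pose(pose, train_poses)
        outputs.append(generator(project_to_2d(body_model, pose), norm_pose))
    return outputs
```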
Further, the pose estimation algorithm OpenPose is used to extract the poses of the training data.
Further, the pixels of the images in the training data are converted into UV space using DensePose, the corresponding UV maps are generated, and the 3D human body model is constructed and optimized with complementary information between adjacent video frames, including: taking from the training data a set of images of different poses spaced several frames apart, denoted $\{I_i\}_{i=1}^{n}$, together with the corresponding UV maps generated by DensePose; generating a set of local texture maps $\{T_i\}_{i=1}^{m}$ through UV conversion; inputting the generated local texture maps into a texture filling network to generate a texture map $T$ carrying multi-pose texture information; and computing, through a loss function, the loss between the set of "original images" $\{\hat I_i\}_{i=1}^{n}$ restored from the texture map $T$ and the set of real images $\{I_i\}$, thereby realizing the optimization of the 3D human body model.
Further, the loss function is expressed as

$$\mathcal{L}_{tex} = \frac{1}{n}\sum_{i=1}^{n}\big\|\hat I_i - I_i\big\|_1,$$

wherein each restored "original image" $\hat I_i$ is obtained from the texture map $T$, and $n$ denotes the number of restored "original images"; the texture map $T$ is obtained from the following equation:

$$T = \sum_{i=1}^{m} P_i \odot T_i,$$

wherein $m$ denotes the total number of local texture maps $T_i$, $\odot$ denotes element-wise multiplication, and $P_i$ is the probability map generated by the texture filling network that predicts the probability that a pixel of $T$ comes from the corresponding position of $T_i$; the probability map $P_i$ is obtained from the following equation:

$$P_i(j,k) = \frac{\exp\!\big(\tilde O_i(j,k)\big)}{\sum_{l=1}^{c}\exp\!\big(\tilde O_l(j,k)\big)},$$

wherein $P_i(j,k)$ denotes the element in the $j$-th row and $k$-th column of $P_i$, $O$ denotes the output of the decoder with $c$ channels, $\tilde O$ denotes $O$ enlarged by the amplification module with amplification factor $s$, and $\tilde O_i(j,k)$ denotes the element in the $j$-th row and $k$-th column of its $i$-th channel; in particular, the number $n$ of restored "original images", the total number $m$ of local texture maps and the number $c$ of channels output by the decoder are equal.
Further, projecting the optimized 3D human body model onto a 2D plane to obtain a 2D projection that retains the 3D information of the original motion, and driving the optimized 3D human body model with the pose of the target person, includes: predicting the pose of the 3D human body model through HMR (Human Mesh Recovery) and transferring the predicted pose to the 3D human body model, thereby driving the 3D human body model.
Further, the motion image generation model is defined as a Face-Attention GAN model; the Face-Attention GAN model is based on the GAN model, matches an elliptical face region with a Gaussian distribution, configures a face enhancement loss function, and introduces an attention mechanism, wherein:

Matching the elliptical face region with a Gaussian distribution is realized by designing the mean and the covariance matrix, and comprises the following steps. The position of the face region in the image is determined by the pose estimation algorithm OpenPose, which provides the locations of the nose, eyes and ears; the center of the ellipse is set to the position of the nose, $p_{\mathrm{nose}}$; the two axes of the ellipse are the eigenvectors of the covariance matrix, and the axis lengths determine the corresponding eigenvalues. Let $a$ and $b$ be the two axes of the ellipse, with $a$ and $b$ both unit vectors satisfying

$$a^\top b = 0, \qquad \|a\|_2 = \|b\|_2 = 1,$$

wherein $b=(b_1,b_2)^\top$, so that the two elements of $b$ are determined by $a$. The relationship between the eigenvectors $a$ and $b$ and the covariance matrix $\Sigma$ is

$$\Sigma = A\,\mathrm{diag}(\lambda_a,\lambda_b)\,A^{-1}, \qquad A = [\,a\;\;b\,],$$

wherein $\lambda_a$ is the eigenvalue corresponding to $a$ and $\lambda_b$ is the eigenvalue corresponding to $b$, both determined by the axis length $r$ of the ellipse and the scaling factor $\sigma$; since $a$ and $b$ are orthogonal, $A$ is necessarily invertible. In the Gaussian distribution with $p_{\mathrm{nose}}$ as the mean and $\Sigma$ as the covariance, uniform sampling at a distance interval of 1 within the rectangular region constructed by the four points (1, 1), (1, 512), (512, 1) and (512, 512) yields the face-enhancement Gaussian weights $W$, and the generated Gaussian weights $W$ are used to define the face enhancement loss function. The face enhancement loss function is as follows:

$$\mathcal{L}_{face}(G) = \mathbb{E}_{(x,z),y}\big[\,\|\,W \odot \big(y - G(x,z)\big)\,\|_1\,\big],$$

wherein $x$ denotes the pose, $z$ denotes the 2D projection of the 3D human body model, $y$ denotes the real image, $G(x,z)$ denotes the image generated by the generator $G$ from the inputs $x$ and $z$, and $W$ denotes the Gaussian weights generated by matching the elliptical face with the Gaussian distribution. The attention mechanism introduced includes channel attention and spatial attention. The final objective function is

$$\min_G \max_D \; \mathcal{L}_{GAN}(G,D) + \lambda_{face}\,\mathcal{L}_{face}(G) + \lambda_{FM}\,\mathcal{L}_{FM}(G,D) + \lambda_{P}\,\mathcal{L}_{P}(G),$$

wherein $G$ denotes the generator and $D$ denotes the discriminator; $\mathcal{L}_{GAN}(G,D)$ is the loss function of the GAN model, and through $\min_G$ and $\max_D$ the discriminator learns to judge the authenticity of samples accurately while the generator learns to produce samples the discriminator cannot distinguish, a mutual game process; $\mathcal{L}_{face}$ is the face enhancement loss used to enhance the face region of the image; $\mathcal{L}_{FM}$ is the feature matching loss used to ensure the global consistency of the image content; $\mathcal{L}_{P}$ is the perceptual reconstruction loss used to ensure the global consistency of the image content; and the parameters $\lambda_{face}$, $\lambda_{FM}$ and $\lambda_{P}$ are used to balance these losses.
Further, together with the introduced attention mechanism, a feature matching loss based on the discriminator D is employed, as follows:

$$\mathcal{L}_{FM}(G,D) = \mathbb{E}_{(x,z),y}\sum_{i=1}^{T}\frac{1}{N_i}\Big\|\,D^{(i)}(x,z,y) - D^{(i)}\big(x,z,G(x,z)\big)\,\Big\|_1,$$

wherein $D^{(i)}$ is the $i$-th layer feature extractor of the discriminator $D$, $N_i$ denotes the number of elements in the $i$-th layer, and $T$ is the total number of layers of the discriminator $D$; the generated image and the real image are then input into a pre-trained VGG network and the features of different layers are compared, giving the perceptual reconstruction loss:

$$\mathcal{L}_{P}(G) = \mathbb{E}_{(x,z),y}\sum_{i=1}^{N}\frac{1}{M_i}\Big\|\,F^{(i)}(y) - F^{(i)}\big(G(x,z)\big)\,\Big\|_1,$$

wherein $F^{(i)}$ denotes the $i$-th layer feature extractor of the VGG network, $M_i$ denotes the number of elements in the $i$-th layer, and $N$ is the total number of layers of the VGG network.
Further, normalizing the pose of the target person specifically comprises: approximating the real length of a bone segment by the maximum bone segment length in the training set, and approximating the real bone segment length of the new pose in the same way; then adjusting the length of the bone segments displayed in the image according to the ratio between the standard skeleton and the new skeleton. Let $j_i$ denote the coordinate of the $i$-th joint of the new pose and $j_{f(i)}$ denote the coordinate of its parent joint; $j_i$ is adjusted by

$$j_i' = j_{f(i)} + \frac{l_i^{\mathrm{train}}}{l_i^{\mathrm{target}}}\,\big(j_i - j_{f(i)}\big),$$

wherein $l_i^{\mathrm{target}}$ and $l_i^{\mathrm{train}}$ respectively denote the maximum bone segment length between the $i$-th joint and its parent joint in the target person images and in the training images.
Compared with the prior art, the invention has the following beneficial effects:
(1) The training data are converted into UV space to generate UV maps, and a 3D human body model is constructed and optimized using complementary information between adjacent video frames; the optimized 3D human body model is projected onto a 2D plane to obtain a 2D projection that retains the 3D information of the original motion, and the optimized model is driven by the pose of the target person; the 2D projection carrying the 3D information of the original motion and the pose of the training data are used as the input of the motion image generation model, and the trained model is saved; the pose of the target person is normalized; finally, the 2D projection of the optimized 3D human body model driven by the target person's pose and the normalized target pose are fed into the trained motion image generation model for the final motion migration. This alleviates problems such as blurring and shape distortion in 2D image generation and ensures that the generated motion images have reliable depth information, accurate shape and a clear face;
(2) The method has a small computational burden and short running time, and can be applied mainly in three fields: 1) in the film and television industry, it can be used to make real characters appear to perform highly watchable, high-difficulty actions; 2) in game design, it can be used for the action design of virtual characters; 3) in medical rehabilitation, it can be used to synthesize normal movement postures for patients with movement disorders.
Drawings
FIG. 1 is a model framework for optimizing a 3D human model in an embodiment of the invention;
FIG. 2 is a block diagram of a texture filling network in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of the pose drive of a 3D human body model in an embodiment of the invention;
FIG. 4 is a Face-Attention GAN model framework constructed in an embodiment of the present invention;
FIG. 5 is a diagram illustrating matching elliptical faces using Gaussian distributions in an embodiment of the present invention;
FIG. 6 is a schematic illustration of a CBAM attention mechanism in an embodiment of the present invention;
fig. 7 is a schematic diagram of a motion transfer process in an embodiment of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
A motion migration method driven by a 3D human body model comprises: constructing a training data set from pre-recorded video frames used as training data, and extracting the poses of the training data; converting the training data into UV space to generate UV maps, and constructing and optimizing a 3D human body model using complementary information between adjacent video frames; projecting the optimized 3D human body model onto a 2D plane to obtain a 2D projection that retains the 3D information of the original motion, and driving the optimized 3D human body model with the pose of the target person; using the 2D projection carrying the 3D information of the original motion together with the pose of the training data as the input of a motion image generation model, and saving the trained motion image generation model; normalizing the pose of the target person; and finally feeding the 2D projection of the optimized 3D human body model driven by the target person's pose, together with the normalized target pose, into the trained motion image generation model to perform the final motion migration.
Step 1: a training data set is constructed from pre-recorded video frames used as training data, and the poses of the training data are extracted.
For each person, a motion video with an average length of 3 minutes is recorded at 30 frames per second; the training data consists of the video frames of each person, each with a resolution of 512 x 512. The videos are shot with mobile phones at fixed positions, at a shooting distance of about 5 meters. After the training data set is prepared, the poses of the training data are extracted with the state-of-the-art pose estimation algorithm OpenPose.
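As a rough illustration of this data-preparation step, the following Python sketch (using OpenCV; the file name subject_01.mp4 is hypothetical) reads a recorded video and collects frames at the 512 x 512 training resolution; pose extraction with OpenPose is treated as an external step and is not reproduced here.

```python
import cv2

def extract_frames(video_path, size=(512, 512)):
    """Read a motion video and return its frames resized to 512 x 512."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize each frame to the training resolution used in the embodiment.
        frames.append(cv2.resize(frame, size))
    cap.release()
    return frames

frames = extract_frames("subject_01.mp4")
# Poses would then be extracted per frame, e.g. by running OpenPose on each
# image; the exact OpenPose invocation is omitted here.
```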
Step 2: the pixels of the images in the training data are converted into UV space using DensePose to generate the corresponding UV maps, and the 3D human body model is constructed and optimized with complementary information between adjacent video frames.
The framework of the method for optimizing the human body model based on sequence images is shown in FIG. 1. A set of images of different poses spaced several frames apart, $\{I_i\}_{i=1}^{n}$, is taken from the training data together with the corresponding UV maps generated by DensePose; a group of local texture maps $\{T_i\}_{i=1}^{m}$ is then generated through UV conversion, and the generated local texture maps are input into the texture filling network.

The texture filling network is shown in FIG. 2; it finally generates a complete texture map $T$ carrying multi-pose texture information. An L1 loss is computed between the set of "original images" $\{\hat I_i\}_{i=1}^{n}$ restored from $T$ and the set of real images $\{I_i\}$, which drives the network to generate a more detailed texture map; this texture map is ultimately used to build the 3D human body model, enabling its optimization. The corresponding loss function is expressed as

$$\mathcal{L}_{tex} = \frac{1}{n}\sum_{i=1}^{n}\big\|\hat I_i - I_i\big\|_1,$$

wherein each restored "original image" $\hat I_i$ is obtained from the texture map $T$, and $n$ denotes the number of restored "original images"; the texture map $T$ is obtained from the following equation:

$$T = \sum_{i=1}^{m} P_i \odot T_i,$$

wherein $m$ denotes the total number of local texture maps $T_i$, $\odot$ denotes element-wise multiplication, and $P_i$ is the probability map generated by the texture filling network that predicts the probability that a pixel of $T$ comes from the corresponding position of $T_i$; the probability map $P_i$ is obtained from the following equation:

$$P_i(j,k) = \frac{\exp\!\big(\tilde O_i(j,k)\big)}{\sum_{l=1}^{c}\exp\!\big(\tilde O_l(j,k)\big)},$$

wherein $P_i(j,k)$ denotes the element in the $j$-th row and $k$-th column of $P_i$, $O$ denotes the output of the decoder with $c$ channels, $\tilde O$ denotes $O$ enlarged by the amplification module with amplification factor $s$, and $\tilde O_i(j,k)$ denotes the element in the $j$-th row and $k$-th column of its $i$-th channel. In particular, the number $n$ of restored "original images", the total number $m$ of local texture maps and the number $c$ of channels output by the decoder are equal.
The optimization of the 3D human body model is realized according to the method.
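The texture-fusion and L1 reconstruction loss described above can be sketched in PyTorch as follows; the tensor shapes and the assumption that the decoder emits one channel per local texture map are illustrative, not the patent's exact implementation.

```python
import torch
import torch.nn.functional as F

def fuse_texture(local_textures, decoder_logits, scale):
    """Fuse m local texture maps into one complete texture map.

    local_textures: (m, 3, H, W) partial textures from UV conversion
    decoder_logits: (m, h, w) raw decoder output, one channel per local map
    scale:          amplification factor of the amplification module
    """
    logits = F.interpolate(decoder_logits.unsqueeze(0), scale_factor=scale,
                           mode="bilinear", align_corners=False)[0]   # (m, H, W)
    probs = torch.softmax(logits, dim=0)                              # P_i, sums to 1 per pixel
    return (probs.unsqueeze(1) * local_textures).sum(dim=0)           # (3, H, W)

def texture_loss(restored_images, real_images):
    """Mean-reduced L1 between restored 'original images' and real frames
    (a scaled variant of the per-image L1 sum in the formula above)."""
    return F.l1_loss(restored_images, real_images)
```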
Step 3: the optimized 3D human body model is projected onto a 2D plane so that the 3D information of the original motion is retained, and a pose-driving method for the 3D human body model is designed. The pose of the 3D human body model is predicted by HMR (Human Mesh Recovery), and the predicted pose is transferred to the 3D human body model, thereby driving it, as shown in FIG. 3. The pose of the 3D human body model is represented by a skeleton map, which is intuitive and easy to read.
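A hypothetical sketch of this pose-driving step is given below; hmr_predict_pose, body_model and camera are placeholders for an HMR-style pose regressor, the optimized parametric body model and a 3D-to-2D projection, respectively, and do not correspond to a specific library API.

```python
def drive_body_model(body_model, target_frames, hmr_predict_pose, camera):
    """Drive the optimized 3D body model with poses predicted from target frames.

    body_model:        callable mapping pose parameters to a posed 3D mesh
    target_frames:     images of the target person performing the motion
    hmr_predict_pose:  HMR-style regressor returning per-frame pose parameters
    camera:            projection function from 3D vertices to the 2D image plane
    """
    projections = []
    for frame in target_frames:
        pose_params = hmr_predict_pose(frame)   # e.g. per-joint rotations
        mesh = body_model(pose_params)          # posed 3D mesh (vertices, faces)
        projections.append(camera(mesh))        # 2D projection keeping 3D cues
    return projections
```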
Step 4: the 2D projection and the pose of the training data are used as the input of the motion image generation model, and the trained model is saved.
This embodiment provides a motion image generation model for the final motion migration, defined as a Face-Attention GAN model. The Face-Attention GAN model is based on the GAN model; it matches an elliptical face region with a Gaussian distribution, configures a face enhancement loss function, and introduces an attention mechanism. The model takes the 2D projection obtained in step 3 and the pose extracted in step 1 as input; the model framework is shown in FIG. 4. The adversarial loss of the GAN is

$$\mathcal{L}_{GAN}(G,D) = \mathbb{E}_{(x,z),y}\big[\log D(x,z,y)\big] + \mathbb{E}_{(x,z)}\big[\log\big(1 - D(x,z,G(x,z))\big)\big],$$

wherein $G$ denotes the generator, $D$ denotes the discriminator, $x$ denotes the pose, $z$ denotes the 2D projection of the 3D human body model, $y$ denotes the real image, and $G(x,z)$ denotes the image generated by the generator $G$ from the inputs $x$ and $z$. The term $\log D(x,z,y)$ ensures the basic judgment capability of the discriminator: the larger it is, the larger $D(x,z,y)$ is, i.e., the more accurately the discriminator identifies a real sample as real. The term $\log\big(1 - D(x,z,G(x,z))\big)$ ensures that the discriminator can distinguish fake samples: the larger it is, the smaller $D(x,z,G(x,z))$ is, i.e., the more correctly the discriminator rejects fake samples.
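A PyTorch-style sketch of this conditional adversarial loss follows; the assumptions that the discriminator receives the pose, the 2D projection and an image as separate arguments and outputs probabilities in (0, 1) are made for illustration only.

```python
import torch

def gan_losses(G, D, pose, projection, real_image):
    """Split of the adversarial loss L_GAN(G, D) into its two update directions."""
    fake_image = G(pose, projection)

    # Discriminator: maximize log D(x,z,y) + log(1 - D(x,z,G(x,z)))
    d_real = D(pose, projection, real_image)
    d_fake = D(pose, projection, fake_image.detach())
    d_loss = -(torch.log(d_real + 1e-8).mean()
               + torch.log(1.0 - d_fake + 1e-8).mean())

    # Generator: fool the discriminator on the generated sample
    g_loss = -torch.log(D(pose, projection, fake_image) + 1e-8).mean()
    return d_loss, g_loss
```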
Matching the elliptical face region with a Gaussian distribution is realized by designing the mean and the covariance matrix of the Gaussian distribution, as follows. The position of the face region in the image is determined by the pose estimation algorithm OpenPose, which provides the locations of the nose, eyes and ears; the center of the ellipse is set to the position of the nose, $p_{\mathrm{nose}}$; the two axes of the ellipse are the eigenvectors of the covariance matrix, and the axis lengths determine the corresponding eigenvalues. As shown in FIG. 5, let $a$ and $b$ be the two axes of the ellipse, with $a$ and $b$ both unit vectors satisfying

$$a^\top b = 0, \qquad \|a\|_2 = \|b\|_2 = 1,$$

wherein $b=(b_1,b_2)^\top$, so that the two elements of $b$ are determined by $a$. The relationship between the eigenvectors $a$ and $b$ and the covariance matrix $\Sigma$ is

$$\Sigma = A\,\mathrm{diag}(\lambda_a,\lambda_b)\,A^{-1}, \qquad A = [\,a\;\;b\,],$$

wherein $\lambda_a$ is the eigenvalue corresponding to $a$ and $\lambda_b$ is the eigenvalue corresponding to $b$, both determined by the axis length $r$ of the ellipse and the scaling factor $\sigma$; since $a$ and $b$ are orthogonal, $A$ is necessarily invertible. In the Gaussian distribution with $p_{\mathrm{nose}}$ as the mean and $\Sigma$ as the covariance, uniform sampling at a distance interval of 1 within the rectangular region constructed by the four points (1, 1), (1, 512), (512, 1) and (512, 512) yields the face-enhancement Gaussian weights $W$, and the generated Gaussian weights $W$ are used to define the face enhancement loss function.

The designed face enhancement loss function is

$$\mathcal{L}_{face}(G) = \mathbb{E}_{(x,z),y}\big[\,\|\,W \odot \big(y - G(x,z)\big)\,\|_1\,\big],$$

wherein $x$ denotes the pose, $z$ denotes the 2D projection of the 3D human body model, $y$ denotes the real image, $G(x,z)$ denotes the image generated by the generator $G$ from the inputs $x$ and $z$, and $W$ denotes the Gaussian weights generated by matching the elliptical face with the Gaussian distribution.
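An illustrative NumPy sketch of building the face-enhancement weight map from the ellipse parameters is given below; the specific mapping from axis length and scaling factor to the eigenvalues is an assumption made for this sketch.

```python
import numpy as np

def face_gaussian_weights(nose_xy, axis_a, len_a, len_b, size=512, sigma=1.0):
    """Gaussian weight map W matching an elliptical face region on a size x size grid.

    nose_xy: (x, y) of the nose keypoint (ellipse center / Gaussian mean)
    axis_a:  unit vector along the first ellipse axis; the second axis is orthogonal
    len_a, len_b: axis lengths; together with sigma they set the eigenvalues
                  (this mapping is an assumption for illustration)
    """
    a = np.asarray(axis_a, dtype=float)
    a /= np.linalg.norm(a)
    b = np.array([-a[1], a[0]])                     # orthogonal unit vector
    A = np.stack([a, b], axis=1)                    # eigenvector matrix
    lam = np.array([(len_a / sigma) ** 2, (len_b / sigma) ** 2])
    cov = A @ np.diag(lam) @ A.T                    # covariance of the face Gaussian
    inv_cov = np.linalg.inv(cov)

    # Uniform grid sampling at distance interval 1 over the 512 x 512 image
    xs, ys = np.meshgrid(np.arange(1, size + 1), np.arange(1, size + 1))
    d = np.stack([xs - nose_xy[0], ys - nose_xy[1]], axis=-1)
    mahal = np.einsum("hwi,ij,hwj->hw", d, inv_cov, d)
    W = np.exp(-0.5 * mahal)
    return W / W.max()                              # scale weights to [0, 1]
```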
An attention mechanism is also introduced into the model; its structure, shown in FIG. 6, combines channel attention and spatial attention.
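A compact PyTorch sketch of a CBAM-style block (channel attention followed by spatial attention) is shown below as a generic illustration of the mechanism in FIG. 6, not the patent's exact module.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):                              # x: (B, C, H, W)
        # Channel attention from average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)

        # Spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```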
To further refine the details, a feature matching loss based on the discriminator D is employed, as follows:

$$\mathcal{L}_{FM}(G,D) = \mathbb{E}_{(x,z),y}\sum_{i=1}^{T}\frac{1}{N_i}\Big\|\,D^{(i)}(x,z,y) - D^{(i)}\big(x,z,G(x,z)\big)\,\Big\|_1,$$

wherein $D^{(i)}$ is the $i$-th layer feature extractor of the discriminator $D$, $N_i$ denotes the number of elements in the $i$-th layer, and $T$ is the total number of layers of the discriminator $D$.

The generated image and the real image are then input into a pre-trained VGG network and the features of different layers are compared. The perceptual reconstruction loss is as follows:

$$\mathcal{L}_{P}(G) = \mathbb{E}_{(x,z),y}\sum_{i=1}^{N}\frac{1}{M_i}\Big\|\,F^{(i)}(y) - F^{(i)}\big(G(x,z)\big)\,\Big\|_1,$$

wherein $F^{(i)}$ denotes the $i$-th layer feature extractor of the VGG network, $M_i$ denotes the number of elements in the $i$-th layer, and $N$ is the total number of layers of the VGG network.
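Both losses can be sketched in PyTorch as follows, assuming the discriminator and a pre-trained VGG each expose a list of intermediate feature maps; these interfaces are assumptions for illustration.

```python
import torch

def feature_matching_loss(disc_feats_real, disc_feats_fake):
    """Mean L1 distance between discriminator features of real and generated images."""
    loss = 0.0
    for fr, ff in zip(disc_feats_real, disc_feats_fake):
        loss = loss + torch.mean(torch.abs(fr.detach() - ff))
    return loss

def perceptual_loss(vgg_features, real_image, fake_image):
    """Compare real and generated images layer by layer in a pre-trained VGG."""
    loss = 0.0
    for fr, ff in zip(vgg_features(real_image), vgg_features(fake_image)):
        loss = loss + torch.mean(torch.abs(fr - ff))
    return loss
```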
The final objective function is

$$\min_G \max_D \; \mathcal{L}_{GAN}(G,D) + \lambda_{face}\,\mathcal{L}_{face}(G) + \lambda_{FM}\,\mathcal{L}_{FM}(G,D) + \lambda_{P}\,\mathcal{L}_{P}(G),$$

wherein the parameters $\lambda_{face}$, $\lambda_{FM}$ and $\lambda_{P}$ are used to balance these losses, $G$ denotes the generator and $D$ denotes the discriminator. $\mathcal{L}_{GAN}(G,D)$ is the adversarial loss of the GAN; through $\min_G$ and $\max_D$, the discriminator learns to judge the authenticity of samples accurately while the generator learns to produce samples that the discriminator cannot distinguish, a mutual game process. $\mathcal{L}_{face}$ is the face enhancement loss used to enhance the face region of the image. $\mathcal{L}_{FM}$ is the feature matching loss used to ensure the global consistency of the image content. $\mathcal{L}_{P}$ is the perceptual reconstruction loss used to ensure the global consistency of the image content.
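The generator-side objective could be assembled from the individual terms as in the sketch below; the numeric weights are placeholders, since the patent does not state specific values.

```python
# Hypothetical weighting; the patent does not specify numeric values.
LAMBDA_FACE, LAMBDA_FM, LAMBDA_P = 10.0, 10.0, 10.0

def generator_objective(g_adv_loss, face_loss, fm_loss, perc_loss):
    """Total generator loss: adversarial term plus weighted auxiliary losses."""
    return (g_adv_loss
            + LAMBDA_FACE * face_loss
            + LAMBDA_FM * fm_loss
            + LAMBDA_P * perc_loss)
```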
Step 5: in this embodiment, the pose of the target person is normalized. The real length of a bone segment is approximated by the maximum bone segment length in the training set, and the real bone segment length of the new pose is approximated in the same way; the length of the bone segments displayed in the image is then adjusted according to the ratio between the standard skeleton and the new skeleton. Let $j_i$ denote the coordinate of the $i$-th joint of the new pose and $j_{f(i)}$ denote the coordinate of its parent joint; $j_i$ is adjusted by

$$j_i' = j_{f(i)} + \frac{l_i^{\mathrm{train}}}{l_i^{\mathrm{target}}}\,\big(j_i - j_{f(i)}\big),$$

wherein $l_i^{\mathrm{target}}$ and $l_i^{\mathrm{train}}$ respectively denote the maximum bone segment length between the $i$-th joint and its parent joint in the target person images and in the training images.
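A NumPy sketch of this skeleton normalization is given below; it processes joints from the root outward so that each bone is rescaled relative to its already-adjusted parent, and the joint ordering and parent table are assumptions of the sketch.

```python
import numpy as np

def normalize_pose(joints, parents, max_len_target, max_len_train):
    """Rescale each bone of a target pose to the training skeleton's proportions.

    joints:         (J, 2) joint coordinates of the new (target) pose
    parents:        parent index per joint, -1 for the root; parents precede children
    max_len_target: (J,) max bone length per joint observed in the target images
    max_len_train:  (J,) max bone length per joint observed in the training images
    """
    out = joints.astype(float).copy()
    for i, p in enumerate(parents):
        if p < 0:
            continue                                  # keep the root joint fixed
        ratio = max_len_train[i] / max(max_len_target[i], 1e-8)
        out[i] = out[p] + ratio * (joints[i] - joints[p])
    return out
```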
Step 6: the 2D projection of the optimized 3D human body model driven by the target person's pose and the normalized target pose are input into the trained motion image generation model to perform the final motion migration. The motion migration process comprises pose normalization of the new skeleton and generation of the target person image, as shown in FIG. 7.
The training data are converted into UV space to generate UV maps, and a 3D human body model is constructed and optimized using complementary information between adjacent video frames; the optimized 3D human body model is projected onto a 2D plane so that the 3D information of the original motion is retained, and the optimized model is driven by the target pose; the 2D projection and the pose of the training data are used as the input of the motion image generation model, and the trained model is saved; the pose of the target person is then normalized; finally, the 2D projection of the optimized 3D human body model driven by the target person's pose and the normalized target pose are fed into the trained motion image generation model to perform the final motion migration. This alleviates problems such as blurring and shape distortion in 2D image generation and ensures that the generated motion images have reliable depth information, accurate shape and a clear face. The method has a small computational burden and short running time, and can be applied mainly in three fields: (1) in the film and television industry, it can be used to make real characters appear to perform highly watchable, high-difficulty actions; (2) in game design, it can be used for the action design of virtual characters; (3) in medical rehabilitation, it can be used to synthesize normal movement postures for patients with movement disorders.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (8)

1. A motion migration method driven by a 3D human body model, comprising:
constructing a training data set from pre-recorded video frames used as training data, and extracting the poses of the training data;
converting the training data into UV space to generate UV maps, and constructing and optimizing a 3D human body model using complementary information between adjacent video frames;
projecting the optimized 3D human body model onto a 2D plane to obtain a 2D projection that retains the 3D information of the original motion, and driving the optimized 3D human body model with the pose of the target person;
using the 2D projection carrying the 3D information of the original motion and the pose of the training data as the input of a motion image generation model, and saving the trained motion image generation model;
normalizing the pose of the target person;
and finally, using the 2D projection of the optimized 3D human body model driven by the pose of the target person and the normalized pose of the target person as the input of the trained motion image generation model for the final motion migration.
2. The motion migration method driven by a 3D human body model according to claim 1, wherein the pose estimation algorithm OpenPose is used to extract the poses of the training data.
3. The motion migration method driven by a 3D human body model according to claim 1, wherein converting the pixels of the images in the training data into UV space using DensePose, generating the corresponding UV maps, and constructing and optimizing the 3D human body model with complementary information between adjacent video frames comprises:
taking from the training data a set of images of different poses spaced several frames apart, denoted $\{I_i\}_{i=1}^{n}$, together with the corresponding UV maps generated by DensePose; generating a set of local texture maps $\{T_i\}_{i=1}^{m}$ through UV conversion; inputting the generated local texture maps into a texture filling network to generate a texture map $T$ with multi-pose texture information; and computing, through a loss function, the loss between the set of "original images" $\{\hat I_i\}_{i=1}^{n}$ restored from the texture map $T$ and the set of real images $\{I_i\}$, thereby realizing the optimization of the 3D human body model.
4. The motion migration method driven by a 3D human body model according to claim 3, wherein the loss function is expressed as

$$\mathcal{L}_{tex} = \frac{1}{n}\sum_{i=1}^{n}\big\|\hat I_i - I_i\big\|_1,$$

wherein each restored "original image" $\hat I_i$ is obtained from the texture map $T$, and $n$ denotes the number of restored "original images"; the texture map $T$ is obtained from the following equation:

$$T = \sum_{i=1}^{m} P_i \odot T_i,$$

wherein $m$ denotes the total number of local texture maps $T_i$, $\odot$ denotes element-wise multiplication, and $P_i$ is the probability map generated by the texture filling network that predicts the probability that a pixel of $T$ comes from the corresponding position of $T_i$; the probability map $P_i$ is obtained from the following equation:

$$P_i(j,k) = \frac{\exp\!\big(\tilde O_i(j,k)\big)}{\sum_{l=1}^{c}\exp\!\big(\tilde O_l(j,k)\big)},$$

wherein $P_i(j,k)$ denotes the element in the $j$-th row and $k$-th column of $P_i$, $O$ denotes the output of the decoder with $c$ channels, $\tilde O$ denotes $O$ enlarged by the amplification module with amplification factor $s$, and $\tilde O_i(j,k)$ denotes the element in the $j$-th row and $k$-th column of its $i$-th channel; in particular, the number $n$ of restored "original images", the total number $m$ of local texture maps and the number $c$ of channels output by the decoder are equal.
5. The motion migration method driven by a 3D human body model according to claim 1, wherein projecting the optimized 3D human body model onto a 2D plane to obtain a 2D projection retaining the 3D information of the original motion, and driving the optimized 3D human body model with the pose of the target person, comprises: predicting the pose of the 3D human body model through HMR and transferring the predicted pose to the 3D human body model, thereby driving the 3D human body model.
6. The motion migration method driven by a 3D human body model according to claim 1, wherein the motion image generation model is defined as a Face-Attention GAN model; the Face-Attention GAN model is based on the GAN model, matches an elliptical face region with a Gaussian distribution, configures a face enhancement loss function, and introduces an attention mechanism, wherein:
the method for matching the elliptical face area by using Gaussian distribution is realized by designing a mean value and a covariance matrix, and comprises the following steps: the position of the image face region is determined by the pose estimation algorithm openpos,
Figure 83487DEST_PATH_IMAGE021
is the location of the nose, eyes and ears; the center of the ellipse is set as the nose
Figure 776636DEST_PATH_IMAGE022
The position of (a); two axes of the ellipse are eigenvectors of the covariance matrix, and the length of the axes is eigenvalues of the covariance matrix; let a and b be the two axes of the ellipse, a and b both being unit vectors, and satisfy the following formula:
Figure 962767DEST_PATH_IMAGE023
wherein, the first and the second end of the pipe are connected with each other,
Figure 981538DEST_PATH_IMAGE024
is two elements of b, the relationship between the eigenvectors a and b and the covariance matrix Σ is as follows:
Figure 597328DEST_PATH_IMAGE025
wherein, the first and the second end of the pipe are connected with each other,
Figure 468069DEST_PATH_IMAGE026
Figure 384073DEST_PATH_IMAGE027
is the characteristic value corresponding to the a-value,
Figure 432800DEST_PATH_IMAGE028
Figure 67044DEST_PATH_IMAGE029
is the characteristic value corresponding to the b-number,
Figure 101996DEST_PATH_IMAGE030
is the axial length of the ellipse, sigma is the scaling factor, a and b are orthogonal,
Figure 747872DEST_PATH_IMAGE031
are necessarily reversible; in the process of
Figure 842867DEST_PATH_IMAGE022
In a Gaussian distribution where Σ is covariance as a mean, uniform sampling is performed at a distance interval of 1 within a rectangular region constructed by four points (1, 1), (1, 512), (512, 1), (512 ), and face-enhanced Gaussian weights are obtained
Figure 698828DEST_PATH_IMAGE032
And with the generated Gaussian weight
Figure 662104DEST_PATH_IMAGE032
To define a face enhancement loss function;
the face enhancement loss function is as follows:
Figure 552700DEST_PATH_IMAGE033
wherein the content of the first and second substances,
Figure 192498DEST_PATH_IMAGE034
the gesture is represented by a gesture that is,
Figure 535754DEST_PATH_IMAGE035
representing a 2D projection of a 3D phantom,ya real image is represented by a real image,
Figure 912509DEST_PATH_IMAGE036
to represent
Figure 516666DEST_PATH_IMAGE034
And
Figure 219042DEST_PATH_IMAGE035
is input to the image generated by the generator G,
Figure 659382DEST_PATH_IMAGE032
representing a gaussian weight generated by a gaussian distribution matching elliptical face;
the attention mechanisms introduced include channel attention and spatial attention; the final objective function is:
Figure 839828DEST_PATH_IMAGE037
wherein G denotes a generator, D denotes a discriminator,
Figure 705016DEST_PATH_IMAGE038
a loss function representing the GAN model,
Figure 437348DEST_PATH_IMAGE039
the fact that the discriminator can accurately judge the authenticity of the sample through minG and maxD and the sample generated by the generator can be distinguished through the discriminator is a mutual game process;
Figure 489618DEST_PATH_IMAGE040
representing a face enhancement loss function for enhancing a face region of an image;
Figure 847656DEST_PATH_IMAGE041
representing feature matching loss for ensuring global consistency of image content;
Figure 567350DEST_PATH_IMAGE042
representing perceptual reconstruction loss for ensuring global consistency of image content; parameter(s)
Figure 345950DEST_PATH_IMAGE043
For adjustment to balance these losses.
7. The motion migration method driven by a 3D human body model according to claim 6, wherein, together with the introduced attention mechanism, a feature matching loss based on the discriminator D is employed, as follows:

$$\mathcal{L}_{FM}(G,D) = \mathbb{E}_{(x,z),y}\sum_{i=1}^{T}\frac{1}{N_i}\Big\|\,D^{(i)}(x,z,y) - D^{(i)}\big(x,z,G(x,z)\big)\,\Big\|_1,$$

wherein $D^{(i)}$ is the $i$-th layer feature extractor of the discriminator $D$, $N_i$ denotes the number of elements in the $i$-th layer, and $T$ is the total number of layers of the discriminator $D$;
the generated image and the real image are then input into a pre-trained VGG network and the features of different layers are compared, giving the perceptual reconstruction loss:

$$\mathcal{L}_{P}(G) = \mathbb{E}_{(x,z),y}\sum_{i=1}^{N}\frac{1}{M_i}\Big\|\,F^{(i)}(y) - F^{(i)}\big(G(x,z)\big)\,\Big\|_1,$$

wherein $F^{(i)}$ denotes the $i$-th layer feature extractor of the VGG network, $M_i$ denotes the number of elements in the $i$-th layer, and $N$ is the total number of layers of the VGG network.
8. The motion migration method driven by a 3D human body model according to claim 1, wherein normalizing the pose of the target person specifically comprises: approximating the real length of a bone segment by the maximum bone segment length in the training set, and approximating the real bone segment length of the new pose in the same way; then adjusting the length of the bone segments displayed in the image according to the ratio between the standard skeleton and the new skeleton; letting $j_i$ denote the coordinate of the $i$-th joint of the new pose and $j_{f(i)}$ denote the coordinate of its parent joint, $j_i$ is adjusted by

$$j_i' = j_{f(i)} + \frac{l_i^{\mathrm{train}}}{l_i^{\mathrm{target}}}\,\big(j_i - j_{f(i)}\big),$$

wherein $l_i^{\mathrm{target}}$ and $l_i^{\mathrm{train}}$ respectively denote the maximum bone segment length between the $i$-th joint and its parent joint in the target person images and in the training images.
CN202210708260.9A 2022-06-22 2022-06-22 Motion migration method driven by 3D human body model Active CN114783039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210708260.9A CN114783039B (en) 2022-06-22 2022-06-22 Motion migration method driven by 3D human body model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210708260.9A CN114783039B (en) 2022-06-22 2022-06-22 Motion migration method driven by 3D human body model

Publications (2)

Publication Number Publication Date
CN114783039A true CN114783039A (en) 2022-07-22
CN114783039B CN114783039B (en) 2022-09-16

Family

ID=82422416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210708260.9A Active CN114783039B (en) 2022-06-22 2022-06-22 Motion migration method driven by 3D human body model

Country Status (1)

Country Link
CN (1) CN114783039B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071831A (en) * 2023-03-20 2023-05-05 南京信息工程大学 Human body image generation method based on UV space transformation

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161200A (en) * 2019-12-22 2020-05-15 天津大学 Human body posture migration method based on attention mechanism
CN111640172A (en) * 2020-05-08 2020-09-08 大连理工大学 Attitude migration method based on generation of countermeasure network
CN111724414A (en) * 2020-06-23 2020-09-29 宁夏大学 Basketball movement analysis method based on 3D attitude estimation
CN111797753A (en) * 2020-06-29 2020-10-20 北京灵汐科技有限公司 Training method, device, equipment and medium of image driving model, and image generation method, device and medium
CN112215116A (en) * 2020-09-30 2021-01-12 江苏大学 Mobile 2D image-oriented 3D river crab real-time detection method
CN112651316A (en) * 2020-12-18 2021-04-13 上海交通大学 Two-dimensional and three-dimensional multi-person attitude estimation system and method
CN114049652A (en) * 2021-11-05 2022-02-15 成都艾特能电气科技有限责任公司 Human body posture migration method and system based on action driving
CN114612614A (en) * 2022-03-09 2022-06-10 北京大甜绵白糖科技有限公司 Human body model reconstruction method and device, computer equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161200A (en) * 2019-12-22 2020-05-15 天津大学 Human body posture migration method based on attention mechanism
CN111640172A (en) * 2020-05-08 2020-09-08 大连理工大学 Attitude migration method based on generation of countermeasure network
CN111724414A (en) * 2020-06-23 2020-09-29 宁夏大学 Basketball movement analysis method based on 3D attitude estimation
CN111797753A (en) * 2020-06-29 2020-10-20 北京灵汐科技有限公司 Training method, device, equipment and medium of image driving model, and image generation method, device and medium
CN112215116A (en) * 2020-09-30 2021-01-12 江苏大学 Mobile 2D image-oriented 3D river crab real-time detection method
CN112651316A (en) * 2020-12-18 2021-04-13 上海交通大学 Two-dimensional and three-dimensional multi-person attitude estimation system and method
CN114049652A (en) * 2021-11-05 2022-02-15 成都艾特能电气科技有限责任公司 Human body posture migration method and system based on action driving
CN114612614A (en) * 2022-03-09 2022-06-10 北京大甜绵白糖科技有限公司 Human body model reconstruction method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MUHAMMED KOCABAS 等: "VIBE: Video Inference for Human Body Pose and Shape Estimation", 《ARXIV:1912.05656 [CS.CV]》 *
GAO Xiang et al.: "Real-time facial expression transfer method combining 3DMM and GAN", Computer Applications and Software *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071831A (en) * 2023-03-20 2023-05-05 南京信息工程大学 Human body image generation method based on UV space transformation

Also Published As

Publication number Publication date
CN114783039B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN108596024B (en) Portrait generation method based on face structure information
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN112887698B (en) High-quality face voice driving method based on nerve radiation field
CN110827193B (en) Panoramic video significance detection method based on multichannel characteristics
CN109376582A (en) A kind of interactive human face cartoon method based on generation confrontation network
CN108776983A (en) Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN110853119B (en) Reference picture-based makeup transfer method with robustness
CN110796593A (en) Image processing method, device, medium and electronic equipment based on artificial intelligence
CN113807265B (en) Diversified human face image synthesis method and system
CN110363770A (en) A kind of training method and device of the infrared semantic segmentation model of margin guide formula
CN115484410B (en) Event camera video reconstruction method based on deep learning
CN113112416B (en) Semantic-guided face image restoration method
CN110852935A (en) Image processing method for human face image changing with age
CN113724354A (en) Reference image color style-based gray level image coloring method
CN114783039B (en) Motion migration method driven by 3D human body model
CN112614070A (en) DefogNet-based single image defogging method
CN113888399B (en) Face age synthesis method based on style fusion and domain selection structure
CN111882516A (en) Image quality evaluation method based on visual saliency and deep neural network
CN111507276B (en) Construction site safety helmet detection method based on hidden layer enhanced features
Zheng et al. Overwater image dehazing via cycle-consistent generative adversarial network
CN111401209B (en) Action recognition method based on deep learning
CN111080754B (en) Character animation production method and device for connecting characteristic points of head and limbs
CN115914505B (en) Video generation method and system based on voice-driven digital human model
CN116311472A (en) Micro-expression recognition method and device based on multi-level graph convolution network
CN116863069A (en) Three-dimensional light field face content generation method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant