CN117132461B - Method and system for whole-body optimization of character based on character deformation target body - Google Patents


Info

Publication number
CN117132461B
CN117132461B
Authority
CN
China
Prior art keywords
face
deformation
matrix
feature
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311403703.4A
Other languages
Chinese (zh)
Other versions
CN117132461A (en)
Inventor
郭勇
苑朋飞
靳世凯
周浩
尚泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongying Nian Nian Beijing Technology Co ltd
Original Assignee
China Film Annual Beijing Culture Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Film Annual Beijing Culture Media Co ltd filed Critical China Film Annual Beijing Culture Media Co ltd
Priority to CN202311403703.4A priority Critical patent/CN117132461B/en
Publication of CN117132461A publication Critical patent/CN117132461A/en
Application granted granted Critical
Publication of CN117132461B publication Critical patent/CN117132461B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses a method and a system for whole-body optimization of a character based on a character deformation target body, and relates to the technical field of face deformation. First, an undeformed face image and a deformed face image are acquired. Image features are then extracted from the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix. Next, the contrast features between the two feature matrices are characterized to obtain a spatially enhanced face deformation feature differential characterization matrix. Finally, facial deformation is performed on the digital person based on this matrix. In this way, the deformation parameters of the digital face can be calculated automatically from the undeformed and deformed face images captured before and after the user's operation, and applied to the digital face.

Description

Method and system for whole-body optimization of character based on character deformation target body
Technical Field
The application relates to the technical field of face deformation, and in particular to a method and a system for whole-body optimization of a character based on a character deformation target body.
Background
Face deformation is a common digital entertainment technology that can change the shape, expression, and other attributes of a face. It is widely applied in entertainment, games, movies, and other fields; for example, users can experience different character images in virtual scenes or share their own creations on social platforms.
In practice, face deformation usually requires a mathematical model that relies on a set of parameters to describe the deformation process of the face. How to accurately translate a user's operation into appropriate digital human face deformation parameters is therefore an important technical problem.
Disclosure of Invention
The present application has been made in order to solve the above technical problem. Embodiments of the application provide a method and a system for whole-body optimization of a character based on a character deformation target body, which can automatically calculate appropriate digital face deformation parameters from the undeformed and deformed face images captured before and after the user's operation and apply the parameters to the digital person.
According to one aspect of the present application, there is provided a method for whole-body optimization of a character based on a character deformation target body, comprising:
acquiring an undeformed face image and a deformed face image;
extracting image features of the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix;
representing the contrast characteristics between the deformed face image characteristic matrix and the undeformed face image characteristic matrix to obtain a space-enhanced face deformed characteristic differential representation matrix;
and carrying out facial deformation on the digital person based on the space-enhanced facial deformation characteristic differential characterization matrix.
According to another aspect of the present application, there is provided a system for whole-body optimization of a character based on a character deformation target body, comprising:
the data acquisition module is used for acquiring an undeformed face image and a deformed face image;
the image feature extraction module is used for extracting image features of the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix;
the feature comparison module is used for representing the comparison features between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a space-enhanced face deformed feature differential representation matrix;
and the deformation module is used for carrying out facial deformation on the digital person based on the space-enhanced facial deformation characteristic difference characterization matrix.
Compared with the prior art, the method and system for whole-body optimization of a character based on a character deformation target body provided by the present application first acquire an undeformed face image and a deformed face image. Image features are then extracted from both images to obtain a deformed face image feature matrix and an undeformed face image feature matrix. Next, the contrast features between the two feature matrices are characterized to obtain a spatially enhanced face deformation feature differential characterization matrix, and finally facial deformation is performed on the digital person based on this matrix. In this way, the deformation parameters of the digital face can be calculated automatically from the undeformed and deformed face images captured before and after the user's operation, and applied to the digital face.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. The drawings are not necessarily drawn to scale; emphasis is instead placed on illustrating the gist of the present application.
FIG. 1 is a flowchart of a method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application.
Fig. 2 is a schematic architecture diagram of a method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application.
Fig. 3 is a flowchart of substep S130 of the method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application.
Fig. 4 is a flowchart of substep S140 of the method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application.
FIG. 5 is a block diagram of a system for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application.
Fig. 6 is an application scenario diagram of a method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present application without making any inventive effort, are also within the scope of the present application.
As used in this application and in the claims, the terms "a," "an," and/or "the" do not specifically denote the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in this application to describe the operations performed by systems according to embodiments of the present application. It should be understood that the operations are not necessarily performed precisely in the order shown. Rather, the various steps may be processed in reverse order or simultaneously, as needed, and other operations may be added to or removed from these processes.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Aiming at the above technical problem, the technical concept of the present application is to combine deep-learning-based artificial intelligence with image processing technology to automatically calculate appropriate digital face deformation parameters from the undeformed and deformed face images captured before and after the user's operation, and to apply those parameters to the digital human face.
Based on this, fig. 1 is a flowchart of a method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application, and fig. 2 is a schematic architecture diagram of the method. As shown in fig. 1 and fig. 2, the method includes the steps of: S110, acquiring an undeformed face image and a deformed face image; S120, extracting image features from the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix; S130, characterizing the contrast features between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a spatially enhanced face deformation feature differential characterization matrix; and S140, performing facial deformation on the digital person based on the spatially enhanced face deformation feature differential characterization matrix.
Specifically, in the technical scheme of the application, firstly, an undeformed face image and a deformed face image are acquired. And then, extracting image features of the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix. That is, implicit facial features contained in the deformed face image and the undeformed face image are captured.
In a specific example of the present application, the encoding process for extracting image features from the deformed face image and the undeformed face image to obtain the deformed face image feature matrix and the undeformed face image feature matrix includes: passing the deformed face image and the undeformed face image respectively through a face feature extractor based on a convolutional neural network model to obtain the deformed face image feature matrix and the undeformed face image feature matrix.
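The publication does not specify the extractor's architecture beyond "a convolutional neural network model". As a minimal sketch of what such a face feature extractor might look like, assuming PyTorch, small illustrative layer sizes, and a final 1x1 convolution that collapses the channels so that each image yields a single 2-D feature matrix:

```python
import torch
import torch.nn as nn

class FaceFeatureExtractor(nn.Module):
    """Minimal CNN mapping a face image to a 2-D feature matrix.

    All layer sizes are illustrative assumptions; the patent only states
    that a convolutional-neural-network-based extractor is used.
    """

    def __init__(self, in_channels: int = 3) -> None:
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # halve spatial resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # halve again
        )
        # 1x1 convolution collapses channels: one H/4 x W/4 matrix per image.
        self.to_matrix = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) -> feature matrix: (B, H/4, W/4)
        return self.to_matrix(self.backbone(image)).squeeze(1)


extractor = FaceFeatureExtractor()
undeformed = torch.randn(1, 3, 128, 128)   # stand-in for the undeformed face image
deformed = torch.randn(1, 3, 128, 128)     # stand-in for the deformed face image
m_undeformed = extractor(undeformed)       # undeformed face image feature matrix
m_deformed = extractor(deformed)           # deformed face image feature matrix
```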
Then, the contrast features between the deformed face image feature matrix and the undeformed face image feature matrix are characterized to obtain the spatially enhanced face deformation feature differential characterization matrix. In a specific example of the present application, this encoding process includes: first, calculating the position-wise difference between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a face deformation feature differential characterization matrix; and then passing the face deformation feature differential characterization matrix through a spatial attention module to obtain the spatially enhanced face deformation feature differential characterization matrix.
That is, the difference between the deformed face image feature matrix and the undeformed face image feature matrix is measured in the high-dimensional feature space by position-wise differencing. However, in the resulting face deformation feature differential characterization matrix, the feature value at each position contributes to the target result to a different degree; therefore, in the technical scheme of the present application, the spatial attention module is used to strengthen the important features.
Accordingly, in step S120, extracting image features from the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix includes: passing the deformed face image and the undeformed face image respectively through a face feature extractor based on a convolutional neural network model to obtain the deformed face image feature matrix and the undeformed face image feature matrix. It is worth mentioning that a convolutional neural network (Convolutional Neural Network, CNN) is a deep learning model dedicated to processing data with a grid structure. A convolutional neural network model consists of several convolutional layers, pooling layers, and fully connected layers, and can automatically extract high-level features of an image by learning local features and context information. The convolutional layers capture spatial features through convolution operations, the pooling layers reduce the size of the feature maps and the number of parameters, and the fully connected layers map the extracted features to specific categories or attributes. In face image processing, a convolutional neural network model can serve as a face feature extractor: by feeding the deformed face image and the undeformed face image into the model, their corresponding feature matrices are obtained. These feature matrices represent high-level information in the face images, such as facial contours, eyes, and nose, and can be used for subsequent tasks such as face recognition, expression analysis, and face transformation.
Accordingly, in step S130, as shown in fig. 3, characterizing the contrast features between the deformed face image feature matrix and the undeformed face image feature matrix to obtain the spatially enhanced face deformation feature differential characterization matrix includes: S131, calculating the position-wise difference between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a face deformation feature differential characterization matrix; and S132, passing the face deformation feature differential characterization matrix through a spatial attention module to obtain the spatially enhanced face deformation feature differential characterization matrix. It should be understood that these two steps together produce the spatially enhanced face deformation feature differential characterization matrix. The purpose of step S131 is to obtain the difference at each position by comparing the two feature matrices; computing this difference captures how the deformed face has changed relative to the undeformed face, including changes in facial contour, expression, and the like. The face deformation feature differential characterization matrix thus provides a way to describe face deformation and helps in understanding how the face image has changed. In step S132, the face deformation feature differential characterization matrix is processed by the spatial attention module to strengthen its important features. The spatial attention module automatically learns attention weights for different positions, focusing more attention on important regions; applying it strengthens the features related to the face deformation in the matrix while weakening features unrelated to the deformation. This improves the performance of subsequent tasks (such as face recognition and expression analysis) by concentrating on the key characteristics of the face deformation. In other words, step S131 computes the face deformation feature differential characterization matrix, capturing the difference between the deformed and undeformed faces, while step S132 strengthens its key features through the spatial attention module to obtain the spatially enhanced face deformation feature differential characterization matrix. The combination of the two steps provides a more accurate and useful characterization of the face deformation and a better input for subsequent face-related tasks.
It is worth mentioning that the spatial attention module (Spatial Attention Module) is a module for enhancing image features by automatically learning attention weights at different positions in an image so that more attention is focused on important regions. In computer vision tasks, spatial attention modules are commonly embedded in convolutional neural network architectures to strengthen the useful information in feature maps, helping the network focus on regions of interest and attenuate responses to unrelated regions. A common implementation uses an attention mechanism (Attention Mechanism), which automatically learns a weight for each position of the input feature map, the weight representing that position's importance to the task. Common attention mechanisms include soft attention (Soft Attention) and hard attention (Hard Attention). Soft attention performs a spatially weighted sum of the feature map, where the weight of each position is learned; in this way, the network can adjust the importance of features according to position. Hard attention instead selects the positions with the highest weights in the feature map; this approach is more direct, but it involves a discrete selection process and is therefore harder to optimize. In general, a spatial attention module enhances image features by learning position-wise attention weights, which helps extract key features and improves task performance and accuracy.
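As a concrete, hedged illustration of steps S131 and S132, the sketch below takes the position-wise difference of the two feature matrices and re-weights it with a learned soft attention map. The conv-plus-sigmoid attention design and the single-channel treatment are assumptions; the patent does not disclose the module's internals.

```python
import torch
import torch.nn as nn

class SpatialAttentionModule(nn.Module):
    """Soft spatial attention: learn a per-position weight map in (0, 1)
    and apply it to the input. A common, assumed implementation; the
    patent does not disclose the module's internals."""

    def __init__(self, kernel_size: int = 7) -> None:
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        x = feat.unsqueeze(1)                   # (B, H, W) -> (B, 1, H, W)
        weights = torch.sigmoid(self.conv(x))   # per-position attention weights
        return (x * weights).squeeze(1)         # spatially enhanced matrix


# Placeholder feature matrices; in the full pipeline these would come from
# the CNN extractor sketched earlier.
m_deformed = torch.randn(1, 32, 32)
m_undeformed = torch.randn(1, 32, 32)

# Step S131: position-wise difference -> face deformation feature differential matrix.
diff_matrix = m_deformed - m_undeformed
# Step S132: strengthen important positions with spatial attention.
enhanced_diff = SpatialAttentionModule()(diff_matrix)
```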
Further, the spatially enhanced face deformation feature differential characterization matrix is passed through a decoder to obtain digital face deformation parameters, and facial deformation is performed on the digital person based on those parameters.
Accordingly, in step S140, as shown in fig. 4, performing facial deformation on the digital person based on the spatially enhanced face deformation feature differential characterization matrix includes: S141, expanding the spatially enhanced face deformation feature differential characterization matrix to obtain a spatially enhanced face deformation feature differential characterization vector; S142, performing feature distribution optimization on the spatially enhanced face deformation feature differential characterization vector to obtain an optimized spatially enhanced face deformation feature differential characterization vector; S143, passing the optimized spatially enhanced face deformation feature differential characterization vector through a decoder to obtain digital face deformation parameters; and S144, performing facial deformation on the digital person based on the digital face deformation parameters. It should be understood that this process comprises four steps, S141 through S144. The purpose of step S141 is to reduce the spatially enhanced face deformation feature differential characterization matrix to a vector. The purpose of step S142 is to optimize the feature distribution of that vector so that the characterization becomes more accurate and useful; the optimization may employ an algorithm or adjustment method that makes the key features in the vector more prominent while weakening features unrelated to the deformation, yielding a more effective characterization vector and a better input for the subsequent digital face deformation. In step S143, the optimized vector is processed by a decoder to obtain the parameters of the digital face deformation: the decoder maps the optimized characterization vector back to the deformation parameter space, producing parameters that describe the digital face deformation, such as the degree of deformation and the shape changes of different parts. In step S144, the obtained digital face deformation parameters are used to perform the actual facial deformation on the digital human: applying the parameters adjusts the digital human's facial features, such as changing facial contours and adjusting expressions, to realize personalized deformation of the digital face. Namely, step S141 expands the spatially enhanced face deformation feature differential characterization matrix; step S142 performs feature distribution optimization on the resulting vector to obtain an optimized characterization vector; step S143 maps the optimized characterization vector into digital face deformation parameters through the decoder; and step S144 performs the actual facial deformation on the digital person based on those parameters.
The combination of these steps, as sketched below, realizes personalized deformation of the digital face, giving it specific features and expressions.
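A minimal sketch of how steps S141 through S144 might chain together follows. The step S142 optimization is left as an identity placeholder, since the published formula appears only as an image; the MLP decoder head and the parameter count of 52 (a blendshape-style rig) are likewise illustrative assumptions.

```python
import torch
import torch.nn as nn

H, W, NUM_PARAMS = 32, 32, 52   # assumed matrix size and deformation-parameter count

decoder = nn.Sequential(        # assumed MLP decoder head
    nn.Linear(H * W, 256),
    nn.ReLU(inplace=True),
    nn.Linear(256, NUM_PARAMS),
)

def optimize_features(v: torch.Tensor) -> torch.Tensor:
    """Step S142 placeholder: the patent's optimization formula is published
    only as an image, so an identity mapping stands in for it here."""
    return v

enhanced_diff = torch.randn(1, H, W)     # from the spatial attention stage
v = enhanced_diff.flatten(start_dim=1)   # S141: expand the matrix into a vector
v_opt = optimize_features(v)             # S142: feature distribution optimization
params = decoder(v_opt)                  # S143: digital face deformation parameters
# S144: apply `params` to the digital human's face rig (engine-specific, not shown).
```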
In the technical scheme of the present application, the deformed face image feature matrix and the undeformed face image feature matrix express the local image-semantic association features of the deformed and undeformed face images, respectively. When the position-wise difference between the two matrices is calculated to obtain the face deformation feature differential characterization matrix, and the local image-semantic spatial distribution is strengthened by the spatial attention module, the local association features expressed by the two matrices act as foreground object features, so the point-wise differencing and local spatial strengthening also introduce a background feature distribution that diverges from the source image semantics. Together with the multi-dimensional sparsification of the image features over the matrix as a whole, this causes a mismatch in the probability density distribution when the spatially enhanced face deformation feature differential characterization matrix undergoes decoding regression, which affects the accuracy of the decoding result. Therefore, preferably, the spatially enhanced face deformation feature differential characterization matrix is first expanded to obtain the spatially enhanced face deformation feature differential characterization vector, which is then optimized before being decoded by the decoder.
Accordingly, in one example, performing feature distribution optimization on the spatially-enhanced face deformation feature differential token vector to obtain an optimized spatially-enhanced face deformation feature differential token vector, includes:
carrying out feature distribution optimization on the space enhancement face deformation feature difference characterization vector by using the following optimization formula to obtain the optimized space enhancement face deformation feature difference characterization vector;
wherein, the optimization formula is:

[formula rendered as an image in the original publication]

where $V$ is the spatially enhanced face deformation feature differential characterization vector, $v_i$ and $v_j$ are the $i$-th and $j$-th feature values of $V$, $\bar{v}$ is the global feature mean of $V$, and $v_i'$ is the $i$-th feature value of the optimized spatially enhanced face deformation feature differential characterization vector.
Specifically, the optimization addresses the local probability-density mismatch that arises in probability space because the spatially enhanced face deformation feature differential characterization vector is sparsely distributed in the high-dimensional feature space. Regularized global self-consistent class coding is used to imitate the global self-consistency of the coding behavior of the vector's high-dimensional feature manifold in probability space, thereby adjusting the error landscape of the feature manifold in the high-dimensional open-space domain and implementing self-consistent matching coding with explicit probability-space embedding for the high-dimensional feature manifold of the spatially enhanced face deformation feature differential characterization vector. This improves the convergence of the probability density distribution of its regression probabilities and, in turn, the effectiveness of the decoding regression and the accuracy of the decoding result.
Further, it is worth mentioning that the decoder (Decoder) is a component of a neural network model, usually paired with an encoder (Encoder); the two are cooperating modules commonly used in models such as the autoencoder (Autoencoder). In a neural network, the encoder is responsible for converting the input data into a more compact, abstract representation, commonly referred to as a code (or hidden representation). Through a series of layers and operations, the encoder progressively compresses the input into a lower-dimensional representation that captures its key features; the encoder's output can be regarded as the result of feature extraction on the input data. The decoder, conversely, receives the encoder's output (i.e., the code) as input and gradually restores it toward the original data through a series of layers and operations. The decoder's task is to convert the code back into the form of the original data so that its output is as close as possible to the input, thus enabling reconstruction of the data. The decoder typically inverts the encoder's structure; for example, if the encoder uses convolutions for feature extraction and compression, the decoder may use deconvolutions (also known as transposed convolutions) for feature expansion and reconstruction. In step S143, the decoder maps the optimized spatially enhanced face deformation feature differential characterization vector to the digital face deformation parameters: it receives the optimized characterization vector as input and converts it into a parametric representation of the digital face deformation, which is then used in the subsequent deformation operation to realize personalized deformation of the digital face.
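To make the encoder/decoder mirroring described above concrete, the toy autoencoder below pairs strided convolutions with transposed convolutions that undo them. This is an illustrative sketch of the general pattern, not an architecture taken from the patent.

```python
import torch
import torch.nn as nn

class ToyAutoencoder(nn.Module):
    """Illustrative encoder/decoder pair: the decoder mirrors the encoder
    with transposed convolutions, as the text describes. Sizes are assumed."""

    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


model = ToyAutoencoder()
x = torch.randn(1, 1, 32, 32)
assert model(x).shape == x.shape   # the decoder restores the original shape
```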
In summary, the method for whole-body optimization of a character based on a character deformation target body according to the embodiments of the present application has been described. The method can automatically calculate appropriate digital face deformation parameters from the undeformed and deformed face images captured before and after the user's operation and apply the parameters to the digital person.
Fig. 5 is a block diagram of a system 100 for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application. As shown in fig. 5, the system 100 includes: a data acquisition module 110 for acquiring an undeformed face image and a deformed face image; an image feature extraction module 120 for extracting image features from the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix; a feature comparison module 130 for characterizing the contrast features between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a spatially enhanced face deformation feature differential characterization matrix; and a deformation module 140 for performing facial deformation on the digital person based on the spatially enhanced face deformation feature differential characterization matrix.
In one example, in the above system 100 for whole-body optimization of a character based on a character deformation target body, the image feature extraction module 120 is configured to: pass the deformed face image and the undeformed face image respectively through a face feature extractor based on a convolutional neural network model to obtain the deformed face image feature matrix and the undeformed face image feature matrix.
In one example, in the above system 100 for whole-body optimization of a character based on a character deformation target body, the feature comparison module 130 includes: a difference calculation unit for calculating the position-wise difference between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a face deformation feature differential characterization matrix; and a spatial attention coding unit for passing the face deformation feature differential characterization matrix through a spatial attention module to obtain the spatially enhanced face deformation feature differential characterization matrix.
In one example, in the above system 100 for whole-body optimization of a character based on a character deformation target body, the deformation module 140 includes: a matrix expansion unit for expanding the spatially enhanced face deformation feature differential characterization matrix to obtain a spatially enhanced face deformation feature differential characterization vector; a feature distribution optimization unit for performing feature distribution optimization on the spatially enhanced face deformation feature differential characterization vector to obtain an optimized spatially enhanced face deformation feature differential characterization vector; a decoding unit for passing the optimized spatially enhanced face deformation feature differential characterization vector through a decoder to obtain digital face deformation parameters; and a face deformation unit for performing facial deformation on the digital person based on the digital face deformation parameters.
In one example, in the above system 100 for whole-body optimization of a character based on a character deformation target body, the feature distribution optimization unit is configured to: perform feature distribution optimization on the spatially enhanced face deformation feature differential characterization vector using the following optimization formula to obtain the optimized spatially enhanced face deformation feature differential characterization vector;
wherein, the optimization formula is:

[formula rendered as an image in the original publication]

where $V$ is the spatially enhanced face deformation feature differential characterization vector, $v_i$ and $v_j$ are the $i$-th and $j$-th feature values of $V$, $\bar{v}$ is the global feature mean of $V$, and $v_i'$ is the $i$-th feature value of the optimized spatially enhanced face deformation feature differential characterization vector. Here, it will be understood by those skilled in the art that the specific functions and operations of the respective modules in the above-described system 100 for whole-body optimization of a character based on a character deformation target body have been described in detail in the above description of the method with reference to figs. 1 to 4; repetitive description thereof is therefore omitted.
As described above, the system 100 for whole-body optimization of a character based on a character deformation target body according to embodiments of the present application may be implemented in various wireless terminals, for example, a server on which an algorithm for whole-body optimization of a character based on a character deformation target body is deployed. In one example, the system 100 may be integrated into the wireless terminal as a software module and/or a hardware module. For example, the system 100 may be a software module in the operating system of the wireless terminal, or an application developed for the wireless terminal; of course, the system 100 may equally be one of many hardware modules of the wireless terminal.
Alternatively, in another example, the system 100 for whole-body optimization of a character based on a character deformation target body and the wireless terminal may be separate devices, and the system 100 may be connected to the wireless terminal through a wired and/or wireless network and transmit interactive information in an agreed-upon data format.
Fig. 6 is an application scenario diagram of a method for whole-body optimization of a character based on a character deformation target body according to an embodiment of the present application. As shown in fig. 6, in this application scenario, an undeformed face image (e.g., D1 in fig. 6) and a deformed face image (e.g., D2 in fig. 6) are first acquired and then input into a server (e.g., S in fig. 6) on which an algorithm for whole-body optimization of a character based on a character deformation target body is deployed. The server processes the two images with the algorithm to obtain digital face deformation parameters, and facial deformation is then performed on the digital person based on those parameters.
Furthermore, those skilled in the art will appreciate that the various aspects of the invention may be illustrated and described in a number of patentable classes or contexts, including any novel and useful process, machine, product, or composition of matter, or any novel and useful improvement thereof. Accordingly, aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software, which may collectively be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer-readable media containing computer-readable program code.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (2)

1. A method for whole-body optimization of a character based on a character deformation target body, comprising:
acquiring an undeformed face image and a deformed face image;
extracting image features of the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix;
representing the contrast characteristics between the deformed face image characteristic matrix and the undeformed face image characteristic matrix to obtain a space-enhanced face deformed characteristic differential representation matrix;
and performing facial deformation on the digital person based on the spatial enhancement face deformation characteristic differential characterization matrix;
extracting image features of the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix, including:
respectively passing the deformed face image and the undeformed face image through a face feature extractor based on a convolutional neural network model to obtain a deformed face image feature matrix and an undeformed face image feature matrix;
representing the contrast characteristics between the deformed face image characteristic matrix and the undeformed face image characteristic matrix to obtain a space-enhanced face deformed characteristic differential representation matrix, comprising:
calculating the position difference between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a face deformed feature difference characterization matrix;
and the human face deformation characteristic differential characterization matrix is passed through a spatial attention module to obtain the spatial enhancement human face deformation characteristic differential characterization matrix;
based on the space-enhanced face deformation characteristic differential characterization matrix, carrying out face deformation on the digital person, and comprising the following steps:
expanding the space-enhanced face deformation characteristic difference characterization matrix to obtain a space-enhanced face deformation characteristic difference characterization vector;
performing feature distribution optimization on the space enhancement face deformation feature difference characterization vector to obtain an optimized space enhancement face deformation feature difference characterization vector;
the optimized space enhanced human face deformation characteristic difference characterization vector is passed through a decoder to obtain digital human face deformation parameters;
and performing facial deformation on the digital person based on the digital person facial deformation parameters;
performing feature distribution optimization on the spatially enhanced face deformation feature difference characterization vector to obtain an optimized spatially enhanced face deformation feature difference characterization vector, including:
carrying out feature distribution optimization on the space enhancement face deformation feature difference characterization vector by using the following optimization formula to obtain the optimized space enhancement face deformation feature difference characterization vector;
wherein, the optimization formula is:

[formula rendered as an image in the original publication]

where $V$ is the spatially enhanced face deformation feature differential characterization vector, $v_i$ and $v_j$ are the $i$-th and $j$-th feature values of $V$, $\bar{v}$ is the global feature mean of $V$, and $v_i'$ is the $i$-th feature value of the optimized spatially enhanced face deformation feature differential characterization vector.
2. A system for whole-body optimization of a character based on a character deformation target body, comprising:
the data acquisition module is used for acquiring an undeformed face image and a deformed face image;
the image feature extraction module is used for extracting image features of the deformed face image and the undeformed face image to obtain a deformed face image feature matrix and an undeformed face image feature matrix;
the feature comparison module is used for representing the comparison features between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a space-enhanced face deformed feature differential representation matrix;
the deformation module is used for carrying out facial deformation on the digital person based on the space-enhanced facial deformation characteristic difference characterization matrix;
the image feature extraction module is used for:
respectively passing the deformed face image and the undeformed face image through a face feature extractor based on a convolutional neural network model to obtain a deformed face image feature matrix and an undeformed face image feature matrix;
the feature comparison module comprises:
the difference value calculation unit is used for calculating the position-based difference value between the deformed face image feature matrix and the undeformed face image feature matrix to obtain a face deformed feature difference characterization matrix;
the spatial attention coding unit is used for enabling the human face deformation characteristic differential characterization matrix to pass through a spatial attention module to obtain the spatial enhancement human face deformation characteristic differential characterization matrix;
the deformation module comprises:
the matrix unfolding unit is used for unfolding the space-enhanced face deformation characteristic difference characterization matrix to obtain a space-enhanced face deformation characteristic difference characterization vector;
the feature distribution optimizing unit is used for carrying out feature distribution optimization on the space enhancement face deformation feature difference characterization vector so as to obtain an optimized space enhancement face deformation feature difference characterization vector;
the decoding unit is used for enabling the optimized space-enhanced human face deformation characteristic difference characterization vector to pass through a decoder so as to obtain digital human face deformation parameters;
and a face deforming unit configured to deform the face of the digital person based on the digital person face deforming parameter;
the feature distribution optimizing unit is used for:
carrying out feature distribution optimization on the space enhancement face deformation feature difference characterization vector by using the following optimization formula to obtain the optimized space enhancement face deformation feature difference characterization vector;
wherein, the optimization formula is:

[formula rendered as an image in the original publication]

where $V$ is the spatially enhanced face deformation feature differential characterization vector, $v_i$ and $v_j$ are the $i$-th and $j$-th feature values of $V$, $\bar{v}$ is the global feature mean of $V$, and $v_i'$ is the $i$-th feature value of the optimized spatially enhanced face deformation feature differential characterization vector.
CN202311403703.4A 2023-10-27 2023-10-27 Method and system for whole-body optimization of character based on character deformation target body Active CN117132461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311403703.4A CN117132461B (en) 2023-10-27 2023-10-27 Method and system for whole-body optimization of character based on character deformation target body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311403703.4A CN117132461B (en) 2023-10-27 2023-10-27 Method and system for whole-body optimization of character based on character deformation target body

Publications (2)

Publication Number Publication Date
CN117132461A CN117132461A (en) 2023-11-28
CN117132461B true CN117132461B (en) 2023-12-22

Family

ID=88861380

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311403703.4A Active CN117132461B (en) 2023-10-27 2023-10-27 Method and system for whole-body optimization of character based on character deformation target body

Country Status (1)

Country Link
CN (1) CN117132461B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117388893B (en) * 2023-12-11 2024-03-12 深圳市移联通信技术有限责任公司 Multi-device positioning system based on GPS

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4791581A (en) * 1985-07-27 1988-12-13 Sony Corporation Method and apparatus of forming curved surfaces
CN101976453A (en) * 2010-09-26 2011-02-16 浙江大学 GPU-based three-dimensional face expression synthesis method
CN109584353A (en) * 2018-10-22 2019-04-05 北京航空航天大学 A method of three-dimensional face expression model is rebuild based on monocular video
WO2021223134A1 (en) * 2020-05-07 2021-11-11 浙江大学 Micro-renderer-based method for acquiring reflection material of human face from single image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220375258A1 (en) * 2019-10-29 2022-11-24 Guangzhou Huya Technology Co., Ltd Image processing method and apparatus, device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4791581A (en) * 1985-07-27 1988-12-13 Sony Corporation Method and apparatus of forming curved surfaces
CN101976453A (en) * 2010-09-26 2011-02-16 浙江大学 GPU-based three-dimensional face expression synthesis method
CN109584353A (en) * 2018-10-22 2019-04-05 北京航空航天大学 A method of three-dimensional face expression model is rebuild based on monocular video
WO2021223134A1 (en) * 2020-05-07 2021-11-11 浙江大学 Micro-renderer-based method for acquiring reflection material of human face from single image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fully automatic MPEG-4-based face animation for arbitrary topologies; Li Zhiguo, Jiang Dalong, Gao Wen, Wang Wei; Journal of Computer-Aided Design & Computer Graphics (Issue 07); full text *

Also Published As

Publication number Publication date
CN117132461A (en) 2023-11-28

Similar Documents

Publication Publication Date Title
Kim et al. Deep convolutional neural models for picture-quality prediction: Challenges and solutions to data-driven image quality assessment
CN108520503B (en) Face defect image restoration method based on self-encoder and generation countermeasure network
Ye et al. Real-time no-reference image quality assessment based on filter learning
CN111612708B (en) Image restoration method based on countermeasure generation network
CN112541864A (en) Image restoration method based on multi-scale generation type confrontation network model
CN106548159A (en) Reticulate pattern facial image recognition method and device based on full convolutional neural networks
Jiang et al. Dual attention mobdensenet (damdnet) for robust 3d face alignment
CN117132461B (en) Method and system for whole-body optimization of character based on character deformation target body
CN112686817B (en) Image completion method based on uncertainty estimation
CN111126307B (en) Small sample face recognition method combining sparse representation neural network
CN105528620B (en) method and system for combined robust principal component feature learning and visual classification
CN108389189B (en) Three-dimensional image quality evaluation method based on dictionary learning
CN108564061B (en) Image identification method and system based on two-dimensional pivot analysis
Hu et al. Single sample face recognition under varying illumination via QRCP decomposition
CN111210382A (en) Image processing method, image processing device, computer equipment and storage medium
WO2024187901A1 (en) Image high-quality harmonization model training and device
CN113221794A (en) Training data set generation method, device, equipment and storage medium
CN111382601A (en) Illumination face image recognition preprocessing system and method for generating confrontation network model
KR102611121B1 (en) Method and apparatus for generating imaga classification model
CN115063847A (en) Training method and device for facial image acquisition model
CN114662666A (en) Decoupling method and system based on beta-GVAE and related equipment
CN112800882B (en) Mask face pose classification method based on weighted double-flow residual error network
CN113077379A (en) Method, device, equipment and storage medium for extracting characteristic latent codes
JP7479507B2 (en) Image processing method and device, computer device, and computer program
Junhua et al. No-reference image quality assessment based on AdaBoost_BP neural network in wavelet domain

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 701, 7th floor, and 801, 8th floor, Building 1, Courtyard 8, Gouzitou Street, Changping District, Beijing, 102200

Patentee after: Zhongying Nian Nian (Beijing) Technology Co.,Ltd.

Country or region after: China

Address before: No. 6304, Beijing shunhouyu Business Co., Ltd., No. 32, Wangfu street, Beiqijia Town, Changping District, Beijing 102200

Patentee before: China Film annual (Beijing) culture media Co.,Ltd.

Country or region before: China

CP03 Change of name, title or address