CN115564642B - Image conversion method, image conversion device, electronic apparatus, storage medium, and program product - Google Patents


Info

Publication number
CN115564642B
CN115564642B (application number CN202211545638.4A)
Authority
CN
China
Prior art keywords
expression image
target
patch
template
vertex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211545638.4A
Other languages
Chinese (zh)
Other versions
CN115564642A (en)
Inventor
邱炜彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211545638.4A priority Critical patent/CN115564642B/en
Publication of CN115564642A publication Critical patent/CN115564642A/en
Application granted granted Critical
Publication of CN115564642B publication Critical patent/CN115564642B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/10: Selection of transformation methods according to the characteristics of the input images
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts


Abstract

The application provides an image transformation method, an image transformation apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method includes: acquiring a first template expression image and a first target expression image of a template object, and acquiring a second template expression image of a target object, where the first template expression image and the second template expression image each include a plurality of patches; performing deformation processing based on the plurality of patches on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image; performing expression migration processing on the second template expression image of the target object based on the deformation component of the first target expression image to obtain a plurality of migration vertex coordinates of the target object; and performing pixel assignment processing on the plurality of migration vertex coordinates of the target object to obtain a second target expression image of the target object. Through the present application, intelligent and accurate expression transformation can be achieved based on the deformation of patches in expression images.

Description

Image conversion method, image conversion device, electronic apparatus, storage medium, and program product
Technical Field
The present application relates to data processing technologies, and in particular, to an image transformation method, an image transformation apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of technology, expression images are being applied in more and more fields and are of growing importance, for example in facial expression transformation. Facial expression transformation means that, given a template face image and a specific target expression image, the target expression is migrated onto the face of the target subject while the basic facial features and the background are maintained.
In the related art, facial expression transformation is mainly performed by a technical artist (TA) manually creating tens to two hundred expression images one by one in professional software, so that one expression image can be quickly switched to another.
Disclosure of Invention
The embodiment of the application provides an image transformation method, an image transformation device, electronic equipment, a computer readable storage medium and a computer program product, which can realize intelligent and accurate expression transformation based on the deformation of a patch in an expression image.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image transformation method, which comprises the following steps:
acquiring a first template expression image and a first target expression image of a template object, and acquiring a second template expression image of the target object;
the first template expression image and the second template expression image respectively comprise a plurality of patches, and the patches of the first template expression image correspond to the patches of the second template expression image in a one-to-one manner;
deformation processing based on the plurality of patches is carried out on the first template expression image and the first target expression image, so that deformation components of the first target expression image are obtained;
performing expression migration processing on a second template expression image of the target object based on the deformation component of the first target expression image to obtain a plurality of migration vertex coordinates of the target object;
and carrying out pixel assignment processing on the plurality of migration vertex coordinates of the target object to obtain a second target expression image of the target object.
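The claimed steps can be sketched for a single triangular patch as follows. This is a minimal numpy illustration under assumed toy data, not the patented implementation; `tri_frame` and all coordinates are hypothetical.

```python
import numpy as np

def tri_frame(v0, v1, v2):
    """3x3 frame of a triangular patch: two edge vectors plus the unit normal."""
    e1, e2 = v1 - v0, v2 - v0
    n = np.cross(e1, e2)
    return np.column_stack([e1, e2, n / np.linalg.norm(n)])

# Step 1: deformation component of one patch of the template object
# (first template expression -> first target expression)
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
dst = np.array([[0., 0., 0.], [2., 0., 0.], [0., 1., 0.]])   # stretched along x
Q = tri_frame(*dst) @ np.linalg.inv(tri_frame(*src))

# Step 2: migrate the same deformation onto the corresponding patch of the
# target object's second template expression (one-to-one patch correspondence)
subj = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
migrated = subj[0] + (subj - subj[0]) @ Q.T   # migration vertex coordinates
```

With the template patch stretched to twice its width, the target object's corresponding patch is stretched the same way; the final pixel-assignment step of the claim would then rasterize colors at the migrated coordinates.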
An embodiment of the present application provides an image conversion apparatus, including:
the acquisition module is used for acquiring a first template expression image and a first target expression image of the template object and acquiring a second template expression image of the target object;
the first template expression image and the second template expression image respectively comprise a plurality of patches, and the patches of the first template expression image correspond to the patches of the second template expression image in a one-to-one manner;
the deformation module is used for carrying out deformation processing on the first template expression image and the first target expression image based on the plurality of patches to obtain a deformation component of the first target expression image;
the migration module is used for carrying out expression migration processing on a second template expression image of the target object based on the deformation component of the first target expression image to obtain a plurality of migration vertex coordinates of the target object;
and the processing module is used for carrying out pixel assignment processing on the plurality of migration vertex coordinates of the target object to obtain a second target expression image of the target object.
In the above technical solution, the deformation module is further configured to execute the following processing for any patch in the first template expression image:
determining a target patch corresponding to the patch in the first target expression image;
performing affine transformation processing on the patch and the target patch to obtain a deformation component of the patch;
and taking the set of the deformation components of the plurality of patches as the deformation component of the first target expression image.
In the above technical solution, the deformation module is further configured to determine first triangular patch information of the patch, and determine second triangular patch information of the target patch;
and carrying out affine transformation processing on the first triangular patch information and the second triangular patch information to obtain a deformation component of the patch.
In the above technical solution, the deformation module is further configured to determine three vertex coordinates included in the patch;
determining a normal vector and an edge vector of the patch based on the three vertex coordinates;
and combining the normal vector and the edge vector to obtain first triangular patch information of the patch.
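The construction of the triangular patch information and the affine transformation between two such frames can be sketched as follows. This is a hedged numpy sketch; `patch_info` and the example triangles are illustrative assumptions.

```python
import numpy as np

def patch_info(v0, v1, v2):
    """'Triangular patch information': the two edge vectors of the triangle
    plus its unit normal, combined column-wise into a 3x3 matrix. Three
    vertices alone underdetermine a 3D affine map; the normal supplies the
    missing third direction and makes the matrix invertible."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    n = np.cross(e1, e2)
    return np.column_stack([e1, e2, n / np.linalg.norm(n)])

# The patch's deformation component is the affine map taking the first
# (template) frame S onto the second (target) frame T:  Q = T @ inv(S)
S = patch_info([0, 0, 0], [1, 0, 0], [0, 1, 0])
T = patch_info([0, 0, 0], [0, 1, 0], [-1, 0, 0])   # same triangle rotated 90 degrees
Q = T @ np.linalg.inv(S)
```

Here the recovered deformation component Q is exactly the 90-degree rotation that maps the first triangle onto the second.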
In the above technical solution, the migration module is further configured to perform coordinate transformation processing on the deformation component of the first target expression image based on a correspondence between a patch of the first template expression image and a patch of the second template expression image, so as to obtain a plurality of candidate vertex coordinates of the target object;
determining a plurality of migration vertex coordinates for the target object based on the plurality of candidate vertex coordinates.
In the above technical solution, the migration module is further configured to determine a deformation component of the first target expression image as a deformation component of the target object;
and performing coordinate transformation processing on the deformation component of the target object based on the vertex coordinates of the second template expression image to obtain a plurality of candidate vertex coordinates of the target object.
In the above technical solution, the migration module is further configured to determine a candidate deformation component corresponding to the deformation component of the first target expression image based on a correspondence between a patch of the first template expression image and a patch of the second template expression image;
and performing least square processing on the candidate deformation components and the deformation component of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object.
In the above technical solution, the migration module is further configured to split a deformation component of the first target expression image to obtain a triangle vector and a vertex matrix;
splitting the candidate deformation component to obtain the triangle vector and a candidate vertex matrix;
and performing least square processing on the triangular vector, the vertex matrix and the candidate vertex matrix to obtain a plurality of candidate vertex coordinates of the target object.
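The least-squares step can be sketched as follows. This is a hedged numpy illustration; `solve_vertices`, the toy two-patch mesh, and the choice of a single pinned anchor vertex are assumptions, not the patented solver.

```python
import numpy as np

def solve_vertices(src_verts, faces, Qs, anchor=0):
    """Least-squares solve for migrated vertices X so that, for every patch t
    with deformation component Q_t, the new edge vectors match Q_t times the
    source edges; the anchor vertex is pinned to fix the translation."""
    n = len(src_verts)
    rows, rhs = [], []
    for (i, j, k), Q in zip(faces, Qs):
        for a, b in ((i, j), (i, k)):       # two edge constraints per patch
            r = np.zeros(n)
            r[b], r[a] = 1.0, -1.0
            rows.append(r)
            rhs.append(Q @ (src_verts[b] - src_verts[a]))
    r = np.zeros(n)
    r[anchor] = 1.0
    rows.append(r)
    rhs.append(np.asarray(src_verts[anchor], float))   # keep anchor in place
    A, B = np.array(rows), np.array(rhs)
    X, *_ = np.linalg.lstsq(A, B, rcond=None)          # solves x, y, z jointly
    return X

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
faces = [(0, 1, 2), (1, 3, 2)]
Qs = [2.0 * np.eye(3)] * 2                 # uniformly double every patch
X = solve_vertices(verts, faces, Qs)
```

With both patch deformations set to a uniform doubling, the solver returns the uniformly scaled mesh with the anchor vertex held at its original position.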
In the above technical solution, the migration module is further configured to add the multiple candidate vertex coordinates and the set vertex coordinates to obtain multiple migration vertex coordinates of the target object.
In the above technical solution, the migration module is further configured to determine an anchor point in the second template expression image;
determining a target vertex coordinate corresponding to the anchor point in the candidate vertex coordinates;
and performing migration processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
In the above technical solution, the migration module is further configured to traverse vertices included in the first template expression image, and execute the following processing for the traversed vertices:
determining a target vertex corresponding to the vertex in the first target expression image;
when the deformation component corresponding to the patch where the vertex is located is an identity matrix and the coordinate of the vertex is the same as the coordinate of the target vertex, taking the vertex as an anchor point in the first target expression image;
determining a target vertex coordinate corresponding to the anchor point in the candidate vertex coordinates;
and performing migration processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
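The anchor selection and the subsequent migration of the candidate coordinates can be sketched as follows. This is a hedged reading of the criterion (the patent checks the patch where the vertex is located; here every containing patch is checked), and `find_anchors` / `align_to_anchor` are illustrative names.

```python
import numpy as np

def find_anchors(template_verts, target_verts, faces, Qs, tol=1e-8):
    """A vertex is taken as an anchor when the deformation component of every
    patch containing it is the identity matrix and its coordinates are the
    same in the template and target expression images."""
    rigid = {t for t, Q in enumerate(Qs) if np.allclose(Q, np.eye(3), atol=tol)}
    anchors = []
    for v in range(len(template_verts)):
        patches = [t for t, f in enumerate(faces) if v in f]
        if patches and all(t in rigid for t in patches) and \
                np.allclose(template_verts[v], target_verts[v], atol=tol):
            anchors.append(v)
    return anchors

def align_to_anchor(candidates, anchor, target_coord):
    """Translate all candidate vertex coordinates so the anchor lands on its
    target coordinate, yielding the migration vertex coordinates."""
    candidates = np.asarray(candidates, float)
    return candidates + (np.asarray(target_coord) - candidates[anchor])
```

In a two-patch mesh where only one patch deforms, only the vertex that belongs exclusively to the undeformed patch qualifies as an anchor.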
An embodiment of the present application provides an electronic device for image transformation, the electronic device including:
a memory for storing computer executable instructions;
and the processor is used for realizing the image transformation method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image transformation method provided by the embodiment of the application.
The present application provides a computer program product, which includes a computer program or computer executable instructions, and when the computer program or computer executable instructions are executed by a processor, the image transformation method provided by the present application is implemented.
The embodiment of the application has the following beneficial effects:
the method comprises the steps of performing patch-based deformation processing on a first template expression image and a first target expression image of a template object to obtain a deformation component of the first target expression image, and transferring the first target expression image to a second template expression image of the target object based on the first target expression image, so that a natural and smooth second target expression image is obtained, and intelligent and accurate expression transformation is achieved.
Drawings
Fig. 1 is a schematic architecture diagram of an image transformation system provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device for image transformation provided in an embodiment of the present application;
fig. 3A is a first schematic flowchart of an image transformation method provided in an embodiment of the present application;
fig. 3B is a second schematic flowchart of an image transformation method provided in an embodiment of the present application;
fig. 3C is a third schematic flowchart of an image transformation method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of expression migration provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of neutral expressions provided by embodiments of the present application;
FIG. 6 is a schematic diagram of a semi-open mouth expression provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a semi-open mouth simultaneous eye-closing expression provided by an embodiment of the present application;
fig. 8 is a schematic diagram of a triangular patch provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an individual affine transformation of a co-edge triangular patch according to an embodiment of the present application;
fig. 10 is a schematic diagram of a transfer effect of an eye-closing expression provided by an embodiment of the present application;
fig. 11 is a schematic diagram of a transfer effect of a pursed-mouth expression provided by an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order or importance. Where permissible, "first", "second", and the like may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further describing the embodiments of the present application in detail, the terms used in the embodiments of the present application are explained as follows.
1) In response to: indicates the condition or state on which a performed operation depends. When the condition or state is satisfied, the one or more operations performed may be in real time or may have a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Client: an application program running on a terminal to provide various services, such as a video playing client or a game client.
3) Virtual character: a representation of a person or object that can interact in a virtual scene, or a movable object in a virtual scene. The movable object may be a virtual person, a virtual animal, an animated character, or the like, such as a person or animal displayed in a virtual scene. A virtual character may be a virtual object in the virtual scene that represents the user. A virtual scene may include a plurality of virtual characters, each of which has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
4) Three-dimensional mesh model (Mesh): a model that describes a three-dimensional object by a mesh of vertices and patches. The three-dimensional mesh model mentioned in the embodiments of the present application is a three-dimensional face mesh model.
5) Fusion deformation (Blendshape) model: generates three-dimensional models of different forms by linearly interpolating, with adjustable weights, a group of three-dimensional models with slightly different forms (for example, a group of three-dimensional face models corresponding to the same face). For example, a three-dimensional face model of a character can be formed by linear interpolation of dozens or even two hundred individual expression models such as a neutral face, a smile, and an open mouth, and the superposition of two expressions can be produced by adjusting the weights of the smile and the open-mouth models.
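The linear interpolation described above can be sketched in a few lines. The shape names and toy vertex arrays are hypothetical; each shape is a same-topology vertex array, and the blended model is the neutral model plus a weighted sum of per-shape vertex offsets.

```python
import numpy as np

def blend(neutral, shapes, weights):
    """Blendshape: neutral plus weighted per-shape offsets."""
    out = np.asarray(neutral, float).copy()
    for shape, w in zip(shapes, weights):
        out += w * (np.asarray(shape, float) - neutral)
    return out

neutral = np.array([[0., 0., 0.], [1., 0., 0.]])
smile   = np.array([[0., 1., 0.], [1., 0., 0.]])   # toy "smile": offset on vertex 0
mouth   = np.array([[0., 0., 0.], [1., 0., 1.]])   # toy "open mouth": offset on vertex 1
mixed   = blend(neutral, [smile, mouth], [0.5, 1.0])
# mixed superposes half a smile with a fully open mouth
```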
6) Same-topology models: two three-dimensional models whose numbers of vertices and patch compositions are identical, i.e., corresponding patches of the two models reference the same vertices (for example, if the first patch of model 1 includes vertex 1, vertex 2, and vertex 3, the first patch of model 2 also includes vertex 1, vertex 2, and vertex 3). The vertex coordinates of same-topology models may differ. Obviously, the series of three-dimensional models that form a Blendshape are all same-topology with respect to each other.
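The same-topology check reduces to comparing the patch (face) index lists; a minimal sketch, with `same_topology` as an illustrative name:

```python
import numpy as np

def same_topology(faces_a, faces_b):
    """Two meshes share topology when their patch lists reference identical
    vertex indices in the same order; vertex coordinates may differ freely."""
    return np.array_equal(np.asarray(faces_a), np.asarray(faces_b))
```

For example, `[(0, 1, 2)]` and `[(0, 1, 2)]` are same-topology regardless of where the vertices sit, while `[(0, 1, 2)]` and `[(0, 2, 1)]` are not.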
7) Optimization method: a method for solving an optimization problem, i.e., determining what values certain selectable variables should take so that the selected objective function is optimized under given constraints.
8) Anchor point: a point in an image used for locating, constraining, or marking; for example, an anchor point a in an image may be used to constrain the absolute position of a triangular patch.
The embodiments of the present application provide an image transformation method, an image transformation apparatus, an electronic device, and a storage medium, which fuse the target expressions in a plurality of target expression images and realize intelligent expression transformation based on the deformation of patches in the expression images.
The image transformation method provided by the embodiments of the present application can be implemented by a terminal alone, or by a terminal and a server in cooperation. For example, the terminal may perform the image transformation method described below by itself; alternatively, the terminal may send an image transformation request (including the first template expression image of the template object, the first target expression image, and the second template expression image of the target object) to the server, and the server executes the image transformation method according to the received request, migrating the first target expression image onto the second template expression image of the target object to obtain a natural and smooth second target expression image, thereby implementing intelligent expression transformation.
An exemplary application of the electronic device provided by the embodiment of the present application is described below, and the electronic device provided by the embodiment of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device, and an in-vehicle device).
Referring to fig. 1, fig. 1 is a schematic diagram of an architecture of an image transformation system 10 provided in an embodiment of the present application, and a terminal 200 is connected to a server 100 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal 200 may be used to acquire the first template expression image of the template object, the first target expression image, and the second template expression image of the target object, for example, when a user inputs the first template expression image of the template object, the first target expression image, and the second template expression image of the target object through the input interface, and after the input is completed, the terminal automatically acquires the first template expression image of the template object, the first target expression image, and the second template expression image of the target object, and generates an image transformation request (including the first template expression image of the template object, the first target expression image, and the second template expression image of the target object).
In some embodiments, the terminal 200 locally executes the image transformation method provided in this embodiment of the application. According to the input first template expression image, first target expression image, and second template expression image of the target object, it performs deformation processing based on a plurality of patches on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image; performs expression migration processing on the second template expression image of the target object based on the deformation component to obtain a plurality of migration vertex coordinates of the target object; performs pixel assignment processing on the plurality of migration vertex coordinates to obtain a second target expression image of the target object (i.e., the expression image after migration, referred to as the migration image for short); and displays the second target expression image on the display interface of the terminal 200.
As an application example, an expression transformation application is installed on the terminal 200, in which a user inputs a neutral face image of the user a (i.e., a first template expression image of a template object), a smile face image (i.e., a first target expression image), and a neutral face image of a target user (i.e., a second template expression image of a target object), the terminal 200 locally executes the image transformation method provided in the embodiment of the present application, and the smile face image of the user a is migrated to the neutral face image of the target user, so as to obtain a natural and smooth smile face image of the target user, and the smile face image of the target user is displayed on a display interface of the terminal 200, so as to implement intelligent expression transformation.
In some embodiments, the terminal 200 may also send an image transformation request to the server 100 through the network 300 to invoke the artificial-intelligence-based image transformation function provided by the server 100. Through the image transformation method provided in the embodiment of the present application, the server 100 obtains the first template expression image, the first target expression image, and the second template expression image of the target object; performs deformation processing based on a plurality of patches on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image; performs expression migration processing on the second template expression image of the target object based on the deformation component to obtain a plurality of migration vertex coordinates of the target object; and performs pixel assignment processing on the plurality of migration vertex coordinates to obtain a second target expression image of the target object. The server then returns the second target expression image to the terminal 200 for display on its display interface, or outputs it directly.
As an application example, an expression transformation application is installed on the terminal 200, a user inputs a neutral face image of the user a (i.e., a first template expression image of a template object), a smile face image (i.e., a first target expression image) and a neutral face image of a target user (i.e., a second template expression image of the target object) in the expression transformation application, the terminal 200 sends an image transformation request to the server 100, and after receiving the image transformation request, the server executes the image transformation method provided in the embodiment of the present application, migrates the smile face image of the user a onto the neutral face image of the target user, so as to obtain a natural and smooth smile face image of the target user, sends the smile face image of the target user to the expression transformation application, and displays the smile face image of the target user on a display interface of the terminal 200, so as to implement intelligent expression transformation.
In some embodiments, the terminal or the server may implement the image transformation method provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e., a program that needs to be installed in an operating system to run, such as a live-streaming application; an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module, or plug-in.
In some embodiments, the server 100 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, where the cloud service may be an image transformation service for a terminal to call.
In some embodiments, multiple servers may be grouped into a blockchain, and the server 100 is a node on the blockchain, and there may be an information connection between each node in the blockchain, and information transmission may be performed between the nodes through the information connection. Data (for example, logic of image transformation, and a second target expression image) related to the image transformation method provided in the embodiment of the present application may be stored in the block chain.
The structure of the electronic device for image transformation provided in the embodiment of the present application is described below, and referring to fig. 2, fig. 2 is a schematic structural diagram of the electronic device 500 for image transformation provided in the embodiment of the present application. Taking the example that the electronic device 500 is a terminal, the electronic device 500 for image conversion shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The non-volatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other electronic devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
in some embodiments, the image transformation apparatus provided in the embodiments of the present application may be implemented in software, and may take various software forms, including applications, software modules, scripts, or code.
Fig. 2 shows an image transformation device 555 stored in the memory 550, which may be software in the form of programs, plug-ins, and the like, and includes a series of modules: an acquisition module 5551, a deformation module 5552, a migration module 5553, and a processing module 5554. These modules are logical, and thus may be arbitrarily combined or further separated according to the functions to be implemented; the functions of the respective modules will be described below.
As described above, the image transformation method provided by the embodiment of the present application can be implemented by various types of electronic devices. Referring to fig. 3A, fig. 3A is a schematic flowchart of an image transformation method provided in an embodiment of the present application, and is described with reference to the steps shown in fig. 3A.
In the following steps, the first template expression image of the template object (also called template character) refers to an expression image of the template object for the template expression (an image formed by a three-dimensional model corresponding to the template expression), for example, if the template expression is a neutral expression, the first template expression image is a neutral expression image of the template character; the first target expression image of the template object refers to an expression image (an image formed by a three-dimensional model corresponding to the target expression) of the template object for the target expression, for example, if the target expression is a smile expression, the first target expression image is the smile expression image of the template object; the second template expression image of the target object (also called target character) refers to an expression image of the target object for the template expression (an image formed by a three-dimensional model corresponding to the template expression), for example, if the template expression is a neutral expression, the second template expression image is a neutral expression image of the target character; the second target expression image of the target object refers to an expression image of the target object for a target expression (an image formed by a three-dimensional model corresponding to the target expression), for example, if the target expression is a smile expression, the second target expression image is a smile expression image of the target object.
In step 101, a first template expression image and a first target expression image of a template object are acquired, and a second template expression image of the target object is acquired.
The first template expression image and the second template expression image respectively comprise a plurality of patches, and the patches of the first template expression image correspond to the patches of the second template expression image in a one-to-one mode.
For example, the first template expression image includes N patches, and the second template expression image includes N patches, where the patches of the first template expression image correspond to the patches of the second template expression image one by one; for example, the ith patch of the first template expression image corresponds to the ith patch of the second template expression image, where i is a positive integer satisfying 1 ≤ i ≤ N, and N is a positive integer greater than 1. That is, the first template expression image and the second template expression image are models of the same topology.
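The one-to-one patch correspondence described above can be checked mechanically when meshes are stored as face index arrays. A minimal sketch of this idea, assuming a NumPy array representation; the array names and the helper `same_topology` are illustrative, not from the patent:

```python
import numpy as np

def same_topology(faces_a: np.ndarray, faces_b: np.ndarray) -> bool:
    # Two meshes are "models of the same topology" here when their face
    # index arrays are identical: the i-th patch of one is formed by the
    # same vertex indices, in the same order, as the i-th patch of the other.
    return faces_a.shape == faces_b.shape and bool(np.array_equal(faces_a, faces_b))

# Hypothetical toy connectivity: two triangles sharing an edge.
faces_template = np.array([[0, 1, 2], [1, 3, 2]])
faces_target = np.array([[0, 1, 2], [1, 3, 2]])  # same connectivity; vertex positions may differ
```

Only the connectivity is compared; the two meshes are free to have completely different vertex coordinates, which is exactly the situation of the template character and the target character.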
As an example, when a user inputs a first template expression image, a first target expression image and a second template expression image of a target object through an input interface of a terminal, after the input is completed, the terminal automatically acquires the first template expression image, the first target expression image and the second template expression image of the target object, generates an image transformation request (including the first template expression image, the first target expression image and the second template expression image of the template object), sends the image transformation request to a server, and the server parses the image transformation request to acquire the first template expression image, the first target expression image and the second template expression image of the target object.
In step 102, a deformation processing based on a plurality of patches is performed on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image.
And the deformation component is used for representing an affine transformation matrix of the deformation of the first target expression image relative to the first template expression image.
It should be noted that performing the deformation processing directly on the basis of vertices easily makes the migration effect too rigid and uneven. For example, if the size of the target head model (i.e., the head model of the target object) differs from that of the template head model (i.e., the head model of the template object), directly adding and subtracting vertex offsets causes the problem of an unmatched change amplitude. Therefore, the embodiment of the present application performs the deformation processing on the first template expression image and the first target expression image on the basis of patches, instead of directly operating on vertex changes, thereby effectively avoiding the problems of an uneven migration effect and an unmatched change amplitude.
As shown in FIG. 3B, step 102 in FIG. 3A can be implemented by steps 1021-1023: executing the following processing aiming at any patch in the first template expression image: in step 1021, in the first target expression image, determining a target patch corresponding to the patch; in step 1022, affine transformation processing is performed on the basis of the patch and the target patch to obtain a deformation component of the patch; in step 1023, a set of the deformation components of the plurality of patches is used as the deformation component of the first target expression image.
As an example, the first template expression image includes N patches, the second template expression image includes N patches, and the patches of the two images correspond to each other one by one. The following processing is performed for the ith patch in the first template expression image: in the first target expression image, determine the target patch corresponding to the ith patch (namely the ith patch of the first target expression image); perform affine transformation processing on the ith patch of the first template expression image and the target patch to obtain the deformation component of the ith patch; and take the set of deformation components of all the patches as the deformation component of the first target expression image.
In some embodiments, performing affine transformation processing on the patch and the target patch to obtain a deformation component of the patch includes: determining first triangular patch information of a patch, and determining second triangular patch information of a target patch; and performing affine transformation processing on the first triangular patch information and the second triangular patch information to obtain a deformation component of the patch.
It should be noted that, since three non-collinear vertices determine a plane and have stability, the embodiment of the present application is implemented based on triangular patches to ensure the stability of the three-dimensional model. When the three-dimensional model is a head model containing a large number of quadrilateral patches, the embodiment of the present application only needs to triangulate the quadrilateral patches in a preprocessing step to obtain the triangular patches required by the embodiment of the present application.
Continuing with the above example, first triangular patch information of the ith patch (i.e., the ith triangular patch of the first template expression image) is determined, and second triangular patch information of the target patch (i.e., the corresponding triangular patch of the first target expression image) is determined; affine transformation processing is then performed on the first triangular patch information and the second triangular patch information to obtain the deformation component of the patch.
In some embodiments, determining first triangular patch information for a patch comprises: determining the three vertex coordinates included by the patch; determining a normal vector and edge vectors of the patch based on the three vertex coordinates; and combining the normal vector and the edge vectors to obtain the first triangular patch information of the patch. Determining second triangular patch information of the target patch comprises: determining the three vertex coordinates included by the target patch; determining a normal vector and edge vectors of the target patch based on the three vertex coordinates included by the target patch; and combining the normal vector and the edge vectors of the target patch to obtain the second triangular patch information of the target patch.
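The construction of the triangular patch information from three vertex coordinates can be sketched as follows. This is a hedged illustration assuming vertices stored as NumPy arrays; `triangle_frame` is a hypothetical helper name, not from the patent:

```python
import numpy as np

def triangle_frame(p0, p1, p2):
    # Build the 3x3 patch matrix V = [v1 v2 v4]: two edge vectors plus
    # the unit normal obtained by cross-multiplying the edges and
    # normalizing the result, as described in the text.
    v1 = p1 - p0
    v2 = p2 - p0
    n = np.cross(v1, v2)
    v4 = n / np.linalg.norm(n)  # unitized normal vector of the patch
    return np.column_stack([v1, v2, v4])

V = triangle_frame(np.array([0., 0., 0.]),
                   np.array([1., 0., 0.]),
                   np.array([0., 1., 0.]))
```

For this flat triangle in the xy-plane, the first two columns of V are the edge vectors and the third column is the unit normal pointing along +z, consistent with the right-hand rule applied to the vertex order.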
It should be noted that each vertex records coordinate information, and each patch records its vertices in the form of a vertex index (i.e., which vertices, in which order, form the current patch); that is, a triangular patch records vertex identifiers (for uniquely identifying the vertices) and the vertex order. As shown in fig. 8, three vertices form a triangular patch, and the direction of the normal vector of the triangular patch is determined to be upward according to the vertex index order and the right-hand rule, namely the direction shown by the vector v4, where v4 is the unitized normal vector obtained by cross-multiplying the edge vectors v1 and v2 and normalizing the result. The embodiment of the present application adopts the matrix V = [v1 v2 v4] to characterize the triangular patch information (i.e., the information of the triangular patch): the edge vectors v1 and v2 determine the relative positions between the three vertices of the triangular patch, and the unit normal vector v4 determines the orientation of the triangular patch.
The patch deformation is described below in conjunction with the formula.

The deformation of a patch can be regarded as an affine transformation T. For any patch in the first template expression image, assume its original form is V and its deformed form is Ṽ (i.e., the target patch corresponding to the patch in the first target expression image); then Ṽ = TV, where T is a 3 × 3 affine transformation matrix.

For any patch in the first template expression image (i.e., one of the triangular patches of the template character), the patch information is known for that patch in both the first template expression image (i.e., the neutral expression image) and the first target expression image (i.e., one of the other expression images). That is to say, V and Ṽ are both known; and because both matrices are full rank, the affine transformation matrix T of the patch, i.e., the deformation component of the patch, can be obtained by the formula T = ṼV⁻¹.
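The computation of one deformation component T = ṼV⁻¹ can be sketched numerically. A minimal illustration under assumed toy data (a rest triangle and the same triangle stretched by 2 along x); the helper name `triangle_frame` is illustrative:

```python
import numpy as np

def triangle_frame(p0, p1, p2):
    # Patch matrix V = [v1 v2 v4]: two edges plus unitized normal.
    v1, v2 = p1 - p0, p2 - p0
    n = np.cross(v1, v2)
    return np.column_stack([v1, v2, n / np.linalg.norm(n)])

# Hypothetical patch of the first template expression image ...
rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
# ... and the corresponding patch in the first target expression image:
# the same triangle scaled by 2 along the x axis.
deformed = rest * np.array([2., 1., 1.])

V = triangle_frame(*rest)
V_tilde = triangle_frame(*deformed)       # the deformed frame
T = V_tilde @ np.linalg.inv(V)            # deformation component of the patch
```

Applying T back to the rest frame reproduces the deformed frame exactly, which is the defining property Ṽ = TV of the deformation component.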
In step 103, expression migration processing is performed on the second template expression image of the target object based on the deformation component of the first target expression image, so as to obtain a plurality of migration vertex coordinates of the target object.
For example, after obtaining the deformation component of the first target expression image, based on the deformation component of the first target expression image, the expression migration processing is performed on the second template expression image of the target object to obtain a plurality of migration vertex coordinates of the target object, that is, the deformation component is applied to the second template expression image of the target object (for example, a neutral expression image of a target head model), so that the second template expression image generates deformation with the same semantic meaning, and thus some other expression (that is, a target expression) of the template object is migrated to the target object.
As shown in fig. 3C, step 103 in fig. 3A can be implemented by steps 1031-1032: executing the following processing aiming at any patch in the first template expression image: in step 1031, based on the correspondence between the patch of the first template expression image and the patch of the second template expression image, performing coordinate transformation processing on the deformation component of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object; in step 1032, a plurality of migration vertex coordinates of the target object are determined based on the plurality of candidate vertex coordinates.
The candidate vertex coordinates are used for assisting in determining the migration vertex coordinates of the target object, and the migration vertex coordinates are vertex coordinates after the expression is migrated. The first template expression image and the second template expression image are mutually homotopological models, the patch of the first template expression image and the patch of the second template expression image are in one-to-one correspondence, so that the deformation component is applied to the second template expression image of the target object, namely, the deformation component of the first target expression image is subjected to coordinate transformation processing to obtain a plurality of candidate vertex coordinates of the target object, and the candidate vertex coordinates enable the second template expression image to generate deformation with the same semantic as the first target expression image, so that the target expression of the template object is transferred to the target object.
In some embodiments, the coordinate transformation processing on the deformation component of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object includes: determining the deformation component of the first target expression image as the deformation component of the target object; and performing coordinate transformation processing on the deformation component of the target object based on the vertex coordinates of the second template expression image to obtain a plurality of candidate vertex coordinates of the target object.
It should be noted that the patch of the first template expression image and the patch of the second template expression image are in a one-to-one correspondence relationship, so that the affine transformation matrix of the template object (i.e., the deformation component of the first target expression image) can be directly used as the affine transformation matrix of the target object, and the deformation component of the target object is subjected to coordinate transformation processing based on the vertex coordinates of the second template expression image to obtain a plurality of candidate vertex coordinates (i.e., the vertex coordinates after migration) of the target object, thereby saving the calculation amount and realizing the function of rapid expression migration.
In some embodiments, the coordinate transformation processing is performed on the deformation component of the first target expression image based on the correspondence between the patch of the first template expression image and the patch of the second template expression image, so as to obtain a plurality of candidate vertex coordinates of the target object, including: determining candidate deformation components corresponding to the deformation components of the first target expression image based on the corresponding relation between the patch of the first template expression image and the patch of the second template expression image; and performing least square processing on the candidate deformation components and the deformation component of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object.
It should be noted that, since the patches of the first template expression image and the patches of the second template expression image are in a one-to-one correspondence relationship, the deformation components of the first target expression image and the candidate deformation components of the second target expression to be generated are also in a one-to-one correspondence relationship.
In connection with the above example, the deformation component (e.g., the affine transformation matrix T) is applied to the target object; that is, least square processing is performed based on the candidate deformation components and the deformation components of the first target expression image to obtain a plurality of candidate vertex coordinates x of the target object. By the formula

min_x E_g = Σ_{j=1}^{|M|} ‖T_j − T̃_j‖_F²,

the affine transformation matrix T̃_j of each patch of the target object (i.e., the candidate deformation component) is made to tend to be consistent with the affine transformation matrix T_j of the corresponding patch of the template object, where the subscript j denotes the jth patch, x denotes the migrated vertex coordinates of the target object to be solved, E_g represents the variation gradients of the two corresponding patches, ‖·‖_F denotes the Frobenius norm, and |M| represents the number of patches of the target object.
It should be noted that the formula corresponding to the least square processing in the embodiment of the present application is not limited to the above formula min_x E_g = Σ_{j=1}^{|M|} ‖T_j − T̃_j‖_F²; other deformation formulas are also possible.
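The least square step above can be sketched as a linear solve for the migrated vertex coordinates. This is a deliberately simplified sketch, not the patent's exact formulation: it constrains only the two edge vectors of each patch to follow the deformation components T_j (the normal row of the full frame is omitted), and it pins one anchor vertex to remove the global-offset ambiguity discussed later in the text. All names and the toy mesh are assumptions:

```python
import numpy as np

def transfer(target_rest, faces, T_list, anchor=0):
    # For each patch j with vertices (i0, i1, i2), require that the
    # migrated edge x[ib] - x[ia] equal T_j times the rest edge, in the
    # least-squares sense; pin the anchor vertex to its rest position.
    n = len(target_rest)
    rows, rhs = [], []
    for (i0, i1, i2), T in zip(faces, T_list):
        for ia, ib in ((i0, i1), (i0, i2)):
            r = np.zeros(n)
            r[ib], r[ia] = 1.0, -1.0                    # migrated edge x[ib] - x[ia]
            rows.append(r)
            rhs.append(T @ (target_rest[ib] - target_rest[ia]))
    r = np.zeros(n)
    r[anchor] = 1.0                                     # anchor constraint
    rows.append(r)
    rhs.append(target_rest[anchor])
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)           # candidate vertex coordinates
    return x

# Toy second template expression image: two triangles sharing an edge.
faces = [(0, 1, 2), (1, 3, 2)]
rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
# Deformation components taken from the template: uniform doubling of every patch.
T_list = [2.0 * np.eye(3)] * len(faces)
migrated = transfer(rest, faces, T_list)
```

With every T_j a uniform doubling, the solved mesh is the rest mesh scaled by 2 about the anchored vertex, which is the behavior the energy E_g asks for. A production implementation would assemble A as a sparse matrix and include the normal term.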
In some embodiments, performing least square processing based on the candidate deformation components and the deformation component of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object includes: splitting the deformation component of the first target expression image to obtain a triangular vector and a vertex matrix; splitting the candidate deformation component to obtain a new triangular vector and a candidate vertex matrix; and performing least square processing on the triangular vector, the vertex matrix and the candidate vertex matrix to obtain a plurality of candidate vertex coordinates of the target object.
In accordance with the above example, it can be seen from the formula min_x E_g = Σ_{j=1}^{|M|} ‖T_j − T̃_j‖_F² that what is finally required is the vertex coordinates x, but the subject of the constraint term is the affine transformation matrix T̃; therefore, a further derivation is needed so that the vertices can effectively express T̃. Taking a triangular patch of the template object as an example, and starting from the formula Ṽ = TV, the affine transformation matrix T is first transposed; the deformed form Ṽ is then further expanded into the form of a triangular vector and vertices, finally obtaining the relation between the affine transformation matrix T and the deformed vertices to be solved, Tᵀ = V⁻ᵀṼᵀ, where the vertex coordinates p0, p1, p2 of the triangular patch and the normal vector v4 make up Ṽ. Therefore, it is only necessary to add a transposition operation to the main term of the formula to complete the alternative expression of the unknown intermediate quantity Tᵀ through the vertices and the auxiliary vector v4.

Therefore, the deformation component T of the first target expression image is split to obtain a triangular vector V⁻ᵀ and a vertex matrix Ṽᵀ; the candidate deformation component T̃ is split in the same way to obtain a new triangular vector and a candidate vertex matrix; and least square processing is performed on the triangular vector, the vertex matrix, and the candidate vertex matrix by the formula min_x E_g = Σ_{j=1}^{|M|} ‖T_j − T̃_j‖_F² to obtain a plurality of candidate vertex coordinates of the target object.
In some embodiments, determining a plurality of migration vertex coordinates for the target object based on the plurality of candidate vertex coordinates comprises: and adding the candidate vertex coordinates and the set vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
It should be noted that the patch in the embodiment of the present application is characterized by the matrix V = [v1 v2 v4]; that is, the information of the triangular patch includes the vectors of two edges and the unit normal vector of the triangular patch, and the absolute position of the triangular patch in the world coordinate system is not recorded. In other words, superimposing an overall offset on the triangular patch does not affect the triangular patch information V; that is, superimposing an overall offset on the candidate vertex coordinates does not affect the presentation of the second target expression image of the target object. Therefore, the candidate vertex coordinates are summed with the set vertex coordinates to obtain a plurality of migration vertex coordinates of the target object, so that the candidate vertex coordinates are shifted as a whole and the target expression of the target object is presented at the set position.
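The translation invariance claimed above (an overall offset does not change V) is easy to verify numerically. A small sketch under the same assumed `triangle_frame` helper used earlier:

```python
import numpy as np

def triangle_frame(p0, p1, p2):
    # Patch matrix V = [v1 v2 v4]: two edges plus unitized normal.
    v1, v2 = p1 - p0, p2 - p0
    n = np.cross(v1, v2)
    return np.column_stack([v1, v2, n / np.linalg.norm(n)])

tri = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
offset = np.array([5., -3., 7.])                 # arbitrary overall offset
V_before = triangle_frame(*tri)
V_after = triangle_frame(*(tri + offset))        # same patch, shifted as a whole
```

Because V is built only from differences of vertex coordinates (edges and their cross product), adding the same offset to all three vertices leaves it unchanged, which is exactly why an anchor is needed to fix the absolute position.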
In some embodiments, determining a plurality of migration vertex coordinates for the target object based on the plurality of candidate vertex coordinates comprises: determining an anchor point in the second template expression image; determining a target vertex coordinate corresponding to the anchor point in the candidate vertex coordinates; and performing offset processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
It should be noted that if the candidate vertex coordinates x* are the optimal solution that minimizes the formula min_x E_g = Σ_{j=1}^{|M|} ‖T_j − T̃_j‖_F², then x* + X is also an optimal solution, where X is any real number (i.e., any overall offset superimposed on all the coordinates).

In order to eliminate the influence of the arbitrariness of X, the embodiment of the present application fixes the candidate vertex coordinates by setting an anchor point. For example, a certain vertex (e.g., the 1st vertex) in the second template expression image is fixedly used as an anchor point, and an additional anchor point constraint x̃₁ = x₁ is applied, where x₁ denotes the 1st vertex in the first target expression image and x̃₁ denotes the 1st vertex of the second target expression image to be solved.
In summary, the patches in the first target expression image and the patches in the second target expression image to be solved are in a one-to-one correspondence relationship, and therefore so are their vertices; likewise, the patches (and thus the vertices) in the first template expression image and in the first target expression image are in a one-to-one correspondence relationship. It follows that the vertices in the second template expression image and the vertices in the second target expression image to be solved are also in a one-to-one correspondence relationship.
Based on the one-to-one correspondence relationship between the vertices in the second template expression image and the vertices in the second target expression image to be solved, any vertex in the second template expression image is taken as an anchor point, a target vertex coordinate corresponding to the anchor point is determined from the plurality of candidate vertex coordinates, and the plurality of candidate vertex coordinates are offset based on the target vertex coordinate to obtain the plurality of migration vertex coordinates of the target object through the following processing: determining the difference value between the target vertex coordinate and the anchor point; and summing each candidate vertex coordinate with the difference value to obtain the migration vertex coordinates of the target object, so that the candidate vertex coordinates are shifted based on the difference value.
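The offset step described above can be sketched in a few lines. A hedged illustration assuming candidate coordinates in a NumPy array; the helper name `pin_to_anchor` and the sample values are hypothetical:

```python
import numpy as np

def pin_to_anchor(candidates, anchor_index, anchor_position):
    # Compute one overall offset (the difference between the desired anchor
    # position and the candidate coordinate of the anchor vertex) and sum
    # it with every candidate vertex coordinate.
    diff = anchor_position - candidates[anchor_index]
    return candidates + diff

candidates = np.array([[1., 1., 1.], [2., 1., 1.], [1., 3., 1.]])
migrated = pin_to_anchor(candidates, 0, np.array([0., 0., 0.]))
```

The shift is rigid: the anchor vertex lands exactly on the given position while all relative vertex positions, and hence the expression itself, are preserved.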
In some embodiments, determining a plurality of migration vertex coordinates for the target object based on the plurality of candidate vertex coordinates comprises: traversing the vertexes included in the first template expression image, and executing the following processing for the traversed vertexes: determining a target vertex of a corresponding vertex in the first target expression image; when the deformation component corresponding to the patch where the vertex is located is an identity matrix and the coordinate of the vertex is the same as the coordinate of the target vertex, taking the vertex as an anchor point in the first target expression image; determining a target vertex coordinate corresponding to the anchor point in the candidate vertex coordinates; and performing offset processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
It should be noted that, since the patch in the first target expression image and the patch in the second target expression image to be solved are in a one-to-one correspondence relationship, the anchor point in the first target expression image and the anchor point in the second target expression image to be solved are in a one-to-one correspondence relationship.
Based on the one-to-one correspondence relationship between the anchor point in the first target expression image and the anchor point in the second target expression image to be solved, after the anchor point in the first target expression image is determined, the target vertex coordinate corresponding to the anchor point can be determined from the candidate vertex coordinates, and the candidate vertex coordinates are offset based on the target vertex coordinate to obtain the multiple migration vertex coordinates of the target object through the following processing: determining the difference value between the target vertex coordinate and the anchor point; and summing each candidate vertex coordinate with the difference value to obtain the migration vertex coordinates of the target object, so that the candidate vertex coordinates are shifted based on the difference value.
It should be noted that, determining the anchor point in the first target expression image is obtained by the following processing: traversing the vertexes included in the first template expression image, and executing the following processing for the traversed vertexes: determining a target vertex of a corresponding vertex in the first target expression image; and when the deformation component corresponding to the patch where the vertex is located is the identity matrix and the coordinate of the vertex is the same as the coordinate of the target vertex, taking the vertex as an anchor point in the first target expression image.
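The anchor-point search described above (traverse vertices; accept a vertex when the deformation component of the patch containing it is the identity matrix and its coordinates are unchanged) can be sketched as follows. This is an assumed implementation: it checks every patch incident to the vertex, and all names and toy data are hypothetical:

```python
import numpy as np

def find_anchor(template_verts, target_verts, faces, T_list, atol=1e-9):
    # Traverse vertices of the first template expression image; a vertex
    # qualifies as an anchor when every patch containing it has an identity
    # deformation component and its coordinates equal those of the
    # corresponding target vertex.
    for v in range(len(template_verts)):
        incident = [j for j, f in enumerate(faces) if v in f]
        if not incident:
            continue
        if (all(np.allclose(T_list[j], np.eye(3), atol=atol) for j in incident)
                and np.allclose(template_verts[v], target_verts[v], atol=atol)):
            return v
    return None

# Toy data: two triangles; only the second one deforms.
tmpl = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [2., 0., 0.], [2., 1., 0.]])
tgt = tmpl.copy()
tgt[3:] *= 2.0                                  # vertices 3 and 4 move
faces = [(0, 1, 2), (2, 3, 4)]
T_list = [np.eye(3), 2.0 * np.eye(3)]           # identity for the static patch
anchor = find_anchor(tmpl, tgt, faces, T_list)
```

Here vertex 0 belongs only to the static patch and is unmoved, so it is returned as the anchor; vertex 2 is rejected because it also belongs to the deforming patch.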
In step 104, pixel assignment processing is performed on the multiple migration vertex coordinates of the target object, so as to obtain a second target expression image of the target object.
For example, after obtaining a plurality of migration vertex coordinates in step 103, step 104 is executed to perform pixel assignment processing on all the migration vertex coordinates to obtain a gray-scale or color second target expression image.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The method provided by the embodiment of the present application can be applied to various expression transformation scenarios, such as games. The method is integrated as a one-click function, so that non-technical-art users such as ordinary user groups and game player groups can use the expression transformation function through a single simple operation, giving them the ability to play with the facial expressions of three-dimensional characters; a game planning and development team can also build more suitable gameplay on top of it.
In the related art, the Blendshape model of a three-dimensional character is generally produced by a Technical Art (TA) person in professional software (e.g., Maya) by manually making tens or even hundreds of expression images one by one. This method of production has several disadvantages. First, it takes a long time to produce a complete expression Blendshape model for a fine head model, and even for a rough head model the production time is several days. Second, the result depends on the personal level and style of the technical art personnel: when different technical art personnel divide the work of making the Blendshape models of different roles (i.e., virtual characters), the finished Blendshape models easily differ in semantics and style; for example, the mouth-opening amplitude of the two Blendshape models may be different. Manual production thus has many difficulties, yet current game-character production and entertainment scenarios place greater and greater demands on the Blendshapes and animations of virtual characters, creating a contradiction between the demand side and production; automatically migrating Blendshapes from existing templates has therefore become a strong trend.
Automatic migration algorithms exist. The first method directly performs assignment operations on vertices: for example, the vertex-by-vertex offset between the neutral expression image of the template and a mouth expression image of the template is directly added to the neutral expression image of the target head model to achieve expression migration. This method obviously has serious problems, especially scale problems; for example, if the eyes of the target head model are very large, the vertex offsets generated by the closed-eye expression of the template head model are obviously insufficient to make the target head model close its eyes. The second method is a migration method based on Radial Basis Functions (RBF). Its core depends on key points of the three-dimensional model: a deformation function is fitted based on the key points, so that the three-dimensional model deforms as a whole as the key points move within a small range; this method has a poor deformation effect on models with large differences in scale. The third is an example-based method, which requires not only the Blendshape of the template but also a few extreme expressions of the target head model with different combinations of expression components, which are then substituted into the Blendshape linear combination equation for simultaneous iterative solution. Whereas ideally only the neutral face of the target character itself is provided and the algorithm's final result transfers each expression of the template to the target character, this method requires some expression combinations of the target character to be made manually, which further increases the burden of manual production and is difficult to satisfy in actual requirements.
In conclusion, manually making Blendshapes is time-consuming and inefficient, and it is difficult to meet the increasing demand for animated characters. The automatic migration algorithms of the related art either fall short in algorithmic effect or still require a large amount of manual intervention, and cannot truly be deployed as a reliable automatic migration function.
In order to solve the above problems, an embodiment of the present application provides an automatic Blendshape migration method for same-topology character head models based on patch deformation transfer (implemented by an image transformation method). The method focuses on the automatic Blendshape transfer of same-topology three-dimensional face models. As shown in fig. 4, the core of the method is: automatically establishing the patch correspondence between the template head model (i.e., the head model of the template character) and the target head model (the head model of the target character); calculating the deformation component (namely an affine transformation matrix) of each patch between the neutral expression image (the expression without any expression) and some other expression image of the template head model; and applying the deformation components to the corresponding patches of the target head model so that those patches undergo deformation with the same semantics, thereby transferring the other expression of the template head model to the target head model. It is worth noting that performing deformation processing directly on vertices easily makes the migration effect stiff and unsmooth; if the size of the target head model differs from that of the template head model, directly adding and subtracting vertex offsets causes mismatched change amplitudes.
It should be noted that the automatic Blendshape migration method for the same-topology character head model based on patch deformation transfer provided by the embodiment of the present application can be used to rapidly generate Blendshapes for other virtual characters having the same topology as the template head model. Compared with the several days consumed by manually making Blendshapes, the method generates Blendshapes fully automatically with one click, taking only about one minute. On one hand, this greatly improves the production efficiency of art workers and accelerates the development progress of related character games; on the other hand, the simple one-click operation allows the migration function to be popularized among non-technical-art groups such as ordinary users and game players, giving them the ability to play with the facial expressions of three-dimensional characters, from which a game planning and development team can develop more suitable gameplay.
With the extensive use of facial animation in the film, television, and game industries, a scheme for rapidly driving characters to move is urgently needed. There are two broad categories of solutions in the industry: the first is based on skeleton binding and skinning, and the other is making a Blendshape for each character. Either approach requires skilled technical artists to develop and produce virtual characters manually. The automatic Blendshape migration function provided by the method can assist the second major class of character animation schemes: with this method, technical artists can concentrate on making one or two sets of template Blendshapes, and for other virtual characters only the automatic Blendshape migration needs to be executed, which greatly accelerates the subsequent production of character animation.
The method for automatically migrating the same-topology character head model Blendshape based on patch deformation transfer is highly modular and does not depend on any professional software: the deformation transfer of the three-dimensional model is completed directly by the algorithm, and the method can be quickly integrated into any professional art software (such as Maya, 3ds Max, and the like) in the form of a plug-in, thereby providing strong technical support for game making and character making.
The method for automatically migrating the homotopology role head model Blendshape based on patch deformation transfer is specifically described as follows:
the method for automatically migrating the same-topology character head model Blendshape based on patch deformation transfer is integrated into the Superman tool set, and its concrete product form is a button: a technical artist or an ordinary user selects a template head model and a target head model in Maya software and clicks the "Blendshape automatic migration" button to obtain the final migration result. The logic behind the "Blendshape automatic migration" button is to send the data of the template head model and the target head model in a request to a remote server on which the method is deployed, obtain the return result of the remote server (i.e., the migrated expression model), and write the result back into Maya. Deploying on a remote server facilitates iterative optimization of the algorithm without requiring updates to the local plug-in code.
As shown in fig. 5, a group of expressions is selected under the Blendshape presentation form in Maya software, and the group can be simply integrated into one Blendshape by clicking "deform" and "create blend shape". As shown in fig. 5, the left side 501 is the three-dimensional face of the template head model (a female character face), and the right side 502 is the "deformation editor" panel corresponding to the integrated Blendshape. When the user drags a slider and modifies its value, the three-dimensional face on the left changes to the corresponding expression. The control relationship between slider values and expression form is a simple linear weighting of the individual expressions (each expression is linearly weighted by the value of its slider to obtain the changed expression), and Maya software wraps this control relationship in an easy-to-operate interface. The purpose of the method is to generate, for a given character of a user (e.g., with a neutral expression image as shown in fig. 5), a set of expressions corresponding to the expressions of an existing template; this set of expressions can then be applied in professional art software such as Maya.
For the neutral expression shown in fig. 5, the slider values corresponding to all expressions in the figure are 0, so the neutral expression of the template, that is, the expression without any expression, is shown. In the half-open-mouth expression shown in fig. 6, the slider value corresponding to the half-open-mouth (jaw_drop) expression 601 is 0.481. As shown in fig. 7, the slider value corresponding to the half-open-mouth (jaw_drop) expression is 0.481, and the slider value corresponding to the left closed-eye (eye_blink_b_l) expression 701 is 1.
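The linear weighting relationship between slider values and expressions described above can be sketched as follows (an illustrative numpy sketch; the function name and array layout are assumptions, not part of the embodiment):

```python
import numpy as np

def blend_expression(neutral, expressions, weights):
    """Linearly weight expression targets, as the slider values do.

    Each slider value w_i scales the per-vertex offset of expression i
    from the neutral face, and the offsets are summed onto the neutral
    vertices — the simple linear weighting relationship described above.
    """
    result = np.asarray(neutral, dtype=float).copy()
    for expr, w in zip(expressions, weights):
        result += w * (np.asarray(expr, dtype=float) - neutral)
    return result
```

With all weights at 0 the neutral face is returned unchanged, matching the all-zero sliders of fig. 5.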
The method for automatically migrating the homotopology character head model Blendshape based on patch deformation transfer provided by the embodiment of the present application will be described in terms of algorithm logic, and the method includes two parts, namely patch deformation migration and adaptive multi-anchor constraint, and the specific explanations of the two parts are as follows:
1) Dough sheet deformation migration
First, the characterization of a patch in the embodiment of the present application is described. The core of a three-dimensional mesh model is its vertices and patches: each vertex records coordinate information, and each patch records the order of its vertices in the form of vertex indexes (that is, which vertices, in sequence, form the current patch). The method is implemented on triangular patches, because three non-collinear vertices define a plane and are stable. For a head model with a large number of quadrilateral patches, as shown in fig. 5, the embodiment of the present application only needs to triangulate the quadrilateral patches as preprocessing to obtain the triangular patches required by the embodiment of the present application.
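The triangulation preprocessing can be sketched as follows (a minimal illustrative sketch assuming faces are stored as vertex-index tuples; the function name is hypothetical):

```python
def triangulate_quads(faces):
    """Split quadrilateral faces into triangles, preserving vertex winding.

    `faces` is a list of vertex-index tuples; a quad (i0, i1, i2, i3)
    becomes the two triangles (i0, i1, i2) and (i0, i2, i3), while
    existing triangles pass through unchanged, so the patch normals
    implied by the index order are kept.
    """
    tris = []
    for face in faces:
        if len(face) == 3:
            tris.append(tuple(face))
        elif len(face) == 4:
            i0, i1, i2, i3 = face
            tris.append((i0, i1, i2))
            tris.append((i0, i2, i3))
        else:
            raise ValueError(f"unsupported face with {len(face)} vertices")
    return tris
```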
As shown in FIG. 8, the three vertices p0, p1, p2 form a triangular patch, and the direction of its normal vector is determined to be upward by the patch's vertex index order and the right-hand rule, namely the direction indicated by the vector v4. The orientation of the normal vector of the triangular patch is important because it relates to the side (i.e., front or back) defined by the three-dimensional mesh model. Here v4 is the unit normal vector obtained by cross-multiplying the edge vectors v1 and v2 and normalizing the result:

v1 = p1 − p0,  v2 = p2 − p0,  v4 = (v1 × v2) / ||v1 × v2||

The embodiment of the present application uses the 3 × 3 full-rank matrix

V = [v1 v2 v4]

to characterize the information of the triangular patch, where v1 and v2 determine the relative positions of the three vertices of the triangular patch, and v4 determines the orientation of the triangular patch.
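Under these definitions, building the patch matrix V might look like the following numpy sketch (the function name is illustrative):

```python
import numpy as np

def patch_matrix(p0, p1, p2):
    """Build the 3x3 patch matrix V = [v1 v2 v4] for a triangle.

    v1 and v2 are the edge vectors from p0, and v4 is the unit normal
    from their cross product, so V is full rank whenever the three
    vertices are not collinear.
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    v1 = p1 - p0
    v2 = p2 - p0
    n = np.cross(v1, v2)      # right-hand rule fixes the normal's side
    v4 = n / np.linalg.norm(n)
    return np.column_stack([v1, v2, v4])
```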
The patch deformation is explained below:
the deformation of a patch can be regarded as an affine transformation T. For the patch shown in fig. 8, let its original form be V and its deformed form be Ṽ; then

Ṽ = T V

where T is a 3 × 3 affine transformation matrix.
For a triangular patch of the template character, the patch information under the neutral expression and under some other expression is known; that is, both V (under the neutral expression) and Ṽ (under the other expression) are known. Since both matrices are full rank, the affine transformation matrix T of the triangular patch, i.e., the patch deformation information, can be obtained by the following formula (1):

T = Ṽ V^(−1)    (1)
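The computation of formula (1) can be sketched as follows (a hedged numpy sketch; the function name is illustrative and the inputs are the 3 × 3 patch matrices described above):

```python
import numpy as np

def patch_deformation(V_neutral, V_expr):
    """Per-patch affine transform T from formula (1).

    Both patch matrices are full rank, so T = V_expr @ inv(V_neutral)
    recovers the affine transformation that carries the neutral patch
    onto its deformed pose.
    """
    return V_expr @ np.linalg.inv(V_neutral)
```

An undeformed patch yields the identity matrix, which is exactly the property the adaptive anchor selection later relies on.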
The embodiment of the present application applies the affine transformation matrix to the corresponding triangular patch of the target head model, so that, according to formula (2), the affine transformation matrix T̃i of each triangular patch of the target head model tends to be consistent with the affine transformation matrix Ti of the corresponding triangular patch of the template head model:

Eg = min_x̃ Σ_{i=1..|M|} || T̃i − Ti ||_F²    (2)

where the index i denotes the ith triangular patch, x̃ denotes the vertex coordinates of the expression of the target head model to be solved, Eg denotes the change gradient of the two corresponding triangular patches, and |M| denotes the number of triangular patches of the head model; since the template head model and the target head model share the same topology, the correspondence between their patches is one-to-one.
As can be seen from formula (2), the quantity finally required is the vertex coordinates x̃, but the subject of the constraint term is T̃; therefore a further derivation is needed so that the vertices x̃ can effectively express T̃. Taking one triangular patch as an example, starting from formula (1), the affine transformation matrix T is transposed, and V^(−T) is then further expanded together with the deformed patch matrix into the form of edge vectors and vertices, finally giving the relation between the affine transformation matrix T and the deformed vertices to be solved, as shown in formula (3):

T^T = V^(−T) Ṽ^T = V^(−T) [ p̃1 − p̃0, p̃2 − p̃0, ṽ4 ]^T    (3)

Therefore, with only an additional transposition operation, the main term of formula (2) can be expressed entirely by the vertices x̃ together with an auxiliary vector ṽ4 in place of the unknown intermediate quantity T̃, as shown in minimization formula (4):

min_{x̃, ṽ4} Σ_{i=1..|M|} || V_i^(−T) Ṽ_i^T − T_i^T ||_F²    (4)

where V_i is the patch matrix of the ith triangular patch under the neutral expression of the target head model, and Ṽ_i is written in terms of the unknown vertices x̃ and the auxiliary vector ṽ4.
It should be noted that if a plurality of triangular patches sharing common vertices and common edges are deformed separately, the consistency of the changes of the common vertices and common edges cannot be maintained. For example, as shown in fig. 9, if the co-edge triangular patches j and k independently perform different affine transformations, their common edge 901 may no longer coincide, resulting in surface cracks in the three-dimensional model. Therefore, the embodiment of the present application does not directly solve for each T̃i and then use each T̃i to deform the corresponding triangular patch of the neutral expression of the target head model. Instead, the embodiment of the present application uses the vertices x̃ to substitute for the unknown intermediate quantities T̃i and directly optimizes formula (4), obtaining a global optimum rather than the local optimum of a single patch, thereby ensuring the overall accuracy of the expression form of the target head model.
2) Adaptive multi-anchor constraint
In addition, the triangular patch information in the embodiment of the present application is characterized as V = [v1 v2 v4]; that is, the information of a triangular patch comprises the vectors of two of its edges and its unit normal vector, and the absolute position of the triangular patch in the world coordinate system is not recorded. In other words, superimposing an overall offset on the triangular patch does not affect the triangular patch information V.
According to formula (4), if x̃ is the optimal solution obtained by minimizing formula (4), then x̃ + X is also an optimal solution, where X is an arbitrary constant offset. Therefore, when solving formula (4), a certain anchor-point constraint must be applied to eliminate the influence of the arbitrariness of X; that is, an anchor point needs to be set.
The simplest way is to set a single anchor point; for example, a certain vertex (e.g., the 1st vertex) of the target head model is fixed as the anchor point, and an additional anchor-point constraint is applied, as shown in formula (5):

x̃1 = x1    (5)

where x1 represents the known position of the 1st vertex (its position under the neutral expression of the target head model), and x̃1 represents the 1st vertex of some other expression of the target head model to be solved.
This keeps the 1st vertex of the target head model permanently unchanged, eliminating the arbitrariness of X. However, whether the 1st vertex of the target head model should move differs across expressions. If the 1st vertex is a vertex on the lips, it should not move in the closed-eye expression, so using it as an anchor point is reasonable. Under an open-mouth expression, however, the 1st vertex must be displaced; if it is still set as an anchor point and added into the simultaneous optimization equations, the solved open-mouth expression as a whole tends to deviate significantly relative to the neutral expression, with only the anchor point on the lips kept still.
In view of the above problems of having no anchor point or a single anchor point, an embodiment of the present application provides an adaptive multi-anchor-point constraint method. For different expressions, the regions of the head model that do not change differ; that is, the anchor points differ, so the corresponding anchor points can be determined adaptively by the adaptive multi-anchor-point constraint method of the embodiment of the present application. First, the affine transformation information T between the corresponding triangular patches of the neutral expression and of a certain expression in the template head model is calculated; when the form of the triangular patch is unchanged, T is the identity matrix I, namely the 3 × 3 matrix whose three diagonal values are 1 and whose remaining values are 0. Then, the Frobenius norm (F norm) of the difference matrix T − I is calculated; when the form of the triangular patch is unchanged, the F norm is 0. The adaptive multi-anchor-point constraint method marks a triangular patch whose F norm is smaller than 1e-4 as an undeformed patch. The vertices of the entire target head model are then traversed, and a vertex is registered as an anchor point if and only if all the triangular patches containing that vertex (each patch records three vertices) are undeformed patches and the coordinates of the vertex are also unchanged. In this way, the embodiment of the present application adaptively determines a plurality of anchor points, and the adaptive anchor-point constraint equation is shown in formula (6):

x̃j = xj, for all j ∈ A    (6)

where A represents the set of adaptive anchor points.
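The adaptive anchor selection described above can be sketched as follows (an illustrative sketch; the function name and reusing the same tolerance for the per-vertex "unchanged coordinates" check are assumptions):

```python
import numpy as np

def find_adaptive_anchors(tris, T_per_patch, verts_neutral, verts_expr, tol=1e-4):
    """Register vertices whose surrounding patches are all undeformed.

    A patch is "undeformed" when ||T - I||_F < tol, and a vertex becomes
    an anchor if and only if every patch containing it is undeformed and
    the vertex itself did not move between the neutral expression and the
    other expression.
    """
    undeformed = [np.linalg.norm(T - np.eye(3)) < tol for T in T_per_patch]
    n = len(verts_neutral)
    vertex_ok = np.ones(n, dtype=bool)
    touched = np.zeros(n, dtype=bool)
    for patch_idx, tri in enumerate(tris):
        for v in tri:
            touched[v] = True
            if not undeformed[patch_idx]:
                vertex_ok[v] = False   # one deformed patch disqualifies v
    moved = np.linalg.norm(
        np.asarray(verts_expr) - np.asarray(verts_neutral), axis=1) > tol
    return [v for v in range(n) if touched[v] and vertex_ok[v] and not moved[v]]
```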
Combining formulas (4) and (6), the vertex coordinates of the corresponding expression of the target head model can be obtained by a least-squares optimization method, thereby completing the rapid migration of the expression from the template head model to the target head model.
It should be noted that, in the embodiment of the present application, static expressions are migrated one by one. After a series of expressions are grouped into a Blendshape, a slide bar of a 'deformation editor' in Maya can be dragged, and the dynamic state of the character changing from a neutral expression to a certain expression can be checked in real time.
As shown in fig. 10, the closed-eye expression of the template character 1002 is migrated to the target character 1003 by dragging the slider corresponding to the left closed-eye (eye_blink_b_l) expression 1001; as shown in fig. 11, the mouth expression of the template character 1102 is migrated to the target character 1103 by dragging the slider corresponding to the left mouth-movement (mouth_mov_l) expression 1101. As shown in fig. 10 to fig. 11, the embodiment of the present application can transfer a series of expressions of a template character well to different target characters of the same topology, and the expression transition is natural and consistent with the expressions of the template character.
Compared with manual production, the method for automatically migrating the same-topology character head model Blendshape based on patch deformation transfer provided by the embodiment of the present application can rapidly migrate the Blendshape of a template character to a target character in about one minute, greatly improving production efficiency. In terms of practical effect, it avoids the style differences caused by different technical artists making Blendshapes separately: the complete Blendshape of a template character can be accurately transferred to a plurality of target characters, ensuring expression uniformity among the target characters.
In summary, the method for automatically migrating the homotopology role head model Blendshape based on patch deformation transfer provided by the embodiment of the present application has the following beneficial effects:
1. the method can quickly generate target character expressions with natural form from the existing Blendshape of a template character, and the generated target character expressions can be linearly superimposed (i.e., fusion deformation) through a Maya tool to form the Blendshape of the target character for subsequent practical application, thereby greatly reducing labor consumption;
2. compared with a vertex coordinate transmission-based mode, the method is based on patch deformation transmission, and can obtain a smoother, more natural and semantically clear expression transmission result;
3. compared with single anchor point constraint, the self-adaptive multi-anchor point constraint provided by the method can avoid the problem that the whole head die is subjected to wrong deviation after the expression is transmitted.
The image transformation method provided in the embodiment of the present application has been described above with reference to the exemplary application and implementation of the electronic device provided in the embodiment of the present application. The following describes a scheme in which the modules in the image transformation apparatus 555 provided in the embodiment of the present application cooperate to implement image transformation.
The acquiring module 5551 is configured to acquire a first template expression image and a first target expression image of a template object, and acquire a second template expression image of the target object; the first template expression image and the second template expression image respectively comprise a plurality of patches, and the patches of the first template expression image correspond to the patches of the second template expression image in a one-to-one manner; the deformation module 5552 is configured to perform deformation processing based on the multiple patches on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image; the migration module 5553 is configured to perform expression migration processing on the second template expression image of the target object based on the deformation component of the first target expression image, so as to obtain a plurality of migration vertex coordinates of the target object; the processing module 5554 is configured to perform pixel assignment processing on the multiple migration vertex coordinates of the target object, so as to obtain a second target expression image of the target object.
In some embodiments, the deformation module 5552 is further configured to perform the following for any tile in the first template expression image: in the first target expression image, determining a target patch corresponding to the patch; performing affine transformation processing on the surface patch and the target surface patch to obtain a deformation component of the surface patch; and taking the set of the deformation components of the plurality of patches as the deformation component of the first target expression image.
In some embodiments, the deformation module 5552 is further configured to determine first triangular patch information for the patch, and determine second triangular patch information for the target patch; and carrying out affine transformation processing on the first triangular patch information and the second triangular patch information to obtain a deformation component of the patch.
In some embodiments, the deformation module 5552 is further configured to determine three vertex coordinates that the patch comprises; determining a normal vector and an edge vector of the patch based on the three vertex coordinates; and combining the normal vector and the edge vector to obtain first triangular patch information of the patch.
In some embodiments, the migration module 5553 is further configured to perform coordinate transformation processing on the deformation component of the first target expression image based on a correspondence between a patch of the first template expression image and a patch of the second template expression image, so as to obtain a plurality of candidate vertex coordinates of the target object; determining a plurality of migration vertex coordinates for the target object based on the plurality of candidate vertex coordinates.
In some embodiments, the migration module 5553 is further configured to determine a deformation component of the first target expression image as a deformation component of the target object; and performing coordinate transformation processing on the deformation component of the target object based on the vertex coordinates of the second template expression image to obtain a plurality of candidate vertex coordinates of the target object.
In some embodiments, the migration module 5553 is further configured to determine, based on a correspondence relationship between a patch of the first template expression image and a patch of the second template expression image, a candidate deformation component corresponding to the deformation component of the first target expression image; and performing least square processing on the candidate deformation components and the deformation components of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object.
In some embodiments, the migration module 5553 is further configured to split the deformation component of the first target expression image to obtain a triangle vector and a vertex matrix; splitting the candidate deformation component to obtain the new triangle vector and a candidate vertex matrix; and performing least square processing on the triangular vector, the vertex matrix and the candidate vertex matrix to obtain a plurality of candidate vertex coordinates of the target object.
In some embodiments, the migration module 5553 is further configured to sum the candidate vertex coordinates and the set vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
In some embodiments, the migration module 5553 is further configured to determine an anchor point in the second template emoticon; determining a target vertex coordinate corresponding to the anchor point in the plurality of candidate vertex coordinates; and performing migration processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
In some embodiments, the migration module 5553 is further configured to traverse vertices included in the first template expression image, and perform the following processing for the traversed vertices: determining a target vertex corresponding to the vertex in the first target expression image; when the deformation component corresponding to the patch where the vertex is located is an identity matrix and the coordinate of the vertex is the same as the coordinate of the target vertex, taking the vertex as an anchor point in the first target expression image; determining a target vertex coordinate corresponding to the anchor point in the plurality of candidate vertex coordinates; and performing offset processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the image transformation method described in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, cause the processor to perform an image transformation method provided by embodiments of the present application, for example, an image transformation method as illustrated in fig. 3A-3C.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, the computer-executable instructions may be in the form of programs, software modules, scripts or code written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and they may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, computer-executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, computer-executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or distributed across multiple sites and interconnected by a communication network.
It is understood that, in the embodiments of the present application, the data related to the user information and the like need to be approved or approved by the user when the embodiments of the present application are applied to specific products or technologies, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related countries and regions.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (10)

1. A method of image transformation, the method comprising:
acquiring a first template expression image and a first target expression image of a template object, and acquiring a second template expression image of the target object;
the first template expression image and the second template expression image respectively comprise a plurality of patches, and the patches of the first template expression image correspond to the patches of the second template expression image in a one-to-one manner;
deformation processing based on the plurality of patches is carried out on the first template expression image and the first target expression image, so that deformation components of the first target expression image are obtained;
performing coordinate transformation processing on the deformation component of the first target expression image based on the corresponding relation between the patch of the first template expression image and the patch of the second template expression image to obtain a plurality of candidate vertex coordinates of the target object;
traversing the vertexes included in the first template expression image, and executing the following processing aiming at the traversed vertexes:
determining a target vertex corresponding to the vertex in the first target expression image;
when the deformation component corresponding to the patch where the vertex is located is an identity matrix and the coordinate of the vertex is the same as the coordinate of the target vertex, taking the vertex as an anchor point in the first target expression image;
determining a target vertex coordinate corresponding to the anchor point in the candidate vertex coordinates;
performing migration processing on the candidate vertex coordinates based on the target vertex coordinates to obtain a plurality of migration vertex coordinates of the target object;
and carrying out pixel assignment processing on the plurality of migration vertex coordinates of the target object to obtain a second target expression image of the target object.
2. The method according to claim 1, wherein the performing deformation processing based on the plurality of patches on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image comprises:
performing the following processing for each patch in the first template expression image:
determining a target patch corresponding to the patch in the first target expression image;
performing affine transformation processing on the patch and the target patch to obtain a deformation component of the patch;
and taking a set of the deformation components of the plurality of patches as the deformation component of the first target expression image.
3. The method according to claim 2, wherein the performing affine transformation processing on the patch and the target patch to obtain a deformation component of the patch comprises:
determining first triangular patch information of the patch, and determining second triangular patch information of the target patch;
and performing affine transformation processing on the first triangular patch information and the second triangular patch information to obtain the deformation component of the patch.
4. The method according to claim 3, wherein the determining first triangular patch information of the patch comprises:
determining three vertex coordinates comprised in the patch;
determining a normal vector and an edge vector of the patch based on the three vertex coordinates;
and combining the normal vector and the edge vector to obtain the first triangular patch information of the patch.
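Claims 2–4 describe per-patch deformation components built from triangular patch information. A minimal sketch in the style of classical deformation-transfer methods follows, under explicit assumptions that may differ from the claimed construction: the "triangular patch information" is taken as a 3×3 matrix whose columns are edge vectors plus a scaled normal (two edge vectors are used here so the matrix is invertible), and the "affine transformation processing" is taken as the matrix mapping the source patch frame onto the target patch frame.

```python
import numpy as np

def triangle_frame(v1, v2, v3):
    """Build a 3x3 'triangular patch information' matrix from a triangle's
    three vertex coordinates: two edge vectors and a scaled normal as columns."""
    e1, e2 = v2 - v1, v3 - v1
    n = np.cross(e1, e2)
    n = n / np.sqrt(np.linalg.norm(n))  # scaling common in deformation transfer
    return np.column_stack([e1, e2, n])

def deformation_component(src_tri, tgt_tri):
    """Affine transform mapping the source patch frame onto the target patch
    frame; an undeformed patch yields (approximately) the identity matrix."""
    Vs = triangle_frame(*src_tri)
    Vt = triangle_frame(*tgt_tri)
    return Vt @ np.linalg.inv(Vs)
```

Note how this connects to claim 1's anchor test: a patch that is unchanged between the two expressions produces the identity matrix as its deformation component.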
5. The method according to claim 1, wherein the performing coordinate transformation processing on the deformation component of the first target expression image to obtain a plurality of candidate vertex coordinates of the target object comprises:
determining the deformation component of the first target expression image as a deformation component of the target object;
and performing coordinate transformation processing on the deformation component of the target object based on the vertex coordinates of the second template expression image to obtain the plurality of candidate vertex coordinates of the target object.
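One way to read claim 5's coordinate transformation is that each patch's deformation component is applied to the corresponding patch of the second template expression image, producing candidate vertex positions for the target object. The per-patch sketch below is hypothetical (names and the re-attachment convention are assumptions, not the claimed processing):

```python
import numpy as np

def candidate_patch_vertices(D, tpl_tri):
    """Apply a patch's deformation component D to the corresponding patch of
    the second template expression image: the patch's edge vectors are
    transformed by D and re-attached at the first vertex."""
    v1, v2, v3 = tpl_tri
    return np.array([v1, v1 + D @ (v2 - v1), v1 + D @ (v3 - v1)])
```

Applied independently per patch, shared vertices generally receive inconsistent candidates, which is why a global least-squares solve (claims 6–7) is used to reconcile them.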
6. The method according to claim 1, wherein the performing coordinate transformation processing on the deformation component of the first target expression image based on the correspondence between the patches of the first template expression image and the patches of the second template expression image to obtain a plurality of candidate vertex coordinates of the target object comprises:
determining candidate deformation components corresponding to the deformation component of the first target expression image based on the correspondence between the patches of the first template expression image and the patches of the second template expression image;
and performing least squares processing on the candidate deformation components and the deformation component of the first target expression image to obtain the plurality of candidate vertex coordinates of the target object.
7. The method according to claim 6, wherein the performing least squares processing on the candidate deformation components and the deformation component of the first target expression image to obtain the plurality of candidate vertex coordinates of the target object comprises:
splitting the deformation component of the first target expression image to obtain a triangle vector and a vertex matrix;
splitting the candidate deformation components to obtain a new triangle vector and a candidate vertex matrix;
and performing least squares processing on the triangle vector, the vertex matrix, and the candidate vertex matrix to obtain the plurality of candidate vertex coordinates of the target object.
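Claim 7's split makes the problem linear in the unknown vertex coordinates: stacking the per-patch relations yields an overdetermined system A x ≈ b that is solved in the least-squares sense. A minimal sketch, where A, b, and the solver call are illustrative assumptions rather than the claimed construction:

```python
import numpy as np

def solve_vertices(A, b):
    """Least-squares solve for the candidate vertex coordinates x that best
    reproduce the stacked deformation constraints: minimize ||A x - b||^2."""
    x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

When the stacked system is consistent, the least-squares solution reproduces it exactly; otherwise it distributes the per-patch disagreements across the shared vertices.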
8. An image conversion apparatus, wherein the apparatus comprises:
an acquisition module, configured to acquire a first template expression image and a first target expression image of a template object, and acquire a second template expression image of a target object;
wherein the first template expression image and the second template expression image each comprise a plurality of patches, and the patches of the first template expression image correspond one-to-one to the patches of the second template expression image;
a deformation module, configured to perform deformation processing based on the plurality of patches on the first template expression image and the first target expression image to obtain a deformation component of the first target expression image;
a migration module, configured to: perform coordinate transformation processing on the deformation component of the first target expression image based on the correspondence between the patches of the first template expression image and the patches of the second template expression image to obtain a plurality of candidate vertex coordinates of the target object;
traverse the vertices comprised in the first template expression image, and perform the following processing for each traversed vertex:
determine a target vertex corresponding to the vertex in the first target expression image;
when the deformation component corresponding to the patch in which the vertex is located is an identity matrix and the coordinates of the vertex are the same as the coordinates of the target vertex, take the vertex as an anchor point in the first target expression image;
determine, among the plurality of candidate vertex coordinates, a target vertex coordinate corresponding to the anchor point;
and perform migration processing on the plurality of candidate vertex coordinates based on the target vertex coordinate to obtain a plurality of migrated vertex coordinates of the target object;
and a processing module, configured to perform pixel assignment processing on the plurality of migrated vertex coordinates of the target object to obtain a second target expression image of the target object.
9. An electronic device, wherein the electronic device comprises:
a memory, configured to store computer-executable instructions;
and a processor, configured to implement the image conversion method of any one of claims 1 to 7 when executing the computer-executable instructions stored in the memory.
10. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement the image conversion method of any one of claims 1 to 7.
CN202211545638.4A 2022-12-05 2022-12-05 Image conversion method, image conversion device, electronic apparatus, storage medium, and program product Active CN115564642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211545638.4A CN115564642B (en) 2022-12-05 2022-12-05 Image conversion method, image conversion device, electronic apparatus, storage medium, and program product

Publications (2)

Publication Number Publication Date
CN115564642A CN115564642A (en) 2023-01-03
CN115564642B true CN115564642B (en) 2023-03-21

Family

ID=84770151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211545638.4A Active CN115564642B (en) 2022-12-05 2022-12-05 Image conversion method, image conversion device, electronic apparatus, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN115564642B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524165B (en) * 2023-05-29 2024-01-19 北京百度网讯科技有限公司 Migration method, migration device, migration equipment and migration storage medium for three-dimensional expression model
CN117974853B (en) * 2024-03-29 2024-06-11 成都工业学院 Self-adaptive switching generation method, system, terminal and medium for homologous micro-expression image

Citations (1)

Publication number Priority date Publication date Assignee Title
CN113269862A (en) * 2021-05-31 2021-08-17 中国科学院自动化研究所 Scene-adaptive fine three-dimensional face reconstruction method, system and electronic equipment

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN110399825B (en) * 2019-07-22 2020-09-29 广州华多网络科技有限公司 Facial expression migration method and device, storage medium and computer equipment
CN111583372B (en) * 2020-05-09 2021-06-25 腾讯科技(深圳)有限公司 Virtual character facial expression generation method and device, storage medium and electronic equipment
CN113674385B (en) * 2021-08-05 2023-07-18 北京奇艺世纪科技有限公司 Virtual expression generation method and device, electronic equipment and storage medium
CN115330979A (en) * 2022-08-15 2022-11-11 腾讯科技(深圳)有限公司 Expression migration method and device, electronic equipment and storage medium
CN115330980A (en) * 2022-08-16 2022-11-11 网易(杭州)网络有限公司 Expression migration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115564642A (en) 2023-01-03

Similar Documents

Publication Publication Date Title
CN115564642B (en) Image conversion method, image conversion device, electronic apparatus, storage medium, and program product
US10867416B2 (en) Harmonizing composite images using deep learning
US11741668B2 (en) Template based generation of 3D object meshes from 2D images
CN110766776A (en) Method and device for generating expression animation
US11514638B2 (en) 3D asset generation from 2D images
JP2024522287A (en) 3D human body reconstruction method, apparatus, device and storage medium
CN112733044B (en) Recommended image processing method, apparatus, device and computer-readable storage medium
WO2023160051A1 (en) Skinning method and apparatus for virtual object, electronic device, storage medium, and computer program product
CN109816758B (en) Two-dimensional character animation generation method and device based on neural network
US11423617B2 (en) Subdividing a three-dimensional mesh utilizing a neural network
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
KR20210126697A (en) Method and device for driving animated image based on artificial intelligence
CN112085835A (en) Three-dimensional cartoon face generation method and device, electronic equipment and storage medium
CN111179391A (en) Three-dimensional animation production method, system and storage medium
JP2023545189A (en) Image processing methods, devices, and electronic equipment
CN109472104A (en) A kind of 500KV substation VR Construction simulation method and device
CN111142967B (en) Augmented reality display method and device, electronic equipment and storage medium
CN117390322A (en) Virtual space construction method and device, electronic equipment and nonvolatile storage medium
CN111739134B (en) Model processing method and device for virtual character and readable storage medium
CN109816744B (en) Neural network-based two-dimensional special effect picture generation method and device
CN110310352A (en) A kind of role action edit methods and device calculate equipment and storage medium
CN114049287A (en) Face model fusion method, device, equipment and computer readable storage medium
Izumi et al. Mass game simulator: an entertainment application of multiagent control
US20240173620A1 (en) Predicting the Appearance of Deformable Objects in Video Games
CN117557699B (en) Animation data generation method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant