CN114596395A - Digital character model adjusting method based on digital twin technology

Digital character model adjusting method based on digital twin technology

Info

Publication number
CN114596395A
Authority
CN
China
Prior art keywords: deformation, parameter, character model, digital, deformation parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210164407.2A
Other languages
Chinese (zh)
Inventor
张岩
刘小叶
彭小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai I2finance Software Co ltd
Original Assignee
Shanghai I2finance Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai I2finance Software Co ltd
Priority to CN202210164407.2A
Publication of CN114596395A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/01 - Customer relationship services

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides a digital character model adjusting method based on a digital twin technology, which comprises the following steps: establishing a mapping relation between a face deformation object of a real person and a fusion deformation object of a digital character model, and setting an initial weight of a first deformation parameter of the fusion deformation object in the mapping relation; after a face image of a target user is obtained, in the case that a second deformation parameter of a target face deformation object in the face image is inconsistent with a standard deformation parameter, adjusting the initial weight of the first deformation parameter of the target fusion deformation object corresponding to the target face deformation object to obtain a weight adjustment value; and inputting the first deformation parameter and the weight adjustment value into the digital character model, and displaying the first deformation parameter corresponding to the weight adjustment value through the target fusion deformation object of the digital character model.

Description

Digital character model adjusting method based on digital twin technology
Technical Field
The application relates to the technical field of digital twins, and in particular to a digital character model adjusting method based on a digital twin technology.
Background
With the continuous development of internet technology, digital twin technology has begun to be widely applied in fields such as product design, product manufacturing, medical analysis and finance. In digital twin technology, sensors collect the relevant real-time states of physical objects in the actual environment; the collected real-time data of the physical objects are uploaded to a cloud-based system, which receives and processes the sensor data, performs analysis according to the real business and the related data, and visually presents the analysis results of the physical objects through virtual simulation technology.
In some scenarios, such as the human-computer conversations of online customer service in the financial industry, digital twin technology is used to virtualize and digitize a customer service person to obtain a digital character model. The dynamic behavior of the digital character model, in particular the amplitude of its facial movements, is a key factor in giving the user a subjectively friendly impression and a good experience. In practical application, however, when a facial movement of the real person is too small or too large, it is either not clearly represented or exaggeratedly represented on the digital character model; for example, when the mouth movement of the real person while speaking has a small amplitude, it is not clearly reflected in the mouth shape of the digital character model. The dynamic visual display of the digital character model is therefore highly limited, and the user experience is poor.
Disclosure of Invention
The embodiment of the application aims to provide a method and an apparatus for adjusting a digital character model based on a digital twin technology, and an electronic device, so as to solve the problem that the dynamic visual display of the digital character model is highly limited.
In a first aspect, an embodiment of the present application provides a method for adjusting a digital character model based on a digital twinning technique, including: establishing a mapping relation between a face deformation object of a real person and a fusion deformation object of a digital character model, and setting an initial weight of a first deformation parameter of the fusion deformation object in the mapping relation; after a face image of a target user is obtained, in the case that a second deformation parameter of a target face deformation object in the face image is inconsistent with a standard deformation parameter, adjusting the initial weight of a first deformation parameter of a target fusion deformation object corresponding to the target face deformation object to obtain a weight adjustment value, wherein the first deformation parameter corresponds to the second deformation parameter; and inputting the first deformation parameter and the weight adjustment value into the digital character model, and displaying the first deformation parameter corresponding to the weight adjustment value through the target fusion deformation object of the digital character model.
In a second aspect, an embodiment of the present application provides an apparatus for adjusting a digital character model based on a digital twinning technique, including: an establishing module, configured to establish a mapping relation between a face deformation object of a real person and a fusion deformation object of a digital character model, and set an initial weight of a first deformation parameter of the fusion deformation object in the mapping relation; an adjusting module, configured to, after a face image of a target user is obtained, adjust the initial weight of a first deformation parameter of a target fusion deformation object corresponding to a target face deformation object to obtain a weight adjustment value in the case that a second deformation parameter of the target face deformation object in the face image is inconsistent with a standard deformation parameter, wherein the first deformation parameter corresponds to the second deformation parameter; and a display module, configured to input the first deformation parameter and the weight adjustment value into the digital character model, and display the first deformation parameter corresponding to the weight adjustment value through the target fusion deformation object of the digital character model.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus; the processor, the communication interface and the memory complete mutual communication through a communication bus; the memory is used for storing a computer program; the processor is configured to execute the program stored in the memory to implement the steps of the method for adjusting the digital character model based on the digital twinning technique according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for adjusting a digital character model based on a digital twinning technique according to the first aspect.
According to the technical scheme provided by the embodiment of the application, a mapping relation between a face deformation object of a real person and a fusion deformation object of a digital character model is established, and an initial weight of a first deformation parameter of the fusion deformation object is set in the mapping relation; after a face image of a target user is obtained, in the case that a second deformation parameter of a target face deformation object in the face image is inconsistent with a standard deformation parameter, the initial weight of the first deformation parameter of the target fusion deformation object corresponding to the target face deformation object is adjusted to obtain a weight adjustment value; and the first deformation parameter and the weight adjustment value are input into the digital character model, and the first deformation parameter corresponding to the weight adjustment value is displayed through the target fusion deformation object of the digital character model. In this way, when the amplitude of the facial movement of the real person is too small or too large, the weight of the first deformation parameter of the target fusion deformation object on the digital character model can be adjusted, so that the target fusion deformation object on the digital character model displays the first deformation parameter more appropriately, and the limitation of the dynamic visual display of the digital character model is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present application, and that those skilled in the art can obtain other drawings from these drawings without any creative effort.
Fig. 1 is a first flowchart of a method for adjusting a digital character model based on a digital twinning technique according to an embodiment of the present application;
fig. 2 is a second flowchart of a digital character model adjustment method based on a digital twinning technique according to an embodiment of the present application;
FIG. 3 is a schematic block diagram illustrating an adjustment apparatus for a digital character model based on digital twinning technology according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a method and an apparatus for adjusting a digital character model based on a digital twinning technology, and an electronic device, which solve the problem that the dynamic visual display of the digital character model is highly limited.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Illustratively, as shown in fig. 1, the embodiment of the present application provides a method for adjusting a digital character model based on a digital twinning technique, and the execution subject of the method may be a terminal device, that is, the method for adjusting a digital character model based on a digital twinning technique provided by the embodiment of the present application may be implemented by hardware or software installed on a terminal device. The method for adjusting the digital character model based on the digital twinning technology specifically comprises the following steps:
In step S101, a mapping relationship between a facial deformation object of a real person and a fusion deformation object of the digital character model is established, and an initial weight of a first deformation parameter of the fusion deformation object is set in the mapping relationship.
Specifically, the digital character model uses digital twin technology: real-time data of an actual real person are collected and uploaded to a cloud-based system, the cloud system receives and processes the data collected by the sensors, and the analysis results are visually presented on the digital character model through virtual simulation technology. In this embodiment, the facial deformation objects of the real person are visually represented on the digital character model through virtual simulation technology. A facial deformation object refers to a facial feature of the real person such as the eyes, mouth, nose or eyebrows; for example, when the real person normally opens the eyes and closes the mouth, each facial deformation object is in its default basic shape, and when the real person laughs, speaks, frowns or closes the eyelids, the basic shape of the corresponding facial deformation object changes. A fusion deformation object corresponding to each facial deformation object is established on the digital character model; for example, if the facial deformation objects comprise the eyes, nose, mouth and eyebrows, then eyes, nose, mouth and eyebrows are established as fusion deformation objects on the digital character model.
The mapping relationship refers to the mapping between the captured deformation parameter data of the facial deformation objects of the real person and the fusion deformation objects of the digital character model. For example, when the facial deformation object of the real person is the mouth and the mouth makes a left-skewed mouth shape, a mouth fusion deformation object with a left-skewed mouth shape is designed for the digital character model, thereby establishing the mapping relationship between the facial deformation object and the fusion deformation object. In this mapping relationship, an initial weight of the fusion deformation object is set. For example, in the expressionless state of the real person, the initial state value of the fusion deformation object of the digital character model is 0, and the initial weight of the fusion deformation object is set to an intermediate value between a weight upper limit value and a weight lower limit value (the two limit values included); for instance, if the weight ranges from 0 to 1, the initial weight may be set to 0.5. The first deformation parameter refers to the degree of deformation by which the basic shape of the fusion deformation object changes: in some cases this degree of deformation corresponds directly to the degree of deformation of the facial deformation object, and in other cases the initial weight of the first deformation parameter needs to be increased or decreased relative to the degree of deformation of the facial deformation object.
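As an illustrative, non-limiting sketch (not part of the original disclosure), the mapping relationship and the initial weights can be represented roughly as follows; all identifiers and the 0-to-1 weight range are assumptions made for the example:

```python
# Minimal sketch of the mapping between facial deformation objects of a real
# person and fusion deformation objects (blend-shape style targets) of the
# digital character model, each carrying an initial weight for its first
# deformation parameter. Names and the 0..1 weight range are assumptions.
from dataclasses import dataclass

WEIGHT_LOWER, WEIGHT_UPPER = 0.0, 1.0                 # weight lower/upper limit values
INITIAL_WEIGHT = (WEIGHT_LOWER + WEIGHT_UPPER) / 2    # intermediate value, here 0.5

@dataclass
class FusionDeformation:
    """One deformation type of a fusion deformation object on the model."""
    blend_shape: str                  # target shape on the digital character model
    weight: float = INITIAL_WEIGHT    # initial weight of the first deformation parameter

# Mapping relationship: (facial deformation object, deformation type) -> fusion deformation
mapping = {
    ("mouth", "left_skew"): FusionDeformation("mouth_left_skew"),
    ("mouth", "smile"):     FusionDeformation("mouth_smile"),
    ("eyes",  "blink"):     FusionDeformation("eyes_blink"),
    ("brows", "frown"):     FusionDeformation("brows_frown"),
}
```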
In step S103, after the face image of the target user is acquired, if the second deformation parameter of the target face deformation object in the face image does not match the standard deformation parameter, the initial weight of the first deformation parameter of the target fusion deformation object corresponding to the target face deformation object is adjusted to obtain a weight adjustment value.
Specifically, a camera may be used to capture face images of a target user, the target user being a user within the image acquisition range of the camera; the face images include, but are not limited to, an image of the entire face of the target user, an image of a partial region of the face of the target user, and the like. The target face deformation object refers to the deformation object whose basic shape changes among the face deformation objects of the target user; for example, if during acquisition the target user changes the shape of the mouth while the eyes, eyebrows, nose and so on keep their original state, then the mouth is the target face deformation object. The second deformation parameter is the degree of deformation of the target face deformation object, and the standard deformation parameter is the standard degree of deformation at which the deformed shape can be clearly displayed in the digital character model; it may be set according to the presentation effect of the digital character model and is not limited herein.
If the second deformation parameter is inconsistent with the standard deformation parameter (i.e. smaller or larger than it), the weight of the first deformation parameter of the target fusion deformation object corresponding to the target face deformation object needs to be adjusted. For example, when the target user makes a left-skewed mouth shape and the degree of deformation of this mouth shape is smaller than the standard degree of deformation, that is, the mouth skew has a small amplitude and is not clearly reflected on the digital character model, the initial weight of the left-skewed mouth shape on the digital character model can be increased. It should be noted that the first deformation parameter and the second deformation parameter correspond to each other, that is, the face deformation object and the fusion deformation object correspond to each other; for example, when the face deformation object is the mouth, the fusion deformation object is also the mouth, the first deformation parameter is the degree of deformation of the mouth in the digital character model, and the second deformation parameter is the degree of deformation of the mouth of the target user.
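A brief, hedged sketch of this comparison and adjustment step is given below; the helper name adjust_weight and the fixed step size of 0.1 are illustrative assumptions, since the embodiment only specifies that the weight is increased or decreased:

```python
def adjust_weight(initial_weight: float,
                  second_param: float,
                  standard_param: float,
                  step: float = 0.1) -> float:
    """Return the weight adjustment value for the first deformation parameter of
    the target fusion deformation object."""
    if second_param < standard_param:      # real facial movement too subtle
        return initial_weight + step       # amplify it on the digital character model
    if second_param > standard_param:      # real facial movement exaggerated
        return initial_weight - step       # damp it on the digital character model
    return initial_weight                  # consistent with the standard: keep the initial weight
```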
In step S105, the first deformation parameter and the weight adjustment value are input to the digital character model, and the first deformation parameter corresponding to the weight adjustment value is displayed by the target fusion deformation object of the digital character model.
Specifically, after the initial weight of the first deformation parameter of the target fusion deformation object has been adjusted, the weight adjustment value and the first deformation parameter are input to the digital character model, and the target fusion deformation object of the digital character model presents the first deformation parameter according to the adjusted weight. For example, when the degree of deformation of the left-skewed mouth shape is smaller than the standard degree of deformation, that is, the mouth skew has a small amplitude and is not clearly represented on the digital character model, the initial weight of the left-skewed mouth shape on the digital character model can be increased, so that the amplitude of the mouth shape displayed on the digital character model is enlarged and the mouth shape is clearly represented on the digital character model.
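As a sketch of how the adjusted weight could drive the display, assuming a placeholder DigitalCharacterModel.set_blend_shape interface for the rendering engine and a simple multiplicative combination (both are assumptions, not details of the disclosure):

```python
class DigitalCharacterModel:
    """Placeholder for the rendering side of the digital character model."""
    def __init__(self) -> None:
        self.blend_shape_values: dict[str, float] = {}

    def set_blend_shape(self, name: str, value: float) -> None:
        # In a real engine this would drive the corresponding blend-shape channel.
        self.blend_shape_values[name] = value

def display_deformation(model: DigitalCharacterModel,
                        blend_shape: str,
                        first_param: float,
                        weight_adjustment: float) -> None:
    """Input the first deformation parameter and the weight adjustment value to
    the model; the target fusion deformation object then shows the deformation
    scaled by the adjusted weight."""
    model.set_blend_shape(blend_shape, first_param * weight_adjustment)

# Usage: a small left-skewed mouth movement displayed with an increased weight.
model = DigitalCharacterModel()
display_deformation(model, "mouth_left_skew", first_param=0.4, weight_adjustment=0.7)
```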
According to the above technical scheme, when the amplitude of the facial movement of the real person is too small or too large, the weight of the first deformation parameter of the target fusion deformation object on the digital character model can be adjusted, so that the target fusion deformation object on the digital character model displays the first deformation parameter more appropriately, and the limitation of the dynamic visual display of the digital character model is reduced.
Illustratively, as shown in fig. 2, the embodiment of the present application provides a method for adjusting a digital character model based on a digital twinning technique, and the execution subject of the method may be a terminal device, that is, the method for adjusting a digital character model based on a digital twinning technique provided by the embodiment of the present application may be implemented by hardware or software installed on a terminal device. The method for adjusting the digital character model based on the digital twinning technology specifically comprises the following steps:
in step S201, deformation parameters of different deformation types corresponding to respective face deformation objects in a face image of a real person are determined, a fusion deformation object corresponding to the face deformation object is set for the digital character model, a mapping relationship between the face deformation object and the fusion deformation object is established according to the deformation types and the deformation parameters, an average value of third deformation parameters of the face deformation object is determined, and an initial weight of a first deformation parameter of the fusion deformation object is set according to the average value.
Specifically, the mapping relationship between the fusion deformation objects and the facial deformation objects may be established by capturing the facial images of a large number of users. Each facial deformation object may correspond to different deformation types; for the mouth, for example, the deformation types may be a left-skewed mouth, a right-skewed mouth, a smile and so on, and each deformation type corresponds to its own deformation parameter. For the facial image of a real person, the depth information parameters (deformation parameters) of all key points (the different deformation types of the facial deformation objects) in the facial image are calculated, and the fusion deformation objects of the digital character model and the deformation parameters of the corresponding deformation types are designed, so that the fusion deformation objects and the deformation parameters of the facial deformation objects correspond one-to-one. For example, in the expressionless state of the real person the initial state values of the deformation parameters of the fusion deformation objects are all 0; the largest-amplitude left-skewed mouth movement of the real person gives one deformation parameter, a left-skewed mouth deformation identical to that of the real person is designed for the digital character model, and its state value is set to 1. When the facial image of the target user is then captured in real time, as soon as the target user makes a left-skewed mouth movement, the mouth of the digital character model and its left-skewed mouth shape are invoked.
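A minimal sketch of this normalization is shown below, assuming that each deformation parameter is derived from a key-point measurement and scaled so that the expressionless state maps to 0 and the largest-amplitude movement maps to 1 (the key-point distance metric and function name are assumptions):

```python
def normalized_deformation(current: float,
                           neutral: float,
                           max_amplitude: float) -> float:
    """Map a raw key-point measurement (e.g. a mouth-corner displacement) to a
    deformation parameter in [0, 1]: 0 in the expressionless state, 1 at the
    largest-amplitude movement captured for this deformation type."""
    span = max_amplitude - neutral
    if span == 0:
        return 0.0
    value = (current - neutral) / span
    return max(0.0, min(1.0, value))   # clip to the defined state-value range

# Usage: a 6 mm mouth-corner displacement, with 0 mm at rest and 10 mm at the
# largest left-skewed mouth movement, gives a deformation parameter of 0.6.
print(normalized_deformation(current=6.0, neutral=0.0, max_amplitude=10.0))
```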
The third deformation parameters of a facial deformation object refer to the deformation parameters of that facial deformation object for a plurality of users. For each facial deformation object, the average value of these deformation parameters is calculated and can be used as the standard deformation parameter. After the standard deformation parameter has been determined, an initial weight is set for each fusion deformation object according to the relative size of the standard deformation parameter, so that the initial weight and the standard deformation parameter are associated: if the standard deformation parameter is larger, a smaller initial weight may be set, for example one closer to the weight lower limit value; if the standard deformation parameter is smaller, a larger initial weight may be set, for example one closer to the weight upper limit value. Alternatively, the initial weight can simply be set to the intermediate value between the weight upper limit value and the weight lower limit value; if the upper limit value is 1 and the lower limit value is 0, the initial weight is set to 0.5.
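The derivation of the standard deformation parameter and the associated initial weight might look as follows; the linear inverse mapping between the standard parameter and the weight is an assumption, since the embodiment only requires the inverse tendency and allows the midpoint value 0.5 as a fallback:

```python
def standard_deformation_parameter(third_params: list[float]) -> float:
    """Standard deformation parameter: the average of the third deformation
    parameters collected for one facial deformation object across many users."""
    return sum(third_params) / len(third_params)

def initial_weight_for(standard_param: float,
                       lower: float = 0.0,
                       upper: float = 1.0) -> float:
    """Set the initial weight according to the relative size of the standard
    deformation parameter: a larger standard parameter gives a weight closer to
    the lower limit, a smaller one gives a weight closer to the upper limit."""
    standard_param = max(0.0, min(1.0, standard_param))   # assume parameters in [0, 1]
    return upper - standard_param * (upper - lower)

samples = [0.55, 0.62, 0.48, 0.70, 0.60]              # third deformation parameters
std = standard_deformation_parameter(samples)          # 0.59
print(std, initial_weight_for(std))                    # weight 0.41, below the midpoint
```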
In step S203, after the face image of the target user is acquired, if the second deformation parameter of the target face deformation object in the face image is smaller than the standard deformation parameter, the initial weight of the first deformation parameter is increased, and if the second deformation parameter of the target face deformation object is larger than the standard deformation parameter, the initial weight of the first deformation parameter is decreased.
Specifically, when the deformation parameter of the target facial deformation object of the target user is larger, that is, the facial expression of the target user is too exaggerated, the initial weight of the first deformation parameter is decreased; when the deformation parameter of the target facial deformation object of the target user is smaller, that is, the facial expression of the target user has a small amplitude, the initial weight of the first deformation parameter is increased. In this way, the facial expression of the target fusion deformation object in the digital character model is optimized and the digital character model has a good display effect. For example, when a left-skewed mouth movement has a small amplitude and its effect on the digital character model is not obvious, increasing the initial weight of the first deformation parameter (for example from 0.5 to a value greater than 0.5) allows the small-amplitude movement of the target user to be clearly displayed on the digital character model, which improves the user experience.
Further, when the initial weight of the first deformation parameter is increased or decreased, it can be kept between the weight upper limit value and the weight lower limit value, which avoids the problem that the adjusted deformation parameter is over-deformed or under-deformed on the digital character model. For example, when the weight upper limit value is 1, the weight lower limit value is 0 and the initial weight is 0.5, the weight is increased to at most 1 if the second deformation parameter of the face deformation object is smaller than the standard deformation parameter, and decreased to at least 0 if the second deformation parameter is larger than the standard deformation parameter.
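Keeping the adjusted weight within the weight upper and lower limit values can be sketched as a simple clamp (the concrete limits 0 and 1 follow the example above):

```python
WEIGHT_LOWER, WEIGHT_UPPER = 0.0, 1.0   # weight lower/upper limit values

def clamp_weight(adjusted_weight: float) -> float:
    """Keep an adjusted weight within the weight limits so that the deformation
    shown on the digital character model is neither under- nor over-deformed."""
    return max(WEIGHT_LOWER, min(WEIGHT_UPPER, adjusted_weight))

# Usage: an increase past the upper limit is capped at 1, and a decrease past
# the lower limit is capped at 0.
print(clamp_weight(0.5 + 0.7))   # 1.0
print(clamp_weight(0.5 - 0.9))   # 0.0
```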
In step S205, the first deformation parameter and the weight adjustment value are input to the digital character model, and the first deformation parameter corresponding to the weight adjustment value is displayed by the target fusion deformation object of the digital character model.
It should be noted that step S205 and step S105 are implemented in the same or a similar manner and can be referred to each other; the description is not repeated here.
According to the above technical scheme, when the amplitude of the facial movement of the real person is too small or too large, the weight of the first deformation parameter of the target fusion deformation object on the digital character model can be adjusted, so that the target fusion deformation object displays the first deformation parameter more appropriately and the limitation of the dynamic visual display of the digital character model is reduced. In addition, by increasing or decreasing the initial weight, the amplitude of the facial movement of the digital character model seen by the user is kept consistent with the amplitude of the facial movement of a real person, which gives the user a friendly impression; even when different users drive the digital character model, the amplitude of its facial movements remains consistent with that of a real person, further improving the user experience.
Based on the same technical concept, the embodiment of the present application further provides an adjusting apparatus of a digital character model based on the digital twin technology corresponding to the above embodiments. Fig. 3 is a schematic block diagram of the adjusting apparatus, which is used to execute the adjusting method described with reference to fig. 1 to 2. As shown in fig. 3, the adjusting apparatus comprises: an establishing module 301, configured to establish a mapping relationship between a facial deformation object of a real person and a fusion deformation object of a digital character model, and set an initial weight of a first deformation parameter of the fusion deformation object in the mapping relationship; an adjusting module 302, configured to, after a face image of a target user is obtained, adjust the initial weight of a first deformation parameter of a target fusion deformation object corresponding to a target face deformation object to obtain a weight adjustment value when a second deformation parameter of the target face deformation object in the face image is inconsistent with a standard deformation parameter, wherein the first deformation parameter corresponds to the second deformation parameter; and a display module 303, configured to input the first deformation parameter and the weight adjustment value to the digital character model, and display the first deformation parameter corresponding to the weight adjustment value through the target fusion deformation object of the digital character model.
According to the technical scheme provided by the embodiment of the application, when the amplitude of the facial movement of the real person is too small or too large, the weight of the first deformation parameter of the target fusion deformation object on the digital character model can be adjusted, so that the target fusion deformation object on the digital character model displays the first deformation parameter more appropriately, and the limitation of the dynamic visual display of the digital character model is reduced.
In a possible implementation manner, the establishing module 301 is further configured to determine deformation parameters of different deformation types corresponding to each face deformation object in the face image of the real person, set a fusion deformation object corresponding to the face deformation object for the digital character model, and establish a mapping relationship between the face deformation object and the fusion deformation object according to the deformation types and the deformation parameters.
In a possible implementation manner, the establishing module 301 is further configured to determine an average value of the third deformation parameters of the face deformation object, and set an initial weight of the first deformation parameter of the fusion deformation object according to the average value.
In a possible implementation manner, the adjusting module 302 is further configured to increase the initial weight of the first deformation parameter if the second deformation parameter of the target facial deformation object in the facial image is smaller than the standard deformation parameter, and decrease the initial weight of the first deformation parameter if the second deformation parameter of the target facial deformation object is larger than the standard deformation parameter.
In one possible implementation, the initial weight is set to a value intermediate between the upper weight limit and the lower weight limit.
The adjustment device for the digital character model based on the digital twin technology provided in the embodiment of the present application can implement each process in the embodiment corresponding to the adjustment method for the digital character model based on the digital twin technology, and is not described here again to avoid repetition.
It should be noted that the adjustment apparatus for a digital character model based on a digital twin technology provided in the embodiment of the present application and the adjustment method for a digital character model based on a digital twin technology provided in the embodiment of the present application are based on the same application concept, and therefore, for specific implementation of the embodiment, reference may be made to implementation of the adjustment method for a digital character model based on a digital twin technology, and repeated details are not repeated.
Based on the same technical concept, the embodiment of the present application further provides an electronic device for executing the method for adjusting a digital character model based on a digital twinning technique, and fig. 4 is a schematic structural diagram of an electronic device for implementing various embodiments of the present application, as shown in fig. 4. Electronic devices may vary widely in configuration or performance and may include one or more processors 401 and memory 402, where the memory 402 may store one or more stored applications or data. Memory 402 may be, among other things, transient storage or persistent storage. The application program stored in memory 402 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the electronic device.
Still further, the processor 401 may be configured to communicate with the memory 402 to execute a series of computer-executable instructions in the memory 402 on the electronic device. The electronic device may also include one or more power supplies 403, one or more wired or wireless network interfaces 404, one or more input-output interfaces 405, one or more keyboards 406.
Specifically, in this embodiment, the electronic device includes a processor, a communication interface, a memory, and a communication bus; the processor, the communication interface and the memory complete mutual communication through a bus; a memory for storing a computer program; and the processor is used for executing the program stored in the memory and realizing the steps in the method embodiment.
The embodiment also provides a computer readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to implement the steps in the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, an electronic device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for adjusting a digital character model based on a digital twinning technique, the method comprising:
establishing a mapping relation between a face deformation object of a real person and a fusion deformation object of a digital character model, and setting an initial weight of a first deformation parameter of the fusion deformation object in the mapping relation;
after a face image of a target user is obtained, under the condition that a second deformation parameter of a target face deformation object in the face image is inconsistent with a standard deformation parameter, adjusting an initial weight of a first deformation parameter of a target fusion deformation object corresponding to the target face deformation object to obtain a weight adjustment value, wherein the first deformation parameter corresponds to the second deformation parameter;
and inputting the first deformation parameter and the weight adjustment value into the digital character model, and displaying the first deformation parameter corresponding to the weight adjustment value through a target fusion deformation object of the digital character model.
2. The method of claim 1, wherein the establishing of the mapping relation between the face deformation object of the real person and the fusion deformation object of the digital character model comprises:
determining deformation parameters of different deformation types corresponding to each face deformation object in a face image of a real person, setting a fusion deformation object corresponding to the face deformation object for the digital character model, and establishing a mapping relation between the face deformation object and the fusion deformation object according to the deformation types and the deformation parameters.
3. The method for adjusting a digital character model based on a digital twinning technique as claimed in claim 1, wherein the setting of the initial weight of the first deformation parameter of the fusion deformation object in the mapping relationship comprises:
and determining an average value of the third deformation parameters of the face deformation object, and setting an initial weight of the first deformation parameter of the fusion deformation object according to the average value.
4. The method of claim 1, wherein in the case where the second deformation parameter of the target facial deformation object in the facial image is inconsistent with the standard deformation parameter, the adjusting the initial weight of the first deformation parameter of the target fusion deformation object corresponding to the target facial deformation object comprises:
and in the case that the second deformation parameter of the target face deformation object in the face image is smaller than the standard deformation parameter, the initial weight of the first deformation parameter is adjusted to be larger, and in the case that the second deformation parameter of the target face deformation object is larger than the standard deformation parameter, the initial weight of the first deformation parameter is adjusted to be smaller.
5. The method of adjusting a digital character model based on a digital twinning technique as claimed in claim 4, wherein the initial weight is set to a value intermediate between an upper weight limit value and a lower weight limit value.
6. An adjustment device of a digital character model based on a digital twinning technique, the adjustment device comprising:
the system comprises an establishing module, a calculating module and a calculating module, wherein the establishing module is used for establishing a mapping relation between a face deformation object of a real person and a fusion deformation object of a digital character model and setting an initial weight of a first deformation parameter of the fusion deformation object in the mapping relation;
the adjusting module is used for adjusting the initial weight of a first deformation parameter of a target fusion deformation object corresponding to a target face deformation object to obtain a weight adjusting value under the condition that a second deformation parameter of the target face deformation object in the face image is inconsistent with a standard deformation parameter after the face image of a target user is obtained, wherein the first deformation parameter corresponds to the second deformation parameter;
and the display module is used for inputting the first deformation parameter and the weight adjustment value into the digital character model, and displaying the first deformation parameter corresponding to the weight adjustment value through a target fusion deformation object of the digital character model.
7. The apparatus for adjusting a digital character model based on a digital twinning technique as claimed in claim 6, wherein the establishing module is further configured to determine deformation parameters of different deformation types corresponding to each of the facial deformation objects in the facial image of the real person, set a fusion deformation object corresponding to the facial deformation object for the digital character model, and establish a mapping relationship between the facial deformation object and the fusion deformation object according to the deformation types and the deformation parameters.
8. The apparatus of claim 6, wherein the adjusting module is further configured to adjust the initial weight of the first deformation parameter to be larger if the second deformation parameter of the target facial deformation object in the facial image is smaller than the standard deformation parameter, and adjust the initial weight of the first deformation parameter to be smaller if the second deformation parameter of the target facial deformation object is larger than the standard deformation parameter.
9. An electronic device comprising a processor, a communication interface, a memory, and a communication bus; the processor, the communication interface and the memory complete mutual communication through a communication bus; the memory is used for storing a computer program; the processor is used for executing the program stored in the memory to realize the steps of the method for adjusting the digital character model based on the digital twinning technology according to any one of claims 1-5.
10. A computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of adjustment of a digital twin technology based digital character model according to any one of claims 1-5.
CN202210164407.2A 2022-02-22 2022-02-22 Digital character model adjusting method based on digital twin technology Pending CN114596395A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210164407.2A CN114596395A (en) 2022-02-22 2022-02-22 Digital character model adjusting method based on digital twin technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210164407.2A CN114596395A (en) 2022-02-22 2022-02-22 Digital character model adjusting method based on digital twin technology

Publications (1)

Publication Number Publication Date
CN114596395A true CN114596395A (en) 2022-06-07

Family

ID=81804395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210164407.2A Pending CN114596395A (en) 2022-02-22 2022-02-22 Digital character model adjusting method based on digital twin technology

Country Status (1)

Country Link
CN (1) CN114596395A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035580A (en) * 2022-06-24 2022-09-09 北京平视科技有限公司 Figure digital twinning construction method and system

Similar Documents

Publication Publication Date Title
EP3989111A1 (en) Video classification method and apparatus, model training method and apparatus, device and storage medium
US20210174072A1 (en) Microexpression-based image recognition method and apparatus, and related device
CN111476871B (en) Method and device for generating video
CN115049799B (en) Method and device for generating 3D model and virtual image
CN109815881A (en) Training method, the Activity recognition method, device and equipment of Activity recognition model
JP7268071B2 (en) Virtual avatar generation method and generation device
US10783716B2 (en) Three dimensional facial expression generation
CN112527115B (en) User image generation method, related device and computer program product
CN108875539A (en) Expression matching process, device and system and storage medium
CN107340964A (en) The animation effect implementation method and device of a kind of view
US20240046538A1 (en) Method for generating face shape adjustment image, model training method, apparatus and device
CN115601484B (en) Virtual character face driving method and device, terminal equipment and readable storage medium
CN111080746A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114596395A (en) Digital character model adjusting method based on digital twin technology
CN114299270A (en) Special effect prop generation and application method, device, equipment and medium
CN110570375B (en) Image processing method, device, electronic device and storage medium
CN112906571B (en) Living body identification method and device and electronic equipment
CN113095134B (en) Facial expression extraction model generation method and device and facial image generation method and device
US20190371039A1 (en) Method and smart terminal for switching expression of smart terminal
CN112634413B (en) Method, apparatus, device and storage medium for generating model and generating 3D animation
CN115731341A (en) Three-dimensional human head reconstruction method, device, equipment and medium
CN113380269A (en) Video image generation method, apparatus, device, medium, and computer program product
CN112714337A (en) Video processing method and device, electronic equipment and storage medium
CN115512014A (en) Method for training expression driving generation model, expression driving method and device
EP4002280A1 (en) Method and apparatus for generating image

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 306, No. 799, Ximen Road, Chengqiao Town, Chongming District, Shanghai 202150
Applicant after: SHANGHAI I2FINANCE SOFTWARE CO.,LTD.
Address before: Room 2076, area C, building 8, No.2, Guanshan Road, Chengqiao Town, Chongming District, Shanghai 202150
Applicant before: SHANGHAI I2FINANCE SOFTWARE CO.,LTD.