CN115546366A - Method and system for driving digital person based on different people - Google Patents

Method and system for driving digital person based on different people

Info

Publication number
CN115546366A
CN115546366A
Authority
CN
China
Prior art keywords
person
digital
facial
feature
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211470278.6A
Other languages
Chinese (zh)
Other versions
CN115546366B (en)
Inventor
彭振昆
郑航
费元华
郭建君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiling Times Technology Co Ltd
Original Assignee
Beijing Weiling Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiling Times Technology Co Ltd filed Critical Beijing Weiling Times Technology Co Ltd
Priority to CN202211470278.6A priority Critical patent/CN115546366B/en
Publication of CN115546366A publication Critical patent/CN115546366A/en
Application granted granted Critical
Publication of CN115546366B publication Critical patent/CN115546366B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/25 Integrating or interfacing systems involving database management systems
    • G06F 16/252 Integrating or interfacing systems involving database management systems between a Database Management System and a front-end application
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/169 Holistic features and representations, i.e. based on the facial image taken as a whole

Abstract

The application provides a method and a system for driving a digital person based on different people. The method comprises the following steps: collecting facial feature data of the performer (the person behind the digital person); performing zeroing and tare (baseline-subtraction) calibration on the performer's facial feature data to obtain calibrated facial feature data; adding an expression feature coefficient to each item of facial feature data to adjust the motion amplitude of each expression of the digital person; and generating a feature model file for each performer, which is used to drive the digital person. The method and system complete motion capture for different performers and use it to drive the digital person, achieving a better picture effect and improving driving efficiency.

Description

Method and system for driving digital person based on different people
Technical Field
The application relates to the technical field of digital persons, and in particular to a method and a system for driving a digital person based on different people.
Background
With the continuous development of the internet, live-streaming technology has also improved rapidly, and live performances by virtual digital persons have now appeared. In virtual digital person control, the "different people" are the performers who operate the VTuber (virtual anchor) during a live broadcast.
As is well known, a digital person performance currently requires a performer to drive it through motion capture, but because of scheduling constraints the same performer cannot attend every session. Different performers have different facial shapes and builds, so the digital person's parameters, that is, the preset facial parameter values, have to be adjusted before each performance. However, the official tool does not provide a parameter-preset function, so the user has to modify the UE (Unreal Engine) blueprint for each performer; the operation is cumbersome and the capture-driving quality is poor.
Therefore, how to complete motion capture for different performers and use it to drive the digital person, so as to achieve a better picture effect and improve driving efficiency, is a technical problem that currently needs to be solved.
Disclosure of Invention
The purpose of the application is to provide a method and a system for driving a digital person based on different people, so that motion capture can be completed for different performers, the digital person can be driven, a better picture effect can be achieved, and driving efficiency can be improved.
To achieve the above purpose, the present application provides a method for driving a digital person based on different people, the method comprising the following steps: collecting facial feature data of the performer; performing zeroing and tare calibration on the performer's facial feature data to obtain calibrated facial feature data; adding an expression feature coefficient to each item of facial feature data, and adjusting the motion amplitude of each expression of the digital person; and generating a feature model file for each performer, which is used to drive the digital person.
In the method described above, the facial feature data include an eyebrow feature value, an eye feature value, a cheek feature value, a nose feature value, a jaw feature value, and a mouth feature value.
In the method described above, performing zeroing and tare calibration on the performer's facial feature data to obtain the calibrated facial feature data comprises: pre-collecting basic facial feature data of the performer in a relaxed state; and subtracting the basic facial feature data from the currently collected facial feature data of the performer to obtain the calibrated facial feature data.
In the method described above, adding an expression feature coefficient to each item of facial feature data and adjusting the motion amplitude of the digital person's expressions comprises: driving the digital person with the calibrated facial feature data to display a visualization picture; adding an expression feature coefficient to each item of facial feature data based on the visualization picture to obtain adjusted feature values; and driving the digital person with the adjusted feature values.
In the method described above, adding an expression feature coefficient to each item of facial feature data to obtain the adjusted feature value comprises: multiplying each feature value in the calibrated facial feature data by its expression feature coefficient to obtain the adjusted feature value.
In the method described above, generating the feature model file of each performer comprises: generating the feature model file of each performer based on the facial feature data to which the expression feature coefficients have been added, and archiving the feature model file to a database.
In the method described above, the feature model file corresponding to a performer is read from the database according to the performer's category, and the digital person is driven using the read feature model file.
In the method described above, the digital person is driven in real time by a face tool.
The present application also provides a system for driving a digital person based on different people, the system comprising: a data collection module for collecting the performer's facial feature data; a calibration module for performing zeroing and tare calibration on the performer's facial feature data to obtain calibrated facial feature data; an adjustment module for adding an expression feature coefficient to each item of facial feature data and adjusting the motion amplitude of each expression of the digital person; and a generation module for generating a feature model file for each performer, which is used to drive the digital person.
In the system described above, the system further comprises: a calling module for calling the feature model file corresponding to a performer according to the performer's category; and a driving module for driving the digital person according to the called feature model file.
The beneficial effects achieved by the application are as follows:
(1) The application maintains a separate data model for each performer, generates a feature model file for each performer, and uses that file to drive the digital person, so that when a different performer subsequently drives the digital person, the corresponding feature model file is called directly and the best driving effect is achieved.
(2) The application adjusts the motion amplitude of each expression feature, so that different performers can drive the digital person to the same optimal effect. In addition, the adjusted expression-feature-coefficient configuration can be archived, so that the same performer can reuse it later without a second adjustment, which improves the efficiency of driving the digital person.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and that other drawings can be obtained from them by a person skilled in the art.
Fig. 1 is a flowchart of a method for driving a digital human based on different people according to an embodiment of the present application.
Fig. 2 is a flowchart of a method for zeroing and tare calibration according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of a method for adjusting the motion amplitude of a digital person's expressions according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a system for driving a digital human based on different people according to an embodiment of the present application.
Reference numerals: 10 - data collection module; 20 - calibration module; 30 - adjustment module; 40 - generation module; 50 - calling module; 60 - driving module; 100 - system for driving a digital person based on different people.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Example one
As shown in fig. 1, the present application provides a method for driving a digital person based on different people, the method comprising the following steps:
step S1, facial feature data of the persons in the collection are collected.
The facial feature data are the performer's facial expression data, and include an eyebrow feature value, an eye feature value, a cheek feature value, a nose feature value, a jaw feature value, a mouth feature value, and the like.
Specifically, the performer's facial feature data are collected in real time by a Face ID depth sensor. Preferably, the facial feature data are collected in real time at 60 frames per second, capturing the performer's facial expression state in every frame. Preferably, 52 facial expression values are collected for the performer in each frame.
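For illustration only, the following is a minimal Python sketch of what one captured frame could look like. The application itself only specifies a depth sensor, 60 frames per second, and 52 expression values per frame; the ARKit-style blendshape names and the data-structure layout below are assumptions, not quoted from it.

```python
# One captured frame of facial feature data: a timestamp plus 52 named
# blendshape channels, each a value in [0.0, 1.0]. The ARKit-style channel
# names are assumed, not specified in this application.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class FaceFrame:
    timestamp: float                                   # seconds; frames arrive at roughly 60 fps
    blendshapes: Dict[str, float] = field(default_factory=dict)

example_frame = FaceFrame(
    timestamp=0.016,
    blendshapes={
        "browInnerUp": 0.12,
        "eyeBlinkLeft": 0.03,
        "cheekPuff": 0.00,
        "noseSneerRight": 0.01,
        "jawOpen": 0.42,
        "mouthSmileLeft": 0.20,
        # ... remaining channels, 52 in total
    },
)
```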
Step S2: perform zeroing and tare (baseline-subtraction) calibration on the performer's facial feature data to obtain calibrated facial feature data.
As shown in fig. 2, step S2 includes the following sub-steps:
step S210, collecting the basic facial feature data of the relaxed face of the person in advance.
Specifically, the basic facial feature data of the person in the middle is collected as one frame of facial expression feature data before calibration while the face of the person in the middle is kept in a relaxed state, and the data is recorded. And then subtracting the frame data before calibration from each frame data acquired in real time to realize the homing peeling calibration.
As one specific embodiment of the present invention, the basic facial feature data includes 52 facial expression feature data, including, for example, an eyebrow feature value, an eye feature value, a cheek feature value, a nose feature value, a chin feature value, a mouth feature value, and the like.
Step S220, subtracting the basic facial feature data from the currently collected facial feature data of the person, and using the subtracted data as the calibrated facial feature data.
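A minimal sketch of steps S210 and S220 follows, assuming each frame is a name-to-value mapping as in the previous sketch; the clamp at zero is an added assumption, since the description only specifies the subtraction.

```python
# Zeroing and tare calibration: record one relaxed-face frame as the baseline,
# then subtract it from every live frame. Names are illustrative; the clamp at
# zero is an added assumption.
from typing import Dict

def capture_baseline(relaxed_frame: Dict[str, float]) -> Dict[str, float]:
    """Record the frame taken while the performer keeps a neutral face."""
    return dict(relaxed_frame)

def calibrate(live_frame: Dict[str, float], baseline: Dict[str, float]) -> Dict[str, float]:
    """Subtract the baseline value from each channel of the live frame."""
    return {
        name: max(0.0, value - baseline.get(name, 0.0))
        for name, value in live_frame.items()
    }

# Example: a resting jawOpen of 0.05 is removed from the live reading.
baseline = capture_baseline({"jawOpen": 0.05, "browInnerUp": 0.02})
calibrated = calibrate({"jawOpen": 0.42, "browInnerUp": 0.02}, baseline)  # jawOpen -> 0.37
```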
Step S3: add an expression feature coefficient to each item of facial feature data, and adjust the motion amplitude of the digital person's expressions.
Specifically, an expression feature coefficient is added to each item of facial feature data, and with it the motion amplitude of each expression of the digital person is adjusted.
As shown in fig. 3, step S3 includes the following sub-steps:
and step S310, driving the digital person to display a visual picture by using the calibrated facial feature data.
Specifically, a basic visualization picture is displayed according to the calibrated facial feature data, the basic visualization picture is used for adjusting the expression feature coefficient of each expression based on the visualization picture, and then the action amplitude of each expression in the visualization picture is adjusted, so that the adjusted expression feature data drives the digital person to achieve the desired effect.
Step S320: based on the visualization picture, add an expression feature coefficient to each item of facial feature data to obtain the adjusted feature values.
Specifically, the adjusted feature value is obtained by multiplying each feature value in the calibrated facial feature data by its expression feature coefficient. The size of each expression feature coefficient is tuned based on the visualization picture so that the motion amplitude of the digital person's expression reaches the desired state.
Because the extent and amplitude of facial expressions differ from performer to performer when facial features are captured, the effect of driving the digital person varies between performers and cannot be made consistent directly. The expression feature coefficient, that is, the amplitude scaling coefficient of each item of facial feature data, therefore needs to be adjusted so that different performers drive the digital person to the same optimal effect.
As an embodiment of the invention, the feature-value configuration data (for example, the expression feature coefficients) adjusted for each performer are archived, so that the performer can reuse them later without a second adjustment.
Step S330: drive the digital person with the adjusted feature values, so that the different sets of expression feature data all drive the digital person to the best effect.
As an embodiment of the present invention, with a coefficient of 1.0 the limit value of JawOpen (jaw open) when the mouth is opened fully is 42%; that is, before the coefficient is adjusted, JawOpen cannot reach 100% and the best mouth-opening effect of the digital person cannot be achieved. After the expression feature coefficient is adjusted to 1.9, the limit value of JawOpen reaches 100%, so the digital person's best mouth-opening effect is achieved; the coefficient parameters of the other parts are set in the same way.
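A minimal sketch of applying such per-channel coefficients (steps S320 and S330) follows. Only the jawOpen value of 1.9 comes from the description above; the other coefficients, the helper names, and the clamp to 1.0 are illustrative assumptions.

```python
# Apply per-channel expression feature coefficients: each calibrated value is
# multiplied by its coefficient, then clamped to the valid range (the clamp is
# an added assumption). Only jawOpen's 1.9 comes from the description above.
from typing import Dict

def apply_coefficients(calibrated: Dict[str, float],
                       coefficients: Dict[str, float]) -> Dict[str, float]:
    """Scale each calibrated feature value by its expression feature coefficient."""
    return {
        name: min(1.0, value * coefficients.get(name, 1.0))
        for name, value in calibrated.items()
    }

# Hypothetical per-performer coefficients, tuned while watching the visualization picture.
coefficients = {"jawOpen": 1.9, "mouthSmileLeft": 1.2, "browInnerUp": 1.0}
adjusted = apply_coefficients({"jawOpen": 0.42, "mouthSmileLeft": 0.20}, coefficients)
```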
Step S4: generate a feature model file for each performer, which is used to drive the digital person.
Specifically, the feature model file of each performer is generated based on the facial feature data to which the expression feature coefficients have been added, and the file is archived to a database. Each performer has a separate data model; the digital person is driven by reading the called performer's feature model file, so that whenever a different performer subsequently drives the digital person, the optimal effect is achieved.
As a specific embodiment of the invention, the feature model file corresponding to a performer is read from the database according to the performer's category and used directly to drive the digital person, without adjusting the expression feature coefficients a second time, which improves the performer's driving efficiency.
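A minimal sketch of archiving and reloading such a per-performer feature model file follows. A JSON file on disk stands in for the database mentioned above; the directory, file layout, field names, and helper functions are assumptions.

```python
# Archive and reload a per-performer feature model file. A JSON file stands in
# for the database; directory, file layout, and field names are assumptions.
import json
from pathlib import Path
from typing import Dict

PROFILE_DIR = Path("performer_profiles")

def archive_profile(performer_id: str,
                    coefficients: Dict[str, float],
                    baseline: Dict[str, float]) -> None:
    """Write the performer's feature model file (tuned coefficients plus baseline)."""
    PROFILE_DIR.mkdir(exist_ok=True)
    profile = {
        "performer_id": performer_id,
        "expression_coefficients": coefficients,
        "baseline": baseline,
    }
    (PROFILE_DIR / f"{performer_id}.json").write_text(json.dumps(profile, indent=2))

def load_profile(performer_id: str) -> Dict:
    """Read back the archived feature model file for this performer."""
    return json.loads((PROFILE_DIR / f"{performer_id}.json").read_text())
```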
As a specific embodiment of the invention, the digital person is driven in real time through a face tool that provides parameter presetting, model generation and archiving, and file reading, as well as calibration and facial expression correction, so that higher capture-quality requirements can be met. Motion capture can thus easily be performed for different performers to drive the digital person and achieve a better picture effect.
As a specific embodiment of the invention, the expression feature coefficients are obtained from the called feature model file of the current performer, the collected facial feature data of the current performer are multiplied by these coefficients to obtain the adjusted feature values, and the digital person is driven based on the adjusted feature values, so that the expression displayed by the digital person looks better. By adding expression feature coefficients to the facial feature data, different performers can drive the digital person to display the same expression effect.
Example two
As shown in fig. 4, the present application provides a system 100 for driving a digital person based on different people, the system comprising:
the data acquisition module 10 is used for acquiring facial feature data of a person in the collection. Specifically, the data acquisition module is a depth sensor and is used for acquiring eyebrow characteristic values, eye characteristic values, cheek characteristic values, nose characteristic values, jaw characteristic values, mouth characteristic values and the like of the person in real time.
A calibration module 20, configured to perform zeroing and tare calibration on the performer's facial feature data to obtain calibrated facial feature data. The calibration module 20 collects one frame of the performer's facial expression feature data as the pre-calibration basic facial feature data and records it, then subtracts this pre-calibration frame from each frame collected in real time, which accomplishes the zeroing and tare calibration.
An adjustment module 30, configured to add an expression feature coefficient to each item of facial feature data and adjust the motion amplitude of each expression of the digital person. The adjustment module 30 tunes the size of each expression feature coefficient according to the motion amplitude of the corresponding expression of the digital person, so that each expression reaches the desired motion amplitude, then records the coefficient at which this happens and stores it so that it can later be called directly to adjust the facial feature data values.
A generation module 40, configured to generate the feature model file of each performer, which is used to drive the digital person. The generation module 40 generates a data model for each performer in the database and, on that basis, associates the adjusted expression feature coefficients with the facial feature data in the data model, so that the coefficients can later be called directly and the collected facial feature data multiplied by them to obtain the adjusted feature values.
A calling module 50, configured to call the feature model file corresponding to a performer according to the performer's category.
A driving module 60, configured to drive the digital person according to the called feature model file. The driving module 60 includes a face tool through which the digital person is driven in real time to display the picture effect.
Specifically, the expression feature coefficients are obtained from the called feature model file of the current performer, the collected facial feature data of the current performer are multiplied by these coefficients to obtain the adjusted feature values, and the digital person is driven based on the adjusted feature values, so that the expression displayed by the digital person looks better.
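Putting the pieces together, the following sketch shows the per-frame path a driving module of this kind could take, reusing the calibrate, apply_coefficients, and load_profile helpers from the sketches above. The send_to_face_tool call is a placeholder for whatever interface the real-time face tool exposes; it is not an API named in this application.

```python
# Per-frame driving path: load the current performer's archived profile once,
# then calibrate each captured frame, apply the archived coefficients, and hand
# the result to the face tool. Reuses calibrate, apply_coefficients, and
# load_profile from the sketches above; send_to_face_tool is a placeholder.
from typing import Dict, Iterable

def send_to_face_tool(values: Dict[str, float]) -> None:
    """Placeholder: forward adjusted feature values to the real-time face tool."""
    pass

def drive_digital_person(performer_id: str, frame_stream: Iterable[Dict[str, float]]) -> None:
    profile = load_profile(performer_id)             # archived by the generation module
    baseline = profile["baseline"]
    coefficients = profile["expression_coefficients"]
    for live_frame in frame_stream:                  # roughly 60 frames per second
        calibrated = calibrate(live_frame, baseline)
        adjusted = apply_coefficients(calibrated, coefficients)
        send_to_face_tool(adjusted)
```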
The above description covers only embodiments of the present invention and is not intended to limit the present invention. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of the claims of the present invention.

Claims (10)

1. A method for driving a digital person based on different people, the method comprising the following steps:
collecting facial feature data of the performer;
performing zeroing and tare calibration on the performer's facial feature data to obtain calibrated facial feature data;
adding an expression feature coefficient to each item of facial feature data, and adjusting the motion amplitude of each expression of the digital person;
generating a feature model file for each performer, which is used to drive the digital person.
2. The method for driving a digital person based on different people according to claim 1, wherein the facial feature data comprise an eyebrow feature value, an eye feature value, a cheek feature value, a nose feature value, a jaw feature value, and a mouth feature value.
3. The method according to claim 1, wherein performing zeroing and tare calibration on the performer's facial feature data to obtain the calibrated facial feature data comprises:
pre-collecting basic facial feature data of the performer in a relaxed state;
subtracting the basic facial feature data from the currently collected facial feature data of the performer to obtain the calibrated facial feature data.
4. The method according to claim 1, wherein adding an expression feature coefficient to each item of facial feature data and adjusting the motion amplitude of the digital person's expressions comprises:
driving the digital person with the calibrated facial feature data to display a visualization picture;
adding an expression feature coefficient to each item of facial feature data based on the visualization picture to obtain adjusted feature values;
driving the digital person with the adjusted feature values.
5. The method according to claim 1, wherein adding an expression feature coefficient to each item of facial feature data to obtain the adjusted feature value comprises: multiplying each feature value in the calibrated facial feature data by the expression feature coefficient to obtain the adjusted feature value.
6. The method according to claim 1, wherein generating the feature model file of each performer comprises: generating the feature model file of each performer based on the facial feature data to which the expression feature coefficients have been added, and archiving the feature model file to a database.
7. The method according to claim 6, wherein the feature model file corresponding to a performer is read from the database according to the performer's category, and the digital person is driven using the read feature model file.
8. The method according to claim 7, wherein the digital person is driven in real time by a face tool.
9. A system for driving a digital person based on different people, for performing the method of any one of claims 1 to 8, the system comprising:
a data collection module for collecting the performer's facial feature data;
a calibration module for performing zeroing and tare calibration on the performer's facial feature data to obtain calibrated facial feature data;
an adjustment module for adding an expression feature coefficient to each item of facial feature data and adjusting the motion amplitude of each expression of the digital person;
a generation module for generating the feature model file of each performer, which is used to drive the digital person.
10. The system for driving a digital person based on different people according to claim 9, further comprising:
a calling module for calling the feature model file corresponding to a performer according to the performer's category;
a driving module for driving the digital person according to the called feature model file.
CN202211470278.6A 2022-11-23 2022-11-23 Method and system for driving digital person based on different people Active CN115546366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211470278.6A CN115546366B (en) 2022-11-23 2022-11-23 Method and system for driving digital person based on different people

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211470278.6A CN115546366B (en) 2022-11-23 2022-11-23 Method and system for driving digital person based on different people

Publications (2)

Publication Number Publication Date
CN115546366A true CN115546366A (en) 2022-12-30
CN115546366B CN115546366B (en) 2023-02-28

Family

ID=84720473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211470278.6A Active CN115546366B (en) 2022-11-23 2022-11-23 Method and system for driving digital person based on different people

Country Status (1)

Country Link
CN (1) CN115546366B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080019576A1 (en) * 2005-09-16 2008-01-24 Blake Senftner Personalizing a Video
CN112700523A (en) * 2020-12-31 2021-04-23 魔珐(上海)信息科技有限公司 Virtual object face animation generation method and device, storage medium and terminal
CN114049418A (en) * 2021-11-15 2022-02-15 拓胜(北京)科技发展有限公司 Live broadcasting method and system based on virtual anchor
CN114373044A (en) * 2021-12-16 2022-04-19 深圳云天励飞技术股份有限公司 Method, device, computing equipment and storage medium for generating three-dimensional face model
CN114638918A (en) * 2022-01-26 2022-06-17 武汉艺画开天文化传播有限公司 Real-time performance capturing virtual live broadcast and recording system

Also Published As

Publication number Publication date
CN115546366B (en) 2023-02-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant