CN117351121A - Digital person editing control method, device, electronic equipment and storage medium - Google Patents

Digital person editing control method, device, electronic equipment and storage medium

Info

Publication number
CN117351121A
CN117351121A CN202311043748.5A
Authority
CN
China
Prior art keywords
digital person
digital
user
stored
feature data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311043748.5A
Other languages
Chinese (zh)
Inventor
沈中熙
钱晓亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Black Mirror Technology Co ltd
Original Assignee
Xiamen Black Mirror Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Black Mirror Technology Co ltd filed Critical Xiamen Black Mirror Technology Co ltd
Priority to CN202311043748.5A priority Critical patent/CN117351121A/en
Publication of CN117351121A publication Critical patent/CN117351121A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a digital person editing control method, device, electronic equipment and storage medium. The method comprises: obtaining an edited first digital person, wherein the first digital person comprises multiple kinds of savable feature data; determining at least one kind of target feature data from the feature data according to a save instruction from the user, and saving the target feature data in a preset general format to generate at least one kind of saved feature data; if a call instruction from the user for the saved feature data is received, determining a second digital person to be edited according to the call instruction; and applying the saved feature data to the second digital person to generate a third digital person. By saving user-defined digital person feature data in a preset general format, the saved feature data can be called directly to edit other digital persons.

Description

Digital person editing control method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to a digital person editing control method, apparatus, electronic device, and storage medium.
Background
The digital person is a product of information science and life science fusion, performs virtual simulation on the forms and functions of the human body at different levels by using an information science method, and is widely applied to the fields of live broadcasting, news broadcasting, voice prompt and the like.
In the prior art, no uniform digital person storage format has been agreed among different service ends, so exported digital person storage formats may differ and cannot be quickly migrated to other clients for use. In addition, digital persons are imported in a non-dynamic manner, and an imported digital person cannot be further edited or interacted with, so digital person generation efficiency is low.
Therefore, how to improve the generation efficiency of the digital person is a technical problem to be solved at present.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiment of the application provides a digital person editing control method, a device, electronic equipment and a storage medium, wherein user-defined digital person characteristic data are stored according to a preset general format, so that the stored characteristic data can be directly called to edit other digital persons, and the generation efficiency of the digital persons is improved.
In a first aspect, there is provided a digital person editing control method, the method comprising: acquiring a first digital person with editing completed, wherein the first digital person comprises a plurality of kinds of characteristic data which can be saved; determining at least one target feature data from the feature data according to a storage instruction of a user, storing the target feature data according to a preset general format, and generating at least one stored feature data; if a call instruction of the user to the stored feature data is received, determining a second digital person to be edited according to the call instruction; and applying the stored characteristic data to the second digital person to generate a third digital person.
In a second aspect, there is provided a digital person editing control apparatus, the apparatus comprising: the system comprises an acquisition module, a storage module and a storage module, wherein the acquisition module is used for acquiring a first digital person with the editing completion, and the first digital person comprises various kinds of characteristic data which can be stored; the storage module is used for determining at least one target characteristic data from the characteristic data according to a storage instruction of a user, storing the target characteristic data according to a preset general format and generating at least one stored characteristic data; the determining module is used for determining a second digital person to be edited according to the calling instruction if the calling instruction of the user on the stored characteristic data is received; and the generation module is used for applying the stored characteristic data to the second digital person to generate a third digital person.
In a third aspect, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the digital person editing control method of the first aspect via execution of the executable instructions.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the digital person editing control method of the first aspect.
By applying this technical scheme, an edited first digital person comprising multiple kinds of savable feature data is obtained; at least one kind of target feature data is determined from the feature data according to a save instruction from the user, and the target feature data is saved in a preset general format to generate at least one kind of saved feature data; if a call instruction from the user for the saved feature data is received, a second digital person to be edited is determined according to the call instruction, and the saved feature data is applied to the second digital person to generate a third digital person. Because the user-defined digital person feature data is saved in a preset general format, the saved feature data can be called directly to edit other digital persons, which improves digital person generation efficiency and user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic flow chart of a digital person editing control method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of adding tag information to stored feature data in an embodiment of the invention;
FIG. 3 is a schematic flow chart of a third digital person preservation in an embodiment of the invention;
FIG. 4 is a schematic flow chart of a first digital person acquisition in an embodiment of the invention;
fig. 5 shows a schematic structural diagram of a digital human editing control device according to an embodiment of the present invention;
fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It is noted that other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise construction set forth herein below and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The subject application is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor devices, distributed computing environments that include any of the above devices or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiment of the application provides a digital person editing control method, which stores the user-defined digital person characteristic data according to a preset general format, so that the stored characteristic data can be directly called to edit other digital persons, the generation efficiency of the digital person is improved, and the user experience is improved.
As shown in fig. 1, the method comprises the steps of:
step S101, a first digital person with the edited first digital person comprising various kinds of characteristic data which can be saved is obtained.
In this embodiment, the first digital person may be a digital person obtained by the user editing a designated digital person in real time, or a digital person the user selects from several previously edited digital persons. When editing a digital person, the user can perform operations such as face pinching, beautification (e.g. changing the hairstyle, adding eye shadow, lip makeup, a beard or other kinds of makeup), body-shape adjustment (e.g. adjusting the digital person's height, build and figure curves), outfit changing (e.g. changing the digital person's clothing style and color) and reconstruction (e.g. reconstructing the digital person's face from an uploaded face photo). The edited digital person comprises multiple kinds of savable feature data, such as the body shape, hairstyle, teeth and outfit edited by the user, and the user can subsequently save one or more kinds of this feature data.
Step S102, determining at least one target feature data from the feature data according to a storage instruction of a user, and storing the target feature data according to a preset general format to generate at least one stored feature data.
The user can select at least one kind of feature data from the feature data and issue a save instruction. The target feature data to be saved is determined according to the save instruction, and the target feature data is saved in a preset general format to generate the corresponding saved feature data; the user can subsequently apply the saved feature data directly to other digital persons to edit them.
The preset general format may be the GLB format. A GLB file is a 3D model stored in the Graphics Language Transmission Format (glTF); it stores the model's information in binary form, including the node hierarchy, cameras, materials, animations and meshes. A GLB file is the binary version of a glTF file, bundling the glTF components, such as the JSON, BIN files and images, into a single file.
Other types of preset general formats can be adopted by those skilled in the art according to actual needs, and the protection scope of the application is not affected.
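For illustration only, the GLB container layout described above can be sketched as follows; the patent does not disclose its serialization code, so the function name `save_as_glb` and the use of the glTF `extras` field to carry a feature payload are hypothetical:

```python
import json
import struct

def save_as_glb(feature_json: dict) -> bytes:
    """Pack feature data into a minimal GLB (binary glTF 2.0) container.

    Layout: a 12-byte header (magic 'glTF', version, total length),
    followed by a JSON chunk holding the glTF document."""
    payload = json.dumps(feature_json).encode("utf-8")
    payload += b" " * (-len(payload) % 4)  # JSON chunk must be 4-byte aligned
    chunk = struct.pack("<II", len(payload), 0x4E4F534A) + payload  # 'JSON' chunk type
    header = struct.pack("<III", 0x46546C67, 2, 12 + len(chunk))    # magic 'glTF', version 2
    return header + chunk

# Store an edited feature (a hypothetical hairstyle asset) in the glTF 'extras' field.
glb = save_as_glb({"asset": {"version": "2.0"},
                   "extras": {"feature": "hairstyle_01"}})
```

Because the result is a standard GLB header plus JSON chunk, any glTF-aware client can open it, which is the migration property the scheme relies on.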
Step S103, if a call instruction of the user to the saved feature data is received, determining a second digital person to be edited according to the call instruction.
In this embodiment, when the user edits the second digital person, if the saved feature data needs to be directly applied to the second digital person, the user selects corresponding saved feature data from the saved feature data, and issues a call instruction to the saved feature data. And after receiving the call instruction, determining the second digital person to be edited according to the call instruction.
An interface for selecting saved feature data can be provided in the second digital person's editing interface. For example, images or description information of all saved feature data can be displayed to the user, who selects one item and thereby triggers the corresponding call instruction; alternatively, a search box is provided, and after the user enters a keyword, any saved feature data matching the keyword is displayed to the user, who selects one of the displayed items and thereby triggers the corresponding call instruction.
Optionally, the user may issue a call instruction for feature data they saved themselves, or for feature data saved by other users.
In some embodiments of the present application, after receiving the call instruction from the user to the saved feature data, the method further includes:
judging whether the user has the use authority of the stored characteristic data or not according to the user identification of the user;
if not, a preset prompt message is sent to the user, and the call to the saved feature data is refused;
if so, determining the saved characteristic data as callable characteristic data.
In this embodiment, the user may set a corresponding usage right when saving the feature data, for example, only allow the user to use the feature data, only allow friends to use the feature data, or allow all people to use the feature data, use the feature data after payment, or use the feature data after completing a preset task, and so on. The calling instruction carries a user identification, after the calling instruction is received, whether the user has the use authority for the stored feature data is judged according to the user identification, if not, prompt information for refusing to call is sent to the user, and the stored feature data is refused to call. For example, the prompt information may include that the user has no use right, or the prompt information is used after recharging, or the prompt information is used after completing the preset task, etc. If the use authority is provided, the stored feature data is determined to be callable feature data, and the stored feature data can be subsequently applied to the second digital person.
When the user calls the stored feature data, the user is authenticated, so that only the user with the use authority uses the stored feature data, and the safety and the user experience are improved.
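The three-branch authorization flow above can be sketched as follows; the policy names (`owner_only`, `friends`, `everyone`) and function names are illustrative assumptions, not an API disclosed by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SavedFeature:
    owner_id: str
    policy: str = "owner_only"                 # "owner_only" | "friends" | "everyone"
    friends: set = field(default_factory=set)

def can_use(feature: SavedFeature, user_id: str) -> bool:
    """Check, from the user identification, whether the caller may use this feature."""
    if feature.policy == "everyone":
        return True
    if feature.policy == "friends":
        return user_id == feature.owner_id or user_id in feature.friends
    return user_id == feature.owner_id         # default: owner only

def call_saved_feature(feature: SavedFeature, user_id: str) -> str:
    """Mirror the two outcomes: refuse with a prompt, or mark the data callable."""
    if not can_use(feature, user_id):
        return "prompt: no usage right, call refused"
    return "callable"
```

Payment-gated or task-gated policies from the text would slot in as additional `policy` values checked the same way.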
Step S104, the stored characteristic data is applied to the second digital person to generate a third digital person.
In this embodiment, since the saved feature data is saved in the preset general format, it can be applied directly to the second digital person, and the third digital person is generated by completing the corresponding editing operation on the second digital person.
It will be appreciated that the stored feature data may be applied to the second digital person based on the feature data that the second digital person currently possesses. And if the characteristic data corresponding to the stored characteristic data exists in the second digital person, replacing the corresponding characteristic data in the second digital person based on the stored characteristic data, and if the characteristic data corresponding to the stored characteristic data does not exist in the second digital person, adding the stored characteristic data to the second digital person.
For example, if the stored feature data is a hand and there is a hand in the second digital person, the stored hand is replaced with the hand in the second digital person. If the saved feature data is an eye shadow and the eye shadow does not exist in the second digital person, the saved eye shadow is added to the corresponding position in the second digital person.
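The replace-or-add rule above can be sketched as a slot-keyed update; the slot names (`hand`, `eye_shadow`) and the dict representation of a digital person are hypothetical simplifications:

```python
def apply_saved_feature(person: dict, slot: str, saved: dict) -> dict:
    """Apply saved feature data to a digital person, keyed by feature slot.

    If the person already has data in that slot (e.g. 'hand'), it is replaced;
    if the slot is absent (e.g. 'eye_shadow'), the saved data is added."""
    edited = dict(person)  # leave the original second digital person untouched
    edited[slot] = saved
    return edited

second = {"hand": {"style": "default"}}
third = apply_saved_feature(second, "hand", {"style": "saved"})       # replace
third = apply_saved_feature(third, "eye_shadow", {"color": "plum"})   # add
```

Copying before assignment keeps the second digital person reusable for further edits, matching the scheme's goal of applying one saved feature to many digital persons.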
In some embodiments of the present application, the saved feature data includes an avatar feature, and/or an audio feature, and/or a driver feature, and the applying the saved feature data to the second digital person generates a third digital person, including:
if the stored feature data is the image feature, adjusting the global image or the local image of the second digital person according to the stored feature data to generate the third digital person;
if the stored feature data is the audio feature, adjusting the pronunciation feature of the second digital person according to the stored feature data to generate the third digital person;
and if the stored characteristic data is the driving characteristic, adjusting the mouth shape driving parameter and/or the emotion driving parameter of the second digital person according to the stored characteristic data to generate the third digital person.
In this embodiment, the saved feature data includes avatar features, and/or audio features, and/or driving features. An avatar feature may be the digital person's global avatar or a local avatar: the global avatar is the digital person's overall appearance, while a local avatar is a local part of it, for example teeth, arms, feet, hairstyle, eyes, mouth, ears or clothing. Audio features are the digital person's pronunciation features, such as the speaker's gender, accent region, language, speech speed, pronunciation clarity or degree of standard Mandarin. The driving features are the mouth-shape driving parameters and/or emotion driving parameters of the digital person's mouth-shape or emotion animation.
The stored characteristic data is applied to the second digital person in accordance with different types of stored characteristic data. Specifically, if the stored feature data is the image feature, the global image or the local image of the second digital person is adjusted according to the stored feature data, and a third digital person is generated. And if the stored feature data is an audio feature, adjusting the pronunciation feature of the second digital person according to the stored feature data to generate a third digital person. If the stored characteristic data is a driving characteristic, the mouth shape driving parameter and/or emotion driving parameter of the second digital person are adjusted according to the stored characteristic data, a third digital person is generated, and the second digital person is edited according to different types of stored characteristic data, so that more flexible editing control of the digital person is realized, and the generation efficiency of the digital person is improved.
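The type-based dispatch above can be sketched as a handler table; the kind names and handler functions are illustrative placeholders, not the patent's implementation:

```python
def adjust_avatar(person: dict, data: dict) -> dict:
    return {**person, "avatar": data}      # global or local appearance

def adjust_pronunciation(person: dict, data: dict) -> dict:
    return {**person, "voice": data}       # accent, speed, clarity, ...

def adjust_driving(person: dict, data: dict) -> dict:
    return {**person, "driving": data}     # mouth-shape / emotion parameters

HANDLERS = {
    "avatar": adjust_avatar,
    "audio": adjust_pronunciation,
    "driver": adjust_driving,
}

def apply_by_kind(person: dict, kind: str, data: dict) -> dict:
    """Route saved feature data to the adjustment matching its kind."""
    return HANDLERS[kind](person, data)
```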
In some embodiments of the present application, before adjusting the global or local persona of the second digital person according to the saved characteristic data to generate the third digital person, the method further comprises:
if applying the saved feature data would cause the second digital person to have mutually exclusive decorations, a prompt message that the saved feature data cannot be applied is issued.
In this embodiment, mutually exclusive decoration data is decoration data that is incompatible in attribute or category and cannot exist at the same time; for example, flat shoes and high-heeled shoes cannot both be worn on the feet. If applying the saved feature data would give the second digital person mutually exclusive decorations, the saved feature data cannot be applied to the second digital person, and a prompt message that it cannot be applied is issued. According to the prompt, the user can switch to other saved feature data or give up applying it, so that the generated third digital person better matches a real human body and the digital person is generated more accurately.
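One minimal way to model such an exclusion check, with a hypothetical pair table (the patent does not specify how conflicts are encoded):

```python
# Hypothetical exclusion table: pairs of decorations that cannot coexist.
MUTUALLY_EXCLUSIVE = {
    frozenset({"flat_shoes", "high_heels"}),   # both occupy the feet
    frozenset({"full_beard", "clean_shaven"}),
}

def find_conflict(current: set, new_item: str):
    """Return the decoration already worn that conflicts with new_item, or None."""
    for worn in current:
        if frozenset({worn, new_item}) in MUTUALLY_EXCLUSIVE:
            return worn
    return None
```

If `find_conflict` returns a decoration, the editor would emit the "cannot be applied" prompt instead of applying the saved data.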
According to the digital person editing control method, an edited first digital person comprising multiple kinds of savable feature data is obtained; at least one kind of target feature data is determined from the feature data according to a save instruction from the user, and the target feature data is saved in a preset general format to generate at least one kind of saved feature data; if a call instruction from the user for the saved feature data is received, a second digital person to be edited is determined according to the call instruction, and the saved feature data is applied to the second digital person to generate a third digital person. Because the user-defined digital person feature data is saved in a preset general format, the saved feature data can be called directly to edit other digital persons, improving digital person generation efficiency and user experience.
On the basis of any embodiment of the present application, after generating at least one saved characteristic data, as shown in fig. 2, the method further comprises the steps of:
and S21, displaying preset prompt information so that the user adds description information of the stored characteristic data according to the preset prompt information.
In this embodiment, after at least one stored feature data is generated, preset prompting information is displayed, where the preset prompting information is used to prompt a user to add description information to the stored feature data, and optionally, the preset prompting information may be set on an input interface capable of inputting the description information.
Step S22, generating tag information according to the description information, and adding the tag information to the saved feature data.
In this embodiment, after the user adds the description information, the tag information is generated according to the description information, and the tag information is added to the stored feature data, so that each stored feature data can be identified according to each tag information, and the stored feature data can be conveniently and more efficiently invoked by different subsequent users.
For example, if the saved feature data is audio data, after the audio data is saved, a text input box containing the preset prompt information is displayed, and the user can enter into it: the speaker's gender, accent region, language, speech speed, pronunciation clarity or Mandarin proficiency level (e.g., grade information such as first or second class), etc. Tag information is then generated from the description information the user entered in the text input box and added to the audio data.
In some embodiments of the present application, before generating the tag information according to the description information, the method further includes: detecting whether the description information meets preset conditions and, if not, prompting the user to modify it accordingly. The preset conditions may be that the description information is in a preset language (such as Chinese) and/or does not exceed a preset length (such as 100 characters). This makes it easier for users to quickly distinguish different saved feature data by their tag information, improving user experience.
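A minimal sketch of the description check and tag generation, assuming only the length condition (the language check is omitted; both function names are hypothetical):

```python
def validate_description(text: str, max_chars: int = 100) -> bool:
    """Preset conditions: non-empty and within the character limit.
    A real system might also verify the language; that is omitted here."""
    return 0 < len(text) <= max_chars

def make_tag(description: str) -> dict:
    """Generate tag information from description info, or ask for a revision."""
    if not validate_description(description):
        raise ValueError("please modify the description to meet the preset conditions")
    return {"label": description}
```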
On the basis of any embodiment of the present application, after generating the third digital person, as shown in fig. 3, the method further includes:
step S31, if a storage instruction of the user for the third digital person is received, a digital person identifier uniquely corresponding to the third digital person is generated.
In this embodiment, after the third digital person is generated, the user may save the third digital person, and if a save instruction for the third digital person is received, a digital person identifier uniquely corresponding to the third digital person is generated.
The digital person identifier may be a URL (Uniform Resource Locator) of the user and may also include protocol information, such as the HTTP protocol; it may alternatively be regular letters, numbers, or a combination of letters and numbers. For example, time information may be combined with a pseudo-random number to form an identifier. The digital person identifier uniquely corresponds to the third digital person.
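The time-plus-pseudo-random-number variant can be sketched as follows; the `dp-` prefix and field widths are arbitrary choices for illustration:

```python
import secrets
import time

def make_digital_person_id() -> str:
    """Combine time information with a pseudo-random part to form an identifier."""
    return f"dp-{int(time.time()):010d}-{secrets.token_hex(4)}"
```

The timestamp makes identifiers roughly sortable by creation time, while the random suffix keeps two saves in the same second from colliding.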
Step S32, adding the digital person identifier to the third digital person, and saving the third digital person to a preset storage path.
In the embodiment, the digital person identifier is added to the third digital person, and the third digital person with the digital person identifier is stored in the preset storage path, so that the third digital person is efficiently stored.
Optionally, the preset storage path may be a local storage path or a cloud storage path, such as a network disk or a mailbox. In addition, the third digital person can be stored according to a preset general format, and can be directly processed on different clients or used for generating digital person videos, so that the universality of the digital person is improved.
In some embodiments of the present application, after saving the third digital person to a preset storage path, the method further includes:
and acquiring a digital person replacement instruction carrying the digital person identifier, replacing the digital person in a preset digital person video template with the third digital person according to the digital person replacement instruction, and generating a digital person video corresponding to the third digital person.
In this embodiment, the user can generate a digital person video based on the third digital person. A plurality of preset digital person video templates are created in advance, each containing a replaceable digital person; the templates may be in different video styles, such as broadcast videos or product introduction videos, and the user can select a template of the appropriate style as needed. After a digital person replacement instruction carrying the digital person identifier is acquired, the third digital person is retrieved by its identifier and, according to the replacement instruction, replaces the digital person in the preset digital person video template, generating the corresponding digital person video and improving digital person video generation efficiency.
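The lookup-and-swap flow can be sketched as below; the registry dict, template shape and key names are all hypothetical stand-ins for whatever storage and rendering pipeline a real system would use:

```python
def render_with_replacement(template: dict, registry: dict, replace_cmd: dict) -> dict:
    """Swap the template's placeholder digital person for the one named by the
    digital person identifier carried in the replacement instruction."""
    person = registry[replace_cmd["digital_person_id"]]  # fetch the third digital person
    video = dict(template)                               # keep the template reusable
    video["digital_person"] = person
    return video
```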
On the basis of any embodiment of the present application, obtaining the edited first digital person, as shown in fig. 4, includes the following steps:
step S41, obtaining target digital persons selected by the user from a plurality of preset digital persons.
A plurality of preset digital persons are created in advance, a user can select one digital person from the plurality of preset digital persons to edit according to the needs, and the digital person selected by the user is taken as a target digital person.
As an alternative, the target digital person may also be uploaded by the user or created in real time, the digital person being created by the following steps S51-S52:
step S51, receiving the photo, the gender data and the wind pattern data input by the user.
The face photo is used to construct the digital person's appearance. The user sets the digital person's gender by inputting gender data, and sets the digital person's art style by inputting art-style data. Multiple art styles are preset, such as realistic, aesthetic, delicate and cute; each art style can be previewed through a sample image, making it convenient for the user to choose according to their own needs.
Step S52, judging whether the photo meets preset conditions; if so, generating the target digital person by calling a face reconstruction interface with the photo, the gender data, and the art-style data; if not, prompting the user that the photo does not meet the requirements.
In order to match the digital person with the person in the photo and ensure a good visual effect, the photo needs to meet certain preset conditions. These may include: a frontal face photo; even and sufficient lighting; a natural, relaxed expression; a preset format (such as JPG or PNG); and a size not exceeding a preset limit (such as 10 MB). None of the following may occur: head deflection or tilt, laughing, an open mouth, visible teeth, facial occlusion, or facial shadows.
The face reconstruction interface may be based on a 3DMM (3D Morphable Model, a statistical model of 3D face deformation) or a DECA (Detailed Expression Capture and Animation) model. If the photo meets the preset conditions, the face reconstruction interface is called with the photo, the gender data, and the art-style data to generate the target digital person.
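The preset-condition check in step S52 can be sketched as below. The format and size limits come from the text; the `analysis` dictionary is a stand-in assumption for the output of a real face-analysis step (frontal pose, lighting, expression), which this sketch does not implement.

```python
import os

ALLOWED_FORMATS = {".jpg", ".jpeg", ".png"}   # preset formats named in the text
MAX_SIZE_BYTES = 10 * 1024 * 1024             # preset 10 MB size limit


def photo_meets_preset_conditions(path, size_bytes, analysis):
    """Return (ok, reason) for the preset conditions on an uploaded photo.
    `analysis` simulates results from a hypothetical face-analysis model."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_FORMATS:
        return False, "photo is not in a preset format (JPG/PNG)"
    if size_bytes > MAX_SIZE_BYTES:
        return False, "photo exceeds the preset size (10 MB)"
    for key in ("frontal", "even_lighting", "neutral_expression"):
        if not analysis.get(key, False):
            return False, f"photo fails check: {key}"
    return True, "ok"


ok, reason = photo_meets_preset_conditions(
    "face.jpg", 2 * 1024 * 1024,
    {"frontal": True, "even_lighting": True, "neutral_expression": True},
)
```

Only when the check passes would the face reconstruction interface (e.g. a 3DMM- or DECA-based service) be called; otherwise the reason string can be shown to the user as the prompt.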
Step S42, editing the target digital person according to an editing instruction input by the user to obtain the first digital person.
After the target digital person is acquired, a corresponding editing interface pops up, and the target digital person is edited according to editing instructions input by the user in that interface, for example, editing operations such as face pinching, beautification, body-type conversion, outfit changing, and reconstruction. The first digital person is generated once editing is completed, so the first digital person is obtained efficiently.
In some embodiments of the present application, after obtaining the target digital person selected by the user from the plurality of preset digital persons, the method further comprises:
if no digital person video generation process corresponding to the user exists, displaying an editing interface for the target digital person;
and if a digital person video generation process corresponding to the user exists and a digital person in use corresponding to the user exists, displaying prompt information indicating that the target digital person is not editable.
In this embodiment, after the target digital person is obtained, it is determined whether a digital person video generation process corresponding to the user exists and whether a digital person in use corresponding to the user exists. If no such video generation process exists, the user is judged to be able to edit the target digital person, and the editing interface of the target digital person is displayed. If a digital person is in use and a video generation process exists, the target digital person cannot currently be edited, and prompt information indicating that it is not editable is displayed. This prevents the same user from performing digital person editing and video generation at the same time, thereby reducing the load on the server.
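The edit-versus-generation gate described above amounts to a per-user lock. A minimal sketch, with the job registry and function names assumed for illustration:

```python
ACTIVE_VIDEO_JOBS = set()  # user ids whose digital person video generation is running


def can_edit_target(user_id):
    """Gate editing: a user may not edit a digital person while that same
    user's digital person video generation process is running."""
    if user_id in ACTIVE_VIDEO_JOBS:
        return False, "prompt: target digital person is not editable"
    return True, "show editing interface of the target digital person"


ACTIVE_VIDEO_JOBS.add("user-42")          # user-42 starts generating a video
blocked, msg = can_edit_target("user-42")  # editing is refused for user-42
allowed, _ = can_edit_target("user-7")     # a different user may still edit
```

In a real multi-process deployment this check would need a shared store (e.g. a database row or distributed lock) rather than an in-memory set, since the point is to shed concurrent load from the server.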
An embodiment of the present application also provides a digital person editing control device. As shown in fig. 5, the device comprises: an obtaining module 501, configured to obtain a first digital person for which editing is complete, where the first digital person includes multiple kinds of feature data that can be saved; a saving module 502, configured to determine at least one piece of target feature data from the feature data according to a saving instruction of a user, save the target feature data in a preset general format, and generate at least one piece of saved feature data; a determining module 503, configured to determine, if a call instruction of the user for the saved feature data is received, a second digital person to be edited according to the call instruction; and a generating module 504, configured to apply the saved feature data to the second digital person to generate a third digital person.
In a specific application scenario, the device further includes an adding module, configured to: display preset prompt information so that the user adds description information for the saved feature data according to the preset prompt information; and generate tag information according to the description information and add the tag information to the saved feature data.
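The description-to-tag step can be sketched as follows; the tag layout and label derivation here are illustrative assumptions (the patent does not specify how tag information is generated from the description).

```python
def add_tag_to_saved_feature(saved_feature, description):
    """Generate tag information from the user's description and attach it to
    the saved feature data, leaving the original record untouched."""
    tag = {
        "label": description.strip().lower().replace(" ", "_"),  # derived label
        "description": description,                              # raw user text
    }
    tagged = dict(saved_feature)
    tagged["tag"] = tag
    return tagged


feature = {"type": "audio", "payload": {"timbre": "warm"}}
tagged = add_tag_to_saved_feature(feature, "Warm narration voice")
```

A tag like this lets the user later locate a saved feature by its description when issuing a call instruction.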
In a specific application scenario, the saved feature data includes an avatar feature, and/or an audio feature, and/or a driving feature, and the generating module 504 is specifically configured to: if the saved feature data is the avatar feature, adjust the global avatar or local avatar of the second digital person according to the saved feature data to generate the third digital person; if the saved feature data is the audio feature, adjust the pronunciation features of the second digital person according to the saved feature data to generate the third digital person; and if the saved feature data is the driving feature, adjust the mouth-shape driving parameters and/or emotion driving parameters of the second digital person according to the saved feature data to generate the third digital person.
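The per-type application of saved feature data is a dispatch over the three feature kinds. A sketch under assumed field names (the actual digital person representation is not given in the text):

```python
def apply_saved_feature(second_person, feature_type, payload):
    """Apply one piece of saved feature data to the second digital person,
    yielding the third digital person; dispatch depends on the feature type."""
    third = dict(second_person)
    if feature_type == "avatar":
        third["appearance"] = payload                      # global or local avatar
    elif feature_type == "audio":
        third["pronunciation"] = payload                   # pronunciation features
    elif feature_type == "driving":
        third["mouth_driving"] = payload.get("mouth")      # mouth-shape parameters
        third["emotion_driving"] = payload.get("emotion")  # emotion parameters
    else:
        raise ValueError(f"unknown feature type: {feature_type}")
    return third


second = {"name": "second_digital_person"}
third = apply_saved_feature(second, "driving", {"mouth": [0.2], "emotion": [0.7]})
```

Since the saved data is stored in one general format, the same dispatcher can apply a feature saved from any first digital person to any second digital person.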
In a specific application scenario, the device further includes a storage module, configured to: if a storage instruction of the user for the third digital person is received, generate a digital person identifier uniquely corresponding to the third digital person; and add the digital person identifier to the third digital person and save the third digital person to a preset storage path.
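Generating a unique identifier and persisting the third digital person can be sketched as below; the UUID choice, JSON serialization, and file layout are assumptions for illustration, not the patent's prescribed storage scheme.

```python
import json
import tempfile
import uuid
from pathlib import Path


def store_third_digital_person(person, storage_dir):
    """Generate a unique digital person identifier, attach it to the third
    digital person, and save the result under a preset storage path."""
    stored = dict(person)
    stored["digital_person_id"] = uuid.uuid4().hex  # uniquely corresponding id
    path = Path(storage_dir) / f"{stored['digital_person_id']}.json"
    path.write_text(json.dumps(stored))             # persist to the storage path
    return stored["digital_person_id"], path


with tempfile.TemporaryDirectory() as d:
    dp_id, saved_path = store_third_digital_person({"name": "third"}, d)
    loaded = json.loads(saved_path.read_text())
```

The returned identifier is exactly what a later digital person replacement instruction would carry to locate this digital person.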
In a specific application scenario, the device further comprises a replacement module, configured to: acquire a digital person replacement instruction carrying the digital person identifier, replace the digital person in a preset digital person video template with the third digital person according to the replacement instruction, and generate a digital person video corresponding to the third digital person.
In a specific application scenario, the obtaining module 501 is specifically configured to: acquire a target digital person selected by the user from a plurality of preset digital persons; and edit the target digital person according to an editing instruction input by the user to obtain the first digital person.
In a specific application scenario, the obtaining module 501 is further configured to: if no digital person video generation process corresponding to the user exists, display an editing interface for the target digital person; and if a digital person video generation process corresponding to the user exists and a digital person in use corresponding to the user exists, display prompt information indicating that the target digital person is not editable.
The digital person editing control device in the embodiment of the application comprises: an obtaining module, configured to obtain a first digital person for which editing is complete, where the first digital person includes multiple kinds of feature data that can be saved; a saving module, configured to determine at least one piece of target feature data from the feature data according to a saving instruction of a user, save the target feature data in a preset general format, and generate at least one piece of saved feature data; a determining module, configured to determine, if a call instruction of the user for the saved feature data is received, a second digital person to be edited according to the call instruction; and a generating module, configured to apply the saved feature data to the second digital person to generate a third digital person. By saving user-customized digital person feature data in a preset general format, the saved feature data can be directly called to edit other digital persons, which improves the efficiency of digital person generation and the user experience.
An embodiment of the invention also provides an electronic device. As shown in fig. 6, it comprises a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with each other through the communication bus 604;
a memory 603 for storing executable instructions of the processor;
a processor 601 configured, via execution of the executable instructions, to perform the following:
acquiring a first digital person with editing completed, wherein the first digital person comprises a plurality of kinds of characteristic data which can be saved; determining at least one target feature data from the feature data according to a storage instruction of a user, storing the target feature data according to a preset general format, and generating at least one stored feature data; if a call instruction of the user to the stored feature data is received, determining a second digital person to be edited according to the call instruction; and applying the stored characteristic data to the second digital person to generate a third digital person.
The communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, the figure shows only one bold line, but this does not mean there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include RAM (Random Access Memory) or non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the digital human editing control method as described above.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the digital human edit control method as described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A digital human edit control method, said method comprising:
acquiring a first digital person for which editing is complete, wherein the first digital person includes multiple kinds of feature data that can be saved;
determining at least one target feature data from the feature data according to a storage instruction of a user, storing the target feature data according to a preset general format, and generating at least one stored feature data;
if a call instruction of the user to the stored feature data is received, determining a second digital person to be edited according to the call instruction;
and applying the stored characteristic data to the second digital person to generate a third digital person.
2. The method of claim 1, wherein after generating the at least one saved characteristic data, the method further comprises:
displaying preset prompt information so that the user adds description information of the stored characteristic data according to the preset prompt information;
generating tag information according to the description information, and adding the tag information to the stored feature data.
3. The method of claim 1, wherein the saved feature data includes an avatar feature, and/or an audio feature, and/or a driving feature, and wherein applying the saved feature data to the second digital person to generate a third digital person comprises:
if the stored feature data is the image feature, adjusting the global image or the local image of the second digital person according to the stored feature data to generate the third digital person;
if the stored feature data is the audio feature, adjusting the pronunciation feature of the second digital person according to the stored feature data to generate the third digital person;
and if the stored characteristic data is the driving characteristic, adjusting the mouth shape driving parameter and/or the emotion driving parameter of the second digital person according to the stored characteristic data to generate the third digital person.
4. The method of claim 1, wherein after generating the third digital person, the method further comprises:
if a storage instruction of the user for the third digital person is received, generating a digital person identifier uniquely corresponding to the third digital person;
and adding the digital person identification to the third digital person, and storing the third digital person to a preset storage path.
5. The method of claim 4, wherein after saving the third digital person to a preset storage path, the method further comprises:
and acquiring a digital person replacement instruction carrying the digital person identifier, replacing the digital person in a preset digital person video template with the third digital person according to the digital person replacement instruction, and generating a digital person video corresponding to the third digital person.
6. The method of claim 1, wherein the obtaining the first digital person for which editing is complete comprises:
acquiring a target digital person selected by the user from a plurality of preset digital persons;
and editing the target digital person according to the editing instruction input by the user to obtain the first digital person.
7. The method of claim 6, wherein after obtaining the target digital person selected by the user from a plurality of preset digital persons, the method further comprises:
if no digital person video generation process corresponding to the user exists, displaying an editing interface for the target digital person;
and if a digital person video generation process corresponding to the user exists and a digital person in use corresponding to the user exists, displaying prompt information indicating that the target digital person is not editable.
8. A digital human edit control device, said device comprising:
an obtaining module, configured to obtain a first digital person for which editing is complete, wherein the first digital person includes multiple kinds of feature data that can be saved;
the storage module is used for determining at least one target characteristic data from the characteristic data according to a storage instruction of a user, storing the target characteristic data according to a preset general format and generating at least one stored characteristic data;
the determining module is used for determining a second digital person to be edited according to the calling instruction if the calling instruction of the user on the stored characteristic data is received;
and the generation module is used for applying the stored characteristic data to the second digital person to generate a third digital person.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the digital person editing control method of any of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the digital person editing control method of any one of claims 1 to 7.
CN202311043748.5A 2023-08-18 2023-08-18 Digital person editing control method, device, electronic equipment and storage medium Pending CN117351121A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311043748.5A CN117351121A (en) 2023-08-18 2023-08-18 Digital person editing control method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117351121A true CN117351121A (en) 2024-01-05

Family

ID=89362028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311043748.5A Pending CN117351121A (en) 2023-08-18 2023-08-18 Digital person editing control method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117351121A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination