CN117132711A - Digital portrait customizing method, device, equipment and storage medium - Google Patents

Digital portrait customizing method, device, equipment and storage medium Download PDF

Info

Publication number
CN117132711A
CN117132711A
Authority
CN
China
Prior art keywords
parameters
portrait
preset
appearance
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311098352.0A
Other languages
Chinese (zh)
Inventor
李媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202311098352.0A
Publication of CN117132711A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a digital portrait customizing method, device, equipment and storage medium. The method acquires initial posture data of a template person; obtains, based on the initial posture data, a face image and key point data of preset main body key points in the face image; extracts expression parameters from the face image; establishes key point coordinates according to the key point data and generates a three-dimensional human body model based on the key point coordinates; renders the expression parameters into the three-dimensional human body model to obtain an initial portrait model; constructs the appearance of the initial portrait model based on appearance materials in a preset image design database; and performs customized adjustment on the appearance of the initial portrait model based on preset portrait parameters to obtain a target digital portrait. Because the three-dimensional model is built from the acquired template portrait data and then customized and adjusted based on data from a preset material library, a personalized custom digital portrait can be generated more rapidly.

Description

Digital portrait customizing method, device, equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a digital portrait customizing method, a digital portrait customizing device, digital portrait customizing equipment and a storage medium.
Background
With the rapid development of artificial intelligence, AI technology has reached into every aspect of production and daily life. By combining several advanced technologies such as speech synthesis, speech recognition, machine translation, expression recognition, human motion recognition and high-definition image processing, virtual characters can be constructed and widely applied in many scenarios that involve interaction with people, such as news broadcasting, classroom education, caregiving and human-machine interaction. To make video interaction content vivid and engaging, different character images can be generated in a customized way, so that the special requirements of different application scenarios and different application parties can be met.
In general, a portrait model in a realistic or cartoon style is generated using computer vision or computer graphics techniques. However, common virtual digital human generation methods usually obtain the virtual digital human through image segmentation and recognition algorithms, mapping projection matrices, face recognition algorithms and the like, so the process of creating a digital portrait model involves a large amount of data acquisition and subsequent modification. As a result, creating a digital portrait always consumes a great deal of time and cannot be customized quickly.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present application provides a digital portrait customizing method, apparatus, device and storage medium, so as to solve the technical problem that creating a digital portrait always requires a great deal of time and cannot be customized quickly.
The application provides a digital portrait customizing method, which comprises the following steps: acquiring initial posture data of a template person, and obtaining, based on the initial posture data, a face image and key point data of preset main body key points in the face image; extracting expression parameters from the face image, wherein the expression parameters represent the intensity of each muscle action; establishing key point coordinates according to the key point data, generating a three-dimensional human body model based on the key point coordinates, and rendering the expression parameters into the three-dimensional human body model to obtain an initial portrait model; and constructing the appearance of the initial portrait model based on appearance materials in a preset image design database, and performing customized adjustment on the appearance of the initial portrait model based on preset portrait parameters to obtain a target digital portrait.
In one embodiment of the present application, extracting expression parameters in the face image includes: recognizing expression features in the facial image, extracting expression feature parameters of the expression features based on a preset feature extraction model, and obtaining action feature parameters of different muscle actions according to the expression feature parameters; obtaining a plurality of refined motion unit values of each expression according to the action characteristic parameters and a preset expression evaluation model; and determining all the refined motion unit values as expression parameters.
In one embodiment of the present application, before establishing the keypoint coordinates according to the keypoint data, the method further includes: obtaining standard three-dimensional coordinates of key points of a standard face main body; obtaining two-dimensional coordinates of preset main body key points based on key point data of the preset main body key points in the face image; and determining pose parameters of the template person based on the three-dimensional coordinates, the two-dimensional coordinates and a preset perspective projection method.
In one embodiment of the present application, generating a three-dimensional mannequin based on the keypoint coordinates includes: acquiring the key point coordinates and confidence coefficient values of the key point coordinates, and determining the key point coordinates and the confidence coefficient values as initial parameters; the data format of the initial parameters is arranged to obtain intermediate parameters, and the data format of the intermediate parameters is consistent with a preset data format; inputting the intermediate parameters into a preset background removal model to remove background parameters in the intermediate parameters, so as to obtain target parameters; and generating the three-dimensional human body model based on the target parameters and a preset human body modeling algorithm.
In one embodiment of the present application, rendering the expression parameters into the three-dimensional human body model includes: sending the expression parameters to a cloud rendering engine and an end-side (on-device) rendering engine, so as to control those engines to render the expression and action of the three-dimensional human body model according to the expression parameters and the pose parameters.
In one embodiment of the present application, constructing the appearance of the initial portrait model based on appearance materials in a preset image design database includes: obtaining appearance design parameters describing the target person, wherein the appearance design parameters comprise hairstyle parameters, face parameters and clothing parameters; searching the preset image design database based on the hairstyle parameters to obtain a target hairstyle, based on the face parameters to obtain a target face, and based on the clothing parameters to obtain target clothing; and rendering the target hairstyle, target face and target clothing onto the initial portrait model to construct its appearance, and determining the initial portrait model with the constructed appearance as an intermediate portrait model.
In one embodiment of the present application, the customized adjustment of the appearance of the initial portrait model based on the preset portrait parameters to obtain the target digital portrait includes: acquiring preset appearance design parameters, wherein the appearance design parameters comprise contour parameters, facial feature (five sense organs) parameters, makeup parameters and appearance parameters; adjusting the facial contour of the intermediate portrait model based on the contour parameters, wherein the contour parameters comprise parameters for both cheeks and a chin parameter; adjusting the facial features of the intermediate portrait model based on the facial feature parameters, wherein the facial feature parameters comprise eye parameters, nose parameters, mouth parameters, ear parameters and eyebrow parameters, the eye parameters comprise eye size adjustment, eye distance adjustment and rotation degree adjustment, the eyebrow parameters comprise eyebrow distance adjustment and position adjustment, and the nose parameters comprise nose bridge adjustment, nose length adjustment and nose width adjustment; adjusting the makeup of the intermediate portrait model based on the makeup parameters, wherein the makeup parameters comprise eyebrow makeup parameters, eye makeup parameters and lip makeup parameters, the eye makeup parameters comprise eyeliner tone adjustment and under-eye (lying silkworm) tone adjustment, and the lip makeup parameters comprise lipstick thickness adjustment and lip gloss parameters; and adjusting the appearance of the intermediate portrait model based on the appearance parameters, wherein the appearance parameters comprise hairstyle parameters and clothing parameters, the hairstyle parameters comprise hair length, hair curliness and hair color parameters, and the clothing parameters comprise clothing style and clothing color parameters.
The application provides a digital portrait customizing device, which comprises: a data acquisition module for acquiring initial posture data of the template character and obtaining a face image and character main body key point data based on the initial posture data; a data processing module for extracting expression parameters from the face image, wherein the expression parameters represent the intensity of each muscle action; a model construction module for establishing key point coordinates according to the character main body key point data, generating a three-dimensional human body model based on the key point coordinates, and rendering the expression parameters into the three-dimensional human body model to obtain an initial portrait model; and a customizing module for constructing the appearance of the initial portrait model based on appearance materials in a preset database and adjusting the appearance of the initial portrait model based on preset portrait parameters to obtain a target digital portrait.
The application provides an electronic device, which comprises: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the digital portrait customization method as described above.
The present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the digital portrait customizing method as described above.
The application has the following beneficial effects. The digital portrait customizing method, device, equipment and storage medium of the application acquire initial posture data of a template person; obtain a face image and key point data of preset main body key points in the face image; extract expression parameters from the face image; establish key point coordinates according to the key point data and generate a three-dimensional human body model based on them; render the expression parameters into the three-dimensional human body model to obtain an initial portrait model; construct the appearance of the initial portrait model based on appearance materials in a preset image design database; and customize and adjust the appearance of the initial portrait model based on preset portrait parameters to obtain a target digital portrait. Because the three-dimensional model is built from the acquired template portrait data and then customized and adjusted based on data from a preset material library, a personalized custom digital portrait can be generated more rapidly.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is evident that the drawings in the following description are only some embodiments of the present application and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 is a schematic diagram of an implementation environment of a digital portrait customization method according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a digital portrait customization method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a digital portrait customization method according to an exemplary embodiment of the present application;
FIG. 4 is a block diagram of a digital portrait customizing apparatus according to an exemplary embodiment of the present application;
fig. 5 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the application.
Detailed Description
Further advantages and effects of the present application will become readily apparent to those skilled in the art from the disclosure herein, by referring to the accompanying drawings and the preferred embodiments. The application may also be practiced or carried out in other embodiments, and the details of the present description may be modified or varied without departing from the spirit and scope of the present application. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present application. The drawings show only the components related to the present application and are not drawn according to the number, shape and size of the components in an actual implementation; in practice the form, number and proportion of the components may vary arbitrarily, and their layout may be more complicated.
In the following description, numerous details are set forth in order to provide a more thorough explanation of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail, in order to avoid obscuring the embodiments of the present application.
Firstly, it should be noted that interaction between virtual portraits and users is already an important part of current production and daily life, and the functions of virtual portraits are increasingly complete. Users can interact with a digital human through voice, text and other forms; the virtual digital human drives changes in facial expression, mouth shape and limb movement through algorithms, and responds to the user with synchronized sound. Digital humans are therefore now widely used in government affairs, finance, scenic spots, e-commerce and other scenarios, for example providing explanation services in scenic spots, or customer consultation services on e-commerce websites and in other related industries or posts.
FACS (Facial Action Coding System) defines 44 facial action units (AUs). Combinations of these AUs can represent all possible facial expressions (including frowning, pouting, etc.); AUs are the building blocks of facial expression.
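As an illustrative sketch (not part of the patent), an expression can be represented as a mapping from AU numbers to intensities. The 0 to 5 intensity scale and the clamping helper below are assumptions for illustration only:

```python
# Illustrative sketch, not from the patent: an expression represented as a
# set of FACS action-unit (AU) intensities. The 0-5 scale is an assumption.

def combine_expression(au_intensities):
    """Clamp raw AU intensities into a 0-5 FACS-style range."""
    return {au: max(0.0, min(5.0, v)) for au, v in au_intensities.items()}

# A smile is commonly described as AU6 (cheek raiser) + AU12 (lip corner puller).
smile = combine_expression({6: 3.2, 12: 4.8, 4: -0.5})
print(smile)  # the negative AU4 input is clamped to 0.0
```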
OpenPose algorithm: OpenPose is a bottom-up detection algorithm; it first detects key points and then groups them into people. It is an open-source library based on convolutional neural networks and supervised learning, developed with Caffe as its framework. It can estimate human body poses, facial expressions, finger movements and the like, is suitable for both single-person and multi-person scenes, and has excellent robustness.
COCO (Common Objects in Context) is a dataset that can be used for image recognition; the images in the MS COCO dataset are divided into training, validation and test sets.
A background removal algorithm is a general term for algorithms that remove irrelevant background information from an image. Its function is to remove irrelevant (interfering) information in the image, so as to reduce the amount of data to be processed and save computing resources. The Pose2Seg background removal algorithm is one such algorithm.
Perspective projection is a form of projection. "Perspective" means observing an object through a transparent body. A transparent plane set between the viewer and the object is called the picture plane (the projection plane); the position of the viewer's eye is called the viewpoint (the projection center); a line from the viewpoint to a point on the object is called a line of sight (a projection line). The intersection of each line of sight with the picture plane is the perspective projection of the corresponding point on the object, and connecting these projected points yields the perspective view of the object. It follows that every projection line passes through the viewpoint, so perspective projection is a central projection. A perspective view made according to perspective projection is the most realistic depiction of an object as seen by the human eye. Weak perspective projection is a special case that further simplifies the pinhole imaging model.
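The difference between full perspective and its weak-perspective simplification can be sketched numerically. The symbols below (focal length f, point (X, Y, Z), average object depth z_avg) are illustrative assumptions, not notation from the patent:

```python
# Sketch of pinhole (full perspective) vs. weak-perspective projection.

def perspective_project(point, f):
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)          # divide by each point's own depth

def weak_perspective_project(point, f, z_avg):
    X, Y, _ = point
    s = f / z_avg                          # one global scale replaces per-point division
    return (s * X, s * Y)

p = (1.0, 2.0, 10.0)
print(perspective_project(p, 1.0))             # (0.1, 0.2)
print(weak_perspective_project(p, 1.0, 10.0))  # identical when Z equals z_avg
```

The two models agree exactly when every point lies at the average depth, which is why weak perspective works well for faces, whose depth variation is small relative to camera distance.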
SMPLify-X is a fitting method for the SMPL-X body model; its main function is to obtain a model with the current body shape, pose and expression directly from an RGB image and OpenPose key points.
FIG. 1 is a schematic diagram of an implementation environment of a digital portrait customization method according to an exemplary embodiment of the present application. As shown in FIG. 1, the implementation environment of the digital portrait customizing method includes a data acquisition device 101 and a computer device 102. The data acquisition device 101 is configured to collect template portrait data, setting data for the template portrait, and basic materials, and to send all collected data to the computer device 102. It should be noted that the data acquisition device 101 includes an image acquisition device, a text acquisition device and other information acquisition devices; the image acquisition device may be an optical photographic device or any device that can acquire image data, such as a point cloud acquisition device, so the present application does not limit the type or model of the information acquisition device. In addition, the computer device 102 is configured to construct a virtual portrait model based on the received template portrait data and a preset algorithm configured in the computer device 102, and to customize the virtual portrait model based on the setting data and the basic materials to obtain a target portrait model. The computer device 102 may be at least one of a desktop computer with a graphics processing unit (GPU), a GPU computing cluster, a neural network computer and the like, or may be an intelligent processor integrated in the current vehicle, which is not limited here.
FIG. 2 is a flow chart of a digital portrait customization method according to an exemplary embodiment of the present application.
As shown in fig. 2, in an exemplary embodiment, the digital portrait customizing method at least includes steps S210 to S240, which are described in detail as follows:
step S210, initial gesture data of the template person are obtained, and a face image and key point data of preset main body key points in the face image are obtained based on the initial gesture data.
In one embodiment of the application, the initial posture data of the template character is acquired with three-dimensional scanners. Several three-dimensional scanning devices are arranged at different angles to capture the person, and the scanners simultaneously capture three-dimensional data of different parts of the body. In addition, during collection the person is asked to take poses in multiple dimensions (standing, sitting, lying, or any other pose) so as to obtain three-dimensional data of the person in different postures. The aim is to collect as much template portrait data as possible from different viewing angles and different character states, to facilitate subsequent data processing and model construction.
Step S220, extracting expression parameters from the face image, wherein the expression parameters represent the intensity of each muscle action.
In one embodiment of the present application, extracting expression parameters in a face image includes: identifying expression features in the facial image, extracting expression feature parameters of the expression features based on a preset feature extraction model, and obtaining action feature parameters of different muscle actions according to the expression feature parameters; obtaining a plurality of refined motion unit values of each expression according to the action characteristic parameters and a preset expression evaluation model; and determining all the refined motion unit values as expression parameters.
In a specific embodiment of the present application, the face image is first segmented from the acquired template portrait data, and then the expression parameters in the face image are extracted. The extraction proceeds as follows: first, the expression feature information in the face image is extracted by a preset expression feature extraction model; then, the action characteristics of the different muscles in the face image are determined from the expression feature information; finally, the refined motion unit (AU) values corresponding to each expression are determined based on a preset expression evaluation model, and these AU values are determined as the expression parameters.
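The three-stage flow just described (feature extraction, muscle action features, AU estimation) can be sketched as follows. The stub models stand in for the preset feature-extraction and expression-evaluation models, whose internals the patent does not specify, and all names here are assumptions:

```python
# Minimal sketch of the extraction flow: features -> muscle actions -> AU values.

def extract_expression_params(face_image, feature_model, eval_model):
    features = feature_model(face_image)                       # expression feature parameters
    muscle_actions = {k: abs(v) for k, v in features.items()}  # action features per muscle
    au_values = eval_model(muscle_actions)                     # refined motion-unit (AU) values
    return au_values                                           # the AU values are the expression parameters

# Stub models standing in for the preset models:
feature_model = lambda img: {"zygomaticus": 0.8, "corrugator": -0.1}
eval_model = lambda acts: {12: round(acts["zygomaticus"] * 5, 1),
                           4: round(acts["corrugator"] * 5, 1)}

print(extract_expression_params(None, feature_model, eval_model))
```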
Step S230, key point coordinates are established according to the key point data, a three-dimensional human body model is generated based on the key point coordinates, and expression parameters are rendered into the three-dimensional human body model to obtain an initial portrait model.
In one embodiment of the present application, before establishing the key point coordinates according to the key point data, the method further includes: obtaining standard three-dimensional coordinates of key points of a standard face main body; obtaining two-dimensional coordinates of preset main body key points based on key point data of the preset main body key points in the face image; and determining pose parameters of the template person based on the three-dimensional coordinates and the two-dimensional coordinates and a preset perspective projection method.
In one embodiment of the present application, weak perspective projection is used as an example. First, a face image corresponding to the template portrait data is obtained; then, the two-dimensional coordinates of the key points on the face image are acquired, and the pose parameters are determined by the weak perspective projection method from those two-dimensional coordinates and the three-dimensional coordinates of the key points of a pre-defined standard face. It should be noted that the weak perspective projection method proposed in this embodiment is only exemplary and does not limit the specific implementation of the present application.
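Under the weak perspective model, recovering pose parameters from matched 2D/3D key points reduces to solving for a scale and a 2D translation. The least-squares sketch below omits rotation for brevity; the function and variable names are assumptions, not the patent's method:

```python
# Sketch: recover weak-perspective scale s and translation t from matched
# 3D standard-face keypoints and 2D image keypoints (rotation omitted).

def solve_weak_perspective(points3d, points2d):
    n = len(points3d)
    cx3 = sum(p[0] for p in points3d) / n
    cy3 = sum(p[1] for p in points3d) / n
    cx2 = sum(p[0] for p in points2d) / n
    cy2 = sum(p[1] for p in points2d) / n
    # Least-squares scale relating centered 3D (X, Y) to centered 2D (x, y):
    num = sum((p2[0] - cx2) * (p3[0] - cx3) + (p2[1] - cy2) * (p3[1] - cy3)
              for p3, p2 in zip(points3d, points2d))
    den = sum((p3[0] - cx3) ** 2 + (p3[1] - cy3) ** 2 for p3 in points3d)
    s = num / den
    t = (cx2 - s * cx3, cy2 - s * cy3)
    return s, t

pts3d = [(0, 0, 1), (2, 0, 1), (0, 2, 1), (2, 2, 1)]
pts2d = [(1, 1), (5, 1), (1, 5), (5, 5)]  # constructed with scale 2, shift (1, 1)
print(solve_weak_perspective(pts3d, pts2d))  # (2.0, (1.0, 1.0))
```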
In one embodiment of the application, generating a three-dimensional mannequin based on keypoint coordinates includes: acquiring key point coordinates and confidence coefficient values of the key point coordinates, and determining the key point coordinates and the confidence coefficient values as initial parameters; the data format of the initial parameters is arranged to obtain intermediate parameters, and the data format of the intermediate parameters is consistent with a preset data format; inputting the intermediate parameters into a preset background removal model to remove background parameters in the intermediate parameters, so as to obtain target parameters; and generating a three-dimensional human body model based on the target parameters and a preset human body modeling algorithm.
In one embodiment of the application, the 3D model is constructed using the OpenPose algorithm, the COCO dataset format, the Pose2Seg background removal algorithm, and the SMPLify-X modeling algorithm. First, the OpenPose algorithm, which can identify 25 key body parts, is used to acquire the coordinates of the human body key points and the confidence value of the object in each human bounding box. Then, the OpenPose results are arranged according to the format of the COCO dataset. Next, the character main body key point data arranged in COCO format is fed into the Pose2Seg background removal algorithm to remove the background from the human body region. Finally, the background-removed human body region and the skeletal key point data are fed into the SMPLify-X modeling algorithm to obtain the 3D model of the human body, i.e., the three-dimensional human body model.
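The four-stage pipeline above can be sketched end to end. The stage functions here are stubs standing in for the real OpenPose, Pose2Seg and SMPLify-X components, and the COCO-style record layout is an assumption:

```python
# High-level sketch of the modelling pipeline: keypoints -> COCO format ->
# background removal -> SMPL-X fitting. All stages are injectable stubs.

def build_body_model(image, detect_keypoints, remove_background, fit_smplx):
    kps = detect_keypoints(image)                    # OpenPose-style: (x, y, confidence) per keypoint
    coco_record = {"keypoints": [c for kp in kps for c in kp]}  # flatten to COCO [x, y, conf, ...]
    person_region = remove_background(image, coco_record)       # Pose2Seg stage
    return fit_smplx(person_region, coco_record)                # SMPLify-X stage

# Stub stages so the pipeline can be exercised:
detect = lambda img: [(10.0, 20.0, 0.9)]
seg = lambda img, rec: "person-only-region"
fit = lambda region, rec: {"region": region, "n_values": len(rec["keypoints"])}

print(build_body_model("frame.png", detect, seg, fit))
```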
In one embodiment of the application, rendering the expression parameters into the three-dimensional human body model includes: sending the expression parameters to a cloud rendering engine and an end-side (on-device) rendering engine, so as to control those engines to render the expression and action of the three-dimensional human body model according to the expression parameters and the pose parameters.
Step S240, the appearance of the initial portrait model is built based on the appearance materials in the preset portrait design database, and customized adjustment is carried out on the appearance of the initial portrait model based on preset portrait parameters so as to obtain the target digital portrait.
In one embodiment of the present application, constructing the appearance of the initial portrait model based on appearance materials in the preset image design database includes: obtaining appearance design parameters of the target person, wherein the appearance design parameters comprise hairstyle parameters, face parameters and clothing parameters; searching the preset image design database based on the hairstyle parameters to obtain a target hairstyle, searching the preset image design database based on the face parameters to obtain a target face, and searching the preset image design database based on the clothing parameters to obtain target clothing; and rendering the target hairstyle, the target face and the target clothing onto the initial portrait model to construct the appearance of the initial portrait model, and determining the initial portrait model with the constructed appearance as an intermediate portrait model.
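A minimal sketch of the lookup described above, with an in-memory dict standing in for the preset image design database; all keys and asset paths are illustrative placeholders.

```python
# Toy stand-in for the preset image design database: one table per
# material category, mapping a design parameter to an asset path.
IMAGE_DESIGN_DB = {
    "hairstyle": {"short_curly": "assets/hair/short_curly.glb"},
    "face": {"oval": "assets/face/oval.glb"},
    "clothing": {"business_suit": "assets/clothing/business_suit.glb"},
}

def lookup_materials(hairstyle, face, clothing):
    """Resolve the three design parameters to concrete assets (None if absent)."""
    return {
        "hairstyle": IMAGE_DESIGN_DB["hairstyle"].get(hairstyle),
        "face": IMAGE_DESIGN_DB["face"].get(face),
        "clothing": IMAGE_DESIGN_DB["clothing"].get(clothing),
    }
```

The three resolved assets correspond to the target hairstyle, target face and target clothing that are then rendered onto the initial portrait model.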
In another embodiment of the present application, performing customized adjustment on the appearance of the initial portrait model based on preset portrait parameters to obtain the target digital portrait includes: acquiring preset appearance design parameters, wherein the appearance design parameters comprise contour parameters, facial feature (five sense organs) parameters, makeup parameters and appearance parameters; adjusting the facial contour of the intermediate portrait model based on the contour parameters, wherein the contour parameters comprise two cheek parameters and a chin parameter; adjusting the facial features of the intermediate portrait model based on the facial feature parameters, wherein the facial feature parameters comprise eye parameters, nose parameters, mouth parameters, ear parameters and eyebrow parameters, the eye parameters comprise eye size adjustment, eye distance adjustment and rotation degree adjustment, the eyebrow parameters comprise eyebrow distance adjustment and eyebrow position adjustment, and the nose parameters comprise nose bridge adjustment, nose length adjustment and nose width adjustment; adjusting the makeup of the intermediate portrait model based on the makeup parameters, wherein the makeup parameters comprise eyebrow makeup parameters, eye makeup parameters and lip makeup parameters, the eye makeup parameters comprise eyeliner tone adjustment parameters and lying-silkworm (under-eye) tone adjustment parameters, and the lip makeup parameters comprise lipstick thickness adjustment and lip gloss parameters; and adjusting the appearance of the intermediate portrait model based on the appearance parameters, wherein the appearance parameters comprise hairstyle parameters and clothing parameters, the hairstyle parameters comprise hair length parameters, hair curliness parameters and hair color parameters, and the clothing parameters comprise clothing style parameters and clothing color parameters.
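The adjustment dimensions enumerated above can be grouped as nested parameter structures. The sketch below models a small subset (eyes and nose); the default values, scale semantics and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical grouping of a subset of the customization parameters
# described above; value ranges are illustrative assumptions.

@dataclass
class EyeParams:
    size: float = 1.0      # eye size scale (1.0 = unchanged)
    distance: float = 1.0  # inter-eye distance scale
    rotation: float = 0.0  # rotation degree, in degrees

@dataclass
class FacialFeatureParams:
    eyes: EyeParams = field(default_factory=EyeParams)
    nose_bridge: float = 1.0
    nose_length: float = 1.0
    nose_width: float = 1.0

def apply_customization(model: dict, features: FacialFeatureParams) -> dict:
    """Write the adjustment values into a (toy) model dict."""
    model["eye_size"] = features.eyes.size
    model["nose_width"] = features.nose_width
    return model
```

Grouping the parameters this way keeps each adjustment dimension (contour, facial features, makeup, appearance) independently settable, mirroring the multi-dimensional adjustment in the text.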
FIG. 3 is a schematic diagram of a digital portrait customization method according to an exemplary embodiment of the present application; as shown in fig. 3, the digital portrait customizing method mainly comprises portrait data acquisition, data preprocessing, material library creation, human body 3D model establishment, digital portrait appearance construction, and multi-dimensional custom digital portrait setting.
In one embodiment of the application, data acquisition is first carried out: a plurality of three-dimensional scanning devices are arranged at a plurality of angles to capture three-dimensional data of different parts of a person, and during the acquisition of the person's three-dimensional data, the person is posed in different three-dimensional gestures to obtain three-dimensional person data for the different poses.

Then, the acquired data is preprocessed: the captured template person data is obtained, a face image corresponding to the template person data is derived from it, expression parameters corresponding to the face image are extracted from the face image, and the obtained expression parameters are sent to a cloud rendering and end rendering engine, wherein the expression parameters represent the intensity of each muscle action in the face image and instruct the cloud rendering and end rendering engine to render the expression and action of the virtual portrait.

Next, a 3D human body model is constructed based on the obtained template portrait data and expression parameters: key points of the human body are obtained in the human body region to be processed based on the target three-dimensional portrait data, the key point coordinates of the human body are established and arranged, the background is removed by a background removal algorithm, the background-removed human body region together with the key point coordinates is sent to a 3D modeling algorithm to obtain a 3D model of the human body, and the processed face image is imported into the 3D model to obtain an initial portrait model. The key points of the human body are preset key points, distinguished according to different target task models, and are generally human skeleton key points, that is, key parts of the human body such as joints and the five facial features.

Meanwhile, a private material library is established based on related material information, including but not limited to a hairstyle material library, face materials and clothing materials; the materials can be downloaded and stored over a network, providing a large number of hairstyle, face and clothing materials for creating digital human figures.

Then, the appearance of the obtained initial portrait is constructed based on the material library: hairstyle materials, face materials and clothing materials from the library are applied to the 3D model of the digital person (that is, the initial portrait model), and the digital person model is rendered and re-dressed with these materials to obtain an intermediate portrait model. At this stage, the style and appearance of the avatar can be customized according to the usage scenario and the customer's requirements, making the personalized avatar more distinctive and allowing a digital person with a customized image to be generated conveniently and quickly.

Finally, based on the constructed intermediate portrait model, multi-dimensional custom adjustment of the digital portrait is performed: digital portrait parameters are adjusted during modeling, covering the facial contour, eyes, eyelashes, nose, makeup, hairstyle and clothing, and the target digital portrait is generated after these settings are completed.
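The end-to-end flow just described (acquisition, preprocessing, 3D modeling, expression rendering, appearance construction, multi-dimensional customization) can be sketched as a simple orchestration. Every stage function here is a trivial hypothetical stub; the surrounding text names the actual algorithms that would back each stage.

```python
# High-level orchestration sketch of the stages in FIG. 3. Each stub
# below is a placeholder for the real algorithm named in the text.

def customize_digital_portrait(raw_scan, design_params, custom_params):
    face_image, keypoints = preprocess(raw_scan)          # data preprocessing
    expression = extract_expression(face_image)           # expression parameters
    body_model = build_body_model(keypoints)              # human body 3D model
    initial = render_expression(body_model, expression)   # initial portrait model
    intermediate = apply_materials(initial, design_params)  # appearance construction
    return adjust_appearance(intermediate, custom_params)   # multi-dim customization

# Trivial stubs so the flow is runnable end to end.
def preprocess(raw): return ({"face": raw}, [(0, 0)])
def extract_expression(img): return {"AU12": 0.5}
def build_body_model(kps): return {"keypoints": kps}
def render_expression(model, expr): return {**model, "expression": expr}
def apply_materials(model, params): return {**model, **params}
def adjust_appearance(model, params): return {**model, **params}
```

The sketch makes the data dependencies explicit: appearance materials are applied only after the expression-rendered initial portrait model exists, and per-customer fine adjustment is the last stage.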
The digital portrait customizing method provided by the application uses real human body data to quickly build the virtual body of a digital human, which improves the efficiency of building the digital human body model. Key points of the character's main body are established from the body in the acquired three-dimensional data, and the background irrelevant to the character's body is removed, improving the smoothness of the body model. Material-based appearance construction is then performed on the built digital human body model: with the hairstyle materials, face materials and clothing materials in the material library, the portrait appearance of the digital human can be changed quickly and customized in quantity according to portrait requirements. The digital human with a customized appearance can further be adjusted through multi-dimensional custom settings, with adjustment dimensions covering the facial contour, eyes, eyelashes, nose, makeup, hairstyle and clothing, yielding a finely crafted digital portrait; rapid face, hairstyle and clothing changes can be performed on the human body model data, thereby achieving rapid digital portrait customization.
Fig. 4 is a block diagram of a digital portrait customizing apparatus according to an exemplary embodiment of the present application. The apparatus may be applied to the implementation environment shown in fig. 1, may also be adapted to other exemplary implementation environments, and may be specifically configured in other devices; this embodiment does not limit the implementation environments to which the apparatus is adapted.
As shown in fig. 4, the exemplary digital portrait customizing apparatus includes: a data acquisition module 410, a data processing module 420, a model construction module 430, and a customization module 440.
The data acquisition module 410 is configured to acquire initial posture data of a template character, and to obtain a face image and character main body key point data based on the initial posture data; the data processing module 420 is configured to extract expression parameters from the face image, wherein the expression parameters represent the intensity of each muscle action; the model construction module 430 is configured to establish key point coordinates from the character main body key point data, generate a three-dimensional human body model based on the key point coordinates, and render the expression parameters into the three-dimensional human body model to obtain an initial portrait model; the customizing module 440 is configured to construct the appearance of the initial portrait model based on appearance materials in a preset database, and to adjust the appearance of the initial portrait model based on preset portrait parameters to obtain the target digital portrait.
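The four modules described above can be sketched as methods of one class wired together in sequence. Class and method names below are illustrative, not taken from the patent; each method body is a trivial placeholder.

```python
# Sketch mirroring the four modules of the customizing apparatus (FIG. 4).

class DigitalPortraitDevice:
    def acquire(self, template):             # data acquisition module 410
        return {"face_image": template, "keypoints": [(0, 0)]}

    def extract_expression(self, face):      # data processing module 420
        return {"AU01": 0.3}                 # placeholder muscle intensity

    def build_model(self, keypoints, expr):  # model construction module 430
        return {"keypoints": keypoints, "expression": expr}

    def customize(self, model, params):      # customization module 440
        return {**model, **params}

    def run(self, template, params):
        """Chain the four modules into one customization pass."""
        data = self.acquire(template)
        expr = self.extract_expression(data["face_image"])
        model = self.build_model(data["keypoints"], expr)
        return self.customize(model, params)
```

Keeping each module as a separate method reflects the point made below: the internal structure can be re-divided into different functional modules without changing the overall flow.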
It should be noted that, the digital portrait customizing device provided by the above embodiment and the digital portrait customizing method provided by the above embodiment belong to the same concept, where the specific manner of executing the operations by each module and unit has been described in detail in the method embodiment, and will not be described here again. In practical application, the digital portrait customizing device provided in the above embodiment may distribute the functions to be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above, which is not limited herein.
The embodiment of the application also provides electronic equipment, which comprises: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the digital portrait customizing method provided in the above embodiments.
Fig. 5 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the application. It should be noted that, the computer system 500 of the electronic device shown in fig. 5 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 5, the computer system 500 includes a central processing unit (Central Processing Unit, CPU) 501, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (Random Access Memory, RAM) 503. In the RAM 503, various programs and data required for the system operation are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504. An Input/Output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage section 508 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the various functions defined in the system of the present application are performed.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform a digital portrait customizing method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment or may exist alone without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the digital portrait customizing method provided in the above embodiments.
The above embodiments are merely illustrative of the principles of the present application and its effects, and are not intended to limit the application. Modifications and variations may be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the application. It is therefore intended that all equivalent modifications and changes made by those skilled in the art without departing from the spirit and technical ideas of the present application shall be covered by the appended claims.

Claims (10)

1. A method for customizing a digital portrait, the method comprising:
acquiring initial posture data of a template person, and acquiring a face image and key point data of preset main body key points in the face image based on the initial posture data;
extracting expression parameters in the face image, wherein the expression parameters represent the intensity degree of each muscle action;
establishing key point coordinates according to the key point data, generating a three-dimensional human body model based on the key point coordinates, and rendering the expression parameters into the three-dimensional human body model to obtain an initial portrait model;
and constructing the appearance of the initial portrait model based on appearance materials in a preset image design database, and performing customized adjustment on the appearance of the initial portrait model based on preset portrait parameters so as to obtain a target digital portrait.
2. The digital portrait customizing method as claimed in claim 1, wherein extracting expression parameters in the face image comprises:
identifying expression features in the face image;
extracting expression characteristic parameters of the expression characteristics based on a preset characteristic extraction model, and obtaining action characteristic parameters of different muscle actions according to the expression characteristic parameters;
obtaining a plurality of refined motion unit values of each expression according to the action characteristic parameters and a preset expression evaluation model;
and determining all the refined motion unit values as expression parameters.
3. The digital portrait customization method according to claim 1, further comprising, before establishing keypoint coordinates from the keypoint data:
obtaining standard three-dimensional coordinates of key points of a standard face main body;
obtaining two-dimensional coordinates of preset main body key points based on key point data of the preset main body key points in the face image;
and determining pose parameters of the template person based on the three-dimensional coordinates, the two-dimensional coordinates and a preset perspective projection method.
4. The digital portrait customization method according to claim 3, characterized in that generating a three-dimensional human body model based on the key point coordinates includes:
acquiring the key point coordinates and confidence values of the key point coordinates, and determining the key point coordinates and the confidence values as initial parameters;
converting the data format of the initial parameters to obtain intermediate parameters, wherein the data format of the intermediate parameters is consistent with a preset data format;
inputting the intermediate parameters into a preset background removal model to remove the background parameters in the intermediate parameters, thereby obtaining target parameters;
and generating the three-dimensional human body model based on the target parameters and a preset human body modeling algorithm.
5. The digital portrait customization method according to claim 3, characterized in that rendering the expression parameters into the three-dimensional human body model includes:
sending the expression parameters to a cloud rendering and end rendering engine to control the cloud rendering and end rendering engine to render the expression and action of the three-dimensional human body model according to the expression parameters and the pose parameters.
6. The digital portrait customization method according to any one of claims 1 to 5, wherein constructing the appearance of the initial portrait model based on appearance materials in a preset image design database includes:
obtaining appearance design parameters of the target person, wherein the appearance design parameters comprise hairstyle parameters, face parameters and clothing parameters;
searching the preset image design database based on the hairstyle parameters to obtain a target hairstyle, searching the preset image design database based on the face parameters to obtain a target face, and searching the preset image design database based on the clothing parameters to obtain target clothing;
and rendering the target hairstyle, the target face and the target clothing onto the initial portrait model to construct the appearance of the initial portrait model, and determining the initial portrait model with the constructed appearance as an intermediate portrait model.
7. The digital portrait customizing method according to claim 6, wherein performing customized adjustment on the appearance of the initial portrait model based on preset portrait parameters to obtain a target digital portrait comprises:
acquiring preset appearance design parameters, wherein the appearance design parameters comprise contour parameters, facial feature (five sense organs) parameters, makeup parameters and appearance parameters;
adjusting the facial contour of the intermediate portrait model based on the contour parameters, wherein the contour parameters comprise two cheek parameters and a chin parameter;
adjusting the facial features of the intermediate portrait model based on the facial feature parameters, wherein the facial feature parameters comprise eye parameters, nose parameters, mouth parameters, ear parameters and eyebrow parameters, the eye parameters comprise eye size adjustment, eye distance adjustment and rotation degree adjustment, the eyebrow parameters comprise eyebrow distance adjustment and eyebrow position adjustment, and the nose parameters comprise nose bridge adjustment, nose length adjustment and nose width adjustment;
adjusting the makeup of the intermediate portrait model based on the makeup parameters, wherein the makeup parameters comprise eyebrow makeup parameters, eye makeup parameters and lip makeup parameters, the eye makeup parameters comprise eyeliner tone adjustment parameters and lying-silkworm (under-eye) tone adjustment parameters, and the lip makeup parameters comprise lipstick thickness adjustment and lip gloss parameters;
and adjusting the appearance of the intermediate portrait model based on the appearance parameters, wherein the appearance parameters comprise hairstyle parameters and clothing parameters, the hairstyle parameters comprise hair length parameters, hair curliness parameters and hair color parameters, and the clothing parameters comprise clothing style parameters and clothing color parameters.
8. A digital portrait customizing apparatus, the apparatus comprising:
the data acquisition module is used for acquiring initial posture data of the template character and acquiring a face image and character main body key point data based on the initial posture data;
the data processing module is used for extracting expression parameters in the face image, wherein the expression parameters represent the intensity degree of each muscle action;
the model construction module is used for establishing key point coordinates according to the key point data of the character main body, generating a three-dimensional human body model based on the key point coordinates, and rendering the expression parameters into the three-dimensional human body model to obtain an initial portrait model;
and the customizing module is used for constructing the appearance of the initial portrait model based on appearance materials in a preset database and adjusting the appearance of the initial portrait model based on preset portrait parameters so as to obtain a target digital portrait.
9. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the digital portrait customizing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the digital portrait customizing method according to any one of claims 1 to 7.
CN202311098352.0A 2023-08-29 2023-08-29 Digital portrait customizing method, device, equipment and storage medium Pending CN117132711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311098352.0A CN117132711A (en) 2023-08-29 2023-08-29 Digital portrait customizing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311098352.0A CN117132711A (en) 2023-08-29 2023-08-29 Digital portrait customizing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117132711A true CN117132711A (en) 2023-11-28

Family

ID=88862489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311098352.0A Pending CN117132711A (en) 2023-08-29 2023-08-29 Digital portrait customizing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117132711A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117348736A (en) * 2023-12-06 2024-01-05 彩讯科技股份有限公司 Digital interaction method, system and medium based on artificial intelligence
CN117348736B (en) * 2023-12-06 2024-03-19 彩讯科技股份有限公司 Digital interaction method, system and medium based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination