CN116137080A - Virtual image rendering method, device, system, electronic equipment, medium and product - Google Patents


Info

Publication number
CN116137080A
Authority
CN
China
Prior art keywords
face
avatar
image block
face image
prediction result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111369600.1A
Other languages
Chinese (zh)
Inventor
李少君
李明
谢申汝
王明
丁云
裴峥
Current Assignee
Pateo Connect Nanjing Co Ltd
Original Assignee
Pateo Connect Nanjing Co Ltd
Priority date
Filing date
Publication date
Application filed by Pateo Connect Nanjing Co Ltd filed Critical Pateo Connect Nanjing Co Ltd
Priority to CN202111369600.1A priority Critical patent/CN116137080A/en
Publication of CN116137080A publication Critical patent/CN116137080A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides an avatar rendering method, apparatus, system, electronic device, storage medium, and program product. The rendering method includes: acquiring a face image block of a user and acquiring a face motion unit coding list corresponding to the face image block; predicting facial features of the face image block to obtain a prediction result; determining and displaying an avatar recommendation list corresponding to the prediction result; acquiring a target avatar selected by the user from the avatar recommendation list; and rendering the face motion unit coding list onto the face of the target avatar to generate a personalized target avatar. The embodiments of the invention simplify the operation steps for generating a personalized avatar, reduce the modeling cost of avatars, and enrich avatar diversity.

Description

Virtual image rendering method, device, system, electronic equipment, medium and product
Technical Field
The present invention relates to image processing technology, and more particularly, to a method, apparatus, system, electronic device, computer readable storage medium, and computer program product for rendering an avatar.
Background
An avatar is an imaginary character that does not exist in reality. Avatars are currently widely used in social networking, live streaming, games, cartoons, movies, and other scenarios. In the related art, each avatar is modeled by hand, and every avatar and each of its expressions must be produced separately, making the production process time-consuming, labor-intensive, and expensive. The rendering effect of each avatar also has to be customized individually, resulting in high cost, low scalability, no real-time interactive rendering, and a complicated process for generating a personalized avatar.
Disclosure of Invention
The invention provides an avatar rendering method, apparatus, system, electronic device, computer-readable storage medium, and computer program product, which at least solve the technical problems in the related art that each avatar and its expressions must be produced separately at high cost, cannot be rendered in real time, and require complicated operation steps to generate a personalized avatar. The technical scheme of the invention is as follows:
according to a first aspect of an embodiment of the present invention, there is provided a rendering method of an avatar, including:
acquiring a face image block of a user and acquiring a face motion unit coding list corresponding to the face image block;
predicting facial features of the face image block to obtain a prediction result;
determining and displaying an avatar recommendation list corresponding to the prediction result;
acquiring a target avatar selected by the user from the avatar recommendation list;
and rendering the face motion unit coding list onto the face of the target avatar to generate a personalized target avatar.
Optionally, predicting the facial features of the face image block to obtain a prediction result includes:
performing age prediction on the facial features of the face image block to obtain an age prediction result of the face image block; and
performing gender prediction on the facial features of the face image block to obtain a gender prediction result of the face image block.
Optionally, performing age prediction on the facial features of the face image block to obtain an age prediction result includes:
performing age prediction on the facial features of the face image block through an age-bracket classification model or an age-bracket recognition algorithm to obtain the age prediction result of the face image block;
and performing gender prediction on the facial features of the face image block to obtain a gender prediction result includes:
performing gender prediction on the facial features of the face image block through a gender classification model or a gender recognition algorithm to obtain the gender prediction result of the face image block.
Optionally, determining and displaying the avatar recommendation list corresponding to the prediction result includes:
retrieving a corresponding avatar recommendation list from an avatar material library according to the age prediction result and the gender prediction result of the face image block;
and displaying the avatar recommendation list to the user so that the user can select a target avatar.
Optionally, obtaining the face motion unit coding list corresponding to the face image block includes:
acquiring the face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library; or
performing face motion unit classification on the face image block to obtain the face motion unit coding list.
Optionally, after performing face motion unit classification on the face image block, the method further includes:
storing the obtained face motion unit coding list in a cloud face motion unit coding library.
According to a second aspect of an embodiment of the present invention, there is provided an avatar rendering apparatus including:
the first acquisition module is used for acquiring a face image block of a user;
the second acquisition module is used for acquiring a face motion unit coding list corresponding to the face image block;
the prediction module is used for predicting the facial features of the face image block to obtain a prediction result;
a determining module, configured to determine an avatar recommendation list corresponding to the prediction result;
the display module is used for displaying the virtual image recommendation list corresponding to the prediction result, which is determined by the determination module;
a third acquisition module for acquiring a target avatar selected by the user from the avatar recommendation list;
and the rendering module is used for rendering the face motion unit coding list onto the face of the target avatar to obtain the personalized target avatar.
Optionally, the prediction module includes:
the age prediction module is used for carrying out age prediction on the face characteristics of the face image block to obtain an age prediction result of the face image block; and
and the gender prediction module is used for carrying out gender prediction on the face characteristics of the face image block to obtain a gender prediction result of the face image block.
Optionally, the age prediction module is specifically configured to perform age prediction on the face feature of the face image block through an age bracket classification model or an age bracket recognition algorithm, so as to obtain an age prediction result of the face image block;
the gender prediction module is specifically configured to perform gender prediction on the face features of the face image block through a gender classification model or a gender recognition algorithm, so as to obtain a gender prediction result of the face image block.
Optionally, the determining module is specifically configured to retrieve a corresponding avatar recommendation list from an avatar material library according to an age prediction result and a gender prediction result of the face image block;
the display module is specifically configured to display the avatar recommendation list to the user, so that the user selects a target avatar.
Optionally, the second obtaining module includes:
the code list acquisition module is used for acquiring a face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library; or
the classification module is used for performing face motion unit classification on the face image block to obtain a face motion unit coding list.
Optionally, the apparatus further includes:
and the storage module is used for storing the face motion unit coding list obtained by the classification module in a cloud face motion unit coding library.
According to a third aspect of an embodiment of the present invention, there is provided a rendering system of an avatar, including:
the face detection module is used for detecting a face to obtain a face image block;
the avatar retrieval module is used for predicting the facial features of the face image block to obtain a prediction result, determining and displaying an avatar recommendation list corresponding to the prediction result, and acquiring a target avatar selected by the user from the avatar recommendation list;
the face motion unit code acquisition module is used for classifying the face motion units of the face image blocks detected by the face detection module to obtain a face motion unit code list; or acquiring a face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library;
and the avatar rendering module is used for rendering the face motion unit coding list onto the face of the target avatar to generate a personalized target avatar.
Optionally, the system further comprises:
the cloud storage module is used for storing the face motion unit coding list obtained by the face motion unit coding obtaining module.
According to a fourth aspect of an embodiment of the present invention, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the avatar rendering method as described above.
According to a fifth aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the avatar rendering method described above.
According to a sixth aspect of embodiments of the present invention, there is provided a computer program product comprising a computer program/instructions which, when executed by a processor, implement the avatar rendering method described above.
The technical scheme provided by the embodiment of the invention at least has the following beneficial effects:
in the embodiments of the invention, facial features of the acquired face image block are predicted to obtain a prediction result; an avatar recommendation list corresponding to the prediction result is determined and displayed; a target avatar selected by the user from the avatar recommendation list is acquired; face motion unit classification is performed on the face image block to obtain a face motion unit coding list; and the face motion unit coding list is rendered onto the face of the target avatar to generate a personalized target avatar. In other words, the acquired face image block is analyzed, an avatar recommendation list matching the prediction result is obtained, the user selects a preferred avatar from the list, and an Action Unit (AU) coding list derived from the face image block is rendered onto the selected avatar to obtain the personalized avatar. This not only simplifies the operation steps for generating a personalized avatar, but also reduces avatar modeling cost and enriches avatar diversity.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention; they do not constitute an undue limitation on the invention.
Fig. 1 is a flowchart of a method of rendering an avatar according to an embodiment of the present invention.
Fig. 2 is an application example diagram of a rendering method of an avatar according to an embodiment of the present invention.
Fig. 3 is another application example diagram of an avatar rendering method provided in an embodiment of the present invention.
Fig. 4 is a block diagram of an avatar rendering apparatus provided in an embodiment of the present invention.
Fig. 5 is a block diagram of a prediction module according to an embodiment of the present invention.
Fig. 6 is a block diagram of a rendering system of an avatar provided in an embodiment of the present invention.
Fig. 7 is a block diagram of a rendering system of an avatar provided in an embodiment of the present invention.
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Fig. 9 is a block diagram of an apparatus for rendering an avatar provided in an embodiment of the present invention.
Detailed Description
In order to enable a person skilled in the art to better understand the technical solutions of the present invention, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the appended claims.
Fig. 1 is a flowchart of an avatar rendering method according to an embodiment of the present invention. As shown in fig. 1, the avatar rendering method is used in a terminal or a server and includes the following steps:
step 101: and acquiring a face image block of the user and acquiring a face motion unit coding list corresponding to the face image block.
In this step, a previously captured face image may be obtained locally, or a face image of the user may be captured in real time by an RGB camera. The obtained face image is then processed with a face recognition algorithm to obtain the corresponding face image block; the recognition process itself is well known to those skilled in the art and is not described here.
Of course, in this embodiment, the obtained face image may also be input into a trained face detection model for recognition to obtain the face image block; the model training process is well known to those skilled in the art and is not described here.
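The acquisition stage above can be sketched as follows. This is an illustrative outline only, not the patent's implementation: `detect_face` is a hypothetical stand-in for a trained face-detection model, here replaced by a placeholder heuristic.

```python
# Hypothetical sketch of Step 101's face-detection stage: a detector returns
# a bounding box, and the face image block is cropped from the camera frame.

def detect_face(frame):
    """Stub detector: return (x, y, w, h) of the detected face, or None.
    A real system would call a trained face-detection model here."""
    h, w = len(frame), len(frame[0])
    # Placeholder heuristic: assume the face occupies the central region.
    return (w // 4, h // 4, w // 2, h // 2)

def crop_face_block(frame):
    box = detect_face(frame)
    if box is None:
        return None
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

frame = [[0] * 8 for _ in range(8)]   # dummy 8x8 grayscale frame
block = crop_face_block(frame)
print(len(block), len(block[0]))      # 4 4
```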
Obtaining the face motion unit coding list corresponding to the face image block proceeds as follows:
the face motion unit coding list corresponding to the face image block can be obtained directly from a cloud face motion unit coding library; alternatively, face motion unit classification can be performed on the face image block to obtain the face motion unit coding list.
Specifically, the face image block can be input into a face motion unit multi-classification model (an AU multi-classification model) for classification and recognition to obtain the face motion unit coding list. In this embodiment, a facial Action Unit (AU) encodes the movement of a facial muscle of the current face; for example, one AU code may represent jaw descent and another mouth stretching. The specific implementation process is as follows:
First, the face image block is input into a face key-point detection model to obtain face key-point coordinates. Second, the face key points are connected to one another to divide the face into n small regions. Third, according to the positions of the eyes, nose, mouth, cheeks, forehead, and chin, the n small regions are further grouped into N larger regions, each called an AU group. Finally, the face image block and the AU groups are input together into the AU multi-classification model to obtain the AU coding list; the per-frame expression category of the current face can also be derived from this AU coding list.
The face key point detection model and the AU multi-classification model are both trained models in advance.
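A minimal sketch of this AU pipeline, with the key-point detector and the AU multi-classification model replaced by stubs; the group names mirror the regions listed above, but the per-group coding scheme and all function names are invented for illustration.

```python
# Hedged sketch of the AU pipeline: detect face key points, partition them
# into region groups (eyes, nose, mouth, ...), then run a per-group
# multi-label AU classifier. Both models below are stubs, not trained models.

AU_GROUPS = ["eyes", "nose", "mouth", "cheek", "forehead", "chin"]

def detect_keypoints(face_block):
    # Stub key-point detector: one (x, y) point per region group.
    return {g: (i, i) for i, g in enumerate(AU_GROUPS)}

def classify_aus(face_block, group, points):
    # Stub AU multi-classifier: emit one invented AU code per group.
    return [f"AU{AU_GROUPS.index(group) + 1}"]

def au_code_list(face_block):
    points = detect_keypoints(face_block)
    codes = []
    for group in AU_GROUPS:
        codes.extend(classify_aus(face_block, group, points[group]))
    return codes

print(au_code_list(None))  # ['AU1', 'AU2', 'AU3', 'AU4', 'AU5', 'AU6']
```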
It should be noted that if face motion unit classification is performed on the face image block, the obtained face motion unit coding list is further stored in the cloud face motion unit coding library.
Step 102: and predicting the facial features of the face image block to obtain a prediction result.
In this step, age prediction may be performed on the facial features of the face image block to obtain an age prediction result, and gender prediction may be performed to obtain a gender prediction result. Of course, in this embodiment, gender recognition may also be performed first, followed by age recognition.
Specifically, in this embodiment, age prediction may be performed on the facial features of the face image block through an age-bracket classification model or an age-bracket recognition algorithm to obtain an age prediction result, where the result may fall into four categories: young, teenager, middle-aged, and elderly. For example, based on the face image, the approximate age can be predicted using machine learning methods such as PCA and SVM.
Specifically, in this embodiment, gender prediction may be performed on the facial features of the face image block through a gender classification model or a gender recognition algorithm to obtain a gender prediction result, which may be male or female. Such models and algorithms are well known to those skilled in the art and are not described here. For example, the HyperFace algorithm, a deep convolutional neural network (CNN), performs face detection, landmark localization, pose estimation, and gender recognition.
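The two prediction branches can be illustrated as follows. The feature layout, thresholds, and bracket mapping are invented assumptions standing in for the trained classification models mentioned above, not the patent's method.

```python
# Illustrative sketch of Step 102: map a face-feature vector to one of four
# age brackets and a gender label. Stub classifiers only; a real system would
# use a trained age-bracket model and gender model (e.g. PCA+SVM, a CNN).

AGE_BRACKETS = ["young", "teenager", "middle-aged", "elderly"]

def predict_age_bracket(features):
    # Stub: pretend features[0] is a normalized age score in [0, 1).
    return AGE_BRACKETS[min(int(features[0] * 4), 3)]

def predict_gender(features):
    # Stub: pretend features[1] is a gender logit.
    return "female" if features[1] >= 0.0 else "male"

features = [0.15, 0.7]   # hypothetical feature vector
print(predict_age_bracket(features), predict_gender(features))  # young female
```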
Step 103: and determining and displaying an avatar recommendation list corresponding to the prediction result.
In this step, a corresponding avatar recommendation list is retrieved from an avatar material library according to the age prediction result and the gender prediction result of the face image block, and the list is then displayed to the user so that the user can select a preferred target avatar. The avatar material library may be pre-built to include designed base avatars such as Batman, Peppa Pig, and the like.
That is, in this embodiment, multiple avatars matching the age prediction result and the gender prediction result are found in the avatar material library, and these avatars constitute the avatar recommendation list presented for the user's selection. For example, if the predicted age is 6 years old and the predicted gender is female, the retrieved avatars might include Barbie, Peppa Pig, and the like.
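The retrieval step can be sketched under the assumption that the material library records, for each base avatar, the age brackets and genders it suits; the library entries below are invented examples, not the patent's data.

```python
# Sketch of Step 103's retrieval: filter a pre-built avatar material library
# by the predicted age bracket and gender. Entries are illustrative only.

MATERIAL_LIBRARY = [
    {"name": "Batman",    "ages": {"teenager", "middle-aged"}, "genders": {"male", "female"}},
    {"name": "Peppa Pig", "ages": {"young"},                   "genders": {"male", "female"}},
    {"name": "Barbie",    "ages": {"young", "teenager"},       "genders": {"female"}},
]

def recommend_avatars(age_bracket, gender):
    return [a["name"] for a in MATERIAL_LIBRARY
            if age_bracket in a["ages"] and gender in a["genders"]]

print(recommend_avatars("young", "female"))  # ['Peppa Pig', 'Barbie']
```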
Step 104: and acquiring a target avatar selected by the user from the avatar recommendation list.
In this step, the avatar that the user selects from the displayed avatar recommendation list is received; it is referred to here as the target avatar.
Step 105: and rendering the human face movement unit coding list to the face of the target virtual image to generate a personalized target virtual image.
In this step, the AU coding list corresponding to each frame of facial expression can be passed to Unity by calling Unity's Avatar Mapping tab, and Unity renders the AU coding list onto the face of the target avatar to generate the personalized target avatar. That is, according to the AU coding list for each facial expression, Unity drives the corresponding controls on the target avatar's face to generate the personalized target avatar.
Here, Unity serves as the interface to the rendering tool; the rendering process performed by calling Unity is well known in the art and is not described here.
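The per-frame hand-off to the renderer might be sketched as follows. `send_to_renderer` is a hypothetical stand-in for the Unity-side interface; here it merely records which AU codes are applied to which avatar on each frame.

```python
# Minimal sketch of Step 105's rendering hand-off: one AU code list per video
# frame is pushed to the engine, which drives the avatar's facial controls.
# The engine bridge is mocked; a real system would call into Unity.

def send_to_renderer(avatar, au_codes, sink):
    # Stand-in for the engine call; records the applied AU codes instead.
    sink.append((avatar, tuple(au_codes)))

def render_avatar(avatar, au_frames):
    rendered = []
    for au_codes in au_frames:          # one AU code list per frame
        send_to_renderer(avatar, au_codes, rendered)
    return rendered

frames = [["AU1"], ["AU1", "AU2"]]
print(render_avatar("Batman", frames))
# [('Batman', ('AU1',)), ('Batman', ('AU1', 'AU2'))]
```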
In the embodiments of the invention, facial features of the acquired face image block are predicted to obtain a prediction result; an avatar recommendation list corresponding to the prediction result is determined and displayed; a target avatar selected by the user from the avatar recommendation list is acquired; face motion unit classification is performed on the face image block to obtain a face motion unit coding list; and the face motion unit coding list is rendered onto the face of the target avatar to generate a personalized target avatar. In other words, the acquired face image block is analyzed, an avatar recommendation list matching the prediction result is obtained, the user selects a preferred avatar from the list, and an Action Unit (AU) coding list derived from the face image block is rendered onto the selected avatar to obtain the personalized avatar. This not only simplifies the operation steps for generating a personalized avatar, but also reduces avatar modeling cost and enriches avatar diversity.
Referring also to fig. 2, which shows an application example of the avatar rendering method according to an embodiment of the present invention, the method includes:
step 201: acquiring a video stream of a user face through a camera;
step 202: performing face detection on the video stream to obtain a face image block; steps 203, 204 and 208 are then performed;
step 203: performing age bracket prediction on the facial features of the face image blocks to obtain an age prediction result;
step 204: performing gender prediction on the facial features of the face image block to obtain a gender prediction result;
step 205: retrieving a corresponding avatar recommendation list from the avatar material library according to the age bracket prediction result and the gender prediction result of the face image block.
Step 206: displaying the avatar recommendation list to the user.
Step 207: receiving the target avatar selected by the user from the avatar recommendation list.
Step 208: performing face motion unit classification on the obtained face image block to obtain a face motion unit coding list.
Step 209: rendering the face motion unit coding list onto the face of the target avatar to generate a personalized target avatar.
In this embodiment of the invention, the age and gender of the obtained face image block are predicted, an avatar recommendation list corresponding to the age and gender prediction results is obtained, the preferred avatar is selected from the list according to the user's selection operation, and face motion unit classification is performed on the face image block to obtain the face motion unit coding list used for rendering the selected avatar. A personalized avatar is thus obtained, which simplifies the operation steps for generating a personalized avatar, reduces avatar modeling cost, and enriches avatar diversity.
Referring also to fig. 3, another application example diagram of a rendering method of an avatar according to an embodiment of the present invention is provided, where the method includes:
step 301: acquiring a video stream of a user face through a camera;
step 302: performing face detection on the video stream to obtain a face image block; steps 303, 304 and 308 are then performed;
step 303: performing age bracket prediction on the facial features of the face image blocks to obtain an age prediction result;
step 304: performing gender prediction on the facial features of the face image block to obtain a gender prediction result;
Step 305: retrieving a corresponding avatar recommendation list from the avatar material library according to the age bracket prediction result and the gender prediction result of the face image block;
step 306: displaying the avatar recommendation list to the user;
step 307: receiving the target avatar selected by the user from the avatar recommendation list.
Step 308: classifying the face motion units of the obtained face image blocks to obtain a face motion unit coding list;
step 309: storing the obtained face motion unit coding list in the cloud face motion unit coding library;
step 310: rendering the face motion unit coding list acquired from the cloud face motion unit coding library onto the face of the target avatar to generate a personalized target avatar.
This embodiment differs from the previous one in that the face motion unit coding list of the face image can be obtained from the cloud face motion unit coding library, so that a previously classified and stored list can be reused for rendering onto the selected avatar. A personalized avatar is thus obtained, which simplifies the operation steps for generating a personalized avatar, reduces avatar modeling cost, and enriches avatar diversity.
Of course, in another embodiment based on the above, after the user selects a preferred avatar, it is first determined whether a face motion unit coding list corresponding to the face image block can be found in the cloud face motion unit coding library. If found, the list is rendered onto the face of the target avatar to generate the personalized target avatar; if not, face motion unit classification is performed on the obtained face image block to obtain the list, the list is rendered onto the face of the target avatar to generate the personalized target avatar, and the list is then stored in the cloud face motion unit coding library. This not only simplifies the operation steps for generating a personalized avatar, but also reduces avatar modeling cost and enriches avatar diversity.
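The cache-first flow described above can be sketched with a dict standing in for the cloud coding library; `classify_face_motion_units` is a stub for the AU multi-classification model, and the face-block identifier scheme is an invented assumption.

```python
# Sketch of the cache-first variant: look up the AU code list in a cloud
# coding library keyed by a face-block identifier; on a miss, classify and
# store the result. A local dict stands in for the cloud store.

cloud_library = {}                     # stand-in for the cloud coding library

def classify_face_motion_units(face_id):
    # Stub classifier: derive a deterministic AU list from the identifier.
    return [f"AU{(hash(face_id) % 3) + 1}"]

def get_au_code_list(face_id):
    if face_id in cloud_library:       # hit: reuse the stored list
        return cloud_library[face_id]
    codes = classify_face_motion_units(face_id)
    cloud_library[face_id] = codes     # miss: classify, then store
    return codes

first = get_au_code_list("user-42")
second = get_au_code_list("user-42")   # served from the library, no reclassify
print(first == second, len(cloud_library))  # True 1
```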
It should be noted that, for simplicity of description, the method embodiments are described as a series of actions; however, those skilled in the art will understand that the invention is not limited by the described order of actions, as some steps may be performed in other orders or concurrently. Furthermore, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily all required by the invention.
Referring also to fig. 4, an avatar rendering apparatus provided in an embodiment of the present invention includes: a first obtaining module 401, a second obtaining module 402, a prediction module 403, a determining module 404, a display module 405, a third obtaining module 406, and a rendering module 407, as schematically shown in fig. 4, wherein,
the first obtaining module 401 is configured to obtain a face image block of a user;
the second obtaining module 402 is configured to obtain a face motion unit coding list corresponding to the face image block;
the prediction module 403 is configured to predict facial features of the face image block to obtain a prediction result;
the determining module 404 is configured to determine an avatar recommendation list corresponding to the prediction result;
the display module 405 is configured to display the avatar recommendation list corresponding to the prediction result determined by the determination module;
the third obtaining module 406 is configured to obtain a target avatar selected by the user from the avatar recommendation list;
the rendering module 407 is configured to render the face motion unit coding list onto the face of the target avatar to obtain the personalized target avatar.
Optionally, in another embodiment, based on the foregoing embodiment, the prediction module 403 includes: an age prediction module 501 and a gender prediction module 502, as schematically shown in fig. 5, wherein,
the age prediction module 501 is configured to perform age prediction on face features of the face image block to obtain an age prediction result of the face image block; and
the gender prediction module 502 is configured to perform gender prediction on the face features of the face image block, and obtain a gender prediction result of the face image block.
Optionally, in another embodiment, based on the foregoing embodiment, the age prediction module is specifically configured to perform age prediction on the face feature of the face image block by using an age bracket classification model or an age bracket recognition algorithm, so as to obtain an age prediction result of the face image block;
the gender prediction module is specifically configured to perform gender prediction on the face features of the face image block through a gender classification model or a gender recognition algorithm, so as to obtain a gender prediction result of the face image block.
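The age-bracket and gender predictions described above can be sketched as follows. These are toy stand-ins for the classification models the text mentions: the bracket boundaries and the 0.5 decision threshold are illustrative assumptions, not values from the patent.

```python
# Toy stand-ins for the age-bracket and gender classifiers; a deployed
# system would use trained models. Bracket bounds and the 0.5 threshold
# are made-up assumptions.
AGE_BRACKETS = [(0, 12, "child"), (13, 17, "teen"), (18, 59, "adult"), (60, 150, "senior")]

def predict_age_bracket(estimated_age: int) -> str:
    """Map a model's numeric age estimate to a coarse age-group label."""
    for low, high, label in AGE_BRACKETS:
        if low <= estimated_age <= high:
            return label
    raise ValueError(f"age out of range: {estimated_age}")

def predict_gender(score: float) -> str:
    """Binarize a hypothetical model score in [0, 1]."""
    return "female" if score >= 0.5 else "male"
```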
Optionally, in another embodiment, based on the foregoing embodiment, the determining module is specifically configured to retrieve a corresponding avatar recommendation list from an avatar material library according to an age prediction result and a gender prediction result of the face image block;
The display module is specifically configured to display the avatar recommendation list to the user, so that the user selects a target avatar.
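The retrieval step of the determining module can be modeled as a keyed lookup. The material library is sketched here as a dict indexed by (age bracket, gender); the keys, avatar names, and fallback entry are all invented for illustration.

```python
# Hypothetical avatar material library keyed by (age bracket, gender);
# entries and the "generic" fallback are made up for illustration.
MATERIAL_LIBRARY = {
    ("child", "male"): ["puppy", "dino"],
    ("adult", "female"): ["elf", "pilot"],
}

def retrieve_recommendations(age_bracket: str, gender: str) -> list:
    """Return the recommendation list for a prediction result, falling
    back to a generic list when the library has no matching entry."""
    return list(MATERIAL_LIBRARY.get((age_bracket, gender), ["generic"]))
```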
Optionally, in another embodiment, based on the foregoing embodiment, the second obtaining module 402 includes: a code list acquisition module and/or a classification module (not shown), wherein,
the code list acquisition module is used for acquiring a face motion unit code list corresponding to the face image block from a cloud face motion unit code library;
the classification module is used for classifying the face motion units of the face image blocks to obtain a face motion unit coding list.
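The classification module's output can be sketched as a FACS-style coding list. The AU numbers below are standard Facial Action Coding System codes, but the thresholding of per-unit intensities is an invented stand-in for whatever trained classifier a real system would use.

```python
# Illustrative face-motion-unit classification: threshold per-unit
# activation intensities into a FACS-style coding list. AU numbers are
# real FACS codes; the fixed threshold is an assumption.
AU_NAMES = {1: "inner brow raiser", 12: "lip corner puller", 25: "lips part"}

def classify_action_units(intensities: dict, threshold: float = 0.5) -> list:
    """Return the sorted AU codes whose intensity exceeds the threshold."""
    return sorted(au for au, value in intensities.items() if value > threshold)
```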
Optionally, in another embodiment, based on the foregoing embodiment, the apparatus may further include: a memory module, wherein,
the storage module is used for storing the face motion unit coding list obtained by the classification module into a cloud face motion unit coding library.
Optionally, in another embodiment, based on the foregoing embodiment, the apparatus may further include:
the searching module is used for searching a face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library after the third obtaining module obtains the target virtual image selected by the user from the virtual image recommending list;
The rendering module is further configured to render the found face motion unit coding list to the face of the target avatar when the finding module finds the face motion unit coding list corresponding to the face image block, so as to generate a personalized target avatar;
the classification module is further configured to classify the face motion units of the face image block acquired by the second obtaining module when the searching module does not find a face motion unit coding list corresponding to the face image block, so as to obtain a face motion unit coding list.
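The search / classify / store flow just described is a cache-aside pattern: try the cloud face motion unit code library first, fall back to classification on a miss, and store the result so the next lookup succeeds. The sketch below models the cloud library as a plain dict; all names are hypothetical.

```python
# Cache-aside sketch of the searching / classification / storage modules.
# A dict stands in for the cloud face motion unit code library.
cloud_au_library = {}

def get_au_code_list(face_id, classify):
    cached = cloud_au_library.get(face_id)
    if cached is not None:
        return cached                      # searching module: cloud hit
    codes = classify(face_id)              # classification fallback on a miss
    cloud_au_library[face_id] = codes      # storage module: persist for reuse
    return codes

calls = []
def fake_classifier(face_id):
    calls.append(face_id)                  # record how often we classify
    return [1, 12]

first = get_au_code_list("user-42", fake_classifier)
second = get_au_code_list("user-42", fake_classifier)  # served from the cache
```

On the second call the classifier is not invoked, mirroring the intent of the cloud library: reuse previously computed coding lists.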
Referring also to fig. 6, a block diagram of an avatar rendering system according to an embodiment of the present invention includes: a face detection module 601, an avatar retrieval module 602, a face motion unit code acquisition module 603, and an avatar rendering module 604, wherein,
the face detection module 601 is configured to detect a face to obtain a face image block;
the avatar retrieval module 602 is configured to predict facial features of the face image block to obtain a prediction result, determine and display an avatar recommendation list corresponding to the prediction result, and obtain a target avatar selected by the user from the avatar recommendation list;
the face motion unit code acquisition module 603 is configured to classify the face motion units of the face image block detected by the face detection module to obtain a face motion unit coding list, or to acquire a face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library;
the avatar rendering module 604 is configured to render the face motion unit coding list to a face of the target avatar, and generate a personalized target avatar.
Optionally, in another embodiment, based on the foregoing embodiment, the system may further include: a cloud storage module 701, as schematically shown in fig. 7, wherein
the cloud storage module 701 is configured to store the face motion unit coding list obtained by the face motion unit code acquisition module.
Optionally, an embodiment of the present invention further provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the avatar rendering method as described above.
Optionally, an embodiment of the present invention further provides a computer-readable storage medium; instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the avatar rendering method as described above. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Optionally, an embodiment of the present invention further provides a computer program product, including a computer program/instruction, which when executed by a processor implements the avatar rendering method as described above.
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method, and will not be described in detail herein.
Fig. 8 is a block diagram of an electronic device 800 provided by an embodiment of the invention. The electronic device 800 may be, for example, a mobile terminal or a server; in this embodiment, a mobile terminal is taken as an example. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 8, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the on/off state of the electronic device 800 and the relative positioning of components, such as its display and keypad. The sensor assembly 814 may also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the above-described avatar rendering method.
In an embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, is also provided; the instructions are executable by the processor 820 of the electronic device 800 to perform the above-described avatar rendering method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 9 is a block diagram of an apparatus 900 for rendering an avatar according to an embodiment of the present invention. For example, apparatus 900 may be provided as a server. Referring to FIG. 9, apparatus 900 includes a processing component 922 that further includes one or more processors, and memory resources represented by memory 932, for storing instructions, such as applications, executable by processing component 922. The application programs stored in memory 932 may include one or more modules that each correspond to a set of instructions. Further, processing component 922 is configured to execute instructions to perform the above-described methods.
The apparatus 900 may also include a power component 926 configured to perform power management of the apparatus 900, a wired or wireless network interface 950 configured to connect the apparatus 900 to a network, and an input/output (I/O) interface 958. The apparatus 900 may operate based on an operating system stored in the memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (11)

1. A method of rendering an avatar, comprising:
acquiring a face image block of a user, and acquiring a face motion unit coding list corresponding to the face image block;
predicting facial features of the face image block to obtain a prediction result;
determining and displaying an avatar recommendation list corresponding to the prediction result;
acquiring a target avatar selected by the user from the avatar recommendation list; and
rendering the face motion unit coding list to the face of the target avatar to generate a personalized target avatar.
2. The avatar rendering method of claim 1, wherein predicting facial features of the face image block to obtain a prediction result comprises:
performing age prediction on the face features of the face image block to obtain an age prediction result of the face image block; and
performing gender prediction on the face features of the face image block to obtain a gender prediction result of the face image block.
3. The avatar rendering method of claim 2, wherein
performing age prediction on the face features of the face image block to obtain an age prediction result of the face image block comprises:
performing age prediction on the face features of the face image block through an age group classification model or an age group recognition algorithm to obtain the age prediction result of the face image block; and
performing gender prediction on the face features of the face image block to obtain a gender prediction result of the face image block comprises:
performing gender prediction on the face features of the face image block through a gender classification model or a gender recognition algorithm to obtain the gender prediction result of the face image block.
4. The avatar rendering method of claim 2, wherein the determining and displaying an avatar recommendation list corresponding to the prediction result comprises:
retrieving a corresponding avatar recommendation list from an avatar material library according to the age prediction result and the gender prediction result of the face image block; and
displaying the avatar recommendation list to the user so that the user selects a target avatar.
5. The avatar rendering method of any one of claims 1 to 4, wherein the acquiring a face motion unit coding list corresponding to the face image block comprises:
acquiring the face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library; or
classifying the face motion units of the face image block to obtain the face motion unit coding list.
6. The avatar rendering method of claim 5, wherein after classifying the face motion units of the face image block, the method further comprises:
storing the obtained face motion unit coding list into a cloud face motion unit coding library.
7. An avatar rendering apparatus, comprising:
a first obtaining module for acquiring a face image block of a user;
a second obtaining module for acquiring a face motion unit coding list corresponding to the face image block;
a prediction module for predicting facial features of the face image block to obtain a prediction result;
a determining module for determining an avatar recommendation list corresponding to the prediction result;
a display module for displaying the avatar recommendation list corresponding to the prediction result determined by the determining module;
a third obtaining module for acquiring a target avatar selected by the user from the avatar recommendation list; and
a rendering module for rendering the face motion unit coding list to the face of the target avatar to obtain a personalized target avatar.
8. A system for rendering an avatar, comprising:
a face detection module for detecting a face to obtain a face image block;
an avatar retrieval module for predicting facial features of the face image block to obtain a prediction result, determining and displaying an avatar recommendation list corresponding to the prediction result, and acquiring a target avatar selected by a user from the avatar recommendation list;
a face motion unit code acquisition module for classifying the face motion units of the face image block detected by the face detection module to obtain a face motion unit coding list, or acquiring the face motion unit coding list corresponding to the face image block from a cloud face motion unit coding library; and
an avatar rendering module for rendering the face motion unit coding list to the face of the target avatar to generate a personalized target avatar.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the avatar rendering method of any one of claims 1 to 6.
10. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the avatar rendering method of any one of claims 1 to 6.
11. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the method of rendering an avatar as claimed in any one of claims 1 to 6.
CN202111369600.1A 2021-11-16 2021-11-16 Virtual image rendering method, device, system, electronic equipment, medium and product Pending CN116137080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111369600.1A CN116137080A (en) 2021-11-16 2021-11-16 Virtual image rendering method, device, system, electronic equipment, medium and product


Publications (1)

Publication Number Publication Date
CN116137080A true CN116137080A (en) 2023-05-19

Family

ID=86334284




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination