CN110751717A - Virtual head model construction method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110751717A
Authority
CN
China
Prior art keywords
user
head
model
virtual
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910851204.9A
Other languages
Chinese (zh)
Inventor
夏新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910851204.9A
Publication of CN110751717A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a virtual head model construction method and device, computer equipment, and a storage medium. The method includes: receiving an entered user head size; constructing a three-dimensional model from the user head size to obtain a user head model; receiving an uploaded user head image; rendering the user head image onto the user head model according to a one-to-one mapping between key pixel points of the user head image and key nodes of the user head model, to obtain a user virtual face model; receiving an uploaded face model adjustment instruction and parsing the corresponding adjustment parameters; and adjusting the user virtual face model according to those parameters to obtain the adjusted user virtual face model. The method rapidly constructs a user virtual face model from the user's head size and head image, supports both custom and automatic model adjustment, lowers the skill required to build a virtual head model, and generates the virtual head model more quickly.

Description

Virtual head model construction method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a virtual head model construction method and device, computer equipment and a storage medium.
Background
With the development of computer technology and the advance of multimedia technology, three-dimensional models have become increasingly popular for their strong sense of space and realism. In virtual reality applications such as virtual fitting and virtual make-up, the head models of existing virtual user avatars do not faithfully reproduce the user's head and cannot be adjusted according to the user's real head parameters to give a more intuitive result. Producing a high-fidelity three-dimensional reconstruction of a user's head instead demands highly skilled modelling personnel, and the difficulty of the modelling makes the process inefficient.
Disclosure of Invention
The embodiments of the invention provide a virtual head model construction method and device, computer equipment, and a storage medium, aiming to solve the prior-art problems that high-fidelity three-dimensional reconstruction of a user's head demands highly skilled modelling personnel and is inefficient because of the modelling difficulty.
In a first aspect, an embodiment of the present invention provides a virtual head model building method, which includes:
receiving the head size of a user input by a user side;
constructing a three-dimensional model according to the head size of the user to obtain a head model of the user; wherein the head size of the user comprises head width, head length, head height and head circumference;
receiving a user head image uploaded by a user side;
according to the one-to-one mapping relation between the key pixel points of the user head image and the key nodes in the user head model, rendering and adding the user head image to the user head model to obtain a user virtual face model;
receiving a facial model adjusting instruction uploaded by a user side, and analyzing an adjusting parameter corresponding to the facial model adjusting instruction; the facial model adjusting instruction is a user-defined manual adjusting instruction or a one-key automatic adjusting instruction;
correspondingly adjusting the user virtual face model according to the adjustment parameters to obtain an adjusted user virtual face model; and
sending the adjusted user virtual face model to the user side for display.
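As a rough illustration, the sequence of claimed steps can be sketched as a pipeline. Every name and data structure below (HeadSize, build_head_model, the dict-based "model", and so on) is a hypothetical stand-in for illustration, not something defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class HeadSize:
    width: float          # head width (mm)
    length: float         # head length (mm)
    height: float         # head height (mm)
    circumference: float  # head circumference (mm)

def build_head_model(size: HeadSize) -> dict:
    # Step S120: resize a pre-stored template to the user's measurements.
    return {"scale": (size.width, size.length, size.height)}

def render_onto(model: dict, image_id: str, keypoints: dict) -> dict:
    # Step S140: attach the head image via the keypoint-to-node mapping.
    return dict(model, texture=image_id, mapping=keypoints)

def apply_adjustment(model: dict, params: dict) -> dict:
    # Step S160: apply the parsed adjustment parameters.
    return dict(model, **params)

def build_virtual_face_model(size, image_id, keypoints, adjustment):
    head = build_head_model(size)                  # S110-S120
    face = render_onto(head, image_id, keypoints)  # S130-S140
    return apply_adjustment(face, adjustment)      # S150-S160
```

Each stage here is a placeholder for the server-side processing the corresponding claim step describes.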
In a second aspect, an embodiment of the present invention provides a virtual head model building apparatus, including:
the size receiving unit is used for receiving the size of the head of the user input by the user side;
the head model initial construction unit is used for constructing a three-dimensional model according to the head size of the user to obtain a head model of the user; wherein the head size of the user comprises head width, head length, head height and head circumference;
the head image receiving unit is used for receiving the head images of the users uploaded by the user terminals;
the virtual face model building unit is used for rendering and adding the user head image to the user head model according to the one-to-one mapping relation between the key pixel points of the user head image and the key nodes in the user head model so as to obtain a user virtual face model;
the adjustment instruction receiving unit is used for receiving a face model adjustment instruction uploaded by a user side and analyzing an adjustment parameter corresponding to the face model adjustment instruction; the facial model adjusting instruction is a user-defined manual adjusting instruction or a one-key automatic adjusting instruction;
the face model adjusting unit is used for correspondingly adjusting the user virtual face model according to the adjusting parameters to obtain an adjusted user virtual face model; and
the current face model sending unit, used for sending the adjusted user virtual face model to the user side for display.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the virtual head model building method according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the virtual head model building method according to the first aspect.
The embodiments of the invention provide a virtual head model construction method and device, computer equipment, and a storage medium. The method comprises: receiving a user head size entered at the user side; constructing a three-dimensional model from the user head size to obtain a user head model, where the head size comprises head width, head length, head height and head circumference; receiving a user head image uploaded by the user side; rendering the user head image onto the user head model according to a one-to-one mapping between key pixel points of the user head image and key nodes of the user head model, to obtain a user virtual face model; receiving a face model adjustment instruction uploaded by the user side and parsing the corresponding adjustment parameters, where the instruction is either a custom manual adjustment instruction or a one-key automatic adjustment instruction; adjusting the user virtual face model according to those parameters to obtain the adjusted user virtual face model; and sending the adjusted user virtual face model to the user side for display. The method rapidly constructs a user virtual face model from the user's head size and head image, supports both custom and automatic model adjustment, lowers the skill required to build a virtual head model, and generates the virtual head model more quickly.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a virtual head model construction method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a virtual head model building method according to an embodiment of the present invention;
fig. 3 is a schematic sub-flow chart of a virtual head model construction method according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a virtual head model building apparatus provided by an embodiment of the present invention;
FIG. 5 is a schematic block diagram of sub-units of a virtual head model building apparatus provided by an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of a virtual head model construction method according to an embodiment of the present invention; fig. 2 is a flowchart illustrating a virtual head model building method according to an embodiment of the present invention, where the virtual head model building method is applied to a server and is executed by application software installed in the server.
As shown in fig. 2, the method includes steps S110 to S170.
S110, receiving the head size of the user input by the user side; wherein the head size of the user comprises head width, head length, head height and head circumference.
In this embodiment, before the user views the virtual avatar (that is, before the user's real facial image is rendered onto the virtual head model), the user measures his or her head and enters the head size (key parameters such as head width, head length, head height and head circumference) on the user-side interaction interface corresponding to the server; once entry of the head width, head length, head height and head circumference is complete, the user terminal sends the user head size to the server.
And S120, constructing a three-dimensional model according to the head size of the user to obtain a head model of the user.
In this embodiment, after the user enters the head size on the user side and uploads it to the server, the server can quickly resize a common virtual head model pre-stored on the server according to the user head size, achieving three-dimensional modelling and rapidly obtaining the user head model.
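One plausible reading of this resizing step is a nonuniform scaling of a stored template mesh along each axis; the template dimensions, function names, and vertex values below are illustrative assumptions, not figures from the patent:

```python
import numpy as np

# Assumed template head dimensions: width, length, height (mm).
TEMPLATE_SIZE = np.array([152.0, 190.0, 225.0])

def fit_template(vertices: np.ndarray, user_size) -> np.ndarray:
    """Scale template vertices of shape (N, 3) by per-axis size ratios."""
    ratios = np.asarray(user_size, dtype=float) / TEMPLATE_SIZE
    return vertices * ratios  # broadcasts (N, 3) * (3,)

# Toy template with two vertices, fitted to a user measuring 160/190/200 mm.
verts = np.array([[1.0, 1.0, 1.0], [-1.0, 0.5, 2.0]])
fitted = fit_template(verts, [160.0, 190.0, 200.0])
```

Head circumference could further constrain the scaling, but the patent does not specify how, so it is omitted here.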
In an embodiment, step S120 further includes:
judging whether null values exist in a plurality of size parameter values included in the user head size;
if null values exist in a plurality of size parameter values included in the head size of the user, sending prompt information for prompting to supplement the corresponding null values to the user side;
and if the supplementary value of the size parameter value corresponding to the corresponding null value sent by the user side is not received within the preset time threshold, filling the corresponding null value by the average value of the size parameter value to obtain the updated head size of the user.
In this embodiment, after the user uploads the user head size to the server, the server performs null detection on the size parameter values included in the uploaded head size to determine whether any of them is empty. If none is empty, the server can directly construct a user head model from the user head size combined with the pre-stored common virtual head model. If some values are empty, the uploaded head size contains unfilled size parameters, and the user is prompted to supplement the blank values promptly and re-upload the completed head size. If the user does not re-upload a completed head size within a preset time threshold of the server feeding back the prompt, the server fills each blank value with the stored average of the corresponding parameter item, obtaining the updated user head size. This guarantees that no user head parameter is left blank.
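The null-detection and average-filling behaviour described above might be sketched as follows; the average values are illustrative placeholders, not figures given in the patent:

```python
# Assumed stored averages for each head size parameter (mm).
AVERAGE_SIZE = {"width": 152.0, "length": 190.0,
                "height": 225.0, "circumference": 560.0}

def find_missing(size: dict) -> list:
    """Return the names of size parameters that were left empty (None)."""
    return [name for name, value in size.items() if value is None]

def fill_missing_with_averages(size: dict) -> dict:
    """Fill each empty parameter with its stored average value."""
    return {name: (AVERAGE_SIZE[name] if value is None else value)
            for name, value in size.items()}
```

In the flow above, `find_missing` would drive the prompt to the user side, and `fill_missing_with_averages` would run only after the time threshold expires.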
And S130, receiving the user head image uploaded by the user side.
In this embodiment, after the user finishes entering and uploading the head size on the user side, the current user head image can either be captured by the user side's camera and uploaded to the server, or selected from images stored locally on the user side and uploaded. In this way, a user head image is obtained quickly to serve as the image material for a high-fidelity simulation of the user's avatar.
In an embodiment, step S130 is followed by:
and obtaining key pixel points in the user head image through a cascade linear regression model.
In this embodiment, after receiving the user head image uploaded by the user side, the server needs to obtain the key pixel points in the image, such as the eyes, nose tip, mouth corners, eyebrows, and the contour points of each part of the face, in order to render the user head image onto the user head model and obtain the user virtual face model.
In an embodiment, obtaining key pixel points in the user head image through a cascaded linear regression model includes:
and acquiring an eye key point set, a nose key point set, a mouth key point set and an eyebrow key point set in the head image of the user through a cascade linear regression model, and forming core key pixel points by the eye key point set, the nose key point set, the mouth key point set and the eyebrow key point set.
In specific implementation, a cascaded linear regression model can be used to locate key pixel points in the user head image. The problem of locating facial feature points can be seen as learning a regression function F with the image I as input and the output θ as the location of the feature points (face shape):
θ = F(I) = f_n(f_{n-1}(... f_1(θ_0, I), I), I);
θ_i = f_i(θ_{i-1}, I), i = 1, ..., n;
"Cascading" means that the current function f_i depends on the output θ_{i-1} of the previous stage f_{i-1}; each f_i is trained to approximate the true feature-point positions θ, and θ_0 is the initial shape.
After the key pixel points in the user head image have been located by the cascaded linear regression model, the core key points among them can be screened out (for example, the 4 pixel points at the left end, right end, top and bottom of the left eyelid are enough to outline the shape of the user's left eye). At minimum, an eye key point set, a nose key point set, a mouth key point set and an eyebrow key point set are obtained from the user head image through the cascaded linear regression model, and together these sets form the core key pixel points.
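The cascade update θ_i = f_i(θ_{i-1}, I) can be illustrated with a toy linear cascade. In a real system each stage would be learned by regression on image features extracted around the current shape estimate; here the stages are hand-set so the example is self-contained, and the shape is a flat vector of landmark coordinates:

```python
import numpy as np

def run_cascade(theta0: np.ndarray, stages) -> np.ndarray:
    """Apply each cascade stage f_i(theta, I) = theta + W @ theta + b in turn."""
    theta = theta0
    for W, b in stages:
        theta = theta + W @ theta + b  # linear refinement of the shape
    return theta

theta0 = np.zeros(4)                        # initial shape: 2 landmarks (x, y)
target = np.array([30.0, 40.0, 50.0, 60.0])  # "true" landmark positions
# Hand-set stages: each one closes half the remaining gap to the target.
stages = [(-0.5 * np.eye(4), 0.5 * target) for _ in range(5)]
theta = run_cascade(theta0, stages)
```

After 5 such stages the estimate has covered a fraction 1 - 0.5^5 of the distance from θ_0 to the target, which mirrors how each learned stage in a real cascade refines the previous stage's output.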
S140, according to the one-to-one mapping relation between the key pixel points of the user head image and the key nodes in the user head model, rendering and adding the user head image to the user head model to obtain a user virtual face model.
In this embodiment, key point sets for each facial organ (e.g. the eye, nose, mouth and eyebrow key point sets) are set in advance on the user head model. The core key points screened from the key pixel points of the user head image are mapped one-to-one onto the key point sets of the corresponding facial organs in the user head model, after which the user head image can be rendered onto the user head model to obtain the user virtual face model.
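The one-to-one mapping between detected image keypoints and the model's predefined key nodes could be sketched as a simple dictionary pairing; the node names below are illustrative assumptions, not identifiers from the patent:

```python
# Assumed key nodes predefined on the user head model.
MODEL_KEY_NODES = ["left_eye", "right_eye", "nose_tip",
                   "mouth_left", "mouth_right"]

def map_keypoints(image_keypoints: dict) -> dict:
    """Pair each model key node with its matching image pixel (u, v)."""
    missing = [n for n in MODEL_KEY_NODES if n not in image_keypoints]
    if missing:
        raise ValueError(f"no image keypoint for nodes: {missing}")
    return {node: image_keypoints[node] for node in MODEL_KEY_NODES}
```

A renderer would then use these (node, pixel) pairs as texture-coordinate anchors when projecting the head image onto the mesh.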
S150, receiving a facial model adjusting instruction uploaded by a user side, and analyzing an adjusting parameter corresponding to the facial model adjusting instruction; the facial model adjusting instruction is a user-defined manual adjusting instruction or a one-key automatic adjusting instruction.
In this embodiment, when a user selects to adjust the generated virtual facial model of the user on a user interaction interface for adjusting the facial model provided by the server, two ways are generally adopted, namely, a user-defined manual adjustment and a one-button automatic adjustment. In the user-defined manual adjustment mode, the user can adjust the user virtual facial model on an edit frame or a key point of the user virtual facial model displayed on the user interaction interface.
And S160, correspondingly adjusting the user virtual face model according to the adjustment parameters to obtain the adjusted user virtual face model.
In this embodiment, once the server has obtained the adjustment parameters and the type of the face model adjustment instruction, it can quickly apply the corresponding adjustment to the user virtual face model and update it. Because the adjustment parameters are determined either by the user entering them manually or automatically through the selection of an adjustment template, the adjustment process is simplified and model adjustment becomes quicker and easier.
In one embodiment, as shown in fig. 3, step S160 includes:
s161, if the adjustment mode corresponding to the facial model adjustment instruction is a user-defined manual adjustment instruction, acquiring corresponding adjustment parameters to correspondingly adjust the user virtual facial model to obtain an adjusted user virtual facial model;
and S162, if the adjustment mode corresponding to the face model adjustment instruction is a one-key automatic adjustment instruction, acquiring the selected adjustment template, and correspondingly adjusting the user virtual face model according to the adjustment parameters corresponding to the adjustment template to obtain the adjusted user virtual face model.
In this embodiment, under custom manual adjustment the user may, for example, type any eyelid width in the 30-34 mm range into the edit box on the face-model-adjustment interface, or directly drag the key point at the right end of the eyelid until the displayed left-to-right eyelid width reaches any value in that range. Under one-key automatic adjustment, the adjustment template selected by the user (for example, a template A formed from the facial organ size parameters of celebrity A) is obtained, and the user virtual face model is adjusted accordingly to obtain the adjusted user virtual face model. Both modes simplify the model adjustment process and lower the expertise required of the user.
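The two adjustment paths (S161 custom manual, S162 one-key template) could be dispatched as follows; the instruction format and template contents are illustrative assumptions:

```python
# Assumed stored adjustment templates (e.g. celebrity facial parameters).
TEMPLATES = {"template_a": {"eye_width_mm": 32.0, "nose_length_mm": 48.0}}

def resolve_adjustment(instruction: dict) -> dict:
    """Return the adjustment parameters for a manual or one-key instruction."""
    if instruction["mode"] == "manual":
        return instruction["params"]              # S161: user-entered values
    if instruction["mode"] == "one_key":
        return TEMPLATES[instruction["template"]]  # S162: template lookup
    raise ValueError(f"unknown adjustment mode: {instruction['mode']}")
```

The returned parameters would then be applied to the user virtual face model in step S160.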
And S170, sending the adjusted user virtual face model to a user side for displaying.
In this embodiment, after the adjusted virtual face model of the user is generated in the server, the adjusted virtual face model of the user can be sent to the user side, so that the user can view the adjusted virtual face model in the display area of the user interaction interface of the user side.
If the user needs to continue adjusting the currently generated adjusted user virtual face model, the process returns to step S150. That is, step S170 is followed by: if a readjustment instruction is detected, acquiring the user virtual face model and returning to step S150. In this way, when the user is not satisfied with the latest adjustment result (i.e. the adjusted user virtual face model does not achieve the effect the user expects), the model can quickly be returned for readjustment.
The method realizes the rapid construction of the virtual face model of the user based on the head size of the user and the head image of the user, can perform model adjustment in a self-defining or automatic mode, reduces the operation difficulty of constructing the virtual head model, and can generate the virtual head model more rapidly.
Embodiments of the present invention further provide a virtual head model building apparatus, where the virtual head model building apparatus is configured to execute any of the embodiments of the virtual head model building method described above. Specifically, referring to fig. 4, fig. 4 is a schematic block diagram of a virtual head model building apparatus according to an embodiment of the present invention. The virtual head model construction apparatus 100 may be configured in a server.
As shown in fig. 4, the virtual head model construction apparatus 100 includes a size receiving unit 110, a head model initial construction unit 120, a head image receiving unit 130, a virtual face model construction unit 140, an adjustment instruction receiving unit 150, a face model adjusting unit 160, and a current face model transmitting unit 170.
A size receiving unit 110, configured to receive a user header size entered by a user; wherein the head size of the user comprises head width, head length, head height and head circumference.
In this embodiment, before the user views the virtual avatar (that is, before the user's real facial image is rendered onto the virtual head model), the user measures his or her head and enters the head size (key parameters such as head width, head length, head height and head circumference) on the user-side interaction interface corresponding to the server; once entry of the head width, head length, head height and head circumference is complete, the user terminal sends the user head size to the server.
A head model initial constructing unit 120, configured to construct a three-dimensional model according to the size of the head of the user to obtain a head model of the user.
In this embodiment, after the user enters the head size on the user side and uploads it to the server, the server can quickly resize a common virtual head model pre-stored on the server according to the user head size, achieving three-dimensional modelling and rapidly obtaining the user head model.
In an embodiment, the virtual head model construction apparatus 100 further includes:
a null value determination unit configured to determine whether a null value exists in a plurality of size parameter values included in the user head size;
a null value prompting unit, configured to send, to the user side, a prompting message for prompting to supplement a corresponding null value if there is a null value in the multiple size parameter values included in the user head size;
and the null value filling unit is used for filling the corresponding null value by the average value of the size parameter values if the supplementary value of the size parameter value corresponding to the corresponding null value, which is sent by the user terminal, is not received within a preset time threshold value so as to obtain the updated head size of the user.
In this embodiment, after the user uploads the user head size to the server, the server performs null detection on the size parameter values included in the uploaded head size to determine whether any of them is empty. If none is empty, the server can directly construct a user head model from the user head size combined with the pre-stored common virtual head model. If some values are empty, the uploaded head size contains unfilled size parameters, and the user is prompted to supplement the blank values promptly and re-upload the completed head size. If the user does not re-upload a completed head size within a preset time threshold of the server feeding back the prompt, the server fills each blank value with the stored average of the corresponding parameter item, obtaining the updated user head size. This guarantees that no user head parameter is left blank.
The header image receiving unit 130 is configured to receive a header image of a user uploaded by a user.
In this embodiment, after the user finishes entering and uploading the head size on the user side, the current user head image can either be captured by the user side's camera and uploaded to the server, or selected from images stored locally on the user side and uploaded. In this way, a user head image is obtained quickly to serve as the image material for a high-fidelity simulation of the user's avatar.
In an embodiment, the virtual head model construction apparatus 100 further includes:
and the key pixel point acquisition unit is used for acquiring the key pixel points in the user head image through a cascade linear regression model.
In this embodiment, after receiving the user head image uploaded by the user side, the server needs to obtain the key pixel points in the image, such as the eyes, nose tip, mouth corners, eyebrows, and the contour points of each part of the face, in order to render the user head image onto the user head model and obtain the user virtual face model.
In an embodiment, the key pixel point obtaining unit is further configured to:
and acquiring an eye key point set, a nose key point set, a mouth key point set and an eyebrow key point set in the head image of the user through a cascade linear regression model, and forming core key pixel points by the eye key point set, the nose key point set, the mouth key point set and the eyebrow key point set.
In specific implementation, a cascaded linear regression model can be used to locate key pixel points in the user head image. The problem of locating facial feature points can be seen as learning a regression function F with the image I as input and the output θ as the location of the feature points (face shape):
θ = F(I) = f_n(f_{n-1}(…f_1(θ_0, I), I), I);
θ_i = f_i(θ_{i-1}, I), i = 1, …, n;
"Cascading" means that the current function f_i depends on the output θ_{i-1} of the previous-stage function f_{i-1}; the learning target of each f_i is the true feature-point position θ, and θ_0 is the initial shape.
After the key pixel points in the user head image have been located by the cascaded linear regression model, the core key points among them can be screened out (for example, the 4 pixel points at the left end, right end, top and bottom of the left eyelid are enough to outline the shape of the user's left eye). At minimum, an eye key point set, a nose key point set, a mouth key point set and an eyebrow key point set are obtained from the user head image through the cascaded linear regression model, and these key point sets form the core key pixel points.
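The cascade θ_i = f_i(θ_{i-1}, I) described above can be sketched numerically. The additive linear-update form and the toy feature function below are illustrative assumptions (practical implementations extract shape-indexed image features around the current landmark estimate), not details from this disclosure:

```python
import numpy as np

def cascade_predict(image_features, stages, theta0):
    """Run a cascade of linear regression stages.

    Each stage updates the current shape estimate theta using a learned
    linear map (R, b) applied to a feature vector built from the image
    and the previous stage's output, so stage i depends on theta_{i-1}.
    """
    theta = theta0.copy()
    for R, b in stages:
        # Toy feature: concatenate current shape with global image features.
        phi = np.concatenate([theta, image_features])
        theta = theta + R @ phi + b  # theta_i = f_i(theta_{i-1}, I)
    return theta
```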
The virtual face model constructing unit 140 is configured to render and add the user head image to the user head model according to a one-to-one mapping relationship between key pixel points of the user head image and key nodes in the user head model, so as to obtain a user virtual face model.
In the present embodiment, the key point sets of each facial organ (e.g., the eye key point set, nose key point set, mouth key point set and eyebrow key point set) are set in advance for the user head model. The core key points screened from the key pixel points of the user head image are therefore mapped one-to-one to the key point sets of each facial organ in the user head model, after which the user head image can be rendered onto the user head model to obtain the user virtual face model.
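The one-to-one mapping step can be sketched as a simple pairing of named landmarks; the names and coordinates below are hypothetical, and a real renderer would use the resulting pairs to texture-map the image onto the mesh:

```python
def map_keypoints_to_nodes(image_keypoints: dict, model_nodes: dict) -> dict:
    """Pair each named key pixel point in the head image with the
    pre-set key node of the same name in the user head model."""
    shared = image_keypoints.keys() & model_nodes.keys()
    return {name: (image_keypoints[name], model_nodes[name])
            for name in sorted(shared)}
```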
An adjustment instruction receiving unit 150, configured to receive a facial model adjustment instruction uploaded by a user side, and analyze an adjustment parameter corresponding to the facial model adjustment instruction; the facial model adjusting instruction is a user-defined manual adjusting instruction or a one-key automatic adjusting instruction.
In this embodiment, when a user selects to adjust the generated virtual facial model of the user on a user interaction interface for adjusting the facial model provided by the server, two ways are generally adopted, namely, a user-defined manual adjustment and a one-button automatic adjustment. In the user-defined manual adjustment mode, the user can adjust the user virtual facial model on an edit frame or a key point of the user virtual facial model displayed on the user interaction interface.
And the face model adjusting unit 160 is configured to perform corresponding adjustment on the user virtual face model according to the adjustment parameters, so as to obtain an adjusted user virtual face model.
In this embodiment, after the server obtains the adjustment parameters corresponding to the facial model adjustment instruction and the type of the instruction, it can quickly apply the corresponding adjustment to the user virtual face model to update it. Because the adjustment parameters are determined either by the user entering them manually or automatically by selecting an adjustment template, the adjustment process is simplified and model adjustment can be performed more quickly and simply.
In one embodiment, as shown in fig. 5, the face model adjusting unit 160 includes:
a custom adjusting unit 161, configured to, if an adjustment mode corresponding to the facial model adjustment instruction is a custom manual adjustment instruction, obtain a corresponding adjustment parameter to perform corresponding adjustment on the user virtual facial model, so as to obtain an adjusted user virtual facial model;
and an automatic adjusting unit 162, configured to obtain the selected adjusting template if the adjusting mode corresponding to the facial model adjusting instruction is a one-key automatic adjusting instruction, and perform corresponding adjustment on the user virtual facial model according to the adjusting parameters corresponding to the adjusting template to obtain an adjusted user virtual facial model.
In this embodiment, in the custom manual adjustment mode, the user may, for example, enter any value in the 30-34 mm range for the left-to-right width of the eyelid in the edit box of the face model adjustment interface, or directly drag the key point at the right end of the eyelid until the displayed left-to-right eyelid width reaches any value in that range. In the one-key automatic adjustment mode, the adjustment template selected by the user (for example, a template A formed from the size parameters of the facial organs of star A) is obtained, and the user virtual face model is adjusted accordingly to obtain the adjusted user virtual face model. These two modes simplify the model adjustment process and reduce the professional skill required of the user.
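The two adjustment modes amount to a small dispatch on the instruction type. A hedged sketch follows; the parameter names, instruction layout and template table are all assumptions for illustration:

```python
def adjust_face_model(model_params: dict, instruction: dict, templates: dict) -> dict:
    """Apply either a custom manual adjustment or a one-key template adjustment."""
    updated = dict(model_params)
    if instruction["mode"] == "custom":
        # Custom manual mode: user-entered parameter values win.
        updated.update(instruction["params"])
    elif instruction["mode"] == "one_key":
        # One-key mode: copy the parameters of the selected adjustment template.
        updated.update(templates[instruction["template_id"]])
    return updated
```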
A current face model sending unit 170, configured to send the adjusted user virtual face model to the user side for display.
In this embodiment, after the adjusted virtual face model of the user is generated in the server, the adjusted virtual face model of the user can be sent to the user side, so that the user can view the adjusted virtual face model in the display area of the user interaction interface of the user side.
If the user needs to continue adjusting the currently generated adjusted user virtual face model, the flow returns to the step of receiving the face model adjustment instruction uploaded by the user side and analyzing the adjustment parameter corresponding to the face model adjustment instruction. That is, the virtual head model construction apparatus 100 further includes:
and the readjustment instruction detection unit is used for acquiring the user virtual face model if a readjustment instruction is detected, returning to execute the steps of receiving the face model adjustment instruction uploaded by the user side and analyzing the adjustment parameters corresponding to the face model adjustment instruction.
In this way, when the user is not satisfied with the latest model adjustment result (i.e., the adjusted user virtual face model does not achieve the effect the user expects), the user can quickly return and readjust the virtual face model.
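The receive-adjust-display-readjust cycle can be sketched as a loop over callbacks; the callback names are hypothetical placeholders for the server's real receiving, adjusting and sending steps:

```python
def adjustment_loop(model, receive_instruction, apply_adjustment,
                    send_to_user, user_satisfied):
    """Repeat the adjust-and-display cycle until the user accepts the result."""
    while True:
        instruction = receive_instruction()   # face model adjustment instruction
        model = apply_adjustment(model, instruction)
        send_to_user(model)                   # display on the user side
        if user_satisfied():                  # no readjustment instruction detected
            return model
```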
The apparatus realizes rapid construction of the user virtual face model based on the user head size and the user head image, supports model adjustment in a user-defined or automatic mode, reduces the operational difficulty of constructing a virtual head model, and can generate the virtual head model more quickly.
The virtual head model construction means described above may be implemented in the form of a computer program which may be run on a computer device as shown in fig. 6.
Referring to fig. 6, fig. 6 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 500 is a server, and the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 6, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform a virtual head model construction method.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be caused to execute the virtual head model building method.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 6 is a block diagram of only the portion of the configuration relevant to the present invention and does not limit the computer device 500 to which the present invention may be applied; a particular computer device 500 may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement the virtual head model building method disclosed in the embodiment of the present invention.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 6 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 6, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the processor 502 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the virtual head model construction method disclosed by the embodiments of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only a logical division, and there may be other divisions when the actual implementation is performed, or units having the same function may be grouped into one unit, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A virtual head model construction method, comprising:
receiving the head size of a user input by a user side; wherein the head size of the user comprises head width, head length, head height and head circumference;
constructing a three-dimensional model according to the head size of the user to obtain a head model of the user;
receiving a user head image uploaded by a user side;
according to the one-to-one mapping relation between the key pixel points of the user head image and the key nodes in the user head model, rendering and adding the user head image to the user head model to obtain a user virtual face model;
receiving a facial model adjusting instruction uploaded by a user side, and analyzing an adjusting parameter corresponding to the facial model adjusting instruction; the facial model adjusting instruction is a user-defined manual adjusting instruction or a one-key automatic adjusting instruction;
correspondingly adjusting the user virtual face model according to the adjustment parameters to obtain an adjusted user virtual face model; and
and sending the adjusted user virtual face model to a user side for displaying.
2. The virtual head model building method according to claim 1, wherein before building a three-dimensional model according to the head size of the user to obtain a head model of the user, the method further comprises:
judging whether null values exist in a plurality of size parameter values included in the user head size;
if null values exist in a plurality of size parameter values included in the head size of the user, sending prompt information for prompting to supplement the corresponding null values to the user side;
and if the supplementary value of the size parameter value corresponding to the corresponding null value sent by the user side is not received within the preset time threshold, filling the corresponding null value by the average value of the size parameter value to obtain the updated head size of the user.
3. The virtual head model building method according to claim 1, wherein after receiving the user head image uploaded by the user side, the method further comprises:
and obtaining key pixel points in the user head image through a cascade linear regression model.
4. The virtual head model construction method according to claim 3, wherein the obtaining key pixel points in the user head image through a cascaded linear regression model comprises:
and acquiring an eye key point set, a nose key point set, a mouth key point set and an eyebrow key point set in the head image of the user through a cascade linear regression model, and forming core key pixel points by the eye key point set, the nose key point set, the mouth key point set and the eyebrow key point set.
5. The method for constructing a virtual head model according to claim 1, wherein the correspondingly adjusting the user virtual face model according to the adjustment parameters to obtain an adjusted user virtual face model comprises:
if the adjustment mode corresponding to the facial model adjustment instruction is a user-defined manual adjustment instruction, acquiring corresponding adjustment parameters to correspondingly adjust the user virtual facial model to obtain an adjusted user virtual facial model;
and if the adjustment mode corresponding to the face model adjustment instruction is a one-key automatic adjustment instruction, acquiring the selected adjustment template, and correspondingly adjusting the user virtual face model according to the adjustment parameters corresponding to the adjustment template to obtain the adjusted user virtual face model.
6. A virtual head model construction apparatus, characterized by comprising:
the size receiving unit is used for receiving the size of the head of the user input by the user side; wherein the head size of the user comprises head width, head length, head height and head circumference;
the head model initial construction unit is used for constructing a three-dimensional model according to the head size of the user to obtain a head model of the user;
the head image receiving unit is used for receiving the head images of the users uploaded by the user terminals;
the virtual face model building unit is used for rendering and adding the user head image to the user head model according to the one-to-one mapping relation between the key pixel points of the user head image and the key nodes in the user head model so as to obtain a user virtual face model;
the adjustment instruction receiving unit is used for receiving a face model adjustment instruction uploaded by a user side and analyzing an adjustment parameter corresponding to the face model adjustment instruction; the facial model adjusting instruction is a user-defined manual adjusting instruction or a one-key automatic adjusting instruction;
the face model adjusting unit is used for correspondingly adjusting the user virtual face model according to the adjusting parameters to obtain an adjusted user virtual face model; and
and the current face model sending unit is used for sending the adjusted user virtual face model to the user side for displaying.
7. The virtual head model construction apparatus according to claim 6, further comprising:
a null value detection unit configured to determine whether a null value exists in a plurality of size parameter values included in the user head size;
a null value prompting unit, configured to send, to the user side, a prompting message for prompting to supplement a corresponding null value if there is a null value in the multiple size parameter values included in the user head size;
and the null value automatic filling unit is used for filling the corresponding null value through the average value of the size parameter values if the supplementary value of the size parameter value corresponding to the corresponding null value sent by the user side is not received within a preset time threshold value so as to obtain the updated head size of the user.
8. The virtual head model construction apparatus according to claim 6, further comprising:
and the key pixel point acquisition unit is used for acquiring the key pixel points in the user head image through a cascade linear regression model.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the virtual head model construction method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to execute the virtual head model construction method according to any one of claims 1 to 5.
CN201910851204.9A 2019-09-10 2019-09-10 Virtual head model construction method and device, computer equipment and storage medium Pending CN110751717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910851204.9A CN110751717A (en) 2019-09-10 2019-09-10 Virtual head model construction method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN110751717A true CN110751717A (en) 2020-02-04

Family

ID=69276246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910851204.9A Pending CN110751717A (en) 2019-09-10 2019-09-10 Virtual head model construction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110751717A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250573A (en) * 2015-09-14 2016-12-21 深圳市易知见科技有限公司 The modeling method of a kind of virtual three-dimensional human body and modeling
CN107679519A (en) * 2017-10-27 2018-02-09 北京光年无限科技有限公司 A kind of multi-modal interaction processing method and system based on visual human
CN108062783A (en) * 2018-01-12 2018-05-22 北京蜜枝科技有限公司 FA Facial Animation mapped system and method
CN108447017A (en) * 2018-05-31 2018-08-24 Oppo广东移动通信有限公司 Face virtual face-lifting method and device
CN109272566A (en) * 2018-08-15 2019-01-25 广州多益网络股份有限公司 Movement expression edit methods, device, equipment, system and the medium of virtual role


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112598779A (en) * 2020-11-18 2021-04-02 广东三维家信息科技有限公司 Node modeling method and device, electronic equipment and storage medium
CN112598779B (en) * 2020-11-18 2024-05-31 广东三维家信息科技有限公司 Node modeling method and device, electronic equipment and storage medium
CN115294301A (en) * 2022-08-11 2022-11-04 广州沃佳科技有限公司 Head model construction method, device, equipment and medium based on face image
CN115981467A (en) * 2022-12-27 2023-04-18 北京字跳网络技术有限公司 Image synthesis parameter determining method, image synthesis method and device
CN115981467B (en) * 2022-12-27 2023-09-01 北京字跳网络技术有限公司 Image synthesis parameter determining method, image synthesis method and device

Similar Documents

Publication Publication Date Title
US11861936B2 (en) Face reenactment
CN111754415B (en) Face image processing method and device, image equipment and storage medium
US11750786B2 (en) Providing apparatus, providing method and computer readable storage medium for performing processing relating to a virtual viewpoint image
US9905039B2 (en) View independent color equalized 3D scene texturing
CN110751717A (en) Virtual head model construction method and device, computer equipment and storage medium
US10062203B2 (en) Voxelization of mesh representations
CN110413108B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
CN111369428B (en) Virtual head portrait generation method and device
CN109675315A (en) Generation method, device, processor and the terminal of avatar model
CN114025219B (en) Rendering method, device, medium and equipment for augmented reality special effects
US20230306685A1 (en) Image processing method, model training method, related apparatuses, and program product
CN113099298A (en) Method and device for changing virtual image and terminal equipment
CN110674837A (en) Video similarity obtaining method and device, computer equipment and storage medium
US11380037B2 (en) Method and apparatus for generating virtual operating object, storage medium, and electronic device
CN110610546A (en) Video image display method, device, terminal and storage medium
CN115398884A (en) Self-timer settings and inventory video creation
CN113380269B (en) Video image generation method, apparatus, device, medium, and computer program product
CN115512014A (en) Method for training expression driving generation model, expression driving method and device
CN115421639A (en) Panorama display method and device, electronic equipment and medium
CN112561822B (en) Beautifying method and device, electronic equipment and storage medium
CN115035248A (en) Graphic data processing method and system based on three-dimensional modeling
CN113408452A (en) Expression redirection training method and device, electronic equipment and readable storage medium
CN117078974B (en) Image processing method and device, electronic equipment and storage medium
CN116563505B (en) Avatar generation method, apparatus, electronic device, and storage medium
US11983819B2 (en) Methods and systems for deforming a 3D body model based on a 2D image of an adorned subject

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination