CN117218292A - 3D model generation method, device, equipment and medium - Google Patents

3D model generation method, device, equipment and medium

Info

Publication number
CN117218292A
CN117218292A
Authority
CN
China
Prior art keywords
model
human body
image
standard human
virtual character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311199405.8A
Other languages
Chinese (zh)
Inventor
Qiu Pengshuai
Lu Zefeng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Yuanyi Universe Holdings Group Co ltd
Original Assignee
Wuhan Yuanyi Universe Holdings Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Yuanyi Universe Holdings Group Co ltd filed Critical Wuhan Yuanyi Universe Holdings Group Co ltd
Priority to CN202311199405.8A
Publication of CN117218292A
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a 3D model generation method, device, equipment and medium, relating to the technical field of 3D model generation. The method comprises the following steps: S1, acquiring a standard human body model, wherein the standard human body model comprises a male standard human body model and a female standard human body model; S2, acquiring a two-dimensional image of the virtual character for which a 3D model is to be constructed, and recognizing the posture of the virtual character in the two-dimensional image. According to the 3D model generation method, device, equipment and medium, a human body 3D model constructed according to human anatomy knowledge is imported, the two-dimensional image of the virtual character is recognized, and the posture of the human body 3D model is adjusted to match that of the virtual character in the two-dimensional image, which facilitates comparison; the details of the model are then adjusted on the basis of the human body 3D model to obtain the 3D model of the virtual character. This simplifies the steps of constructing the 3D model of a virtual character and makes model construction more convenient and rapid.

Description

3D model generation method, device, equipment and medium
Technical Field
The application relates to the technical field of 3D model generation, and in particular to a 3D model generation method, device, equipment and medium.
Background
With the rise of the metaverse, users expect a more realistic metaverse experience, which requires 3D modeling of the virtual characters in the metaverse.
The Chinese patent with publication number CN111932672B discloses a method for automatically generating a super-realistic 3D facial model based on machine learning, comprising the following steps: S1, fitting a 2D face picture to a 3D face model using a 3DMM three-dimensional deformable model; S2, generating a UV image using a deep-learning neural network; S3, applying the UV image to the 3D face model to generate a super-realistic 3D facial model. A super-realistic 3D facial model can thus be generated from a single 2D face picture without other equipment or steps, greatly simplifying the generation process and its cost.
The prior art thus discloses a convenient and rapid face modeling method. However, the metaverse requires a whole-body 3D model, and most current virtual characters exist only as two-dimensional plane images, so the 3D model of a virtual character must be built from its two-dimensional image; the general 3D modeling process is complex and labor-intensive.
Disclosure of Invention
The application aims to provide a 3D model generation method, device, equipment and medium that remedy the defects in the prior art.
In order to achieve the above object, the present application provides the following technical solutions: a 3D model generation method, comprising the steps of:
s1, acquiring a standard human body model, wherein the standard human body model comprises a male standard human body model and a female standard human body model;
s2, acquiring a two-dimensional image of the virtual character for which a 3D model is to be constructed, and recognizing the posture, the proportions of the body parts, and the sex of the virtual character in the two-dimensional image;
s3, selecting the corresponding standard human body model according to the recognized sex of the virtual character, adjusting the posture of the standard human body model to the recognized posture of the virtual character, and adjusting the proportions of the parts of the standard human body model according to the recognized proportions of the parts of the virtual character;
s4, adjusting the display angle of the standard human body model to be consistent with the display angle of the virtual character in the two-dimensional image;
s5, adjusting the details of the standard human body model according to the two-dimensional image to generate a target model, obtaining a model image of the target model at that angle, and carrying out real-time image analysis on the model image and the two-dimensional image to obtain the similarity;
s6, obtaining the maximum value of the similarity, repeating step S5 until the similarity of the target model reaches that maximum value, and determining the target model.
Furthermore, the standard human body model is a human body 3D model constructed according to human anatomy knowledge; the standard human body model comprises bones and muscles, and can be modeled manually in 3D modeling software such as 3ds Max, Maya, Silo, Blender, ZBrush or Rhinoceros and then imported.
Further, in step S3, the proportions of the parts of the standard human body model are adjusted according to the recognized proportions of the parts, specifically comprising the following steps:
a1, dividing the standard human body model into eight major categories: the head, the torso, the upper arms, the forearms, the hands, the thighs, the lower legs and the feet;
a2, adjusting the size of each major category according to the recognized proportions of the parts: when the head and the torso are adjusted, the length, the width and the circumference are adjusted, where the circumference refers to the circumference of the head and of the torso; when the upper arms, forearms, hands, thighs, lower legs and feet are adjusted, both the length and the thickness are adjusted;
a3, adjusting the ratio of the forearm to the hand and the ratio of the lower leg to the foot according to the recognized proportions of the parts.
Further, in step a2, when the head and the torso are adjusted, the adjustment covers the length, the width and the circumference;
when the upper arms, forearms, hands, thighs, lower legs and feet are adjusted, the adjustment covers the length and the thickness.
Further, in step S5, the details of the standard human body model are adjusted according to the two-dimensional image to generate the target model, specifically comprising the following steps:
b1, analyzing the shapes of the face, eyebrows, eyes, cheeks, nose, lips and ears of the virtual character in the two-dimensional image, and correspondingly adjusting the shapes of the face, eyebrows, eyes, cheeks, nose, lips and ears of the model's head;
b2, analyzing the three-dimensionality of the eyebrows, eyes, cheeks, nose, lips and ears of the virtual character in the two-dimensional image, and adjusting the three-dimensionality of the eyebrows, eyes, cheeks, nose, lips and ears of the model's head; the three-dimensionality includes the height of the eyebrows, the depth of the eye sockets, the height of the cheeks, the height of the nose bridge, the height of the lips, and the length of the ears in the front-rear direction;
b3, analyzing the torso, upper arms, forearms, thighs and lower legs of the virtual character in the two-dimensional image, and adjusting the muscles of the torso, upper arms, forearms, thighs and lower legs of the standard human body model.
Further, in step S5, real-time image analysis is performed on the model image and the two-dimensional image, and the image similarity is calculated by a perceptual hash algorithm, specifically comprising the following steps:
c1, reducing the large image of M rows and N columns of pixels to a small image of size m × m;
c2, converting the small color image into a small gray-scale image, represented by an m-order gray-scale pixel matrix;
c3, binarizing the small gray-scale image. The average gray value is

$$\overline{Gray} = \frac{1}{N} \sum_{i=1}^{m} \sum_{j=1}^{m} Gray(i, j)$$

where $\overline{Gray}$ denotes the average gray value of the image, i and j denote the row and the column of the m-order pixel matrix, each with value range [1, m], Gray(i, j) denotes the gray value at row i, column j of the m-order pixel matrix, and N = m² denotes the number of elements of the m-order pixel matrix. Binarization then uses

$$h(i, j) = \begin{cases} 1, & Gray(i, j) \ge \overline{Gray} \\ 0, & Gray(i, j) < \overline{Gray} \end{cases}$$

where h(i, j) denotes the value at row i, column j of the m-order binary pixel matrix: if the gray value of a pixel is greater than or equal to the average gray value, the binarization result is 1; otherwise it is 0. In the DCT perceptual hash algorithm, the DCT coefficients of the image are computed first; if the coefficient at row i, column j of the coefficient matrix is greater than or equal to the mean of all coefficients, the binarization result at that position is 1, otherwise it is 0.
A 3D model generation device comprises a model import module, a 3D model module, an image recognition module and an image comparison module;
the model importing module is used for importing a standard human body model into the 3D model module;
the 3D model module is used for storing a standard human body model and a target model;
the 3D model module is also used for adjusting and regulating the standard human body model and the target model;
the image recognition module is used for recognizing the gestures of the virtual characters and the gestures of the virtual characters in the two-dimensional image and recognizing the image of the display angle of the model;
the image comparison module is used for comparing and calculating the similarity between the virtual character in the two-dimensional image and the image of the model display angle.
An electronic device comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the 3D model generation method when executing the program.
A storage medium has stored thereon a computer program which, when executed by a processor, implements the 3D model generation method.
Compared with the prior art, in the 3D model generation method, device, equipment and medium provided by the application, a human body 3D model constructed according to human anatomy knowledge is imported, the two-dimensional image of the virtual character for which a 3D model is to be constructed is recognized, and the posture of the human body 3D model is adjusted to match that of the virtual character in the two-dimensional image, which facilitates comparison; the details of the model are then adjusted on the basis of the human body 3D model to obtain the 3D model of the virtual character, simplifying the steps of constructing the 3D model of a virtual character and making model construction more convenient and rapid.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments described in the present application; a person of ordinary skill in the art may derive other drawings from them.
FIG. 1 is a diagram of steps in a method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an apparatus module according to an embodiment of the present application.
Detailed Description
In order to make the technical scheme of the present application better understood by those skilled in the art, the present application will be further described in detail with reference to the accompanying drawings.
In the description of the present application, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present application and simplifying the description; they do not indicate or imply that the device or element referred to must have a specific orientation or be configured and operated in a specific orientation, and thus should not be construed as limiting the present application.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise. Furthermore, the terms "mounted," "connected," and "coupled" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected or indirectly connected through an intermediate medium, or communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Example embodiments will be described more fully hereinafter with reference to the accompanying drawings; they may, however, be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Embodiments of the disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Embodiments described herein may be described with reference to plan and/or cross-sectional views with the aid of idealized schematic diagrams of the present disclosure. Accordingly, the example illustrations may be modified in accordance with manufacturing techniques and/or tolerances. Thus, the embodiments are not limited to the embodiments shown in the drawings, but include modifications of the configuration formed based on the manufacturing process. Thus, the regions illustrated in the figures have schematic properties and the shapes of the regions illustrated in the figures illustrate the particular shapes of the regions of the elements, but are not intended to be limiting.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Referring to fig. 1, the present application provides a 3D model generating method, which includes the following steps:
s1, acquiring a standard human body model, wherein the standard human body model comprises a male standard human body model and a female standard human body model; the standard manikin is a 3D model of the human body constructed from human anatomy, preferably the standard manikin comprises bones and muscles. The standard manikin can be manually modeled by 3D modeling software such as 3DSMax, maya, silo, blender, rhin, zbrush, rhinocero and the like and then imported.
S2, acquiring a two-dimensional image of the virtual character for which a 3D model is to be constructed, and recognizing the posture, the proportions of the body parts, and the sex of the virtual character in the two-dimensional image. The sex of the virtual character, together with its posture, body-part proportions and the like, can be recognized by a trained convolutional neural network.
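For illustration, a minimal PyTorch sketch of such a classifier is given below; the layer sizes, the two-class output and the class ordering are assumptions, and in practice the posture and body-part proportions would come from a separate estimator such as a keypoint detector.

```python
# Minimal sketch of a convolutional sex classifier; layer sizes are assumptions.
import torch
import torch.nn as nn

class SexClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits over {male, female} (assumed ordering)
        )

    def forward(self, x):  # x: (B, 3, H, W) batch of character images
        return self.head(self.features(x))

model = SexClassifier()
logits = model(torch.randn(1, 3, 128, 128))  # dummy two-dimensional image
sex = logits.argmax(dim=1)                   # 0 = male, 1 = female (assumed)
```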
S3, selecting the corresponding standard human body model according to the recognized sex of the virtual character, adjusting the posture of the standard human body model to the recognized posture of the virtual character, and adjusting the proportions of the parts of the standard human body model according to the recognized proportions of the parts. This comprises the following steps (a code sketch follows the list):
a1, dividing the standard human body model into eight major categories: the head, the torso, the upper arms, the forearms, the hands, the thighs, the lower legs and the feet;
a2, adjusting the size of each major category according to the recognized proportions of the parts: when the head and the torso are adjusted, the length, the width and the circumference are adjusted, where the circumference refers to the circumference of the head and of the torso; when the upper arms, forearms, hands, thighs, lower legs and feet are adjusted, both the length and the thickness are adjusted;
a3, adjusting the ratio of the forearm to the hand and the ratio of the lower leg to the foot according to the recognized proportions of the parts.
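The following NumPy sketch illustrates one way step a2 could scale a single body part; the assumption that each part is a named set of vertex indices scaled about its own centroid is illustrative, not something the disclosure fixes.

```python
# Illustrative sketch: scale one body part about its centroid; the vertex-index
# layout and the axis convention are assumptions.
import numpy as np

def scale_part(vertices, part_idx, length=1.0, thickness=1.0, axis=1):
    """Scale the part's length along `axis` (the bone direction) and its
    thickness in the two perpendicular directions, in place."""
    part = vertices[part_idx]
    center = part.mean(axis=0)
    factors = np.full(3, thickness, dtype=float)
    factors[axis] = length
    vertices[part_idx] = (part - center) * factors + center

# Example: lengthen the forearm by 10% and thin it by 5% (hypothetical indices).
# scale_part(model_vertices, forearm_indices, length=1.10, thickness=0.95)
```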
S4, adjusting the display angle of the standard human body model to be consistent with the display angle of the virtual character in the two-dimensional image;
S5, adjusting the details of the standard human body model according to the two-dimensional image to generate a target model, obtaining a model image of the target model at that angle, and carrying out real-time image analysis on the model image and the two-dimensional image to obtain the similarity. Generating the target model by adjusting the details of the standard human body model according to the two-dimensional image comprises the following steps (see the sketch after the list):
b1, analyzing the shapes of the face, eyebrows, eyes, cheeks, nose, lips and ears of the virtual character in the two-dimensional image, and correspondingly adjusting the shapes of the face, eyebrows, eyes, cheeks, nose, lips and ears of the model's head;
b2, analyzing the three-dimensionality of the eyebrows, eyes, cheeks, nose, lips and ears of the virtual character in the two-dimensional image, and adjusting the three-dimensionality of the eyebrows, eyes, cheeks, nose, lips and ears of the model's head; the three-dimensionality includes the height of the eyebrows, the depth of the eye sockets, the height of the cheeks, the height of the nose bridge, the height of the lips, and the length of the ears in the front-rear direction;
b3, analyzing the torso, upper arms, forearms, thighs and lower legs of the virtual character in the two-dimensional image, and adjusting the muscles of the torso, upper arms, forearms, thighs and lower legs of the standard human body model.
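One common way to realize steps b1 and b2 is through morph targets (blendshapes); the sketch below assumes the standard model ships with one vertex-offset field per facial feature, which is an assumption rather than part of the disclosure.

```python
# Illustrative morph-target sketch; the offset names are hypothetical.
import numpy as np

def apply_details(base_vertices, offsets, weights):
    """base_vertices: (V, 3) rest-pose vertices; offsets: name -> (V, 3) vertex
    deltas; weights: name -> blend weight estimated from the 2D image."""
    result = base_vertices.copy()
    for name, w in weights.items():
        result = result + w * offsets[name]
    return result

# e.g. weights = {"nose_bridge_height": 0.7, "eye_socket_depth": 0.4,
#                 "ear_length": 0.2}  # values derived from the image analysis
```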
Real-time image analysis is then performed on the model image and the two-dimensional image, and the image similarity is calculated by a perceptual hash algorithm. The specific steps are as follows:
c1, reducing the large image of M rows and N columns of pixels to a small image of size m × m;
c2, converting the small color image into a small gray-scale image, represented by an m-order gray-scale pixel matrix;
c3, binarizing the small gray-scale image. The average gray value is

$$\overline{Gray} = \frac{1}{N} \sum_{i=1}^{m} \sum_{j=1}^{m} Gray(i, j)$$

where $\overline{Gray}$ denotes the average gray value of the image, i and j denote the row and the column of the m-order pixel matrix, each with value range [1, m], Gray(i, j) denotes the gray value at row i, column j of the m-order pixel matrix, and N = m² denotes the number of elements of the m-order pixel matrix. Binarization then uses

$$h(i, j) = \begin{cases} 1, & Gray(i, j) \ge \overline{Gray} \\ 0, & Gray(i, j) < \overline{Gray} \end{cases}$$

where h(i, j) denotes the value at row i, column j of the m-order binary pixel matrix: if the gray value of a pixel is greater than or equal to the average gray value, the binarization result is 1; otherwise it is 0. In the DCT perceptual hash algorithm, the DCT coefficients of the image are computed first; if the coefficient at row i, column j of the coefficient matrix is greater than or equal to the mean of all coefficients, the binarization result at that position is 1, otherwise it is 0.
S6, obtaining the maximum value of the similarity, and repeating step S5 until the similarity of the target model reaches that maximum value, at which point the target model is determined.
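The S5/S6 loop amounts to a simple hill-climbing refinement. In the sketch below, `propose_adjustment`, `render_at_angle` and `similarity` are hypothetical helpers standing in for the detail adjustment, the rendering of the model image at the matched display angle, and the perceptual-hash comparison above.

```python
# Sketch of the S5/S6 refinement loop; all three helper callables are hypothetical.
def refine(model, character_image, propose_adjustment, render_at_angle,
           similarity, max_steps=100):
    best = model
    best_sim = similarity(render_at_angle(best), character_image)
    for _ in range(max_steps):
        candidate = propose_adjustment(best)
        sim = similarity(render_at_angle(candidate), character_image)
        if sim > best_sim:  # keep only adjustments that improve the similarity
            best, best_sim = candidate, sim
    return best, best_sim   # target model once the similarity stops improving
```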
Referring to fig. 2, the present application further provides a 3D model generation device, which comprises a model import module, a 3D model module, an image recognition module and an image comparison module;
the model importing module is used for importing the standard human body model into the 3D model module;
the 3D model module is used for storing a standard human body model and a target model;
the 3D model module is also used for adjusting and regulating the standard human body model and the target model;
the image recognition module is used for recognizing the gestures of the virtual character and the virtual character in the two-dimensional image and recognizing the image of the model display angle;
the image comparison module is used for comparing and calculating the similarity between the virtual character in the two-dimensional image and the image of the model display angle.
An electronic device comprises a memory, a processor and a computer program stored on the memory and executable on the processor; when the processor executes the program, the 3D model generation method provided by the application is implemented.
A storage medium having stored thereon a computer program which, when executed by a processor, implements a 3D model generation method provided by the present application.
While certain exemplary embodiments of the present application have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the application, which is defined by the appended claims.

Claims (9)

1. A 3D model generation method, characterized by comprising the following steps:
s1, acquiring a standard human body model, wherein the standard human body model comprises a male standard human body model and a female standard human body model;
s2, acquiring a two-dimensional image of the virtual character for which a 3D model is to be constructed, and recognizing the posture, the proportions of the body parts, and the sex of the virtual character in the two-dimensional image;
s3, selecting the corresponding standard human body model according to the recognized sex of the virtual character, adjusting the posture of the standard human body model to the recognized posture of the virtual character, and adjusting the proportions of the parts of the standard human body model according to the recognized proportions of the parts of the virtual character;
s4, adjusting the display angle of the standard human body model to be consistent with the display angle of the virtual character in the two-dimensional image;
s5, adjusting the details of the standard human body model according to the two-dimensional image to generate a target model, obtaining a model image of the target model at that angle, and carrying out real-time image analysis on the model image and the two-dimensional image to obtain the similarity;
s6, obtaining the maximum value of the similarity, repeating step S5 until the similarity of the target model reaches that maximum value, and determining the target model.
2. The 3D model generation method according to claim 1, characterized in that the standard human body model is a human body 3D model constructed according to human anatomy knowledge.
3. The 3D model generation method according to claim 2, characterized in that in step S3, the proportions of the parts of the standard human body model are adjusted according to the recognized proportions of the parts, specifically comprising the following steps:
a1, dividing the standard human body model into eight major categories: the head, the torso, the upper arms, the forearms, the hands, the thighs, the lower legs and the feet;
a2, adjusting the size of each major category according to the recognized proportions of the parts;
a3, adjusting the ratio of the forearm to the hand and the ratio of the lower leg to the foot according to the recognized proportions of the parts.
4. The 3D model generation method according to claim 3, characterized in that when the head and the torso are adjusted in step a2, the adjustment covers the length, the width and the circumference;
when the upper arms, forearms, hands, thighs, lower legs and feet are adjusted, the adjustment covers the length and the thickness.
5. The 3D model generation method according to claim 3, characterized in that in step S5, the details of the standard human body model are adjusted according to the two-dimensional image to generate the target model, specifically comprising the following steps:
b1, analyzing the shapes of the face, eyebrows, eyes, cheeks, nose, lips and ears of the virtual character in the two-dimensional image, and correspondingly adjusting the shapes of the face, eyebrows, eyes, cheeks, nose, lips and ears of the model's head;
b2, analyzing the three-dimensionality of the eyebrows, eyes, cheeks, nose, lips and ears of the virtual character in the two-dimensional image, and adjusting the three-dimensionality of the eyebrows, eyes, cheeks, nose, lips and ears of the model's head;
b3, analyzing the torso, upper arms, forearms, thighs and lower legs of the virtual character in the two-dimensional image, and adjusting the muscles of the torso, upper arms, forearms, thighs and lower legs of the standard human body model.
6. The 3D model generation method according to claim 1, characterized in that in step S5, real-time image analysis is performed on the model image and the two-dimensional image, and the image similarity is calculated by a perceptual hash algorithm.
7. A 3D model generation device, characterized by comprising a model import module, a 3D model module, an image recognition module and an image comparison module;
the model importing module is used for importing a standard human body model into the 3D model module;
the 3D model module is used for storing a standard human body model and a target model;
the 3D model module is also used for adjusting the standard human body model and the target model;
the image recognition module is used for recognizing the virtual character and the posture of the virtual character in the two-dimensional image, and for recognizing the image of the model at the display angle;
the image comparison module is used for comparing the virtual character in the two-dimensional image with the image of the model at the display angle and calculating their similarity.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the 3D model generation method according to any one of claims 1-6.
9. A storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the 3D model generation method according to any one of claims 1-6.
CN202311199405.8A 2023-09-18 2023-09-18 3D model generation method, device, equipment and medium Pending CN117218292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311199405.8A CN117218292A (en) 2023-09-18 2023-09-18 3D model generation method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311199405.8A CN117218292A (en) 2023-09-18 2023-09-18 3D model generation method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117218292A true CN117218292A (en) 2023-12-12

Family

ID=89043819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311199405.8A Pending CN117218292A (en) 2023-09-18 2023-09-18 3D model generation method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117218292A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119909A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Method and device for establishing controllable human body model and storage medium
CN114119907A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Fitting method and device of human body model and storage medium
CN114119913A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Human body model driving method, device and storage medium
CN114119911A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Human body model neural network training method, device and storage medium
CN114202629A (en) * 2020-08-27 2022-03-18 北京陌陌信息技术有限公司 Human body model establishing method, system, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination