CN109147037B - Special effect processing method and device based on three-dimensional model and electronic equipment - Google Patents


Info

Publication number
CN109147037B
CN109147037B (application CN201810934012.XA)
Authority
CN
China
Prior art keywords
special effect
three-dimensional model
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810934012.XA
Other languages
Chinese (zh)
Other versions
CN109147037A (en)
Inventor
阎法典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810934012.XA priority Critical patent/CN109147037B/en
Publication of CN109147037A publication Critical patent/CN109147037A/en
Priority to PCT/CN2019/088118 priority patent/WO2020034698A1/en
Application granted granted Critical
Publication of CN109147037B publication Critical patent/CN109147037B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

The application provides a special effect processing method and apparatus based on a three-dimensional model, and an electronic device. The method includes: acquiring a captured two-dimensional face image and depth information corresponding to the face image; performing three-dimensional reconstruction of the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; recognizing the expression category corresponding to the two-dimensional face image; and fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. With this method, the user does not need to switch between different special effect models manually, the degree of automation of special effect addition is increased, the special effect addition process becomes more enjoyable and playable for the user, and the realism of the added special effects is improved, so that the processed result looks natural.

Description

Special effect processing method and device based on three-dimensional model and electronic equipment
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to a special effect processing method and apparatus based on a three-dimensional model, and an electronic device.
Background
With the popularization of electronic devices, more and more users prefer to take pictures or record life by using the photographing function of the electronic devices. Also, in order to make the photographed image more interesting, various applications for beautifying the image or adding special effects have been developed. The user can select the favorite special effect from all the special effects of the application program to process the image according to the requirement of the user, so that the image is vivid and interesting.
Adding facial special effects such as tears depends on active selection by the user, so the degree of automation of special effect addition is low. In addition, because special effects are added to a two-dimensional image, they cannot fit or match the image perfectly; the image processing effect is therefore poor, and the added special effects lack realism.
Disclosure of Invention
The application provides a special effect processing method and apparatus based on a three-dimensional model, and an electronic device, so that the user does not need to switch between different special effect models manually, the degree of automation of special effect addition is increased, the special effect addition process becomes more enjoyable and playable for the user, and the realism of the added special effects is improved so that the processed result looks natural, thereby solving the technical problems in the prior art that added special effects lack realism and the degree of automation is low.
An embodiment of one aspect of the present application provides a special effect processing method based on a three-dimensional model, including:
acquiring an acquired two-dimensional face image and depth information corresponding to the face image;
according to the depth information and the face image, three-dimensional reconstruction is carried out on a face to obtain a three-dimensional model corresponding to the face;
identifying an expression category corresponding to the two-dimensional face image;
and fusing the three-dimensional model and the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
According to the three-dimensional model-based special effect processing method, the captured two-dimensional face image and the depth information corresponding to the face image are obtained; the face is then three-dimensionally reconstructed according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the expression category corresponding to the two-dimensional face image is recognized; and finally the three-dimensional model and the special effect model corresponding to the expression category are fused to obtain the three-dimensional model after special effect processing. Therefore, the user does not need to switch between different special effect models manually, the degree of automation of special effect addition is increased, and the special effect addition process becomes more enjoyable and playable for the user. In addition, because the corresponding special effect model is determined according to the expression made by the user and then fused with the three-dimensional model, the realism of the added special effects is improved and the processed result looks natural.
In another aspect of the present application, an embodiment of the present application provides a special effect processing apparatus based on a three-dimensional model, including:
the acquisition module is used for acquiring the acquired two-dimensional face image and depth information corresponding to the face image;
the reconstruction module is used for performing three-dimensional reconstruction on the human face according to the depth information and the human face image so as to obtain a three-dimensional model corresponding to the human face;
the recognition module is used for recognizing the expression type corresponding to the two-dimensional face image;
and the fusion module is used for fusing the three-dimensional model and the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
According to the three-dimensional model-based special effect processing device, the collected two-dimensional face image and the depth information corresponding to the face image are obtained, then the face is subjected to three-dimensional reconstruction according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, the expression category corresponding to the two-dimensional face image is identified, and finally the three-dimensional model and the special effect model corresponding to the expression category are fused to obtain the three-dimensional model after special effect processing. Therefore, a user does not need to manually switch different special effect models, the automation degree of special effect addition is improved, and the enjoyment and the playability of the user in the special effect addition process are improved. In addition, according to the expression made by the user, the corresponding special effect model is determined, so that the special effect model and the three-dimensional model are fused, the reality of special effect addition can be improved, and the processed effect is better and natural.
An embodiment of another aspect of the present application provides an electronic device, including: the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the three-dimensional model-based special effect processing method provided by the previous embodiment of the application.
In another aspect of the present application, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program is configured to, when executed by a processor, implement the three-dimensional model-based special effects processing method as set forth in the foregoing embodiments of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a three-dimensional model-based special effect processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a special effect processing method based on a three-dimensional model according to a second embodiment of the present application;
fig. 3 is a schematic flowchart of a three-dimensional model-based special effect processing method according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a three-dimensional model-based special effect processing apparatus according to a fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of a three-dimensional model-based special effect processing apparatus according to a fifth embodiment of the present application;
FIG. 6 is a schematic diagram showing an internal configuration of an electronic apparatus according to an embodiment;
FIG. 7 is a schematic diagram of an image processing circuit as one possible implementation;
fig. 8 is a schematic diagram of an image processing circuit as another possible implementation.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The application mainly addresses the technical problems in the prior art that added special effects lack realism and the degree of automation is low, and provides a special effect processing method based on a three-dimensional model.
According to the three-dimensional model-based special effect processing method, the collected two-dimensional face image and the depth information corresponding to the face image are obtained, then the face is subjected to three-dimensional reconstruction according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, the expression category corresponding to the two-dimensional face image is identified, and finally the three-dimensional model and the special effect model corresponding to the expression category are fused to obtain the three-dimensional model after special effect processing. Therefore, a user does not need to manually switch different special effect models, the automation degree of special effect addition is improved, and the enjoyment and the playability of the user in the special effect addition process are improved. In addition, according to the expression made by the user, the corresponding special effect model is determined, so that the special effect model and the three-dimensional model are fused, the reality of special effect addition can be improved, and the processed effect is better and natural.
The following describes a three-dimensional model-based special effect processing method, apparatus, and electronic device according to an embodiment of the present application with reference to the drawings.
Fig. 1 is a schematic flowchart of a special effect processing method based on a three-dimensional model according to an embodiment of the present application.
As shown in fig. 1, the three-dimensional model-based special effect processing method includes the following steps:
step 101, acquiring a two-dimensional face image and depth information corresponding to the face image.
In the embodiment of the application, the electronic device may include a visible light image sensor, and the two-dimensional face image may be acquired based on the visible light image sensor in the electronic device. Specifically, the visible light image sensor may include a visible light camera, and the visible light camera may capture visible light reflected by a human face for imaging, so as to obtain a two-dimensional human face image.
In this embodiment, the electronic device may further include a structured light image sensor, and the depth information corresponding to the face image may be acquired based on the structured light image sensor in the electronic device. Optionally, the structured light image sensor may include a laser lamp and a laser camera. Pulse Width Modulation (PWM) may be used to modulate the laser lamp to emit structured light; the structured light is projected onto the human face, and the laser camera captures the structured light reflected by the face to form a structured light image of the face. A depth engine can then calculate the depth information of the face from this structured light image, i.e., the depth information corresponding to the two-dimensional face image.
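The depth engine's computation can be viewed, in simplified form, as triangulation between the laser projector and the laser camera. The sketch below assumes this reduction and uses illustrative focal length and baseline values; none of these numbers come from the patent:

```python
def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.05):
    """Structured-light depth via triangulation: z = f * b / d.

    disparity_px: pixel shift of a projected pattern feature between its
    expected and observed position (illustrative parameterization).
    """
    return focal_px * baseline_m / disparity_px
```

With the placeholder intrinsics above, a 25-pixel disparity maps to a depth of 1 metre; a real depth engine also handles pattern decoding, calibration, and filtering.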
And 102, performing three-dimensional reconstruction on the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face.
In the embodiment of the application, after the depth information and the face image are obtained, the face can be subjected to three-dimensional reconstruction according to the depth information and the face image, so that a three-dimensional model corresponding to the face is obtained. In the application, the three-dimensional model corresponding to the human face is constructed by performing three-dimensional reconstruction according to the depth information and the human face image, rather than simply acquiring RGB data and depth data.
As a possible implementation manner, the depth information and the color information corresponding to the two-dimensional face image may be fused to obtain a three-dimensional model corresponding to the face. Specifically, the key points of the face can be extracted from the depth information and the key points of the face can be extracted from the color information based on a face key point detection technology, then the key points extracted from the depth information and the key points extracted from the color information are subjected to registration and key point fusion, and finally, a three-dimensional model corresponding to the face is generated according to the fused key points. The key points are obvious points on the human face or points at key positions, for example, the key points can be the canthus, the tip of the nose, the corner of the mouth, and the like.
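The key-point fusion described above can be illustrated with a minimal back-projection sketch: 2D key points detected on the colour image are lifted into 3D using the aligned depth map. The pinhole-camera intrinsics are placeholder values, not taken from the patent:

```python
def fuse_keypoints(color_kps, depth_map, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Lift 2D key points from the colour image into 3D using the aligned
    depth map and a pinhole camera model (intrinsics are illustrative)."""
    fused = []
    for (u, v) in color_kps:
        z = depth_map[int(v)][int(u)]   # depth sampled at the key point
        x = (u - cx) * z / fx           # back-project through the pinhole model
        y = (v - cy) * z / fy
        fused.append((x, y, z))
    return fused
```

A real pipeline would additionally register the depth and colour key points against each other before fusing, as the paragraph above describes.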
As another possible implementation manner, the method may include performing key point identification on a face image based on a face key point detection technology to obtain a second key point corresponding to the face image, and then determining a relative position of a first key point corresponding to the second key point in a three-dimensional model of the face according to depth information of the second key point and a position of the second key point on the face image, so that adjacent first key points may be connected according to the relative position of the first key point in a three-dimensional space to generate a local face three-dimensional frame. The local face may include facial parts such as a nose, lips, eyes, cheeks, and the like.
After the local face three-dimensional frames are generated, different local face three-dimensional frames can be spliced according to the same first key points contained in different local face three-dimensional frames, and a three-dimensional model corresponding to the face is obtained.
And 103, identifying expression categories corresponding to the two-dimensional face image.
As a possible implementation, the user may record in advance reference expressions corresponding to different expression categories; for example, the user may record reference expressions for categories such as sadness, happiness, depression, anger, and thinking. After the two-dimensional face image is obtained, the face image may be matched against the reference expressions, and the expression category corresponding to the reference expression that matches is taken as the expression category of the face image.
As another possible implementation, at least one frame of face image captured before the current frame may be obtained, and the expression category may then be determined from the current frame together with the at least one earlier frame. For example, from these frames it can be determined whether the eyebrows are raised or pulled down, whether the eyes are widened or narrowed, whether the mouth corners are raised or pulled down, and so on, and the expression category can be determined accordingly. For example, when the current frame and the earlier frame(s) show that the eyes are narrowed and the mouth corners are raised, the expression category may be determined to be happy.
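A minimal sketch of this frame-to-frame heuristic follows; the landmark indices and pixel thresholds are illustrative, since the patent does not specify concrete values:

```python
def classify_expression(prev_kps, curr_kps):
    """Heuristic expression detection from landmark motion between frames.

    Assumed layout (illustrative): index 0 = left mouth corner,
    index 1 = right mouth corner; image y grows downward, so a decrease
    in y means the point moved up.
    """
    lift = (prev_kps[0][1] - curr_kps[0][1]) + (prev_kps[1][1] - curr_kps[1][1])
    if lift > 4:
        return "happy"    # mouth corners moved up between frames
    if lift < -4:
        return "sad"      # mouth corners pulled down
    return "neutral"
```

A production recognizer would of course use many landmarks (eyebrows, eyes, mouth) or a trained classifier rather than a single hand-tuned rule.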
And 104, fusing the three-dimensional model with the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
In the embodiment of the present application, a correspondence between expression categories and special effect models may be stored in advance. For example, when the expression category is sadness, the special effect model may be tears; when the expression category is anger, the special effect model may be flames; when the expression category is nervousness, the special effect model may be cold sweat; and so on.
The special effect models can be stored in a material library of an application program on the electronic device, with different special effect models kept in the material library; alternatively, the application program can download new special effect models from a server in real time, and the newly downloaded special effect models are also stored in the material library.
Optionally, after the expression category is determined, the correspondence may be queried to obtain a special effect model matched with the expression category, and then the three-dimensional model and the special effect model corresponding to the expression category are fused to obtain a three-dimensional model after special effect processing.
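The stored correspondence and its query can be sketched as a plain table lookup; the category and effect names below are illustrative:

```python
EFFECT_FOR_EXPRESSION = {   # illustrative correspondence table
    "sad": "tears",
    "angry": "flames",
    "nervous": "cold_sweat",
}

def pick_effect(expression, table=EFFECT_FOR_EXPRESSION):
    """Return the special effect model matched to the recognized expression,
    or None when no effect is registered for that category."""
    return table.get(expression)
```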
As a possible implementation manner, in order to improve the display effect of the three-dimensional model after the special effect processing and enhance the sense of reality of the three-dimensional model after the special effect addition, in the embodiment of the application, the angle of the special effect model relative to the three-dimensional model may be adjusted so that the three-dimensional model and the special effect model are angularly matched, and then the special effect model is rendered and then is mapped to the three-dimensional model.
Further, after the three-dimensional model after special effect processing is obtained, the three-dimensional model after special effect processing can be displayed on a display interface of the electronic device, so that a user can conveniently and visually know the three-dimensional model after special effect processing.
According to the three-dimensional model-based special effect processing method, the collected two-dimensional face image and the depth information corresponding to the face image are obtained, then the face is subjected to three-dimensional reconstruction according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, the expression category corresponding to the two-dimensional face image is identified, and finally the three-dimensional model and the special effect model corresponding to the expression category are fused to obtain the three-dimensional model after special effect processing. Therefore, a user does not need to manually switch different special effect models, the automation degree of special effect addition is improved, and the enjoyment and the playability of the user in the special effect addition process are improved. In addition, according to the expression made by the user, the corresponding special effect model is determined, so that the special effect model and the three-dimensional model are fused, the reality of special effect addition can be improved, and the processed effect is better and natural.
As a possible implementation, to improve both the efficiency and the accuracy of expression category recognition, in the present application, after at least one frame of face image captured before the current frame is obtained, the expression category of the current frame is recognized only when the difference between the positions of the key points in the at least one earlier frame and their positions in the current frame is greater than a threshold. This process is described in detail below with reference to fig. 2.
Fig. 2 is a schematic flowchart of a special effect processing method based on a three-dimensional model according to a second embodiment of the present application.
Referring to fig. 2, based on the embodiment shown in fig. 1, step 103 may specifically include the following sub-steps:
step 201, identifying the position of each key point in the face image of the current frame.
Specifically, the positions of the key points in the face image of the current frame may be identified based on a key point identification technique.
Step 202, identifying the position of each key point in at least one frame of face image collected before the current frame.
In the embodiment of the application, at least one frame of face image acquired before the current frame can be acquired, and the position of each key point in the at least one frame of face image is determined based on a key point identification technology.
Step 203, determining whether the difference between the position of each key point in the at least one frame of face image and its position in the face image of the current frame is greater than a threshold; if yes, executing step 204, otherwise executing step 205.
The threshold may be preset in a built-in program of the electronic device, or the threshold may be set by a user, which is not limited to this.
And step 204, identifying the expression type corresponding to the current frame.
In the embodiment of the present application, when the difference between the position of each key point in the at least one earlier frame and its position in the current frame is greater than the threshold, it indicates that the user's expression has changed significantly between frames, and the user may want to add a special effect; the expression category corresponding to the current frame can therefore be recognized, triggering the subsequent special-effect-adding steps.
Step 205, no processing is done.
In the embodiment of the application, when the difference between the position of each key point in the at least one earlier frame and its position in the current frame is not greater than the threshold, it indicates that the user's expression has not changed significantly between frames, and the user may not want to add a special effect; no processing is therefore needed.
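The gating logic of steps 201 to 205 can be sketched as follows; the landmark format and the threshold value are illustrative:

```python
import math

def should_reclassify(prev_kps, curr_kps, threshold=5.0):
    """Gate expression recognition: run it only when at least one key point
    moved more than `threshold` pixels relative to the earlier frame
    (the threshold value is illustrative, not from the patent)."""
    return any(math.dist(p, c) > threshold for p, c in zip(prev_kps, curr_kps))
```

When `should_reclassify` returns False, the frame is skipped (step 205); otherwise the expression category is recognized (step 204).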
As a possible implementation manner, referring to fig. 3, on the basis of the embodiment shown in fig. 1, step 104 may specifically include the following sub-steps:
step 301, obtaining a corresponding special effect model according to the expression category.
As a possible implementation manner, the corresponding relationship between the expression category and the special effect model may be stored in advance, and after the expression category corresponding to the face image is determined, the corresponding relationship may be queried to obtain the special effect model matched with the expression category, which is simple to operate and easy to implement.
Step 302, adjusting the angle of the special effect model relative to the three-dimensional model so as to enable the three-dimensional model to be matched with the special effect model in angle.
Before the special effect model is mapped onto the three-dimensional model, the angle of the special effect model relative to the three-dimensional model needs to be adjusted so that the two match in angle. For example, when the special effect model is a tear, the display effect differs depending on whether the face in the three-dimensional model faces the screen directly or sideways, so the rotation angle of the special effect model needs to be adjusted according to the deflection angle of the face so that the three-dimensional model and the special effect model match in angle, which further improves the subsequent special effect processing.
As a possible implementation, angle parameters applicable to different special effect models may be predetermined, where an angle parameter may be a fixed value or a value range (e.g., [-45°, 45°]), which is not limited here. After the special effect model corresponding to the expression category is determined, the angle parameter applicable to that special effect model can be queried, and the special effect model is then rotated so that the included angle between a first line connecting preset target key points in the special effect model and a second line connecting preset reference key points in the three-dimensional model conforms to the angle parameter.
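A sketch of this angle matching, under the simplifying assumption that the included angle can be measured between two 2D key-point lines; the key points and the [-45°, 45°] range are the illustrative values from the text:

```python
import math

def line_angle(p, q):
    """Heading of the line p -> q, in degrees."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def rotation_to_match(effect_line, reference_line, allowed=(-45.0, 45.0)):
    """Rotation (degrees) to apply to the effect model so the included angle
    between its target-key-point line and the model's reference-key-point
    line falls inside the allowed range."""
    angle = line_angle(*effect_line) - line_angle(*reference_line)
    angle = (angle + 180.0) % 360.0 - 180.0   # normalise to (-180, 180]
    lo, hi = allowed
    if lo <= angle <= hi:
        return 0.0                            # already conforms; no rotation
    return (lo if angle < lo else hi) - angle
```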
And 303, inquiring corresponding key points of the to-be-pasted map in the three-dimensional model according to the special effect model.
It should be understood that different special effect models differ in the three-dimensional model at the key points to be mapped. For example, when the special effect model is a tear, the tear generally flows down from the key point corresponding to the corner of the eye to the key point corresponding to the alar part of the nose, and then flows down from the key point corresponding to the alar part of the nose to the key point corresponding to the corner of the mouth, so that the key points corresponding to the corner of the eye to the alar part of the nose and the key points corresponding to the alar part to the corner of the mouth can be used as the key points to be mapped. Or, when the special effect model is cold sweat, the cold sweat generally flows down from the key point corresponding to the forehead to the key point corresponding to the brow tail, then flows down from the key point corresponding to the brow tail to the key point corresponding to the cheek, and then flows down from the key point corresponding to the cheek to the key point corresponding to the chin, so that the key points corresponding to the forehead to the brow tail, the brow tail to the cheek, and the cheek to the chin can be used as the key points to be mapped.
As a possible implementation manner, the corresponding relationship between different special effect models and the key points of the to-be-pasted drawing may be established in advance, and after the special effect model corresponding to the expression category is determined, the corresponding relationship may be queried to obtain the key points of the to-be-pasted drawing corresponding to the special effect model in the three-dimensional model.
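This pre-built correspondence between effect models and to-be-mapped key points can likewise be sketched as a lookup table holding the paths described above; the key-point names are illustrative labels:

```python
TO_MAP_KEYPOINTS = {   # illustrative paths each effect follows over the face
    "tears": ["eye_corner", "nose_wing", "mouth_corner"],
    "cold_sweat": ["forehead", "brow_tail", "cheek", "chin"],
}

def keypoints_to_map(effect):
    """Query the pre-built correspondence for a given special effect model."""
    return TO_MAP_KEYPOINTS.get(effect, [])
```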
And 304, taking the area where the key point of the to-be-pasted picture corresponding to the special effect model is located as the area of the to-be-pasted picture in the three-dimensional model.
In the embodiment of the application, different special effect models have different regions to be mapped; once the key points to be mapped that correspond to the special effect model in the three-dimensional model have been determined, the region where those key points are located can be taken as the region to be mapped.
And 305, deforming the special effect model according to the region to be pasted of the three-dimensional model, so that the deformed special effect model covers the region to be pasted.
In the embodiment of the application, after the region to be pasted in the three-dimensional model is determined, the special effect model can be deformed, so that the deformed special effect model covers the region to be pasted, and the special effect processing effect is improved.
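A flat, two-dimensional sketch of this covering step: the effect model's footprint is scaled and translated so its bounding box spans the to-be-mapped region. The real deformation would warp the effect over the 3D mesh, so this is only an assumption-laden illustration:

```python
def fit_effect_to_region(effect_pts, region_min, region_max):
    """Scale and translate the effect model's 2D footprint so its bounding
    box exactly covers the to-be-mapped region (bounding-box fit only)."""
    xs = [p[0] for p in effect_pts]
    ys = [p[1] for p in effect_pts]
    sx = (region_max[0] - region_min[0]) / max(max(xs) - min(xs), 1e-9)
    sy = (region_max[1] - region_min[1]) / max(max(ys) - min(ys), 1e-9)
    return [((x - min(xs)) * sx + region_min[0],
             (y - min(ys)) * sy + region_min[1]) for x, y in effect_pts]
```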
And step 306, rendering the special effect model, and then pasting the special effect model to the three-dimensional model.
In order to match the special effect model with the three-dimensional model and further ensure the display effect of the three-dimensional model after special effect processing, the special effect model can be rendered and then mapped to the three-dimensional model.
As a possible implementation manner, the special effect model can be rendered according to the light effect of the three-dimensional model, so that the lighting of the rendered special effect model matches the three-dimensional model, further improving the display effect of the three-dimensional model after special effect processing.
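One crude way to make the rendered special effect's lighting match the model — an illustrative assumption, not the patent's method — is to rescale the effect texture's mean brightness to the mean brightness of the model surface:

```python
def match_lighting(effect_pixels, model_pixels):
    """Rescale the special effect texture's brightness so its mean matches
    the mean brightness of the three-dimensional model's surface.
    Assumes grayscale intensities in [0, 255]."""
    mean_effect = sum(effect_pixels) / len(effect_pixels)
    mean_model = sum(model_pixels) / len(model_pixels)
    gain = mean_model / mean_effect if mean_effect else 1.0
    # Apply the gain and clamp to the valid intensity range.
    return [min(255.0, p * gain) for p in effect_pixels]
```

A full renderer would instead estimate the scene's light direction and shade the effect mesh accordingly; this sketch only conveys the idea of matching the two appearances.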
In order to implement the above embodiments, the present application further provides a special effect processing apparatus based on a three-dimensional model.
Fig. 4 is a schematic structural diagram of a special effect processing apparatus based on a three-dimensional model according to a fourth embodiment of the present application.
As shown in fig. 4, the three-dimensional model-based special effects processing apparatus 100 includes: an acquisition module 110, a reconstruction module 120, an identification module 130, and a fusion module 140. Wherein,
the obtaining module 110 is configured to obtain the acquired two-dimensional face image and depth information corresponding to the face image.
And the reconstruction module 120 is configured to perform three-dimensional reconstruction on the face according to the depth information and the face image, so as to obtain a three-dimensional model corresponding to the face.
And the recognition module 130 is configured to recognize an expression category corresponding to the two-dimensional face image.
And the fusion module 140 is configured to fuse the three-dimensional model with the special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.
Further, in a possible implementation manner of the embodiment of the present application, referring to fig. 5, on the basis of the embodiment shown in fig. 4, the three-dimensional model-based special effect processing apparatus 100 may further include:
as a possible implementation, the identifying module 130 includes:
the first identifying submodule 131 is configured to identify positions of key points in the face image of the current frame.
The second identifying submodule 132 is configured to identify, for at least one frame of face image acquired before the current frame, positions of key points in the at least one frame of face image.
The third identifying submodule 133 is configured to identify an expression category corresponding to the current frame if a difference between the position of each key point in the at least one frame of face image and the position of each key point in the face image of the current frame is greater than a threshold.
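The trigger logic of sub-modules 131–133 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; it interprets "the difference ... is greater than a threshold" as every key point moving more than the threshold, measured as Euclidean distance:

```python
import math

def expression_changed(prev_keypoints, curr_keypoints, threshold):
    """Return True when every key point has moved by more than `threshold`
    between an earlier frame and the current frame, i.e. when expression
    recognition should be run on the current frame."""
    return all(
        math.dist(p, c) > threshold
        for p, c in zip(prev_keypoints, curr_keypoints)
    )
```

Gating recognition on key point motion avoids re-running the (comparatively expensive) expression classifier on frames where the face has not meaningfully changed.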
As a possible implementation, the fusion module 140 includes:
the obtaining submodule 141 is configured to obtain a corresponding special effect model according to the expression category.
And the adjusting submodule 142 is used for adjusting the angle of the special effect model relative to the three-dimensional model so as to enable the three-dimensional model to be matched with the special effect model in angle.
As a possible implementation manner, the adjusting sub-module 142 is specifically configured to: inquiring the angle parameter applicable to the special effect model; and rotating the special effect model to enable an included angle between a first connecting line of a preset target key point in the special effect model and a second connecting line of a preset reference key point in the three-dimensional model to accord with the angle parameter.
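A 2-D sketch of this angle adjustment (illustrative only — the patent works with a 3-D model; here each line is a pair of key point coordinates and angles are in degrees):

```python
import math

def rotate_effect_to_angle(target_line, reference_line, angle_param_deg):
    """Return the rotation (in degrees) to apply to the special effect model
    so that the included angle between its target-key-point line and the
    model's reference-key-point line equals the queried angle parameter."""
    def direction(line):
        (x0, y0), (x1, y1) = line
        return math.degrees(math.atan2(y1 - y0, x1 - x0))

    # Current included angle between the first and second connecting lines.
    current = direction(target_line) - direction(reference_line)
    return angle_param_deg - current
```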
And the mapping sub-module 143 is configured to render the special effect model and map the special effect model to the three-dimensional model.
As a possible implementation manner, the mapping sub-module 143 is specifically configured to: and rendering the special effect model according to the light effect of the three-dimensional model.
And the deformation submodule 144 is configured to deform the special effect model according to the region to be mapped of the three-dimensional model after rendering the special effect model and before mapping to the three-dimensional model, so that the deformed special effect model covers the region to be mapped.
And the query submodule 145 is configured to query, according to the special effect model, corresponding key points to be mapped in the three-dimensional model before the special effect model is deformed according to the region to be mapped of the three-dimensional model.
And the processing sub-module 146 is configured to, in the three-dimensional model, use the region where the to-be-pasted-drawing key point corresponding to the special effect model is located as the to-be-pasted-drawing region.
It should be noted that the foregoing explanation on the embodiment of the special effect processing method based on the three-dimensional model is also applicable to the special effect processing apparatus 100 based on the three-dimensional model of this embodiment, and details are not repeated here.
According to the three-dimensional model-based special effect processing device, the collected two-dimensional face image and the depth information corresponding to the face image are obtained, then the face is subjected to three-dimensional reconstruction according to the depth information and the face image to obtain a three-dimensional model corresponding to the face, the expression category corresponding to the two-dimensional face image is identified, and finally the three-dimensional model and the special effect model corresponding to the expression category are fused to obtain the three-dimensional model after special effect processing. Therefore, a user does not need to manually switch different special effect models, the automation degree of special effect addition is improved, and the enjoyment and the playability of the user in the special effect addition process are improved. In addition, according to the expression made by the user, the corresponding special effect model is determined, so that the special effect model and the three-dimensional model are fused, the reality of special effect addition can be improved, and the processed effect is better and natural.
In order to implement the above embodiments, the present application further provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the three-dimensional model-based special effect processing method provided by the foregoing embodiments of the present application is implemented.
In order to achieve the foregoing embodiments, the present application also proposes a computer-readable storage medium on which a computer program is stored, wherein the program is configured to implement the three-dimensional model-based special effect processing method proposed in the foregoing embodiments of the present application when executed by a processor.
Fig. 6 is a schematic diagram of the internal structure of the electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected by a system bus 210. Memory 230 of electronic device 200 stores, among other things, an operating system and computer-readable instructions. The computer readable instructions can be executed by the processor 220 to implement the three-dimensional model-based special effect processing method according to the embodiment of the present application. The processor 220 is used to provide computing and control capabilities that support the operation of the overall electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 250 may be a touch layer covered on the display 240, a button, a trackball or a touch pad arranged on a housing of the electronic device 200, or an external keyboard, a touch pad or a mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses), etc.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is merely a schematic diagram of a portion of the configuration associated with the present application, and does not constitute a limitation on the electronic device 200 to which the present application is applied, and that a particular electronic device 200 may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
For clarity of the electronic device provided in this embodiment, please refer to fig. 7, which provides an image processing circuit according to this embodiment, and the image processing circuit can be implemented by hardware and/or software components.
It should be noted that fig. 7 is a schematic diagram of an image processing circuit as one possible implementation. For ease of illustration, only the various aspects associated with the embodiments of the present application are shown.
As shown in fig. 7, the image processing circuit specifically includes: an image unit 310, a depth information unit 320, and a processing unit 330. Wherein,
and an image unit 310 for outputting a two-dimensional face image.
A depth information unit 320 for outputting depth information.
In the embodiment of the present application, a two-dimensional face image may be obtained by the image unit 310, and depth information corresponding to the face image may be obtained by the depth information unit 320.
The processing unit 330 is electrically connected to the image unit 310 and the depth information unit 320, respectively, and configured to perform three-dimensional reconstruction on the face according to the two-dimensional face image obtained by the image unit 310 and the corresponding depth information obtained by the depth information unit 320 to obtain a three-dimensional model corresponding to the face, identify an expression category corresponding to the two-dimensional face image, and fuse the three-dimensional model and a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing.
In this embodiment, the two-dimensional face image obtained by the image unit 310 may be sent to the processing unit 330, the depth information corresponding to the face image obtained by the depth information unit 320 may be sent to the processing unit 330, the processing unit 330 may perform three-dimensional reconstruction on the face according to the face image and the depth information to obtain a three-dimensional model corresponding to the face, identify an expression category corresponding to the two-dimensional face image, and fuse the three-dimensional model and a special effect model corresponding to the expression category to obtain a three-dimensional model after special effect processing. For a specific implementation process, reference may be made to the explanation of the three-dimensional model-based special effect processing method in the embodiments of fig. 1 to fig. 3, which is not described herein again.
Further, as a possible implementation manner of the present application, referring to fig. 8, on the basis of the embodiment shown in fig. 7, the image processing circuit may further include:
as a possible implementation manner, the image unit 310 may specifically include: an Image sensor 311 and an Image Signal Processing (ISP) processor 312 electrically connected to each other. Wherein the content of the first and second substances,
and an image sensor 311 for outputting raw image data.
And an ISP processor 312, configured to output a face image according to the original image data.
In the embodiment of the present application, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312. The ISP processor 312 analyzes the raw image data to gather image statistics that can be used to determine one or more control parameters of the image sensor 311. The image sensor 311 may include an array of color filters (e.g., a Bayer filter) and corresponding photosites; it captures the light intensity and wavelength information at each photosite and provides a set of raw image data that can be processed by the ISP processor 312. The ISP processor 312 processes the raw image data to obtain a face image in YUV or RGB format, and sends the face image to the processing unit 330.
The ISP processor 312 may process the raw image data in a plurality of formats on a pixel-by-pixel basis when processing the raw image data. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 312 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
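As a toy illustration of the demosaicing an ISP performs on raw data — an assumption for illustration only, since the patent does not specify the ISP's algorithm and real ISPs use far more sophisticated interpolation — one 2×2 RGGB Bayer cell can be reduced to a single RGB pixel:

```python
def bayer_cell_to_rgb(cell):
    """Convert one 2x2 RGGB Bayer cell of raw sensor values into a single
    RGB pixel by averaging the two green photosites.
    `cell` is [[R, G1], [G2, B]] (RGGB layout assumed)."""
    (r, g1), (g2, b) = cell
    return (r, (g1 + g2) / 2.0, b)
```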
As a possible implementation manner, the depth information unit 320 includes a structured light sensor 321 and a depth map generating chip 322, which are electrically connected. Wherein,
a structured light sensor 321 for generating an infrared speckle pattern.
The depth map generating chip 322 is used for outputting depth information according to the infrared speckle pattern; the depth information comprises a depth map.
In the embodiment of the present application, the structured light sensor 321 projects speckle structured light to a subject, obtains structured light reflected by the subject, and obtains an infrared speckle pattern according to imaging of the reflected structured light. The structured light sensor 321 sends the infrared speckle pattern to the Depth Map generating chip 322, so that the Depth Map generating chip 322 determines the morphological change condition of the structured light according to the infrared speckle pattern, and further determines the Depth of the shot object according to the morphological change condition, so as to obtain a Depth Map (Depth Map), wherein the Depth Map indicates the Depth of each pixel point in the infrared speckle pattern. The depth map generating chip 322 sends the depth map to the processing unit 330.
As a possible implementation manner, the processing unit 330 includes: a CPU 331 and a GPU (Graphics Processing Unit) 332, which are electrically connected. Wherein,
the CPU331 is configured to align the face image and the depth map according to the calibration data, and output a three-dimensional model corresponding to the face according to the aligned face image and depth map.
And the GPU332 is used for identifying the expression type corresponding to the two-dimensional face image, and fusing the three-dimensional model and the special effect model corresponding to the expression type to obtain the three-dimensional model after special effect processing.
In the embodiment of the present application, the CPU331 acquires a face image from the ISP processor 312, acquires a depth map from the depth map generating chip 322, and aligns the face image with the depth map by combining with calibration data obtained in advance, thereby determining depth information corresponding to each pixel point in the face image. Further, the CPU331 performs three-dimensional reconstruction of the face based on the depth information and the face image to obtain a three-dimensional model corresponding to the face.
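The per-pixel alignment performed by the CPU 331 can be sketched under a deliberately simplified calibration model (a scale plus an offset per axis). Real structured-light calibration uses full intrinsic and extrinsic camera parameters; all names and parameters here are hypothetical:

```python
def depth_for_image_pixels(depth_map, calib_scale, calib_offset):
    """Map each depth-map pixel into the face-image coordinate frame and
    return a dict from face-image (u, v) coordinates to depth values,
    i.e. the depth information corresponding to each face-image pixel."""
    h = len(depth_map)
    w = len(depth_map[0])
    sx, sy = calib_scale
    ox, oy = calib_offset
    aligned = {}
    for v in range(h):
        for u in range(w):
            # Calibration transform: depth-map coords -> face-image coords.
            iu, iv = int(u * sx + ox), int(v * sy + oy)
            aligned[(iu, iv)] = depth_map[v][u]
    return aligned
```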
The CPU331 sends the three-dimensional model corresponding to the face to the GPU332, so that the GPU332 executes the special effect processing method based on the three-dimensional model described in the foregoing embodiment according to the three-dimensional model corresponding to the face, and realizes fusion of the three-dimensional model and the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
Further, the image processing circuit may further include: a display unit 340.
The display unit 340 is electrically connected to the GPU332, and is configured to display the three-dimensional model after the special effect processing.
Specifically, the three-dimensional model after the special effect processing obtained by the GPU 332 may be displayed by the display unit 340.
Optionally, the image processing circuit may further include: an encoder 350 and a memory 360.
In this embodiment, the three-dimensional model after the special effect processing obtained by the GPU332 may be encoded by the encoder 350 and stored in the memory 360, where the encoder 350 may be implemented by a coprocessor.
In one embodiment, the memory 360 may be multiple memories or divided into multiple memory spaces, and the image data processed by the GPU 332 may be stored in a dedicated memory or a dedicated memory space, which may include a DMA (Direct Memory Access) feature. The memory 360 may be configured to implement one or more frame buffers.
The above process is described in detail below with reference to fig. 8.
As shown in fig. 8, the raw image data captured by the image sensor 311 is first processed by the ISP processor 312. The ISP processor 312 analyzes the raw image data to gather image statistics that can be used to determine one or more control parameters of the image sensor 311, obtains a face image in YUV or RGB format, and sends it to the CPU 331.
As shown in fig. 8, the structured light sensor 321 projects speckle structured light to a subject, acquires structured light reflected by the subject, and forms an image according to the reflected structured light to obtain an infrared speckle pattern. The structured light sensor 321 sends the infrared speckle pattern to the Depth Map generating chip 322, so that the Depth Map generating chip 322 determines the morphological change condition of the structured light according to the infrared speckle pattern, and further determines the Depth of the shot object according to the morphological change condition, thereby obtaining a Depth Map (Depth Map). The depth map generating chip 322 sends the depth map to the CPU 331.
The CPU331 acquires a face image from the ISP processor 312, acquires a depth map from the depth map generation chip 322, and aligns the face image with the depth map by combining with calibration data obtained in advance, thereby determining depth information corresponding to each pixel point in the face image. Further, the CPU331 performs three-dimensional reconstruction of the face based on the depth information and the face image to obtain a three-dimensional model corresponding to the face.
The CPU331 sends the three-dimensional model corresponding to the face to the GPU332, so that the GPU332 executes the special effect processing method based on the three-dimensional model described in the foregoing embodiment according to the three-dimensional model of the face, and realizes fusion of the three-dimensional model and the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing. The three-dimensional model after the special effect processing, which is processed by the GPU332, may be displayed by the display 340 and/or encoded by the encoder 350 and stored in the memory 360.
For example, the special effect processing method may be implemented by the processor 220 in fig. 6, or by the image processing circuit in fig. 8 (specifically, the CPU 331 and the GPU 332), performing the following steps:
the CPU331 acquires a two-dimensional face image and depth information corresponding to the face image; the CPU331 performs three-dimensional reconstruction on the face according to the depth information and the face image to obtain a three-dimensional model corresponding to the face; the GPU332 identifies expression categories corresponding to the two-dimensional face image; the GPU332 fuses the three-dimensional model and the special effect model corresponding to the expression category to obtain the three-dimensional model after special effect processing.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (7)

1. A special effect processing method based on a three-dimensional model is characterized by comprising the following steps:
acquiring an acquired two-dimensional face image and depth information corresponding to the face image;
performing three-dimensional reconstruction on a human face according to the depth information and the human face image to obtain a three-dimensional model corresponding to the human face, wherein the key points extracted from the depth information and the key points extracted from the color information corresponding to the human face image are subjected to registration and fusion processing to generate the three-dimensional model;
identifying an expression category corresponding to the two-dimensional face image;
acquiring a corresponding special effect model according to the expression category;
adjusting the angle of the special effect model relative to the three-dimensional model so that the three-dimensional model and the special effect model are angle-matched;
inquiring the corresponding relation between different pre-established special effect models and key points of a to-be-pasted drawing, and acquiring the key points of the to-be-pasted drawing corresponding to the special effect models in the three-dimensional model;
in the three-dimensional model, taking the region where the key point of the to-be-pasted picture corresponding to the special effect model is located as the region of the to-be-pasted picture;
according to the area to be pasted of the three-dimensional model, deforming the special effect model so that the deformed special effect model covers the area to be pasted;
and after rendering the special effect model, mapping the special effect model onto the three-dimensional model.
2. The special effects processing method according to claim 1, wherein the identifying an expression class corresponding to the two-dimensional face image includes:
identifying the position of each key point in the face image of the current frame;
identifying the position of each key point in at least one frame of face image collected before the current frame;
and if the difference between the position of each key point in the at least one frame of face image and the position of each key point in the face image of the current frame is greater than a threshold value, identifying the expression type corresponding to the current frame.
3. The special effects processing method of claim 1, wherein the adjusting the angle of the special effects model relative to the three-dimensional model to angularly match the three-dimensional model and the special effects model comprises:
inquiring the angle parameter applicable to the special effect model;
and rotating the special effect model to enable an included angle between a first connecting line of a preset target key point in the special effect model and a second connecting line of a preset reference key point in the three-dimensional model to accord with the angle parameter.
4. The special effect processing method according to claim 1, wherein the rendering the special effect model includes:
and rendering the special effect model according to the light effect of the three-dimensional model.
5. A three-dimensional model-based special effects processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring the acquired two-dimensional face image and depth information corresponding to the face image;
the reconstruction module is used for performing three-dimensional reconstruction on a human face according to the depth information and the human face image so as to obtain a three-dimensional model corresponding to the human face, wherein the key points extracted from the depth information and the key points extracted from the color information corresponding to the human face image are subjected to registration and fusion processing so as to generate the three-dimensional model;
the recognition module is used for recognizing the expression type corresponding to the two-dimensional face image;
the fusion module is used for acquiring a corresponding special effect model according to the expression category; adjusting the angle of the special effect model relative to the three-dimensional model so that the three-dimensional model and the special effect model are angle-matched; inquiring the corresponding relation between different pre-established special effect models and key points of a to-be-pasted drawing, and acquiring the key points of the to-be-pasted drawing corresponding to the special effect models in the three-dimensional model; in the three-dimensional model, taking the region where the key point of the to-be-pasted picture corresponding to the special effect model is located as the region of the to-be-pasted picture; according to the area to be pasted of the three-dimensional model, deforming the special effect model so that the deformed special effect model covers the area to be pasted; and after rendering the special effect model, mapping the special effect model onto the three-dimensional model.
6. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which when executed by the processor implements the three-dimensional model based special effects processing method of any of claims 1 to 4.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the three-dimensional model-based special effects processing method according to any one of claims 1 to 4.
CN201810934012.XA 2018-08-16 2018-08-16 Special effect processing method and device based on three-dimensional model and electronic equipment Active CN109147037B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810934012.XA CN109147037B (en) 2018-08-16 2018-08-16 Special effect processing method and device based on three-dimensional model and electronic equipment
PCT/CN2019/088118 WO2020034698A1 (en) 2018-08-16 2019-05-23 Three-dimensional model-based special effect processing method and device, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810934012.XA CN109147037B (en) 2018-08-16 2018-08-16 Special effect processing method and device based on three-dimensional model and electronic equipment

Publications (2)

Publication Number Publication Date
CN109147037A CN109147037A (en) 2019-01-04
CN109147037B true CN109147037B (en) 2020-09-18

Family

ID=64789563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810934012.XA Active CN109147037B (en) 2018-08-16 2018-08-16 Special effect processing method and device based on three-dimensional model and electronic equipment

Country Status (2)

Country Link
CN (1) CN109147037B (en)
WO (1) WO2020034698A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109147037B (en) * 2018-08-16 2020-09-18 Oppo广东移动通信有限公司 Special effect processing method and device based on three-dimensional model and electronic equipment
CN110310318B (en) * 2019-07-03 2022-10-04 北京字节跳动网络技术有限公司 Special effect processing method and device, storage medium and terminal
CN111639613B (en) * 2020-06-04 2024-04-16 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN112004020B (en) * 2020-08-19 2022-08-12 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113538696A (en) * 2021-07-20 2021-10-22 广州博冠信息科技有限公司 Special effect generation method and device, storage medium and electronic equipment
CN114494556A (en) * 2022-01-30 2022-05-13 北京大甜绵白糖科技有限公司 Special effect rendering method, device and equipment and storage medium
CN114677386A (en) * 2022-03-25 2022-06-28 北京字跳网络技术有限公司 Special effect image processing method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US6088040A (en) * 1996-09-17 2000-07-11 Atr Human Information Processing Research Laboratories Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image
CN105118082A (en) * 2015-07-30 2015-12-02 科大讯飞股份有限公司 Personalized video generation method and system
CN106920274A (zh) * 2017-01-20 2017-07-04 南京开为网络科技有限公司 Face modeling method for rapidly converting 2D key points into 3D blend-shape deformations on a mobile terminal
CN107452034A (en) * 2017-07-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method and its device

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
DE69915901T2 (en) * 1998-01-14 2004-09-02 Canon K.K. Image processing device
CN101021952A (en) * 2007-03-23 2007-08-22 北京中星微电子有限公司 Method and apparatus for realizing three-dimensional video special efficiency
CN101452582B (en) * 2008-12-18 2013-09-18 北京中星微电子有限公司 Method and device for implementing three-dimensional video specific action
CN102054291A (en) * 2009-11-04 2011-05-11 厦门市美亚柏科信息股份有限公司 Method and device for reconstructing three-dimensional face based on single face image
US20140088750A1 (en) * 2012-09-21 2014-03-27 Kloneworld Pte. Ltd. Systems, methods and processes for mass and efficient production, distribution and/or customization of one or more articles
US9378576B2 (en) * 2013-06-07 2016-06-28 Faceshift Ag Online modeling for real-time facial animation
CN104346824A (en) * 2013-08-09 2015-02-11 汉王科技股份有限公司 Method and device for automatically synthesizing three-dimensional expression based on single facial image
CN104978764B (en) * 2014-04-10 2017-11-17 华为技术有限公司 3 d human face mesh model processing method and equipment
CN104732203B (en) * 2015-03-05 2019-03-26 中国科学院软件研究所 A kind of Emotion identification and tracking based on video information
CN108154550B (en) * 2017-11-29 2021-07-06 奥比中光科技集团股份有限公司 RGBD camera-based real-time three-dimensional face reconstruction method
CN108062791A (en) * 2018-01-12 2018-05-22 北京奇虎科技有限公司 A kind of method and apparatus for rebuilding human face three-dimensional model
CN109147037B (en) * 2018-08-16 2020-09-18 Oppo广东移动通信有限公司 Special effect processing method and device based on three-dimensional model and electronic equipment

Also Published As

Publication number Publication date
CN109147037A (en) 2019-01-04
WO2020034698A1 (en) 2020-02-20

Similar Documents

Publication Publication Date Title
CN109147037B (en) Special effect processing method and device based on three-dimensional model and electronic equipment
CN109118569B (en) Rendering method and device based on three-dimensional model
KR102362544B1 (en) Method and apparatus for image processing, and computer readable storage medium
CN108765272B (en) Image processing method and device, electronic equipment and readable storage medium
US8150205B2 (en) Image processing apparatus, image processing method, program, and data configuration
CN109102559B (en) Three-dimensional model processing method and device
CN109191584B (en) Three-dimensional model processing method and device, electronic equipment and readable storage medium
JP5463866B2 (en) Image processing apparatus, image processing method, and program
US11069151B2 (en) Methods and devices for replacing expression, and computer readable storage media
CN111754415B (en) Face image processing method and device, image equipment and storage medium
US9135726B2 (en) Image generation apparatus, image generation method, and recording medium
CN108764180A (zh) Face recognition method and device, electronic equipment, and readable storage medium
CN108682050B (en) Three-dimensional model-based beautifying method and device
CN109272579B (en) Three-dimensional model-based makeup method and device, electronic equipment and storage medium
CN109191393B (en) Three-dimensional model-based beauty method
CN108876709A (zh) Face beautification method and device, electronic equipment, and readable storage medium
US10162997B2 (en) Electronic device, computer readable storage medium and face image display method
JP5949030B2 (en) Image generating apparatus, image generating method, and program
CN109242760B (en) Face image processing method and device and electronic equipment
CN113221847A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114926351A (en) Image processing method, electronic device, and computer storage medium
US8971636B2 (en) Image creating device, image creating method and recording medium
US9323981B2 (en) Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored
JP2017188787A (en) Imaging apparatus, image synthesizing method, and image synthesizing program
US11681397B2 (en) Position detection system, position detection apparatus, and position detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant