CN112562066B - Image reconstruction method and device and electronic equipment
- Publication number: CN112562066B (application CN202011520673.1A)
- Authority: CN (China)
- Prior art keywords: face, model, information, reconstruction, region
- Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Classifications
- G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general)
- G06T 15/506 — Illumination models (under G06T 15/00 — 3D image rendering; G06T 15/50 — Lighting effects)
- G06T 3/06 — Topological mapping of higher dimensional structures onto lower dimensional surfaces (under G06T 3/00 — Geometric image transformations in the plane of the image)
Abstract
The application discloses an image reconstruction method and apparatus and an electronic device, belonging to the field of image recognition. The image reconstruction method comprises: extracting face information from a first planar image; determining a three-dimensional face model based on the face information, and performing light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model; and mapping the face reconstruction model to generate a second planar image. The second planar image thus contains the light and shadow reconstruction effect of the face reconstruction model, so the finally obtained image exhibits an improved light and shadow effect.
Description
Technical Field
The application belongs to the field of image recognition, and particularly relates to an image reconstruction method and device and electronic equipment.
Background
With the continuous iteration and development of imaging technology in recent years, ever better results have been achieved in capturing and beautifying real scenes, and in fields such as design, film and games, virtual light and shadow reconstruction can be applied to 2D planar images to obtain more realistic effects.
In the process of implementing the present application, the inventors found at least the following problem in the prior art: most existing methods for reconstructing the light and shadow of a 2D face image either perform whole-face light and shadow fusion directly on the two-dimensional image or adjust it with a face light and shadow template; when the face's light and shadow are complex or the light structure is intricate, such methods break down and the overall effect suffers.
Summary of the application
The embodiment of the application aims to provide an image reconstruction method and device and electronic equipment, which can solve the problem of poor effect of performing light and shadow reconstruction on a two-dimensional face image.
In order to solve the above technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image reconstruction method, including:
extracting face information from a first planar image;
determining a three-dimensional face model based on the face information, and performing light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model;
and mapping the face reconstruction model to generate a second planar image.
In a second aspect, an embodiment of the present application provides an image reconstruction apparatus, including:
an information extraction module, configured to extract face information from a first planar image;
a model generation module, configured to determine a three-dimensional face model based on the face information, and perform light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model;
and an image reconstruction module, configured to map the face reconstruction model to generate a second planar image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the image reconstruction method as described in the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the image reconstruction method as described in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image reconstruction method according to the first aspect.
According to the image reconstruction method and apparatus and the electronic device provided by the embodiments of the present application, face information is extracted from the first planar image, a three-dimensional face model matching the face information is determined, light and shadow reconstruction is performed on the three-dimensional face model to obtain a face reconstruction model, and the face reconstruction model is mapped to generate the second planar image. The second planar image therefore contains the light and shadow reconstruction effect of the face reconstruction model and exhibits a good light and shadow effect.
Drawings
Fig. 1 is a first schematic flow chart of an image reconstruction method according to an embodiment of the present application;
Fig. 2 is a second schematic flow chart of an image reconstruction method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an image reconstruction apparatus according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. Evidently, the described embodiments are some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the specification and the claims are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, in the specification and the claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image reconstruction method and apparatus, the electronic device and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings by means of specific embodiments and application scenarios thereof.
An embodiment of the present application discloses an image reconstruction method which, referring to Fig. 1, comprises the following steps:
101. Face information in the first planar image is extracted.
In this embodiment, the first planar image may be a face image, or a two-dimensional planar image containing a face.
Key point information and region brightness information of the face image are extracted, providing the information needed for the image reconstruction in the subsequent steps.
It should be explained that the key point information refers to key feature points of the face, such as the positions of the eye corners, the position of the nose, and the contour points of the face. From the key point information, the key regions of the face can be located, including the eyebrows, eyes, nose, mouth, facial contour and so on. By extracting the key point information of the face, the contours and positions of the facial features and the face shape can be preliminarily judged, for example whether the face is square or round, and what the positions and proportions of the facial features are.
In addition, the region brightness information refers to the brightness of each region of the face; the brightness of a region may be taken as the average brightness of all the pixels in that region. (A minimal extraction sketch is given after the region list below.)
The face comprises a plurality of regions; the following regions are listed schematically:
T-shaped region: the area formed by the forehead and the bridge of the nose; this is where the skin secretes the most oil.
Nose region: specifically comprising the nose bridge, the nose wings and the nose tip; the nose bridge runs down from the forehead and is the highest part of the face; the nose wings are the two sides of the nose and form its widest part; the nose tip is the area at the bottom centre of the nose bridge.
Cheekbone region: the relatively prominent region below the eyes.
Eye region: including the upper eye sockets, the eye corners, the eyeballs and so on; the upper eye socket is the area where the eyelid covers the eye when the eye is closed; the eye corner is the outermost area where the upper and lower eyelids meet.
Mouth region: the area formed around the junction of the upper and lower lips, in which the lip bead is the most prominent point in the middle of the upper lip and the lip peaks are the curves running from the centre of the upper lip to either side.
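Purely for illustration (such a sketch is not part of the patent disclosure), region brightness can be computed as the mean luminance of the pixels in each region. The function name, the boolean-mask representation of the regions and the toy values below are assumptions; in practice the masks would be derived from the key point information:

```python
import numpy as np

def region_brightness(gray_image: np.ndarray, region_masks: dict) -> dict:
    """Mean luminance (0-255) of each named face region.

    gray_image   -- HxW uint8 grayscale face image.
    region_masks -- region name -> HxW boolean mask of that region's pixels.
    """
    return {
        name: float(gray_image[mask].mean())
        for name, mask in region_masks.items()
        if mask.any()  # skip empty masks to avoid averaging zero pixels
    }

# Toy 4x4 example: a bright "T-zone" patch and a darker "cheek" patch.
img = np.array([[200, 200, 60, 60],
                [200, 200, 60, 60],
                [120, 120, 120, 120],
                [120, 120, 120, 120]], dtype=np.uint8)
masks = {"t_zone": np.zeros((4, 4), bool), "cheek": np.zeros((4, 4), bool)}
masks["t_zone"][:2, :2] = True
masks["cheek"][:2, 2:] = True
print(region_brightness(img, masks))  # {'t_zone': 200.0, 'cheek': 60.0}
```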
102. A three-dimensional face model is determined based on the face information, and light and shadow reconstruction is performed on the three-dimensional face model to generate a face reconstruction model.
Specifically, the three-dimensional face model is generated as follows: region depth information of the face is determined based on the region brightness information; the key point information and the region depth information are then mapped onto an initial face model to generate the three-dimensional face model.
The initial face model may be a model built into the system whose parameters, such as the region depth information, take initial values; in the subsequent steps the initial model is adjusted based on the received key point information and region depth information to generate the adjusted three-dimensional face model.
The region depth information comprises the three-dimensional information of each region of the face and represents the degree of protrusion or depression of each region, for example the depression at the eye sockets and the protrusion at the nose bridge.
For each key point of the face, its position in the first planar image is acquired and its corresponding position in the initial face model is determined; the relative position of the first planar image and the initial face model is established from these key point positions, and the face information of the 2D first planar image is mapped onto the 3D face model based on the region depth information, as sketched below.
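The following is a rough, non-authoritative illustration of this mapping step; the bare vertex-array representation of the initial model, the skipped pose estimation and all names below are simplifying assumptions rather than the patented procedure:

```python
import numpy as np

def fit_initial_model(init_vertices: np.ndarray,
                      vertex_region: np.ndarray,
                      region_depth: dict,
                      keypoint_idx: np.ndarray,
                      keypoints_2d: np.ndarray) -> np.ndarray:
    """Deform a built-in initial face model toward the extracted face information.

    init_vertices -- Nx3 vertices (x, y, z) of the initial model.
    vertex_region -- length-N region id for every vertex.
    region_depth  -- region id -> depth offset derived from region brightness.
    keypoint_idx  -- indices of the model vertices that correspond to key points.
    keypoints_2d  -- Kx2 image positions of those key points.
    """
    verts = init_vertices.copy()
    # Snap the model's key-point vertices to the image key points in x/y;
    # a full pipeline would first solve for head pose, which is skipped here.
    verts[keypoint_idx, :2] = keypoints_2d
    # Push every vertex along z by its region's depth offset.
    verts[:, 2] += np.array([region_depth.get(r, 0.0) for r in vertex_region])
    return verts
```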
After the three-dimensional face model is obtained, light and shadow reconstruction is performed on it to obtain a more lifelike display effect.
Light and shadow reconstruction means deepening or lightening the light and shadow of the original image, changing the light direction or the sense of depth of the objects in it, so that the display effect is improved. Various light sources can be used for light and shadow reconstruction, such as direct light, oblique light, scattered light and spot light. Many factors influence the shadows, such as spatial relationships, object shape, and the angle and intensity of the light; all of these must be considered during light and shadow reconstruction.
Specifically, the reconstruction comprises the following steps: determining a target light source for performing light and shadow reconstruction on the three-dimensional face model; and attaching the target light source to the three-dimensional face model and performing light-source position fusion based on the region depth information of the face to generate the face reconstruction model.
For example, to achieve a realistic visual effect, take the nose region as an example with stereoscopic light as the target light source.
Specifically, the stereoscopic light comprises a main light and an auxiliary (fill) light:
If the main light is placed in front of the face (centre or centre-high position), the lighting effect is good: the near-side cheek lines are well rendered, the three-dimensional sense and expressiveness are strong, and the whole picture is stable. The auxiliary light should be placed between the camera and the main light, or at the camera position; it must not be placed on the opposite side of the main light, so as to avoid an unnatural tonal transition on the face.
If the main light is placed at the far-side middle position or the far-side cross-light position relative to the face, the three-dimensional effect is strong, the style is pronounced, the near-side facial lines are strongly expressed, and the contour is distinct; the auxiliary light is then placed in front of the face or between the camera and the main light.
After the stereoscopic light is attached to the three-dimensional face model, if the main light lies to the left of the model, the left side of the nose bridge becomes a highlight while the right side carries a distinct shadow. The auxiliary light must then be adjusted according to the depth information of the nose region to balance the brightness of the regions on either side of the nose bridge, thereby fusing the light-source positions and achieving a lifelike visual effect, as sketched below.
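A minimal numerical sketch of this balancing step (illustrative only, not the patented implementation): treat the nose region as a depth map, shade it with the main light under a Lambertian model, and choose a uniform fill term that lifts the shadowed side toward a target fraction of the lit side's brightness. The Lambertian model, the left/right split and the target_ratio value are all assumptions:

```python
import numpy as np

def shade_with_fill(depth: np.ndarray, main_dir, main_intensity=1.0,
                    target_ratio=0.5) -> np.ndarray:
    """Lambertian shading of a depth map plus an automatically chosen fill light."""
    # Surface normals from depth gradients (z axis points toward the camera).
    dzdy, dzdx = np.gradient(depth.astype(float))
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    light = np.asarray(main_dir, dtype=float)
    light /= np.linalg.norm(light)
    shading = main_intensity * np.clip(normals @ light, 0.0, None)

    # Compare the two halves of the region and lift the darker one.
    half = shading.shape[1] // 2
    left, right = shading[:, :half].mean(), shading[:, half:].mean()
    bright, dark = max(left, right), min(left, right)
    fill = max(0.0, target_ratio * bright - dark)  # uniform fill-light term
    return np.clip(shading + fill, 0.0, 1.0)

# A ridge down the middle stands in for the nose bridge; the main light
# comes from the left, so the left flank is bright and the right is dark.
nose = np.zeros((5, 9))
nose[:, 4] = 1.0
lit = shade_with_fill(nose, main_dir=(-1.0, 0.0, 1.0))
```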
103. The face reconstruction model is mapped to generate a second planar image.
Specifically, step 103 includes: mapping the face reconstruction model according to the key point information to generate the second planar image.
Mapping the face reconstruction model to generate the second planar image converts the three-dimensional model into a two-dimensional image. Because the face reconstruction model has undergone light and shadow reconstruction, the generated second planar image contains the reconstructed light and shadow effect on the face, so the overall face effect of the second planar image is more lifelike.
According to the image reconstruction method of this embodiment, face information is extracted from the first planar image, a three-dimensional face model matching the face information is determined, light and shadow reconstruction is performed on the model to obtain a face reconstruction model, and the face reconstruction model is mapped to generate the second planar image. The second planar image therefore contains the light and shadow reconstruction effect of the face reconstruction model and exhibits a good light and shadow effect.
The image reconstruction method of the embodiments of the present application can be applied to terminals such as mobile phones, tablet computers, laptop computers, personal digital assistants, mobile Internet devices and wearable devices. For further explanation, application to a mobile phone is taken as an example below.
An embodiment of the present application discloses an image reconstruction method which, referring to Fig. 2, comprises the following steps:
201. Key point information and region brightness information are extracted from the first planar image.
In this embodiment, the first planar image may be a face image, or a two-dimensional planar image containing a face.
For the key point information and the region brightness information, reference may be made to the explanation in the foregoing embodiment; details are not repeated here.
202. Region depth information of the face is determined based on the region brightness information.
The regions of the face include the cheek region, the T-shaped region, the facial-feature (five sense organs) regions, and so on.
Specifically, step 202 includes the following steps 2021 and 2022:
2021. The brightness difference between regions is determined based on the region brightness information of the plurality of face regions.
Brightness values range from 0 to 255. From the brightness differences, the degree of concavity or convexity of each face region in three-dimensional space can be determined: regions with lower brightness values are facial depressions, such as the eye sockets, while regions with higher brightness values are facial protrusions, such as the nose bridge and the cheekbones.
2022. The highlight intensity and the shadow intensity of the face are determined, and the region depth information of each face region is determined based on the inter-region brightness differences together with the highlight intensity and the shadow intensity.
Through step 202, the region depth information of the face is determined; it is used for the mapping in the subsequent step to obtain the three-dimensional face model. A minimal sketch of such a brightness-to-depth mapping follows.
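Purely as an illustration (not part of the disclosure; the linear normalisation and the sample numbers are assumptions), the brightness of each region can be normalised between the face's shadow and highlight intensities to give a relative depth per region:

```python
import numpy as np

def region_depths(brightness: dict, highlight: float, shadow: float) -> dict:
    """Map per-region brightness (0-255) to a relative depth in [0, 1].

    Bright regions (nose bridge, cheekbones) come out near 1.0 (protruding),
    dark regions (eye sockets) near 0.0 (recessed).
    """
    span = max(highlight - shadow, 1e-6)  # guard against a flat, contrastless crop
    return {name: float(np.clip((b - shadow) / span, 0.0, 1.0))
            for name, b in brightness.items()}

print(region_depths({"nose_bridge": 230, "eye_socket": 70, "cheek": 150},
                    highlight=240, shadow=60))
# {'nose_bridge': 0.944..., 'eye_socket': 0.056..., 'cheek': 0.5}
```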
203. The key point information and the region depth information are mapped onto the initial face model to generate the three-dimensional face model.
The initial face model is a model built into the system whose parameters, such as the key point information and the region depth information, take initial values. These parameters can be adjusted so as to generate the three-dimensional face model corresponding to the face image.
204. A target light source for performing light and shadow reconstruction on the three-dimensional face model is determined.
Specifically, step 204 includes: determining the brightness ratio of the cheek region to the T-shaped region based on the region brightness information; when the brightness ratio is less than or equal to a threshold, determining the target light source to be a stereoscopic light source, so as to enhance the three-dimensional sense of the face; and when the brightness ratio is greater than the threshold, determining the target light source to be a frontal light source, so as to weaken the three-dimensional sense of the face and soften the light and shadow.
The stereoscopic light source has been described in the foregoing embodiment and is not detailed again here.
With a frontal light source, the face directly faces the lens; the facial parts appear symmetrical but lack a sense of depth.
The threshold can be set according to actual requirements.
In this embodiment, the brightnesses of the cheek region and the T-shaped region are chosen for this judgment because the two regions generally serve, respectively, as a brighter region and a darker region of a face image; their brightness ratio therefore reflects the light-dark structure of the whole face image fairly well and provides a direction of adjustment for the light and shadow reconstruction, as in the sketch below.
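The selection rule can be sketched directly; the sketch below is illustrative only, and the default threshold of 0.8 is an invented placeholder, since the patent leaves the threshold to actual requirements:

```python
def choose_light_source(cheek_brightness: float, t_zone_brightness: float,
                        threshold: float = 0.8) -> str:
    """Pick the target light source from the cheek / T-zone brightness ratio.

    Ratio <= threshold -> stereoscopic light to enhance the 3D sense;
    ratio >  threshold -> frontal light to soften the light and shadow.
    """
    ratio = cheek_brightness / max(t_zone_brightness, 1e-6)
    return "stereoscopic" if ratio <= threshold else "frontal"

print(choose_light_source(120, 200))  # ratio 0.60 -> 'stereoscopic'
print(choose_light_source(190, 200))  # ratio 0.95 -> 'frontal'
```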
205. The target light source is attached to the three-dimensional face model, and light-source position fusion is performed based on the region depth information of the face to generate the face reconstruction model.
Furthermore, in the process of generating the face reconstruction model, additional beautification processing, such as skin smoothing and face reshaping, can be applied to the three-dimensional face model.
In further application examples, other virtual objects may be added to the picture and rendered based on the reconstruction to produce an effect indistinguishable from reality. For example, when the face is lit by stereoscopic light at 45 degrees on the left, the left cheek becomes a highlight region and the right cheek carries a distinct shadow; the auxiliary light then needs to be adjusted according to the depth information of the cheek regions to balance the brightness of the left and right cheeks, thereby fusing the light-source positions and achieving a realistic visual effect.
206. The face reconstruction model is mapped according to the key point information to generate the second planar image.
Specifically, based on the position of each face key point in the first planar image and its position in the face reconstruction model, the relative position of the first planar image and the face reconstruction model is determined, and the 3D face reconstruction model is mapped back into the 2D plane of the first planar image (a minimal projection sketch follows below).
Because the face reconstruction model has undergone light and shadow reconstruction, the second planar image generated by this mapping contains the reconstructed light and shadow effect on the face, so the overall face effect of the second planar image is more lifelike.
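For illustration (not the patented procedure; the pinhole camera model, the focal length and the nearest-vertex splatting below are simplifying assumptions), mapping the lit 3D model back to a 2D image amounts to projecting each vertex onto the image plane and carrying its reconstructed shading with it:

```python
import numpy as np

def project_to_image(vertices: np.ndarray, colors: np.ndarray,
                     image_shape: tuple, focal: float = 500.0,
                     z_offset: float = 5.0) -> np.ndarray:
    """Project lit 3D model vertices back onto a 2D image plane.

    vertices -- Nx3 (x, y, z) positions of the face reconstruction model.
    colors   -- Nx3 uint8 per-vertex colours after light and shadow reconstruction.
    """
    h, w = image_shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    z = np.maximum(vertices[:, 2] + z_offset, 1e-6)  # keep vertices in front of the camera
    u = (focal * vertices[:, 0] / z + w / 2).astype(int)
    v = (focal * vertices[:, 1] / z + h / 2).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Nearest-vertex splat; a real renderer would rasterise triangles with a z-buffer.
    out[v[ok], u[ok]] = colors[ok]
    return out
```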
According to the image reconstruction method of this embodiment, face information is extracted from the first planar image, a three-dimensional face model matching the face information is determined, light and shadow reconstruction is performed on the model to obtain a face reconstruction model, and the face reconstruction model is mapped to generate the second planar image; the second planar image therefore contains the light and shadow reconstruction effect of the face reconstruction model and exhibits a good light and shadow effect.
When the method is actually applied in a mobile phone terminal, it can also improve the playability of the phone camera and the photographing experience.
In other usage scenarios, the method of this embodiment may also be used to reconstruct photographs of a human body or of other target objects.
It should be noted that the execution subject of the image reconstruction method provided in the embodiments of the present application may be an image reconstruction apparatus, or a control module in the image reconstruction apparatus for executing the image reconstruction method. In the embodiments of the present application, the image reconstruction method is described by taking an image reconstruction apparatus executing the method as an example.
An embodiment of the present application provides an image reconstruction apparatus which, referring to Fig. 3, includes:
an information extraction module 301, configured to extract face information from a first planar image;
a model generating module 302, configured to determine a three-dimensional face model based on the face information, and perform light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model;
and an image reconstruction module 303, configured to map the face reconstruction model to generate a second planar image.
Optionally, the information extraction module 301 is specifically configured to extract key point information and region brightness information from the first planar image.
Optionally, the model generating module 302 includes:
an information determining unit, configured to determine region depth information of the face based on the region brightness information, the regions of the face comprising a cheek region, a T-shaped region and facial-feature (five sense organs) regions;
and a first mapping unit, configured to map the key point information and the region depth information onto the initial face model to generate the three-dimensional face model.
Optionally, the information determining unit is specifically configured to:
determining the brightness difference between regions based on the region brightness information of the plurality of face regions;
and determining the highlight intensity and the shadow intensity of the face, and determining the region depth information of each face region based on the inter-region brightness differences, the highlight intensity and the shadow intensity.
Optionally, the model generating module 302 includes:
a light source determining unit, configured to determine a target light source for performing light and shadow reconstruction on the three-dimensional face model;
and a fusion generation unit, configured to attach the target light source to the three-dimensional face model and perform light-source position fusion based on the region depth information of the face to generate the face reconstruction model.
Optionally, the light source determining unit is specifically configured to:
determining the brightness ratio of the cheek region to the T-shaped region based on the region brightness information;
determining the target light source to be a stereoscopic light source when the brightness ratio is less than or equal to a threshold;
and determining the target light source to be a frontal light source when the brightness ratio is greater than the threshold.
Optionally, the image reconstruction module 303 is specifically configured to map the face reconstruction model according to the key point information to generate the second planar image.
The image reconstruction apparatus in the embodiments of the present application may be a device, or a component, integrated circuit or chip in a terminal, and may be a mobile or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine; the embodiments of the present application are not specifically limited in this respect.
The image reconstruction apparatus in the embodiments of the present application may be an apparatus having an operating system, which may be Android, iOS or another possible operating system; the embodiments of the present application are not specifically limited in this respect either.
The image reconstruction apparatus provided in the embodiments of the present application can implement each process of the method embodiments of Figs. 1 and 2; to avoid repetition, details are not repeated here.
According to the image reconstruction apparatus provided in the embodiments of the present application, face information is extracted from the first planar image, a three-dimensional face model matching the face information is determined, light and shadow reconstruction is performed on the model to obtain a face reconstruction model, and the face reconstruction model is mapped to generate the second planar image, so that the second planar image has a good light and shadow reconstruction effect.
Optionally, an embodiment of the present application further provides an electronic device, including a processor 410, a memory 409, and a program or instructions stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the program or instructions implement the processes of the image reconstruction method embodiments described above and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiments of the present application includes both the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 410 through a power management system, which manages charging, discharging and power consumption. The structure shown in Fig. 4 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The processor 410 is configured to: extract face information from the first planar image;
determine a three-dimensional face model based on the face information, and perform light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model;
and map the face reconstruction model to generate a second planar image.
According to the electronic device provided by the embodiments of the present application, face information is extracted from the first planar image, a three-dimensional face model matching the face information is determined, light and shadow reconstruction is performed on the model to obtain a face reconstruction model, and the face reconstruction model is mapped to generate the second planar image, so that the second planar image has a good light and shadow reconstruction effect.
Optionally, the processor 410 is further configured to extract key point information and region brightness information from the first planar image.
Optionally, the processor 410 is further configured to: determine region depth information of the face based on the region brightness information, the regions of the face comprising a cheek region, a T-shaped region and facial-feature regions; and map the key point information and the region depth information onto an initial face model to generate the three-dimensional face model.
Optionally, the processor 410 is further configured to: determine the brightness difference between regions based on the region brightness information of the plurality of face regions; and determine the highlight intensity and the shadow intensity of the face, and determine the region depth information of each face region based on the inter-region brightness differences, the highlight intensity and the shadow intensity.
Optionally, the processor 410 is further configured to: determine a target light source for performing light and shadow reconstruction on the three-dimensional face model; and attach the target light source to the three-dimensional face model and perform light-source position fusion based on the region depth information of the face to generate the face reconstruction model.
Optionally, the processor 410 is further configured to: determine the brightness ratio of the cheek region to the T-shaped region based on the region brightness information; determine the target light source to be a stereoscopic light source when the brightness ratio is less than or equal to a threshold; and determine the target light source to be a frontal light source when the brightness ratio is greater than the threshold.
Optionally, the processor 410 is further configured to map the face reconstruction model according to the key point information to generate the second planar image.
As noted above, the brightnesses of the cheek region and the T-shaped region are chosen for this judgment because the two regions generally serve, respectively, as a brighter region and a darker region of a face image, so their brightness ratio reflects the light-dark structure of the whole face image fairly well.
The face reconstruction model is then mapped back onto the face image, so that the resulting reconstructed photograph carries a 3D light-source effect.
It should be appreciated that, in the embodiments of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the GPU 4041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071, also referred to as a touch screen, which may comprise a touch detection device and a touch controller, and other input devices 4072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and on/off keys), a trackball, a mouse and a joystick; these are not described in detail here. The memory 409 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 410 may integrate an application processor, which mainly handles the operating system, user interfaces and applications, and a modem processor, which mainly handles wireless communication; it will be appreciated that the modem processor may alternatively not be integrated into the processor 410.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the processes of the image reconstruction method embodiments described above and achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes computer-readable storage media such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk and an optical disk.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instructions to implement the processes of the image reconstruction method embodiments described above and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-on-chip, a chip system, a system-on-a-chip or the like.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus comprising that element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and steps may be added, omitted or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific embodiments described above, which are merely illustrative rather than restrictive. In light of the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.
Claims (13)
1. An image reconstruction method, comprising:
extracting face information from a first planar image;
determining a three-dimensional face model based on the face information, and performing light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model;
and mapping the face reconstruction model to generate a second planar image;
wherein the performing light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model comprises:
determining a target light source for performing light and shadow reconstruction on the three-dimensional face model;
and attaching the target light source to the three-dimensional face model, and performing light-source position fusion based on region depth information of the face to generate the face reconstruction model, wherein the region depth information is determined from the face information.
2. The image reconstruction method according to claim 1, wherein the extracting face information from the first planar image comprises: extracting key point information and region brightness information from the first planar image.
3. The image reconstruction method according to claim 2, wherein the determining a three-dimensional face model based on the face information comprises:
determining region depth information of the face based on the region brightness information, the regions of the face comprising a cheek region, a T-shaped region and facial-feature (five sense organs) regions;
and mapping the key point information and the region depth information onto an initial face model to generate the three-dimensional face model.
4. The image reconstruction method according to claim 3, wherein the determining region depth information of the face based on the region brightness information comprises:
determining the brightness difference between regions based on the region brightness information of the plurality of face regions;
and determining the highlight intensity and the shadow intensity of the face, and determining the region depth information of each face region based on the inter-region brightness differences, the highlight intensity and the shadow intensity.
5. The image reconstruction method according to claim 1, wherein the determining a target light source for performing light and shadow reconstruction on the three-dimensional face model comprises:
determining the brightness ratio of the cheek region to the T-shaped region based on the region brightness information;
determining the target light source to be a stereoscopic light source when the brightness ratio is less than or equal to a threshold;
and determining the target light source to be a frontal light source when the brightness ratio is greater than the threshold.
6. The image reconstruction method according to claim 2, wherein the mapping the face reconstruction model to generate a second planar image comprises:
mapping the face reconstruction model according to the key point information to generate the second planar image.
7. An image reconstruction apparatus, comprising:
an information extraction module, configured to extract face information from a first planar image;
a model generation module, configured to determine a three-dimensional face model based on the face information, and perform light and shadow reconstruction on the three-dimensional face model to generate a face reconstruction model;
and an image reconstruction module, configured to map the face reconstruction model to generate a second planar image;
wherein the model generation module comprises:
a light source determining unit, configured to determine a target light source for performing light and shadow reconstruction on the three-dimensional face model;
and a fusion generation unit, configured to attach the target light source to the three-dimensional face model and perform light-source position fusion based on region depth information of the face to generate the face reconstruction model.
8. The image reconstruction apparatus according to claim 7, wherein the information extraction module is specifically configured to extract key point information and region brightness information from the first planar image.
9. The image reconstruction apparatus according to claim 8, wherein the model generation module comprises:
an information determining unit, configured to determine region depth information of the face based on the region brightness information, the regions of the face comprising a cheek region, a T-shaped region and facial-feature (five sense organs) regions;
and a first mapping unit, configured to map the key point information and the region depth information onto an initial face model to generate the three-dimensional face model.
10. The image reconstruction apparatus according to claim 9, wherein the information determining unit is specifically configured to:
determine the brightness difference between regions based on the region brightness information of the plurality of face regions;
and determine the highlight intensity and the shadow intensity of the face, and determine the region depth information of each face region based on the inter-region brightness differences, the highlight intensity and the shadow intensity.
11. The image reconstruction apparatus according to claim 7, wherein the light source determining unit is specifically configured to:
determine the brightness ratio of the cheek region to the T-shaped region based on the region brightness information;
determine the target light source to be a stereoscopic light source when the brightness ratio is less than or equal to a threshold;
and determine the target light source to be a frontal light source when the brightness ratio is greater than the threshold.
12. The image reconstruction apparatus according to claim 8, wherein the image reconstruction module is specifically configured to map the face reconstruction model according to the key point information to generate the second planar image.
13. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image reconstruction method according to any one of claims 1 to 6.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011520673.1A | 2020-12-21 | 2020-12-21 | Image reconstruction method and device and electronic equipment |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112562066A | 2021-03-26 |
| CN112562066B | 2024-03-22 |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115115781A | 2022-07-01 | 2022-09-27 | 郑州航空工业管理学院 | Cloud-collaborative image processing method and system |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018114455A1 | 2016-12-20 | 2018-06-28 | Henkel Ag & Co. Kgaa | Method and device for computer-aided hair treatment consultation |
| CN108460398A | 2017-12-27 | 2018-08-28 | 达闼科技(北京)有限公司 | Image processing method, device, cloud processing equipment and computer program product |
| CN108876709A | 2018-05-31 | 2018-11-23 | Oppo广东移动通信有限公司 | Method for beautifying faces, device, electronic equipment and readable storage medium |
| CN109272579A | 2018-08-16 | 2019-01-25 | Oppo广东移动通信有限公司 | Makeup method, apparatus, electronic equipment and storage medium based on three-dimensional model |

Family Cites Families (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030202120A1 | 2002-04-05 | 2003-10-30 | Mack Newton Eliot | Virtual lighting system |
| CN108765537A | 2018-06-04 | 2018-11-06 | 北京旷视科技有限公司 | Image processing method and apparatus, electronic device and computer-readable medium |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |