CN109447931B - Image processing method and device - Google Patents


Info

Publication number
CN109447931B
CN109447931B (application CN201811264701.0A)
Authority
CN
China
Prior art keywords
image
original image
original
face
fusion processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811264701.0A
Other languages
Chinese (zh)
Other versions
CN109447931A (en)
Inventor
严琼
陈梓琪
柯章翰
任思捷
曾进
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201811264701.0A priority Critical patent/CN109447931B/en
Publication of CN109447931A publication Critical patent/CN109447931A/en
Application granted
Publication of CN109447931B publication Critical patent/CN109447931B/en
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device. The method includes the following steps: acquiring an original image and performing three-dimensional image reconstruction on the original image to obtain a first image; rendering the first image to obtain a second image; and fusing the second image with the original image to obtain a target image. A corresponding device is also provided. The method and device can effectively improve working efficiency.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
In daily shooting scenes, the lighting is often unsatisfactory, so professional photographers must carry a large amount of manual lighting or fill-light equipment to cope with shooting in various scenes and at various angles.
However, this approach leads to low working efficiency in high-frequency shooting scenarios.
Disclosure of Invention
The application provides an image processing method and device, which can effectively improve working efficiency.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring an original image, and performing three-dimensional image reconstruction according to the original image to obtain a first image;
rendering the first image to obtain a second image;
and carrying out fusion processing on the second image and the original image to obtain a target image.
In the embodiment of the application, the target image is obtained by performing three-dimensional (3D) reconstruction, rendering, and fusion processing on the original image. Because the processing is automated, working efficiency can be effectively improved in high-frequency shooting scenarios.
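The three claimed steps (reconstruction, rendering, fusion) can be sketched as a minimal pipeline. All function bodies below are toy stand-ins, not the patent's actual algorithms: reconstruction is faked as a flat per-pixel normal map, rendering as Lambertian shading, and fusion as a per-pixel brightness scale.

```python
import numpy as np

def reconstruct_3d(original):
    """Hypothetical stand-in for 3D face reconstruction.

    Returns per-pixel surface normals as the 'first image'; here we
    fake a flat surface facing the camera."""
    h, w = original.shape[:2]
    normals = np.zeros((h, w, 3), dtype=np.float32)
    normals[..., 2] = 1.0  # every normal points toward the camera
    return normals

def render(first_image, light_dir=(0.0, 0.0, 1.0)):
    """Lambertian shading: per-pixel brightness = max(0, n . l)."""
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    return np.clip(first_image @ l, 0.0, 1.0)

def fuse(second_image, original):
    """Scale the original's brightness by the rendered light map."""
    return np.clip(original * second_image[..., None], 0, 255).astype(np.uint8)

original = np.full((4, 4, 3), 100, dtype=np.uint8)  # toy 'photo'
first = reconstruct_3d(original)   # step 1: 3D reconstruction
second = render(first)             # step 2: rendering
target = fuse(second, original)    # step 3: fusion
```

Because the faked normals face the camera and the default light shines straight on, the light map is 1.0 everywhere and the toy target equals the original; a real pipeline would produce spatially varying relighting.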
In a possible implementation manner, after the original image is obtained and before the three-dimensional image reconstruction is performed on it to obtain the first image, the method further includes:
reducing the resolution of the original image, and performing three-dimensional image reconstruction on the reduced-resolution original image to obtain the first image.
In the embodiment of the application, performing three-dimensional image reconstruction on the reduced-resolution original image can effectively increase the reconstruction speed and thus the efficiency of the image processing device.
In a possible implementation manner, the fusing the second image and the original image to obtain a target image includes:
performing fusion processing on the second image and the reduced-resolution original image to obtain an image after the fusion processing;
and adjusting the resolution of the image after the fusion processing based on the residual error between the original image and the original image with the reduced resolution to obtain the target image.
In the embodiment of the application, low-resolution images are used during 3D reconstruction, rendering, and fusion, which effectively increases the processing speed of the image processing device; adjusting the resolution of the fused image according to the residual to obtain the target image ensures that image detail is not lost.
In a possible implementation manner, the first image includes three-dimensional face structure information and face position information, and before the rendering processing is performed on the first image to obtain the second image, the method further includes:
acquiring light parameters of a light source;
the rendering the first image to obtain a second image includes:
and rendering the first image to obtain the second image according to the lighting parameters, the face three-dimensional structure information and the face position information of the first image.
In the embodiment of the application, the lighting parameters can be specified or selected by the user and/or obtained from a classical lighting scheme, which effectively increases the user's freedom of choice and thus user satisfaction. On the other hand, when the image processing device sets the lighting, the light can be fixed at a certain angle to simulate a professional portrait lighting scheme, such as butterfly lighting or Rembrandt lighting, further improving the rendering effect of the second image.
In a possible implementation manner, the fusing the second image and the original image to obtain a target image includes:
performing fusion processing on the original image in a color space based on the second image to obtain the target image;
or, the original image is subjected to fusion processing in a gray scale space based on the second image, so that the target image is obtained.
According to the embodiment of the application, the second image and the original image can be fused in different color spaces according to the application scenario; for example, a color target image can be output in the Red Green Blue (RGB) color space, and a black-and-white target image can be output in the gray-scale space, providing more possibilities for obtaining the target image.
In a possible implementation manner, the reconstructing a three-dimensional image according to the original image to obtain a first image includes:
carrying out face detection on the original image to obtain a face image;
and reconstructing a three-dimensional image of the face image to obtain the first image.
In the embodiment of the application, the original image may include an image containing a human face. If the original image contains at least one face, performing 3D reconstruction, rendering, and fusion on the face image can, on the one hand, solve the problem of uneven facial lighting, increase facial brightness, and highlight the face; on the other hand, it can emphasize the facial contour and the stereoscopic appearance of the face.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit configured to acquire an original image;
the image reconstruction unit is used for reconstructing a three-dimensional image according to the original image to obtain a first image;
the rendering processing unit is used for rendering the first image to obtain a second image;
and the fusion processing unit is used for carrying out fusion processing on the second image and the original image to obtain a target image.
In one possible implementation, the apparatus further includes:
a resolution reduction unit for reducing a resolution of the original image;
the image reconstruction unit is specifically configured to perform three-dimensional image reconstruction on the original image with the reduced resolution to obtain the first image.
In one possible implementation manner, the fusion processing unit includes:
a fusion processing subunit, configured to perform fusion processing on the second image and the original image with the reduced resolution to obtain an image after the fusion processing;
and the adjusting subunit is configured to adjust the resolution of the image after the fusion processing based on a residual between the original image and the original image with the reduced resolution, so as to obtain the target image.
In a possible implementation manner, the first image includes three-dimensional face structure information and face position information, and the obtaining unit is further configured to obtain a light parameter of the light source;
and the rendering processing unit is specifically configured to render the first image to obtain the second image according to the lighting parameter, the three-dimensional face structure information of the first image, and the face position information.
In a possible implementation manner, the fusion processing unit is specifically configured to perform fusion processing on the original image in a color space based on the second image to obtain the target image;
or, the fusion processing unit is specifically configured to perform fusion processing on the original image in a gray scale space based on the second image to obtain the target image.
In one possible implementation, the image reconstruction unit includes:
the face detection subunit is used for carrying out face detection on the original image to obtain a face image;
and the reconstruction subunit is used for reconstructing a three-dimensional image of the face image to obtain the first image.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory; the memory is configured to be coupled to the processor and to store readable instructions and data required by the electronic device; the processor is configured to enable the electronic device to perform the respective functions in the method of the first aspect described above.
In a possible implementation manner, the electronic device may further include an input/output interface, and the input/output interface is used for supporting communication between the electronic device and other devices.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein readable instructions, which, when executed on a computer, cause the computer to perform a method according to the above aspects.
In a fifth aspect, the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the above aspects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2a is a schematic diagram of an original image provided in an embodiment of the present application;
fig. 2b is a schematic diagram of a face image according to an embodiment of the present application;
FIG. 2c is a schematic diagram of a first image provided by an embodiment of the present application;
FIG. 2d is a schematic diagram of a second image provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another image processing apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a fusion processing unit according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image reconstruction unit according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, the present application will be further described in detail with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 shows an image processing method provided in an embodiment of the present application. The image processing method is applicable to an electronic device, which may be a terminal device, a server, or the like; the embodiments of the present application are not limited in this respect. The terminal device may include, for example, a mobile phone, a tablet computer, a desktop computer, or a personal digital assistant; the embodiments of the present application do not limit the specific form of the terminal device. As shown in fig. 1, the image processing method includes:
101. and acquiring an original image, and performing three-dimensional image reconstruction according to the original image to obtain a first image.
In general, the original image may be any two-dimensional (2D) image, such as a still image or a video frame; the embodiments of the present application do not limit the type of the original image.
In this embodiment, the original image may be an image captured by an electronic device, or the original image may also be an image obtained by the electronic device from another device, and the like, and this embodiment of the present application is not limited.
In this embodiment, after 3D image reconstruction is performed on the original image, a three-dimensional image of the original image, that is, the first image, is obtained. Optionally, the first image may be a 3D image in .obj or .fbx format; the embodiments of the present application do not limit the format of the first image.
Optionally, the method for reconstructing a three-dimensional image according to an original image to obtain a first image may include:
carrying out face detection on the original image to obtain a face image;
and carrying out three-dimensional image reconstruction on the face image to obtain a first image.
In the embodiment of the present application, the original image may be an image containing a human face. Optionally, face detection is performed on the original image to obtain a face image, and after 3D reconstruction of the face image, the three-dimensional structure information of the face and the face position information, that is, the orientation of the face, are obtained. In this embodiment, on the user interface of a terminal device implementing the image processing method, a rectangular face-detection box may be used to frame the face image, and the size of the box may be adjusted as needed. A circular or elliptical face-detection box may also be used; this embodiment does not specifically limit the shape of the box.
Optionally, the 3D reconstruction may be performed by a deep learning network, or based on original images taken from multiple angles, and so on; the embodiments of the present application are not limited in this respect. Reconstruction based on original images from multiple angles can be understood as shooting several images at different angles, for example shooting the same face from the front, the upper left, and the upper right, so that the first image is reconstructed from multiple original images containing the same face.
For example, fig. 2a is a schematic diagram of an original image provided in an embodiment of the present application. As shown in fig. 2a, performing face detection on the original image yields the face image shown in fig. 2b; performing 3D reconstruction on the face image then yields the first image shown in fig. 2c. After 3D reconstruction of the face image, the three-dimensional structure of the face and the orientation of the face are obtained, as shown in fig. 2c. It is understood that the images shown in fig. 2a to 2c are only examples and should not be construed as limiting the embodiments of the present application. Through the 3D face-image reconstruction of this embodiment, the two-dimensional original image is reconstructed into the three-dimensional first image, which can solve the problem of uneven facial lighting, make the subsequent lighting effect more three-dimensional and consistent with the shape of the face, and make the facial contour and stereoscopic appearance more prominent.
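As a sketch of the detection-box step above, cropping the region inside a rectangular box is plain array slicing. The box coordinates below are a hard-coded hypothetical detector result, not the output of a real face detector:

```python
import numpy as np

def crop_face(original, box):
    """Crop the region inside a rectangular face-detection box.

    `box` is (x, y, width, height); in practice it would come from a
    face detector, here it is a hard-coded hypothetical result."""
    x, y, w, h = box
    return original[y:y + h, x:x + w]

original = np.arange(10 * 10).reshape(10, 10)  # stand-in grayscale image
face_box = (2, 3, 5, 4)                        # hypothetical detector output
face_image = crop_face(original, face_box)
print(face_image.shape)  # (4, 5)
```

A circular or elliptical box, as the text allows, would instead apply a boolean mask over the same region rather than a rectangular slice.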
102. And rendering the first image to obtain a second image.
In the embodiment of the present application, the second image is the image obtained after rendering the first image. Optionally, the effect of the second image may depend on the angle involved: for example, different light angles during rendering yield different second images.
Optionally, the rendering processing is performed on the first image to obtain a second image, where the first image includes face three-dimensional structure information and face position information, and the rendering processing includes:
and rendering the first image by using a light source to obtain a second image.
That is to say, before the rendering processing is performed on the first image to obtain the second image, the method further includes:
acquiring light parameters of a light source;
rendering the first image by using a light source to obtain a second image, comprising:
and rendering the first image to obtain a second image according to the lighting parameters, the three-dimensional structure information of the face in the first image and the position information of the face.
The lighting parameter may include the brightness of the light source, and, in some embodiments, the lighting angle of the light source and the distance from the light source to the first image. The lighting parameters, that is, the parameters of the light source used when the electronic device renders the first image, are used to light the first image, thereby rendering the first image to obtain the second image.
Optionally, rendering the first image according to the lighting parameters, the three-dimensional structure information of the face, and the face position information can be understood intuitively as: placing the three-dimensional face structure and position information of the first image in a virtual 3D space, simulating the effect of the light source shining on the face, and thereby obtaining the second image.
Optionally, the electronic device may obtain different second images for different light-source angles. For example, if the light source illuminates the nose of the user in the first image, the nose in the second image will be brighter than other parts; if the light source illuminates the forehead, the forehead will be brighter than other parts. It is understood that the above is only an example and should not be interpreted as limiting the embodiments of the present application.
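The angle dependence described above can be illustrated with a simple Lambertian shading term (brightness proportional to max(0, n·l)). The surface normals and light direction below are made-up illustrative values, not the patent's rendering model:

```python
import numpy as np

def shade(normal, light_dir):
    """Lambertian term for one surface point: max(0, n . l)."""
    n = np.asarray(normal, float); n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float); l /= np.linalg.norm(l)
    return max(0.0, float(n @ l))

# Two hypothetical surface points: one side of the nose bridge tilts
# left, the other tilts right (camera looks along +z).
left_side = (-0.5, 0.0, 1.0)
right_side = (0.5, 0.0, 1.0)

# A light placed to the upper right brightens the right-tilting side
# and leaves the left-tilting side in relative shadow.
light = (1.0, -1.0, 1.0)
print(shade(right_side, light) > shade(left_side, light))  # True
```

Moving `light` to the upper left would reverse the comparison, which is exactly the angle-dependent effect the paragraph describes.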
Optionally, based on the electronic device implementing the image processing method, in some embodiments, an implementation manner of obtaining the light parameter of the light source is further provided, as follows:
and outputting a light setting interface of the light source at a User Interface (UI) of the electronic equipment, wherein the light setting interface is used for setting light parameters of the light source.
Alternatively, the light parameters are set according to a classical lighting scheme, such as butterfly lighting or Rembrandt lighting.
The electronic device can output the light-setting interface so that the user can adjust the lighting independently, and the electronic device can then configure the light source according to the user's setting instruction. After the light source is configured, the electronic device renders the first image according to the light source, the three-dimensional structure information of the face, and the face position information to obtain the second image. For example, after a light is set at the upper right of fig. 2c, the second image shown in fig. 2d is obtained through rendering.
It is understood that the light setting interface may be disposed at any interface of the electronic device, or may be disposed at any position on the interface of the electronic device, or the light setting interface may also be a floating interface, for example, before the electronic device performs step 102, the electronic device may automatically pop up the light setting interface, and after the electronic device performs step 102, the electronic device may also automatically turn off the light setting interface, and so on, which is not limited in the embodiments of the present application.
The electronic device can also be configured using a classical lighting scheme, such as butterfly lighting or Rembrandt lighting, to obtain the lighting parameters of the light source. Specifically, in butterfly lighting, the main light is placed on the lens axis, about 45 degrees above it, shining down at the subject's face; it casts a butterfly-shaped shadow below the nose that gives the face a sense of depth. By simulating butterfly lighting in the electronic device, the embodiment of the application can improve rendering efficiency. Similarly, the electronic device may simulate Rembrandt lighting; the embodiments of the present application are not limited in this respect.
It can be understood that in the method for obtaining the second image provided in the embodiment of the present application, the rendering performed by the electronic device may specifically include: placing the face of the first image (that is, the image obtained after 3D modeling of the original image) in a virtual 3D space, simulating the effect of light shining on the face according to a given light setting, and finally saving that effect, for example as a picture. The given light setting includes setting instructions input by the user, or lighting obtained from a butterfly or Rembrandt lighting scheme. Furthermore, the electronic device can render the first image without material information, which lowers the requirements on the 3D reconstruction: the complicated material information of the human face, such as the color and texture of the skin, does not need to be recovered.
It can be understood that in the embodiment of the present application, the electronic device may both output the light-setting interface of the light source on its UI and set the light parameters according to a classical lighting scheme, for example butterfly lighting or Rembrandt lighting.
103. And carrying out fusion processing on the second image and the original image to obtain a target image.
In the embodiment of the application, the target image is obtained after fusing the second image with the original image. The fusion extracts the effective information of the second image and the original image to the maximum extent, thereby compensating for uneven lighting, insufficient illumination, or under-exposed areas in the original image.
By implementing the embodiment of the application, the target image is obtained by performing three-dimensional (3D) reconstruction, rendering, and fusion processing on the original image. Because the processing is automated, working efficiency can be effectively improved in high-frequency shooting scenarios.
In some embodiments, the fusing the second image with the original image to obtain the target image includes:
performing fusion processing on the original image in a color space based on the second image to obtain a target image;
or, the original image is subjected to fusion processing in the gray scale space based on the second image to obtain the target image.
In the embodiment of the application, the second image and the original image may be fused in different color spaces according to the application scenario. For example, to output a color target image in the RGB color space, the original image may be converted from RGB into a color space that separates luminance from chrominance, such as YCbCr or Lab; the luminance and color information are then processed separately, the original image is fused based on the second image, and the result is converted back into the RGB color space to obtain the target image.
That is, the original image may be adjusted based on the brightness of the second image. For example, if the second image obtained after rendering has bright and dark regions, the original image may be adjusted accordingly, such as by applying brightness compensation. When processing an RGB color image, the color can be enhanced in proportion to the brightness compensation to keep the final effect consistent with reality. Alternatively, based on the second image, the original image may be brightened where the pixel value is greater than 128 and darkened where it is less than 128. For example, under a Rembrandt lighting setting, the second image has a shadow on one side of the nose bridge, where the pixel values are small (less than 128), so that side of the nose bridge in the original image is darkened, producing a stereoscopic effect.
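The brighten/darken rule around the pixel value 128 can be sketched as follows; the `strength` parameter is an assumption added for illustration, not part of the patent:

```python
import numpy as np

def fuse_luminance(original, second, strength=0.5):
    """Brighten the original where the render map is above 128 and
    darken it where the map is below 128.

    `strength` controls how strongly the render map is applied; it is
    an assumed parameter, not taken from the patent."""
    offset = (second.astype(np.float32) - 128.0) * strength
    return np.clip(original.astype(np.float32) + offset, 0, 255).astype(np.uint8)

original = np.full((2, 2), 100, dtype=np.uint8)
second = np.array([[200, 128],
                   [60, 128]], dtype=np.uint8)  # toy render map
target = fuse_luminance(original, second)
# 200 > 128 brightens, 60 < 128 darkens, 128 leaves the pixel unchanged
```

In a full color pipeline this adjustment would be applied only to the luminance channel (e.g. Y of YCbCr), with chrominance enhanced proportionally as the text describes.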
A black-and-white target image can also be output in the gray-scale space, providing more possibilities for obtaining the target image. When processing a gray-scale picture, the electronic device only needs to process brightness, since the picture contains only brightness information.
By implementing the fusion of the above embodiments, the second image and the original image can be fused in different color spaces according to the application scenario, for example outputting a color target image in the Red Green Blue (RGB) color space or a black-and-white target image in the gray-scale space, providing more possibilities for obtaining the target image.
Referring to fig. 3, fig. 3 is a schematic flowchart of another image processing method provided in an embodiment of the present application, and as shown in fig. 3, the method includes:
301. and acquiring an original image, and reducing the resolution of the original image.
In the embodiment of the present application, the original image may be an image captured by the electronic device (that is, an image output directly by its camera), or an image obtained by the electronic device from another apparatus. Regardless of where the original image was taken, it may also be an image processed by other means, or by the electronic device itself, without altering the content of the face; for example, an image obtained after beautification, or one whose background has been blurred. The original image may also be a face-related image as shown in fig. 2a.
In the embodiment of the present application, reducing the resolution of the original image can be done by shrinking the original image. For example, an 8 × 6 original image may be reduced to 4 × 3. It is to be understood that the above is only an example and is not to be construed as limiting; the embodiments of the present application do not limit the specific amount by which the resolution is reduced.
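A minimal sketch of the resolution reduction, using 2×2 block averaging as a stand-in for whatever downscaling method the device actually uses (the patent does not specify one):

```python
import numpy as np

def downscale_by_2(image):
    """Halve the resolution by averaging each 2x2 block of pixels
    (a simple stand-in for an unspecified downscaling method)."""
    h, w = image.shape
    blocks = image.reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# An 8 x 6 image reduced to 4 x 3, matching the example above
# (rows = height 6, columns = width 8).
original = np.arange(8 * 6, dtype=np.uint8).reshape(6, 8)
reduced = downscale_by_2(original)
print(reduced.shape)  # (3, 4)
```

Averaging rather than simply dropping pixels keeps the reduced image closer to the original, which matters later when the residual between the two is computed.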
302. And 3D reconstruction is carried out on the original image with the reduced resolution ratio to obtain a first image.
For the method of performing 3D reconstruction on the reduced-resolution original image, reference may be made to the method shown in fig. 1, and details are not described here.
303. Render the first image to obtain a second image.
304. Perform fusion processing on the reduced-resolution original image based on the second image to obtain an image after the fusion processing.
A specific implementation of performing the fusion processing on the reduced-resolution original image based on the second image may be as follows: convert the reduced-resolution original image from the RGB color space into a color space that separates brightness from color, such as YCbCr or Lab; process the brightness and color information based on the second image and perform the fusion; and convert the processed image back into the RGB color space to obtain the image after the fusion processing.
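The color-space handling described above might be sketched as follows, using the full-range BT.601 RGB/YCbCr conversion and a simple luminance blend. The conversion matrix, the blend weight, and the form of the rendered shading image are all assumptions; the patent only requires some brightness/color-separating space such as YCbCr or Lab.

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr matrix (a common choice; the patent
# does not mandate this particular conversion).
_RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                       [-0.168736, -0.331264,  0.5     ],
                       [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    ycbcr = rgb @ _RGB2YCBCR.T
    ycbcr[..., 1:] += 0.5          # center chroma at 0.5 for [0, 1] input
    return ycbcr

def ycbcr_to_rgb(ycbcr):
    shifted = ycbcr.copy()
    shifted[..., 1:] -= 0.5
    return shifted @ np.linalg.inv(_RGB2YCBCR).T

def fuse_luminance(original_rgb, shading, weight=0.5):
    """Blend a rendered shading image (H x W, in [0, 1]) into the
    luminance channel only, leaving chroma untouched, then convert the
    result back to RGB. `weight` is a hypothetical blending parameter."""
    ycbcr = rgb_to_ycbcr(original_rgb)
    ycbcr[..., 0] = (1.0 - weight) * ycbcr[..., 0] + weight * shading
    return np.clip(ycbcr_to_rgb(ycbcr), 0.0, 1.0)
```

Operating on the luminance channel alone is what lets the brightness be relit from the rendered second image while the original colors are preserved.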
It is understood that the specific implementation of the fusion process can also refer to the implementation shown in fig. 1, and is not described in detail here.
305. Adjust the resolution of the image after the fusion processing based on the residual between the original image and the reduced-resolution original image to obtain the target image.
In this embodiment of the application, the residual may be calculated as follows: denote the original image as I and the reduced-resolution original image as J; enlarge J back to the resolution of the original image and denote the result as K; the residual is then I - K, that is, the pixel values of the twice-scaled original image K are subtracted from the pixel values of the original image I.
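The residual calculation I - K and the resolution adjustment of step 305 could be sketched as follows. The subsampling scheme, the scaling factor, and the nearest-neighbour enlargement are assumptions; the patent does not specify them.

```python
import numpy as np

def upscale(image, factor=2):
    """Nearest-neighbour enlargement; the patent does not specify the
    interpolation used when scaling J back up, so this is an assumption."""
    return image.repeat(factor, axis=0).repeat(factor, axis=1)

def restore_resolution(original, fused_low, factor=2):
    """Step 305: with I the original and J = I subsampled, enlarge J back
    to I's size to get K, take the residual I - K, and add it to the
    enlarged fused image to recover full-resolution detail."""
    j = original[::factor, ::factor]     # J: reduced-resolution original
    k = upscale(j, factor)               # K: J scaled back to I's resolution
    residual = original - k              # detail lost by the downscaling
    return upscale(fused_low, factor) + residual
```

If the fusion leaves the low-resolution image unchanged, adding the residual reconstructs the original image exactly, which illustrates why no image detail is lost by processing at low resolution.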
In the embodiment of the application, low-resolution images are used in the 3D reconstruction, rendering and fusion processing, which can effectively improve the processing speed of the electronic device; the resolution of the image after the fusion processing is then adjusted according to the residual to obtain the target image, ensuring that image detail is not lost.
It will be appreciated that, for any implementation not described in detail in one of the embodiments shown in fig. 1 and fig. 3, reference may be made to the other embodiment. For example, for the specific implementation of steps 302 to 304, reference may be made to the method shown in fig. 1, and details are not described here.
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application, which may be used to execute the image processing methods shown in fig. 1 and fig. 3, and as shown in fig. 4, the image processing apparatus includes:
an acquisition unit 401 configured to acquire an original image;
an image reconstruction unit 402, configured to perform three-dimensional image reconstruction according to an original image to obtain a first image;
a rendering unit 403, configured to perform rendering processing on the first image to obtain a second image;
a fusion processing unit 404, configured to perform fusion processing on the second image and the original image to obtain a target image.
In the embodiment of the application, the target image is obtained by performing three-dimensional (3D) reconstruction, rendering and fusion processing on the original image. Because this processing is fully automatic, work efficiency can be effectively improved in high-frequency shooting scenarios.
Optionally, as shown in fig. 5, the image processing apparatus further includes:
a resolution reduction unit 405 for reducing the resolution of the original image;
the image reconstruction unit 402 is specifically configured to perform three-dimensional image reconstruction on the original image with the reduced resolution to obtain a first image.
Optionally, as shown in fig. 6, the fusion processing unit 404 includes:
a fusion processing subunit 4041, configured to perform fusion processing on the second image and the original image with the reduced resolution to obtain an image after the fusion processing;
an adjusting subunit 4042, configured to adjust the resolution of the image after the fusion processing based on the residual between the original image and the original image with the reduced resolution, so as to obtain the target image.
Optionally, the first image includes three-dimensional face structure information and face position information, and the obtaining unit 401 is further configured to obtain lighting parameters of a light source;
the rendering processing unit 403 is specifically configured to render the first image according to the lighting parameters and the three-dimensional face structure information and face position information of the first image, to obtain the second image.
Optionally, the fusion processing unit 404 is specifically configured to perform fusion processing on the original image in the color space based on the second image to obtain a target image, where the target image includes a color image;
or, the fusion processing unit 404 is specifically configured to perform fusion processing on the original image in the grayscale space based on the second image to obtain a target image, where the target image includes a black and white image.
Optionally, as shown in fig. 7, the image reconstruction unit 402 includes:
a face detection subunit 4021, configured to perform face detection on the original image to obtain a face image;
the reconstruction subunit 4022 is configured to perform three-dimensional image reconstruction on the face image to obtain a first image.
It is understood that, for the implementation of the respective units, reference may also be made to the corresponding descriptions of the method embodiments shown in fig. 1 and fig. 3.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device comprises a processor 801 and may further comprise an input interface 802, an output interface 803 and a memory 804. The input interface 802, the output interface 803, the memory 804, and the processor 801 are connected to each other via a bus.
The memory includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and is used for storing instructions and data.
The input interface is used for inputting data and/or signals, and the output interface is used for outputting data and/or signals. The output interface and the input interface may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more central processing units (CPUs); where the processor is a CPU, that CPU may be a single-core CPU or a multi-core CPU.
The memory is used to store program codes and data of the electronic device.
The processor is used for calling the program codes and data in the memory and executing the steps in the method embodiment.
For example, in one embodiment, the processor may be used to implement steps 101 to 103 described above.
As another example, in one embodiment, the processor may be further configured to implement the functions of the obtaining unit 401, the image reconstruction unit 402, the rendering processing unit 403, and the fusion processing unit 404. Optionally, the input/output interface may also be used to implement the function of the obtaining unit 401; this embodiment of the present application is not limited in this respect.
For specific implementation of the processor and/or the input/output interface, reference may be made to the description in the method embodiment, and details are not described here.
It will be appreciated that fig. 8 only shows a simplified design of the electronic device. In practical applications, the electronic device may also include other necessary components, including but not limited to any number of input/output interfaces, processors, controllers, memories, etc.; all electronic devices that can implement the embodiments of the present application are within the protection scope of the present application.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Claims (12)

1. An image processing method, comprising:
acquiring an original image, and performing three-dimensional image reconstruction according to the original image to obtain a first image;
rendering the first image to obtain a second image;
fusing the second image and the original image to obtain a target image;
the fusing the second image and the original image to obtain a target image, including:
fusing the second image and the original image with the reduced resolution to obtain an image after the fusion;
and adjusting the resolution of the image after the fusion processing based on the residual error between the original image and the original image with the reduced resolution to obtain the target image.
2. The method of claim 1, wherein after the obtaining the original image and before the performing the three-dimensional image reconstruction from the original image to obtain the first image, the method further comprises:
reducing the resolution of the original image;
the reconstructing the three-dimensional image according to the original image to obtain a first image comprises:
and performing three-dimensional image reconstruction on the original image with the reduced resolution to obtain the first image.
3. The method according to any one of claims 1 to 2, wherein the first image includes face three-dimensional structure information and face position information, and before the rendering processing is performed on the first image to obtain a second image, the method further includes:
acquiring light parameters of a light source;
the rendering the first image to obtain a second image includes:
and rendering the first image to obtain the second image according to the lighting parameters, the face three-dimensional structure information and the face position information of the first image.
4. The method according to any one of claims 1 to 3, wherein the fusing the second image and the original image to obtain a target image comprises:
performing fusion processing on the original image in a color space based on the second image to obtain the target image;
or, the original image is subjected to fusion processing in a gray scale space based on the second image, so that the target image is obtained.
5. The method according to any one of claims 1 to 4, wherein the performing three-dimensional image reconstruction from the original image to obtain a first image comprises:
carrying out face detection on the original image to obtain a face image;
and reconstructing a three-dimensional image of the face image to obtain the first image.
6. An image processing apparatus characterized by comprising:
an acquisition unit configured to acquire an original image;
the image reconstruction unit is used for reconstructing a three-dimensional image according to the original image to obtain a first image;
the rendering processing unit is used for rendering the first image to obtain a second image;
the fusion processing unit is used for carrying out fusion processing on the second image and the original image to obtain a target image;
the fusion processing unit includes:
a fusion processing subunit, configured to perform fusion processing on the second image and the original image with the reduced resolution to obtain an image after the fusion processing;
and the adjusting subunit is configured to adjust the resolution of the image after the fusion processing based on a residual between the original image and the original image with the reduced resolution, so as to obtain the target image.
7. The apparatus of claim 6, further comprising:
a resolution reduction unit for reducing a resolution of the original image;
the image reconstruction unit is specifically configured to perform three-dimensional image reconstruction on the original image with the reduced resolution to obtain the first image.
8. The apparatus according to any one of claims 6 to 7, wherein the first image comprises face three-dimensional structure information and face position information,
the acquisition unit is also used for acquiring the light parameters of the light source;
and the rendering processing unit is specifically configured to render the first image to obtain the second image according to the lighting parameter, the three-dimensional face structure information of the first image, and the face position information.
9. The apparatus according to any one of claims 6 to 8,
the fusion processing unit is specifically configured to perform fusion processing on the original image in a color space based on the second image to obtain the target image;
or, the fusion processing unit is specifically configured to perform fusion processing on the original image in a gray scale space based on the second image to obtain the target image.
10. The apparatus according to any one of claims 6 to 9, wherein the image reconstruction unit comprises:
the face detection subunit is used for carrying out face detection on the original image to obtain a face image;
and the reconstruction subunit is used for reconstructing a three-dimensional image of the face image to obtain the first image.
11. An electronic device comprising a processor and a memory, the processor being coupled to the memory, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1 to 5.
12. A computer-readable storage medium having computer-readable instructions stored therein, which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 5.
CN201811264701.0A 2018-10-26 2018-10-26 Image processing method and device Active CN109447931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811264701.0A CN109447931B (en) 2018-10-26 2018-10-26 Image processing method and device


Publications (2)

Publication Number Publication Date
CN109447931A CN109447931A (en) 2019-03-08
CN109447931B true CN109447931B (en) 2022-03-15

Family

ID=65548953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811264701.0A Active CN109447931B (en) 2018-10-26 2018-10-26 Image processing method and device

Country Status (1)

Country Link
CN (1) CN109447931B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110379358B (en) * 2019-07-04 2021-03-30 南京宇丰晔禾信息科技有限公司 LED display screen image playing and controlling method and device
CN110717867B (en) * 2019-09-04 2023-07-11 北京达佳互联信息技术有限公司 Image generation method and device, electronic equipment and storage medium
CN111556255B (en) * 2020-04-30 2021-10-01 华为技术有限公司 Image generation method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008539A (en) * 2014-05-29 2014-08-27 西安理工大学 Image super-resolution rebuilding method based on multiscale geometric analysis
CN105959705A (en) * 2016-05-10 2016-09-21 武汉大学 Video live broadcast method for wearable devices
CN107370952A (en) * 2017-08-09 2017-11-21 广东欧珀移动通信有限公司 Image capturing method and device
CN107506714A (en) * 2017-08-16 2017-12-22 成都品果科技有限公司 A kind of method of face image relighting

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8983156B2 (en) * 2012-11-23 2015-03-17 Icad, Inc. System and method for improving workflow efficiences in reading tomosynthesis medical image data


Also Published As

Publication number Publication date
CN109447931A (en) 2019-03-08

Similar Documents

Publication Publication Date Title
Zhang et al. Nerfactor: Neural factorization of shape and reflectance under an unknown illumination
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN110910486B (en) Indoor scene illumination estimation model, method and device, storage medium and rendering method
JP6905602B2 (en) Image lighting methods, devices, electronics and storage media
US10223827B2 (en) Relightable texture for use in rendering an image
WO2019101113A1 (en) Image fusion method and device, storage medium, and terminal
CN112215934A (en) Rendering method and device of game model, storage medium and electronic device
CN111127591B (en) Image hair dyeing processing method, device, terminal and storage medium
CN109447931B (en) Image processing method and device
CN111066026B (en) Techniques for providing virtual light adjustment to image data
CN107665482B (en) Video data real-time processing method and device for realizing double exposure and computing equipment
JP2016208098A (en) Image processor, image processing method, and program
JP2013168146A (en) Method, device and system for generating texture description of real object
CN113327316A (en) Image processing method, device, equipment and storage medium
CN111612878A (en) Method and device for making static photo into three-dimensional effect video
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
CN111447428A (en) Method and device for converting plane image into three-dimensional image, computer readable storage medium and equipment
EP3652617B1 (en) Mixed reality object rendering based on ambient light conditions
JP6896811B2 (en) Image processing equipment, image processing methods, and programs
CN114187398A (en) Processing method and device for human body illumination rendering based on normal map
EP4150560B1 (en) Single image 3d photography with soft-layering and depth-aware inpainting
CN109446945A (en) Threedimensional model treating method and apparatus, electronic equipment, computer readable storage medium
RU2757563C1 (en) Method for visualizing a 3d portrait of a person with altered lighting and a computing device for it
CN114972466A (en) Image processing method, image processing device, electronic equipment and readable storage medium
O'Malley A simple, effective system for automated capture of high dynamic range images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant