CN110580688B - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110580688B
Authority
CN
China
Prior art keywords
image, frequency, processed, pixel value, low
Prior art date
Legal status
Active
Application number
CN201910727362.3A
Other languages
Chinese (zh)
Other versions
CN110580688A (en)
Inventor
李雅子
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910727362.3A
Publication of CN110580688A
Application granted
Publication of CN110580688B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The image processing method includes: acquiring an image to be processed; processing the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; and fusing the image to be processed with the non-high-frequency image through a mask image to obtain a processed image, where the mask image includes the high-frequency image. Because the mask image includes the high-frequency image, when skin smoothing (buffing) is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of science and technology, the camera resolution of mobile terminals such as mobile phones keeps increasing, and many users like to take pictures with their phone cameras.
In order to make a photo look better, the user can retouch it with existing beautification software, which generally includes a skin-smoothing (buffing) function; for example, smoothing can make the facial skin in a picture appear finer and smoother. However, while removing fine particles, wrinkles and similar information from the facial skin, existing beautification software also removes details of the facial features that need to remain clearly visible, so the smoothed photo looks unnatural and the effect is poor.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that photos obtained by skin smoothing look unnatural and have a poor effect. The technical solution of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an image processing method, including:
acquiring an image to be processed;
processing the image to be processed to obtain a high-frequency image and a non-high-frequency image which respectively correspond to the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed, and the non-high-frequency image comprises non-high-frequency information in the image to be processed;
and performing fusion processing on the image to be processed and the non-high-frequency image through a mask image to obtain a processed image, wherein the mask image comprises the high-frequency image.
Optionally, the fusing the to-be-processed image and the non-high frequency image through the mask image includes:
respectively acquiring a high-frequency region and a non-high-frequency region in the image to be processed, wherein the high-frequency region is a region corresponding to the high-frequency information of the mask image in the image to be processed, and the non-high-frequency region is a region except the high-frequency region in the image to be processed;
keeping the pixel value corresponding to the high-frequency area in the image to be processed unchanged, and correspondingly replacing the pixel value corresponding to the non-high-frequency area with the corresponding pixel value in the non-high-frequency image.
Optionally, the non-high frequency image includes a medium frequency image and a low frequency image, and the mask image further includes the medium frequency image;
the fusion processing of the image to be processed and the non-high-frequency image through the mask image comprises:
respectively acquiring a high-frequency region, a medium-frequency region and a low-frequency region in the image to be processed, wherein the high-frequency region is a region corresponding to high-frequency information of the high-frequency image in the image to be processed, the medium-frequency region is a region corresponding to medium-frequency information of the medium-frequency image in the image to be processed, and the low-frequency region is a region corresponding to low-frequency information of the low-frequency image in the image to be processed;
according to a first weight, carrying out weighted summation on a pixel value corresponding to the high-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a first pixel value, and taking the first pixel value as the pixel value corresponding to the high-frequency region in the image to be processed;
according to a second weight, carrying out weighted summation on a pixel value corresponding to the intermediate frequency region in the image to be processed and a pixel value corresponding to the low frequency image to obtain a second pixel value, and taking the second pixel value as the pixel value corresponding to the intermediate frequency region in the image to be processed;
according to a third weight, carrying out weighted summation on a pixel value corresponding to the low-frequency area in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a third pixel value, and taking the third pixel value as the pixel value corresponding to the low-frequency area in the image to be processed;
wherein the first weight, the second weight and the third weight are different from each other.
Optionally, the processing the image to be processed to obtain a high-frequency image and a non-high-frequency image respectively corresponding to the image to be processed includes:
carrying out low-pass filtering processing on the image to be processed to obtain the non-high-frequency image;
and taking the difference between the image to be processed and the non-high-frequency image to obtain the high-frequency image.
Optionally, the processing the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed respectively includes:
performing low-pass filtering processing on the image to be processed to obtain the low-frequency image, wherein the low-frequency image comprises low-frequency information of the image to be processed;
performing intermediate-pass filtering processing on the image to be processed to obtain an intermediate-frequency image, wherein the intermediate-frequency image comprises intermediate-frequency information of the image to be processed;
and obtaining the high-frequency image by subtracting the low-frequency image and the intermediate-frequency image from the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed.
Optionally, after the processing is performed on the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed, the method further includes:
and carrying out low-pass filtering processing on the high-frequency image, and taking the high-frequency image after low-pass filtering as the mask image.
Optionally, the acquiring the image to be processed includes:
acquiring a target image;
and determining the foreground area of the target image as an image to be processed.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
an image acquisition module configured to perform acquiring an image to be processed;
the image processing module is configured to process the image to be processed to obtain a high-frequency image and a non-high-frequency image which respectively correspond to the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed, and the non-high-frequency image comprises non-high-frequency information in the image to be processed;
and the image fusion module is configured to perform fusion processing on the image to be processed and the non-high-frequency image through a mask image to obtain a processed image, wherein the mask image comprises the high-frequency image.
Optionally, the image fusion module includes:
a first image region acquisition unit configured to perform acquisition of a high-frequency region and a non-high-frequency region in the image to be processed, respectively, the high-frequency region being a region in the image to be processed corresponding to the high-frequency information of the mask image, and the non-high-frequency region being a region in the image to be processed other than the high-frequency region;
the first image fusion unit is configured to keep the pixel value corresponding to the high-frequency region in the image to be processed unchanged, and replace the pixel value corresponding to the non-high-frequency region with the corresponding pixel value in the non-high-frequency image.
Optionally, the non-high frequency image includes a medium frequency image and a low frequency image, and the mask image further includes the medium frequency image;
the image fusion module comprises:
a second image area obtaining unit configured to perform obtaining a high frequency area, a medium frequency area and a low frequency area in the image to be processed respectively, wherein the high frequency area is an area corresponding to high frequency information of the high frequency image in the image to be processed, the medium frequency area is an area corresponding to medium frequency information of the medium frequency image in the image to be processed, and the low frequency area is an area corresponding to low frequency information of the low frequency image in the image to be processed;
the first image fusion unit is configured to perform weighted summation on a pixel value corresponding to the high-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image according to a first weight to obtain a first pixel value, and the first pixel value is used as the pixel value corresponding to the high-frequency region in the image to be processed;
according to a second weight, carrying out weighted summation on the pixel value corresponding to the intermediate frequency region in the image to be processed and the pixel value corresponding to the low frequency image to obtain a second pixel value, and taking the second pixel value as the pixel value corresponding to the intermediate frequency region in the image to be processed;
according to a third weight, carrying out weighted summation on a pixel value corresponding to the low-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a third pixel value, and taking the third pixel value as the pixel value corresponding to the low-frequency region in the image to be processed;
wherein the first weight, the second weight and the third weight are different from each other.
Optionally, the image processing module is configured to perform:
performing low-pass filtering processing on the image to be processed to obtain the non-high-frequency image;
and taking the difference between the image to be processed and the non-high-frequency image to obtain the high-frequency image.
Optionally, the image processing module is configured to perform:
performing low-pass filtering processing on the image to be processed to obtain the low-frequency image, wherein the low-frequency image comprises low-frequency information of the image to be processed;
performing intermediate-pass filtering processing on the image to be processed to obtain an intermediate-frequency image, wherein the intermediate-frequency image comprises intermediate-frequency information of the image to be processed;
and obtaining the high-frequency image by subtracting the low-frequency image and the intermediate-frequency image from the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed.
Optionally, the apparatus further comprises:
and the filtering processing module is configured to perform low-pass filtering processing on the high-frequency image after the image processing module processes the image to be processed to obtain a high-frequency image and a non-high-frequency image which respectively correspond to the image to be processed, and take the high-frequency image after the low-pass filtering as the mask image.
Optionally, the image obtaining module is configured to perform:
acquiring a target image;
and determining the foreground area of the target image as an image to be processed.
According to a third aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to implement the image processing method of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to implement the image processing method of the first aspect.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed is acquired; the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; and the image to be processed is fused with the non-high-frequency image through a mask image to obtain a processed image, where the mask image includes the high-frequency image. Because the mask image includes the high-frequency image, when smoothing is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.
Drawings
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 3 is a diagram illustrating an image processing process according to an exemplary embodiment;
FIG. 4 is a block diagram of an image processing apparatus shown in accordance with an exemplary embodiment;
FIG. 5 is a block diagram of an electronic device shown in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment;
fig. 7 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment, where the image processing method is used for an electronic device, and the electronic device may be a smartphone, a tablet computer, a server, or the like, and the electronic device is not particularly limited by the embodiment of the present disclosure.
As shown in fig. 1, the image processing method may include the following steps.
In step S11, an image to be processed is acquired.
The image to be processed may be any image captured by the electronic device that performs the method, or any image that this electronic device acquires from another electronic device.
In addition, in practical applications, the image to be processed may be either a whole image or only the foreground region of a whole image; that is, image processing may be applied only to the foreground region, so that only the foreground region is smoothed and the background region is not. The foreground region may be a human face, a plant, an animal, or the like.
In such an embodiment, the step of acquiring the image to be processed may include:
acquiring a target image;
and determining the foreground area of the target image as an image to be processed.
In this embodiment, when acquiring an image to be processed, a target image that a user wants to process may be acquired first, then a foreground region of the target image is obtained through techniques such as face key point recognition, and the obtained foreground region is determined as the image to be processed. In this way, only the foreground region of the target image can be processed, and the background region of the target image is not processed.
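As an illustration of this step, the following Python sketch crops a detected face region as the image to be processed. The present disclosure only says that the foreground is obtained through techniques such as face key point recognition, so the Haar-cascade detector, the function name and the parameter values below are illustrative assumptions, not the claimed method.

```python
import cv2
import numpy as np


def acquire_image_to_be_processed(target_path: str) -> np.ndarray:
    """Return the foreground (face) region of the target image.

    Hypothetical illustration: the Haar cascade stands in for the
    unspecified face-detection / key-point technique of the disclosure.
    """
    target = cv2.imread(target_path)                       # acquire the target image
    gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return target                                      # no face found: fall back to the whole image
    x, y, w, h = faces[0]
    return target[y:y + h, x:x + w]                        # foreground region = image to be processed
```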
In step S12, the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed, where the high-frequency image includes high-frequency information of the image to be processed, and the non-high-frequency image includes non-high-frequency information in the image to be processed.
After the image to be processed is obtained, it can be processed to obtain a high-frequency image and a non-high-frequency image corresponding to it. The high-frequency image contains the high-frequency information of the image to be processed, for example the eyebrows, eyes, lips and other regions whose details need to be shown clearly. The low-frequency image contains the low-frequency information of the image to be processed, for example the face regions and other areas whose details do not need to be shown clearly.
In one embodiment, the information in the image to be processed can be divided into two broad categories, high frequency information and non-high frequency information (low frequency information). At this time, processing the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed respectively may include:
carrying out low-pass filtering processing on the image to be processed to obtain a non-high-frequency image;
and taking the difference between the image to be processed and the non-high-frequency image to obtain a high-frequency image.
In this embodiment, Gaussian blur, which is comparatively cheap to compute, may be chosen as the low-pass filter. Moreover, the high-frequency image may be obtained not only by subtracting the non-high-frequency image from the image to be processed but also in other ways, which is not specifically limited in the embodiments of the present disclosure.
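A minimal sketch of this two-band decomposition is given below; the Gaussian sigma is an assumed value, since the disclosure does not fix the filter parameters.

```python
import cv2
import numpy as np


def split_two_bands(image: np.ndarray, sigma: float = 5.0):
    """Two-band split: low-pass filter the image to be processed, then take
    the difference to obtain the high-frequency image. `sigma` is assumed."""
    img = image.astype(np.float32)
    non_high = cv2.GaussianBlur(img, (0, 0), sigma)   # low-pass (Gaussian blur) -> non-high-frequency image
    high = img - non_high                             # difference -> high-frequency image
    return high, non_high
```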
In another embodiment, the information in the image to be processed can be divided into three categories: high-frequency information, intermediate-frequency information, and low-frequency information. For example, hair may be high-frequency information, eyebrows may be intermediate-frequency information, and face regions may be low-frequency information. Of course, this is only an example; in practical applications the image to be processed contains many kinds of information, and whether a given kind is treated as high-frequency, intermediate-frequency, or low-frequency information can be set according to the actual situation, which is not particularly limited in the embodiments of the present disclosure.
At this time, processing the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed respectively may include:
carrying out low-pass filtering processing on the image to be processed to obtain a low-frequency image, wherein the low-frequency image comprises low-frequency information of the image to be processed;
performing intermediate-pass filtering processing on the image to be processed to obtain an intermediate-frequency image, wherein the intermediate-frequency image comprises intermediate-frequency information of the image to be processed;
and subtracting the low-frequency image and the intermediate-frequency image from the image to be processed to obtain a high-frequency image, wherein the high-frequency image comprises the high-frequency information of the image to be processed.
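A sketch of this three-band split is shown below. The disclosure does not specify the low-pass or intermediate-pass filters; here two Gaussian blurs of different strengths are assumed, with the intermediate band taken as their difference, which is an illustrative choice rather than the claimed filter design.

```python
import cv2
import numpy as np


def split_three_bands(image: np.ndarray,
                      sigma_mid: float = 2.0,
                      sigma_low: float = 8.0):
    """Split into low-, intermediate- and high-frequency images.

    Assumption: the intermediate band is approximated by the difference of a
    weak and a strong Gaussian blur (a simple band-pass filter).
    """
    img = image.astype(np.float32)
    low = cv2.GaussianBlur(img, (0, 0), sigma_low)         # low-pass -> low-frequency image
    mid = cv2.GaussianBlur(img, (0, 0), sigma_mid) - low   # band-pass -> intermediate-frequency image
    high = img - low - mid                                 # residual -> high-frequency image
    return high, mid, low
```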
In step S13, the image to be processed and the non-high frequency image are fused by the mask image, so as to obtain a processed image, where the mask image includes the high frequency image.
Because the high-frequency image contains the high-frequency information of the image to be processed, that is, the information whose details need to be shown clearly, the high-frequency image can be used as the mask image: the high-frequency region of the image to be processed is not smoothed, and only its non-high-frequency region is smoothed.
In one embodiment, the information in the image to be processed can be divided into two broad categories, high frequency information and non-high frequency information (low frequency information). At this time, the fusion processing is performed on the image to be processed and the non-high frequency image through the mask image, and the fusion processing includes:
respectively acquiring a high-frequency region and a non-high-frequency region in an image to be processed, wherein the high-frequency region is a region corresponding to high-frequency information of a mask image in the image to be processed, and the non-high-frequency region is a region except the high-frequency region in the image to be processed;
and keeping the pixel value corresponding to the high-frequency area in the image to be processed unchanged, and correspondingly replacing the pixel value corresponding to the non-high-frequency area with the corresponding pixel value in the non-high-frequency image.
It can be understood that the mask image, the non-high-frequency image and the image to be processed have the same size, and their pixels correspond to one another.
In this embodiment, the pixel values of the high-frequency region in the image to be processed remain unchanged, that is, the high-frequency region is not smoothed; the pixel values of the non-high-frequency region are replaced with the corresponding pixel values in the non-high-frequency image, that is, the non-high-frequency region of the image to be processed is blurred so that its details are no longer shown, which is exactly the smoothing effect for the non-high-frequency region.
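One way this keep-or-replace fusion might be implemented is sketched below, assuming three-channel image arrays; the threshold that decides which pixels belong to the high-frequency region is an illustrative value, not one given by the disclosure.

```python
import numpy as np


def fuse_two_bands(image: np.ndarray,
                   non_high: np.ndarray,
                   mask_image: np.ndarray,
                   threshold: float = 10.0) -> np.ndarray:
    """Keep original pixels in the high-frequency region and replace the
    non-high-frequency region with pixels from the non-high-frequency image.

    `mask_image` is the high-frequency image used as the mask; `threshold`
    is an assumed cut-off for locating the high-frequency region.
    """
    img = image.astype(np.float32)
    # High-frequency region: pixels where the mask image carries strong detail.
    high_region = np.abs(mask_image).max(axis=2, keepdims=True) > threshold
    # High-frequency region keeps original pixel values; the rest takes the
    # corresponding pixel values of the non-high-frequency image.
    fused = np.where(high_region, img, non_high.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```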
In another embodiment, the information in the image to be processed can be divided into three categories, i.e., high frequency information, intermediate frequency information, and low frequency information. At this time, the non-high frequency image includes an intermediate frequency image and a low frequency image, and the mask image also includes an intermediate frequency image; the fusing the image to be processed and the non-high frequency image through the mask image may include:
respectively acquiring a high-frequency region, a medium-frequency region and a low-frequency region in an image to be processed, wherein the high-frequency region is a region corresponding to high-frequency information of the high-frequency image in the image to be processed, the medium-frequency region is a region corresponding to medium-frequency information of the medium-frequency image in the image to be processed, and the low-frequency region is a region corresponding to low-frequency information of the low-frequency image in the image to be processed;
according to the first weight, carrying out weighted summation on a pixel value corresponding to a high-frequency area in the image to be processed and a pixel value corresponding to a low-frequency image to obtain a first pixel value, and taking the first pixel value as the pixel value corresponding to the high-frequency area in the image to be processed;
according to the second weight, carrying out weighted summation on the pixel value corresponding to the intermediate frequency region in the image to be processed and the pixel value corresponding to the low frequency image to obtain a second pixel value, and taking the second pixel value as the pixel value corresponding to the intermediate frequency region in the image to be processed;
according to the third weight, carrying out weighted summation on the pixel value corresponding to the low-frequency region in the image to be processed and the pixel value corresponding to the low-frequency image to obtain a third pixel value, and taking the third pixel value as the pixel value corresponding to the low-frequency region in the image to be processed;
wherein the first weight, the second weight and the third weight are different from each other.
In this embodiment, the information in the image to be processed is divided into high-frequency information, intermediate-frequency information, and low-frequency information. The high-frequency information may be information whose details need to be shown very clearly; the intermediate-frequency information may be information whose details need to be shown somewhat clearly; and the low-frequency information may be information whose details do not need to be shown clearly. Therefore, by using more than one mask image, the high-frequency, intermediate-frequency and low-frequency regions of the image to be processed can be smoothed to different degrees; that is, the high-frequency image and the intermediate-frequency image can both be used as mask images.
Specifically, the first weight may be (100%, 0): first pixel value = (pixel value of the high-frequency region in the image to be processed) × 100% + (corresponding pixel value in the low-frequency image) × 0; that is, the high-frequency region of the image to be processed is not smoothed, so that its detail information is shown clearly. The second weight may be (50%, 50%): second pixel value = (pixel value of the intermediate-frequency region in the image to be processed) × 50% + (corresponding pixel value in the low-frequency image) × 50%; that is, the intermediate-frequency region is moderately smoothed, so that its detail information is still shown to some extent. The third weight may be (0, 100%): third pixel value = (pixel value of the low-frequency region in the image to be processed) × 0 + (corresponding pixel value in the low-frequency image) × 100%; that is, the low-frequency region is fully smoothed and its detail information is no longer shown clearly.
It is understood that the specific values of the first weight, the second weight and the third weight can be set according to the actual situation, provided that, as in the present disclosure, the first weight, the second weight and the third weight differ from one another.
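The weighted fusion described above might be implemented as in the following sketch. The region masks are assumed to be float arrays in [0, 1] with a trailing channel dimension so that they broadcast against the image, and the 1.0 / 0.5 / 0.0 weights mirror the 100% / 50% / 0% example values in the text; none of these particulars are mandated by the disclosure.

```python
import numpy as np


def fuse_three_bands(image: np.ndarray,
                     low: np.ndarray,
                     high_mask: np.ndarray,
                     mid_mask: np.ndarray,
                     w_high: float = 1.0,
                     w_mid: float = 0.5,
                     w_low: float = 0.0) -> np.ndarray:
    """Weighted sum of the image to be processed and its low-frequency image,
    with different weights in the high-, intermediate- and low-frequency
    regions (the first, second and third weight of the text)."""
    img = image.astype(np.float32)
    lowf = low.astype(np.float32)
    low_mask = np.clip(1.0 - high_mask - mid_mask, 0.0, 1.0)    # remaining (low-frequency) region
    out = (high_mask * (w_high * img + (1.0 - w_high) * lowf)   # first pixel value
           + mid_mask * (w_mid * img + (1.0 - w_mid) * lowf)    # second pixel value
           + low_mask * (w_low * img + (1.0 - w_low) * lowf))   # third pixel value
    return np.clip(out, 0, 255).astype(np.uint8)
```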
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed is acquired; the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; and the image to be processed is fused with the non-high-frequency image through a mask image to obtain a processed image, where the mask image includes the high-frequency image. Because the mask image includes the high-frequency image, when smoothing is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.
In order to further improve the image processing effect, the present disclosure also provides an image processing method, as shown in fig. 2, which may include the following steps.
In step S21, an image to be processed is acquired.
Since step S21 is the same as step S11, step S11 has already been explained in detail in fig. 1, and step S21 will not be described again.
In step S22, the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed, where the high-frequency image includes high-frequency information of the image to be processed, and the non-high-frequency image includes non-high-frequency information in the image to be processed.
Since step S22 is the same as step S12, step S12 has already been explained in detail in fig. 1, and step S22 will not be described again.
In step S23, the high-frequency image is subjected to low-pass filtering processing, and the low-pass filtered high-frequency image is taken as a mask image.
After the high-frequency image is obtained, in order to feather the high-frequency image and make the mask image finer, the high-frequency image is subjected to low-pass filtering processing, and the high-frequency image after the low-pass filtering is used as the mask image.
In step S24, the image to be processed and the non-high frequency image are fused by the mask image, so as to obtain a processed image.
Since step S24 is the same as step S13, step S13 has already been explained in detail in fig. 1, and step S24 is not described again here.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed is acquired; the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; the high-frequency image is low-pass filtered, and the low-pass filtered high-frequency image is used as the mask image; and the image to be processed is fused with the non-high-frequency image through the mask image to obtain a processed image. Because the low-pass filtered high-frequency image is used as the mask image, when smoothing is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.
For clarity of description, the embodiments of the present disclosure will be described below with reference to specific examples. As shown in fig. 3, the image to be processed is an image including a face region.
The method comprises the following steps: first, a low-pass filtering operation is performed on the image to be processed to obtain a non-high-frequency image;
second, the difference between the image to be processed and the non-high-frequency image is taken to obtain a high-frequency image;
third, in order to make the high-frequency image more continuous, low-pass filtering is applied to it to obtain a low-pass filtered high-frequency image;
and fourth, the low-pass filtered high-frequency image is used as the mask image to fuse the image to be processed with the non-high-frequency image, yielding the processed image.
As can be seen from FIG. 3, in the processed image the eyes, eyebrows, lips and other areas whose details need to be shown clearly have not become blurred, so the processed picture looks more natural and the effect is better.
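Putting the four steps together, a minimal end-to-end sketch of the pipeline illustrated around FIG. 3 could look like the following; the blur strengths and the normalisation of the feathered mask are assumptions made for illustration, not values taken from the disclosure.

```python
import cv2
import numpy as np


def smooth_skin(image_to_be_processed: np.ndarray,
                sigma: float = 5.0,
                mask_sigma: float = 2.0) -> np.ndarray:
    """End-to-end sketch of the four steps described above."""
    img = image_to_be_processed.astype(np.float32)
    # Step 1: low-pass filter the image to be processed -> non-high-frequency image.
    non_high = cv2.GaussianBlur(img, (0, 0), sigma)
    # Step 2: difference with the image to be processed -> high-frequency image.
    high = img - non_high
    # Step 3: low-pass filter (feather) the high-frequency image -> mask image.
    mask = cv2.GaussianBlur(np.abs(high), (0, 0), mask_sigma)
    mask = np.clip(mask / (mask.max() + 1e-6), 0.0, 1.0)   # assumed normalisation to [0, 1]
    # Step 4: fuse -- detail (high-frequency) areas keep original pixels,
    # the rest takes the smoothed (non-high-frequency) pixels.
    fused = mask * img + (1.0 - mask) * non_high
    return np.clip(fused, 0, 255).astype(np.uint8)
```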
The present disclosure also provides an image processing apparatus, as shown in fig. 4, comprising:
An image acquisition module 410 configured to perform acquiring an image to be processed;
an image processing module 420, configured to perform processing on the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed, respectively, where the high-frequency image includes high-frequency information of the image to be processed, and the non-high-frequency image includes non-high-frequency information in the image to be processed;
an image fusion module 430 configured to perform fusion processing on the image to be processed and the non-high-frequency image through a mask image, so as to obtain a processed image, where the mask image includes the high-frequency image.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed is acquired; the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; and the image to be processed is fused with the non-high-frequency image through a mask image to obtain a processed image, where the mask image includes the high-frequency image. Because the mask image includes the high-frequency image, when smoothing is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.
Optionally, the image fusion module includes:
a first image region acquisition unit configured to perform acquisition of a high-frequency region and a non-high-frequency region in the image to be processed, respectively, the high-frequency region being a region in the image to be processed corresponding to the high-frequency information of the mask image, and the non-high-frequency region being a region in the image to be processed other than the high-frequency region;
the first image fusion unit is configured to keep the pixel value corresponding to the high-frequency region in the image to be processed unchanged, and replace the pixel value corresponding to the non-high-frequency region with the corresponding pixel value in the non-high-frequency image.
Optionally, the non-high frequency image includes a medium frequency image and a low frequency image, and the mask image further includes the medium frequency image;
the image fusion module comprises:
a second image area obtaining unit configured to perform obtaining a high frequency area, a medium frequency area and a low frequency area in the image to be processed respectively, wherein the high frequency area is an area corresponding to high frequency information of the high frequency image in the image to be processed, the medium frequency area is an area corresponding to medium frequency information of the medium frequency image in the image to be processed, and the low frequency area is an area corresponding to low frequency information of the low frequency image in the image to be processed;
the first image fusion unit is configured to perform weighted summation on a pixel value corresponding to the high-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image according to a first weight to obtain a first pixel value, and the first pixel value is used as the pixel value corresponding to the high-frequency region in the image to be processed;
according to a second weight, carrying out weighted summation on the pixel value corresponding to the intermediate frequency region in the image to be processed and the pixel value corresponding to the low frequency image to obtain a second pixel value, and taking the second pixel value as the pixel value corresponding to the intermediate frequency region in the image to be processed;
according to a third weight, carrying out weighted summation on a pixel value corresponding to the low-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a third pixel value, and taking the third pixel value as the pixel value corresponding to the low-frequency region in the image to be processed;
wherein the first weight, the second weight and the third weight are different from each other.
Optionally, the image processing module is configured to perform:
performing low-pass filtering processing on the image to be processed to obtain the non-high-frequency image;
and taking the difference between the image to be processed and the non-high-frequency image to obtain the high-frequency image.
Optionally, the image processing module is configured to perform:
performing low-pass filtering processing on the image to be processed to obtain a low-frequency image, wherein the low-frequency image comprises low-frequency information of the image to be processed;
performing intermediate-pass filtering processing on the image to be processed to obtain an intermediate-frequency image, wherein the intermediate-frequency image comprises intermediate-frequency information of the image to be processed;
and obtaining the high-frequency image by subtracting the low-frequency image and the intermediate-frequency image from the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed.
Optionally, the apparatus further comprises:
and the filtering processing module is configured to perform low-pass filtering processing on the high-frequency image after the image processing module processes the image to be processed to obtain a high-frequency image and a non-high-frequency image respectively corresponding to the image to be processed, and take the high-frequency image after the low-pass filtering as the mask image.
Optionally, the image obtaining module is configured to perform:
acquiring a target image;
and determining the foreground area of the target image as an image to be processed.
FIG. 5 is a block diagram of an electronic device shown in accordance with an example embodiment. Referring to fig. 5, the electronic device includes:
a processor 510;
a memory 520 for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method provided by the present disclosure.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed is acquired; the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; and the image to be processed is fused with the non-high-frequency image through a mask image to obtain a processed image, where the mask image includes the high-frequency image. Because the mask image includes the high-frequency image, when smoothing is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.
Fig. 6 is a block diagram illustrating an apparatus 600 for image processing according to an example embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast electronic device, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation at the device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing various aspects of state assessment for the apparatus 600. For example, the sensor component 614 may detect the open/closed state of the device 600, the relative positioning of the components, such as a display and keypad of the apparatus 600, the change in position of the apparatus 600 or a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and the change in temperature of the apparatus 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. Alternatively, for example, the storage medium may be a non-transitory computer-readable storage medium, such as a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: an image to be processed is acquired; the image to be processed is processed to obtain a high-frequency image and a non-high-frequency image corresponding to it, where the high-frequency image contains the high-frequency information of the image to be processed and the non-high-frequency image contains the non-high-frequency information in the image to be processed; and the image to be processed is fused with the non-high-frequency image through a mask image to obtain a processed image, where the mask image includes the high-frequency image. Because the mask image includes the high-frequency image, when smoothing is applied to the image to be processed, only its non-high-frequency part is smoothed while its high-frequency part is left untouched, so the high-frequency details of the image remain sharp and the smoothed photo looks more natural and has a better effect.
Fig. 7 is a block diagram illustrating an apparatus 700 for image processing according to an example embodiment. For example, the apparatus 700 may be provided as a server. Referring to fig. 7, apparatus 700 includes a processing component 722 that further includes one or more processors and memory resources, represented by memory 732, for storing instructions, such as applications, that are executable by processing component 722. The application programs stored in memory 732 may include one or more modules that each correspond to a set of instructions. Further, the processing component 722 is configured to execute instructions to perform the image processing methods described above.
The apparatus 700 may also include a power component 726 configured to perform power management of the apparatus 700, a wired or wireless network interface 750 configured to connect the apparatus 700 to a network, and an input/output (I/O) interface 758. The apparatus 700 may operate based on an operating system stored in memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The technical solution provided by this embodiment of the present disclosure brings at least the same beneficial effects described above: because the mask image includes the high-frequency image, skin smoothing is applied only to the non-high-frequency content of the image to be processed, so the high-frequency details remain sharp and the result looks more natural.
In yet another aspect, the present disclosure also provides a storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method provided by the present disclosure.
The technical solution provided by this embodiment of the present disclosure brings at least the same beneficial effects described above: because the mask image includes the high-frequency image, skin smoothing is applied only to the non-high-frequency content of the image to be processed, so the high-frequency details remain sharp and the result looks more natural.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to implement the image processing method of the first aspect.
The technical solution provided by this embodiment of the present disclosure brings at least the same beneficial effects described above: because the mask image includes the high-frequency image, skin smoothing is applied only to the non-high-frequency content of the image to be processed, so the high-frequency details remain sharp and the result looks more natural.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. An image processing method, comprising:
Acquiring an image to be processed;
processing the image to be processed to obtain a high-frequency image and a non-high-frequency image which correspond to the image to be processed respectively, wherein the high-frequency image comprises high-frequency information of the image to be processed, and the non-high-frequency image comprises non-high-frequency information in the image to be processed;
performing fusion processing on the image to be processed and the non-high-frequency image through a mask image to obtain a processed image, wherein the mask image comprises the high-frequency image;
wherein the non-high-frequency image comprises an intermediate-frequency image and a low-frequency image, the mask image further comprises the intermediate-frequency image, and the image to be processed comprises a high-frequency region, an intermediate-frequency region and a low-frequency region; and the performing fusion processing on the image to be processed and the non-high-frequency image through the mask image comprises: performing weighted summation on the pixel values corresponding to the high-frequency region, the intermediate-frequency region and the low-frequency region in the image to be processed according to different weights, and taking the weighted pixel values as the pixel values respectively corresponding to the high-frequency region, the intermediate-frequency region and the low-frequency region in the image to be processed.
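As a rough illustration of how the three regions recited in claim 1 above might be obtained in practice, the sketch below thresholds the magnitudes of the high-frequency and intermediate-frequency images; the threshold values, the label encoding, and the NumPy-based implementation are assumptions made for this example and are not specified by the claim.

```python
import numpy as np

def label_regions(high_freq: np.ndarray, mid_freq: np.ndarray,
                  t_high: float = 10.0, t_mid: float = 3.0) -> np.ndarray:
    """Per-pixel label map: 2 = high-frequency region, 1 = intermediate-frequency, 0 = low-frequency."""
    high_strength = np.abs(high_freq).mean(axis=2)   # detail strength from the high-frequency image
    mid_strength = np.abs(mid_freq).mean(axis=2)     # detail strength from the intermediate-frequency image

    labels = np.zeros(high_strength.shape, dtype=np.uint8)
    labels[mid_strength > t_mid] = 1                 # intermediate-frequency region
    labels[high_strength > t_high] = 2               # high-frequency region takes precedence
    return labels
```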
2. The method according to claim 1, wherein the fusing the image to be processed and the non-high frequency image through the mask image comprises:
respectively acquiring a high-frequency region and a non-high-frequency region in the image to be processed, wherein the high-frequency region is a region corresponding to high-frequency information of the mask image in the image to be processed, and the non-high-frequency region is a region except the high-frequency region in the image to be processed;
keeping the pixel value corresponding to the high-frequency region in the image to be processed unchanged, and replacing the pixel value corresponding to the non-high-frequency region with the corresponding pixel value in the non-high-frequency image.
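A minimal sketch of the replacement-style fusion in claim 2 above, assuming the high-frequency region is identified by thresholding the magnitude of the high-frequency image; the threshold value and the NumPy-based implementation are illustrative assumptions, not part of the claim.

```python
import numpy as np

def replace_fusion(img: np.ndarray, non_high: np.ndarray, high: np.ndarray,
                   threshold: float = 10.0) -> np.ndarray:
    """Keep the original pixels in the high-frequency region; take the non-high-frequency image elsewhere."""
    high_region = np.abs(high).mean(axis=2) > threshold     # boolean map of the high-frequency region
    return np.where(high_region[..., None], img, non_high)  # per-pixel selection
```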
3. The method according to claim 1, wherein the fusing the image to be processed and the non-high frequency image through the mask image comprises:
respectively acquiring a high-frequency region, an intermediate-frequency region and a low-frequency region in the image to be processed, wherein the high-frequency region is a region corresponding to high-frequency information of the high-frequency image in the image to be processed, the intermediate-frequency region is a region corresponding to intermediate-frequency information of the intermediate-frequency image in the image to be processed, and the low-frequency region is a region corresponding to low-frequency information of the low-frequency image in the image to be processed;
according to a first weight, carrying out weighted summation on a pixel value corresponding to the high-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a first pixel value, and taking the first pixel value as the pixel value corresponding to the high-frequency region in the image to be processed;
according to a second weight, carrying out weighted summation on a pixel value corresponding to the intermediate frequency region in the image to be processed and a pixel value corresponding to the low frequency image to obtain a second pixel value, and taking the second pixel value as the pixel value corresponding to the intermediate frequency region in the image to be processed;
according to a third weight, carrying out weighted summation on a pixel value corresponding to the low-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a third pixel value, and taking the third pixel value as the pixel value corresponding to the low-frequency region in the image to be processed;
wherein the first weight, the second weight and the third weight are different from each other.
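A minimal sketch of the weighted fusion in claim 3 above, assuming a label map like the one sketched after claim 1 and three placeholder weights; the claim only requires that the weights differ, so the specific values here are arbitrary.

```python
import numpy as np

def weighted_fusion(img: np.ndarray, low: np.ndarray, labels: np.ndarray,
                    w_high: float = 0.9, w_mid: float = 0.6, w_low: float = 0.3) -> np.ndarray:
    """Blend the image to be processed with the low-frequency image using a different weight per region."""
    # labels: 2 = high-frequency region, 1 = intermediate-frequency region, 0 = low-frequency region.
    weight = np.select([labels == 2, labels == 1], [w_high, w_mid], default=w_low)[..., None]
    return weight * img + (1.0 - weight) * low
```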
4. The method according to claim 2, wherein the processing the image to be processed to obtain a high-frequency image and a non-high-frequency image respectively corresponding to the image to be processed comprises:
carrying out low-pass filtering processing on the image to be processed to obtain the non-high-frequency image;
and subtracting the non-high-frequency image from the image to be processed to obtain the high-frequency image.
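A minimal sketch of the two-band decomposition in claim 4 above, assuming a Gaussian blur as the low-pass filter; the patent does not prescribe a particular filter or kernel size.

```python
import cv2
import numpy as np

def split_two_bands(img: np.ndarray, sigma: float = 5.0):
    """Low-pass gives the non-high-frequency image; the residual is the high-frequency image."""
    img = img.astype(np.float32)
    non_high = cv2.GaussianBlur(img, (0, 0), sigma)   # low-pass filtering
    high = img - non_high                             # difference yields the high-frequency image
    return high, non_high
```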
5. The method according to claim 3, wherein the processing the image to be processed to obtain a high-frequency image and a non-high-frequency image respectively corresponding to the image to be processed comprises:
performing low-pass filtering processing on the image to be processed to obtain the low-frequency image, wherein the low-frequency image comprises low-frequency information of the image to be processed;
performing intermediate-pass filtering processing on the image to be processed to obtain an intermediate-frequency image, wherein the intermediate-frequency image comprises intermediate-frequency information of the image to be processed;
and obtaining the high-frequency image by subtracting the low-frequency image and the intermediate-frequency image from the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed.
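A minimal sketch of the three-band decomposition in claim 5 above. The "intermediate-pass filtering" is approximated here as a difference of two Gaussian blurs, which is only one possible reading of the claim; the sigma values are arbitrary.

```python
import cv2
import numpy as np

def split_three_bands(img: np.ndarray, sigma_low: float = 8.0, sigma_mid: float = 2.0):
    """Split the image to be processed into low-, intermediate-, and high-frequency images."""
    img = img.astype(np.float32)
    low = cv2.GaussianBlur(img, (0, 0), sigma_low)        # low-frequency image
    mid = cv2.GaussianBlur(img, (0, 0), sigma_mid) - low  # band-pass: intermediate-frequency image
    high = img - low - mid                                # residual: high-frequency image
    return high, mid, low
```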
6. The method according to any one of claims 1 to 5, wherein after the processing the image to be processed to obtain a high-frequency image and a non-high-frequency image corresponding to the image to be processed, the method further comprises:
and performing low-pass filtering processing on the high-frequency image, and taking the high-frequency image subjected to low-pass filtering as the mask image.
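A minimal sketch of the mask construction in claim 6 above: the high-frequency image is itself low-pass filtered so the mask transitions smoothly between kept and smoothed regions. The Gaussian filter and sigma are assumptions, not claim limitations.

```python
import cv2
import numpy as np

def soften_mask(high: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Low-pass filter the high-frequency image and use the result as the mask image."""
    return cv2.GaussianBlur(high.astype(np.float32), (0, 0), sigma)
```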
7. The method according to any one of claims 1 to 5, wherein the acquiring the image to be processed comprises:
acquiring a target image;
and determining the foreground area of the target image as an image to be processed.
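A minimal sketch of claim 7 above, assuming a boolean foreground mask is already available from some segmentation step; how that mask is produced is outside this sketch and is not specified by the claim.

```python
import numpy as np

def extract_foreground(target: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """Take only the foreground region of the target image as the image to be processed."""
    to_process = np.zeros_like(target)
    to_process[fg_mask] = target[fg_mask]   # copy foreground pixels; background stays zero
    return to_process
```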
8. An image processing apparatus characterized by comprising:
an image acquisition module configured to perform acquiring an image to be processed;
the image processing module is configured to process the image to be processed to obtain a high-frequency image and a non-high-frequency image which respectively correspond to the image to be processed, wherein the high-frequency image comprises high-frequency information of the image to be processed, and the non-high-frequency image comprises non-high-frequency information in the image to be processed;
the image fusion module is configured to perform fusion processing on the image to be processed and the non-high-frequency image through a mask image to obtain a processed image, wherein the mask image comprises the high-frequency image;
wherein the non-high-frequency image comprises an intermediate-frequency image and a low-frequency image, the mask image further comprises the intermediate-frequency image, and the image to be processed comprises a high-frequency region, an intermediate-frequency region and a low-frequency region; and the image fusion module is configured to perform weighted summation on the pixel value corresponding to the high-frequency region, the pixel value corresponding to the intermediate-frequency region and the pixel value corresponding to the low-frequency region in the image to be processed according to different weights, and take the weighted pixel values as the pixel value corresponding to the high-frequency region, the pixel value corresponding to the intermediate-frequency region and the pixel value corresponding to the low-frequency region in the image to be processed, respectively.
9. The apparatus of claim 8, wherein the image fusion module comprises:
a first image region acquisition unit configured to perform acquisition of a high-frequency region and a non-high-frequency region in the image to be processed, respectively, the high-frequency region being a region in the image to be processed corresponding to the high-frequency information of the mask image, and the non-high-frequency region being a region in the image to be processed other than the high-frequency region;
the first image fusion unit is configured to keep the pixel value corresponding to the high-frequency region in the image to be processed unchanged, and replace the pixel value corresponding to the non-high-frequency region with the corresponding pixel value in the non-high-frequency image.
10. The apparatus of claim 8, wherein the image fusion module comprises:
a second image region acquisition unit configured to perform acquiring a high-frequency region, an intermediate-frequency region and a low-frequency region in the image to be processed respectively, wherein the high-frequency region is a region corresponding to high-frequency information of the high-frequency image in the image to be processed, the intermediate-frequency region is a region corresponding to intermediate-frequency information of the intermediate-frequency image in the image to be processed, and the low-frequency region is a region corresponding to low-frequency information of the low-frequency image in the image to be processed;
a second image fusion unit configured to perform weighted summation on a pixel value corresponding to the high-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image according to a first weight to obtain a first pixel value, and take the first pixel value as the pixel value corresponding to the high-frequency region in the image to be processed;
according to a second weight, carrying out weighted summation on the pixel value corresponding to the intermediate frequency region in the image to be processed and the pixel value corresponding to the low frequency image to obtain a second pixel value, and taking the second pixel value as the pixel value corresponding to the intermediate frequency region in the image to be processed;
according to a third weight, carrying out weighted summation on a pixel value corresponding to the low-frequency region in the image to be processed and a pixel value corresponding to the low-frequency image to obtain a third pixel value, and taking the third pixel value as the pixel value corresponding to the low-frequency region in the image to be processed;
wherein the first weight, the second weight and the third weight are different from each other.
11. The apparatus of claim 9, wherein the image processing module is configured to perform:
performing low-pass filtering processing on the image to be processed to obtain the non-high-frequency image;
and subtracting the non-high-frequency image from the image to be processed to obtain the high-frequency image.
12. The apparatus of claim 10, wherein the image processing module is configured to perform:
performing low-pass filtering processing on the image to be processed to obtain the low-frequency image, wherein the low-frequency image comprises low-frequency information of the image to be processed;
performing intermediate-pass filtering processing on the image to be processed to obtain an intermediate-frequency image, wherein the intermediate-frequency image comprises intermediate-frequency information of the image to be processed;
and subtracting the low-frequency image and the intermediate-frequency image from the image to be processed to obtain the high-frequency image, wherein the high-frequency image comprises high-frequency information of the image to be processed.
13. The apparatus of any one of claims 8 to 12, further comprising:
and the filtering processing module is configured to perform low-pass filtering processing on the high-frequency image after the image processing module processes the image to be processed to obtain a high-frequency image and a non-high-frequency image which respectively correspond to the image to be processed, and take the high-frequency image after the low-pass filtering as the mask image.
14. The apparatus according to any one of claims 8 to 12, wherein the image acquisition module is configured to perform:
acquiring a target image;
and determining the foreground area of the target image as an image to be processed.
15. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 7.
16. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 7.
CN201910727362.3A 2019-08-07 2019-08-07 Image processing method and device, electronic equipment and storage medium Active CN110580688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910727362.3A CN110580688B (en) 2019-08-07 2019-08-07 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910727362.3A CN110580688B (en) 2019-08-07 2019-08-07 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110580688A CN110580688A (en) 2019-12-17
CN110580688B true CN110580688B (en) 2022-11-11

Family

ID=68810444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910727362.3A Active CN110580688B (en) 2019-08-07 2019-08-07 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110580688B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362262A (en) * 2020-03-05 2021-09-07 广州虎牙科技有限公司 Image fusion preprocessing method, device, equipment and storage medium
CN113496470B (en) * 2020-04-02 2024-04-09 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113673270B (en) * 2020-04-30 2024-01-26 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN111784611B (en) * 2020-07-03 2023-11-03 厦门美图之家科技有限公司 Portrait whitening method, device, electronic equipment and readable storage medium
WO2022016326A1 (en) * 2020-07-20 2022-01-27 深圳市大疆创新科技有限公司 Image processing method, electronic device, and computer-readable medium
CN112258440B (en) * 2020-10-29 2024-01-02 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898546A (en) * 2018-06-15 2018-11-27 北京小米移动软件有限公司 Face image processing process, device and equipment, readable storage medium storing program for executing
CN109741269A (en) * 2018-12-07 2019-05-10 广州华多网络科技有限公司 Image processing method, device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080050331A1 (en) * 2006-08-23 2008-02-28 Paolo Giacomoni Cosmetic Composition Containing A Protease Activator
CN106971165B (en) * 2017-03-29 2018-08-10 武汉斗鱼网络科技有限公司 A kind of implementation method and device of filter
CN108053377A (en) * 2017-12-11 2018-05-18 北京小米移动软件有限公司 Image processing method and equipment
CN109829864B (en) * 2019-01-30 2021-05-18 北京达佳互联信息技术有限公司 Image processing method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898546A (en) * 2018-06-15 2018-11-27 北京小米移动软件有限公司 Face image processing process, device and equipment, readable storage medium storing program for executing
CN109741269A (en) * 2018-12-07 2019-05-10 广州华多网络科技有限公司 Image processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110580688A (en) 2019-12-17

Similar Documents

Publication Publication Date Title
CN110580688B (en) Image processing method and device, electronic equipment and storage medium
CN108182730B (en) Virtual and real object synthesis method and device
CN107692997B (en) Heart rate detection method and device
CN108154465B (en) Image processing method and device
CN106331504B (en) Shooting method and device
CN106408603B (en) Shooting method and device
CN106210496B (en) Photo shooting method and device
CN107798654B (en) Image buffing method and device and storage medium
CN111340733B (en) Image processing method and device, electronic equipment and storage medium
CN107341777B (en) Picture processing method and device
CN107730448B (en) Beautifying method and device based on image processing
CN108154466B (en) Image processing method and device
CN111553864A (en) Image restoration method and device, electronic equipment and storage medium
CN112330570B (en) Image processing method, device, electronic equipment and storage medium
CN107015648B (en) Picture processing method and device
CN109784164B (en) Foreground identification method and device, electronic equipment and storage medium
CN111523346B (en) Image recognition method and device, electronic equipment and storage medium
CN112614064B (en) Image processing method, device, electronic equipment and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN111127352B (en) Image processing method, device, terminal and storage medium
CN107507128B (en) Image processing method and apparatus
CN108961156B (en) Method and device for processing face image
CN111861942A (en) Noise reduction method and device, electronic equipment and storage medium
CN109509195B (en) Foreground processing method and device, electronic equipment and storage medium
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant