CN111127352A - Image processing method, device, terminal and storage medium - Google Patents
- Publication number
- CN111127352A (application CN201911285283.8A)
- Authority
- CN
- China
- Prior art keywords
- frequency information
- low
- target
- image
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The present disclosure relates to an image processing method, an apparatus, a terminal, and a storage medium, the method including: acquiring an image to be processed containing a target object; determining low-frequency information and high-frequency information of the target object; filtering the low-frequency information to obtain target low-frequency information; performing makeup application processing on the high-frequency information, and filtering the high-frequency information after the makeup application processing to obtain target high-frequency information; and fusing the target low-frequency information and the target high-frequency information to obtain a processed target image. In the technical scheme provided by the embodiments of the present disclosure, the low-frequency information of the target object is first smoothed (skin-polished) to obtain the target low-frequency information; makeup is then applied to the high-frequency information of the target object, and the made-up high-frequency information is smoothed to obtain the target high-frequency information; finally, the target low-frequency information and the target high-frequency information are fused to obtain the processed target image, which greatly reduces the occurrence of the "floating makeup" artifact.
Description
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, and a storage medium.
Background
As mobile terminals have become widespread, more and more users prefer to capture images or videos with them. To make the people in a captured image or video look more attractive, the person's image is usually beautified through the beautification function of application software on the mobile terminal.
In the related art, when beautification is performed on a person, skin-polishing (smoothing) processing is usually applied first to the area where the person is located, and makeup is then overlaid on the smoothed result.

However, this cosmetic processing approach may cause a "floating makeup" phenomenon. For example, the lipstick does not appear to adhere to the lips, giving the user the visual impression that the lipstick is an effect added to the photograph rather than lipstick actually applied to the lips.
Disclosure of Invention
To solve the technical problem in the related art that "floating makeup" appears after a person's image is beautified, the present disclosure provides an image processing method, an image processing apparatus, a terminal, and a storage medium. The technical solutions of the present disclosure are as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring an image to be processed containing a target object;
determining low-frequency information and high-frequency information of the target object;
filtering the low-frequency information to obtain target low-frequency information;
performing makeup application processing on the high-frequency information, and performing filtering processing on the high-frequency information after the makeup application processing to obtain target high-frequency information;
and fusing the target low-frequency information and the target high-frequency information to obtain a processed target image.
Optionally, the filtering the low-frequency information to obtain target low-frequency information includes:
and carrying out bilateral filtering processing with the first parameter as the parameter on the low-frequency information to obtain target low-frequency information.
Optionally, the filtering the high-frequency information after the makeup application processing to obtain target high-frequency information includes:
and carrying out bilateral filtering processing with a second parameter as a parameter on the high-frequency information after makeup application processing to obtain target high-frequency information, wherein the second parameter is smaller than the first parameter.
Optionally, before determining the low-frequency information and the high-frequency information of the target object, the method further includes:
detecting a skin color region of the target object;
correspondingly, the determining the low-frequency information and the high-frequency information of the target object includes:
and determining low-frequency information and high-frequency information of the skin color area.
Optionally, the target object is a face image, and the applying makeup processing on the high-frequency information includes:
detecting five sense organ regions on the face image;
acquiring makeup information corresponding to the five sense organ regions;
and adding makeup information corresponding to the five sense organ regions in each five sense organ region.
Optionally, the makeup information includes one or more of the following: eyelashes, lipstick, and eyebrows.
According to a second aspect of an embodiment of the present disclosure, there is provided an image processing apparatus including:
an image acquisition module configured to perform acquisition of an image to be processed containing a target object;
an image information acquisition module configured to perform determining low frequency information and high frequency information of the target object;
the first filtering module is configured to perform filtering processing on the low-frequency information to obtain target low-frequency information;
the second filtering module is configured to perform makeup application processing on the high-frequency information and perform filtering processing on the high-frequency information after the makeup application processing to obtain target high-frequency information;
and the image information fusion module is configured to perform fusion of the target low-frequency information and the target high-frequency information to obtain a processed target image.
Optionally, the first filtering module is configured to perform:
and carrying out bilateral filtering processing with the first parameter as the parameter on the low-frequency information to obtain target low-frequency information.
Optionally, the second filtering module is configured to perform:
and carrying out bilateral filtering processing with a second parameter as a parameter on the high-frequency information after makeup application processing to obtain target high-frequency information, wherein the second parameter is smaller than the first parameter.
Optionally, the apparatus further comprises:
a skin tone detection module configured to perform detecting a skin tone region of the target object before determining low frequency information and high frequency information of the target object;
correspondingly, the image information obtaining module is configured to perform:
and determining low-frequency information and high-frequency information of the skin color area.
Optionally, the target object is a face image, and the second filtering module is configured to perform:
detecting five sense organ regions on the face image;
acquiring makeup information corresponding to the five sense organ regions;
and adding makeup information corresponding to the five sense organ regions in each five sense organ region.
Optionally, the makeup information includes one or more of the following: eyelashes, lipstick, and eyebrows.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to implement the image processing method of the first aspect.
According to the technical scheme provided by the embodiment of the disclosure, the image to be processed containing the target object is obtained; determining low-frequency information and high-frequency information of a target object; filtering the low-frequency information to obtain target low-frequency information; performing makeup application processing on the high-frequency information, and performing filtering processing on the high-frequency information after the makeup application processing to obtain target high-frequency information; and fusing the target low-frequency information and the target high-frequency information to obtain a processed target image.
Therefore, in the technical scheme provided by the embodiments of the present disclosure, the low-frequency information of the target object is first smoothed to obtain the target low-frequency information; makeup is then applied to the high-frequency information of the target object, and the made-up high-frequency information is smoothed to obtain the target high-frequency information; finally, the target low-frequency information and the target high-frequency information are fused to obtain the processed target image. Unlike the prior art, in which the whole image to be processed is smoothed and then has makeup overlaid, this makes the makeup effect more natural and the beautification more realistic and fine-grained, greatly reducing the occurrence of "floating makeup".
Drawings
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 4 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
To solve the technical problem of "floating makeup" appearing after a person's image is beautified in the related art, the present disclosure provides an image processing method, an image processing apparatus, a terminal, and a storage medium.
in a first aspect, an image processing method provided by an embodiment of the present disclosure is first described in detail.
It should be noted that an execution subject of the image processing method provided in the embodiment of the present disclosure may be an image processing apparatus, and the image processing apparatus may be operated in a terminal, where the terminal may be a mobile phone, a tablet computer, and the like, and the embodiment of the present disclosure is not particularly limited in this regard.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment. As shown in fig. 1, the method may include the following steps.
In step S11, an image to be processed containing the target object is acquired.
The image to be processed may be an image currently captured by the terminal serving as the execution subject, a frame of a video currently captured by that terminal, any image stored locally on that terminal, or any image that the terminal acquires from another terminal.
The target object may be a human face, or a human body image including a human face, and the target object is not specifically limited in the embodiments of the present disclosure.
In step S12, low frequency information and high frequency information of the target object are determined.
After the image to be processed containing the target object is acquired, the low-frequency information and high-frequency information of the target object may be determined, for example as follows: the image to be processed is copied into two layers, one layer being used to extract the low-frequency information of the target object and the other to extract the high-frequency information. It can be understood that the low-frequency information of the target object is the image information of the areas where the color changes slowly within the image region where the target object is located, and the high-frequency information is the remaining image information of that region.
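As a minimal sketch of this decomposition (in Python, with a 3x3 box blur standing in for the low-pass filter; the patent does not prescribe a specific blur, so this choice is an illustrative assumption):

```python
def box_blur(img):
    """3x3 box blur with edge replication; the result stands in for
    the 'low-frequency information' of the image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out

def decompose(img):
    """Split an image into low- and high-frequency layers such that
    low + high reconstructs the original."""
    low = box_blur(img)
    high = [[img[y][x] - low[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
    return low, high

# Toy grayscale image: a dark region next to a bright region.
img = [[10.0, 10.0, 200.0, 200.0],
       [10.0, 10.0, 200.0, 200.0],
       [10.0, 10.0, 200.0, 200.0]]
low, high = decompose(img)
# Adding the two layers back together recovers the original image,
# which is what makes separate processing of each layer possible.
recon = [[low[y][x] + high[y][x] for x in range(4)] for y in range(3)]
```

The key property exploited by the method is that the two layers sum back to the original, so each layer can be filtered with a different strength before fusion.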
In step S13, the low frequency information is subjected to filtering processing to obtain target low frequency information.
After the high frequency information and the low frequency information of the target object are obtained, the low frequency information may be filtered, and the high frequency information may not be filtered for a while.
Here, filtering the low-frequency information means smoothing (skin-polishing) it, and this smoothing can be performed in various ways.
Specifically, in an alternative embodiment, bilateral filtering may be used to smooth the low-frequency information. Bilateral filtering smooths an image while preserving edges, and can be viewed as an extension of Gaussian blurring: it blurs the image but keeps edge information, which helps preserve the definition of the face contour. In addition to the spatial weight, bilateral filtering applies a weight based on intensity: the smaller the difference between a pixel's gray value and the gray value of the central pixel, the larger that pixel's weight and the stronger its contribution to the smoothing; conversely, the larger the difference, the smaller the weight and the weaker the contribution. As a result, the smoothed image retains higher definition in areas such as the facial features and the face contour, and the skin as a whole still appears layered.
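The edge-preserving behaviour can be sketched with a one-dimensional bilateral filter (the radius and the two sigma values below are illustrative choices, not parameters from the patent):

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=2.0, sigma_r=30.0):
    """1-D bilateral filter: each output sample is a weighted average
    of its neighbours, weighted by spatial distance (sigma_s) AND by
    value similarity (sigma_r); the value term preserves edges."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # Spatial weight: nearby samples count more.
            w_s = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            # Range weight: samples with a similar value count more.
            w_r = math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2))
            num += w_s * w_r * signal[j]
            den += w_s * w_r
        out.append(num / den)
    return out

# A noisy step edge: a dark region next to a bright region.
edge = [10, 12, 9, 11, 200, 202, 199, 201]
smoothed = bilateral_1d(edge)
```

Samples on either side of the step barely influence each other because their range weights are nearly zero, so the step stays sharp while the small fluctuations within each region are smoothed out.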
In another alternative embodiment, Gaussian filtering may be used to smooth the low-frequency information. The principle of Gaussian filtering is to replace the current pixel with a weighted average of the surrounding pixels, with weights following a normal distribution. Its blurring effect is good, but it blurs the whole image indiscriminately, so the sharpness of the smoothed image is reduced.
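For contrast with the bilateral case, a one-dimensional Gaussian filter can be sketched as follows (radius and sigma are illustrative values):

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def gaussian_1d(signal, radius=2, sigma=1.0):
    """Plain Gaussian smoothing: weights depend only on distance,
    never on pixel values, so edges are blurred along with noise."""
    kernel = gaussian_kernel(radius, sigma)
    out = []
    n = len(signal)
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

edge = [10, 10, 10, 10, 200, 200, 200, 200]
blurred = gaussian_1d(edge)
```

Unlike the bilateral filter, the weights here ignore pixel values entirely, so the step edge is smeared across neighbouring samples, which is exactly the loss of sharpness described above.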
Of course, other filtering methods may also be adopted to perform buffing on the low-frequency information of the target object, which is not specifically limited in this disclosure.
In step S14, the high frequency information is subjected to makeup processing, and the high frequency information after the makeup processing is subjected to filtering processing, so that target high frequency information is obtained.
To make the target object look more natural after makeup, that is, to avoid the "floating makeup" condition, makeup may first be applied to the high-frequency information, and the made-up high-frequency information may then be filtered as a whole, that is, smoothed as a whole.
Specifically, when the target object is a face image, applying makeup to the high-frequency information includes the following three steps, steps A1 to A3:
step A1, detecting the five sense organ area on the face image.
Since the five sense organ regions of the face image are usually the regions to which makeup needs to be applied, when the makeup processing is performed on the high-frequency information, the five sense organ regions of the face image can be detected first.
Step A2, obtaining makeup information corresponding to the five sense organ regions.
After the five sense organ regions of the face image are detected, the cosmetic information corresponding to each region can be acquired, where the cosmetic information includes one or more of the following: eyelashes, lipstick, and eyebrows.
Step A3, adding the cosmetic information corresponding to each five sense organ region to that region.
After the makeup information corresponding to the five sense organ regions is acquired, the makeup information may be added to the corresponding five sense organ regions. Specifically, the eyelashes can be attached to the corresponding positions of the eyelashes in the face image. The lipstick can be pasted on the corresponding position of the lip in the face image. The eyebrows can be attached to positions of the eyebrows of the face image.
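One common way to "attach" such makeup is alpha blending the makeup colour over the pixels of the detected region; the patent does not specify the blending rule, so the sketch below, including the lip region and lipstick colour, is purely illustrative:

```python
def alpha_blend(base, overlay, alpha):
    """Blend an overlay colour onto a base pixel.
    alpha = 1.0 keeps only the overlay, 0.0 keeps only the base."""
    return tuple(round(alpha * o + (1 - alpha) * b)
                 for b, o in zip(base, overlay))

def apply_makeup(image, region, colour, alpha):
    """Blend `colour` over every (y, x) pixel listed in `region`."""
    for (y, x) in region:
        image[y][x] = alpha_blend(image[y][x], colour, alpha)
    return image

# 2x2 toy "face" of RGB pixels; the bottom row plays the lip region.
face = [[(210, 180, 170), (210, 180, 170)],
        [(190, 120, 120), (190, 120, 120)]]
lipstick = (200, 30, 60)   # hypothetical lipstick colour
face = apply_makeup(face, [(1, 0), (1, 1)], lipstick, alpha=0.5)
```

Only the pixels inside the detected region are altered; the rest of the face is left untouched, which mirrors adding makeup information per facial-feature region.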
Furthermore, there are various ways of performing the skin polishing process on the whole of the high-frequency information after the makeup is applied.
In one embodiment, bilateral filtering may be used to scrub the whole of the applied high frequency information.
In another embodiment, the high-frequency information after makeup application can be entirely polished by Gaussian filtering.
The mode of the skin-polishing treatment of the whole high-frequency information after makeup application is not particularly limited in the embodiments of the present disclosure.
According to the above description, in the technical scheme of the present disclosure, the whole high-frequency information after makeup is applied is ground, so that the appearance of the "makeup floating" situation can be reduced.
In step S15, the target low-frequency information and the target high-frequency information are fused to obtain a processed target image.
After the low-frequency information of the target object has been smoothed to obtain the target low-frequency information, and the made-up high-frequency information has been smoothed to obtain the target high-frequency information, the two can be fused, that is, the target high-frequency information and the target low-frequency information are superposed, to obtain the processed target image, whose beautification effect is finer, more realistic, and more attractive.
According to the technical scheme provided by the embodiment of the disclosure, the image to be processed containing the target object is obtained; determining low-frequency information and high-frequency information of a target object; filtering the low-frequency information to obtain target low-frequency information; performing makeup application processing on the high-frequency information, and performing filtering processing on the high-frequency information after the makeup application processing to obtain target high-frequency information; and fusing the target low-frequency information and the target high-frequency information to obtain a processed target image.
Therefore, in the technical scheme provided by the embodiments of the present disclosure, the low-frequency information of the target object is first smoothed to obtain the target low-frequency information; makeup is then applied to the high-frequency information of the target object, and the made-up high-frequency information is smoothed to obtain the target high-frequency information; finally, the target low-frequency information and the target high-frequency information are fused to obtain the processed target image. Unlike the prior art, in which the whole image to be processed is smoothed and then has makeup overlaid, this makes the makeup effect more natural and the beautification more realistic and fine-grained, greatly reducing the occurrence of "floating makeup".
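The full pipeline described above can be sketched end to end in a simplified one-dimensional form (the moving-average filter, the pass counts standing in for smoothing strengths, and the constant "makeup" offset are all illustrative stand-ins, not the patent's specific choices):

```python
def smooth(signal, strength):
    """Stand-in smoothing filter: repeated 3-tap moving average;
    `strength` is the number of passes (an illustrative knob)."""
    out = list(signal)
    for _ in range(strength):
        out = [(out[max(i - 1, 0)] + out[i]
                + out[min(i + 1, len(out) - 1)]) / 3.0
               for i in range(len(out))]
    return out

def beautify(signal, makeup_offset, region):
    # 1. Decompose into low- and high-frequency layers.
    low = smooth(signal, 2)
    high = [s - l for s, l in zip(signal, low)]
    # 2. Smooth the low-frequency layer strongly.
    low = smooth(low, 3)
    # 3. Apply "makeup" to the high-frequency layer, then smooth the
    #    made-up layer gently (so makeup is smoothed with the skin).
    for i in region:
        high[i] += makeup_offset
    high = smooth(high, 1)
    # 4. Fuse the two processed layers.
    return [l + h for l, h in zip(low, high)]

pixels = [10.0, 12.0, 11.0, 50.0, 52.0, 51.0]
result = beautify(pixels, makeup_offset=5.0, region=[3, 4, 5])
```

Because the makeup is blended into the high-frequency layer before that layer is smoothed, the makeup receives the same final smoothing as the underlying detail, which is the mechanism the disclosure credits with reducing "floating makeup".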
In order to further improve the beautifying effect, the embodiment of the present disclosure also provides another image processing method, as shown in fig. 2, which may include the following steps.
In step S21, an image to be processed containing the target object is acquired.
Since step S21 is the same as step S11, step S11 has been described in detail in the embodiment shown in fig. 1, and thus, step S21 will not be described herein again.
In step S22, low frequency information and high frequency information of the target object are determined.
Since step S22 is the same as step S12, step S12 has been described in detail in the embodiment shown in fig. 1, and thus, step S22 will not be described herein again.
In step S23, bilateral filtering processing with the parameter as the first parameter is performed on the low-frequency information, so as to obtain the target low-frequency information.
Because the low-frequency information of an image is the image information whose color changes slowly, it usually does not carry the detail of the image. To achieve a better beautification effect, the low-frequency information can therefore be smoothed relatively strongly, that is, subjected to bilateral filtering whose parameter is the first parameter. The first parameter may be determined according to the actual situation, which is not specifically limited in the embodiments of the present disclosure.
In step S24, the high-frequency information is subjected to makeup processing, and bilateral filtering processing with a second parameter as a parameter is performed on the high-frequency information after makeup processing, so as to obtain target high-frequency information, where the second parameter is smaller than the first parameter used when bilateral filtering processing is performed on the low-frequency information.
Because the high-frequency information is the image information whose color changes rapidly and generally carries the detail of the image, to achieve a better beautification effect the high-frequency region can be smoothed relatively gently: bilateral filtering whose parameter is the second parameter is applied to the made-up high-frequency information, where the second parameter is smaller than the first parameter used for the bilateral filtering of the low-frequency information. The second parameter may be determined according to the actual situation, which is not specifically limited in the embodiments of the present disclosure. For example, the second parameter may be 20% to 50% of the first parameter.
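As a concrete sketch of this parameter relationship (the numbers are assumptions; the patent only states that the second parameter is smaller, for example 20% to 50% of the first):

```python
def second_from_first(first_param, ratio=0.3):
    """Derive the high-frequency smoothing parameter from the
    low-frequency one; ratio = 0.3 is an assumed value inside the
    20%-50% range mentioned in the text."""
    if not 0.2 <= ratio <= 0.5:
        raise ValueError("ratio outside the suggested 20%-50% range")
    return first_param * ratio

first_param = 50.0   # hypothetical strong smoothing for the low layer
second_param = second_from_first(first_param)
```

Tying the second parameter to the first this way keeps the two smoothing strengths consistent: if the low-frequency smoothing is tuned up or down, the gentler high-frequency smoothing scales with it.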
In step S25, the target low-frequency information and the target high-frequency information are fused to obtain a processed target image.
Since step S25 is the same as step S15, step S15 has been described in detail in the embodiment shown in fig. 1, and thus, step S25 will not be described herein again.
Therefore, in the technical scheme provided by the embodiments of the present disclosure, the low-frequency information of the target object is first smoothed strongly to obtain the target low-frequency information; makeup is then applied to the high-frequency information of the target object, and the made-up high-frequency information is smoothed gently to obtain the target high-frequency information; finally, the target low-frequency information and the target high-frequency information are fused to obtain the processed target image. Unlike the prior art, in which the whole image to be processed is smoothed and then has makeup overlaid, this makes the makeup effect more natural and the beautification more realistic and fine-grained, greatly reducing the occurrence of "floating makeup".
In order to further improve the beautifying effect, the embodiment of the present disclosure also provides another image processing method, as shown in fig. 3, which may include the following steps.
S31, acquiring the image to be processed containing the target object.
Since step S31 is the same as step S11, step S11 has been described in detail in the embodiment shown in fig. 1, and thus, step S31 will not be described herein again.
And S32, detecting the skin color area of the target object.
Since the skin color area of the target object is usually the only area that needs smoothing, the skin color area of the target object can be detected so that, in the subsequent smoothing processing, only the skin color area is smoothed and the other areas of the target object are left untouched.
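The patent does not specify a detection rule; one classic heuristic is a per-pixel RGB test, sketched below (the thresholds are a widely used convention, not values from the disclosure):

```python
def is_skin_rgb(r, g, b):
    """Classic RGB skin-colour heuristic: bright enough, reddish,
    and with sufficient spread between channels. One common choice;
    real systems often work in YCrCb or HSV instead."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Boolean mask marking which pixels look like skin."""
    return [[is_skin_rgb(*px) for px in row] for row in image]

# Toy frame: skin-like pixels in the left column, background on the right.
frame = [[(220, 170, 140), (30, 90, 200)],
         [(200, 150, 120), (10, 10, 10)]]
mask = skin_mask(frame)
```

The resulting mask can then restrict the frequency decomposition and smoothing of the later steps to skin pixels only.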
And S33, determining the low-frequency information and the high-frequency information of the skin color area.
After the skin color area of the target object is determined, the low-frequency information and the high-frequency information of the skin color area can be determined, so that the skin polishing treatment can be conveniently carried out on the low-frequency information of the skin color area and the high-frequency information after makeup is applied in the subsequent steps.
And S34, filtering the low-frequency information to obtain target low-frequency information.
Since step S34 is the same as step S13, step S13 has been described in detail in the embodiment shown in fig. 1, and thus, step S34 will not be described herein again.
And S35, performing makeup processing on the high-frequency information, and performing filtering processing on the high-frequency information after the makeup processing to obtain target high-frequency information.
Since step S35 is the same as step S14, step S14 has been described in detail in the embodiment shown in fig. 1, and thus, step S35 will not be described herein again.
S36, fusing the target low-frequency information and the target high-frequency information to obtain a processed target image.
Since step S36 is the same as step S15, step S15 has been described in detail in the embodiment shown in fig. 1, and thus, step S36 will not be described herein again.
Therefore, in the technical solution provided by this embodiment of the present disclosure, the skin color area of the target object is determined; the low-frequency information of the skin color area is first smoothed to obtain the target low-frequency information; makeup is then applied to the high-frequency information of the skin color area, and the made-up high-frequency information is smoothed to obtain the target high-frequency information; finally, the target low-frequency information and the target high-frequency information are fused to obtain the processed target image. Unlike the prior art, in which makeup is applied only after the whole image to be processed has been smoothed, this makes the applied makeup look more natural and the beautifying effect more realistic, fine, and attractive, and greatly reduces floating makeup.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, as shown in fig. 4, including:
an image acquisition module 410 configured to perform acquiring an image to be processed containing a target object;
an image information acquisition module 420 configured to perform determining low frequency information and high frequency information of the target object;
a first filtering module 430 configured to perform filtering processing on the low-frequency information to obtain target low-frequency information;
in an alternative embodiment, the first filtering module is configured to perform:
and carrying out bilateral filtering processing with the first parameter as the parameter on the low-frequency information to obtain target low-frequency information.
A second filtering module 440 configured to perform makeup processing on the high-frequency information and perform filtering processing on the high-frequency information after the makeup processing to obtain target high-frequency information.
In an optional embodiment, the second filtering module is configured to perform:
carrying out bilateral filtering processing on the high-frequency information after the makeup processing, with a second parameter as the filtering parameter, to obtain the target high-frequency information, wherein the second parameter is smaller than the first parameter.
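The claims leave the bilateral filter's "parameter" abstract. In a naive NumPy rendering it can be read as the range sigma: a larger sigma smooths aggressively (the first parameter, used on the low-frequency layer), while a smaller sigma preserves edges and detail (the second, smaller parameter, used on the made-up high-frequency layer). The sigma values and window radius below are illustrative, not from the patent:

```python
import numpy as np

def bilateral(img, radius, sigma_s, sigma_r):
    """Naive O(k^2) bilateral filter on a single-channel float image.

    sigma_s weights by spatial distance, sigma_r by intensity difference;
    a smaller sigma_r keeps more detail, matching the claims' requirement
    that the second (detail-layer) parameter be smaller than the first.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    norm = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            range_w = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            weight = spatial * range_w
            out += weight * shifted
            norm += weight
    return out / norm
```

Calling this with a large range sigma on the low-frequency layer and a small one on the high-frequency layer reproduces the first-parameter/second-parameter relationship of claims 2 and 3.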
An image information fusion module 450 configured to perform fusion of the target low-frequency information and the target high-frequency information to obtain a processed target image.
Therefore, in the technical solution provided by this embodiment of the present disclosure, the low-frequency information of the target object is first smoothed to obtain the target low-frequency information; makeup is then applied to the high-frequency information of the target object, and the made-up high-frequency information is smoothed to obtain the target high-frequency information; finally, the target low-frequency information and the target high-frequency information are fused to obtain the processed target image. Unlike the prior art, in which makeup is applied only after the whole image to be processed has been smoothed, this makes the applied makeup look more natural and the beautifying effect more realistic, fine, and attractive, and greatly reduces floating makeup.
In an alternative embodiment, the apparatus may further comprise:
a skin tone detection module configured to perform detecting a skin tone region of the target object before determining low frequency information and high frequency information of the target object;
correspondingly, the image information obtaining module is configured to perform:
and determining low-frequency information and high-frequency information of the skin color area.
In this way, by determining the skin color area of the target object and the low-frequency and high-frequency information of that area, only the skin color area is smoothed in the subsequent steps, which achieves a better skin-beautifying effect.
In an alternative embodiment, the target object is a face image, and the second filtering module is configured to perform:
detecting five sense organ regions on the face image;
acquiring makeup information corresponding to the five sense organ regions;
and adding makeup information corresponding to the five sense organ regions in each five sense organ region.
In an alternative embodiment, the makeup information includes one or more of the following: eyelashes, lipstick, and eyebrows.
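A hypothetical sketch of the "adding makeup information" step: alpha-blending a makeup sprite (an eyebrow or lash texture, say) onto a facial region. The region coordinates and the sprite are assumed inputs here; the patent does not fix how landmarks or makeup assets are obtained.

```python
import numpy as np

def apply_makeup(face, overlay, alpha_mask, top_left):
    """Alpha-blend a makeup sprite onto a face image, in place.

    face:       H x W x 3 uint8 image
    overlay:    h x w x 3 uint8 makeup sprite (hypothetical asset)
    alpha_mask: h x w float opacity in [0, 1]
    top_left:   (row, col) of the target five-sense-organ region,
                assumed to come from a separate landmark detector
    """
    y, x = top_left
    h, w = overlay.shape[:2]
    region = face[y:y + h, x:x + w].astype(float)
    a = alpha_mask[..., None]                 # broadcast alpha over channels
    blended = a * overlay + (1.0 - a) * region
    face[y:y + h, x:x + w] = blended.astype(face.dtype)
    return face
```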
In this way, makeup is applied at the corresponding positions of the high-frequency information before the high-frequency information is smoothed, and the made-up high-frequency information is smoothed in the subsequent steps, so that the applied makeup looks more natural, the beautifying effect is more realistic, fine, and attractive, floating makeup is greatly reduced, and the beautifying effect is further improved.
Fig. 5 is a block diagram of a terminal shown in accordance with an example embodiment. Referring to fig. 5, the terminal includes:
a processor 510;
a memory 520 for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method provided by the present disclosure.
Fig. 6 is a block diagram illustrating an image processing apparatus 600 according to an exemplary embodiment. For example, the apparatus 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation at the device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gesture actions on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 600 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessment of various aspects of the apparatus 600. For example, the sensor component 614 may detect an open/closed state of the device 600 and the relative positioning of components (such as the display and keypad of the apparatus 600); the sensor component 614 may also detect a change in position of the apparatus 600 or one of its components, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described image processing methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the image processing method described above. For example, the storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
According to a fourth aspect of the embodiments of the present disclosure, there is further provided a storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the image processing method provided by the embodiments of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to implement the image processing method of the first aspect.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An image processing method, comprising:
acquiring an image to be processed containing a target object;
determining low-frequency information and high-frequency information of the target object;
filtering the low-frequency information to obtain target low-frequency information;
performing makeup application processing on the high-frequency information, and performing filtering processing on the high-frequency information after the makeup application processing to obtain target high-frequency information;
and fusing the target low-frequency information and the target high-frequency information to obtain a processed target image.
2. The method of claim 1, wherein the filtering the low-frequency information to obtain target low-frequency information comprises:
carrying out bilateral filtering processing on the low-frequency information, with a first parameter as the filtering parameter, to obtain target low-frequency information.
3. The method according to claim 2, wherein the filtering the high-frequency information after the makeup application process to obtain target high-frequency information comprises:
carrying out bilateral filtering processing on the high-frequency information after the makeup application processing, with a second parameter as the filtering parameter, to obtain target high-frequency information, wherein the second parameter is smaller than the first parameter.
4. The method according to any of claims 1-3, wherein prior to determining the low frequency information and the high frequency information of the target object, the method further comprises:
detecting a skin color region of the target object;
correspondingly, the determining the low-frequency information and the high-frequency information of the target object includes:
determining the low-frequency information and high-frequency information of the skin color region.
5. The method according to any one of claims 1 to 3, wherein the target object is a human face image, and the performing makeup application processing on the high-frequency information comprises:
detecting five sense organ regions on the face image;
acquiring makeup information corresponding to the five sense organ regions;
and adding, in each five sense organ region, the makeup information corresponding to that region.
6. The method of claim 5, wherein the cosmetic information includes one or more of: eyelashes, lipstick, and eyebrows.
7. An image processing apparatus characterized by comprising:
an image acquisition module configured to perform acquisition of an image to be processed containing a target object;
an image information acquisition module configured to perform determining low frequency information and high frequency information of the target object;
the first filtering module is configured to perform filtering processing on the low-frequency information to obtain target low-frequency information;
the second filtering module is configured to perform makeup application processing on the high-frequency information and perform filtering processing on the high-frequency information after the makeup application processing to obtain target high-frequency information;
and the image information fusion module is configured to perform fusion of the target low-frequency information and the target high-frequency information to obtain a processed target image.
8. The apparatus of claim 7, wherein the first filtering module is configured to perform:
carrying out bilateral filtering processing on the low-frequency information, with a first parameter as the filtering parameter, to obtain target low-frequency information.
9. A terminal, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 6.
10. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911285283.8A CN111127352B (en) | 2019-12-13 | 2019-12-13 | Image processing method, device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111127352A true CN111127352A (en) | 2020-05-08 |
CN111127352B CN111127352B (en) | 2020-12-01 |
Family
ID=70498770
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911285283.8A Active CN111127352B (en) | 2019-12-13 | 2019-12-13 | Image processing method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111127352B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120301048A1 (en) * | 2011-05-25 | 2012-11-29 | Sony Corporation | Image processing apparatus and method |
CN105913373A (en) * | 2016-04-05 | 2016-08-31 | 广东欧珀移动通信有限公司 | Image processing method and device |
CN105931211A (en) * | 2016-04-19 | 2016-09-07 | 中山大学 | Face image beautification method |
CN106447638A (en) * | 2016-09-30 | 2017-02-22 | 北京奇虎科技有限公司 | Beauty treatment method and device thereof |
CN106920211A (en) * | 2017-03-09 | 2017-07-04 | 广州四三九九信息科技有限公司 | U.S. face processing method, device and terminal device |
CN107730448A (en) * | 2017-10-31 | 2018-02-23 | 北京小米移动软件有限公司 | U.S. face method and device based on image procossing |
CN107766831A (en) * | 2017-10-31 | 2018-03-06 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN108053377A (en) * | 2017-12-11 | 2018-05-18 | 北京小米移动软件有限公司 | Image processing method and equipment |
CN108875594A (en) * | 2018-05-28 | 2018-11-23 | 腾讯科技(深圳)有限公司 | A kind of processing method of facial image, device and storage medium |
CN108961156A (en) * | 2018-07-26 | 2018-12-07 | 北京小米移动软件有限公司 | The method and device of face image processing |
CN110490828A (en) * | 2019-09-10 | 2019-11-22 | 广州华多网络科技有限公司 | Image processing method and system in net cast |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160099A (en) * | 2021-03-18 | 2021-07-23 | 北京达佳互联信息技术有限公司 | Face fusion method, face fusion device, electronic equipment, storage medium and program product |
WO2022193573A1 (en) * | 2021-03-18 | 2022-09-22 | 北京达佳互联信息技术有限公司 | Facial fusion method and apparatus |
CN113160099B (en) * | 2021-03-18 | 2023-12-26 | 北京达佳互联信息技术有限公司 | Face fusion method, device, electronic equipment, storage medium and program product |
CN113079378A (en) * | 2021-04-15 | 2021-07-06 | 杭州海康威视数字技术股份有限公司 | Image processing method and device and electronic equipment |
CN115953313A (en) * | 2022-12-23 | 2023-04-11 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and storage medium for processing image |
Also Published As
Publication number | Publication date |
---|---|
CN111127352B (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110929651B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
EP3582187B1 (en) | Face image processing method and apparatus | |
CN110675310B (en) | Video processing method and device, electronic equipment and storage medium | |
CN110580688B (en) | Image processing method and device, electronic equipment and storage medium | |
US20180286097A1 (en) | Method and camera device for processing image | |
CN111127352B (en) | Image processing method, device, terminal and storage medium | |
CN112766234B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110599410B (en) | Image processing method, device, terminal and storage medium | |
CN107798654B (en) | Image buffing method and device and storage medium | |
CN107730448B (en) | Beautifying method and device based on image processing | |
CN105512605A (en) | Face image processing method and device | |
CN109325924B (en) | Image processing method, device, terminal and storage medium | |
CN112330570A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN105426079A (en) | Picture brightness adjustment method and apparatus | |
CN113870121A (en) | Image processing method and device, electronic equipment and storage medium | |
CN107507128B (en) | Image processing method and apparatus | |
CN111145110B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN108961156B (en) | Method and device for processing face image | |
CN114463212A (en) | Image processing method and device, electronic equipment and storage medium | |
CN112184540A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN106469446B (en) | Depth image segmentation method and segmentation device | |
CN104902318B (en) | Control method for playing back and terminal device | |
CN113763285A (en) | Image processing method and device, electronic equipment and storage medium | |
CN113160099A (en) | Face fusion method, face fusion device, electronic equipment, storage medium and program product | |
CN111260581B (en) | Image processing method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||