CN108961156B - Method and device for processing face image - Google Patents
- Publication number: CN108961156B (application CN201810835926.0A)
- Authority: CN (China)
- Prior art keywords: image, face image, detail, processing, treatment
- Legal status: Active (status assumed by Google Patents; not a legal conclusion)
Classifications
- G: PHYSICS
- G06: COMPUTING; CALCULATING OR COUNTING
- G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00: Geometric image transformations in the plane of the image
- G06T3/04: Context-preserving transformations, e.g. by using an importance map
Abstract
The disclosure relates to a method and a device for processing a face image, and belongs to the field of electronic technology application. The method includes: performing first buffing processing on an initial face image to obtain a basic image; acquiring, based on the initial face image, a detail image that reflects the texture features of the skin in the initial face image; performing second buffing processing on the detail image to obtain a target detail image; and superposing the target detail image and the basic image to obtain a target face image. The method and the device can solve the problem that, in existing face image processing, the processed face image loses the texture features of the skin and therefore lacks texture. The method is used for processing face images.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a method and an apparatus for processing a face image.
Background
In the process of beautifying a face image, face buffing is the core function of beautification technology: it can smooth away details such as impurities and texture on the skin, so that the picture looks more beautiful.
At present, face buffing is generally implemented with an edge-preserving filter. In the course of processing a face image, the edge-preserving filter filters the initial face image to be processed, so as to obtain the buffed face image.
However, when the face image is buffed by an edge-preserving filter, the texture features of the skin in the face image are smoothed away as well, so that the skin after buffing lacks texture.
Disclosure of Invention
The embodiments of the disclosure provide a method and a device for processing a face image, which can solve the problems of the prior art. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for processing a face image, including:
carrying out first buffing processing on the initial face image to obtain a basic image;
acquiring a detail image based on the initial face image, wherein the detail image is used for reflecting the texture characteristics of the skin in the initial face image;
performing second buffing processing on the detail image to obtain a target detail image;
and overlapping the target detail image and the basic image to obtain a target face image.
Optionally, the obtaining a detail image based on the initial face image includes:
and subtracting the basic image from the initial face image to obtain the detail image.
Optionally, the performing a second buffing process on the detail image to obtain a target detail image includes:
carrying out nonlinear transformation on the detail image to obtain a transformed detail image;
and performing second buffing processing on the transformed detail image to obtain the target detail image.
Optionally, the performing nonlinear transformation on the detail image to obtain a transformed detail image includes:
and carrying out nonlinear transformation on the detail image based on a polynomial transformation formula to obtain a transformed detail image, wherein the polynomial transformation formula is as follows:
wherein D1 is the ith pixel value of the transformed detail image, D is the ith pixel value of the detail image, a and b are preset non-zero transformation coefficients, 1 ≤ i ≤ n, and n is the total number of pixel values in the detail image.
Optionally, the first buffing processing and the second buffing processing are both edge-preserving filtering processing;
the first buffing processing is bilateral filtering processing, guided filtering processing, or weighted least squares processing;
the second buffing processing is bilateral filtering processing, guided filtering processing, or weighted least squares processing.
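As a concrete reference for the bilateral filtering option named above, the following is a minimal brute-force NumPy sketch of a bilateral filter for a 2-D grayscale image; the parameter names and default values are illustrative assumptions, not values from the disclosure. Each output pixel is a weighted mean of its neighbourhood, where the weight combines spatial closeness (sigma_s) and intensity similarity (sigma_r), so flat regions are smoothed while strong edges survive, which is the "edge-preserving" property the claims rely on.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter for a 2-D grayscale image.

    Each output pixel is a weighted mean of its neighbourhood, where the
    weight combines spatial closeness (sigma_s) and intensity similarity
    (sigma_r), so flat regions are smoothed while strong edges survive.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Intensity-similarity weight relative to the centre pixel.
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

On a uniform region the range weights are all 1 and the filter reduces to a Gaussian blur; across a strong intensity edge the range weights vanish, so the edge is left essentially intact.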
Optionally, the method further includes:
acquiring an image through a camera assembly;
carrying out face recognition on the acquired image;
and when the acquired image has a face image, determining an image in a predetermined area including the face image in the acquired image as an initial face image.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for processing a face image, including:
a first buffing module configured to perform first buffing processing on the initial face image to obtain a basic image;
an obtaining module configured to obtain a detail image based on the initial face image, wherein the detail image is used for reflecting the texture features of the skin in the initial face image;
a second buffing module configured to perform second buffing processing on the detail image to obtain a target detail image;
and the superposition module is configured to carry out superposition processing on the target detail image and the basic image to obtain a target face image.
Optionally, the obtaining module is configured to:
and subtracting the basic image from the initial face image to obtain the detail image.
Optionally, the second buffing module includes:
a transformation submodule configured to perform nonlinear transformation on the detail image to obtain a transformed detail image;
and a buffing submodule configured to perform the second buffing processing on the transformed detail image to obtain the target detail image.
Optionally, the transformation submodule is configured to:
and carrying out nonlinear transformation on the detail image based on a polynomial transformation formula to obtain a transformed detail image, wherein the polynomial transformation formula is as follows:
wherein D1 is the ith pixel value of the transformed detail image, D is the ith pixel value of the detail image, a and b are preset non-zero transformation coefficients, 1 ≤ i ≤ n, and n is the total number of pixel values in the detail image.
Optionally, the first buffing processing and the second buffing processing are both edge-preserving filtering processing;
the first buffing processing is bilateral filtering processing, guided filtering processing, or weighted least squares processing;
the second buffing processing is bilateral filtering processing, guided filtering processing, or weighted least squares processing.
Optionally, the apparatus further comprises:
an acquisition module configured to acquire an image through the camera assembly;
the recognition module is configured to perform face recognition on the acquired image;
the determining module is configured to determine an image in a predetermined area including a face image in the acquired image as an initial face image when the face image exists in the acquired image.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for processing a face image, the apparatus including:
a processing component;
a memory for storing executable instructions of the processing component;
wherein the processing component is configured to execute the processing method of the face image in any one of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having instructions stored therein, which when run on a processing component, cause the processing component to execute the method for processing a face image according to any one of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method and the device for processing the face image can acquire the detail image capable of reflecting the texture characteristics of the face skin after the primary face image is subjected to the skin grinding processing to acquire the basic image, can acquire the target face image by performing the skin grinding processing on the detail image and overlapping the detail image subjected to the skin grinding processing with the basic image, and can effectively retain the texture characteristics of the skin in the target face image on the basis of less impurities, thereby increasing the texture of the skin.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to illustrate the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram illustrating an initial face image according to an exemplary embodiment;
FIG. 2 is a schematic illustration of a base image shown in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method of facial image processing according to an exemplary embodiment;
FIG. 4 is a flow diagram illustrating another method of facial image processing according to an exemplary embodiment;
FIG. 5 is a flow diagram illustrating yet another method of facial image processing according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a detail image in accordance with an exemplary embodiment;
FIG. 7 is a flow diagram illustrating yet another method of facial image processing according to an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating a target detail image in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating a target face image in accordance with an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating an initial face image according to another exemplary embodiment;
FIG. 11 is a schematic diagram of a target face image according to another exemplary embodiment;
FIG. 12 is a block diagram illustrating an apparatus for processing a face image according to an exemplary embodiment;
FIG. 13 is a block diagram illustrating another apparatus for processing a face image according to an exemplary embodiment;
FIG. 14 is a block diagram illustrating yet another apparatus for processing a face image according to an exemplary embodiment;
FIG. 15 is a block diagram illustrating yet another apparatus for processing a face image according to an exemplary embodiment;
fig. 16 is a block diagram illustrating an apparatus for facial image processing according to an exemplary embodiment.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present disclosure more apparent, the present disclosure is described in further detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
With the rapid development of image processing technology, more and more people apply beautification to face images when taking photos or video chatting, and face buffing is a core function of beautification technology. The existing face buffing technology is generally implemented with an edge-preserving filter, which is the core component through which a terminal realizes the buffing function: in the course of processing a face image, the edge-preserving filter buffs the initial face image to be processed to obtain the buffed face image. Referring to fig. 1 and fig. 2, fig. 1 is a schematic diagram of an initial face image acquired by a terminal, and fig. 2 is a schematic diagram of the image obtained by buffing the initial face image with an edge-preserving filter. While the impurities of the initial face image in fig. 1 have been removed, fig. 2 has also lost some texture features of the skin.
The embodiment of the present disclosure provides a method for processing a face image that can solve the above problem. As shown in fig. 3, the method is applied to a terminal and includes:
Step 301, performing first buffing processing on an initial face image to obtain a basic image.
Step 302, acquiring a detail image based on the initial face image, wherein the detail image is used for reflecting the texture features of the skin in the initial face image.
Step 303, performing second buffing processing on the detail image to obtain a target detail image.
Step 304, superposing the target detail image and the basic image to obtain a target face image.
To sum up, the face image processing method provided by the embodiment of the present disclosure obtains a basic image by buffing an initial face image, acquires a detail image that reflects the texture features of the facial skin, buffs the detail image, and superposes the buffed detail image with the basic image to obtain a target face image.
The embodiment of the disclosure provides a face image processing method applied to a terminal, for example a mobile phone, on which an image processing program for executing the method may be installed. As shown in fig. 4, the method includes:
Step 401, acquiring an initial face image.
In the embodiment of the present disclosure, the terminal may obtain the initial face image in various ways. For example, a user may select the initial face image from locally stored images, and the terminal obtains the initial face image based on the user's selection. As another example, the user may perform a shooting trigger operation; after detecting it, the terminal starts the shooting component and obtains the initial face image through the shooting component, for example a front camera or a rear camera.
It should be noted that an image captured by the terminal through the shooting component may not contain a face image, while the face image processing method provided by the present disclosure targets face images; if the captured image contains no face image, the corresponding processing is wasted. To reduce such wasted processing, the terminal may first identify the captured image. As shown in fig. 5, the process may include:
and 4011, collecting images through a camera shooting assembly.
For example, a user may perform a shooting trigger operation; after detecting it, the terminal starts the camera assembly and obtains an image through it.
Step 4012, performing face recognition on the acquired image.
The terminal can perform face recognition on the acquired image through a face recognition algorithm (also called a portrait recognition algorithm or a facial recognition algorithm).
Step 4013, when a face image exists in the acquired image, determining an image in a predetermined area including the face image in the acquired image as the initial face image.
When a face image exists in the image acquired by the terminal, the image in the predetermined area including the face image in the acquired image is determined as the initial face image. When no face image exists in the acquired image, the user may be prompted that no face image has been acquired, or steps 4011 to 4013 may be repeated.
For example, assuming that the image acquired in step 4011 is the image in the terminal shown in fig. 1, and a face image in it can be identified through step 4012, the terminal determines an image in a predetermined area including the face image as the initial face image. For example, the predetermined region may be a central region of the acquired image with a specified shape, such as a rectangle, a circle, or an ellipse.
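The notion of taking a predetermined central region can be sketched directly. The function name and the fraction covered below are hypothetical choices for illustration; the text only says the region is a centred area of a specified shape:

```python
import numpy as np

def central_region(img, frac=0.5):
    # Hypothetical "predetermined area": a centred rectangle covering
    # `frac` of each image dimension (one of the shapes the text allows).
    h, w = img.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return img[y0:y0 + ch, x0:x0 + cw]
```

In practice the region would be derived from the face-recognition result of step 4012 rather than a fixed fraction.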
Step 402, performing first buffing processing on the initial face image to obtain a basic image.
Taking the initial face image as the image shown in the terminal in fig. 1 as an example, performing the first buffing processing on it may yield the basic image shown in the terminal in fig. 2. The first buffing processing may be edge-preserving filtering processing, such as bilateral filtering, guided filtering, or weighted least squares processing.
Step 403, acquiring a detail image based on the initial face image, wherein the detail image is used for reflecting the texture features of the skin in the initial face image.
By way of example, the texture features may be fine lines at the corners of the eyes, pores, nasolabial folds, and/or lip lines, among other features.
In the embodiment of the present disclosure, the detail image may be obtained based on the initial face image in multiple ways. In one optional implementation, the texture features in the image are extracted from the initial face image by feature extraction to obtain the detail image. In another optional implementation, the basic image may be subtracted from the initial face image to obtain the detail image, namely: H = A - B, wherein A is the pixel-value matrix of the initial face image, B is the pixel-value matrix of the basic image, and H is the pixel-value matrix of the detail image.
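The subtraction H = A - B can be illustrated in a few lines. The one practical caveat, an implementation detail not discussed in the text, is casting to a signed type first: uint8 arithmetic wraps around, while the detail image legitimately contains negative values. The array values below are made up for illustration.

```python
import numpy as np

# A: pixel-value matrix of the initial face image.
# B: pixel-value matrix of the basic image after the first buffing pass.
A = np.array([[120, 130],
              [140, 150]], dtype=np.uint8)
B = np.array([[125, 125],
              [135, 155]], dtype=np.uint8)

# Cast to a signed type before subtracting; the detail image H = A - B
# may hold negative values, which uint8 subtraction would wrap around.
H = A.astype(np.int16) - B.astype(np.int16)
```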
Taking the initial face image as the face image shown in the terminal in fig. 1 as an example, the basic image obtained by performing the first buffing processing on it may be as shown in fig. 2, and subtracting the basic image in fig. 2 from the initial face image in fig. 1 yields the detail image shown in the terminal in fig. 6.
Step 404, performing second buffing processing on the detail image to obtain a target detail image.
In the embodiment of the present disclosure, the second buffing processing may be performed on the detail image in multiple ways to obtain the target detail image. For example, as shown in fig. 7, the process includes:
Step 4041, performing nonlinear transformation on the detail image to obtain a transformed detail image.
Illustratively, the detail image is nonlinearly transformed based on a polynomial transformation formula to obtain a transformed detail image, where the polynomial transformation formula is:
d1 is the ith pixel value of the transformed detail image, D is the ith pixel value of the detail image, a and b are preset non-zero transformation coefficients, i is more than or equal to 1 and less than or equal to n, and n is the total number of pixel values in the detail image. The above polynomial change formula is satisfied for each pixel value of the transformed detail image. For example, a =1024,b =512.
It should be noted that the polynomial transformation formula provided in the embodiment of the present disclosure has degree greater than or equal to 2; the formula above is only an illustrative example and may take other forms, for example a polynomial of degree 3 or degree 4.
Performing the nonlinear transformation on the detail image activates the useful information in the detail image and suppresses the irrelevant information, which improves the efficiency of subsequent processing and further improves the quality of the final processed image.
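The text gives only the properties of the polynomial transformation formula (degree at least 2, non-zero coefficients a and b, e.g. a = 1024 and b = 512); the formula itself is not reproduced here. The sketch below therefore assumes a simple quadratic form D1 = (a·D² + b·D) / s, where the divisor s is a hypothetical fixed-point normalisation; it illustrates the shape of a per-pixel polynomial transform, not the patented formula.

```python
import numpy as np

def transform_detail(detail, a=1024, b=512, scale=65536.0):
    # Assumed quadratic form D1 = (a*D^2 + b*D) / scale, applied to every
    # pixel value of the detail image. The exact polynomial and the
    # `scale` normalisation are illustrative assumptions.
    d = detail.astype(np.float64)
    return (a * d * d + b * d) / scale
```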
Step 4042, performing the second buffing processing on the transformed detail image to obtain the target detail image.
As noted in step 403, although the detail image reflects the texture features of the skin in the initial face image, those features are rough and not soft enough, and the detail image transformed in step 4041 still has the same problem. The second buffing processing may be edge-preserving filtering processing, such as bilateral filtering, guided filtering, or weighted least squares processing.
For example, assuming that the face image displayed by the terminal in fig. 6 is the nonlinearly transformed detail image, the target detail image obtained by performing the second buffing processing on it may be the image in the terminal shown in fig. 8.
Step 405, superposing the target detail image and the basic image to obtain a target face image.
Optionally, the target detail image and the basic image may be superposed to obtain the target face image, that is, E = F + B, wherein F is the pixel-value matrix of the target detail image, B is the pixel-value matrix of the basic image, and E is the pixel-value matrix of the target face image.
With the target detail image shown in fig. 8 and the basic image shown in fig. 2, superposing the two may yield the target face image shown in the terminal in fig. 9.
It should be noted that the target face image may also be acquired in other ways; for example, at least one of the target detail image and the basic image may first be subjected to a preset transformation before the superposition processing. This is not limited by the present disclosure.
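The superposition E = F + B of step 405 can be sketched as follows; the clipping back to the 8-bit range is an added practical assumption, since the text does not discuss value ranges:

```python
import numpy as np

def superpose(F, B):
    # E = F + B: add the (signed) target detail image back onto the
    # basic image, then clip to the valid 8-bit range.
    E = F.astype(np.float64) + B.astype(np.float64)
    return np.clip(E, 0, 255).astype(np.uint8)
```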
Step 406, displaying the target face image.
Optionally, the terminal may display the initial face image and the target face image together on the user interface in a contrasting manner, so that the user can clearly see the processing effect; for example, the two images are displayed in two areas of the user interface, such as its upper half and lower half. Optionally, the terminal may display the two images together in an overlapping manner, a relatively novel presentation of the processing effect; for example, the target face image is displayed semi-transparently over the initial face image. Optionally, the terminal may also directly display the target face image on the user interface. The above alternatives are merely illustrative, and the present disclosure is not limited thereto.
Taking the initial face image as the image shown in the terminal of fig. 10 as an example, after the buffing processing of steps 401 to 405, the target face image displayed by the terminal is as shown in fig. 11. On the basis that the skin in the target face image has fewer impurities, texture features of the face such as fine lines at the corners of the eyes, nasolabial folds, and lip lines are retained, which increases the texture of the skin.
To sum up, in the face image processing method provided by the embodiment of the present disclosure, the terminal acquires an image through the camera assembly, performs face recognition on the acquired image, and determines an image in a predetermined region including a face image in the acquired image as the initial face image. The terminal then performs first buffing processing on the initial face image to obtain a basic image, acquires a detail image reflecting the texture features of the skin in the initial face image, performs second buffing processing on the detail image to obtain a target detail image, and finally superposes the target detail image and the basic image to obtain the target face image.
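The whole flow summarized above can be sketched end to end. In the sketch below a simple mean (box) filter stands in for the edge-preserving filter of both buffing passes (the disclosure allows bilateral, guided, or weighted least squares filtering), and the nonlinear transformation of step 4041 is omitted for brevity; the function names and the filter choice are illustrative, not the patented implementation.

```python
import numpy as np

def box_blur(img, k=3):
    # Simple mean filter as a stand-in for the edge-preserving
    # ("buffing") filter; a real implementation would use bilateral,
    # guided, or weighted-least-squares filtering instead.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def process_face(initial):
    initial = initial.astype(np.float64)
    base = box_blur(initial)           # step 402: first buffing -> basic image
    detail = initial - base            # step 403: detail image H = A - B
    target_detail = box_blur(detail)   # step 404: second buffing
    target = base + target_detail      # step 405: superposition E = F + B
    return np.clip(target, 0, 255).astype(np.uint8)
```

Because both passes here are plain blurs, the example shows only the data flow; the edge-preserving choice of filter is what keeps the result from looking uniformly smoothed.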
The present disclosure provides a processing apparatus 50 for a face image, as shown in fig. 12, the apparatus 50 includes:
a first buffing module 501 configured to perform first buffing processing on the initial face image to obtain a basic image;
an obtaining module 502 configured to obtain a detail image based on the initial face image, wherein the detail image is used for reflecting the texture features of the skin in the initial face image;
a second buffing module 503 configured to perform second buffing processing on the detail image to obtain a target detail image;
and the overlaying module 504 is configured to overlay the target detail image and the basic image to obtain a target face image.
To sum up, in the face image processing apparatus provided by the embodiment of the present disclosure, the first buffing module performs first buffing processing on an initial face image to obtain a basic image; the obtaining module obtains, based on the initial face image, a detail image reflecting the texture features of the skin in the initial face image; the second buffing module performs second buffing processing on the detail image to obtain a target detail image; and the superposing module superposes the target detail image with the basic image to obtain a target face image. The texture features of the skin are effectively retained in the target face image on the basis of fewer impurities, which increases the texture of the skin.
Optionally, the obtaining module 502 is configured to:
and subtracting the basic image from the initial face image to obtain the detail image.
Optionally, as shown in fig. 13, the second buffing module 503 includes:
a transformation sub-module 5031 configured to perform nonlinear transformation on the detail image to obtain a transformed detail image;
a buffing sub-module 5032 configured to perform the second buffing processing on the transformed detail image to obtain the target detail image.
Optionally, the transformation submodule 5031 is configured to:
and carrying out nonlinear transformation on the detail image based on a polynomial transformation formula to obtain a transformed detail image, wherein the polynomial transformation formula is as follows:
wherein D1 is the ith pixel value of the transformed detail image, D is the ith pixel value of the detail image, a and b are preset non-zero transformation coefficients, 1 ≤ i ≤ n, and n is the total number of pixel values in the detail image.
Alternatively, a = 1024 and b = 512.
Optionally, the first buffing and the second buffing are both edge-preserving filtering.
Optionally, the first buffing processing is bilateral filtering processing, guided filtering processing or weighted least square processing;
the second buffing processing is bilateral filtering processing, guided filtering processing, or weighted least squares processing.
Optionally, as shown in fig. 14, the apparatus 50 further includes:
an acquisition module 505 configured to acquire an image by a camera assembly;
a recognition module 506 configured to perform face recognition on the acquired image;
a determining module 507 configured to determine an image in a predetermined region including a face image in the acquired image as an initial face image when the face image exists in the acquired image.
In the apparatus, the acquisition module acquires an image through the camera assembly, the recognition module performs face recognition on the acquired image, and the determining module determines an image in a predetermined region including a face image in the acquired image as the initial face image. The first buffing module performs first buffing processing on the initial face image to obtain a basic image; the obtaining module obtains, based on the initial face image, a detail image reflecting the texture features of the skin; the second buffing module performs second buffing processing on the detail image to obtain a target detail image; and the superposing module superposes the target detail image and the basic image to obtain a target face image. The texture features of the skin are effectively retained in the target face image on the basis of fewer impurities, which increases the texture of the skin.
An embodiment of the present disclosure provides a processing apparatus 60 for a face image, as shown in fig. 15, the apparatus includes:
a processing component 601;
a memory 602 for storing executable instructions of the processing component;
the processing component is configured to execute any one of the processing methods of the face image provided by the embodiment of the disclosure.
Fig. 16 is a block diagram illustrating an apparatus 700 for facial image processing according to an exemplary embodiment. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 16, apparatus 700 may include one or more of the following components: a processing component 7002, a memory 7004, a power component 7006, a multimedia component 7008, an audio component 7010, an input/output (I/O) interface 7012, a sensor component 7014, and a communications component 7016.
The processing component 7002 generally controls the overall operation of the apparatus 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 7002 may include one or more processors 7020 to execute instructions to perform all or part of the steps of the methods described above. Additionally, the processing component 7002 may include one or more modules that facilitate interaction between the processing component 7002 and other components. For example, the processing component 7002 may include a multimedia module to facilitate interaction between the multimedia component 7008 and the processing component 7002.
The memory 7004 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 7004 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as a Static Random Access Memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The power component 7006 provides power to the various components of the device 700. The power component 7006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 700.
The multimedia component 7008 includes a screen providing an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 7008 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 7010 is configured to output and/or input audio signals. For example, the audio component 7010 may include a Microphone (MIC) configured to receive external audio signals when the apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 7004 or transmitted via the communication component 7016. In some embodiments, the audio component 7010 further comprises a speaker for outputting audio signals.
The I/O interface 7012 provides an interface between the processing component 7002 and peripheral interface modules, such as keyboards, click wheels, and buttons. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 7014 includes one or more sensors for providing various aspects of state assessment for the apparatus 700. For example, the sensor component 7014 may detect the open/closed state of the device 700 and the relative positioning of components, such as the display and keypad of the device 700. It may also detect a change in position of the device 700 or one of its components, the presence or absence of user contact with the device 700, the orientation or acceleration/deceleration of the device 700, and changes in the temperature of the device 700. The sensor component 7014 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 7014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 7014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 7016 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communications component 7016 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 7016 further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 7004 comprising instructions executable by the processor 7020 of the apparatus 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, wherein instructions, when executed by a processor of the apparatus 700, enable the apparatus 700 to perform a method for processing a face image provided by the above embodiments.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method for processing a face image, applied to a terminal, the method comprising:
performing first skin-smoothing processing on an initial face image to obtain a base image;
subtracting the base image from the initial face image to obtain a detail image, wherein the detail image reflects texture features of skin in the initial face image;
performing a nonlinear transformation on the detail image to obtain a transformed detail image, and performing second skin-smoothing processing on the transformed detail image to obtain a target detail image; and
superposing the target detail image and the base image to obtain a target face image.
2. The method according to claim 1, wherein performing the nonlinear transformation on the detail image to obtain the transformed detail image comprises:
performing the nonlinear transformation on the detail image based on a polynomial transformation formula to obtain the transformed detail image, wherein the polynomial transformation formula is as follows:
wherein D1 is the i-th pixel value of the transformed detail image, D is the i-th pixel value of the detail image, a and b are preset non-zero transformation coefficients, 1 ≤ i ≤ n, and n is the total number of pixel values in the detail image.
3. The method according to claim 1, wherein the first skin-smoothing processing and the second skin-smoothing processing are both edge-preserving filtering;
wherein the first skin-smoothing processing is bilateral filtering, guided filtering, or weighted least squares filtering; and
the second skin-smoothing processing is bilateral filtering, guided filtering, or weighted least squares filtering.
4. The method according to claim 1, further comprising:
acquiring an image through a camera assembly;
performing face recognition on the acquired image; and
when a face image exists in the acquired image, determining the image within a predetermined region containing the face image as the initial face image.
5. An apparatus for processing a face image, applied to a terminal, the apparatus comprising:
a first skin-smoothing module configured to perform first skin-smoothing processing on an initial face image to obtain a base image;
an obtaining module configured to obtain a detail image based on the initial face image, wherein the detail image reflects texture features of skin in the initial face image;
a second skin-smoothing module configured to perform second skin-smoothing processing on the detail image to obtain a target detail image; and
a superposition module configured to superpose the target detail image and the base image to obtain a target face image;
wherein the obtaining module is configured to subtract the base image from the initial face image to obtain the detail image; and
the second skin-smoothing module comprises:
a transformation submodule configured to perform a nonlinear transformation on the detail image to obtain a transformed detail image; and
a skin-smoothing submodule configured to perform the second skin-smoothing processing on the transformed detail image to obtain the target detail image.
6. The apparatus according to claim 5, wherein the transformation submodule is configured to:
perform the nonlinear transformation on the detail image based on a polynomial transformation formula to obtain the transformed detail image, wherein the polynomial transformation formula is as follows:
wherein D1 is the i-th pixel value of the transformed detail image, D is the i-th pixel value of the detail image, a and b are preset non-zero transformation coefficients, 1 ≤ i ≤ n, and n is the total number of pixel values in the detail image.
7. The apparatus according to claim 5, wherein the first skin-smoothing processing and the second skin-smoothing processing are both edge-preserving filtering;
wherein the first skin-smoothing processing is bilateral filtering, guided filtering, or weighted least squares filtering; and
the second skin-smoothing processing is bilateral filtering, guided filtering, or weighted least squares filtering.
8. The apparatus according to claim 5, further comprising:
an acquisition module configured to acquire an image through a camera assembly;
a recognition module configured to perform face recognition on the acquired image; and
a determining module configured to, when a face image exists in the acquired image, determine the image within a predetermined region containing the face image as the initial face image.
9. An apparatus for processing a face image, the apparatus comprising:
a processing component; and
a memory for storing instructions executable by the processing component;
wherein the processing component is configured to perform the method for processing a face image according to any one of claims 1 to 4.
10. A computer-readable storage medium having stored thereon instructions that, when executed by a processing component, cause the processing component to perform the method for processing a face image according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810835926.0A CN108961156B (en) | 2018-07-26 | 2018-07-26 | Method and device for processing face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108961156A CN108961156A (en) | 2018-12-07 |
CN108961156B true CN108961156B (en) | 2023-03-14 |
Family
ID=64463925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810835926.0A Active CN108961156B (en) | 2018-07-26 | 2018-07-26 | Method and device for processing face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108961156B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458771B (en) * | 2019-07-29 | 2022-04-08 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111127352B (en) * | 2019-12-13 | 2020-12-01 | 北京达佳互联信息技术有限公司 | Image processing method, device, terminal and storage medium |
CN111798399B (en) * | 2020-07-10 | 2024-04-30 | 抖音视界有限公司 | Image processing method and device and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016145830A1 (en) * | 2015-08-19 | 2016-09-22 | 中兴通讯股份有限公司 | Image processing method, terminal and computer storage medium |
CN106056562A (en) * | 2016-05-19 | 2016-10-26 | 京东方科技集团股份有限公司 | Face image processing method and device and electronic device |
CN106447620A (en) * | 2016-08-26 | 2017-02-22 | 北京金山猎豹科技有限公司 | Face image polishing method and device, and terminal device |
CN107798654A (en) * | 2017-11-13 | 2018-03-13 | 北京小米移动软件有限公司 | Image mill skin method and device, storage medium |
- 2018-07-26: CN application CN201810835926.0A filed, granted as CN108961156B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN108961156A (en) | 2018-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108182730B (en) | Virtual and real object synthesis method and device | |
CN108898546B (en) | Face image processing method, device and equipment and readable storage medium | |
US20180286097A1 (en) | Method and camera device for processing image | |
CN110580688B (en) | Image processing method and device, electronic equipment and storage medium | |
CN108154465B (en) | Image processing method and device | |
CN107798654B (en) | Image buffing method and device and storage medium | |
CN107730448B (en) | Beautifying method and device based on image processing | |
CN110599410B (en) | Image processing method, device, terminal and storage medium | |
CN105631803B (en) | The method and apparatus of filter processing | |
KR101906748B1 (en) | Iris image acquisition method and apparatus, and iris recognition device | |
CN112330570B (en) | Image processing method, device, electronic equipment and storage medium | |
CN107015648B (en) | Picture processing method and device | |
CN108154466B (en) | Image processing method and device | |
CN108961156B (en) | Method and device for processing face image | |
EP3905660A1 (en) | Method and device for shooting image, and storage medium | |
CN113870121A (en) | Image processing method and device, electronic equipment and storage medium | |
CN107507128B (en) | Image processing method and apparatus | |
CN111127352B (en) | Image processing method, device, terminal and storage medium | |
CN110728180A (en) | Image processing method, device and storage medium | |
CN111145110B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN106469446B (en) | Depth image segmentation method and segmentation device | |
CN110796617A (en) | Face image enhancement method and device and electronic equipment | |
CN108257091B (en) | Imaging processing method for intelligent mirror and intelligent mirror | |
CN111340690A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN107085822B (en) | Face image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||