CN108038836B - Image processing method and device and mobile terminal - Google Patents


Info

Publication number
CN108038836B
CN108038836B (granted patent; application CN201711229689.5A)
Authority
CN
China
Prior art keywords
image
target
gray
value
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711229689.5A
Other languages
Chinese (zh)
Other versions
CN108038836A (en)
Inventor
刘智杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201711229689.5A
Publication of CN108038836A
Application granted
Publication of CN108038836B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image processing method, an image processing device and a mobile terminal, wherein the method comprises the following steps: acquiring a target portrait; generating a gray level image of at least one exposure parameter according to the target portrait; determining a position parameter of a region to be modified in a face region of the gray level image; and modifying the target area corresponding to the position parameter in the target portrait. According to the embodiment of the invention, the region to be modified is determined according to the gray level image corresponding to the target portrait, so that the fineness of image processing can be improved, the loss of details of the face is reduced, and the texture of the image is improved.

Description

Image processing method and device and mobile terminal
Technical Field
The embodiment of the invention relates to the field of communication, in particular to an image processing method and device and a mobile terminal.
Background
With the rapid development of digital technology over the last twenty to thirty years, recording life's moments by taking pictures has become fashionable. Nowadays, more and more devices can take pictures, image processing methods have diversified, and their adoption has grown rapidly. Users' demands on the functionality and fineness of image processing are therefore ever higher; in particular, they expect the beautification function to produce an effect that is closer to their real selves and more natural, while still improving on reality.
At present, most image processing methods select the region to be modified based on the color values of the red, green, and blue channels, and then smooth the skin with various filtering methods; for example, the region to be modified is blurred with Gaussian filtering.
However, such image processing methods do not select the region to be modified finely enough: fine facial features are lost, the three-dimensional appearance of the face is impaired, and the image lacks texture.
Disclosure of Invention
The embodiment of the invention provides an image processing method, an image processing device and a mobile terminal, and aims to solve the problem that an obtained image processed by the prior art is lack of texture.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a method of image processing, the method comprising:
acquiring a target portrait;
generating a gray level image of at least one exposure parameter according to the target portrait;
determining a position parameter of a region to be modified in a face region of the gray level image;
and modifying the target area corresponding to the position parameter in the target portrait.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, where the apparatus includes:
the target portrait acquisition module is used for acquiring a target portrait;
the gray level image generation module is used for generating a gray level image of at least one exposure parameter according to the target portrait;
the position parameter determining module is used for determining the position parameter of a region to be modified in the face region of the gray level image;
and the target area modification module is used for modifying the target area corresponding to the position parameter in the target portrait.
In a third aspect, an embodiment of the present invention additionally provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and being executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method according to any one of the preceding claims.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the image processing method according to any one of the preceding claims.
In the embodiment of the invention, the region to be modified is determined according to the gray level image corresponding to the target portrait, so that the fineness of image processing can be improved, the loss of details of the face is reduced, and the texture of the image is improved.
Drawings
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart illustrating steps of an image processing method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of steps of an image processing method according to a second embodiment of the present invention;
FIG. 3 is a block diagram of an image processing apparatus according to a third embodiment of the present invention;
FIG. 4 is a second block diagram of an image processing apparatus according to a third embodiment of the present invention;
FIG. 5 is a third block diagram of an image processing apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a mobile terminal according to a fourth embodiment of the present invention.
Detailed Description
Example one
Referring to fig. 1, an embodiment of the present invention provides an image processing method, which may specifically include steps 101-104:
step 101: and acquiring a target portrait.
The embodiment of the invention can be applied to devices with an image capturing function, such as mobile terminals and cameras; after a shot is taken, the captured target portrait can be processed by the image processing method of the embodiment of the invention. Of course, image processing may also be performed on an existing target portrait.
Of course, in the embodiment of the present invention, the step 101 may include: acquiring a target image; and performing face recognition on the target image, and, if a face is recognized, determining the target image to be a target portrait.
Step 102: and generating a gray level image of at least one exposure parameter according to the target portrait.
Illustratively, regard the face area as a spherical surface: a single light source projected onto the sphere naturally produces three surfaces, white, gray, and black. The white surface is the light-receiving (bright) surface; the black surface is the non-light-receiving (dark) surface; the gray surface is the transition zone between the bright surface and the dark surface, and it determines, among other things, whether the texture of the face is strong and whether the face appears smooth.
If the face area contains blemishes or local features such as pocks, scars, or pits, they act like small particles on the sphere: a sphere with a well-defined light-dark relationship acquires many new light-dark relationships, the overall smooth light-dark relationship is destroyed, and the surface no longer appears smooth. It follows that whether the light-dark relationship is correct determines whether a spherical surface looks smooth. Therefore, when dealing with flaws in the face area, image processing can exploit this light-dark relationship.
For example, when the face area has a bump such as a pock: if the color of the bump is close to the skin color or brighter than the surrounding area, then at low exposure the defect region is brighter than its surroundings, i.e. its gray value is higher, while at high exposure the defect region is not clearly distinguishable from its surroundings. Conversely, if the color of the bump is dim, or the defect is a pit in the face area, then at high exposure the defect region is darker than its surroundings, i.e. its gray value is lower, while at low exposure the defect region is not clearly distinguishable from its surroundings.
Of course, in practice there is also a medium exposure level where the defect areas are more pronounced relative to the surrounding areas; and under high and low exposure, the defect area is not obvious relative to the surrounding area.
In addition, in order to avoid misjudgment of the light and shade relationship due to color, in the embodiment of the present invention, by obtaining the grayscale image of at least one exposure parameter of the target image, different light and shade relationships based on the local features of the face region under the exposure parameter are exhibited; specifically, the light-dark relationship is expressed as a gradation value under the exposure parameter.
Optionally, the step 102 of generating a gray scale image of at least one exposure parameter according to the target portrait may include: copying the target portrait into at least one intermediate image; acquiring an exposure parameter corresponding to the intermediate image; adjusting the intermediate image according to the exposure parameter; and converting the adjusted intermediate image into a gray image.
The exposure parameter may be a preset exposure parameter or an input exposure parameter. In the embodiment of the present invention, a gray image corresponding to the target portrait may be obtained according to the target portrait; and adjusting the exposure of the gray-scale image according to a preset exposure parameter.
Illustratively, the target portrait is represented at pixel level; the color value or pixel value of each pixel may be expressed in RGB (red-green-blue), HSB (hue-saturation-brightness), YUV (a color coding format with one luminance signal and two chrominance signals), or another color mode. Of course, these color modes can be converted into one another.
For example, the target portrait may be copied, and the copy converted into a gray image: the RGB value of each pixel of the target portrait is converted into the gray value of the corresponding pixel of the gray image, for example as A1 × R + A2 × G + A3 × B, where A1, A2, and A3 are coefficients greater than 0 and less than 1 applied to the R, G, and B values respectively, and generally A1 + A2 + A3 = 1.
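As an illustrative sketch (not the patent's implementation), the conversion above can be written as follows; the coefficient values are the common ITU-R BT.601 luma weights, one choice that satisfies A1 + A2 + A3 = 1, and the function names and list-of-lists image format are our own assumptions:

```python
# Hypothetical sketch of the RGB-to-gray conversion described above.
# A1, A2, A3 are the common BT.601 luma weights; the text only requires
# each to lie in (0, 1) with A1 + A2 + A3 = 1.
A1, A2, A3 = 0.299, 0.587, 0.114

def rgb_to_gray(pixel):
    """Map one (R, G, B) pixel to the gray value A1*R + A2*G + A3*B."""
    r, g, b = pixel
    return round(A1 * r + A2 * g + A3 * b)

def to_gray_image(image):
    """Convert a 2-D list of (R, G, B) pixels to a 2-D list of gray values."""
    return [[rgb_to_gray(p) for p in row] for row in image]
```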
A change in the exposure parameter affects pixel values such as the saturation and brightness of each pixel point. Its value lies between 0 and 100%, but in practice cannot be 0 or 100%: when the exposure parameter is 0, the gray image becomes a pure black image, and when it is 100%, the gray image becomes a pure white image; in either case the gray image can no longer show the differences in the light-dark relationship of the target portrait caused by local features of the face area (such as pocks, scars, and pits).
Step 103: and determining the position parameters of the region to be modified in the face region of the gray level image.
In the embodiment of the invention, based on the fact that the light-dark relationships within one region should be identical or close, regions to be modified that do not conform to the light-dark relationship of their surroundings are screened out from the face region of the gray image; the light-dark relationship is represented by the gray value under the exposure parameter.
For example, the face area of the grayscale image may be determined by face recognition; then, carrying out contrast enhancement on the face area; and determining the position parameters of the area to be modified in the face area after the contrast enhancement processing.
The contrast enhancement may be performed in various ways such as a gray threshold, a gray level division, a linear stretching, a nonlinear stretching, and the like. In a simple example, for a mode of adopting a gray threshold, that is, by setting a gray threshold, the size relationship between the gray value of each pixel and the gray threshold is detected, all the pixels are at least classified into A, B, and then the pixels belonging to class a or the pixels belonging to class B are regions to be modified; the position parameters of the region to be modified include the position parameters of the pixels belonging to the class a or the pixels belonging to the class B.
And classifying the pixels according to the difference of the gray values of the pixels, and screening out the pixels which do not accord with the light-dark relation of the surrounding area.
In practical applications, the classification manner may be more refined, and may be divided into a plurality of classes, so as to more finely determine the region to be modified.
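The simple gray-threshold classification described above might look like the following sketch; the class names A and B follow the text, while the function name and list-based image representation are our own assumptions:

```python
def classify_by_threshold(gray, threshold):
    """Split the pixels of a 2-D gray image into class A (gray value above
    the threshold) and class B (at or below it), returning the (row, col)
    position parameters of each class."""
    class_a, class_b = [], []
    for i, row in enumerate(gray):
        for j, value in enumerate(row):
            (class_a if value > threshold else class_b).append((i, j))
    return class_a, class_b
```

Whichever class disagrees with the light-dark relationship of the surrounding skin then supplies the position parameters of the region to be modified.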
In the embodiment of the present invention, the facial region includes, but is not limited to, a cheek region, a forehead region, a chin region, and the like.
In the embodiment of the present invention, because the gray value of each region of the gray image with at least one exposure parameter represents that region's light-dark relationship, the target region in the target portrait can be determined and modified in the grayscale observation mode of at least one exposure parameter.
Meanwhile, existing image processing methods cannot effectively process local feature regions such as pocks and scars: after skin smoothing, these regions remain as pasty color blocks that, although less conspicuous, are still darker than the rest of the face, reducing the cleanliness and the skin-beautifying effect. The embodiment of the invention can process such local features more finely and improve the cleanliness and the skin-beautifying effect of the face.
When a gray scale observation mode with a plurality of exposure parameters is adopted, a more comprehensive area to be modified can be determined, and the modification effect is further improved.
Optionally, the step 103 of determining the position parameter of the region to be modified in the face region of the grayscale image may include:
determining a face area of the gray image;
dividing the face area of the gray level image into a plurality of block images;
determining the characteristic gray value of the block image according to the gray value of each pixel point of the block image;
detecting the difference value between the gray value of each pixel point in the block image and the characteristic gray value;
and when the difference value between the gray value of the pixel point and the characteristic gray value is larger than a preset threshold value, determining that the position parameters of the area to be modified comprise the position parameters of the pixel point.
In the embodiment of the invention, the position parameters of the face area of the target portrait can be determined through face recognition; and determining the face area of the gray level image according to the position parameters of the face area of the target portrait.
The division of the face area is that a plurality of block images can be divided according to a preset size or a preset number of pixel points, and the division can also be performed according to a preset face area division template.
The characteristic gray value of the block image mentioned in the above description may be an average gray value, a median gray value of each pixel point of the block image, or a gray value obtained by other calculation methods that can represent the size of the whole gray value of the block image.
Obviously, the preset threshold may be fixed in advance, or it may be adjusted according to the magnitude of the exposure parameter. It can be understood that the closer the exposure parameter is to 100% or 0%, the more the gray image tends toward pure white or pure black, and the smaller the preset threshold may be set.
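The block-based detection steps above might be sketched like this, using the mean as the characteristic gray value (one of the options the text allows); the function name, block layout, and list-of-lists image format are assumptions:

```python
def find_regions_to_modify(face, block_size, delta):
    """Divide the face region into block_size x block_size block images,
    take each block's mean gray value as its characteristic gray value,
    and flag every pixel whose gray value differs from the characteristic
    value by more than the preset threshold delta."""
    h, w = len(face), len(face[0])
    flagged = []
    for bi in range(0, h, block_size):
        for bj in range(0, w, block_size):
            block = [(i, j)
                     for i in range(bi, min(bi + block_size, h))
                     for j in range(bj, min(bj + block_size, w))]
            mean = sum(face[i][j] for i, j in block) / len(block)
            flagged.extend((i, j) for i, j in block
                           if abs(face[i][j] - mean) > delta)
    return flagged
```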
In an implementation manner of the embodiment of the present invention, the step 103 of determining a position parameter of a region to be modified in a face region of the grayscale image may include:
when the selection operation of a preset manual map trimming mode is detected, entering the manual map trimming mode;
when the selection operation of the face area of the gray level image is detected, the position parameter corresponding to the selection operation is determined to be the position parameter of the area to be modified.
The embodiment of the invention at least comprises an automatic map repairing mode and a manual map repairing mode; therefore, the selection mechanisms of the multiple trimming modes can be preset for the user to select. For example, the selection buttons of the multiple modes may be displayed on a screen of a mobile terminal or a camera, and when a touch operation of the user on the selection button of the manual trimming mode is detected, the manual trimming mode is entered.
In the embodiment of the invention, the observation mode of the user can be converted according to the selection operation of the user on the gray level image under each exposure parameter; the preset enlarging operation or the preset reducing operation of the user on the gray scale image can be detected, and the gray scale image is correspondingly enlarged or reduced.
The user can select the abnormal area which does not accord with the light and shade relation in the gray scale image under the observation mode of the gray scale image with the at least one exposure parameter; the user may perform a selection operation by touching the screen, or may perform a selection operation by using a preset cropping tool, such as a lasso, a selection frame, etc., which is not described herein again. Compared with an automatic picture trimming mode, the user can actively trim the picture according to the preference tendency of the user, and the participation sense is strong.
Step 104: and modifying the target area corresponding to the position parameter in the target portrait.
In the embodiment of the present invention, after determining the position parameter of the region to be modified of the grayscale image, a target region corresponding to the position parameter in the target portrait may be determined according to the position parameter of the region to be modified, that is, a region with abnormal light-dark relation in the face region of the target portrait, which is likely to be a local feature region of the face region, such as pox, scar, etc.; then, the target region is modified according to various modification methods, such as various blurring methods and whitening methods.
For example, in the embodiment of the present invention, the average pixel value of the peripheral area of the target area may be used as the characteristic pixel value of the target area, and then the pixel value of each target area may be corrected to fluctuate around the average pixel value. Of course, the area range of the surrounding area may be determined according to the target area; the fluctuation range and the specific correction method can be preset; the embodiments of the present invention are not limited in this regard.
In the embodiment of the present invention, the pixel value of the target area may be obtained according to the average pixel value of the peripheral area of the target area; the target area may be a pixel level or a block composed of a plurality of pixels, which is not limited in the embodiment of the present invention.
In an embodiment of the present invention, the step of modifying the target region corresponding to the position parameter in the target portrait includes: determining a target pixel point corresponding to the position parameter in the target portrait as the target area; and correcting the pixel value of the target pixel point according to the pixel values of the surrounding pixel points of the target pixel point.
Exemplarily, an average value or a weighted average value of surrounding pixel points within a preset radius range of the target pixel point may be calculated to obtain a pixel value of the target pixel point. Of course, since the surrounding pixels of the target pixel may include other target pixels, there may be a problem that multiple calculations are required until the pixel value converges, and details are not repeated here.
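A minimal sketch of this correction, averaging the surrounding pixels within a preset radius; it performs a single pass, ignores the convergence issue mentioned above, and uses names of our own choosing:

```python
def correct_pixel(image, i, j, radius=1):
    """Overwrite the target pixel with the average of the surrounding
    pixels inside the given radius (the target pixel itself excluded)."""
    h, w = len(image), len(image[0])
    neighbors = [image[x][y]
                 for x in range(max(0, i - radius), min(h, i + radius + 1))
                 for y in range(max(0, j - radius), min(w, j + radius + 1))
                 if (x, y) != (i, j)]
    image[i][j] = round(sum(neighbors) / len(neighbors))
```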
It can be understood that the regions to be modified obtained in the grayscale image with the same exposure parameter in step 103 may be a plurality of color blocks; each color block consists of at least one pixel point; the color blocks may be connected to each other or separated from each other.
In order to achieve a more balanced image modification effect, in the embodiment of the present invention, when M regions to be modified need to be modified simultaneously, connection relationships between pixel points forming the M regions to be modified may be determined according to the position parameters of the M regions to be modified; recombining the pixel points of the M areas to be modified into N color blocks to be modified according to the connection relation; and modifying a plurality of target areas in the target portrait, wherein the target areas correspond to the N color blocks to be modified respectively.
Wherein, the connection relationship at least comprises an adjacent relationship and a separation relationship; according to the connection relationship, the step of recombining the pixel points of the M regions to be modified into the N color blocks can be performed by a clustering method, and details are not repeated here.
Therefore, the M areas to be modified can be divided into N color blocks to be modified which are separated from each other as much as possible, and the actual characteristics of local characteristic areas such as pockmarks and scars which are distributed independently in the face area are better met, so that the modification is more balanced and accurate.
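The text leaves the clustering method open; as one possibility, the regrouping can be sketched with a 4-connectivity flood fill, in which adjacent pixel points join the same color block and separated pixel points start a new one (the function name is ours):

```python
def group_into_blocks(pixels):
    """Regroup the pixel points of the regions to be modified into
    mutually separate color blocks via a 4-connectivity flood fill."""
    remaining = set(pixels)
    blocks = []
    while remaining:
        stack = [remaining.pop()]
        block = []
        while stack:
            i, j = stack.pop()
            block.append((i, j))
            # visit the four neighbours; adjacency merges blocks
            for n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if n in remaining:
                    remaining.remove(n)
                    stack.append(n)
        blocks.append(sorted(block))
    return sorted(blocks)
```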
In the embodiment of the invention, the region to be modified is determined according to the gray level image corresponding to the target portrait, so that the fineness of image processing can be improved, the loss of details of the face is reduced, and the texture of the image is improved.
Example two
Referring to fig. 2, an embodiment of the present invention provides an image processing method, which may specifically include steps 201 to 204:
step 201: and acquiring a target portrait.
Step 202: and generating a gray scale image with a high exposure parameter, a gray scale image with a medium exposure parameter and a gray scale image with a low exposure parameter which respectively correspond to the target portrait according to the target portrait.
Illustratively, the high exposure parameter, the medium exposure parameter, and the low exposure parameter may be preset, for example, 90%, 40%, 5%, respectively; the gray level images corresponding to the exposure parameters can respectively represent the bright-dark relation under the bright observation mode, the gray observation mode and the dark observation mode.
Exemplarily, in the embodiment of the present invention, the target portrait may be copied into three intermediate images; the exposure parameters of 90%, 40%, and 5% corresponding to the three intermediate images are acquired, and each intermediate image is adjusted to its exposure parameter. The three intermediate images are then converted to grayscale, yielding three gray images (light, gray, and dark) that exhibit the different light-dark relationships of the light, gray, and dark observation modes.
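The patent does not define how exposure adjustment is computed; the sketch below assumes a simple linear model chosen only to match the limit behaviour described earlier (0 maps to black, 100% to white, and the middle leaves the image unchanged). The model, the function names, and the default parameters are all our assumptions:

```python
def apply_exposure(gray, exposure):
    """Toy exposure model: exposure in [0, 1]; 0.0 yields an all-black
    image, 1.0 an all-white image, 0.5 leaves the gray image unchanged."""
    if exposure <= 0.5:
        scale = exposure / 0.5  # darken linearly toward black
        return [[round(v * scale) for v in row] for row in gray]
    t = (exposure - 0.5) / 0.5  # lighten linearly toward white
    return [[round(v + (255 - v) * t) for v in row] for row in gray]

def three_views(gray, high=0.9, mid=0.4, low=0.05):
    """Produce the light, gray, and dark observation images from one
    grayscale copy of the target portrait."""
    return (apply_exposure(gray, high),
            apply_exposure(gray, mid),
            apply_exposure(gray, low))
```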
Step 203: and determining the position parameters of the regions to be modified in the face regions of the gray level images.
Step 204: and modifying the target area corresponding to the position parameter in the target portrait.
In an implementation manner of the embodiment of the present invention, a first position parameter of a corresponding first region to be modified may be first obtained in a grayscale image with a high exposure parameter, and a first target region corresponding to the first position parameter in the target portrait may be modified; then obtaining a second position parameter of a corresponding second region to be modified in the gray level image corresponding to the middle exposure parameter, and modifying a second target region corresponding to the second position parameter in the target portrait; and finally, obtaining a third position parameter of a corresponding third area to be modified in the gray level image corresponding to the low exposure parameter, and modifying a third target area corresponding to the third position parameter in the target portrait.
The image modification sequence in the observation mode of the gray scale image with the high exposure parameter, the gray scale image with the medium exposure parameter and the gray scale image with the low exposure parameter is not limited in the embodiment of the invention.
In another implementation manner of the embodiment of the present invention, in an observation mode of the grayscale image with the high exposure parameter, the grayscale image with the medium exposure parameter, and the grayscale image with the low exposure parameter, each position parameter of each region to be modified may be determined, and a target region corresponding to each position parameter in the target portrait may be modified.
In the embodiment of the present invention, the modified target portrait may be modified again over the whole face region according to the image processing method of steps 202 to 204, so as to correct problems such as unnatural transitions or recognition errors possibly introduced by the previous modification.
In the embodiment of the invention, fine image noise can be added to the face area of the modified target portrait, improving the facial texture. For example, a preset number of pixel points may be randomly selected from the region where noise is to be added, and their pixel values modified to a preset outlier pixel value.
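A sketch of this noise (grain) step: randomly pick a preset number of pixel points and set them to a preset outlier value. The names, the fixed random seed, and the single flat outlier value are our simplifications:

```python
import random

def add_grain(region, count, outlier, seed=0):
    """Set `count` randomly chosen pixels of the face region to a preset
    outlier pixel value, adding fine grain to the smoothed skin."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    h, w = len(region), len(region[0])
    positions = [(i, j) for i in range(h) for j in range(w)]
    for i, j in rng.sample(positions, count):
        region[i][j] = outlier
    return region
```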
In the embodiment of the present invention, because the gray scale values of the regions in the gray scale image with different exposure parameters corresponding to the target portrait represent the light-dark relationship of the regions at different levels, the target region in the target portrait can be determined and modified more comprehensively and more finely from three angles of light, gray and dark in the gray scale observation modes with the high exposure parameter, the medium exposure parameter and the low exposure parameter, respectively, so as to further improve the modification effect.
EXAMPLE III
Referring to fig. 3, an embodiment of the present invention provides an image processing apparatus, where the apparatus may specifically include:
a target portrait acquiring module 301, configured to acquire a target portrait;
a gray image generation module 302, configured to generate a gray image with at least one exposure parameter according to the target portrait;
a position parameter determining module 303, configured to determine a position parameter of a region to be modified in a face region of the grayscale image;
a target region modification module 304, configured to modify a target region corresponding to the position parameter in the target portrait.
Referring to fig. 4, on the basis of fig. 3, optionally, the grayscale image generation module 302 may include:
an image reproduction unit 3021 for reproducing the target portrait as at least one intermediate image;
a parameter acquiring unit 3022 configured to acquire an exposure parameter corresponding to the intermediate image;
an exposure adjusting unit 3023 for adjusting the intermediate image according to the exposure parameter;
a first grayscale image generating unit 3024, configured to convert the adjusted intermediate image into a grayscale image.
Optionally, the location parameter determining module 303 may include:
a face region determination unit configured to determine a face region of the grayscale image;
an image dividing unit configured to divide a face area of the grayscale image into a plurality of block images;
the gray value calculation unit is used for determining the characteristic gray value of the block image according to the gray value of each pixel point of the block image;
a pixel point detection unit for detecting the difference between the gray value of each pixel point in the block image and the characteristic gray value;
and the position parameter determining unit is used for determining that the position parameters of the area to be modified comprise the position parameters of the pixel points when the difference value between the gray value of the pixel points and the characteristic gray value is greater than a preset threshold value.
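The block division, characteristic gray value computation, and threshold detection performed by the units above can be sketched as follows. This is a minimal illustration under stated assumptions: the block size, the choice of the tile mean as the characteristic gray value, the threshold value, and the function name `find_blemish_positions` are all hypothetical and not taken from the patent.

```python
import numpy as np

def find_blemish_positions(gray, block=16, threshold=30):
    """Locate candidate pixels of the region to be modified.

    The grayscale face region is divided into block x block tiles; each
    tile's mean gray value serves as its characteristic gray value, and
    pixels whose gray value deviates from it by more than `threshold`
    are flagged. Returns an (N, 2) array of (row, col) position parameters.
    """
    positions = []
    h, w = gray.shape
    for r0 in range(0, h, block):
        for c0 in range(0, w, block):
            tile = gray[r0:r0 + block, c0:c0 + block].astype(np.int32)
            feature = int(tile.mean())        # characteristic gray value
            diff = np.abs(tile - feature)     # per-pixel deviation
            rows, cols = np.nonzero(diff > threshold)
            positions.extend(zip(rows + r0, cols + c0))
    return np.array(positions, dtype=np.int64).reshape(-1, 2)
```

Because each tile is compared against its own local mean, a pimple in a shadowed cheek and one on a brightly lit forehead are judged by different baselines, which matches the motivation for block-wise rather than global thresholding.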
Optionally, the target region modification module 304 may include:
a target area determining unit, configured to determine a target pixel point in the target portrait corresponding to the position parameter as the target area;
and the target area modification unit is used for modifying the pixel value of the target pixel point according to the pixel values of the surrounding pixel points of the target pixel point.
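The correction step performed by the target area modification unit can be sketched as below. Using the median of a small window (excluding other flagged pixels) is one plausible reading of "correcting the pixel value of the target pixel point according to the pixel values of the surrounding pixel points", not the patent's exact rule; the function name `repair_pixels` and the window radius are assumptions.

```python
import numpy as np

def repair_pixels(image, positions, radius=2):
    """Replace each flagged pixel with the median of its neighborhood.

    image: uint8 array of shape (H, W) or (H, W, 3).
    positions: iterable of (row, col) target pixel points.
    Neighbors that are themselves flagged are excluded so that one
    blemish pixel does not contaminate the repair of another.
    """
    out = image.copy()
    h, w = image.shape[:2]
    flagged = set(map(tuple, positions))
    for r, c in flagged:
        neigh = [image[rr, cc]
                 for rr in range(max(0, r - radius), min(h, r + radius + 1))
                 for cc in range(max(0, c - radius), min(w, c + radius + 1))
                 if (rr, cc) not in flagged]
        if neigh:
            out[r, c] = np.median(np.array(neigh), axis=0).astype(image.dtype)
    return out
```

A median (rather than a mean) of the surrounding pixels is chosen here because it is robust to any stray bright or dark neighbors; this design choice is the sketch's own, not the patent's.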
Referring to fig. 5, optionally, on the basis of fig. 3, the grayscale image generation module 302 may include:
a second gray image generating unit 3025 configured to generate a gray image of a high exposure parameter, a gray image of a medium exposure parameter, and a gray image of a low exposure parameter, which correspond to the target portrait, respectively, based on the target portrait.
The image processing apparatus provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
In the embodiment of the invention, the region to be modified is determined according to the gray level image corresponding to the target portrait, so that the fineness of image processing can be improved, the loss of details of the face is reduced, and the texture of the image is improved.
EXAMPLE IV
Figure 6 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention.
the mobile terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 6 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to obtain a target portrait; generating a gray level image of at least one exposure parameter according to the target portrait; determining the position parameters of a region to be modified in the face region of the gray level image; and modifying the target area corresponding to the position parameter in the target portrait.
In this embodiment of the present invention, because the gray-scale value of each region in the gray-scale image of the at least one exposure parameter corresponding to the target portrait represents the light-dark relationship of that region, the target region in the target portrait is determined and modified in the gray-scale observation mode of the at least one exposure parameter. With this image processing method, regions awaiting modification, such as pimples and scars, can be selected more finely, so that the stiff, unnatural look caused by a heavy loss of fine facial features is avoided and the facial texture is stronger. When gray-scale observation modes with a plurality of exposure parameters are adopted, a more comprehensive set of regions to be modified can be determined, and the modification effect is further improved.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message transceiving process or a call; specifically, the radio frequency unit 601 receives downlink data from a base station and forwards the downlink data to the processor 610 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the mobile terminal 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of a still picture or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601.
The mobile terminal 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the mobile terminal 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 608 is an interface through which an external device is connected to the mobile terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 600 or may be used to transmit data between the mobile terminal 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phonebook) created according to the use of the mobile phone, and the like. Furthermore, the memory 609 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 610 is the control center of the mobile terminal; it connects the various parts of the entire mobile terminal by using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 609 and invoking the data stored in the memory 609, thereby performing overall monitoring of the mobile terminal. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 610.
The mobile terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610, where the computer program, when executed by the processor 610, implements each process of the foregoing image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to those embodiments, which are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
acquiring a target portrait;
generating a gray level image of at least one exposure parameter according to the target portrait;
determining a position parameter of a region to be modified in a face region of the gray level image;
modifying a target area corresponding to the position parameter in the target portrait;
the step of generating a gray scale image of at least one exposure parameter from the target portrait comprises:
generating a gray level image with a high exposure parameter, a gray level image with a medium exposure parameter and a gray level image with a low exposure parameter which respectively correspond to the target portrait according to the target portrait;
after the modifying the target region corresponding to the position parameter in the target portrait, the method further includes:
selecting a preset number of pixel points from the face area of the modified target portrait;
correcting the pixel values of the selected pixel points to a preset noise-point pixel value;
the step of determining the position parameter of the region to be modified in the face region of the gray image comprises the following steps:
determining a face region of the gray image;
dividing a face area of the gray image into a plurality of block images;
determining a characteristic gray value of the block image according to the gray value of each pixel point of the block image;
detecting the difference value between the gray value of each pixel point in the block image and the characteristic gray value;
and when the difference value between the gray value of the pixel point and the characteristic gray value is larger than a preset threshold value, determining that the position parameters of the area to be modified comprise the position parameters of the pixel point.
2. The method of claim 1, wherein the step of generating a gray scale image of at least one exposure parameter from the target portrait comprises:
copying the target portrait into at least one intermediate image;
acquiring an exposure parameter corresponding to the intermediate image;
adjusting the intermediate image according to the exposure parameter;
and converting the adjusted intermediate image into a gray image.
3. The method of claim 1, wherein the step of modifying the target region in the target portrait corresponding to the location parameter comprises:
determining a target pixel point corresponding to the position parameter in the target portrait as the target area;
and correcting the pixel value of the target pixel point according to the pixel values of the surrounding pixel points of the target pixel point.
4. An image processing apparatus, characterized in that the apparatus comprises:
the target portrait acquisition module is used for acquiring a target portrait;
the gray level image generation module is used for generating a gray level image of at least one exposure parameter according to the target portrait;
the position parameter determining module is used for determining the position parameter of a region to be modified in the face region of the gray level image;
a target region modification module, configured to modify a target region corresponding to the position parameter in the target portrait, select a preset number of pixel points from the face region of the modified target portrait, and correct the pixel values of the selected pixel points to a preset noise-point pixel value;
the grayscale image generation module includes:
the second gray scale image generation unit is used for generating a gray scale image of a high exposure parameter, a gray scale image of a medium exposure parameter and a gray scale image of a low exposure parameter which respectively correspond to the target portrait;
the location parameter determination module comprises:
a face region determination unit for determining a face region of the gradation image;
an image dividing unit configured to divide a face area of the grayscale image into a plurality of block images;
the gray value calculation unit is used for determining the characteristic gray value of the block image according to the gray value of each pixel point of the block image;
the pixel point detection unit is used for detecting the difference value between the gray value of each pixel point in the block image and the characteristic gray value;
and the position parameter determining unit is used for determining that the position parameters of the to-be-modified area comprise the position parameters of the pixel points when the difference value between the gray value of the pixel points and the characteristic gray value is greater than a preset threshold value.
5. The apparatus of claim 4, wherein the grayscale image generation module comprises:
an image copying unit for copying the target portrait into at least one intermediate image;
a parameter acquisition unit configured to acquire an exposure parameter corresponding to the intermediate image;
an exposure adjusting unit for adjusting the intermediate image according to the exposure parameter;
a first grayscale image generation unit for converting the adjusted intermediate image into a grayscale image.
6. The apparatus of claim 4, wherein the target region modification module comprises:
a target area determining unit, configured to determine a target pixel point in the target portrait corresponding to the position parameter as the target area;
and the target area modification unit is used for modifying the pixel value of the target pixel point according to the pixel values of the surrounding pixel points of the target pixel point.
7. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 3.
CN201711229689.5A 2017-11-29 2017-11-29 Image processing method and device and mobile terminal Active CN108038836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711229689.5A CN108038836B (en) 2017-11-29 2017-11-29 Image processing method and device and mobile terminal


Publications (2)

Publication Number Publication Date
CN108038836A CN108038836A (en) 2018-05-15
CN108038836B true CN108038836B (en) 2020-04-17





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant