CN110490029B - Image processing method capable of performing differentiation processing on face data - Google Patents


Info

Publication number
CN110490029B
CN110490029B · CN201810462970.1A
Authority
CN
China
Prior art keywords
intensity
data
face region
sharpening
noise suppression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810462970.1A
Other languages
Chinese (zh)
Other versions
CN110490029A (en)
Inventor
刘楷
邱仲毅
黄文聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realtek Semiconductor Corp filed Critical Realtek Semiconductor Corp
Priority to CN201810462970.1A priority Critical patent/CN110490029B/en
Publication of CN110490029A publication Critical patent/CN110490029A/en
Application granted granted Critical
Publication of CN110490029B publication Critical patent/CN110490029B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method capable of performing differentiation processing on face data, executed by an image processing device. One embodiment of the method comprises the steps of: determining a face region, a non-face region and a transition region according to a face detection result of an image, wherein the transition region lies between the face region and the non-face region; and applying different processes to the data of the face region, the data of the non-face region and the data of the transition region, respectively.

Description

Image processing method capable of performing differentiation processing on face data
Technical Field
The present invention relates to an image processing method, and more particularly, to an image processing method capable of performing a differentiation process on face data.
Background
A conventional image processing flow may apply one or more image processing operations to an input image, such as noise suppression, sharpening and edge enhancement, and brightness adjustment. The intensity of these operations (or the parameters used for the image processing) is usually determined from the data of the whole image. However, when the input image contains a human face, such processing may distort the face. More specifically, noise suppression may cause a loss of facial detail (e.g., the disappearance of pores) or retain too much noise when its intensity is excessive or insufficient; sharpening and edge enhancement may make the edges of the face overly sharp (e.g., jagged) or blurred when its intensity is excessive or insufficient; and brightness adjustment may make the face too bright or too dark (e.g., with unclear contours) when its intensity is excessive or insufficient. Since the human eye is quite sensitive to face images, when such processing distorts a face, a viewer usually perceives the distortion immediately and finds it unnatural.
Although the prior art can realize face detection, most existing face detection techniques are used for identity and expression recognition, or for focusing, zooming, and exposure during photographing; they do not address the optimization of the face image itself, so the problem of face distortion caused by the image processing described above remains. Existing face detection techniques can be found in the following documents: Chinese mainland patent publication No. CN103699888A; and U.S. Patent No. US8224108B2.
Disclosure of Invention
It is an object of the present invention to provide an apparatus and method to avoid the problems of the prior art.
The invention discloses a method for performing differentiation processing on face data, executed by an image processing device. One embodiment of the method comprises the steps of: determining a face region, a non-face region and a transition region according to a face detection result of an image, wherein the transition region is between the face region and the non-face region; and applying different processes to the data of the face region, the data of the non-face region and the data of the transition region, respectively. An embodiment of the different processes includes at least one of: noise suppression processes of different intensities; sharpening and edge enhancement processes of different intensities; brightness adjustment processes of different intensities; and high-frequency information addition processes of different intensities.
Another embodiment of the image processing method of the present invention comprises the following steps: determining a face region and a non-face region according to a face detection result of an image; and performing at least one of the following image processes. The image processes include: applying a first noise suppression process to the data of the face region and a second noise suppression process to the data of the non-face region, wherein the first noise suppression process is different from the second noise suppression process; applying a first sharpening and edge enhancement process to the data of the face region and a second sharpening and edge enhancement process to the data of the non-face region, wherein the first sharpening and edge enhancement process and the second sharpening and edge enhancement process are different; applying a first brightness adjustment process to the data of the face region and a second brightness adjustment process to the data of the non-face region, wherein the first brightness adjustment process and the second brightness adjustment process are different; and applying a first high-frequency information addition process to the data of the face region and a second high-frequency information addition process to the data of the non-face region, wherein the first high-frequency information addition process and the second high-frequency information addition process are different.
The features, implementations, and technical advantages of the present invention are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 shows an embodiment of an image processing method according to the present invention;
FIG. 2 shows the face region, non-face region and transition region determined in step S110 of FIG. 1;
FIG. 3 shows an embodiment of step S120 of FIG. 1;
FIG. 4 shows an embodiment of processing the face region in step S340 of FIG. 3; and
FIG. 5 shows an embodiment of processing the non-face region in step S340 of FIG. 3.
Description of the symbols
S110 to S120
200 image frame
210 face region
220 transition region
230 non-face region
S310 to S340
Y original pixel value
R random value
Y_DITHER dithered pixel value
Detailed Description
The terms in the following description are conventional terms in the field; where the specification defines or explains some of these terms, they are to be interpreted according to the definitions or explanations given in the specification.
The disclosure of the present invention includes an image processing method that can perform differentiation processing on face data, thereby optimizing the face image and avoiding the problem of face distortion. The image processing method of the present invention may be implemented in software and/or firmware, and can be executed by an existing or self-developed image signal processor (ISP). In addition, the face detection result utilized by the present invention can be provided by an image signal processor with a face detection function, or by a device external to that integrated circuit, such as a device executing a computer operating system and computer software.
Fig. 1 shows an embodiment of an image processing method according to the present invention, which includes the following steps:
step S110: according to a face detection result of an image, a face region, a non-face region and a transition region (transition region) are determined, wherein the transition region is between the face region and the non-face region. For example, as shown in fig. 2, the present step may determine a face region 210 (i.e., the region surrounded by the dashed short-folded line (dash line) in fig. 2), a transition region 220 (i.e., the region between the dashed short-folded line and the dashed dotted line (dotted line) in fig. 2), and a non-face region 230 (i.e., the region outside the dashed dotted line in fig. 2) according to the face detection result of the Nth frame (Nth frame) 200 of the image, where N is a positive integer.
Step S120: the data of the face region, the data of the non-face region, and the data of the transition region are subjected to different processes, respectively. For example, the step may apply a first image process to the data of the face region, a second image process to the data of the non-face region, and a third image process to the data of the transition region, wherein the first, second, and third image processes are of the same type but different intensities, and each of the first, second, and third image processes may select at least one fixed set of image processing parameters or at least one of a plurality of sets of image processing parameters according to the information of the image (e.g., according to the scene detection result of the image).
FIG. 3 shows an embodiment of step S120 of FIG. 1, which includes steps S310-S340; however, in another embodiment of the present invention, step S120 includes at least one of steps S310 to S340. The steps of FIG. 3 are described below.
Referring to fig. 3, step S310 includes: applying a first noise suppression process to the data of the face region, applying a second noise suppression process to the data of the non-face region, and applying a third noise suppression process to the data of the transition region, wherein any two of the first noise suppression process, the second noise suppression process, and the third noise suppression process are different. In one embodiment, frames of the image are input continuously; if step S110 refers to the face detection result of the Nth frame of the image, the data processed in steps S310 to S340 is the data of the (N+k)th frame of the image, where k is a positive integer (e.g., 1). In one embodiment, the intensity of the first noise suppression process is less than the intensity of the second noise suppression process, thereby avoiding the loss of too much detail in the face region; note that the stronger the intensity of the noise suppression process, the less noise and texture detail remain in the processed image region and the more blurred that region becomes. In one embodiment, the intensity of the third noise suppression process is between the intensity of the first noise suppression process and the intensity of the second noise suppression process. In one embodiment, each noise suppression process performed in this step is a low-pass filtering process.
In one embodiment, the low-pass filtering process is a mean filtering process. The larger the sampling range of the mean filtering process (e.g., a range of 5×5 pixels, wherein the central pixel is the target pixel and the mean of the 5×5 pixels replaces the value of the target pixel), the stronger the noise suppression; the smaller the sampling range (e.g., a range of 3×3 pixels, wherein the central pixel is the target pixel and the mean of the 3×3 pixels replaces the value of the target pixel), the weaker the noise suppression. Therefore, by setting the sampling range, the intensities of the first, second, and third noise suppression processes can be appropriately determined.
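The mean filtering described above can be sketched as follows. This is an illustrative NumPy implementation under the assumptions stated in the text (square k×k window, the window mean replacing the target pixel, edge replication at the border); the function name and padding choice are not specified by the patent.

```python
import numpy as np

def mean_filter(img, k):
    """k x k mean filter: the average of each window replaces the
    centre (target) pixel. A larger k gives stronger noise suppression."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A flat patch with one noise spike at the centre.
noisy = np.array([[10., 10., 10.],
                  [10., 100., 10.],
                  [10., 10., 10.]])
weak = mean_filter(noisy, 3)    # smaller window: e.g. for the face region
strong = mean_filter(noisy, 5)  # larger window: e.g. for the non-face region
```

The 5×5 result flattens the spike more than the 3×3 result, matching the stated relationship between sampling range and noise suppression intensity.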
Referring to fig. 3, step S320 includes: applying a first sharpening and edge enhancement process to the data of the face region, a second sharpening and edge enhancement process to the data of the non-face region, and a third sharpening and edge enhancement process to the data of the transition region, wherein any two of the first sharpening and edge enhancement process, the second sharpening and edge enhancement process, and the third sharpening and edge enhancement process are different. In one embodiment, the intensity of the first sharpening and edge enhancement process is less than the intensity of the second sharpening and edge enhancement process, thereby preventing edges in the face region from appearing obtrusive while enhancing the sharpness and/or contrast of the non-face region; note that the stronger the sharpening and edge enhancement process, the clearer the texture of the processed image region. In one embodiment, the intensity of the third sharpening and edge enhancement process is between the intensity of the first sharpening and edge enhancement process and the intensity of the second sharpening and edge enhancement process. In one embodiment, each sharpening and edge enhancement process performed in this step is an edge gradient enhancement process, which reduces the brightness of dark pixels at an image edge and/or increases the brightness of bright pixels at the image edge.
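One way to realize the described behavior (darkening the dark side of an edge and brightening the bright side, with a tunable intensity) is an unsharp-mask style operation; the patent does not specify this particular formula, so the sketch below is an assumption for illustration only.

```python
import numpy as np

def sharpen(img, strength):
    """Push each pixel away from its local 3x3 mean by `strength`:
    dark edge pixels get darker, bright edge pixels get brighter.
    strength=0 leaves the image unchanged; a larger strength would be
    used for the non-face region, a smaller one for the face region."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    local_mean = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + strength * (img - local_mean), 0, 255)

# A vertical step edge from 100 to 200.
edge = np.array([[100., 100., 200., 200.]] * 3)
sharp = sharpen(edge, 0.3)
```

After processing, the pixel on the dark side of the step falls below 100 and the pixel on the bright side rises above 200, i.e., the edge gradient is enhanced.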
Referring to fig. 3, step S330 includes: applying a first brightness adjustment process to the data of the face region, applying a second brightness adjustment process to the data of the non-face region, and applying a third brightness adjustment process to the data of the transition region, wherein any two of the first brightness adjustment process, the second brightness adjustment process, and the third brightness adjustment process are different. In one embodiment, the intensity of the first brightness adjustment process is greater than the intensity of the second brightness adjustment process, thereby preventing the face region from being too dark in a backlit scene; note that the stronger the intensity of the brightness adjustment process, the brighter the processed image region. In one embodiment, the intensity of the first brightness adjustment process is less than the intensity of the second brightness adjustment process, thereby avoiding overexposure of the face region and preserving background details. In one embodiment, the intensity of the third brightness adjustment process is between the intensity of the first brightness adjustment process and the intensity of the second brightness adjustment process. In one embodiment, each brightness adjustment process performed in this step is a gamma correction process, wherein the relationship between the brightness output and the brightness input of the face region is I_OUT = (I_IN)^r1, the relationship between the brightness output and the brightness input of the non-face region is I_OUT = (I_IN)^r2, and r1 is less than r2.
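The gamma correction relationship can be shown with a short sketch. The normalised [0, 1] intensity scale and the sample exponents r1 = 0.5, r2 = 1.0 are illustrative assumptions; the patent only requires r1 < r2 so that the face region is brightened more.

```python
def gamma_correct(intensity, r):
    """I_OUT = I_IN ** r on normalised [0, 1] intensities.
    With I_IN < 1, a smaller exponent r yields a brighter output."""
    return intensity ** r

dark_input = 0.25
face_out = gamma_correct(dark_input, 0.5)  # r1: face region, brightened
back_out = gamma_correct(dark_input, 1.0)  # r2: non-face region, unchanged
```

With r1 = 0.5 the dark input 0.25 maps to 0.5, while r2 = 1.0 leaves it at 0.25, so the face region comes out brighter than the background, as in the backlit-scene embodiment.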
Referring to fig. 3, step S340 includes: applying a first high-frequency information addition process to the data of the face region, applying a second high-frequency information addition process to the data of the non-face region, and applying a third high-frequency information addition process to the data of the transition region, wherein any two of the first high-frequency information addition process, the second high-frequency information addition process, and the third high-frequency information addition process are different. In one embodiment, the intensity of the first high-frequency information addition process is greater than the intensity of the second high-frequency information addition process, thereby increasing the variation of the face region so that the face region appears natural; note that the stronger the intensity of the high-frequency information addition process, the larger the high-frequency information added to the processed image region and the less smooth that region becomes. In one embodiment, the intensity of the third high-frequency information addition process is between the intensity of the first high-frequency information addition process and the intensity of the second high-frequency information addition process. In one embodiment, each high-frequency information addition process performed in this step is a dithering process. In detail, if an original pixel value is Y, a random value R (i.e., the high-frequency information) is added to the original pixel value Y to obtain a dithered pixel value Y_DITHER for output, wherein the random value may be generated by any known random model (e.g., one based on a cyclic redundancy check (CRC)) or a self-developed model. When the original pixel value Y belongs to the face region, each random value R falls within a larger range (i.e., R_min1 ≦ R ≦ R_max1); an example of this dithering process and its result is shown in fig. 4. When the original pixel value Y belongs to the non-face region, each random value R falls within a smaller range (i.e., R_min2 ≦ R ≦ R_max2, wherein R_min1 ≦ R_min2 and R_max2 ≦ R_max1); an example of this dithering process and its result is shown in fig. 5.
In one embodiment, to make the image processing result of the transition region more natural, at least one of the third noise suppression process, the third sharpening and edge enhancement process, the third brightness adjustment process, and the third high-frequency information addition process is an asymptotic process (e.g., a linear process) whose image processing intensity increases or decreases along the direction from the face region to the non-face region; that is, the closer to the face region, the closer the intensity is to that of the corresponding process applied to the face region, and the closer to the non-face region, the closer the intensity is to that of the corresponding process applied to the non-face region. In another embodiment, the image processing intensity of at least one of the third noise suppression process, the third sharpening and edge enhancement process, the third brightness adjustment process, and the third high-frequency information addition process is fixed, i.e., it does not change with distance from the face region. In yet another embodiment, at least one of the first noise suppression process, the first sharpening and edge enhancement process, the first brightness adjustment process, and the first high-frequency information addition process is an asymptotic process (e.g., a process following an increasing function with decreasing growth, such as X/(X+1)), whose image processing intensity increases or decreases along the direction from the center of the face region toward the non-face region; in this case, the transition region is optional.
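The linear asymptotic process for the transition region can be sketched as a simple interpolation of intensities. The function name and the distance parameterisation are illustrative assumptions; the patent only requires the intensity to vary monotonically from the face-region value to the non-face-region value across the ring.

```python
import numpy as np

def transition_strength(d, width, s_face, s_nonface):
    """Linearly interpolate the processing intensity inside the
    transition ring: d is the distance from the face-region border
    (0 at the face side, `width` at the non-face side)."""
    t = np.clip(d / width, 0.0, 1.0)
    return (1.0 - t) * s_face + t * s_nonface

# Example: noise-suppression intensity 1.0 at the face side, 5.0 at the
# non-face side of a 10-pixel-wide transition ring.
at_face = transition_strength(0.0, 10.0, 1.0, 5.0)
midway = transition_strength(5.0, 10.0, 1.0, 5.0)
at_back = transition_strength(10.0, 10.0, 1.0, 5.0)
```

At the face-side border the intensity equals the face-region setting, at the non-face-side border it equals the non-face-region setting, and it varies linearly in between, which is the "more natural" blending the embodiment describes.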
It should be noted that, where practicable, a person skilled in the art may selectively implement some or all of the technical features of any of the foregoing embodiments, or selectively combine some or all of the technical features of these embodiments, thereby increasing flexibility in implementing the invention.
In summary, the image processing method of the present invention can perform differentiation processing on the face data, so as to optimize the face image without affecting the image processing effect of the non-face area, so as to avoid the problem of face distortion.
Although the embodiments of the present invention have been described above, the embodiments are not intended to limit the present invention, and those skilled in the art can apply changes to the technical features of the present invention according to the explicit or implicit contents of the present invention, and all the changes may fall into the scope of the patent protection sought by the present invention.

Claims (8)

1. An image processing method capable of performing differentiation processing on face data is executed by an image processing device, and the method comprises the following steps:
determining a face region, a non-face region and a transition region according to a face detection result of an image, wherein the transition region is between the face region and the non-face region; and
the data of the face region, the data of the non-face region and the data of the transition region are respectively subjected to different processes, wherein a first dithering process is applied to the data of the face region, a second dithering process is applied to the data of the non-face region, and a third dithering process is applied to the data of the transition region, wherein any two of the first dithering process, the second dithering process and the third dithering process are different, and the intensity of the third dithering process is between the intensity of the first dithering process and the intensity of the second dithering process.
2. The method of claim 1, wherein the step of applying different processes comprises at least one of:
applying a first noise suppression process to the data of the face region, a second noise suppression process to the data of the non-face region, and a third noise suppression process to the data of the transition region, wherein any two of the first noise suppression process, the second noise suppression process, and the third noise suppression process are different;
applying a first sharpening and edge enhancement process to the data of the face region, a second sharpening and edge enhancement process to the data of the non-face region, and a third sharpening and edge enhancement process to the data of the transition region, wherein any two of the first sharpening and edge enhancement process, the second sharpening and edge enhancement process, and the third sharpening and edge enhancement process are different;
applying a first brightness adjustment process to the data of the face region, applying a second brightness adjustment process to the data of the non-face region, and applying a third brightness adjustment process to the data of the transition region, wherein any two of the first brightness adjustment process, the second brightness adjustment process, and the third brightness adjustment process are different.
3. The method of claim 2, wherein the intensity of the third noise suppression process is between the intensity of the first noise suppression process and the intensity of the second noise suppression process, the intensity of the third sharpening and edge enhancement process is between the intensity of the first sharpening and edge enhancement process and the intensity of the second sharpening and edge enhancement process, and the intensity of the third brightness adjustment process is between the intensity of the first brightness adjustment process and the intensity of the second brightness adjustment process.
4. The method of claim 3, wherein the intensity of the first noise suppression process is less than the intensity of the second noise suppression process, the intensity of the first sharpening and edge enhancement process is less than the intensity of the second sharpening and edge enhancement process, the intensity of the first brightness adjustment process is greater than or less than the intensity of the second brightness adjustment process, and the intensity of the first dithering process is greater than the intensity of the second dithering process.
5. The method of claim 2, wherein the intensity of the first noise suppression process is less than the intensity of the second noise suppression process, the intensity of the first sharpening and edge enhancement process is less than the intensity of the second sharpening and edge enhancement process, the intensity of the first brightness adjustment process is greater than or less than the intensity of the second brightness adjustment process, and the intensity of the first dithering process is greater than the intensity of the second dithering process.
6. The method of claim 2, wherein at least one of the third noise suppression process, the third sharpening and edge enhancement process, the third brightness adjustment process and the third dithering process is an asymptotic process, wherein an image processing intensity of the asymptotic process increases or decreases along a direction from the face region to the non-face region.
7. An image processing method capable of performing differentiation processing on face data is executed by an image processing device, and the method comprises the following steps:
determining a face region and a non-face region according to a face detection result of an image; and
performing at least one of the following image processing:
applying a first noise suppression process to the data of the face region and a second noise suppression process to the data of the non-face region, wherein the first noise suppression process is different from the second noise suppression process;
applying a first sharpening and edge enhancement process to the data of the face region and a second sharpening and edge enhancement process to the data of the non-face region, wherein the first sharpening and edge enhancement process and the second sharpening and edge enhancement process are different;
applying a first brightness adjustment process to the data of the face region and a second brightness adjustment process to the data of the non-face region, wherein the first brightness adjustment process and the second brightness adjustment process are different; and
applying a first dithering process to the data of the face region and applying a second dithering process to the data of the non-face region, wherein the first dithering process is different from the second dithering process.
8. The method of claim 7, wherein the intensity of the first noise suppression process is less than the intensity of the second noise suppression process, the intensity of the first sharpening and edge enhancement process is less than the intensity of the second sharpening and edge enhancement process, the intensity of the first brightness adjustment process is greater than or less than the intensity of the second brightness adjustment process, and the intensity of the first dithering process is greater than the intensity of the second dithering process.
CN201810462970.1A 2018-05-15 2018-05-15 Image processing method capable of performing differentiation processing on face data Active CN110490029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810462970.1A CN110490029B (en) 2018-05-15 2018-05-15 Image processing method capable of performing differentiation processing on face data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810462970.1A CN110490029B (en) 2018-05-15 2018-05-15 Image processing method capable of performing differentiation processing on face data

Publications (2)

Publication Number Publication Date
CN110490029A CN110490029A (en) 2019-11-22
CN110490029B true CN110490029B (en) 2022-04-15

Family

ID=68545256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810462970.1A Active CN110490029B (en) 2018-05-15 2018-05-15 Image processing method capable of performing differentiation processing on face data

Country Status (1)

Country Link
CN (1) CN110490029B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200601194A (en) * 2004-06-18 2006-01-01 Univ Southern Taiwan Tech Method of recognizing true human face for security system
TW200719871A (en) * 2005-11-30 2007-06-01 Univ Nat Kaohsiung Applied Sci A real-time face detection under complex backgrounds
CN101826150A (en) * 2009-03-06 2010-09-08 索尼株式会社 Head detection method and device and head detection and category judgment method and device
CN101867799A (en) * 2009-04-17 2010-10-20 北京大学 Video frame processing method and video encoder
CN103974043A (en) * 2013-01-24 2014-08-06 瑞昱半导体股份有限公司 Image processing device and image processing method
CN104751405A (en) * 2015-03-11 2015-07-01 百度在线网络技术(北京)有限公司 Method and device for blurring image
CN105590107A (en) * 2016-02-04 2016-05-18 山东理工大学 Face low-level feature constructing method
CN107680071A (en) * 2017-10-23 2018-02-09 深圳市云之梦科技有限公司 A kind of face and the method and system of body fusion treatment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006011685A (en) * 2004-06-24 2006-01-12 Noritsu Koki Co Ltd Photographic image processing method and its device
US8306262B2 (en) * 2008-05-15 2012-11-06 Arcsoft, Inc. Face tracking method for electronic camera device
TWI413936B (en) * 2009-05-08 2013-11-01 Novatek Microelectronics Corp Face detection apparatus and face detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improved automatic face detection technique in color images; Kah Phooi Seng et al.; 2004 IEEE Region 10 Conference TENCON 2004; 2005-05-23; pp. 459-462 *
Research on a face detection algorithm based on a skin color model; Wang Yanhong et al.; 《电视技术》 (Video Engineering); 2012-02-02; pp. 125-128 *


Similar Documents

Publication Publication Date Title
CN112419162B (en) Image defogging method, device, electronic equipment and readable storage medium
US8018504B2 (en) Reduction of position dependent noise in a digital image
US9830690B2 (en) Wide dynamic range imaging method
US9240037B2 (en) Image processing apparatus and image processing method
JP5314271B2 (en) Apparatus and method for improving image clarity
CN112967273B (en) Image processing method, electronic device, and storage medium
CN114418879A (en) Image processing method, image processing device, electronic equipment and storage medium
CN106341613B (en) Wide dynamic range image method
JP3267200B2 (en) Image processing device
JP3581270B2 (en) Image processing apparatus, image processing method, and recording medium recording image processing program
US11574391B2 (en) Median based frequency separation local area contrast enhancement
CN113810674A (en) Image processing method and device, terminal and readable storage medium
KR20090117617A (en) Image processing apparatus, method, and program
CN110490029B (en) Image processing method capable of performing differentiation processing on face data
US10235742B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and non-transitory computer-readable storage medium for adjustment of intensity of edge signal
KR101101434B1 (en) Apparatus for improving sharpness of image
TWI719305B (en) Image processing method capable of carrying out differentiation process on face data
US10142552B2 (en) Image processing apparatus that corrects contour, control method therefor, storage medium storing control program therefor, and image pickup apparatus
CN110086997B (en) Face image exposure brightness compensation method and device
CN109167892B (en) Video image detail enhancement method and system
EP1622080B1 (en) Signal processing device and method, recording medium, and program
JP2006180267A (en) Picture quality correcting circuit
JPH1132201A (en) Image processing unit
JP6879636B1 (en) Image processing method
JP3116936B2 (en) Image contrast enhancement method and image contrast processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant