JP4262151B2 - Image processing method, image processing apparatus, computer program, and storage medium - Google Patents


Info

Publication number
JP4262151B2
Authority
JP
Japan
Prior art keywords: luminance, value, correction, corrected, characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2004194292A
Other languages
Japanese (ja)
Other versions
JP2006018465A (en)
JP2006018465A5 (en)
Inventor
良介 井口
Original Assignee
キヤノン株式会社 (Canon Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by キヤノン株式会社 (Canon Inc.)
Priority to JP2004194292A
Publication of JP2006018465A5
Publication of JP2006018465A
Application granted
Publication of JP4262151B2
Legal status: Active
Anticipated expiration

Description

The present invention relates to an image processing method, an image processing apparatus, a computer program, and a storage medium that perform luminance value correction on an image.

In recent years, the spread of digital still cameras (hereinafter simply "digital cameras") and similar devices has made it easy to digitize photographs, and opportunities to handle photographic images as digital image data, particularly on personal computers, have increased.

Furthermore, with the spread of application software for correcting and processing images, users can now freely perform image processing on a personal computer. Examples include correction that brightens an image shot too dark and processing that whitens skin.

Against this background, image correction and processing call for ever more accurate image processing techniques.

In recent years, the technology of object extraction has advanced to the point where a human face can be detected in an image, and many proposals have accordingly been made to correct the face region to an optimal brightness. Luminance correction based on face extraction is particularly effective for correcting backlit images.

For example, Patent Document 1 proposes creating a correction characteristic from the brightness information of an extracted subject, obtaining control conditions from a region without the subject, and performing luminance correction accordingly. Patent Document 2 proposes performing gamma correction based on the average skin-color luminance value and the average luminance value of the entire image.

[Patent Document 1] JP 2003-69822 A
[Patent Document 2] Japanese Patent No. 3203946
[Patent Document 3] JP 2000-102033 A

However, the luminance correction method of Patent Document 1 processes multiple regions (a subject region and a region without the subject), which increases the processing load. It also creates a correction characteristic made of straight-line segments over the range from the subject's median luminance to its maximum luminance. When luminance correction follows a correction characteristic that has an inflection point inside the face region, a pseudo contour may appear on the face in the corrected image. Furthermore, if the luminance correction of Patent Document 2 extracts the face by detecting skin color, a dark face, as in a backlit image, prevents the skin color from being detected and the face from being extracted. The face region is therefore preferably extracted spatially, by detecting the positions of the eyes, nose, and so on.

The present invention has been made in view of the above problems, and aims to suppress the generation of pseudo contours within a predetermined region extracted from an image while realizing appropriate luminance correction for that region.

The present invention also aims to make it possible to create an appropriate correction characteristic from a predetermined region extracted from an image (for example, the region of a subject's face), such that the correction characteristic contains no inflection point within the extracted region, and to enable appropriate luminance correction using information from a predetermined region (for example, a face region) extracted spatially from the image.

To achieve the above object, an image processing method according to the present invention is an image processing method for correcting the luminance of an image, comprising:
an obtaining step of extracting a human face region from the image and obtaining an average luminance value of the face region based on the luminance values of the pixels included in the face region;
a calculation step of calculating a corrected luminance value, which is the post-correction luminance value for the average luminance value of the face region, based on the average luminance value of the face region and the average luminance value of the entire image;
a setting step of setting, as a face luminance range, the range from the value obtained by subtracting a predetermined luminance from the average luminance value of the face region to the value obtained by adding a predetermined luminance to it;
a generation step of generating a corrected luminance characteristic whose slope is constant within the face luminance range and which converts the average luminance value of the face region into the corrected luminance value; and
a correction step of correcting the luminance of the image based on the corrected luminance characteristic generated in the generation step.
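As an illustration only (the patent discloses no code), the final correction step of the claimed method, mapping every pixel's luminance through the generated corrected luminance characteristic, can be sketched in Python. The function name and the list-of-rows image representation are assumptions, not part of the disclosure:

```python
def apply_luminance_lut(y_plane, lut):
    """Map each pixel's luminance Y through a 256-entry corrected
    luminance characteristic. Cb/Cr are left untouched, so only the
    brightness of the image changes."""
    return [[lut[y] for y in row] for row in y_plane]

# Example: a characteristic that brightens every luminance by 20,
# clipped to the 8-bit range.
brighten = [min(255, y + 20) for y in range(256)]
corrected = apply_luminance_lut([[0, 100], [235, 250]], brighten)
```

With the identity table `list(range(256))`, the image passes through unchanged, which is a convenient sanity check for any generated characteristic.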

An image processing apparatus according to another aspect of the present invention for achieving the above object has the following configuration. That is, it is an image processing apparatus for correcting the luminance of an image, comprising:
obtaining means for extracting a human face region from the image and obtaining an average luminance value of the face region based on the luminance values of the pixels included in the face region;
calculation means for calculating a corrected luminance value, which is the post-correction luminance value for the average luminance value of the face region, based on the average luminance value of the face region and the average luminance value of the entire image;
setting means for setting, as a face luminance range, the range from the value obtained by subtracting a predetermined luminance from the average luminance value of the face region to the value obtained by adding a predetermined luminance to it;
generation means for generating a corrected luminance characteristic whose slope is constant within the face luminance range and which converts the average luminance value of the face region into the corrected luminance value; and
correction means for correcting the luminance value of the image based on the corrected luminance characteristic generated by the generation means.

According to the present invention, appropriate luminance correction can be realized for a predetermined region extracted from an image while suppressing the generation of pseudo contours within that region.

Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings.

<First Embodiment>
FIG. 1 shows an example of the overall configuration of the image processing system according to the first embodiment. A digital camera 105 and a printer 106, such as an inkjet printer, are connected to a host computer 100. As software, the host computer 100 has application software 101 such as image processing software, a word processor, a spreadsheet, and an Internet browser; an OS (Operating System) 102; a printer driver 103 that processes the groups of drawing commands (image drawing commands, text drawing commands, and graphics drawing commands) issued to the OS 102 by the application software 101 in order to create print data; and a digital camera driver 104 that transfers data in the digital camera to the OS 102.

The host computer 100 includes, as hardware on which this software can run, a central processing unit (CPU) 108, a hard disk drive (HD) 107, a random access memory (RAM) 109, a read-only memory (ROM) 110, a monitor 111, and the like.

One example of the host computer 100 shown in FIG. 1 is a widely available IBM AT-compatible personal computer running Microsoft Windows (registered trademark) XP as the OS 102, with an arbitrary application capable of printing installed and with a digital camera and a printer connected.

On the host computer 100, the application 101 creates output image data using text data classified as text such as characters, graphics data classified as graphics such as figures, image data classified as natural images, and so on. The OS 102 can display such output image data on the monitor 111. To print the output image data, the application 101 issues a print output request to the OS 102 and sends it a group of drawing commands representing the output image, in which the graphics data portion consists of graphics drawing commands and the image data portion consists of image drawing commands. On receiving the application's output request, the OS 102 issues the drawing command group to the printer driver 103 corresponding to the printer that will perform the printing. The printer driver 103 processes the print request and drawing commands received from the OS 102, creates print data that the printer 106 can print, and transfers it to the printer 106. When the printer 106 is a raster printer, the printer driver 103 performs image correction processing on each drawing command from the OS 102 in turn and rasterizes the commands one by one into an RGB 24-bit page memory. After all drawing commands have been rasterized, it converts the contents of the RGB 24-bit page memory into a data format the printer 106 can print, for example CMYK data, and transfers this to the printer.

The image correction processing of this embodiment, executed by the host computer 100, is particularly suited to image data obtained by shooting with the digital camera 105. So that such image data can be input into the personal computer, the digital camera 105 is configured to exchange data with the host computer. The image correction processing of this embodiment is described concretely below, following the flowchart shown in FIG. 13; it is performed by the application 101 in FIG. 1.

Image data shot with the digital camera 105 is generally recorded and saved on a memory card, typified by a CF card or an SD card. The user can transfer that data through the digital camera driver 104 to the HD 107 in the personal computer, and then launch the image processing software among the applications 101 to correct the luminance of the image.

In this embodiment, the correction processing is performed on YCbCr signals. Therefore, when an image captured through the digital camera driver 104 is corrected by the application 101, the image can be processed as-is if it is a JPEG image stored as YCbCr signals. Bitmap and TIFF images, however, are stored as RGB signals, so the RGB signals must first be converted into YCbCr signals, as shown in FIG. 2 (steps S11 and S12).
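The text does not spell out which RGB-to-YCbCr conversion is used in steps S11 and S12. One common definition is the JPEG/JFIF full-range BT.601 matrix, sketched here purely for illustration; the patent may use different coefficients:

```python
def rgb_to_ycbcr(r, g, b):
    """JPEG/JFIF (full-range BT.601) RGB -> YCbCr conversion.
    Inputs are 0..255; Y is 0..255 and Cb/Cr are centered on 128."""
    y  =  0.299   * r + 0.587   * g + 0.114   * b
    cb = -0.16874 * r - 0.33126 * g + 0.5     * b + 128.0
    cr =  0.5     * r - 0.41869 * g - 0.08131 * b + 128.0
    return y, cb, cr
```

Under this matrix, white (255, 255, 255) maps to Y = 255 with neutral chroma Cb = Cr = 128, so gray pixels carry all their information in the luminance channel that the correction operates on.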

Next, for the YCbCr image, the image formed by the luminance values Y is analyzed spatially to extract the face region (step S13). Face extraction technology has advanced in recent years, and many techniques have been proposed for automatically extracting a face region from an image of a person; they range from methods that extract a face by filter analysis to methods that extract a skin-color region from the color-difference signals. There are also methods that extract the face region along the outline of the face, and methods that detect the center point of the face and define a predetermined range around it as the face. Any known method may be used for the face extraction in step S13.

Next, a luminance histogram is computed from the extracted face region, and the average luminance value Y_avr of the face region is obtained (step S14). In this embodiment, the face region is set as a rectangle as shown in FIG. 3, the pixel values contained in that region are tallied into a histogram as shown in FIG. 4, and the average luminance value Y_avr is computed. FIG. 4 shows a case in which the average luminance value Y_avr of the face region is 100. Although this embodiment uses the average luminance value of the predetermined region (here, the face region) as the specific luminance value for determining the corrected luminance characteristic, the invention is not limited to this. For example, the center value of the face region's histogram could be used; when brightening a dark image, the maximum value of the histogram could be used; and if the histogram has a single peak, its peak value could serve as the specific luminance value.
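Step S14, tallying a histogram over a rectangular face region and taking its mean, might look like the following sketch. The function name, the row-list image layout, and the exclusive-rectangle convention are assumptions for illustration:

```python
def face_average_luminance(y_plane, face_rect):
    """Histogram the Y values inside a rectangular face region and
    return (histogram, average luminance). y_plane is a list of rows
    of 0..255 values; face_rect = (left, top, right, bottom) with
    right/bottom exclusive, and the rectangle must be non-empty."""
    left, top, right, bottom = face_rect
    hist = [0] * 256
    total = count = 0
    for row in y_plane[top:bottom]:
        for y in row[left:right]:
            hist[y] += 1   # tally for the face-region histogram
            total += y
            count += 1
    return hist, total / count
```

The same histogram can later supply alternatives to the mean (center value, maximum, or peak), as the paragraph above notes.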

Next, the corrected luminance value Y'_avr is calculated from the obtained average luminance value Y_avr (step S15). Y'_avr denotes the value of the average luminance Y_avr after correction. The corrected luminance value Y'_avr may be calculated as a function, Y'_avr = f(Y_avr), or determined case-by-case from the value of Y_avr using an LUT. In this embodiment, to account for the balance with the luminance of the entire image, Y'_avr is obtained, for example, as follows.

First, a first correction magnification k1 is calculated from the average luminance value Y_avr of the face region; it may be obtained from a function such as the one shown in FIG. 5, or by referring to an LUT. Next, a second correction magnification k2 is calculated from the average luminance value of the entire image; like k1, it can be obtained from a function or an LUT.

Once the two correction magnifications k1 and k2 have been obtained, Y'_avr is calculated. In this embodiment, the product of k1 and k2 is used as the ratio of Y'_avr to Y_avr, and the corrected average luminance value is calculated by the following Equation 1:

Y'_avr = k1 × k2 × Y_avr   … (Equation 1)

In this way, the corrected luminance value for the face average luminance value is obtained from two parameters: the average luminance value of the face and the average luminance value of the entire image.
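Equation 1 can be exercised with stand-in k1/k2 curves. The piecewise forms below are hypothetical placeholders for the patent's FIG. 5-style functions or LUTs, chosen only so that dark faces are boosted and an already-bright image damps the boost, as described in the surrounding paragraphs:

```python
def corrected_face_luminance(y_avr_face, y_avr_image):
    """Y'_avr = k1 * k2 * Y_avr (Equation 1), clipped to 8 bits.
    The k1/k2 curves are hypothetical illustrations, not the
    patent's actual functions."""
    # k1: larger magnification for darker faces (hypothetical curve)
    k1 = 1.6 - 0.6 * min(y_avr_face, 128) / 128.0
    # k2: smaller magnification when the whole image is already
    # bright, e.g. a backlit scene (hypothetical curve)
    k2 = 1.2 - 0.4 * min(y_avr_image, 255) / 255.0
    return min(255.0, k1 * k2 * y_avr_face)
```

With these placeholders, a dark face in a dark image is brightened, while a bright face in a bright image is pulled down slightly, matching the tuning behaviour described for k1 and k2 below.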

With this method of obtaining the corrected luminance value, the correction can be tuned well by setting k1 large, raising the magnification, when the face average luminance is low (for example, an image in which the face appears dark), and by setting k1 small, lowering the magnification, when the face average luminance is high, as in a highlight image.

For an image in which both the face and the whole image are dark, such as an underexposed image, the corrected luminance value can be raised further by increasing k2; conversely, when the face is dark but the whole image is bright, as in a backlit image, decreasing k2 keeps the background of the image from becoming too bright. As noted above, instead of the face's average luminance value, the center value of the face region's histogram, the maximum value of the histogram when brightening a dark image, or the peak value of a single-peaked histogram can also be used.

In this embodiment the correction magnification is determined from two parameters, the average luminance value of the face and the average luminance value of the entire image, but more parameters may be used, for example the maximum luminance value of the face region or the minimum luminance value of the image. Also, while this embodiment multiplies the calculated correction magnifications k1 and k2, the two coefficients could instead be added together, for example.

Once the corrected luminance value Y'_avr for the average luminance value Y_avr has been obtained as described above, it is used to determine corrected luminance values for all luminances, that is, the corrected luminance characteristic (steps S16 to S18). In the first embodiment, a corrected luminance characteristic covering the entire luminance range is created by the method described below, and corrected luminance values are determined for all luminance values. The corrected luminance value Y'_avr may also be set by the user. The corrected luminance characteristic is represented by a straight line, a curve, or a combination of the two. This is explained below with reference to FIG. 6.

First, the face luminance range is determined within the full luminance range (step S16). It represents the luminance range of the pixels constituting the face region in the image, mainly the skin-color portion. In general, the face luminance range might be obtained by taking the maximum and minimum luminance values from the face region's histogram and using the interval between them. With that method, however, if the extracted face region contains many hair or eye pixels, the minimum luminance value in the histogram becomes far smaller than the minimum luminance of the facial skin, even when the skin is distributed in the midtones; and if part of the face, such as the nose, is blown out, the maximum luminance value in the histogram becomes extremely large. That method is therefore prone to error when determining the face luminance range.

そこで第1実施形態では、顔平均輝度Y_avrを含む固定範囲を顔輝度範囲として設定する。具体的には、所定の値をもつ顔輝度範囲定数Y_rangeを定め、図6に示すようにY_avr−Y_rangeからY_avr+Y_rangeまでの範囲を顔輝度範囲とし、その範囲に顔領域の輝度値が分布するものとする。こうすることで、顔抽出の際に髪の毛や目などの暗い領域や、鼻などの白飛びが含まれてしまっても、大きな誤差を生じることなく顔の肌領域の輝度範囲を定めることができる。なお、顔輝度範囲定数Y_rangeは、複数の画像より顔のヒストグラムの統計を取り、分布範囲を調べて経験的に定めてもよいし、図4に示した顔領域のヒストグラムより標準偏差などを用いて分布を調べて決定してもよい。   Therefore, in the first embodiment, a fixed range including the face average luminance Y_avr is set as the face luminance range. Specifically, a face luminance range constant Y_range with a predetermined value is defined, the range from Y_avr−Y_range to Y_avr+Y_range is taken as the face luminance range as shown in FIG. 6, and the luminance values of the face area are assumed to be distributed within it. In this way, even if dark regions such as hair and eyes, or white-out such as on the nose, are included during face extraction, the luminance range of the facial skin area can be determined without a large error. The face luminance range constant Y_range may be determined empirically by collecting face-histogram statistics from a plurality of images and examining the distribution range, or by examining the distribution using the standard deviation or the like of the face-area histogram shown in FIG. 4.

次に、求められた顔輝度範囲Y_avr±Y_rangeにおける補正輝度特性を定める(ステップS17)。この顔輝度範囲内において補正輝度特性が折れ線になってしまうと、その折れ点で傾き(つまりコントラスト)が変わるために、顔輝度範囲内で擬似輪郭が発生してしまう。そこで第1実施形態では、この顔輝度範囲内で補正輝度特性の傾きを一定とする。傾き値は予め定めてもよいし、顔輝度範囲の値に応じて可変にしてもよい。また、元の画像に対して明るさのみを補正したい場合は傾きを1とし、コントラストを強調したい場合は傾きを1以上に設定すればよい。こうして、設定された傾きを有し、平均輝度値Y_avrに対して補正輝度値Y'_avrを通る直線を定め、これを顔輝度範囲内の補正輝度特性とする。こうすることで、画像中の顔領域において擬似輪郭を発生させることなく、所定のコントラストを保って輝度補正することができる。   Next, a corrected luminance characteristic in the obtained face luminance range Y_avr ± Y_range is determined (step S17). If the corrected luminance characteristic becomes a broken line within the face luminance range, the inclination (that is, contrast) changes at the broken point, and a pseudo contour is generated within the face luminance range. Therefore, in the first embodiment, the inclination of the corrected luminance characteristic is constant within this face luminance range. The inclination value may be determined in advance or may be varied according to the value of the face luminance range. Further, if it is desired to correct only the brightness of the original image, the inclination may be set to 1. If the contrast is to be enhanced, the inclination may be set to 1 or more. Thus, a straight line having the set inclination and passing through the corrected luminance value Y′_avr with respect to the average luminance value Y_avr is determined, and this is set as a corrected luminance characteristic within the face luminance range. In this way, luminance correction can be performed while maintaining a predetermined contrast without generating a pseudo contour in the face area in the image.

以上のようにして顔輝度範囲Y_avr−Y_range〜Y_avr+Y_rangeにおける補正輝度特性が設定されると、次に顔輝度範囲よりも輝度の低い範囲0〜Y_avr−Y_rangeと、顔輝度より輝度の高い範囲Y_avr+Y_range〜255に対して補正輝度特性を設定する(ステップS18)。まず、顔輝度範囲より輝度の低い範囲について、輝度値0の点において所定の点を通り、さらに顔輝度範囲との境界点Y_avr−Y_rangeにおいて上記決定した顔輝度範囲の補正輝度特性とつながる直線を求める。本実施形態では、図6に示すように輝度値0においては(0,0)を通るよう定めた。次に、顔輝度範囲より輝度の高い範囲について、上記と同様に輝度値255の点において所定の点を通り、顔輝度範囲との境界点Y_avr+Y_rangeにおいて顔輝度範囲の補正輝度特性とつながる直線を求める。本実施形態では、図6に示すように輝度値255においては(255,255)を通るよう定めた。   When the corrected luminance characteristic in the face luminance range Y_avr−Y_range to Y_avr+Y_range has been set as described above, corrected luminance characteristics are next set for the range 0 to Y_avr−Y_range, which is darker than the face luminance range, and for the range Y_avr+Y_range to 255, which is brighter (step S18). First, for the range darker than the face luminance range, a straight line is obtained that passes through a predetermined point at luminance value 0 and connects, at the boundary point Y_avr−Y_range, with the corrected luminance characteristic determined above for the face luminance range. In this embodiment, as shown in FIG. 6, it is set to pass through (0, 0) at luminance value 0. Next, for the range brighter than the face luminance range, a straight line is likewise obtained that passes through a predetermined point at luminance value 255 and connects with the corrected luminance characteristic of the face luminance range at the boundary point Y_avr+Y_range. In this embodiment, as shown in FIG. 6, it is set to pass through (255, 255) at luminance value 255.

このようにして、輝度値0〜255の範囲を、顔輝度範囲、顔輝度範囲より輝度の低い範囲、顔輝度範囲より輝度の高い範囲の3つに分け、それぞれの輝度範囲における補正輝度特性が境界点でつながるよう定め、最終的に図6のように3つの直線から成る補正輝度特性が作成される。   In this way, the range of luminance values 0 to 255 is divided into three parts: the face luminance range, the range darker than it, and the range brighter than it. The corrected luminance characteristics of these ranges are set to connect at the boundary points, and finally a corrected luminance characteristic consisting of three straight lines is created as shown in FIG. 6.
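The three-segment construction of steps S16 to S18 can be sketched as a 256-entry lookup table. This is an illustrative Python sketch, not code from the patent; the function name `build_tone_curve`, the slope parameter `k`, and the rounding/clipping are assumptions.

```python
# Sketch: build the three-segment corrected-luminance characteristic of FIG. 6.
# Y_avr: average face luminance, Y_corr: its corrected value (Y'_avr),
# Y_range: half-width of the face luminance range, k: fixed slope inside it.

def build_tone_curve(Y_avr, Y_corr, Y_range, k=1.0):
    lo, hi = Y_avr - Y_range, Y_avr + Y_range      # face-range boundary points
    # Endpoints of the face-range segment: a line of slope k through (Y_avr, Y_corr).
    y_lo = Y_corr - k * Y_range
    y_hi = Y_corr + k * Y_range
    lut = []
    for x in range(256):
        if x < lo:                                 # dark segment: (0, 0) -> (lo, y_lo)
            y = y_lo * x / lo
        elif x <= hi:                              # face segment: slope k through (Y_avr, Y_corr)
            y = Y_corr + k * (x - Y_avr)
        else:                                      # bright segment: (hi, y_hi) -> (255, 255)
            y = y_hi + (255 - y_hi) * (x - hi) / (255 - hi)
        lut.append(min(255, max(0, round(y))))
    return lut
```

The curve passes through (0, 0), (Y_avr, Y'_avr) and (255, 255), with a constant slope across the whole face luminance range, as the text requires.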

以上のようにして得られた補正輝度特性を用いて輝度信号を補正してもよいが、この補正輝度特性には傾きの異なる2直線が接合する部分が境界点に存在する。よって、境界点において補正輝度特性の傾きが急激に変化してしまうと、前述したように擬似輪郭を発生してしまう。そこで、第1実施形態では、更に補正輝度特性の2つの境界点で平滑化を行う。平滑化の方法としては例えばスプライン法などが考えられる。本実施形態では図7に示すように、境界点付近の所定の範囲を平滑化範囲として、境界点を挟む2つの直線に接する円を算出し、その円弧を補正輝度特性に置き換えることで平滑化を行う。   The luminance signal could be corrected using the corrected luminance characteristic obtained as described above, but this characteristic has, at each boundary point, a portion where two straight lines with different slopes join. If the slope of the corrected luminance characteristic changes abruptly at a boundary point, a pseudo contour is generated as described above. Therefore, in the first embodiment, smoothing is further performed at the two boundary points of the corrected luminance characteristic. As a smoothing method, for example, a spline method is conceivable. In this embodiment, as shown in FIG. 7, a predetermined range around each boundary point is taken as a smoothing range, a circle tangent to the two straight lines sandwiching the boundary point is calculated, and the corrected luminance characteristic is replaced by that circular arc for smoothing.

こうして図6における2つの境界点を上記方法で平滑化した補正輝度特性を、図8に示す。この補正輝度特性により、顔の平均輝度値Y_avrを所定の補正輝度値Y'_avrに補正し、かつ顔輝度範囲内で傾きを一定に保つことで、顔領域に対して適切な輝度補正を実現するとともに擬似輪郭の発生を抑えることができる。   FIG. 8 shows a corrected luminance characteristic obtained by smoothing the two boundary points in FIG. 6 by the above method. This correction brightness characteristic corrects the average brightness value Y_avr of the face to a predetermined correction brightness value Y'_avr and keeps the slope constant within the face brightness range, thereby realizing appropriate brightness correction for the face area. In addition, the occurrence of pseudo contours can be suppressed.
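The embodiment smooths each kink with a circular arc tangent to the two adjoining lines; the sketch below substitutes a simpler moving average over the lookup table near each boundary point, which has a comparable rounding effect. The names `smooth_boundaries` and `half_width` are assumptions, not from the patent.

```python
# Sketch (substitute for the tangent-circle arc of FIG. 7): average the
# 256-entry tone curve `lut` over a small window around each kink position
# in `boundaries`, leaving the rest of the curve untouched.

def smooth_boundaries(lut, boundaries, half_width=8):
    out = list(lut)
    for b in boundaries:
        for x in range(max(1, b - half_width), min(255, b + half_width + 1)):
            w0 = max(0, x - half_width)
            w1 = min(255, x + half_width)
            out[x] = round(sum(lut[w0:w1 + 1]) / (w1 - w0 + 1))
    return out
```

A tangent-circle or spline implementation would preserve the segments exactly outside the smoothing range in the same way; only the interpolant inside the window differs.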

以上説明したように、第1実施形態によれば、画像中から顔を抽出し、その顔の輝度を適切に補正できるような補正輝度特性の作成方法が開示される。より具体的には、第1実施形態では顔領域の平均輝度値を算出し、平均輝度値を含む顔輝度範囲を設定し、その範囲内において傾きが一定の直線を補正輝度特性として用いることにより、補正画像中の顔に擬似輪郭を発生するのを抑えることができる。また、平均輝度値に対して適切な補正輝度値を算出し、平均輝度値を算出した補正輝度値へ変換するように上記顔範囲内の補正輝度特性を設定することにより、適切な輝度補正が実現できる。また、顔輝度範囲以外の範囲について、それぞれ(0,0)、(255,255)を通り、境界点で上記顔輝度範囲内の直線とつながる直線を定め、その境界点を平滑化することで、全ての輝度範囲にわたって擬似輪郭やノイズの発生といったような弊害のない補正輝度特性を作成することができる。   As described above, the first embodiment discloses a method of creating a corrected luminance characteristic that extracts a face from an image and appropriately corrects its luminance. More specifically, in the first embodiment, the average luminance value of the face area is calculated, a face luminance range including that average is set, and a straight line of constant slope within that range is used as the corrected luminance characteristic, which suppresses pseudo contours on the face in the corrected image. Further, an appropriate corrected luminance value is calculated for the average luminance value, and the corrected luminance characteristic within the face range is set so as to convert the average luminance value into that corrected value, realizing appropriate luminance correction. For the ranges outside the face luminance range, straight lines are defined that pass through (0, 0) and (255, 255) respectively and connect with the face-range line at the boundary points, and those boundary points are smoothed; a corrected luminance characteristic free of artifacts such as pseudo contours and noise can thus be created over the entire luminance range.

<第2実施形態> <Second Embodiment>
第1実施形態の方法で補正特性を求めると、Y_avr−Y_rangeの値が0を下回る場合や、Y_avr+Y_rangeの値が255を上回る場合には、補正特性を作成できなくなる。そこで第2実施形態では、このような場合における輝度補正方法について述べる。なお、以下ではY_avr−Y_rangeの値が0以下となる場合、つまり顔が極端に暗い画像について補正輝度特性を設定する場合を説明する。   When the correction characteristic is obtained by the method of the first embodiment, it cannot be created when the value of Y_avr−Y_range falls below 0 or the value of Y_avr+Y_range exceeds 255. The second embodiment therefore describes a luminance correction method for such cases. In the following, the case where the value of Y_avr−Y_range is 0 or less, that is, where the corrected luminance characteristic is set for an image with an extremely dark face, is described.

第1実施形態では、Y_avr−Y_rangeの点と(0,0)とを結ぶ直線を補正特性の一部としていた。しかし、Y_avr−Y_rangeの値が0以下となる場合、この範囲の直線を作成することができなくなる。そこで第2実施形態では、まず点P1(Y_avr,Y'_avr)と(0,0)を結び、Y_avr+Y_rangeまで延びる直線を作成する。ここで、当該直線とY_avr+Y_rangeとの交点をP2とする。次に、Y_avr+Y_range〜255の範囲においては、(255,255)を通り、Y_avr+Y_rangeで上記直線と接続する直線(点P2で上記直線と接続する直線)を作成する。その後、上記同様に2つの直線の交点においてスムージングを行う。こうして、第1実施形態を踏まえた補正特性を1つ作成することができる。これは顔輝度範囲における補正輝度特性の直線の傾きが一定になるため、図9においてコントラスト重視特性92と称する。   In the first embodiment, a straight line connecting the point Y_avr−Y_range and (0, 0) is part of the correction characteristic. However, when the value of Y_avr−Y_range is 0 or less, a straight line in this range cannot be created. Therefore, in the second embodiment, the point P1 (Y_avr, Y′_avr) and (0, 0) are first connected to create a straight line extending to Y_avr + Y_range. Here, the intersection of the straight line and Y_avr + Y_range is P2. Next, in the range of Y_avr + Y_range to 255, a straight line that passes through (255, 255) and is connected to the straight line at Y_avr + Y_range (a straight line that is connected to the straight line at point P2) is created. After that, smoothing is performed at the intersection of the two straight lines as described above. In this way, one correction characteristic based on the first embodiment can be created. This is called a contrast emphasis characteristic 92 in FIG. 9 because the slope of the straight line of the corrected luminance characteristic in the face luminance range is constant.

しかしながら、このコントラスト重視特性92を用いると、0〜Y_avr+Y_rangeの輝度範囲においてコントラストが非常に高い補正特性となり、画像中の顔領域において擬似輪郭やノイズが発生してしまう。そこで第2実施形態では、コントラスト重視特性92の他に階調を重視した補正特性を作成し、これら2つの補正特性を合成することで1つの補正特性を作成する。   However, when this contrast-oriented characteristic 92 is used, a correction characteristic having a very high contrast in the luminance range of 0 to Y_avr + Y_range is generated, and a pseudo contour and noise are generated in the face area in the image. Therefore, in the second embodiment, a correction characteristic emphasizing gradation is created in addition to the contrast emphasis characteristic 92, and one correction characteristic is created by combining these two correction characteristics.

本実施形態では、階調を重視した補正特性として、(Y_avr,Y'_avr)と(0,0)、(255,255)を通るガンマ曲線を用いる。これを、図9に階調重視特性91として示した。この階調重視特性91はコントラスト重視特性92に比べて、顔輝度範囲内での傾きが全体的に小さく、また、傾きが大きく変わる点もないので、階調重視の補正特性と言える。   In this embodiment, a gamma curve passing through (Y_avr, Y'_avr), (0, 0) and (255, 255) is used as the correction characteristic that emphasizes gradation. This is shown as the tone emphasis characteristic 91 in FIG. 9. Compared with the contrast emphasis characteristic 92, the tone emphasis characteristic 91 has a smaller slope overall within the face luminance range and no point where the slope changes sharply, so it can be called a gradation-oriented correction characteristic.

コントラスト補正特性のみを用いると、輝度の高い部分では白トビを起してしまい、輝度の低い部分では黒っぽくなり顔の暗い部分が明るくならなくなってしまう。また、階調重視特性のみを用いると、輝度の高い部分では元画像と比べて極度に変化することはないが、顔の平均輝度周辺では、コントラスト重視特性に比べて画像にメリハリがなく、顔がのっぺりしたように仕上がってしまう。このことから、2つの補正特性を、合成して補正特性を生成する。複数の補正特性の合成においては、0〜255の全輝度範囲にわたって複数の補正特性を重み付け平均する方法が挙げられる。本実施形態では、更にその重み付けの仕方を輝度範囲に応じて異ならせる。すなわち、図9に示されるように、全輝度範囲を複数の範囲に分割し、それぞれの範囲に対して重み付け係数を設定して平均化するようにする。こうすることで、輝度値によって補正特性を調整することができる。図9においては、0〜Y_avrの範囲とY_avr〜255の2つの範囲に全輝度範囲を分割し、それぞれに重み付けの仕方を変えた様子が示されている。   If only the contrast emphasis characteristic is used, bright areas are blown out to white, while dark areas remain blackish and the dark parts of the face are not brightened. If only the tone emphasis characteristic is used, bright areas do not change excessively from the original image, but around the average face luminance the image lacks crispness compared with the contrast emphasis characteristic, and the face ends up looking flat. For this reason, the two correction characteristics are combined to generate a single correction characteristic. One way to combine multiple correction characteristics is to take their weighted average over the entire luminance range 0 to 255. In this embodiment, the weighting is further varied according to the luminance range: as shown in FIG. 9, the entire luminance range is divided into a plurality of ranges, and a weighting coefficient is set for each range before averaging. In this way, the correction characteristic can be adjusted according to the luminance value. FIG. 9 shows the entire luminance range divided into two ranges, 0 to Y_avr and Y_avr to 255, with a different weighting applied to each.

以下に、逆光画像を例に取って重み付け係数を求める。まず、0〜Y_avrにおいては顔領域のうち暗い部分が分布するので、ノイズの発生を抑えつつもコントラストをつけて明るくする必要がある。よって、この範囲では2つの補正特性は1:1で合成する。ちなみにコントラスト重視の補正特性92をf(Y)、階調重視の補正特性91をg(Y)とすると、合成補正特性Y'(93)は、   In the following, the weighting coefficients are determined taking a backlit image as an example. First, in the range 0 to Y_avr, the dark part of the face area is distributed, so it must be brightened with some contrast while suppressing noise. In this range, therefore, the two correction characteristics are combined at 1:1. Denoting the contrast emphasis characteristic 92 by f(Y) and the tone emphasis characteristic 91 by g(Y), the combined correction characteristic Y' (93) is
Y’=(f(Y)+g(Y))/2 …(式2) (Formula 2)
と表される。 It is expressed as above.

一方、Y_avr〜255の範囲においては、逆光画像のうち主に背景部分が分布するので、輝度をなるべく抑える必要がある。そこで図9のように、この範囲での補正特性の値が低い階調重視曲線に重み付けして合成する。例えば比率を1:3とすると、合成補正特性Y’は、   On the other hand, in the range Y_avr to 255, mainly the background of the backlit image is distributed, so the luminance should be kept as low as possible. Therefore, as shown in FIG. 9, the tone emphasis curve, whose correction value is lower in this range, is weighted more heavily in the combination. For example, with a ratio of 1:3, the combined correction characteristic Y' is
Y’=(f(Y)+3×g(Y))/4 …(式3) (Formula 3)
と表すことができる。 It can be expressed as above.

第2実施形態では、逆光画像を例に取って合成比率を定めたが、露光アンダーの画像やフラッシュが強いハイライト画像など、画像の特性に応じて合成比率を変化させることにより、各画像に適した補正を行うことができる。また、第2実施形態では、合成比率を切り替える点をY_avrとしたが、Y_avr+Y_rangeとしてもよいし、切り替える範囲を3つ以上(例えば、Y_avrとY_avr+Y_rangeの両方で切り替える)を設定してもよい。   In the second embodiment, the combination ratio was determined taking a backlit image as an example, but by changing the combination ratio according to the characteristics of the image, such as an underexposed image or a highlight image with strong flash, correction suited to each image can be performed. Also, in the second embodiment the point at which the combination ratio switches is Y_avr, but it may be Y_avr+Y_range, and three or more ranges may be set (for example, switching at both Y_avr and Y_avr+Y_range).
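The region-dependent weighting of formulas 2 and 3 can be sketched as follows, taking f as the contrast emphasis characteristic and g as the tone emphasis characteristic, with the switch point at Y_avr. The function name and the clipping are assumptions, not from the patent.

```python
# Sketch of the second embodiment's blending: weight f and g at 1:1 below
# Y_avr (formula 2) and at 1:3 above it (formula 3), producing a 256-entry
# combined characteristic.

def blend_characteristics(f, g, Y_avr):
    lut = []
    for Y in range(256):
        if Y < Y_avr:
            v = (f(Y) + g(Y)) / 2        # (式2) equal weighting in the dark range
        else:
            v = (f(Y) + 3 * g(Y)) / 4    # (式3) tone-emphasis weighting in the bright range
        lut.append(min(255.0, max(0.0, v)))
    return lut
```

As the text notes, the switch point and the ratios can be varied per image type (backlit, underexposed, strong flash) without changing this structure.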

こうして、第1実施形態においては作成できなかった補正特性を作成することができる。なお、第2実施形態で説明した複数の特性の合成による補正特性の生成は、顔輝度範囲が全輝度範囲からはみ出さないような状態においても適用できる。すなわち、第1実施形態で生成した補正特性に対しても適用できる。この場合、例えば図8において(0,0)、(Y_avr,Y'_avr)と(255,255)を通るガンマ曲線を設定し、図8に示された補正特性とガンマ曲線とを合成して補正特性を得る。このとき、Y_avr等を境として重み付けを変更するようにする。   In this way, correction characteristics that could not be created in the first embodiment can be created. Note that the generation of the correction characteristic by combining a plurality of characteristics described in the second embodiment can be applied even in a state where the face luminance range does not protrude from the entire luminance range. That is, the present invention can be applied to the correction characteristic generated in the first embodiment. In this case, for example, a gamma curve passing through (0, 0), (Y_avr, Y′_avr) and (255, 255) in FIG. 8 is set, and the correction characteristic and gamma curve shown in FIG. 8 are synthesized. Get correction characteristics. At this time, the weight is changed with Y_avr or the like as a boundary.

以上のように、第2実施形態によれば、コントラスト重視の曲線と階調重視の曲線を作成し、複数に分割した輝度範囲ごとに重み付け係数を定め、それにより2つの補正特性を合成することで、擬似輪郭やノイズの発生を抑えた補正輝度特性を作成することができる。 As described above, according to the second embodiment, a curve emphasizing contrast and a curve emphasizing gradation are created, and a weighting coefficient is determined for each of the divided luminance ranges, thereby synthesizing two correction characteristics. Thus, it is possible to create a corrected luminance characteristic in which generation of pseudo contours and noise is suppressed.

<第3実施形態> <Third Embodiment>
第3実施形態では、第1および第2実施形態で上述した補正特性によって輝度を補正した後、次に彩度補正を行う。以下に本実施形態の彩度補正について述べる。   In the third embodiment, the luminance is corrected using the correction characteristics described in the first and second embodiments, and then saturation correction is performed. The saturation correction of this embodiment is described below.

まず、第1或いは第2実施形態で説明した方法により補正輝度特性を求め、画像に対して輝度補正を行い、すべての画素において補正後の輝度値Y'が得られたものとする。第3実施形態では、更に彩度補正(色差補正)を行う。ここで、彩度値Sは色差値Cb、Crを用いて、下記の式4、   First, it is assumed that a corrected luminance characteristic has been obtained by the method described in the first or second embodiment, luminance correction has been performed on the image, and a corrected luminance value Y' has been obtained for every pixel. In the third embodiment, saturation correction (color difference correction) is further performed. Here, the saturation value S is obtained from the color difference values Cb and Cr by equation 4 below:
S=(Cb^2+Cr^2)^(1/2) …(式4) (Formula 4)
により求まる。 It is obtained by this.

一般に彩度補正は輝度補正とは独立に画像全体に一律に施される。例えば、彩度値Sに対する補正彩度値S’を求める場合、S’=1.3×Sという式で算出するのが一般的である。この場合、輝度補正がかからなかった画素(領域)まで彩度補正がかかってしまい、全体的に元の画像と色味が異なってしまう場合がある。または、特許文献3(特開2000−102033号公報)に述べられているように、輝度値Y、補正輝度値Y’よりS’=(Y’/Y)×Sとして該画素の輝度補正率Y’/Yに応じて彩度補正を行う方法もある。一方で、画像内でオブジェクト抽出を行い、抽出領域にのみ一律に輝度補正をするといった方法もある。本実施形態では特許文献3の方法のように、輝度補正を行った後、輝度補正の変化の度合いに応じて、画素の彩度補正の度合いも変化させる。特許文献3の手法と異なる点は、下記のように輝度補正の変化の度合いに係数を乗算し、自由度の高い彩度補正を行う点である。   In general, saturation correction is uniformly applied to the entire image independently of luminance correction. For example, when the corrected saturation value S' for the saturation value S is obtained, it is generally calculated by the equation S' = 1.3 × S. In this case, saturation correction is applied even to pixels (regions) that received no luminance correction, and the color may differ from the original image as a whole. Alternatively, as described in Patent Document 3 (Japanese Patent Laid-Open No. 2000-102033), there is a method of performing saturation correction according to the pixel's luminance correction ratio Y'/Y, as S' = (Y'/Y) × S from the luminance value Y and the corrected luminance value Y'. There is also a method of extracting an object in the image and uniformly correcting only the extracted region. In this embodiment, as in the method of Patent Document 3, after luminance correction is performed, the degree of saturation correction of each pixel is also varied according to the degree of change due to the luminance correction. The difference from Patent Document 3 is that, as described below, the degree of change due to luminance correction is multiplied by a coefficient, allowing saturation correction with a high degree of freedom.

彩度補正において、まず輝度補正量を求める。輝度補正量は元の輝度値に対する補正後の輝度値の割合であり、Y’/Yとする。また、0≦a≦1の範囲を取る、正規化された係数aを設定し、輝度補正量Y’/Yに乗算する。つまり、係数aは輝度補正量Y’/Yが彩度補正に及ぼす起因度を表す係数である。そして、Y’/Y×aに(1−a)の項を足し合わせることで、補正彩度値S’を求める係数とする。以上より、彩度補正式は、   In the saturation correction, a luminance correction amount is first obtained. The luminance correction amount is the ratio of the corrected luminance value to the original luminance value, Y'/Y. A normalized coefficient a in the range 0≦a≦1 is then set and multiplied by the luminance correction amount Y'/Y. That is, the coefficient a represents the degree to which the luminance correction amount Y'/Y contributes to the saturation correction. Adding the term (1−a) to (Y'/Y)×a gives the coefficient for obtaining the corrected saturation value S'. From the above, the saturation correction formula is
S’=((Y’/Y)×a+1−a)×S (0≦a≦1) …(式5) (Formula 5)
のようになる。
こうすることで、輝度補正量が大きいほど彩度補正も強調され、輝度補正が行われない場合は彩度補正も行われなくなる。 In this way, the larger the luminance correction amount, the more the saturation correction is emphasized; when no luminance correction is performed, no saturation correction is performed either.

彩度補正では彩度値Sに対して上記補正処理を行うが、色相を補正しない場合は、
Cb'=((Y'/Y)×a+1−a)×Cb
Cr'=((Y'/Y)×a+1−a)×Cr
というように色差値Cb、Crに補正処理を行っても等価である。
In the saturation correction, the above correction processing is performed on the saturation value S, but when the hue is not corrected,
Cb ′ = ((Y ′ / Y) × a + 1−a) × Cb
Cr ′ = ((Y ′ / Y) × a + 1−a) × Cr
Thus, it is equivalent even if correction processing is performed on the color difference values Cb and Cr.
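Formula 5 applied to the color-difference values, as above, can be sketched per pixel. This assumes Cb and Cr are signed values (the 128 offset removed); the function name and the zero-luminance guard are assumptions, not from the patent.

```python
# Sketch of formula 5 applied to the color differences (hue unchanged):
# the chroma gain follows the per-pixel luminance gain Y'/Y, attenuated
# by the coefficient a (0 <= a <= 1). a = 0 disables saturation correction.

def correct_chroma(Y, Y_corr, Cb, Cr, a=0.7):
    if Y <= 0:                                  # guard against division by zero
        return Cb, Cr
    gain = (Y_corr / Y) * a + (1.0 - a)
    return Cb * gain, Cr * gain
```

When Y_corr = Y the gain is 1 for any a, so pixels with no luminance correction keep their original chroma, matching the behavior described in the text.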

上記の式5を見ると、Y’の値がYに比べて大きくなるにつれ、Sにかかる係数部分も大きくなる。つまり、輝度補正量が大きいときほど彩度補正も強くかかることになる。一方、Y’=Yのとき、つまり輝度補正がない場合は、S’=Sとなり、彩度補正は行われない。   As can be seen from Equation 5 above, as the value of Y ′ increases compared to Y, the coefficient portion related to S also increases. In other words, the saturation correction is more strongly applied as the luminance correction amount is larger. On the other hand, when Y ′ = Y, that is, when there is no luminance correction, S ′ = S, and saturation correction is not performed.

第3実施形態では、先に行った輝度補正量に応じて彩度補正を行った。つまり、輝度を補正した画素は彩度も補正し、輝度を補正しない画素は彩度も補正しないことになる。また、マウスなどで画像中の補正したい領域を選択し、選択された領域に対して第1或いは第2実施形態で説明した輝度補正特性を適用するようにすれば、選択された領域内の画素については輝度および彩度が補正され、選択された領域外の画素については輝度も彩度も補正させないようにすることができる。こうすることで、元画像に比べて補正したい画素に関しては、輝度補正をすることにより彩度補正も行い、元画像に比べて補正したくない画素は輝度補正を行わないことで彩度補正も行わないように制御することができる。   In the third embodiment, saturation correction is performed according to the previously applied luminance correction amount. That is, a pixel whose luminance is corrected also has its saturation corrected, and a pixel whose luminance is not corrected does not have its saturation corrected. Further, if a region to be corrected is selected in the image with a mouse or the like, and the luminance correction characteristic described in the first or second embodiment is applied to the selected region, luminance and saturation are corrected for pixels inside the selected region, while neither luminance nor saturation is corrected for pixels outside it. In this way, pixels that should be corrected relative to the original image receive saturation correction through their luminance correction, and pixels that should not be corrected receive no luminance correction and hence no saturation correction.

Y’/Yに乗算した係数aは、任意で設定することができる。aの値を大きくすれば、Y’/Yの起因度を上げることができ、逆にaの値を小さくすれば、起因度を下げることができる。また、a=0では彩度補正が行われなくなることを意味する。これにより、自由度の高い彩度補正を行うことができる。   The coefficient a multiplied by Y'/Y can be set arbitrarily. Increasing a increases the contribution of Y'/Y, and decreasing a decreases it. When a = 0, no saturation correction is performed. This allows saturation correction with a high degree of freedom.

第3実施形態の用途例として、逆光補正処理が考えられる。まず、顔抽出処理によって、画像中から顔領域を抽出する。これは画像の周波数解析やフィルタリング処理、もしくは肌色検出法によるものでもよい。その後、図10に示すような暗く写った顔領域の輝度補正を行う。   As an application example of the third embodiment, backlight correction processing can be considered. First, a face area is extracted from an image by face extraction processing. This may be based on image frequency analysis, filtering processing, or skin color detection method. Thereafter, brightness correction is performed on the face area that appears dark as shown in FIG.

輝度補正が終了した後に、上記彩度補正を行う。逆光画像の場合、顔領域が元々暗く写っているために、その輝度値を上げても彩度値が非常に小さいままであることがある。そこで、輝度補正した後に彩度補正を行って、補正画像をさらに見栄え良くする必要がある。また、図10に示すように、逆光画像は背景は明るく写っているため、その部分に関しては輝度補正や彩度補正をする必要がない。そこで、輝度補正した領域にのみ彩度補正を行えば良く、さらには輝度補正量に応じて彩度補正量を強めることで、より精度の高い逆光補正を行うことができる。   After the luminance correction is completed, the saturation correction is performed. In the case of a backlight image, since the face area is originally dark, the saturation value may remain very small even if the luminance value is increased. Therefore, it is necessary to perform saturation correction after luminance correction to further improve the appearance of the corrected image. Further, as shown in FIG. 10, since the background of the backlight image is bright, it is not necessary to perform luminance correction or saturation correction for that portion. Therefore, it is only necessary to perform saturation correction only on the luminance-corrected region. Further, by increasing the saturation correction amount in accordance with the luminance correction amount, more accurate backlight correction can be performed.

以上のように第3実施形態によれば、式5に示すように、輝度補正量に応じた彩度補正を行うことで、輝度を補正させた画素は彩度も補正させ、輝度を補正させない画素は彩度も補正させないように制御することができる。 As described above, according to the third embodiment, by performing saturation correction according to the luminance correction amount as shown in equation 5, pixels whose luminance is corrected also have their saturation corrected, while pixels whose luminance is not corrected have their saturation left unchanged.

<第4実施形態> <Fourth Embodiment>
第3実施形態でも説明したように、YCbCr空間において輝度値Yのみを補正した場合、見た目に彩度が低下したように見える場合がある。これはYCbCr空間を均等色空間(L*a*b*空間)に変換したときに、彩度が保持されないためである。そこで、輝度値Yを補正した後に彩度もそれに合わせて補正する必要がある。第4実施形態では、L*a*b*空間において色差を保つという観点から彩度補正を行う方法について説明する。   As described in the third embodiment, when only the luminance value Y is corrected in the YCbCr space, the image may appear visually desaturated. This is because saturation is not preserved when the YCbCr space is converted to a uniform color space (the L*a*b* space). Therefore, after the luminance value Y is corrected, the saturation must be corrected accordingly. The fourth embodiment describes a method of performing saturation correction from the viewpoint of preserving color difference in the L*a*b* space.

Consider the conversion from the sRGB space to the L*a*b* space. As shown in FIG. 11, let the values in the original YCbCr space be Y_orig, Cb_orig, and Cr_orig, and let the saturation value be S_orig. Let L*_orig, a*_orig, and b*_orig be the L*a*b* values corresponding to these YCbCr values when each value is converted into the sRGB space and then from the sRGB space into the L*a*b* space. Let Y_corr be the corrected luminance value of Y_orig, and let L*_corr, a*_corr, and b*_corr be the L*a*b* values corresponding to Y_corr, Cb_orig, and Cr_orig. Further, let Y_after, Cb_after, and Cr_after be the YCbCr values corresponding to the L*a*b* value composed of the corrected lightness value L*_corr and the original color difference values a*_orig and b*_orig, and let S_after be the corresponding saturation value. As the amounts of change in luminance and saturation, dY and dS are used, where dY = Y_corr − Y_orig and dS = S_after − S_orig.
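The chain of conversions just defined (YCbCr → sRGB → L*a*b*) can be traced with a small sketch. This is a sketch under assumptions the patent does not spell out: full-range BT.601 YCbCr and the standard sRGB/D65 L*a*b* formulas are used, and the variable names (y_orig, l_corr, and so on) simply mirror the text.

```python
import math

def ycbcr_to_srgb(y, cb, cr):
    # Full-range BT.601 YCbCr -> 8-bit sRGB (an assumption; the patent
    # does not state which YCbCr variant it uses).
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(min(max(c, 0.0), 255.0) for c in (r, g, b))

def srgb_to_lab(rgb):
    # 8-bit sRGB -> CIE L*a*b* under the D65 white point.
    def to_linear(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (to_linear(c) for c in rgb)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def saturation(cb, cr):
    # YCbCr "saturation" taken as the chroma radius from the neutral axis.
    return math.hypot(cb - 128, cr - 128)

# Example: L*_orig vs. L*_corr when only Y is raised (Cb, Cr held fixed).
y_orig, cb, cr = 100, 110, 150
l_orig, a_orig, b_orig = srgb_to_lab(ycbcr_to_srgb(y_orig, cb, cr))
l_corr, a_corr, b_corr = srgb_to_lab(ycbcr_to_srgb(y_orig + 40, cb, cr))
```

The full round trip in FIG. 11 also converts (L*_corr, a*_orig, b*_orig) back through sRGB to YCbCr to obtain Y_after, Cb_after, Cr_after; that inverse leg is omitted here for brevity.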

In the fourth embodiment, the conversion between YCbCr values and L*a*b* values was examined statistically, restricted to the skin color region. Here, the skin color region in the YCbCr space was defined as luminance 30 to 200, hue 100 to 150°, and saturation 10 to 80. Statistics were taken for this skin color region when only the luminance was raised in steps of 10 over the range 10 to 100 while keeping the hue and saturation constant, and the conversion from the sRGB space to the L*a*b* space was performed; the graph shown in FIG. 12 was obtained. Here, the horizontal axis is dY/Y_orig and the vertical axis is dS/S_orig.

From this, it can be seen that there is a correlation between the rate of increase in luminance and the rate of increase in saturation. That is, the graph of FIG. 12 can be approximated by a straight line with slope a:

dS/S_orig = a × dY/Y_orig …(Equation 6)

That is, when Equation 6 is satisfied, the color difference is maintained before and after the luminance correction. Equation 6 can further be rewritten as

S_after = (Y_corr/Y_orig × a + 1 − a) × S_orig …(Equation 7)

which is equivalent to Equation 5 described in the third embodiment.
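The rewrite from Equation 6 to Equation 7 follows directly from the definitions dY = Y_corr − Y_orig and dS = S_after − S_orig; a short numeric sketch (illustrative values only) confirms that the two forms agree.

```python
def s_after_from_eq6(s_orig, y_orig, y_corr, a):
    # Equation 6: dS / S_orig = a * dY / Y_orig, with
    # dY = Y_corr - Y_orig and dS = S_after - S_orig.
    dy = y_corr - y_orig
    ds = a * dy / y_orig * s_orig
    return s_orig + ds

def s_after_from_eq7(s_orig, y_orig, y_corr, a):
    # Equation 7: S_after = (Y_corr / Y_orig * a + 1 - a) * S_orig.
    return (y_corr / y_orig * a + 1 - a) * s_orig

# The two expressions are algebraically identical:
for y_corr in (80.0, 100.0, 150.0, 200.0):
    e6 = s_after_from_eq6(40.0, 100.0, y_corr, 0.7)
    e7 = s_after_from_eq7(40.0, 100.0, y_corr, 0.7)
    assert abs(e6 - e7) < 1e-9
```

Expanding Equation 6 gives S_orig × (1 + a × dY/Y_orig) = S_orig × (a × Y_corr/Y_orig + 1 − a), which is exactly Equation 7.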

As described above, when only the luminance is brightened in the YCbCr space, skin tones tend to appear less saturated than in the original image when viewed on a monitor or in a print. Therefore, the saturation should also be corrected according to the degree of luminance correction, using Equation 7 above to compensate for that loss of saturation.

The coefficient a in Equation 7 may be a value calculated from the graph of FIG. 12, or a value determined empirically by comparing the images before and after correction on a monitor or in prints.

It has been found statistically that in a backlit image the face generally appears dark, with luminance values of about 30 to 130. Backlight correction normally raises the luminance of the face region by about 10 to 80 to brighten the face; in that case, correcting only the luminance value as described above still made the image appear visually desaturated. Performing saturation correction according to the luminance, based on this embodiment, solves this problem.

As described in the third and fourth embodiments, adjusting the saturation correction amount according to the luminance correction amount enables more accurate image conversion than the prior art.

For example, in backlight correction, when luminance correction is performed, the subject that appeared dark receives the largest correction amount, while the background receives only a small one. Accordingly, when performing saturation correction, the amount is tied to the luminance correction amount so that little saturation correction is applied to the background, which should remain close to the original image, while saturation correction is applied to the subject that is to be corrected.

Furthermore, when skin color is brightened in the YCbCr space, a statistical tendency toward reduced saturation is observed in the uniform color space. Therefore, saturation correction is performed according to the luminance correction amount so that the saturation of the original image is maintained.

As described above, when only the luminance is brightened in the YCbCr space, skin tones tend to appear less saturated than in the original image when viewed on a monitor or in a print, as shown in FIG. 12. According to the fourth embodiment, the saturation is also corrected according to the degree of luminance correction using Equation 6, so that loss of saturation can be compensated.

The present invention also covers the case where a software program that implements the functions of the above-described embodiments (in the embodiments, a program corresponding to the flowcharts shown in the figures) is supplied directly or remotely to a system or apparatus, and a computer of that system or apparatus reads out and executes the supplied program code.

Accordingly, the program code itself, installed in a computer in order to implement the functional processing of the present invention on that computer, also implements the present invention. That is, the present invention includes the computer program itself for implementing its functional processing.

In that case, as long as it has the functions of the program, it may take any form, such as object code, a program executed by an interpreter, or script data supplied to an OS.

Examples of recording media for supplying the program include a floppy (registered trademark) disk, hard disk, optical disk, magneto-optical disk, MO, CD-ROM, CD-R, CD-RW, magnetic tape, nonvolatile memory card, ROM, and DVD (DVD-ROM, DVD-R).

As another supply method, the program can be supplied by using a browser on a client computer to connect to a homepage on the Internet and downloading the computer program of the present invention itself, or a compressed file including an automatic installation function, from the homepage to a recording medium such as a hard disk. It can also be realized by dividing the program code constituting the program of the present invention into a plurality of files and downloading each file from a different homepage. That is, a WWW server that allows a plurality of users to download the program files for implementing the functional processing of the present invention on a computer is also included in the present invention.

It is also possible to encrypt the program of the present invention, store it on a storage medium such as a CD-ROM, and distribute it to users; users who satisfy predetermined conditions are allowed to download key information for decryption from a homepage via the Internet, and the encrypted program is executed using that key information and installed on a computer.

In addition to the functions of the above-described embodiments being realized by a computer executing the read program, an OS or the like running on the computer may perform part or all of the actual processing based on the instructions of the program, and the functions of the above-described embodiments may also be realized by that processing.

Furthermore, after the program read from the recording medium is written into a memory provided on a function expansion board inserted into the computer or a function expansion unit connected to the computer, a CPU or the like provided on the function expansion board or function expansion unit may perform part or all of the actual processing based on the instructions of the program, and the functions of the above-described embodiments are also realized by that processing.

FIG. 1 is a block diagram showing the configuration of the image processing system according to the first embodiment.
FIG. 2 is a diagram showing conversion processing between the RGB signals and YCbCr signals constituting an image.
FIG. 3 is a diagram showing a face region extracted as a rectangle.
FIG. 4 is a diagram showing a luminance histogram of the extracted face region.
FIG. 5 is a diagram showing a function for calculating the first correction magnification k1 from the face average luminance value Y_avr.
FIG. 6 is a diagram showing the corrected luminance characteristic before smoothing in the first embodiment.
FIG. 7 is a diagram explaining smoothing processing at the intersection of two straight lines using a circle.
FIG. 8 is a diagram showing the final corrected luminance characteristic obtained by applying the smoothing processing of FIG. 7 to the corrected luminance characteristic of FIG. 6.
FIG. 9 is a diagram showing the corrected luminance characteristic according to the second embodiment.
FIG. 10 is a diagram showing, in a backlit image, the regions where luminance and saturation are corrected and the regions where they are not.
FIG. 11 is a conversion diagram showing the corresponding values in the L*a*b* space when only the luminance is corrected in the YCbCr space.
FIG. 12 is a diagram showing the relationship between the luminance change rate dY/Y and the saturation change rate dS/S.
FIG. 13 is a flowchart explaining the correction processing according to the first embodiment.

Claims (13)

  1. An image processing method for correcting the luminance of an image, comprising:
    an obtaining step of extracting a human face region from the image and obtaining an average luminance value of the face region based on luminance values of pixels included in the face region;
    a calculating step of calculating a corrected luminance value, which is a luminance value after correction for the average luminance value of the face region, based on the average luminance value of the face region and the average luminance value of the entire image;
    a setting step of setting, as a face luminance range, a range from a value obtained by subtracting a predetermined luminance from the average luminance value of the face region to a value obtained by adding a predetermined luminance to the average luminance value of the face region;
    a generating step of generating a corrected luminance characteristic whose slope is constant within the face luminance range and in which the average luminance value of the face region is converted into the corrected luminance value; and
    a correcting step of correcting the luminance of the image based on the corrected luminance characteristic generated in the generating step.
  2. The image processing method according to claim 1, wherein in the calculating step, a first correction magnification is calculated from the average luminance value of the face region, a second correction magnification is calculated from the average luminance value of the entire image, and the corrected luminance value is calculated by multiplying the average luminance value of the face region by the first correction magnification and the second correction magnification.
  3. The image processing method according to claim 1, wherein in the generating step, a corrected luminance characteristic outside the face luminance range is further generated so as to be continuous with the corrected luminance characteristic within the face luminance range.
  4. The image processing method according to claim 3, wherein in the generating step, the corrected luminance characteristics inside and outside the face luminance range are generated so as to have a shape in which the connecting portion between the corrected luminance characteristic within the face luminance range and the corrected luminance characteristic set outside the face luminance range is interpolated with a curve.
  5. The image processing method according to claim 1, wherein in the generating step, a first straight line is set that has a predetermined slope within the face luminance range and passes through the point of the average luminance value and the corrected luminance value;
    in the range below the face luminance range, a second straight line is set that passes through the point where the input luminance value and the corrected luminance value are minimum and connects to the first straight line;
    in the range above the face luminance range, a third straight line is set that passes through the point where the input luminance value and the corrected luminance value are maximum and connects to the first straight line; and the corrected luminance characteristic is generated based on the first, second, and third straight lines.
  6. The image processing method according to claim 5, wherein in the generating step, the corrected luminance characteristic is generated by interpolating, each with a curve, the connecting portion between the first straight line and the second straight line and the connecting portion between the first straight line and the third straight line.
  7. The image processing method according to claim 1, wherein in the generating step, when part of the face luminance range is 0 or less:
    a gamma curve passing through the point of the average luminance value of the face region and the corrected luminance value is taken as a first correction characteristic;
    a correction characteristic obtained based on a first straight line, which passes through the point where the input luminance value and the corrected luminance value are minimum in the face luminance range and through the point of the average luminance value of the face region and the corrected luminance value, and a second straight line, which, in the region above the face luminance range, passes through the point where the input luminance value and the corrected luminance value are maximum and connects to the first straight line within the face luminance range, is taken as a second correction characteristic; and
    the corrected luminance characteristic is generated by taking a weighted average of the first correction characteristic and the second correction characteristic.
  8. The image processing method according to claim 6, wherein in the generating step, the second characteristic is generated by interpolating the connecting portion between the first straight line and the second straight line with a curve.
  9. The image processing method according to any one of claims 1 to 8, wherein in the correcting step, the saturation of the pixel is further corrected according to the amount of correction to the corrected luminance value.
  10. The image processing method according to claim 9, wherein in the correcting step, where Y is the luminance value before correction, Y' is the luminance value after correction, and S is the saturation value before correction, the saturation value S' after correction is obtained by S' = (Y'/Y × a + (1 − a)) × S (0 ≦ a ≦ 1).
  11. An image processing apparatus for correcting the luminance of an image, comprising:
    obtaining means for extracting a human face region from the image and obtaining an average luminance value of the face region based on luminance values of pixels included in the face region;
    calculating means for calculating a corrected luminance value, which is a luminance value after correction for the average luminance value of the face region, based on the average luminance value of the face region and the average luminance value of the entire image;
    setting means for setting, as a face luminance range, a range from a value obtained by subtracting a predetermined luminance from the average luminance value of the face region to a value obtained by adding a predetermined luminance to the average luminance value of the face region;
    generating means for generating a corrected luminance characteristic whose slope is constant within the face luminance range and in which the average luminance value of the face region is converted into the corrected luminance value; and
    correcting means for correcting the luminance value of the image based on the corrected luminance characteristic generated by the generating means.
  12. A control program for causing a computer to execute the image processing method according to any one of claims 1 to 10.
  13. A computer-readable storage medium storing a control program for causing a computer to execute the image processing method according to any one of claims 1 to 10.
JP2004194292A 2004-06-30 2004-06-30 Image processing method, image processing apparatus, computer program, and storage medium Active JP4262151B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004194292A JP4262151B2 (en) 2004-06-30 2004-06-30 Image processing method, image processing apparatus, computer program, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2004194292A JP4262151B2 (en) 2004-06-30 2004-06-30 Image processing method, image processing apparatus, computer program, and storage medium

Publications (3)

Publication Number Publication Date
JP2006018465A5 JP2006018465A5 (en) 2006-01-19
JP2006018465A JP2006018465A (en) 2006-01-19
JP4262151B2 true JP4262151B2 (en) 2009-05-13

Family

ID=35792697

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004194292A Active JP4262151B2 (en) 2004-06-30 2004-06-30 Image processing method, image processing apparatus, computer program, and storage medium

Country Status (1)

Country Link
JP (1) JP4262151B2 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4867365B2 (en) 2006-01-30 2012-02-01 ソニー株式会社 Imaging control apparatus, imaging apparatus, and imaging control method
US7999863B2 (en) 2006-02-01 2011-08-16 Fujifilm Corporation Image correction apparatus and method
JP4895839B2 (en) * 2006-02-01 2012-03-14 富士フイルム株式会社 Image correction apparatus and method
JP4788394B2 (en) * 2006-02-24 2011-10-05 セイコーエプソン株式会社 Image processing apparatus, image processing method, and image processing program
JP4839106B2 (en) * 2006-03-15 2011-12-21 株式会社リコー Image processing apparatus and image processing method
US8369645B2 (en) 2006-05-17 2013-02-05 Sony Corporation Image correction circuit, image correction method and image display
JP5012195B2 (en) * 2006-05-17 2012-08-29 ソニー株式会社 Image correction circuit, image correction method, and image display apparatus
JP2008118383A (en) * 2006-11-02 2008-05-22 Matsushita Electric Ind Co Ltd Digital camera
JP5074066B2 (en) * 2007-03-27 2012-11-14 オリンパス株式会社 Image processing apparatus and image processing method
JP4525719B2 (en) * 2007-08-31 2010-08-18 カシオ計算機株式会社 Gradation correction apparatus, gradation correction method, and program
JP5132470B2 (en) * 2008-08-01 2013-01-30 三洋電機株式会社 Image processing device
JP4979090B2 (en) * 2008-08-19 2012-07-18 株式会社リコー Image processing apparatus, image processing method, program, and recording medium
JP5487597B2 (en) * 2008-11-13 2014-05-07 セイコーエプソン株式会社 Image processing apparatus, image display apparatus, and image processing method
US8891865B2 (en) 2008-12-26 2014-11-18 Nec Corporation Image processing device, image processing method, and storage medium for performing a gradation correction operation in a color space
JP5031877B2 (en) * 2010-01-06 2012-09-26 キヤノン株式会社 Image processing apparatus and image processing method
JP5933332B2 (en) * 2012-05-11 2016-06-08 シャープ株式会社 Image processing apparatus, image processing method, image processing program, and recording medium storing image processing program
JP6257268B2 (en) * 2013-10-30 2018-01-10 キヤノン株式会社 Image processing apparatus and image processing method
WO2020140986A1 (en) * 2019-01-04 2020-07-09 Oppo广东移动通信有限公司 Image denoising method and apparatus, storage medium and terminal

Also Published As

Publication number Publication date
JP2006018465A (en) 2006-01-19

Similar Documents

Publication Publication Date Title
US9639965B2 (en) Adjusting color attribute of an image in a non-uniform way
US9760761B2 (en) Image processing method and image processing apparatus
US7636496B2 (en) Histogram adjustment for high dynamic range image mapping
CN100476874C (en) Image processing method and apparatus
US6535301B1 (en) Image processing apparatus, image processing method, image processing program recording medium, color adjustment method, color adjustment device, and color adjustment control program recording medium
EP1450551B1 (en) Image data output image adjustment
US7358994B2 (en) Image processing apparatus, image processing method, recording medium thereof, and program thereof
US7885459B2 (en) Image processing method and apparatus therefor
US7945109B2 (en) Image processing based on object information
ES2304419T3 (en) Apparatus for the image process for image printing processes.
US6664973B1 (en) Image processing apparatus, method for processing and image and computer-readable recording medium for causing a computer to process images
US6493468B1 (en) Image processing apparatus and method
US6980326B2 (en) Image processing method and apparatus for color correction of an image
JP4217398B2 (en) Image data processing method, image data processing apparatus, storage medium, and program
US7072084B2 (en) Color converting device emphasizing a contrast of output color data corresponding to a black character
JP3902265B2 (en) Contrast improvement method
JP5116393B2 (en) Image processing apparatus and image processing method
US7940434B2 (en) Image processing apparatus, image forming apparatus, method of image processing, and a computer-readable storage medium storing an image processing program
JP4118749B2 (en) Image processing apparatus, image processing program, and storage medium
JP4870618B2 (en) Image data automatic mapping method and image processing device
US5872895A (en) Method for object based color matching when printing a color document
JP5469209B2 (en) Luminance conversion curve creation device and method, and brightness conversion curve creation program
US7433079B2 (en) Image processing apparatus and method
JP4902837B2 (en) How to convert to monochrome image
JP4065482B2 (en) Image data processing method, apparatus, storage medium, and program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20051207

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20051207

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080722

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080922

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20081024

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20081215

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20090123
A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20090206

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120220

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4262151

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130220

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140220

Year of fee payment: 5