WO2005064539A1 - Image area extraction and image processing method, device, and program - Google Patents

Image area extraction and image processing method, device, and program

Info

Publication number
WO2005064539A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
pixels
extracting
image area
Prior art date
Application number
PCT/JP2004/018816
Other languages
French (fr)
Japanese (ja)
Inventor
Takeshi Nakajima
Hiroaki Takano
Tsukasa Ito
Daisuke Sato
Original Assignee
Konica Minolta Photo Imaging, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Photo Imaging, Inc. filed Critical Konica Minolta Photo Imaging, Inc.
Publication of WO2005064539A1 publication Critical patent/WO2005064539A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation

Definitions

  • the present invention relates to an image region extraction method, an image region extraction device, an image region extraction program, an image processing method, an image processing device, and an image processing program for extracting an image region from an image.
  • Such image signals are subjected to various image processing such as negative/positive inversion, luminance adjustment, color balance adjustment, grain removal, sharpness enhancement, and the like, and are then distributed via recording media such as CD-R (CD-Recordable), CD-RW (CD-Rewritable), FD (floppy (registered trademark) disk), and memory cards, or via the Internet.
  • the distributed image signal is output as a hard-copy image using silver halide photographic paper, an ink-jet printer, a thermal printer, or the like, or is displayed on a CRT (Cathode Ray Tube), a liquid crystal display, a plasma display, or the like for viewing.
  • when an image to be viewed includes a person's face, the face draws the most attention at the time of viewing. For this reason, in order to output a high-quality image, it is necessary to give the person's face appropriate color, brightness, sharpness, noise characteristics, three-dimensionality, and the like.
  • the simple region expansion method extracts an image area by treating adjacent pixels whose pixel-data difference is at or below a threshold value as belonging to the same image area and expanding the area accordingly (for the threshold value, see, for example, Non-Patent Document 1).
  • if the data difference between the initial pixel and an adjacent pixel is less than or equal to the threshold, the adjacent pixel is assumed to belong to the same image area as the initial pixel; the same judgment is then applied to pixels adjacent to those already in the area.
  • This is an image processing method that extracts an image area by gradually expanding the same area starting from the initial pixel.
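The simple region expansion just described can be sketched as a breadth-first traversal. This is a minimal illustration rather than the patent's implementation: the 4-connectivity, the grayscale pixel values, and the threshold of 20 are all assumptions.

```python
from collections import deque

def simple_region_expansion(image, seed, threshold):
    """Grow a region from `seed`: a 4-connected neighbor joins the region
    when its value differs from the current pixel's by at most `threshold`."""
    h, w = len(image), len(image[0])
    region = {seed}
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if abs(image[ny][nx] - image[y][x]) <= threshold:
                    region.add((ny, nx))
                    frontier.append((ny, nx))
    return region

# A bright 2x2 patch in a dark image: growing from (0, 0) with a small
# threshold recovers only the bright patch.
img = [[200, 200, 10],
       [200, 200, 10],
       [10,  10,  10]]
print(sorted(simple_region_expansion(img, (0, 0), 20)))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Growth stops wherever the neighbor-to-neighbor difference exceeds the threshold, which is exactly why granular noise or weak gradients can make the plain method leak or stall — the motivation for the edge-bounded variant claimed below.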
  • Patent Document 1 Japanese Patent Application Laid-Open No. 2001-57630
  • Non-Patent Document 1: Mikio Takagi and Haruhisa Shimoda (supervising eds.), "Image Analysis Handbook," First Edition, University of Tokyo Press, January 17, 1991
  • it was difficult to accurately extract a desired image region from an image, and to perform appropriate image processing on the extracted region, by an image-region extraction method such as that disclosed in Japanese Patent Application Laid-Open No. H11-157163.
  • otherwise, the image processing may not be performed properly. That is, performing image processing on a specific image area presupposes accurate extraction of that area; if the extraction is inaccurate, there is a high possibility that the desired effect cannot be obtained even when the processing is applied.
  • An object of the present invention is to appropriately extract a desired image area from an image and perform appropriate image processing on the extracted image area.
  • the image area is expanded from the initial pixels using the low-frequency image signal; thereafter, when the expansion process reaches a pixel at an image edge, the expansion is stopped, and the pixels of the expanded image area are extracted.
  • the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels for which the change rate is calculated; when the normalized value is equal to or greater than a predetermined threshold, the pixel to be expanded is preferably extracted as a pixel corresponding to the image edge.
  • the extraction condition is preferably a condition for extracting a pixel representing the skin of a person.
  • the image area is expanded from the initial pixels using the low-frequency image signal; thereafter, when the expansion process reaches a pixel at an image edge, the expansion is stopped, and the pixels of the expanded image area are extracted.
  • this is a step of expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of each expanded image area.
  • the determining step is a step of determining whether or not each of the expanded image areas represents a person's face.
  • the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels for which the change rate is calculated; if the normalized value is equal to or greater than a predetermined threshold, it is preferable to extract the pixel to be expanded as a pixel corresponding to the image edge.
  • FIG. 1 is a perspective view showing an external configuration of an image processing apparatus to which the present invention is applied.
  • FIG. 2 is a block diagram showing an internal structure of an image processing device to which the present invention is applied.
  • FIG. 3 is a block diagram mainly showing an internal configuration of an image processing unit shown in FIG. 2.
  • FIG. 4 is a flowchart illustrating image processing contents to which the present invention is applied.
  • Acquiring means for acquiring an image signal composed of signals of a plurality of pixels
  • Edge extracting means for extracting a pixel corresponding to an image edge from the plurality of pixels, creating means for creating a low-frequency image signal from the image signal,
  • Initial pixel extracting means for extracting a pixel satisfying a predetermined extraction condition from the plurality of pixels as an initial pixel
  • Expanded-region extracting means for expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of the expanded image area
  • the edge extracting means normalizes the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels for which the change rate is calculated; when the normalized value is equal to or greater than a predetermined threshold, the pixel to be expanded is preferably extracted as the pixel corresponding to the image edge.
  • the extraction condition is preferably a condition for extracting a pixel representing the skin of a person.
  • a function of expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of the expanded image area
  • when the luminance change rate of the image signal between pixels, normalized by either the luminance value of the target pixel or the average luminance of the pixels for which the change rate is calculated, is equal to or greater than a predetermined threshold, it is preferable to further realize a function of extracting the pixel to be expanded as a pixel corresponding to an image edge.
  • the extraction condition is preferably a condition for extracting a pixel representing the skin of a person.
  • Acquiring means for acquiring an image signal composed of signals of a plurality of pixels
  • Edge extracting means for extracting a pixel corresponding to an image edge from the plurality of pixels, creating means for creating a low-frequency image signal from the image signal,
  • Initial pixel extracting means for extracting a pixel satisfying an extraction condition for extracting a pixel representing a human skin from the plurality of pixels as an initial pixel;
  • Expanded-region extracting means for expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of the expanded image area
  • the means for extracting the initial pixels is:
  • Initial pixel extraction means for extracting, as initial pixels, a plurality of pixels each satisfying a plurality of types of extraction conditions for extracting a pixel representing a human skin from the plurality of pixels
  • Expanded-region extracting means for expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of each expanded image area
  • the determining means is a determination unit configured to determine whether each of the expanded image areas represents a person's face.
  • the edge extracting means normalizes the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels for which the change rate is calculated; when the normalized value is equal to or greater than a predetermined threshold, it is preferable to extract the pixel to be expanded as a pixel corresponding to an image edge.
  • a function of expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of the expanded image area
  • the extracting function is a function of expanding the image area from the initial pixels using the low-frequency image signal and, when the expansion process reaches a pixel at an image edge, stopping the expansion and extracting the pixels of each expanded image area.
  • the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels for which the change rate is calculated; when the normalized value is equal to or greater than the predetermined threshold, it is preferable to further realize a function of extracting the pixel to be expanded as a pixel corresponding to an image edge.
  • the image processing apparatus 1 is provided with a magazine loading section 3 for loading a photosensitive material on one side surface of a housing 2. Inside the housing 2, there are provided an exposure processing section 4 for exposing the photosensitive material, and a print creating section 5 for developing and drying the exposed photosensitive material to create a print. On the other side of the housing 2, a tray 6 for discharging the print created by the print creating section 5 is provided.
  • a CRT 8 as a display device
  • a film scanner unit 9 that is a device for reading a transparent original
  • a reflective original input device 10 and an operation unit 11 are provided at an upper portion of the housing 2.
  • the housing 2 is provided with an image reading section 14 for reading images recorded on various digital recording media, and an image writing section 15 for writing (outputting) image signals on various digital recording media.
  • a control unit 7 for integrally controlling each unit constituting the image processing apparatus 1 is provided inside the housing 2.
  • the image reading unit 14 is provided with a PC card adapter 14a and an FD adapter 14b, so that the PC card 13a and the FD 13b can be inserted.
  • the image writing unit 15 is provided with an FD adapter 15a, an MO (Magneto-Optical) adapter 15b, and an optical disk adapter 15c, into which the FD 16a, the MO 16b, and the optical disk 16c can be inserted, respectively.
  • the optical disk 16c includes a CD-R, a DVD-R (Digital Versatile Disk—Recordable), a DVD-RW (DVD—Rewritable), and the like.
  • the operation unit 11, the CRT 8, the film scanner unit 9, the reflection document input device 10, and the image reading unit 14 may be configured integrally with the housing 2, or may be provided separately.
  • the image processing apparatus 1 includes a control unit 7, an exposure processing unit 4, a print generation unit 5, a film scanner unit 9, a reflection document input device 10, an image reading unit 14, a communication unit (input) 32, It has an image writing unit 15, a data storage unit 71, an operation unit 11, a CRT 8, and a communication unit (output) 33.
  • the exposure processing section 4 exposes the photosensitive material to an image, and outputs the photosensitive material to the print creating section 5.
  • the print creating section 5 develops the exposed photosensitive material and dries it to create prints Pl, P2 and P3.
  • the control unit 7 includes a microcomputer and comprehensively controls the operation of each unit constituting the image processing apparatus 1 through cooperation between various control programs, such as an image processing program stored in a ROM (Read Only Memory) or the like (not shown), and a CPU (Central Processing Unit) (not shown).
  • the control unit 7 includes an image processing unit 70 and, based on an input signal (command information) from the operation unit 11, forms an image for exposure from each image input from the film scanner unit 9, the reflection document input device 10, the image reading unit 14, or the communication means (input) 32, and outputs it to the exposure processing section 4.
  • the image processing unit 70 will be described later in detail.
  • the film scanner unit 9 reads an image recorded on a transparent original such as a developed negative film N or a reversal film captured by an analog camera.
  • the reflection document input device 10 reads an image formed on a print P (photo print, document, various prints) by a flatbed scanner (not shown).
  • the operation unit 11 has information input means 12.
  • the information input means 12 is composed of, for example, a touch panel or the like, and outputs a press signal of the information input means 12 to the control section 7 as an input signal.
  • the operation unit 11 may be configured to include a keyboard, a mouse, and the like.
  • the CRT 8 displays an image or the like according to the display control signal input from the control unit 7.
  • the image reading unit 14 includes an image transfer unit 30, reads an image recorded on the PC card 13a or the FD 13b, and transfers the image to the control unit 7.
  • the image transfer means 30 has a PC card adapter 14a, an FD adapter 14b, and the like.
  • the image reading section 14 reads an image recorded on the PC card 13a inserted into the PC card adapter 14a or on the FD 13b inserted into the FD adapter 14b, and transfers the read image to the control unit 7 using the image transfer means 30.
  • the image writing unit 15 includes an image transport unit 31, and the image transport unit 31 includes an FD adapter 15a, an MO adapter 15b, an optical disk adapter 15c, and the like.
  • according to the write signal input from the control unit 7, the image writing unit 15 writes various data to the FD 16a inserted into the FD adapter 15a, the MO 16b inserted into the MO adapter 15b, and the optical disk 16c inserted into the optical disk adapter 15c.
  • the communication means (input) 32 receives images, various commands, and the like from another computer in the facility where the image processing apparatus 1 is installed or a distant computer via the Internet or the like.
  • the communication means (output) 33 transmits an image, order information, and the like to another computer in the facility where the image processing apparatus 1 is installed, or to a remote computer via the Internet or the like.
  • the data storage unit 71 stores data such as images and order information (information on how many prints are to be made from which frame images, print size information, etc.).
  • the image processing unit 70 includes a film scan data processing unit 701, a reflection original scan data processing unit 702, an image data format decoding processing unit 703, an image adjustment processing unit 704 (corresponding to the acquiring means, edge extracting means, creating means, initial pixel extracting means, expanded-region extracting means, determining means, and processing means described in the claims), a CRT-specific processing unit 705, printer-specific processing units 706 and 707, and an image data format creation processing unit 708.
  • the film scan data processing unit 701 performs, on the image input from the film scanner unit 9, a calibration operation unique to the film scanner unit 9, negative-positive inversion in the case of a negative original, gray balance adjustment, contrast adjustment, and the like, and outputs the result to the image adjustment processing unit 704. The film scan data processing unit 701 also outputs to the image adjustment processing unit 704 the film size, the negative/positive type, the ISO (International Organization for Standardization) sensitivity optically or magnetically recorded on the film, the manufacturer name, information on the main subject, shooting-condition information (for example, APS (Advanced Photo System) information), and the like.
  • the reflection document scan data processing unit 702 performs a calibration operation unique to the reflection document input device 10, negative-positive inversion for a negative document, gray balance adjustment, and contrast adjustment for an image input from the reflection document input device 10. And outputs the result to the image adjustment processing unit 704.
  • the image data format decoding processing unit 703 performs restoration of compressed code, conversion of the color-data representation method, and the like, in accordance with the data format of the image signal input from the image transfer means 30 or the communication means (input) 32, and outputs the result to the image adjustment processing unit 704.
  • the image adjustment processing unit 704 performs various types of image processing on images input from the film scanner unit 9, the reflection original input device 10, the image transfer unit 30, and the communication unit (input) 32. In particular, the image adjustment processing unit 704 executes the image processing shown in the flowchart of FIG.
  • the image adjustment processing unit 704 sends the processed image to the CRT-specific processing unit 705, the printer-specific processing unit 706, the printer-specific processing unit 707, the image data format creation unit 708, and the data storage unit 71. Output.
  • the CRT-specific processing unit 705 performs processing such as changing the number of pixels and color matching on the image input from the image adjustment processing unit 704, and outputs the processed image to the CRT 8 together with various display information.
  • the printer-specific processing unit 706 performs printer-specific calibration processing, color matching, pixel-count conversion, and the like on the image-processed image signal input from the image adjustment processing unit 704, and outputs the result to the exposure processing section 4.
  • the image processing apparatus 1 of the present embodiment is provided with a printer-specific processing unit 707 corresponding to the external printer 34 such as an inkjet printer.
  • the printer-specific processing unit 707 performs an appropriate printer-specific calibration process, color matching, change of the number of pixels, and the like on the image input from the image adjustment processing unit 704.
  • the image data format creation processing unit 708 converts the image input from the image adjustment processing unit 704 into one of various general-purpose image formats, typified by JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), and Exif (Exchangeable image file format), and outputs the image to the image transport unit 31 and the communication means (output) 33.
  • the units constituting the control unit 7 (e.g., the film scan data processing unit 701, the reflection original scan data processing unit 702, the image data format decoding processing unit 703, the image adjustment processing unit 704, the CRT-specific processing unit 705, the printer-specific processing units 706 and 707, the image data format creation processing unit 708, etc.) do not necessarily have to be realized as physically independent devices; for example, they may be realized as distinct categories of software processing. Further, the image processing apparatus 1 is not limited to the contents described above and can be applied in various modes, such as a digital photo printer, a printer driver, and a plug-in for various image processing software.
  • image processing to which the present invention is applied will be described with reference to FIG.
  • the image processing described below is executed by the image adjustment processing unit 704.
  • when an original image is acquired via the film scanner unit 9, the reflection original input device 10, the image transfer means 30, or the communication means (input) 32 (step S1), edges of the original image are extracted (step S2), and a low-frequency image is created from the original image (step S3).
  • the luminance change rate ΔY between pixels is normalized by the luminance value Y of the target pixel (or, for example, by the average luminance of the pixels for which the change rate is calculated); the normalized value ΔY/Y is computed, and if it is equal to or greater than a specific threshold, the pixel is determined to be an edge, and information indicating the position of the edge pixel (for identifying the pixel) is stored in the data storage means 71 or in the internal memory of the control unit 7.
  • although image edge extraction can be performed using a known edge extraction filter, in the present embodiment it is preferable to extract image edges using the high-frequency components obtained by a binomial wavelet transform.
  • the generation of the low-frequency image can be performed using a known low-pass filter. In the present embodiment, it is preferable to use a low-frequency component obtained by the binomial-wavelet transform.
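As a sketch of the low-frequency image creation, repeated separable smoothing with the binomial kernel [1, 2, 1]/4 produces the kind of low-frequency component a binomial wavelet decomposition retains; the border clamping and the single-pass default below are assumptions for illustration, not the patent's specification.

```python
def binomial_smooth_1d(row):
    """One pass of the [1, 2, 1] / 4 binomial kernel along a row,
    clamping indices at the borders."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[max(i - 1, 0)]
        right = row[min(i + 1, n - 1)]
        out.append((left + 2 * row[i] + right) / 4)
    return out

def binomial_lowpass(image, passes=1):
    """Separable binomial smoothing: filter rows, then columns.
    Repeated passes approximate a Gaussian low-pass."""
    for _ in range(passes):
        image = [binomial_smooth_1d(r) for r in image]        # rows
        cols = [binomial_smooth_1d(c) for c in zip(*image)]   # columns
        image = [list(r) for r in zip(*cols)]                 # transpose back
    return image

# A single bright spike spreads into the classic 1-2-1 binomial profile.
img = [[0, 0, 0], [0, 16, 0], [0, 0, 0]]
print(binomial_lowpass(img))
# -> [[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]]
```

Running the region expansion on such a smoothed image is what suppresses the influence of granular noise mentioned later in the document.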
  • a skin color region is extracted from the low-frequency image created in step S3 by using the simple region extension method.
  • an initial pixel (consisting of one or more pixels) is specified from among the pixels of the image signal satisfying the extraction condition, and simple region expansion is started from the initial pixel (step S4).
  • not only the simple region expansion method but also other known expansion methods can be used.
  • the extraction condition may be set based on the image signal of a point (pixel) that the user specifies on the image with a mouse or the like, or it may be predetermined.
  • it is preferable that the initial pixel be selected based on conditions defining hue and saturation, that these conditions be changed depending on the type of light source at the time of photographing, and that the light-source type at the time of photographing be automatically determined by a known method.
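An extraction condition on hue and saturation of the kind described above might look like the following sketch. The HSV ranges are invented illustrative values, not the patent's; in practice they would be tuned per light-source type as the text suggests.

```python
import colorsys

def is_skin_candidate(r, g, b, hue_range=(0.0, 0.14), sat_range=(0.15, 0.7)):
    """Illustrative skin-pixel test: hue and saturation of the RGB value
    (each component 0-255) must fall inside the given ranges. The ranges
    are assumed values and would be adjusted per light-source type."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_range[0] <= h <= hue_range[1] and sat_range[0] <= s <= sat_range[1]

print(is_skin_candidate(220, 170, 140))  # warm flesh tone -> True
print(is_skin_candidate(40, 60, 200))    # blue            -> False
```

Passing different `hue_range`/`sat_range` pairs per assumed light source yields the multiple extraction conditions discussed later in the embodiment.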
  • from step S4, simple region expansion for extracting a skin color area is performed while sequentially comparing the difference in image signal between adjacent pixels (step S5). That is, it is determined whether the image signal of the adjacent pixel satisfies the extraction condition (step S6). If it does (step S6; Yes), it is further determined whether or not the adjacent pixel is an edge (step S7). If the adjacent pixel is not an edge (step S7; No), the adjacent pixel is included in the skin color area (step S8).
  • information for identifying the pixels at each position is sequentially stored in the data storage unit 71, the built-in memory of the control unit 7, or the like.
  • after step S8, the process returns to step S5, and the above processing is repeated.
  • if the image signal of the adjacent pixel does not satisfy the extraction condition in step S6 (step S6; No), or if the adjacent pixel is an edge in step S7 (step S7; Yes), the extraction of the skin color area using simple region expansion is ended.
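Steps S4 through S8 can be summarized in a sketch that combines the extraction-condition check (S6) with the edge check (S7); the boolean edge map, the predicate interface, and the toy data below are assumptions for illustration.

```python
from collections import deque

def grow_skin_region(low_freq, edges, seed, condition):
    """Steps S4-S8 in sketch form: grow from the initial pixel over the
    low-frequency image; a neighbor joins the region only if it satisfies
    the extraction condition (S6) and is not an edge pixel (S7), so
    reaching an edge pixel stops growth in that direction."""
    h, w = len(low_freq), len(low_freq[0])
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or (ny, nx) in region:
                continue
            if condition(low_freq[ny][nx]) and not edges[ny][nx]:
                region.add((ny, nx))          # step S8
                frontier.append((ny, nx))
    return region

# An edge column at x = 1 walls off the right side even though its
# pixels also satisfy the extraction condition.
low = [[100, 100, 100]] * 3
edge = [[False, True, False]] * 3
print(sorted(grow_skin_region(low, edge, (0, 0), lambda v: v > 50)))
# -> [(0, 0), (1, 0), (2, 0)]
```

The example shows the key difference from plain region expansion: the precomputed edge map, not just the pixel-difference condition, bounds the region.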
  • the eyes, the mouth, and the like are not extracted as part of the simply expanded skin-color area even though they are formed within it; however, since the eyes and mouth are included as closed areas within the skin color area, they can also be extracted by incorporating them into the area expanded by simple region expansion.
  • when extracting a desired image area from an original image, the image adjustment processing unit 704 first extracts edges and creates a low-frequency image from the original image; then the skin color region is expanded up to the edges on the low-frequency image using the simple region expansion method, and the skin color region is extracted.
  • in the simple region expansion, when a pixel corresponding to a previously extracted image edge is reached, the expansion is forcibly terminated, so that the desired skin color area can be extracted more appropriately.
  • since each face region is extracted using the simple region expansion method under each of the extraction conditions, a face region can be properly extracted even from an image for which the light-source type at the time of shooting cannot be specified, or from an image containing a plurality of faces each illuminated by a different light source.
  • the description in the present embodiment shows an example of the image area extracting method, the image area extracting apparatus, the image area extracting program, the image processing method, the image processing apparatus, and the image processing program according to the present invention.
  • the present invention is not limited to this.
  • the detailed configuration and detailed operation of the image processing device 1 according to the present embodiment can be appropriately changed without departing from the spirit of the present invention.
  • a process of determining whether or not the skin color region extracted in the image processing can be specified as a face may be added.
  • a known determination method such as a method using a neural network or pattern matching can be used.
  • image processing to be performed on the skin color area when it is determined to be a face includes processing such as color adjustment, grain removal, sharpness enhancement, and dynamic range adjustment.
  • the flesh-color area may be extracted from a reduced image obtained by reducing the size of the original image, and image processing may then be performed on the area of the original image corresponding to the flesh-color area extracted from the reduced image. Further, in the present embodiment, the case where simple region expansion is started from one initial pixel corresponding to one type of extraction condition has been described; however, the present invention is not limited to this.
  • simple region expansion may be performed independently from each of a plurality of initial pixels.
  • the processing from step S4 to step S8 in FIG. 4 is performed for each type of initial pixel, and a plurality of skin color regions corresponding to the number of initial-pixel types are extracted. Therefore, by applying a plurality of different extraction conditions and extracting each skin color region using the simple region expansion method under each condition, the flesh color region can be accurately extracted even from an image for which the light-source type cannot be specified, or from an image containing a plurality of faces each illuminated by a different light source.
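The multiple-extraction-condition variant described above can be sketched as one independent expansion per (initial pixel, condition) pair; the simple value-range conditions below stand in for the light-source-specific hue/saturation conditions and are purely illustrative.

```python
from collections import deque

def grow(image, seed, cond):
    """Minimal region growth from `seed` over 4-connected pixels
    satisfying `cond`."""
    h, w = len(image), len(image[0])
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
                    and cond(image[ny][nx]):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region

def extract_per_condition(image, seeds_and_conditions):
    """One independent expansion per (seed, condition) pair -- e.g. one
    condition per assumed light-source type -- yielding one candidate
    skin region per condition."""
    return [grow(image, s, c) for s, c in seeds_and_conditions]

# Two 'light sources': low values on the left, high values on the right.
img = [[30, 30, 0, 90, 90]]
regions = extract_per_condition(
    img, [((0, 0), lambda v: 20 <= v <= 40), ((0, 3), lambda v: 80 <= v <= 100)])
print([sorted(r) for r in regions])
# -> [[(0, 0), (0, 1)], [(0, 3), (0, 4)]]
```

Each candidate region would then be passed to the face-determination step independently.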
  • the image area to be extracted using the simple area expansion method has been described as the skin color area.
  • the present invention is not limited to this.
  • the image area to be extracted may be another area, such as the hair of the head; the present invention is also applicable in that case.
  • since the low-frequency image is used, the influence of granular noise and the like can be excluded, so that a desired image area can be properly extracted.
  • in region expansion, when a pixel corresponding to a previously extracted image edge is reached, the expansion is forcibly stopped, so that a desired image region can be extracted more appropriately.
  • since each face region is extracted under the respective extraction conditions, the face region can be properly extracted even from an image for which the light-source type at the time of shooting cannot be specified, or from an image containing a plurality of faces each illuminated by a different light source.

Abstract

There are provided an image area extraction method, an image area extraction device, an image area extraction program, an image processing method, an image processing device, and an image processing program for extracting an image area from an image. When extracting a desired image area from a source image, an image adjustment processing unit first extracts edges and creates a low-frequency image corresponding to the source image. From this low-frequency image, pixels satisfying a predetermined extraction condition are extracted as initial pixels, and an image area is extended from the initial pixels using the low-frequency image signal. When the image area reaches the image edge pixels, the extension is terminated, and the pixels of the extended image area are extracted.

Description

Specification

Image area extraction and image processing method, device, and program

Technical Field

[0001] The present invention relates to an image region extraction method, an image region extraction device, an image region extraction program, an image processing method, an image processing device, and an image processing program for extracting an image region from an image.
Background Art

[0002] Conventionally, a technique of photoelectrically reading an image formed on a color photographic film with a CCD (Charge Coupled Device) sensor or the like and converting it into an image signal has been widely used.

[0003] Such image signals are subjected to various kinds of image processing, such as negative/positive inversion, luminance adjustment, color balance adjustment, grain removal, and sharpness enhancement, and are then distributed via recording media such as CD-R (CD-Recordable), CD-RW (CD-Rewritable), FD (floppy (registered trademark) disk), and memory cards, or via the Internet. The distributed image signals are output as hard-copy images on silver halide photographic paper or with an ink-jet or thermal printer, or are displayed for viewing on a CRT (Cathode Ray Tube), liquid crystal display, plasma display, or the like.

[0004] In recent years, digital still cameras (including those built into devices such as mobile phones and personal computers; hereinafter abbreviated as DSCs) have also become widespread, and their images, like those of color photographic film, are output as hard-copy images or displayed on a CRT or the like for viewing.
[0005] When an image to be viewed contains a person's face, the face attracts the most attention at the time of viewing. Therefore, to output a high-quality image, the person's face must be given appropriate color, brightness, sharpness, noise characteristics, three-dimensionality, and so on.

[0006] At the time of shooting, however, the environment is generally not such that a person's face is captured as an image having appropriate color, brightness, sharpness, noise characteristics, three-dimensionality, and the like.

[0007] For example, in backlit shooting the face comes out dark, making it difficult to output a high-quality image. Shooting with an inexpensive camera or DSC uses a fixed focus, so the entire image appears to be in focus and the output image gives the person's face no sense of depth.

[0008] To solve such problems, a technique has been proposed in which a specific image region is extracted from an image using the simple region expansion method, whether the extracted image region is a main part (a face) is determined, and blurring is applied to the regions other than the main part (see Patent Document 1).

[0009] Here, the simple region expansion method is an image processing method that extracts an image region by treating mutually adjacent pixels whose data difference is at most a threshold as belonging to the same image region and expanding that region (see, for example, Non-Patent Document 1). That is, starting from an initial pixel that matches a specified condition, if the data difference between the initial pixel and an adjacent pixel (in either 8-connectivity or 4-connectivity) is at most the threshold, the adjacent pixel is treated as belonging to the same image region as the initial pixel; the same test is then applied to the pixels adjacent to each pixel judged to belong to that region, so that the region is gradually expanded outward from the initial pixel.
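The simple region expansion method described in paragraph [0009] can be sketched as follows. This is an illustrative 4-connected implementation with hypothetical names, not code from the patent; a neighbor joins the region when its value differs from the adjacent region pixel by at most the threshold.

```python
from collections import deque

def grow_region(image, seed, threshold):
    """Simple region expansion: starting from `seed`, a 4-connected
    neighbour joins the region when its value differs from the
    adjacent region pixel by at most `threshold`."""
    h, w = len(image), len(image[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - image[y][x]) <= threshold):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Values 10-14 form one smooth patch; the jump to ~50 halts the expansion.
img = [[10, 12, 50, 52],
       [11, 13, 51, 53],
       [10, 14, 49, 50]]
region = grow_region(img, (0, 0), threshold=5)
```

As the document goes on to discuss, granular noise can break such a region apart (a noisy pixel exceeds the threshold), which is why the invention runs the expansion on a low-frequency image instead of the source image.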
Patent Document 1: Japanese Patent Application Laid-Open No. 2001-57630
Non-Patent Document 1: Mikio Takagi and Haruhisa Shimoda (supervising eds.), "Image Analysis Handbook", first edition, University of Tokyo Press, January 17, 1991.
However, as a result of detailed examination, the present inventors have concluded that with an image region extraction method such as that disclosed in Patent Document 1, it is difficult to accurately extract a desired image region from an image and to perform appropriate image processing on the extracted image region.
[0010] For example, when the image signal has been generated by reading a color photographic film with a CCD or the like, the granular noise of the photographic film can make the data difference between adjacent pixels larger than the threshold, making it difficult to accurately extract the desired image region.

[0011] Even when the image signal has been generated by a relatively inexpensive DSC or the like, accurately extracting the desired image region is difficult. Such a DSC has a short image-sensor pixel pitch, so sensitivity is low and shot noise arises easily, as does dark current noise caused by heating of the image sensor; in a DSC equipped with a CMOS image sensor, noise due to leak current from the sensor also arises easily. When such noise passes through color filter array interpolation and edge enhancement processing, mottle-like granular unevenness tends to form. Because of this granular unevenness, the data difference between adjacent pixels can exceed the threshold, making accurate extraction of the image region difficult.

[0012] Even if the threshold is set to a larger value to solve the above problem, inconveniences arise, such as the image region extracted with that threshold becoming larger than the desired image region, so it remains difficult to accurately extract the desired region.

[0013] Furthermore, if the extraction of the image region is not accurate, appropriate image processing may not be possible. That is, when image processing is applied to a specific image region, accurate extraction of that region is a prerequisite; unless the region has been accurately extracted, the desired effect is unlikely to be obtained even if the processing is applied.
Disclosure of the Invention

[0014] An object of the present invention is to appropriately extract a desired image area from an image so that appropriate image processing can be performed on the extracted image area.
[0015] To solve the above problems, the embodiment described in Item 1 is characterized by including:
a step of acquiring an image signal composed of signals of a plurality of pixels;
a step of extracting, from the plurality of pixels, pixels corresponding to image edges;
a step of creating a low-frequency image signal from the image signal;
a step of extracting, from the plurality of pixels, pixels satisfying a predetermined extraction condition as initial pixels; and
a step of expanding an image area from the initial pixels using the low-frequency image signal and, when the image area reaches an image edge pixel during the expansion, stopping the expansion and extracting the pixels of the expanded image area.
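As an illustration of the low-frequency image step above, a simple box filter is one way to low-pass an image. The document does not fix a particular low-pass filter, so the choice and names below are only an assumed sketch.

```python
def box_blur(image, radius=1):
    """Create a low-frequency version of a 2-D list `image` by
    replacing each pixel with the mean of the box of side
    (2*radius + 1) around it, clipped at the image border."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A single noisy spike of 9 is spread out and attenuated.
low = box_blur([[0, 0, 0],
                [0, 9, 0],
                [0, 0, 0]], radius=1)
```

Running the region expansion on such a low-frequency image, rather than the raw signal, is what keeps isolated granular noise from breaking the expansion.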
[0016] Further, as in Item 2, in the embodiment of Item 1, in the step of extracting pixels corresponding to image edges, it is preferable that the luminance change rate of the image signal between pixels be normalized by either the luminance value of the target pixel or the average luminance of the pixels used to calculate the change rate, and that, when the normalized value is equal to or greater than a predetermined threshold, the pixel to be expanded be extracted as a pixel corresponding to an image edge.
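The normalization described in Item 2 can be sketched as follows. Item 2 allows normalizing by either the target pixel's luminance or an average luminance; this sketch uses the pairwise mean, and the threshold value is an illustrative assumption.

```python
def is_edge_pixel(lum_a, lum_b, threshold=0.3):
    """Flag an edge between two adjacent pixels when the luminance
    change, normalized by the mean luminance of the pair, is at
    least `threshold`.  Dividing by the local luminance makes the
    test respond to relative contrast rather than absolute level."""
    mean = (lum_a + lum_b) / 2.0
    if mean == 0:
        return False  # flat black: no edge
    return abs(lum_a - lum_b) / mean >= threshold

# The same absolute step of 40 counts as an edge in a dark area
# (ratio 40/60) but not in a bright one (ratio 40/220).
dark_edge = is_edge_pixel(40, 80)
bright_edge = is_edge_pixel(200, 240)
```

The design point is that an un-normalized gradient threshold would treat both steps identically, while the normalized rate matches the eye's greater sensitivity to contrast in dark regions.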
[0017] Further, as in Item 3, in the embodiment of Item 1 or 2, the extraction condition is preferably a condition for extracting pixels representing the skin of a person.
[0018] Further, to solve the above problems, the embodiment described in Item 10 is characterized by including:
a step of acquiring an image signal composed of signals of a plurality of pixels;
a step of extracting, from the plurality of pixels, pixels corresponding to image edges;
a step of creating a low-frequency image signal from the image signal;
a step of extracting, from the plurality of pixels, pixels satisfying an extraction condition for extracting pixels representing the skin of a person as initial pixels;
a step of expanding an image area from the initial pixels using the low-frequency image signal and, when the image area reaches an image edge pixel during the expansion, stopping the expansion and extracting the pixels of the expanded image area;
a step of determining whether the expanded image area represents a person's face; and
a step of performing predetermined image processing on an image area determined to represent a person's face.
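The distinctive expansion step of Item 10 — growing the region over the low-frequency image but forcibly stopping at pre-extracted edge pixels — can be sketched as below. The function and variable names are hypothetical, and the toy edge mask stands in for the output of the edge extraction step.

```python
from collections import deque

def grow_until_edges(low_freq, edge_mask, seed, threshold):
    """Expand a region from `seed` over the low-frequency image, but
    never step onto a pixel flagged True in `edge_mask`: reaching an
    edge pixel forcibly stops the expansion in that direction."""
    h, w = len(low_freq), len(low_freq[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and not edge_mask[ny][nx]
                    and abs(low_freq[ny][nx] - low_freq[y][x]) <= threshold):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# A uniform image with an edge wall in column 2: even though every
# value passes the threshold test, the expansion halts at the wall.
flat = [[10] * 4 for _ in range(3)]
wall = [[False, False, True, False] for _ in range(3)]
face_region = grow_until_edges(flat, wall, (0, 0), threshold=5)
```

In this sketch the threshold test alone would flood the whole image; the edge mask is what confines the region, mirroring the claim that expansion stops when an image edge pixel is reached.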
[0019] Further, as in Item 11, in the embodiment of Item 10, it is preferable that:
the step of extracting the initial pixels be a step of extracting, as initial pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing the skin of a person;
the step of extracting the pixels of the expanded image area be a step of expanding an image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches an image edge pixel during its expansion, stopping that expansion and extracting the pixels of each expanded image area; and
the step of determining whether a person's face is represented be a step of determining whether each expanded image area represents a person's face.

[0020] Further, as in Item 12, in the embodiment of Item 10 or 11, in the step of extracting pixels corresponding to image edges, it is preferable that the luminance change rate of the image signal between pixels be normalized by either the luminance value of the target pixel or the average luminance of the pixels used to calculate the change rate, and that, when the normalized value is equal to or greater than a predetermined threshold, the pixel to be expanded be extracted as a pixel corresponding to an image edge.
Brief Description of the Drawings

[0021] [FIG. 1] FIG. 1 is a perspective view showing the external configuration of an image processing apparatus to which the present invention is applied.
[FIG. 2] FIG. 2 is a block diagram showing the internal structure of the image processing apparatus to which the present invention is applied.
[FIG. 3] FIG. 3 is a block diagram mainly showing the internal configuration of the image processing unit shown in FIG. 2.
[FIG. 4] FIG. 4 is a flowchart explaining the contents of image processing to which the present invention is applied.

Best Mode for Carrying Out the Invention
[0022] Other preferred embodiments of the present invention are described below.
[0023] The embodiment described in Item 4 is characterized by comprising:
acquisition means for acquiring an image signal composed of signals of a plurality of pixels;
edge extraction means for extracting, from the plurality of pixels, pixels corresponding to image edges;
creation means for creating a low-frequency image signal from the image signal;
initial pixel extraction means for extracting, from the plurality of pixels, pixels satisfying a predetermined extraction condition as initial pixels; and
expanded-area extraction means for expanding an image area from the initial pixels using the low-frequency image signal and, when the image area reaches an image edge pixel during the expansion, stopping the expansion and extracting the pixels of the expanded image area.

[0024] Further, as in Item 5, in the embodiment of Item 4, it is preferable that the edge extraction means normalize the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels used to calculate the change rate and, when the normalized value is equal to or greater than a predetermined threshold, extract the pixel to be expanded as a pixel corresponding to an image edge.

[0025] Further, as in Item 6, in the embodiment of Item 4 or 5, the extraction condition is preferably a condition for extracting pixels representing the skin of a person.
[0026] The embodiment described in Item 7 causes a computer that performs image processing to realize:
a function of acquiring an image signal composed of signals of a plurality of pixels;
a function of extracting, from the plurality of pixels, pixels corresponding to image edges;
a function of creating a low-frequency image signal from the image signal;
a function of extracting, from the plurality of pixels, pixels satisfying a predetermined extraction condition as initial pixels; and
a function of expanding an image area from the initial pixels using the low-frequency image signal and, when the image area reaches an image edge pixel during the expansion, stopping the expansion and extracting the pixels of the expanded image area.

[0027] Further, as in Item 8, in the embodiment of Item 7, it is preferable that the computer be further caused to realize a function of normalizing the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels used to calculate the change rate and, when the normalized value is equal to or greater than a predetermined threshold, extracting the pixel to be expanded as a pixel corresponding to an image edge.

[0028] Further, as in Item 9, in the embodiment of Item 7 or 8, the extraction condition is preferably a condition for extracting pixels representing the skin of a person.
[0029] The embodiment described in Item 13 is characterized by comprising:
acquisition means for acquiring an image signal composed of signals of a plurality of pixels;
edge extraction means for extracting, from the plurality of pixels, pixels corresponding to image edges;
creation means for creating a low-frequency image signal from the image signal;
initial pixel extraction means for extracting, from the plurality of pixels, pixels satisfying an extraction condition for extracting pixels representing the skin of a person as initial pixels;
expanded-area extraction means for expanding an image area from the initial pixels using the low-frequency image signal and, when the image area reaches an image edge pixel during the expansion, stopping the expansion and extracting the pixels of the expanded image area;
determination means for determining whether the expanded image area represents a person's face; and
processing means for performing predetermined image processing on an image area determined to represent a person's face.

[0030] Further, as in Item 14, in the embodiment of Item 13, it is preferable that:
the means for extracting the initial pixels be initial pixel extraction means for extracting, as initial pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing the skin of a person;
the means for extracting the pixels of the expanded image area be expanded-area extraction means for expanding an image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches an image edge pixel during its expansion, stopping that expansion and extracting the pixels of each expanded image area; and
the means for determining whether a person's face is represented be determination means for determining whether each expanded image area represents a person's face.

[0031] Further, as in Item 15, in the embodiment of Item 13 or 14, it is preferable that the edge extraction means normalize the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels used to calculate the change rate and, when the normalized value is equal to or greater than a predetermined threshold, extract the pixel to be expanded as a pixel corresponding to an image edge.
[0032] The embodiment described in Item 16 causes a computer that performs image processing to realize:
a function of acquiring an image signal composed of signals of a plurality of pixels;
a function of extracting, from the plurality of pixels, pixels corresponding to image edges;
a function of creating a low-frequency image signal from the image signal;
a function of extracting, from the plurality of pixels, pixels satisfying an extraction condition for extracting pixels representing the skin of a person as initial pixels;
a function of expanding an image area from the initial pixels using the low-frequency image signal and, when the image area reaches an image edge pixel during the expansion, stopping the expansion and extracting the pixels of the expanded image area;
a function of determining whether the expanded image area represents a person's face; and
a function of performing predetermined image processing on an image area determined to represent a person's face.

[0033] Further, as in Item 17, in the embodiment of Item 16, it is preferable that:
the function of extracting the initial pixels be a function of extracting, as initial pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing the skin of a person;
the function of extracting the pixels of the expanded image area be a function of expanding an image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches an image edge pixel during its expansion, stopping that expansion and extracting the pixels of each expanded image area; and
the function of determining whether a person's face is represented be a function of determining whether each expanded image area represents a person's face.

[0034] Further, as in Item 18, in the embodiment of Item 16 or 17, it is preferable that the computer be further caused to realize a function of normalizing the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels used to calculate the change rate and, when the normalized value is equal to or greater than a predetermined threshold, extracting the pixel to be expanded as a pixel corresponding to an image edge.
[0035] An embodiment to which the present invention is applied will now be described in detail with reference to the drawings.

〈External Configuration of the Image Processing Apparatus 1〉

First, the external configuration of the image processing apparatus 1 is described with reference to FIG. 1.
[0036] As shown in FIG. 1, the image processing apparatus 1 has a magazine loading section 3 for loading photosensitive material on one side of a housing 2. Inside the housing 2 are an exposure processing section 4 for exposing the photosensitive material and a print creating section 5 for developing and drying the exposed photosensitive material to create prints. On the other side of the housing 2, a tray 6 is provided for discharging the prints created by the print creating section 5.

[0037] On top of the housing 2 are a CRT 8 serving as a display device, a film scanner section 9 for reading transparent originals, a reflective original input device 10, and an operation section 11. The housing 2 is further provided with an image reading section 14 for reading images recorded on various digital recording media and an image writing section 15 for writing (outputting) image signals to various digital recording media. Inside the housing 2, a control section 7 is provided that centrally controls the sections constituting the image processing apparatus 1.

[0038] The image reading section 14 is provided with a PC card adapter 14a and an FD adapter 14b, into which a PC card 13a and an FD 13b can be inserted.

[0039] The image writing section 15 is provided with an FD adapter 15a, an MO (Magneto-Optical) adapter 15b, and an optical disk adapter 15c, into which an FD 16a, an MO 16b, and an optical disk 16c can be inserted, respectively. Optical disks 16c include CD-R, DVD-R (Digital Versatile Disk-Recordable), and DVD-RW (DVD-Rewritable).

[0040] In FIG. 1, the operation section 11, CRT 8, film scanner section 9, reflective original input device 10, and image reading section 14 are built into the housing 2, but any of them may be provided as a separate unit.

[0041] The image processing apparatus 1 shown in FIG. 1 creates prints by exposing and developing photosensitive material, but the print creation method is not limited to this; for example, an ink-jet, electrophotographic, thermal, or sublimation method may be used.

〈Internal Configuration of the Image Processing Apparatus 1〉
次に、図 2を参照して、画像処理装置 1の内部構造を説明する。画像処理装置 1は 、図 2に示すように、制御部 7、露光処理部 4、プリント生成部 5、フィルムスキャナ部 9 、反射原稿入力装置 10、画像読込部 14、通信手段 (入力) 32、画像書込部 15、デ ータ蓄積手段 71、操作部 11、 CRT8、通信手段(出力) 33を備える。  Next, an internal structure of the image processing apparatus 1 will be described with reference to FIG. As shown in FIG. 2, the image processing apparatus 1 includes a control unit 7, an exposure processing unit 4, a print generation unit 5, a film scanner unit 9, a reflection document input device 10, an image reading unit 14, a communication unit (input) 32, It has an image writing unit 15, a data storage unit 71, an operation unit 11, a CRT 8, and a communication unit (output) 33.
[0042] 露光処理部 4は、感光材料に画像の露光を行い、この感光材料をプリント作成部 5 に出力する。プリント作成部 5は、露光された感光材料を現像処理して乾燥し、プリン 卜 Pl、 P2、 P3を作成する。  The exposure processing section 4 exposes the photosensitive material to an image, and outputs the photosensitive material to the print creating section 5. The print creating section 5 develops the exposed photosensitive material and dries it to create prints Pl, P2 and P3.
[0043] 制御部 7は、マイクロコンピュータにより構成され、 ROM (Read Only Memory) 等(図示略)に記憶されてレ、る画像処理プログラム等の各種制御プログラムと、 CPU ( Central Processing Unit) (図示略)との協働により、画像処理装置 1を構成する 各部の動作を統括的に制御する。  The control unit 7 includes a microcomputer, and various control programs such as an image processing program stored in a ROM (Read Only Memory) or the like (not shown) and a CPU (Central Processing Unit) (not shown). In cooperation with (abbreviation), the operation of each unit constituting the image processing apparatus 1 is controlled in a comprehensive manner.
[0044] 制御部 7は、画像処理部 70を有し、操作部 11からの入力信号 (指令情報)に基づ いて、フィルムスキャナ部 9、反射原稿入力装置 10、画像読込部 14、外部機器から 通信手段 (入力) 32から入力された各画像に対して露光用の画像を形成し、露光処 理部 4に出力する。画像処理部 70については後に詳述する。  The control unit 7 includes an image processing unit 70, and based on an input signal (command information) from the operation unit 11, a film scanner unit 9, a reflection document input device 10, an image reading unit 14, an external device The communication unit (input) forms an image for exposure for each image input from the input unit 32 and outputs the image to the exposure processing unit 4. The image processing unit 70 will be described later in detail.
[0045] The film scanner unit 9 reads images recorded on a transparent original, such as a developed negative film N or a reversal film, captured by an analog camera.
[0046] The reflection document input device 10 reads images formed on prints P (photographic prints, documents, various printed matter) with a flatbed scanner (not shown).
[0047] The operation unit 11 has information input means 12. The information input means 12 is constituted by, for example, a touch panel, and outputs its press signals to the control unit 7 as input signals. The operation unit 11 may also be configured to include a keyboard, a mouse, and the like. The CRT 8 displays images and other information according to display control signals input from the control unit 7.
[0048] The image reading unit 14 has image transfer means 30; it reads images recorded on a PC card 13a or an FD 13b and transfers them to the control unit 7. The image transfer means 30 has a PC card adapter 14a, an FD adapter 14b, and the like.

[0049] The image reading unit 14 reads images recorded on the PC card 13a inserted into the PC card adapter 14a or on the FD 13b inserted into the FD adapter 14b, and transfers the read images to the control unit 7 using the image transfer means 30.
[0050] The image writing unit 15 includes an image conveying unit 31, which includes an FD adapter 15a, an MO adapter 15b, an optical disk adapter 15c, and the like. According to write signals input from the control unit 7, the image writing unit 15 writes various data to an FD 16a inserted into the FD adapter 15a, an MO 16b inserted into the MO adapter 15b, or an optical disk 16c inserted into the optical disk adapter 15c.
[0051] The communication means (input) 32 receives images, various commands, and the like from another computer within the facility where the image processing apparatus 1 is installed, or from a remote computer via the Internet or the like.
[0052] The communication means (output) 33 transmits images, order information, and the like to another computer within the facility where the image processing apparatus 1 is installed, or to a remote computer via the Internet or the like.
[0053] The data storage means 71 stores data such as images and order information (information on how many prints are to be made from which frame images, print size information, and so on).
<Configuration of the image processing unit 70>
Next, the configuration of the image processing unit 70 will be described with reference to FIG. 3. As shown in FIG. 3, the image processing unit 70 comprises a film scan data processing unit 701, a reflection original scan data processing unit 702, an image data format decoding processing unit 703, an image adjustment processing unit 704 (corresponding to each of the acquisition means, edge extraction means, creation means, initial pixel extraction means, post-growth region extraction means, determination means, and processing means recited in the claims), a CRT-specific processing unit 705, a printer-specific processing unit 706, a printer-specific processing unit 707, and an image data format creation processing unit 708.
[0054] The film scan data processing unit 701 applies, to images input from the film scanner unit 9, calibration operations specific to the film scanner unit 9, negative-positive inversion in the case of a negative original, gray balance adjustment, contrast adjustment, and the like, and outputs the result to the image adjustment processing unit 704. The film scan data processing unit 701 also outputs to the image adjustment processing unit 704 the film size, the negative/positive type, the ISO (International Organization for Standardization) sensitivity recorded optically or magnetically on the film, the manufacturer name, information on the main subject, information on shooting conditions (for example, the information content recorded by APS (Advanced Photo System)), and so on.
[0055] The reflection original scan data processing unit 702 applies, to images input from the reflection document input device 10, calibration operations specific to the reflection document input device 10, negative-positive inversion in the case of a negative original, gray balance adjustment, contrast adjustment, and the like, and outputs the result to the image adjustment processing unit 704.
[0056] The image data format decoding processing unit 703 performs decompression of compression codes, conversion of the color data representation method, and the like according to the data format of image signals input from the image transfer means 30 or the communication means (input) 32, and outputs the result to the image adjustment processing unit 704.
[0057] The image adjustment processing unit 704 applies various kinds of image processing to images input from the film scanner unit 9, the reflection document input device 10, the image transfer means 30, and the communication means (input) 32. In particular, the image adjustment processing unit 704 executes the image processing shown in the flowchart of FIG. 4.
[0058] The image adjustment processing unit 704 outputs the processed image to the CRT-specific processing unit 705, the printer-specific processing unit 706, the printer-specific processing unit 707, the image data format creation processing unit 708, and the data storage means 71.
[0059] The CRT-specific processing unit 705 applies processing such as pixel count change and color matching to images input from the image adjustment processing unit 704, and outputs them to the CRT 8 together with various display information.
[0060] The printer-specific processing unit 706 performs printer-specific calibration processing, color matching, pixel count change, and the like on the processed image signals input from the image adjustment processing unit 704, and outputs them to the exposure processing unit 4.
[0061] The image processing apparatus 1 of the present embodiment is provided with a printer-specific processing unit 707 corresponding to an external printer 34 such as an inkjet printer. The printer-specific processing unit 707 performs appropriate printer-specific calibration processing, color matching, pixel count change, and the like on images input from the image adjustment processing unit 704.
[0062] The image data format creation processing unit 708 converts images input from the image adjustment processing unit 704 into various general-purpose image formats, typified by JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), and Exif (Exchangeable image file format), and outputs them to the image conveying unit 31 and the communication means (output) 33.
[0063] The above units of the control unit 7 (for example, the film scan data processing unit 701, the reflection original scan data processing unit 702, the image data format decoding processing unit 703, the image adjustment processing unit 704, the CRT-specific processing unit 705, the printer-specific processing units 706 and 707, the image data format creation processing unit 708, and so on) need not necessarily be realized as physically independent devices; they may represent functions of a program executed by the CPU of the control unit 7. Moreover, the image processing apparatus 1 is not limited to the above-described configuration, and can be applied in various forms, such as a digital photo printer, a printer driver, or a plug-in for various kinds of image processing software.
[0064] Next, image processing to which the present invention is applied will be described with reference to FIG. 4. The image processing described below is executed by the image adjustment processing unit 704.
[0065] First, when an original image is acquired via the film scanner unit 9, the reflection document input device 10, the image transfer means 30, the communication means (input) 32, or the like (step S1), the edges of the original image are extracted (step S2), and a low-frequency image is created from the original image (step S3).
[0066] Here, when extracting edges, expressing the luminance of the image signal as Y, the value ΔY/Y is calculated by normalizing the luminance change rate ΔY by Y (the luminance value of the target pixel, or the average luminance of the pixels used in calculating the change rate, etc.). When the calculated value ΔY/Y is equal to or greater than a specific threshold, the pixel is determined to be an edge, and information representing the position of the edge pixel (information for identifying the pixel) is stored in the data storage means 71, the internal memory of the control unit 7, or the like.
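As a concrete illustration, the normalized change-rate test ΔY/Y ≥ threshold described above can be sketched as follows. The forward-difference neighborhood used for ΔY and the default threshold value are illustrative assumptions; the text specifies only that ΔY normalized by Y is compared against a specific threshold.

```python
import numpy as np

def edge_mask(luma: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag pixels whose normalized luminance change rate deltaY/Y meets the threshold.

    Sketch of the criterion above; the forward-difference neighborhood and the
    default threshold are illustrative assumptions, not values from the text.
    """
    y = luma.astype(float) + 1e-6                            # guard against division by zero
    dy_right = np.abs(np.diff(y, axis=1, append=y[:, -1:]))  # change toward right neighbor
    dy_down = np.abs(np.diff(y, axis=0, append=y[-1:, :]))   # change toward lower neighbor
    rate = np.maximum(dy_right, dy_down) / y                 # deltaY / Y, normalized by the target pixel
    return rate >= threshold

# a flat region with one vertical brightness step
img = np.full((4, 4), 100.0)
img[:, 2:] = 200.0
mask = edge_mask(img, threshold=0.5)
# only the column just left of the step has deltaY/Y = 1.0
```

The positions where `mask` is true would then be stored as the edge-pixel identification information mentioned above.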
[0067] Image edge extraction can also be performed using a known edge extraction filter, but in the present embodiment it is preferably performed using the high-frequency components obtained by a binomial wavelet transform, which is advantageous for extracting proper image edges. Similarly, the low-frequency image can be generated using a known low-pass filter, but in the present embodiment it is preferably generated using the low-frequency components obtained by the binomial wavelet transform.
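As a minimal stand-in for the wavelet low-frequency component mentioned above, repeated separable 1-2-1 (binomial) smoothing produces a comparable low-frequency image. This is only an illustrative low-pass filter, not the binomial wavelet transform the text prefers.

```python
import numpy as np

def lowfreq_image(img: np.ndarray, passes: int = 2) -> np.ndarray:
    """Separable 1-2-1 binomial smoothing as a stand-in low-pass filter.

    Not the binomial wavelet transform preferred in the text, only a simple
    low-pass filter producing a comparable low-frequency image.
    """
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        h = (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:]) / 4.0    # horizontal 1-2-1 pass
        out = (h[:-2, :] + 2.0 * h[1:-1, :] + h[2:, :]) / 4.0  # vertical 1-2-1 pass
    return out

rng = np.random.default_rng(0)
noisy = 128.0 + 10.0 * rng.standard_normal((64, 64))  # grain-like noise on a flat field
smooth = lowfreq_image(noisy)
# the grain is attenuated while the mean level is preserved
```

Performing the region growing of the following steps on `smooth` rather than on the original signal is what excludes the influence of granular noise.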
[0068] Next, a skin color region is extracted from the low-frequency image created in step S3 using the simple region growing method.
[0069] That is, using one extraction condition for identifying and extracting image signals representing skin color, an initial pixel (consisting of one or more pixels) is specified from among the pixels whose image signals satisfy the extraction condition, and simple region growing is started from the initial pixel (step S4). The method is not limited to simple region growing; any known region growing method can be used.
[0070] Here, it is assumed that the above extraction condition is set in advance and stored in the data storage means 71, the internal memory of the control unit 7, or the like.
[0071] This extraction condition may be set based on the image signal of a pixel that the user designates on the image with a mouse or the like, or may be set based on predetermined RGB values. Since the skin color of a person has a hue and saturation within a certain range under substantially the same light source, the initial pixel is preferably selected based on a condition that specifies hue and saturation. The condition specifying hue and saturation is preferably changed according to the light source type at the time of shooting, and the light source type at the time of shooting is preferably determined automatically by a known method.
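A seed-selection rule of the kind just described, with hue and saturation confined to fixed ranges, might look like the following. The numeric bounds are illustrative assumptions only, since the text says suitable bounds depend on the light source type at the time of shooting.

```python
import colorsys

def skin_seed_indices(rgb_pixels, hue_range=(0.0, 0.1), sat_range=(0.2, 0.6)):
    """Return indices of pixels usable as initial pixels for skin color.

    The hue/saturation bounds are illustrative assumptions; the text states
    only that seeds satisfy a hue/saturation condition, preferably chosen
    per light source type.
    """
    seeds = []
    for idx, (r, g, b) in enumerate(rgb_pixels):
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if hue_range[0] <= h <= hue_range[1] and sat_range[0] <= s <= sat_range[1]:
            seeds.append(idx)
    return seeds

pixels = [(220, 180, 150), (0, 0, 255), (128, 128, 128)]  # skin-like, blue, gray
seeds = skin_seed_indices(pixels)
# only the skin-like pixel qualifies as an initial pixel
```

The same function could be run with a different `hue_range`/`sat_range` pair per detected light source, matching the preference stated above.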
[0072] After step S4, simple region growing for extracting the skin color region is performed while sequentially comparing the differences in image signal between adjacent pixels (step S5). That is, it is determined whether the image signal of an adjacent pixel satisfies the extraction condition (step S6). If the image signal of the adjacent pixel satisfies the extraction condition (step S6; Yes), it is further determined whether the adjacent pixel is an edge (step S7). If the adjacent pixel is not an edge (step S7; No), the adjacent pixel is included in the skin color region (step S8). Information representing the position of each pixel included in the skin color region (information for identifying the pixel) is sequentially stored in the data storage means 71, the internal memory of the control unit 7, or the like.
[0073] After step S8, the process returns to step S5 and the above processing is repeated. In the course of this, when the image signal of an adjacent pixel no longer satisfies the extraction condition in step S6 (step S6; No), or when the adjacent pixel is an edge in step S7 (step S7; Yes), the skin color region extraction using simple region growing is terminated.
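Steps S4 to S8 amount to a breadth-first growth from the seed in which a neighbor is added only if it satisfies the extraction condition (S6) and is not an edge pixel (S7). The flowchart describes reaching an edge as terminating the growth; the sketch below realizes the same stopping behavior by never crossing edge pixels. The boolean masks are a simplification standing in for the actual adjacent-pixel signal comparisons.

```python
from collections import deque

def grow_region(condition, edge, seeds):
    """4-connected simple region growing that stops at edge pixels.

    `condition[y][x]` models step S6 (extraction condition satisfied) and
    `edge[y][x]` models step S7 (pre-extracted image edge); both are sketches
    standing in for the adjacent-pixel signal comparisons in the text.
    """
    h, w = len(condition), len(condition[0])
    region = set(seeds)
    queue = deque(seeds)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if condition[ny][nx] and not edge[ny][nx]:  # steps S6 and S7
                    region.add((ny, nx))                    # step S8
                    queue.append((ny, nx))
    return region

cond = [[True] * 5 for _ in range(5)]
edge = [[x == 2 for x in range(5)] for _ in range(5)]  # vertical edge at column 2
region = grow_region(cond, edge, [(0, 0)])
# growth fills columns 0 and 1 only; the edge column blocks expansion rightward
```

The returned pixel set corresponds to the position information accumulated in the data storage means during steps S5 to S8.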
[0074] Here, eyes, the mouth, and so on are not extracted as part of the region obtained by simple region growing, even though they lie within the skin color region. On the other hand, the eyes and mouth are contained as closed regions inside the skin color region. Therefore, when a closed region is contained within the region obtained by simple region growing, it is preferable to process the closed region so that it is included in the skin color region; this makes it possible to extract the eyes, mouth, and so on as part of the grown skin color region.
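One way to realize the closed-region handling just described is to flood-fill the complement of the grown region from the image border: any non-region pixel the flood cannot reach is an enclosed hole (an eye or mouth) and is merged into the region. The flood-fill approach is an assumption of this sketch; the text only requires that enclosed closed regions be included.

```python
from collections import deque

def fill_closed_regions(region, shape):
    """Merge holes enclosed by `region` (a set of (y, x) pixels) into it.

    Flood-fills the non-region pixels from the image border; anything the
    flood cannot reach is a closed region (e.g. an eye or mouth) and is added.
    """
    h, w = shape
    border = [(y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and (y, x) not in region]
    outside = set(border)
    queue = deque(border)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in outside and (ny, nx) not in region):
                outside.add((ny, nx))
                queue.append((ny, nx))
    holes = {(y, x) for y in range(h) for x in range(w)
             if (y, x) not in region and (y, x) not in outside}
    return region | holes

ring = {(y, x) for y in range(1, 4) for x in range(1, 4)} - {(2, 2)}  # ring with a hole
filled = fill_closed_regions(ring, (5, 5))
# the enclosed pixel (2, 2) is merged into the region
```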
[0075] As described above, when extracting a skin color region from an original image, the image adjustment processing unit 704 first extracts the edges and creates a low-frequency image from the original image. It then extracts the skin color region by growing it over the low-frequency image, using the simple region growing method, until the edges are reached.
[0076] Accordingly, since simple region growing is performed on the low-frequency image, the influence of granular noise and the like can be excluded, so the desired skin color region can be extracted properly.
[0077] Furthermore, during simple region growing, the growth is forcibly terminated upon reaching a pixel corresponding to a previously extracted image edge, so the desired skin color region can be extracted even more properly.
[0078] Furthermore, since a plurality of different extraction conditions are applied and each face region is extracted using the simple region growing method based on each extraction condition, a face region can be extracted properly even from, for example, an image for which the light source type at the time of shooting cannot be identified, or an image that contains a plurality of faces each illuminated by a different light source.
[0079] The description in the present embodiment shows one example of the image region extraction method, image region extraction device, image region extraction program, image processing method, image processing device, and image processing program according to the present invention, and the invention is not limited thereto. The detailed configuration and detailed operation of the image processing apparatus 1 in the present embodiment can be changed as appropriate without departing from the spirit of the present invention.
[0080] For example, a process of determining whether the skin color region extracted by the image processing shown in the flowchart of FIG. 4 can be identified as a face may be added to that image processing. As the method of determining whether the extracted skin color region is a face, known determination methods such as those using a neural network or pattern matching can be used. When the region is determined to be a face, the image processing applied to the skin color region includes color adjustment, grain removal, sharpness enhancement, dynamic range adjustment, and the like. In particular, to shorten the skin color region extraction time, the skin color region may be extracted from a reduced image obtained by reducing the image size of the original image, and the image processing may then be applied to the region of the original image corresponding to the extracted skin color region of the reduced image.

[0081] In the present embodiment, the case where simple region growing is started from one initial pixel corresponding to one type of extraction condition has been described; however, the method is not limited to this, and simple region growing may be performed independently from a plurality of initial pixels corresponding to a plurality of types of extraction conditions. In this case, the processing from step S4 to step S8 in FIG. 4 is performed for each type of initial pixel, and a plurality of skin color regions corresponding to the number of initial pixel types are extracted. Accordingly, since a plurality of different extraction conditions are applied and each skin color region is extracted using the simple region growing method for each extraction condition, skin color regions can be extracted accurately even from, for example, an image for which the light source type at the time of shooting cannot be identified, or an image that contains a plurality of skin color regions each illuminated by a different light source.
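The reduced-image speed-up described in paragraph [0080] needs a mapping from the region found on the reduced image back to the original pixels. A nearest-neighbor block mapping is the simplest choice; the text does not fix the mapping, so this is an illustrative assumption.

```python
def map_region_to_original(region_small, scale):
    """Map a region extracted on a 1/scale reduced image back to the original.

    Each reduced-image pixel is taken to cover a scale-by-scale block of
    original pixels; this nearest-neighbor mapping is an illustrative choice.
    """
    full = set()
    for y, x in region_small:
        for dy in range(scale):
            for dx in range(scale):
                full.add((y * scale + dy, x * scale + dx))
    return full

small = {(0, 0), (1, 1)}  # skin color region found on the reduced image
orig = map_region_to_original(small, scale=2)
# each reduced pixel expands to a 2x2 block of original pixels
```

The face-specific image processing (color adjustment, grain removal, and so on) would then be applied to the mapped original-image pixels.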
[0082] In the present embodiment, the image region extracted using the simple region growing method has been described as a skin color region; however, the method is not limited to this, and is also applicable when the region to be extracted is another region such as hair.
Industrial Applicability
[0083] According to the present invention, performing simple region growing on a low-frequency image makes it possible to exclude the influence of granular noise and the like, so the desired image region can be extracted properly.
[0084] Furthermore, during region growing, the growth is forcibly stopped upon reaching a pixel corresponding to a previously extracted image edge, so the desired image region can be extracted even more properly.
[0085] Furthermore, since a plurality of different extraction conditions are applied and each face region is extracted based on each extraction condition, a face region can be extracted properly even from, for example, an image for which the light source type at the time of shooting cannot be identified, or an image that contains a plurality of faces each illuminated by a different light source.

Claims

[1] An image region extraction method comprising:
a step of acquiring an image signal composed of signals of a plurality of pixels;
a step of extracting pixels corresponding to image edges from the plurality of pixels;
a step of creating a low-frequency image signal from the image signal;
a step of extracting, from among the plurality of pixels, a pixel satisfying a predetermined extraction condition as an initial pixel; and
a step of growing an image region from the initial pixel using the low-frequency image signal, stopping the growth when the image region reaches a pixel of the image edges during the growing process, and extracting the pixels of the grown image region.
[2] The image region extraction method according to claim 1, wherein, in the step of extracting pixels corresponding to the image edges, the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels used in calculating the luminance change rate, and when the normalized value is equal to or greater than a predetermined threshold, the pixel to be expanded is extracted as a pixel corresponding to an image edge.
[3] The image region extraction method according to claim 1, wherein the extraction condition is a condition for extracting pixels representing the skin of a person.
[4] An image region extraction device comprising:
acquisition means for acquiring an image signal composed of signals of a plurality of pixels;
edge extraction means for extracting pixels corresponding to image edges from the plurality of pixels;
creation means for creating a low-frequency image signal from the image signal;
initial pixel extraction means for extracting, from among the plurality of pixels, a pixel satisfying a predetermined extraction condition as an initial pixel; and
post-growth region extraction means for growing an image region from the initial pixel using the low-frequency image signal, stopping the growth when the image region reaches a pixel of the image edges during the growing process, and extracting the pixels of the grown image region.
[5] The image region extraction device according to claim 4, wherein the edge extraction means normalizes the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels used in calculating the luminance change rate, and, when the normalized value is equal to or greater than a predetermined threshold, extracts the pixel to be expanded as a pixel corresponding to an image edge.
[6] The image region extraction device according to claim 4, wherein the extraction condition is a condition for extracting pixels representing the skin of a person.
[7] An image region extraction program for causing a computer that performs image processing to realize:
a function of acquiring an image signal composed of signals of a plurality of pixels;
a function of extracting pixels corresponding to image edges from the plurality of pixels;
a function of creating a low-frequency image signal from the image signal;
a function of extracting, from among the plurality of pixels, a pixel satisfying a predetermined extraction condition as an initial pixel; and
a function of growing an image region from the initial pixel using the low-frequency image signal, stopping the growth when the image region reaches a pixel of the image edges during the growing process, and extracting the pixels of the grown image region.
[8] The image region extraction program according to claim 7, for causing the computer to further realize a function of normalizing the luminance change rate of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels used in calculating the luminance change rate, and, when the normalized value is equal to or greater than a predetermined threshold, extracting the pixel to be expanded as a pixel corresponding to an image edge.
[9] The image region extraction program according to claim 7, wherein the extraction condition is a condition for extracting pixels representing the skin of a person.
[10] An image processing method comprising:
a step of acquiring an image signal composed of signals of a plurality of pixels;
a step of extracting pixels corresponding to image edges from the plurality of pixels;
a step of creating a low-frequency image signal from the image signal;
a step of extracting, from among the plurality of pixels, a pixel satisfying an extraction condition for extracting pixels representing the skin of a person as an initial pixel;
a step of growing an image region from the initial pixel using the low-frequency image signal, stopping the growth when the image region reaches a pixel of the image edges during the growing process, and extracting the pixels of the grown image region;
a step of determining whether the grown image region represents the face of a person; and
a step of applying predetermined image processing to an image region determined to represent the face of a person.
[11] The image processing method according to claim 10, wherein:
the step of extracting the initial pixel is a step of extracting, from among the plurality of pixels, as initial pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing the skin of a person;
the step of extracting the pixels of the grown image region is a step of growing an image region from each of the initial pixels using the low-frequency image signal, stopping each growth when the image region reaches a pixel of the image edges during the growing process, and extracting the pixels of each grown image region; and
the step of determining whether the face of a person is represented is a step of determining whether each of the grown image regions represents the face of a person.
[12] The image processing method according to claim 10, wherein, in the step of extracting pixels corresponding to the image edge, the rate of luminance change of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels from which the rate of change is calculated, and, when the normalized value is equal to or greater than a predetermined threshold, the pixel subject to expansion is extracted as a pixel corresponding to an image edge.
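Claims 12, 15, and 18 all describe the same edge criterion: the inter-pixel luminance change is normalized by a local luminance (the target pixel's value or the mean of the pixels involved) before thresholding, so a single threshold responds to relative rather than absolute contrast and behaves consistently in dark and bright regions. A rough sketch of that test, assuming horizontal neighbour differences and an illustrative threshold of 0.1:

```python
import numpy as np

def edge_mask_normalized(lum, threshold=0.1, use_pair_mean=True):
    """Flag pixels whose relative luminance change to the right-hand
    neighbour meets or exceeds `threshold`.

    The raw difference is divided by either the mean luminance of the two
    pixels involved or the target pixel's own luminance, normalizing the
    change rate before it is compared against the threshold.
    """
    lum = lum.astype(np.float64)
    diff = np.abs(np.diff(lum, axis=1))           # |L(x+1) - L(x)|
    if use_pair_mean:
        denom = (lum[:, :-1] + lum[:, 1:]) / 2.0  # mean luminance of the pair
    else:
        denom = lum[:, :-1]                       # target pixel's luminance
    rate = diff / np.maximum(denom, 1e-6)         # normalized change rate
    mask = np.zeros(lum.shape, dtype=bool)
    mask[:, :-1] = rate >= threshold              # mark the left pixel of each pair
    return mask
```

A vertical-difference pass (the same computation on `axis=0`) would normally be combined with this one; it is omitted here to keep the sketch minimal.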
[13] An image processing apparatus comprising:
acquiring means for acquiring an image signal composed of signals of a plurality of pixels;
edge extracting means for extracting, from the plurality of pixels, pixels corresponding to image edges;
creating means for creating a low-frequency image signal from the image signal;
initial pixel extracting means for extracting, from the plurality of pixels, a pixel satisfying an extraction condition for extracting pixels representing human skin, as an initial pixel;
expanded-area extracting means for expanding an image area from the initial pixel using the low-frequency image signal and, when the image area reaches a pixel of the image edge during the expansion, stopping the expansion and extracting the pixels of the expanded image area;
determining means for determining whether or not the expanded image area represents a human face; and
processing means for performing predetermined image processing on an image area determined to represent a human face.
[14] The image processing apparatus according to claim 13, wherein:
the means for extracting the initial pixel is initial pixel extracting means for extracting, from the plurality of pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing human skin, as respective initial pixels;
the means for extracting the pixels of the expanded image area is expanded-area extracting means for expanding an image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches a pixel of the image edge during its expansion, stopping that expansion and extracting the pixels of each expanded image area; and
the means for determining whether a human face is represented is determining means for determining whether or not each of the expanded image areas represents a human face.
[15] The image processing apparatus according to claim 13, wherein the edge extracting means normalizes the rate of luminance change of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels from which the rate of change is calculated, and, when the normalized value is equal to or greater than a predetermined threshold, extracts the pixel subject to expansion as a pixel corresponding to an image edge.
[16] An image processing program for causing a computer that performs image processing to implement:
a function of acquiring an image signal composed of signals of a plurality of pixels;
a function of extracting, from the plurality of pixels, pixels corresponding to image edges;
a function of creating a low-frequency image signal from the image signal;
a function of extracting, from the plurality of pixels, a pixel satisfying an extraction condition for extracting pixels representing human skin, as an initial pixel;
a function of expanding an image area from the initial pixel using the low-frequency image signal and, when the image area reaches a pixel of the image edge during the expansion, stopping the expansion and extracting the pixels of the expanded image area;
a function of determining whether or not the expanded image area represents a human face; and
a function of performing predetermined image processing on an image area determined to represent a human face.
[17] The image processing program according to claim 16, wherein:
the function of extracting the initial pixel is a function of extracting, from the plurality of pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing human skin, as respective initial pixels;
the function of extracting the pixels of the expanded image area is a function of expanding an image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches a pixel of the image edge during its expansion, stopping that expansion and extracting the pixels of each expanded image area; and
the function of determining whether a human face is represented is a function of determining whether or not each of the expanded image areas represents a human face.
[18] The image processing program according to claim 16, for causing the computer to further implement a function of normalizing the rate of luminance change of the image signal between pixels by either the luminance value of the target pixel or the average luminance of the pixels from which the rate of change is calculated, and, when the normalized value is equal to or greater than a predetermined threshold, extracting the pixel subject to expansion as a pixel corresponding to an image edge.
PCT/JP2004/018816 2003-12-26 2004-12-16 Image area extraction and image processing method, device, and program WO2005064539A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003434973 2003-12-26
JP2003-434973 2003-12-26

Publications (1)

Publication Number Publication Date
WO2005064539A1 true WO2005064539A1 (en) 2005-07-14

Family

ID=34736581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2004/018816 WO2005064539A1 (en) 2003-12-26 2004-12-16 Image area extraction and image processing method, device, and program

Country Status (1)

Country Link
WO (1) WO2005064539A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61161091A (en) * 1985-01-08 1986-07-21 Fuji Photo Film Co Ltd Image processing method
JPH07140260A (en) * 1993-11-18 1995-06-02 Nippon Signal Co Ltd:The Method for sensing stopped vehicle using image
JPH09322192A (en) * 1996-05-29 1997-12-12 Nec Corp Detection and correction device for pink-eye effect
JP2001057630A (en) * 1999-08-18 2001-02-27 Fuji Photo Film Co Ltd Image processing unit and image processing method
JP2001118064A (en) * 1999-10-20 2001-04-27 Nippon Hoso Kyokai <Nhk> Image processor



Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP