WO2005064539A1 - Image area extraction and image processing method, device, and program - Google Patents
- Publication number
- WO2005064539A1 (PCT/JP2004/018816, JP2004018816W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- pixel
- pixels
- extracting
- image area
- Prior art date
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
Definitions
- the present invention relates to an image region extraction method, an image region extraction device, an image region extraction program, an image processing method, an image processing device, and an image processing program for extracting an image region from an image.
- Such image signals are subjected to various image processing such as negative/positive inversion, luminance adjustment, color balance adjustment, grain removal, and sharpness enhancement, and are then distributed via recording media such as CD-R (CD-Recordable), CD-RW (CD-Rewritable), FD (Floppy (registered trademark) disk), and memory cards, or via the Internet.
- the distributed image signal is output as a hard copy image using silver halide photographic paper, an ink jet printer, a thermal printer, or the like, or is displayed on a CRT (Cathode Ray Tube), liquid crystal display, plasma display, or the like for viewing.
- an image to be viewed includes a person's face
- the person's face is the most closely watched part of the image at the time of viewing. For this reason, in order to output a high-quality image, it is necessary to impart appropriate color, brightness, sharpness, noise, three-dimensionality, and the like to a person's face.
- the simple area extension method extracts an image area by treating adjacent pixels whose pixel-to-pixel data difference is equal to or less than a threshold value as belonging to the same image area, and expanding the area accordingly (for example, see Non-Patent Document 1).
- that is, if the data difference between a pixel adjacent to the initial pixel and the initial pixel is less than or equal to the threshold, the adjacent pixel is assumed to belong to the same image area as the initial pixel, and the same judgment is performed for the pixels adjacent to each pixel already belonging to that area.
- This is an image processing method that extracts an image area by gradually expanding the same area starting from the initial pixel.
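The simple area extension loop described above can be sketched in pure Python. Function and variable names here are illustrative, not from the patent, and 4-connectivity with a single scalar value per pixel is a simplifying assumption:

```python
from collections import deque

def simple_region_expansion(image, seed, threshold):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    value differs from the current pixel by at most `threshold`."""
    h, w = len(image), len(image[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if abs(image[ny][nx] - image[y][x]) <= threshold:
                    region.add((ny, nx))
                    queue.append((ny, nx))
    return region

image = [
    [10, 11, 12, 90],
    [11, 12, 13, 92],
    [12, 13, 14, 95],
]
region = simple_region_expansion(image, seed=(0, 0), threshold=5)
print(len(region))  # 9: the 3x3 block of similar values, not the bright column
```

Because each step compares a pixel only to its already-accepted neighbour, slow gradients are absorbed while the abrupt jump to the bright column halts the growth there.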
- Patent Document 1 Japanese Patent Application Laid-Open No. 2001-57630
- Non-Patent Document 1 Mikio Takagi and Haruhisa Shimoda (supervising eds.), "Image Analysis Handbook", First Edition, University of Tokyo Press, January 17, 1991
- it has been difficult to accurately extract a desired image region from an image and perform appropriate image processing on the extracted image region by an image region extraction method such as that disclosed in Japanese Patent Application Laid-Open No. H11-157163.
- the image processing may not be properly performed. That is, image processing performed on a specific image area presupposes accurate extraction of that image area; if the extraction is inaccurate, there is a high possibility that the desired effect will not be obtained even if the processing is applied.
- An object of the present invention is to appropriately extract a desired image area from an image and perform appropriate image processing on the extracted image area.
- the image area is extended from the initial pixels using the low-frequency image signal; thereafter, when the image area reaches a pixel at an image edge during the extension process, the extension is stopped and the pixels of the image area after the extension are extracted.
- the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels for which the luminance change rate is calculated, and when the normalized value is equal to or greater than a predetermined threshold, the pixel to be expanded is preferably extracted as a pixel corresponding to the image edge.
- the extraction condition is preferably a condition for extracting a pixel representing the skin of a person.
- the image area is extended from the initial pixels using the low-frequency image signal; thereafter, when the image area reaches a pixel at an image edge during the extension process, the extension is stopped and the pixels of the image area after the extension are extracted.
- the step of expanding the image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches a pixel at an image edge during the expansion process, stopping the expansion and extracting the pixels of each image area after the expansion,
- the step of determining whether or not a person's face is represented is a step of determining whether or not each of the image areas after the expansion represents a person's face.
- the luminance change rate of the image signal between pixels is normalized using either the luminance value of the target pixel or the average luminance of the pixels for which the luminance change rate is calculated; if the normalized value is equal to or greater than a predetermined threshold, it is preferable to extract the pixel to be expanded as a pixel corresponding to the image edge.
- FIG. 1 is a perspective view showing an external configuration of an image processing apparatus to which the present invention is applied.
- FIG. 2 is a block diagram showing an internal structure of an image processing device to which the present invention is applied.
- FIG. 3 is a block diagram mainly showing an internal configuration of an image processing unit shown in FIG. 2.
- FIG. 4 is a flowchart illustrating image processing contents to which the present invention is applied.
- Acquiring means for acquiring an image signal composed of signals of a plurality of pixels
- Edge extracting means for extracting a pixel corresponding to an image edge from the plurality of pixels, creating means for creating a low-frequency image signal from the image signal,
- Initial pixel extracting means for extracting a pixel satisfying a predetermined extraction condition from the plurality of pixels as an initial pixel
- expanded region extracting means for extending the image area from the initial pixels using the low-frequency image signal and, when the image area reaches a pixel at an image edge during the extension process, stopping the extension and extracting the pixels of the image area after the extension.
- the edge extracting means normalizes the luminance change rate of the image signal between pixels using either the luminance value of the target pixel or the average luminance of the pixels for which the luminance change rate is calculated, and when the normalized value is greater than or equal to a predetermined threshold, preferably extracts the pixel to be expanded as the pixel corresponding to the image edge.
- the extraction condition is preferably a condition for extracting a pixel representing the skin of a person.
- a function of extending the image area from the initial pixels using the low-frequency image signal and, when the image area reaches a pixel at an image edge during the extension process, stopping the extension and extracting the pixels of the image area after the extension.
- the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average value of the luminance of the pixel for which the luminance change rate is to be calculated, and the normalized value is equal to or greater than a predetermined threshold value. In this case, it is preferable to further realize a function of extracting the pixel to be extended as a pixel corresponding to an image edge.
- the extraction condition is preferably a condition for extracting a pixel representing the skin of a person.
- Acquiring means for acquiring an image signal composed of signals of a plurality of pixels
- Edge extracting means for extracting a pixel corresponding to an image edge from the plurality of pixels, creating means for creating a low-frequency image signal from the image signal,
- Initial pixel extracting means for extracting a pixel satisfying an extraction condition for extracting a pixel representing a human skin from the plurality of pixels as an initial pixel;
- expanded region extracting means for extending the image area from the initial pixels using the low-frequency image signal and, when the image area reaches a pixel at an image edge during the extension process, stopping the extension and extracting the pixels of the image area after the extension.
- the initial pixel extracting means is
- initial pixel extracting means for extracting, as initial pixels, a plurality of pixels each satisfying one of a plurality of types of extraction conditions for extracting pixels representing human skin from the plurality of pixels.
- the image area is expanded from each of the initial pixels using the low-frequency image signal; thereafter, when an image area reaches a pixel at an image edge during the expansion process, the expansion is stopped and the pixels of each image area after the expansion are extracted,
- the means for determining whether or not the face of the person is represented
- a determination unit configured to determine whether each of the extended image areas represents a person's face.
- the edge extracting means normalizes the luminance change rate of the image signal between pixels using either the luminance value of the target pixel or the average luminance of the pixels for which the luminance change rate is calculated, and when the normalized value is greater than or equal to a predetermined threshold, preferably extracts the pixel to be expanded as a pixel corresponding to an image edge.
- a function of extending the image area from the initial pixels using the low-frequency image signal and, when the image area reaches a pixel at an image edge during the extension process, stopping the extension and extracting the pixels of the image area after the extension,
- the extracting function is
- a function of expanding the image area from each of the initial pixels using the low-frequency image signal and, when an image area reaches a pixel at an image edge during the expansion process, stopping the expansion and extracting the pixels of each image area after the expansion,
- the luminance change rate of the image signal between pixels is normalized by either the luminance value of the target pixel or the average luminance of the pixels for which the luminance change rate is calculated, and when the normalized value is equal to or larger than a predetermined threshold, it is preferable to further realize a function of extracting the pixel to be extended as a pixel corresponding to an image edge.
- the image processing apparatus 1 is provided with a magazine loading section 3 for loading a photosensitive material on one side surface of a housing 2. Inside the housing 2, there are provided an exposure processing section 4 for exposing the photosensitive material, and a print creating section 5 for developing and drying the exposed photosensitive material to create a print. On the other side of the housing 2, a tray 6 for discharging the print created by the print creating section 5 is provided.
- a CRT 8 as a display device
- a film scanner unit 9 that is a device for reading a transparent original
- a reflective original input device 10 and an operation unit 11 are provided at an upper portion of the housing 2.
- the housing 2 is provided with an image reading section 14 for reading images recorded on various digital recording media, and an image writing section 15 for writing (outputting) image signals on various digital recording media.
- a control unit 7 for integrally controlling each unit constituting the image processing apparatus 1 is provided inside the housing 2.
- the image reading unit 14 is provided with a PC card adapter 14a and an FD adapter 14b, so that the PC card 13a and the FD 13b can be inserted.
- the image writing unit 15 is provided with an FD adapter 15a, an MO (Magneto-Optical) adapter 15b, and an optical disk adapter 15c, and the FD 16a, MO 16b, and optical disk 16c can be inserted thereinto, respectively.
- the optical disk 16c includes CD-R, DVD-R (Digital Versatile Disk-Recordable), DVD-RW (DVD-Rewritable), and the like.
- the operation unit 11, the CRT 8, the film scanner unit 9, the reflection document input device 10, and the image reading unit 14 may be configured integrally with the housing 2, or may be provided separately.
- the image processing apparatus 1 includes a control unit 7, an exposure processing unit 4, a print generation unit 5, a film scanner unit 9, a reflection document input device 10, an image reading unit 14, a communication unit (input) 32, It has an image writing unit 15, a data storage unit 71, an operation unit 11, a CRT 8, and a communication unit (output) 33.
- the exposure processing section 4 exposes the photosensitive material to an image, and outputs the photosensitive material to the print creating section 5.
- the print creating section 5 develops the exposed photosensitive material and dries it to create prints Pl, P2 and P3.
- the control unit 7 includes a microcomputer, and controls the operation of each unit constituting the image processing apparatus 1 in a comprehensive manner through cooperation between various control programs, such as an image processing program stored in a ROM (Read Only Memory) or the like (not shown), and a CPU (Central Processing Unit) (not shown).
- the control unit 7 includes an image processing unit 70, and, based on an input signal (command information) from the operation unit 11, performs image processing on images input from the film scanner unit 9, the reflection document input device 10, the image reading unit 14, and an external device via the communication means (input) 32, forms an image for exposure, and outputs it to the exposure processing section 4.
- the image processing unit 70 will be described later in detail.
- the film scanner unit 9 reads an image recorded on a transparent original such as a developed negative film N or a reversal film captured by an analog camera.
- the reflection document input device 10 reads an image formed on a print P (photo print, document, various prints) by a flatbed scanner (not shown).
- the operation unit 11 has information input means 12.
- the information input means 12 is composed of, for example, a touch panel or the like, and outputs a press signal of the information input means 12 to the control section 7 as an input signal.
- the operation unit 11 may be configured to include a keyboard, a mouse, and the like.
- the CRT 8 displays an image or the like according to the display control signal input from the control unit 7.
- the image reading unit 14 includes an image transfer unit 30, reads an image recorded on the PC card 13a or the FD 13b, and transfers the image to the control unit 7.
- the image transfer means 30 has a PC card adapter 14a, an FD adapter 14b, and the like.
- the image reading section 14 reads an image recorded on the PC card 13a inserted into the PC card adapter 14a or on the FD 13b inserted into the FD adapter 14b, and transfers the read image to the control unit 7 using the image transfer means 30.
- the image writing unit 15 includes an image transport unit 31, and the image transport unit 31 includes an FD adapter 15a, an MO adapter 15b, an optical disk adapter 15c, and the like.
- the image writing unit 15 writes various data, according to the write signal input from the control unit 7, to the FD 16a inserted into the FD adapter 15a, the MO 16b inserted into the MO adapter 15b, and the optical disk 16c inserted into the optical disk adapter 15c.
- the communication means (input) 32 receives images, various commands, and the like from another computer in the facility where the image processing apparatus 1 is installed or a distant computer via the Internet or the like.
- the communication means (output) 33 transmits an image, order information, and the like to another computer in the facility where the image processing apparatus 1 is installed, or to a remote computer via the Internet or the like.
- the data storage unit 71 stores data such as images and order information (information on how many prints are to be made from which frame images, print size information, etc.).
- the image processing unit 70 includes a film scan data processing unit 701, a reflection original scan data processing unit 702, an image data format decoding processing unit 703, an image adjustment processing unit 704 (corresponding to the acquiring means, edge extracting means, creating means, initial pixel extracting means, expanded area extracting means, discriminating means, and processing means described in the claims), a CRT-specific processing unit 705, printer-specific processing units 706 and 707, and an image data format creation processing unit 708.
- the film scan data processing unit 701 performs, on the image input from the film scanner unit 9, a calibration operation unique to the film scanner unit 9, negative-positive inversion in the case of a negative original, gray balance adjustment, contrast adjustment, and the like, and outputs the result to the image adjustment processing unit 704. Further, the film scan data processing unit 701 also outputs to the image adjustment processing unit 704 the film size, the negative/positive type, the ISO (International Organization for Standardization) sensitivity optically or magnetically recorded on the film, the manufacturer name, information on the main subject, and shooting condition information (for example, APS (Advanced Photo System) information).
- the reflection document scan data processing unit 702 performs a calibration operation unique to the reflection document input device 10, negative-positive inversion for a negative document, gray balance adjustment, and contrast adjustment for an image input from the reflection document input device 10. And outputs the result to the image adjustment processing unit 704.
- the image data format decoding processing unit 703 performs restoration of a compression code, conversion of the method of expressing color data, and the like in accordance with the data format of the image signal input from the image transfer means 30 or the communication means (input) 32, and outputs the result to the image adjustment processing unit 704.
- the image adjustment processing unit 704 performs various types of image processing on images input from the film scanner unit 9, the reflection original input device 10, the image transfer unit 30, and the communication unit (input) 32. In particular, the image adjustment processing unit 704 executes the image processing shown in the flowchart of FIG.
- the image adjustment processing unit 704 sends the processed image to the CRT-specific processing unit 705, the printer-specific processing unit 706, the printer-specific processing unit 707, the image data format creation unit 708, and the data storage unit 71. Output.
- the CRT-specific processing unit 705 performs processing such as changing the number of pixels and color matching on the image input from the image adjustment processing unit 704, and outputs the processed image to the CRT 8 together with various display information.
- the printer-specific processing unit 706 performs printer-specific calibration processing, color matching, changing of the number of pixels, and the like on the image-processed image signal input from the image adjustment processing unit 704, and outputs the result to the exposure processing unit 4.
- the image processing apparatus 1 of the present embodiment is provided with a printer-specific processing unit 707 corresponding to the external printer 34 such as an inkjet printer.
- the printer-specific processing unit 707 performs an appropriate printer-specific calibration process, color matching, change of the number of pixels, and the like on the image input from the image adjustment processing unit 704.
- the image data format creation processing unit 708 converts the image input from the image adjustment processing unit 704 into various general-purpose image formats typified by JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), Exif (Exchangeable image file format), and the like, and outputs the image to the image transport unit 31 and the communication means (output) 33.
- the division of the control unit 7 into units such as the film scan data processing unit 701, the reflection original scan data processing unit 702, the image data format decoding processing unit 703, the image adjustment processing unit 704, the CRT-specific processing unit 705, the printer-specific processing units 706 and 707, and the image data format creation processing unit 708 does not necessarily have to be realized as physically independent devices; each may be realized as a division of software processing. Further, the image processing apparatus 1 is not limited to the above-described contents and can be applied in various forms, such as a digital photo printer, a printer driver, and a plug-in of various image processing software.
- image processing to which the present invention is applied will be described with reference to FIG.
- the image processing described below is executed by the image adjustment processing unit 704.
- when an original image is acquired via the film scanner unit 9, the reflection original input device 10, the image transfer means 30, or the communication means (input) 32 (step S1), the edges of the original image are extracted (step S2), and a low-frequency image is created from the original image (step S3).
- the luminance change rate ΔY of the image signal between pixels is normalized by the luminance value Y of the target pixel (or the average luminance of the plurality of pixels for which the change rate is calculated, etc.); the normalized value ΔY/Y is calculated, and if the calculated value ΔY/Y is equal to or larger than a specific threshold, the pixel is determined to be an edge, and information indicating the position of the edge pixel (information for identifying the pixel) is stored in the data storage means 71 or the internal memory of the control unit 7.
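A minimal sketch of this ΔY/Y edge criterion, assuming a scalar luminance per pixel; the threshold value below is a hypothetical placeholder, as the patent does not fix concrete numbers:

```python
def is_edge(y_target, y_neighbor, threshold):
    """Decide whether the step from the target pixel to its neighbour is
    an edge: the luminance change dY is normalized by the luminance Y of
    the target pixel, and the step counts as an edge when dY/Y reaches
    the threshold."""
    dy = abs(y_neighbor - y_target)
    if y_target == 0:
        return True  # guard: any change from zero luminance is maximal
    return dy / y_target >= threshold

# The same absolute step of 20 is an edge in a dark region but not in a
# bright one, which is the point of normalizing by Y.
print(is_edge(40, 60, 0.25))    # 20/40 = 0.5  -> True
print(is_edge(200, 220, 0.25))  # 20/200 = 0.1 -> False
```

Normalizing by the local luminance makes the criterion relative rather than absolute, which matches the perceptual fact that a fixed luminance step is far more visible in shadow than in highlight.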
- although image edge extraction can be performed using a known edge extraction filter, in the present embodiment it is preferable to perform image edge extraction using the high-frequency components obtained by a binomial wavelet transform.
- the low-frequency image can be created using a known low-pass filter; in the present embodiment, it is preferable to use the low-frequency component obtained by the binomial wavelet transform.
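As a rough illustration of the low-pass idea (not the binomial wavelet transform itself), a 1-D binomial smoothing pass looks like the sketch below; a 2-D version would apply it separably along rows and then columns. Names and the edge-replication choice are assumptions for the example:

```python
def binomial_smooth_1d(signal, passes=1):
    """One pass convolves with the binomial kernel [1, 2, 1]/4 (edge
    samples replicated); repeated passes approximate a Gaussian
    low-pass filter."""
    out = list(signal)
    for _ in range(passes):
        padded = [out[0]] + out + [out[-1]]
        out = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4
               for i in range(1, len(padded) - 1)]
    return out

noisy = [10, 10, 40, 10, 10]   # an isolated one-pixel spike
smooth = binomial_smooth_1d(noisy)
print(smooth)                  # [10.0, 17.5, 25.0, 17.5, 10.0]
```

The isolated spike is attenuated from 40 to 25, which is exactly why region expansion on the low-frequency image is less likely to be derailed by granular noise.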
- a skin color region is extracted from the low-frequency image created in step S3 by using the simple region extension method.
- an initial pixel (consisting of one or more pixels) satisfying the extraction condition is specified from among the pixels of the image signal, and simple area expansion is started from the initial pixel (step S4).
- not only the simple area expansion method but also other known expansion methods can be used.
- the extraction condition may be such that the user specifies a point (pixel) on the image with a mouse or the like and is set based on the image signal of the specified pixel, or may be predetermined.
- it is preferable that the initial pixel be selected based on conditions defining hue and saturation. It is preferable that the conditions defining hue and saturation be changed depending on the type of light source at the time of photographing, and that the type of light source at the time of photographing be automatically determined by a known method.
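A hedged sketch of such a hue/saturation extraction condition. The numeric ranges below are placeholder assumptions, not values from the patent, and in practice would be retuned per light source type:

```python
import colorsys

def is_skin_candidate(r, g, b, hue_range=(0.0, 0.1), sat_range=(0.15, 0.6)):
    """Check an RGB pixel (0-255 channels) against illustrative hue and
    saturation bounds (hue and saturation both on a 0-1 scale)."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_range[0] <= h <= hue_range[1] and sat_range[0] <= s <= sat_range[1]

print(is_skin_candidate(220, 180, 160))  # a light skin tone -> True
print(is_skin_candidate(40, 60, 200))    # blue: outside the hue range -> False
```

Swapping in a different `(hue_range, sat_range)` pair per detected light source is one way to realize the light-source-dependent conditions mentioned above.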
- from step S4, simple area expansion for extracting a skin color area is performed while sequentially comparing the difference in image signal between adjacent pixels (step S5). That is, it is determined whether the image signal of the adjacent pixel satisfies the extraction condition (step S6). If the image signal of the adjacent pixel satisfies the extraction condition (step S6; Yes), it is further determined whether or not the adjacent pixel is an edge (step S7). If the adjacent pixel is not an edge (step S7; No), the adjacent pixel is included in the skin color area (step S8).
- information for identifying the pixels at each position is sequentially stored in the data storage means 71, the built-in memory of the control unit 7, or the like.
- after step S8, the process returns to step S5, and the above processing is repeated.
- if the image signal of the adjacent pixel does not satisfy the extraction condition in step S6 (step S6; No), or if the adjacent pixel is an edge in step S7 (step S7; Yes), the extraction process of the skin color area using simple area expansion is ended.
- the eyes, the mouth, and the like are not themselves extracted as part of the simply expanded skin color area even though they lie within it.
- however, because the eyes and mouth are included as closed areas within the skin color area,
- they too can be extracted by including such closed areas in the skin color area expanded by simple area expansion.
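The loop of steps S5-S8 — grow while the extraction condition holds, but never across a pre-extracted edge pixel — can be sketched as follows. Names, 4-connectivity, and scalar pixel values are illustrative assumptions, not details fixed by the patent:

```python
from collections import deque

def grow_skin_region(image, edges, seed, condition):
    """Grow from `seed`, adding 4-connected neighbours that satisfy the
    extraction `condition` (step S6) and are not edge pixels (step S7);
    growth therefore never crosses the pre-extracted edge map."""
    h, w = len(image), len(image[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or (ny, nx) in region:
                continue
            if condition(image[ny][nx]) and not edges[ny][nx]:
                region.add((ny, nx))   # step S8: include in the area
                queue.append((ny, nx))
    return region

image = [[50, 52, 54, 56]] * 3             # uniform skin-like strip
edges = [[False, False, True, False]] * 3  # an edge wall in column 2
region = grow_skin_region(image, edges, (1, 0), lambda v: 45 <= v <= 60)
print(sorted({x for _, x in region}))      # [0, 1]: growth stops at the wall
```

Even though all four columns satisfy the extraction condition, the region halts at the edge wall in column 2 — the forced-termination behaviour that distinguishes this method from plain simple area expansion.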
- the image adjustment processing unit 704 first extracts edges and creates a low-frequency image from the original image. Then, the skin color region is expanded up to the edges in the low-frequency image using the simple region expansion method, and the skin color region is extracted.
- in the simple area expansion, when a pixel corresponding to a previously extracted image edge is reached, the expansion is forcibly terminated, so that the desired skin color area can be extracted more appropriately.
- since each face region is extracted using the simple region expansion method based on each of the extraction conditions, a face area can be properly extracted even from an image in which the light source type at the time of shooting cannot be specified, or from an image that includes a plurality of faces each illuminated by a different light source.
- the description in the present embodiment shows an example of the image area extracting method, the image area extracting apparatus, the image area extracting program, the image processing method, the image processing apparatus, and the image processing program according to the present invention.
- the present invention is not limited to this.
- the detailed configuration and detailed operation of the image processing device 1 according to the present embodiment can be appropriately changed without departing from the spirit of the present invention.
- a process of determining whether or not the skin color region extracted in the image processing can be specified as a face may be added.
- a known determination method such as a method using a neural network or pattern matching can be used.
- image processing to be performed on the skin color area when it is determined to be a face includes processing such as color adjustment, grain removal, sharpness enhancement, and dynamic range adjustment.
- the flesh-color area may be extracted from a reduced image obtained by reducing the image size of the original image, and image processing may then be performed on the area of the original image corresponding to the flesh-color area extracted from the reduced image. Further, in the present embodiment, the case where simple area expansion is started from one initial pixel corresponding to one type of extraction condition has been described. However, the present invention is not limited to this.
- the simple area expansion may be performed independently from a plurality of initial pixels.
- in that case, the processing from step S4 to step S8 in FIG. 4 is performed for each type of initial pixel, and a plurality of skin color regions corresponding to the number of types of initial pixels are extracted. Because a plurality of different extraction conditions are applied and each skin color region is extracted using the simple region expansion method for each extraction condition, the flesh color region can be accurately extracted even from an image in which the light source type at the time of shooting cannot be specified, or from an image containing a plurality of faces each illuminated by a different light source.
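A toy sketch of applying several extraction conditions independently. Real use would run the full steps S4-S8 per condition on a 2-D image; this illustration only partitions a 1-D pixel list, and both conditions and values are hypothetical:

```python
def extract_with_conditions(pixels, conditions):
    """Apply each extraction condition independently and collect one
    candidate region (a set of pixel indices) per condition —
    a stand-in for running region expansion once per initial-pixel type."""
    return [{i for i, v in enumerate(pixels) if cond(v)}
            for cond in conditions]

# Two hypothetical light-source-dependent conditions over scalar "pixels".
daylight = lambda v: 120 <= v <= 160
tungsten = lambda v: 170 <= v <= 210
pixels = [130, 150, 200, 90, 180]
regions = extract_with_conditions(pixels, [daylight, tungsten])
print(regions)  # [{0, 1}, {2, 4}]
```

Each condition yields its own candidate region, so a face lit by daylight and a face lit by tungsten can both be recovered from the same image.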
- the image area to be extracted using the simple area expansion method has been described as the skin color area.
- the present invention is not limited to this.
- the image area to be extracted may be another area, such as the hair of the head; the present invention is also applicable to such areas.
- the influence of granular noise or the like can be excluded, so that a desired image area can be properly extracted.
- region expansion when the pixel corresponding to the image edge extracted in advance is reached, the region expansion is forcibly stopped, so that a desired image region can be more appropriately extracted.
- since each face region is extracted based on its respective extraction condition, a face region can be properly extracted even from an image in which the light source type at the time of shooting cannot be specified, or from an image containing a plurality of faces each illuminated by a different light source.
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003434973 | 2003-12-26 | ||
JP2003-434973 | 2003-12-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005064539A1 true WO2005064539A1 (en) | 2005-07-14 |
Family
ID=34736581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/018816 WO2005064539A1 (en) | 2003-12-26 | 2004-12-16 | Image area extraction and image processing method, device, and program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2005064539A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61161091A (en) * | 1985-01-08 | 1986-07-21 | Fuji Photo Film Co Ltd | Image processing method |
JPH07140260A (en) * | 1993-11-18 | 1995-06-02 | Nippon Signal Co Ltd:The | Method for sensing stopped vehicle using image |
JPH09322192A (en) * | 1996-05-29 | 1997-12-12 | Nec Corp | Detection and correction device for pink-eye effect |
JP2001057630A (en) * | 1999-08-18 | 2001-02-27 | Fuji Photo Film Co Ltd | Image processing unit and image processing method |
JP2001118064A (en) * | 1999-10-20 | 2001-04-27 | Nippon Hoso Kyokai <Nhk> | Image processor |
- 2004-12-16: WO PCT/JP2004/018816 patent/WO2005064539A1/en not_active Application Discontinuation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8107764B2 (en) | Image processing apparatus, image processing method, and image processing program | |
US20040247175A1 (en) | Image processing method, image capturing apparatus, image processing apparatus and image recording apparatus | |
JP2005094571A (en) | Camera with red-eye correcting function | |
JP2007087234A (en) | Image processing method and apparatus, and program | |
JP2005190435A (en) | Image processing method, image processing apparatus and image recording apparatus | |
JP2007189428A (en) | Apparatus and program for index image output | |
JP2003244467A (en) | Image processing method, image processor and image recorder | |
JP2004096506A (en) | Image forming method, image processor and image recording device | |
JP2003283731A (en) | Image input apparatus, image output apparatus, and image recorder comprising them | |
WO2005112428A1 (en) | Image processing method, image processing device, image recorder, and image processing program | |
JP2006318255A (en) | Image processing method, image processor and image processing program | |
JP2005192162A (en) | Image processing method, image processing apparatus, and image recording apparatus | |
JP2004193957A (en) | Image processing apparatus, image processing method, image processing program, and image recording apparatus | |
US20040036892A1 (en) | Image processing method, image processing apparatus, image recording apparatus and recording medium | |
JP2003250040A (en) | Image processing method, image processing apparatus, image recording apparatus, and recording medium | |
JP4811401B2 (en) | Image processing method and image processing apparatus | |
JP2005102154A (en) | Image processing apparatus, method and program | |
WO2005064539A1 (en) | Image area extraction and image processing method, device, and program | |
JP2004193956A (en) | Image processing apparatus, image processing method, image processing program, and image recording apparatus | |
JP2004328534A (en) | Image forming method, image processing apparatus and image recording apparatus | |
JP4228579B2 (en) | Image processing method and image processing apparatus | |
JP2004096508A (en) | Image processing method, image processing apparatus, image recording apparatus, program, and recording medium | |
JP2005316581A (en) | Image processing method, image processor and image processing program | |
JP2005209012A (en) | Image processing method, apparatus and program | |
JP2004242066A (en) | Image recorder, image recording method and image processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |