US20040208114A1 - Image pickup device, image pickup device program and image pickup method - Google Patents

Image pickup device, image pickup device program and image pickup method

Info

Publication number
US20040208114A1
US20040208114A1 (Application US10/758,905)
Authority
US
United States
Prior art keywords
image
image pickup
face
information
inference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/758,905
Inventor
Shihong Lao
Masato Kawade
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Corp filed Critical Omron Corp
Assigned to OMRON CORPORATION reassignment OMRON CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KAWADE, MASATO
Assigned to OMRON CORPORATION reassignment OMRON CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LAO, SHIHONG
Publication of US20040208114A1 publication Critical patent/US20040208114A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/62Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N1/628Memory colours, e.g. skin or sky
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2101/00Still video cameras

Definitions

  • the present invention relates to an image pickup device comprising an image pickup unit including a lens and an image sensor and a control unit for storing a processed image in an internal memory or a predetermined storage medium, and more particularly to a technique for the image pickup device to generate an image including the face of an object in accordance with an image pickup operation with a person as the object.
  • Japanese Unexamined Patent Publication No. 10-268447 discloses a technique in which a photograph is printed using the image data retrieved from an image sensor in such a manner that the face area of a person is extracted from the image data, and based on the light measurement data within the particular area, the exposure amount is determined for correction and the image is adjusted in accordance with the features of the face image.
  • Japanese Unexamined Patent Publication No. 8-62741 discloses a technique in which an image picked up by a camera is printed out in such a manner that a skin color area corresponding to the face image is extracted from the image to be processed, and based on the brightness information of the same image, the intensity of back light is determined. According to the degree of human presence and the intensity of back light, the gradation is corrected differently.
  • Japanese Unexamined Patent Publication No. 11-146405 discloses a video signal processing device such as a color video camera, in which a skin color area is extracted in the process of retrieving the video signal, and upon extraction of the skin color area, the video signal is corrected in brightness and color. In this way, only the skin color area can be corrected.
  • in each of these conventional techniques, the correction parameters appear to be determined by comparing the feature amounts such as the brightness and color of the face image with a predetermined reference.
  • This reference of correction is determined in accordance with the skin color of a predetermined race, and therefore the correction process executed for other races as an object may develop an inconvenience.
  • the conventional techniques described in the cited patent publications were invented in Japan, and the skin color is normally assumed to be that of a yellow person.
  • in the case where a black person is the object, for example, the correction parameters for back light may be applied. Since the face image of a black person and a face image in back light are actually considerably different from each other, however, proper correction is difficult. In the case where a white person is the object, on the other hand, a correcting process similar to that for a yellow person, if executed, would lead to a yellowish skin color and is liable to form an apparently unnatural image.
  • a unified correction parameter may also encounter difficulty in the case where the age or sex differs, as with the race.
  • the face image of a person in the twenties and the face image of a person in the forties, for example, are considered to have a considerable difference in the parts to be corrected and the reference.
  • a male object and a female object generally may have different face color references which are considered preferable.
  • This invention has been developed in view of the problems described above, and an object thereof is to provide an image pickup device, an image pickup program and an image pickup method in which an image of an object is picked up by automatically setting the image pickup conditions in accordance with the attributes and the preferences of each object person.
  • Another object of the invention is to provide an image pickup device, an image pickup program and an image pickup method in which the information required for correcting the face image of each object is stored as linked to the image picked up, so that the face image picked up can be easily corrected in accordance with the characteristics or preferences of the object.
  • an image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in an internal memory or a predetermined storage medium
  • the control unit includes a face image extraction part for extracting the face image contained in the image picked up by the image pickup unit, an inference part for executing the process of inferring the attributes or specifically at least the race, age and sex of an object person based on the feature amounts in an image area including the face image extracted, an image pickup conditions adjusting part for adjusting the image pickup conditions for the image pickup unit based on the result of inference in the inference part, and an information processing part for storing in the memory or the storage medium the image obtained under the image pickup conditions adjusted by the image pickup conditions adjusting part.
  • the image pickup unit, in addition to the lens and an image sensor such as a CCD, may include a mechanism for adjusting the lens stop and the focal point, a mechanism for driving the image sensor, a strobe and a mechanism for adjusting the intensity of the strobe.
  • the control unit is preferably configured of a computer incorporating a program required for processing of each of the parts described above.
  • the control unit is not necessarily configured of a computer alone, and may include a dedicated circuit for executing a part of the processes.
  • the memory for storing the processed image is preferably a nonvolatile memory such as a hard disk or a flash memory.
  • the storage medium is preferably a removable one such as a memory card, a CompactFlash (registered trademark) card, a CD-R/RW, a DVD-R/RW or a digital video tape having a sufficient capacity to store a plurality of images.
  • the image pickup device is required to be equipped with an A/D converter for converting an analog image signal generated by the image pickup unit into a digital image signal.
  • the image pickup device may comprise an image processing circuit for compressing or otherwise converting the digital image into an image data in a predetermined file format.
  • the face image extraction part scans a search area of a predetermined size on the image input from the image pickup unit at a predetermined timing and searches for the feature points indicating the features of the organs making up the face, thereby extracting a face image.
  • the invention is not necessarily limited to this method, but a face image can alternatively be extracted by the conventional method of detecting a skin color area or the simple pattern matching process.
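For illustration only, the following is a minimal sketch of the conventional skin-color extraction approach mentioned above, written in Python with OpenCV; the YCrCb thresholds and the minimum blob area are assumed values, not parameters given in this publication.

```python
# Illustrative sketch only (not the patented extraction algorithm): candidate
# face regions found by the conventional skin-colour method, using OpenCV.
import cv2
import numpy as np

def extract_skin_regions(bgr_image, min_area=400):
    """Return bounding boxes (x, y, w, h) of skin-coloured blobs."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Loose skin-colour range in the Cr/Cb plane (assumed thresholds).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```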
  • the inference part is used to determine the race, age and sex with high accuracy by the arithmetic operation using the feature points of the face organs.
  • the race can be inferred using, for example, the technique described in the first reference cited below, or other methods of extracting the brightness distribution in the face image.
  • the age and sex on the other hand, can be estimated by the method described, for example, in the second reference cited below.
  • the first reference is Gregory Shakhnarovich, Paul A. Viola and Baback Moghaddam, “A Unified Learning Framework for Real Time Face Detection and Classification”, Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Institute of Electrical and Electronics Engineers (IEEE), May 2002.
  • the feature amounts used for inference are acquired mainly from the extraction area of the face image. Nevertheless, the feature amounts of the whole or a part of the image and the area around the face image may also be included.
  • the feature amounts thus extracted include the mean and variance of color and lightness of the face image, intensity distribution, the color difference or the lightness difference with the surrounding image. Also, by applying these feature amounts to a predetermined calculation formula, the secondary feature amounts required for inference can be determined.
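A rough sketch of how such feature amounts might be computed from a face bounding box is shown below (Python/NumPy); the particular selection of features, the lightness definition and the margin for the surrounding area are assumptions made for illustration.

```python
# Minimal sketch of the kind of feature amounts described above, assuming the
# face area is given as a bounding box (x, y, w, h) inside an RGB numpy array.
import numpy as np

def face_feature_amounts(rgb, box, margin=20):
    x, y, w, h = box
    face = rgb[y:y + h, x:x + w].astype(np.float32)
    lightness = face.mean(axis=2)                       # simple (R+G+B)/3 lightness
    # Surrounding region used for colour/lightness differences.
    y0, y1 = max(0, y - margin), min(rgb.shape[0], y + h + margin)
    x0, x1 = max(0, x - margin), min(rgb.shape[1], x + w + margin)
    surround = rgb[y0:y1, x0:x1].astype(np.float32)
    return {
        "mean_rgb": face.reshape(-1, 3).mean(axis=0),
        "var_rgb": face.reshape(-1, 3).var(axis=0),
        "mean_lightness": lightness.mean(),
        "var_lightness": lightness.var(),
        "lightness_hist": np.histogram(lightness, bins=32, range=(0, 255))[0],
        "diff_to_surround": face.reshape(-1, 3).mean(axis=0)
                            - surround.reshape(-1, 3).mean(axis=0),
    }
```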
  • Examples of the image pickup conditions adjusted according to this invention include the shutter speed and the stop for determining the exposure amount, the focal length and the presence or absence of the strobe and the intensity thereof.
  • the image pickup conditions can be automatically adjusted based on the inference result after executing the face image extraction process and the inference process.
  • a set table indicating the correspondence between an adjust value (hereinafter referred to as the “image pickup parameter”) for determining each image pickup condition and each factor constituting the object of inference is preferably prepared beforehand, so that the result of the inference process is compared with the set table to acquire the image pickup parameter corresponding to the inference result.
  • in other words, the image pickup parameter is an adjust value for determining each image pickup condition, prepared for each factor constituting the object of inference.
  • in this set table, at least one of the race, age and sex is classified into a plurality of categories (for example, “white person”, “yellow person” and “black person” for the race, and “teens”, “twenties” and “thirties” for the age), and for each combination of these categories, the corresponding image pickup parameters are determined.
  • in this way, the conditions most suitable for the object are selectively set from a plurality of reference image pickup conditions for the race, age and sex, and the image pickup operation is performed under these conditions. Specifically, an image can be picked up by adjusting the image pickup conditions automatically according to the race, age and sex of each individual object.
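A minimal sketch of such a set table, assuming three illustrative categories per factor and placeholder parameter values (not the actual settings of this publication), could look as follows in Python.

```python
# Hypothetical "set table": a lookup from inference categories to image pickup
# parameters (shutter speed, aperture, strobe). All values are placeholders.
SET_TABLE = {
    ("yellow", "twenties", "female"): {"shutter": 1/125, "f_stop": 4.0, "strobe": False},
    ("black",  "twenties", "female"): {"shutter": 1/60,  "f_stop": 2.8, "strobe": True},
    ("white",  "forties",  "male"):   {"shutter": 1/125, "f_stop": 5.6, "strobe": False},
}
DEFAULT_PARAMS = {"shutter": 1/125, "f_stop": 4.0, "strobe": False}

def pickup_parameters(race, age_band, sex):
    # Fall back to default parameters for combinations not in the table.
    return SET_TABLE.get((race, age_band, sex), DEFAULT_PARAMS)
```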
  • according to a second aspect of the invention, there is provided an image pickup device comprising an image pickup unit and a control unit similar to those of the image pickup device according to the first aspect of the invention.
  • the control unit of the image pickup device according to the second aspect of the invention includes:
  • a registration part for holding the registration information on the correspondence between the feature amount of the face image of each of a predetermined number of objects and the information required for adjusting the optimum image pickup conditions on the one hand and the identification information unique to the object on the other hand,
  • a face image extraction part for extracting the face image contained in the image acquired by the image pickup unit
  • an inference part for estimating the object by comparing the feature amount of the face image extracted by the face image extraction part with the information registered in the registration part
  • an image pickup conditions adjusting part for adjusting the image pickup conditions for the image pickup unit using the registered information of the object estimated by the inference part
  • an information processing part for storing in the memory or the storage medium the image picked up under the image pickup conditions adjusted by the image pickup conditions adjusting part.
  • the registration part can be set in the memory of the computer making up the control unit.
  • the “information required for adjusting the optimum image pickup conditions” is, for example, a parameter for specifying the color of the face image (gradation, lightness, etc. of R, G, B making up the face color), and preferably adjusted to the color liked by the object to be registered.
  • the identification information unique to the object preferably includes the name (which is not limited to the full name but may be a nick name) of an object by which the photographer or the object can be easily confirmed.
  • the extraction process of the face image by the face image extraction part can be executed in the same manner as in the image pickup device according to the first aspect of the invention.
  • the inference part can estimate who an object is by the process of comparing the feature amount of the face image obtained by the extraction process with the feature amount of the face image of each object registered in the registration part.
  • in the image pickup conditions adjusting part, the image pickup conditions for the recognized object are adjusted based on the registered information required to adjust the optimum image pickup conditions.
  • in the case where a parameter indicating an optimum skin color of an object is registered as the aforementioned information, for example, the exposure is adjusted in such a manner that the extracted face image becomes as similar as possible to the particular optimum skin color.
  • the information required for adjusting the optimum image pickup conditions and the feature amounts of the face image of a particular person to be an object are registered beforehand in the registration part.
  • an image is picked up by automatically adjusting the image pickup conditions to acquire a face image liked by the particular object.
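The following Python sketch illustrates one way the registration part and the identification step of this second aspect might be organised; the feature-vector comparison, the distance threshold, the preferred-color representation and all names are assumptions for illustration.

```python
# Sketch of the registration part: each registered person has a face feature
# vector, a preferred skin colour and an ID. Identification is done by a simple
# nearest-neighbour comparison. Entries and thresholds are placeholders.
import numpy as np

REGISTRATION = {
    "alice": {"features": np.zeros(64), "preferred_rgb": (225, 180, 160)},
}

def identify(face_features, max_distance=0.5):
    best_id, best_d = None, max_distance
    for person_id, entry in REGISTRATION.items():
        d = np.linalg.norm(face_features - entry["features"])
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id  # None if no registered person is close enough

def exposure_correction(face_mean_rgb, person_id):
    """Rough exposure bias pushing the face towards the registered colour."""
    target = np.array(REGISTRATION[person_id]["preferred_rgb"], dtype=float)
    return float((target.mean() - np.mean(face_mean_rgb)) / 255.0)
```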
  • the image pickup device can be configured as a camera for generating a digital still image (hereinafter simply referred to as the “digital camera”).
  • the image sensor is driven at the timing when the photographer is considered to have determined an image pickup area such as when the shutter button is pressed halfway or the line-of-sight sensor has detected that the line of sight of the photographer has been set in a predetermined direction, and after adjusting the image pickup conditions using the image thus obtained, the image sensor is desirably driven again.
  • the image picked up under the adjusted image pickup conditions is stored in the memory or the storage medium.
  • a high-speed processing CPU is incorporated or a plurality of CPUs are mounted in the control unit to execute the process of extracting or inferring the face image in real time, and the image sensor is driven continuously.
  • the image pickup device is rendered to function as a camera device for generating a dynamic image (hereinafter referred to as the “digital video camera”).
  • the images picked up under the adjusted image pickup conditions are desirably accumulated sequentially in the memory or the storage medium while continuously executing the face image extraction process, the inference process and the process of adjusting the image pickup conditions.
  • the information processing part includes a part for generating link information containing the position at which the face image was extracted by the face image extraction part and the inference information obtained by the inference processing of the inference part, and the link information is stored in the memory or the storage medium together with the image picked up by the image pickup unit.
  • the position of extracting the face image and the result of the inference process are stored as linked with the image, and therefore even after the image is picked up, the face image can be corrected in detail using the link information.
  • the face image, after being picked up, is desirably corrected using an external device such as a personal computer.
  • the image pickup device therefore, preferably comprises an interface circuit or an output terminal corresponding to the external device so as to output the stored image and the link information to the external device.
  • the correcting process can be executed by the external device without using the output function.
  • the picked-up image and the link information are retrieved into the external device for more detailed correction.
  • the image thus processed is printed, displayed on the monitor or distributed through a communication line such as the Internet.
  • the link information can contain the object identification information. After executing the correcting operation suitable for the object in the external device, therefore, the specifics of the correction are registered in correspondence with each identification information. Thus, the face image of the same object can be corrected later in the same manner as in the preceding correction using the registered information. Also, since the image is linked with the personal identification information, the picked-up images can be readily put in order or printed as photo copies.
  • the link information can include the additional information such as the image pickup date and the image pickup conditions that have been set.
  • An image pickup device comprises a distance recognition part for recognizing the distance to an object, wherein the face image extraction part includes a part for specifying the size of the face image to be extracted, based on the result of recognition by the distance recognition part.
  • the distance recognition part can be a distance measuring sensor utilizing the reflected light.
  • the face image extraction part, after estimating the size of the object face based on the distance recognized by the distance recognition part, sets a search area of a size corresponding to the estimated face size on the image to be processed, or adjusts the size of the provisionally-acquired image based on the estimation result without changing the size of the search area. In this way, only a face image corresponding to the estimated size is searched for. In either case, the time required for the face image extraction process is shortened according to the size of the face image to be extracted, thereby realizing high-speed processing corresponding to the image pickup operation.
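As an illustration of how the recognized distance could bound the face size to search for, a pinhole-camera approximation can be used; the focal length, pixel pitch and assumed physical face height below are placeholder values, not figures from this publication.

```python
# Sketch: approximate face height in pixels from the measured object distance,
# using a pinhole-camera model, and derive a size window for the face search.
def expected_face_pixels(distance_mm, focal_mm=7.0, pixel_pitch_mm=0.003,
                         face_height_mm=240.0):
    """Projected face height (pixels) for an object at distance_mm."""
    return (face_height_mm * focal_mm) / (distance_mm * pixel_pitch_mm)

def search_window(distance_mm, tolerance=0.3):
    size = expected_face_pixels(distance_mm)
    return int(size * (1 - tolerance)), int(size * (1 + tolerance))
```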
  • alternatively, the direction of the line of sight of the photographer is specified by the line-of-sight sensor or the like, and the search is made by limiting the face image detection range to that direction. Also, in view of the fact that images of the same person are very often picked up in succession, an image containing features similar to those of the face image obtained in the preceding extraction process can be searched for preferentially within a predetermined time after the preceding image pickup process.
  • the face image extraction process can be completed in a short time in accordance with the image pickup operation of the photographer.
  • the control unit includes a focal length adjusting part for adjusting the focal length of the lens of the image pickup unit in accordance with the extraction result of the face image extraction part.
  • a normal auto-focus image pickup device develops an inconvenience in which the face of the object becomes unclear, for example, as the result of an image being focused at the center of the image or before the object.
  • the focal length is adjusted in such a manner as to focus the face of the object based on the result of face image extraction, and therefore the face image of the object can be clearly picked up.
  • the repetitive execution of the face extraction process and the focal length adjusting process described above makes adjustment possible whereby the face of the object is always focused.
  • An image pickup device comprises a first operating unit for designating the face image extraction range, wherein the face image extraction part includes a part for limiting the face image extraction area of the image picked up by the image pickup unit in accordance with the designating operation of the first operating unit.
  • the first operating unit may be arranged at an appropriate position on the body of the image pickup device.
  • the image pickup device comprises a display unit for displaying the image picked up by the image pickup unit and a user interface for designating the face image extraction range on the image displayed on the display unit.
  • the first operating unit may be configured integrally with second and third operating units described later.
  • the face image extraction process is limited to the area designated by the photographer, and therefore the time required for the extraction process is shortened.
  • An image pickup device comprises a second operating unit for designating the deletion of a predetermined one of the face image extraction results, wherein the face image extraction part includes a part for updating the face image extraction result in accordance with the designating operation of the second operating unit.
  • the second operating unit preferably includes a display unit for displaying the face image extraction result and a user interface for receiving the designation of the deletion on the display screen of the display unit.
  • the face image extraction result can be displayed by arranging the pointer at the face image extraction point on the provisionally-acquired image. More preferably, a marking is displayed to clarify the position and size of the face image (for example, by setting a frame image including the face image).
  • the extraction result for the face image of a person other than the intended object, or any other face image not required for the subsequent inference process or link information production, can thus be deleted, so that the detailed process is concentrated on the object.
  • the second operating unit may have the function of correcting the extraction result as well as the function of designating the deletion of the face image extraction result.
  • the photographer can confirm whether the face image has been correctly extracted or not, and in the case where an error exists, can correctly set again the position and size of the face image by the correcting operation. This can avoid an otherwise possible case in which an erroneous inference process is executed due to an error of the face image extraction thereby leading to an error in the subsequent process.
  • An image pickup device comprises a third operating unit for operation to correct the inference information obtained by the inference process of the inference part, wherein the information processing part includes a part for correcting the inference information in accordance with the correcting operation of the third operating unit.
  • the third operating unit may include a display unit for displaying the image picked up and the inference information for the image and a user interface for receiving the operation of correcting the inference information on the display screen of the display unit.
  • any error which may occur in the inference process is corrected as right information by the correcting operation.
  • the image pickup conditions not suitable for the object can thus be prevented from being set by an erroneous inference process.
  • the operation of correcting the inference information may include the operation of adding new information not inferred, as well as the operation of correcting the error of the inference result.
  • the information designated for addition may include a correcting process difficult to execute at the time of the image pickup operation (a process requiring a considerable length of processing time, such as smoothing a contour line, extracting and erasing a defect of the face surface or correcting the color tone and brightness of the skin color in detail). By doing so, not only can the error of the inference result be corrected, but also link information including the detailed contents of the image correction can be produced in accordance with the preference of the photographer or the object person. Thus, the image picked up can be corrected in more detail.
  • An image pickup device comprises a fourth operating unit for correcting the image pickup conditions adjusted by the image pickup conditions adjusting part, wherein the image pickup conditions adjusting part includes a part for readjusting the image pickup conditions in accordance with the correcting operation by the fourth operating unit.
  • the fourth operating unit is for correcting the image pickup conditions adjusted by the image pickup conditions adjusting part, and this function may be added to the normal operating unit for adjusting the stop and the focal length.
  • the image pickup conditions adjusted based on the inference information can be finely adjusted, so that an image can be picked up under more appropriate image pickup conditions.
  • the information processing part includes a part for determining the direction of the face of the object in the image stored in the memory or the storage medium based on the result of extraction by the face image extraction part, and a part for rotating the image in such a manner that the determined face direction, if different from a predetermined reference direction, comes to agree with the predetermined reference direction.
  • the face direction can be restored to the reference direction by rotational correction.
  • the image picked up can be displayed with the object kept in the same direction. Also, in the case where a plurality of images are printed on a single sheet of paper, no correcting operation is required to align the direction of the objects.
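A minimal sketch of such a rotational correction, assuming the face tilt angle is available from the link information and using OpenCV, might look like this; the convention that the upright reference direction corresponds to 90 degrees is an assumption for illustration.

```python
# Sketch: rotate the stored image so that the face axis agrees with the
# (assumed upright) reference direction, based on the recorded tilt angle.
import cv2

def correct_orientation(image, face_tilt_deg, reference_deg=90.0):
    h, w = image.shape[:2]
    rotation = reference_deg - face_tilt_deg        # amount needed to reach reference
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), rotation, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))
```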
  • the image pickup device may be embodied as a device having the configuration described below.
  • an image pickup device comprises a feature amount storage part for storing the feature amount of the face image extracted, wherein the face image extraction part includes a specified image extraction part for extracting an image area having the feature amount of a specified face image stored in the feature amount storage part from the image picked up by the image pickup unit.
  • the face image extraction process can be executed by extracting the image area including the same feature amount as the specified face image extracted in the past. In picking up images of the same person as an object successively, therefore, the face image of the object can be extracted within a short time.
  • the feature amount storage part is not necessarily required to store the feature amounts of the face images of all the objects picked up in the past, but may store only the feature amounts acquired within a predetermined preceding time or those acquired for a predetermined number of the latest images.
  • An image pickup device comprises an object storage part for storing the feature amount of the face image of a specified object, wherein the information processing part is so configured that the feature amount of the face image extracted by the face image extraction part is compared with the feature amount stored in the object storage part, and in the case where the comparison shows that the extracted face image is associated with the specified object, the link information including the inference information obtained by the inference process of the inference part and the information for identifying the specified object is produced and stored in the memory or the storage medium together with the image picked up by the image pickup unit.
  • the link information including the information for identifying the same person (such as the name and the ID number) can be produced and stored together with the image thereof.
  • the person to be corrected is specified and corrected based on the ID information of the person.
  • special points of correction for the same person can be designated and the detailed correction can be carried out.
  • the image is linked with the ID information of individual persons, the images picked up can be put in order and the photos printed easily.
  • the ID information may be input before or after picking up the image.
  • the image pickup device preferably comprises an operating unit for giving an instruction to register the face image extracted by the image extraction part and inputting the personal ID information, wherein the information processing part includes a part for registering in the object storage part the feature amount of the face image designated by the operating unit in correspondence with the personal ID information input.
  • a specified one of the persons whose images have been picked up can be designated and the feature amount of the face image thereof can be registered together with the personal ID information.
  • the personal ID information can be stored as linked to the image of the particular person.
  • the face image to be registered can be designated not only immediately after picking up the image thereof but also by retrieving the image stored in the memory or the storage medium.
  • the control unit includes a part for receiving, in accordance with an image pickup operation performed on a predetermined object for registration, the input of the information required for adjusting the optimum image pickup conditions and the ID information of the object, and for storing the input information in the registration part together with the face image of the object.
  • in the image pickup device, for example, an image picked up by the image pickup unit is displayed, the operation for adjusting the image pickup conditions is performed, and the input of the ID information of the object is received.
  • the parameter indicating the feature amount of the face image contained in the image after the adjust operation can be registered as the information required for adjusting the optimum image pickup conditions.
  • the input processing of the information can be executed after picking up the image as well as at the time of picking up the image.
  • This operation of processing the input includes the operation of giving an instruction to correct the face image picked up and the operation of inputting the ID information, so that the parameter indicating the feature amount of the face image corrected is registered as the information required for adjusting the optimum image pickup conditions.
  • the image pickup device having the aforementioned various configurations may be a portable communication terminal (a mobile telephone, etc.) having a camera function, as well as an ordinary digital camera or digital video camera. Also, the image pickup device may be combined with a printer as a photo seal vending machine.
  • FIG. 1 shows a block diagram of a configuration of a digital camera according to this invention.
  • FIG. 2 shows the function set in the CPU of the digital camera of FIG. 1.
  • FIG. 3 shows an example of display of the face area extraction result.
  • FIG. 4 shows an example configuration of the link information.
  • FIG. 5 shows an example parameter for setting the face area.
  • FIG. 6 shows a flowchart of the steps of controlling the process of picking up an image of a person.
  • FIG. 7 shows a histogram of the brightness distribution in the face area for each difference in race and lighting conditions.
  • FIG. 1 shows a configuration of a digital camera according to this invention.
  • a digital camera 1 mainly comprises an image pickup unit 4 including a lens unit 2 and a CCD 3 and a control unit 7 for retrieving the image from the CCD 3 and generating a digital image in the final form.
  • the digital camera 1 has further built therein peripheral circuits such as an A/D converter 8 , an image processing circuit 9 , a lens adjusting unit 10 , a shutter control unit 11 , a strobe control unit 12 , a length measuring sensor 13 , a line-of-sight sensor 14 , a USB interface 15 and an input/output interface 17 .
  • a memory card 16 is adapted to be removably connected to the control unit 7 .
  • the control unit 7 is configured of a CPU 5 and a nonvolatile memory 6 such as a flash memory (hereinafter referred to simply as the memory 6).
  • the memory 6 has stored therein a program required for the operation of the CPU 5 , a data base for storing various reference tables and templates used in the process and image data output from the image processing circuit 9 and the CPU 5 .
  • the lens adjusting unit 10 includes a focal point adjusting mechanism and a stop adjusting mechanism of the lens unit 2 .
  • the shutter control unit 11 is for supplying a drive pulse to the CCD 3 to store the charge.
  • the strobe control unit 12 is for adjusting the luminous timing and the light amount of a strobe not shown.
  • the focal point, the stop adjust value, the interval of the drive pulses applied to the CCD 3 and the luminous amount of the strobe are adjusted in accordance with the control signal from the CPU 5 .
  • the A/D converter 8 retrieves the outputs sequentially from the pixels of the CCD 3 and converts them into digital signals for the color components of R, G and B.
  • the image processing circuit 9 includes a set of a plurality of shift registers and flip-flops, and upon receipt of the output from the A/D converter 8 , generates a full-color image data having a combined intensity of R, G and B for each pixel.
  • the image data thus generated are stored in the memory 6 and processed as predetermined by the CPU 5 .
  • the length measuring sensor 13 is for measuring the distance to an object, and may be either of the passive type using triangulation or the active type using infrared light.
  • the line-of-sight sensor 14 radiates infrared light onto the eyeballs of the photographer to detect the direction of the line of sight of the photographer by measurement using the reflected light image.
  • the measurement results of these sensors are input to the CPU 5 and used to generate a preview image before the main image pickup process or to set the image pickup parameters for the main image pickup process.
  • the USB interface 15 is an interface conforming with the Universal Serial Bus Standard and used for transferring the image stored in the memory 6 or the memory card 16 to an external device such as a personal computer.
  • the input/output interface 17 is connected with the operating unit 18 and the display unit 19 .
  • the operating unit 18 and the display unit 19 are arranged on the body surface of the digital camera 1 .
  • the display unit 19 is used to display an image in process or a screen for inputting the information.
  • the operating unit 18 includes the operating keys for inputting information and a shutter button.
  • the functions shown in FIG. 2 are set in the CPU 5 by the program stored in the memory 6 . These functions permit the digital camera 1 according to this embodiment to pick up an image of a person as an object by estimating the race, age and sex of the object person in a predetermined image pickup area, and thus by determining the image pickup parameters suitable to the result of the estimation, the main image pickup process can be executed.
  • the adjust values including the shutter speed, stop and focal length and whether the strobe is used or not are always determined as image pickup parameters. Further, in the case where the strobe is used, the luminous intensity thereof is set.
  • the preview image acquisition unit 51 generates a preview image of the image pickup area by controlling the lens adjusting unit 10 and the shutter control unit 11 (desirably, the strobe is not used when acquiring the preview image).
  • a face detection processing unit 52 detects a face image from the preview image acquired by the preview image acquisition unit 51 .
  • a face area setting unit 53 sets an area of predetermined size including the face image as a processing area (hereinafter referred to as “the face area”) in accordance with the result of detection of the face image thereby to produce an area setting parameter described later.
  • An inference processing unit 54 infers the race, age and sex of the object based on the feature amount in the face area that has been set.
  • a parameter determining unit 55 determines the image pickup parameters suitable for the result of the inference processing. According to this embodiment, the presence or absence of back light is inferred in addition to the race, age and sex described above. The result of inference of these four factors and the distance to the object are combined variously in advance. For each combination, a table with optimum image pickup parameters set therein is produced and registered in the memory 6 . In the process of determining the image pickup parameters, the table is accessed with the inference result and the measurement of the length measuring sensor 13 thereby to extract the optimum image pickup parameters.
  • a main image pickup processing unit 56 controls the lens adjusting unit 10 , the shutter control unit 11 and the strobe control unit 12 based on the image pickup parameters determined by the parameter determining unit 55 thereby to execute the main image pickup process (using the strobe, if required).
  • the image data storage unit 57 generates an information link image in which the image obtained in the main image pickup process is linked to the processing result of the face area setting unit 53 and the inference processing unit 54 , and stores the information link image in the memory 6 or the memory card 16 .
  • the image data output unit 58 appropriately reads the information link image in store, and outputs them externally through the USB interface 15 .
  • the user interface control unit 59 checks the face area setting result and the link information of the information link image, and corrects an error, if any, or inputs additional information.
  • a preview image including the face area setting result is displayed on the display unit 19 at the end of the face area setting process.
  • FIG. 3 shows an example display, in which frame images 21 , 22 corresponding to the boundary lines of the face areas are displayed on the face images of the persons in the preview image 20 .
  • the user interface control unit 59 sets an operating screen for performing the operation of correcting the setting of the face area and the operation of setting a new face area, accepts various operations and outputs the contents of each operation to the face area setting unit 53 .
  • the face area setting unit 53 in accordance with the contents of each operation, corrects the place and size of the face area thus set, deletes an unnecessary face area and sets a new face area.
  • the user interface control unit 59 when the information link image is stored in the image data storage unit 57 , displays the image and the link information contained in the information link image. At the same time, an operating screen for correcting the link information or inputting additional information is set to accept various corrections and inputs. The contents of the correction and the input information are delivered to the image data storage unit 57 . The image data storage unit 57 corrects the corresponding information based on the correction of the information link image, and adds the additionally input information to the existing link information.
  • the correcting information is mainly for correcting an error of the inference process.
  • the additional information is input by the operation of designating an optional correction that cannot be covered by the standard correction based on the inference process, or by the operation of inputting personal information such as the name of the object (surname or given name, or nickname).
  • the digital camera 1 has the function of correcting the picked-up image such as the brightness or the contour in accordance with the designating operation of the user.
  • the image data storage unit 57 executes the process of adding the correction-related information to the link information.
  • FIG. 4 shows an example of a configuration of the link information in the information link image.
  • the uppermost column corresponds to the index information in the link information and has set therein the image number, the image pickup date and the image pickup mode.
  • the image number is a serial number assigned in the order of generation of images by the image pickup operation.
  • the image pickup mode has a plurality of modes including the portrait image pickup mode and the landscape image pickup mode. Only in the case where the portrait image pickup mode is selected, the face detection process and the inference process are executed to set specific link information.
  • the link information contains the information indicating the result of setting the face area, such as the coordinate (xp, yp) of the face image detection position, the size r of the face image and the face tilt angle θ, and the information indicating the inference result such as the race, age, sex and the presence or absence of back light. Further, the link information contains additional information including the personal information such as the name of the object, the item of the selected option correction, whether the correction is made or not in the device, and the contents of correction.
  • the link information in store may contain the image pickup parameters used, in addition to those shown. Also, with regard to the image picked up in other than the portrait image pickup mode, the information link image is generated which has only the index information with the image number, the image pickup date and the image pickup mode.
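For illustration, the link information of FIG. 4 could be represented in memory roughly as follows; the field names and types are assumptions, not the actual storage format of this publication.

```python
# A possible in-memory representation of the link information of FIG. 4,
# written as a Python dataclass. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional, Tuple, List

@dataclass
class LinkInfo:
    image_number: int
    pickup_date: str
    pickup_mode: str                                   # e.g. "portrait" or "landscape"
    face_position: Optional[Tuple[int, int]] = None    # (xp, yp)
    face_size: Optional[float] = None                  # r
    face_tilt: Optional[float] = None                  # theta, degrees
    race: Optional[str] = None
    age: Optional[str] = None
    sex: Optional[str] = None
    backlight: Optional[bool] = None
    name: Optional[str] = None
    option_corrections: List[str] = field(default_factory=list)
    corrected_in_camera: bool = False
```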
  • the detection position (xp, yp), the face size r and the face tilt angle θ contained in the link information are used as parameters for setting a face area suitable for the particular face image in the case where the same face image is corrected in detail after the main image pickup process.
  • FIG. 5 shows a specific example of each parameter.
  • a feature point P corresponding to the highest nose position is extracted from the feature points of the face image, and the coordinate (xp, yp) of this point P is used as the face detection position.
  • with this point P as the origin, the boundary between the forehead and the hair is searched for in each direction, and a point Q associated with the shortest distance from point P is determined from the feature points corresponding to the boundary thus found.
  • the distance between point Q and point P is set as the face size r.
  • a vector C directed from point P to point Q is set, and the angle that the vector C forms with the horizontal direction of the image (x axis) is measured as a face tilt angle θ.
  • character U designates an example of the face area set by each parameter.
  • the size of this face area U is determined based on the face size r, and set in such a manner that the center thereof corresponds to point P and the main axis is tilted by θ with respect to the x axis.
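The parameters of FIG. 5 can be derived from the two feature points in a few lines; the sketch below assumes image coordinates for P (highest nose position) and Q (nearest forehead/hair boundary point).

```python
# Sketch: derive the face-area parameters (xp, yp), r and theta of FIG. 5
# from the nose point P and the nearest forehead/hair boundary point Q.
import math

def face_area_parameters(p, q):
    (x_p, y_p), (x_q, y_q) = p, q
    r = math.hypot(x_q - x_p, y_q - y_p)                     # distance P-Q
    theta = math.degrees(math.atan2(y_q - y_p, x_q - x_p))   # angle of vector C vs x axis
    return (x_p, y_p), r, theta
```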
  • the link information shown in FIG. 4 can be stored in the memory card 16 or output to an external device by the function of the image data output unit 58 .
  • the face area to be processed can be automatically set by the parameters (xp, yp), r and θ.
  • detailed and proper correction process can be executed based on the inference result and the option correction items.
  • FIG. 6 shows the steps of the control operation performed by the CPU 5 in accordance with the operation of taking a photo in portrait image pickup mode.
  • in step ST 1 , the CCD 3 is driven by setting appropriate image pickup conditions based on the measurements of the line-of-sight sensor 14 and the length measuring sensor 13 , and a preview image is generated from the output of the CCD 3 .
  • the preview image thus generated is stored in the working area of the memory 6 , and the process proceeds to step ST 2 for executing the face detection process.
  • a search area of predetermined size is scanned on the preview image thereby to search for the feature points of the face image.
  • a table indicating the standard face image size corresponding to each distance to the object is set in the memory 6 .
  • the approximate size of the face image appearing on the preview image is predicted, and in accordance with the particular size, the size of the search area is adjusted.
  • the scanning of the search area is confined to a predetermined range around the point where the photographer has set the line of sight.
  • in this way, a highly accurate face detection process is executed within a short time.
  • in step ST 3 , the various parameters shown in FIG. 5 are extracted for the detected face image, and based on these parameters, the face area U is set. Further, in step ST 4 , the result of setting the face area is displayed as a frame image in the preview image on the display unit 19 . The displayed result can be corrected by the photographer, and in accordance with this operation, the parameters for setting the face area are corrected (ST 5 , ST 6 ).
  • the correcting operation in step ST 5 includes the operation of changing the position and size of the set face area, the operation of deleting the face area and the operation of setting a new face area.
  • the process executed in step ST 6 includes the process of changing the value of the parameters for setting the face area, the process of deleting the parameters and the process of setting a new parameter.
  • in step ST 7 , various inference processes are executed for the face area finally determined.
  • in the case where no correcting operation is performed, the answer in step ST 5 is NO, and the process proceeds to step ST 7 for executing the inference process for the face area set in step ST 3 .
  • in step ST 7 , the race, age, sex and the presence or absence of back light are estimated for the set face area.
  • the race estimation process can be executed based on the cited reference Gregory Shakhnarovich, Paul A. Viola, Baback Moghaddam: “A Unified Learning Framework for Real Time Face Detection and Classification”. According to this embodiment, however, the race and the presence or absence of back light are estimated at the same time using the brightness distribution in the face area to shorten the processing time.
  • FIG. 7 shows an example of histogram for each of the color data of R, G and B and the lightness L (arithmetic average of R, G and B) for three different cases of objects and lighting environments.
  • the histogram is drawn in gradation scale, and the brightness increases progressively rightward in the page.
  • FIG. 7( 1 ) is a histogram representing a case in which an image of a yellow person is picked up in a proper lighting environment.
  • in this histogram, the distribution toward higher brightness is comparatively great for each color data. In particular, the intensity of the red component is emphasized.
  • FIG. 7( 2 ) is a histogram representing a case in which an image of the same yellow person as in FIG. 7( 1 ) is picked up in back light. In this histogram, the appearance ratio of each color data is considerably lower than in FIG. 7( 1 ) and the distribution is concentrated on the dark side.
  • FIG. 7( 3 ) is a histogram representing a case in which the image of a black person is picked up in a proper lighting environment.
  • the distribution has a peak on both dark and bright sides (the dark side is considered to correspond to the skin, and the bright side to the eyes and teeth).
  • a template of the brightness histogram is prepared for a plurality of image pickup environments having different lighting conditions for each race, and the histogram extracted for the face area to be processed is collated with each template. In this way, the race and the presence or absence of back light are estimated.
  • if, in this determining process, the brightness distribution of a local area such as the eyes or mouth is extracted in addition to that of the whole face area, a more accurate estimation result can be obtained.
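A simplified sketch of this histogram-template approach is shown below; the use of a normalised lightness histogram and of histogram intersection as the collation measure, as well as the template structure, are assumptions made for illustration.

```python
# Sketch: compare the lightness histogram of the face area with pre-registered
# templates for each (race, lighting) combination. Templates are placeholders.
import numpy as np

def lightness_histogram(face_rgb, bins=32):
    l = face_rgb.astype(np.float32).mean(axis=2)        # (R+G+B)/3
    hist, _ = np.histogram(l, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)                    # normalised histogram

def estimate_race_and_backlight(face_rgb, templates):
    """templates: {(race, 'backlight' or 'normal'): normalised histogram}"""
    hist = lightness_histogram(face_rgb)
    score = {k: np.minimum(hist, t).sum() for k, t in templates.items()}
    race, lighting = max(score, key=score.get)
    return race, lighting == "backlight"
```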
  • the age and sex are estimated, as described in the cited reference Kiyoshi Hosoi, Erina Takigawa and Masato Kawade: “Sex and Age Estimation System by Gabor Wavelet Transform and Support Vector Machine”, by a method in which the feature amount of the feature point for each organ is applied to an estimation system called the support vector machine. Nevertheless, the invention is not necessarily limited to this method.
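Purely as an illustration of the Gabor-wavelet-plus-SVM idea cited above (not the cited system itself), feature amounts could be sampled and classified along the following lines; the filter parameters and the training data are placeholders.

```python
# Rough sketch: sample Gabor filter responses at facial feature points and feed
# the resulting vector to a support vector machine. Parameters are assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray, points, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        response = cv2.filter2D(gray.astype(np.float32), -1, kernel)
        feats.extend(response[y, x] for (x, y) in points)
    return np.array(feats)

# Illustrative use, assuming labelled training data is available:
# sex_classifier = SVC(kernel="rbf").fit(training_features, training_labels)
# sex = sex_classifier.predict([gabor_features(gray_face, feature_points)])[0]
```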
  • in step ST 8 , the set table in the memory 6 is accessed with the inference result and the measurement of the length measuring sensor 13 , thereby determining the image pickup parameters suitable for the inference result and the distance to the object.
  • the focal length of the lens 2 is determined also taking the face image extraction result into consideration. In this way, the state can be set in which the face of the object is just in focus.
  • in step ST 9 , a marker indicating that the image pickup operation is possible is displayed on the display unit 19 .
  • the photographer operates the shutter button.
  • when the shutter button is operated, the answer in step ST 10 turns YES, and the main image pickup process is executed in step ST 11 using the set image pickup parameters.
  • the link information having the configuration shown in FIG. 4 is generated in step ST 12 .
  • the link information at this stage does not yet contain information such as the name and the option correction; only the index information, the face area setting parameters and the inference result are set.
  • in step ST 13 , the image obtained in the main image pickup process and the link information are displayed on the display unit 19 .
  • in the case where the photographer performs a correcting or input operation, the answer in step ST 14 turns YES, and the process proceeds to step ST 15 , where the link information is corrected or the additional information is set, as the case may be, in accordance with the input operation.
  • in step ST 16 , the final information link image is stored in the memory 6 or the memory card 16 .
  • otherwise, the answer in step ST 14 turns NO, and the process proceeds to step ST 16 , so that the information link image containing only the basic link information generated in step ST 12 is stored.
  • each item of information may be displayed sequentially by a switching operation, and the correcting operation is accepted at each display.
  • in the case where the correction of an option is designated, a user interface indicating a menu of correction items is prepared for accepting an item selection. In this way, the additional information can be input or recognized easily.
  • the information link image is displayed and the correction or additional input is accepted immediately after the image pickup operation.
  • the timing of display and correction is not limited to this method. After an image is picked up, for example, the information link image having the basic configuration is generated and held temporarily, so that the image and the link information are displayed in response to the image call operation at a predetermined time point and the input operation for correction or addition may be accepted. In this case, the link information stored in the preceding process is rewritten in accordance with the correction.
  • the focal length of the lens 2 is adjusted in accordance with the detection result at the end of the face image detection in step ST 2 , and the preview image is retrieved again.
  • in this way, the subsequent face area setting process and inference process can be carried out on a face image in which the facial features of the object are clear, and therefore higher-accuracy processing can be executed.
  • the race, age and sex having a large effect on the manner in which the image of the object face is picked up are estimated, and in accordance with the result of estimation, the image pickup parameters are set to conduct the main image pickup process. Even in the case where an image is picked up at the same place under the same lighting conditions, therefore, the image pickup conditions are varied from one object to another. Thus, a clear face image similar to the real image of each object can be generated.
  • This correcting process is executed by an editing device such as a personal computer which fetches the information link image through the memory card 16 or the USB interface 15 .
  • a correcting parameter or algorithm is set in advance for a plurality of classes of each item of race, age and sex (the parameters and the algorithm are hereinafter referred to as “the correction data”).
  • the suitable correction data are selected from the inference result contained in the link information of the image to be processed, and the process based on the correction data is executed.
  • the face image can be corrected in different ways according to the difference in race, age and sex.
  • for female persons in their teens or twenties, for example, reddish parts due to pimples or the like are detected and replaced by the same color as the surrounding part, or corrected to increase the whiteness of the skin.
  • alternatively, the skin color is corrected to a sun-tanned color, while for female persons in their forties, parts with small wrinkles or freckles are detected and replaced with the same state as the other parts. In this way, what is considered the most desirable correction is carried out for persons of different ages and sexes.
  • correction data of this type can be freely changed, and therefore the contents thereof can be changed in accordance with the fashion or season.
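The sketch below illustrates how such correction data might be selected on the editing-device side from the inference result carried in the link information; it assumes a link_info object with race, age and sex attributes (as in the LinkInfo sketch above), and the class keys and correction routines are placeholders.

```python
# Hypothetical "correction data" table: each (race, age band, sex) class points
# at a correction routine chosen from the link information. Routines are stubs.
def whiten_and_remove_blemishes(img):   # e.g. teens/twenties female
    return img

def soften_wrinkles(img):               # e.g. forties female
    return img

CORRECTION_DATA = {
    ("yellow", "twenties", "female"): whiten_and_remove_blemishes,
    ("yellow", "forties",  "female"): soften_wrinkles,
}

def correct_with_link_info(img, link_info):
    key = (link_info.race, link_info.age, link_info.sex)
    return CORRECTION_DATA.get(key, lambda i: i)(img)   # identity if no match
```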
  • in the case where a plurality of persons are included in the image pickup area, the inference process is executed for each object, after which the image pickup parameters corresponding to one of the inference results are selected, or the parameters are averaged over the persons, to conduct the main image pickup process.
  • the detailed correction process may be executed using the link information for each object after image pickup operation, thereby correcting the errors due to the difference in the inference result between the objects.
  • the correction process can be carried out in accordance with each item.
  • the detailed correction can be conducted in accordance with the desire of the object and the photographer.
  • a data base of information for the correction of each person can be constructed in the editing device. This data base makes it possible to correct the image of a person in registration in the same way as before.
  • the link information shown in FIG. 4 also contains the information as to whether the image has been corrected by the camera and the information indicating the specifics of the correction. Based on this information, the device can be set so as not to duplicate the correction already carried out by the camera.
  • the four factors including the race, age, sex and the presence or absence of back light are estimated for the detected face image.
  • Alternatively, only some of the three factors other than back light may be estimated, while the remaining factors are set as additions to the link information.
  • In the case where the main object is limited to a predetermined race (yellow people, for example), only the age and sex are estimated, while in the case where a black or white person is the object, the information on the particular race is added to the link information.
  • The link information may also be rendered to contain the information about the use of the strobe. By doing so, correction after the image pickup process becomes possible taking into account the lightness balance between the face image and the surrounding area.
  • a data base containing the correspondence between the link information including the personal information described above and the result of extracting the feature points of the face image is stored in the memory 6 .
  • An object person whose image has ever been picked up can then be identified by accessing the data base with the features of the extracted face image. Further, using the link information registered in the data base, the information link image can be set easily.
  • the features and the correction data of the face image of a person once corrected are stored in a data base. After that, therefore, a person can be identified by comparing the result of extracting the feature points contained in the newly input link information with the data base. Further, based on the correction data registered in the data base, the correction suitable for a particular person can be carried out.
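  • A minimal sketch of such an identification step, assuming the face features have been reduced to a fixed-length numeric vector and are matched by nearest neighbour (the threshold, field layout and function names are illustrative assumptions):

```python
import numpy as np

# Hypothetical registry: person ID -> (feature vector, stored link information / correction data)
registry: dict[str, tuple[np.ndarray, dict]] = {}

def register(person_id: str, features: np.ndarray, info: dict) -> None:
    """Store the face features and the correction data of a person once corrected."""
    registry[person_id] = (np.asarray(features, dtype=float), info)

def identify(features: np.ndarray, threshold: float = 0.5):
    """Return (person_id, stored info) of the closest registered face, or None if nothing matches."""
    best_id, best_dist = None, float("inf")
    for pid, (ref, _info) in registry.items():
        dist = float(np.linalg.norm(ref - np.asarray(features, dtype=float)))
        if dist < best_dist:
            best_id, best_dist = pid, dist
    if best_id is not None and best_dist <= threshold:
        return best_id, registry[best_id][1]
    return None
```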
  • the GPS, gyro or the geophysical sensor is built in to detect the place of image pickup and the camera position, and the detection data are contained in the link information.
  • This information can be printed with the image or distributed at the time of editing the image. Also, since the direction of the image pickup operation is known, the lighting conditions can be determined in combination with the time information; thus, whether the image has been picked up in back light, among other conditions, can be determined for correction. The images can also be sorted based on the image pickup positions.
  • A communication function for connecting to a network such as the Internet may be built in, and the weather conditions at the place of image pickup fetched from a meteorological server and used for setting the image pickup parameters. The weather conditions can also be contained in the link information and used for correction at the time of editing.
  • Further, a communication function with a network may be built in, and the acquired information link image transmitted to an image editing server. In the server, the transmitted image is corrected based on the link information, and the processed image is returned, or printed and delivered.
  • The digital camera 1 may also be built into a portable telephone. The image pickup device can then be designed easily by using the communication functions already provided in the portable telephone.
  • an image can be picked up by automatically setting the image pickup conditions corresponding to the preference of each object. Therefore, a satisfactory photo of the object can be easily taken and the utility of the digital camera is improved. Even in the case where an image is picked up by this method, an object can be designated with the name of a person and the correction suitable for the particular object can be made in an external editing device by generating the information link image having the link information including the name of the object person and the image pickup conditions.
  • an image can be picked up by setting detailed image pickup conditions for the race, age and sex of the object, and therefore a face image very similar to a clear, real image of each object can be generated. Also, by picking up an image by setting the image pickup conditions meeting the preference of each object, an image satisfactory to each object can be easily generated. Further, the face image after being picked up can be easily corrected in detail in a way meeting the desire of each person based on the link information.

Abstract

An image pickup device, an image pickup device program and an image pickup method are disclosed, in which an image is picked up by setting the image pickup conditions suitable for an object, and the face image thus picked up can be easily corrected in accordance with the characteristics and preferences of the object. Once a shutter button is pressed halfway and a line-of-sight sensor 14 confirms that the line of sight of a photographer is set, a CPU 5 generates a preview image, and upon detection of the face image, infers the race, age and sex of the object using the feature amount of the face image. Further, image pickup parameters suitable for the inference result and the measurement of a length measuring sensor 13 are set, and the main image pickup process is executed under the image pickup conditions corresponding to the parameters. Furthermore, the result of detection of the face image, the inference result and the information additionally input after image pickup operation are linked with the image acquired by the main image pickup process, and stored in a memory 6 or a memory card 16 or output externally through a USB interface 15.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to an image pickup device comprising an image pickup unit including a lens and an image sensor and a control unit for storing a processed image in an internal memory or a predetermined storage medium, or more in particular to a technique for the image pickup device to generate an image including the face of an object in accordance with the image pickup operation with a person as an object. [0002]
  • 2. Description of the Background Art [0003]
  • In recent years, techniques have been disclosed for extracting an image area corresponding to the face of an object from an image picked up from a person constituting the object, and adjusting the image pickup conditions such as the exposure amount or correcting the image thus picked up, based on the feature amount within the particular area. Examples of these techniques are described in Japanese Unexamined Patent Publications No. 10-268447, No. 8-62741 and No. 11-146405. [0004]
  • Japanese Unexamined Patent Publication No. 10-268447 discloses a technique in which a photograph is printed using the image data retrieved from an image sensor in such a manner that the face area of a person is extracted from the image data, and based on the light measurement data within the particular area, the exposure amount is determined for correction and the image is adjusted in accordance with the features of the face image. [0005]
  • Japanese Unexamined Patent Publication No. 8-62741 discloses a technique in which an image picked up by a camera is printed out in such a manner that a skin color area corresponding to the face image is extracted from the image to be processed, and based on the brightness information of the same image, the intensity of back light is determined. According to the degree of human presence and the intensity of back light, the gradation is corrected differently. [0006]
  • Japanese Unexamined Patent Publication No. 11-146405 discloses a video signal processing device such as a color video camera, in which a skin color area is extracted in the process of retrieving the video signal, and upon extraction of the skin color area, the video signal is corrected in brightness and color. In this way, only the skin color area can be corrected. [0007]
  • In the conventional correction processes, including the techniques described in the patent publications cited above, the correction parameters appear to be determined by comparing feature amounts such as the brightness and color of the face image with a predetermined reference. This reference of correction, however, is determined in accordance with the skin color of a predetermined race, and therefore the correction process executed with another race as an object may develop an inconvenience. The conventional techniques described in the cited patent publications were invented by Japanese inventors, and the skin color is normally assumed to be yellow. [0008]
  • In the case where an image of a black person is picked up as an object with the reference of correction based on the yellow-skinned race, for example, the correction parameters for back light may be applied. Since the face image of a black person and a face image in back light actually differ considerably, however, proper correction is difficult. In the case where a white person is the object, on the other hand, a correcting process similar to that for a yellow person would lead to a yellowish skin color and is liable to produce an apparently unnatural image. [0009]
  • A unified correction parameter also encounters difficulty where the age or sex differs, as with race. The face image of a person in the twenties and the face image of a person in the forties, for example, are considered to differ considerably in the parts to be corrected and in the reference. Also, a male object and a female object generally have different face color references that are considered preferable. [0010]
  • Further, different persons have different preferences about the color and brightness of the face, and a preference may vary according to the current fashion or from one season to another. The face images of individual objects having these various factors cannot be corrected properly with a unified reference as in the prior art. [0011]
  • SUMMARY OF THE INVENTION
  • This invention has been developed in view of the problems described above, and an object thereof is to provide an image pickup device, an image pickup program and an image pickup method in which an image of an object is picked up by automatically setting the image pickup conditions in accordance with the attributes and the preferences of each object person. [0012]
  • Another object of the invention is to provide an image pickup device, an image pickup program and an image pickup method in which the information required for correcting the face image of each object is stored as linked to the image picked up, so that the face image picked up can be easily corrected in accordance with the characteristics or preferences of the object. [0013]
  • According to a first aspect of the invention, there is provided an image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in an internal memory or a predetermined storage medium, wherein the control unit includes a face image extraction part for extracting the face image contained in the image picked up by the image pickup unit, an inference part for executing the process of inferring the attributes or specifically at least the race, age and sex of an object person based on the feature amounts in an image area including the face image extracted, an image pickup conditions adjusting part for adjusting the image pickup conditions for the image pickup unit based on the result of inference in the inference part, and an information processing part for storing in the memory or the storage medium the image obtained under the image pickup conditions adjusted by the image pickup conditions adjusting part. [0014]
  • The image pickup unit, in addition to the lens and an image sensor such as a CCD, may include a mechanism for adjusting the lens stop and the focal point, a mechanism for driving the image sensor, a strobe and a mechanism for adjusting the intensity of the strobe. [0015]
  • The control unit is preferably configured of a computer incorporating a program required for the processing of each of the parts described above. The control unit, however, is not necessarily configured of a computer alone, and may include a dedicated circuit for executing a part of the processes. [0016]
  • The memory for storing the processed image is preferably a nonvolatile memory such as a hard disk or a flash memory. The storage medium is preferably a removable one, such as a memory card, a CompactFlash (registered trademark) card, a CD-R/RW, a DVD-R/RW or a digital video tape, having a sufficient capacity to store a plurality of images. [0017]
  • Further, the image pickup device is required to be equipped with an A/D converter for converting the analog image signal generated by the image pickup unit into a digital image signal. Furthermore, the image pickup device may comprise an image processing circuit for compressing or otherwise converting the digital image into image data in a predetermined file format. [0018]
  • The face image extraction part scans a search area of a predetermined size over the image input from the image pickup unit at a predetermined timing and searches for the feature points indicating the features of the organs making up the face, thereby extracting a face image. Nevertheless, the invention is not necessarily limited to this method; a face image can alternatively be extracted by the conventional method of detecting a skin color area or by a simple pattern matching process. [0019]
  • The inference part is used to determine the race, age and sex with high accuracy by the arithmetic operation using the feature points of the face organs. The race can be inferred using, for example, the technique described in the first reference cited below, or other methods of extracting the brightness distribution in the face image. The age and sex, on the other hand, can be estimated by the method described, for example, in the second reference cited below. [0020]
  • The first reference is Gregory Shakhnarovich, Paul A. Viola, Baback Moghaddam; "A Unified Learning Framework for Real Time Face Detection and Classification"; Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Institute of Electrical and Electronics Engineers (IEEE), May 2002. [0021]
  • For estimating the age and sex, on the other hand, the method disclosed in the second reference can be used: Kiyoshi Hosoi, Erina Takigawa and Masato Kawade; "Sex and Age Estimation System by Gabor Wavelet Transform and Support Vector Machine", Proceedings of the Eighth Image Sensing Symposium, Image Sensing Technological Research Society, July 2002. [0022]
  • The feature amounts used for inference are acquired mainly from the extraction area of the face image. Nevertheless, feature amounts of the whole or a part of the image and of the area around the face image may also be included. The feature amounts thus extracted include the mean and variance of the color and lightness of the face image, the intensity distribution, and the color or lightness difference from the surrounding image. Also, by applying these feature amounts to a predetermined calculation formula, the secondary feature amounts required for inference can be determined. [0023]
  • Examples of the image pickup conditions adjusted according to this invention include the shutter speed and the stop for determining the exposure amount, the focal length and the presence or absence of the strobe and the intensity thereof. With the image pickup device described above, the image pickup conditions can be automatically adjusted based on the inference result after executing the face image extraction process and the inference process. [0024]
  • In executing the process for adjusting the image pickup conditions, a set table indicating the correspondence between the adjust values (hereinafter referred to as the "image pickup parameters") determining each image pickup condition and the factors constituting the object of inference is preferably prepared beforehand, so that the result of the inference process is compared with the set table to acquire the image pickup parameters corresponding to the inference result. In this set table, at least one of the race, age and sex is classified into a plurality of categories (for example, "white person", "yellow person" and "black person" for the race, and "teens", "twenties" and "thirties" for the age), and for each combination of these categories, the corresponding image pickup parameters are determined (a sketch of such a table is shown below). [0025]
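  • The following sketch illustrates what such a set table might look like in code form; every category label and numeric value is invented for illustration and does not represent the parameters actually used by the device:

```python
from dataclasses import dataclass

@dataclass
class PickupParams:
    shutter_speed_s: float    # exposure time in seconds
    aperture_f: float         # f-number
    use_strobe: bool
    strobe_intensity: float   # 0.0 .. 1.0

# Hypothetical set table: (race, age class, sex, backlit?) -> image pickup parameters.
SET_TABLE = {
    ("yellow", "20s", "female", False): PickupParams(1/125, 4.0, False, 0.0),
    ("yellow", "20s", "female", True):  PickupParams(1/60,  4.0, True,  0.6),
    ("black",  "40s", "male",   False): PickupParams(1/60,  2.8, True,  0.4),
}
FALLBACK = PickupParams(1/125, 4.0, False, 0.0)

def lookup_params(race: str, age: str, sex: str, backlit: bool) -> PickupParams:
    """Compare the inference result with the set table and return the matching parameters."""
    return SET_TABLE.get((race, age, sex, backlit), FALLBACK)
```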
  • With the image pickup device described above, the conditions most suitable for the object are selected from a plurality of reference image pickup conditions prepared for race, age and sex, and the image pickup operation is performed under these conditions. Specifically, an image can be picked up by adjusting the image pickup conditions automatically according to the race, age and sex of each individual object. [0026]
  • According to a second aspect of the invention, there is provided an image pickup device comprising an image pickup unit and a control unit similar to those of the image pickup device according to the first aspect of the invention. The control unit of the image pickup device according to the second aspect of the invention includes [0027]
  • a registration part for holding the registration information on the correspondence between the feature amount of the face image of each of a predetermined number of objects and the information required for adjusting the optimum image pickup conditions on the one hand and the identification information unique to the object on the other hand, [0028]
  • a face image extraction part for extracting the face image contained in the image acquired by the image pickup unit, [0029]
  • an inference part for estimating the object by comparing the feature amount of the face image extracted by the face image extraction part with the information registered in the registration part, [0030]
  • an image pickup conditions adjusting part for adjusting the image pickup conditions for the image pickup unit using the registered information of the object estimated by the inference part, and [0031]
  • an information processing part for storing in the memory or the storage medium the image picked up under the image pickup conditions adjusted by the image pickup conditions adjusting part. [0032]
  • In this configuration, the registration part can be set in the memory of the computer making up the control unit. The “information required for adjusting the optimum image pickup conditions” is, for example, a parameter for specifying the color of the face image (gradation, lightness, etc. of R, G, B making up the face color), and preferably adjusted to the color liked by the object to be registered. The identification information unique to the object, on the other hand, preferably includes the name (which is not limited to the full name but may be a nick name) of an object by which the photographer or the object can be easily confirmed. [0033]
  • The extraction process of the face image by the face image extraction part can be executed in the same manner as in the image pickup device according to the first aspect of the invention. The inference part can estimate who an object is by the process of comparing the feature amount of the face image obtained by the extraction process with the feature amount of the face image of each object registered in the registration part. [0034]
  • In the image pickup conditions adjusting part, the image pickup conditions for the recognized object are adjusted based on the information required to adjust the optimum image pickup conditions. In the case where a parameter indicating the optimum skin color of an object is registered as the aforementioned information, for example, the exposure is adjusted in such a manner that the extracted face image becomes as similar as possible to the particular optimum skin color (a rough sketch follows). [0035]
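  • As an illustrative sketch of one possible adjustment rule (the linear sensor response and the EV-based control are assumptions, not the device's actual control law), the exposure could be scaled so that the mean brightness of the extracted face approaches the registered target:

```python
import math

def exposure_correction_ev(face_pixels, target_mean: float) -> float:
    """
    Return an exposure correction in EV steps that would bring the mean brightness
    of the extracted face area close to the registered target value.
    face_pixels: iterable of brightness values (0..255) inside the face area.
    """
    pixels = list(face_pixels)
    current = sum(pixels) / len(pixels)
    # Doubling the exposure roughly doubles the recorded brightness (linear response assumed).
    return math.log2(max(target_mean, 1.0) / max(current, 1.0))

# e.g. a face averaging 80 with a registered preference of 120 gives about +0.58 EV
```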
  • In the image pickup device according to this second aspect of the invention, the information required for adjusting the optimum image pickup conditions and the feature amounts of the face image of a particular person to be an object are registered beforehand in the registration part. Thus, an image is picked up by automatically adjusting the image pickup conditions to acquire a face image liked by the particular object. [0036]
  • The image pickup devices according to the first and second aspects of the invention described above can be configured as a camera for generating a digital still image (hereinafter simply referred to as the “digital camera”). In the device of this type, the image sensor is driven at the timing when the photographer is considered to have determined an image pickup area such as when the shutter button is pressed halfway or the line-of-sight sensor has detected that the line of sight of the photographer has been set in a predetermined direction, and after adjusting the image pickup conditions using the image thus obtained, the image sensor is desirably driven again. In this case, the image picked up under the adjusted image pickup conditions is stored in the memory or the storage medium. [0037]
  • As an alternative, a high-speed processing CPU is incorporated or a plurality of CPUs are mounted in the control unit to execute the process of extracting or inferring the face image in real time, and the image sensor is driven continuously. Then, the image pickup device is rendered to function as a camera device for generating a dynamic image (hereinafter referred to as the “digital video camera”). In this case, the images picked up under the adjusted image pickup conditions are desirably accumulated sequentially in the memory or the storage medium while continuously executing the face image extraction process, the inference process and the process of adjusting the image pickup conditions. [0038]
  • Next, various embodiments shared by the image pickup devices according to the first and second aspects of the invention are explained. First, according to one preferred embodiment, the information processing part includes a part for generating link information containing the position of extraction of the face image by the face image extraction part and the inference information obtained by the inference process of the inference part, and the link information is stored in the memory or the storage medium together with the image picked up by the image pickup unit. [0039]
  • According to this embodiment, the position of extracting the face image and the result of the inference process are stored as linked with the image, and therefore even after the image is picked up, the face image can be corrected in detail using the link information. The face image after being picked up is corrected desirably using an external device such as a personal computer. The image pickup device, therefore, preferably comprises an interface circuit or an output terminal corresponding to the external device so as to output the stored image and the link information to the external device. In the case where the removable storage medium described above is used, however, the correcting process can be executed by the external device without using the output function. [0040]
  • According to this embodiment, the picked-up image and the link information are retrieved into the external device for more detailed correction. The image thus processed is printed, displayed on the monitor or distributed through a communication line such as the Internet. [0041]
  • In the image pickup device according to the second aspect of the invention, the link information can contain the object identification information. After executing the correcting operation suitable for the object in the external device, therefore, the specifics of the correction are registered in correspondence with each identification information. Thus, the face image of the same object can be corrected later in the same manner as in the preceding correction using the registered information. Also, since the image is linked with the personal identification information, the picked-up images can be readily put in order or printed as photo copies. [0042]
  • Further, the link information can include the additional information such as the image pickup date and the image pickup conditions that have been set. [0043]
  • An image pickup device according to another embodiment comprises a distance recognition part for recognizing the distance to an object, wherein the face image extraction part includes a part for specifying the size of the face image to be extracted, based on the result of recognition by the distance recognition part. [0044]
  • The distance recognition part can be a distance measuring sensor utilizing reflected light. The face image extraction part, after estimating the size of the object face based on the distance recognized by the distance recognition part, sets a search area of a size corresponding to the estimated face size on the image to be processed, or adjusts the size of the provisionally-acquired image based on the estimation result without changing the size of the search area. In this way, only a face image corresponding to the estimated size is searched for. In either case, the time required for the face image extraction process is shortened according to the size of the face image to be extracted, thereby realizing high-speed processing corresponding to the image pickup operation. [0045]
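  • A rough sketch of how the expected face size might be predicted from the measured distance under a simple pinhole-camera assumption (the focal length, pixel pitch and physical face width below are illustrative values, not taken from the specification):

```python
def expected_face_width_px(distance_mm: float,
                           focal_length_mm: float = 7.0,
                           pixel_pitch_mm: float = 0.003,
                           real_face_width_mm: float = 160.0) -> int:
    """Predict the approximate width of the face on the sensor, in pixels,
    from the distance reported by the length measuring sensor (pinhole model)."""
    width_on_sensor_mm = real_face_width_mm * focal_length_mm / distance_mm
    return max(1, round(width_on_sensor_mm / pixel_pitch_mm))

# The search area can then be sized around this prediction, e.g.
# search_size = int(expected_face_width_px(2000.0) * 1.2)   # object about 2 m away
```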
  • In another method of shortening the time of extracting the face image, the direction of the line of sight of the photographer is specified by the line-of-sight sensor or the like and the search is made by limiting the face image detection range from the same direction. Also, in view of the fact that the image of the same person is picked up very often, an image containing the features similar to those of the face image obtained in the preceding extraction process can be searched for preferentially within a predetermined time elapsed from the preceding image pickup process. [0046]
  • As described above, by shortening the time required for face image extraction, the face image extraction process can be completed in a short time in accordance with the image pickup operation of the photographer. [0047]
  • In an image pickup device according to still another embodiment of the invention, the control unit includes a focal length adjusting part for adjusting the focal length of the lens of the image pickup unit in accordance with the extraction result of the face image extraction part. [0048]
  • A normal auto-focus image pickup device develops an inconvenience in which the face of the object becomes unclear, for example, as the result of an image being focused at the center of the image or before the object. With the image pickup device according to this embodiment, in contrast, the focal length is adjusted in such a manner as to focus the face of the object based on the result of face image extraction, and therefore the face image of the object can be clearly picked up. Especially in the case where images are picked up continuously or the image pickup device is configured as a video camera, the repetitive execution of the face extraction process and the focal length adjusting process described above makes adjustment possible whereby the face of the object is always focused. [0049]
  • Also, by executing the inference process for the face image thus focused, a more accurate inference result is obtained. [0050]
  • An image pickup device according to yet another embodiment of the invention comprises a first operating unit for designating the face image extraction range, wherein the face image extraction part includes a part for limiting the face image extraction area of the image picked up by the image pickup unit in accordance with the designating operation of the first operating unit. [0051]
  • The first operating unit may be arranged at an appropriate position on the body of the image pickup device. Preferably, however, the image pickup device comprises a display unit for displaying the image picked up by the image pickup unit and a user interface for designating the face image extraction range on the image displayed on the display unit. The first operating unit may be configured integrally with second and third operating units described later. [0052]
  • According to this embodiment, the face image extraction process is limited to the area designated by the photographer, and therefore the time required for the extraction process is shortened. [0053]
  • An image pickup device according to a further embodiment of the invention comprises a second operating unit for designating the deletion of a predetermined one of the face image extraction results, wherein the face image extraction part includes a part for updating the face image extraction result in accordance with the designating operation of the second operating unit. [0054]
  • The second operating unit preferably includes a display unit for displaying the face image extraction result and a user interface for receiving the designation of the deletion on the display screen of the display unit. Incidentally, the face image extraction result can be displayed by arranging the pointer at the face image extraction point on the provisionally-acquired image. More preferably, a marking is displayed to clarify the position and size of the face image (for example, by setting a frame image including the face image). [0055]
  • According to this embodiment, in the case where a face image of other than the person constituting an object is included, the extraction result of the face image of the particular person other than the object is deleted or otherwise the face image not required for the subsequent inference process or link information production can be deleted, and therefore the detailed process is concentrated on the object. [0056]
  • Further, the second operating unit may have the function of correcting the extraction result as well as the function of designating the deletion of the face image extraction result. Thus, the photographer can confirm whether the face image has been correctly extracted or not, and in the case where an error exists, can correctly set again the position and size of the face image by the correcting operation. This can avoid an otherwise possible case in which an erroneous inference process is executed due to an error of the face image extraction thereby leading to an error in the subsequent process. [0057]
  • An image pickup device according to a still further embodiment of the invention comprises a third operating unit for operation to correct the inference information obtained by the inference process of the inference part, wherein the information processing part includes a part for correcting the inference information in accordance with the correcting operation of the third operating unit. The third operating unit may include a display unit for displaying the image picked up and the inference information for the image and a user interface for receiving the operation of correcting the inference information on the display screen of the display unit. [0058]
  • According to this embodiment, any error which may occur in the inference process is corrected as right information by the correcting operation. The image pickup conditions not suitable for the object can thus be prevented from being set by an erroneous inference process. [0059]
  • The operation of correcting the inference information may include the operation of adding new information not inferred, as well as the operation of correcting an error of the inference result. The information designated for addition may include correcting processes that are difficult to execute at the time of the image pickup operation (processes requiring a considerable processing time, such as smoothing a contour line, extracting and erasing defects of the face surface, or finely correcting the color tone and brightness of the skin). By doing so, not only can errors of the inference result be corrected, but link information including the detailed contents of image correction can also be produced in accordance with the preference of the photographer or the object person. Thus, the image picked up can be corrected in more detail. [0060]
  • An image pickup device according to a yet further embodiment of the invention comprises a fourth operating unit for correcting the image pickup conditions adjusted by the image pickup conditions adjusting part, wherein the image pickup conditions adjusting part includes a part for readjusting the image pickup conditions in accordance with the correcting operation by the fourth operating unit. [0061]
  • The fourth operating unit is for correcting the image pickup conditions adjusted by the image pickup conditions adjusting part, and this function may be added to the normal operating unit for adjusting the stop and the focal length. According to this embodiment, the image pickup conditions adjusted based on the inference information can be finely adjusted, so that an image can be picked up under more appropriate image pickup conditions. [0062]
  • In an image pickup device according to another embodiment of the invention, the information processing part includes a part for determining the direction of the face of the object in the image stored in the memory or the storage medium based on the result of extraction by the face image extraction part, and a part for rotating the image in such a manner that the determined face direction, if different from a predetermined reference direction, comes to agree with the predetermined reference direction. [0063]
  • According to this embodiment, whatever the orientation in which the photographer holds the image pickup device, the face direction in the stored image can be aligned with the reference direction by rotational correction. As a result, the images picked up can be displayed with the objects kept in the same orientation. Also, in the case where a plurality of images are printed on a single sheet of paper, no correcting operation is required to align the orientation of the objects. [0064]
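  • A minimal sketch of the rotational correction, assuming the image is held as an array and only the usual portrait/landscape orientations (multiples of 90 degrees) need to be corrected:

```python
import numpy as np

def rotate_to_reference(image: np.ndarray, face_angle_deg: float,
                        reference_deg: float = 90.0) -> np.ndarray:
    """
    Rotate the stored image so that the detected face direction agrees with the
    reference direction.  Only multiples of 90 degrees are applied, which covers
    the usual portrait/landscape camera orientations.
    image: H x W x 3 array; face_angle_deg: tilt angle taken from the link information.
    """
    diff = reference_deg - face_angle_deg
    quarter_turns = int(round(diff / 90.0)) % 4
    return np.rot90(image, k=quarter_turns)
```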
  • Further, the image pickup device according to the first aspect of the invention may be embodied as a device having the configuration described below. [0065]
  • First, an image pickup device according to a first embodiment comprises a feature amount storage part for storing the feature amount of the face image extracted, wherein the face image extraction part includes a specified image extraction part for extracting an image area having the feature amount of a specified face image stored in the feature amount storage part from the image picked up by the image pickup unit. [0066]
  • According to this embodiment, the face image extraction process can be executed by extracting the image area including the same feature amount as the specified face image extracted in the past. In picking up images of the same person as an object successively, therefore, the face image of the object can be extracted within a short time. [0067]
  • The feature amount storage part is not necessarily required to store the feature amounts of the face images of all the objects picked up in the past, but may store only the feature amounts acquired within a predetermined preceding period or those acquired for a predetermined number of the latest images. [0068]
  • An image pickup device according to still another embodiment of the invention comprises an object storage part for storing the feature amount of the face image of a specified object, wherein the information processing part is so configured that the feature amount of the face image extracted by the face image extraction part is compared with the feature amount stored in the object storage part, and in the case where the comparison shows that the extracted face image is associated with the specified object, the link information including the inference information obtained by the inference process of the inference part and the information for identifying the specified object is produced and stored in the memory or the storage medium together with the image picked up by the image pickup unit. [0069]
  • According to this embodiment, upon extraction of the face image of a specified person, the link information including the information for identifying the same person (such as the name and the ID number) can be produced and stored together with the image thereof. After picking up the image, therefore, the person to be corrected is specified and corrected based on the ID information of the person. Further, special points of correction for the same person can be designated and the detailed correction can be carried out. Also, since the image is linked with the ID information of individual persons, the images picked up can be put in order and the photos printed easily. [0070]
  • Incidentally, the ID information may be input before or after picking up the image. [0071]
  • The image pickup device according to this embodiment preferably comprises an operating unit for giving an instruction to register the face image extracted by the image extraction part and inputting the personal ID information, wherein the information processing part includes a part for registering in the object storage part the feature amount of the face image designated by the operating unit in correspondence with the personal ID information input. [0072]
  • By doing so, a specified one of the persons whose images have been picked up can be designated and the feature amount of the face image thereof can be registered together with the personal ID information. Subsequently, each time the image of the registered person is picked up, the personal ID information can be stored as linked to the image of the particular person. The face image to be registered can be designated not only immediately after picking up the image thereof but also by retrieving the image stored in the memory or the storage medium. [0073]
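  • As a sketch of the object storage part and the subsequent linking (all names and the matching predicate are hypothetical), the registration and look-up could be organised as follows:

```python
class ObjectStore:
    """Sketch of the object storage part: face feature amounts keyed by personal ID."""

    def __init__(self):
        self._store = {}   # person_id -> feature amount (e.g. a tuple of numbers)

    def register(self, person_id: str, features) -> None:
        """Called when the photographer designates a face and inputs the ID information."""
        self._store[person_id] = features

    def find(self, features, same=lambda a, b: a == b):
        """Return the ID of a registered person whose features match, or None."""
        for person_id, stored in self._store.items():
            if same(stored, features):
                return person_id
        return None

# When storing an image, the information processing part could then do something like:
#   pid = store.find(extracted_features)
#   link_info = {"inference": inference_result, **({"person_id": pid} if pid else {})}
```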
  • In a preferred embodiment of the image pickup device according to the second aspect of the invention, the control unit includes a part for receiving the input of the information required for adjusting the optimum image pickup conditions and the ID information of the object in accordance with the image pickup operation of a predetermined object for registration in the registration part and storing the input information in the registration part together with the face image of the object. [0074]
  • In the image pickup device according to this embodiment, for example, an image picked up by the image pickup unit is displayed, the operation for adjusting the image pickup conditions is performed, and the input of the ID information of the object is received. In this case, the parameter indicating the feature amount of the face image contained in the image after the adjusting operation can be registered as the information required for adjusting the optimum image pickup conditions. [0075]
  • In the case where the image pickup device has the function of correcting the picked-up image by the correcting operation, the input processing of the information can be executed after picking up the image as well as at the time of picking up the image. This operation of processing the input includes the operation of giving an instruction to correct the face image picked up and the operation of inputting the ID information, so that the parameter indicating the feature amount of the face image corrected is registered as the information required for adjusting the optimum image pickup conditions. [0076]
  • By inputting the information for registration in the registration part using the image picked up in this way, the object itself can execute the input process, and therefore the image can be picked up by adjusting the image pickup conditions in accordance with the preference of the object itself. [0077]
  • The image pickup device having the configurations described above can be embodied not only as an ordinary digital camera or digital video camera but also as a portable communication terminal (a mobile telephone, etc.) having a camera function. Also, the image pickup device may be combined with a printer as a photo seal vending machine. [0078]
  • According to still another aspect of the invention, there is provided a program executed by the image pickup device or a method executed by the image pickup device. [0079]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a block diagram of a configuration of a digital camera according to this invention. [0080]
  • FIG. 2 shows the function set in the CPU of the digital camera of FIG. 1. [0081]
  • FIG. 3 shows an example of display of the face area extraction result. [0082]
  • FIG. 4 shows an example configuration of the link information. [0083]
  • FIG. 5 shows an example parameter for setting the face area. [0084]
  • FIG. 6 shows a flowchart of the steps of controlling the process of picking up an image of a person. [0085]
  • FIG. 7 shows a histogram of the brightness distribution in the face area for each difference in race and lighting conditions. [0086]
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 shows a configuration of a digital camera according to this invention. [0087]
  • A digital camera 1 mainly comprises an image pickup unit 4 including a lens unit 2 and a CCD 3, and a control unit 7 for retrieving the image from the CCD 3 and generating a digital image in the final form. The digital camera 1 further has built therein peripheral circuits such as an A/D converter 8, an image processing circuit 9, a lens adjusting unit 10, a shutter control unit 11, a strobe control unit 12, a length measuring sensor 13, a line-of-sight sensor 14, a USB interface 15 and an input/output interface 17. A memory card 16 is adapted to be removably connected to the control unit 7. [0088]
  • The control unit 7 is configured of a CPU 5 and a nonvolatile memory 6 such as a flash memory (hereinafter referred to simply as the memory 6). The memory 6 stores a program required for the operation of the CPU 5, a data base for storing various reference tables and templates used in the processing, and the image data output from the image processing circuit 9 and the CPU 5. [0089]
  • The lens adjusting unit 10 includes a focal point adjusting mechanism and a stop adjusting mechanism for the lens unit 2. The shutter control unit 11 supplies drive pulses to the CCD 3 to store the charge, and the strobe control unit 12 adjusts the light emission timing and the light amount of a strobe, not shown. The focal point, the stop adjust value, the interval of the drive pulses applied to the CCD 3 and the light amount of the strobe are adjusted in accordance with control signals from the CPU 5. The A/D converter 8 retrieves the outputs sequentially from the pixels of the CCD 3 and converts them into digital signals for the R, G and B color components. The image processing circuit 9 includes a set of shift registers and flip-flops, and upon receipt of the output from the A/D converter 8, generates full-color image data having combined intensities of R, G and B for each pixel. The image data thus generated are stored in the memory 6 and subjected to the predetermined processing by the CPU 5. [0090]
  • The length measuring sensor 13 measures the distance to an object and is either of the passive type using triangulation or of the active type using infrared light. The line-of-sight sensor 14 radiates infrared light onto the eyeballs of the photographer and detects the direction of the photographer's line of sight by measurement using the reflected light image. The measurement results of these sensors are input to the CPU 5 and used to generate a preview image before the main image pickup process and to set the image pickup parameters for the main image pickup process. [0091]
  • The USB interface 15 is an interface conforming with the Universal Serial Bus standard and is used for transferring the images stored in the memory 6 or the memory card 16 to an external device such as a personal computer. [0092]
  • The input/output interface 17 is connected with the operating unit 18 and the display unit 19. The operating unit 18 and the display unit 19 are arranged on the body surface of the digital camera 1. The display unit 19 is used to display an image in process or a screen for inputting information. The operating unit 18 includes the operating keys for inputting information and a shutter button. [0093]
  • In this configuration, the functions shown in FIG. 2 are set in the CPU 5 by the program stored in the memory 6. These functions permit the digital camera 1 according to this embodiment to estimate the race, age and sex of the object person in a predetermined image pickup area when picking up an image of a person as an object, and to execute the main image pickup process with the image pickup parameters determined to suit the result of the estimation. [0094]
  • According to this embodiment, the adjust values including the shutter speed, stop and focal length and whether the strobe is used or not are always determined as image pickup parameters. Further, in the case where the strobe is used, the luminous intensity thereof is set. [0095]
  • In FIG. 2, the preview image acquisition unit 51 generates a preview image of the image pickup area by controlling the lens adjusting unit 10 and the shutter control unit 11 (the strobe is desirably not used when acquiring the preview image). A face detection processing unit 52 detects a face image from the preview image acquired by the preview image acquisition unit 51. A face area setting unit 53 sets an area of predetermined size including the face image as a processing area (hereinafter referred to as "the face area") in accordance with the result of detection of the face image, thereby producing the area setting parameters described later. [0096]
  • An inference processing unit 54 infers the race, age and sex of the object based on the feature amounts in the face area thus set. A parameter determining unit 55 determines the image pickup parameters suitable for the result of the inference process. According to this embodiment, the presence or absence of back light is inferred in addition to the race, age and sex described above. The results of inference of these four factors and the distance to the object are combined variously in advance, and for each combination a table with the optimum image pickup parameters is produced and registered in the memory 6. In the process of determining the image pickup parameters, the table is accessed with the inference result and the measurement of the length measuring sensor 13 to extract the optimum image pickup parameters. [0097]
  • A main image pickup processing unit 56 controls the lens adjusting unit 10, the shutter control unit 11 and the strobe control unit 12 based on the image pickup parameters determined by the parameter determining unit 55, thereby executing the main image pickup process (using the strobe, if required). The image data storage unit 57 generates an information link image in which the image obtained in the main image pickup process is linked to the processing results of the face area setting unit 53 and the inference processing unit 54, and stores the information link image in the memory 6 or the memory card 16. The image data output unit 58 reads the stored information link images as appropriate and outputs them externally through the USB interface 15. [0098]
  • The user interface control unit 59 is used to check the face area setting result and the link information of the information link image, to correct an error, if any, and to input additional information. According to this embodiment, a preview image including the face area setting result is displayed on the display unit 19 at the end of the face area setting process. FIG. 3 shows an example display, in which frame images 21, 22 corresponding to the boundary lines of the face areas are displayed on the face images of the persons in the preview image 20. [0099]
  • With this display, the user interface control unit 59 sets an operating screen for performing the operation of correcting the setting of a face area and the operation of setting a new face area, accepts these operations and outputs the contents of each operation to the face area setting unit 53. The face area setting unit 53, in accordance with the contents of each operation, corrects the position and size of the face area thus set, deletes an unnecessary face area or sets a new face area. [0100]
  • Further, the user interface control unit 59, when the information link image is stored in the image data storage unit 57, displays the image and the link information contained in the information link image. At the same time, an operating screen for correcting the link information or inputting additional information is set to accept various corrections and inputs. The contents of the correction and the input information are delivered to the image data storage unit 57. The image data storage unit 57 corrects the corresponding information based on the correction of the information link image, and adds the additionally input information to the existing link information. [0101]
  • The correction information is mainly for correcting errors of the inference process. The additional information is input by the operation of designating an optional correction that cannot be covered by the standard correction based on the inference result, or by the operation of inputting personal information such as the name of the object (surname, given name or nickname). [0102]
  • Further, the digital camera 1 according to this embodiment has the function of correcting the picked-up image, such as the brightness or the contour, in accordance with the designating operation of the user. When this correcting operation is performed in the device, the image data storage unit 57 executes the process of adding the correction-related information to the link information. [0103]
  • FIG. 4 shows an example of a configuration of the link information in the information link image. [0104]
  • In FIG. 4, the uppermost column corresponds to the index information in the link information and has set therein the image number, the image pickup date and the image pickup mode. The image number is a serial number assigned in the order of generation of images by the image pickup operation. The image pickup mode has a plurality of modes including the portrait image pickup mode and the landscape image pickup mode. Only in the case where the portrait image pickup mode is selected are the face detection process and the inference process executed to set specific link information. [0105]
  • The link information contains the information indicating the result of setting the face area, such as the coordinates (xp, yp) of the face image detection position, the size r of the face image and the face tilt angle θ, and the information indicating the inference result, such as the race, age, sex and the presence or absence of back light. Further, the link information contains additional information including personal information such as the name of the object, as well as the items of the selected optional corrections, whether a correction has been made in the device, and the contents of the correction. [0106]
  • The link information in store may contain the image pickup parameters used, in addition to those shown. Also, with regard to the image picked up in other than the portrait image pickup mode, the information link image is generated which has only the index information with the image number, the image pickup date and the image pickup mode. [0107]
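  • As a sketch of how such link information could be serialised alongside the image file (the field names follow FIG. 4, but the JSON layout and the example values are assumptions for illustration only):

```python
import json

link_info = {
    "index": {"image_number": 123, "date": "2004-01-16", "mode": "portrait"},
    "faces": [
        {
            "position": {"x": 812, "y": 604},          # detection position (xp, yp)
            "size": 96,                                # face size r
            "tilt_deg": 78.0,                          # face tilt angle theta
            "inference": {"race": "yellow", "age": "20s",
                          "sex": "female", "backlit": False},
            "person_name": "Hanako",                   # additional personal information
            "option_corrections": ["smooth_contour"],  # selected optional corrections
            "corrected_in_camera": False,              # whether corrected in the device
        }
    ],
}

# Stored next to the picked-up image so that an editing device can read it back.
with open("IMG_0123.json", "w", encoding="utf-8") as f:
    json.dump(link_info, f, indent=2)
```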
  • The detection position (xp, yp), the face size r and the face tilt angle θ contained in the link information are used as parameters for setting a face area suited to the particular face image in the case where the same face image is corrected in detail after the main image pickup process. [0108]
  • FIG. 5 shows a specific example of each parameter. In the case shown, a feature point P corresponding to the highest point of the nose is extracted from the feature points of the face image, and the coordinates (xp, yp) of this point P are used as the face detection position. Also, using this point P as an origin, the boundary between the forehead and the hair is searched for in each direction, and a point Q associated with the shortest distance from point P is determined from the feature points corresponding to the boundary thus found. The distance between point Q and point P is set as the face size r. Further, a vector C directed from point P to point Q is set, and the angle that the vector C forms with the horizontal direction of the image (the x axis) is measured as the face tilt angle θ. [0109]
  • In FIG. 5, character U designates an example of the face area set by each parameter. The size of this face area U is determined based on the face size r, and set in such a manner that the center thereof corresponds to point P and the main axis is tilted by θ with respect to the x axis. [0110]
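  • Given the two feature points P and Q, the parameters of FIG. 5 reduce to a distance and an angle. A minimal sketch, with the sign convention of the angle left as an assumption:

```python
import math

def face_area_parameters(p, q):
    """
    p: feature point at the highest point of the nose (xp, yp), used as the detection position.
    q: the forehead/hair boundary point closest to P.
    Returns (detection position, face size r, tilt angle theta in degrees).
    Note: with image coordinates whose y axis points downward, the sign of theta
    follows that convention.
    """
    dx, dy = q[0] - p[0], q[1] - p[1]
    r = math.hypot(dx, dy)                     # face size: distance from P to Q
    theta = math.degrees(math.atan2(dy, dx))   # angle of vector C = PQ against the x axis
    return p, r, theta

# e.g. face_area_parameters((100, 120), (100, 60)) -> ((100, 120), 60.0, -90.0)
```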
  • The link information shown in FIG. 4 can be stored in the memory card 16 or output to an external device by the function of the image data output unit 58. Once the information link image containing this link information is fetched into a personal computer or the like, therefore, the face area to be processed can be set automatically from the parameters (xp, yp), r and θ. As a result, a detailed and proper correction process can be executed based on the inference result and the optional correction items. [0111]
  • FIG. 6 shows the steps of the control operation performed by the CPU 5 in accordance with the operation of taking a photo in the portrait image pickup mode. [0112]
  • This process is started at the time point when the shutter button is pressed halfway and the line of sight of the photographer is set by the measurement of the line-of-sight sensor 14. First, in step ST1, the CCD 3 is driven by setting the appropriate image pickup conditions based on the measurements of the line-of-sight sensor 14 and the length measuring sensor 13, and a preview image is generated from the output of the CCD 3. The preview image thus generated is stored in the working area of the memory 6, and the process proceeds to step ST2 for executing the face detection process. [0113]
  • In the face detection process, a search area of predetermined size is scanned on the preview image thereby to search for the feature points of the face image. According to this embodiment, a table indicating the standard face image size corresponding to each distance to the object is set in the memory 6. By accessing the table with the measurement of the length measuring sensor 13, the approximate size of the face image appearing on the preview image is predicted, and in accordance with the particular size, the size of the search area is adjusted. Also, based on the measurement of the line-of-sight sensor 14, the scanning of the search area is confined to a predetermined range around the point where the photographer has set the line of sight. At the same time, using the method disclosed in Patent Reference 4, the highly accurate face detection process is executed within a short time. [0114]
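  • A schematic sketch of such a confined scan (the detector call is a placeholder for the actual feature-point based detection, and the window size would come from the distance table described above; all names are illustrative):

```python
def scan_for_faces(image, window: int, gaze_xy, radius: int, is_face, step: int = 8):
    """
    Slide a square search window of the predicted size over the preview image,
    but only within `radius` pixels of the point where the photographer set the
    line of sight.  `is_face(patch)` stands in for the actual feature-point detector.
    image: list of pixel rows (or any 2-D indexable structure).
    """
    h, w = len(image), len(image[0])
    gx, gy = gaze_xy
    hits = []
    for y in range(max(0, gy - radius), min(h - window, gy + radius), step):
        for x in range(max(0, gx - radius), min(w - window, gx + radius), step):
            patch = [row[x:x + window] for row in image[y:y + window]]
            if is_face(patch):
                hits.append((x, y, window))
    return hits
```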
  • In step ST3, the various parameters shown in FIG. 5 are extracted for the face image detected, and based on these parameters, the face area U is set. Further, in step ST4, the result of setting the face area is displayed as a frame image in the preview image on the display unit 19. The image thus displayed is corrected by the photographer, and in accordance with this operation, the parameters for setting the face area are corrected (ST5, ST6). [0115]
  • The correcting operation in step ST5 includes the operation of changing the position and size of the set face area, the operation of deleting the face area and the operation of setting a new face area. The process executed in step ST6, on the other hand, includes the process of changing the values of the parameters for setting the face area, the process of deleting the parameters and the process of setting new parameters.
  • When the photographer performs the final determining operation after correcting the face area, the process proceeds to step ST7 to execute various inference processes for the face area finally determined. In the case where the photographer performs the final determining operation immediately, without correcting the preview image on display in step ST4, the answer in step ST5 is NO, and the process proceeds to step ST7 for executing the inference process for the face area set in step ST3.
  • In step ST7, the race, age, sex and the presence or absence of back light are estimated for the set face area. The race estimation process can be executed based on the cited reference Gregory Shakhnarovich, Paul A. Viola, Baback Moghaddam: "A Unified Learning Framework for Real Time Face Detection and Classification". According to this embodiment, however, the race and the presence or absence of back light are estimated at the same time using the brightness distribution in the face area, in order to shorten the processing time.
  • FIG. 7 shows examples of histograms for each of the color data R, G and B and the lightness L (the arithmetic average of R, G and B) for three different combinations of objects and lighting environments. Each histogram is drawn on a gradation scale, with brightness increasing rightward on the page.
  • FIG. 7(1) is a histogram representing a case in which the image of a person of the yellow race is picked up in a proper lighting environment. In this histogram, the distribution of each color component is shifted comparatively toward the higher brightness side, and the red component in particular is emphasized.
  • FIG. 7(2) is a histogram representing a case in which the image of the same person as in FIG. 7(1) is picked up in back light. In this histogram, the appearance ratio of each color component is considerably lower than in FIG. 7(1), and the distribution is concentrated on the dark side.
  • FIG. 7(3) is a histogram representing a case in which the image of a person of the black race is picked up in a proper lighting environment. In this histogram, the distribution has a peak on both the dark and bright sides (the dark side is considered to correspond to the skin, and the bright side to the eyes and teeth).
  • According to this embodiment, a template brightness histogram is prepared for each race for a plurality of image pickup environments having different lighting conditions, and the histogram extracted for the face area to be processed is collated with each template. In this way, the race and the presence or absence of back light are estimated at the same time. If the brightness distribution of a local area such as the eyes or mouth is also extracted and used for the determination in addition to that of the whole face area, a more accurate estimation result can be obtained.
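The histogram collation could be sketched roughly as below; the bin count, the use of histogram intersection as the matching score and the label keys are illustrative assumptions, not the embodiment's actual implementation.

```python
import numpy as np

def brightness_histograms(face_rgb, bins=32):
    """Normalised histograms of R, G, B and lightness L for a face area
    (face_rgb is an H x W x 3 array with values in 0..255)."""
    face = face_rgb.reshape(-1, 3).astype(float)
    lightness = face.mean(axis=1)
    channels = [face[:, 0], face[:, 1], face[:, 2], lightness]
    hists = [np.histogram(c, bins=bins, range=(0, 255), density=True)[0]
             for c in channels]
    return np.concatenate(hists)

def estimate_race_and_backlight(face_rgb, templates):
    """templates: {(race, backlight_flag): template_histogram}, prepared in
    advance per image pickup environment; returns the best-matching label."""
    h = brightness_histograms(face_rgb)
    # Histogram intersection: a larger value means a closer match.
    scores = {label: np.minimum(h, t).sum() for label, t in templates.items()}
    return max(scores, key=scores.get)
```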
  • Incidentally, the age and sex are estimated, as described in the cited reference Kiyoshi Hosoi, Erina Takigawa and Masato Kawade: "Sex and Age Estimation System by Gabor Wavelet Transform and Support Vector Machine", by a method in which the feature amounts at the feature points of each facial organ are applied to an estimation system called a support vector machine. Nevertheless, the invention is not necessarily limited to this method.
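A toy sketch of this estimation step is shown below, using scikit-learn's SVC as the support vector machine. The random arrays are stand-ins for Gabor-wavelet feature vectors and training labels; the real feature extraction and training data follow the cited reference and are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in data: rows play the role of Gabor-wavelet feature vectors
# sampled at the facial feature points; labels encode sex and an age band.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 40))
y_sex = rng.integers(0, 2, size=200)     # 0 = female, 1 = male
y_age = rng.integers(0, 5, size=200)     # age bands, e.g. teens .. fifties

sex_clf = SVC(kernel="rbf").fit(X_train, y_sex)
age_clf = SVC(kernel="rbf").fit(X_train, y_age)

def estimate_age_and_sex(gabor_features):
    """Return (age_band, sex) for one face's Gabor feature vector."""
    f = np.asarray(gabor_features, dtype=float).reshape(1, -1)
    return int(age_clf.predict(f)[0]), int(sex_clf.predict(f)[0])
```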
  • Returning to FIG. 6, at the end of the series of inference processes, the set table in the memory 6 is accessed in step ST8 with the inference result and the measurement of the length measuring sensor 13, to determine the image pickup parameters suitable for the inference result and the distance to the object.
  • Among the image pickup parameters, the focal length of the lens 2 is determined also taking the face image extraction result into consideration. In this way, the face of the object can be brought exactly into focus.
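The parameter lookup in step ST8 might be sketched as below. The table contents, the key scheme and the direct use of the measured distance as the focus distance are placeholders, not the embodiment's actual settings.

```python
# Hypothetical set table: image pickup parameters keyed by (race, backlight_flag).
PICKUP_TABLE = {
    ("yellow", False): {"exposure_bias": 0.0, "white_balance": "daylight", "strobe": False},
    ("yellow", True):  {"exposure_bias": 0.7, "white_balance": "daylight", "strobe": True},
    ("black",  False): {"exposure_bias": 0.3, "white_balance": "daylight", "strobe": False},
}

def pickup_parameters(inference, object_distance_m):
    """inference: (race, backlight_flag) tuple from the inference step."""
    params = dict(PICKUP_TABLE.get(inference, {"exposure_bias": 0.0,
                                               "white_balance": "auto",
                                               "strobe": False}))
    # The focal length is chosen so that the detected face is in focus; here
    # the measured distance is simply passed through as the focus distance.
    params["focus_distance_m"] = object_distance_m
    return params
```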
  • In step ST9, a marker indicating that the image pickup operation is possible is displayed on the display unit 19. In accordance with this display, the photographer operates the shutter button. The answer in step ST10 turns YES, and the main image pickup process is executed using the set image pickup parameters in step ST11.
  • After that, the link information having the configuration shown in FIG. 4 is generated in step ST12. The link information at this stage does not yet contain information such as the name and the option correction; only the index information, the face area setting parameters and the inference result are set.
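The basic link information generated in step ST12 and extended in steps ST13 to ST15 could be modelled roughly as follows; the field names are hypothetical and merely mirror the items listed for FIG. 4 and the in-camera correction flags mentioned later.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple, List

@dataclass
class LinkInformation:
    # Set immediately after the main image pickup (step ST12).
    index: str                                   # index information of the image
    face_position: Tuple[int, int]               # (xp, yp) of point P
    face_size: float                             # r
    face_tilt_deg: float                         # theta
    inference: dict = field(default_factory=dict)    # race, age, sex, back light
    # Filled in later by correction or addition (steps ST13 to ST15).
    name: Optional[str] = None
    option_corrections: List[str] = field(default_factory=list)
    corrected_in_camera: bool = False
    correction_details: Optional[str] = None
```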
  • In step ST13, the image obtained in the main image pickup process and the link information are displayed on the display unit 19. In the case where the input operation for correcting or adding to the link information, or the operation of designating an image correction, is performed on the information on display, the answer in step ST14 turns YES, and the process proceeds to step ST15, where the link information is corrected or additional information is set, as the case may be, in accordance with the input operation.
  • In the case where the final determining operation is performed subsequently, the process proceeds to step ST16, where the final image link information is stored in the memory 6 or the memory card 16. In the case where the final determining operation is performed immediately, without correcting or adding to the information displayed in step ST13, the answer in step ST14 turns NO and the process proceeds to step ST16, so that the information link image having only the basic link information generated in step ST12 is stored.
  • In displaying or correcting the link information, each item of information may be displayed sequentially by a switching operation, with a correcting operation accepted at each display. In adding information, on the other hand, when the correction of an option is designated, a user interface presenting a menu of correction items is prepared so that an item can be selected. In this way, the additional information can be input and confirmed easily.
  • In the process described above, the information link image is displayed and the correction or additional input is accepted immediately after the image pickup operation. However, the timing of display and correction is not limited to this. After an image is picked up, for example, the information link image having the basic configuration may be generated and held temporarily, the image and the link information displayed in response to an image call operation at a predetermined time point, and the input operation for correction or addition accepted then. In this case, the link information stored in the preceding process is rewritten in accordance with the correction.
  • Further, in the above-mentioned process, the focal length of the lens 2 may be adjusted in accordance with the detection result at the end of the face image detection in step ST2 and the preview image retrieved again. By doing so, the subsequent face area setting process and inference process can be carried out on a face image in which the facial features of the object appear clearly, so that processing of higher accuracy can be executed.
  • In the process described above, the race, age and sex, which have a large effect on the manner in which the image of the object's face is picked up, are estimated, and in accordance with the result of estimation, the image pickup parameters are set to conduct the main image pickup process. Even in the case where images are picked up at the same place under the same lighting conditions, therefore, the image pickup conditions are varied from one object to another. Thus, a clear face image faithful to the real appearance of each object can be generated.
  • Next, the correcting process using the information link image generated in the steps described above is explained. This correcting process is executed by an editing device such as a personal computer which fetches the information link image through the memory card 16 or the USB interface 15. In the editing device, a correcting parameter or algorithm is set in advance for a plurality of classes of each of the items race, age and sex (the parameters and algorithms are hereinafter referred to as "the correction data"). The suitable correction data are selected based on the inference result contained in the link information of the image to be processed, and the process based on the correction data is executed. As a result, the face image can be corrected in different ways according to the differences in race, age and sex.
  • As an example, for female persons in their teens or twenties, reddish parts due to pimples or the like are detected and replaced by the same color as the surrounding skin, or the correction increases the whiteness of the skin. For male persons in their twenties or thirties, on the other hand, the skin color is corrected toward a sunburnt tone, while for female persons in their forties, parts with small wrinkles or freckles are detected and blended into the same state as the surrounding skin. In this way, the correction considered most desirable is carried out for persons of different age and sex.
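A rough sketch of how the editing device might select correction data from the inference result in the link information follows; the class keys and the routine names are placeholders chosen to match the examples above.

```python
# Hypothetical correction-data table keyed by (sex, age band); the routine
# names stand in for the actual correction parameters or algorithms.
CORRECTION_DATA = {
    ("female", "teens"):    "remove_reddish_spots_and_whiten",
    ("female", "twenties"): "remove_reddish_spots_and_whiten",
    ("male",   "twenties"): "sunburn_skin_tone",
    ("male",   "thirties"): "sunburn_skin_tone",
    ("female", "forties"):  "smooth_wrinkles_and_freckles",
}

def select_correction(inference):
    """inference: dict taken from the link information, e.g.
    {"sex": "female", "age_band": "twenties", "race": "yellow"}."""
    key = (inference.get("sex"), inference.get("age_band"))
    return CORRECTION_DATA.get(key, "no_correction")
```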
  • The correction data of this type can be freely changed, and therefore the contents thereof can be changed in accordance with the fashion or season.
  • In the case where a plurality of persons exist as objects of the digital camera 1, the inference process is executed for each object, after which either one of the sets of image pickup parameters corresponding to the respective inference results is selected or the parameters are averaged over the persons to conduct the main image pickup process. Also in this case, a detailed correction process may be executed for each object using the link information after the image pickup operation, thereby correcting errors due to differences in the inference results between the objects.
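For the multiple-person case, the selection or averaging of per-face parameters could look like the sketch below; the combination rules (averaging numeric entries, falling back to the first face otherwise) are assumptions made for illustration.

```python
def combine_parameters(per_person_params, strategy="average"):
    """Combine the image pickup parameter dicts determined for each detected face."""
    if strategy == "select_first":
        return per_person_params[0]
    combined = {}
    for key in per_person_params[0]:
        vals = [p[key] for p in per_person_params]
        numeric = all(isinstance(v, (int, float)) and not isinstance(v, bool)
                      for v in vals)
        # Average numeric settings; keep the first face's value otherwise.
        combined[key] = sum(vals) / len(vals) if numeric else vals[0]
    return combined
```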
  • Further, in the case where the link information contains the option correction items, the correction process can be carried out in accordance with each item. Thus, detailed correction can be conducted in accordance with the desires of the object and the photographer. Also, in the case where personal information such as the name is contained in the link information, a data base of correction information for each person can be constructed in the editing device. This data base makes it possible to correct the image of a registered person in the same way as before.
  • The link information shown in FIG. 4 also contains the information as to whether the image has been corrected by the camera and the information indicating the specifics of the correction. Based on this information, the editing device can be set not to duplicate the correction already carried out by the camera.
  • According to this embodiment, the four factors of race, age, sex and the presence or absence of back light are estimated for the detected face image. As an alternative, only some of the three factors other than back light may be estimated, while the remaining factors are added to the link information instead. In particular, if the main objects are limited to a predetermined race (the yellow race, for example), only the age and sex are estimated, and when a person of the black or white race is the object, information on that race is added to the link information.
  • Further, mechanisms and programs for executing the following processes can be built into the digital camera 1:
  • (1) The features of the face image detected in the preceding image pickup process are stored, so that an image area having the same features is searched for in the next image pickup process. Especially in the case where the time interval between image pickup processes is short, the image of the same person is likely to be picked up successively, and this method can therefore detect the face image in a short length of time. An application of this method to the face detection process disclosed in Patent Reference 4 described above can detect the face image of the same person with high accuracy. Even with the method of detecting the skin color area, the process can be executed at higher speed by treating an area having a skin color similar to the one detected in the preceding image pickup process as the object of detection.
  • (2) In the case where the strobe is used in the main image pickup process, the link information is made to contain information about the use of the strobe. By doing so, correction after the image pickup process is possible taking into account the lightness balance between the face image and the surrounding area.
  • (3) A data base containing the correspondence between the link information, including the personal information described above, and the result of extracting the feature points of the face image is stored in the memory 6. In this way, a person whose image has been picked up before can be identified by accessing the data base with the features of the extracted face image. Further, using the link information registered in the data base, the information link image can be set easily (a minimal sketch of such a data base appears after this list).
  • Also in the editing device, the features and the correction data of the face image of a person once corrected are stored in a data base. Thereafter, a person can be identified by comparing the feature point extraction result contained in newly input link information with the data base, and based on the correction data registered there, the correction suitable for that particular person can be carried out.
  • (4) A GPS, gyro or geophysical sensor is built in to detect the place of image pickup and the camera position, and the detection data are contained in the link information. This information can be printed with the image or distributed when the image is edited. Also, since the direction of the image pickup operation is known, the lighting conditions can be determined together with the time information; thus whether the image has been picked up in back light, and other conditions relevant to correction, can be determined. The images can also be put in order based on the image pickup positions.
  • (5) In addition to the sensors described in (4) above, a network communication function such as an Internet connection is built in, and the weather conditions at the place of image pickup are fetched from a meteorological server and used for setting the image pickup parameters. The weather conditions can also be contained in the link information and used for correction at the time of editing.
  • (6) As in (5) above, a network communication function is built in, and the acquired information link image is transmitted to an image editing server. In the image editing server, the transmitted image is corrected based on the link information, and the processed image is returned, or printed and sent out.
  • In the case where the functions described in (4), (5) and (6) above are provided, the digital camera 1 may be built into a portable telephone. The image pickup device can then be designed easily by using the communication functions already provided in the portable telephone.
  • (7) After picking up images of a given person a plurality of times, the image considered most suitable by that person is selected, and the feature amount of the face image in that image and the parameters defining the face color (gradation and lightness of R, G and B) are registered in the memory 6 together with the name of the person; at the same time, a mode for using the registered information is incorporated into the portrait image pickup mode. When this mode is selected, the face image is detected, and its feature amount is compared with the registered information to estimate the object. The image pickup conditions are then adjusted in such a manner that the color of the detected face image becomes as similar as possible to the registered face color of the estimated object (a minimal sketch of this adjustment appears after this list).
  • With this configuration, an image can be picked up with the image pickup conditions automatically set to correspond to the preference of each object. Therefore, a photo satisfactory to the object can be taken easily, and the utility of the digital camera is improved. Even when an image is picked up by this method, the object can be designated by the person's name, and the correction suitable for that particular object can be made in an external editing device, by generating the information link image having link information that includes the name of the object person and the image pickup conditions.
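For item (7), a minimal sketch of adjusting the image pickup conditions so that the detected face color approaches the registered face color is given below. The registration format, the person's name and the gain-based adjustment are assumptions for illustration.

```python
import numpy as np

# Hypothetical registration entry: mean R, G, B of the face in the photo the
# person judged most suitable, stored together with the person's name.
REGISTERED = {"Hanako": {"face_color": np.array([210.0, 170.0, 150.0])}}

def adjust_for_registered_color(name, detected_face_rgb):
    """Return per-channel gains that move the currently detected face color
    toward the registered face color of the estimated object."""
    target = REGISTERED[name]["face_color"]
    current = detected_face_rgb.reshape(-1, 3).astype(float).mean(axis=0)
    gains = target / np.maximum(current, 1.0)   # avoid division by zero
    # In an actual camera these gains would be mapped to white-balance and
    # exposure settings rather than applied to the pixels directly.
    return gains
```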
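For item (3), the person data base held in the memory 6 (and its counterpart in the editing device) could be sketched as a nearest-feature lookup; the feature representation, the distance threshold and the stored fields are assumptions, not the embodiment's actual design.

```python
import numpy as np

class PersonDatabase:
    """Maps face feature vectors to previously stored link information."""

    def __init__(self, threshold=0.5):
        self.entries = []          # list of (feature_vector, link_info_dict)
        self.threshold = threshold

    def register(self, features, link_info):
        self.entries.append((np.asarray(features, dtype=float), link_info))

    def identify(self, features):
        """Return the stored link information of the closest registered
        person, or None if no entry is close enough."""
        if not self.entries:
            return None
        f = np.asarray(features, dtype=float)
        dists = [np.linalg.norm(f - stored) for stored, _ in self.entries]
        i = int(np.argmin(dists))
        return self.entries[i][1] if dists[i] <= self.threshold else None
```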
  • It will thus be understood from the foregoing description that, according to this invention, an image can be picked up with detailed image pickup conditions set for the race, age and sex of the object, so that a clear face image faithful to the real appearance of each object can be generated. Also, by picking up an image with the image pickup conditions set to meet the preference of each object, an image satisfactory to that object can be generated easily. Further, the face image, once picked up, can easily be corrected in detail based on the link information in a way that meets the desire of each person.

Claims (26)

What is claimed is:
1. An image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in an internal memory or a predetermined storage medium,
wherein the control unit includes
a face image extraction part for extracting the face image contained in the image picked up by the image pickup unit,
an inference part for executing the process of inferring the attributes of a person constituting an object based on the feature amounts in an image area including the face image extracted,
an image pickup conditions adjusting part for adjusting the image pickup conditions for the image pickup unit based on the result of inference in the inference part, and
an information processing part for storing in selected one of the memory and the storage medium the image obtained under the image pickup conditions adjusted by the image pickup conditions adjusting part.
2. The image pickup device according to claim 1,
wherein the inference part includes an inference part for executing the inference of at least one of the race, age and sex as the attributes.
3. The image pickup device according to claim 1,
wherein the information processing part includes a part for producing the link information containing the position of the face image extracted by the face image extraction part and the inference information obtained by the inference process executed by the inference part, and
wherein the link information is stored in selected one of the memory and the storage medium together with the image picked up by the image pickup unit.
4. The image pickup device according to claim 1, further comprising a distance recognition part for recognizing the distance to an object,
wherein the face image extraction part includes a part for specifying the size of the face image to be extracted, based on the result of recognition by the distance recognition part.
5. The image pickup device according to claim 1,
wherein the control unit includes the focal length adjusting part for adjusting the focal length of a lens of the image pickup unit in accordance with the result of extraction by the face image extraction part.
6. The image pickup device according to claim 1, further comprising a first operating unit for designating the range of extracting a face image,
wherein the face image extraction part includes a part for limiting the face image extraction area in the image picked up by the image pickup unit in accordance with the designating operation of the first operating unit.
7. The image pickup device according to claim 1, further comprising a second operating unit for designating the deletion of the result of extracting a predetermined part of the face image extracted,
wherein the face image extraction part includes a part for updating the result of extracting the face image in accordance with the designating operation of the second operating unit.
8. The image pickup device according to claim 1, further comprising a third operating unit for performing the operation of correcting the inference information obtained by the inference process of the inference part,
wherein the information processing part includes a part for correcting the inference information in accordance with the correcting operation of the third operating unit.
9. The image pickup device according to claim 1, further comprising a fourth operating unit for correcting the image pickup conditions adjusted by the image pickup conditions adjusting part,
wherein the image pickup conditions adjusting part includes a part for readjusting the image pickup conditions in accordance with the correcting operation of the fourth operating unit.
10. The image pickup device according to claim 1,
wherein the information processing part includes a part for determining the direction of the face of an object in the image based on the result of extraction of the image stored in selected one of the memory and the storage medium by the face image extraction part, and a part for rotating the image in such a manner that the face direction conforms with a predetermined reference direction in the case where the determined face direction is different from the reference direction.
11. The image pickup device according to claim 1, further comprising a feature amount storage part for storing the feature amount of the face image already extracted,
wherein the face image extraction part includes a specified image extraction part for extracting an image area including the feature amount of the specified face image stored in the feature amount storage part from the image picked up by the image pickup unit.
12. The image pickup device according to claim 1, further comprising an object storage part for storing the feature amount of the face image of a specified object,
wherein the information processing part compares the feature amount of the face image extracted by the face image extraction part with the feature amount stored in the object storage part, so that in the case where the comparing process shows that the extracted face image is that of the specified object, the link information containing the inference information obtained by the inference process of the inference part and the information for identifying the specified object is produced and stored in selected one of the memory and the storage medium together with the image picked up by the image pickup unit.
13. An image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in selected one of an internal memory and a predetermined storage medium,
wherein the control unit includes
a registration part for holding the feature amount of the face image of each of a predetermined number of objects and the information required for adjusting the optimum image pickup conditions in correspondence with the identification information unique to the object,
a face image extraction part for extracting the face image contained in the image picked up by the image pickup unit,
an inference part for inferring the object by comparing the feature amount of the face image extracted by the face image extraction part with the information registered in the registration part,
an image pickup conditions adjusting part for adjusting the image pickup conditions for the image pickup unit using the registered information of the object estimated by the inference part, and
an information processing part for storing in selected one of the memory and the storage medium the image obtained under the image pickup conditions adjusted by the image pickup conditions adjusting part.
14. The image pickup device according to claim 13,
wherein the control unit includes a part for receiving the input of the information required for adjusting the optimum image pickup conditions and the identification information of the object in response to the image pickup operation of a predetermined object for registration in the registration part, and storing the input information in the registration part together with the face image of the object.
15. The image pickup device according to claim 13,
wherein the information processing part includes a part for producing the link information containing the position of the face image extracted by the face image extraction part and the inference information obtained by the inference process executed by the inference part, and
wherein the link information is stored in selected one of the memory and the storage medium together with the image picked up by the image pickup unit.
16. The image pickup device according to claim 13, further comprising a distance recognition part for recognizing the distance to an object,
wherein the face image extraction part includes a part for specifying the size of the face image to be extracted, based on the result of recognition by the distance recognition part.
17. The image pickup device according to claim 13, wherein the control unit includes a focal length adjusting part for adjusting the focal length of the lens of the image pickup unit in accordance with the result of extraction by the face image extraction part.
18. The image pickup device according to claim 13, further comprising a first operating unit for designating the range of extraction of the face image,
wherein the face image extraction part includes a part for limiting the face image extraction area in the image picked up by the image pickup unit in accordance with the designating operation of the first operating unit.
19. The image pickup device according to claim 13, further comprising a second operating unit for designating the deletion of the result of extracting a predetermined part of the face image extracted,
wherein the face image extraction part includes a part for updating the result of extracting the face image in accordance with the designating operation of the second operating unit.
20. The image pickup device according to claim 13, further comprising a third operating unit for performing the operation of correcting the inference information acquired by the inference process of the inference part,
wherein the information processing part includes a part for correcting the inference information in accordance with the correcting operation of the third operating unit.
21. The image pickup device according to claim 13, further comprising a fourth operating unit for correcting the image pickup conditions adjusted by the image pickup conditions adjusting part,
wherein the image pickup conditions adjusting part includes a part for readjusting the image pickup conditions in accordance with the correcting operation of the fourth operating unit.
22. The image pickup device according to claim 13,
wherein the information processing part includes a part for determining the direction of the object face in the image stored in selected one of the memory and the storage medium based on the result of extraction by the face image extraction part, and a part for rotating the image in such a manner that the direction of the face conforms with a predetermined reference direction in the case where the determined direction of the face is different from the predetermined reference direction.
23. A program to be executed by an image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in selected one of an internal memory and a predetermined storage medium, the program comprising:
a step of extracting the face image contained in the image picked up by the image pickup unit;
a step of inferring the attributes of a person constituting an object based on the feature amount in an image area including the face image upon extraction of the face image;
a step of adjusting the image pickup conditions for the image pickup unit based on the result of inference in the inference step; and
an information processing step of storing in selected one of the memory and the storage medium the image acquired under the image pickup conditions adjusted by the image pickup conditions adjusting step.
24. A program to be executed by an image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in selected one of an internal memory and a predetermined storage medium, the program comprising the steps of:
registering the registration information on the feature amount of the face image of each of a predetermined number of objects and the information required for adjusting the optimum image pickup conditions in correspondence with the identification information unique to the object;
extracting the face image contained in the image picked up by the image pickup unit;
estimating the object by comparing the registration information with the feature amount of the face image extracted in the face image extraction step;
adjusting the image pickup conditions for the image pickup device using the registration information of the object estimated in the estimation step; and
storing in selected one of the memory and the storage medium the image acquired under the image pickup conditions adjusted in the image pickup conditions adjusting step.
25. A method to be executed by an image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in selected one of an internal memory and a predetermined storage medium, the method comprising:
a step of extracting the face image contained in the image picked up by the image pickup unit;
a step of inferring the attributes of a person constituting an object based on the feature amount in an image area including the face image upon extraction of the face image;
a step of adjusting the image pickup conditions for the image pickup unit based on the result of inference in the inference step; and
an information processing step of storing in selected one of the memory and the storage medium the image acquired under the image pickup conditions adjusted by the image pickup conditions adjusting step.
26. A method to be executed by an image pickup device comprising an image pickup unit including a lens and an image sensor, and a control unit for processing the image picked up by the image pickup unit and storing the processed image in selected one of an internal memory and a predetermined storage medium, the method comprising the steps of:
registering the registration information on the feature amount of the face image of each of a predetermined number of objects and the information required for adjusting the optimum image pickup conditions in correspondence with the identification information unique to the object;
extracting the face image contained in the image picked up by the image pickup unit;
estimating the object by comparing the feature amount of the face image extracted in the face image extraction step with the registration information;
adjusting the image pickup conditions for the image pickup unit using the registration information on the object estimated by the estimation step; and
storing in selected one of the memory and the storage medium the image acquired under the image pickup conditions adjusted in the image pickup conditions adjusting step.
US10/758,905 2003-01-17 2004-01-16 Image pickup device, image pickup device program and image pickup method Abandoned US20040208114A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003009285A JP4218348B2 (en) 2003-01-17 2003-01-17 Imaging device
JP009285/2003 2003-01-17

Publications (1)

Publication Number Publication Date
US20040208114A1 true US20040208114A1 (en) 2004-10-21

Family

ID=32588556

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/758,905 Abandoned US20040208114A1 (en) 2003-01-17 2004-01-16 Image pickup device, image pickup device program and image pickup method

Country Status (4)

Country Link
US (1) US20040208114A1 (en)
EP (1) EP1441497A3 (en)
JP (1) JP4218348B2 (en)
CN (1) CN1263286C (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040183951A1 (en) * 2003-03-06 2004-09-23 Lee Hyeok-Beom Image-detectable monitoring system and method for using the same
US20050088546A1 (en) * 2003-10-27 2005-04-28 Fuji Photo Film Co., Ltd. Photographic apparatus
US20050219395A1 (en) * 2004-03-31 2005-10-06 Fuji Photo Film Co., Ltd. Digital still camera and method of controlling same
US20060182433A1 (en) * 2005-02-15 2006-08-17 Nikon Corporation Electronic camera
US20070019083A1 (en) * 2005-07-11 2007-01-25 Fuji Photo Film Co., Ltd. Image capturing apparatus, photograph quantity management method, and photograph quantity management program
US20070030363A1 (en) * 2005-08-05 2007-02-08 Hewlett-Packard Development Company, L.P. Image capture method and apparatus
US20070286590A1 (en) * 2006-06-09 2007-12-13 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US20080002865A1 (en) * 2006-06-19 2008-01-03 Tetsuya Toyoda Electronic imaging apparatus and system for specifying an individual
US20080024644A1 (en) * 2006-07-25 2008-01-31 Fujifilm Corporation System for and method of taking image
US20080123967A1 (en) * 2006-11-08 2008-05-29 Cryptometrics, Inc. System and method for parallel image processing
US20080284901A1 (en) * 2007-05-18 2008-11-20 Takeshi Misawa Automatic focus adjusting apparatus and automatic focus adjusting method, and image pickup apparatus and image pickup method
US20090060291A1 (en) * 2007-09-03 2009-03-05 Sony Corporation Information processing apparatus, information processing method, and computer program
US20090102940A1 (en) * 2007-10-17 2009-04-23 Akihiro Uchida Imaging device and imaging control method
US20090116830A1 (en) * 2007-11-05 2009-05-07 Sony Corporation Imaging apparatus and method for controlling the same
US20090135269A1 (en) * 2005-11-25 2009-05-28 Nikon Corporation Electronic Camera and Image Processing Device
US20090273667A1 (en) * 2006-04-11 2009-11-05 Nikon Corporation Electronic Camera
US20100157084A1 (en) * 2008-12-18 2010-06-24 Olympus Imaging Corp. Imaging apparatus and image processing method used in imaging device
US20100194906A1 (en) * 2009-01-23 2010-08-05 Nikon Corporation Display apparatus and imaging apparatus
US20100271507A1 (en) * 2009-04-24 2010-10-28 Qualcomm Incorporated Image capture parameter adjustment using face brightness information
US20110058032A1 (en) * 2009-09-07 2011-03-10 Samsung Electronics Co., Ltd. Apparatus and method for detecting face
US20110069201A1 (en) * 2009-03-31 2011-03-24 Ryouichi Kawanishi Image capturing device, integrated circuit, image capturing method, program, and recording medium
US20110133299A1 (en) * 2009-12-08 2011-06-09 Qualcomm Incorporated Magnetic Tunnel Junction Device
US20110199502A1 (en) * 2010-02-15 2011-08-18 Canon Kabushiki Kaisha Image pickup apparatus
US20120020545A1 (en) * 2010-07-21 2012-01-26 Fuji Machine Mfg. Co., Ltd. Component presence/absence judging apparatus and method
US20120062765A1 (en) * 2007-05-02 2012-03-15 Casio Computer Co., Ltd. Imaging device, recording medium having recorded therein imaging control program, and imaging control method
US20120257826A1 (en) * 2011-04-09 2012-10-11 Samsung Electronics Co., Ltd Color conversion apparatus and method thereof
US20130128073A1 (en) * 2011-11-22 2013-05-23 Samsung Electronics Co. Ltd. Apparatus and method for adjusting white balance
US8842932B2 (en) 2010-06-18 2014-09-23 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and recording medium recording a program
US8866931B2 (en) 2007-08-23 2014-10-21 Samsung Electronics Co., Ltd. Apparatus and method for image recognition of facial areas in photographic images from a digital camera
US20160110588A1 (en) * 2014-10-15 2016-04-21 Sony Computer Entertainment Inc. Information processing device, information processing method, and computer program
US20160196662A1 (en) * 2013-08-16 2016-07-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for manufacturing virtual fitting model image
US20160364883A1 (en) * 2015-06-11 2016-12-15 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and recording medium
US9621779B2 (en) 2010-03-30 2017-04-11 Panasonic Intellectual Property Management Co., Ltd. Face recognition device and method that update feature amounts at different frequencies based on estimated distance
US20170365063A1 (en) * 2016-06-21 2017-12-21 Canon Kabushiki Kaisha Image processing apparatus and method for controlling the same
EP3982239A4 (en) * 2019-07-25 2022-08-03 Huawei Technologies Co., Ltd. Input method and electronic device

Families Citing this family (86)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4277534B2 (en) * 2003-02-12 2009-06-10 オムロン株式会社 Image editing apparatus and image editing method
US7792970B2 (en) 2005-06-17 2010-09-07 Fotonation Vision Limited Method for establishing a paired connection between media devices
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US7471846B2 (en) 2003-06-26 2008-12-30 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7574016B2 (en) 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US7269292B2 (en) 2003-06-26 2007-09-11 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US8498452B2 (en) 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
JP4574249B2 (en) * 2004-06-29 2010-11-04 キヤノン株式会社 Image processing apparatus and method, program, and imaging apparatus
US7430369B2 (en) 2004-08-16 2008-09-30 Canon Kabushiki Kaisha Image capture apparatus and control method therefor
JP2006115406A (en) * 2004-10-18 2006-04-27 Omron Corp Imaging apparatus
JP4135702B2 (en) * 2004-11-25 2008-08-20 松下電工株式会社 Intercom system
JP4424184B2 (en) * 2004-12-08 2010-03-03 株式会社ニコン Imaging device
JP2006295749A (en) * 2005-04-14 2006-10-26 Fuji Photo Film Co Ltd Reduced image list displaying system, method, and program controlling the system
JP4906034B2 (en) * 2005-05-16 2012-03-28 富士フイルム株式会社 Imaging apparatus, method, and program
US8260285B2 (en) 2005-06-14 2012-09-04 St-Ericsson Sa Performing diagnostics in a wireless system
JP2007010898A (en) * 2005-06-29 2007-01-18 Casio Comput Co Ltd Imaging apparatus and program therefor
JP4670521B2 (en) * 2005-07-19 2011-04-13 株式会社ニコン Imaging device
JP2007065290A (en) * 2005-08-31 2007-03-15 Nikon Corp Automatic focusing device
JP4632250B2 (en) * 2005-09-15 2011-02-16 Kddi株式会社 Attribute determination entertainment device by face
JP4525561B2 (en) * 2005-11-11 2010-08-18 ソニー株式会社 Imaging apparatus, image processing method, and program
JP2007143047A (en) * 2005-11-22 2007-06-07 Fujifilm Corp Image processing system and image processing program
JP4315148B2 (en) * 2005-11-25 2009-08-19 株式会社ニコン Electronic camera
JP2007178543A (en) * 2005-12-27 2007-07-12 Samsung Techwin Co Ltd Imaging apparatus
JP4579169B2 (en) * 2006-02-27 2010-11-10 富士フイルム株式会社 Imaging condition setting method and imaging apparatus using the same
JP4816140B2 (en) * 2006-02-28 2011-11-16 ソニー株式会社 Image processing system and method, image processing apparatus and method, imaging apparatus and method, program recording medium, and program
JP4760497B2 (en) * 2006-04-03 2011-08-31 セイコーエプソン株式会社 Image data generation apparatus and image data generation method
JP5087856B2 (en) 2006-04-05 2012-12-05 株式会社ニコン Electronic camera
JP2007282119A (en) * 2006-04-11 2007-10-25 Nikon Corp Electronic camera and image processing apparatus
JP2007282118A (en) * 2006-04-11 2007-10-25 Nikon Corp Electronic camera and image processing apparatus
JP4127297B2 (en) * 2006-06-09 2008-07-30 ソニー株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND COMPUTER PROGRAM
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
JP4818999B2 (en) * 2006-07-25 2011-11-16 富士フイルム株式会社 Imaging apparatus, method, and program
JP4444936B2 (en) * 2006-09-19 2010-03-31 富士フイルム株式会社 Imaging apparatus, method, and program
US7889242B2 (en) * 2006-10-26 2011-02-15 Hewlett-Packard Development Company, L.P. Blemish repair tool for digital photographs in a camera
JP2008109582A (en) * 2006-10-27 2008-05-08 Fujifilm Corp Imaging apparatus, and imaging method
JP4780198B2 (en) * 2006-11-10 2011-09-28 コニカミノルタホールディングス株式会社 Authentication system and authentication method
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
JP5099488B2 (en) * 2007-08-31 2012-12-19 カシオ計算機株式会社 Imaging apparatus, face recognition method and program thereof
JP5096863B2 (en) * 2007-10-09 2012-12-12 オリンパスイメージング株式会社 Search device
JP5043721B2 (en) * 2008-02-29 2012-10-10 オリンパスイメージング株式会社 Imaging device
JP5042896B2 (en) * 2008-03-25 2012-10-03 オリンパス株式会社 Image processing apparatus and image processing program
US7855737B2 (en) 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
JP2009260630A (en) * 2008-04-16 2009-11-05 Olympus Corp Image processor and image processing program
JP2008282031A (en) * 2008-06-23 2008-11-20 Sony Corp Imaging apparatus and imaging apparatus control method, and computer program
JP5195120B2 (en) * 2008-07-25 2013-05-08 株式会社ニコン Digital camera
US9053524B2 (en) 2008-07-30 2015-06-09 Fotonation Limited Eye beautification under inaccurate localization
JP5547730B2 (en) 2008-07-30 2014-07-16 デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド Automatic facial and skin beautification using face detection
US8520089B2 (en) 2008-07-30 2013-08-27 DigitalOptics Corporation Europe Limited Eye beautification
KR101539043B1 (en) * 2008-10-31 2015-07-24 삼성전자주식회사 Image photography apparatus and method for proposing composition based person
JP5489197B2 (en) * 2008-11-10 2014-05-14 九州日本電気ソフトウェア株式会社 Electronic advertisement apparatus / method and program
JP5272797B2 (en) * 2009-02-24 2013-08-28 株式会社ニコン Digital camera
JP4788792B2 (en) * 2009-03-11 2011-10-05 カシオ計算機株式会社 Imaging apparatus, imaging method, and imaging program
JP5093178B2 (en) * 2009-04-06 2012-12-05 株式会社ニコン Electronic camera
JP5402409B2 (en) 2009-08-31 2014-01-29 ソニー株式会社 Shooting condition setting device, shooting condition setting method, and shooting condition setting program
US8379917B2 (en) 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features
KR101634247B1 (en) * 2009-12-04 2016-07-08 삼성전자주식회사 Digital photographing apparatus, mdthod for controlling the same
JP5045776B2 (en) * 2010-03-23 2012-10-10 カシオ計算機株式会社 Camera, camera control program, photographing method, and subject information transmission / reception system
JP5168320B2 (en) * 2010-06-16 2013-03-21 カシオ計算機株式会社 Camera, best shot shooting method, program
CN103069790B (en) * 2010-08-18 2016-03-16 日本电气株式会社 Image capture device, image and sound bearing calibration
JP2011135587A (en) * 2011-01-24 2011-07-07 Seiko Epson Corp Image-data generating apparatus and image-data generating method
WO2013025206A2 (en) 2011-08-16 2013-02-21 Empire Technology Development Llc Allocating data to plurality storage devices
CN104094589B (en) * 2012-02-06 2017-08-29 索尼公司 Picture catching controller, image processor, the method and image processing method for controlling picture catching
TWI461813B (en) * 2012-02-24 2014-11-21 Htc Corp Image capture method and image capture system thereof
JP5408288B2 (en) * 2012-05-14 2014-02-05 株式会社ニコン Electronic camera
CN103945104B (en) * 2013-01-21 2018-03-23 联想(北京)有限公司 Information processing method and electronic equipment
JP6030457B2 (en) * 2013-01-23 2016-11-24 株式会社メガチップス Image detection apparatus, control program, and image detection method
CN104519267A (en) * 2013-09-30 2015-04-15 北京三星通信技术研究有限公司 Shooting control method and terminal equipment
JP6270578B2 (en) * 2014-03-26 2018-01-31 キヤノン株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, AND PROGRAM
US9665198B2 (en) * 2014-05-06 2017-05-30 Qualcomm Incorporated System and method for optimizing haptic feedback
CN104267933A (en) * 2014-08-21 2015-01-07 深圳市金立通信设备有限公司 Flashlight adjusting method
CN104270564B (en) * 2014-08-21 2019-05-14 深圳市金立通信设备有限公司 A kind of terminal
CN104735362B (en) * 2015-03-11 2017-11-07 广东欧珀移动通信有限公司 Photographic method and device
CN104735354B (en) * 2015-03-13 2018-01-19 广东欧珀移动通信有限公司 A kind of method and device of shooting image
US10235032B2 (en) * 2015-08-05 2019-03-19 Htc Corporation Method for optimizing a captured photo or a recorded multi-media and system and electric device therefor
JP6398920B2 (en) * 2015-09-03 2018-10-03 オムロン株式会社 Violator detection device and violator detection system provided with the same
US9992407B2 (en) * 2015-10-01 2018-06-05 International Business Machines Corporation Image context based camera configuration
CN105721770A (en) * 2016-01-20 2016-06-29 广东欧珀移动通信有限公司 Shooting control method and shooting control device
CN106408603B (en) * 2016-06-21 2023-06-02 北京小米移动软件有限公司 Shooting method and device
US11227170B2 (en) * 2017-07-20 2022-01-18 Panasonic Intellectual Property Management Co., Ltd. Collation device and collation method
JP7199426B2 (en) * 2017-09-13 2023-01-05 コーニンクレッカ フィリップス エヌ ヴェ Camera and image calibration for subject identification
CN107727220A (en) * 2017-10-11 2018-02-23 上海展扬通信技术有限公司 A kind of human body measurement method and body measurement system based on intelligent terminal
CN108419011A (en) * 2018-02-11 2018-08-17 广东欧珀移动通信有限公司 Image pickup method and Related product

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5463470A (en) * 1991-10-09 1995-10-31 Fuji Photo Film Co., Ltd. Methods of collecting photometric image data and determining light exposure by extracting feature image data from an original image
US5638136A (en) * 1992-01-13 1997-06-10 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for detecting flesh tones in an image
US5815199A (en) * 1991-01-31 1998-09-29 Matsushita Electric Works, Ltd. Interphone with television
US6297846B1 (en) * 1996-05-30 2001-10-02 Fujitsu Limited Display control system for videoconference terminals
US20010046311A1 (en) * 1997-06-06 2001-11-29 Oki Electric Industry Co., Ltd. System for identifying individuals
US20020015514A1 (en) * 2000-04-13 2002-02-07 Naoto Kinjo Image processing method
US20020113862A1 (en) * 2000-11-10 2002-08-22 Center Julian L. Videoconferencing method with tracking of face and dynamic bandwidth allocation
US6806898B1 (en) * 2000-03-20 2004-10-19 Microsoft Corp. System and method for automatically adjusting gaze and head orientation for video conferencing
US20040228505A1 (en) * 2003-04-14 2004-11-18 Fuji Photo Film Co., Ltd. Image characteristic portion extraction method, computer readable medium, and data collection and processing device
US20050063566A1 (en) * 2001-10-17 2005-03-24 Beek Gary A . Van Face imaging system for recordal and automated identity confirmation
US7057636B1 (en) * 1998-12-22 2006-06-06 Koninklijke Philips Electronics N.V. Conferencing system and method for the automatic determination of preset positions corresponding to participants in video-mediated communications
US20060177110A1 (en) * 2005-01-20 2006-08-10 Kazuyuki Imagawa Face detection device
US20060204055A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image processing using face detection information

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3298138B2 (en) * 1992-03-23 2002-07-02 株式会社ニコン Focus detection device
JP3461626B2 (en) * 1995-07-28 2003-10-27 シャープ株式会社 Specific image region extraction method and specific image region extraction device
JP2000188768A (en) * 1998-12-22 2000-07-04 Victor Co Of Japan Ltd Automatic gradation correction method
US6526161B1 (en) * 1999-08-30 2003-02-25 Koninklijke Philips Electronics N.V. System and method for biometrics-based facial feature extraction
JP4576658B2 (en) * 2000-02-29 2010-11-10 ソニー株式会社 Imaging apparatus, imaging method, and imaging program
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
EP1158786A3 (en) * 2000-05-24 2005-03-09 Sony Corporation Transmission of the region of interest of an image

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5815199A (en) * 1991-01-31 1998-09-29 Matsushita Electric Works, Ltd. Interphone with television
US5463470A (en) * 1991-10-09 1995-10-31 Fuji Photo Film Co., Ltd. Methods of collecting photometric image data and determining light exposure by extracting feature image data from an original image
US5638136A (en) * 1992-01-13 1997-06-10 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for detecting flesh tones in an image
US6297846B1 (en) * 1996-05-30 2001-10-02 Fujitsu Limited Display control system for videoconference terminals
US20010046311A1 (en) * 1997-06-06 2001-11-29 Oki Electric Industry Co., Ltd. System for identifying individuals
US7057636B1 (en) * 1998-12-22 2006-06-06 Koninklijke Philips Electronics N.V. Conferencing system and method for the automatic determination of preset positions corresponding to participants in video-mediated communications
US6806898B1 (en) * 2000-03-20 2004-10-19 Microsoft Corp. System and method for automatically adjusting gaze and head orientation for video conferencing
US20050008246A1 (en) * 2000-04-13 2005-01-13 Fuji Photo Film Co., Ltd. Image Processing method
US20020015514A1 (en) * 2000-04-13 2002-02-07 Naoto Kinjo Image processing method
US20060251299A1 (en) * 2000-04-13 2006-11-09 Fuji Photo Film Co., Ltd. Image processing method
US20020113862A1 (en) * 2000-11-10 2002-08-22 Center Julian L. Videoconferencing method with tracking of face and dynamic bandwidth allocation
US20050063566A1 (en) * 2001-10-17 2005-03-24 Beek Gary A . Van Face imaging system for recordal and automated identity confirmation
US20040228505A1 (en) * 2003-04-14 2004-11-18 Fuji Photo Film Co., Ltd. Image characteristic portion extraction method, computer readable medium, and data collection and processing device
US20060204055A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image processing using face detection information
US20060177110A1 (en) * 2005-01-20 2006-08-10 Kazuyuki Imagawa Face detection device

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7359529B2 (en) * 2003-03-06 2008-04-15 Samsung Electronics Co., Ltd. Image-detectable monitoring system and method for using the same
US20040183951A1 (en) * 2003-03-06 2004-09-23 Lee Hyeok-Beom Image-detectable monitoring system and method for using the same
US20050088546A1 (en) * 2003-10-27 2005-04-28 Fuji Photo Film Co., Ltd. Photographic apparatus
US7532235B2 (en) * 2003-10-27 2009-05-12 Fujifilm Corporation Photographic apparatus
US20050219395A1 (en) * 2004-03-31 2005-10-06 Fuji Photo Film Co., Ltd. Digital still camera and method of controlling same
US7884874B2 (en) * 2004-03-31 2011-02-08 Fujifilm Corporation Digital still camera and method of controlling same
US20060182433A1 (en) * 2005-02-15 2006-08-17 Nikon Corporation Electronic camera
US20090147107A1 (en) * 2005-02-15 2009-06-11 Nikon Corporation Electronic camera
US7881601B2 (en) 2005-02-15 2011-02-01 Nikon Corporation Electronic camera
US20070019083A1 (en) * 2005-07-11 2007-01-25 Fuji Photo Film Co., Ltd. Image capturing apparatus, photograph quantity management method, and photograph quantity management program
US7787665B2 (en) * 2005-07-11 2010-08-31 Fujifilm Corporation Image capturing apparatus, photograph quantity management method, and photograph quantity management program
US8054343B2 (en) * 2005-08-05 2011-11-08 Hewlett-Packard Development Company, L.P. Image capture method and apparatus
US20070030363A1 (en) * 2005-08-05 2007-02-08 Hewlett-Packard Development Company, L.P. Image capture method and apparatus
US8488847B2 (en) 2005-11-25 2013-07-16 Nikon Corporation Electronic camera and image processing device
US20090135269A1 (en) * 2005-11-25 2009-05-28 Nikon Corporation Electronic Camera and Image Processing Device
US20090273667A1 (en) * 2006-04-11 2009-11-05 Nikon Corporation Electronic Camera
US8212894B2 (en) * 2006-04-11 2012-07-03 Nikon Corporation Electronic camera having a face detecting function of a subject
US20070286590A1 (en) * 2006-06-09 2007-12-13 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US7860386B2 (en) * 2006-06-09 2010-12-28 Sony Corporation Imaging apparatus, control method of imaging apparatus, and computer program
US8180116B2 (en) * 2006-06-19 2012-05-15 Olympus Imaging Corp. Image pickup apparatus and system for specifying an individual
US20080002865A1 (en) * 2006-06-19 2008-01-03 Tetsuya Toyoda Electronic imaging apparatus and system for specifying an individual
US7796163B2 (en) 2006-07-25 2010-09-14 Fujifilm Corporation System for and method of taking image based on objective body in a taken image
US20080024644A1 (en) * 2006-07-25 2008-01-31 Fujifilm Corporation System for and method of taking image
US8295649B2 (en) * 2006-11-08 2012-10-23 Nextgenid, Inc. System and method for parallel processing of images from a large number of cameras
US20080123967A1 (en) * 2006-11-08 2008-05-29 Cryptometrics, Inc. System and method for parallel image processing
US20120062765A1 (en) * 2007-05-02 2012-03-15 Casio Computer Co., Ltd. Imaging device, recording medium having recorded therein imaging control program, and imaging control method
US8379942B2 (en) * 2007-05-02 2013-02-19 Casio Computer Co., Ltd. Imaging device, recording medium having recorded therein imaging control program, and imaging control method
US8004599B2 (en) 2007-05-18 2011-08-23 Fujifilm Corporation Automatic focus adjusting apparatus and automatic focus adjusting method, and image pickup apparatus and image pickup method
US20080284901A1 (en) * 2007-05-18 2008-11-20 Takeshi Misawa Automatic focus adjusting apparatus and automatic focus adjusting method, and image pickup apparatus and image pickup method
US8866931B2 (en) 2007-08-23 2014-10-21 Samsung Electronics Co., Ltd. Apparatus and method for image recognition of facial areas in photographic images from a digital camera
US20090060291A1 (en) * 2007-09-03 2009-03-05 Sony Corporation Information processing apparatus, information processing method, and computer program
US8295556B2 (en) * 2007-09-03 2012-10-23 Sony Corporation Apparatus and method for determining line-of-sight direction in a face image and controlling camera operations therefrom
US8111315B2 (en) * 2007-10-17 2012-02-07 Fujifilm Corporation Imaging device and imaging control method that detects and displays composition information
US20090102940A1 (en) * 2007-10-17 2009-04-23 Akihiro Uchida Imaging device and imaging control method
US7801432B2 (en) * 2007-11-05 2010-09-21 Sony Corporation Imaging apparatus and method for controlling the same
US20090116830A1 (en) * 2007-11-05 2009-05-07 Sony Corporation Imaging apparatus and method for controlling the same
US20100157084A1 (en) * 2008-12-18 2010-06-24 Olympus Imaging Corp. Imaging apparatus and image processing method used in imaging device
US8570391B2 (en) 2008-12-18 2013-10-29 Olympus Imaging Corp. Imaging apparatus and image processing method used in imaging device
US20100194906A1 (en) * 2009-01-23 2010-08-05 Nikon Corporation Display apparatus and imaging apparatus
US8421901B2 (en) 2009-01-23 2013-04-16 Nikon Corporation Display apparatus and imaging apparatus
US8675096B2 (en) 2009-03-31 2014-03-18 Panasonic Corporation Image capturing device for setting one or more setting values for an imaging mechanism based on acquired sound data that includes information reflecting an imaging environment
US20110069201A1 (en) * 2009-03-31 2011-03-24 Ryouichi Kawanishi Image capturing device, integrated circuit, image capturing method, program, and recording medium
US8339506B2 (en) 2009-04-24 2012-12-25 Qualcomm Incorporated Image capture parameter adjustment using face brightness information
US20100271507A1 (en) * 2009-04-24 2010-10-28 Qualcomm Incorporated Image capture parameter adjustment using face brightness information
US20110058032A1 (en) * 2009-09-07 2011-03-10 Samsung Electronics Co., Ltd. Apparatus and method for detecting face
US8780197B2 (en) * 2009-09-07 2014-07-15 Samsung Electronics Co., Ltd. Apparatus and method for detecting face
US20110133299A1 (en) * 2009-12-08 2011-06-09 Qualcomm Incorporated Magnetic Tunnel Junction Device
US8558331B2 (en) 2009-12-08 2013-10-15 Qualcomm Incorporated Magnetic tunnel junction device
US8969984B2 (en) 2009-12-08 2015-03-03 Qualcomm Incorporated Magnetic tunnel junction device
US8817122B2 (en) 2010-02-15 2014-08-26 Canon Kabushiki Kaisha Image pickup apparatus
US20110199502A1 (en) * 2010-02-15 2011-08-18 Canon Kabushiki Kaisha Image pickup apparatus
US9621779B2 (en) 2010-03-30 2017-04-11 Panasonic Intellectual Property Management Co., Ltd. Face recognition device and method that update feature amounts at different frequencies based on estimated distance
US8842932B2 (en) 2010-06-18 2014-09-23 Casio Computer Co., Ltd. Image processing apparatus, image processing method, and recording medium recording a program
US20120020545A1 (en) * 2010-07-21 2012-01-26 Fuji Machine Mfg. Co., Ltd. Component presence/absence judging apparatus and method
US8699782B2 (en) * 2010-07-21 2014-04-15 Fuji Machine Mfg. Co., Ltd. Component presence/absence judging apparatus and method
US20120257826A1 (en) * 2011-04-09 2012-10-11 Samsung Electronics Co., Ltd Color conversion apparatus and method thereof
US8849025B2 (en) * 2011-04-09 2014-09-30 Samsung Electronics Co., Ltd Color conversion apparatus and method thereof
US8941757B2 (en) * 2011-11-22 2015-01-27 Samsung Electronics Co., Ltd. Apparatus and method for adjusting white balance
US20130128073A1 (en) * 2011-11-22 2013-05-23 Samsung Electronics Co. Ltd. Apparatus and method for adjusting white balance
US20160196662A1 (en) * 2013-08-16 2016-07-07 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for manufacturing virtual fitting model image
US20160110588A1 (en) * 2014-10-15 2016-04-21 Sony Computer Entertainment Inc. Information processing device, information processing method, and computer program
US9716712B2 (en) * 2014-10-15 2017-07-25 Sony Interactive Entertainment Inc. Information processing device, and information processing method to improve face authentication
US20160364883A1 (en) * 2015-06-11 2016-12-15 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and recording medium
US10242287B2 (en) * 2015-06-11 2019-03-26 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and recording medium
US10565712B2 (en) * 2016-06-21 2020-02-18 Canon Kabushiki Kaisha Image processing apparatus and method for controlling the same
US20170365063A1 (en) * 2016-06-21 2017-12-21 Canon Kabushiki Kaisha Image processing apparatus and method for controlling the same
EP3982239A4 (en) * 2019-07-25 2022-08-03 Huawei Technologies Co., Ltd. Input method and electronic device

Also Published As

Publication number Publication date
JP4218348B2 (en) 2009-02-04
CN1522052A (en) 2004-08-18
CN1263286C (en) 2006-07-05
JP2004222118A (en) 2004-08-05
EP1441497A3 (en) 2005-06-22
EP1441497A2 (en) 2004-07-28

Similar Documents

Publication Title
US20040208114A1 (en) Image pickup device, image pickup device program and image pickup method
EP1447973B1 (en) Image editing apparatus, image editing method and program
JP5159515B2 (en) Image processing apparatus and control method thereof
US6636635B2 (en) Object extraction method, and image sensing apparatus using the method
JP4574249B2 (en) Image processing apparatus and method, program, and imaging apparatus
JP4725377B2 (en) Face image registration device, face image registration method, face image registration program, and recording medium
US20050220346A1 (en) Red eye detection device, red eye detection method, and recording medium with red eye detection program
US20100205177A1 (en) Object identification apparatus and method for identifying object
US20060251299A1 (en) Image processing method
US20090002509A1 (en) Digital camera and method of controlling same
US20050129326A1 (en) Image processing apparatus and print system
US20070071316A1 (en) Image correcting method and image correcting system
KR20080060265A (en) Determining a particular person from a collection
US20120020568A1 (en) Image processor and image processing method
JP2004094491A (en) Face orientation estimation device and method and its program
US20030193582A1 (en) Method for storing an image, method and system for retrieving a registered image and method for performing image processing on a registered image
US20090324069A1 (en) Image processing device, image processing method, and computer readable medium
KR20190036168A (en) Method for correcting image based on category and recognition rate of objects included image and electronic device for the same
JP2011145958A (en) Pattern identification device and method of controlling the same
JP4090926B2 (en) Image storage method, registered image retrieval method and system, registered image processing method, and program for executing these methods
US8213720B2 (en) System and method for determining chin position in a digital image
US20050041103A1 (en) Image processing method, image processing apparatus and image processing program
JP7321772B2 (en) Image processing device, image processing method, and program
JP4522229B2 (en) Image processing method and apparatus
JP4946913B2 (en) Imaging apparatus and image processing program

Legal Events

Date Code Title Description

AS Assignment
Owner name: OMRON CORPORATION, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KAWADE, MASATO;REEL/FRAME:015496/0595
Effective date: 20040608

AS Assignment
Owner name: OMRON CORPORATION, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LAO, SHIHONG;REEL/FRAME:015482/0387
Effective date: 20040608

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION