US20040170397A1 - Camera and method of photographing good image - Google Patents
- Publication number
- US20040170397A1 (application US10/798,375)
- Authority
- US
- United States
- Prior art keywords
- image
- condition
- photographing
- timing signal
- variation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
- H04N5/772—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/78—Television signal recording using magnetic recording
- H04N5/781—Television signal recording using magnetic recording on disks or drums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/907—Television signal recording using static stores, e.g. storage tubes or semiconductor memories
Definitions
- the present invention relates to a camera, and more particularly to a camera capable of automatically photographing an image of a subject when the subject satisfies a predetermined photographing condition.
- Japanese Patent Laid-open Publication (Kokai) H9-212620 and Japanese Patent Laid-open Publication (Kokai) H10-191216 disclose a technique to continuously photograph a plurality of images. Those images are displayed, and the person photographed by the camera can select a desirable image from among them.
- Japanese Patent Laid-open Publication (Kokai) H5-40303, H4-156526 and H5-100148 disclose cameras which can automatically judge the timing for photographing images.
- a camera comprises: an image data input unit forming an image of a subject for photographing said subject; a condition storing unit storing a predetermined photographing condition related to a desirable subject; and a timing signal generator outputting a timing signal when said subject satisfies said photographing condition.
- the camera may include an extractor extracting data of an aimed object from said image of said subject based on an extracting condition, wherein said photographing condition may include a predetermined condition related to a desirable aimed object and said timing signal generator outputs said timing signal when said aimed object satisfies said photographing condition.
- the extracting condition may be based on depth information of said image indicating the distance to each part of said subject.
- the extractor may detect data of a judgement location from said data of said aimed object in said image based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to a desirable judgement location, and the timing signal generator may output said timing signal when said judgement location satisfies said photographing condition.
- the extractor may extract data of a plurality of said aimed objects from said image; and said timing signal generator may output said timing signal when said plurality of aimed objects satisfy said photographing condition.
- the timing signal generator may output said timing signal when the ratio of said aimed objects satisfying said photographing condition to all of said plurality of aimed objects exceeds a predetermined ratio.
- the extractor may detect data of a plurality of judgement locations from each of said data of said plurality of aimed objects based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to said judgement location, and said timing signal generator may output said timing signal when said plurality of said judgement locations satisfy said photographing condition.
- the timing signal generator may output said timing signal when the ratio of said judgement locations satisfying said photographing condition to all of said plurality of judgement locations exceeds a predetermined ratio.
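The ratio criterion in the two preceding paragraphs can be illustrated with a short sketch. This is an editorial illustration, not part of the disclosure; the function name, the flag list, and the default threshold are hypothetical.

```python
def should_fire(satisfied_flags, required_ratio=0.8):
    """Output a timing signal when the fraction of aimed objects (or
    judgement locations) satisfying the photographing condition
    reaches the predetermined ratio."""
    if not satisfied_flags:
        return False
    return sum(satisfied_flags) / len(satisfied_flags) >= required_ratio

# e.g. four of five detected faces pass the "not blinking" check
print(should_fire([True, True, True, True, False]))  # prints True
```

In a group photograph this lets the camera fire when most, rather than all, of the subjects look good.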
- the camera may include an image-pickup control unit controlling said input unit for photographing said image based on said timing signal.
- the camera may include an illuminator illuminating said subject based on said timing signal.
- the camera may include a recording unit recording said image on a replaceable nonvolatile recording medium based on said timing signal.
- the camera may include an alarm outputting an alarm signal for notifying that said subject satisfies said photographing condition based on said timing signal.
- the photographing condition may include a plurality of photographing conditions, and said camera may include a condition-setting unit for previously selecting, from among said plurality of photographing conditions, at least one photographing condition to be used for photographing said image.
- the camera may include: an input condition determining unit determining an input condition for inputting said image based on information of said judgement location detected by said extractor; and an image-forming control unit controlling an input unit for forming said image of said subject based on said input condition.
- the camera may include an image processing unit processing said image based on information of said judgement location detected by said extractor.
- a camera comprises: an image data input unit forming a plurality of images of a subject for photographing said subject; a condition storing unit storing a predetermined photographing condition related to a desirable variation of said subject; a variation detector detecting variation of said subject in said plurality of said images based on information of said plurality of images; and a timing signal generator outputting a timing signal when said variation of said subject satisfies said photographing condition.
- the camera may include: an extractor extracting data of an aimed object from each of said plurality of images of said subject based on an extracting condition, wherein said photographing condition may include a predetermined condition related to a desirable aimed object, said variation detector may detect variation of said aimed object in said plurality of images based on said information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said aimed object satisfies said photographing condition.
- the extracting condition may be based on depth information of said plurality of images indicating the distance to each part of said subject.
- the extractor may detect data of a judgement location from said data of said aimed object in each of said plurality of images based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to a desirable judgement location, said variation detector may detect variation of said judgement location in said plurality of images based on said information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said judgement location satisfies said photographing condition.
- the photographing condition may include a predetermined starting condition for starting detection of said variation of said judgement location, and said variation detector may start detecting said variation of said judgement location when said judgement location satisfies said starting condition.
- the extractor may extract data of a plurality of said aimed objects from each of said plurality of images, said variation detector may detect variation of each of said plurality of said aimed objects in said plurality of images based on information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said plurality of said aimed objects satisfies said photographing condition.
- the extractor may detect data of a plurality of judgement locations from each of said data of said plurality of aimed objects based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to desirable variation of said judgement location, said variation detector may detect variation of each of said plurality of said judgement locations in said plurality of images based on information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said plurality of said judgement locations satisfies said photographing condition.
- the camera may include an image pickup control unit controlling said input unit for photographing said image based on said timing signal.
- the camera may include an illuminator illuminating said subject based on said timing signal.
- the camera may include a recording unit recording said image on a replaceable nonvolatile recording medium based on said timing signal.
- the camera may include an alarm outputting an alarm signal for notifying that said subject satisfies said photographing condition based on said timing signal.
- the photographing condition may include a plurality of photographing conditions, and said camera may include a condition-setting unit for previously selecting, from among said plurality of photographing conditions, at least one photographing condition to be used for photographing said image.
- the timing signal generator may select said judgement location satisfying said photographing condition from among said plurality of said judgement locations in said plurality of images, and may output information for said aimed object including said judgement location.
- the camera may include: an input condition determining unit determining an input condition for inputting said image based on information for said judgement location; and an image forming control unit controlling an input unit for forming said image of said subject based on said input condition.
- the timing signal generator may select said judgement location satisfying said photographing condition from among said plurality of said judgement locations in said plurality of images, and outputs information for said aimed object including said judgement location, and said camera may include an image processing unit processing said image based on said information for said judgement location.
- a method of photographing an image of a subject comprises outputting a timing signal when said subject satisfies a predetermined photographing condition.
- the method may include: extracting data of an aimed object from said image of said subject based on an extracting condition, wherein said photographing condition may include a predetermined condition related to a desirable aimed object, and said timing signal may be output when said aimed object satisfies said photographing condition.
- the extracting may include detecting data of a judgement location from said data of said aimed object in said image based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to a desirable judgement location, and said timing signal may be output when said judgement location satisfies said photographing condition.
- the method may include photographing said subject based on said timing signal.
- the method may include recording said photographed image of said subject on a replaceable nonvolatile recording medium based on said timing signal.
- the method may include: determining an input condition for inputting said image based on information for said judgement location detected in said detecting step; and forming said image of said subject based on said input condition.
- the method may include processing said image based on information for said judgement location detected in said detecting step.
- a method of photographing a plurality of images of a subject comprises: detecting variation of said subject in said plurality of images based on information for said plurality of images; and outputting a timing signal when said variation of said subject satisfies a predetermined photographing condition related to a desirable variation of said subject.
- the method may include extracting data of an aimed object from each of said plurality of images of said subject based on an extracting condition, said detecting may include detecting variation of said aimed object based on information for said image, and said timing signal may be output when said variation of said aimed object satisfies said photographing condition.
- the extraction of said aimed object may include detecting data of a judgement location from said data of said aimed object in each of said plurality of images based on a detecting condition different from said extracting condition, said detecting variation of said subject may include detecting variation of said judgement location based on information for said image, and said timing signal may be output when said variation of said judgement location satisfies said photographing condition.
- the photographing condition may include a predetermined starting condition for starting detection of said variation of said judgement location, and said detecting of variation may start detecting said variation of said judgement location when said judgement location satisfies said starting condition.
- the method may include photographing said image based on said timing signal.
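The variation-based method summarized above can be sketched as follows. The class name, the scalar feature (a mouth-width measurement standing in for whatever the variation detector actually tracks), and the threshold are illustrative assumptions, not values from the disclosure.

```python
class VariationDetector:
    """Track a scalar feature of the judgement location (e.g. mouth
    width in pixels) across successive raw images; report when its
    variation from the starting situation satisfies the condition."""

    def __init__(self, min_change):
        self.start_value = None   # set once the starting condition is met
        self.min_change = min_change

    def update(self, value):
        if self.start_value is None:
            self.start_value = value   # starting condition: first detection
            return False
        return abs(value - self.start_value) >= self.min_change

det = VariationDetector(min_change=5.0)
signals = [det.update(v) for v in [30.0, 31.2, 33.0, 36.1]]
print(signals)  # [False, False, False, True]
```

The timing signal fires only on the last frame, where the accumulated variation first satisfies the stored condition.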
- FIG. 1 shows a camera of the first embodiment according to the present invention;
- FIG. 2 is a block diagram of the control unit of the first embodiment;
- FIG. 3 is a block diagram of the function of the extractor;
- FIG. 4 is a flowchart showing a method of photographing an image;
- FIG. 5 is a flowchart showing in detail the method of extracting a face part, step 106 in FIG. 4;
- FIG. 6 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4;
- FIG. 7 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4;
- FIG. 8 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4;
- FIG. 9 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4;
- FIG. 10 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4;
- FIG. 11 shows a camera of the second embodiment according to the present invention;
- FIG. 12 is a block diagram of the control unit of the second embodiment;
- FIG. 13 is a block diagram of the control unit of the third embodiment;
- FIG. 14 is a block diagram of the function of the extractor 60;
- FIG. 15 is a block diagram of the function of the photographing condition judging unit;
- FIG. 16 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4;
- FIG. 17 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4;
- FIG. 18 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4;
- FIG. 19 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4;
- FIG. 20 is a block diagram of the control unit of the fourth embodiment;
- FIG. 21 is a block diagram of the control unit of the fifth embodiment;
- FIG. 22 is a flowchart showing a method of photographing an image; and
- FIG. 23 shows a camera of the sixth embodiment.
- FIG. 1 shows a camera 10 of the first embodiment according to the present invention.
- the camera 10 continuously photographs raw images of a subject and determines the timing for photographing a refined image based on the previously photographed raw images.
- the camera 10 photographs a refined image of the subject in accordance with the timing signal. Therefore, the timing for photographing a refined image may be automatically determined by the camera 10 .
- the camera 10 includes an input unit 20 , an A/D converter 30 , a memory 40 , a control unit 50 , a release button 52 , an alarm 54 , a recording unit 90 and an output unit 92 .
- the camera 10 of this embodiment further includes an illuminator 53 .
- the camera 10 may be, for example, a digital still camera or a digital video camera that can photograph a still image.
- the input unit 20 includes a parallactic image data input unit 22 and a normal image data input unit 24 .
- the parallactic image data input unit 22 inputs parallactic images which are photographed from different viewpoints.
- the parallactic image data input unit 22 has a parallactic lens 32 , a parallactic shutter 34 , and a parallactic charge coupled device (CCD) 36 .
- the parallactic lens 32 forms an image of a subject.
- the parallactic shutter 34 has a plurality of shutter units, each of which serves as a viewpoint.
- the parallactic shutter 34 opens one of the plurality of shutter units.
- the parallactic CCD 36 receives the image of the subject through the parallactic lens 32 and whichever of the shutter units of the parallactic shutter 34 is open.
- the parallactic CCD 36 also receives another image of the subject through the parallactic lens 32 and another of the shutter units of the parallactic shutter 34, which is open at that time.
- the images received through the parallactic lens 32 and the parallactic shutter 34 form a parallactic image.
- the parallactic CCD 36 receives the parallactic image of the subject formed by the parallactic lens 32 and converts it to electronic signals.
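The parallactic image yields the depth information used later by the extractor through the standard stereo relation: a point's apparent shift between the two viewpoints is inversely proportional to its distance. The sketch below is an editorial illustration; the focal length, baseline, and disparity values are hypothetical.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo relation: a nearby subject shifts more between
    the two shutter-unit viewpoints than the background does, so a
    large disparity means a small distance."""
    if disparity_px <= 0:
        return float('inf')   # no measurable shift: treat as background
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 5 cm between shutter units, 20 px disparity
print(depth_from_disparity(800, 0.05, 20))  # 2.0 (metres)
```

Computing this per image region gives the depth map on which the extracting condition can be based.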
- the normal image data input unit 24 inputs a normal image photographed from a single viewpoint.
- the normal image data input unit 24 has a lens 25 , a lens stop 26 , a shutter 27 , a color filter 28 and a charge coupled device (CCD) 29 .
- the lens 25 forms an image of a subject.
- the lens stop 26 adjusts an aperture condition.
- the shutter 27 adjusts exposure time.
- the color filter 28 separates RGB components of the light received through the lens 25 .
- the CCD 29 receives the image of the subject formed by the lens 25 and converts it to electric signals.
- the A/D converter 30 receives analog signals from the parallactic image data input unit 22 and the normal image data input unit 24 .
- the A/D converter 30 converts the received analog signals to digital signals and outputs the digital signals to the memory 40 .
- the memory 40 stores the input digital signals. This means that the memory 40 stores the data of the parallactic image of the subject photographed by the parallactic image data input unit 22 and the data of the normal image of the subject photographed by the normal image data input unit 24.
- the control unit 50 outputs a timing signal for starting photographing of an image of a subject when the subject satisfies a predetermined photographing condition.
- the timing signal is input to the input unit 20 .
- the camera 10 then starts the photographing operation based on the timing signal, to obtain a refined image of the subject.
- the control unit 50 processes the photographed refined image and outputs the processed image.
- the control unit 50 controls at least one of the following conditions: focus condition of the lens 25 , aperture condition of the lens stop 26 , exposure time of the shutter 27 , output signal of the CCD 29 , condition of the parallactic shutter 34 , and output signal of the parallactic CCD 36 .
- the control unit 50 also controls the illuminator 53 .
- the release button 52 outputs to the control unit 50 a signal for starting the photographing operation. This means that when a user of the camera 10 pushes the release button 52, the signal is output to the control unit 50.
- the control unit 50 then controls the input unit 20 for photographing an image of the subject.
- the camera 10 is capable of automatically photographing a refined image of the subject by determining the best timing for photographing the refined image.
- the camera 10 is also capable of photographing the image at a desirable timing for the user of the camera 10 , when he/she pushes the release button 52 .
- the camera 10 may have a switch, not shown in the drawings, for selecting an automatic photographing mode in which the best timing for photographing the image is automatically determined, and a manual photographing mode in which the user of the camera 10 determines the desirable timing.
- the alarm 54 outputs an alarm signal upon receiving the timing signal from the control unit 50 .
- the alarm 54 may be, for example, an alarm generator or a light-emitting diode.
- the user of the camera 10 can know the best timing determined by the camera 10 for photographing a refined image of the subject.
- the recording unit 90 records the image output from the control unit 50 on a recording medium.
- the recording medium may be, for example, a magnetic recording medium such as a floppy disk, or a nonvolatile memory such as a flash memory.
- the output unit 92 outputs the image recorded on the recording medium.
- the output unit 92 may be, for example, a printer or a monitor.
- the output unit 92 may be a small liquid crystal display (LCD) of the camera 10 . In this case, the user can see the image processed by the control unit 50 immediately after photographing the image.
- the output unit 92 may be an external monitor connected to the camera 10 .
- FIG. 2 is a block diagram of the control unit 50 according to the first embodiment.
- the control unit 50 includes an image pickup control unit 56 , an image forming control unit 58 , an extractor 60 , a condition-storing unit 70 , a timing signal generator 80 , an input condition determining unit 82 , and an image processing unit 84 .
- the extractor 60 receives a parallactic image photographed by the parallactic image data input unit 22 and a raw image photographed by the image data input unit 24 , from the memory 40 .
- the extractor 60 extracts an aimed object from the raw image based on the information obtained from the parallactic image and the raw image.
- the information includes image information of the raw image and depth information of the parallactic image.
- the aimed object defined here is an independent object at which a photographer aims when photographing.
- the aimed object may be, for example, a person in a room when the person and the objects in the room are photographed, a fish in an aquarium when the fish and the aquarium are photographed, or a bird stopping on a branch of a tree when the bird and the tree are photographed.
- the extractor 60 detects a judgement location from the aimed object based on the information obtained from the parallactic images and the raw images.
- the judgement location defined here is a location to which specific attention is paid when selecting a desirable image.
- the judgement location may be, for example, an eye of a person when the person is photographed, or a wing of a bird when the bird is photographed.
- the aimed object may be an area including the judgement location, extracted for a certain purpose.
- the information for the judgement location is output to the timing signal generator 80 , the input-condition-determining unit 82 and the image-processing unit 84 .
- the condition-storing unit 70 stores predetermined conditions related to a judgement location which should be included in a raw image obtained by photographing a subject.
- the best timing for photographing a refined image of the subject in this embodiment is when the aimed object in the image is in good condition. This means that a judgement location included in the aimed object satisfies the predetermined conditions stored in the condition-storing unit 70 .
- the condition-storing unit 70 may store a plurality of photographing conditions.
- the condition-storing unit 70 may include a condition-setting unit, not shown in the drawings, by which a user can select at least one of the photographing conditions from among a plurality of photographing conditions.
- the timing signal generator 80 outputs a timing signal for photographing an image.
- the timing signal generator 80 outputs the timing signal when the judgement location detected by the extractor 60 satisfies the predetermined photographing condition stored in the storing unit 70 .
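The interaction between the condition-storing unit 70 and the timing signal generator 80 might be sketched as below. The predicate names, feature keys, and thresholds are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical stored photographing conditions, each a predicate over
# the judgement-location data produced by the extractor.
CONDITIONS = {
    "not_blinking": lambda loc: loc["white_area"] > 40,
    "looking_at_camera": lambda loc: abs(loc["gaze_deg"]) < 10,
}

def timing_signal(loc, selected):
    """Fire only when every condition selected via the
    condition-setting unit is satisfied."""
    return all(CONDITIONS[name](loc) for name in selected)

loc = {"white_area": 55, "gaze_deg": 4}
print(timing_signal(loc, ["not_blinking", "looking_at_camera"]))  # True
```

Storing conditions as independent predicates is one way a condition-setting unit could let the user enable any subset of them before photographing.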
- the input-condition-determining unit 82 determines an input condition for inputting a refined image, based on the information for the aimed object or the judgement location received from the extractor 60 .
- the input condition is output to the image-forming control unit 58 .
- the input condition may be, for example, a focus condition of the lens 25 such that the aimed object including the judgement location is focused.
- the camera 10 can photograph a refined image in which the subject is in good condition.
- the image-forming control unit 58 controls the input unit 20 to form a refined image of the subject based on the input condition determined by the input-condition-determining unit 82 .
- the image-pickup control unit 56 controls the input unit 20 to photograph a refined image of the subject based on the input condition determined by the input-condition-determining unit 82. This means that the image-pickup control unit 56 controls at least one of the conditions including the output signal of the CCD 29 and the output signal of the parallactic CCD 36, based on the input condition.
- the output signal of the CCD 29 determines the gradation characteristics based on a gamma (γ) curve and sensitivity.
- the image-pickup control unit 56 controls the input unit 20 , to photograph a refined image based on the timing signal output from the timing signal generator 80 .
- the image-pickup control unit 56 controls the image-processing unit 84 to process the refined image.
- the image-pickup control unit 56 may control the illuminator 53 , for flashing a light preceding or at the same time as photographing a refined image by the input unit 20 .
- the image-pickup control unit 56 also controls the image-processing unit 84 , to process the input refined image.
- the image-processing unit 84 receives the refined image photographed by the image data input unit 24 from the memory 40 .
- the image-processing unit 84 then processes the refined image based on the information for the aimed object or the judgement location extracted from the extractor 60 .
- the process condition for processing a normal image may relate to compression of the image.
- the process condition in this case is determined based on the data for the aimed object.
- the image-processing unit 84 separately determines the compressing condition of the image for the aimed object and for the components other than the aimed object so that the quality of the aimed object does not deteriorate, even though the data size of the image itself is compressed.
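A minimal sketch of such region-dependent compression, assuming a simple per-pixel quantisation in place of an actual codec: aimed-object pixels are quantised finely, everything else coarsely. The function name, mask format, and step sizes are hypothetical.

```python
def compress_row(pixels, object_mask, q_object=2, q_background=16):
    """Quantise aimed-object pixels with a small step and background
    pixels with a large one, so the subject keeps its quality while
    the image data as a whole shrinks."""
    return [(p // (q_object if m else q_background))
            * (q_object if m else q_background)
            for p, m in zip(pixels, object_mask)]

row = [200, 201, 130, 131]
mask = [True, True, False, False]   # first two pixels belong to the face
print(compress_row(row, mask))  # [200, 200, 128, 128]
```

The coarsely quantised background compresses far better downstream, while the face region loses almost no detail.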
- the image processing unit 84 may separately determine the color compressing condition for the aimed object and the components other than the aimed object.
- the process condition for processing a normal image may relate to color of the image.
- the process condition in this case is determined based on the depth information.
- the image-processing unit 84 may, for example, separately determine the color condition for the aimed object and the components other than the aimed object, so that all the components have optimum gradation.
- the image-processing unit 84 may determine a processing condition in which the aimed object in the image is magnified and the magnified aimed object is composited with a background image.
- the background image may be the components included in the original image other than the aimed object, or an image previously selected by the user of the camera 10 .
- the image-processing unit 84 may then composite the data for the aimed object and the data for the components other than the aimed object to form a composite image.
- the extractor 60 extracts the data for the aimed object and the judgement location from the image, and the aimed object and the judgement location can be processed separately from the components other than these parts.
- the best timing for photographing a refined image means that the targeted person has a good appearance.
- the good appearance of the person may be when, for example, “the person is not blinking”, “the person's eyes are not red-eyed”, “the person is looking at the camera”, or “the person is smiling”.
- the condition-storing unit 70 stores these conditions as the photographing conditions.
- the condition-storing unit 70 may set a photographing condition by selecting at least one of the photographing conditions stored therein.
- the condition-storing unit 70 stores conditions such as “the person is not blinking”, “the person's eyes are not red-eyed”, “the person is looking at the camera”, and “the person is smiling” as the photographing conditions. These photographing conditions relate to the face of the person, and more specifically to the eyes or mouth of the person. Therefore, it is assumed in this embodiment that the aimed object is the face area of the person and the judgement location is the eyes or mouth of the person.
- Each of the photographing conditions has a reference situation for the judgement location, which should meet the requirements of the photographing condition.
- the condition-storing unit 70 also stores the reference situations for the judgement location, each respectively corresponding to each of the photographing conditions.
- the reference situations for the judgement location corresponding to each of the photographing conditions will be described in the following.
- the reference situation may relate to the shape of the eye, color of the eye, and size of the eye.
- the reference situation may also relate to the size of the eye, as well as shape of the mouth, and size of the mouth. Whether each of the judgement locations satisfies each of these reference situations or not is judged in accordance with predetermined algorithms based on experience.
- the judgement location may be the eye of the person.
- the reference situation for the eye in this photographing condition will be determined as follows. When a person blinks, his/her eyelid hides his/her eyeball. While he/she is blinking and his/her eye is partially closed, the white part of his/her eyeball is especially hidden by his/her eyelid. This means that when the person is not blinking, the white part of his/her eyeball should be relatively large. Therefore, the reference situation for the photographing condition “the person is not blinking” becomes “the white part of his/her eyeball has a large dimension”.
- the judgement location may be the eyes of the person.
- the reference situation for the eyes in this photographing condition will be determined as follows. A person's eyes usually appear red-eyed when the person is photographed using a flash in a dark situation. This happens because the person's pupils cannot contract quickly enough to compensate for the sudden brightness, so the pupil inside each iris appears red while the rest of the iris does not. Typically, people of Asian descent have brown or dark brown colored irises, and people of European descent have green or blue colored irises. Therefore, the reference situation for the photographing condition “the person's eyes are not red-eyed” becomes “the red part in his/her iris has a small dimension”.
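The red-eye reference situation above amounts to measuring the fraction of reddish pixels inside the detected iris area. A minimal sketch of such a check follows; the function name, the RGB dominance rule, and the 20% threshold are illustrative assumptions, not values given in the patent.

```python
def is_red_eyed(iris_pixels, red_ratio_threshold=0.2):
    """Judge red-eye by the fraction of reddish pixels in the iris region.

    iris_pixels: list of (r, g, b) tuples sampled from the detected iris.
    A pixel counts as "red" when its red channel clearly dominates.
    """
    if not iris_pixels:
        return False
    red = sum(1 for (r, g, b) in iris_pixels
              if r > 150 and r > 1.5 * g and r > 1.5 * b)
    # "the red part in his/her iris has a small dimension" holds
    # when this ratio stays below the threshold
    return red / len(iris_pixels) > red_ratio_threshold
```

The condition "the person's eyes are not red-eyed" is then satisfied when `is_red_eyed` returns `False` for the detected iris.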
- the judgement location may be the eye of the person.
- the reference situation for the eye in this photographing condition will be determined as follows. When a person is looking at the camera, the line between the camera and the iris of the person is almost parallel to the normal vector of his/her iris. Therefore, the reference situation for the photographing condition “the person is looking at the camera” becomes “the normal vector of the iris in his/her eye is approximately aligned with the line between the camera and his/her iris”.
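The alignment test above can be sketched as an angle comparison between the iris normal and the direction from the iris back to the camera. This is an illustrative geometric sketch, assuming 3D vectors for both directions and a hypothetical 10-degree tolerance; none of these names or values appear in the patent.

```python
import math

def is_looking_at_camera(iris_normal, camera_to_iris, max_angle_deg=10.0):
    """True when the iris normal points back at the camera within a tolerance.

    iris_normal: 3D normal vector of the iris surface.
    camera_to_iris: 3D vector from the camera to the iris.
    """
    def norm(v):
        return math.sqrt(sum(c * c for c in v))
    # direction from the iris back toward the camera
    to_camera = tuple(-c for c in camera_to_iris)
    dot = sum(a * b for a, b in zip(iris_normal, to_camera))
    cos_a = dot / (norm(iris_normal) * norm(to_camera))
    cos_a = max(-1.0, min(1.0, cos_a))  # guard against rounding error
    return math.degrees(math.acos(cos_a)) <= max_angle_deg
```

A normal of `(0, 0, -1)` against a camera-to-iris line of `(0, 0, 1)` is perfectly aligned, so the check passes.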
- the judgement location may be the eyes and the mouth of the person.
- the reference situation for the eyes and the mouth in this photographing condition will be determined as follows.
- when a person is smiling, his/her eyes become relatively thin, although the degree varies from person to person.
- his/her mouth expands sideways and his/her teeth are shown. Therefore, the reference situations for the photographing condition “the person is smiling” become “the white part in his/her eyes has a small dimension”, “the width of his/her mouth is wide”, and “the white area in his/her mouth has a large dimension”.
- FIG. 3 is a block diagram of the function of the extractor 60 .
- the extractor 60 includes a depth information extractor 62 , an image information extractor 64 , an aimed object extractor 66 and a judgement location detector 68 .
- the depth information extractor 62 extracts the depth information indicating the distance to each of components of the subject, based on the data for the parallactic image received from the memory 40 . This means that the depth information extractor 62 determines a corresponding point for each of the components based on the parallactic image and gives a parallax amount. The depth information extractor 62 extracts the depth information based on the parallax amount of each of the components. Determining the corresponding point is a known technique, thus the explanation of this technique will be omitted. Extracting the depth information based on the parallax amount is also a known technique using the principle of triangulation, thus the explanation of this technique will be omitted.
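The triangulation step referred to above follows the standard stereo relation: depth equals focal length times baseline divided by disparity. A minimal sketch, assuming a pinhole stereo model with the focal length in pixels and the baseline in millimetres (unit choices are ours, not the patent's):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Stereo triangulation: depth = f * B / d.

    disparity_px: parallax amount between corresponding points (pixels).
    focal_length_px: camera focal length expressed in pixels.
    baseline_mm: distance between the two viewpoints (millimetres).
    Returns the depth in millimetres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px
```

Nearer components produce larger disparities and therefore smaller depths, which is what lets the extractor separate components of the subject by distance.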
- the image information extractor 64 extracts the image information for normal images, from the data for the normal images received from the memory 40 .
- the image information includes, for example, data for the normal image such as luminance distribution, intensity distribution, color distribution, texture distribution, and motion distribution.
- the aimed object extractor 66 extracts data for the face area of the person as the aimed object, based on the depth information and the image information.
- Each of the images may include, for example, a plurality of components.
- the aimed object extractor 66 recognizes each of the components based on the depth information.
- the aimed object extractor 66 then specifies the face area by referring to the depth information and the image information of each of the components. The method of specifying the face area will be described in the following.
- the aimed object extractor 66 receives the photographing condition from the condition-storing unit 70 .
- the aimed object extractor 66 extracts the aimed object based on the photographing condition.
- the aimed object is the face of the photographed person. Therefore, at first, the component including the face is specified depending on assumptions such as “the person should be close to the camera”, “the person should be in the middle of the image”, or “the proportional relationship of the height of the person to the width and height of the image should be within a predetermined range”.
- the distance from the camera to each of the components in the image is evaluated based on the depth information.
- the distance from the center of the image to each of the components in the image, and the proportional relationship of the height of the components are evaluated based on the image information.
- Each of the values is multiplied by predetermined constants corresponding to each condition.
- the multiplied values are added for each of the components.
- the added values are defined as weighted averages.
- the component having the largest weighted average is extracted as the component including the aimed object.
- the constants by which the values for each of the components are multiplied may be predetermined based on the aimed object.
- the aimed object is assumed to be the face of the photographed person. Therefore, the aimed object extractor 66 specifies the area having a skin color as the face part, based on the image information.
- the colors of each of the components are evaluated based on the color distribution of the images.
- the values of the color distribution may also be multiplied by predetermined constants and the multiplied values are added for each of the components to give the weighted averages.
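The weighted-average selection described above can be sketched as a simple linear scoring of each depth-segmented component. The dictionary keys, the normalized scores, and the equal default weights below are illustrative assumptions; the patent only states that predetermined constants are multiplied in and the products summed.

```python
def pick_face_component(components, weights=(1.0, 1.0, 1.0, 1.0)):
    """Return the index of the component with the largest weighted average.

    components: list of dicts scoring each component on
      'closeness'  - derived from the depth information,
      'centrality' - distance from the image center (image information),
      'size'       - proportional height relative to the image,
      'skin_color' - match against a skin-color model.
    """
    keys = ("closeness", "centrality", "size", "skin_color")
    scores = [sum(w * c[k] for w, k in zip(weights, keys))
              for c in components]
    return scores.index(max(scores))
```

With two candidate components, the one that is closer, more central, larger, and more skin-colored wins the weighted vote and is extracted as the aimed object.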
- the aimed object extractor 66 extracts an aimed object based on the depth information in addition to the image information. Therefore, even when a plurality of people are photographed in the image and their faces are close to each other, the faces of the different people can be distinctly extracted.
- the judgement location detector 68 detects the judgement location from the data for the face area extracted by the aimed object extractor 66 .
- the judgement location detector 68 receives the photographing condition from the condition-storing unit 70 .
- the judgement location detector 68 detects the judgement location based on the photographing condition.
- the judgement location is the eyes or mouth of the photographed person. Therefore, the judgement location detector 68 detects the eyes and mouth from the face area.
- the extractor 60 detects the judgement location from the extracted aimed object based on the image information for the aimed object. Therefore, the extractor 60 does not extract locations having similar shapes to the judgement location from the subject other than the aimed object included in the image.
- the judgement location detector 68 then outputs the data for the detected judgement locations to the timing signal generator 80 .
- the timing signal generator 80 receives the data for the detected judgement locations from the extractor 60 .
- the timing signal generator 80 also receives the photographing condition from the condition-storing unit 70 .
- the timing signal generator 80 compares each of the judgement locations based on the reference situation for the photographing condition.
- the timing signal generator 80 then generates a timing signal when the judgement location satisfies the reference situation for the photographing condition.
- the timing signal generator 80 calculates the dimension of the white part of the eye detected by the judgement location detector 68 for each of the images, based on the image information.
- the timing signal generator 80 generates a timing signal when the dimension of the white part of the eye has a larger dimension than a predetermined dimension.
- the width of the eye remains substantially the same, whether the person's eye is open or closed. Therefore, the predetermined dimension may be determined relative to the width of the eye. People usually blink both eyes at the same time; therefore, the timing signal generator 80 may check only one of the eyes of the photographed person. However, by checking both eyes, the desired judgement location can be selected more precisely.
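The eye-open test above reduces to comparing the white-of-eye area against a threshold scaled by the (roughly constant) eye width. A minimal sketch, in which the function name, the scale factor, and the both-eyes policy are our assumptions:

```python
def eye_is_open(white_area_px, eye_width_px, ratio=2.0):
    """True when the white part of the eye is large relative to eye width."""
    return white_area_px > ratio * eye_width_px

def not_blinking(left_eye, right_eye, ratio=2.0, check_both=True):
    """left_eye/right_eye: (white_area_px, eye_width_px) tuples.

    Checking both eyes is stricter and mirrors the patent's note that
    doing so selects the judgement location more precisely.
    """
    left_open = eye_is_open(*left_eye, ratio=ratio)
    right_open = eye_is_open(*right_eye, ratio=ratio)
    return (left_open and right_open) if check_both else left_open
```

The timing signal for "the person is not blinking" would fire only while this predicate holds.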
- the timing signal generator 80 calculates the dimension of the red part in the iris of the eye detected by the judgement location detector 68 for the image, based on the image information.
- the iris of his/her eye is recognized as being a circular or elliptic area whose circumference has a brownish or blue/green color.
- the timing signal generator 80 generates a timing signal when the red part of the eye has a smaller dimension than a predetermined dimension. Both eyes of a person are usually red-eyed at the same time; therefore, the timing signal generator 80 may check only one of the eyes of the photographed person. However, by checking both of his/her eyes, the desired judgement location can be selected more precisely.
- the timing signal generator 80 recognizes the iris as being a circular or elliptic area whose circumference has a brownish or blue/green color. The timing signal generator 80 then recognizes the center of the iris and the normal vector at the center of the iris. The timing signal generator 80 generates a timing signal when the angle between the normal vector of the iris and the line between the camera and the iris is smaller than a predetermined angle.
- the normal vector of the iris can be obtained from the relative position of the camera and the face of the person, the relative position of the face and the eyes of the person, and the relative position of the eyes and the irises of the person.
- the timing signal generator 80 may judge the desired judgement location based on the normal vector obtained from these relative positions.
- the timing signal generator 80 calculates the dimension of the white part of the eye, the width of the mouth, and the dimension of the white part of the mouth detected by the judgement location detector 68 for each of the images, based on the image information.
- the timing signal generator 80 generates a timing signal when the white part of the eye has a smaller dimension than a predetermined dimension, when the mouth has a wider width than a predetermined width, or when the white part of the mouth has a larger dimension than a predetermined dimension.
- the predetermined dimension for the white part of the eye is relatively determined with respect to the width of the eye.
- the predetermined width for the mouth is relatively determined with respect to the width of the face of the person.
- the predetermined dimension for the white part of the mouth is relatively determined with respect to the dimension of the face of the person.
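The three smile criteria above are each relative thresholds: eye-white area against eye width, mouth width against face width, and mouth-white area against face area. A sketch combining them follows; every numeric threshold is a hypothetical placeholder, since the patent only says the dimensions are "predetermined" relative to those references.

```python
def is_smiling(eye_white_area, eye_width,
               mouth_width, face_width,
               mouth_white_area, face_area):
    """Apply the three reference situations for "the person is smiling"."""
    thin_eyes = eye_white_area < 1.5 * eye_width       # eyes relatively thin
    wide_mouth = mouth_width > 0.45 * face_width       # mouth expanded sideways
    teeth_shown = mouth_white_area > 0.01 * face_area  # teeth visible
    # the patent lists the criteria disjunctively, so any one suffices here
    return thin_eyes or wide_mouth or teeth_shown
```

A stricter reading could require all three criteria with `and`; the disjunctive form follows the wording of the timing-signal description.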
- the timing signal generator 80 outputs a timing signal when the judgement location satisfies the above reference situations.
- the control unit 50 extracts the face part based on the raw image and the information for the raw image.
- the control unit 50 detects the judgement location from the data for the extracted face part.
- the camera 10 can automatically photograph a desirable refined image without bothering the photographer.
- the extractor 60 extracts the aimed object and detects the judgement locations for each of the people. This means that the aimed object extractor 66 extracts the face parts for each of the people from each of the images.
- the judgement location extractor 68 detects the eyes or the mouth for each of the people from each of the images.
- the timing signal generator 80 compares each of the judgement locations for each of the people based on the reference situation for the photographing condition.
- the timing signal generator 80 may generate a timing signal when the judgement locations for many of the people satisfy the reference situation for the photographing condition.
- the timing signal generator 80 may output the timing signal when the ratio of the judgement locations satisfying the photographing condition against all of the plurality of the judgement locations exceeds a predetermined ratio. In this case, the camera 10 can photograph a refined image in which many of the people have a good appearance.
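The group-photo rule above can be sketched as a ratio test over per-person judgements; the 80% default is an assumed placeholder for the patent's "predetermined ratio".

```python
def group_timing_signal(per_person_ok, min_ratio=0.8):
    """Fire when enough of the group satisfies the photographing condition.

    per_person_ok: list of booleans, one per detected person, each True
    when that person's judgement location meets the reference situation.
    """
    if not per_person_ok:
        return False
    return sum(per_person_ok) / len(per_person_ok) >= min_ratio
```

With four people of whom three look good, a 70% threshold fires while an 80% threshold does not, so the choice of ratio trades shutter delay against group quality.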
- FIG. 4 is a flowchart showing a method of photographing an image.
- the camera 10 starts photographing the subject when the release button 52 is pushed (S 100 ).
- data for a parallactic image is input from the parallactic image data input unit 22 (S 102 ).
- data for raw images are continuously input from the image data input unit 24 (S 104 ).
- the aimed object extractor 66 extracts the face part of the targeted person as the aimed object (S 106 ) .
- the judgement location detector 68 detects the judgement location based on the image information for the face part (S 108 ).
- the timing signal generator 80 generates and outputs a timing signal when the judgement location satisfies a predetermined photographing condition (S 110 ). Upon receiving the timing signal, the image pickup control unit 56 controls the input unit 20 to photograph a refined image (S 112 ).
- the image-processing unit 84 processes the refined image, for example, compositing images and the like (S 114 ).
- the recording unit 90 records the processed image on a recording medium (S 116 ).
- the output unit 92 outputs the recorded image (S 118 ).
- the photographing operation is terminated (S 120 ).
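The S 100 to S 120 flow above can be sketched as a single scan over raw frames, capturing the refined image at the first frame whose judgement location satisfies the condition. The callables are injected stand-ins for the camera's units (extractor, detector, timing signal generator, pickup control); their names are ours.

```python
def photograph_refined_image(frames, extract_face, detect_location,
                             satisfies_condition, capture):
    """Sketch of the main photographing loop.

    frames: iterable of raw images (S 104).
    extract_face(frame) -> face part or None          (S 106)
    detect_location(face) -> judgement location       (S 108)
    satisfies_condition(location) -> bool             (S 110, timing signal)
    capture() -> refined image                        (S 112)
    Returns the refined image, or None if no frame qualified.
    """
    for frame in frames:
        face = extract_face(frame)
        if face is None:
            continue  # no aimed object in this raw image
        location = detect_location(face)
        if satisfies_condition(location):
            return capture()
    return None
```

Recording (S 116) and output (S 118) would then operate on the returned refined image.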
- FIG. 5 is a flowchart showing in detail the method of extracting a face part, step 106 in FIG. 4.
- the depth information extractor 62 extracts the depth information based on the parallactic image (S 130 ).
- the image information extractor 64 extracts the image information based on the raw image (Sl 32 ).
- the aimed object extractor 66 extracts the face part of the targeted person based on the depth information and the image information (S 134 ).
- the aimed object extractor 66 extracts the face parts for all of the people from each of the images (S 136 ).
- FIG. 6 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4.
- the judgement location detector 68 detects the judgement location based on the image information for the face part (S 150 ). When each of the images includes a plurality of people, the judgement location detector 68 detects the judgement locations for all of the people (S 152 and S 150 ). Then, the input-condition-determining unit 82 determines the input condition based on the image information for the judgement location (S 154 ).
- FIG. 7 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4.
- the timing signal generator 80 judges whether the judgement location detected by the judgement location detector 68 satisfies the photographing condition or not (S 160 ).
- the timing signal generator 80 continues judging whether the judgement location satisfies the photographing condition or not for a predetermined period (S 164 and S 160 ).
- the timing signal generator 80 generates a timing signal when the judgement location satisfies the photographing condition (S 162 ).
- the image pickup control unit 56 controls the input unit 20 to stop photographing raw images when the judgement location does not satisfy the predetermined photographing condition for a predetermined period (S 164 and S 166 ).
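The S 160 to S 166 loop above judges each incoming raw frame until either the condition holds or a predetermined period expires. A minimal frame-count sketch (counting frames rather than wall-clock time is our simplification):

```python
def generate_timing_signal(judgements, max_frames=30):
    """Return the index of the frame that triggers the timing signal.

    judgements: per-frame booleans, True when the judgement location
    satisfies the photographing condition (S 160).
    max_frames: the "predetermined period", expressed as a frame count.
    Returns None when the period expires without a match, signalling
    that raw-image photographing should stop (S 164 / S 166).
    """
    for i, ok in enumerate(judgements):
        if i >= max_frames:
            break          # predetermined period exhausted
        if ok:
            return i       # S 162: generate the timing signal
    return None
```

In the alarm variant of FIG. 10, a `None` result would additionally trigger the alarm sound or light before photographing stops.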
- FIG. 8 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4.
- the image pickup control unit 56 controls the input unit 20 to automatically photograph a refined image based on the timing signal output at the step 110 in FIG. 4 (S 170).
- the input unit 20 inputs the data for the refined image (S 172 ).
- alternatively, the camera 10 may not photograph a refined image automatically; instead, upon receiving the alarm signal from the alarm 54 , the user of the camera 10 may push the release button 52 to photograph the refined image.
- FIG. 9 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4.
- the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light based on the timing signal generated at the step 110 (Sl 90 ).
- an alarm signal such as an alarm sound or an alarm light based on the timing signal generated at the step 110 (Sl 90 ).
- the release button 52 S 192
- the camera 10 photographs a refined image (S 194 )
- the alarm 54 outputs the alarm sound or the alarm light based on the timing signal
- the user can photograph a refined image at an optimum timing, without having to judge the timing himself. Furthermore, the targeted person can also notice the timing by the alarm sound or the alarm light.
- the alarm 54 may output an alarm signal such as an alarm sound or an alarm light when the timing signal is not output from the timing signal generator for a predetermined period.
- FIG. 10 is a flowchart showing in detail the method of generating a timing signal in which the alarm 54 outputs the alarm signal, step 110 in FIG. 4.
- the timing signal generator 80 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the photographing condition (S 180 ).
- the timing signal generator 80 continues judging whether or not the judgement location satisfies the photographing condition for a predetermined period (S 184 and S 180 ).
- the timing signal generator 80 generates a timing signal when the judgement location satisfies the photographing condition (S 182 ).
- the alarm 54 outputs an alarm signal such as the alarm sound and the alarm light when the timing signal generator 80 does not output the timing signal for a predetermined period (S 184 and S 186 ).
- the image pickup control unit 56 controls the input unit 20 to stop photographing raw images at this time (S 188 ).
- because the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light when the timing signal is not output within a predetermined period, the photographer and the targeted person become aware, by the sound or the light, that the targeted person does not meet the photographing condition.
- FIG. 11 shows a camera 110 of the second embodiment according to the present invention.
- the camera 110 continuously photographs raw images of a subject.
- the camera 110 then photographs a refined image of the subject, in accordance with a predetermined input condition, at the timing when one of the previously photographed raw images satisfies a predetermined photographing condition.
- the camera 110 in this embodiment is a silver halide type camera by which an image of a subject is formed on a silver halide film.
- the camera 110 includes an input unit 120 , an A/D converter 30 , a memory 40 , a control unit 150 , a release button 52 and an alarm 54 .
- the A/D converter 30 , the memory 40 , the release button 52 and the alarm 54 in this embodiment have the same structures and functions as those explained in the first embodiment. Therefore, the explanation of these parts will be omitted.
- the input unit 120 includes a parallactic image data input unit 122 , a raw image data input unit 124 and a refined image data input unit 130 .
- the parallactic image data input unit 122 and the raw image data input unit 124 in this embodiment respectively have the same structures and functions as the parallactic image data input unit 22 and the image data input unit 24 explained in the first embodiment.
- the refined image data input unit 130 includes a lens 132 , a lens stop 134 , a shutter 136 and a photographing unit 138 .
- the lens 132 , the lens stop 134 and the shutter 136 in this embodiment respectively have the same structures and functions as the lens 25 , the lens stop 26 and the shutter 27 shown in FIG. 1 of the first embodiment.
- the photographing unit 138 receives an optical image of a subject and forms an image of the subject on a silver halide film.
- the image data input unit 24 of the first embodiment inputs both a raw image and a refined image.
- the raw image data input unit 124 inputs an electronic raw image and the refined image data input unit 130 inputs a refined image and forms the refined image on a film.
- the raw image data input unit 124 has a CCD for receiving the image of the subject in the same way as the data input unit 24 of the first embodiment.
- the raw image data input unit 124 outputs electronic signals for the image converted by the CCD.
- FIG. 12 is a block diagram of the control unit 150 according to the second embodiment.
- the control unit 150 includes an image pickup control unit 56 , an image forming control unit 58 , an extractor 60 , a condition-storing unit 70 , a timing signal generator 80 and an input-condition-determining unit 82 .
- the extractor 60 , the condition-storing unit 70 , the timing signal generator 80 and the input-condition-determining unit 82 in this embodiment respectively have the same structures and functions as those of the first embodiment, thus the explanation of these parts will be omitted.
- the image-forming control unit 58 controls the input unit 120 to form an image of a subject.
- the image forming control unit 58 controls at least one of the following conditions of the input unit 120 : focus condition of the lens 132 , aperture condition of the lens stop 134 and exposure time of the shutter 136 , based on the input condition determined by the input-condition-determining unit 82 .
- the image-pickup control unit 56 controls the input unit 120 to photograph an image of a subject.
- the image-pickup control unit 56 also controls the photographing unit 138 to photograph a refined image, based on the input condition.
- the camera 110 includes the raw image data input unit 124 for inputting an electronic raw image in addition to the refined image data input unit 130 for inputting a refined image. Therefore, the camera can automatically set an optimum condition for photographing a refined image of the subject. Thus, the desired refined image can be obtained without photographing a plurality of images using silver halide films, which can be expensive.
- a camera of the third embodiment according to the present invention will be explained in the following.
- the camera of this embodiment has the same structure as that of the first embodiment explained with reference to FIG. 1.
- the camera of the third embodiment continuously photographs raw images of a subject.
- the camera then photographs a refined image, in accordance with a predetermined input condition, at the timing when the previously photographed raw image satisfies a predetermined photographing condition.
- the camera 110 may have a switch, not shown in the drawings, for selecting an automatic photographing mode in which the best timing for photographing the image is automatically determined, and a manual photographing mode in which the user of the camera 110 determines the best timing.
- the camera of this embodiment has the same structure as that of the first embodiment and includes an input unit 20 , an A/D converter 30 , a memory 40 , a control unit 50 , a release button 52 , an alarm 54 , a recording unit 90 and an output unit 92 .
- the camera of this embodiment may be, for example, a digital still camera or a digital video camera that can photograph a still image.
- FIG. 13 is a block diagram of the control unit 50 according to the third embodiment.
- the control unit 50 includes an image-pickup control unit 56 , an image-forming control unit 58 , an extractor 60 , a condition-storing unit 70 , a photographing condition judging unit 80 , an input-condition-determining unit 82 , and an image-processing unit 84 .
- the extractor 60 receives a parallactic image photographed by the parallactic image data input unit 22 and a normal image photographed by the image data input unit 24 , from the memory 40 .
- the normal image includes a raw image and a refined image.
- the extractor 60 extracts an aimed object from the normal image based on the information obtained from the parallactic image and the normal image.
- the information includes image information of the normal image and depth information of the parallactic image.
- the extractor 60 outputs data for the aimed object to the input-condition-determining unit 82 and to the image-processing unit 84 .
- the extractor 60 then detects a judgement location from the aimed object based on the information obtained from the parallactic images and the normal images. It is also assumed that the extractor 60 detects shapes or colors of the eyes or the mouth of the targeted person as the judgement location in this embodiment.
- the condition-storing unit 70 stores predetermined photographing conditions related to the judgement location, which should be included in each of the raw images obtained by photographing the subject.
- the condition-storing unit 70 may store a plurality of photographing conditions.
- the condition-storing unit 70 may include a condition-setting unit, not shown in the drawings, by which a user can select at least one of the photographing conditions from among a plurality of photographing conditions.
- the best timing for photographing a refined image may be, for example, the timing when the targeted person does a predetermined motion.
- This means that the best timing may be the timing when the aimed object of the targeted person shows a predetermined variation.
- the predetermined variation may be, for example, “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” or “the person's line of sight follows a predetermined trail”.
- the condition storing unit 70 stores these conditions as the photographing conditions.
- the photographing condition judging unit 80 outputs a timing signal for photographing an image.
- the photographing condition judging unit 80 outputs the timing signal when the judgement location detected by the extractor 60 shows a predetermined motion that satisfies the predetermined photographing condition stored in the storing unit 70 .
- the input-condition-determining unit 82 determines an input condition for inputting an image based on the information for an aimed object or the judgement location received from the extractor 60 .
- the input-condition-determining unit 82 outputs the input condition to the image forming control unit 58 .
- the input condition may be, for example, focus condition of the lens 25 such that the aimed object including the judgement location is focused.
- the camera of this embodiment can photograph a refined image in which the subject is in good condition.
- the image-forming control unit 58 controls the input unit 20 to form a refined image of the subject based on the input condition determined by the input-condition-determining unit 82 . This means that the image-forming control unit 58 controls at least one of the conditions including focus condition of the lens 25 , aperture condition of the lens stop 26 , exposure time of the shutter 27 , and condition of the parallactic shutter 34 , based on the input condition.
- the image pickup control unit 56 controls the input unit 20 , to photograph a refined image of the subject based on the input condition determined by the input-condition-determining unit 82 . This means that the image-pickup control unit 56 controls at least one of the conditions including output signal of the CCD 29 and output signal of the parallactic CCD 36 , based on the input condition.
- the image-pickup control unit 56 controls the input unit 20 , to photograph a refined image based on the timing signal output from the photographing condition judging unit 80 .
- the image-pickup control unit 56 controls the image-processing unit 84 to process the refined image.
- the image-processing unit 84 receives the refined image photographed by the image data input unit 24 from the memory 40 .
- the image-processing unit 84 then processes the refined image based on the information for the aimed object or the judgement location extracted from the extractor 60 .
- the refined image is processed in accordance with the process conditions as explained in the first embodiment.
- FIG. 14 is a functional block diagram of the extractor 60 .
- the extractor 60 includes a depth information extractor 62 , an image information extractor 64 , an aimed object extractor 66 and a judgement location detector 68 .
- the depth information extractor 62 extracts the depth information indicating the distance to each of components of the subject, based on the data of the parallactic image received from the memory 40 .
- the image information extractor 64 extracts the image information for normal images, from the data for the normal images received from the memory 40 .
- the image information includes, for example, data of the normal image such as luminance distribution, intensity distribution, color distribution, texture distribution, and motion distribution.
- the aimed object extractor 66 extracts data for the face area of the person as the aimed object, based on the depth information and the image information.
- the aimed object is extracted in a similar manner as that explained in the first embodiment.
- the aimed object extractor 66 outputs the information for the aimed object to the input-condition-determining unit 82 and the image-processing unit 84 .
- the aimed object extractor 66 extracts an aimed object based on the depth information in addition to the image information. Therefore, even when a plurality of people are photographed in the image and their faces are close to each other, the faces of the different people can be distinctly extracted.
- the judgement location detector 68 detects the judgement location from the data for the aimed object extracted by the aimed object extractor 66 .
- the judgement location is detected in accordance with a detecting condition different from the extracting condition for extracting the aimed object by the aimed object extractor 66 .
- the judgement location is the eyes or mouth of the photographed person. Therefore, the judgement location detector 68 detects the eyes and mouth from the face area.
- the judgement location detector 68 outputs the information for the judgement location to the photographing condition judging unit 80 .
- FIG. 15 is a block diagram of the function of the photographing condition judging unit 80 .
- the photographing condition judging unit 80 includes a detection-starting unit 85 , a variation detector 86 and a judging unit 88 .
- the photographing condition includes a predetermined photographing condition related to the motion of the judgement location of the aimed object, and the starting condition for starting detection of the motion of the judgement location.
- the detection-starting unit 85 outputs a starting signal when the judgement location detected by the extractor 60 satisfies a predetermined starting condition.
- the variation detector 86 starts detecting variation in the motion of the judgement location upon receiving the starting signal from the detection-starting unit 85 .
- the judging unit 88 outputs the timing signal for photographing a refined image when the variation of the motion of the judgement location detected by the variation detector 86 satisfies a predetermined photographing condition.
- the photographing conditions may be, for example, “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” or “the person's line of sight follows a predetermined trail”. To avoid misjudgment, it is desirable that the photographing conditions be motions or variations which the targeted person does not usually perform in front of the camera.
- Each of the photographing conditions has a reference situation for the judgement location, which should meet the requirements of the photographing condition.
- the condition-storing unit 70 also stores the reference situations for the judgement location, each respectively corresponding to each of the photographing conditions.
- the reference situations for the judgement location corresponding to each of the photographing conditions will be described in the following.
- the reference situation may relate to the shape, color, and size of the eye. Whether each of the judgement locations satisfies each of these reference situations is judged in accordance with predetermined algorithms based on experience.
- the judgement location may be the eye of the person.
- the reference situation for the eye in this photographing condition will be determined as follows. When a person blinks, his/her eyelid hides his/her eyeball; while the eye is partially closed, the white part of the eyeball in particular is hidden by the eyelid. This means that the white part of the eyeball is relatively small while the person is blinking and relatively large when he/she is not blinking. Therefore, whether the person's eyes are open or not is determined based on the dimension of the white part of his/her eyeball.
- the starting condition for the photographing condition “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” becomes “the person closes his/her eyes”.
- the detection-starting unit 85 outputs a starting signal when it detects the closed eye of the person.
- the variation detector 86 starts detecting variation of the eye upon receiving the starting signal.
- the variation detector 86 counts the period during which the person keeps his/her eyes closed, based on the data for the continuously input raw images.
- the judging unit 88 outputs the timing signal when the person opens his/her eyes after having had his/her eyes closed for more than two seconds. It is desirable to output the timing signal one second after the person opens his/her eyes rather than at the moment the eyes open.
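For illustration only (this sketch is not part of the disclosure; the `white_ratio` input, the threshold, and the timing values are assumptions), the blink-based photographing condition above can be expressed as a small state machine over the continuously input raw images:

```python
from dataclasses import dataclass
from typing import Optional

OPEN_THRESHOLD = 0.25   # assumed: the eye counts as open above this white-area ratio
REQUIRED_CLOSED = 2.0   # the eyes must stay closed for more than two seconds
TRIGGER_DELAY = 1.0     # emit the signal one second after the eyes reopen

@dataclass
class BlinkConditionJudge:
    closed_since: Optional[float] = None   # when the eyes first closed
    reopened_at: Optional[float] = None    # when the eyes reopened

    def update(self, t: float, white_ratio: float) -> bool:
        """Feed one frame (time in seconds); return True when the timing signal fires."""
        if white_ratio < OPEN_THRESHOLD:
            # Starting condition: "the person closes his/her eyes".
            if self.closed_since is None:
                self.closed_since = t
            self.reopened_at = None
            return False
        # The eye is open in this frame.
        if self.closed_since is not None and t - self.closed_since > REQUIRED_CLOSED:
            if self.reopened_at is None:
                self.reopened_at = t
            if t - self.reopened_at >= TRIGGER_DELAY:
                self.closed_since = self.reopened_at = None
                return True
            return False
        self.closed_since = None
        return False
```

The delayed trigger mirrors the preference stated above for firing one second after the eyes reopen rather than at the instant of reopening.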
- the judgement location may be the eye of the person.
- the reference situation for the eye in this photographing condition will be determined as follows.
- the trail of the person's line of sight can be detected by detecting the normal vector of the iris of the eye.
- the iris of his/her eye is recognized, from his/her eye detected by the judgement location detector 68 , as a circular or elliptic area whose circumference has a brownish or blue/green color.
- the center of the iris is then detected based on the image information for the eye.
- the normal vector of the center of the iris is obtained based on the depth information.
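As an illustrative sketch (not part of the disclosure; the depth-map representation and the central-difference method are assumptions), the normal vector at the iris center can be estimated from the local gradient of the depth information:

```python
import math

def iris_normal(depth, cy, cx):
    """depth: 2-D list of distances to each pixel; (cy, cx): iris-center pixel.
    Returns a unit normal vector (nx, ny, nz) of the depth surface there,
    which approximates the direction of the line of sight."""
    # Central differences of the depth surface z = depth[y][x]
    dzdx = (depth[cy][cx + 1] - depth[cy][cx - 1]) / 2.0
    dzdy = (depth[cy + 1][cx] - depth[cy - 1][cx]) / 2.0
    norm = math.sqrt(dzdx * dzdx + dzdy * dzdy + 1.0)
    return (-dzdx / norm, -dzdy / norm, 1.0 / norm)
```

For a flat (frontal) surface the normal points straight at the camera; a depth gradient tilts the normal away from it.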
- the predetermined trail of the line of sight is, for example, “the person looks to the upper left with respect to the camera, then to the lower right, and then at the camera”.
- the starting condition in this case becomes “the person looks to the upper left with respect to the camera”.
- the detection starting unit 85 outputs a starting signal when it detects that the person is looking to the upper left with respect to the camera.
- the variation detector 86 starts detecting variation of the line of sight of the person upon receiving the starting signal.
- the variation detector 86 detects the trail of the line of sight based on the data for the plurality of input raw images.
- the judging unit 88 outputs the timing signal when the trail is “upper left, lower right and then at the camera”.
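The trail matching above can be sketched as follows (an illustration only; the quantized direction labels are assumed to be produced from the iris normal vector by the preceding stages):

```python
# Match the person's line of sight against the predetermined trail
# "upper left, lower right, then at the camera".

TRAIL = ("upper_left", "lower_right", "camera")

class GazeTrailJudge:
    def __init__(self, trail=TRAIL):
        self.trail = trail
        self.progress = 0  # number of trail stages matched so far

    def update(self, direction: str) -> bool:
        """Feed one gaze sample; return True when the full trail is matched."""
        # Starting condition: detection begins only once the first stage
        # ("the person looks to the upper left") is observed.
        if self.progress == 0:
            if direction == self.trail[0]:
                self.progress = 1
            return False
        if direction == self.trail[self.progress]:
            self.progress += 1
            if self.progress == len(self.trail):
                self.progress = 0
                return True
        return False
```

Repeated samples of the same direction simply leave the progress unchanged, so a slow gaze movement still matches the trail.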
- the control unit 50 extracts the face part based on the data for the raw image and the information thereof and then detects the judgement location from the information for the extracted face part. The control unit 50 then detects the variation of the judgement location and determines the timing for photographing when the detected judgement location satisfies the photographing condition. Therefore, the camera of this embodiment can automatically photograph at a timing when the targeted person is in good condition.
- the judgement location detector 68 detects the judgement locations for each of the people. This means that the aimed object extractor 66 extracts the face parts of each of the people from each of the images, and the judgement location detector 68 detects the eyes or the mouth of each of the people from each of the images.
- the variation detector 86 detects the variation of the judgement locations for each of the people.
- the judging unit 88 outputs the timing signal when the variations of the plurality of judgement locations satisfy the photographing condition.
- the judging unit 88 selects the aimed objects whose judgement locations have variations satisfying the photographing condition.
- the judging unit 88 then outputs the information of the aimed objects including the selected judgement locations to the input-condition-determining unit 82 and the image-processing unit 84 .
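A minimal sketch of this multi-person judgement (the interface is assumed for illustration; the ratio variant reflects the claim wording about a predetermined ratio of aimed objects):

```python
def judge_group(satisfied, required_ratio=1.0):
    """satisfied: dict mapping person id -> whether that person's judgement
    location variation satisfies the photographing condition.
    Returns (emit_signal, ids of the satisfying people)."""
    if not satisfied:
        return False, []
    # Select the aimed objects whose judgement locations satisfy the condition.
    selected = [pid for pid, ok in satisfied.items() if ok]
    # Emit the timing signal when the satisfying ratio reaches the requirement
    # (required_ratio=1.0 means every person must satisfy the condition).
    emit = len(selected) / len(satisfied) >= required_ratio
    return emit, selected
```

The list of selected ids corresponds to the information the judging unit 88 passes on to the input-condition-determining unit 82 and the image-processing unit 84.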
- FIG. 16 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4.
- the judgement location detector 68 detects the judgement location based on the image information for the face part (S 250 ). When each of the images includes a plurality of people, the judgement location detector 68 detects the judgement locations for all of the people (S 252 and S 250 ).
- FIG. 17 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4.
- the detection starting unit 85 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S 260 ).
- the detection-starting unit 85 continues judging whether or not the judgement location satisfies the starting condition for a predetermined period (S 260 and S 262 ).
- the variation detector 86 starts detecting the variation of the judgement location when the judgement location satisfies the starting condition (S 261 ).
- the image-pickup control unit 56 controls the input unit 20 to stop photographing raw images when the judgement location does not satisfy the predetermined starting condition for a predetermined period (S 262 and S 263 ).
- the judging unit 88 judges whether the variation of the judgement location satisfies the photographing condition or not (S 264 ).
- the photographing condition judging unit 80 generates a timing signal when the variation of the judgement location satisfies the photographing condition (S 265 ).
- the process returns to step S 260 if the predetermined period has not yet expired.
- the detection starting unit 85 judges again whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S 260 ).
- the image pickup control unit 56 controls the input unit 20 to stop photographing raw images when the predetermined period has expired (S 266 and S 267 ).
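The timing-signal flow of FIG. 17 (steps S 260 to S 267) can be sketched as one loop; the predicate functions and timeout values here are assumptions for illustration, standing in for the detection-starting unit 85, the variation detector 86, and the judging unit 88:

```python
def generate_timing_signal(frames, satisfies_start, satisfies_condition,
                           start_timeout=5.0, judge_timeout=5.0):
    """frames: iterable of (timestamp, judgement_location) pairs.
    Returns the timestamp of the timing signal, or None when a period expires."""
    start_deadline = None
    started_at = None
    for t, location in frames:
        if start_deadline is None:
            start_deadline = t + start_timeout
        if started_at is None:
            if satisfies_start(location):
                started_at = t          # S 261: begin variation detection
            elif t > start_deadline:
                return None             # S 263: stop photographing raw images
            continue
        if satisfies_condition(location):
            return t                    # S 265: generate the timing signal
        if t - started_at > judge_timeout:
            return None                 # S 267: predetermined period expired
    return None
```

With stub predicates, a frame stream that reaches the photographing condition yields the timestamp of that frame, while a stream that never meets the starting condition times out.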
- FIG. 18 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4.
- the image-pickup control unit 56 controls the input unit 20 to automatically photograph a refined image based on the timing signal output at the step 110 in FIG. 4 (S 270 ).
- the input unit 20 inputs the data for the refined image (S 272 ).
- the camera 10 may not automatically photograph a refined image; instead, the user of the camera 10 may press the release button 52 to photograph the refined image upon receiving the alarm signal from the alarm 54 .
- the method of manually photographing a refined image by the user of the camera 10 is in accordance with the flowchart shown in FIG. 9, which is explained in the first embodiment.
- the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light based on the timing signal generated at the step 110 (S 190 ).
- the user presses the release button 52 (S 192 ).
- the camera 10 photographs a refined image (S 194 ).
- since the alarm 54 outputs the alarm sound or the alarm light based on the timing signal, the user can photograph a refined image at an optimum timing without having to judge the timing himself. Furthermore, the targeted person can also notice the timing because of the alarm sound or the alarm light.
- the alarm 54 may output an alarm signal such as an alarm sound or an alarm light when the timing signal is not output from the timing signal generator for a predetermined period.
- FIG. 19 is a flowchart showing in detail the method of generating a timing signal in which the alarm 54 outputs the alarm signal, step 110 in FIG. 4.
- the detection-starting unit 85 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S 300 ).
- the detection starting unit 85 continues judging whether or not the judgement location satisfies the starting condition for a predetermined period (S 300 and S 304 ).
- the variation detector 86 starts detecting the variation of the judgement location when the judgement location satisfies the starting condition (S 302 ).
- the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light when the photographing condition judging unit 80 does not output the timing signal for a predetermined period (S 304 and S 306 ). Then, the image-pickup control unit 56 controls the input unit 20 to stop photographing raw images when the judgement location does not satisfy the predetermined starting condition for the predetermined period (S 308 ).
- the judging unit 88 judges whether or not the variation of the judgement location satisfies the photographing condition (S 310 ).
- the photographing condition judging unit 80 generates a timing signal when the variation of the judgement location satisfies the photographing condition (S 312 ).
- the process proceeds to step S 314 when the variation does not satisfy the photographing condition. If the predetermined period remains (S 314 ), the detection starting unit 85 judges again whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S 300 ).
- the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light when the predetermined period has expired (S 316 ).
- the image-pickup control unit 56 controls the input unit 20 to stop photographing raw images (S 318 ).
- since the alarm 54 outputs the alarm signal, such as the alarm sound or the alarm light, when the timing signal is not output within the predetermined period, the photographer and the targeted person become aware, from the sound or the light, that the targeted person does not meet the photographing condition.
- the camera of the fourth embodiment will be explained in the following.
- the camera of this embodiment is a silver halide type camera by which an image of a subject is formed on a silver halide film and has the same structure as that explained in the second embodiment shown in FIG. 11. Therefore, the explanation of the structure of the camera in this embodiment will be omitted.
- FIG. 20 is a block diagram of the control unit 150 in this embodiment.
- the control unit 150 in this embodiment includes an image pickup control unit 56 , an image forming control unit 58 , an extractor 60 , a condition storing unit 70 , a photographing condition judging unit 180 , and an input-condition-determining unit 82 .
- the extractor 60 , the condition storing unit 70 , the photographing condition judging unit 180 and the input-condition-determining unit 82 in this embodiment respectively have the same structures and functions as those explained in the first embodiment; therefore, the explanation of these parts will be omitted.
- the image-forming control unit 58 controls the input unit 120 to form an image of a subject.
- the image-forming control unit 58 controls at least one of the following conditions of the input unit 120 : focus condition of the lens 132 , aperture condition of the lens stop 134 and exposure time of the shutter 136 , based on the input condition determined by the input-condition-determining unit 82 .
- the image pickup control unit 56 controls the input unit 120 to photograph an image of a subject.
- the image pickup control unit 56 also controls the photographing unit 138 to photograph a refined image based on the input condition.
- the camera 110 includes the raw image data input unit 124 for inputting an electronic raw image, in addition to the image data input unit 130 for inputting a refined image. Therefore, the camera can automatically set an optimum condition for photographing a refined image of the subject. Thus, a desired refined image can be obtained without photographing a plurality of images using silver halide films, which can be expensive.
- a camera of the fifth embodiment according to the present invention will be explained in the following.
- the camera of this embodiment continuously photographs images of a subject.
- the camera outputs a timing signal when the targeted subject in the image satisfies the photographing condition.
- upon receiving the timing signal, the camera of this embodiment records, as a refined image, one of the images which was photographed a predetermined period earlier than the timing signal.
- the camera of this embodiment includes a control unit 50 .
- the structure of the camera of this embodiment other than the control unit 50 is the same as that explained in the first to fourth embodiments. Thus, the explanation of same parts will be omitted.
- FIG. 21 is a block diagram of the control unit 50 according to the fifth embodiment.
- the control unit 50 includes an extractor 60 , a condition-storing unit 70 , a timing signal generator 80 , an image processing unit 84 , and an image storing unit 140 .
- the extractor 60 , the condition-storing unit 70 , the timing signal generator 80 and the image processing unit 84 are the same as those explained in the first to fourth embodiments.
- although the timing signal generator 80 is shown in FIG. 21, the part having the numeral 80 may be the photographing condition judging unit explained in the third and fourth embodiments.
- the image storing unit 140 temporarily stores the images photographed by the image data input unit 24 and input from the memory 40 . Each of the images is respectively stored with time records of when the image was photographed.
- the image storing unit 140 receives the timing signal from the timing signal generator 80 and then outputs one of the raw images photographed at a timing earlier than the timing signal by a predetermined period as the refined image, to the image processing unit 84 .
- the image processing unit 84 processes the refined image based on the information from the extractor 60 .
- FIG. 22 is a flowchart showing a method of photographing an image.
- the camera starts photographing the subject when the release button 52 is pressed (S 400 ).
- data for a parallactic image is input from the parallactic image data input unit 22 (S 402 ).
- data for raw images are continuously input from the image data input unit 24 (S 404 ).
- the raw images are temporarily stored in the image storing unit 140 .
- the aimed object extractor 66 extracts the face part of the targeted person as the aimed object (S 406 ).
- the judgement location detector 68 detects the judgement location based on the image information for the face part (S 408 ).
- the photographing condition judging unit 180 generates and outputs a timing signal when the judgement location satisfies a predetermined photographing condition (S 410 ).
- the image storing unit 140 selects one of the raw images photographed at a timing earlier than the timing signal by a predetermined period, as the refined image.
- the image storing unit 140 outputs the refined image to the image-processing unit 84 (S 412 ).
- the image-processing unit 84 processes the refined image (S 414 ).
- the processing of the refined image may include compositing a plurality of refined images and the like.
- the recording unit 90 records the processed image on a recording medium (S 416 ).
- the output unit 92 outputs the processed image (S 418 ), and the photographing operation is terminated (S 420 ).
- the image storing unit 140 may store all of the raw images which are photographed from a timing earlier than the timing signal by a predetermined period to the timing of the timing signal, as the refined images.
- the image-processing unit 84 processes the plurality of refined images.
- the camera stores the raw image which is photographed at a timing earlier than the timing signal by a predetermined period as the refined image, based on the timing signal. Therefore, the refined image is selected by considering the delay time, even when the extractor 60 takes a certain time for extracting the aimed object and detecting the judgement location. Thus, an image in which the targeted person has a good appearance can be obtained.
- the camera stores all of the raw images which are photographed from a timing earlier than the timing signal by a predetermined period to the timing of the timing signal, as the refined images. Therefore, an image in which the targeted person has a good appearance can be selected.
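The image storing unit of this embodiment can be sketched as a short time-stamped buffer (an illustration only; the capacity, the lead period, and the interface are assumptions): on receiving the timing signal, the frame photographed a predetermined period earlier is selected as the refined image, compensating for the extraction and detection delay.

```python
from collections import deque

class ImageStore:
    def __init__(self, capacity=64):
        # (timestamp, image) pairs; old frames are discarded automatically.
        self.frames = deque(maxlen=capacity)

    def add(self, timestamp, image):
        self.frames.append((timestamp, image))

    def select_refined(self, signal_time, lead=0.5):
        """Return the stored image closest to `signal_time - lead`."""
        target = signal_time - lead
        if not self.frames:
            return None
        return min(self.frames, key=lambda f: abs(f[0] - target))[1]

    def select_all_since(self, signal_time, lead=0.5):
        """Return every frame from `signal_time - lead` up to the signal,
        as in the variant that stores all such raw images as refined images."""
        return [img for t, img in self.frames
                if signal_time - lead <= t <= signal_time]
```

The two selection methods correspond to the two behaviors described above: picking the single delayed frame, or keeping the whole window so that the best appearance can be chosen afterwards.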
- FIG. 23 shows a camera 210 of the sixth embodiment according to the present invention.
- the camera 210 of this embodiment continuously photographs a plurality of raw images of a subject in the same way as the first to fifth embodiments.
- the camera 210 outputs a timing signal when the raw image satisfies the photographing condition.
- the camera 210 of this embodiment has the same structure as that of the first embodiment and further includes a communication unit 150 .
- the camera 210 outputs the timing signal through the communication unit 150 , to control operation of an external apparatus 160 based on the timing signal.
- the communication unit 150 of the camera 210 sends the timing signal to the external apparatus 160 by a wireless means.
- the communication unit 150 of the camera 210 and the external apparatus may communicate with each other by wireless means, such as radio or infrared radiation, or by cable, such as USB or LAN.
- the external apparatus 160 may be, for example, a camera for photographing a refined image of the target, or an illuminator.
- the camera 210 continuously photographs raw images of a subject.
- the camera 210 outputs a timing signal when the raw image satisfies a predetermined selecting condition.
- the timing signal is transferred from the camera 210 to the external apparatus 160 through the communication unit 150 of the camera 210 .
- when the external apparatus 160 is another camera for photographing a refined image, the external apparatus photographs the refined image of the subject based on the timing signal from the camera 210 .
- a silver halide type camera that does not include a raw image data input unit can photograph a refined image of a subject at the timing when the targeted person is in good condition.
- a desired refined image can be obtained without photographing a plurality of images using silver halide films which can be expensive.
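For illustration only (the wire format, address, and port below are hypothetical assumptions; the disclosure does not specify a protocol), sending the timing signal to the external apparatus could look like:

```python
import socket

def send_timing_signal(timestamp: float, host="192.168.0.10", port=5000):
    """Send the timing signal as a one-line UDP datagram carrying the
    signal timestamp; the external camera fires on receipt."""
    msg = ("TIMING %.3f\n" % timestamp).encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))
```

A connectionless datagram keeps the trigger latency low, which matters when the external apparatus must photograph at the judged moment; a cabled or connection-oriented link would trade latency for reliability.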
Abstract
A camera includes a release button, an input unit, an A/D converter, a memory, a control unit, an alarm, a recording unit and an output unit. The memory stores data for the image converted by the A/D converter. The control unit judges whether or not the image stored in the memory satisfies a predetermined photographing condition and outputs a timing signal when the image satisfies the photographing condition. The alarm outputs an alarm signal to a photographer. The recording unit records the refined image on a recording medium. The output unit outputs the refined image.
Description
- This application is a Divisional of co-pending application Ser. No. 09/586,600, filed on Jun. 2, 2000, the entire contents of which are hereby incorporated by reference and for which priority is claimed under 35 U.S.C. § 120; this application also claims priority based on Japanese patent applications Hei 11-157159, filed on Jun. 3, 1999, and Hei 11-158666, filed on Jun. 4, 1999, the contents of which are incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to a camera, and more particularly to a camera capable of automatically photographing an image of a subject when the subject satisfies a predetermined photographing condition.
- 2. Description of the Related Art
- Conventionally, a technique is known to correct a photograph so that a person photographed by a camera can be satisfied with the result. However, this technique requires a high degree of skill. Furthermore, it is difficult to correct a person's face in the photograph when he or she is blinking or is not smiling, to a face as if he or she is not blinking or is smiling.
- On the other hand, Japanese Patent Laid-open Publication (Kokai) H9-212620 and Japanese Patent Laid-open Publication (Kokai) H10-191216 disclose a technique to continuously photograph a plurality of images. Those images are displayed, and the person photographed by the camera can select a desirable image from among those images.
- Japanese Patent Laid-open Publication (Kokai) H5-40303, H4-156526 and H5-100148 disclose cameras which can automatically judge the timing for photographing images.
- However, this technique was troublesome because the photographed person or the photographer needed to select the desired image by checking all of the images. Furthermore, when a lot of people are photographed in the image, it is more difficult to select an image that all of them are satisfied with.
- Furthermore, images are photographed at the timing when the photographer judges it is the best timing. Therefore, the photographer's timing is not always matched with the best timing for the photographed person. In addition, when a lot of people are photographed in the image, it is more difficult to judge the best timing at which many of them will be satisfied with the image.
- Therefore, it is an object of the present invention to provide a camera which overcomes the above issues in the related art. This object is achieved by combinations described in the independent claims. The dependent claims define further advantageous and exemplary combinations of the present invention.
- According to the first aspect of the present invention, a camera comprises: an image data input unit forming an image of a subject for photographing said subject; a condition storing unit storing a predetermined photographing condition related to a desirable subject; and a timing signal generator outputting a timing signal when said subject satisfies said photographing condition.
- The camera may include an extractor extracting data of an aimed object from said image of said subject based on an extracting condition, wherein said photographing condition may include a predetermined condition related to a desirable aimed object and said timing signal generator outputs said timing signal when said aimed object satisfies said photographing condition.
- The extracting condition may be based on depth information of said image indicating the distance to each part of said subject.
- The extractor may detect data of a judgement location from said data of said aimed object in said image based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to a desirable judgement location, and the timing signal generator may output said timing signal when said judgement location satisfies said photographing condition.
- The extractor may extract data of a plurality of said aimed objects from said image; and said timing signal generator may output said timing signal when said plurality of aimed objects satisfy said photographing condition.
- The timing signal generator may output said timing signal when the ratio of said aimed objects satisfying said photographing condition against all of said plurality of said aimed objects exceeds a predetermined ratio.
- The extractor may detect data of a plurality of judgement locations from each of said data of said plurality of aimed objects based on a detecting condition different from said first condition, said photographing condition may include a predetermined photographing condition related to said judgement location, and said timing signal generator may output said timing signal when said plurality of said judgement locations satisfy said photographing condition.
- The timing signal generator may output said timing signal when the ratio of said judgement locations satisfying said photographing condition against all of said plurality of said aimed objects exceeds a predetermined ratio.
- The camera may include an image-pickup control unit controlling said input unit for photographing said image based on said timing signal.
- The camera may include an illuminator illuminating said subject based on said timing signal.
- The camera may include a recording unit recording said image on a replaceable nonvolatile recording medium based on said timing signal.
- The camera may include an alarm outputting an alarm signal for notifying that said subject satisfies said photographing condition based on said timing signal.
- The photographing condition may include a plurality of photographing conditions, and said camera may include a condition-setting unit previously selecting at least one of said photographing conditions, for photographing said image, from among said plurality of photographing conditions.
- The camera may include: an input condition determining unit determining an input condition for inputting said image based on information of said judgement location detected by said extractor; and an image-forming control unit controlling an input unit for forming said image of said subject based on said input condition.
- The camera may include an image processing unit processing said image based on information of said judgement location detected by said extractor.
- According to the second aspect of the present invention, a camera comprises: an image data input unit forming a plurality of images of a subject for photographing said subject; a condition storing unit storing a predetermined photographing condition related to a desirable variation of said subject; a variation detector detecting variation of said subject in said plurality of said images based on information of said plurality of images; and a timing signal generator outputting a timing signal when said variation of said subject satisfies said photographing condition.
- The camera may include: an extractor extracting data of an aimed object from each of said plurality of images of said subject based on an extracting condition, wherein said photographing condition may include a predetermined condition related to a desirable aimed object, said variation detector may detect variation of said aimed object in said plurality of images based on said information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said aimed object satisfies said photographing condition.
- The extracting condition may be based on depth information of said plurality of images indicating the distance to each part of said subject.
- The extractor may detect data of a judgement location from said data of said aimed object in each of said plurality of images based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to a desirable judgement location, said variation detector may detect variation of said judgement location in said plurality of images based on said information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said judgement location satisfies said photographing condition.
- The photographing condition may include a predetermined starting condition for starting detection of said variation of said judgement location, and said variation detector may start detecting said variation of said judgement location when said judgement location satisfies said starting condition.
- The extractor may extract data of a plurality of said aimed objects from each of said plurality of images, said variation detector may detect variation of each of said plurality of said aimed objects in said plurality of images based on information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said plurality of said aimed objects satisfy said photographing condition.
- The extractor may detect data of a plurality of judgement locations from each of said data of said plurality of aimed objects based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to desirable variation of said judgement location, said variation detector may detect variation of each of said plurality of said judgement locations in said plurality of images based on information of said plurality of images, and said timing signal generator may output said timing signal when said variation of said plurality of said judgement locations satisfy said photographing condition.
- The camera may include an image pickup control unit controlling said input unit for photographing said image based on said timing signal.
- The camera may include an illuminator illuminating said subject based on said timing signal.
- The camera may include a recording unit recording said image on a replaceable nonvolatile recording medium based on said timing signal.
- The camera may include an alarm outputting an alarm signal for notifying that said subject satisfies said photographing condition based on said timing signal.
- The photographing condition may include a plurality of photographing conditions, and said camera may include a condition-setting unit previously selecting at least one of said photographing conditions for photographing said image, from among said plurality of photographing conditions.
- The timing signal generator may select said judgement location satisfying said photographing condition from among said plurality of said judgement locations in said plurality of images, and outputs information for said aimed object including said judgement location, and the camera may include: an input condition determining unit determining an input condition for inputting said image based on information for said judgement location; and an image forming control unit controlling an input unit for forming said image of said subject based on said input condition.
- The timing signal generator may select said judgement location satisfying said photographing condition from among said plurality of said judgement locations in said plurality of images, and outputs information for said aimed object including said judgement location, and said camera may include an image processing unit processing said image based on said information for said judgement location.
- According to the third aspect of the present invention, a method of photographing an image of a subject comprises outputting a timing signal when said subject satisfies a predetermined photographing condition.
- The method may include: extracting data of an aimed object from said image of said subject based on an extracting condition, wherein said photographing condition may include a predetermined condition related to a desirable aimed object, and said timing signal may be output when said aimed object satisfies said photographing condition.
- The extracting may include detecting data of a judgement location from said data of said aimed object in said image based on a detecting condition different from said extracting condition, said photographing condition may include a predetermined photographing condition related to a desirable judgement location, and said timing signal may be output when said judgement location satisfies said photographing condition.
- The method may include photographing said subject based on said timing signal.
- The method may include recording said photographed image of said subject on a replaceable nonvolatile recording medium based on said timing signal.
- The method may include: determining an input condition for inputting said image based on information for said judgement location detected in said detecting step; and forming said image of said subject based on said input condition.
- The method may include processing said image based on information for said judgement location detected in said detecting step.
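The method claims above — extract an aimed object under an extracting condition, detect a judgement location under a detecting condition, and output a timing signal when it satisfies the photographing condition — can be sketched as a loop over raw frames. The frame representation, the lambdas, and the threshold below are assumptions for illustration only.

```python
def photograph_when_ready(frames, extract, detect, satisfies):
    """Return the index of the first frame whose judgement location
    satisfies the photographing condition, or None if no frame does."""
    for i, frame in enumerate(frames):
        aimed = extract(frame)          # extracting condition
        if aimed is None:
            continue
        location = detect(aimed)        # detecting condition
        if location is not None and satisfies(location):
            return i                    # timing signal for this frame
    return None

# Toy frames: each frame is a dict with an optional "face" region.
frames = [{"face": {"eye_open": 0.2}}, {}, {"face": {"eye_open": 0.95}}]
idx = photograph_when_ready(
    frames,
    extract=lambda f: f.get("face"),
    detect=lambda face: face.get("eye_open"),
    satisfies=lambda open_ratio: open_ratio > 0.8,  # "not blinking"
)
print(idx)
```

The returned index plays the role of the timing signal: it tells the caller which raw frame warranted photographing the refined image.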
- According to the fourth aspect of the present invention, a method of photographing a plurality of images of a subject comprises: detecting variation of said subject in said plurality of said images based on information for said plurality of images; and outputting a timing signal when said variation of said subject satisfies a predetermined photographing condition related to a desirable variation of said subject.
- The method may include extracting data of an aimed object from each of said plurality of images of said subject based on an extracting condition, said detecting may include detecting variation of said aimed object based on information for said image, and said timing signal may be output when said variation of said aimed object satisfies said photographing condition.
- The extraction of said aimed object may include detecting data of a judgement location from said data of said aimed object in each of said plurality of images based on a detecting condition different from said extracting condition, said detecting variation of said subject may include detecting variation of said judgement location based on information for said image, and said timing signal may be output when said variation of said judgement location satisfies said photographing condition.
- The photographing condition may include a predetermined starting condition for starting detection of said variation of said judgement location, and said detecting of variation may start detecting said variation of said judgement location when said judgement location satisfies said starting condition.
- The method may include photographing said image based on said timing signal.
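The fourth-aspect claims above describe watching the judgement location across a plurality of images, beginning detection only once a starting condition is met, and signalling when the variation satisfies the photographing condition. A hedged sketch, under the assumption that the desirable variation is "the subject has settled" (frame-to-frame change small enough):

```python
def timing_from_variation(samples, start_at, max_variation):
    """Watch a stream of judgement-location measurements. Once a sample
    satisfies the starting condition, compare consecutive samples and
    signal when the frame-to-frame variation is small enough."""
    tracking = False
    previous = None
    for i, value in enumerate(samples):
        if not tracking:
            if value >= start_at:       # predetermined starting condition
                tracking = True
                previous = value
            continue
        if abs(value - previous) <= max_variation:
            return i                    # timing signal
        previous = value
    return None

# e.g. mouth-width ratios measured over successive raw images
print(timing_from_variation([0.2, 0.6, 0.9, 0.7, 0.71],
                            start_at=0.5, max_variation=0.05))
```

Other readings of "desirable variation" (for example, a sudden large change) would only swap the comparison inside the loop.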
- This summary of the invention does not necessarily describe all necessary features, so that the invention may also be a sub-combination of these described features.
- FIG. 1 shows a camera of the first embodiment according to the present invention,
- FIG. 2 is a block diagram of the control unit of the first embodiment,
- FIG. 3 is a block diagram of the function of the extractor,
- FIG. 4 is a flowchart showing a method of photographing an image,
- FIG. 5 is a flowchart showing in detail the method of extracting a face part, step 106 in FIG. 4,
- FIG. 6 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4,
- FIG. 7 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4,
- FIG. 8 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4,
- FIG. 9 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4,
- FIG. 10 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4,
- FIG. 11 shows a camera of the second embodiment according to the present invention,
- FIG. 12 is a block diagram of the control unit of the second embodiment,
- FIG. 13 is a block diagram of the control unit of the third embodiment,
- FIG. 14 is a block diagram of the function of the extractor 60,
- FIG. 15 is a block diagram of the function of the photographing condition judging unit,
- FIG. 16 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4,
- FIG. 17 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4,
- FIG. 18 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4,
- FIG. 19 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4,
- FIG. 20 is a block diagram of the control unit of the fourth embodiment,
- FIG. 21 is a block diagram of the control unit of the fifth embodiment,
- FIG. 22 is a flowchart showing a method of photographing an image, and
- FIG. 23 shows a camera of the sixth embodiment.
- The invention will now be described based on the preferred embodiments, which do not intend to limit the scope of the present invention, but exemplify the invention. All of the features and the combinations thereof described in the embodiment are not necessarily essential to the invention.
- FIG. 1 shows a
camera 10 of the first embodiment according to the present invention. The camera 10 continuously photographs raw images of a subject and determines the timing for photographing a refined image based on the previously photographed raw images. The camera 10 photographs a refined image of the subject in accordance with the timing signal. Therefore, the timing for photographing a refined image may be automatically determined by the camera 10. - The
camera 10 includes an input unit 20, an A/D converter 30, a memory 40, a control unit 50, a release button 52, an alarm 54, a recording unit 90 and an output unit 92. The camera 10 of this embodiment further includes an illuminator 53. The camera 10 may be, for example, a digital still camera or a digital video camera that can photograph a still image. - The
input unit 20 includes a parallactic image data input unit 22 and a normal image data input unit 24. The parallactic image data input unit 22 inputs parallactic images which are photographed from different viewpoints. The parallactic image data input unit 22 has a parallactic lens 32, a parallactic shutter 34, and a parallactic charge coupled device (CCD) 36. The parallactic lens 32 forms an image of a subject. The parallactic shutter 34 has a plurality of shutter units, each of which serves as a viewpoint. The parallactic shutter 34 opens one of the plurality of shutter units. The parallactic CCD 36 receives the image of the subject through the parallactic lens 32 and whichever of the shutter units of the parallactic shutter 34 is opened. The parallactic CCD 36 also receives another image of the subject through the parallactic lens 32 and another of the shutter units of the parallactic shutter 34, which is opened at this time. The images received through the parallactic lens 32 and the parallactic shutter 34 form a parallactic image. Thus, the parallactic CCD 36 receives the parallactic image of the subject formed by the parallactic lens 32 and converts it to electronic signals. - The normal image
data input unit 24 inputs a normal image photographed from a single viewpoint. The normal image data input unit 24 has a lens 25, a lens stop 26, a shutter 27, a color filter 28 and a charge coupled device (CCD) 29. The lens 25 forms an image of a subject. The lens stop 26 adjusts an aperture condition. The shutter 27 adjusts exposure time. The color filter 28 separates RGB components of the light received through the lens 25. The CCD 29 receives the image of the subject formed by the lens 25 and converts it to electric signals. - The A/
D converter 30 receives analog signals from the parallactic image data input unit 22 and the normal image data input unit 24. The A/D converter 30 converts the received analog signals to digital signals and outputs the digital signals to the memory 40. The memory 40 stores the input digital signals. This means that the memory 40 stores the data of the parallactic image of the subject photographed by the parallactic image data input unit 22, and the data of the normal image of the subject photographed by the normal image data input unit 24. - The
control unit 50 outputs a timing signal for starting photographing of an image of a subject when the subject satisfies a predetermined photographing condition. The timing signal is input to the input unit 20. The camera 10 then starts the photographing operation based on the timing signal, to obtain a refined image of the subject. The control unit 50 processes the photographed refined image and outputs the processed image. The control unit 50 controls at least one of the following conditions: focus condition of the lens 25, aperture condition of the lens stop 26, exposure time of the shutter 27, output signal of the CCD 29, condition of the parallactic shutter 34, and output signal of the parallactic CCD 36. The control unit 50 also controls the illuminator 53. - The
release button 52 outputs to the control unit 50 a signal for starting the photographing operation. This means that when a user of the camera 10 pushes the release button 52, the signal is output to the control unit 50. The control unit 50 then controls the input unit 20 for photographing an image of the subject. - As described above, the
camera 10 is capable of automatically photographing a refined image of the subject by determining the best timing for photographing the refined image. However, the camera 10 is also capable of photographing the image at a desirable timing for the user of the camera 10, when he/she pushes the release button 52. The camera 10 may have a switch, not shown in the drawings, for selecting an automatic photographing mode in which the best timing for photographing the image is automatically determined, and a manual photographing mode in which the user of the camera 10 determines the desirable timing. - The
alarm 54 outputs an alarm signal upon receiving the timing signal from the control unit 50. The alarm 54 may be, for example, an alarm generator or a light-emitting diode. Thus, the user of the camera 10 can know the best timing determined by the camera 10 for photographing a refined image of the subject. - The
recording unit 90 records the image output from the control unit 50 on a recording medium. The recording medium may be, for example, a magnetic recording medium such as a floppy disk, or a nonvolatile memory such as a flash memory. - The
output unit 92 outputs the image recorded on the recording medium. The output unit 92 may be, for example, a printer or a monitor. The output unit 92 may be a small liquid crystal display (LCD) of the camera 10. In this case, the user can see the image processed by the control unit 50 immediately after photographing the image. The output unit 92 may be an external monitor connected to the camera 10. - FIG. 2 is a block diagram of the
control unit 50 according to the first embodiment. The control unit 50 includes an image pickup control unit 56, an image forming control unit 58, an extractor 60, a condition-storing unit 70, a timing signal generator 80, an input condition determining unit 82, and an image processing unit 84. - The
extractor 60 receives a parallactic image photographed by the parallactic image data input unit 22 and a raw image photographed by the normal image data input unit 24, from the memory 40. The extractor 60 extracts an aimed object from the raw image based on the information obtained from the parallactic image and the raw image. The information includes image information of the raw image and depth information of the parallactic image. The aimed object defined here is an independent object at which a photographer aims when photographing. The aimed object may be, for example, a person in a room when the person and the objects in the room are photographed, a fish in an aquarium when the fish and the aquarium are photographed, or a bird stopping on a branch of a tree when the bird and the tree are photographed. - The
extractor 60 then detects a judgement location from the aimed object based on the information obtained from the parallactic images and the raw images. The judgement location defined here is a location to which specific attention is paid when selecting a desirable image. The judgement location may be, for example, an eye of a person when the person is photographed, or a wing of a bird when the bird is photographed. The aimed object may be an area including the judgement location, extracted for a certain purpose. The information for the judgement location is output to the timing signal generator 80, the input-condition-determining unit 82 and the image-processing unit 84. - The condition-storing
unit 70 stores predetermined conditions related to a judgement location which should be included in a raw image obtained by photographing a subject. The best timing for photographing a refined image of the subject in this embodiment is when the aimed object in the image is in good condition. This means that a judgement location included in the aimed object satisfies the predetermined conditions stored in the condition-storing unit 70. The condition-storing unit 70 may store a plurality of photographing conditions. The condition-storing unit 70 may include a condition-setting unit, not shown in the drawings, by which a user can select at least one of the photographing conditions from among a plurality of photographing conditions. - The
timing signal generator 80 outputs a timing signal for photographing an image. The timing signal generator 80 outputs the timing signal when the judgement location detected by the extractor 60 satisfies the predetermined photographing condition stored in the condition-storing unit 70. - The input-condition-determining
unit 82 determines an input condition for inputting a refined image, based on the information for the aimed object or the judgement location received from the extractor 60. The input condition is output to the image-forming control unit 58. The input condition may be, for example, a focus condition of the lens 25 such that the aimed object including the judgement location is focused. - As the
input unit 20 inputs an image in accordance with the input condition, such as the focus condition of the lens 25, determined by the input-condition-determining unit 82, the camera 10 can photograph a refined image in which the subject is in good condition. - The image-forming
control unit 58 controls the input unit 20 to form a refined image of the subject based on the input condition determined by the input-condition-determining unit 82. This means that the image-forming control unit 58 controls at least one of the conditions including focus condition of the lens 25, aperture condition of the lens stop 26, exposure time of the shutter 27, and condition of the parallactic shutter 34, based on the input condition. - The image-
pickup control unit 56 controls the input unit 20 to photograph a refined image of the subject based on the input condition determined by the condition-determining unit 70. This means that the image-pickup control unit 56 controls at least one of the conditions including output signal of the CCD 29 and output signal of the parallactic CCD 36, based on the input condition. The output signal of the CCD 29 determines the gradation characteristics based on a gamma (γ) curve and sensitivity. - The image-
pickup control unit 56 controls the input unit 20 to photograph a refined image based on the timing signal output from the timing signal generator 80. The image-pickup control unit 56 controls the image-processing unit 84 to process the refined image. The image-pickup control unit 56 may control the illuminator 53, for flashing a light preceding or at the same time as photographing a refined image by the input unit 20. The image-pickup control unit 56 also controls the image-processing unit 84 to process the input refined image. - The image-processing
unit 84 receives the refined image photographed by the normal image data input unit 24 from the memory 40. The image-processing unit 84 then processes the refined image based on the information for the aimed object or the judgement location extracted by the extractor 60.
- The process condition for processing a normal image may relate to compression of the image. The process condition in this case is determined based on the data for the aimed object. The image-processing
unit 84 separately determines the compressing condition of the image for the aimed object and for the components other than the aimed object so that the quality of the aimed object does not deteriorate, even though the data size of the image itself is compressed. Theimage processing unit 84 may separately determine the color compressing condition for the aimed object and the components other than the aimed object. - The process condition for processing a normal image may relate to color of the image. The process condition in this case is determined based on the depth information. The processing-condition-determining unit74 may, for example, separately determine the color condition for the aimed object and the components other than the aimed object, so that all the components have optimum gradation.
- The image-processing
unit 84 may determine a processing condition in which the aimed object in the image is magnified and the magnified aimed object is composited with a background image. The background image may be the components included in the original image other than the aimed object, or an image previously selected by the user of thecamera 10. The image-processingunit 84 may then composite the data for the aimed object and the data for the components other than the aimed object to form a composite image. - As described above, the
extractor 60 extracts the data for the aimed object and the judgement location from the image, and the aimed object and the judgement location can be processed separately from the components other than these parts. - Since cameras are usually used to photograph human beings, the best timing for photographing a refined image means that the targeted person has a good appearance. The good appearance of the person may be when, for example, “the person is not blinking”, “the person's eyes are not red-eyed”, “the person is looking at the camera”, or “the person is smiling”. The condition-storing
unit 70 stores these conditions as the photographing conditions. The condition-storingunit 70 may set a photographing condition by selecting at least one of the photographing conditions stored therein. - The method of outputting a timing signal for photographing a refined image of a subject when a targeted person has a good appearance will be explained. The condition-storing
unit 70 stores conditions such as “the person is not blinking”, “the person's eyes are not red-eyed”, “the person is looking at the camera”, and “the person is smiling” as the photographing conditions. These photographing conditions relate to the face of the person, and more specifically to the eyes or mouth of the person. Therefore, it is assumed in this embodiment that the aimed object is the face area of the person and the judgement location is the eyes or mouth of the person. - Each of the photographing conditions has a reference situation for the judgement location, which should meet the requirements of the photographing condition. The condition-storing
unit 70 also stores the reference situations for the judgement location, each respectively corresponding to each of the photographing conditions. The reference situations for the judgement location corresponding to each of the photographing conditions will be described in the following. - For the conditions such as “the person is not blinking”, “the person's eyes are not red-eyed” and “the person is looking at the camera”, the reference situation may relate to the shape of the eye, color of the eye, and size of the eye. For the condition such as “the person is smiling”, the reference situation may also relate to the size of the eye, as well as shape of the mouth, and size of the mouth. Whether each of the judgement locations satisfies each of these reference situations or not is judged in accordance with predetermined algorithms based on experience.
- When the photographing condition “the person is not blinking” is selected, the judgement location may be the eye of the person. The reference situation for the eye in this photographing condition will be determined as follows. When a person blinks, his/her eyelid hides his/her eyeball. While he/she is blinking and his/her eye is partially closed, the white part of his/her eyeball is especially hidden by his/her eyelid. This means that when the person is not blinking, the white part of his/her eyeball should be relatively large. Therefore, the reference situation for the photographing condition “the person is not blinking” becomes “the white part of his/her eyeball has a large dimension”.
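The reference situation described here reduces to a simple measurement against a threshold. As one hedged illustration (the pixel encoding, the threshold value, and the function names are assumptions, not from the specification), a "not blinking" test can compare the visible white-of-eye area against a dimension derived from the eye's width, which stays constant whether the eye is open or closed:

```python
def white_area_ratio(eye_pixels, eye_width):
    """eye_pixels: iterable of (r, g, b) tuples for the detected eye region.
    Count near-white pixels and normalize by the square of the eye width,
    since the width does not change when the eye opens or closes."""
    white = sum(1 for (r, g, b) in eye_pixels
                if r > 200 and g > 200 and b > 200)
    return white / float(eye_width * eye_width)

def is_not_blinking(eye_pixels, eye_width, threshold=0.08):
    """True when the white part of the eyeball has a large enough dimension."""
    return white_area_ratio(eye_pixels, eye_width) >= threshold

open_eye = [(255, 255, 255)] * 40 + [(80, 50, 40)] * 60   # plenty of white
closed_eye = [(120, 90, 70)] * 100                        # eyelid skin tones
print(is_not_blinking(open_eye, eye_width=10))
print(is_not_blinking(closed_eye, eye_width=10))
```

Normalizing by the eye width, rather than by an absolute pixel count, keeps the test independent of how large the face appears in the frame.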
- When the photographing condition “the person's eyes are not red-eyed” is selected, the judgement location may be the eyes of the person. The reference situation for the eyes in this photographing condition will be determined as follows. Eyes of a person are usually red-eyed when the person is photographed using a flash in a dark situation. This happens because the person's eyes cannot sensibly compensate for the sudden brightness and his/her pupils become red. This means that when the person's eyes look red-eyed, his/her pupils in each iris become red and the rest of the iris does not become red. Typically, people of Asian descent have brown or dark brown colored irises, and people of European descent have green or blue colored irises. Therefore, the reference situation for the photographing condition “the person's eyes are not red-eyed” becomes “the red part in his/her iris has a small dimension”.
- When the photographing condition “the person is looking at the camera” is selected, the judgement location may be the eye of the person. The reference situation for the eye in this photographing condition will be determined as follows. When a person is looking at the camera, a line between the camera and the iris of the person and a normal vector of his/her iris are almost the same. Therefore, the reference situation for the photographing condition “the person is looking at the camera” becomes “the normal vector of the iris in his/her eye is approximately equal to the angle of the line between the camera and his/her iris”.
- When the photographing condition “the person is smiling” is selected, the judgement location may be the eyes and the mouth of the person. The reference situation for the eyes and the mouth in this photographing condition will be determined as follows. When a person is smiling, although it depends on each person, his/her eyes become relatively thin. At this time, although it depends on each person, his/her mouth expands right-and-left wards and his/her teeth are shown. Therefore, the reference situations for the photographing condition “the person is smiling” become “the white part in his/her eyes has a small dimension”, “the width of his/her mouth is wide” and “the white area in his/her mouth has a large dimension”.
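The reference situations for the smiling condition follow the same thresholding pattern: the mouth's width is judged relative to the face's width, and the white (teeth) area relative to the mouth region. A hedged sketch, with illustrative ratio thresholds that are assumptions rather than values from the specification:

```python
def is_smiling(mouth_width, face_width, white_mouth_area, mouth_area,
               width_ratio=0.45, white_ratio=0.25):
    """Judge the 'smiling' reference situations: a wide mouth relative to
    the face, or a large white (teeth) area inside the mouth region."""
    wide_mouth = mouth_width / face_width >= width_ratio
    teeth_showing = white_mouth_area / mouth_area >= white_ratio
    return wide_mouth or teeth_showing

print(is_smiling(mouth_width=50, face_width=100,
                 white_mouth_area=300, mouth_area=1000))  # wide mouth
print(is_smiling(mouth_width=30, face_width=100,
                 white_mouth_area=50, mouth_area=1000))
```

Combining the two cues with a logical OR reflects the document's observation that either cue may be absent, since the expression of a smile depends on each person.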
- FIG. 3 is a block diagram of the function of the
extractor 60. The extractor 60 includes a depth information extractor 62, an image information extractor 64, an aimed object extractor 66 and a judgement location detector 68. - The
depth information extractor 62 extracts the depth information indicating the distance to each of the components of the subject, based on the data for the parallactic image received from the memory 40. This means that the depth information extractor 62 determines a corresponding point for each of the components based on the parallactic image and gives a parallax amount. The depth information extractor 62 extracts the depth information based on the parallax amount of each of the components. Determining the corresponding point is a known technique, thus the explanation of this technique will be omitted. Extracting the depth information based on the parallax amount is also a known technique using the principle of triangulation, thus the explanation of this technique will be omitted. - The
image information extractor 64 extracts the image information for normal images, from the data for the normal images received from the memory 40. The image information includes, for example, data for the normal image such as luminance distribution, intensity distribution, color distribution, texture distribution, and motion distribution. - The aimed
object extractor 66 extracts data for the face area of the person as the aimed object, based on the depth information and the image information. Each of the images may include, for example, a plurality of components. The aimed object extractor 66 recognizes each of the components based on the depth information. The aimed object extractor 66 then specifies the face area by referring to the depth information and the image information of each of the components. The method of specifying the face area will be described in the following. - The aimed
object extractor 66 receives the photographing condition from the condition-storing unit 70. The aimed object extractor 66 extracts the aimed object based on the photographing condition. In this embodiment, the aimed object is the face of the photographed person. Therefore, at first, the component including the face is specified depending on assumptions such as "the person should be close to the camera", "the person should be in the middle of the image", or "the proportional relationship of the height of the person to the width and height of the image should be within a predetermined range". The distance from the camera to each of the components in the image is evaluated based on the depth information. The distance from the center of the image to each of the components in the image, and the proportional relationship of the height of the components, are evaluated based on the image information. Each of the values is multiplied by predetermined constants corresponding to each condition. The multiplied values are added for each of the components. The added values are defined as weighted averages. The component having the largest weighted average is extracted as the component including the aimed object. - The constants by which the values for each of the components are multiplied may be predetermined based on the aimed object. In this embodiment, for example, the aimed object is assumed to be the face of the photographed person. Therefore, the aimed
object extractor 66 specifies the area having a skin color as the face part, based on the image information. The colors of each of the components are evaluated based on the color distribution of the images. The values of the color distribution may also be multiplied by predetermined constants and the multiplied values are added for each of the components to give the weighted averages. - As described above, the aimed
object extractor 66 extracts an aimed object based on the depth information in addition to the image information. Therefore, even when a plurality of people are photographed in the image and their faces are close to each other, the faces of the different people can be distinctly extracted. - The
judgement location detector 68 detects the judgement location from the data for the face area extracted by the aimedobject extractor 66. Thejudgement location detector 68 receives the photographing condition from the condition-storingunit 70. Thejudgement location detector 68 detects the judgement location based on the photographing condition. In this embodiment, the judgement location is the eyes or mouth of the photographed person. Therefore, thejudgement location detector 68 detects the eyes and mouth from the face area. - There is relatively little variation in the eyes of people with respect to color, shapes or their place on the face. Therefore, patterns of eyes such as the color of the eyes, shape of the eyes, and the place of the eyes on the face are previously determined, and the parts which are approximately similar to the determined patterns of the eyes are recognized as the judgement location on the face. Similarly, there is relatively little variation in mouths of people with respect to color, shapes or place on the face. Therefore, patterns of the mouth are also previously determined and the parts which are approximately similar to the determined patterns of the mouth are recognized as the judgement location on the face.
- The
extractor 60 detects the judgement location from the extracted aimed object based on the image information for the aimed object. Therefore, the extractor 60 does not extract locations having similar shapes to the judgement location from the subject other than the aimed object included in the image. - The
judgement location detector 68 then outputs the data for the detected judgement locations to thetiming signal generator 80. - Referring back to FIG. 2, the method for judging the best timing for photographing an image will be explained in the following.
- The
timing signal generator 80 receives the data for the detected judgement locations from the extractor 60. The timing signal generator 80 also receives the photographing condition from the condition-storing unit 70. The timing signal generator 80 compares each of the judgement locations based on the reference situation for the photographing condition. The timing signal generator 80 then generates a timing signal when the judgement location satisfies the reference situation for the photographing condition. - When the photographing condition "the person is not blinking" is selected, the judgement location is the eyes and the reference situation is "the white part of his/her eyeball has a large dimension", as described above. Therefore, the
timing signal generator 80 calculates the dimension of the white part of the eye detected by the judgement location detector 68 for each of the images, based on the image information. The timing signal generator 80 generates a timing signal when the white part of the eye has a larger dimension than a predetermined dimension. The width of the eye is always the same, even when the person opens or closes his/her eye. Therefore, the predetermined dimension may be determined relative to the width of the eye. People usually blink both eyes at the same time; therefore, the timing signal generator 80 may check only one of the eyes of the photographed person. However, by checking both eyes, the desired judgement location can be selected more precisely. - When the photographing condition "the person's eyes are not red-eyed" is selected, the judgement location is the eyes and the reference situation is "the red part in his/her iris has a small dimension", as described above. Therefore, the
timing signal generator 80 calculates the dimension of the red part in the iris of the eye detected by the judgement location detector 68 for the image, based on the image information. The iris of his/her eye is recognized as being a cylindrical or elliptic area whose circumference has a brownish or blue/green color. The timing signal generator 80 generates a timing signal when the red part of the eye has a smaller dimension than a predetermined dimension. Both eyes of people are usually red-eyed at the same time; therefore, the timing signal generator 80 may check only one of the eyes of the photographed person. However, by checking both of his/her eyes, the desired judgement location can be selected more precisely. - When the photographing condition "the person is looking at the camera" is selected, the judgement location is the eye and the reference situation is "the normal vector of the iris in his/her eye is approximately equal to the angle of the line between the camera and his/her iris", as described above. Therefore, the
timing signal generator 80 recognizes the iris as being a cylindrical or elliptic area whose circumference has a brownish or blue/green color. Thetiming signal generator 80 then recognizes the center of the iris and the normal vector of the center of the iris. Thetiming signal generator 80 generates a timing signal when the normal vector of the iris in the eye is closer to the line between the camera and the iris than a predetermined distance. - The normal vector of the iris can be obtained from the relative position of the camera and the face of the person, the relative position of the face and the eyes of the person, and the relative position of the eyes and the irises of the person. The
timing signal generator 80 may judge the desired judgement location based on the normal vector obtained from these relative positions. - When the photographing condition “the person is smiling” is selected, the judgement location is the eyes or the mouth and the reference situation is “the white part in his/her eye has a small dimension”, “the width of his/her mouth is wide” or “the white part in his/her mouth has a large dimension”, as described above. Therefore, the
timing signal generator 80 calculates the dimension of the white part of the eye, the width of the mouth, and the dimension of the white part of the mouth detected by thejudgement location detector 68 for each of the images, based on the image information. Thetiming signal generator 80 generates a timing signal when the white part of the eye has a smaller dimension than a predetermined dimension, when the mouth has a wider width than a predetermined width, or when the white part of the mouth has a larger dimension than a predetermined dimension. The predetermined dimension for the white part of the eye is relatively determined with respect to the width of the eye. The predetermined width for the mouth is relatively determined with respect to the width of the face of the person. The predetermined dimension for the white part of the mouth is relatively determined with respect to the dimension of the face of the person. - The
timing signal generator 80 outputs a timing signal when the judgement location satisfies the above reference situations. As described above, thecontrol unit 50 extracts the face part based on the raw image and the information for the raw image. Thecontrol unit 50 then detects the judgement location from the data for the extracted face part. As thecamera 10 photographs a subject when the detected judgement location satisfies the photographing condition, thecamera 10 can automatically photograph a desirable refined image without bothering the photographer. - The method of generating a timing signal when a plurality of people is photographed, will be explained next.
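Before turning to the multi-person case, the relative-threshold tests described above can be sketched in code. The function names and the numeric thresholds are illustrative assumptions; the description fixes only that each threshold is relative to the eye width, the face width, or the face dimension.

```python
# Hypothetical ratio tests for the reference situations described above.
# The threshold constants are illustrative assumptions, not values taken
# from this description.

def eyes_open(white_area: float, eye_width: float, k: float = 0.4) -> bool:
    """'The person is not blinking': the white of the eye is judged
    relative to the square of the eye width, which stays constant
    whether the eye is open or closed."""
    return white_area > k * eye_width ** 2

def not_red_eyed(red_area: float, iris_area: float, k: float = 0.1) -> bool:
    """'The person's eyes are not red-eyed': the red part of the iris
    must be small relative to the iris area."""
    return red_area < k * iris_area

def smiling(mouth_width: float, face_width: float, k: float = 0.45) -> bool:
    """'The person is smiling' (mouth variant): the mouth is wide
    relative to the width of the face."""
    return mouth_width > k * face_width
```

The timing signal generator would evaluate such a predicate on every raw image and emit the timing signal on the first frame for which it returns True.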
- When each of the images includes a plurality of people, the extractor 60 extracts the aimed object and detects the judgement locations for each of the people. This means that the aimed object extractor 66 extracts the face parts of each of the people from each of the images. The judgement location detector 68 detects the eyes or the mouth of each of the people from each of the images.
- When each of the images includes a plurality of people, the timing signal generator 80 compares each of the judgement locations for each of the people against the reference situation for the photographing condition. The timing signal generator 80 may generate a timing signal when the judgement locations of many of the people satisfy the reference situation for the photographing condition. The timing signal generator 80 may output the timing signal when the ratio of the judgement locations satisfying the photographing condition to all of the plurality of judgement locations exceeds a predetermined ratio. In this case, the camera 10 can photograph a refined image in which many of the people have a good appearance.
- FIG. 4 is a flowchart showing a method of photographing an image. The camera 10 starts photographing the subject when the release button 52 is pushed (S100). When the camera 10 starts photographing, data for a parallactic image is input from the parallactic image data input unit 22 (S102). At the same time, data for raw images are continuously input from the image data input unit 24 (S104). Then, the aimed object extractor 66 extracts the face part of the targeted person as the aimed object (S106). The judgement location detector 68 detects the judgement location based on the image information for the face part (S108). The timing signal generator 80 generates and outputs a timing signal when the judgement location satisfies a predetermined photographing condition (S110). Upon receiving the timing signal, the image pickup control unit 56 controls the input unit 20 to photograph a refined image (S112).
- The image-processing unit 84 processes the refined image, for example, by compositing images and the like (S114). The recording unit 90 records the processed image on a recording medium (S116). The output unit 92 outputs the recorded image (S118). The photographing operation is then terminated (S120).
- FIG. 5 is a flowchart showing in detail the method of extracting a face part, step 106 in FIG. 4. The depth information extractor 62 extracts the depth information based on the parallactic image (S130). The image information extractor 64 extracts the image information based on the raw image (S132). Then, the aimed object extractor 66 extracts the face part of the targeted person based on the depth information and the image information (S134). When each of the images includes a plurality of people, the aimed object extractor 66 extracts the face parts of all of the people from each of the images (S136).
- FIG. 6 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4. The
judgement location detector 68 detects the judgement location based on the image information for the face part (S150). When each of the images includes a plurality of people, the judgement location detector 68 detects the judgement locations for all of the people (S152 and S150). Then, the input-condition-determining unit 82 determines the input condition based on the image information for the judgement location (S154).
- FIG. 7 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4. The timing signal generator 80 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the photographing condition (S160). The timing signal generator 80 continues judging whether or not the judgement location satisfies the photographing condition for a predetermined period (S164 and S160). The timing signal generator 80 generates a timing signal when the judgement location satisfies the photographing condition (S162). The image pickup control unit 56 controls the input unit 20 to stop photographing raw images when the judgement location does not satisfy the predetermined photographing condition within the predetermined period (S164 and S166).
- FIG. 8 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4. The image pickup control unit 56 controls the input unit 20 to automatically photograph a refined image based on the timing signal output at step 110 in FIG. 4 (S170). The input unit 20 inputs the data for the refined image (S172).
- At step 112 in FIG. 4, instead of the camera 10 automatically photographing a refined image, the user of the camera 10 may push the release button 52 to photograph the refined image upon receiving the alarm signal from the alarm 54.
- FIG. 9 is a flowchart showing in detail the method of photographing a refined image,
step 112 in FIG. 4. The alarm 54 outputs an alarm signal such as an alarm sound or an alarm light based on the timing signal generated at step 110 (S190). When the user, i.e. the photographer of the camera 10, notices the alarm signal and then pushes the release button 52 (S192), the camera 10 photographs a refined image (S194).
- As the alarm 54 outputs the alarm sound or the alarm light based on the timing signal, the user can photograph a refined image at an optimum timing without having to judge the timing himself. Furthermore, the targeted person can also notice the timing by the alarm sound or the alarm light.
- The alarm 54 may output an alarm signal such as an alarm sound or an alarm light when the timing signal is not output from the timing signal generator for a predetermined period.
- FIG. 10 is a flowchart showing in detail the method of generating a timing signal in which the alarm 54 outputs the alarm signal, step 110 in FIG. 4. The timing signal generator 80 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the photographing condition (S180). The timing signal generator 80 continues judging whether or not the judgement location satisfies the photographing condition for a predetermined period (S184 and S180). The timing signal generator 80 generates a timing signal when the judgement location satisfies the photographing condition (S182). The alarm 54 outputs an alarm signal such as an alarm sound or an alarm light when the timing signal generator 80 does not output the timing signal within the predetermined period (S184 and S186). The image pickup control unit 56 controls the input unit 20 to stop photographing raw images at this time (S188).
- As the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light when the timing signal is not output within a predetermined period, the photographer and the targeted person become aware, by the sound or the light, that the targeted person does not meet the photographing condition.
- FIG. 11 shows a
camera 110 of the second embodiment according to the present invention. The camera 110 continuously photographs raw images of a subject. The camera 110 then photographs a refined image of the subject, in accordance with a predetermined input condition, at the timing when one of the previously photographed raw images satisfies a predetermined photographing condition. The camera 110 in this embodiment is a silver halide type camera by which an image of a subject is formed on a silver halide film. The camera 110 includes an input unit 120, an A/D converter 30, a memory 40, a control unit 150, a release button 52 and an alarm 54. The A/D converter 30, the memory 40, the release button 52 and the alarm 54 in this embodiment have the same structures and functions as those explained in the first embodiment. Therefore, the explanation of these parts will be omitted.
- The input unit 120 includes a parallactic image data input unit 122, a raw image data input unit 124 and a refined image data input unit 130. The parallactic image data input unit 122 and the raw image data input unit 124 in this embodiment respectively have the same structures and functions as the parallactic image data input unit 22 and the image data input unit 24 explained in the first embodiment. The refined image data input unit 130 includes a lens 132, a lens stop 134, a shutter 136 and a photographing unit 138. The lens 132, the lens stop 134 and the shutter 136 in this embodiment respectively have the same structures and functions as the lens 25, the lens stop 26 and the shutter 27 shown in FIG. 1 of the first embodiment. The photographing unit 138 receives an optical image of a subject and forms an image of the subject on a silver halide film.
- The image data input unit 24 of the first embodiment inputs both a raw image and a refined image. In the camera 110 of this embodiment, by contrast, the raw image data input unit 124 inputs an electronic raw image, while the refined image data input unit 130 inputs a refined image and forms the refined image on a film. The raw image data input unit 124 has a CCD for receiving the image of the subject in the same way as the image data input unit 24 of the first embodiment. The raw image data input unit 124 outputs electronic signals for the image converted by the CCD.
- FIG. 12 is a block diagram of the control unit 150 according to the second embodiment. The control unit 150 includes an image pickup control unit 56, an image-forming control unit 58, an extractor 60, a condition-storing unit 70, a timing signal generator 80 and an input-condition-determining unit 82. The extractor 60, the condition-storing unit 70, the timing signal generator 80 and the input-condition-determining unit 82 in this embodiment respectively have the same structures and functions as those of the first embodiment; thus, the explanation of these parts will be omitted.
- The image-forming control unit 58 controls the input unit 120 to form an image of a subject. The image-forming control unit 58 controls at least one of the following conditions of the input unit 120: the focus condition of the lens 132, the aperture condition of the lens stop 134 and the exposure time of the shutter 136, based on the input condition determined by the input-condition-determining unit 82. The image-pickup control unit 56 controls the input unit 120 to photograph an image of a subject. The image-pickup control unit 56 also controls the photographing unit 138 to photograph a refined image, based on the input condition.
- In this embodiment, the
camera 110 includes the raw image data input unit 124 for inputting an electronic raw image in addition to the refined image data input unit 130 for inputting a refined image. Therefore, the camera can automatically set an optimum condition for photographing a refined image of the subject. Thus, the desired refined image can be obtained without photographing a plurality of images using silver halide films, which can be expensive. The camera 110 may have a switch, not shown in the drawings, for selecting between an automatic photographing mode in which the best timing for photographing the image is automatically determined, and a manual photographing mode in which the user of the camera 110 determines the best timing.
- A camera of the third embodiment according to the present invention will be explained in the following. The camera of this embodiment has the same structure as that of the first embodiment explained with reference to FIG. 1. The camera of the third embodiment continuously photographs raw images of a subject. The camera then photographs a refined image, in accordance with a predetermined input condition, at the timing when a previously photographed raw image satisfies a predetermined photographing condition.
- The camera of this embodiment has the same structure as that of the first embodiment and includes an input unit 20, an A/D converter 30, a memory 40, a control unit 50, a release button 52, an alarm 54, a recording unit 90 and an output unit 92. The camera of this embodiment may be, for example, a digital still camera or a digital video camera that can photograph a still image.
- FIG. 13 is a block diagram of the control unit 50 according to the third embodiment. The control unit 50 includes an image-pickup control unit 56, an image-forming control unit 58, an extractor 60, a condition-storing unit 70, a photographing condition judging unit 80, an input-condition-determining unit 82, and an image-processing unit 84.
- The extractor 60 receives, from the memory 40, a parallactic image photographed by the parallactic image data input unit 22 and a normal image photographed by the image data input unit 24. The normal image includes a raw image and a refined image. The extractor 60 extracts an aimed object from the normal image based on the information obtained from the parallactic image and the normal image. The information includes image information of the normal image and depth information of the parallactic image. The extractor 60 outputs data for the aimed object to the input-condition-determining unit 82 and to the image-processing unit 84.
- As described above, cameras are usually used to photograph human beings. Therefore, the best timing for photographing a refined image may be determined by the condition of a targeted person. It is therefore assumed in this embodiment that the extractor 60 extracts a face part of the targeted person as the aimed object.
- The
extractor 60 then detects a judgement location from the aimed object based on the information obtained from the parallactic images and the normal images. It is also assumed in this embodiment that the extractor 60 detects the shapes or colors of the eyes or the mouth of the targeted person as the judgement location.
- The condition-storing unit 70 stores predetermined photographing conditions related to the judgement location, which should be included in each of the raw images obtained by photographing the subject. The condition-storing unit 70 may store a plurality of photographing conditions. The condition-storing unit 70 may include a condition-setting unit, not shown in the drawings, by which a user can select at least one of the photographing conditions from among the plurality of photographing conditions.
- The best timing for photographing a refined image may be, for example, the timing when the targeted person performs a predetermined motion. This means that the best timing may be the timing when the aimed object of the targeted person shows a predetermined variation. The predetermined variation may be, for example, “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” or “the person's vision of sight follows a predetermined trail”. The condition-storing unit 70 stores these conditions as the photographing conditions.
- The photographing condition judging unit 80 outputs a timing signal for photographing an image. The photographing condition judging unit 80 outputs the timing signal when the judgement location detected by the extractor 60 shows a predetermined motion that satisfies the predetermined photographing condition stored in the condition-storing unit 70.
- The input-condition-determining unit 82 determines an input condition for inputting an image based on the information for the aimed object or the judgement location received from the extractor 60. The input-condition-determining unit 82 outputs the input condition to the image-forming control unit 58. The input condition may be, for example, a focus condition of the lens 25 such that the aimed object including the judgement location is in focus. As the input unit 20 inputs an image in accordance with the input condition, such as the focus condition of the lens 25, determined by the input-condition-determining unit 82, the camera of this embodiment can photograph a refined image in which the subject is in good condition.
- The image-forming control unit 58 controls the input unit 20 to form a refined image of the subject based on the input condition determined by the input-condition-determining unit 82. This means that the image-forming control unit 58 controls at least one of the conditions including the focus condition of the lens 25, the aperture condition of the lens stop 26, the exposure time of the shutter 27, and the condition of the parallactic shutter 34, based on the input condition.
- The image
pickup control unit 56 controls the input unit 20 to photograph a refined image of the subject based on the input condition determined by the input-condition-determining unit 82. This means that the image-pickup control unit 56 controls at least one of the conditions including the output signal of the CCD 29 and the output signal of the parallactic CCD 36, based on the input condition. The image-pickup control unit 56 controls the input unit 20 to photograph a refined image based on the timing signal output from the photographing condition judging unit 80. The image-pickup control unit 56 controls the image-processing unit 84 to process the refined image.
- The image-processing unit 84 receives, from the memory 40, the refined image photographed by the image data input unit 24. The image-processing unit 84 then processes the refined image based on the information for the aimed object or the judgement location extracted by the extractor 60. The refined image is processed in accordance with the process conditions explained in the first embodiment.
- FIG. 14 is a functional block diagram of the extractor 60. The extractor 60 includes a depth information extractor 62, an image information extractor 64, an aimed object extractor 66 and a judgement location detector 68.
- The depth information extractor 62 extracts the depth information, indicating the distance to each component of the subject, based on the data of the parallactic image received from the memory 40.
- The image information extractor 64 extracts the image information for the normal images from the data for the normal images received from the memory 40. The image information includes, for example, data of the normal image such as luminance distribution, intensity distribution, color distribution, texture distribution, and motion distribution.
- The aimed object extractor 66 extracts data for the face area of the person as the aimed object, based on the depth information and the image information. The aimed object is extracted in a manner similar to that explained in the first embodiment.
- The aimed
object extractor 66 outputs the information for the aimed object to the input-condition-determining unit 82 and the image-processing unit 84.
- As described above, the aimed object extractor 66 extracts an aimed object based on the depth information in addition to the image information. Therefore, even when a plurality of people are photographed in the image and their faces are close to each other, the faces of the different people can be distinctly extracted.
- The judgement location detector 68 detects the judgement location from the data for the aimed object extracted by the aimed object extractor 66. The judgement location is detected in accordance with a detecting condition different from the extracting condition by which the aimed object extractor 66 extracts the aimed object. In this embodiment, the judgement location is the eyes or mouth of the photographed person. Therefore, the judgement location detector 68 detects the eyes and mouth from the face area.
- The judgement location detector 68 outputs the information for the judgement location to the photographing condition judging unit 80.
- FIG. 15 is a block diagram of the function of the photographing condition judging unit 80. The photographing condition judging unit 80 includes a detection-starting unit 85, a variation detector 86 and a judging unit 88. The photographing condition includes a predetermined photographing condition related to the motion of the judgement location of the aimed object, and a starting condition for starting detection of the motion of the judgement location.
- The detection-starting unit 85 outputs a starting signal when the judgement location detected by the extractor 60 satisfies the predetermined starting condition. The variation detector 86 starts detecting variation in the motion of the judgement location upon receiving the starting signal from the detection-starting unit 85. The judging unit 88 outputs the timing signal for photographing a refined image when the variation in the motion of the judgement location detected by the variation detector 86 satisfies the predetermined photographing condition.
- The photographing conditions may be, for example, “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” or “the person's vision of sight follows a predetermined trail”. To avoid misjudgment, it is desirable that the photographing conditions be motions or variations that the targeted person does not usually perform in front of the camera.
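The chain just described, a detection-starting unit that arms the detector, a variation detector that accumulates frames, and a judging unit that fires, can be sketched as a small per-frame state machine. The class and its predicate callbacks are illustrative assumptions standing in for the actual image analysis:

```python
from typing import Any, Callable, List

class MotionConditionJudge:
    """Sketch of the detection-starting unit / variation detector /
    judging unit chain. `starts` tests the starting condition on one
    observation; `satisfies` tests the photographing condition on the
    history of observations gathered since the starting signal."""

    def __init__(self, starts: Callable[[Any], bool],
                 satisfies: Callable[[List[Any]], bool]) -> None:
        self.starts = starts
        self.satisfies = satisfies
        self.history: List[Any] = []   # empty until the starting signal
        self.armed = False

    def feed(self, obs: Any) -> bool:
        """Process one raw-image observation; True means 'emit the
        timing signal for photographing the refined image'."""
        if not self.armed:
            if self.starts(obs):
                self.armed = True      # starting signal: begin detection
                self.history = [obs]
            return False
        self.history.append(obs)
        return self.satisfies(self.history)
```

For example, with observations reduced to "open"/"closed" eye labels, `MotionConditionJudge(lambda o: o == "closed", lambda h: h[-1] == "open")` fires on the first open-eye frame after a closed-eye frame has armed the detector.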
- Each of the photographing conditions has a reference situation for the judgement location, which should meet the requirements of the photographing condition. The condition-storing unit 70 also stores the reference situations for the judgement location, each respectively corresponding to one of the photographing conditions. The reference situations for the judgement location corresponding to each of the photographing conditions will be described in the following.
- For conditions such as “the person is not blinking” and “the person is looking at the camera”, the reference situation may relate to the shape of the eye, the color of the eye, and the size of the eye. Whether each of the judgement locations satisfies each of these reference situations is judged in accordance with predetermined algorithms based on experience.
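The pairing of stored photographing conditions with their judgement locations and reference situations can be modeled as a simple lookup table. The entries below paraphrase conditions named in this description; the data structure itself is an illustrative assumption:

```python
# Illustrative condition store: each photographing condition maps to its
# judgement location and the reference situation judged against it.
REFERENCE_SITUATIONS = {
    "person is not blinking": ("eyes", "white part of the eye is large"),
    "eyes are not red-eyed": ("eyes", "red part of the iris is small"),
    "looking at the camera": ("eyes", "iris normal points at the camera"),
    "person is smiling": ("eyes or mouth",
                          "eye white small, or mouth wide"),
}

def reference_for(condition: str):
    """Look up (judgement location, reference situation) for a stored
    photographing condition; None when the condition is not stored."""
    return REFERENCE_SITUATIONS.get(condition)
```

A condition-setting unit would let the user pick one of the stored keys, and the judging logic would then evaluate the associated reference situation on each raw image.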
- When the photographing condition “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” is selected, the judgement location may be the eye of the person. The reference situation for the eye in this photographing condition is determined as follows. When a person blinks, his/her eyelid hides his/her eyeball. While the eye is partially closed during a blink, the white part of the eyeball in particular is hidden by the eyelid. This means that when the person is blinking, the white part of the eyeball is relatively small, and when the person is not blinking, the white part of the eyeball is relatively large. Therefore, whether the person's eyes are open is determined based on the dimension of the white part of the eyeball.
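The white-of-eye rule above reduces this condition to tracking how long the eye stays classified as closed. A minimal sketch over timestamped frames, assuming per-frame openness has already been decided (the function name and frame format are assumptions):

```python
def trigger_after_long_close(frames, min_closed: float = 2.0):
    """frames: ordered (timestamp_in_seconds, eye_is_open) pairs.
    Returns the timestamp of the first frame where the eye is open
    after having been closed for more than min_closed seconds, or
    None if the condition never occurs."""
    closed_since = None
    for t, is_open in frames:
        if not is_open:
            if closed_since is None:
                closed_since = t               # closed interval begins
        else:
            if closed_since is not None and t - closed_since > min_closed:
                return t                       # reopened after a long close
            closed_since = None                # short blink: reset
    return None
```

The description's later suggestion to fire about one second after the eyes reopen could be obtained by offsetting the returned timestamp.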
- The starting condition for the photographing condition “the person opens his/her eyes after he/she has been closing his/her eyes for more than two seconds” becomes “the person closes his/her eyes”. The detection-starting unit 85 outputs a starting signal when it detects the closed eye of the person. The variation detector 86 starts detecting variation of the eye upon receiving the starting signal. The variation detector 86 counts, from the data for the continuously input raw images, the period during which the person keeps his/her eyes closed. The variation detector 86 outputs the timing signal when the person opens his/her eyes after having kept them closed for more than two seconds. It is desirable for the variation detector 86 to output the timing signal one second after the person opens his/her eyes rather than at the moment the person opens his/her eyes.
- When the photographing condition “the person's vision of sight follows a predetermined trail” is selected, the judgement location may be the eye of the person. The reference situation for the eye in this photographing condition is determined as follows. The trail of the person's vision of sight can be detected by detecting the normal vector of the iris in the eye. First, the iris is recognized, from the eye detected by the judgement location detector 68, as a circular or elliptic area whose circumference has a brownish or blue/green color. The center of the iris is then detected based on the image information for the eye. The normal vector at the center of the iris is obtained based on the depth information.
- It is assumed in this photographing condition that the predetermined trail of the vision of sight is, for example, “the person looks to the upper left with respect to the camera, to the lower right with respect to the camera, and then at the camera”. The starting condition in this case becomes “the person looks to the upper left with respect to the camera”. The detection-starting unit 85 outputs a starting signal when it detects that the person is looking to the upper left with respect to the camera. The variation detector 86 starts detecting variation of the vision of sight of the person upon receiving the starting signal. The variation detector 86 detects the trail of the vision of sight based on the data for the plurality of input raw images. The judging unit 88 outputs the timing signal when the trail is “upper left, lower right and then at the camera”.
- The control unit 50 extracts the face part based on the data for the raw image and the information thereof, and then detects the judgement location from the information for the extracted face part. The control unit 50 then detects the variation of the judgement location and determines the timing for photographing when the detected judgement location satisfies the photographing condition. Therefore, the camera of this embodiment can automatically photograph at a timing when the targeted person is in good condition. The method of generating a timing signal when a plurality of people are photographed will be explained next.
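Before turning to the multi-person case, the trail check described above can be sketched as ordered sequence matching over per-frame gaze labels. Classifying each frame's gaze from the iris normal vector is outside this sketch; the labels and function name are assumptions:

```python
def trail_matched(gaze_labels,
                  trail=("upper-left", "lower-right", "camera")) -> bool:
    """True once the stages of `trail` have occurred in order within the
    per-frame gaze labels; repeated and intervening labels are allowed,
    since consecutive raw images often repeat the same direction."""
    stage = 0
    for label in gaze_labels:
        if label == trail[stage]:
            stage += 1
            if stage == len(trail):
                return True        # full trail observed: emit timing signal
    return False
```

The detection-starting unit corresponds to matching the first stage; the variation detector and judging unit correspond to advancing through, and completing, the remaining stages.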
- When each of the images includes a plurality of people, the judgement location detector 68 detects the judgement locations for each of the people. This means that the aimed object extractor 66 extracts the face parts of each of the people from each of the images. The judgement location detector 68 then detects the eyes or the mouth of each of the people from each of the images.
- At this time, the variation detector 86 detects the variation of the judgement locations for each of the people. The judging unit 88 outputs the timing signal when the variations of the plurality of judgement locations satisfy the photographing condition. The judging unit 88 selects the respective aimed objects including the judgement locations whose variation satisfies the photographing condition. The judging unit 88 then outputs the information of the aimed objects including the selected judgement locations to the input-condition-determining unit 82 and the image-processing unit 84.
- The method of photographing an image in this embodiment is almost the same as that of the first embodiment shown in FIGS. 4 and 5.
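The group decision above, like the corresponding rule of the first embodiment, fires when the proportion of people whose judgement locations satisfy the condition is high enough. A sketch with the ratio threshold as an assumed parameter:

```python
def group_timing_signal(satisfied, min_ratio: float = 0.8) -> bool:
    """satisfied: one bool per photographed person, True when that
    person's judgement location (or its variation) meets the
    photographing condition. Emits the timing signal when the
    satisfying fraction exceeds min_ratio."""
    if not satisfied:
        return False               # no detected people: nothing to judge
    return sum(satisfied) / len(satisfied) > min_ratio
```

Lowering `min_ratio` trades certainty that everyone looks good for a shorter wait until the timing signal fires.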
- FIG. 16 is a flowchart showing in detail the method of detecting a judgement location, step 108 in FIG. 4. The judgement location detector 68 detects the judgement location based on the image information for the face part (S250). When each of the images includes a plurality of people, the judgement location detector 68 detects the judgement locations for all of the people (S252 and S250).
- FIG. 17 is a flowchart showing in detail the method of generating a timing signal, step 110 in FIG. 4. The detection-starting unit 85 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S260). The detection-starting unit 85 continues judging whether or not the judgement location satisfies the starting condition for a predetermined period (S260 and S262). The variation detector 86 starts detecting the variation of the judgement location when the judgement location satisfies the starting condition (S261). The image-pickup control unit 56 controls the input unit 20 to stop photographing raw images when the judgement location does not satisfy the predetermined starting condition within the predetermined period (S262 and S263).
- The judging unit 88 then judges whether or not the variation of the judgement location satisfies the photographing condition (S264). The timing signal generator 80 generates a timing signal when the variation of the judgement location satisfies the photographing condition (S265). When the variation of the judgement location does not satisfy the photographing condition, the process returns to step S260 if the predetermined period has not expired. Then, the detection-starting unit 85 judges again whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S260). The image pickup control unit 56 controls the input unit 20 to stop photographing raw images when the predetermined period has expired (S266 and S267).
- FIG. 18 is a flowchart showing in detail the method of photographing a refined image, step 112 in FIG. 4. The image-pickup control unit 56 controls the input unit 20 to automatically photograph a refined image based on the timing signal output at step 110 in FIG. 4 (S270). The input unit 20 inputs the data for the refined image (S272).
- At the
step 112 in FIG. 4, the camera 10 may not automatically photograph a refined image; instead, the user of the camera 10 may press the release button 52 to photograph the refined image upon receiving the alarm signal from the alarm 54.
- The method of manually photographing a refined image by the user of the camera 10 is in accordance with the flowchart shown in FIG. 9, which is explained in the first embodiment. The alarm 54 outputs an alarm signal such as an alarm sound or an alarm light based on the timing signal generated at step 110 (S190). When the user, or photographer, of the camera 10 notices the alarm signal and presses the release button 52 (S192), the camera 10 photographs a refined image (S194).
- As the
alarm 54 outputs the alarm sound or the alarm light based on the timing signal, the user can photograph a refined image at an optimum timing without having to judge the timing himself. Furthermore, the targeted person can also notice the timing because of the alarm sound or the alarm light. - The
alarm 54 may output an alarm signal such as an alarm sound or an alarm light when the timing signal is not output from the timing signal generator for a predetermined period. - FIG. 19 is a flowchart showing in detail the method of generating a timing signal in which the
alarm 54 outputs the alarm signal, step 110 in FIG. 4. The detection starting unit 85 judges whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S300). The detection starting unit 85 continues judging whether or not the judgement location satisfies the starting condition for a predetermined period (S300 and S304). The variation detector 86 starts detecting the variation of the judgement location when the judgement location satisfies the starting condition (S302). The alarm 54 outputs an alarm signal such as an alarm sound or an alarm light when the photographing condition judging unit 80 does not output the timing signal within a predetermined period (S304 and S306). Then, the image pickup control unit 56 controls the input unit 20 to stop photographing raw images when the judgement location does not satisfy the predetermined starting condition within the predetermined period (S308).
- The judging unit 88 then judges whether or not the variation of the judgement location satisfies the photographing condition (S310). The timing signal generator 80 generates a timing signal when the variation of the judgement location satisfies the photographing condition (S312). When the variation of the judgement location does not satisfy the photographing condition, the process proceeds to step S314. If the predetermined period has not yet expired, the detection starting unit 85 judges again whether or not the judgement location detected by the judgement location detector 68 satisfies the starting condition (S314 and S300). If the predetermined period has expired at step S314, the alarm 54 outputs an alarm signal such as an alarm sound or an alarm light (S316), and the image pickup control unit 56 controls the input unit 20 to stop photographing raw images (S318).
- As the alarm 54 outputs the alarm signal, such as the alarm sound or the alarm light, when the timing signal is not output within a predetermined period, the photographer and the targeted person become aware, from the sound or the light, that the targeted person does not meet the photographing condition.
- The camera of the fourth embodiment will be explained in the following. The camera of this embodiment is a silver halide type camera, by which an image of a subject is formed on a silver halide film, and has the same structure as that explained in the second embodiment shown in FIG. 11. Therefore, the explanation of the structure of the camera in this embodiment will be omitted.
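The control flow of FIG. 19 can be sketched as a small Python loop. This is an illustrative reconstruction, not the patent's implementation: the function and predicate names (`generate_timing_signal`, `satisfies_start`, `satisfies_photo`), the period value, and the use of a monotonic clock are assumptions, and the real camera would obtain judgement locations from its own detector units.

```python
import time

def generate_timing_signal(locations, satisfies_start, satisfies_photo,
                           period_s=5.0, clock=time.monotonic):
    """Return "timing_signal" when the variation of the judgement location
    satisfies the photographing condition, or "alarm" when the
    predetermined period expires first (cf. FIG. 19, steps S300-S318)."""
    deadline = clock() + period_s
    history = []  # judgement locations seen so far (the "variation")
    for loc in locations:
        if clock() > deadline:        # S314: predetermined period expired
            return "alarm"            # S316/S318: alarm, stop photographing
        if not satisfies_start(loc):  # S300: starting condition not met yet
            continue
        history.append(loc)           # S302: variation detection in progress
        if satisfies_photo(history):  # S310: photographing condition met
            return "timing_signal"    # S312
    return "alarm"                    # raw images exhausted within the period
```

Each raw image contributes one judgement location; the photographing condition is evaluated on the accumulated sequence, which is how a "variation" (e.g. eyes open across several frames) rather than a single snapshot can be tested.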
- FIG. 20 is a block diagram of the control unit 150 in this embodiment. The control unit 150 includes an image pickup control unit 56, an image forming control unit 58, an extractor 60, a condition storing unit 70, a photographing condition judging unit 180, and an input-condition-determining unit 82. The extractor 60, the condition storing unit 70, the photographing condition judging unit 180 and the input-condition-determining unit 82 respectively have the same structures and functions as those explained in the first embodiment; therefore, the explanation of these parts will be omitted.
- The image forming control unit 58 controls the input unit 120 to form an image of a subject. The image forming control unit 58 controls at least one of the following conditions of the input unit 120, based on the input condition determined by the input-condition-determining unit 82: the focus condition of the lens 132, the aperture condition of the lens stop 134, and the exposure time of the shutter 136. The image pickup control unit 56 controls the input unit 120 to photograph an image of a subject. The image pickup control unit 56 also controls the photographing unit 138 to photograph a refined image based on the input condition.
- In this embodiment, the camera 110 includes the raw image data input unit 124 for inputting an electronic raw image, in addition to the image data input unit 130 for inputting a refined image. Therefore, the camera can automatically set an optimum condition for photographing a refined image of the subject. Thus, a desired refined image can be obtained without photographing a plurality of images using silver halide films, which can be expensive.
- A camera of the fifth embodiment according to the present invention will be explained in the following. The camera of this embodiment continuously photographs images of a subject. The camera outputs a timing signal when the targeted subject in the image satisfies the photographing condition. Upon receiving the timing signal, the camera of this embodiment records, as a refined image, one of the images that was photographed a predetermined period earlier than the timing signal.
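A timestamped ring buffer is one way to realize the delayed-capture idea of the fifth embodiment. The sketch below is an assumption-laden illustration: the class name, the `delay` and `capacity` parameters, and nearest-timestamp selection are choices made here for clarity, not details taken from the embodiment.

```python
from collections import deque

class ImageStore:
    """Sketch of an image storing unit: keeps recent (timestamp, frame)
    pairs and, on a timing signal, returns the frame photographed a
    predetermined period before the signal (delay compensation)."""

    def __init__(self, delay, capacity=64):
        self.delay = delay                    # predetermined earlier period
        self.buffer = deque(maxlen=capacity)  # oldest frames drop out first

    def add(self, timestamp, frame):
        """Store one continuously photographed raw image with its time record."""
        self.buffer.append((timestamp, frame))

    def on_timing_signal(self, signal_time):
        """Return the stored frame whose timestamp is closest to
        signal_time - delay, i.e. the raw image photographed a
        predetermined period before the timing signal."""
        target = signal_time - self.delay
        return min(self.buffer, key=lambda tf: abs(tf[0] - target))[1]
```

The bounded `deque` mirrors the "temporarily stores" behavior described below: only a short window of raw images needs to be retained, since anything older than the compensation delay can never be selected.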
- The camera of this embodiment includes a control unit 50. The structure of the camera of this embodiment other than the control unit 50 is the same as that explained in the first to fourth embodiments. Thus, the explanation of the same parts will be omitted.
- FIG. 21 is a block diagram of the control unit 50 according to the fifth embodiment. The control unit 50 includes an extractor 60, a condition storing unit 70, a timing signal generator 80, an image processing unit 84, and an image storing unit 140. The extractor 60, the condition storing unit 70, the timing signal generator 80 and the image processing unit 84 are the same as those explained in the first to fourth embodiments. Although only the timing signal generator 80 is shown in FIG. 21, the part having the numeral 80 may be the photographing condition judging unit explained in the third and the fourth embodiments.
- The image storing unit 140 temporarily stores the images photographed by the image data input unit 24 and input from the memory 40. Each image is stored together with a time record of when it was photographed. The image storing unit 140 receives the timing signal from the timing signal generator 80 and then outputs, to the image processing unit 84, one of the raw images photographed a predetermined period earlier than the timing signal, as the refined image. The image processing unit 84 processes the refined image based on the information from the extractor 60.
- FIG. 22 is a flowchart showing a method of photographing an image. The camera starts photographing the subject when the
release button 52 is pressed (S400). When the camera starts photographing, data for a parallactic image is input from the parallactic image data input unit 22 (S402). At the same time, data for raw images are continuously input from the image data input unit 24 (S404). The raw images are temporarily stored in the image storing unit 140. Then, the aimed object extractor 66 extracts the face part of the targeted person as the aimed object (S406). The judgement location detector 68 detects the judgement location based on the image information for the face part (S408). The photographing condition judging unit 180 generates and outputs a timing signal when the judgement location satisfies a predetermined photographing condition (S410). Upon receiving the timing signal, the image storing unit 140 selects, as the refined image, one of the raw images photographed a predetermined period earlier than the timing signal. The image storing unit 140 outputs the refined image to the image processing unit 84 (S412).
- The image processing unit 84 processes the refined image (S414). The processing of the refined image may include compositing a plurality of refined images and the like. The recording unit 90 records the processed image on a recording medium (S416). The output unit 92 outputs the processed image (S418), and the photographing operation is terminated (S420).
- The detailed operations of steps S406, S408 and S410 are the same as those explained in the previous embodiments. Thus, an explanation of these steps will be omitted.
- The image storing unit 140 may store, as the refined images, all of the raw images photographed between a timing earlier than the timing signal by a predetermined period and the timing of the timing signal. In this case, the image processing unit 84 processes the plurality of refined images.
- As described above, the camera stores, as the refined image, the raw image photographed a predetermined period earlier than the timing signal, based on the timing signal. Therefore, the refined image is selected with the delay time taken into account, even when the extractor 60 takes a certain time to extract the aimed object and detect the judgement location. Thus, an image in which the targeted person has a good appearance can be obtained.
- Furthermore, the camera may store, as the refined images, all of the raw images photographed from a timing earlier than the timing signal by a predetermined period up to the timing of the timing signal. Therefore, an image in which the targeted person has a good appearance can be selected from among them.
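The whole-window variant, in which every raw image between the compensated timestamp and the timing signal becomes a refined-image candidate, reduces to a filter over timestamped frames. This is a hedged sketch: the function and parameter names are illustrative assumptions, and the stored raw images are modeled here simply as (timestamp, frame) pairs.

```python
def frames_in_window(buffer, signal_time, period):
    """Return every frame photographed between signal_time - period and
    the timing signal itself (inclusive), oldest first. `buffer` is a
    list of (timestamp, frame) pairs kept by the image storing unit."""
    return [f for t, f in buffer if signal_time - period <= t <= signal_time]
```

Selecting from the whole window rather than a single delayed frame trades memory for robustness: even if the compensation delay slightly misestimates the extractor's processing time, a frame with the targeted person in good condition is likely to fall somewhere inside the window.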
- FIG. 23 shows a camera 210 of the sixth embodiment according to the present invention. The camera 210 of this embodiment continuously photographs a plurality of raw images of a subject in the same way as the first to fifth embodiments. The camera 210 outputs a timing signal when the raw image satisfies the photographing condition.
- The camera 210 of this embodiment has the same structure as that of the first embodiment and further includes a communication unit 150. The camera 210 outputs the timing signal through the communication unit 150 to control the operation of an external apparatus 160 based on the timing signal. The communication unit 150 of the camera 210 sends the timing signal to the external apparatus 160 by wireless means. The communication unit 150 of the camera 210 and the external apparatus 160 may be held in communication with each other by wireless means, such as radio or infrared radiation, or by cables, such as a USB or a LAN. The external apparatus 160 may be, for example, a camera for photographing a refined image of the target, or an illuminator.
- In this embodiment, the camera 210 continuously photographs raw images of a subject. The camera 210 outputs a timing signal when the raw image satisfies a predetermined selecting condition. The timing signal is transferred from the camera 210 to the external apparatus 160 through the communication unit 150 of the camera 210. When the external apparatus 160 is another camera for photographing a refined image, the external apparatus 160 photographs a refined image of the subject based on the timing signal from the camera 210.
- Using the camera 210 of this embodiment, even a silver halide type camera that does not include a raw image data input unit can photograph a refined image of a subject at the timing when the targeted person is in good condition. Thus, a desired refined image can be obtained without photographing a plurality of images using silver halide films, which can be expensive.
- As described above, according to the embodiments of the present invention, an image in which a targeted object satisfies a predetermined photographing condition can be obtained.
- Although the present invention has been described by way of exemplary embodiments, it should be understood that many changes and substitutions may be made by those skilled in the art without departing from the spirit and the scope of the present invention, which is defined only by the appended claims.
Claims (19)
1. A camera comprising:
an image data input unit forming a plurality of images of a subject for photographing said subject;
a condition storing unit storing a predetermined photographing condition related to a desirable variation of said subject;
a variation detector detecting variation of said subject in said plurality of said images based on information of said plurality of images; and
a timing signal generator outputting a timing signal when said variation of said subject satisfies said photographing condition.
2. A camera as set forth in claim 1 , further comprising: an extractor extracting data of an aimed object from each of said plurality of images of said subject based on an extracting condition,
wherein said photographing condition includes a predetermined condition related to a desirable aimed object,
said variation detector detects variation of said aimed object in said plurality of images based on said information of said plurality of images, and
said timing signal generator outputs said timing signal when said variation of said aimed object satisfies said photographing condition.
3. A camera as set forth in claim 2 , wherein said extracting condition is based on depth information of said plurality of images indicating the distance to each part of said subject.
4. A camera as set forth in claim 2 ,
wherein said extractor detects data of a judgement location from said data of said aimed object in each of said plurality of images based on a detecting condition different from said extracting condition,
said photographing condition includes a predetermined photographing condition related to a desirable judgement location,
said variation detector detects variation of said judgement location in said plurality of images based on said information of said plurality of images, and
said timing signal generator outputs said timing signal when said variation of said judgement location satisfies said photographing condition.
5. A camera as set forth in claim 4 ,
wherein said photographing condition includes a predetermined starting condition for starting detection of said variation of said judgement location, and
said variation detector starts detecting said variation of said judgement location when said judgement location satisfies said starting condition.
6. A camera as set forth in claim 2 ,
wherein said extractor extracts data of a plurality of said aimed objects from each of said plurality of images,
said variation detector detects variation of each of said plurality of said aimed objects in said plurality of images based on information of said plurality of images, and
said timing signal generator outputs said timing signal when said variation of said plurality of said aimed objects satisfy said photographing condition.
7. A camera as set forth in claim 6 ,
wherein said extractor detects data of a plurality of judgement locations from each of said data of said plurality of aimed objects based on a detecting condition different from said extracting condition,
said photographing condition includes a predetermined photographing condition related to desirable variation of said judgement location,
said variation detector detects variation of each of said plurality of said judgement locations in said plurality of images based on information of said plurality of images, and
said timing signal generator outputs said timing signal when said variation of said plurality of said judgement locations satisfy said photographing condition.
8. A camera as set forth in claim 1 further comprising an image pickup control unit controlling said input unit for photographing said image based on said timing signal.
9. A camera as set forth in claim 1 , further comprising an illuminator illuminating said subject based on said timing signal.
10. A camera as set forth in claim 1 , further comprising a recording unit recording said image on a replaceable nonvolatile recording medium based on said timing signal.
11. A camera as set forth in claim 1 , further comprising an alarm outputting an alarm signal for notifying that said subject satisfies said photographing condition based on said timing signal.
12. A camera as set forth in claim 1 ,
wherein said photographing condition includes a plurality of photographing conditions, and
said camera further comprises a condition-setting unit previously selecting at least one of said photographing conditions for photographing said image, from among said plurality of photographing conditions.
13. A camera as set forth in claim 8 , wherein said timing signal generator selects said judgement location satisfying said photographing condition from among said plurality of said judgement locations in said plurality of images, and outputs information for said aimed object including said judgement location, and
said camera further comprising:
an input condition determining unit determining an input condition for inputting said image based on information for said judgement location; and
an image forming control unit controlling an input unit for forming said image of said subject based on said input condition.
14. A camera as set forth in claim 8 , wherein said timing signal generator selects said judgement location satisfying
said photographing condition from among said plurality of said judgement locations in said plurality of images, and outputs information for said aimed object including said judgement location, and
said camera further comprising an image processing unit processing said image based on said information for said judgement location.
15. A method of photographing a plurality of images of a subject comprising:
detecting variation of said subject in said plurality of said images based on information for said plurality of images;
outputting a timing signal when said variation of said subject satisfies a predetermined photographing condition related to a desirable variation of said subject.
16. A method as set forth in claim 15 , further comprising extracting data of an aimed object from each of said plurality of images of said subject based on an extracting condition,
said detecting includes detecting variation of said aimed object based on information for said image, and
said timing signal is output when said variation of said aimed object satisfies said photographing condition.
17. A method as set forth in claim 16 , wherein said extraction of said aimed object includes detecting data of a judgement location from said data of said aimed object in each of said plurality of images based on a detecting condition different from said extracting condition,
said detecting variation of said subject includes detecting variation of said judgement location based on information for said image, and
said timing signal is output when said variation of said judgement location satisfies said photographing condition.
18. A method as set forth in claim 17 , wherein said photographing condition includes a predetermined starting condition for starting detection of said variation of said judgement location, and
said detecting of variation starts detecting said variation of said judgement location when said judgement location satisfies said starting condition.
19. A method as set forth in claim 15 , further comprising photographing said image based on said timing signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/798,375 US20040170397A1 (en) | 1999-06-03 | 2004-03-12 | Camera and method of photographing good image |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JPHEI11-157159 | 1999-06-03 | ||
JP11157159A JP2000347277A (en) | 1999-06-03 | 1999-06-03 | Camera and method of pick up |
JPHEI11-158666 | 1999-06-04 | ||
JP11158666A JP2000347278A (en) | 1999-06-04 | 1999-06-04 | Camera and photographing method |
US09/586,600 US7248300B1 (en) | 1999-06-03 | 2000-06-02 | Camera and method of photographing good image |
US10/798,375 US20040170397A1 (en) | 1999-06-03 | 2004-03-12 | Camera and method of photographing good image |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/586,600 Division US7248300B1 (en) | 1999-06-03 | 2000-06-02 | Camera and method of photographing good image |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040170397A1 true US20040170397A1 (en) | 2004-09-02 |
Family
ID=38266876
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/586,600 Expired - Lifetime US7248300B1 (en) | 1999-06-03 | 2000-06-02 | Camera and method of photographing good image |
US10/798,375 Abandoned US20040170397A1 (en) | 1999-06-03 | 2004-03-12 | Camera and method of photographing good image |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/586,600 Expired - Lifetime US7248300B1 (en) | 1999-06-03 | 2000-06-02 | Camera and method of photographing good image |
Country Status (1)
Country | Link |
---|---|
US (2) | US7248300B1 (en) |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050243185A1 (en) * | 2004-05-03 | 2005-11-03 | Samsung Techwin Co., Ltd. | Method for controlling digital photographing apparatus, and digital photographing apparatus using the method |
US20070201726A1 (en) * | 2006-02-24 | 2007-08-30 | Eran Steinberg | Method and Apparatus for Selective Rejection of Digital Images |
US20070201724A1 (en) * | 2006-02-24 | 2007-08-30 | Eran Steinberg | Method and Apparatus for Selective Disqualification of Digital Images |
US20070201725A1 (en) * | 2006-02-24 | 2007-08-30 | Eran Steinberg | Digital Image Acquisition Control and Correction Method and Apparatus |
US20070263934A1 (en) * | 2001-09-18 | 2007-11-15 | Noriaki Ojima | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US20080068466A1 (en) * | 2006-09-19 | 2008-03-20 | Fujifilm Corporation | Imaging apparatus, method, and program |
US20080266419A1 (en) * | 2007-04-30 | 2008-10-30 | Fotonation Ireland Limited | Method and apparatus for automatically controlling the decisive moment for an image acquisition device |
WO2009053863A1 (en) * | 2007-10-26 | 2009-04-30 | Sony Ericsson Mobile Communications Ab | Automatic timing of a photographic shot |
US20090167889A1 (en) * | 2007-12-28 | 2009-07-02 | Casio Computer Co., Ltd. | Image capturing device |
US20090190803A1 (en) * | 2008-01-29 | 2009-07-30 | Fotonation Ireland Limited | Detecting facial expressions in digital images |
US20090304289A1 (en) * | 2008-06-06 | 2009-12-10 | Sony Corporation | Image capturing apparatus, image capturing method, and computer program |
US7684630B2 (en) | 2003-06-26 | 2010-03-23 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US20100079613A1 (en) * | 2008-06-06 | 2010-04-01 | Sony Corporation | Image capturing apparatus, image capturing method, and computer program |
US7693311B2 (en) | 2003-06-26 | 2010-04-06 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US7809162B2 (en) | 2003-06-26 | 2010-10-05 | Fotonation Vision Limited | Digital image processing using face detection information |
US7844076B2 (en) | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US7844135B2 (en) | 2003-06-26 | 2010-11-30 | Tessera Technologies Ireland Limited | Detecting orientation of digital images using face detection information |
US7855737B2 (en) | 2008-03-26 | 2010-12-21 | Fotonation Ireland Limited | Method of making a digital camera image of a scene including the camera user |
US7864990B2 (en) | 2006-08-11 | 2011-01-04 | Tessera Technologies Ireland Limited | Real-time face tracking in a digital image acquisition device |
US20110007174A1 (en) * | 2009-05-20 | 2011-01-13 | Fotonation Ireland Limited | Identifying Facial Expressions in Acquired Digital Images |
US7912245B2 (en) | 2003-06-26 | 2011-03-22 | Tessera Technologies Ireland Limited | Method of improving orientation and color balance of digital images using face detection information |
US7916971B2 (en) | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
US7916897B2 (en) | 2006-08-11 | 2011-03-29 | Tessera Technologies Ireland Limited | Face tracking for controlling imaging parameters |
US20110116780A1 (en) * | 2007-10-31 | 2011-05-19 | Sony Corporation | Photographic apparatus and photographic method |
US7953251B1 (en) | 2004-10-28 | 2011-05-31 | Tessera Technologies Ireland Limited | Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images |
US7962629B2 (en) | 2005-06-17 | 2011-06-14 | Tessera Technologies Ireland Limited | Method for establishing a paired connection between media devices |
US7965875B2 (en) | 2006-06-12 | 2011-06-21 | Tessera Technologies Ireland Limited | Advances in extending the AAM techniques from grayscale to color images |
CN102158649A (en) * | 2010-02-01 | 2011-08-17 | 奥林巴斯映像株式会社 | Photographic device and photographic method thereof |
EP2360620A1 (en) * | 2010-02-24 | 2011-08-24 | Research In Motion Limited | Eye blink avoidance during image acquisition in a mobile communications device with digital camera functionality |
US20110205383A1 (en) * | 2010-02-24 | 2011-08-25 | Research In Motion Limited | Eye blink avoidance during image acquisition in a mobile communications device with digital camera functionality |
US8050465B2 (en) | 2006-08-11 | 2011-11-01 | DigitalOptics Corporation Europe Limited | Real-time face tracking in a digital image acquisition device |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
US8155397B2 (en) | 2007-09-26 | 2012-04-10 | DigitalOptics Corporation Europe Limited | Face tracking in a camera processor |
US8213737B2 (en) | 2007-06-21 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US8224039B2 (en) | 2007-02-28 | 2012-07-17 | DigitalOptics Corporation Europe Limited | Separating a directional lighting variability in statistical face modelling based on texture space decomposition |
US20120236163A1 (en) * | 2009-09-30 | 2012-09-20 | Panasonic Corporation | Photography device, photography method, and program |
US8330831B2 (en) | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US8345114B2 (en) | 2008-07-30 | 2013-01-01 | DigitalOptics Corporation Europe Limited | Automatic face and skin beautification using face detection |
US8379917B2 (en) | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
US8494286B2 (en) | 2008-02-05 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Face detection in mid-shot digital images |
US8498452B2 (en) | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US8509496B2 (en) | 2006-08-11 | 2013-08-13 | DigitalOptics Corporation Europe Limited | Real-time face tracking with reference images |
CN103369248A (en) * | 2013-07-20 | 2013-10-23 | 厦门美图移动科技有限公司 | Method for photographing allowing closed eyes to be opened |
US8593542B2 (en) | 2005-12-27 | 2013-11-26 | DigitalOptics Corporation Europe Limited | Foreground/background separation using reference images |
US8649604B2 (en) | 2007-03-05 | 2014-02-11 | DigitalOptics Corporation Europe Limited | Face searching and detection in a digital image acquisition device |
US8675991B2 (en) | 2003-06-26 | 2014-03-18 | DigitalOptics Corporation Europe Limited | Modification of post-viewing parameters for digital images using region or feature information |
US8682097B2 (en) | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US8836777B2 (en) | 2011-02-25 | 2014-09-16 | DigitalOptics Corporation Europe Limited | Automatic detection of vertical gaze using an embedded imaging device |
US8989453B2 (en) | 2003-06-26 | 2015-03-24 | Fotonation Limited | Digital image processing using face detection information |
US9129381B2 (en) | 2003-06-26 | 2015-09-08 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
WO2015138169A1 (en) * | 2014-03-10 | 2015-09-17 | Qualcomm Incorporated | Blink and averted gaze avoidance in photographic images |
US9692964B2 (en) | 2003-06-26 | 2017-06-27 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
JP2017525069A (en) * | 2014-07-11 | 2017-08-31 | インテル コーポレイション | Dynamic control for data capture |
US10949674B2 (en) | 2015-12-24 | 2021-03-16 | Intel Corporation | Video summarization using semantic information |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007535266A (en) * | 2004-04-29 | 2007-11-29 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Device for supporting electronic view finding |
JP4757559B2 (en) * | 2004-08-11 | 2011-08-24 | 富士フイルム株式会社 | Apparatus and method for detecting components of a subject |
US10380267B2 (en) | 2005-10-26 | 2019-08-13 | Cortica, Ltd. | System and method for tagging multimedia content elements |
US9384196B2 (en) | 2005-10-26 | 2016-07-05 | Cortica, Ltd. | Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof |
JP2009010776A (en) * | 2007-06-28 | 2009-01-15 | Sony Corp | Imaging device, photography control method, and program |
JP5129683B2 (en) * | 2008-08-05 | 2013-01-30 | キヤノン株式会社 | Imaging apparatus and control method thereof |
JP5361547B2 (en) * | 2008-08-07 | 2013-12-04 | キヤノン株式会社 | Imaging apparatus, imaging method, and program |
US9111287B2 (en) * | 2009-09-30 | 2015-08-18 | Microsoft Technology Licensing, Llc | Video content-aware advertisement placement |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4881127A (en) * | 1987-02-25 | 1989-11-14 | Konica Corporation | Still video camera with electronic shutter and flash |
US6539100B1 (en) * | 1999-01-27 | 2003-03-25 | International Business Machines Corporation | Method and apparatus for associating pupils with subjects |
US6606117B1 (en) * | 1997-09-15 | 2003-08-12 | Canon Kabushiki Kaisha | Content information gathering apparatus system and method |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS58166629U (en) * | 1982-04-30 | 1983-11-07 | オリンパス光学工業株式会社 | focus detection camera |
US5619264A (en) * | 1988-02-09 | 1997-04-08 | Canon Kabushiki Kaisha | Automatic focusing device |
JPH0537940A (en) | 1991-08-01 | 1993-02-12 | Minolta Camera Co Ltd | Video camera |
JPH04156526A (en) | 1990-10-19 | 1992-05-29 | Nikon Corp | Line of sight detection-operated camera |
JPH05100148A (en) | 1991-10-04 | 1993-04-23 | Nikon Corp | Camera with line of sight detecting device |
JPH0540303A (en) | 1991-08-05 | 1993-02-19 | Canon Inc | Camera |
JPH07295085A (en) | 1994-04-28 | 1995-11-10 | Canon Inc | Blink input device and camera |
JPH08251475A (en) | 1995-03-11 | 1996-09-27 | Nissan Motor Co Ltd | Open eye sensor for image pickup device |
JPH095815A (en) | 1995-06-19 | 1997-01-10 | Canon Inc | Camera |
JPH09181866A (en) | 1995-12-25 | 1997-07-11 | Olympus Optical Co Ltd | Camera system |
JPH09212620A (en) | 1996-01-31 | 1997-08-15 | Nissha Printing Co Ltd | Manufacture of face image |
JP3683649B2 (en) | 1996-06-28 | 2005-08-17 | 富士通株式会社 | Image shooting device |
JPH10178585A (en) * | 1996-12-19 | 1998-06-30 | Fuji Photo Film Co Ltd | Image generator |
JP3754155B2 (en) | 1996-12-26 | 2006-03-08 | 富士写真フイルム株式会社 | Photography equipment |
JP3728848B2 (en) * | 1997-02-07 | 2005-12-21 | 株式会社ニコン | Single-lens reflex camera with built-in flash |
JPH1114512A (en) | 1997-06-20 | 1999-01-22 | Tomakomai Rinshiyou Kensa Center:Kk | Deodorizing feces sampling container, smell suppressing method for feces, and deodorant |
JPH11122526A (en) | 1997-10-14 | 1999-04-30 | Oki Electric Ind Co Ltd | Tracking image pickup device |
- 2000-06-02: US US09/586,600 patent/US7248300B1/en not_active Expired - Lifetime
- 2004-03-12: US US10/798,375 patent/US20040170397A1/en not_active Abandoned
Cited By (123)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8421899B2 (en) | 2001-09-18 | 2013-04-16 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US20070263909A1 (en) * | 2001-09-18 | 2007-11-15 | Noriaki Ojima | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US20070263935A1 (en) * | 2001-09-18 | 2007-11-15 | Sanno Masato | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US20070263933A1 (en) * | 2001-09-18 | 2007-11-15 | Noriaki Ojima | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US20070268370A1 (en) * | 2001-09-18 | 2007-11-22 | Sanno Masato | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US7903163B2 (en) | 2001-09-18 | 2011-03-08 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US20110115940A1 (en) * | 2001-09-18 | 2011-05-19 | Noriaki Ojima | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US7973853B2 (en) | 2001-09-18 | 2011-07-05 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method calculating an exposure based on a detected face |
US7978261B2 (en) | 2001-09-18 | 2011-07-12 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US7920187B2 (en) | 2001-09-18 | 2011-04-05 | Ricoh Company, Limited | Image pickup device that identifies portions of a face |
US7787025B2 (en) | 2001-09-18 | 2010-08-31 | Ricoh Company, Limited | Image pickup device that cuts out a face image from subject image data |
US20070263934A1 (en) * | 2001-09-18 | 2007-11-15 | Noriaki Ojima | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US8989453B2 (en) | 2003-06-26 | 2015-03-24 | Fotonation Limited | Digital image processing using face detection information |
US8055090B2 (en) | 2003-06-26 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US8498446B2 (en) | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Method of improving orientation and color balance of digital images using face detection information |
US7912245B2 (en) | 2003-06-26 | 2011-03-22 | Tessera Technologies Ireland Limited | Method of improving orientation and color balance of digital images using face detection information |
US8326066B2 (en) | 2003-06-26 | 2012-12-04 | DigitalOptics Corporation Europe Limited | Digital image adjustable compression and resolution using face detection information |
US8224108B2 (en) | 2003-06-26 | 2012-07-17 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US8675991B2 (en) | 2003-06-26 | 2014-03-18 | DigitalOptics Corporation Europe Limited | Modification of post-viewing parameters for digital images using region or feature information |
US7684630B2 (en) | 2003-06-26 | 2010-03-23 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US8131016B2 (en) | 2003-06-26 | 2012-03-06 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US7693311B2 (en) | 2003-06-26 | 2010-04-06 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US7702136B2 (en) | 2003-06-26 | 2010-04-20 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US8948468B2 (en) | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US8126208B2 (en) | 2003-06-26 | 2012-02-28 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US8498452B2 (en) | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US7809162B2 (en) | 2003-06-26 | 2010-10-05 | Fotonation Vision Limited | Digital image processing using face detection information |
US7844076B2 (en) | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US7844135B2 (en) | 2003-06-26 | 2010-11-30 | Tessera Technologies Ireland Limited | Detecting orientation of digital images using face detection information |
US7848549B2 (en) | 2003-06-26 | 2010-12-07 | Fotonation Vision Limited | Digital image processing using face detection information |
US7853043B2 (en) | 2003-06-26 | 2010-12-14 | Tessera Technologies Ireland Limited | Digital image processing using face detection information |
US8005265B2 (en) | 2003-06-26 | 2011-08-23 | Tessera Technologies Ireland Limited | Digital image processing using face detection information |
US7860274B2 (en) | 2003-06-26 | 2010-12-28 | Fotonation Vision Limited | Digital image processing using face detection information |
US9053545B2 (en) | 2003-06-26 | 2015-06-09 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US9129381B2 (en) | 2003-06-26 | 2015-09-08 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US9692964B2 (en) | 2003-06-26 | 2017-06-27 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US8330831B2 (en) | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US20050243185A1 (en) * | 2004-05-03 | 2005-11-03 | Samsung Techwin Co., Ltd. | Method for controlling digital photographing apparatus, and digital photographing apparatus using the method |
US8135184B2 (en) | 2004-10-28 | 2012-03-13 | DigitalOptics Corporation Europe Limited | Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images |
US8320641B2 (en) | 2004-10-28 | 2012-11-27 | DigitalOptics Corporation Europe Limited | Method and apparatus for red-eye detection using preview or other reference images |
US7953251B1 (en) | 2004-10-28 | 2011-05-31 | Tessera Technologies Ireland Limited | Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images |
US7962629B2 (en) | 2005-06-17 | 2011-06-14 | Tessera Technologies Ireland Limited | Method for establishing a paired connection between media devices |
US8593542B2 (en) | 2005-12-27 | 2013-11-26 | DigitalOptics Corporation Europe Limited | Foreground/background separation using reference images |
US8682097B2 (en) | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US20070201724A1 (en) * | 2006-02-24 | 2007-08-30 | Eran Steinberg | Method and Apparatus for Selective Disqualification of Digital Images |
US20070201726A1 (en) * | 2006-02-24 | 2007-08-30 | Eran Steinberg | Method and Apparatus for Selective Rejection of Digital Images |
US8285001B2 (en) | 2006-02-24 | 2012-10-09 | DigitalOptics Corporation Europe Limited | Method and apparatus for selective disqualification of digital images |
US20110033112A1 (en) * | 2006-02-24 | 2011-02-10 | Tessera Technologies Ireland Limited | Method and apparatus for selective disqualification of digital images |
US7995795B2 (en) | 2006-02-24 | 2011-08-09 | Tessera Technologies Ireland Limited | Method and apparatus for selective disqualification of digital images |
EP1989663A4 (en) * | 2006-02-24 | 2009-02-25 | Fotonation Vision Ltd | Method and apparatus for selective disqualification of digital images |
EP1989663A1 (en) * | 2006-02-24 | 2008-11-12 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US8005268B2 (en) | 2006-02-24 | 2011-08-23 | Tessera Technologies Ireland Limited | Digital image acquisition control and correction method and apparatus |
US20070201725A1 (en) * | 2006-02-24 | 2007-08-30 | Eran Steinberg | Digital Image Acquisition Control and Correction Method and Apparatus |
US7551754B2 (en) | 2006-02-24 | 2009-06-23 | Fotonation Vision Limited | Method and apparatus for selective rejection of digital images |
US7792335B2 (en) | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US8265348B2 (en) | 2006-02-24 | 2012-09-11 | DigitalOptics Corporation Europe Limited | Digital image acquisition control and correction method and apparatus |
US7804983B2 (en) | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US7965875B2 (en) | 2006-06-12 | 2011-06-21 | Tessera Technologies Ireland Limited | Advances in extending the AAM techniques from grayscale to color images |
US8055029B2 (en) | 2006-08-11 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Real-time face tracking in a digital image acquisition device |
US8050465B2 (en) | 2006-08-11 | 2011-11-01 | DigitalOptics Corporation Europe Limited | Real-time face tracking in a digital image acquisition device |
US7916897B2 (en) | 2006-08-11 | 2011-03-29 | Tessera Technologies Ireland Limited | Face tracking for controlling imaging parameters |
US8385610B2 (en) | 2006-08-11 | 2013-02-26 | DigitalOptics Corporation Europe Limited | Face tracking for controlling imaging parameters |
US7864990B2 (en) | 2006-08-11 | 2011-01-04 | Tessera Technologies Ireland Limited | Real-time face tracking in a digital image acquisition device |
US8509496B2 (en) | 2006-08-11 | 2013-08-13 | DigitalOptics Corporation Europe Limited | Real-time face tracking with reference images |
US8270674B2 (en) | 2006-08-11 | 2012-09-18 | DigitalOptics Corporation Europe Limited | Real-time face tracking in a digital image acquisition device |
US20080068466A1 (en) * | 2006-09-19 | 2008-03-20 | Fujifilm Corporation | Imaging apparatus, method, and program |
US8284264B2 (en) * | 2006-09-19 | 2012-10-09 | Fujifilm Corporation | Imaging apparatus, method, and program |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
US8224039B2 (en) | 2007-02-28 | 2012-07-17 | DigitalOptics Corporation Europe Limited | Separating a directional lighting variability in statistical face modelling based on texture space decomposition |
US8509561B2 (en) | 2007-02-28 | 2013-08-13 | DigitalOptics Corporation Europe Limited | Separating directional lighting variability in statistical face modelling based on texture space decomposition |
US8649604B2 (en) | 2007-03-05 | 2014-02-11 | DigitalOptics Corporation Europe Limited | Face searching and detection in a digital image acquisition device |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US8923564B2 (en) | 2007-03-05 | 2014-12-30 | DigitalOptics Corporation Europe Limited | Face searching and detection in a digital image acquisition device |
US9224034B2 (en) | 2007-03-05 | 2015-12-29 | Fotonation Limited | Face searching and detection in a digital image acquisition device |
US20080266419A1 (en) * | 2007-04-30 | 2008-10-30 | Fotonation Ireland Limited | Method and apparatus for automatically controlling the decisive moment for an image acquisition device |
WO2008131823A1 (en) * | 2007-04-30 | 2008-11-06 | Fotonation Vision Limited | Method and apparatus for automatically controlling the decisive moment for an image acquisition device |
US8515138B2 (en) | 2007-05-24 | 2013-08-20 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US8494232B2 (en) | 2007-05-24 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US7916971B2 (en) | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
US10733472B2 (en) | 2007-06-21 | 2020-08-04 | Fotonation Limited | Image capture device with contemporaneous image correction mechanism |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
US8213737B2 (en) | 2007-06-21 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US9767539B2 (en) | 2007-06-21 | 2017-09-19 | Fotonation Limited | Image capture device with contemporaneous image correction mechanism |
US8155397B2 (en) | 2007-09-26 | 2012-04-10 | DigitalOptics Corporation Europe Limited | Face tracking in a camera processor |
WO2009053863A1 (en) * | 2007-10-26 | 2009-04-30 | Sony Ericsson Mobile Communications Ab | Automatic timing of a photographic shot |
US20110116780A1 (en) * | 2007-10-31 | 2011-05-19 | Sony Corporation | Photographic apparatus and photographic method |
US8270825B2 (en) * | 2007-10-31 | 2012-09-18 | Sony Corporation | Photographic apparatus and photographic method |
US20090167889A1 (en) * | 2007-12-28 | 2009-07-02 | Casio Computer Co., Ltd. | Image capturing device |
US8786721B2 (en) | 2007-12-28 | 2014-07-22 | Casio Computer Co., Ltd. | Image capturing device |
US8872934B2 (en) | 2007-12-28 | 2014-10-28 | Casio Computer Co., Ltd. | Image capturing device which inhibits incorrect detection of subject movement during automatic image capturing |
US9462180B2 (en) | 2008-01-27 | 2016-10-04 | Fotonation Limited | Detecting facial expressions in digital images |
US11689796B2 (en) | 2008-01-27 | 2023-06-27 | Adeia Imaging Llc | Detecting facial expressions in digital images |
US11470241B2 (en) | 2008-01-27 | 2022-10-11 | Fotonation Limited | Detecting facial expressions in digital images |
US20090190803A1 (en) * | 2008-01-29 | 2009-07-30 | Fotonation Ireland Limited | Detecting facial expressions in digital images |
US8750578B2 (en) | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
US8494286B2 (en) | 2008-02-05 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Face detection in mid-shot digital images |
US7855737B2 (en) | 2008-03-26 | 2010-12-21 | Fotonation Ireland Limited | Method of making a digital camera image of a scene including the camera user |
US8243182B2 (en) | 2008-03-26 | 2012-08-14 | DigitalOptics Corporation Europe Limited | Method of making a digital camera image of a scene including the camera user |
US20100079613A1 (en) * | 2008-06-06 | 2010-04-01 | Sony Corporation | Image capturing apparatus, image capturing method, and computer program |
US20090304289A1 (en) * | 2008-06-06 | 2009-12-10 | Sony Corporation | Image capturing apparatus, image capturing method, and computer program |
US8477207B2 (en) | 2008-06-06 | 2013-07-02 | Sony Corporation | Image capturing apparatus, image capturing method, and computer program |
US8467581B2 (en) | 2008-06-06 | 2013-06-18 | Sony Corporation | Image capturing apparatus, image capturing method, and computer program |
US9007480B2 (en) | 2008-07-30 | 2015-04-14 | Fotonation Limited | Automatic face and skin beautification using face detection |
US8384793B2 (en) | 2008-07-30 | 2013-02-26 | DigitalOptics Corporation Europe Limited | Automatic face and skin beautification using face detection |
US8345114B2 (en) | 2008-07-30 | 2013-01-01 | DigitalOptics Corporation Europe Limited | Automatic face and skin beautification using face detection |
US20110007174A1 (en) * | 2009-05-20 | 2011-01-13 | Fotonation Ireland Limited | Identifying Facial Expressions in Acquired Digital Images |
US8488023B2 (en) | 2009-05-20 | 2013-07-16 | DigitalOptics Corporation Europe Limited | Identifying facial expressions in acquired digital images |
US20120236163A1 (en) * | 2009-09-30 | 2012-09-20 | Panasonic Corporation | Photography device, photography method, and program |
US8379917B2 (en) | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
US10032068B2 (en) | 2009-10-02 | 2018-07-24 | Fotonation Limited | Method of making a digital camera image of a first scene with a superimposed second scene |
CN102158649A (en) * | 2010-02-01 | 2011-08-17 | 奥林巴斯映像株式会社 | Photographic device and photographic method thereof |
EP2360620A1 (en) * | 2010-02-24 | 2011-08-24 | Research In Motion Limited | Eye blink avoidance during image acquisition in a mobile communications device with digital camera functionality |
US20110205383A1 (en) * | 2010-02-24 | 2011-08-25 | Research In Motion Limited | Eye blink avoidance during image acquisition in a mobile communications device with digital camera functionality |
US8836777B2 (en) | 2011-02-25 | 2014-09-16 | DigitalOptics Corporation Europe Limited | Automatic detection of vertical gaze using an embedded imaging device |
CN103369248A (en) * | 2013-07-20 | 2013-10-23 | 厦门美图移动科技有限公司 | Method for photographing allowing closed eyes to be opened |
US9549118B2 (en) | 2014-03-10 | 2017-01-17 | Qualcomm Incorporated | Blink and averted gaze avoidance in photographic images |
CN106104568A (en) * | 2014-03-10 | 2016-11-09 | 高通股份有限公司 | Nictation in photographs and transfer are watched attentively and are avoided |
WO2015138169A1 (en) * | 2014-03-10 | 2015-09-17 | Qualcomm Incorporated | Blink and averted gaze avoidance in photographic images |
EP3567523B1 (en) * | 2014-03-10 | 2024-05-29 | QUALCOMM Incorporated | Blink and averted gaze avoidance in photographic images |
JP2017525069A (en) * | 2014-07-11 | 2017-08-31 | インテル コーポレイション | Dynamic control for data capture |
US10074003B2 (en) | 2014-07-11 | 2018-09-11 | Intel Corporation | Dynamic control for data capture |
US10949674B2 (en) | 2015-12-24 | 2021-03-16 | Intel Corporation | Video summarization using semantic information |
US11861495B2 (en) | 2015-12-24 | 2024-01-02 | Intel Corporation | Video summarization using semantic information |
Also Published As
Publication number | Publication date |
---|---|
US7248300B1 (en) | 2007-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7248300B1 (en) | Camera and method of photographing good image | |
US8102440B2 (en) | Image selecting apparatus, camera, and method of selecting image | |
US9147106B2 (en) | Digital camera system | |
US7973853B2 (en) | Image pickup device, automatic focusing method, automatic exposure method calculating an exposure based on a detected face | |
KR100960034B1 (en) | Image pickup apparatus, and device and method for control of image pickup | |
US7038715B1 (en) | Digital still camera with high-quality portrait mode | |
US7711190B2 (en) | Imaging device, imaging method and imaging program | |
US8077215B2 (en) | Apparatus for detecting blinking state of eye | |
JP4870887B2 (en) | Imaging apparatus, strobe control method, and program for computer to execute the method | |
US8159561B2 (en) | Digital camera with feature extraction device | |
EP1429279A2 (en) | Face recognition method, face recognition apparatus, face extraction method and image pickup apparatus | |
CN103458183A (en) | Imaging apparatus and control method for the same | |
JP2004320286A (en) | Digital camera | |
JP2004317699A (en) | Digital camera | |
JP2004320285A (en) | Digital camera | |
JP2000347278A (en) | Camera and photographing method | |
CN108093170B (en) | User photographing method, device and equipment | |
JP2000347277A (en) | Camera and method of pick up | |
JP3920758B2 (en) | Surveillance camera | |
JP2024003949A (en) | Electronic device, control method for electronic device, program, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |