US20080284900A1 - Digital camera - Google Patents

Digital camera

Info

Publication number
US20080284900A1
US20080284900A1
Authority
US
United States
Prior art keywords
object
distance
feature region
unit
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/078,632
Inventor
Koichi Abe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2007-098136 priority Critical
Priority to JP2007098136 priority
Priority to JP2008-094974 priority
Priority to JP2008094974A priority patent/JP5251215B2/en
Application filed by Nikon Corp filed Critical Nikon Corp
Assigned to NIKON CORPORATION reassignment NIKON CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ABE, KOICHI
Publication of US20080284900A1 publication Critical patent/US20080284900A1/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B3/00Focusing arrangements of general interest for cameras, projectors or printers
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32Means for focusing
    • G03B13/34Power focusing
    • G03B13/36Autofocus systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/232Devices for controlling television cameras, e.g. remote control ; Control of cameras comprising an electronic image sensor
    • H04N5/23212Focusing based on image signals provided by the electronic image sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/232Devices for controlling television cameras, e.g. remote control ; Control of cameras comprising an electronic image sensor
    • H04N5/23296Control of means for changing angle of the field of view, e.g. optical zoom objective, electronic zooming or combined use of optical and electronic zooming

Abstract

A digital camera includes: an imaging unit that receives and images light from a subject transmitted through a photographing optical system; a recognition unit that recognizes a feature region of the subject using an image obtained by imaging with the imaging unit; a detection unit that detects a size of the feature region recognized by the recognition unit; and a control unit that predicts a distance to the subject after a predetermined period of time according to the size of the feature region, and controls the photographing optical system so as to focus on the subject.

Description

    TECHNICAL FIELD
  • This invention relates to a digital camera.
  • BACKGROUND TECHNOLOGY
  • As a method of autofocus (AF) for digital cameras, the contrast detection method has heretofore been known. In the contrast detection method, image signals are obtained by imaging an object with an imaging element such as a CCD, a component of a predetermined spatial frequency band is extracted from the image signals within a predetermined AF area of the image, and a focus evaluation value is calculated by integrating its absolute value. The focus evaluation value corresponds to the contrast in the focal point detection area, and increases as the contrast increases. Based on the characteristic that the contrast of an image becomes higher as the focus lens approaches the focus position, the lens position at which the focus evaluation value peaks (hereafter referred to as the peak position) is determined to be the focus position, and the focus lens is driven to this position (Patent Reference 1).
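The contrast-detection evaluation described above can be illustrated with a short sketch. This is a hypothetical simplification, not actual camera firmware: the band-pass extraction is approximated by absolute differences of adjacent pixels, and the peak search simply scans images taken at a set of lens positions.

```python
# Hypothetical sketch of contrast-detection AF. A band-pass response is
# approximated by differences of adjacent pixels (a crude high-frequency
# component); a real camera would apply a dedicated band-pass filter to
# the AF area only.

def focus_evaluation_value(af_area):
    """Sum of absolute horizontal pixel differences within the AF area."""
    total = 0
    for row in af_area:
        for x in range(len(row) - 1):
            total += abs(row[x + 1] - row[x])
    return total

def find_peak_position(images_by_lens_position):
    """Return the lens position whose image yields the highest value."""
    return max(images_by_lens_position,
               key=lambda pos: focus_evaluation_value(images_by_lens_position[pos]))

# A sharp (high-contrast) image scores higher than a blurred one:
sharp = [[0, 255, 0, 255]]
blurred = [[100, 140, 100, 140]]
assert focus_evaluation_value(sharp) > focus_evaluation_value(blurred)
```

Because the peak must be located by evaluating many lens positions in sequence, this search is what makes contrast AF slow for moving objects, as the next paragraph notes.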
  • [Patent Reference 1] Japanese Published Patent Application 2003-315665
  • DISCLOSURE OF THE INVENTION
  • Problems to be Resolved by the Invention
  • However, to detect the peak position of the contrast, which is the focus position, the focus evaluation values are calculated at predetermined intervals while the focus lens is moved along the optical axis, the focus evaluation values at those points are analyzed, and the peak position is detected. Therefore, there has been a problem that bringing an object into focus takes time and that a moving object cannot be brought into focus.
  • An object of this invention is to provide a digital camera in which an object can be more accurately brought into focus, and shooting can be performed.
  • Means of Solving the Problem
  • The digital camera according to claim 1 is provided with an imaging unit that receives and images light from an object which has passed through a shooting optical system; a recognition unit that recognizes a feature region of the object by using an image imaged by the imaging unit; a detection unit that detects a size of the feature region recognized by the recognition unit; and a control unit that predicts a distance to the object after a predetermined time according to the size of the feature region and controls the shooting optical system so as to bring the object into focus.
  • In the digital camera according to claim 2, the digital camera according to claim 1 is further provided with a distance calculation unit that calculates a distance to the object according to the size of the feature region; and a speed calculation unit that calculates a moving speed of the object according to a time change of the distance to the object, in which the control unit predicts the distance to the object based on the distance calculated by the distance calculation unit and the moving speed of the object calculated by the speed calculation unit.
  • In the digital camera according to claim 3, based on the digital camera according to claim 2, the distance calculation unit first calculates the distance to the object based on position information of a lens constituting the shooting optical system, and thereafter calculates the distance to the object based on that initially calculated distance and the size of the feature region.
  • In the digital camera according to claim 4, based on the digital camera according to any of claims 1-3, the control unit predicts the distance to the object at the time of imaging based on the time between a shooting execution operation and imaging by the imaging unit, and controls the shooting optical system so as to bring the object into focus at the time of imaging by the imaging unit.
  • In the digital camera according to claim 5, the digital camera according to any of claims 1-4 is further provided with a registration unit that selects the feature region of the object for predicting the distance to the object, from the feature regions recognized by the recognition unit, and registers feature information of the selected feature region of the object, in which after feature information of the feature region of the object is registered, the recognition unit recognizes the feature region of the object based on the feature information of the feature region of the object.
  • In the digital camera according to claim 6, the digital camera according to claim 5 is further provided with a record control unit that stores in a recording medium an image, which has been obtained by imaging by the imaging unit, in which the registration unit registers the feature information of the feature region of the object based on the image stored in the recording medium.
  • In the digital camera according to claim 7, based on the digital camera according to claim 5 or 6, the feature information of the feature region is at least one of position information of a lens constituting the shooting optical system, the distance to the object, and the size of the feature region.
  • In the digital camera according to claim 8, based on the digital camera according to claim 4, a shooting condition is modified in response to one of calculation results of the distance calculation unit and the speed calculation unit.
  • In the digital camera according to claim 9, based on the digital camera according to claim 8, the shooting condition is one of a shutter speed and ISO sensitivity.
  • In the digital camera according to claim 10, based on the digital camera according to any of claims 1-9, the control unit predicts the distance to the object after the predetermined time based on the size of a plurality of the feature regions, existing on a plurality of images time-sequentially obtained by the imaging unit.
  • EFFECTS OF THE INVENTION
  • According to this invention, a digital camera is provided in which an object can be more accurately brought into focus, and shooting can be performed.
  • BEST MODE TO IMPLEMENT THE INVENTION
  • First Embodiment
  • The first embodiment of the present invention is described hereinafter.
  • FIG. 1 is a block diagram that shows an electrical configuration of a digital camera 1 according to the embodiment.
  • A lens 2 includes a focus lens 2 a and a zoom lens 2 b, and constitutes a shooting optical system. The focus lens 2 a is a lens for adjusting focus on an object, and is moved in the optical axis direction by a focus lens drive unit 3. The zoom lens 2 b is a lens for changing the focal length of the lens 2, and is moved in the optical axis direction by a zoom lens drive unit 4. Each of the focus lens drive unit 3 and the zoom lens drive unit 4 is composed of, for example, a stepping motor and is controlled based on instructions from a control unit 5. A focus lens position detection unit 6 detects the position of the focus lens 2 a on the optical axis and sends a detection signal to the control unit 5. A zoom lens position detection unit 7 detects the position of the zoom lens 2 b on the optical axis and sends its detection signal to the control unit 5.
  • Light from the object is formed into an image on an imaging element 8 by the lens 2. The imaging element 8, which is a solid-state imaging element such as a CCD or a CMOS, photoelectrically converts the object image into an electrical signal and outputs the resulting imaging signal to an analog signal processing unit 9. The imaging signal, an analog signal input to the analog signal processing unit 9, is subjected to processing such as correlated double sampling (CDS) and is input to an analog-digital converter (ADC) 10. The imaging signal is then converted from an analog signal to a digital signal by the ADC 10 and stored in a memory 11. The memory 11 includes a buffer memory in which the imaging signal is temporarily stored, a built-in memory in which already-shot image data is recorded, etc. The image data stored in the memory 11 is sent to a digital signal processing unit 13 through a bus 12. The digital signal processing unit 13, which is, for example, a digital signal processor (DSP), performs known image processing such as white balance processing, interpolation processing, and gamma correction on the image data, and then stores the image data in the memory 11 again.
  • The processed image data is subjected to known compression processing such as JPEG by a compression/expansion unit 14 and is recorded in a memory card 15, which is detachable from the digital camera 1. When reproducing and displaying an image recorded in the memory card 15, the image is read into the memory 11, the digital image data is converted to an analog image signal by a digital-analog converter (DAC) 16, and the image is displayed on a display unit 17. The display unit 17, which is, for example, a liquid crystal display, reproduces and displays images recorded in the memory card 15, and during shooting displays the image captured by the imaging element 8 as a through image. The image data can be recorded either in the memory card 15 or in the built-in memory within the memory 11; however, when the built-in memory is used, the memory card 15 is not used.
  • The control unit 5 is connected to an operation unit 18. The control unit 5 includes, for example, a CPU, and controls the operation of the digital camera 1 in response to signals input from the operation unit 18. The operation unit 18 includes a power source button 19, a release button 20, a menu button 21, an arrow key 22, an enter button 23, an AF mode selection switch 24, etc.
  • The power source button 19 is a button for switching the digital camera 1 to be powered on (ON) and off (OFF).
  • The release button 20 is a button that a user presses down in order to issue an instruction on image shooting. Pressing the release button 20 halfway down causes a halfway-press switch SW1 to be powered on (ON) and causes an ON signal to be output, while not pressing the release button 20 halfway down causes the halfway-press switch SW1 to be powered off (OFF) and causes an OFF signal to be output. The signal output by the halfway-press switch SW1 is input to the control unit 5. Pressing the release button 20 down fully (pressing the button down deeper than the halfway-press operation) causes a fully-press switch SW2 to be powered on (ON) and causes the ON signal to be output, while not pressing the release button 20 down fully causes the fully-press switch SW2 to be powered off (OFF) and causes the OFF signal to be output. The signal output by the fully-press switch SW2 is input to the control unit 5.
  • The menu button 21 is a button for displaying a menu corresponding to a mode selected by the user.
  • The arrow key 22 is a button for selecting an operation desired by the user, such as moving a cursor in vertical and horizontal directions for selecting items to be displayed on the display unit 17.
  • The enter button 23 is a button for determining the operation selected with the arrow key 22.
  • The AF mode selection switch 24 is a switch for selecting whether an image is shot in a predictive AF mode. The predictive AF mode, the shooting mode entered when the AF mode selection switch 24 is powered on (ON), performs the operation shown in FIG. 3; FIG. 3 and the predictive AF processing are described in detail below. When the AF mode selection switch 24 is powered on (ON), the mode is switched to the predictive AF mode. When the AF mode selection switch 24 is powered off (OFF), the mode is switched to a conventional contrast AF mode, for example, as described in the Background Technology section.
  • A feature region recognition calculation unit 25 recognizes a feature region from the image data. If the recognition is successful, coordinates indicating the position and the size of the recognized feature region are output to the control unit 5. Once these coordinates are input, the control unit 5 creates, based on them, an image in which a frame indicating the size of the feature region (a feature region mark) is superimposed on the through image, and displays it on the display unit 17. The calculation for recognizing the feature region can also be performed by the control unit 5.
  • The digital camera 1 of the first embodiment of this invention recognizes the feature region from the image shot by the imaging element 8, continuously detects the size of the feature region specified by the user, and calculates the movement of the object from a change in size of the feature region. Then, the digital camera 1, based on the result, predicts the distance to the object at the time of imaging, and controls the drive position of the focus lens 2 a so that the object is brought into focus.
  • A method for recognizing a feature region is hereinafter described. For example, when the object is a person, the face of the person is recognized as a feature region. The feature region recognition calculation unit 25 detects whether the face of a person exists in the through image displayed on the display unit 17. Methods for detecting a face of a person include, for example, detecting flesh color from an image (Japanese Published Patent Application 2004-037733), and extracting a candidate region corresponding to a face shape and determining the face region from within that region (Japanese Published Patent Application 8-063597). Furthermore, a method for recognizing a person includes, for example, identifying the person by comparing an image in which feature points such as an eye, a nose, and a mouth have been extracted to a dictionary image of each person registered in advance (Japanese Published Patent Application 9-251534). If the recognition of the face of the person is successful using such known methods, coordinates indicating the position and the size of the recognized face region are output to the control unit 5.
  • When a plurality of persons are recognized, as hereinafter described, a person who is desired to be brought into focus is specified from among the plurality of persons. The control unit 5 controls the display unit 17 according to the coordinates input from the feature region recognition calculation unit 25 and displays the frame indicating the face region (face region mark) superimposed on the through image as illustrated in FIG. 2. If there is only one face detected by the feature region recognition calculation unit 25, the feature region mark is displayed on that face region. If a plurality of faces (three faces in FIG. 2) are detected by the feature region recognition calculation unit 25 as illustrated in FIG. 2, the respective feature region marks M1 to M3 are displayed corresponding to the plurality of face regions.
  • A registration method for a feature region and a prediction method for an object distance are described hereinafter.
  • FIG. 3 is a flowchart showing a shooting procedure in a predictive AF mode. Processing shown in FIG. 3 is performed by the control unit 5. In this embodiment, a case is explained in which a plurality of persons exist within a through image and the person who is desired to be continuously brought into focus is selected from among them and is shot.
  • If the AF mode selection switch 24 is switched ON with the power source button 19 of the digital camera 1 being switched ON, a predictive AF program is executed, which performs an operation shown in FIG. 3.
  • First, steps S101 to S105 are steps relating to recognition of a feature region.
  • In step S101, when the AF mode selection switch 24 is switched ON, a through image is displayed on the display unit 17. The image that is repeatedly shot by the imaging element 8 is consecutively updated and displayed on the display unit 17 as a through image.
  • In step S102, when the menu button 21 is pressed down in a state in which the through image is displayed on the display unit 17, the control unit 5 sends an instruction to the display unit 17 and superimposes a screen for selecting the type of the object to be recognized over the through image on the display unit 17. As the type of the object, things that move on their own, such as a person, a soccer ball, or a car, are displayed on the selection screen. The user selects the type of the object on the selection screen by operating the arrow key 22 and confirms it by pressing down the enter button 23. If the enter button 23 is not ON, the determination of step S102 is repeated until the enter button 23 is turned ON. If the enter button 23 is turned ON, the operation proceeds to step S103. Since the object of this embodiment is a person, a case is described in which a person is selected as the type of the object.
  • When the type of the object to be recognized is selected, in step S103, the control unit 5 sends the feature region recognition calculation unit 25 an instruction for initiating the feature region recognition processing on the through image. Here, since a person is selected as the type of the object in step S102, face region recognition processing is initiated in which the face of a person is recognized as a feature region.
  • In step S104, the control unit 5 determines whether the face region has been recognized based on the recognition result received at that time from the feature region recognition calculation unit 25. If the face region is not recognized for some reason, for example, because a face region does not exist in the through image or because a face region exists but is too small, the operation returns to step S103 and performs the face region recognition processing again. If the face region is recognized, the operation proceeds to step S105; the control unit 5 sends an instruction to the display unit 17, and the face region marks M1 to M3 are superimposed on the through image and displayed on the display unit 17 as illustrated in FIG. 2.
  • At this time, a cross-shaped mark M4 for selecting a face region is displayed on the display unit 17, as hereinafter described in detail. The cross-shaped mark M4 is displayed only within the face region mark M1 that is the closest to the center of the through image to be displayed on the display unit 17. In other words, if only one face region mark exists, the cross-shaped mark is displayed within the face region mark. If a plurality of face region marks exist, the cross-shaped mark is displayed only within the face region mark that is the closest to the center of the through image to be displayed on the display unit 17.
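The rule for placing the initial cross-shaped mark M4 can be sketched as a small selection function. This is a hypothetical illustration; the coordinate representation (center x, center y, size) is an assumption, not taken from the patent.

```python
# Hypothetical sketch: pick the face region closest to the image center,
# as the camera does when placing the initial cross-shaped mark M4.
# Each face region is assumed to be a (cx, cy, size) tuple in pixels.

def closest_to_center(face_regions, image_width, image_height):
    cx0, cy0 = image_width / 2, image_height / 2
    # Squared Euclidean distance is enough for comparison purposes.
    return min(face_regions,
               key=lambda r: (r[0] - cx0) ** 2 + (r[1] - cy0) ** 2)

# Three detected faces on a 640x480 through image; the middle one wins:
faces = [(100, 100, 40), (320, 240, 60), (500, 420, 30)]
assert closest_to_center(faces, 640, 480) == (320, 240, 60)
```

If only one face region exists, the same function trivially returns it, matching the single-face behavior described above.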
  • The following steps S106 to S109 are steps relating to registration of a face region.
  • In step S106, a face region of the object to be shot is selected. In a state in which the plurality of face region marks M1 to M3 are displayed on the through image as illustrated in FIG. 2, the user operates the arrow key 22 and selects the face region mark that the user desires to register from among the face region marks M1 to M3. At this time, the cross-shaped mark M4 indicates the face region mark currently selected (FIG. 2 shows the face region mark M2 selected). If the user operates the arrow key 22 in the vertical and horizontal directions, the cross-shaped mark M4 jumps from the face region mark where it is displayed to another face region mark. For example, in a state in which the face region mark M2 is selected as illustrated in FIG. 2, if the user presses down the left portion of the arrow key 22, the cross-shaped mark M4 jumps from the face region mark M2 to the face region mark M1.
  • In a state in which the user has matched the cross-shaped mark M4 with the face region mark of the person to be shot, the selection is confirmed by pressing down the enter button 23. Once the face region mark is selected, the feature region recognition calculation unit 25 extracts feature points such as the eyes, nose, and mouth from the selected face region. An adjacent region including these feature points (a feature point adjacent region including an eye region, a nose region, a mouth region, etc.) is registered in the memory 11 as a template. Once the face region mark is selected, the through image is displayed on the display unit 17. Then, the feature region recognition calculation unit 25 extracts the feature point adjacent region for each face region recognized within the through image. The control unit 5 compares the feature point adjacent region extracted from the through image to the template registered in the memory 11; that is, it calculates their similarity.
  • Based on the calculation result of similarity, the control unit 5 sends the display unit 17 an instruction for executing displaying the face region mark in the feature point adjacent region in which similarity to the template is determined to be high. The control unit 5 sends the display unit 17 an instruction for canceling displaying the face region mark in the feature point adjacent region in which similarity to the template is determined to be low.
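The similarity calculation between the registered template and a candidate region can be sketched as follows. The patent does not specify the actual metric; normalized cross-correlation is assumed here as one plausible choice, with pixel regions flattened to equal-length lists.

```python
# Hypothetical similarity measure between a registered template and a
# candidate feature point adjacent region. Normalized cross-correlation
# (range roughly -1..1) is one common choice; the patent does not
# disclose the actual metric or threshold.
import math

def similarity(template, candidate):
    """Normalized cross-correlation of two equal-length pixel lists."""
    n = len(template)
    mt = sum(template) / n
    mc = sum(candidate) / n
    num = sum((t - mt) * (c - mc) for t, c in zip(template, candidate))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((c - mc) ** 2 for c in candidate))
    return num / den if den else 0.0

registered = [10, 50, 90, 50, 10]   # stored template (assumed pixels)
same_face = [12, 52, 88, 51, 11]    # close match -> similarity near 1.0
other_face = [90, 10, 50, 90, 10]   # poor match  -> low similarity
THRESHOLD = 0.9                     # assumed decision threshold
assert similarity(registered, same_face) > THRESHOLD
assert similarity(registered, other_face) < THRESHOLD
```

A region scoring above the threshold would keep its face region mark displayed; one scoring below would have the mark canceled, as described above.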
  • Therefore, after the face region mark has been selected, among the face regions within the through image, the face region mark is displayed only on the face region that matches the selected face region, and no face region mark is displayed on the other face regions. The method for selecting a feature region is not limited to this. For example, if the display unit 17 is provided with a touch panel on its surface, the face region mark can be selected by pressing the desired face region mark with a finger, etc., instead of the user operating the arrow key 22. If the face region is not selected in step S106, the operation returns to step S105.
  • If the face region is selected, the operation proceeds to step S107. In step S107, when the ON signal of the halfway-press switch SW1 is input to the control unit 5 by halfway-pressing the release button 20, AF processing is performed on the face region selected in step S106. This AF processing is the conventional contrast AF described in the Background Technology section. Once the face region is brought into focus, in step S108 it is determined whether the enter button 23 is pressed down while the face region remains in focus. If the enter button 23 is not pressed, the operation returns to step S107 and the AF processing is performed again. If the enter button 23 is pressed, the operation proceeds to step S109.
  • In step S109, the control unit 5 registers in the memory 11 information related to the object determined in step S108. The information related to the object refers to the position information of the lens 2 at the time the face region was determined in step S108, the distance (object distance) to the object (face region) calculated based on the position information of the lens 2, and the size of the face region mark. The position information of the lens 2 refers to the positions of the focus lens 2 a and the zoom lens 2 b on the optical axis, obtained by the focus lens position detection unit 6 and the zoom lens position detection unit 7, whose detection signals are output to the control unit 5. Once the detection signals are input, the control unit 5 calculates the object distance based on them. The size of the face region is the length of either the vertical side or the horizontal side of the rectangular face region mark, or a combination of the two. This registration establishes the relationship between the object distance and the size of the face region mark. Upon completion of the registration of the information related to the object, a through image is displayed on the display unit 17.
  • The following steps S110 to S120 are steps relating to shooting.
  • In step S110, it is determined whether the halfway-press switch SW1 of the release button 20 is ON. If the halfway-press switch SW1 of the release button 20 is OFF, the determination of step S110 is repeated until the halfway-press switch SW1 is turned ON. If the halfway-press switch SW1 of the release button 20 is ON, the operation proceeds to step S111.
  • When the halfway-press switch SW1 of the release button 20 is ON, in step S111 a stop value is set so that an undepicted stop becomes the smallest or nearly the smallest. In other words, pan focus is set. This is done because the moving object, particularly its face region, must be recognized during the object distance calculation processing of the later-described step S116; by deepening the depth of field, the moving object, particularly the face region, can be recognized over a wide range in the optical axis direction. Here, the focus lens 2 a is driven to a position corresponding to the hyperfocal distance. The hyperfocal distance is the shortest object distance among the object distances included in the depth of field at the time of pan focus shooting. The depth of field may also be set according to the type of object the user intends to shoot, and the focus lens 2 a driven accordingly.
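The hyperfocal distance mentioned here follows the standard optics formula; the numeric values below (focal length, f-number, circle of confusion) are illustrative assumptions and are not given in the patent.

```python
# Hypothetical numeric illustration of the hyperfocal distance used for
# the pan focus setting in step S111.

def hyperfocal_distance(focal_length_mm, f_number, coc_mm):
    """Standard formula H = f^2 / (N * c) + f, in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Example: a 10 mm lens stopped down to f/8, with an assumed 0.005 mm
# circle of confusion for a small sensor:
H = hyperfocal_distance(10.0, 8.0, 0.005)  # 2510 mm, i.e. about 2.5 m
# Focusing at H renders everything from roughly H/2 to infinity
# acceptably sharp, which is why stopping down and focusing here keeps
# the moving face region recognizable over a wide distance range.
```

Stopping further down (larger f-number) shortens H, pulling the near limit of acceptable sharpness even closer to the camera.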
  • In step S112, it is determined whether the face region registered in step S109 exists in the through image. If the registered face region does not exist within the through image, the operation proceeds to step S113. In step S113, the control unit 5 releases the pan focus setting by resetting the stop value that was set in step S111, and sets the stop value so as to obtain an appropriate exposure for the object existing within the through image. In step S114, the predictive AF mode is switched to a normal AF mode, for example, the conventional contrast AF described in the Background Technology section. For example, if a landscape such as a mountain is being displayed as the through image, the focus lens 2 a is driven so as to be focused at infinity. In step S115, it is determined whether the fully-press switch SW2 of the release button 20 is ON.
  • If the fully-press switch SW2 is OFF, the operation returns to step S111, and the stop value is set so that an undepicted stop becomes the smallest or close to the smallest. If the fully-press switch SW2 is ON, the operation proceeds to step S120.
  • Meanwhile, in step S112, if the registered face region exists within the through image, the operation proceeds to step S116 after displaying the face region mark on the registered face region, and the object distance calculation processing begins. The object distance at this time is calculated by substituting the size of the face region mark and the focal length of the lens 2 into a predetermined arithmetic expression. Alternatively, a table correlating the size of the face region mark and the focal length of the lens 2 with the object distance may be created in advance and stored in the memory 11, and the object distance calculated by referring to this table.
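The patent does not disclose the actual arithmetic expression, but for a fixed focal length a simple pinhole model, in which the face mark size is inversely proportional to the object distance, gives a plausible sketch of the calculation in step S116. The function and its parameters below are assumptions for illustration.

```python
# Hypothetical sketch of the object distance calculation. Assumption:
# for a fixed focal length, face mark size is inversely proportional to
# object distance (pinhole model), anchored by the values registered in
# step S109.

def object_distance(registered_distance, registered_size,
                    current_size, focal_scale=1.0):
    """Distance implied by the current face mark size.

    focal_scale accounts for zooming: current focal length divided by
    the focal length at registration (1.0 if the zoom is unchanged).
    """
    return registered_distance * (registered_size / current_size) * focal_scale

# Registered at 5 m with a 40-pixel mark; the mark now measures
# 50 pixels, so the person has moved closer, to 4 m:
assert object_distance(5.0, 40, 50) == 4.0
# A 20-pixel mark would imply the person has receded to 10 m:
assert object_distance(5.0, 40, 20) == 10.0
```

The table-lookup variant mentioned above would simply precompute this mapping for a grid of mark sizes and focal lengths.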
• As long as the face region is detected, the face region mark tracks the face region so as to remain superimposed on it even if the face region moves. Here, since the focus lens 2 a has been driven in step S111 to the position corresponding to the hyperfocal distance, the focus lens 2 a is not driven even while the halfway-press switch SW1 of the release button 20 is ON. The focus lens 2 a may, however, be driven so as to keep the object in focus in response to its movement; this processing is described in a fourth embodiment. After the face region is determined to exist in step S112, the pan focus setting continues until it is canceled in step S118.
• In step S116, for each image time-sequentially shot by the imaging element 8 (for example, 30 frames per second), information on the size of the face region mark and the focal length of the lens 2 is obtained, and the object distance is calculated. If the focal length of the lens 2 is the same as when the face region was registered in step S109, and the size of the face region mark displayed on the through image is smaller than the size of the face region mark at the time of registration, the object distance is recognized to be longer than the object distance at the time of registration.
• Meanwhile, if the size of the face region mark displayed on the through image is larger than the size of the face region mark at the time of registering the face region, the object distance is recognized to be shorter than the object distance at the time of registering the face region. The obtained object distance is recorded in the memory 11. Object distances for a plurality of frames are recorded in the memory 11, and the object distance in the memory 11 is sequentially updated every time the object is shot by the imaging element 8.
• Additionally, a moving speed of the object is calculated from the time change of the object distance over the plurality of frames recorded in the memory 11, and the subsequent object distance is predicted. An example is shown in FIG. 4, where the object is a person A. Suppose that the vertical length of the face region mark is "a" and the calculated object distance is 5 meters at time t=0 seconds. The object then moves at a constant speed; at time t=5/30 seconds, the vertical length of the face region mark is "b", which is longer than "a", and the calculated object distance is 4.83 meters. If the focal length of the lens 2 has not changed during this period, the moving speed of the object is about 1 meter per second, so the object can be predicted to be at an object distance of 4.80 meters at t=6/30 seconds. The calculation of the object distance is repeated until the release button 20 is fully pressed down.
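• The distance and speed calculation of step S116 can be sketched as follows. The inverse proportionality between mark size and object distance is an illustrative assumption (the patent itself refers only to a predetermined arithmetic expression or a lookup table), and all function names and units here are hypothetical.

```python
FRAME_INTERVAL_S = 1.0 / 30.0  # through images arrive at 30 frames per second

def object_distance_m(mark_size_px: float, registered_size_px: float,
                      registered_distance_m: float) -> float:
    """Estimate the object distance from the face region mark size.

    With the focal length of the lens 2 unchanged, a pinhole model
    makes the image size inversely proportional to distance, so a mark
    smaller than at registration time means a longer object distance.
    """
    return registered_distance_m * registered_size_px / mark_size_px

def predict_next_distance_m(distances_m: list) -> float:
    """Predict the object distance one frame ahead.

    The moving speed is taken from the time change of the recorded
    per-frame distances, as in the FIG. 4 example.
    """
    elapsed_s = (len(distances_m) - 1) * FRAME_INTERVAL_S
    speed_m_per_s = (distances_m[-1] - distances_m[0]) / elapsed_s
    return distances_m[-1] + speed_m_per_s * FRAME_INTERVAL_S
```

With the FIG. 4 numbers (5 m at t=0, 4.83 m five frames later), the speed comes out near 1 m/s and the next-frame prediction near 4.80 m.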
  • In step S117, it is determined whether the fully-press switch SW2 of the release button 20 is ON. If the fully-press switch SW2 of the release button 20 is OFF, the operation returns to step S112 and it is again determined whether the registered face region exists within the through image. If the fully-press switch SW2 of the release button 20 is ON, the operation proceeds to step S118.
  • In step S118, the pan focus setting is released by resetting the stop value that has been set in step S111, and the stop value is set so as to have an appropriate exposure with respect to the object.
• In step S119, the predictive AF processing is performed with respect to the object. In a camera with an AF function, the time difference between the release button 20 being fully pressed down and actual shooting being performed (hereafter referred to as the release time lag) may become a problem: the focus of a shot image is shifted because the focus position with respect to the object changes during the release time lag, especially when the object moves. Here, the focus position with respect to the object after the release time lag is predicted from the moving speed of the object, according to the result of the object distance calculation processing of step S116, and the focus lens 2 a is moved so as to bring the predicted focus position into focus, so that the object is optimally in focus at the moment of shooting. In this embodiment, the release time lag is 0.01 second; after the release button 20 is fully pressed down, the position of the object 0.01 second later is predicted, that position is brought into focus, and shooting is performed.
• FIGS. 5(a) and 5(b) are diagrams showing examples of the predictive AF processing of step S119. Here, movement of an object, which is a person B, is shown.
• FIG. 5(a) is a diagram showing a time change in the size of the face region mark of the person B. Here, the size of the face region mark is the vertical side length of the face region mark. The horizontal axis represents numbers for each image time-sequentially shot by the imaging element 8 (through image I1 to through image I7). These images are taken at 30 frames per second; that is, one scale on the horizontal axis indicates 1/30 seconds. The vertical axis indicates the size of the face region mark.
• FIG. 5(b) is a diagram showing a time change of the object distance of the person B. In the same manner as in FIG. 5(a), the horizontal axis represents numbers for each image time-sequentially shot by the imaging element 8, and one scale on the horizontal axis indicates 1/30 seconds. The vertical axis indicates the object distance of the person B. As described above, the object distance is calculated according to the size of the face region mark and the focal length of the lens 2.
• Movement of the person B is described as follows. The size of the face region mark is "a" at the time of the through image I1 (FIG. 5(a)), and the object distance of the person B is 5.0 meters (FIG. 5(b)). In the same manner, in the through image I2 and the through image I3, the size of the face region mark remains "a" (FIG. 5(a)), so the object distance of the person B remains 5.0 meters (FIG. 5(b)). In the through image I4, the size of the face region mark changes to "b", which is larger than "a" (FIG. 5(a)), so the object distance of the person B shortens to 4.9 meters (FIG. 5(b)). Additionally, in the through image I5, the through image I6, and the through image I7, the size of the face region mark grows to c, d, and e, respectively, as time elapses (FIG. 5(a)), and the object distance of the person B is 4.6 meters at the time of the through image I7. Consequently, the person B is determined to be moving toward the camera at a speed of 3.0 meters per second.
• Supposing that the release button 20 is fully pressed down at the time of the through image I7, in response to the fully-press signal, the control unit 5 calculates the position of the person B 0.01 second later, that is, after the release time lag, based on the object distance and the moving speed of the person B. The position of the person B at the time of imaging is therefore predicted to be 4.6 m+(−3.0 m/second)×0.01 second=4.57 meters. Based on this calculation result, the control unit 5 sends the focus lens drive unit 3 an instruction for driving the focus lens 2 a so that the position at an object distance of 4.57 meters is brought into focus. The focus lens drive unit 3, having received the instruction from the control unit 5, drives the focus lens 2 a.
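• The release time lag compensation of step S119 reduces to a single linear extrapolation. The sketch below merely restates the FIG. 5 arithmetic; the function name and the signed-speed convention are assumptions for illustration.

```python
def focus_distance_after_lag_m(current_distance_m: float,
                               speed_m_per_s: float,
                               release_lag_s: float = 0.01) -> float:
    """Object distance to focus on after the release time lag.

    The speed is signed: negative while the object approaches the
    camera, as person B does in FIG. 5.
    """
    return current_distance_m + speed_m_per_s * release_lag_s

# FIG. 5 worked example: 4.6 m away, approaching at 3.0 m/s,
# with a 0.01 s release time lag.
target_m = focus_distance_after_lag_m(4.6, -3.0)
```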
• In step S120, shooting is performed by the imaging element 8. Here, the exposure conditions of the digital camera 1 may also be modified according to the movement of the object. For example, if the moving speed of the object is fast, the shutter speed may be made faster, or the ISO sensitivity may be increased.
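• The speed-dependent exposure change can be sketched as below. The 2 m/s threshold and the one-stop adjustments are placeholders; the patent states only that a fast-moving object may get a faster shutter speed or a higher ISO sensitivity.

```python
def exposure_for_speed(speed_m_per_s: float,
                       base_shutter_s: float = 1.0 / 125.0,
                       base_iso: int = 100):
    """Return (shutter time in seconds, ISO) adjusted for object motion.

    For a fast object the shutter time is halved to reduce motion
    blur, and the ISO is doubled to keep the overall exposure.
    """
    if abs(speed_m_per_s) > 2.0:
        return base_shutter_s / 2.0, base_iso * 2
    return base_shutter_s, base_iso
```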
  • According to the above-described embodiment, the following operational effects can be obtained.
• For each image time-sequentially shot by the imaging element 8, the distance to the object is calculated from the size of the feature region and the focal length of the lens 2; from this, the distance to the object at the time of imaging is predicted, and the focus lens 2 a is driven so as to bring the object into focus. Because of this, the object can be brought into focus more accurately, and shooting can be performed.
• A feature region selected by the user from among at least one feature region recognized in the image is registered. This causes the predictive AF to be performed with respect to the object having the registered feature region even if a plurality of objects exist at the time of imaging. Therefore, the object having the registered feature region can be constantly brought into focus without bringing other, unregistered objects into focus.
• When displaying the through image at the time of imaging, a stop value is set so that the diameter of an undepicted stop becomes the smallest or close to the smallest, and the focus lens 2 a is driven to a position corresponding to a hyperfocal distance. This deepens the depth of field, so that even for a moving object (feature region), image data that is in focus can be obtained over a wide range. Additionally, since the lens 2 does not need to be driven, the power consumption of the digital camera 1 can be decreased.
• The focus lens 2 a is fixed after being driven to the position corresponding to the hyperfocal distance, and is driven to the in-focus position for the object only at the time of imaging. This enables the lens 2 to be driven efficiently to the focus position, increasing the speed of the AF processing.
  • The exposure conditions of the digital camera 1 are modified according to the movement of the object at the time of imaging. Because of this, shooting can be performed under appropriate exposure conditions with respect to the object.
  • The embodiment can also be modified as follows.
• In step S102 of FIG. 3, an example in which a person is selected as the type of the object was explained. Here, an example of a soccer ball is described, namely a method for recognizing and designating a soccer ball as a feature region. The other parts are as described above for the embodiment.
• Methods for recognizing a soccer ball include a method of extracting a round-shaped region candidate corresponding to the shape of the soccer ball from image data and determining the soccer ball within that region, a method of detecting color from image data, etc. In addition, the soccer ball may also be recognized by combining these methods.
• A method for recognizing a soccer ball by detecting color from image data is described here. Supposing that a soccer ball is formed by two colors, black and white, the soccer ball is recognized by extracting a region formed by those two colors from the image data. Additionally, the region ratio of black to white forming the soccer ball does not differ significantly regardless of the viewing angle. Therefore, the region ratio of black to white is registered in advance, and the region corresponding to the pre-registered ratio is extracted.
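• The region-ratio test can be sketched as follows. The grayscale thresholds, the registered ratio, and the tolerance are all illustrative assumptions; the patent specifies only that the black-to-white region ratio is registered in advance and matched.

```python
def black_white_ratio(gray_pixels: list,
                      black_max: int = 60, white_min: int = 200) -> float:
    """Ratio of black pixels to white pixels in a candidate region."""
    black = sum(1 for p in gray_pixels if p <= black_max)
    white = sum(1 for p in gray_pixels if p >= white_min)
    return black / white if white else float("inf")

def looks_like_soccer_ball(gray_pixels: list,
                           registered_ratio: float = 0.25,
                           tolerance: float = 0.1) -> bool:
    """Accept a region whose ratio matches the pre-registered one.

    Because the black/white panel ratio of the ball barely changes
    with viewing angle, a single registered ratio suffices.
    """
    return abs(black_white_ratio(gray_pixels) - registered_ratio) <= tolerance
```

In practice this check would run only on round-shaped region candidates, combining the two methods as the text suggests.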
• Furthermore, the user may set the shape of the feature region. In step S102 of FIG. 3, once a soccer ball is selected as the type of object, a selection tool corresponding to the shape of the soccer ball, for example, a round-shaped frame, is displayed superimposed on the through image. The size of the selection tool can be adjusted by operating the arrow key 22. The user adjusts the size of the selection tool so that it is substantially the same size as the soccer ball displayed on the through image. After adjusting the size of the selection tool, pressing down the enter button 23 causes the size of the selection tool to be fixed. The selection tool whose size is fixed can be moved in vertical and horizontal directions by operating the arrow key 22. The user superimposes the selection tool on the soccer ball displayed on the through image. Once the position of the selection tool is adjusted, pressing down the enter button 23 causes the soccer ball to be registered as a feature region.
  • According to the above-mentioned modified example, the following operational effects can be obtained.
• The method for recognizing a soccer ball from an image includes registering the color region ratio specific to the feature region in advance and detecting the region corresponding to that ratio, in addition to extracting a round-shaped region and detecting particular colors from the image data. This improves the accuracy of recognizing the feature region from the image data.
• A selection tool according to the type of the selected object is displayed, and the user can adjust the selection tool to the size and the position of the feature region. This enables the feature region to be designated even for an object whose feature region is difficult to recognize from the image data.
  • Second Embodiment
  • The second embodiment of the present invention is described hereinafter.
  • In the first embodiment, the feature region is designated which is desired to be brought into focus from the feature region that is recognized within the through image. In this embodiment, the feature region which is desired to be brought into focus can be set in advance from the image data saved in the memory card 15, etc.
  • A basic configuration of the digital camera of the second embodiment is the same as that of the first embodiment. Portions different from the first embodiment are described hereinafter. FIG. 6 is a flowchart showing a processing procedure for setting a feature region based on image data saved in the memory card 15, etc. The processing shown in FIG. 6 is executed by the control unit 5, etc.
• First, while the power source button 19 of the digital camera 1 is ON, if a setup mode is selected by operating an undepicted mode dial, a setup menu screen is displayed on the display unit 17. On this setup menu screen, various menu items related to shooting and reproduction are displayed. Among them is an item of "predictive AF" which performs various settings of the predictive AF mode. Operating the arrow key 22 causes the item of "predictive AF" to be selected, and pressing down the enter button 23 causes the item of "predictive AF" to be determined. Then, a predictive AF menu screen is displayed on the display unit 17.
• On this predictive AF menu screen, various menu items related to the predictive AF mode are displayed. Among them is an item of "feature region setting" for designating a feature region when shooting is performed in the predictive AF mode. Operating the arrow key 22 causes the item of "feature region setting" to be selected, and pressing down the enter button 23 causes the item of "feature region setting" to be determined. Then, the operation proceeds to step S201, and a feature region setting screen is displayed on the display unit 17.
  • A list of the images saved in the memory card 15 is displayed on the feature region setting screen. As a method of displaying a list, thumbnail images of the images saved in the memory card 15 may be displayed. If the memory card 15 is not used, thumbnail images of the images saved in the built-in memory within the memory 11 may be displayed.
• In step S202, it is determined whether a thumbnail image including the feature region which is desired to be brought into focus has been determined from among the thumbnail images displayed on the display unit 17. As a method of determining a thumbnail image, operating the arrow key 22 causes a thumbnail image to be selected, and pressing down the enter button 23 causes the thumbnail image to be determined. If no thumbnail image has been determined, the determination of step S202 is repeated until one is determined. If a thumbnail image is determined, the operation proceeds to step S203.
• Once the thumbnail image is determined, in step S203, the image corresponding to the thumbnail image is reproduced and displayed on the display unit 17. At this time, an image for selecting the type of the object to be recognized is superimposed on the reproduced image and displayed on the display unit 17. Once the type of the object is selected, the selection tool corresponding to the shape of the object is superimposed on the reproduced image and displayed. For example, if a person is selected as the type of the object, a vertically long, elliptical selection tool is displayed. Then, operating the arrow key 22 and the enter button 23 causes the size and the position of the selection tool to be adjusted for setting the feature region. Since the details of the method of setting the selection tool are the same as in the modified example of the first embodiment, description is omitted herein.
• In step S204, it is determined whether an instruction for closing the feature region setting screen exists. If the feature region setting screen is not closed, the operation returns to step S201, and the feature region setting screen is again displayed on the display unit 17. If, for example, the user operates the operation unit 18 and selects closing the feature region setting screen, the screen is closed and the setting of the feature region is completed. If an object is shot after the feature region has been set, the operation proceeds to the processing shown in FIG. 7.
• Furthermore, in this embodiment, a single piece of image data is used when the feature region is set. However, the feature region can also be set by using a plurality of pieces of image data of the same object. For example, if the type of the object is a person, the face becomes the feature region, and the feature region can also be set from image data including a face viewed at an angle, such as a side view of the face. Furthermore, if the feature region is set by using a plurality of pieces of image data of the same object, the plurality of pieces of image data are configured as related image data. As a method for relating image data, for example, there is a method in which the user inputs and saves the same keyword into each piece of image data.
  • Additionally, the number of objects to be subject to the feature region is not limited to one, but a plurality of feature regions of different objects can be set.
  • FIG. 7 is a flowchart showing a shooting procedure in the predictive AF mode when a feature region is already set from image data.
• If the AF mode selection switch 24 is switched ON with the power source button 19 of the digital camera 1 being switched ON, in step S205, it is determined whether only one feature region has been set from the image data. If there is one feature region set from the image data, the operation proceeds to step S208. If a plurality of feature regions have been set from the image data, the operation proceeds to step S206, and a list of the set feature regions is displayed. As a method of displaying the list, the thumbnail images including the set feature regions may be displayed, or the keywords registered in the images including the set feature regions may be displayed.
  • In step S207, it is determined whether one feature region is selected from among the list of the set feature regions. If a feature region is not selected, determination of step S207 is repeated until a feature region is selected. If a feature region is selected, the operation proceeds to step S208.
  • In step S208, information related to the set object is registered in the memory 11. The information related to the object refers to position information of the lens 2 at the time of shooting an image, the distance (object distance) to the object calculated according to the position information of the lens 2, and the size of the feature region. Such information is recorded in the images in Exif format. The size of the feature region is the size of the selection tool when the size of the selection tool is determined in step S203.
• For example, if the selection tool is elliptical, the size of the feature region is the length of the major axis of the ellipse (the segment whose endpoints are the intersections of the ellipse with the straight line connecting its two foci), the length of the minor axis (the segment whose endpoints are the intersections of the ellipse with the straight line perpendicular to the major axis at the center of the ellipse), or the combination of the two. Thus, the control unit 5 reads the information related to the object from the image and saves the information in the built-in memory within the memory 11.
  • Once the information about the object is registered in step S208, the operation proceeds to step S110 of FIG. 3. Since subsequent steps are the same as those of the first embodiment, description is omitted herein.
  • According to the above-described embodiment, the following operational effects can be obtained.
  • The feature region which is desired to be brought into focus from the image data saved in the memory card 15, etc. can be set. By so doing, the user can set the feature region which is desired to be brought into focus in advance before shooting. As soon as the digital camera 1 is activated, shooting can be performed.
  • The feature region can be set by using a plurality of image data with respect to the same object. This improves accuracy of recognizing an object.
  • Third Embodiment
  • The third embodiment of the present invention is described hereinafter.
  • In the first embodiment, in response to the operation of the enter key 23 by the user, the feature region is set which is desired to be brought into focus, and the object information is registered. In this embodiment, setting of the feature region and registration of object information are automatically performed.
  • The basic configuration of the digital camera of the third embodiment is the same as that of the first embodiment. Portions different from the first embodiment are described hereinafter. A shooting procedure in the predictive AF mode of the third embodiment is described with reference to a flowchart of FIG. 8. The processing shown in the flowchart of FIG. 8 is executed by the control unit 5, etc.
  • In step S301, if the AF mode selection switch 24 is switched ON, the through image is displayed on the display unit 17. In step S302, the type of the object is selected. A case is described hereinafter in which a person is selected as an object type.
  • Once the object type to be recognized is selected, in step S303, the control unit 5 sends the feature region recognition calculation unit 25 an instruction for initiating the feature region recognition processing with respect to the through image. Here, since a person is selected as the object type in step S302, the face region recognition processing is initiated in which a face of a person is recognized as a feature region.
• In step S304, the control unit 5 determines whether a face region has been recognized, based on the recognition result received at that time from the feature region recognition calculation unit 25. If no face region is recognized, the operation returns to step S303 and the face region recognition processing is performed again. If a face region is recognized, the operation proceeds to step S305. Here, if a plurality of face regions are recognized, the largest face region is automatically selected among the plurality of recognized face regions, and the operation then proceeds to step S305. Alternatively, the face region located closest to the center of the screen may be automatically selected from among the plurality of face regions.
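• The automatic selection of step S304 can be sketched as below. The (x, y, width, height) tuple representation of a face region is an assumed one, introduced only for illustration.

```python
def pick_largest(regions):
    """Select the face region with the largest area (width * height)."""
    return max(regions, key=lambda r: r[2] * r[3])

def pick_most_central(regions, screen_w: int, screen_h: int):
    """Alternative rule: select the region closest to the screen center."""
    cx, cy = screen_w / 2.0, screen_h / 2.0
    def center_dist_sq(r):
        x, y, w, h = r
        return (x + w / 2.0 - cx) ** 2 + (y + h / 2.0 - cy) ** 2
    return min(regions, key=center_dist_sq)
```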
  • In step S305, the face region mark indicating the recognized face region is superimposed on the through image and displayed on the display unit 17. The face region mark indicates the face region whose object information is to be registered. The face region mark is made to be, for example, a rectangular frame shown in FIG. 2 and is displayed, for example, in white.
  • When the face region to be registered is set, the operation proceeds to step S307. In step S307, in response to the ON signal of the halfway-press switch SW1, the contrast AF processing is performed with respect to the designated face region. In the following step S308, it is determined whether the designated face region is brought into focus. If the face region is determined to be brought into focus, the operation proceeds to step S309. In step S309, the face recognition processing is again performed in a state in which the object is accurately brought into focus. The object information in this state is registered in the memory 11. Bringing the designated face region into focus causes the display color of the face region mark to be changed so as to indicate that the face region is brought into focus. For example, the face region mark displayed in white is changed into green after being brought into focus. Alternatively, the face region mark may be flashed after being brought into focus.
  • If it is determined that the face region is not brought into focus in step S308, the operation proceeds to step S321, and the user is informed that the object information cannot be registered. A warning of being non-registrable includes, for example, displaying a warning on the display unit 17 or turning on a warning light.
• Thus, after the recognition of the face region and the registration of the object information are automatically performed, the operation proceeds to step S311. Since the processing of steps S311 to S320 is the same as that of steps S111 to S120 of the first embodiment, description is omitted.
  • In the above-mentioned third embodiment, setting of the feature region and registration of the object information can be easily performed.
  • Fourth Embodiment
  • The fourth embodiment of the present invention is described hereinafter.
  • In the above-mentioned first embodiment, pan focus setting is performed when displaying a through image in response to the operation of the halfway-press switch SW1. In the fourth embodiment, if the selected face region exists within the through image, the predictive AF processing is performed so that movement of the face region is predicted and brought into focus.
  • The basic configuration of the digital camera of the fourth embodiment is the same as that of the first embodiment. Portions different from the first embodiment are described hereinafter. A shooting procedure of the predictive AF mode of the fourth embodiment is described with reference to a flowchart of FIG. 9. The processing shown in the flowchart in FIG. 9 is executed by the control unit 5, etc.
  • Processing of steps S401 to S409 is the same as that of steps S101 to S109 of the above-mentioned first embodiment, so description is omitted here.
  • In step S410, it is determined whether the halfway-press switch SW1 of the release button 20 is ON. If the halfway-press switch SW1 of the release button 20 is OFF, the determination of step S410 is repeated until the halfway-press switch SW1 is turned ON. If the halfway-press switch SW1 of the release button 20 is ON, the operation proceeds to step S412.
  • In step S412, it is determined whether the face region registered in step S409 exists within the through image. If the registered face region does not exist within the through image, the operation proceeds to step S414. In step S414, a conventional contrast AF processing is performed. In step S415, it is determined whether the fully-press switch SW2 of the release button 20 is ON. If the fully-press switch SW2 is OFF, the operation returns to step S412. If the fully-press switch SW2 is ON, the operation proceeds to step S420.
  • In step S412, if the registered face region exists within the through image, the face region mark is displayed with respect to the registered face region, and then the operation proceeds to step S416, and the object distance calculation processing is performed. The control unit 5 calculates a current object distance according to the information about the size of the face region mark and the focal length of the lens 2. Furthermore, in the same manner as the above-mentioned first embodiment, the object distance after a predetermined time is predicted.
  • In the following step S416A, the previous object distance calculated in a previous cycle and recorded in the memory 11 is compared with the object distance after the predetermined time calculated in step S416. For the predetermined time, an appropriate value is set in advance in consideration of control delay in the control unit 5. For example, the predetermined time may be set at the same value as the above-mentioned release time lag.
• In step S416B, if the difference between the previous object distance and the predicted object distance (= "previous object distance" − "predicted object distance") is determined to be equal to or more than a threshold value, the operation proceeds to step S416C. If the difference is determined to be less than the threshold value, the operation proceeds to step S417. The threshold value is appropriately set in advance to a value at which the face of the object corresponding to the registered face region does not blur on the through image even if the object distance changes. A change rate of the object distance may also be used as the threshold. If the object distance is short, the threshold value may be set smaller than for a long distance.
  • In step S416C, the predictive AF processing is performed with respect to the object. Here, a focus position of the object after a predetermined time is predicted based on the object distance after the predetermined time calculated in step S416, and the focus lens 2 a is moved so as to bring the focus position into focus. In step S416D, the object distance of this cycle calculated in step S416 is recorded in the memory 11.
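• The decision of steps S416A to S416C can be sketched as below. The 0.05 m threshold is a placeholder; as noted above, the actual value is chosen so the registered face does not blur on the through image, and may be made smaller for nearby objects.

```python
def should_refocus(previous_distance_m: float,
                   predicted_distance_m: float,
                   threshold_m: float = 0.05) -> bool:
    """Return True when predictive AF should run (step S416C).

    Follows the comparison in the text: the lens is driven when
    "previous object distance" - "predicted object distance" reaches
    the threshold, i.e. when the object is predicted to have moved
    closer by enough to blur the registered face region.
    """
    return (previous_distance_m - predicted_distance_m) >= threshold_m
```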
  • In step S417, it is determined whether the fully-press switch SW2 of the release button 20 is ON. If the fully-press switch SW2 of the release button 20 is OFF, the operation returns to the step S412, and it is again determined whether the registered face region exists within the through image. If the fully-pressed switch SW2 of the release button 20 is ON, the operation proceeds to step S418.
  • In step S418, the object distance calculation processing is performed with respect to the registered face region. In the following step S419, the predictive AF processing is performed with respect to the object. After that, in step S420, shooting is performed by the imaging element 8.
  • According to the above-mentioned fourth embodiment, the previous object distance and a predicted object distance are compared to each other when the through image is displayed. If the image of the face of the object corresponding to the face region set on the through image is predicted to be blurred, the predictive AF processing is performed with respect to the set face region. Thus, the through image which has accurately brought the moving object into focus can be displayed as needed.
  • The above-mentioned second embodiment may also be combined with the third or fourth embodiment. Alternatively, the third and fourth embodiments may also be combined.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing an electrical configuration of a digital camera 1 in a first embodiment of the present invention.
  • FIG. 2 is a diagram displaying face region marks with respect to faces of persons who are objects in the first embodiment of the present invention.
  • FIG. 3 is a flowchart showing a shooting procedure in a predictive AF mode in the first embodiment of the present invention.
  • FIG. 4 is a diagram showing an example of the relationship between a time change of an object distance of a plurality of frames and a subsequent object distance in the first embodiment of the present invention.
  • FIG. 5 shows diagrams showing an example of the relationship between a through image, a size of a face region mark, and an object distance in the first embodiment of the present invention.
  • FIG. 6 is a flowchart showing a procedure of setting a feature region from image data in a second embodiment of the present invention.
  • FIG. 7 is a flowchart showing a shooting procedure in a predictive AF mode if a feature region is set from image data in the second embodiment of the present invention.
  • FIG. 8 is a flowchart showing a shooting procedure in a predictive AF mode in a third embodiment of the present invention.
  • FIG. 9 is a flowchart showing a shooting procedure in a predictive AF mode in a fourth embodiment of the present invention.
  • EXPLANATION OF THE SYMBOLS
    • 1 Digital camera
    • 2 Lens
    • 2 a Focus lens
    • 2 b Zoom lens
    • 3 Focus lens drive unit
    • 4 Zoom lens drive unit
    • 5 Control unit
    • 6 Focus lens position detection unit
    • 7 Zoom lens position detection unit
    • 8 Imaging element
    • 9 Analog signal processing unit
    • 10 Analog-digital converter
    • 11 Memory
    • 12 Bus
    • 13 Digital signal processing unit
    • 14 Compression/expansion unit
    • 15 Memory card
    • 16 Digital-analog converter
    • 17 Display unit
    • 18 Operation unit
    • 19 Power source button
    • 20 Release button
    • 21 Menu button
    • 22 Arrow key
    • 23 Enter button
    • 24 AF mode selection switch
    • 25 Feature region recognition calculation unit

Claims (10)

1. A digital camera, comprising:
an imaging unit that receives and images light from an object which has passed through a shooting optical system;
a recognition unit that recognizes a feature region of the object by using an image obtained by imaging by the imaging unit;
a detection unit that detects a size of the feature region that is recognized by the recognition unit; and
a control unit that predicts a distance to the object after a predetermined time according to the size of the feature region, and controls the shooting optical system so as to bring the object into focus.
2. The digital camera according to claim 1, further comprising:
a distance calculation unit that calculates a distance to the object according to the size of the feature region; and
a speed calculation unit that calculates a moving speed of the object according to a time change of the distance to the object,
wherein the control unit predicts the distance to the object after the predetermined time based on the distance to the object calculated by the distance calculation unit and the moving speed of the object calculated by the speed calculation unit.
3. The digital camera according to claim 2,
wherein the distance calculation unit first calculates the distance to the object based on position information of a lens constituting the shooting optical system, and thereafter calculates the distance to the object based on the calculated distance and the size of the feature region.
4. The digital camera according to any of claims 1-3,
wherein the control unit predicts the distance to the object at the time of imaging based on the time between a shooting execution operation and imaging by the imaging unit, and controls the shooting optical system so as to bring the object into focus at the time of imaging by the imaging unit.
5. The digital camera according to any of claims 1-4, comprising:
a registration unit that selects the feature region of the object for predicting the distance to the object, from among the at least one feature region recognized by the recognition unit, and registers feature information of the selected feature region of the object;
wherein after feature information of the feature region of the object is registered, the recognition unit recognizes the feature region of the object based on the feature information of the feature region of the object.
6. The digital camera according to claim 5, further comprising:
a record control unit that stores in a recording medium an image, which is obtained by imaging by the imaging unit,
wherein the registration unit registers the feature information of the feature region of the object based on the image stored in the recording medium.
7. The digital camera according to claim 5 or 6,
wherein the feature information of the feature region is at least one of position information of a lens constituting the shooting optical system, the distance to the object, and the size of the feature region.
8. The digital camera according to claim 4,
wherein a shooting condition is modified in response to one of calculation results of the distance calculation unit and the speed calculation unit.
9. The digital camera according to claim 8,
wherein the shooting condition is one of a shutter speed and ISO sensitivity.
10. The digital camera according to any of claims 1-9,

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2007-098136 2007-04-04
JP2007098136 2007-04-04
JP2008-094974 2008-04-01
JP2008094974A JP5251215B2 (en) 2007-04-04 2008-04-01 Digital camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/037,026 US8253847B2 (en) 2007-04-04 2011-02-28 Digital camera having an automatic focus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/037,026 Continuation US8253847B2 (en) 2007-04-04 2011-02-28 Digital camera having an automatic focus

Publications (1)

Publication Number Publication Date
US20080284900A1 2008-11-20

Family

ID=39769597

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/078,632 Abandoned US20080284900A1 (en) 2007-04-04 2008-04-02 Digital camera
US13/037,026 Expired - Fee Related US8253847B2 (en) 2007-04-04 2011-02-28 Digital camera having an automatic focus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/037,026 Expired - Fee Related US8253847B2 (en) 2007-04-04 2011-02-28 Digital camera having an automatic focus

Country Status (2)

Country Link
US (2) US20080284900A1 (en)
EP (1) EP1986421A3 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135291A1 (en) * 2007-11-28 2009-05-28 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US20100033592A1 (en) * 2008-08-07 2010-02-11 Canon Kabushiki Kaisha Image pickup device and method of controlling same
US20100188560A1 (en) * 2008-12-26 2010-07-29 Takenori Sakai Imaging apparatus
US20100194931A1 (en) * 2007-07-23 2010-08-05 Panasonic Corporation Imaging device
US20100253797A1 (en) * 2009-04-01 2010-10-07 Samsung Electronics Co., Ltd. Smart flash viewer
US20100309327A1 (en) * 2009-06-05 2010-12-09 Hon Hai Precision Industry Co., Ltd. Camera
US20110002680A1 (en) * 2009-07-02 2011-01-06 Texas Instruments Incorporated Method and apparatus for focusing an image of an imaging device
US20110096995A1 (en) * 2009-10-27 2011-04-28 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US20110115945A1 (en) * 2009-11-17 2011-05-19 Fujifilm Corporation Autofocus system
US20110242346A1 (en) * 2010-03-31 2011-10-06 Shunta Ego Compound eye photographing method and apparatus
US20120057794A1 * 2010-09-06 2012-03-08 Shingo Tsurumi Image processing device, program, and image processing method
US20140168460A1 (en) * 2012-12-13 2014-06-19 Hon Hai Precision Industry Co., Ltd. Electronic device and image capturing control method
US20140219517A1 (en) * 2010-12-30 2014-08-07 Nokia Corporation Methods, apparatuses and computer program products for efficiently recognizing faces of images associated with various illumination conditions
US20160227104A1 (en) * 2015-01-29 2016-08-04 Haike Guan Image processing apparatus, image capturing apparatus, and storage medium storing image processing program
US9519202B2 (en) 2012-03-15 2016-12-13 Panasonic Intellectual Property Management Co., Ltd. Auto-focusing device and image pickup device
US20170019589A1 (en) * 2015-07-14 2017-01-19 Samsung Electronics Co., Ltd. Image capturing apparatus and method
US9621781B2 (en) 2011-10-11 2017-04-11 Olympus Corporation Focus control device, endoscope system, and focus control method
US9742983B2 (en) * 2015-02-02 2017-08-22 Canon Kabushiki Kaisha Image capturing apparatus with automatic focus adjustment and control method thereof, and storage medium

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8237807B2 (en) * 2008-07-24 2012-08-07 Apple Inc. Image capturing device with touch screen for adjusting camera settings
CN102265602A (en) 2008-12-26 2011-11-30 松下电器产业株式会社 Image capture device
JP5538865B2 (en) * 2009-12-21 2014-07-02 キヤノン株式会社 Imaging apparatus and control method thereof
JP5760727B2 (en) * 2011-06-14 2015-08-12 リコーイメージング株式会社 Image processing apparatus and image processing method
KR101710633B1 (en) * 2011-08-05 2017-02-27 삼성전자주식회사 Auto focus adjusting method, auto focus adjusting apparatus, and digital photographing apparatus including the same
CN103765276B (en) * 2011-09-02 2017-01-18 株式会社尼康 Focus evaluation device, imaging device, and program
JP5932361B2 (en) * 2012-01-25 2016-06-08 キヤノン株式会社 Distance information detection apparatus, distance information detection method, and camera
US9554035B2 (en) * 2013-02-07 2017-01-24 Sony Corporation Image pickup device, method of controlling image pickup device, and computer program for automatically achieving composition specified by user
TW201517615A (en) * 2013-10-16 2015-05-01 Novatek Microelectronics Corp Focus method
US20160073039A1 (en) * 2014-09-08 2016-03-10 Sony Corporation System and method for auto-adjust rate of zoom feature for digital video
FR3042367A1 (en) * 2015-10-12 2017-04-14 Stmicroelectronics (Grenoble 2) Sas Method for capturing images of a moving object and corresponding apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5475422A (en) * 1993-06-21 1995-12-12 Nippon Telegraph And Telephone Corporation Method and apparatus for reconstructing three-dimensional objects
US20040207743A1 (en) * 2003-04-15 2004-10-21 Nikon Corporation Digital camera system
US20050069208A1 (en) * 2003-08-29 2005-03-31 Sony Corporation Object detector, object detecting method and robot
US20050285967A1 (en) * 2004-06-15 2005-12-29 Hirofumi Suda Focus control apparatus and optical apparatus
US20070030375A1 (en) * 2005-08-05 2007-02-08 Canon Kabushiki Kaisha Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US20070030381A1 (en) * 2005-01-18 2007-02-08 Nikon Corporation Digital camera

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07119880B2 (en) * 1986-12-24 1995-12-20 株式会社精工舎 Automatic focusing camera
JPH05196861A (en) * 1992-01-17 1993-08-06 Kyocera Corp Predictive focusing device
JPH05216093A (en) 1992-01-31 1993-08-27 Canon Inc Camera with function for initializing operation mode
JP3557659B2 (en) 1994-08-22 2004-08-25 コニカミノルタホールディングス株式会社 Face extraction method
JP3279913B2 (en) 1996-03-18 2002-04-30 株式会社東芝 Person authentication device, feature point extraction device, and feature point extraction method
JP2002330335A (en) 2001-04-27 2002-11-15 Matsushita Electric Ind Co Ltd Still picture image pickup device
JP5011625B2 (en) 2001-09-06 2012-08-29 株式会社ニコン Imaging device
JP2003315665A (en) * 2002-04-23 2003-11-06 Nikon Corp Camera
JP4122865B2 (en) 2002-07-02 2008-07-23 コニカミノルタオプト株式会社 Autofocus device
JP4196714B2 (en) 2003-04-15 2008-12-17 株式会社ニコン Digital camera
US7733412B2 (en) * 2004-06-03 2010-06-08 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
JP2006319596A (en) 2005-05-12 2006-11-24 Fuji Photo Film Co Ltd Imaging apparatus and imaging method


Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8294813B2 (en) * 2007-07-23 2012-10-23 Panasonic Corporation Imaging device with a scene discriminator
US20100194931A1 (en) * 2007-07-23 2010-08-05 Panasonic Corporation Imaging device
US8269879B2 (en) * 2007-11-28 2012-09-18 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US8421905B2 (en) * 2007-11-28 2013-04-16 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US20090135291A1 (en) * 2007-11-28 2009-05-28 Fujifilm Corporation Image pickup apparatus and image pickup method used for the same
US8223253B2 (en) * 2008-08-07 2012-07-17 Canon Kabushiki Kaisha Image pickup device and method of controlling same
US20100033592A1 (en) * 2008-08-07 2010-02-11 Canon Kabushiki Kaisha Image pickup device and method of controlling same
US20100188560A1 (en) * 2008-12-26 2010-07-29 Takenori Sakai Imaging apparatus
US8493494B2 (en) * 2008-12-26 2013-07-23 Panasonic Corporation Imaging apparatus with subject selecting mode
US20100253797A1 (en) * 2009-04-01 2010-10-07 Samsung Electronics Co., Ltd. Smart flash viewer
US20100309327A1 (en) * 2009-06-05 2010-12-09 Hon Hai Precision Industry Co., Ltd. Camera
US20110002680A1 (en) * 2009-07-02 2011-01-06 Texas Instruments Incorporated Method and apparatus for focusing an image of an imaging device
US8577098B2 (en) * 2009-10-27 2013-11-05 Canon Kabushiki Kaisha Apparatus, method and program for designating an object image to be registered
US20110096995A1 (en) * 2009-10-27 2011-04-28 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US20110115945A1 (en) * 2009-11-17 2011-05-19 Fujifilm Corporation Autofocus system
US8643766B2 (en) 2009-11-17 2014-02-04 Fujifilm Corporation Autofocus system equipped with a face recognition and tracking function
US20110242346A1 (en) * 2010-03-31 2011-10-06 Shunta Ego Compound eye photographing method and apparatus
US9865068B2 (en) * 2010-09-06 2018-01-09 Sony Corporation Image processing device, and image procesing method
US20120057794A1 (en) * 2010-09-06 2012-03-08 Shingo Tsurumi Image processing device, program, and image procesing method
US20140219517A1 (en) * 2010-12-30 2014-08-07 Nokia Corporation Methods, apparatuses and computer program products for efficiently recognizing faces of images associated with various illumination conditions
US9760764B2 (en) * 2010-12-30 2017-09-12 Nokia Technologies Oy Methods, apparatuses and computer program products for efficiently recognizing faces of images associated with various illumination conditions
US9621781B2 (en) 2011-10-11 2017-04-11 Olympus Corporation Focus control device, endoscope system, and focus control method
US9519202B2 (en) 2012-03-15 2016-12-13 Panasonic Intellectual Property Management Co., Ltd. Auto-focusing device and image pickup device
US20140168460A1 (en) * 2012-12-13 2014-06-19 Hon Hai Precision Industry Co., Ltd. Electronic device and image capturing control method
US20160227104A1 (en) * 2015-01-29 2016-08-04 Haike Guan Image processing apparatus, image capturing apparatus, and storage medium storing image processing program
US9800777B2 (en) * 2015-01-29 2017-10-24 Ricoh Company, Ltd. Image processing apparatus, image capturing apparatus, and storage medium storing image processing program
US9742983B2 (en) * 2015-02-02 2017-08-22 Canon Kabushiki Kaisha Image capturing apparatus with automatic focus adjustment and control method thereof, and storage medium
US20170019589A1 (en) * 2015-07-14 2017-01-19 Samsung Electronics Co., Ltd. Image capturing apparatus and method
US10382672B2 (en) * 2015-07-14 2019-08-13 Samsung Electronics Co., Ltd. Image capturing apparatus and method

Also Published As

Publication number Publication date
US20110141344A1 (en) 2011-06-16
EP1986421A3 (en) 2008-12-03
EP1986421A2 (en) 2008-10-29
US8253847B2 (en) 2012-08-28

Similar Documents

Publication Publication Date Title
US8593533B2 (en) Image processing apparatus, image-pickup apparatus, and image processing method
US8629915B2 (en) Digital photographing apparatus, method of controlling the same, and computer readable storage medium
US8145049B2 (en) Focus adjustment method, focus adjustment apparatus, and control method thereof
US9225855B2 (en) Imaging apparatus, imaging system, and control method for increasing accuracy when determining an imaging scene based on input image data and information stored in an external information processing apparatus
US8509482B2 (en) Subject tracking apparatus, subject region extraction apparatus, and control methods therefor
US8416312B2 (en) Image selection device and method for selecting image
US8831282B2 (en) Imaging device including a face detector
EP2051506B1 (en) Imaging device and imaging control method
US8073207B2 (en) Method for displaying face detection frame, method for displaying character information, and image-taking device
JP5060233B2 (en) Imaging apparatus and automatic photographing method thereof
EP1909229B1 (en) Tracking device and image-capturing apparatus
JP4127296B2 (en) Imaging device, imaging device control method, and computer program
JP3848230B2 (en) Autofocus device, imaging device, autofocus method, program, and storage medium
US8350892B2 (en) Image pickup apparatus, image pickup method, playback control apparatus, playback control method, and program
US7868917B2 (en) Imaging device with moving object prediction notification
US8194173B2 (en) Auto-focusing electronic camera that focuses on a characterized portion of an object
TWI399082B (en) Display control device, display control method and program
US7791668B2 (en) Digital camera
US7860386B2 (en) Imaging apparatus, control method of imaging apparatus, and computer program
US8797423B2 (en) System for and method of controlling a parameter used for detecting an objective body in an image and computer program
US9747492B2 (en) Image processing apparatus, method of processing image, and computer-readable storage medium
US7973833B2 (en) System for and method of taking image and computer program
JP5589527B2 (en) Imaging apparatus and tracking subject detection method
US8265474B2 (en) Autofocus system
US20120050587A1 (en) Imaging apparatus and image capturing method

Legal Events

Date Code Title Description
AS Assignment

Owner name: NIKON CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ABE, KOICHI;REEL/FRAME:021062/0572

Effective date: 20080523

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION