US7602417B2 - Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer


Info

Publication number
US7602417B2
US7602417B2 (application US11/457,862)
Authority
US
United States
Prior art keywords
detection result
display
detected
image data
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US11/457,862
Other versions
US20070030375A1 (en)
Inventor
Tsutomu Ogasawara
Eiichiro Ikeda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: IKEDA, EIICHIRO, OGASAWARA, TSUTOMU
Publication of US20070030375A1
Priority to US12/552,571 (US7817202B2)
Priority to US12/552,554 (US7738024B2)
Application granted
Publication of US7602417B2
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635 Region indicators; Field of view indicators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/634 Warning indications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/65 Control of camera operation in relation to power supply
    • H04N23/651 Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor

Definitions

  • the present invention relates to an image processing technique for repetitively detecting, from image data, an object satisfying predetermined conditions and displaying a detection result of the object.
  • An imaging apparatus can repetitively detect, from image data, an object satisfying predetermined conditions, for example, to address the situations discussed below.
  • a camera may have an auto-focus function for automatically adjusting focus on a target subject to be photographed.
  • the camera selects one or plural focus detection areas, and adjusts the lens position of its imaging optics system to focus on a subject in the selected focus detection area. Then, the camera performs exposure compensation processing by enlarging a weighting factor applied to a brightness value of the main subject located in the focus detection area.
  • the focus detection area can occupy a relatively limited area on the screen.
  • When the main subject is present outside the focus detection area, it is difficult to focus on the main subject.
  • the focus adjusting action may be erroneously applied to another object different from the target (i.e., main subject).
  • the camera may regard a different subject in another focus detection area as a main subject and may erroneously apply a focus adjusting action to this subject.
  • Japanese Patent Application Laid-open No. 2003-107335 discloses a camera that can automate the processes of detecting a main subject from obtained image data using a shape analysis, displaying a focus detection area corresponding to the detected main subject, and performing a focus adjusting action applied to the focus detection area.
  • the image data is entirely searched to detect a main subject and accordingly the focus adjusting action can be applied to the main subject wherever the main subject is present in an object field. Furthermore, to momentarily track the main subject, the detecting action of the main subject based on the shape analysis must be performed periodically.
  • an apparatus capable of automatically detecting a main subject may erroneously select a subject that a user does not intend to shoot. Hence, it is necessary to let a user confirm a main subject detected by the camera.
  • a liquid crystal monitor or other display unit can continuously display image data to let a user observe the movement of a subject.
  • the processing for updating the detection result of a main subject should be performed periodically. And, the latest region where the main subject is present should be continuously displayed.
  • a frame indicating the position of a detected subject can be superimposed on an image captured by the camera.
  • As a practical method for detecting a main subject, it is possible to detect a front or oblique face of a subject person based on the spatial relationship between the eyes, the nose, and the mouth of a face (refer to Japanese Patent Application Laid-open No. 2002-251380).
  • the present invention is directed to an apparatus having a function of repetitively detecting, from image data, an object satisfying predetermined conditions, and can stably display a detection result even when a target object cannot be temporarily detected.
  • an image processing method which includes repetitively updating image data; displaying an image based on the image data; detecting an object satisfying predetermined conditions from the image data; combining a display of detection result indicating a region where the object is detected with a display of the image; and determining whether the display of detection result should be continued, when the object cannot be detected during the display of detection result.
  • the image processing method may further include measuring a time when the detection result is continuously displayed; and determining based on a measurement result whether the detection result should be continuously displayed.
  • the display of the detection result is canceled when the measurement result reaches a predetermined time.
  • the measurement result is reset when the object can be detected during the display of the detection result.
  • the predetermined time is changeable in accordance with a position of the detected object in the image. Further, according to still another aspect of the present invention, the closer the position of the detected object is to an edge of the image, the shorter the predetermined time is set.
  • the predetermined time is changeable in accordance with a size of the detected object.
  • the image processing method may further include determining, based on a moving direction of the detected object, whether the detection result should be continuously displayed.
  • the image processing method may further include determining whether the detection result should be continuously displayed, based on a moving direction of the detected object and a position of the detected object in the image. Additionally, according to yet another aspect of the present invention, the image processing method may further include detecting a change amount of the image data in response to the update of the image data; and determining, based on a detection result, whether the detection result should be continuously displayed.
  • the image processing method may further include detecting a change amount of the image data in response to the update of the image data; and resetting the measurement result in accordance with the change amount.
  • an imaging apparatus which includes an imaging element configured to produce image data based on light reflected from a subject; a detection circuit configured to detect an object satisfying predetermined conditions from the image data obtained from the imaging element; a display unit configured to repetitively obtain the image data and display an image based on the obtained image data, and configured to combine the image with a detection result indicating a region where the object detected by the detection circuit is present; and a signal processing circuit configured to determine whether the detection result should be continuously displayed on the display unit when the detection circuit cannot detect the object while the display unit displays the detection result.
  • the display unit combines a moving image and the detection result.
  • the imaging apparatus may further include a focus control circuit that performs auto-focus processing applied to the object detected by the detection circuit.
  • the imaging apparatus may further include a focus control circuit that performs exposure control applied to the object detected by the detection circuit.
  • the imaging apparatus may further comprise a timer that measures a time when the detection result is continuously displayed, wherein the signal processing circuit determines based on a measurement result of the timer whether the detection result should be continuously displayed. Also, according to another aspect of the present invention, the signal processing circuit cancels the display of the detection result when the measurement result reaches a predetermined time. And still yet, according to another aspect of the present invention, the signal processing circuit resets the measurement result of the timer in response to a detection of the object by the detection circuit when the display unit displays the detection result.
  • a computer readable medium which contains computer-executable instructions for performing processing of image data.
  • the medium includes computer-executable instructions for repetitively updating image data; computer-executable instructions for displaying an image based on the image data; computer-executable instructions for detecting an object satisfying predetermined conditions from the image data; computer-executable instructions for combining a display of detection result indicating a region where the object is detected with a display of the image; and computer-executable instructions for determining whether the display of detection result should be continued, when the object cannot be detected during the display of detection result.
  • the computer readable medium may further include computer-executable instructions for measuring a time when the detection result is continuously displayed; and computer-executable instructions for determining based on a measurement result whether the detection result should be continuously displayed. Still yet, according to another aspect of the present invention, the display of the detection result is canceled when the measurement result reaches a predetermined time. And finally, in another aspect of the present invention, the measurement result is reset when the object can be detected during the display of the detection result.
  • FIG. 1 is a block diagram illustrating a schematic arrangement of an exemplary imaging apparatus in accordance with a first exemplary embodiment of the present invention.
  • FIG. 2 is a flowchart showing an exemplary main processing routine in accordance with the first embodiment of the present invention.
  • FIG. 3 is a flowchart showing an exemplary frame display processing routine in accordance with the first embodiment of the present invention.
  • FIG. 4 is a flowchart showing an exemplary AE and AF processing routine in accordance with the first embodiment of the present invention.
  • FIG. 5 is a flowchart showing an exemplary imaging processing routine in accordance with the first embodiment of the present invention.
  • FIG. 6A is a view showing an exemplary relationship between the measurement time of a timer and display/erasure of a face detection frame in accordance with the first embodiment of the present invention.
  • FIG. 6B is a view showing an exemplary relationship between the measurement time of a timer and display/erasure of a face detection frame in accordance with a second exemplary embodiment of the present invention.
  • FIG. 6C is a view showing another exemplary relationship between the measurement time of a timer and display/erasure of a face detection frame in accordance with the second embodiment of the present invention.
  • FIG. 7A is a view illustrating an exemplary display pattern of a face detection frame in accordance with an aspect of the present invention.
  • FIG. 7B is a view showing an exemplary relationship between the face detection frame and dissected regions in accordance with an aspect of the present invention.
  • FIG. 8A is a view showing an exemplary relationship between the position and size of a detected face region in comparison with through display image data used in face detection processing in accordance with the second embodiment of the present invention.
  • FIG. 8B is a graph showing an exemplary relationship between the position of a detected face region and a timer correction coefficient in accordance with the second embodiment of the present invention.
  • FIG. 8C is a graph showing an exemplary relationship between the size of a detected face region and a timer correction coefficient in accordance with the second embodiment of the present invention.
  • FIG. 9 is a flowchart showing an exemplary frame display processing routine in accordance with a third exemplary embodiment of the present invention.
  • FIG. 10 is a flowchart showing an exemplary frame display processing routine in accordance with a fourth exemplary embodiment of the present invention.
  • FIG. 11 is a flowchart showing an exemplary scene detection processing routine in accordance with the fourth embodiment of the present invention.
  • FIGS. 12A through 12F are views each illustrating an exemplary display pattern of the face detection frame in accordance with a fifth exemplary embodiment of the present invention.
  • FIG. 13 is a view illustrating an exemplary display pattern of the face detection frame in accordance with a modified embodiment of the fifth embodiment of the present invention.
  • FIG. 14 is a view illustrating a display pattern of a conventional face detection frame.
  • FIG. 1 is a block diagram illustrating an imaging apparatus 1000 in accordance with an exemplary embodiment of the present invention.
  • the imaging apparatus 1000 is an electronic still camera.
  • the imaging apparatus 1000 includes an imaging lens group 1001 , a light quantity adjuster 1002 including a diaphragm apparatus and a shutter apparatus, an imaging element 1003 (e.g., CCD or CMOS) that can convert light flux (i.e., subject image) having passed through the imaging lens group 1001 into an electric signal, and an analog signal processing circuit 1004 that can apply clamp processing and gain processing to an analog signal produced from the imaging element 1003 .
  • the imaging apparatus 1000 includes an analog/digital (hereinafter, referred to as A/D) converter 1005 that can convert an output of the analog signal processing circuit 1004 into a digital signal, and a digital signal processing circuit 1007 that can apply pixel interpolation processing and color conversion processing to the data produced from the A/D converter 1005 or to the data produced from the memory control circuit 1006 .
  • the digital signal processing circuit 1007 can also perform calculation based on captured image data.
  • the imaging apparatus 1000 includes a system control circuit 1012 that can control, based on calculation results obtained by the digital signal processing circuit 1007 , a through-the-lens (TTL) type auto focus (AF) processing, auto exposure (AE) processing, and pre-flash (EF) processing, applied to an exposure control circuit 1013 and a focus control circuit 1014 .
  • the digital signal processing circuit 1007 can apply predetermined calculation processing to the captured image data, and execute a TTL-type auto white balance (AWB) processing based on obtained calculation results.
  • the digital signal processing circuit 1007 includes a face detection circuit 1016 that can detect features of a face from the captured image data based on the detection of edges of eyes, a mouth, or the like.
  • the face detection circuit 1016 can execute face detection processing for detecting a region corresponding to a human face.
  • the digital signal processing circuit 1007 includes a timer 1015 that can measure a display time for each of later-described individual face detection frames.
  • a memory control circuit 1006 can control the analog signal processing circuit 1004 , the A/D converter 1005 , the digital signal processing circuit 1007 , a memory 1008 , and a digital/analog (hereinafter, referred to as D/A) converter 1009 .
  • the digital data produced from the A/D converter 1005 can be written, via the digital signal processing circuit 1007 and the memory control circuit 1006 , into the memory 1008 .
  • the digital data produced from the A/D converter 1005 can be written, via the memory control circuit 1006 , into the memory 1008 .
  • the memory 1008 can store data to be displayed on a display unit 1010 .
  • the data recorded in the memory 1008 can be outputted, via the D/A converter 1009 , to the display unit 1010 such as a liquid crystal monitor that can display an image based on the received data.
  • the memory 1008 can store captured still images and moving images, with a sufficient storage capacity for a predetermined number of still images and a predetermined time of moving images. In other words, a user can shoot continuous still images or can shoot panoramic images, because the memory 1008 enables writing large-sized image data at higher speeds. Furthermore, the memory 1008 can be used as a work area of the system control circuit 1012 .
  • the display unit 1010 can function as an electronic viewfinder that successively displays captured image data.
  • the display unit 1010 can arbitrarily turn the display on or off in response to an instruction given from the system control circuit 1012 .
  • the imaging apparatus 1000 can reduce electric power consumption.
  • the display unit 1010 can display an operation state and a message with images and letters in accordance with the operation of the system control circuit 1012 that can execute the program(s).
  • An interface 1011 can control communications between the imaging apparatus 1000 and a storage medium (e.g., a memory card or a hard disk).
  • the imaging apparatus 1000 can transfer or receive image data and management information via the interface 1011 to or from a peripheral device (e.g., other computer or a printer).
  • Because the interface 1011 can be configured to operate in conformity with the protocol of a PCMCIA card or a Compact Flash (registered trademark) card, various types of communication cards can be inserted into card slots of the interface 1011 .
  • the communication card can be selected from a LAN card, a modem card, a USB card, an IEEE1394 card, a P1284 card, a SCSI card, and a PHS card.
  • the system control circuit 1012 can control the operation of the imaging apparatus 1000 .
  • the system control circuit 1012 includes a memory that can store numerous constants, variables, and program(s) used in the operation of the system control circuit 1012 .
  • the exposure control circuit 1013 can control the diaphragm apparatus and the shutter apparatus equipped in the light quantity adjuster 1002 .
  • the focus control circuit 1014 can control a focusing action and a zooming action of the imaging lens group 1001 .
  • the exposure control circuit 1013 and the focus control circuit 1014 can be controlled according to the TTL-type.
  • the system control circuit 1012 controls the exposure control circuit 1013 and the focus control circuit 1014 , based on calculation results obtained by the digital signal processing circuit 1007 based on the captured image data.
  • FIGS. 2 through 5 are flowcharts showing exemplary operations of the electronic camera in accordance with the present exemplary embodiment.
  • the program for executing the processing is stored in the memory of the system control circuit 1012 and can be executed under the control of the system control circuit 1012 .
  • FIG. 2 is a flowchart showing an exemplary main processing routine in the imaging apparatus 1000 in accordance with the present exemplary embodiment.
  • the processing shown in FIG. 2 can be started, for example, in response to a turning-on operation of a power source immediately after the batteries are replaced.
  • step S 101 the system control circuit 1012 initializes various flags and control variables stored in its memory.
  • step S 102 the system control circuit 1012 turns the image display of the display unit 1010 to an OFF state as initial settings.
  • step S 103 the system control circuit 1012 detects the state of operation mode set for the imaging apparatus 1000 .
  • step S 105 the system control circuit 1012 changes the display of the display unit 1010 to a deactivated state and stores flags and control variables and other necessary parameters, setting values, and setting modes. Then, the system control circuit 1012 performs predetermined termination processing for turning off the power source of the display unit 1010 and other components in the imaging apparatus 1000 .
  • step S 103 When any other mode but a shooting mode is set in step S 103 , the system control circuit 1012 proceeds to step S 104 .
  • step S 104 the system control circuit 1012 executes required processing corresponding to the selected mode and returns to step S 103 .
  • step S 106 the system control circuit 1012 determines whether a residual amount or an operation state of the power source is at a warning level which may cause the imaging apparatus 1000 to malfunction.
  • step S 106 When the system control circuit 1012 decides that the power source is in the warning level (NO at step S 106 ), the processing flow proceeds to step S 108 , where the system control circuit 1012 causes the display unit 1010 to perform a predetermined warning display with images and sounds. Then, the processing flow returns to step S 103 .
  • step S 107 the system control circuit 1012 determines whether an operation state of the storage medium is in a warning level according to which the imaging apparatus 1000 may fail especially in recording and playback of image data.
  • step S 107 When the system control circuit 1012 decides that the storage medium is in the warning level (NO at step S 107 ), the processing flow proceeds to the above-described step S 108 to cause the display unit 1010 to perform a predetermined warning display with images and sounds. Then, the processing flow returns to step S 103 .
  • step S 109 the system control circuit 1012 causes the display unit 1010 to display the state of various settings of the imaging apparatus 1000 with images and sounds.
  • step S 110 the system control circuit 1012 turns the image display of display unit 1010 to an ON state, and causes the light quantity adjuster 1002 to open the shutter apparatus. Furthermore, in step S 111 , the system control circuit 1012 causes the display unit 1010 to start a through display according to which captured image data can be successively displayed as a moving image.
  • the captured image data is successively written in the memory 1008 and the written data is successively displayed on the display unit 1010 to realize an electronic viewfinder function.
  • the display unit 1010 can update the image display at intervals of 1/30 second.
  • step S 112 the system control circuit 1012 causes the digital signal processing circuit 1007 to start the face detection processing for detecting a face region from the image data.
  • Various conventional methods are available for the face detection processing.
  • a neural network is a representative method for detecting a face region based on a learning technique.
  • a template matching can be used to extract features representing eyes, a nose, or any other physical shape from an image region.
  • Another method detects the quantity of features, such as a skin color or an eye shape, from the image data and analyzes them using a statistical method (refer to Japanese Patent Application Laid-open No. 10-232934 or Japanese Patent Application Laid-open No. 2000-48184).
  • the face detection processing is performed using a method for detecting a pair of eyes (both eyes), a nose, and a mouth and determining a human face region based on a detected relative position.
  • When a person (i.e., an object to be detected) closes his or her eyes, identifying a face region may be difficult because a pair of eyes (i.e., a reference portion) cannot be detected.
  • the face detection processing requires a significant calculation time.
  • the digital signal processing circuit 1007 cannot apply the face detection processing to the entire image data obtained for the through display.
  • the digital signal processing circuit 1007 performs the face detection processing for every two acquisitions of through display image data.
  • step S 113 the system control circuit 1012 causes the digital signal processing circuit 1007 to perform frame display processing for displaying a frame showing a face detection result obtained in step S 112 .
  • FIG. 3 is a flowchart showing details of the frame display processing (refer to step S 113 ).
  • step S 201 the digital signal processing circuit 1007 determines whether a frame indicating the position of a detected face region (hereinafter, referred to as “face detection frame”) is already displayed on the display unit 1010 .
  • the processing flow proceeds to step S 202 .
  • When the processing of step S 113 is first executed after a user sets a shooting mode to the imaging apparatus 1000 , no face detection frame is displayed on the display unit 1010 . Accordingly, the processing flow proceeds to step S 202 .
  • step S 202 the digital signal processing circuit 1007 obtains coordinate data of all face regions detected in the face detection processing of step S 112 .
  • step S 203 the display unit 1010 combines, based on the coordinate data of each face region obtained in step S 202 , the display of a face detection frame surrounding a detected face region with the display of a subject image, as shown in FIG. 7A .
  • FIG. 7A is a display screen of the display unit 1010 , including a rectangular face detection frame surrounding a detected human head.
  • step S 204 the timer 1015 starts measuring a display time of the face detection frame displayed in step S 203 .
  • step S 205 the digital signal processing circuit 1007 determines whether the display of a face detection frame for all coordinate data representing the face regions obtained in step S 202 is accomplished. When the display of a face detection frame for all coordinate data has been accomplished, the digital signal processing circuit 1007 terminates this routine.
  • step S 203 When the display of face detection frames for all coordinate data is not accomplished yet, the processing flow returns to step S 203 . For example, if no face region is detected in step S 112 , the processing of steps S 203 through S 205 relating to the display of a face detection frame will not be performed.
  • the system control circuit 1012 determines whether a shutter switch SW 1 is in a pressed state. When the shutter switch SW 1 is not in a pressed state, the processing flow returns to step S 113 . More specifically, unless the shutter switch SW 1 is pressed, the through display in step S 111 , the face detection processing in step S 112 , and the frame display processing in step S 113 are repetitively performed.
  • the shape of the face detection frame can be an ellipse, or any other shape that fits the contour of a subject face. Furthermore, instead of displaying a face detection frame, it is possible to use a method for emphasizing the contour of a face region, or a method for shading the region other than a face region, as far as a detected face region can be recognized.
  • step S 201 when a face detection frame is already displayed, the processing flow proceeds to step S 206 .
  • the digital signal processing circuit 1007 obtains coordinate data of all face regions detected in the face detection processing of step S 112 .
  • step S 207 the digital signal processing circuit 1007 selects one of face detection frames already displayed. Then, the digital signal processing circuit 1007 determines whether a selected face detection frame is present in the vicinity of any coordinate position of a face region newly obtained in step S 206 . When the selected face detection frame is present in the vicinity of any coordinate position of a newly obtained face region (YES in step S 207 ), the processing flow proceeds to step S 208 .
  • The neighboring region used in this decision can be determined experimentally so that a person surrounded by a selected face detection frame agrees with the person represented by the coordinates of a newly obtained face region.
  • the digital signal processing circuit 1007 can select the coordinate data of a face region closest to the coordinate position of a face region being set in the selected face detection frame, and then execute the processing of step S 208 . In this case, if the digital signal processing circuit 1007 has an individual authentication function, the digital signal processing circuit 1007 can determine in step S 207 whether a person surrounded by an already displayed face detection frame is the same.
  • step S 208 the digital signal processing circuit 1007 compares the coordinates of a selected face detection frame and the coordinates of a face region positioned in the vicinity of the face detection frame and obtains the difference.
  • step S 209 the digital signal processing circuit 1007 determines whether the difference obtained in step S 208 is within a predetermined range. When the difference is within the predetermined range, the display unit 1010 does not update the position of the face detection frame and continues the display of the already displayed face detection frame. Then, the processing flow proceeds to step S 210 .
  • step S 211 the display unit 1010 sets a new face detection frame based on the coordinate data of the face region compared in step S 208 , and displays the new face detection frame. Then, the processing flow proceeds to step S 210 .
  • step S 209 by determining whether the difference obtained in step S 208 is within the predetermined range, it can be determined if the coordinates of a newly obtained face region are positioned in the already displayed face detection frame.
  • step S 210 the timer 1015 starts measuring a display time corresponding to the face detection frame selected as an object in the decision made in step S 207 , or a display time corresponding to the face detection frame updated in step S 211 after returning its value to an initial value. Then, the processing flow proceeds to step S 212 .
  • step S 207 when the coordinate position of the newly obtained face region is not in the vicinity of the selected face detection frame, the processing flow proceeds to step S 213 .
  • step S 213 the digital signal processing circuit 1007 determines whether the display time of the selected face detection frame measured by the timer 1015 has reached a predetermined time. When the measurement time has already reached the predetermined time (YES in step S 213 ), the processing flow proceeds to step S 214 .
  • the digital signal processing circuit 1007 erases the face detection frame and resets the measurement time to an initial value. Otherwise, when the measurement time has not yet reached the predetermined time (NO in step S 213 ), the digital signal processing circuit 1007 continues displaying the face detection frame without resetting the timer 1015 . Then, the processing flow proceeds to step S 212 .
  • step S 212 the digital signal processing circuit 1007 determines whether the processing of step S 207 is finished for all of the already displayed face detection frames. When any face detection frame has not yet been processed, the processing flow returns to step S 207 . Otherwise, the processing flow proceeds to step S 215 .
  • step S 215 the digital signal processing circuit 1007 determines whether there is any face region whose coordinate position is not close to any of the face detection frames used in step S 207 . When no such face region is present, the digital signal processing circuit 1007 terminates this routine. When at least one face region has a coordinate position not close to any of the face detection frames, the processing flow proceeds to step S 216 to set a new face detection frame.
  • step S 216 the display unit 1010 combines, based on the coordinate data of each face region obtained in step S 206 , the display of a face detection frame surrounding a detected face region with the display of a subject image.
  • step S 217 the timer 1015 starts measuring a display time of a face detection frame newly displayed in step S 216 .
  • step S 218 the digital signal processing circuit 1007 determines whether there is a face region to which a face detection frame is not yet set. When the setting of a face detection frame for all coordinate data has been accomplished (YES in step S 218 ), the digital signal processing circuit 1007 terminates this routine. When there is a face region to which a face detection frame is not yet set (NO in step S 218 ), the processing flow returns to step S 216 .
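Taken together, steps S 201 through S 218 amount to bookkeeping over the list of currently displayed face detection frames: match each newly detected face to a nearby displayed frame, refresh and possibly move matched frames, age unmatched frames by their timers, and create frames for leftover detections. The following minimal Python sketch illustrates that bookkeeping; it is not the patented implementation. FaceFrame, update_frames, NEAR_THRESHOLD, and SMALL_RANGE are hypothetical names and values, and this sketch ticks the timer once per detection pass rather than once per 1/30-second display update.

```python
from dataclasses import dataclass

INITIAL_COUNT = 5     # display-time count set in steps S204/S210/S217 (value taken from FIG. 6A)
NEAR_THRESHOLD = 20   # pixel distance treated as "in the vicinity" in step S207; illustrative
SMALL_RANGE = 4       # step S209: a difference within this range leaves the frame where it is

@dataclass
class FaceFrame:
    x: int            # frame position on the through-display image
    y: int
    count: int = INITIAL_COUNT

def update_frames(frames, detections):
    """One pass of steps S206 through S218 of FIG. 3."""
    unmatched = list(detections)              # step S206: newly obtained face coordinates
    survivors = []
    for frame in frames:                      # step S207: test each displayed frame
        match = next((d for d in unmatched
                      if abs(d[0] - frame.x) < NEAR_THRESHOLD
                      and abs(d[1] - frame.y) < NEAR_THRESHOLD), None)
        if match is not None:
            unmatched.remove(match)
            dx, dy = match[0] - frame.x, match[1] - frame.y   # step S208: position difference
            if abs(dx) > SMALL_RANGE or abs(dy) > SMALL_RANGE:
                frame.x, frame.y = match      # step S211: redraw the frame at the new coordinates
            frame.count = INITIAL_COUNT       # step S210: restart the display timer
            survivors.append(frame)
        else:
            frame.count -= 1                  # steps S213/S214: age the frame toward erasure
            if frame.count > 0:
                survivors.append(frame)       # keep displaying despite the missed detection
    # steps S215 through S217: detections near no existing frame get new frames
    survivors.extend(FaceFrame(x, y) for (x, y) in unmatched)
    return survivors

# Two passes: the face is found, then briefly lost; the frame persists with a lower count.
frames = update_frames([], [(160, 120)])
frames = update_frames(frames, [])
print([(f.x, f.y, f.count) for f in frames])  # [(160, 120, 4)]
```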
  • step S 114 the system control circuit 1012 determines whether the shutter switch SW 1 is in a pressed state.
  • the processing flow returns to step S 113 .
  • the processing flow proceeds to step S 115 .
  • the digital signal processing circuit 1007 suspends the face detection processing until the shutter switch SW 1 is released.
  • step S 115 the system control circuit 1012 performs AF processing to adjust a focal distance of the imaging lens group 1001 to the subject, and further performs AE processing to determine a diaphragm value and a shutter speed.
  • the AE processing can include the settings for a flashlight if necessary.
  • FIG. 4 is a flowchart showing details of exemplary AF and AE processing performed in step S 115 .
  • step S 301 an electric charge signal is produced from the imaging element 1003 .
  • the A/D converter 1005 converts the electric charge signal into digital data.
  • the digital signal processing circuit 1007 inputs the digital data.
  • the digital signal processing circuit 1007 performs, based on the input image data, predetermined calculations for the TTL-type AE processing, EF processing, AWB processing, and AF processing. In each processing, the digital signal processing circuit 1007 uses only the image data of the face regions detected by step S 112 without using all of captured pixels, or increases a weighting factor given to the detected face regions compared to those given to other regions. Thus, in each of the TTL-type AE processing, EF processing, AWB processing, and AF processing, the digital signal processing circuit 1007 can give priority to the calculations of image data of the detected face regions.
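For illustration only, the priority given to face regions in these calculations can be pictured as a weighted average over metering blocks. In the sketch below, the 4x face weight, the four-block layout, and the name weighted_brightness are assumptions; the text above states only that the detected face regions are either used exclusively or given a larger weighting factor than other regions.

```python
def weighted_brightness(block_means, face_blocks, face_weight=4.0):
    """Average brightness with detected face blocks weighted more heavily."""
    total = weight_sum = 0.0
    for block_id, brightness in block_means.items():
        w = face_weight if block_id in face_blocks else 1.0
        total += w * brightness
        weight_sum += w
    return total / weight_sum

# The face (block 2) is brighter than the dim background, so the weighted value
# (about 87.9) sits closer to the face brightness than the plain mean (63.75).
blocks = {0: 40.0, 1: 45.0, 2: 120.0, 3: 50.0}
print(weighted_brightness(blocks, face_blocks={2}))
```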
  • step S 210 when the processing flow proceeds from step S 209 to step S 210 , the coordinate position of the detected face region does not completely agree with the position of the displayed face detection frame.
  • the digital signal processing circuit 1007 uses the coordinates of a latest detected face region instead of using the position where the face detection frame is displayed.
  • step S 302 based on the results obtained by predetermined calculations in step S 301 , the system control circuit 1012 determines whether the exposure is appropriate. When the exposure is inappropriate (NO in step S 302 ), the processing flow proceeds to step S 303 . In step S 303 , the system control circuit 1012 causes the exposure control circuit 1013 to perform an AE control.
  • step S 304 the system control circuit 1012 determines, based on the measurement data obtained in the AE control, whether flashlight is required. When flashlight is necessary, the processing flow proceeds to step S 305 . In step S 305 , a flashlight flag is set and a flashlight (not shown in the drawings) is charged. Then, the processing flow returns to step S 301 . On the other hand, when no flashlight is required, the process returns to step S 301 .
  • step S 302 When the exposure is appropriate in step S 302 (YES in step S 302 ), the processing flow proceeds to step S 306 .
  • step S 306 the system control circuit 1012 causes its memory or the memory 1008 to store measurement data or setting parameters. Then, in step S 307 , the system control circuit 1012 determines whether the white balance is appropriate, based on calculation results obtained by the digital signal processing circuit 1007 and measurement data obtained in the AE control.
  • step S 308 the system control circuit 1012 causes the digital signal processing circuit 1007 to adjust color processing parameters and perform an AWB control. Then, the processing flow returns to step S 301 .
  • When the white balance is appropriate in step S 307 , the processing flow proceeds to step S 309 .
  • step S 309 the system control circuit 1012 causes its memory or the memory 1008 to store measurement data or setting parameters.
  • step S 310 the system control circuit 1012 determines whether the camera is in a focused state. When the camera is not in a focused state, the processing flow proceeds to step S 311 .
  • step S 311 the system control circuit 1012 causes the focus control circuit 1014 to perform an AF control. Then, the processing flow returns to step S 301 .
  • When the system control circuit 1012 decides that the camera is in a focused state in step S 310 , the processing flow proceeds to step S 312 .
  • step S 312 the system control circuit 1012 causes its memory to store measurement data or setting parameters, sets a display of an AF frame indicating a focused region, and terminates the AF processing and the AE processing.
  • the AF frame has a position identical to the coordinate position of the latest face region.
  • step S 116 the system control circuit 1012 sets the display unit 1010 into the through display state again after finishing the AF processing and the AE processing, to display the AF frame.
  • step S 117 the system control circuit 1012 determines whether a shutter switch SW 2 is switched to an ON state. If it is determined that switch SW 2 is switched to an ON state (YES in step S 117 ), the process proceeds to step S 119 . If it is determined that switch SW 2 is not switched to an ON state (NO in step S 117 ), the process proceeds to step S 118 . In step S 118 , the system control circuit 1012 determines whether the shutter switch SW 1 is switched to an ON state. If it is determined that switch SW 1 is switched to an ON state (YES in step S 118 ), the process returns to step S 117 .
  • step S 118 If it is determined that switch SW 1 is not switched to an ON state (NO in step S 118 ), the process returns to step S 112 . Thus, when both the shutter switches SW 1 and SW 2 are not switched to an ON state, the processing flow returns to step S 112 .
  • step S 119 the system control circuit 1012 executes exposure processing for writing captured image data into the memory 1008 . Furthermore, the system control circuit 1012 with the memory control circuit 1006 (and the digital signal processing circuit 1007 if necessary) executes shooting processing including developing processing for reading image data from the memory 1008 and performing various processing.
  • FIG. 5 is a flowchart showing exemplary details of the shooting processing (refer to step S 119 ).
  • step S 401 according to the measurement data obtained in the above-described AE control, the exposure control circuit 1013 sets a diaphragm value of the light quantity adjuster 1002 and starts exposure of the imaging element 1003 .
  • step S 402 the system control circuit 1012 determines, based on a flashlight flag, whether flashlight is necessary. When flashlight is necessary, the processing flow proceeds to step S 403 to cause a flash unit to emit a predetermined quantity of light for a pre-flash. It is noted that the light quantity of a pre-flash can be determined based on the diaphragm value of the light quantity adjuster 1002 , distance to a subject, and sensitivity being set for the imaging element 1003 .
  • When the system control circuit 1012 decides that no flashlight is necessary in step S 402 , the flash unit does not perform a pre-flash and the processing flow proceeds to step S 409 .
  • step S 404 the exposure control circuit 1013 waits for an exposure termination timing for imaging element 1003 based on photometric data.
  • the processing flow proceeds to step S 405 .
  • step S 405 the system control circuit 1012 causes the imaging element 1003 to produce an electric charge signal. Then, the system control circuit 1012 causes the A/D converter 1005 , the digital signal processing circuit 1007 , and the memory control circuit 1006 (or the A/D converter 1005 and directly the memory control circuit 1006 ) to write captured image data into the memory 1008 .
  • step S 406 the system control circuit 1012 obtains an average brightness value in the face region during the moment of a pre-flash, and calculates an optimum quantity of light emission (i.e., the quantity of light emission during an actual shot) so that the face region can have an appropriate brightness.
  • For example, when the face region already has an appropriate brightness during the pre-flash, the quantity of light emission for an actual shot can be the same as that for the pre-shot action.
  • When the brightness of the face region during the pre-flash is half the appropriate value, the quantity of light emission can be doubled for an actual shot.
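One simple reading of the step S 406 calculation, consistent with the two cases above, is a linear scaling of the pre-flash emission by the ratio of the target brightness to the brightness measured in the face region. The linear model, the neglect of ambient light, and the name main_flash_quantity are assumptions for illustration.

```python
def main_flash_quantity(preflash_quantity, face_brightness, target_brightness):
    """Scale the pre-flash emission so the face region reaches the target brightness."""
    return preflash_quantity * (target_brightness / face_brightness)

print(main_flash_quantity(1.0, 100.0, 100.0))  # 1.0: face already appropriate, reuse the pre-flash quantity
print(main_flash_quantity(1.0, 50.0, 100.0))   # 2.0: face at half the target, double the emission
```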
  • step S 407 the system control circuit 1012 causes the imaging element 1003 to perform a reset action for an actual shot and start exposure. Then, the processing flow proceeds to step S 408 .
  • step S 408 the flash unit emits light with the optimum quantity obtained in step S 406 .
  • step S 409 the exposure control circuit 1013 waits for an exposure termination timing for imaging element 1003 based on photometric data.
  • the exposure control circuit 1013 stops exposure and closes the shutter in step S 410 .
  • step S 411 the system control circuit 1012 causes the imaging element 1003 to output an electric charge signal. Then, the system control circuit 1012 causes the A/D converter 1005 , the digital signal processing circuit 1007 , and the memory control circuit 1006 (or the A/D converter 1005 and directly the memory control circuit 1006 ) to write captured image data into the memory 1008 .
  • step S 412 the system control circuit 1012 causes the memory control circuit 1006 (and the digital signal processing circuit 1007 if necessary) to read the image data from the memory 1008 and execute vertical addition processing.
  • step S 413 the system control circuit 1012 causes the digital signal processing circuit 1007 to perform color processing.
  • step S 414 the system control circuit 1012 causes the memory 1008 to store the processed display image data, and terminates the shooting processing routine.
  • step S 120 the system control circuit 1012 causes the display unit 1010 to execute a quick review display based on the image data obtained in step S 119 . It is noted that the display unit 1010 during a shooting action can always be in a displayed state as an electronic viewfinder. Further, it is noted that the quick review display can be performed immediately after accomplishing a shooting action.
  • step S 121 the system control circuit 1012 causes the memory control circuit 1006 (and the digital signal processing circuit 1007 if necessary) to read the captured image data from the memory 1008 and execute various image processing. Furthermore, the system control circuit 1012 executes recording processing for compressing image data and writing compressed image data into a storage medium.
  • step S 122 the system control circuit 1012 determines whether the shutter switch SW 2 is in a pressed state. If in step S 122 the switch SW 2 is in a pressed state (YES in step S 122 ), the process proceeds to step S 123 . If in step S 122 the switch SW 2 is not in a pressed state (NO in step S 122 ), the process proceeds to step S 124 .
  • step S 123 the system control circuit 1012 determines whether a continuous shooting flag is in an ON state. If in step S 123 the continuous shooting flag is in an ON state (YES in step S 123 ), the process flow returns to step S 119 .
  • the continuous shooting flag can be stored in the internal memory of the system control circuit 1012 or in the memory 1008 .
  • step S 119 the system control circuit 1012 causes the imaging apparatus 1000 to shoot the next image to realize a continuous shooting.
  • When the continuous shooting flag is not in an ON state (NO in step S 123 ), the process returns to step S 122 .
  • the system control circuit 1012 repeats the processing of steps S 122 and S 123 until the shutter switch SW 2 is released.
  • step S 121 it is determined, in an operation setting mode for executing the quick review display immediately after accomplishing a shooting action, whether the shutter switch SW 2 is in a pressed state at the termination timing of the recording processing (refer to step S 121 ).
  • the display unit 1010 continues the quick review display until the shutter switch SW 2 is released. This procedure enables a user to carefully confirm shot images (i.e., captured images).
  • A user may continuously press the shutter switch SW 2 to confirm shot images provided by the quick review display for a while after the recording processing of step S 121 and then turn the shutter switch SW 2 off. In such a case, the processing flow similarly proceeds to step S 124 from step S 122 .
  • step S 124 the system control circuit 1012 determines whether a predetermined minimum review time has elapsed. If in step S 124 the minimum review time has already elapsed (YES in step S 124 ), the processing flow proceeds to step S 125 .
  • step S 125 the system control circuit 1012 brings the display state of the display unit 1010 into a through display state. With this processing, a user can confirm shot images on the display unit 1010 that provides a quick review display. Then, the display unit 1010 starts a through display state to successively display shot image data for the next shoot.
  • step S 126 the system control circuit 1012 determines whether the shutter switch SW 1 is in an ON state. When the shutter switch SW 1 is in an ON state, the processing flow proceeds to step S 117 for the next shooting action. On the other hand, when the shutter switch SW 1 is in an OFF state, the imaging apparatus 1000 has finished a series of shooting operations. Then, the processing flow returns to step S 112 .
  • the above-described face detection frame displaying method has the following characteristics.
  • When a face detection frame is displayed, the timer 1015 starts measuring its display time (refer to steps S 204 , S 210 , and S 217 ). If the timer 1015 counts up a predetermined time, i.e., if the camera continuously fails to detect a subject having a human face similar to that surrounded by the face detection frame, the face detection frame is erased (refer to step S 214 ).
  • The predetermined time counted by the timer 1015 is set to be longer than the time required, after the face detection frame is displayed at least once, to perform the next face detection and display the face detection frame again based on the detection result. More specifically, as long as the face detection remains successful, the face detection frame is never erased by the timer counting the predetermined time.
  • FIG. 6A is a view exemplarily showing a relationship between the measurement time of the timer 1015 and display/erasure of a face detection frame.
  • FIG. 6A shows, from top to bottom, the timing of obtaining image data for a through display, the timing of obtaining a face detection result, the timing of updating the face detection frame, and the measured time of the timer 1015 .
  • Each abscissa represents the time.
  • the through display image data can be updated at the intervals of 1/30 second, as indicated by timings t 0 , t 1 , t 2 , t 3 , t 4 , t 5 , t 6 , t 7 , t 8 , t 9 , and t 10 .
  • a face detection result corresponding to the image data obtained at the timing t 0 can be obtained at the timing ( 1 ).
  • a face detection result corresponding to the image data obtained at the timing t 2 can be obtained at the timing ( 2 ).
  • a face detection result corresponding to the image data obtained at the timing t 4 can be obtained at the timing ( 3 ).
  • the face detections at the timings ( 3 ) and ( 4 ) have ended in failure.
  • the face detection results obtained in the timings ( 1 ) and ( 2 ) involve coordinate data of a detected face region.
  • the face detection frame can be updated at the intervals of 1/30 second, i.e., at the same intervals as those of the through display.
  • the timer 1015 counts down by one each time the update timing of the face detection frame comes. According to the example shown in FIG. 6A , coordinate data of a detected face region can be obtained at the timing ( 1 ). The face detection frame can be newly displayed at the next update timing of the face detection frame. The timer 1015 sets the count value to ‘5.’ At the next update timing of the face detection frame, the count value decreases by 1 and becomes ‘4.’
  • the face detection frame can be updated based on the coordinate data of a newly obtained face region.
  • the timer 1015 again resets the count value to ‘5.’
  • the face detection result obtained at the timing ( 3 ) does not involve any coordinate data of a face region. Therefore, the timer 1015 decreases its count value to ‘2.’ In this case, because the count value does not reach ‘0’ yet, the face detection frame can be continuously displayed regardless of failure in the face detection.
  • the face detection result obtained at the next timing ( 4 ) does not involve any coordinate data of a face region, and the count value decreases to ‘0.’ In this case, because the measurement time of the timer 1015 has reached the predetermined time, the face detection frame is erased.
  • the face detection result obtained at the timing ( 5 ) involves coordinate data of a newly detected face region. Accordingly, the face detection frame is displayed again, and the timer 1015 sets its count value to ‘5.’
  • the present exemplary embodiment provides a function of continuously displaying the face detection frame for a predetermined time until the count value of the timer 1015 reaches ‘0’, even if the face detection fails in this period. Furthermore, when no face detection frame is displayed on the display unit 1010 , the detection result of a newly obtained face can be directly used to display a face detection frame (refer to step S 203 , FIG. 3 ).
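As a rough illustration of this counting rule, the toy simulation below decrements the count once per face detection frame update, resets it on a successful detection, and erases the frame when the count reaches zero. The event list, the tick ordering, and the variable names are assumptions; only the rule itself comes from the text.

```python
INITIAL_COUNT = 5
count, visible = 0, False
# One entry per face detection frame update; True/False is a detection result
# arriving at that update (success/failure), None means no result arrives.
events = [True, None, True, None, False, None, False, None, True, None]
for result in events:
    if visible:
        count -= 1
        if count == 0:
            visible = False               # count exhausted: erase the face detection frame
    if result:                            # coordinates obtained: (re)display and reset the timer
        count, visible = INITIAL_COUNT, True
print(visible, count)  # True 4: the frame survived two failures' grace period, was erased, then reappeared
```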
  • the position of the face detection frame should be updated to track a face position of the subject (refer to step S 211 , FIG. 3 ).
  • the first exemplary embodiment enables a continuous display of a face detection frame for a predetermined time regardless of failure in the face detection. Accordingly, when a subject person blinks and closes an eye or suddenly turns his/her face to look away, the present exemplary embodiment can prevent the face detection frame from being erased inadvertently. As a result, the first exemplary embodiment can suppress an undesirable phenomenon in which the display and erasure of the face detection frame are repeated in a short period of time.
  • the second exemplary embodiment is characterized in that the timer 1015 can change a count value according to the position or size of a detected face region.
  • When a detected face region is positioned away from the edge of the screen, the subject is likely to remain within the screen even if it moves, so the timer 1015 sets a longer count time.
  • When a detected face region is close to the edge of the screen, the subject may soon move out of the screen, so the timer 1015 sets a shorter count time. Furthermore, if a detected face region has a small area, a subject person will be positioned far from the camera. Even when the person moves, the positional change of the person on the screen will remain small.
  • the timer 1015 sets a longer count time when the detected face region has a smaller area, and sets a shorter count time when the detected face region has a larger area.
  • FIG. 8A is a view showing an exemplary relationship between the position and size of a detected face region in comparison with through display image data used in the face detection processing.
  • FIG. 8B is a graph showing an exemplary relationship between the position of a detected face region and a correction coefficient applied to the count value of the timer 1015 .
  • FIG. 8C is a graph showing an exemplary relationship between the size of a detected face region and a correction coefficient applied to the count value of the timer 1015 .
  • FIG. 8A shows QVGA image data consisting of 320×240 pixels, with an upper left corner positioned at the coordinates (0, 0) and a lower right corner positioned at the coordinates (319, 239).
  • coordinates (x0, y0) represent the upper left corner,
  • coordinates (x0, y1) represent the lower left corner,
  • coordinates (x1, y0) represent the upper right corner, and
  • coordinates (x1, y1) represent the lower right corner of a face region obtained in the face detection processing.
  • a shortest distance DIS, representing the distance between the face detection frame and the edge of the screen, is obtained according to the following formula:
  • DIS = min(x0, y0, 320 - x1, 240 - y1). Then, it is possible to obtain a timer correction coefficient K_DIS with reference to the obtained DIS.
  • the timer correction coefficient K_DIS can take a value between 0.0 and 1.0, as understood from the relationship of DIS and K_DIS shown in FIG. 8B .
  • as the distance DIS becomes smaller, i.e., as the face region approaches the edge of the screen, the timer correction coefficient K_DIS takes a smaller value.
  • although the present exemplary embodiment obtains the distance between the face detection frame and the edge of the screen, it is alternatively possible to obtain the distance between a central region of the face detection frame and the edge of the screen.
  • AREA = (x1 - x0) × (y1 - y0), representing the area of the face region obtained in the face detection processing.
  • the timer correction coefficient K_AREA can take a value between 0.0 and 1.0, as understood from the relationship of AREA and K_AREA shown in FIG. 8C .
  • as the area AREA becomes larger, the timer correction coefficient K_AREA takes a smaller value.
  • the initial count value T is an integer obtained by removing fractions from the product of the correction coefficients and a reference value Tref, i.e., T = floor(K_DIS × K_AREA × Tref). It is, however, possible to use only one of the timer correction coefficients K_DIS and K_AREA in this calculation. A sketch of this computation follows.
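The following Python sketch assembles the DIS, AREA, K_DIS, K_AREA, and T computations described above. The linear ramps (and the constants 80 and 320×240) are assumptions standing in for the curves of FIGS. 8B and 8C, which the text does not give numerically; Tref = 5 is likewise only illustrative.

```python
import math

def timer_initial_count(x0, y0, x1, y1, t_ref=5):
    """Initial count value T for the timer 1015 (second embodiment).

    (x0, y0)-(x1, y1) bound the detected face region in a 320x240
    (QVGA) image; t_ref is an illustrative reference value Tref.
    """
    # Shortest distance DIS between the face region and the screen edge.
    dis = min(x0, y0, 320 - x1, 240 - y1)
    # Area of the detected face region.
    area = (x1 - x0) * (y1 - y0)
    # Correction coefficients in [0.0, 1.0]; the exact curves of
    # FIGS. 8B and 8C are not given, so linear ramps are assumed:
    # K_DIS shrinks as the face nears the edge, K_AREA shrinks as
    # the face region grows larger.
    k_dis = max(0.0, min(1.0, dis / 80.0))
    k_area = max(0.0, min(1.0, 1.0 - area / (320.0 * 240.0)))
    # T: fractions removed from K_DIS * K_AREA * Tref.
    return math.floor(k_dis * k_area * t_ref)
```

With these assumptions, a large face near the screen edge yields a small T (the frame disappears quickly), while a small face near the center keeps a count close to Tref.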
  • FIGS. 6B and 6C show the relationship between the measurement time of the timer 1015 and the display/erasure of a face detection frame when the timer 1015 uses an initial count value different from that used in the first exemplary embodiment.
  • coordinate data of a face region can be obtained at timings (1) and (2).
  • in FIG. 6B, the initial count value of the timer is set to '2.' Therefore, failure in obtaining coordinate data of a face region at timing (3) results in disappearance of the face detection frame at the next update timing of the face detection frame.
  • a new face detection frame is not displayed until timing (5), at which coordinate data of a new face region can be obtained.
  • the new face detection frame can be displayed immediately after timing (5).
  • an initial count value newly set for the timer 1015 is '3' because the newly detected face region is positioned relatively far from the edge of the screen, or because it has a smaller area.
  • in FIG. 6C, the initial count value of the timer 1015 is set to '7.' Therefore, despite failure in obtaining coordinate data of a face region at timings (3) and (4), the count value of the timer 1015 never reaches '0' before timing (5), at which coordinate data of a new face region can be obtained. Accordingly, as shown in FIG. 6C, the face detection frame is continuously displayed from timing (1) through timing (5).
  • the second exemplary embodiment enables continuous display of a face detection frame for a predetermined time regardless of failure in the face detection. Furthermore, the second exemplary embodiment makes it possible to change the predetermined time in accordance with at least one of the position and the size of a detected face.
  • the present exemplary embodiment can prevent the face detection frame from being erased inadvertently.
  • the second exemplary embodiment can suppress an undesirable phenomenon of the face detection frame being repeatedly displayed and erased in a short period of time.
  • the third exemplary embodiment differs in the processing that succeeds step S207 of FIG. 3, performed when the selected face detection frame is not positioned near the coordinate position of a face region obtained in step S206.
  • the third exemplary embodiment includes processing of steps S501 through S505, which is not described in the first exemplary embodiment and will be described below in detail. Steps denoted by the same reference numerals as those shown in FIG. 3 and described in the first exemplary embodiment perform the same processing.
  • in step S207 of FIG. 9, the digital signal processing circuit 1007 selects one of the face detection frames already displayed. Then, the digital signal processing circuit 1007 determines whether the selected face detection frame is present in the vicinity of any coordinate position of a face region newly obtained in step S206. When the selected face detection frame is not present in the vicinity of any coordinate position of a newly obtained face region, the processing flow proceeds to step S501.
  • in step S501, the digital signal processing circuit 1007 determines whether, at a previous timing, any coordinate position of a face region was detected in the vicinity of the face detection frame selected in step S207.
  • when no face region was detected near the selected frame at the previous timing, the processing flow proceeds to step S213 to continuously display or erase the face detection frame with reference to the measurement time of the timer 1015.
  • when a face region was detected near the selected frame at the previous timing, the processing flow proceeds to step S502.
  • in step S502, the digital signal processing circuit 1007 calculates the movement of the face region based on the coordinate data of the face region positioned closest to the face detection frame selected in step S207.
  • the processing flow then proceeds to step S503.
  • in step S503, the digital signal processing circuit 1007 determines whether the coordinate position of the face region used in the calculation of the movement is located at the edge of the screen. When the face region is positioned at the edge of the screen, the face region will probably disappear from the screen. Therefore, in step S504, the face detection frame used in the calculation is erased, and the processing flow proceeds to step S212.
  • otherwise, in step S505, the timer 1015 resets its count value to an initial value and starts measuring a display time, and the processing flow then proceeds to step S212.
  • the third exemplary embodiment decides the erasure of a face detection frame based on the moving direction and position of a face region. Accordingly, in a situation where a subject will soon disappear from the screen, the face detection frame can be promptly erased.
  • the present exemplary embodiment can thus suppress the undesirable phenomenon of a detection frame remaining displayed even after the subject face has already disappeared from the screen.
  • in step S502, it is possible to obtain not only the moving direction but also the moving speed of a face region.
  • the faster the moving speed of a subject is, the higher the possibility of its disappearing from the screen becomes; accordingly, the initial value set for the timer 1015 in step S505 can be decreased for a fast-moving subject. A sketch of this decision logic follows.
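Below is a minimal Python sketch of the erase-or-reset decision in steps S502 through S505. The edge margin, the one-interval extrapolation, and the speed scaling of the initial value are assumptions; the patent states only that a face region at the screen edge triggers prompt erasure (S503/S504) and that a faster-moving subject can receive a smaller initial value (S505).

```python
def frame_decision_on_miss(prev_center, last_center,
                           width=320, height=240, margin=16, t_ref=5):
    """Decide the fate of a frame whose face was not re-detected.

    prev_center / last_center: (x, y) centers of the face region at the
    two most recent detection timings near the selected frame.
    Returns None to erase the frame (S504), or a new initial count
    value for the timer 1015 (S505).
    """
    vx = last_center[0] - prev_center[0]   # S502: movement of the face
    vy = last_center[1] - prev_center[1]
    nx, ny = last_center[0] + vx, last_center[1] + vy  # extrapolate ahead
    # S503/S504: at (or heading off) the screen edge -> erase promptly.
    if not (margin <= nx <= width - margin and
            margin <= ny <= height - margin):
        return None
    # S505: reset the timer; a faster subject gets a smaller initial
    # value, since it is more likely to leave the screen soon.
    speed = (vx * vx + vy * vy) ** 0.5
    return max(1, round(t_ref - speed / 10.0))
```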
  • the fourth exemplary embodiment differs in the processing that succeeds step S207 of FIG. 3, performed when the selected face detection frame is not positioned near the coordinate position of a face region obtained in step S206.
  • the fourth exemplary embodiment includes processing of steps S601 through S603, which is not described in the first exemplary embodiment and will be described below in detail. Steps denoted in FIG. 10 by the same reference numerals as those shown in FIG. 3 perform the same processing as in the first exemplary embodiment.
  • in step S207 of FIG. 10, the digital signal processing circuit 1007 selects one of the face detection frames already displayed. Then, the digital signal processing circuit 1007 determines whether the selected face detection frame is present in the vicinity of any coordinate position of a face region newly obtained in step S206. When it is not, the processing flow proceeds to step S601.
  • in step S601, the digital signal processing circuit 1007 executes scene detection processing for determining whether the situation of a subject has changed.
  • FIG. 11 is a flowchart showing details of the scene detection processing, which can be described with reference to the dissected regions shown in FIGS. 7A and 7B.
  • FIG. 7A is a display screen of the display unit 1010
  • FIG. 7B shows dissected regions of the imaging element 1003 .
  • chain lines represent boundaries of dissected regions.
  • FIG. 7A shows chain lines similar to those of FIG. 7B .
  • a shaded region represents a group of dissected regions where the face detection frame of FIG. 7A is present.
  • in step S701, the digital signal processing circuit 1007 detects a brightness signal of each dissected region where the face detection frame is not present (i.e., each non-shaded region in FIG. 7B). Then, the digital signal processing circuit 1007 compares the detected brightness signal with the brightness signal obtained from the corresponding region of the image data captured when the timer 1015 was previously reset to an initial value, and calculates a difference ΔYn between the compared signals.
  • in step S702, the digital signal processing circuit 1007 converts the image data of each dissected region where the face detection frame is present into a specific frequency component (i.e., (1/2)fn, one half of the Nyquist frequency, in the present exemplary embodiment). Then, the digital signal processing circuit 1007 compares the converted frequency with the frequency obtained by the same method from the corresponding region of the image data captured when the timer 1015 was previously reset to an initial value, and calculates a difference Δ(1/2)fn between the compared frequencies.
  • in step S703, the digital signal processing circuit 1007 detects a color difference signal in each dissected region where the face detection frame is not present. Next, the digital signal processing circuit 1007 compares the detected color difference signal with the color difference signal obtained from the corresponding region of the image data captured when the timer 1015 was previously reset to an initial value, and calculates differences ΔCrn and ΔCbn between the compared signals.
  • in step S704, the digital signal processing circuit 1007 determines whether the difference ΔYn calculated in step S701 is less than or equal to a threshold Yth. If so, the process flow proceeds to step S705; otherwise, it proceeds to step S708.
  • in step S705, the digital signal processing circuit 1007 determines whether the difference Δ(1/2)fn calculated in step S702 is less than or equal to a threshold fnth. If so, the process flow proceeds to step S706; otherwise, it proceeds to step S708.
  • in step S706, the digital signal processing circuit 1007 determines whether the differences ΔCrn and ΔCbn calculated in step S703 are less than or equal to thresholds Crth and Cbth, respectively. If so, the process flow proceeds to step S707; otherwise, it proceeds to step S708. The thresholds Yth, fnth, Crth, and Cbth can be obtained experimentally and are used to determine whether the subject remains on the screen.
  • in step S707, the digital signal processing circuit 1007 sets a flag indicating no scene change.
  • in step S708, the digital signal processing circuit 1007 sets a flag indicating a scene change. After setting either flag, the digital signal processing circuit 1007 terminates this routine. A sketch of this decision appears below.
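The following Python sketch collapses the threshold comparisons of steps S701 through S708 into a single pass. All threshold values are placeholders (the patent obtains Yth, fnth, Crth, and Cbth experimentally), and the data layout is assumed: brightness and color differences are computed for regions outside the frame, and the half-Nyquist frequency component for regions inside it, matching the step descriptions above.

```python
def scene_changed(outside_cur, outside_ref, inside_cur, inside_ref,
                  y_th=10.0, fn_th=5.0, cr_th=8.0, cb_th=8.0):
    """Return True when the scene change flag (S708) should be set.

    outside_*: per-region (Y, Cr, Cb) tuples for dissected regions
    where no face detection frame is present; inside_*: per-region
    half-Nyquist frequency components where the frame is present.
    The reference values come from the image captured when the timer
    1015 was last reset to an initial value.
    """
    for (y, cr, cb), (y0, cr0, cb0) in zip(outside_cur, outside_ref):
        if abs(y - y0) > y_th:                              # S701/S704
            return True
        if abs(cr - cr0) > cr_th or abs(cb - cb0) > cb_th:  # S703/S706
            return True
    for fn, fn0 in zip(inside_cur, inside_ref):             # S702/S705
        if abs(fn - fn0) > fn_th:
            return True
    return False                                            # S707
```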
  • in step S602, the digital signal processing circuit 1007 determines whether the flag indicating a scene change is set.
  • when the flag indicating a scene change is set, the processing flow proceeds to step S213.
  • when the flag indicating a scene change is not set, the processing flow proceeds to step S603.
  • in step S603, the timer 1015 is set to an initial value and starts measuring a display time. Then, the processing flow proceeds to step S212.
  • in step S213, the digital signal processing circuit 1007 determines whether the count value of the timer 1015 has reached the predetermined time. When the timer count value has already reached the predetermined time, the processing flow proceeds to step S214 to erase the face detection frame displayed on the display unit 1010.
  • when the flag indicating a scene change is not set, the situation change of the subject is presumed to be small. More specifically, when a subject person blinks and closes an eye, or suddenly turns his/her face to look away, the face detection will probably fail; nevertheless, it is proper to presume that the position of the subject's head has not changed largely. Thus, the already displayed face detection frame continues to be displayed.
  • in a configuration that relies on the scene detection result in this way, the timer can be omitted.
  • the reference values used for setting the scene change flag in the present exemplary embodiment are the difference in the brightness signal (i.e., ΔYn), the difference in the converted specific frequency (i.e., Δ(1/2)fn), and the differences in the color difference signals (i.e., ΔCrn and ΔCbn), in the respective dissected regions.
  • the AE processing, AF processing, or AWB processing can be performed during the through display, before an ON state of the shutter switch SW1 is detected in step S114.
  • the brightness value, focused state, and white balance information obtained based on the image data for the through display can be compared with target reference values.
  • the conventional AE processing, AF processing, or AWB processing can be performed.
  • the digital signal processing circuit 1007 can obtain compensated amounts for the AE processing, AF processing, and AWB processing which are calculated based on image data obtained for the through display.
  • these compensated amounts can be compared with thresholds. When all or at least one of the compensated amounts exceeds the threshold(s), the situation change of a subject can be decided as not small and the flag indicating a scene change can be set.
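As a sketch of the variant just described, the scene-change flag can be derived from the compensated amounts alone; the threshold values and function name here are illustrative placeholders.

```python
def scene_changed_from_compensation(ae_amt, af_amt, awb_amt,
                                    ae_th=1.0, af_th=1.0, awb_th=1.0):
    """Set the scene-change flag when any compensated amount for the
    AE, AF, or AWB processing (calculated from through-display image
    data) exceeds its threshold; thresholds are illustrative."""
    return ae_amt > ae_th or af_amt > af_th or awb_amt > awb_th
```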
  • the information used for the scene detection decisions is not limited to signals obtainable from the image data captured by the imaging apparatus 1000.
  • signals produced from a shake detection circuit or a posture detection circuit installed on the imaging apparatus 1000 can be used as reference values for the decisions.
  • a specific signal representing a user's operation entered through an operating member of the imaging apparatus 1000 or a predetermined signal transmitted from an external apparatus can be used as reference values for the decisions.
  • when the situation change of a subject is small, the face detection frame can be continuously displayed and the timer can be reset regardless of failure in the face detection.
  • the display time of the face detection frame can be increased compared with other cases.
  • the present exemplary embodiment can prevent the face detection frame from being erased when a subject person blinks and closes an eye or turns his/her face to look away.
  • the fourth exemplary embodiment can suppress an undesirable phenomenon of the face detection frame being repeatedly displayed and erased in a short period of time.
  • a fifth exemplary embodiment of the present invention will be described below.
  • the fifth exemplary embodiment differs from the other exemplary embodiments in the contents displayed on the display unit 1010 when the situation change of a subject is small according to the result of the scene detection processing.
  • FIGS. 12B through 12F show various patterns of the face detection frame displayed when the situation change of a subject is assumed to be small according to the scene detection processing.
  • FIG. 12A shows an ordinary pattern of the face detection frame displayed when the face detection is successful.
  • FIG. 12B shows a face detection frame changed in color, or displayed in a semitransparent state.
  • FIG. 12C shows a face detection frame displayed as a dot image, or displayed according to a predetermined flickering mode.
  • FIG. 12D shows a face detection frame with a thin line.
  • FIG. 12E shows a face detection frame displayed in part, or in a predetermined cutout state.
  • FIG. 12F shows a face detection frame displayed together with an icon.
  • the face detection frame can be displayed for a while with a specific display pattern so as to let a user recognize the result of failure in the face detection, without immediately erasing the face detection frame.
  • the change in the display pattern of the face detection frame can be reduced so as to assure the visibility of the screen.
  • FIG. 13 shows a modified embodiment of the present exemplary embodiment.
  • as shown in FIG. 13, the state of the face detection frame changes stepwise each time a predetermined time elapses, and the frame finally disappears. Accordingly, if the face detection fails, the face detection frame will eventually be erased as time elapses, even when the situation change of the subject is assumed to be small. A sketch of this stepwise degradation follows.
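A minimal Python sketch of the stepwise degradation: the pattern sequence loosely follows the variants of FIGS. 12B through 12E, and the step length is an assumed constant rather than a value from the patent.

```python
# Display patterns ordered from normal to most degraded; the names
# loosely follow FIGS. 12A-12E.
PATTERNS = ["normal", "semitransparent", "dotted", "thin_line", "partial"]
TICKS_PER_STEP = 2  # frame updates per degradation step (assumption)

def frame_pattern(ticks_since_last_detection):
    """Return the display pattern for the frame, or None once erased."""
    step = ticks_since_last_detection // TICKS_PER_STEP
    if step >= len(PATTERNS):
        return None  # the frame finally disappears
    return PATTERNS[step]
```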
  • a pair of eyes, a nose, and a mouth are detected and a human face region is determined based on their relative positions.
  • the method for identifying a main subject through the face detection processing is not limited to the disclosed example.
  • the method for detecting a main subject is not limited to the face detection processing.
  • the main subject need not be a human being.
  • the main subject can be an animal, a plant, a building, or a geometric pattern.
  • any other exemplary embodiment that provides the above-described functions of detecting a desired subject and displaying the position of the detected subject can obtain comparable effects, assuring a stable display regardless of temporary failure in detecting the subject.
  • the imaging apparatus 1000 detects a main subject during the through display of the subject.
  • the present invention is not limited to the disclosed example.
  • another exemplary embodiment of the present invention can possess the capability of transferring image data obtained in the imaging apparatus to an external device, causing a display unit of the external device to display the image data, and causing the external device to detect a main subject.
  • image data can be a moving image already recorded in a storage medium or device if it is readable.
  • another exemplary embodiment of the present invention can provide functions of repetitively detecting an object satisfying specific conditions from continuously changing image data and realizing the display reflecting detection results.
  • an exemplary embodiment of the present invention can include the program(s) and a storage medium of the program(s) readable by a computer.
  • the program(s) can be recorded into, for example, a CD-ROM or other recording medium, or can be supplied to a computer via various transmission media.
  • the recording medium storing the program(s) can be selected from any one of a flexible disk, hard disk, optical disk, magneto-optical disk, MO, CD-ROM, CD-R, CD-RW, magnetic tape, nonvolatile memory card, ROM, and DVD (DVD-ROM, DVD-R).
  • the transmission medium of the program(s) can be a computer network (e.g., a LAN, or a WAN represented by the Internet) that can supply carriers of program information.
  • the transmission medium of the program(s) can be a communication medium (e.g., an optical fiber or other cable line, or a wireless line) used in a wireless communication network system.
  • the operating system or other application software running on the computer may execute part or all of the processing so that the functions of the above-described exemplary embodiments can be realized.
  • the program(s) read out of a recording medium can be written into a memory of a feature expansion board equipped in a computer or into a memory of a feature expansion unit connected to the computer.
  • a CPU provided on the feature expansion board or the feature expansion unit can execute part or all of the processing so that the functions of the above-described exemplary embodiments can be realized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

An image processing method is provided for detecting the position of a specific subject from a moving image and combining a display of detection result indicating the detected position with the moving image. The image processing method includes a step of determining, depending on a display time of the detection result, whether the detection result should be continuously displayed, when the subject cannot be detected during the display of the detection result combined with the moving image.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image processing technique for repetitively detecting, from image data, an object satisfying predetermined conditions and displaying a detection result of the object.
2. Description of the Related Art
An imaging apparatus can repetitively detect, from image data, an object satisfying predetermined conditions, for example, to improve the following situations discussed herein below.
For instance, a camera may have an auto-focus function for automatically adjusting focus on a target subject to be photographed. In general, the camera selects one or plural focus detection areas, and adjusts the lens position of its imaging optics system to focus on a subject in the selected focus detection area. Then, the camera performs exposure compensation processing by enlarging a weighting factor applied to a brightness value of the main subject located in the focus detection area.
However, the focus detection area can occupy only a relatively limited area of the screen. When the main subject is present outside the focus detection area, it is difficult to focus on the main subject. Furthermore, even if the main subject is present in a focus detection area, the focus adjusting action may be erroneously applied to an object other than the target (i.e., the main subject). For example, when a subject in a different focus detection area is positioned closer to the camera than the main subject, the camera may regard that subject as the main subject and erroneously apply the focus adjusting action to it.
To avoid this drawback, it is possible to request the user to designate the focus detection area where the main subject is present each time the user shoots, although this is inconvenient for the user. In view of the above, Japanese Patent Application Laid-open No. 2003-107335 discloses a camera that can automate the processes of detecting a main subject from obtained image data using a shape analysis, displaying a focus detection area corresponding to the detected main subject, and performing a focus adjusting action applied to that focus detection area.
According to the aforementioned camera, the image data is entirely searched to detect a main subject and accordingly the focus adjusting action can be applied to the main subject wherever the main subject is present in an object field. Furthermore, to momentarily track the main subject, the detecting action of the main subject based on the shape analysis must be performed periodically.
However, an apparatus capable of automatically detecting a main subject may erroneously select a subject that a user does not intend to shoot. Hence, it is necessary to let a user confirm a main subject detected by the camera.
Furthermore, a liquid crystal monitor or other display unit can continuously display image data to let a user observe the movement of a subject. In such a case, it is desirable to update the detection result of a main subject in accordance with a detected movement of the main subject. To this end, the processing for updating the detection result of a main subject should be performed periodically. And, the latest region where the main subject is present should be continuously displayed.
More specifically, when a main subject is detected, a frame indicating the position of a detected subject can be superimposed on an image captured by the camera. In this case, as a practical method for detecting a main subject, it is possible to detect a front or oblique face of a subject person based on the spatial relationship between both eyes, a nose, and a mouth on a face (refer to Japanese Patent Application Laid-open No. 2002-251380).
However, according to the above-described detection method, if a subject person blinks and closes an eye or suddenly turns his/her face to look away, one eye will not be recognized on a captured image and accordingly the camera will fail in detecting a main subject. As a result, as shown in FIG. 14, the frame indicating the detected region will temporarily disappear from the monitor screen, whereas the main subject remains at the same place.
For example, posing for a while is hard for a child who is waiting for completion of a shot. Thus, if the above-described method is used to detect a child, the camera may temporarily fail in detecting a main subject. Such drawbacks will induce an undesirable phenomenon repeating the display and erasure of the frame indicating the position of a main subject in a short period of time. In this situation, the image displayed on a display unit will be unstable and a user will be unable to surely observe the movement of a subject.
Similar problems will commonly arise when a main subject is repetitively detected from a moving image or from continuously changing image data and a detection result is displayed on a display unit.
Also it is noted that the above-described problems are not limited to cameras or other image capturing devices. For example, similar problems will arise in application software that can detect a target object from transferred moving image data and display a detection result.
Therefore, it would be desirable to provide an apparatus that has a function of repetitively detecting, from image data, an object satisfying predetermined conditions, and that can stably display a detection result even when the target object cannot be temporarily detected.
SUMMARY OF THE INVENTION
The present invention is directed to an apparatus having a function of repetitively detecting, from image data, an object satisfying predetermined conditions, and can stably display a detection result even when a target object cannot be temporarily detected.
According to an aspect of the present invention, an image processing method is provided which includes repetitively updating image data; displaying an image based on the image data; detecting an object satisfying predetermined conditions from the image data; combining a display of detection result indicating a region where the object is detected with a display of the image; and determining whether the display of detection result should be continued, when the object cannot be detected during the display of detection result.
According to another aspect of the present invention, the image processing method may further include measuring a time when the detection result is continuously displayed; and determining based on a measurement result whether the detection result should be continuously displayed. According to another aspect of the present invention, the display of the detection result is canceled when the measurement result has reached a predetermined time.
According to yet another aspect of the present invention, the measurement result is reset when the object can be detected during the display of the detection result. Moreover, according to yet another aspect of the present invention, the predetermined time is changeable in accordance with a position of the detected object in the image. Further, according to still another aspect of the present invention, the closer the position of the detected object is to an edge of the image, the shorter the predetermined time is set.
According to another aspect of the present invention, the predetermined time is changeable in accordance with a size of the detected object. And according to another aspect of the present invention, the larger the size of the detected object is, the shorter the predetermined time is set. Furthermore, in another aspect of the present invention, the image processing method may further include determining, based on a moving direction of the detected object, whether the detection result should be continuously displayed.
And, according to another aspect of the present invention, the image processing method may further include determining whether the detection result should be continuously displayed, based on a moving direction of the detected object and a position of the detected object in the image. Additionally, according to yet another aspect of the present invention, the image processing method may further include detecting a change amount of the image data in response to the update of the image data; and determining, based on a detection result, whether the detection result should be continuously displayed.
Moreover, according to another aspect of the present invention, the image processing method may further include detecting a change amount of the image data in response to the update of the image data; and resetting the measurement result in accordance with the change amount.
Additionally, according to another aspect of the present invention, an imaging apparatus is provided which includes an imaging element configured to produce image data based on light reflected from a subject; a detection circuit configured to detect an object satisfying predetermined conditions from the image data obtained from the imaging element; a display unit configured to repetitively obtain the image data and display an image based on the obtained image data, and configured to combine the image with a detection result indicating a region where the object detected by the detection circuit is present; and a signal processing circuit configured to determine whether the detection result should be continuously displayed on the display unit when the detection circuit cannot detect the object while the display unit displays the detection result.
Further, according to another aspect of the present invention, the display unit combines a moving image and the detection result. And, according to another aspect of the present invention, the imaging apparatus may further include a focus control circuit that performs auto-focus processing applied to the object detected by the detection circuit. Additionally, according to another aspect of the present invention, the imaging apparatus may further include an exposure control circuit that performs exposure control applied to the object detected by the detection circuit.
Moreover, according to another aspect of the present invention, the imaging apparatus may further include a timer that measures a time when the detection result is continuously displayed, wherein the signal processing circuit determines based on a measurement result of the timer whether the detection result should be continuously displayed. Also, according to another aspect of the present invention, the signal processing circuit cancels the display of the detection result when the measurement result has reached a predetermined time. And still yet, according to another aspect of the present invention, the signal processing circuit resets the measurement result of the timer in response to a detection of the object by the detection circuit when the display unit displays the detection result.
Additionally, according to another aspect of the present invention, a computer readable medium is provided which contains computer-executable instructions for performing processing of image data. Here, the medium includes computer-executable instructions for repetitively updating image data; computer-executable instructions for displaying an image based on the image data; computer-executable instructions for detecting an object satisfying predetermined conditions from the image data; computer-executable instructions for combining a display of detection result indicating a region where the object is detected with a display of the image; and computer-executable instructions for determining whether the display of detection result should be continued, when the object cannot be detected during the display of detection result.
Furthermore, according to another aspect of the present invention, the computer readable medium may further include computer-executable instructions for measuring a time when the detection result is continuously displayed; and computer-executable instructions for determining based on a measurement result whether the detection result should be continuously displayed. Still yet, according to another aspect of the present invention, the display of the detection result is canceled when the measurement result has reached a predetermined time. And finally, in another aspect of the present invention, the measurement result is reset when the object can be detected during the display of the detection result.
Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate various embodiments, features and aspects of the present invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram illustrating a schematic arrangement of an exemplary imaging apparatus in accordance with a first exemplary embodiment of the present invention.
FIG. 2 is a flowchart showing an exemplary main processing routine in accordance with the first embodiment of the present invention.
FIG. 3 is a flowchart showing an exemplary frame display processing routine in accordance with the first embodiment of the present invention.
FIG. 4 is a flowchart showing an exemplary AE and AF processing routine in accordance with the first embodiment of the present invention.
FIG. 5 is a flowchart showing an exemplary imaging processing routine in accordance with the first embodiment of the present invention.
FIG. 6A is a view showing an exemplary relationship between the measurement time of a timer and display/erasure of a face detection frame in accordance with the first embodiment of the present invention.
FIG. 6B is a view showing an exemplary relationship between the measurement time of a timer and display/erasure of a face detection frame in accordance with a second exemplary embodiment of the present invention.
FIG. 6C is a view showing another exemplary relationship between the measurement time of a timer and display/erasure of a face detection frame in accordance with the second embodiment of the present invention.
FIG. 7A is a view illustrating an exemplary display pattern of a face detection frame in accordance with an aspect of the present invention.
FIG. 7B is a view showing an exemplary relationship between the face detection frame and dissected regions in accordance with an aspect of the present.
FIG. 8A is a view showing an exemplary relationship between the position and size of a detected face region in comparison with through display image data used in face detection processing in accordance with the second embodiment of the present invention.
FIG. 8B is a graph showing an exemplary relationship between the position of a detected face region and a timer correction coefficient in accordance with the second embodiment of the present invention.
FIG. 8C is a graph showing an exemplary relationship between the size of a detected face region and a timer correction coefficient in accordance with the second embodiment of the present invention.
FIG. 9 is a flowchart showing an exemplary frame display processing routine in accordance with a third exemplary embodiment of the present invention.
FIG. 10 is a flowchart showing an exemplary frame display processing routine in accordance with a fourth exemplary embodiment of the present invention.
FIG. 11 is a flowchart showing an exemplary scene detection processing routine in accordance with the fourth embodiment of the present invention.
FIGS. 12A through 12F are views each illustrating an exemplary display pattern of the face detection frame in accordance with a fifth exemplary embodiment of the present invention.
FIG. 13 is a view illustrating an exemplary display pattern of the face detection frame in accordance with a modified embodiment of the fifth embodiment of the present invention.
FIG. 14 is a view illustrating a display pattern of a conventional face detection frame.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The following description of various exemplary embodiments, features and aspects of the present invention is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
First Exemplary Embodiment
FIG. 1 is a block diagram illustrating an imaging apparatus 1000 in accordance with an exemplary embodiment of the present invention. In the present exemplary embodiment, the imaging apparatus 1000 is an electronic still camera.
The imaging apparatus 1000 includes an imaging lens group 1001, a light quantity adjuster 1002 including a diaphragm apparatus and a shutter apparatus, an imaging element 1003 (e.g., CCD or CMOS) that can convert light flux (i.e., subject image) having passed through the imaging lens group 1001 into an electric signal, and an analog signal processing circuit 1004 that can apply clamp processing and gain processing to an analog signal produced from the imaging element 1003.
Furthermore, the imaging apparatus 1000 includes an analog/digital (hereinafter, referred to as A/D) converter 1005 that can convert an output of the analog signal processing circuit 1004 into a digital signal, and a digital signal processing circuit 1007 that can apply pixel interpolation processing and color conversion processing to the data produced from the A/D converter 1005 or to the data produced from the memory control circuit 1006. The digital signal processing circuit 1007 can also perform calculation based on captured image data.
Furthermore, the imaging apparatus 1000 includes a system control circuit 1012 that can control, based on calculation results obtained by the digital signal processing circuit 1007, a through-the-lens (TTL) type auto focus (AF) processing, auto exposure (AE) processing, and pre-flash (EF) processing, applied to an exposure control circuit 1013 and a focus control circuit 1014.
Furthermore, the digital signal processing circuit 1007 can apply predetermined calculation processing to the captured image data, and execute a TTL-type auto white balance (AWB) processing based on obtained calculation results.
Moreover, the digital signal processing circuit 1007 includes a face detection circuit 1016 that can detect features of a face from the captured image data based on the detection of edges of eyes, a mouth, or the like. The face detection circuit 1016 can execute face detection processing for detecting a region corresponding to a human face. Furthermore, the digital signal processing circuit 1007 includes a timer 1015 that can measure a display time for each of later-described individual face detection frames.
A memory control circuit 1006 can control the analog signal processing circuit 1004, the A/D converter 1005, the digital signal processing circuit 1007, a memory 1008, and a digital/analog (hereinafter, referred to as D/A) converter 1009. The digital data produced from the A/D converter 1005 can be written, via the digital signal processing circuit 1007 and the memory control circuit 1006, into the memory 1008. Alternatively, the digital data produced from the A/D converter 1005 can be written, via the memory control circuit 1006, into the memory 1008.
The memory 1008 can store data to be displayed on a display unit 1010. The data recorded in the memory 1008 can be outputted, via the D/A converter 1009, to the display unit 1010 such as a liquid crystal monitor that can display an image based on the received data.
Furthermore, the memory 1008 can store captured still images and moving images, with a sufficient storage capacity for a predetermined number of still images and a predetermined time of moving images. In other words, a user can shoot continuous still images or can shoot panoramic images, because the memory 1008 enables writing large-sized image data at higher speeds. Furthermore, the memory 1008 can be used as a work area of the system control circuit 1012.
The display unit 1010 can function as an electronic viewfinder that successively displays captured image data. The display unit 1010 can arbitrarily turn the display on or off in response to an instruction given from the system control circuit 1012. When the display unit 1010 is in an OFF state, the imaging apparatus 1000 can reduce electric power consumption. Furthermore, the display unit 1010 can display an operation state and a message with images and letters in accordance with the operation of the system control circuit 1012 that can execute the program(s).
An interface 1011 can control communications between the imaging apparatus 1000 and a storage medium (e.g., a memory card or a hard disk). The imaging apparatus 1000 can transfer or receive image data and management information via the interface 1011 to or from a peripheral device (e.g., other computer or a printer).
It is also noted that, because the interface 1011 can be configured to operate in conformity with the protocol of a PCMCIA card or a Compact Flash (registered trademark) card, various types of communication cards can be inserted into card slots of the interface 1011. For example, the communication card can be selected from a LAN card, a modem card, a USB card, an IEEE1394 card, a P1284 card, a SCSI card, and a PHS card.
The system control circuit 1012 can control the operation of the imaging apparatus 1000. The system control circuit 1012 includes a memory that can store numerous constants, variables, and program(s) used in the operation of the system control circuit 1012.
The exposure control circuit 1013 can control the diaphragm apparatus and the shutter apparatus equipped in the light quantity adjuster 1002. The focus control circuit 1014 can control a focusing action and a zooming action of the imaging lens group 1001. The exposure control circuit 1013 and the focus control circuit 1014 can be controlled according to the TTL-type. The system control circuit 1012 controls the exposure control circuit 1013 and the focus control circuit 1014, based on calculation results obtained by the digital signal processing circuit 1007 based on the captured image data.
FIGS. 2 through 5 are flowcharts showing exemplary operations of the electronic camera in accordance with the present exemplary embodiment. The program for executing this processing is stored in the memory of the system control circuit 1012 and is executed under the control of the system control circuit 1012.
FIG. 2 is a flowchart showing an exemplary main processing routine in the imaging apparatus 1000 in accordance with the present exemplary embodiment. The processing shown in FIG. 2 can be started, for example, in response to a turning-on operation of a power source immediately after the batteries are replaced.
First, in step S101, the system control circuit 1012 initializes various flags and control variables stored in its memory. In step S102, the system control circuit 1012 turns the image display of the display unit 1010 to an OFF state as initial settings. Next, in step S103, the system control circuit 1012 detects the state of operation mode set for the imaging apparatus 1000.
If the operation mode is POWER OFF, the processing flow proceeds to step S105. In step S105, the system control circuit 1012 changes the display of the display unit 1010 to a deactivated state and stores flags and control variables and other necessary parameters, setting values, and setting modes. Then, the system control circuit 1012 performs predetermined termination processing for turning off the power source of the display unit 1010 and other components in the imaging apparatus 1000.
When any other mode but a shooting mode is set in step S103, the system control circuit 1012 proceeds to step S104. In step S104, the system control circuit 1012 executes required processing corresponding to the selected mode and returns to step S103.
When the shooting mode is set in step S103, the processing flow proceeds to step S106. In step S106, the system control circuit 1012 determines whether a residual amount or an operation state of the power source is at a warning level which may cause the imaging apparatus 1000 to malfunction.
When the system control circuit 1012 decides that the power source is in the warning level (NO at step S106), the processing flow proceeds to step S108, where the system control circuit 1012 causes the display unit 1010 to perform a predetermined warning display with images and sounds. Then, the processing flow returns to step S103.
When the system control circuit 1012 decides the power source is not in the warning level in step S106 (YES at step S106), the processing flow proceeds to step S107. In step S107, the system control circuit 1012 determines whether an operation state of the storage medium is in a warning level according to which the imaging apparatus 1000 may fail especially in recording and playback of image data.
When the system control circuit 1012 decides that the storage medium is in the warning level (NO at step S107), the processing flow proceeds to the above-described step S108 to cause the display unit 1010 to perform a predetermined warning display with images and sounds. Then, the processing flow returns to step S103.
When the system control circuit 1012 decides that the storage medium is not in the warning level in the determination of step S107 (YES at step S107), the processing flow proceeds to step S109. In step S109, the system control circuit 1012 causes the display unit 1010 to display the state of various settings of the imaging apparatus 1000 with images and sounds.
Next, in step S110, the system control circuit 1012 turns the image display of display unit 1010 to an ON state, and causes the light quantity adjuster 1002 to open the shutter apparatus. Furthermore, in step S111, the system control circuit 1012 causes the display unit 1010 to start a through display according to which captured image data can be successively displayed as a moving image.
In the through display state, the captured image data is successively written in the memory 1008 and the written data is successively displayed on the display unit 1010 to realize an electronic viewfinder function. In the present exemplary embodiment, the display unit 1010 can update the image display at intervals of 1/30 second.
In step S112, the system control circuit 1012 causes the digital signal processing circuit 1007 to start the face detection processing for detecting a face region from the image data. As a technique for practically detecting a face region, various conventional methods are available.
For example, a neural network is a representative method for detecting a face region based on a learning technique. Furthermore, a template matching can be used to extract features representing eyes, a nose, or any other physical shape from an image region.
Furthermore, according to another conventional method, the quantity of features, such as a skin color or an eye shape, can be detected from an image and can be analyzed using a statistical method (For example, refer to Japanese Patent Application Laid-open No. 10-232934 or Japanese Patent Application Laid-open No. 2000-48184).
In the present exemplary embodiment, the face detection processing is performed using a method for detecting a pair of eyes (both eyes), a nose, and a mouth and determining a human face region based on a detected relative position. In this case, if a person (i.e., an object to be detected) closes one eye or suddenly turns his/her face to look away, identifying a face region may be difficult because a pair of eye (i.e., a reference portion) cannot be detected.
In many instances, the face detection processing requires a significant calculation time. As a result, the digital signal processing circuit 1007 cannot apply the face detection processing to every frame of image data obtained for the through display. Thus, in the present exemplary embodiment, the digital signal processing circuit 1007 performs the face detection processing once for every two acquisitions of through display image data.
In step S113, the system control circuit 1012 causes the digital signal processing circuit 1007 to perform frame display processing for displaying a frame showing a face detection result obtained in step S112.
FIG. 3 is a flowchart showing details of the frame display processing (refer to step S113). First, in step S201, the digital signal processing circuit 1007 determines whether a frame indicating the position of a detected face region (hereinafter, referred to as “face detection frame”) is already displayed on the display unit 1010. When the face detection frame is not displayed yet, the processing flow proceeds to step S202. For example, if the processing of step S113 is first executed after a user sets a shooting mode to the imaging apparatus 1000, no face detection frame is displayed on the display unit 1010. Accordingly, the processing flow proceeds to step S202.
In step S202, the digital signal processing circuit 1007 obtains coordinate data of all face regions detected in the face detection processing of step S112. In step S203, the display unit 1010 combines, based on the coordinate data of each face region obtained in step S202, the display of a face detection frame surrounding a detected face region with the display of a subject image, as shown in FIG. 7A. FIG. 7A is a display screen of the display unit 1010, including a rectangular face detection frame surrounding a detected human head.
In step S204, the timer 1015 starts measuring a display time of the face detection frame displayed in step S203. In step S205, the digital signal processing circuit 1007 determines whether the display of a face detection frame for all coordinate data representing the face regions obtained in step S202 is accomplished. When the display of a face detection frame for all coordinate data has been accomplished, the digital signal processing circuit 1007 terminates this routine.
When the display of face detection frames for all coordinate data is not accomplished yet, the processing flow returns to step S203. For example, if no face region is detected in step S112, the processing of steps S203 through S205 relating to the display of a face detection frame will not be performed.
Then, returning to step S114 of FIG. 2 after terminating the routine of FIG. 3, the system control circuit 1012 determines whether a shutter switch SW1 is in a pressed state. When the shutter switch SW1 is not in a pressed state, the processing flow returns to step S113. More specifically, unless the shutter switch SW1 is pressed, the through display in step S111, the face detection processing in step S112, and the frame display processing in step S113 are repetitively performed.
Therefore, if a person moves during the through display, the position of a detected face region will change and the face detection frame shifts correspondingly. In the present exemplary embodiment, the shape of the face detection frame can be an ellipse, or any other shape that fits the contour of a subject face. Furthermore, instead of displaying a face detection frame, it is possible to use a method for emphasizing the contour of a face region, or a method for shading the region other than a face region, as far as a detected face region can be recognized.
Returning to step S201, when a face detection frame is already displayed, the processing flow proceeds to step S206. The digital signal processing circuit 1007 obtains coordinate data of all face regions detected in the face detection processing of step S112.
In step S207, the digital signal processing circuit 1007 selects one of face detection frames already displayed. Then, the digital signal processing circuit 1007 determines whether a selected face detection frame is present in the vicinity of any coordinate position of a face region newly obtained in step S206. When the selected face detection frame is present in the vicinity of any coordinate position of a newly obtained face region (YES in step S207), the processing flow proceeds to step S208.
A neighboring region used in the decision can be experimentally obtained in such a manner that a person surrounded by a selected face detection frame agrees with a person represented by the coordinates of a newly obtained face region.
Furthermore, when plural face regions are present in the neighboring region, the digital signal processing circuit 1007 can select the coordinate data of a face region closest to the coordinate position of a face region being set in the selected face detection frame, and then execute the processing of step S208. In this case, if the digital signal processing circuit 1007 has an individual authentication function, the digital signal processing circuit 1007 can determine in step S207 whether a person surrounded by an already displayed face detection frame is the same.
In step S208, the digital signal processing circuit 1007 compares the coordinates of a selected face detection frame and the coordinates of a face region positioned in the vicinity of the face detection frame and obtains the difference.
In step S209, the digital signal processing circuit 1007 determines whether the difference obtained in step S208 is within a predetermined range. When the difference is within the predetermined range, the display unit 1010 does not update the position of the face detection frame and continues the display of the already displayed face detection frame. Then, the processing flow proceeds to step S210.
On the other hand, when the difference obtained in step S208 is not within the predetermined range, the processing flow proceeds to step S211. In step S211, the display unit 1010 sets a new face detection frame based on the coordinate data of the face region compared in step S208, and displays the new face detection frame. Then, the processing flow proceeds to step S210.
In step S209, by determining whether the difference obtained in step S208 is within the predetermined range, it can be determined if the coordinates of a newly obtained face region are positioned in the already displayed face detection frame.
However, if the face detection frame makes frequent shift movements, the visibility of the screen will deteriorate. Hence, when the coordinate position of a newly obtained face region remains in the already displayed face detection frame, it is desirable to postpone updating the face detection frame to improve the visibility of the screen.
In step S210, the timer 1015 starts measuring a display time corresponding to the face detection frame selected as the object of the decision made in step S207, or corresponding to the face detection frame updated in step S211, after resetting its value to an initial value. Then, the processing flow proceeds to step S212.
In step S207, when the coordinate position of the newly obtained face region is not in the vicinity of the selected face detection frame, the processing flow proceeds to step S213. In step S213, the digital signal processing circuit 1007 determines whether the display time of the selected face detection frame, measured by the timer 1015, has reached a predetermined time. When the measured time has already reached the predetermined time (YES in step S213), the processing flow proceeds to step S214.
In step S214, the digital signal processing circuit 1007 erases the face detection frame and resets the measured time to an initial value. Otherwise, when the measured time has not yet reached the predetermined time (NO in step S213), the digital signal processing circuit 1007 continues displaying the face detection frame without resetting the timer 1015. Then, the processing flow proceeds to step S212.
In step S212, the digital signal processing circuit 1007 determines whether the processing of step S207 is finished for all of the already displayed face detection frames. When any face detection frame is not yet processed, the processing flow returns to step S207. Otherwise, the processing flow proceeds to step S215.
In step S215, the digital signal processing circuit 1007 determines whether there is any face region whose coordinate position is not close to any of the face detection frames used in step S207. When no such face region is present, the digital signal processing circuit 1007 terminates this routine. When at least one face region has a coordinate position that is not close to any of the face detection frames, the processing flow proceeds to step S216 to set a new face detection frame.
In step S216, the display unit 1010 combines, based on the coordinate data of each face region obtained in step S206, the display of a face detection frame surrounding a detected face region with the display of a subject image. In step S217, the timer 1015 starts measuring a display time of a face detection frame newly displayed in step S216.
In step S218, the digital signal processing circuit 1007 determines whether there is a face region to which a face detection frame is not yet set. When the setting of a face detection frame for all coordinate data has been accomplished (YES in step S218), the digital signal processing circuit 1007 terminates this routine. When there is a face region to which a face detection frame is not yet set (NO in step S218), the processing flow returns to step S216.
Now returning to FIG. 2, after step S113 is completed, the process proceeds to step S114, where the system control circuit 1012 determines whether the shutter switch SW1 is in a pressed state. When the shutter switch SW1 is not in a pressed state, the processing flow returns to step S113. When the shutter switch SW1 is in a pressed state, the processing flow proceeds to step S115. It is noted that when the shutter switch SW1 is pressed, the digital signal processing circuit 1007 suspends the face detection processing until the shutter switch SW1 is released.
In step S115, the system control circuit 1012 performs AF processing to adjust a focal distance of the imaging lens group 1001 to the subject, and further performs AE processing to determine a diaphragm value and a shutter speed. The AE processing can include the settings for a flashlight if necessary.
FIG. 4 is a flowchart showing details of exemplary AF and AE processing performed in step S115. First, in step S301, an electric charge signal is produced from the imaging element 1003. The A/D converter 1005 converts the electric charge signal into digital data. The digital signal processing circuit 1007 inputs the digital data.
The digital signal processing circuit 1007 performs, based on the input image data, predetermined calculations for the TTL-type AE processing, EF processing, AWB processing, and AF processing. In each processing, the digital signal processing circuit 1007 uses only the image data of the face regions detected in step S112, instead of all of the captured pixels, or increases the weighting factors given to the detected face regions compared with those given to other regions. Thus, in each of the TTL-type AE processing, EF processing, AWB processing, and AF processing, the digital signal processing circuit 1007 can give priority to the image data of the detected face regions in the calculations.
However, when the processing flow proceeds from step S209 to step S210, the coordinate position of the detected face region does not completely agree with the position of the displayed face detection frame. In this case, in each of the AE processing, EF processing, AWB processing, and AF processing, the digital signal processing circuit 1007 uses the coordinates of a latest detected face region instead of using the position where the face detection frame is displayed.
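As a rough illustration of this face-priority weighting, the sketch below computes a brightness evaluation value in which the latest detected face regions receive a larger weighting factor than other regions; the weight values, the rectangle format, and the function name are assumptions, not parameters given in the embodiment.

    import numpy as np

    FACE_WEIGHT = 4.0    # assumed weighting factor for face-region pixels
    OTHER_WEIGHT = 1.0   # weighting factor for the remaining pixels

    def weighted_brightness(y_plane, face_rects):
        # y_plane: 2-D array of luminance values; face_rects: list of
        # (x0, y0, x1, y1) rectangles from the latest face detection result.
        weights = np.full(y_plane.shape, OTHER_WEIGHT)
        for (x0, y0, x1, y1) in face_rects:
            weights[y0:y1, x0:x1] = FACE_WEIGHT
        return float((y_plane * weights).sum() / weights.sum())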
In step S302, based on the results obtained by the predetermined calculations in step S301, the system control circuit 1012 determines whether the exposure is appropriate. When the exposure is inappropriate (NO in step S302), the processing flow proceeds to step S303. In step S303, the system control circuit 1012 causes the exposure control circuit 1013 to perform an AE control.
Then, in step S304, the system control circuit 1012 determines, based on the measurement data obtained in the AE control, whether flashlight is required. When flashlight is necessary, the processing flow proceeds to step S305. In step S305, a flashlight flag is set and a flashlight (not shown in the drawings) is charged. Then, the processing flow returns to step S301. On the other hand, when no flashlight is required, the process returns to step S301.
When the exposure is appropriate in step S302 (YES in step S302), the processing flow proceeds to step S306. In step S306, the system control circuit 1012 causes its memory or the memory 1008 to store measurement data or setting parameters. Then, in step S307, the system control circuit 1012 determines whether the white balance is appropriate, based on calculation results obtained by the digital signal processing circuit 1007 and measurement data obtained in the AE control.
When the white balance is inappropriate (NO in step S307), the processing flow proceeds to step S308. In step S308, the system control circuit 1012 causes the digital signal processing circuit 1007 to adjust color processing parameters and perform an AWB control. Then, the processing flow returns to step S301. On the other hand, when the system control circuit 1012 determines that the white balance is appropriate in step S307 (YES in step S307), the processing flow proceeds to step S309.
In step S309, the system control circuit 1012 causes its memory to store measurement data or setting parameters of the memory 1008. Then, in step S310, the system control circuit 1012 determines whether the camera is in a focused state. When the camera is not in a focused state, the processing flow proceeds to step S311. In step S311, the system control circuit 1012 causes the focus control circuit 1014 to perform an AF control. Then, the processing flow returns to step S301. When the system control circuit 1012 decides that the camera is in a focused state in step S310, the processing flow proceeds to step S312.
In step S312, the system control circuit 1012 causes its memory to store measurement data or setting parameters, sets a display of an AF frame indicating a focused region, and terminates the AF processing and the AE processing. The AF frame has a position identical to the coordinate position of the latest face region.
Now returning to FIG. 2, in step S116, the system control circuit 1012 sets the display unit 1010 into the through display state again after finishing the AF processing and the AE processing, to display the AF frame.
Next, in step S117, the system control circuit 1012 determines whether a shutter switch SW2 is switched to an ON state. If switch SW2 is in an ON state (YES in step S117), the process proceeds to step S119. If switch SW2 is not in an ON state (NO in step S117), the process proceeds to step S118. In step S118, the system control circuit 1012 determines whether the shutter switch SW1 is switched to an ON state. If switch SW1 is in an ON state (YES in step S118), the process returns to step S117. If switch SW1 is not in an ON state (NO in step S118), the process returns to step S112. Thus, when neither shutter switch SW1 nor SW2 is in an ON state, the processing flow returns to step S112.
In step S119, the system control circuit 1012 executes exposure processing for writing captured image data into the memory 1008. Furthermore, the system control circuit 1012 with the memory control circuit 1006 (and the digital signal processing circuit 1007 if necessary) executes shooting processing including developing processing for reading image data from the memory 1008 and performing various processing.
FIG. 5 is a flowchart showing exemplary details of the shooting processing (refer to step S119). In step S401, according to the measurement data obtained in the above-described AE control, the exposure control circuit 1013 sets a diaphragm value of the light quantity adjuster 1002 and starts exposure of the imaging element 1003.
In step S402, the system control circuit 1012 determines, based on a flashlight flag, whether flashlight is necessary. When flashlight is necessary, the processing flow proceeds to step S403 to cause a flash unit to emit a predetermined quantity of light for a pre-flash. It is noted that the light quantity of a pre-flash can be determined based on the diaphragm value of the light quantity adjuster 1002, the distance to a subject, and the sensitivity set for the imaging element 1003. When the system control circuit 1012 decides that no flashlight is necessary in step S402, the flash unit does not perform a pre-flash and the processing flow proceeds to step S409.
In step S404, the exposure control circuit 1013 waits for the exposure termination timing of the imaging element 1003 based on photometric data. When the exposure termination timing arrives, the processing flow proceeds to step S405.
In step S405, the system control circuit 1012 causes the imaging element 1003 to produce an electric charge signal. Then, the system control circuit 1012 causes the A/D converter 1005, the digital signal processing circuit 1007, and the memory control circuit 1006 (or the A/D converter 1005 and directly the memory control circuit 1006) to write captured image data into the memory 1008.
Next, the processing flow proceeds to step S406. In step S406, the system control circuit 1012 obtains an average brightness value in the face region at the moment of the pre-flash, and calculates an optimum quantity of light emission (i.e., the quantity of light emission for the actual shot) so that the face region can have an appropriate brightness. For example, when the image signal level obtained during a pre-shot action accompanied by a pre-flash is appropriate, the quantity of light emission for the actual shot can be the same as that for the pre-shot action. Furthermore, if the image signal level during the pre-shot action is one stage lower than the target level, the quantity of light emission can be doubled for the actual shot.
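The calculation of step S406 can be sketched as follows, assuming linear brightness values and expressing the shortfall in exposure stages, so that a one-stage shortfall doubles the emission as in the example above; the function and argument names are hypothetical.

    import math

    def main_flash_quantity(pre_quantity, pre_level, target_level):
        # Shortfall of the pre-flash face-region brightness expressed in
        # exposure stages; one stage under the target doubles the emission.
        stages_under = math.log2(target_level / pre_level)
        return pre_quantity * (2.0 ** stages_under)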
Next, in step S407, the system control circuit 1012 causes the imaging element 1003 to perform a reset action for an actual shot and start exposure. Then, the processing flow proceeds to step S408. In step S408, the flash unit emits light with the optimum quantity obtained in step S406.
In step S409, the exposure control circuit 1013 waits for the exposure termination timing of the imaging element 1003 based on photometric data. When the exposure termination timing arrives, the exposure control circuit 1013 stops exposure and closes the shutter in step S410.
Next, in step S411, the system control circuit 1012 causes the imaging element 1003 to output an electric charge signal. Then, the system control circuit 1012 causes the A/D converter 1005, the digital signal processing circuit 1007, and the memory control circuit 1006 (or the A/D converter 1005 and directly the memory control circuit 1006) to write captured image data into the memory 1008.
In step S412, the system control circuit 1012 causes the memory control circuit 1006 (and the digital signal processing circuit 1007 if necessary) to read the image data from the memory 1008 and execute vertical addition processing. In step S413, the system control circuit 1012 causes the digital signal processing circuit 1007 to perform color processing. In step S414, the system control circuit 1012 causes the memory 1008 to store the processed display image data, and terminates the shooting processing routine.
Now, once again referring back to FIG. 2, after the shooting processing of step S119 is finished, the processing flow proceeds to step S120. In step S120, the system control circuit 1012 causes the display unit 1010 to execute a quick review display based on the image data obtained in step S119. It is noted that the display unit 1010 can always be in a displayed state as an electronic viewfinder during a shooting action. Further, it is noted that the quick review display can be performed immediately after accomplishing a shooting action.
In step S121, the system control circuit 1012 causes the memory control circuit 1006 (and the digital signal processing circuit 1007 if necessary) to read the captured image data from the memory 1008 and execute various image processing. Furthermore, the system control circuit 1012 executes recording processing for compressing image data and writing compressed image data into a storage medium.
After the recording processing of step S121 is finished, the system control circuit 1012 determines in step S122 whether the shutter switch SW2 is in a pressed state. If in step S122 the switch SW2 is in a pressed state (YES in step S122), the process proceeds to step S123. If in step S122 the switch SW2 is not in a pressed state (NO in step S122), the process proceeds to step S124.
In step S123, the system control circuit 1012 determines whether a continuous shooting flag is in an ON state. If the continuous shooting flag is in an ON state (YES in step S123), the processing flow returns to step S119. Here, it is noted that the continuous shooting flag can be stored in the internal memory of the system control circuit 1012 or in the memory 1008. Then, in step S119, the system control circuit 1012 causes the imaging apparatus 1000 to shoot the next image to realize continuous shooting. On the other hand, if the continuous shooting flag is not in an ON state (NO in step S123), the process returns to step S122. The system control circuit 1012 repeats the processing of steps S122 and S123 until the shutter switch SW2 is released.
As described above, according to the present exemplary embodiment, it is determined, in an operation setting mode for executing the quick review display immediately after accomplishing a shooting action, whether the shutter switch SW2 is in a pressed state at the termination timing of the recording processing (refer to step S121). When the shutter switch SW2 is in a pressed state, the display unit 1010 continues the quick review display until the shutter switch SW2 is released. This procedure enables a user to carefully confirm shot images (i.e., captured images).
If the shutter switch SW2 is turned off immediately after the recording processing of step S121, the processing flow proceeds from step S122 to step S124. A user may continuously press the shutter switch SW2 to confirm shot images provided by the quick review display for a while after the recording processing of step S121 and then turn the shutter switch SW2 off. In such a case, the processing flow similarly proceeds from step S122 to step S124.
In step S124, the system control circuit 1012 determines whether a predetermined minimum review time has elapsed. If the minimum review time has already elapsed (YES in step S124), the processing flow proceeds to step S125. In step S125, the system control circuit 1012 brings the display state of the display unit 1010 into a through display state. With this processing, a user can confirm shot images on the display unit 1010 through the quick review display, after which the display unit 1010 enters a through display state that successively displays captured image data in preparation for the next shot.
Next, the processing flow proceeds to step S126. In step S126, the system control circuit 1012 determines whether the shutter switch SW1 is in an ON state. When the shutter switch SW1 is in an ON state, the processing flow proceeds to step S117 for the next shooting action. On the other hand, when the shutter switch SW1 is in an OFF state, the imaging apparatus 1000 finishes the series of shooting operations and the processing flow returns to step S112.
The above-described face detection frame displaying method according to the exemplary embodiment has the following characteristics. When a face detection frame is displayed (refer to steps S203, S211, and S216), the timer 1015 starts measuring the display time of the face detection frame (refer to steps S204, S210, and S217). If the timer 1015 counts up a predetermined time, i.e., if the apparatus continuously fails to detect a human face similar to the one surrounded by the face detection frame, the face detection frame is erased (refer to step S214).
The predetermined time counted by the timer 1015 is set to be longer than the time required, after the face detection frame is displayed at least once, to perform the next face detection and display the face detection frame again based on the detection result. More specifically, as long as the face detection is continuously successful, the face detection frame is never erased by the timer counting the predetermined time.
FIG. 6A is a view exemplarily showing a relationship between the measurement time of the timer 1015 and the display/erasure of a face detection frame. FIG. 6A shows, from the top, the timing of obtaining image data for a through display, the timing of obtaining a face detection result, the timing of updating the face detection frame, and the measured time of the timer 1015. The abscissa of each row represents time.
The through display image data can be updated at intervals of 1/30 second, as indicated by timings t0, t1, t2, t3, t4, t5, t6, t7, t8, t9, and t10. A face detection result corresponding to the image data obtained at the timing t0 can be obtained at the timing (1). A face detection result corresponding to the image data obtained at the timing t2 can be obtained at the timing (2). A face detection result corresponding to the image data obtained at the timing t4 can be obtained at the timing (3).
According to the example shown in FIG. 6A, the face detections at the timings (3) and (4) ended in failure. The face detection results obtained at the timings (1) and (2) involve coordinate data of a detected face region. The face detection frame can be updated at intervals of 1/30 second, i.e., at the same intervals as those of the through display.
The timer 1015 counts down by one each time the update timing of the face detection frame comes. According to the example shown in FIG. 6A, coordinate data of a detected face region can be obtained at the timing (1). The face detection frame can be newly displayed at the next update timing of the face detection frame. The timer 1015 sets the count value to ‘5.’ At the next update timing of the face detection frame, the count value decreases by 1 and becomes ‘4.’
At the succeeding update timing of the face detection frame, coordinate data of a face region newly obtained at the timing (2) are present. Accordingly, the face detection frame can be updated based on the coordinate data of a newly obtained face region. The timer 1015 again resets the count value to ‘5.’
However, the face detection result obtained at the timing (3) does not involve any coordinate data of a face region. Therefore, the timer 1015 decreases its count value to ‘2.’ In this case, because the count value does not reach ‘0’ yet, the face detection frame can be continuously displayed regardless of failure in the face detection.
Furthermore, the face detection result obtained at the next timing (4) does not involve any coordinate data of a face region, and the count value decreases to ‘0.’ In this case, because the measurement time of the timer 1015 has reached the predetermined time, the face detection frame is erased.
Subsequently, the face detection result obtained at the timing (5) involves coordinate data of a newly detected face region. Accordingly, the face detection frame is displayed again, and the timer 1015 sets its count value to ‘5.’
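The count-down behavior illustrated in FIG. 6A can be summarized by the following sketch, where tick() is assumed to be called at every update timing of the face detection frame (every 1/30 second) and COUNT_INIT=5 matches the figure; the class and method names are illustrative.

    COUNT_INIT = 5   # initial count value used in FIG. 6A

    class FrameTimer:
        def __init__(self):
            self.count = 0                  # 0: no face detection frame displayed

        def tick(self, new_face_coords):
            # new_face_coords is None when the latest face detection result
            # involves no coordinate data of a face region.
            if new_face_coords is not None:
                self.count = COUNT_INIT     # display/update frame, reset timer
                return 'display'
            if self.count > 0:
                self.count -= 1             # count down by one per update timing
                return 'erase' if self.count == 0 else 'keep'
            return 'none'                   # no frame is currently displayed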
As described above, the present exemplary embodiment provides a function of continuously displaying the face detection frame for a predetermined time, i.e., until the count value of the timer 1015 reaches '0', even if the face detection fails during this period. Furthermore, when no face detection frame is displayed on the display unit 1010, the detection result of a newly detected face can be directly used to display a face detection frame (refer to step S203, FIG. 3).
When a face detection frame is already displayed on the display unit 1010, and when a new face detection result is obtained in the vicinity of the face detection frame, different methods can be selected considering the position of the already displayed face detection frame and a face position obtained from a new face detection result.
When the position of the already displayed face detection frame is close to the face position obtained from a new face detection result, frequently shifting the face detection frame is undesirable in view of the visibility and it is desirable to intentionally fix the position of the face detection frame.
On the contrary, when the position of the already displayed face detection frame is not close to the face position obtained from a new face detection result, the position of the face detection frame should be updated to track a face position of the subject (refer to step S211, FIG. 3).
As described above, the first exemplary embodiment enables a continuous display of a face detection frame for a predetermined time regardless of failure in the face detection. Accordingly, when a subject person blinks and closes an eye or suddenly turns his/her face to look away, the present exemplary embodiment can prevent the face detection frame from being erased inadvertently. As a result, the first exemplary embodiment can suppress an undesirable phenomenon in which the face detection frame is repeatedly displayed and erased in a short period of time.
Second Exemplary Embodiment
Another exemplary embodiment of the present invention will be described below. The second exemplary embodiment is characterized in that the timer 1015 can change a count value according to the position or size of a detected face region.
When a detected face region is positioned at the center of a screen, failure in detecting the face region may occur because the person turns his/her face away or closes an eye. In this case, the human face probably remains at the same position regardless of the failure in detection. Thus, the timer 1015 sets a longer count time.
On the other hand, when a detected face region is positioned at the edge of a screen, failure in detecting the face region probably occurs because the person has moved out of the screen. In this case, failure in detecting the face region means that the human face is no longer present on the screen. Thus, the timer 1015 sets a shorter count time. Furthermore, if a detected face region has a small area, the subject person is probably positioned far from the camera. Even when the person moves, the positional change of the person on the screen will remain small.
On the contrary, if a detected face region has a large area, the subject person is probably positioned near the camera. If the person moves even slightly, the person may soon disappear from the screen. Hence, the timer 1015 sets a longer count time when the detected face region has a smaller area, and a shorter count time when the detected face region has a larger area.
FIG. 8A is a view showing an exemplary relationship between the position and size of a detected face region in comparison with through display image data used in the face detection processing. FIG. 8B is a graph showing an exemplary relationship between the position of a detected face region and a correction coefficient applied to the count value of the timer 1015. FIG. 8C is a graph showing an exemplary relationship between the size of a detected face region and a correction coefficient applied to the count value of the timer 1015.
Now referring to FIG. 8A, it is noted that FIG. 8A shows QVGA image data consisting of 320×240 pixels, with an upper left corner positioned at the coordinates (0,0) and a lower right corner positioned at the coordinates (319,239). Here, it may be assumed that coordinates (x0, y0) represent an upper left corner, coordinates (x0, y1) represent a lower left corner, coordinates (x1, y0) represent an upper right corner, and coordinates (x1, y1) represent a lower right corner of a face region obtained in the face detection processing.
It is possible to calculate, based on these coordinate data, a shortest distance DIS representing a distance between the face detection frame and the edge of a screen according to the following formula.
DIS=min(x0, y0, 320-x1, 240-y1)
Then, it is possible to obtain a timer correction coefficient K_DIS with reference to the obtained DIS.
The timer correction coefficient K_DIS can take a value between 0.0 and 1.0, as understood from the relationship of DIS and K_DIS shown in FIG. 8B. When the distance DIS is small, i.e., when a detected face is positioned near the edge of a screen, the timer correction coefficient K_DIS takes a smaller value.
Although the present exemplary embodiment obtains the distance between the face detection frame and the edge of a screen, it is possible to obtain a distance between a central region of the face detection frame and the edge of a screen.
Furthermore, it is possible to calculate, based on the above-described coordinate data, an AREA representing the size of a face detection frame according to the following formula.
AREA=(x1-x0)×(y1-y0)
Then, it is possible to obtain a timer correction coefficient K_AREA with reference to the obtained AREA.
The timer correction coefficient K_AREA can take a value between 0.0 and 1.0, as understood from the relationship of AREA and K_AREA shown in FIG. 8C. When the AREA is large, i.e., when a detected face has a larger area, the timer correction coefficient K_AREA takes a smaller value.
The timer correction coefficients K_DIS and K_AREA can be multiplied by a timer reference value Tref (Tref=7 according to an exemplary embodiment) to obtain an initial count value T for the timer 1015. In this case, the initial count value T is an integer obtained by removing the fractional part from the product of the coefficients K_DIS and K_AREA and the reference value Tref. It is, however, possible to use only one of the timer correction coefficients K_DIS and K_AREA in the calculation.
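Putting the above together, a sketch of the initial count value calculation might look as follows; the exact ramp shapes of K_DIS and K_AREA are assumptions, since FIGS. 8B and 8C specify only that the coefficients fall toward 0.0 near the screen edge and for larger face areas.

    T_REF = 7          # timer reference value Tref
    W, H = 320, 240    # QVGA image used for the through display

    def initial_count(x0, y0, x1, y1):
        # Shortest distance DIS between the face detection frame and the
        # edge of the screen.
        dis = min(x0, y0, W - x1, H - y1)
        # Assumed ramp for FIG. 8B: smaller near the edge, saturating at 1.0.
        k_dis = max(0.0, min(1.0, dis / 40.0))
        # Size AREA of the face detection frame and an assumed ramp for
        # FIG. 8C: smaller for larger faces.
        area = (x1 - x0) * (y1 - y0)
        k_area = max(0.0, min(1.0, 1.0 - area / float(W * H)))
        # Fractions are removed from the product to obtain an integer count.
        return int(k_dis * k_area * T_REF)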
Each of FIGS. 6B and 6C shows a relationship between the measurement time of the timer 1015 and the display/erasure of a face detection frame when the timer 1015 uses an initial count value different from that used in the first exemplary embodiment.
According to the example of FIG. 6B, coordinate data of a face region can be obtained at the timings (1) and (2). However, the initial count value of the timer is set to ‘2.’ Therefore, failure in obtaining coordinate data of a face region at the timing (3) results in disappearance of the face detection frame at the next update timing of the face detection frame. It is further noted that a new face detection frame is not displayed until the timing (5) at which coordinate data of a new face region can be obtained. The new face detection frame can be displayed immediately after the timing (5). Further, according to the example shown in FIG. 6B, an initial count value newly set for the timer 1015 is ‘3’, because a newly detected face region is positioned relatively far from the edge of a screen, or because a newly detected face region has a smaller area.
According to the example of FIG. 6C, the initial count value of the timer 1015 is set to ‘7.’ Therefore, regardless of failure in obtaining coordinate data of a face region at the timings (3) and (4), the count value of the timer 1015 never reaches ‘0’ before the timing (5) at which coordinate data of a new face region can be obtained. Accordingly, as shown in FIG. 6C, the face detection frame can be continuously displayed during the term of timings (1) through (5).
As described above, the second exemplary embodiment enables a continuous display of a face detection frame for a predetermined time regardless of failure in the face detection. Furthermore, the second exemplary embodiment makes it possible to change the predetermined time in accordance with at least one of the position and the size of a detected face.
Accordingly, in a situation where a subject probably remains on the screen, the present exemplary embodiment can prevent the face detection frame from being erased inadvertently. As a result, the second exemplary embodiment can suppress an undesirable phenomenon in which the face detection frame is repeatedly displayed and erased in a short period of time.
Third Exemplary Embodiment
Another exemplary embodiment of the present invention will be described below. Compared to the first exemplary embodiment, the third exemplary embodiment differs in the processing, shown in FIG. 9, that follows step S207 of FIG. 3 and is performed when the selected face detection frame is not positioned near the coordinate position of a face region obtained in step S206.
More specifically, the third exemplary embodiment includes the processing of steps S501 through S505, which is not described in the first exemplary embodiment and will be described below in detail. Steps denoted by the same reference numerals as those shown in FIG. 3 perform the same processing as in the first exemplary embodiment.
In step S207 of FIG. 9, the digital signal processing circuit 1007 selects one of face detection frames already displayed. Then, the digital signal processing circuit 1007 determines whether a selected face detection frame is present in the vicinity of any coordinate position of a face region newly obtained in step S206. When the selected face detection frame is not present in the vicinity of any coordinate position of a newly obtained face region, the processing flow proceeds to step S501.
In step S501, the digital signal processing circuit 1007 determines whether, at a previous timing, any coordinate position of a face region was detected in the vicinity of the face detection frame selected in step S207. When no coordinate position of a face region was detected, the processing flow proceeds to step S213 to continuously display or erase the face detection frame with reference to the measurement time of the timer 1015. When any coordinate position of a face region was detected, the processing flow proceeds to step S502.
In step S502, the digital signal processing circuit 1007 calculates the movement of a face region based on the coordinate data of the face region positioned closest to the face detection frame selected in step S207. When the moving direction of the face region is directed to the outside of the screen, the processing flow proceeds to step S503. On the other hand, when the movement of the face region remains within the area of the screen, the face region will not disappear from the screen for a while. Thus, the processing flow proceeds to step S505. In step S505, the timer 1015 resets its count value to an initial value and starts measuring a display time, and then the processing flow proceeds to step S212.
In step S503, the digital signal processing circuit 1007 determines whether the coordinate position of the face region used in the calculation of the movement is at the edge of the screen. When the face region is positioned at the edge of the screen, the face region will probably disappear from the screen. Therefore, in step S504, the corresponding face detection frame is erased. Next, the processing flow proceeds to step S212.
As described above, the third exemplary embodiment decides the erasure of a face detection frame based on the moving direction and position of a face region. Accordingly, in a situation that a subject will soon disappear from the screen, the face detection frame can be promptly erased.
Thus, the present exemplary embodiment can suppress such an undesirable phenomenon that the detection frame is continuously displayed even after a subject face has already disappeared from the screen.
Furthermore, in the above-described step S502, it is possible to obtain not only the moving direction but also the moving speed of a face region. When the moving speed is high, the initial value set for the timer 1015 in step S505 can be decreased, because the faster a subject moves, the more likely it is to soon disappear from the screen.
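A sketch of the decisions of steps S502 through S505 follows, assuming that face regions are tracked as center points, that "directed to the outside of the screen" is tested by extrapolating the motion vector one step ahead, and that the edge band, base count, and speed scaling are illustrative values.

    W, H = 320, 240
    EDGE_MARGIN = 16       # assumed width of the "edge of the screen" band
    BASE_COUNT = 5         # assumed initial timer value
    SPEED_SCALE = 10.0     # assumed scaling of moving speed (pixels per result)

    def face_motion_decision(prev, curr):
        # Motion vector between two successive detection results (step S502).
        dx, dy = curr[0] - prev[0], curr[1] - prev[1]
        # Extrapolate one step ahead; leaving the image area means the face
        # is moving toward the outside of the screen.
        nx, ny = curr[0] + dx, curr[1] + dy
        moving_out = not (0 <= nx < W and 0 <= ny < H)
        near_edge = min(curr[0], curr[1], W - curr[0], H - curr[1]) < EDGE_MARGIN
        if moving_out and near_edge:
            return 'erase', 0            # steps S503 and S504
        # Step S505: reset the timer; a faster subject gets a smaller value.
        speed = (dx * dx + dy * dy) ** 0.5
        return 'keep', max(1, int(BASE_COUNT - speed / SPEED_SCALE))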
Fourth Exemplary Embodiment
Furthermore, another exemplary embodiment of the present invention will be described below with reference to FIG. 10. Compared to the first exemplary embodiment, the fourth exemplary embodiment differs in the processing, shown in FIG. 10, that follows step S207 of FIG. 3 and is performed when the selected face detection frame is not positioned near the coordinate position of a face region obtained in step S206.
More specifically, the fourth exemplary embodiment includes the processing of steps S601 through S603, which is not described in the first exemplary embodiment and will be described below in detail. Steps denoted in FIG. 10 by the same reference numerals as those shown in FIG. 3 perform the same processing as in the first exemplary embodiment.
In step S207 of FIG. 10, the digital signal processing circuit 1007 selects one of face detection frames already displayed. Then, the digital signal processing circuit 1007 determines whether a selected face detection frame is present in the vicinity of any coordinate position of a face region newly obtained in step S206.
When the selected face detection frame is not present in the vicinity of any coordinate position of a newly obtained face region, the processing flow proceeds to step S601. In step S601, the digital signal processing circuit 1007 executes scene detection processing for determining whether the situation of a subject has changed.
FIG. 11 is a flowchart showing details of the scene detection processing, which can be described with reference to the dissected regions shown in FIGS. 7A and 7B. FIG. 7A shows a display screen of the display unit 1010, and FIG. 7B shows dissected regions of the imaging element 1003. In FIG. 7B, chain lines represent boundaries of the dissected regions. To make the following description easy to understand, FIG. 7A shows chain lines similar to those of FIG. 7B. In FIG. 7B, the shaded region represents the group of dissected regions where the face detection frame of FIG. 7A is present.
In FIG. 11, in step S701, the digital signal processing circuit 1007 detects a brightness signal of each dissected region where the face detection frame is not present (i.e., each non-shaded region in FIG. 7B). Then, the digital signal processing circuit 1007 compares the detected brightness signal with a brightness signal obtained from a corresponding region of image data when the timer 1015 is previously reset to an initial value, and calculates a difference ΔYn between the compared signals.
In step S702, the digital signal processing circuit 1007 extracts, from the image data of each dissected region where the face detection frame is present, a component at a specific frequency (i.e., (½)*fnn, representing a half of the Nyquist frequency, according to the present exemplary embodiment). Then, the digital signal processing circuit 1007 compares the extracted component with a component obtained by a similar method from the corresponding region of the image data when the timer 1015 was previously reset to an initial value, and calculates a difference Δ(½)*fnn between the compared components.
In step S703, the digital signal processing circuit 1007 detects a color difference signal in each dissected region where the face detection frame is not present. Next, the digital signal processing circuit 1007 compares the detected color difference signal with a color difference signal obtained from a corresponding region of image data when the timer 1015 is previously reset to an initial value, and calculates differences ΔCrn and ΔCbn between the compared color difference signals.
Then, in step S704, the digital signal processing circuit 1007 determines whether the difference ΔYn calculated in step S701 is less than or equal to a threshold Yth. If it is (YES in step S704), the processing flow proceeds to step S705; otherwise, it proceeds to step S708.
Then, in step S705, the digital signal processing circuit 1007 determines whether the difference Δ(½)*fnn calculated in step S702 is less than or equal to a threshold fnth. If it is (YES in step S705), the processing flow proceeds to step S706; otherwise, it proceeds to step S708.
Next, in step S706, the digital signal processing circuit 1007 determines whether the differences ΔCrn and ΔCbn calculated in step S703 are less than or equal to thresholds Crth and Cbth, respectively. If they are (YES in step S706), the processing flow proceeds to step S707; otherwise, it proceeds to step S708. It is noted that the thresholds Yth, fnth, Crth, and Cbth can be obtained experimentally and are used to determine whether a subject remains on the screen.
As a result, when all the conditions of steps S704 through S706 are satisfied, the digital signal processing circuit 1007 decides that the change in the situation of the subject is small regardless of the failure in the face detection. Thus, in step S707, the digital signal processing circuit 1007 sets a flag indicating no scene change.
On the contrary, when any one of the conditions is not satisfied, the digital signal processing circuit 1007 decides that the change in the situation of the subject is not small. Thus, in step S708, the digital signal processing circuit 1007 sets a flag indicating a scene change. After setting either flag, the digital signal processing circuit 1007 terminates this routine.
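A minimal sketch of the decision of steps S704 through S708 follows; the threshold values are placeholders, since the embodiment states only that they are obtained experimentally.

    Y_TH, FN_TH, CR_TH, CB_TH = 10.0, 5.0, 8.0, 8.0   # assumed thresholds

    def scene_changed(d_y, d_fn, d_cr, d_cb):
        # d_y:  brightness difference of non-frame regions (step S701)
        # d_fn: difference of the half-Nyquist frequency component (step S702)
        # d_cr, d_cb: color difference signal differences (step S703)
        if d_y <= Y_TH and d_fn <= FN_TH and d_cr <= CR_TH and d_cb <= CB_TH:
            return False   # step S707: flag indicating no scene change
        return True        # step S708: flag indicating a scene change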
Returning to FIG. 10, the digital signal processing circuit 1007 determines in step S602 whether the flag indicating a scene change is set. When the flag indicating a scene change is set (YES in step S602), the processing flow proceeds to step S213. On the contrary, when the flag indicating a scene change is not set (NO in step S602), the processing flow proceeds to step S603. In step S603, the timer 1015 resets its count value to an initial value and starts measuring a display time. Then, the processing flow proceeds to step S212.
It is noted that if the flag indicating a scene change is set, the situation change of a subject will not be small and the face detection frame may deviate from the subject's head. Therefore, in step S213, the digital signal processing circuit 1007 determines whether the count value of the timer 1015 has reached the predetermined time. When the timer count value has already reached the predetermined time, the processing flow proceeds to step S214 to erase the face detection frame displayed on the display unit 1010.
If the flag indicating a scene change is not set, the situation change of a subject will be small. More specifically, when a subject person blinks and closes an eye, or suddenly turns his/her face to look away, the face detection probably results in failure even though the subject has not moved. Therefore, regardless of the failure in the face detection, it is proper to presume that the position of the subject's head has not changed largely. Thus, the already displayed face detection frame is continuously displayed.
It is, however, possible to immediately erase the face detection frame when the face detection fails and the situation change of a subject is determined to be not small by the scene detection processing. In this case, the timer can be omitted.
Furthermore, the reference values used for setting the scene change flag in the present exemplary embodiment are the difference in the brightness signal (i.e., ΔYn), the difference in the extracted specific frequency component (i.e., Δ(½)*fnn), and the differences in the color difference signals (i.e., ΔCrn and ΔCbn), in the respective dissected regions. However, whether all or only some of these signals are used as reference values for the determinations can be decided as appropriate.
Furthermore, when these differences are obtained, it is possible to obtain a difference of signals in each of the dissected regions. Alternatively, it is possible to average the signals obtained in plural or all dissected regions, or to give weighting factors to the signals.
Furthermore, another method can be used to perform the scene detection processing. First, the AE processing, AF processing, or AWB processing can be performed during the through display, before an ON state of the shutter switch SW1 is detected in step S114.
Then, the brightness value, focused state, and white balance information obtained based on the image data for the through display can be compared with target reference values. To compensate for the differences in these factors, the conventional AE processing, AF processing, or AWB processing can be performed.
In the scene detection processing (refer to step S601), the digital signal processing circuit 1007 can obtain compensated amounts for the AE processing, AF processing, and AWB processing which are calculated based on image data obtained for the through display.
Then, these compensated amounts can be compared with thresholds. When all, or at least one, of the compensated amounts exceeds the corresponding threshold(s), the situation change of a subject can be decided as not small and the flag indicating a scene change can be set.
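A corresponding sketch for this alternative method could compare the compensated amounts directly with thresholds; all values and names here are assumptions.

    AE_TH, AF_TH, AWB_TH = 1.0, 1.0, 1.0   # assumed thresholds

    def scene_changed_from_compensation(ae_comp, af_comp, awb_comp):
        # The situation change is decided as not small when any of the
        # compensated amounts for AE, AF, or AWB exceeds its threshold.
        return ae_comp > AE_TH or af_comp > AF_TH or awb_comp > AWB_TH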
Furthermore, the information used for the scene detection decisions (determinations) is not limited to signals obtainable from the image data captured by the imaging apparatus 1000. For example, signals produced from a shake detection circuit or a posture detection circuit installed on the imaging apparatus 1000 can be used as reference values for the decisions.
Furthermore, a specific signal representing a user's operation entered through an operating member of the imaging apparatus 1000 or a predetermined signal transmitted from an external apparatus can be used as reference values for the decisions.
As described above, according to the fourth exemplary embodiment, when the situation change of a subject is small, the face detection frame can be continuously displayed and the timer can be reset regardless of failure in the face detection.
More specifically, in a case that the situation change of a subject is small regardless of failure in the face detection, the display time of the face detection frame can be increased compared with other cases.
Accordingly, the present exemplary embodiment can prevent the face detection frame from being erased when a subject person blinks and closes an eye or turns his/her face to look away. As a result, the fourth exemplary embodiment can suppress an undesirable phenomenon repeating the display and erasure of the face detection frame in a short period of time.
Fifth Exemplary Embodiment
A fifth exemplary embodiment of the present invention will be described below. The fifth exemplary embodiment is different from the other exemplary embodiments in the content displayed on the display unit 1010 when the situation change of a subject is small according to the result of the scene detection processing.
FIGS. 12B through 12F show various patterns of the face detection frame displayed when the situation change of a subject is assumed to be small according to the scene detection processing. FIG. 12A shows an ordinary pattern of the face detection frame displayed when the face detection is successful. FIG. 12B shows a face detection frame changed in color, or displayed in a semitransparent state. FIG. 12C shows a face detection frame displayed as a dot image, or displayed according to a predetermined flickering mode. FIG. 12D shows a face detection frame with a thin line. FIG. 12E shows a face detection frame displayed in part, or in a predetermined cutout state. FIG. 12F shows a face detection frame displayed together with an icon.
In this manner, when the situation change of a subject is assumed to be small regardless of failure in the face detection, the face detection frame can be displayed for a while with a specific display pattern so as to let a user recognize the result of failure in the face detection, without immediately erasing the face detection frame.
Accordingly, when a subject person blinks and closes an eye, or suddenly turns his/her face to look away, the change in the display pattern of the face detection frame can be reduced so as to assure the visibility of the screen.
FIG. 13 shows a modified embodiment of the present exemplary embodiment. When the result of the scene detection processing shows that the situation change of a subject remains small for a while, it is desirable to use a multi-stage display that can change the state of the face detection frame stepwise before it is finally erased.
Even when the situation change of a subject remains small, it is undesirable for the face detection to be unfeasible for a long time. Hence, according to the modified embodiment, even when a subject person blinks and closes an eye or suddenly turns his/her face to look away, the state of the face detection frame changes stepwise each time a predetermined time elapses, and the frame finally disappears as shown in FIG. 13. Accordingly, if the face detection fails, the face detection frame will be erased depending on the elapsed time, even when the situation change of a subject is assumed to be small.
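The stepwise degradation of FIG. 13 can be sketched as follows, borrowing the display patterns of FIGS. 12B through 12E as stages; the stage order and the interval per stage are assumptions.

    # Display styles borrowed from the patterns of FIGS. 12B through 12E.
    STAGES = ['normal', 'semitransparent', 'thin_line', 'partial', 'erased']
    STAGE_INTERVAL = 15   # assumed number of update timings per stage

    def frame_style(updates_since_last_detection):
        # The frame degrades one stage each time STAGE_INTERVAL update
        # timings pass without a successful face detection.
        stage = min(updates_since_last_detection // STAGE_INTERVAL,
                    len(STAGES) - 1)
        return STAGES[stage]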
According to the above-described exemplary embodiments, a pair of eyes, a nose, and a mouth are detected and a human face region is determined based on their relative positions. However, the method for identifying a main subject through the face detection processing is not limited to the disclosed example.
Other Exemplary Embodiments
As described above, it is possible to perform the face detection processing by using a neural network or another method for analyzing a face region based on a learning technique. Furthermore, the method for detecting a main subject is not limited to the face detection processing, and the main subject need not be a human. For example, the main subject can be an animal, a plant, a building, or a geometric pattern.
Any other exemplary embodiment that provides the above-described functions of detecting a desired subject and displaying the position of the detected subject can be employed instead of the above-described exemplary embodiments, and can obtain comparable effects in assuring a fine display regardless of temporary failure in the detection of a desired subject.
Furthermore, according to the above-described exemplary embodiments, the imaging apparatus 1000 detects a main subject during the through display of the subject. However, the present invention is not limited to the disclosed example.
For example, another exemplary embodiment of the present invention can possess the capability of transferring image data obtained in the imaging apparatus to an external device, causing a display unit of the external device to display the image data, and causing the external device to detect a main subject.
Furthermore, image data can be a moving image already recorded in a storage medium or device if it is readable. Namely, another exemplary embodiment of the present invention can provide functions of repetitively detecting an object satisfying specific conditions from continuously changing image data and realizing the display reflecting detection results.
The above-described flowcharts can be realized by program codes when a computer operates under the program(s) stored in a RAM or ROM. In this respect, an exemplary embodiment of the present invention can include the program(s) and a storage medium of the program(s) readable by a computer.
More specifically, the program(s) can be recorded into, for example, a CD-ROM or another recording medium, or can be supplied to a computer via various transmission media. The recording medium storing the program(s) can be selected from a flexible disk, hard disk, optical disk, magneto-optical disk, MO, CD-ROM, CD-R, CD-RW, magnetic tape, nonvolatile memory card, ROM, and DVD (DVD-ROM, DVD-R).
Furthermore, the transmission medium of the program(s) can be a computer network (e.g., LAN, or WAN represented by Internet) that can supply carriers of program information.
Furthermore, the transmission medium of the program(s) can be a communication medium (e.g., an optical fiber or other cable line, or a wireless line) used in a wireless communication network system.
When a computer reads and executes the installed program(s), the functions of the above-described exemplary embodiments can be realized.
Furthermore, based on an instruction of the program(s), the operating system (or other application software) running on the computer may execute part or all of the processing so that the functions of the above-described exemplary embodiments can be realized.
Furthermore, the program(s) read out of a recording medium can be written into a memory of a feature expansion board equipped in a computer or into a memory of a feature expansion unit connected to the computer. In this case, based on an instruction of the program, a CPU provided on the feature expansion board or the feature expansion unit can execute part or all of the processing so that the functions of the above-described exemplary embodiments can be realized.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures and functions.
This application claims priority from Japanese Patent Application No. 2005-227896 filed Aug. 5, 2005; and Japanese Patent Application No. 2006-162215 filed Jun. 12, 2006, which are hereby incorporated by reference herein in their entirety.

Claims (25)

1. An image processing method comprising:
repetitively updating image data;
detecting an object satisfying predetermined conditions from the image data and finding coordinates of the object;
displaying a detection result indicating a region where the object is detected, by overlapping the detection result on the image based on the coordinates of the object; and
measuring time during a continuing display of the detection result,
wherein, when the object cannot be detected during the display of the detection result and the time of continuing display of the detection result does not reach a predetermined time, the detection result is continuously displayed on the display unit based on the coordinates of the object detected before, and when the time of continuing display of the detection result reaches the predetermined time, the display of the detection result is erased.
2. The image processing method according to claim 1, wherein the measurement result is reset when the object is detected during the display of the detection result.
3. The image processing method according to claim 1, wherein the predetermined time is changeable in accordance with a position of the detected object in the image.
4. The image processing method according to claim 3, wherein, the closer the position of the detected object is to an edge of the image, the shorter the predetermined time is set.
5. The image processing method according to claim 1, wherein the predetermined time is changeable in accordance with a size of the detected object.
6. The image processing method according to claim 5, wherein, the larger the size of the detected object is, the shorter the predetermined time is set.
7. The image processing method according to claim 1, further comprising determining, based on a moving direction of the detected object, whether the detection result should be continuously displayed.
8. The image processing method according to claim 1, further comprising determining whether the detection result should be continuously displayed, based on a moving direction of the detected object and a position of the detected object in the image.
9. The image processing method according to claim 1, further comprising,
detecting a changed amount of the image data in response to the update of the image data; and
determining, based on a detection result of the changed amount, whether the detection result should be continuously displayed.
10. The image processing method according to claim 1, further comprising,
detecting a change amount of the image data in response to the update of the image data; and
resetting the measurement result in accordance with the change amount.
11. The image processing method according to claim 1, wherein the object satisfying the predetermined conditions is a human.
12. The image processing method according to claim 1, wherein the object satisfying the predetermined conditions is the shape of a human face.
13. The image processing method according to claim 1, wherein, when the object is newly detected during the display of the detection result and the distance between the coordinates of the newly detected object and the coordinates of the displayed detection result satisfies predetermined conditions, the detection result is updated and displayed based on the coordinates of the newly detected object.
14. An imaging apparatus comprising:
an imaging element configured to produce image data based on light reflection from a subject;
a detection circuit configured to detect an object satisfying predetermined conditions from the image data obtained from the imaging element and find coordinates of the object;
a display unit configured to repetitively obtain the image data, display an image based on the obtained image data, and display the detection result detected by the detection circuit and indicating a region where the object is detected, by overlapping the detection result on the image based on the coordinates detected by the detection circuit;
a timer configured to measure time during the continuing display of the detection result; and
a signal processing circuit configured to continuously display the detection result on the display unit based on the coordinates of the object detected before, when the object cannot be detected while the display unit displays the detection result and the time of continuing display of the detection result does not reach a predetermined time, and to erase the display of the detection result on the display unit when the time of continuing display of the detection result reaches the predetermined time.
15. The imaging apparatus according to claim 14, wherein the display unit combines a moving image and the detection result.
16. The imaging apparatus according to claim 14, further comprising a focus control circuit that performs auto-focus processing applied to the object detected by the detection circuit.
17. The imaging apparatus according to claim 14, further comprising an exposure control circuit that performs exposure control applied to the object detected by the detection circuit.
18. The imaging apparatus according to claim 14, wherein the signal processing circuit resets the time measured by the timer when the object is detected during the display of the detection result.
19. The imaging apparatus according to claim 14, wherein, when the object is newly detected during the display of the detection result and the distance between the coordinates of the newly detected object and the coordinates of the displayed detection result satisfies predetermined conditions, the signal processing circuit updates the display of the detection result on the display unit based on the coordinates of the newly detected object.
20. A computer readable medium containing computer-executable instructions for performing processing of image data, the medium comprising:
computer-executable instructions that repetitively update image data;
computer-executable instructions that detect an object satisfying predetermined conditions from the image data and find coordinates of the object;
computer-executable instructions that display a detection result indicating a region where the object is detected, by overlaying it on the image based on the coordinates of the object; and
computer-executable instructions that measure time during a continuing display of the detection result,
wherein, when the object cannot be detected during the display of the detection result and the time of continuing display of the detection result does not reach a predetermined time, the detection result is continuously displayed based on the coordinates of the object detected before, and when the time of continuing display of the detection result reaches the predetermined time, the display of the detection result is erased.
21. The computer readable medium according to claim 20, wherein the measurement result is reset when the object is detected during the display of the detection result.
22. An image processing method comprising:
repetitively updating image data;
detecting an object satisfying predetermined conditions from the image data and finding coordinates of the object;
displaying a detection result indicating a region where the object is detected, by overlaying it on the image based on the coordinates of the object; and
measuring time during the continuing display of the detection result;
wherein, when the object is newly detected during the display of the detection result and the distance from the displayed detection result satisfies predetermined conditions, the detection result is updated and displayed based on the coordinates of the newly detected object, and
wherein, when no object whose distance from the displayed detection result satisfies the predetermined conditions is newly detected during the display of the detection result, the detection result is continuously displayed based on the coordinates of the object detected before as long as the time of continuing display of the detection result does not reach a predetermined time, and when the time of continuing display of the detection result reaches the predetermined time, the display of the detection result is erased.
23. The image processing method according to claim 22, wherein the measured time is reset when an object whose distance from the displayed detection result satisfies the predetermined conditions is newly detected during the continuing display of the detection result.
24. An imaging apparatus comprising:
an imaging element configured to produce image data based on light reflected from an object;
a detection circuit configured to detect an object satisfying predetermined conditions from the image data obtained from the imaging element and find coordinates of the object;
a display unit configured to repetitively obtain the image data, display an image based on the obtained image data, and display the detection result of the detection circuit, which indicates a region where the object is detected, by overlaying it on the image based on the coordinates detected by the detection circuit;
a timer configured to measure time during the continuing display of the detection result; and
a signal processing circuit configured to update the display of the detection result on the display unit based on the coordinates of the newly detected object when an object whose distance from the displayed detection result satisfies a predetermined condition is newly detected while the display unit is displaying the detection result, and, when no such object is newly detected while the display unit is displaying the detection result, to continuously display the detection result on the display unit based on the coordinates of the object detected before as long as the time of continuing display of the detection result does not reach a predetermined time, and to erase the detection result displayed on the display unit when the time of continuing display of the detection result reaches the predetermined time.
25. The imaging apparatus according to claim 24, wherein, when an object whose distance from the displayed detection result satisfies the predetermined condition is newly detected during the continuing display of the detection result, the signal processing circuit resets the time measured by the timer.
US11/457,862 2005-08-05 2006-07-17 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer Active 2027-08-27 US7602417B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/552,571 US7817202B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US12/552,554 US7738024B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2005227896 2005-08-05
JP2005-227896 2005-08-05
JP2006162215A JP4350725B2 (en) 2005-08-05 2006-06-12 Image processing method, image processing apparatus, and program for causing computer to execute image processing method
JP2006-162215 2006-06-12

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US12/552,554 Division US7738024B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US12/552,571 Division US7817202B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer

Publications (2)

Publication Number Publication Date
US20070030375A1 US20070030375A1 (en) 2007-02-08
US7602417B2 true US7602417B2 (en) 2009-10-13

Family

ID=37717279

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/457,862 Active 2027-08-27 US7602417B2 (en) 2005-08-05 2006-07-17 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US12/552,554 Expired - Fee Related US7738024B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US12/552,571 Expired - Fee Related US7817202B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/552,554 Expired - Fee Related US7738024B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US12/552,571 Expired - Fee Related US7817202B2 (en) 2005-08-05 2009-09-02 Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer

Country Status (3)

Country Link
US (3) US7602417B2 (en)
JP (1) JP4350725B2 (en)
CN (1) CN1909603B (en)

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4945722B2 (en) * 2006-02-15 2012-06-06 Hoya株式会社 Imaging device
EP1874043B1 (en) * 2006-02-20 2013-12-04 Panasonic Corporation Image pick up apparatus
JP4507281B2 (en) * 2006-03-30 2010-07-21 富士フイルム株式会社 Image display device, imaging device, and image display method
JP4683228B2 (en) * 2006-07-25 2011-05-18 富士フイルム株式会社 Image display device, photographing device, image display method and program
JP4656657B2 (en) * 2006-07-31 2011-03-23 キヤノン株式会社 Imaging apparatus and control method thereof
JP5044321B2 (en) * 2006-09-13 2012-10-10 株式会社リコー Imaging apparatus and subject detection method
JP2008160620A (en) * 2006-12-26 2008-07-10 Matsushita Electric Ind Co Ltd Image processing apparatus and imaging apparatus
JP4976160B2 (en) 2007-02-22 2012-07-18 パナソニック株式会社 Imaging device
JP4974704B2 (en) * 2007-02-22 2012-07-11 パナソニック株式会社 Imaging device
JP4998026B2 (en) 2007-03-15 2012-08-15 ソニー株式会社 Image processing apparatus, imaging apparatus, image display control method, and computer program
US7973848B2 (en) * 2007-04-02 2011-07-05 Samsung Electronics Co., Ltd. Method and apparatus for providing composition information in digital image processing device
EP1986421A3 (en) * 2007-04-04 2008-12-03 Nikon Corporation Digital camera
JP4506779B2 (en) * 2007-05-09 2010-07-21 カシオ計算機株式会社 Imaging apparatus and program
JP5029137B2 (en) 2007-05-17 2012-09-19 カシオ計算機株式会社 Imaging apparatus and program
JP4858849B2 (en) * 2007-05-18 2012-01-18 カシオ計算機株式会社 Imaging apparatus and program thereof
JP2008287064A (en) * 2007-05-18 2008-11-27 Sony Corp Imaging apparatus
WO2009008164A1 (en) * 2007-07-09 2009-01-15 Panasonic Corporation Digital single-lens reflex camera
JP5064926B2 (en) * 2007-08-02 2012-10-31 キヤノン株式会社 Imaging apparatus and control method thereof
JP4902460B2 (en) * 2007-08-08 2012-03-21 キヤノン株式会社 Image processing apparatus, imaging apparatus, and image processing method
JP4904243B2 (en) * 2007-10-17 2012-03-28 富士フイルム株式会社 Imaging apparatus and imaging control method
JP4950836B2 (en) * 2007-10-24 2012-06-13 富士フイルム株式会社 Imaging apparatus and operation control method thereof
JP4983672B2 (en) * 2008-03-19 2012-07-25 カシオ計算機株式会社 Imaging apparatus and program thereof
JP5217600B2 (en) * 2008-04-22 2013-06-19 ソニー株式会社 Imaging device
JP5674465B2 (en) * 2008-04-23 2015-02-25 レノボ・イノベーションズ・リミテッド(香港) Image processing apparatus, camera, image processing method and program
JP5164711B2 (en) * 2008-07-23 2013-03-21 キヤノン株式会社 Focus adjustment apparatus and control method thereof
JP2010035048A (en) * 2008-07-30 2010-02-12 Fujifilm Corp Imaging apparatus and imaging method
US9104408B2 (en) * 2008-08-22 2015-08-11 Sony Corporation Image display device, control method and computer program
JP5409189B2 (en) * 2008-08-29 2014-02-05 キヤノン株式会社 Imaging apparatus and control method thereof
JP2010087598A (en) * 2008-09-29 2010-04-15 Fujifilm Corp Photographic apparatus, photographic control method and program therefor, image display apparatus, image display method and program therefor, and photographic system, control method therefor and program therefor
JP5207918B2 (en) * 2008-10-27 2013-06-12 キヤノン株式会社 Image processing apparatus, imaging apparatus, and image processing method
TW201023633A (en) * 2008-12-05 2010-06-16 Altek Corp An image capturing device for automatically position indicating and the automatic position indicating method thereof
JP2010204585A (en) * 2009-03-05 2010-09-16 Canon Inc Imaging apparatus and method of controlling the same
JP5395503B2 (en) * 2009-04-27 2014-01-22 富士フイルム株式会社 Display control apparatus and operation control method thereof
JP5412953B2 (en) * 2009-05-20 2014-02-12 リコーイメージング株式会社 Imaging device
US8520131B2 (en) * 2009-06-18 2013-08-27 Nikon Corporation Photometric device, imaging device, and camera
JP5500915B2 (en) * 2009-08-31 2014-05-21 キヤノン株式会社 Imaging apparatus and exposure control method
JP5500916B2 (en) * 2009-08-31 2014-05-21 キヤノン株式会社 Imaging apparatus and control method thereof
JP4831223B2 (en) * 2009-09-29 2011-12-07 カシオ計算機株式会社 Image display apparatus and method, and program
US20110080489A1 (en) * 2009-10-02 2011-04-07 Sony Ericsson Mobile Communications Ab Portrait photo assistant
JP5593990B2 (en) * 2010-09-08 2014-09-24 リコーイメージング株式会社 Imaging system and pixel signal readout method
JP5733956B2 (en) * 2010-11-18 2015-06-10 キヤノン株式会社 IMAGING DEVICE AND ITS CONTROL METHOD, MOVIE RECORDING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
US20120194692A1 (en) * 2011-01-31 2012-08-02 Hand Held Products, Inc. Terminal operative for display of electronic record
US20120194415A1 (en) * 2011-01-31 2012-08-02 Honeywell International Inc. Displaying an image
JP6025690B2 (en) * 2013-11-01 2016-11-16 ソニー株式会社 Information processing apparatus and information processing method
JP5720767B2 (en) * 2013-12-17 2015-05-20 株式会社ニコン Focus detection device
WO2016093160A1 (en) * 2014-12-08 2016-06-16 シャープ株式会社 Video processing device
US10264192B2 (en) 2015-01-19 2019-04-16 Sharp Kabushiki Kaisha Video processing device
CN106324945A (en) * 2015-06-30 2017-01-11 中兴通讯股份有限公司 Non-contact automatic focusing method and device
CN105426833A (en) * 2015-11-13 2016-03-23 小米科技有限责任公司 Image identification method and image identification device for game
US10015400B2 (en) * 2015-12-17 2018-07-03 Lg Electronics Inc. Mobile terminal for capturing an image and associated image capturing method
JP6971696B2 (en) 2017-08-04 2021-11-24 キヤノン株式会社 Electronic devices and their control methods
US10757332B2 (en) * 2018-01-12 2020-08-25 Qualcomm Incorporated Movement compensation for camera focus
CN111382726B (en) * 2020-04-01 2023-09-01 浙江大华技术股份有限公司 Engineering operation detection method and related device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100547992B1 (en) * 2003-01-16 2006-02-01 삼성테크윈 주식회사 Digital camera and control method thereof
JP4350725B2 (en) * 2005-08-05 2009-10-21 キヤノン株式会社 Image processing method, image processing apparatus, and program for causing computer to execute image processing method
JP4785536B2 (en) * 2006-01-06 2011-10-05 キヤノン株式会社 Imaging device
JP4654974B2 (en) * 2006-05-23 2011-03-23 富士フイルム株式会社 Imaging apparatus and imaging method
JP4730667B2 (en) * 2006-07-25 2011-07-20 富士フイルム株式会社 Automatic regeneration method and apparatus
JP2008136035A (en) * 2006-11-29 2008-06-12 Ricoh Co Ltd Imaging apparatus
JP5224955B2 (en) * 2008-07-17 2013-07-03 キヤノン株式会社 IMAGING DEVICE, IMAGING DEVICE CONTROL METHOD, PROGRAM, AND RECORDING MEDIUM

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6005493A (en) * 1996-09-20 1999-12-21 Hitachi, Ltd. Method of displaying moving object for enabling identification of its moving route display system using the same, and program recording medium therefor
US7023469B1 (en) * 1998-04-30 2006-04-04 Texas Instruments Incorporated Automatic video monitoring system which selectively saves information
JP2002251380A (en) 2001-02-22 2002-09-06 Omron Corp User collation system
US20030071908A1 (en) * 2001-09-18 2003-04-17 Masato Sannoh Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program
JP2003107335A (en) 2001-09-28 2003-04-09 Ricoh Co Ltd Image pickup device, automatic focusing method, and program for making computer execute the method
US20040208375A1 (en) * 2002-10-15 2004-10-21 Digicomp Research Corporation Automatic intrusion detection system for perimeter defense
US7453506B2 (en) * 2003-08-25 2008-11-18 Fujifilm Corporation Digital camera having a specified portion preview section
US7327886B2 (en) * 2004-01-21 2008-02-05 Fujifilm Corporation Photographing apparatus, method and program
US20070053551A1 (en) * 2005-09-07 2007-03-08 Hitachi, Ltd. Driving support apparatus
US7469055B2 (en) * 2006-08-11 2008-12-23 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7738024B2 (en) * 2005-08-05 2010-06-15 Canon Kabushiki Kaisha Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US7817202B2 (en) * 2005-08-05 2010-10-19 Canon Kabushiki Kaisha Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US20090315997A1 (en) * 2005-08-05 2009-12-24 Canon Kabushiki Kaisha Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US20090322885A1 (en) * 2005-08-05 2009-12-31 Canon Kabushiki Kaisha Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US8917334B2 (en) * 2007-07-09 2014-12-23 Nikon Corporation Image detection device, focusing device, image-capturing device, image detection method, and focusing method
US20110292274A1 (en) * 2007-07-09 2011-12-01 Nikon Corporation Image detection device, focusing device, image-capturing device, image detection method, and focusing method
US20090016708A1 (en) * 2007-07-09 2009-01-15 Nikon Corporation Image detection device, focusing device, image-capturing device, image detection method, and focusing method
US20090237554A1 (en) * 2008-03-19 2009-09-24 Atsushi Kanayama Autofocus system
US20090324091A1 (en) * 2008-06-25 2009-12-31 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program, and computer-readable print medium
US8374439B2 (en) 2008-06-25 2013-02-12 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program, and computer-readable print medium
US9106872B2 (en) * 2008-10-27 2015-08-11 Sony Corporation Image processing apparatus, image processing method, and program
US20100171836A1 (en) * 2009-01-07 2010-07-08 Canon Kabushiki Kaisha Image capturing apparatus, control method thereof, and program
US8477194B2 (en) 2009-01-07 2013-07-02 Canon Kabushiki Kaisha Image capturing apparatus, control method thereof, and program
US20130155291A1 (en) * 2011-12-19 2013-06-20 Sanyo Electric Co., Ltd. Electronic camera
EP2793166A3 (en) * 2013-04-15 2017-01-11 Omron Corporation Target-image detecting device, control method and control program thereof, recording medium, and digital camera
US9323984B2 (en) * 2014-06-06 2016-04-26 Wipro Limited System and methods of adaptive sampling for emotional state determination
US11287314B1 (en) * 2020-03-17 2022-03-29 Roof Asset Management Usa Ltd. Method for evaluating artificial lighting of a surface

Also Published As

Publication number Publication date
US20090322885A1 (en) 2009-12-31
US7817202B2 (en) 2010-10-19
US20090315997A1 (en) 2009-12-24
US20070030375A1 (en) 2007-02-08
CN1909603B (en) 2010-09-29
CN1909603A (en) 2007-02-07
US7738024B2 (en) 2010-06-15
JP2007068147A (en) 2007-03-15
JP4350725B2 (en) 2009-10-21

Similar Documents

Publication Publication Date Title
US7602417B2 (en) Image processing method, imaging apparatus, and storage medium storing control program of image processing method executable by computer
US7414667B2 (en) Image sensing apparatus, image processing apparatus, and control method therefor for relaxing a red-eye effect
US8139136B2 (en) Image pickup apparatus, control method of image pickup apparatus and image pickup apparatus having function to detect specific subject
JP5054063B2 (en) Electronic camera, image processing apparatus, and image processing method
US7804533B2 (en) Image sensing apparatus and correction method
JP4989385B2 (en) Imaging apparatus, control method thereof, and program
US20040061796A1 (en) Image capturing apparatus
US8670064B2 (en) Image capturing apparatus and control method therefor
US8284994B2 (en) Image processing apparatus, image processing method, and storage medium
JP5115210B2 (en) Imaging device
JP2007124056A (en) Image processor, control method and program
JP2019121860A (en) Image processing apparatus and control method therefor
US8576306B2 (en) Image sensing apparatus, image processing apparatus, control method, and computer-readable medium
US20150172595A1 (en) Image processing apparatus capable of movie recording, image pickup apparatus, control method therefor, and storage medium
JP4506779B2 (en) Imaging apparatus and program
JP5361502B2 (en) Focus detection apparatus and method, and imaging apparatus
JP2005167697A (en) Electronic camera having red-eye correction function
JP2005266784A (en) Imaging apparatus, its control method, its control program, and storage medium
US8514305B2 (en) Imaging apparatus
JP5448391B2 (en) Imaging apparatus and red-eye correction method
JP2005333620A (en) Image processing apparatus and image processing method
JP4682104B2 (en) Imaging device
JP4773924B2 (en) IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP2005326506A (en) Focus detection device and focus detection method
JP2010041504A (en) Camera

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OGASAWARA, TSUTOMU;IKEDA, EIICHIRO;REEL/FRAME:017944/0047;SIGNING DATES FROM 20060704 TO 20060705

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12