US8004573B2 - Imaging apparatus, imaged picture recording method, and storage medium storing computer program - Google Patents

Imaging apparatus, imaged picture recording method, and storage medium storing computer program

Info

Publication number
US8004573B2
Authority
US
United States
Prior art keywords
unit
face
faces
imaging
face detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/490,646
Other versions
US20090322906A1 (en
Inventor
Kazuyoshi Watanabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casio Computer Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Computer Co Ltd filed Critical Casio Computer Co Ltd
Assigned to CASIO COMPUTER CO., LTD. Assignors: WATANABE, KAZUYOSHI (assignment of assignors interest; see document for details)
Publication of US20090322906A1
Application granted
Publication of US8004573B2
Status: Expired - Fee Related

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00326Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus
    • H04N1/00328Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus with an apparatus processing optically-read information
    • H04N1/00331Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a data reading, recognizing or recording apparatus, e.g. with a bar-code apparatus with an apparatus processing optically-read information with an apparatus performing optical character recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/21Intermediate information storage
    • H04N1/2104Intermediate information storage for one or a few pictures
    • H04N1/2112Intermediate information storage for one or a few pictures using still video cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2101/00Still video cameras

Definitions

  • the present invention relates to an imaging apparatus that images a picture, an imaged picture recording method, and a storage medium that stores a computer program.
  • Unexamined Japanese Patent Application KOKAI Publication No. 2007-329602 proposes a camera that detects human faces from an imaged picture, counts the number of the faces, and records the picture in a recording medium such as a memory card in a case where the number of the faces is equal to a preset number of persons intended to be imaged.
  • Unexamined Japanese Patent Application KOKAI Publication No. 2007-329602 also proposes a camera that registers a plurality of faces in a face registration memory, counts the number of faces that are recognized as any of the registered faces from the faces included in an imaged picture, and records the picture in a recording medium such as a memory card in a case where the number of the recognized faces is equal to a preset number of persons intended to be imaged.
  • An object of the present invention is to provide an imaging apparatus that needs no setting of a number of persons intended to be imaged and records an imaged picture in response to a change in the number of imaged persons included in an imaging range, an imaged picture recording method, and a storage medium that stores a computer program.
  • an imaging apparatus includes: an imaging unit; a recording instruction detecting unit that detects an instruction to record a picture that is imaged by the imaging unit; a characteristic picture region detecting unit that detects a characteristic picture region included in a picture imaged by the imaging unit, the characteristic picture region containing a predetermined characteristic; an obtaining unit that makes the imaging unit image pictures continuously, and obtains a region number each time a picture is imaged, the region number indicating a number of characteristic picture regions that are detected by the characteristic picture region detecting unit; a determining unit that determines whether the region number obtained by the obtaining unit has changed or not; and a recording unit that records an imaged picture in a recording medium, in response to that the recording instruction detecting unit detects a recording instruction and when the determining unit determines that the region number has changed.
  • an imaged picture recording method includes: a recording instruction detecting step of detecting an instruction to record a picture that is imaged by an imaging unit; a characteristic picture region detecting step of detecting a characteristic picture region included in a picture imaged by the imaging unit, the characteristic picture region containing a predetermined characteristic; an obtaining step of making the imaging unit image pictures continuously, and obtaining a region number each time a picture is imaged, the region number indicating a number of characteristic picture regions detected at the characteristic picture region detecting step; a determining step of determining whether the region number obtained at the obtaining step has changed or not; and a recording step of recording an imaged picture in a recording medium, in response to that a recording instruction is detected at the recording instruction detecting step and when it is determined at the determining step that the region number has changed.
  • a storage medium stores a program that is readable by a computer possessed by an imaging apparatus and controls the computer to function as: a recording instruction detecting unit that detects an instruction to record a picture that is imaged by an imaging unit; a characteristic picture region detecting unit that detects a characteristic picture region included in a picture imaged by the imaging unit, the characteristic picture region containing a predetermined characteristic; an obtaining unit that makes the imaging unit image pictures continuously, and obtains a region number each time a picture is imaged, the region number indicating a number of characteristic picture regions detected by the characteristic picture region detecting unit; a determining unit that determines whether the region number obtained by the obtaining unit has changed or not; and a recording unit that records an imaged picture in a recording medium, in response to that the recording instruction detecting unit detects a recording instruction and when the determining unit determines that the region number has changed.
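The recording logic shared by the apparatus, method, and storage-medium aspects above can be sketched as follows. This is a minimal illustration, not the claimed implementation; all function names are invented.

```python
# Hedged sketch of the claimed logic: record an imaged picture when a
# recording instruction is pending AND the number of detected
# characteristic picture regions (e.g., faces) has changed.
# All names here are illustrative, not from the patent.

def should_record(instruction_pending, previous_count, current_count):
    """The determining unit plus recording unit, reduced to a predicate."""
    return instruction_pending and current_count != previous_count

def first_recorded_frame(region_counts, instruction_pending=True):
    """region_counts: region numbers obtained from continuously imaged
    pictures. Returns the index of the first frame whose region number
    differs from the previous frame's, or None if it never changes."""
    previous = region_counts[0]
    for i, current in enumerate(region_counts[1:], start=1):
        if should_record(instruction_pending, previous, current):
            return i
        previous = current
    return None
```

No preset number of persons is involved: only a change in the obtained region number triggers recording.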
  • FIG. 1 is a block diagram of a camera according to an embodiment of the present invention
  • FIG. 2 is a diagram showing one example of a picture that is displayed on a display of a display unit
  • FIG. 3 is a diagram showing one example of a picture that includes one more human face than included in the picture of FIG. 2 ;
  • FIG. 4 is a diagram showing one example of a main flowchart according to a first embodiment of the present invention.
  • FIG. 5 is a diagram showing one example of a sub flowchart according to the first embodiment of the present invention.
  • FIG. 6 is a diagram showing a main flowchart according to a modified example of the first embodiment of the present invention.
  • FIG. 7 is a diagram showing one example of a main flowchart according to a second embodiment of the present invention.
  • FIG. 8 is a diagram showing one example of a main flowchart according to a modified example of the second embodiment of the present invention.
  • FIG. 9 is a diagram showing one example of a main flowchart according to a third embodiment of the present invention.
  • a camera 10 includes an imaging unit 1 , a CPU (Central Processing Unit) 2 , a memory unit 3 , a recording medium control unit 4 , a recording medium 5 , an operation unit 6 , a display unit 7 , and a timer 8 .
  • the imaging unit 1 includes an imaging device such as a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) image sensor, etc.
  • the imaging unit 1 images pictures continuously.
  • the imaging unit 1 converts an imaged picture from Analog to Digital (A/D), and outputs picture data, which is in the form of a digital signal, to the CPU 2 .
  • the CPU 2 is a programmable processor.
  • the CPU 2 controls the entire camera 10 .
  • the CPU 2 receives picture data output by the imaging unit 1 , converts it into a format displayable by the display unit 7 , and outputs it to the display unit 7 in the form of a through-the-lens image.
  • the CPU 2 applies various kinds of image processes and coding to the picture data, and outputs it to the recording medium control unit 4 .
  • the CPU 2 instructs the recording medium control unit 4 to record the picture data in the recording medium 5 .
  • the memory unit 3 includes memories such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, etc.
  • the memory unit 3 stores programs of the CPU 2 , various settings, etc.
  • the memory unit 3 also temporarily stores data of an imaged picture.
  • the recording medium control unit 4 records picture data in the recording medium 5 in accordance with an instruction of the CPU 2 .
  • the recording medium 5 holds the data of the imaged picture recorded therein.
  • the recording medium 5 may be, for example, an SD (Secure Digital) memory card, a hard disk, a CD (Compact Disk), or a DVD (Digital Versatile Disk).
  • the operation unit 6 includes various operation keys and switches such as a shutter button and an up, down, left, and right cursor key.
  • the CPU 2 performs auto-focusing and automatic exposure upon detecting an operation to half-press the shutter button, for example.
  • Upon detecting an operation to full-press the shutter button, the CPU 2 immediately instructs the recording medium control unit 4 to record picture data in the recording medium 5 . Further, upon detecting the operation to full-press the shutter button, the CPU 2 detects human faces (specifically, picture regions representing faces) from each imaged picture and obtains the number of the detected faces, as will be described later. When the number of faces increases, the CPU 2 instructs the recording medium control unit 4 , after a predetermined period of time passes since the increase, to record picture data in the recording medium 5 .
  • the display unit 7 includes a display.
  • the CPU 2 displays various messages and a through-the-lens image, which has been reduced in size as compared with the pictures imaged by the imaging unit 1 , on the display.
  • the size of the through-the-lens image is, for example, 320 × 240 pixels (QVGA).
  • the timer 8 is a down-counter, for example.
  • the timer 8 clocks a preset predetermined period of time by counting down, and upon counting down to 0, notifies the CPU 2 that the count has become 0 by giving the CPU 2 an interrupt.
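The down-counter behavior of the timer 8 can be modeled minimally as follows; the callback stands in for the interrupt given to the CPU 2 , and the tick granularity is an assumption made for illustration.

```python
# Minimal model of a down-counting timer (illustrative only): loaded
# with a preset count, it fires a callback once when the count hits 0.

class DownCounter:
    def __init__(self, preset_ticks, on_zero):
        self.count = preset_ticks
        self.on_zero = on_zero      # stands in for the CPU interrupt

    def tick(self):
        if self.count > 0:
            self.count -= 1
            if self.count == 0:
                self.on_zero()      # notify: count has become 0

fired = []
timer = DownCounter(3, on_zero=lambda: fired.append(True))
for _ in range(3):
    timer.tick()
```

After three ticks the callback has fired exactly once; further ticks at count 0 do nothing.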
  • the CPU 2 records picture data that is based on a picture imaged after a predetermined period of time passes since the increase, in the recording medium 5 .
  • Upon detecting that the shutter button is half-pressed, the CPU 2 detects human faces included in imaged pictures and obtains the number of the detected faces.
  • the CPU 2 displays face frames 71 ( 71 A to 71 C) as shown in FIG. 2 , by superimposing them on the through-the-lens image displayed on the display of the display unit 7 .
  • the CPU 2 again detects human faces from each continuously imaged picture and obtains the number of the detected faces. If the number of faces has increased, for example, from three to four and hence three face frames 71 A to 71 C have increased to four face frames 71 A to 71 D as shown in FIG. 3 , the timer 8 starts counting down in order to clock a predetermined period of time. The CPU 2 records data of a picture that is imaged when the count of the timer 8 turns to 0 in the recording medium 5 .
  • the CPU 2 makes initial settings such as setting a predetermined period of time in the timer 8 (step S 101 ), and waits until an operation to half-press the shutter button is detected (step S 102 ; No).
  • Upon detecting an operation to half-press the shutter button (step S 102 ; Yes), the CPU 2 performs a face detecting process, which will be described later, and counts the number of the detected faces (step S 103 ). Then, the CPU 2 performs auto-focusing and automatic exposure, targeting the detected faces (step S 104 ).
  • the CPU 2 displays face frames 71 ( 71 A to 71 C) on the through-the-lens image displayed on the display of the display unit 7 , superimposing them on the detected faces. Under this state, the CPU 2 waits until an operation to full-press the shutter button is detected (step S 105 ; No).
  • Upon detecting an operation to full-press the shutter button (step S 105 ; Yes), the CPU 2 again performs the face detecting process and counts the number of the detected faces (step S 106 ). Then, the CPU 2 stores the number of the faces counted in the face detecting process in a predetermined area of the memory unit 3 (step S 107 ). Note that the CPU 2 may skip step S 106 and store the number of the faces counted at step S 103 in a predetermined area of the memory unit 3 at step S 107 .
  • the CPU 2 performs the face detecting process on each new picture that is input from continuous imaging, and counts the number of the detected faces (step S 108 ).
  • In a case where it is determined at step S 109 that the number of the faces has not increased (step S 109 ; No), the CPU 2 returns to step S 107 to store the number of the faces counted at step S 108 , and again performs step S 108 to perform the face detecting process and count the number of the detected faces.
  • the CPU 2 compares the number of the faces counted at step S 108 and the number of the faces stored at step S 107 , and if it is determined that the former has increased from the latter (step S 109 ; Yes), the CPU 2 controls the timer 8 to start counting down the predetermined period of time. That is, among continuously imaged pictures, the CPU 2 compares a nearest previously imaged picture and a currently imaged picture as to the number of faces included therein. And in response to an increase in the number of the faces, the CPU 2 controls the timer 8 to start counting down.
  • Upon the count turning to 0, the timer 8 gives an interrupt to the CPU 2 to notify that the predetermined period of time has passed (step S 110 ; Yes).
  • When the predetermined period of time has passed (step S 110 ; Yes), the CPU 2 instructs the recording medium control unit 4 to record the picture that is imaged at this timing of passage in the recording medium 5 (step S 111 ).
  • the recording medium control unit 4 records the imaged picture in the recording medium 5 .
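The FIG. 4 loop (steps S 107 to S 111 ) can be compressed into a few lines: the stored count is refreshed while the face count does not increase, and an increase starts the delay before recording. The delay is expressed in frames here purely for illustration; the patent uses the timer 8 .

```python
# Hedged sketch of the FIG. 4 flow after full-press. Names and the
# frame-based delay are illustrative, not from the patent.

def record_after_increase(face_counts, delay_frames):
    """face_counts[0] is the count stored at full-press (step S107).
    Returns the index of the frame recorded after the delay, or None
    if the face count never increases."""
    stored = face_counts[0]
    for i, count in enumerate(face_counts[1:], start=1):
        if count > stored:            # step S109; Yes -> start timer
            return i + delay_frames   # frame imaged when the count hits 0
        stored = count                # step S109; No -> re-store (S107)
    return None
```

Note that a decrease merely updates the stored count, so a later return to the original number still registers as an increase.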
  • the face detecting process at steps S 103 , S 106 , and S 108 detects faces by using a face detector that has a function of distinguishing between picture regions representing faces and picture regions not representing faces.
  • the face detector can detect faces with accuracy sufficient for the purpose of the face detection of the present embodiment, even if the resolution is 320 × 240 pixels (QVGA) or so. Hence, in the present embodiment, the CPU 2 performs the face detecting process on the through-the-lens image.
  • FIG. 5 is a diagram showing one example of a sub flowchart of the face detecting process described above.
  • the CPU 2 cuts out a small region from the through-the-lens image (step S 201 ), and determines by the face detector whether the small region includes a face or not (step S 202 ).
  • the CPU 2 contracts or expands the through-the-lens image to cut out variously sized small regions, and inputs each cut-out region to the face detector.
  • the face detector calculates a score that indicates face-likeness, based on a characteristic quantity that is obtained from the small region cut out from the through-the-lens image and a characteristic quantity that is retrieved from a face identification database stored in the ROM or the like. In a case where the score is larger than a preset threshold, the face detector determines that the small region cut out from the through-the-lens image includes a face.
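The score-and-threshold test just described can be sketched with a toy characteristic quantity; the real quantity comes from the face identification database, whose form the text does not specify, so the feature vectors and scoring function below are assumptions.

```python
# Toy version of the face-likeness score: higher (less negative) means
# more face-like. The real characteristic quantities are retrieved from
# a face identification database, which is not modeled here.

def face_likeness(region_features, db_features):
    """Toy score: negative squared distance between feature vectors."""
    return -sum((a - b) ** 2 for a, b in zip(region_features, db_features))

def is_face(region_features, db_features, threshold):
    """Step S202: a small region is judged a face when its score
    exceeds the preset threshold."""
    return face_likeness(region_features, db_features) > threshold
```

Raising the threshold makes the condition stricter, which is exactly how the second embodiment later biases detection toward full faces.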
  • the face detector is embodied as software that is executed by the CPU 2 .
  • In a case where it is determined by the face detector that the small region does not include a face (step S 202 ; No), the CPU 2 returns to step S 201 and cuts out a next small region. On the other hand, in a case where it is determined by the face detector that the small region includes a face (step S 202 ; Yes), the CPU 2 counts the number of detected faces (step S 203 ).
  • In a case where it is determined at step S 204 that the number of the counted faces is larger than the maximum number of faces that are allowed to be detected from one picture (step S 204 ; No), the CPU 2 displays a message that says, for example, that "there are too many persons", on the display of the display unit 7 (step S 205 ), and terminates this process by jumping to after step S 111 of FIG. 4 .
  • In a case where it is determined that the number of the counted faces is not larger than the maximum number (step S 204 ; Yes), the CPU 2 determines whether the current small region is the last one in the picture (step S 206 ). In a case where it is determined that the small region is not the last one in the picture (step S 206 ; No), the CPU 2 returns to step S 201 and cuts out a next small region. In a case where it is determined that the small region is the last one (step S 206 ; Yes), the face detecting process ends.
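The FIG. 5 sub-flow (steps S 201 to S 206 ) amounts to a loop over cut-out small regions with a cap on the face count. In this sketch the detector predicate and the maximum value are stand-ins; the patent leaves both open.

```python
# Sketch of the face detecting sub-flow: iterate over small regions,
# count those the detector accepts, and stop with a warning once the
# count exceeds an assumed maximum.

MAX_FACES = 10     # assumed cap; the patent does not give a value

def count_faces(regions, is_face):
    """regions: iterable of cut-out small regions (steps S201/S206);
    is_face: detector predicate (step S202).
    Returns (count, warning_message_or_None)."""
    count = 0
    for region in regions:
        if is_face(region):                # step S202; Yes
            count += 1                     # step S203
            if count > MAX_FACES:          # step S204; No
                return count, "there are too many persons"
    return count, None
```

With at most MAX_FACES detections the loop simply runs to the last small region and returns the count.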
  • FIG. 6 is a diagram showing a modified example of the main flowchart shown in FIG. 4 . Steps S 101 to S 106 and steps S 110 and S 111 are the same as those in FIG. 4 .
  • Upon detecting an operation to full-press the shutter button (step S 105 ; Yes), the CPU 2 performs the face detecting process and counts the number of the detected faces (step S 106 ).
  • the CPU 2 stores the number of the faces counted in the face detecting process in a predetermined area of the memory unit 3 (step S 307 ). After this, the CPU 2 repeats the face detecting process for each picture and obtains the number of faces counted (step S 308 ). In a case where the number of the faces counted at step S 308 is equal to or smaller than the number of the faces stored at step S 307 (step S 309 ; No), the CPU 2 returns to step S 308 to again perform the face detecting process and counts the number of faces detected.
  • In a case where it is determined that the number of the faces counted at step S 308 is larger than the number of the faces stored at step S 307 (step S 309 ; Yes), the CPU 2 waits for a predetermined period of time to pass (step S 110 ) and records data of the picture that is imaged at this timing of passage in the recording medium 5 (step S 111 ).
  • the CPU 2 compares a nearest previously imaged picture and a currently imaged picture among continuously imaged pictures as to the number of faces included and records picture data in the recording medium 5 in response to an increase in the number of faces.
  • the CPU 2 compares a picture imaged when an operation to full-press the shutter button is detected and any picture imaged after the operation to full-press the shutter button is detected among continuously imaged pictures as to the number of faces included, and records picture data in the recording medium 5 in response to an increase in the number of faces.
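The two comparison policies above differ only in the baseline: FIG. 4 compares against the nearest previously imaged picture (a rolling baseline), while the FIG. 6 modification compares against the picture imaged at full-press (a fixed baseline). A sketch with illustrative names:

```python
# Rolling vs. fixed baseline for the "number of faces increased" test.
# face_counts[0] is the count at full-press; later entries come from
# continuously imaged pictures.

def trigger_index(face_counts, rolling):
    """Return the index of the first frame whose face count exceeds the
    baseline, or None. rolling=True models FIG. 4 (baseline follows the
    previous frame); rolling=False models FIG. 6 (baseline is fixed)."""
    baseline = face_counts[0]
    for i, count in enumerate(face_counts[1:], start=1):
        if count > baseline:
            return i
        if rolling:
            baseline = count
    return None
```

With counts 3, 2, 3 the rolling policy triggers when the third person returns, whereas the fixed policy does not, since the count never exceeds the full-press value.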
  • When a group photo is to be taken, the persons who are to have the photo taken generally turn their full faces toward the camera or the like.
  • Hence, in the second embodiment, a full face detector, which distinguishes between full faces that face the camera direction and faces that face other directions, detects full faces.
  • the second embodiment employs the camera 10 , like the first embodiment.
  • FIG. 7 is a diagram showing one example of a main flowchart according to the second embodiment of the present invention.
  • FIG. 7 is different from FIG. 4 only in its face detecting process (steps S 403 , S 406 , and S 408 ).
  • the steps other than steps S 403 , S 406 , and S 408 are the same as those of FIG. 4 .
  • In response to an operation to half-press the shutter button being detected (step S 102 ; Yes), the CPU 2 performs a face detecting process (step S 403 ), and then performs auto-focusing and automatic exposure (step S 104 ). At this time, the CPU 2 sets imaging conditions such as focus, exposure, etc. by targeting the faces detected in the face detecting process.
  • the imaged persons may not necessarily be facing the camera direction, but may be facing sideward or an oblique direction or tilting their faces.
  • the CPU 2 performs a face detecting process in many directions by using a first face detector, which is to be described later.
  • the first face detector can detect not only a full face, but a half face, an oblique face, a tilted face, etc.
  • the CPU 2 uses not the first face detector mentioned above but a second face detector that identifies a full face to perform a face detecting process in the camera direction and counts the number of the detected faces (step S 406 ).
  • the second face detector will be described later.
  • the CPU 2 stores the number of the faces counted in the face detecting process performed in the camera direction in a predetermined area of the memory unit 3 (step S 107 ).
  • the CPU 2 performs the face detecting process on the picture in the camera direction and counts the number of the detected faces (step S 408 ).
  • In a case where it is determined that the number of the faces counted at step S 408 is equal to or smaller than the number of the faces stored at step S 107 (step S 109 ; No), the CPU 2 stores the number of the counted faces in the predetermined area of the memory unit 3 (step S 107 ), and again performs step S 408 to perform the face detecting process in the camera direction and count the number of the detected faces.
  • In a case where it is determined that the number of the faces has increased (step S 109 ; Yes), the CPU 2 waits for a predetermined period of time to pass (step S 110 ) and records data of the picture that is imaged at this timing of passage in the recording medium 5 (step S 111 ). That is, the CPU 2 compares a nearest previously imaged picture and a currently imaged picture among continuously imaged pictures as to the number of full faces included therein, and in response to an increase in the number of full faces, records picture data in the recording medium 5 .
  • the face detector used at the above step S 202 of FIG. 5 calculates a score that indicates face-likeness, based on a characteristic quantity obtained from a small region cut out from the through-the-lens image and a characteristic quantity retrieved from a face identification database stored in the ROM or the like.
  • the face detector determines that the small region cut out from the through-the-lens image is a face, in a case where the score is larger than a preset threshold.
  • Setting this threshold high means making the face detecting condition stricter. Accordingly, setting the threshold high makes it more certain that a face-like object will be selected, and can increase the probability that a full face will be detected.
  • the face detector used at step S 202 of FIG. 5 can be the second face detector.
  • the face detector used at step S 202 of FIG. 5 can be the first face detector.
  • The first face detector and the second face detector may be created separately with the use of the following technique.
  • this technique is one of human face detecting methods that uses statistical pattern recognition.
  • This method makes a face detector learn a face recognition rule by using many facial pictures and many non-facial pictures. For example, according to Adaboost learning, a face detector with a high recognition ability can be configured by combination of detectors with a relatively low recognition ability.
  • It is possible to create the second face detector, which is suitable for full face detection, by making it learn a recognition rule by using many full-face pictures and various directional-face pictures (pictures representing a side face that faces sideward, pictures representing an oblique face that faces an oblique direction, pictures representing a tilted face, etc.).
  • It is possible to create the first face detector, which is suitable for various directional face detection, by making it learn a recognition rule by using various directional-face pictures and many non-facial pictures.
  • That is, the various directional face detecting process uses the first face detector as the face detector used at step S 202 of FIG. 5 , while the face detecting process in the camera direction uses the second face detector as the face detector used at step S 202 of FIG. 5 .
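The Adaboost idea mentioned above, a detector with high recognition ability built by combining detectors with relatively low recognition ability, can be sketched as a weighted vote. The weak rules and weights below are invented for illustration; real ones are learned from many facial and non-facial pictures.

```python
# Hedged sketch of a boosted strong detector: a weighted vote of weak
# detectors. Weights and weak rules are made up for illustration.

def strong_detect(features, weak_detectors):
    """weak_detectors: list of (weight, predicate) pairs; classify as a
    face when the weighted vote is positive."""
    vote = sum(w * (1 if pred(features) else -1) for w, pred in weak_detectors)
    return vote > 0

weak = [
    (0.6, lambda f: f["eyes"] >= 2),        # crude part-based rules,
    (0.3, lambda f: f["symmetry"] > 0.5),   # each individually weak
    (0.1, lambda f: f["skin_ratio"] > 0.3),
]
```

Training the same voting scheme on different picture sets yields the two detectors: full-face vs. other-direction pictures for the second detector, directional-face vs. non-face pictures for the first.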
  • FIG. 8 is a diagram showing a modified example of the second embodiment described above.
  • FIG. 8 is different from FIG. 6 in its face detecting process (steps S 403 , S 506 , and S 508 ).
  • the steps other than steps S 403 , S 506 , and S 508 are the same as those of FIG. 6 .
  • In response to an operation to half-press the shutter button being detected (step S 102 ; Yes), the CPU 2 performs the various directional face detecting process (step S 403 ), and performs auto-focusing and automatic exposure targeting the detected faces (step S 104 ).
  • Upon detecting an operation to full-press the shutter button (step S 105 ; Yes), the CPU 2 performs the various directional face detecting process, unlike in FIG. 7 , and counts the number of the detected faces (step S 506 ).
  • the CPU 2 stores the number of the faces counted in the various directional face detecting process in a predetermined area of the memory unit 3 (step S 307 ). Note that the CPU 2 may skip step S 506 and store the number of the faces counted at step S 403 in the predetermined area of the memory unit 3 at step S 307 .
  • The CPU 2 performs the face detecting process in the camera direction on each continuously imaged picture and counts the number of the detected full faces (step S 508 ). In a case where it is determined that the number of the full faces obtained at step S 508 is equal to or smaller than the number of the faces stored at step S 307 (step S 309 ; No), the CPU 2 returns to step S 508 to again perform the face detecting process in the camera direction and count the number of the detected full faces.
  • step S 508 In a case where it is determined that the number of the full faces counted at step S 508 is larger than the number of the faces stored at step S 307 (step S 309 ; Yes), the CPU 2 waits for a predetermined period of time to pass (step S 110 ) and records data of the picture that is imaged at this timing of passage in the recording medium 5 (step S 111 ).
  • the CPU 2 detects various directional faces when an operation to full-press the shutter button is detected, and counts the number of the detected faces and stores it in the predetermined area of the memory unit 3 . Then, in response to that the number of full faces included in a currently imaged picture is larger than the number of the faces stored, the CPU 2 records data of an imaged picture in the recording medium 5 .
  • the first face detector detects not only full faces but half faces, oblique faces, etc., while the second face detector detects only full faces. Therefore, firstly detecting the number of faces using the first face detector and then switching to the second face detector to detect the number of full faces may result in a reduction of the number of detected faces.
  • the CPU 2 compares the number of faces included in a picture that is imaged when an operation to full-press the shutter button is detected with the number of full faces that is obtained from each picture that is imaged continuously thereafter. Therefore, it is possible to prevent picture data from being recorded (step S 111 ) in response to that, for example, a person who faced sideward in a picture that was imaged when an operation to full-press the shutter button was detected (step S 105 ) has turned to face the camera direction.
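The benefit of the fixed baseline in this modified example can be shown in a few lines: later frames contribute only full faces, so a person who merely turns toward the camera cannot push the count past the all-direction count stored at full-press. Detector outputs are stubbed and names are illustrative.

```python
# Sketch of the modified second embodiment's trigger condition:
# baseline = all-direction face count at full-press (step S307);
# later frames are counted with the full-face detector only.

def records(all_direction_count_at_fullpress, full_face_counts):
    """True if any later frame's full-face count exceeds the stored
    all-direction count (step S309; Yes)."""
    return any(c > all_direction_count_at_fullpress for c in full_face_counts)

# Three people, one facing sideways at full-press: the stored count is 3.
# That person turning to the camera raises the full-face count only to 3,
# never above it, so no recording is triggered by the turn alone.
```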
  • To set imaging conditions such as focus, exposure, etc., it is necessary that faces can be detected regardless of the directions of the faces and the expressions on the faces. On the other hand, if a group photo of persons is to be imaged, it is desired that persons who face the camera direction be imaged.
  • The second embodiment can satisfy these conflicting needs by detecting faces regardless of whether the faces face the camera direction or not when setting imaging conditions, while determining whether the faces face the camera direction or not when imaging the faces.
  • the third embodiment employs face recognition for recognizing a particular person's face, instead of face detection.
  • the CPU 2 records an imaged picture when there is an increase in the number of faces of particular persons that are included in imaged pictures.
  • the third embodiment employs the camera 10 , like the first embodiment.
  • the CPU 2 preliminarily images faces of persons, who are the target of recognition, and extracts the characteristics of the faces.
  • the CPU 2 registers the extracted characteristics of the faces of the recognition-target persons in a predetermined registration area of the memory unit 3 . Note that characteristics of faces of a plurality of persons can be registered in the registration area.
  • FIG. 9 is a diagram showing one example of a main flowchart according to the third embodiment of the present invention.
  • In response to an operation to half-press the shutter button being detected (step S 102 ; Yes), the CPU 2 recognizes the face of any recognition-target person included in continuously imaged pictures, and counts the number of the recognized faces (step S 603 ). Then, the CPU 2 performs auto-focusing and automatic exposure targeting the recognized faces (step S 104 ).
  • step S 105 in response to an operation to full-press the shutter button being detected (step S 105 ; Yes), the CPU 2 recognizes the face of any recognition-target person included in continuously imaged pictures and counts the number of the recognized faces (step S 606 ), and stores the number of the counted faces in a predetermined area of the memory unit 3 (step S 607 ). Note that the CPU 2 may skip step S 606 and store the number of the faces counted at step S 603 in the predetermined area of the memory unit 3 .
  • step S 608 the CPU 2 recognizes the face of any recognition-target person from each picture still further continuously imaged, and counts the number of the recognized faces (step S 608 ).
  • the CPU 2 returns to step S 607 to store the number of the counted faces in the predetermined area of the memory 3 and again recognize the face of any recognition-target person from continuously imaged pictures and count the number of the recognized faces (step S 608 ).
  • the CPU 2 compares the number of the faces stored at step S 607 and the number of faces currently counted, and in response to that the latter has increased from the former (step S 609 ; Yes), controls the timer 8 to start counting down a predetermined period of time (step S 110 ).
  • step S 202 it is possible to realize the face recognition of steps S 603 , S 606 , and S 608 in the flow of FIG. 5 , by replacing step S 202 with a process for determining whether or not a small region includes the face of any recognition-target person that is registered in the memory unit 3 .
  • the process of FIG. 9 is the same as the process of FIG. 4 except in recognizing the face of any recognition-target person and counting the number of the recognized faces (steps S 603 , S 606 , and S 608 ).
  • the process of FIG. 9 may be made the same as the modified process of FIG. 6 except in recognizing the face of any recognition-target person and obtaining the number of the recognized faces.
  • CPU 2 records a picture that is imaged when the persons whose facial characteristic is registered come into the imaging range. Hence, it is possible to prevent a picture from being recorded in response to any unspecified person whose facial characteristic is not registered coming into the imaging range, which may be the case when pictures are taken in a crowd such as a tourist spot, an amusement park, etc.
  • the first embodiment described above detects human face pictures from each imaged picture and records a picture in response to that the number of human face pictures has increased, while it is possible that a picture be recorded in response to that faces have disappeared.
  • the CPU 2 records a picture that includes no human face in the recording medium 5 . Hence, it is possible to prevent any indifferent person from being imaged when taking photos of scenery or constructions.
  • the present invention requires no setting of a number of persons intended to be imaged, and can record a picture that is imaged in response to that the number of imaged persons included in the imaging range has changed.
  • the present invention when taking a group photo that should include the camera operator, the present invention requires no preliminary setting of the number of persons intended to be imaged, but records a picture in response to that the camera operator has come into the imaging range. Hence, the present invention can image a group photo that includes the camera operator, by not requiring him/her to be nervous about timing but allowing him/her leeway.
  • the second embodiment detects full faces by using the second face detector that is suitable for detecting full faces.
  • the second face detector that is suitable for detecting full faces.
  • a program for realizing the functions of the present invention may be stored in a storage medium to be distributed, or may be supplied via a network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Details Of Cameras Including Film Mechanisms (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Processing (AREA)

Abstract

Upon an operation to full-press a shutter button being detected, a CPU detects human faces included in a picture imaged by an imaging unit, counts the number of the faces, and stores the number of the counted faces in a predetermined area of a memory unit. Thereafter, the CPU again detects faces included in an imaged picture and counts the number of the faces. When the number of the counted faces is equal to or smaller than the number of the faces stored in the predetermined area of the memory unit, the CPU repeats the process of detecting faces included in an imaged picture and counting the number of the faces. Meanwhile, when the number of the counted faces has increased from the number of the faces stored, the CPU records a picture that is to be imaged after a predetermined period of time passes in a recording medium.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an imaging apparatus that images a picture, an imaged picture recording method, and a storage medium that stores a computer program.
2. Description of the Related Art
Unexamined Japanese Patent Application KOKAI Publication No. 2007-329602 proposes a camera that detects human faces from an imaged picture, counts the number of the faces, and records the picture in a recording medium such as a memory card in a case where the number of the faces is equal to a preset number of persons intended to be imaged.
Unexamined Japanese Patent Application KOKAI Publication No. 2007-329602 also proposes a camera that registers a plurality of faces in a face registration memory, counts the number of faces that are recognized as any of the registered faces from the faces included in an imaged picture, and records the picture in a recording medium such as a memory card in a case where the number of the recognized faces is equal to a preset number of persons intended to be imaged.
SUMMARY OF THE INVENTION
An object of the present invention is to provide an imaging apparatus that needs no setting of a number of persons intended to be imaged and records an imaged picture in response to a change in the number of imaged persons included in an imaging range, an imaged picture recording method, and a storage medium that stores a computer program.
To achieve the above object, an imaging apparatus according to the present invention includes: an imaging unit; a recording instruction detecting unit that detects an instruction to record a picture that is imaged by the imaging unit; a characteristic picture region detecting unit that detects a characteristic picture region included in a picture imaged by the imaging unit, the characteristic picture region containing a predetermined characteristic; an obtaining unit that makes the imaging unit image pictures continuously, and obtains a region number each time a picture is imaged, the region number indicating a number of characteristic picture regions that are detected by the characteristic picture region detecting unit; a determining unit that determines whether the region number obtained by the obtaining unit has changed or not; and a recording unit that records an imaged picture in a recording medium, in response to that the recording instruction detecting unit detects a recording instruction and when the determining unit determines that the region number has changed.
To achieve the above object, an imaged picture recording method according to the present invention includes: a recording instruction detecting step of detecting an instruction to record a picture that is imaged by an imaging unit; a characteristic picture region detecting step of detecting a characteristic picture region included in a picture imaged by the imaging unit, the characteristic picture region containing a predetermined characteristic; an obtaining step of making the imaging unit image pictures continuously, and obtaining a region number each time a picture is imaged, the region number indicating a number of characteristic picture regions detected at the characteristic picture region detecting step; a determining step of determining whether the region number obtained at the obtaining step has changed or not; and a recording step of recording an imaged picture in a recording medium, in response to that a recording instruction is detected at the recording instruction detecting step and when it is determined at the determining step that the region number has changed.
To achieve the above object, a storage medium according to the present invention stores a program that is readable by a computer possessed by an imaging apparatus and controls the computer to function as: a recording instruction detecting unit that detects an instruction to record a picture that is imaged by an imaging unit; a characteristic picture region detecting unit that detects a characteristic picture region included in a picture imaged by the imaging unit, the characteristic picture region containing a predetermined characteristic; an obtaining unit that makes the imaging unit image pictures continuously, and obtains a region number each time a picture is imaged, the region number indicating a number of characteristic picture regions detected by the characteristic picture region detecting unit; a determining unit that determines whether the region number obtained by the obtaining unit has changed or not; and a recording unit that records an imaged picture in a recording medium, in response to that the recording instruction detecting unit detects a recording instruction and when the determining unit determines that the region number has changed.
BRIEF DESCRIPTION OF THE DRAWINGS
These objects and other objects and advantages of the present invention will become more apparent upon reading of the following detailed description and the accompanying drawings in which:
FIG. 1 is a block diagram of a camera according to an embodiment of the present invention;
FIG. 2 is a diagram showing one example of a picture that is displayed on a display of a display unit;
FIG. 3 is a diagram showing one example of a picture that includes one more human face than included in the picture of FIG. 2;
FIG. 4 is a diagram showing one example of a main flowchart according to a first embodiment of the present invention;
FIG. 5 is a diagram showing one example of a sub flowchart according to the first embodiment of the present invention;
FIG. 6 is a diagram showing a main flowchart according to a modified example of the first embodiment of the present invention;
FIG. 7 is a diagram showing one example of a main flowchart according to a second embodiment of the present invention;
FIG. 8 is a diagram showing one example of a main flowchart according to a modified example of the second embodiment of the present invention; and
FIG. 9 is a diagram showing one example of a main flowchart according to a third embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS First Embodiment
As shown in FIG. 1, a camera 10 according to an embodiment of the present invention includes an imaging unit 1, a CPU (Central Processing Unit) 2, a memory unit 3, a recording medium control unit 4, a recording medium 5, an operation unit 6, a display unit 7, and a timer 8.
The imaging unit 1 includes an imaging device such as a CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor) image sensor, etc. The imaging unit 1 images pictures continuously. The imaging unit 1 converts an imaged picture from Analog to Digital (A/D), and outputs picture data, which is in the form of a digital signal, to the CPU 2.
The CPU 2 is a programmable processor. The CPU 2 controls the entire camera 10. The CPU 2 receives picture data output by the imaging unit 1, converts it into a format displayable by the display unit 7, and outputs it to the display unit 7 in the form of a through-the-lens image. When recording picture data, the CPU 2 applies various kinds of image processes and coding to the picture data, and outputs it to the recording medium control unit 4. The CPU 2 instructs the recording medium control unit 4 to record the picture data in the recording medium 5.
The memory unit 3 includes memories such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, etc. The memory unit 3 stores programs of the CPU 2, various settings, etc. The memory unit 3 also temporarily stores data of an imaged picture.
The recording medium control unit 4 records picture data in the recording medium 5 in accordance with an instruction of the CPU 2. The recording medium 5 holds the data of the imaged picture recorded therein.
The recording medium 5 may be, for example, an SD (Secure Digital) memory card, a hard disk, a CD (Compact Disk), or a DVD (Digital Versatile Disk).
The operation unit 6 includes various operation keys and switches such as a shutter button and an up, down, left, and right cursor key.
The CPU 2 performs auto-focusing and automatic exposure upon detecting an operation to half-press the shutter button, for example.
Upon detecting an operation to full-press the shutter button, the CPU 2 immediately instructs the recording medium control unit 4 to record picture data in the recording medium 5. Further, upon detecting the operation to full-press the shutter button, the CPU 2 detects human faces (specifically, picture regions representing faces) from each imaged picture and obtains the number of the detected faces, as will be described later. When the number of faces increases, the CPU 2 instructs the recording medium control unit 4 after a predetermined period of time passes since the increase to record picture data in the recording medium 5.
The display unit 7 includes a display. The CPU 2 displays various messages and a through-the-lens image, which has been reduced in size as compared with the pictures imaged by the imaging unit 1, on the display. The size of the through-the-lens image is, for example, 320×240 pixels (QVGA). The user gives an operation to press the shutter button, etc., while checking the through-the-lens image on the display.
The timer 8 is a down-counter, for example. The timer 8 clocks time by counting down a preset predetermined period of time and, upon the count reaching 0, notifies the CPU 2 by giving the CPU 2 an interrupt.
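The timer's behavior can be sketched as a small class. This is an illustrative model only; `on_zero` is a hypothetical callback standing in for the interrupt given to the CPU 2, and `tick` stands in for the hardware clock.

```python
class CountdownTimer:
    """Sketch of the timer 8: a down-counter that notifies on reaching 0."""

    def __init__(self, preset, on_zero):
        self.preset = preset    # the preset predetermined period of time
        self.count = None       # None: not counting
        self.on_zero = on_zero  # hypothetical stand-in for the CPU interrupt

    def start(self):
        # Start counting down the preset period (e.g. at step S110).
        self.count = self.preset

    def tick(self):
        # One clock step; fires on_zero exactly once when the count hits 0.
        if self.count is None:
            return
        self.count -= 1
        if self.count == 0:
            self.count = None
            self.on_zero()
```
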
Next, an overview of a process according to the first embodiment will be given. When, after an operation to full-press the shutter button is detected, there is an increase in the number of human faces included in continuously imaged pictures, the CPU 2 records, in the recording medium 5, picture data that is based on a picture imaged after a predetermined period of time passes since the increase.
Upon detecting that the shutter button is half-pressed, the CPU 2 detects human faces included in imaged pictures and obtains the number of the detected faces. The CPU 2 displays face frames 71 (71A to 71C) as shown in FIG. 2, by superimposing them on the through-the-lens image displayed on the display of the display unit 7.
Then, when the shutter button is full-pressed, the CPU 2 again detects human faces from each continuously imaged picture and obtains the number of the detected faces. If the number of faces has increased, for example, from three to four and hence three face frames 71A to 71C have increased to four face frames 71A to 71D as shown in FIG. 3, the timer 8 starts counting down in order to clock a predetermined period of time. The CPU 2 records data of a picture that is imaged when the count of the timer 8 turns to 0 in the recording medium 5.
A specific process will be described in detail with reference to a main flowchart shown in FIG. 4. First, the CPU 2 makes initial settings such as setting a predetermined period of time in the timer 8 (step S101), and waits until an operation to half-press the shutter button is detected (step S102; No).
Upon detecting an operation to half-press the shutter button (step S102; Yes), the CPU 2 performs a face detecting process, which will be described later, and counts the number of the detected faces (step S103). Then, the CPU 2 performs auto-focusing and automatic exposure, targeting the detected faces (step S104).
At this time, as shown in FIG. 2, the CPU 2 displays face frames 71 (71A to 71C) on the through-the-lens image displayed on the display of the display unit 7, superimposing them on the detected faces. Under this state, the CPU 2 waits until an operation to full-press the shutter button is detected (step S105; No).
Upon detecting an operation to full-press the shutter button (step S105; Yes), the CPU 2 again performs the face detecting process and counts the number of the detected faces (step S106). Then, the CPU 2 stores the number of the faces counted in the face detecting process in a predetermined area of the memory unit 3 (step S107). Note that the CPU 2 may skip step S106 and store the number of the faces counted at step S103 in a predetermined area of the memory unit 3 at step S107.
After this, the CPU 2 performs the face detecting process on each new picture that is input from continuous imaging, and counts the number of the detected faces (step S108).
In a case where the number of the faces counted at step S108 is equal to or smaller than the number of the faces stored at step S107 (step S109; No), the CPU 2 returns to step S107 to store the number of the faces counted at step S108, and again performs step S108 to perform the face detecting process and count the number of the detected faces.
The CPU 2 compares the number of the faces counted at step S108 and the number of the faces stored at step S107, and if it is determined that the former has increased from the latter (step S109; Yes), the CPU 2 controls the timer 8 to start counting down the predetermined period of time. That is, among continuously imaged pictures, the CPU 2 compares a nearest previously imaged picture and a currently imaged picture as to the number of faces included therein. And in response to an increase in the number of the faces, the CPU 2 controls the timer 8 to start counting down.
Upon the count turning to 0, the timer 8 gives an interrupt to the CPU 2 to notify that the predetermined period of time has passed (step S110; Yes).
When the predetermined period of time has passed (step S110; Yes), the CPU 2 instructs the recording medium control unit 4 to record the picture that is imaged at this timing of passage in the recording medium 5 (step S111). The recording medium control unit 4 records the imaged picture in the recording medium 5.
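The loop of steps S106 through S111 can be sketched as follows. This is a minimal illustration, not the patent's implementation: `count_faces` is a hypothetical stand-in for the face detecting process, `frames` stands in for the continuously imaged pictures from full-press onward, and the timer is approximated by a frame delay.

```python
def record_on_face_increase(frames, count_faces, delay_frames=2):
    """Return the index of the frame recorded after the face count increases,
    or None if no increase is observed (FIG. 4 logic)."""
    stored = count_faces(frames[0])       # steps S106/S107: count and store
    for i, frame in enumerate(frames[1:], start=1):
        current = count_faces(frame)      # step S108: count in the new frame
        if current > stored:              # step S109; Yes: an increase
            # steps S110/S111: wait the predetermined time, then record the
            # frame imaged at that timing of passage.
            return min(i + delay_frames, len(frames) - 1)
        stored = current                  # step S107 (repeat): update store
    return None
```

Note that, as in FIG. 4, each frame is compared against the nearest previously imaged frame, so the stored count is refreshed every iteration.
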
Specifically speaking, the face detecting process at steps S103, S106, and S108 detects faces by using a face detector that has a function of distinguishing between picture regions representing faces and picture regions not representing faces.
The face detector can detect faces with accuracy sufficient for the purpose of the face detection of the present embodiment, even if the resolution is 320×240 pixels (QVGA) or so. Hence, in the present embodiment, the CPU 2 performs the face detecting process on the through-the-lens image.
FIG. 5 is a diagram showing one example of a sub flowchart of the face detecting process described above. As shown in FIG. 5, the CPU 2 cuts out a small region from the through-the-lens image (step S201), and determines by the face detector whether the small region includes a face or not (step S202). In order to detect faces that are captured in a large size as well as faces that are captured in a small size, the CPU 2 contracts or expands the cut-out size so as to cut out variously sized small regions from the through-the-lens image, and inputs each cut-out region to the face detector.
The face detector calculates a score that indicates face-likeness, based on a characteristic quantity that is obtained from the small region cut out from the through-the-lens image and a characteristic quantity that is retrieved from a face identification database stored in the ROM or the like. In a case where the score is larger than a preset threshold, the face detector determines that the small region cut out from the through-the-lens image includes a face.
Note that according to the present invention, the face detector is embodied as software that is executed by the CPU 2.
In a case where it is determined by the face detector that the small region does not include a face (step S202; No), the CPU 2 returns to step S201 and cuts out a next small region. On the other hand, in a case where it is determined by the face detector that the small region includes a face (step S202; Yes), the CPU 2 counts the number of detected faces (step S203).
In a case where it is determined that the number of the counted faces is larger than the maximum number of faces that are allowed to be detected from one picture (step S204; Yes), the CPU 2 displays a message saying, for example, "there are too many persons" on the display of the display unit 7 (step S205), and terminates this process by jumping to after step S111 of FIG. 4.
On the other hand, in a case where it is determined that the number of the counted faces is not larger than the maximum number of faces that are allowed to be detected from one picture (step S204; No), the CPU 2 determines whether the current small region is the last one in the picture (step S206). In a case where it is determined that the small region is not the last one in the picture (step S206; No), the CPU 2 returns to step S201 and cuts out a next small region. When the determination by the face detector on the last small region in the picture is completed (step S206; Yes), the CPU 2 returns to the main flowchart.
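The scan of steps S201 through S206 can be sketched as below. This is an illustrative simplification under stated assumptions: `score_fn` is a hypothetical face-likeness scorer (the patent's detector compares characteristic quantities against a face identification database), and the sketch scans non-overlapping fixed-size windows, whereas a real detector slides overlapping windows at multiple scales.

```python
def count_faces_in_image(image, score_fn, threshold, max_faces, window=2):
    """Cut small regions out of the image and count the face-like ones.

    Returns the face count, or None when max_faces is exceeded
    (steps S204/S205: the "too many persons" case)."""
    h, w = len(image), len(image[0])
    count = 0
    for top in range(0, h - window + 1, window):
        for left in range(0, w - window + 1, window):
            # step S201: cut out a small region
            region = [row[left:left + window]
                      for row in image[top:top + window]]
            if score_fn(region) > threshold:   # step S202: face-likeness test
                count += 1                     # step S203: count the face
                if count > max_faces:          # step S204; Yes
                    return None                # step S205: abort
    return count                               # step S206: last region done
```
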
The first embodiment may be modified as follows. FIG. 6 is a diagram showing a modified example of the main flowchart shown in FIG. 4. Steps S101 to S106 and steps S110 and S111 are the same as those in FIG. 4.
Upon detecting an operation to full-press the shutter button (step S105; Yes), the CPU 2 performs the face detecting process and counts the number of the detected faces (step S106). The CPU 2 stores the number of the faces counted in the face detecting process in a predetermined area of the memory unit 3 (step S307). After this, the CPU 2 repeats the face detecting process for each picture and obtains the number of faces counted (step S308). In a case where the number of the faces counted at step S308 is equal to or smaller than the number of the faces stored at step S307 (step S309; No), the CPU 2 returns to step S308 to again perform the face detecting process and counts the number of faces detected.
In a case where it is determined that the number of the faces counted at step S308 is larger than the number of the faces stored at step S307 (step S309; Yes), the CPU 2 waits for a predetermined period of time to pass (step S110) and records data of the picture that is imaged at this timing of passage in the recording medium 5 (step S111).
That is, in FIG. 4, the CPU 2 compares a nearest previously imaged picture and a currently imaged picture among continuously imaged pictures as to the number of faces included and records picture data in the recording medium 5 in response to an increase in the number of faces. As compared with this, in the modified example shown in FIG. 6, the CPU 2 compares a picture imaged when an operation to full-press the shutter button is detected and any picture imaged after the operation to full-press the shutter button is detected among continuously imaged pictures as to the number of faces included, and records picture data in the recording medium 5 in response to an increase in the number of faces.
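The FIG. 6 variant can be sketched in the same illustrative style as before: here `counts` stands in for the per-frame face counts from full-press onward, and the comparison baseline is fixed at the count stored when the shutter button was full-pressed, rather than being refreshed every frame.

```python
def record_on_increase_over_baseline(counts, delay=2):
    """FIG. 6 variant: compare each later frame against the count stored at
    full-press (steps S307/S308/S309), not against the previous frame."""
    baseline = counts[0]                  # steps S106/S307: store at full-press
    for i, current in enumerate(counts[1:], start=1):   # step S308
        if current > baseline:            # step S309; Yes: increase over baseline
            return min(i + delay, len(counts) - 1)      # steps S110/S111
    return None
```

With counts `[3, 2, 3, 4]`, the FIG. 4 logic would already trigger when the count returns from 2 to 3, whereas this variant waits until the count exceeds the original 3.
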
Second Embodiment
In many cases, a group of persons who are to have a group photo taken turn their full faces toward the camera or the like. In the second embodiment, when a group of persons take a group photo of themselves by setting a self-timer, a full face detector, which distinguishes between full faces that face the camera direction and faces that face other directions, detects full faces.
The second embodiment employs the camera 10, like the first embodiment.
FIG. 7 is a diagram showing one example of a main flowchart according to the second embodiment of the present invention. FIG. 7 is different from FIG. 4 only in its face detecting process (steps S403, S406, and S408). The steps other than steps S403, S406, and S408 are the same as those of FIG. 4.
In response to an operation to half-press the shutter button being detected (step S102; Yes), the CPU 2 performs a face detecting process (step S403), and then performs auto-focusing and automatic exposure (step S104). At this time, the CPU 2 sets imaging conditions such as focus, exposure, etc. by targeting the faces detected in the face detecting process.
At this time, the imaged persons may not necessarily be facing the camera direction, but may be facing sideward or an oblique direction or tilting their faces. Hence, at step S403, the CPU 2 performs a face detecting process in many directions by using a first face detector, which is to be described later. The first face detector can detect not only a full face, but a half face, an oblique face, a tilted face, etc.
Meanwhile, when having a group photo taken, the persons in the group are highly likely to be turning their full faces toward the camera. Hence, upon detecting an operation to full-press the shutter button (step S105; Yes), the CPU 2 uses not the first face detector mentioned above but a second face detector that identifies full faces to perform a face detecting process in the camera direction, and counts the number of the detected faces (step S406). The second face detector will be described later.
The CPU 2 stores the number of the faces counted in the face detecting process performed in the camera direction in a predetermined area of the memory unit 3 (step S107).
Then, each time a new picture is input from continuous imaging, the CPU 2 performs the face detecting process on the picture in the camera direction and counts the number of the detected faces (step S408).
After this, in a case where it is determined that the number of the faces counted at step S408 is equal to or smaller than the number of the faces stored at step S107 (step S109; No), the CPU 2 stores the number of the counted faces in the predetermined area of the memory unit 3 (step S107), and again performs step S408 to perform the face detecting process in the camera direction and count the number of the detected faces.
On the other hand, in a case where it is determined that the number of the faces counted at step S408 has increased from the number of the faces stored at step S107 (step S109; Yes), the CPU 2 waits for a predetermined period of time to pass (step S110) and records data of the picture that is imaged at this timing of passage in the recording medium 5 (step S111). That is, the CPU 2 compares a nearest previously imaged picture and a currently imaged picture among continuously imaged pictures as to the number of full faces included therein, and in response to an increase in the number of full faces, records picture data in the recording medium 5.
Next, the first face detector and the second face detector will be explained.
The face detector used at the above step S202 of FIG. 5 calculates a score that indicates face-likeness, based on a characteristic quantity obtained from a small region cut out from the through-the-lens image and a characteristic quantity retrieved from a face identification database stored in the ROM or the like. The face detector determines that the small region cut out from the through-the-lens image is a face, in a case where the score is larger than a preset threshold.
Setting this threshold high means making the face detecting condition stricter. Accordingly, setting the threshold high makes it more certain that a face-like object will be selected, and can increase the probability that a full face will be detected. Hence, with the threshold set high, the face detector used at step S202 of FIG. 5 can be the second face detector. In contrast, with the threshold set low, the face detector used at step S202 of FIG. 5 can be the first face detector.
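This threshold-based relationship between the two detectors can be sketched as follows. The scorer and the region representation are hypothetical (here a region simply carries a precomputed face-likeness score); the point illustrated is only that one scorer with two thresholds yields a lenient first detector and a strict second detector.

```python
def make_face_detector(score_fn, threshold):
    """Wrap a face-likeness scorer into a detector; a higher threshold
    makes the face detecting condition stricter."""
    return lambda region: score_fn(region) > threshold

# Hypothetical scorer: assumes the region carries its face-likeness score.
score = lambda region: region["face_likeness"]

first_face_detector = make_face_detector(score, threshold=0.3)   # lenient: any direction
second_face_detector = make_face_detector(score, threshold=0.8)  # strict: full faces only
```
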
Alternatively, the first face detector and the second face detector may be created separately with the use of the following technique.
Specifically, this technique is a human face detecting method that uses statistical pattern recognition. This method makes a face detector learn a face recognition rule by using many facial pictures and many non-facial pictures. For example, with AdaBoost learning, a face detector with a high recognition ability can be configured by combining detectors with a relatively low recognition ability.
Hence, it is possible to create the second face detector suitable for full face detection, by making it learn a recognition rule by using many full-face pictures and various directional-face pictures (pictures representing a side face that faces sideward, pictures representing an oblique face that faces an oblique direction, pictures representing a tilted face, etc.). Further, it is possible to create the first face detector suitable for various directional face detection, by making it learn a recognition rule by using various directional-face pictures and many non-facial pictures.
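A toy illustration of the boosting idea follows. This is not the patent's detector: it trains threshold stumps on one-dimensional features with labels +1/-1, merely to show how weak detectors are weighted and combined into a stronger one by AdaBoost-style learning.

```python
import math

def train_adaboost(xs, ys, rounds=3):
    """Tiny AdaBoost sketch: combine 1-D threshold stumps (labels are +1/-1)."""
    n = len(xs)
    w = [1.0 / n] * n                      # sample weights, initially uniform
    ensemble = []                          # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None                        # pick the stump with lowest weighted error
        for t in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (pol if xi > t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)              # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: boost the samples the chosen stump got wrong.
        w = [wi * math.exp(-alpha * yi * (pol if xi > t else -pol))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of the weak stumps."""
    vote = sum(a * (p if x > t else -p) for a, t, p in ensemble)
    return 1 if vote > 0 else -1
```

In the same spirit, training one ensemble on full-face pictures versus other-direction pictures, and another on any-direction face pictures versus non-face pictures, would yield the second and first face detectors respectively.
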
Further, by using the first face detector as the face detector used at step S202 of FIG. 5, it is possible to perform the various directional face detecting process of step S403 in the sub flowchart of FIG. 5. Meanwhile, by using the second face detector as the face detector used at step S202 of FIG. 5, it is possible to perform the face detecting process in the camera direction of steps S406 and S408 in the sub flowchart of FIG. 5.
The second embodiment may be modified as follows. FIG. 8 is a diagram showing a modified example of the second embodiment described above. FIG. 8 is different from FIG. 6 in its face detecting process (steps S403, S506, and S508). The steps other than steps S403, S506, and S508 are the same as those of FIG. 6.
Likewise in FIG. 7, in response to an operation to half-press the shutter button being detected (step S102; Yes), the CPU 2 performs the various directional face detecting process (step S403), and performs auto-focusing and automatic exposure targeting the detected faces (step S104).
However, even after an operation to full-press the shutter button is detected, the imaged persons might be facing sideward, upward, or downward. Hence, upon detecting an operation to full-press the shutter button (step S105; Yes), the CPU 2 performs the various directional face detecting process unlike in FIG. 7, and counts the number of the detected faces (step S506). The CPU 2 stores the number of the faces counted in the various directional face detecting process in a predetermined area of the memory unit 3 (step S307). Note that the CPU 2 may skip step S506 and store the number of the faces counted at step S403 in the predetermined area of the memory unit 3 at step S307.
The CPU 2 performs the face detecting process in the camera direction on each continuously imaged picture and counts the number of the detected full faces (step S508). In a case where it is determined that the number of the full faces obtained at step S508 is equal to or smaller than the number of the faces stored at step S307 (step S309; No), the CPU 2 returns to step S508 to again perform the face detecting process in the camera direction and count the number of the detected full faces.
In a case where it is determined that the number of the full faces counted at step S508 is larger than the number of the faces stored at step S307 (step S309; Yes), the CPU 2 waits for a predetermined period of time to pass (step S110) and records data of the picture that is imaged at this timing of passage in the recording medium 5 (step S111).
That is, in the modified example shown in FIG. 8, the CPU 2 detects various directional faces when an operation to full-press the shutter button is detected, counts the number of the detected faces, and stores it in the predetermined area of the memory unit 3. Then, in response to the number of full faces included in a currently imaged picture becoming larger than the stored number of faces, the CPU 2 records data of an imaged picture in the recording medium 5.
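The modified flow of FIG. 8 can be sketched as follows. This is a hypothetical illustration only: the function names and the modeling of a frame as a list of face-direction labels are assumptions for exposition, not part of the disclosed apparatus.

```python
def count_faces(frame, frontal_only=False):
    """Count faces in a frame, modeled here as a list of direction
    labels such as 'front', 'side', or 'oblique'."""
    if frontal_only:
        # second detecting condition: full (forward-facing) faces only
        return sum(1 for face in frame if face == "front")
    # first detecting condition: faces facing in various directions
    return len(frame)

def record_on_new_frontal_face(frames):
    """Store the various-directional face count at full-press
    (steps S506/S307), then return the index of the first later frame
    whose full-face count exceeds it (steps S508/S309), or None."""
    baseline = count_faces(frames[0])           # count at full-press
    for i, frame in enumerate(frames[1:], 1):   # continuous imaging
        if count_faces(frame, frontal_only=True) > baseline:
            return i                            # steps S110-S111: record
    return None
```

Because the baseline includes sideward and oblique faces, a frame triggers recording only when the number of forward-facing faces exceeds the total number of faces present at full-press, i.e., when a new person has entered and everyone faces the camera.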
The first face detector detects not only full faces but half faces, oblique faces, etc., while the second face detector detects only full faces. Therefore, firstly detecting the number of faces using the first face detector and then switching to the second face detector to detect the number of full faces may result in a reduction of the number of detected faces.
The CPU 2 compares the number of faces included in a picture that is imaged when an operation to full-press the shutter button is detected with the number of full faces obtained from each picture imaged continuously thereafter. Therefore, it is possible to prevent picture data from being recorded (step S111) merely because, for example, a person who faced sideward in the picture imaged when the operation to full-press the shutter button was detected (step S105) has turned to face the camera direction.
When setting imaging conditions such as focus, exposure, etc., it is necessary that faces can be detected regardless of the directions of the faces and the expressions on the faces. Meanwhile, after imaging conditions are set, it is desired that persons who face the camera direction be imaged, if a group photo of the persons is to be imaged.
The second embodiment can satisfy these conflicting needs by detecting faces regardless of whether the faces face the camera direction or not when setting imaging conditions while determining whether the faces face the camera direction or not when imaging the faces.
Third Embodiment
The third embodiment employs face recognition for recognizing a particular person's face, instead of face detection. The CPU 2 records an imaged picture when there is an increase in the number of faces of particular persons that are included in imaged pictures.
The third embodiment employs the camera 10, like the first embodiment.
The CPU 2 preliminarily images faces of persons, who are the target of recognition, and extracts the characteristics of the faces. The CPU 2 registers the extracted characteristics of the faces of the recognition-target persons in a predetermined registration area of the memory unit 3. Note that characteristics of faces of a plurality of persons can be registered in the registration area.
FIG. 9 is a diagram showing one example of a main flowchart according to the third embodiment of the present invention.
As shown in the main flowchart of FIG. 9, in response to an operation to half-press the shutter button being detected (step S102; Yes), the CPU 2 recognizes the face of any recognition-target person included in continuously imaged pictures, and counts the number of the recognized faces (step S603). Then, the CPU 2 performs auto-focusing and automatic exposure targeting the recognized faces (step S104).
Next, in response to an operation to full-press the shutter button being detected (step S105; Yes), the CPU 2 recognizes the face of any recognition-target person included in continuously imaged pictures and counts the number of the recognized faces (step S606), and stores the number of the counted faces in a predetermined area of the memory unit 3 (step S607). Note that the CPU 2 may skip step S606 and store the number of the faces counted at step S603 in the predetermined area of the memory unit 3.
Then, the CPU 2 recognizes the face of any recognition-target person from each picture continuously imaged thereafter, and counts the number of the recognized faces (step S608). In a case where the number of the counted faces is equal to or smaller than the number of the faces stored at step S607 (step S609; No), the CPU 2 returns to step S607 to store the number of the counted faces in the predetermined area of the memory unit 3, and again recognizes the face of any recognition-target person from continuously imaged pictures and counts the number of the recognized faces (step S608).
The CPU 2 compares the number of the faces stored at step S607 with the number of faces currently counted, and in response to the latter having increased beyond the former (step S609; Yes), controls the timer 8 to start counting down a predetermined period of time (step S110).
It is possible to realize the face recognition of steps S603, S606, and S608 in the flow of FIG. 5, by replacing step S202 with a process for determining whether or not a small region includes the face of any recognition-target person that is registered in the memory unit 3.
The process of FIG. 9 is the same as the process of FIG. 4 except in recognizing the face of any recognition-target person and counting the number of the recognized faces (steps S603, S606, and S608).
Needless to say, the process of FIG. 9 may be made the same as the modified process of FIG. 6 except in recognizing the face of any recognition-target person and obtaining the number of the recognized faces.
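The loop of FIG. 9 can be sketched as follows. This is an illustrative sketch under stated assumptions: the identifier-set model of "recognition" and all names are hypothetical stand-ins for matching against the facial characteristics registered in the memory unit 3.

```python
# Hypothetical registry standing in for the facial characteristics
# registered in the predetermined registration area of the memory unit 3.
REGISTERED = {"alice", "bob"}

def count_registered(frame):
    """Step S608: count faces of recognition-target persons in a frame,
    modeled here as a list of person identifiers."""
    return sum(1 for person in frame if person in REGISTERED)

def record_on_new_registered_face(frames):
    """Steps S607-S609: store the current count, then re-store it each
    iteration (step S609; No) until a later frame contains more
    registered faces (step S609; Yes). Returns the index of the frame
    to record, or None."""
    baseline = count_registered(frames[0])   # step S607
    for i, frame in enumerate(frames[1:], 1):
        current = count_registered(frame)    # step S608
        if current > baseline:               # step S609; Yes
            return i                         # steps S110-S111: record
        baseline = current                   # step S609; No: back to S607
    return None
```

Note that unregistered persons entering the frame leave the count unchanged, which is the property the third embodiment relies on for photography in crowds.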
In the third embodiment, the CPU 2 records a picture that is imaged when a person whose facial characteristics are registered comes into the imaging range. Hence, it is possible to prevent a picture from being recorded in response to any unspecified person whose facial characteristics are not registered coming into the imaging range, which may be the case when pictures are taken in a crowd such as a tourist spot, an amusement park, etc.
The first embodiment described above detects human face pictures from each imaged picture and records a picture in response to the number of human face pictures having increased; it is also possible, however, to record a picture in response to the faces having disappeared.
In this case, the CPU 2 records a picture that includes no human face in the recording medium 5. Hence, it is possible to prevent any indifferent person from being imaged when taking photos of scenery or constructions.
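The inverted trigger described above can be sketched as follows; the per-frame face counts are assumed to come from the same face detecting process, and the function name is hypothetical.

```python
def record_when_faces_gone(face_counts):
    """Given a sequence of per-frame face counts, return the index of
    the first frame containing no faces after at least one face was
    seen (e.g., a bystander has left the scenery shot), else None."""
    seen_faces = False
    for i, n in enumerate(face_counts):
        if n > 0:
            seen_faces = True      # a person is in the imaging range
        elif seen_faces:
            return i               # faces have disappeared: record
    return None
```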
As explained above, the present invention requires no setting of the number of persons intended to be imaged, and can record a picture that is imaged in response to a change in the number of imaged persons included in the imaging range.
That is, when taking a group photo that should include the camera operator, the present invention requires no preliminary setting of the number of persons intended to be imaged, but records a picture in response to the camera operator having come into the imaging range. Hence, the present invention can image a group photo that includes the camera operator, allowing him/her leeway rather than requiring him/her to be nervous about timing.
The second embodiment detects full faces by using the second face detector that is suitable for detecting full faces. Hence, a group photo, in which all the imaged persons face the camera direction, can be imaged more securely than in a case where the first face detector, which also detects half faces, oblique faces, etc., is used.
Though embodiments of the present invention have been described, it should be understood that various corrections or combinations thereof that may become necessary from any design requirement or any other factor are included in the scope of the invention set forth in the claims and of the invention that corresponds to the specific examples described in the embodiments.
A program for realizing the functions of the present invention may be stored in a storage medium to be distributed, or may be supplied via a network.
Various embodiments and changes may be made thereunto without departing from the broad spirit and scope of the invention. The above-described embodiments are intended to illustrate the present invention, not to limit the scope of the present invention. The scope of the present invention is shown by the attached claims rather than the embodiments. Various modifications made within the meaning of an equivalent of the claims of the invention and within the claims are to be regarded to be in the scope of the present invention.
This application is based on Japanese Patent Application No. 2008-167053 filed on Jun. 26, 2008 and including specification, claims, drawings and summary. The disclosure of the above Japanese Patent Application is incorporated herein by reference in its entirety.

Claims (6)

1. An imaging apparatus comprising:
an imaging unit configured to capture images continuously;
a face detection unit configured to detect a face in each of the images captured by the imaging unit;
a first face detection control unit configured to set a first detecting condition for the face detection unit to thereby control the face detection unit to detect faces facing in various directions in the images continuously captured by the imaging unit;
an imaging condition setting unit configured to, when a face is detected by control executed by the first face detection control unit, set an imaging condition suitable for the detected face;
a recording instruction detecting unit configured to detect a recording instruction after the imaging condition setting unit sets the imaging condition;
a second face detection control unit configured to, after the recording instruction is detected by the recording instruction detecting unit, set for the face detection unit a second detecting condition, which is more limiting than the first detecting condition, to thereby control the face detection unit to detect a forward-facing face, which does not include any faces facing in other directions, in the images continuously captured by the imaging unit;
a face number obtaining unit configured to count a number of forward-facing faces that were detected by control executed by the second face detection control unit, for each of the continuously captured images;
a change determination unit configured to determine whether or not the number of the forward-facing faces that were counted by the face number obtaining unit in one of the continuously captured images is different as compared to an image captured just prior to the one of the continuously captured images; and
a recording unit that, in response to detection of the recording instruction, and in response to the determination of the change determination unit, saves in a recording medium an image representing the forward-facing faces, captured after the change determination unit determines that the number of forward-facing faces is different.
2. The imaging device according to claim 1, wherein the change determination unit determines whether the number of forward-facing faces counted by the face number obtaining unit in the one of the continuously captured images has increased as compared to the image captured just prior to the one of the continuously captured images; and
wherein the recording unit, in response to the detection of the recording instruction, and in response to a determination that the number of forward-facing faces has increased, saves in the recording medium the image representing the forward-facing faces.
3. The imaging apparatus according to claim 1, further comprising:
a face number storing unit configured to, in response to the detection of the recording instruction by the recording instruction detecting unit, store the number of forward-facing faces counted by the face number obtaining unit;
wherein the change determination unit is configured to determine whether the number of forward-facing faces in the one of the continuously captured images is larger than the number of forward-facing faces stored in the face number storing unit; and
wherein the recording unit is configured to, in response to the determination by the change determination unit that the number of forward-facing faces has increased, save in the recording medium the image representing the forward-facing faces.
4. The imaging apparatus according to claim 1, further comprising:
a clock unit that measures a predetermined time set in advance;
wherein the recording unit is configured to save the image representing the forward-facing faces in response to elapsing of the predetermined time measured by the clock unit after the determination of the change by the change determination unit.
5. An imaged picture recording method, comprising:
an imaging step of causing an imaging unit to continuously capture images;
a first face detection control step of setting a first detecting condition for a face detection unit to thereby control the face detection unit to detect faces facing in various directions in the images continuously captured in the imaging step;
an imaging condition setting step of, when a face is detected by control executed in the first face detection control step, setting an imaging condition suitable for the detected face;
a recording instruction detecting step of detecting a recording instruction after the imaging condition setting step sets the imaging condition;
a second face detection control step of, after the recording instruction is detected in the recording instruction detecting step, setting for the face detection unit a second detecting condition, which is more limiting than the first detecting condition, to thereby control the face detection unit to detect a forward-facing face, which does not include any faces facing in other directions, in the images continuously captured in the imaging step;
a face number obtaining step of obtaining a number of forward-facing faces that were detected by the control executed in the second face detection control step, for each of the continuously captured images;
a change determination step of determining whether or not the number of forward-facing faces counted in the face number obtaining step in one of the continuously captured images is different as compared to an image captured just prior to the one of the continuously captured images; and
a recording step of, in response to detection of the recording instruction, and in response to the determination in the change determination step, saving in a recording medium an image representing the forward-facing faces captured after it is determined in the change determination step that the number of forward-facing faces is different.
6. A non-transitory computer-readable storage medium storing a program that is readable by a computer of an imaging apparatus to control the computer to function as elements including:
an imaging unit configured to capture images continuously;
a face detection unit configured to detect a face in each of the images captured by the imaging unit;
a first face detection control unit configured to set a first detecting condition for the face detection unit to thereby control the face detection unit to detect faces facing in various directions in the images continuously captured by the imaging unit;
an imaging condition setting unit configured to, when a face is detected by control executed by the first face detection control unit, set an imaging condition suitable for the detected face;
a recording instruction detecting unit configured to detect a recording instruction after the imaging condition setting unit sets the imaging condition;
a second face detection control unit configured to, after the recording instruction is detected by the recording instruction detecting unit, set for the face detection unit a second detecting condition, which is more limiting than the first detecting condition, to thereby control the face detection unit to detect a forward-facing face, which does not include any faces facing in other directions, in the images continuously captured by the imaging unit;
a face number obtaining unit configured to count a number of forward-facing faces that were detected by control executed by the second face detection control unit, for each of the continuously captured images;
a change determination unit configured to determine whether or not the number of the forward-facing faces that were counted by the face number obtaining unit in one of the continuously captured images is different as compared to an image captured just prior to the one of the continuously captured images; and
a recording unit that, in response to detection of the recording instruction, and in response to the determination of the change determination unit, saves in a recording medium an image representing the forward-facing faces, captured after the change determination unit determines that the number of forward-facing faces is different.
US12/490,646 2008-06-26 2009-06-24 Imaging apparatus, imaged picture recording method, and storage medium storing computer program Expired - Fee Related US8004573B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008-167053 2008-06-26
JP2008167053A JP4466770B2 (en) 2008-06-26 2008-06-26 Imaging apparatus, imaging method, and imaging program

Publications (2)

Publication Number Publication Date
US20090322906A1 US20090322906A1 (en) 2009-12-31
US8004573B2 true US8004573B2 (en) 2011-08-23

Family

ID=41446912

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/490,646 Expired - Fee Related US8004573B2 (en) 2008-06-26 2009-06-24 Imaging apparatus, imaged picture recording method, and storage medium storing computer program

Country Status (2)

Country Link
US (1) US8004573B2 (en)
JP (1) JP4466770B2 (en)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5459771B2 (en) * 2009-10-06 2014-04-02 Necカシオモバイルコミュニケーションズ株式会社 Electronic device, image processing method and program
JP5753418B2 (en) * 2010-05-31 2015-07-22 キヤノン株式会社 Image processing device
CN102682021B (en) * 2011-03-15 2017-04-12 新奥特(北京)视频技术有限公司 Application method and system of non-invasive data statistic way
US9001199B2 (en) * 2011-04-29 2015-04-07 Tata Consultancy Services Limited System and method for human detection and counting using background modeling, HOG and Haar features
US20140168375A1 (en) * 2011-07-25 2014-06-19 Panasonic Corporation Image conversion device, camera, video system, image conversion method and recording medium recording a program
CN103546674A (en) * 2012-07-12 2014-01-29 纬创资通股份有限公司 Image acquisition method
US10453355B2 (en) 2012-09-28 2019-10-22 Nokia Technologies Oy Method and apparatus for determining the attentional focus of individuals within a group
US20140095109A1 (en) * 2012-09-28 2014-04-03 Nokia Corporation Method and apparatus for determining the emotional response of individuals within a group
JP2014187551A (en) * 2013-03-22 2014-10-02 Casio Comput Co Ltd Image acquisition device, image acquisition method and program
WO2015100723A1 (en) * 2014-01-03 2015-07-09 华为终端有限公司 Method and photographing device for implementing self-service group photo taking
US9633335B2 (en) * 2014-01-30 2017-04-25 Accompani, Inc. Managing relationship and contact information
JP2019029998A (en) * 2017-07-28 2019-02-21 キヤノン株式会社 Imaging apparatus, control method of imaging apparatus and control program
JP7298459B2 (en) * 2019-12-03 2023-06-27 富士通株式会社 Monitoring system and monitoring method


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006005662A (en) 2004-06-17 2006-01-05 Nikon Corp Electronic camera and electronic camera system
US20060120604A1 (en) * 2004-12-07 2006-06-08 Samsung Electronics Co., Ltd. Method and apparatus for detecting multi-view faces
US7711190B2 (en) * 2005-03-11 2010-05-04 Fujifilm Corporation Imaging device, imaging method and imaging program
JP2007329602A (en) 2006-06-07 2007-12-20 Casio Comput Co Ltd Imaging apparatus, photographing method, and photographing program
US20080025710A1 (en) 2006-07-25 2008-01-31 Fujifilm Corporation Image taking system
JP2008054288A (en) 2006-07-25 2008-03-06 Fujifilm Corp Imaging apparatus
US20080068466A1 (en) * 2006-09-19 2008-03-20 Fujifilm Corporation Imaging apparatus, method, and program
US20090256901A1 (en) * 2008-04-15 2009-10-15 Mauchly J William Pop-Up PIP for People Not in Picture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Japanese Office Action dated Oct. 6, 2009 and English translation thereof issued in a counterpart Japanese Application No. 2008-167053.

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8441542B2 (en) * 2008-07-31 2013-05-14 Samsung Electronics Co., Ltd. Self-timer photographing apparatus and method involving checking the number of persons
DE102009027848B4 (en) * 2008-07-31 2021-07-15 Samsung Electronics Co., Ltd. Self-timer photography apparatus and self-timer photography methods which include checking the number of people
US20100026830A1 (en) * 2008-07-31 2010-02-04 Samsung Digital Imaging Co., Ltd. Self-timer photographing apparatus and method involving checking the number of persons
US8947552B2 (en) * 2008-11-26 2015-02-03 Kyocera Corporation Method and device with camera for capturing an image based on detection of the image
US20120086833A1 (en) * 2008-11-26 2012-04-12 Kyocera Corporation Device with camera
US20100156834A1 (en) * 2008-12-24 2010-06-24 Canon Kabushiki Kaisha Image selection method
US8792685B2 (en) * 2008-12-24 2014-07-29 Canon Kabushiki Kaisha Presenting image subsets based on occurrences of persons satisfying predetermined conditions
US8416315B2 (en) * 2009-09-24 2013-04-09 Canon Kabushiki Kaisha Imaging apparatus and imaging apparatus control method
US20110069193A1 (en) * 2009-09-24 2011-03-24 Canon Kabushiki Kaisha Imaging apparatus and imaging apparatus control method
US20110216209A1 (en) * 2010-03-03 2011-09-08 Fredlund John R Imaging device for capturing self-portrait images
US8957981B2 (en) * 2010-03-03 2015-02-17 Intellectual Ventures Fund 83 Llc Imaging device for capturing self-portrait images
US9462181B2 (en) 2010-03-03 2016-10-04 Intellectual Ventures Fund 83 Llc Imaging device for capturing self-portrait images
US8466981B2 (en) * 2010-03-12 2013-06-18 Sanyo Electric Co., Ltd. Electronic camera for searching a specific object image
US20110221921A1 (en) * 2010-03-12 2011-09-15 Sanyo Electric Co., Ltd. Electronic camera

Also Published As

Publication number Publication date
US20090322906A1 (en) 2009-12-31
JP4466770B2 (en) 2010-05-26
JP2010010999A (en) 2010-01-14

Similar Documents

Publication Publication Date Title
US8004573B2 (en) Imaging apparatus, imaged picture recording method, and storage medium storing computer program
JP5251215B2 (en) Digital camera
JP4196714B2 (en) Digital camera
US8237853B2 (en) Image sensing apparatus and control method therefor
EP2056589A2 (en) Imaging apparatus, method for controlling the same, and program
US8577098B2 (en) Apparatus, method and program for designating an object image to be registered
US9591210B2 (en) Image processing face detection apparatus, method for controlling the same, and program
JP5467992B2 (en) Imaging device
JP4663700B2 (en) Imaging apparatus and imaging method
US20080284867A1 (en) Image pickup apparatus with a human face detecting function, method and program product for detecting a human face
US8284994B2 (en) Image processing apparatus, image processing method, and storage medium
JP2009075999A (en) Image recognition device, method, and program
US20100188520A1 (en) Imaging device and storage medium storing program
JP5604285B2 (en) Imaging device
JP2008078951A (en) Imaging apparatus, photography control method, and program
JP2008299784A (en) Object determination device and program therefor
CN102542251A (en) Object detection device and object detection method
JP7113364B2 (en) Imaging device
WO2008126577A1 (en) Imaging apparatus
JP5181841B2 (en) Imaging apparatus, imaging control program, image reproducing apparatus, and image reproducing program
JP5386880B2 (en) Imaging device, mobile phone terminal, imaging method, program, and recording medium
JP2005223658A (en) Digital camera
JP5267136B2 (en) Electronic camera
JP4254467B2 (en) Digital camera
JP4306399B2 (en) Digital camera

Legal Events

Date Code Title Description
AS Assignment

Owner name: CASIO COMPUTER CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WATANABE, KAZUYOSHI;REEL/FRAME:022948/0995

Effective date: 20090626

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230823