JP2008244976A - Imaging device, and method and program for recording photographic image - Google Patents


Info

Publication number
JP2008244976A
Authority
JP
Japan
Prior art keywords
face
image
plurality
selection
step
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2007083894A
Other languages
Japanese (ja)
Other versions
JP4899982B2 (en)
Inventor
Takeshi Endo
剛 遠藤
Original Assignee
Casio Comput Co Ltd
カシオ計算機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Comput Co Ltd
Priority to JP2007083894A
Publication of JP2008244976A
Application granted
Publication of JP4899982B2
Application status: Active
Anticipated expiration

Abstract

A mode that randomly highlights a detected face portion while the captured image is being viewed is activated, and the activation timing can be set from a plurality of conditions.
The apparatus includes: an imaging unit that captures an image of a subject to obtain a captured image; a face detection unit that detects a plurality of human face portions from the captured image; a selection unit that selects a predetermined number of face portions from the detected face portions; a selection control unit that causes the selection unit to repeat the selection of new face portions each time a change in the composition of the plurality of persons detected in the shooting frame matches a predetermined change condition; and an automatic processing unit that makes the selected face portions the target of automatic processing and changes the content of that processing in response to changes in the selection made under the control of the selection control unit.
[Selection] Figure 3

Description

The present invention relates to an imaging apparatus, a captured-image recording method, and a program, and more particularly to an imaging apparatus that selects a person in the shooting frame as the target of automatic processing, and to a corresponding captured-image recording method and program.

In recent years, with the rapid spread of digital cameras, models equipped with various functions have appeared, such as an autofocus function that automatically selects the subject to be focused on and a function that automatically determines the timing of shooting and recording, and camera performance has improved accordingly.

  In addition, image processing technology for detecting a person's face in captured image data has also been used for digital cameras.

There has also been proposed an imaging apparatus that uses this technique to detect a person's face portion in the shooting frame and automatically focus on it (see, for example, Patent Document 1).
JP 2006-208443 A

Conventionally, however, only a single shot was taken, and when multiple people were present in the shooting frame, it was very troublesome to adjust the focus, exposure conditions, or color balance evenly across the various people, or to take multiple shots evenly across various combinations of them; in practice, such shooting and recording was difficult to perform.

The present invention has therefore been made in view of the above problems. Its object is to provide an imaging apparatus, a captured-image recording method, and a program that, when a plurality of persons are present in the shooting frame, use face detection to adjust the focus, exposure conditions, and color balance evenly across the various people, and to shoot multiple times evenly across various combinations of persons.

The present invention proposes the following items in order to solve the above problems.
(1) The present invention proposes an imaging apparatus comprising: an imaging unit that captures an image of a subject to obtain a captured image; a face detection unit that detects a plurality of human face portions from the captured image; a selection unit that selects a predetermined number of face portions from the detected face portions; a selection control unit that causes the selection unit to repeat the selection of new face portions each time a change in the composition of the plurality of persons detected in the shooting frame matches a predetermined change condition; and an automatic processing unit that makes the selected face portions the target of automatic processing and changes the content of that processing in response to changes in the selection made under the control of the selection control unit.

(2) The present invention proposes the imaging apparatus of (1), further comprising an image recording unit that records images to which automatic processing has been applied with the selected face portions as its target.

(3) The present invention proposes the imaging apparatus of (2), wherein, during the shooting and recording of a moving image, the selection control unit operates to sequentially change the content of the automatic processing performed by the automatic processing unit, and the image recording unit records a moving image containing frames processed with the sequentially changed face portions as targets.

(4) The present invention proposes the imaging apparatus of (1), wherein the selection control unit operates during the shooting and display of a through image to sequentially change the content of the automatic processing performed by the automatic processing unit.

(5) The present invention proposes the imaging apparatus of (2), wherein, during the shooting and display of a through image, the selection control unit operates to sequentially change the content of the automatic processing performed by the automatic processing unit, and the image recording unit shoots and records a still image each time that content is changed.

(6) The present invention proposes the imaging apparatus of any one of (1) to (5), wherein the change condition includes the composition of the plurality of persons detected in the shooting frame changing by more than a predetermined amount.

(7) The present invention proposes the imaging apparatus of any one of (1) to (5), wherein the change condition includes a predetermined time having elapsed since the selection unit last performed a selection operation.

(8) The present invention proposes the imaging apparatus of any one of (1) to (5), wherein the change condition includes the face portion previously selected by the selection unit leaving the shooting frame or the playback frame.

(9) The present invention proposes the imaging apparatus of any one of (1) to (5), wherein the change condition includes a face portion never before selected by the selection unit being present in the shooting frame.

(10) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the selection unit randomly selects a predetermined number of face portions from the plurality of detected face portions.

(11) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the selection unit preferentially selects previously unselected face portions from the plurality of detected face portions.

(12) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the selection unit selects one face portion from the plurality of detected face portions.

(13) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the selection unit selects a plurality of face portions from the plurality of detected face portions, and the automatic processing unit sets the aperture value so that the depth of field is deep when a plurality of face portions are selected.

(14) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the automatic processing unit makes the face portion selected by the selection unit the target of autofocus processing, and changes the focus position in response to changes in the selected face portion made under the control of the selection control unit.

(15) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the automatic processing unit makes the face portion selected by the selection unit the target of automatic exposure processing, and changes the exposure conditions in response to changes in the selected face portion made under the control of the selection control unit.

(16) The present invention proposes the imaging apparatus of any one of (1) to (9), wherein the automatic processing unit makes the face portion selected by the selection unit the target of automatic color adjustment processing, and changes the content of the color adjustment in response to changes in the selected face portion made under the control of the selection control unit.

(17) The present invention proposes an imaging apparatus comprising: an imaging unit that captures an image of a subject to obtain a captured image; an image recording unit that records and saves captured images obtained by the imaging unit; a face detection unit that detects a plurality of human face portions from the captured images; and an automatic recording unit that applies face detection to the captured images sequentially acquired by the imaging unit, and causes the image recording unit to record and save the captured image of the current frame each time a change in the composition of the plurality of persons detected in the shooting frame matches a predetermined change condition.

(18) The present invention proposes a method for capturing and recording a captured image, comprising: a first step of capturing an image of a subject to obtain a captured image; a second step of detecting a plurality of human face portions from the captured image; a third step of selecting a predetermined number of face portions from the detected face portions; a fourth step of making the selected face portions the target of automatic processing; a fifth step of executing the automatic processing on the targeted face portions; and a sixth step of repeating the third through fifth steps each time a change in the composition of the plurality of persons detected in the shooting frame matches a predetermined change condition.

(19) The present invention proposes a program that causes a computer in an imaging apparatus to execute: a first step of capturing an image of a subject to obtain a captured image; a second step of detecting a plurality of human face portions from the captured image; a third step of selecting a predetermined number of face portions from the detected face portions; a fourth step of making the selected face portions the target of automatic processing; a fifth step of executing the automatic processing on the targeted face portions; and a sixth step of repeating the third through fifth steps each time a change in the composition of the plurality of persons detected in the shooting frame matches a predetermined change condition.

According to the present invention, each time a change in the composition of the plurality of persons detected in the shooting frame matches a predetermined change condition, a new face portion in the shooting frame is selected and the autofocus, exposure-condition, and color-balance operations are executed again. Therefore, when multiple people are present in the shooting frame, the focus, exposure conditions, and color balance can be adjusted evenly across the various people, and multiple shots can be recorded evenly across various combinations of persons.
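The repeated selection loop described above can be sketched as follows. This is a minimal, hypothetical illustration in Python, not the patent's implementation; the change condition and selection rule shown (a changed set of detected persons OR a 4-second interval, then a random pick) are just two entries drawn from the condition tables described later.

```python
import random

def change_condition_met(prev_ids, curr_ids, elapsed_s, interval_s=4.0):
    # Hypothetical change condition: the set of detected persons changed,
    # OR a fixed interval has passed since the previous selection.
    return set(prev_ids) != set(curr_ids) or elapsed_s >= interval_s

def select_new_faces(curr_ids, n=1):
    # Hypothetical selection rule: randomly pick n of the detected faces
    # as the next autofocus / exposure / color-balance target.
    return random.sample(sorted(curr_ids), min(n, len(curr_ids)))
```

Each time `change_condition_met` fires, a fresh call to `select_new_faces` would redirect the automatic processing to a different face, which is the core of the scheme.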

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
Note that the constituent elements in the present embodiment can be appropriately replaced with existing constituent elements and the like, and various variations including combinations with other existing constituent elements are possible. Therefore, the description of the present embodiment does not limit the contents of the invention described in the claims.

<Appearance configuration of imaging device>
An external configuration of the imaging apparatus (digital camera 1) according to the present embodiment will be described with reference to FIG.
FIG. 1 is a front view and a rear view of the digital camera 1 according to the present embodiment. Although not particularly limited, the front of the camera body 2 carries a collapsible lens barrel 3, a strobe light emission window 4, a finder front window 5, a sound collecting hole 6 for recording, and the like, while a shutter button 7 and a power switch 8 are arranged on the top surface of the camera body 2.

Also arranged on the back of the camera body 2 are a sound amplifying hole 9, a viewfinder rear window 10, a mode change switch 11, a zoom operation switch 12, a MENU button 13, a cursor key 14, a SET button 15, a DISP button 16, and a liquid crystal monitor 17.

Furthermore, a cover 18 is provided on the bottom surface of the camera body 2; opening it allows the battery 19 and a large-capacity external memory 20 mounted inside the camera body, such as a card-type memory or a card-type hard disk, to be attached and detached.

<Electrical configuration of imaging device>
FIG. 2 is an internal block diagram of the digital camera 1 according to the present embodiment.
As shown in this figure, the functions of the digital camera 1 can be classified into an audio input system 21, an imaging system 22, a control system 23, an audio output system 24, an image storage system 25, a display system 26, an operation system 27, and so on.

The audio input system 21 includes a microphone 28 disposed behind the sound collection hole 6 on the front of the body 2, an amplifier 29 that amplifies the sound picked up by the microphone 28, an A/D conversion unit 30 that converts the amplified analog audio signal into a digital signal, and an audio memory 31 that temporarily stores the digitized audio signal.

The imaging system 22 includes a photographic lens group 32 with zoom and autofocus functions housed in the lens barrel 3 at the front of the body 2, an electronic imaging unit 33 such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor that converts the subject image passing through the photographic lens group 32 into a two-dimensional image signal, a video processing unit 34 that performs the required image processing on the image signal from the electronic imaging unit 33, and an image memory 35 that temporarily stores the processed image signal.

It further includes a focus drive unit 36 that drives a focus mechanism (not shown) in the lens barrel 3, a zoom drive unit 37 that drives the zoom mechanism, a strobe 38 provided in the strobe light emission window 4 on the front of the body, a strobe drive unit 39 that drives the strobe 38, and a shooting control unit 40 that controls each of these units (the electronic imaging unit 33, video processing unit 34, focus drive unit 36, zoom drive unit 37, and strobe drive unit 39).

The control system 23 includes a control circuit 41 built around a one-chip microprocessor that supervises each of the above systems and centrally controls the operation of the digital camera 1. In the present embodiment, the control circuit 41 executes a detection function that detects human face images from the image captured by the electronic imaging unit 33; functions that determine the operation mode, the start conditions for the automatic selection operation described later, the conditions for determining the number of simultaneously selected persons, and the automatic selection conditions; a function that stores newly detected face images and their position information in a list; a function that tracks detected face images and stores them in the list together with their position information; and a function that, based on the automatic selection conditions, focuses on a face image selected from the detection list.

The control system also includes a program memory 42 that stores, in a non-volatile manner, the various programs needed for the operation of the control circuit 41, and a user data memory 43 that stores user-specific data and the like in a non-volatile, rewritable manner. In this embodiment, the user data memory 43 stores a data table for the operation modes, a data table for the start conditions of the automatic selection operation, a data table for determining the number of simultaneously selected persons, a data table defining the automatic selection conditions, and so on.

The audio output system 24 includes a D/A conversion unit 44 that converts audio data output by the control circuit 41 into an analog audio signal, an amplifier 45 that amplifies the audio signal, and a speaker 46 provided near the sound amplifying hole 9 on the back of the body to reproduce the amplified audio signal. The "audio" is not limited to actual voices; it may be other environmental sounds, or a mix of voices and environmental sounds.

The image storage system 25 includes a compression encoding / decompression decoding unit 47 that compresses (encodes) and decompresses (decodes) image data (still-image and moving-image data) and audio data using general-purpose encoding methods (for example, Exif-JPEG or a compatible format for still images, and MPEG or a compatible format for moving images and audio), and the external memory 20, which stores the encoded image and audio data in a rewritable manner.

The display system 26 includes a display control unit 48 that converts display data output by the control circuit 41 into a predetermined display format (one adapted to the resolution of the liquid crystal monitor 17), and the liquid crystal monitor 17 provided on the back of the body, which displays the output signal from the display control unit 48.

The liquid crystal monitor 17 need not be a "liquid crystal" display; any flat display device that can be mounted on a portable electronic device such as the digital camera 1 will do. For example, it may be read as an organic EL display. In the present embodiment, a face image selected from the detection list described later is identified and displayed on the liquid crystal monitor 17 as the focus target.

The operation system 27 includes an operation input unit 49 comprising the shutter button 7 on the top of the body and, on the back, the mode change switch 11, zoom operation switch 12, MENU button 13, cursor key 14, SET button 15, DISP button 16, and so on, together with an input circuit 50 that feeds operation signals from the operation input unit 49 to the control circuit 41. In the present embodiment it is also used to set the operation mode, the start conditions for the automatic selection operation, the conditions for determining the number of simultaneously selected persons, the automatic selection conditions, and so on.

<Function block configuration>
Next, main functional blocks in the imaging apparatus according to the present embodiment will be described with reference to FIG.
As shown in FIG. 3, the main functional blocks of the imaging apparatus according to the present embodiment comprise a face image detection unit 101, a detection list 102, a condition storage unit 103, a condition setting unit 104, a selection control unit 48, a display unit 106, and an imaging control unit 40.

The face image detection unit 101 is provided in the control circuit 41 of the imaging apparatus of FIG. 2; the detection list 102 and the condition storage unit 103 are provided in the user data memory 43 of the imaging apparatus of FIG. 2; and the condition setting unit 104 is provided in the operation input unit 49 of the imaging apparatus of FIG. 2.

The face image detection unit 101 detects and extracts human face images from the image captured by the electronic imaging unit 33. Publicly known techniques can be used to detect a face portion in an image, so a detailed description is omitted. One example method stores feature information common to human faces in advance, compares feature information extracted from each part of the captured image with the stored information, and judges an image portion whose similarity exceeds a predetermined level to be a face. Color information such as skin color and the relative positions of facial parts are commonly used as feature information.
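As a toy illustration of the feature-comparison idea, the following sketch uses purely hypothetical feature names, tolerances, and thresholds; it is not one of the known techniques the text alludes to, only the shape of the comparison.

```python
def looks_like_face(region, template, tolerance=0.1, threshold=0.8):
    # Compare feature values extracted from an image region against the
    # stored face-template values; count a feature as matching when it
    # falls within the tolerance, and accept the region as a face when
    # the fraction of matching features reaches the threshold.
    matches = sum(
        1 for name, value in template.items()
        if abs(region.get(name, float("inf")) - value) <= tolerance
    )
    return matches / len(template) >= threshold
```

In a real detector the "features" would be the skin-color statistics and facial-part positions mentioned above rather than scalar values in a dictionary.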

In addition, when detecting face images continuously, there is also a method of performing a tracking operation that follows a moving face using information such as the position and size of the face image detected in the immediately preceding frame, which improves the accuracy and efficiency of detection.
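A common way to realize such tracking is nearest-neighbor matching on position; the following sketch assumes a pixel-distance threshold (`max_dist`) that is not specified in the text.

```python
def track_faces(prev_positions, new_detections, max_dist=50.0):
    # Hypothetical tracking step: match each previously known face to the
    # nearest new detection; if the nearest detection is farther than
    # max_dist pixels, the face is considered lost and dropped.
    updated = {}
    unclaimed = list(new_detections)
    for face_id, (px, py) in prev_positions.items():
        if not unclaimed:
            break
        nearest = min(unclaimed, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
        if ((nearest[0] - px) ** 2 + (nearest[1] - py) ** 2) ** 0.5 <= max_dist:
            updated[face_id] = nearest
            unclaimed.remove(nearest)
    return updated
```

Faces absent from the returned mapping correspond to the entries the text says are deleted from the detection list.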

The detection list 102 is a list in which newly detected face images and their position information, and tracked face images and their position information, are recorded and updated. The condition storage unit 103 stores an operation mode data table, a data table for the start conditions of the automatic selection operation, a data table defining the conditions for determining the number of simultaneously selected persons, a data table defining the automatic selection conditions, and so on.

As shown in FIG. 4, the operation mode data table describes the operation modes in which the roulette mode is executed. Specifically, in the "automatic focus-target selection mode during moving image shooting", face images are automatically selected during moving image shooting, and the automatically selected face images are focused on one after another. In the "automatic still-image shooting timing determination mode", a face image is automatically selected during display on the monitor, and once focus is achieved on the automatically selected face image, a still image is shot automatically.

As shown in FIG. 5, the data table for the start conditions of the automatic selection operation lists conditions such as: 1) the face image selected at the previous automatic selection is no longer detected (tracked); 2) the total number of currently detected (tracked) face images is 3 or more; 3) the total number of face images detected (tracked) at the previous automatic selection differs from the total number currently detected (tracked); 4) the number of face images that were detected (tracked) at the previous automatic selection but can no longer be detected (tracked) is 2 or more; 5) the number of face images that were not detected (tracked) at the previous automatic selection but are newly detected (tracked) is 2 or more; 6) random; 7) a face image never selected in the past exists among the currently detected (tracked) face images; 8) the elapsed time since the previous automatic selection is a set number of seconds or more. The user can select these conditions as appropriate and combine them with AND, OR, and the like, and can set the numerical values arbitrarily.
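Several of the listed start conditions reduce to simple set arithmetic over face IDs. The sketch below (function and key names are illustrative; the numbering in the comments follows the list above) evaluates a subset of them, leaving the user's AND/OR combination to the caller.

```python
def evaluate_start_conditions(prev_ids, curr_ids, last_selected, elapsed_s):
    # Hypothetical evaluation of a few of the listed start conditions;
    # the user would then AND/OR-combine whichever entries are enabled.
    prev, curr = set(prev_ids), set(curr_ids)
    return {
        "last_selected_lost": last_selected not in curr,   # condition 1
        "three_or_more_faces": len(curr) >= 3,             # condition 2
        "count_changed": len(prev) != len(curr),           # condition 3
        "two_or_more_lost": len(prev - curr) >= 2,         # condition 4
        "two_or_more_new": len(curr - prev) >= 2,          # condition 5
        "elapsed_over_4s": elapsed_s >= 4.0,               # condition 8
    }
```

The 4-second value for condition 8 is the one used in the worked example later; in the table it is a user-settable number.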

The data table for the conditions determining the number of simultaneously selected persons determines how many persons receive the face-image identification display (roulette mode). As shown in FIG. 6, it lists conditions such as: 1) 1 person (fixed); 2) 2 persons (fixed); 3) 10% of the total number of face images detected (tracked) at the time of automatic selection; 4) random.

The data table for the automatic selection conditions indicates how to select the number of face images determined by the table above. As shown in FIG. 7, it lists conditions such as: 1) randomly selected from all currently detected (tracked) face images; 2) randomly selected from previously unselected face images; 3) the earliest detected of the previously unselected face images; 4) the face image closest to the camera among the previously unselected face images.
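A sketch of how selection rules 1 and 2 might combine (this mirrors the fallback used in the worked example later, but the function itself is an illustrative assumption, not the patent's code):

```python
import random

def auto_select(detected_ids, already_selected, n=1):
    # Hypothetical selection rule combining entries 1 and 2 of the table:
    # prefer face images never selected before; if none remain, fall back
    # to a random pick from all currently detected (tracked) faces.
    fresh = [f for f in detected_ids if f not in already_selected]
    pool = fresh if fresh else list(detected_ids)
    return random.sample(pool, min(n, len(pool)))
```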

The condition setting unit 104 sets the conditions that the user arbitrarily selects from the operation mode data table, the start-condition data table, the simultaneous-selection-number data table, and the automatic-selection-condition data table stored in the condition storage unit 103. The selection control unit 48 refers to the detection list 102 and performs control to select face images based on the various conditions set by the condition setting unit 104. Under the control of the selection control unit 48, the display unit 106 identifies and displays the selected face images while displaying the moving image or still image. The shooting control unit 40 controls the focus drive unit 36 with the face image selected by the selection control unit 48 as the focus target, and controls the timing of shooting and recording in accordance with the selection timing of the selection control unit 48.

<Processing during movie shooting>
With reference to FIG. 8, the processing operation during moving image shooting will be described.
First, the user uses the condition setting unit 104 to arbitrarily set each condition from the operation mode data table, the start-condition data table, the simultaneous-selection-number data table, and the automatic-selection-condition data table stored in the condition storage unit 103 (step S101).

Next, the control circuit 41 determines whether the start of moving image shooting has been instructed (step S102). If not ("No" in step S102), it stands by. If the start of moving image shooting has been instructed ("Yes" in step S102), it determines whether the end of moving image shooting has been instructed (step S103).

If the end of moving image shooting is instructed in step S103 ("Yes" in step S103), the plurality of frame images obtained by imaging are recorded as moving image data and the entire process ends. If the end of moving image shooting is not instructed ("No" in step S103), image data for the next new moving-image frame is captured (step S104).

Then, face images are detected in the newly captured image data by the face image detection unit 101. If a face image currently being tracked exists in the current image data, its tracking continues and its position information in the detection list 102 is updated; face images that were tracked in the past but no longer exist in the current image data are deleted from the detection list 102 (step S105). Here, tracking means following a face image detected in a previously captured moving-image frame even as its position changes.

New face images are also detected in the image data (step S106), and each detected face image's identification number and detection position information are added to the detection list 102 (step S107). The new-face detection method targets image areas in which no face image has yet been detected: first a skin-color portion is detected, then it is determined whether portions corresponding to eyes, a nose, a mouth, and so on exist within the detected skin-color area; if they do, the skin-color portion is regarded as a face image.

Next, the selection control unit 48 determines whether the detection status of the face images in the detection list 102 matches the start conditions for the automatic selection operation set by the user (step S108). If not ("No" in step S108), the process returns to step S103. If it does ("Yes" in step S108), the number of simultaneously selected persons is determined based on the user-set determination conditions (step S109), and that number of face images is selected from the detection list 102 based on the user-set automatic selection conditions (step S110). The selected face images are then identified and displayed on the display unit 106.

Further, the shooting control unit 40 switches the area corresponding to the selected face image to be the new autofocus target area, changes the focus position via the autofocus function in response to that switch, and focuses on the selected face image (step S112); the process then returns to step S103.

When the number of simultaneously selected persons is determined to be one, the shooting parameters including the aperture value are set in the normal way, or the aperture value is set so that the depth of field is shallower than the normal setting. When the number of simultaneously selected persons is determined to be plural, the shooting parameters including the aperture value are either left at the normal settings with the focus moved to the center position among the selected persons, or the aperture value is set so that the depth of field is deeper than the normal setting.
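As a numeric illustration of that depth-of-field rule (the base f-number and the one- and two-stop offsets below are assumptions for illustration, not values from the patent):

```python
def choose_aperture(num_selected, base_f=2.8):
    # Illustrative aperture rule: one target -> open the aperture about
    # one stop for a shallower depth of field; several targets -> stop
    # down about two stops so every selected face stays in focus.
    if num_selected <= 1:
        return round(base_f / 1.4, 1)
    return round(base_f * 2, 1)
```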

<Specific examples of processing operations during movie shooting>
A specific example of the processing operation during moving image shooting will be described with reference to FIG.
Here, for example, assume that seven people, Mr. A through Mr. G, are at a party venue and a movie of the party is being captured with the camera. In the state transition diagram of FIG. 9, the timing interval is 3 seconds.

The start condition for the automatic selection operation set by the user is "(a face image never selected in the past exists among the currently detected (tracked) face images) OR (the elapsed time since the previous automatic selection is 4 seconds or more)"; the condition determining the number of simultaneously selected persons is "1 person (fixed)"; and the automatic selection condition is "(the earliest detected of the previously unselected face images) OR (if no previously unselected face image exists, select at random from all currently detected (tracked) face images)".

  A specific example of the processing operation during moving image shooting will be described based on the state transition diagram shown in FIG. 9 under the above conditions. In FIG. 9, a circle indicates a person detected (tracked) in the shooting frame, and a double circle indicates the most recently selected person. Further, even the same person's face image is regarded as a different face image once its detection (tracking) has been interrupted.

  First, at timing (1), the framing state is (1), and no person exists in the shooting frame. At timing (2), the framing state is (2), and only Mr. A is detected in the shooting frame. At this time, since the automatic selection operation start condition, the simultaneous selection number determination condition, and the automatic selection condition are all satisfied, the automatic selection operation is executed and Mr. A is selected.

  At timing (3), the framing state is (3), and Mr. B is added. Also in this case, since the automatic selection operation start condition, the simultaneous selection number determination condition, and the automatic selection condition are all satisfied, the automatic selection operation is executed and Mr. B is selected. At timing (4), the framing state is (4), and Mr. C is added. Also in this case, since all three conditions are satisfied, the automatic selection operation is executed and Mr. C is selected.

  At timing (5), the framing state remains (4), and since the start condition of the automatic selection operation is not satisfied, the automatic selection operation is not started. At timing (6), the framing state still remains (4), but since the elapsed time from the previous automatic selection is 4 seconds or more, the automatic selection operation is executed and Mr. A is selected.

  At timing (7), the framing state remains (4), and since the start condition of the automatic selection operation is not satisfied, the automatic selection operation is not started. At timing (8), the framing state still remains (4), but since the elapsed time from the previous automatic selection is 4 seconds or more, the automatic selection operation is executed and Mr. B is selected.

  At timing (9), the framing state is (5), and Mr. D is added. Also in this case, since the automatic selection operation start condition, the simultaneous selection number determination condition, and the automatic selection condition are all satisfied, the automatic selection operation is executed and Mr. D is selected. At timing (10), the framing state is (6); Mr. E is added while Mr. A disappears. Also in this case, since all three conditions are satisfied, the automatic selection operation is executed and Mr. E is selected.

  At timing (11), the framing state is (7), and Mr. B disappears. In this case, since the start condition of the automatic selection operation is not satisfied, the automatic selection operation is not started. At timing (12), the framing state is (8) and Mr. C disappears; however, since the elapsed time from the previous automatic selection is 4 seconds or more, the automatic selection operation is executed and Mr. D is selected.

  At timing (13), the framing state is (9); Mr. D disappears while Mr. F and Mr. G are added. In this case, since the automatic selection operation start condition, the simultaneous selection number determination condition, and the automatic selection condition are all satisfied, the automatic selection operation is executed and Mr. F, the earliest-detected face image not yet selected, is selected.

  In moving image shooting, focus is placed on one person after another as the composition of persons in the shooting frame changes, so the sequence of frame images recorded during moving image shooting comes to include frames in which various persons, in various combinations, are in focus.

  As in the above example, according to the present embodiment, in a situation where a plurality of persons are in the shooting frame, a plurality of captured images focused on various persons in various combinations can be shot and recorded evenly and efficiently. Further, with the condition settings described above, even when the same group of persons remains in the shooting frame for a long time, the focus does not stay on the same persons only.

  In the present embodiment, the case has been described in which the selected person is made the target of autofocus (AF) and focus is automatically adjusted according to the distance to the selected person. The embodiment may instead be applied to making the selected person the target of automatic exposure compensation (AE), in which shooting conditions such as the aperture and shutter speed are automatically corrected according to the brightness of the selected person's portion so that that portion is optimally exposed, or to making the selected person the target of automatic color correction (auto white balance), in which color correction processing such as white balance processing is performed according to the hue of the selected person's portion so that its hue is optimized (for example, so that skin tones are rendered clearly). In this way, a plurality of photographed images in which various persons are captured in an optimal state (brightness, color, and the like suited to each person's portion) can be shot and recorded evenly and efficiently.
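As a rough illustration of the auto-exposure variant, one can meter only the selected face's region and derive an exposure offset from it. The target level and the log2 mapping below are assumptions made for the sketch; the patent does not give a concrete metering formula.

```python
import numpy as np

def face_exposure_offset(luma, face_box, target=118.0):
    """Return an EV-style offset that would bring the mean luminance of the
    selected face's bounding box toward `target` (mid-gray on an 8-bit
    scale). A positive value means the face is underexposed and should be
    brightened, e.g. by opening the aperture or slowing the shutter.

    luma: HxW array of 8-bit luminance values; face_box: (x, y, w, h).
    """
    x, y, w, h = face_box
    mean = float(luma[y:y + h, x:x + w].mean())
    return float(np.log2(target / max(mean, 1.0)))
```

A face metering at 59, one stop below the assumed 118 target, yields an offset of +1.0 EV regardless of how bright the rest of the frame is.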

<Processing during automatic still image shooting>
With reference to FIG. 10, the processing operation at the time of automatic still image shooting will be described.
First, the user uses the condition setting unit 104 to arbitrarily set each condition from the operation mode data table, the data table relating to automatic selection start conditions, the data table defining the conditions for determining the number of simultaneously selected persons, and the data table defining the automatic selection conditions, all stored in the condition storage unit 103 (step S201).

  Next, the control circuit 41 determines whether or not the start of the still image automatic shooting operation has been instructed (step S202); if it has not been instructed ("No" in step S202), the process returns and waits. On the other hand, if the start of the still image automatic shooting operation has been instructed ("Yes" in step S202), it is determined whether the end of the still image automatic shooting operation has been instructed (step S203).

  If the end of the still image automatic shooting operation has been instructed in step S203 ("Yes" in step S203), the entire process is ended. If it has not been instructed ("No" in step S203), image data for the next new through image (for monitor display) is captured and displayed on the display unit 106 (step S204).

  Then, a face image is detected from the newly captured image data by the face image detection unit 101. If a face image currently being tracked exists in the current image data, tracking of that face image is continued and its position information in the detection list 102 is updated. On the other hand, face images that were tracked in the past but do not exist in the current image data are deleted from the detection list 102 (step S205). Here, tracking refers to following a face image detected in a previously captured moving image frame even as its position changes.
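The bookkeeping of step S205, together with the id assignment of step S207 described next, can be sketched as a small update function. Representing the detection list as a dict from identification number to position is an assumption made for the sketch.

```python
def update_detection_list(matches, new_positions, next_id):
    """Rebuild the detection list for the current frame.

    matches:       {face_id: new_position} for faces the tracker re-found,
                   i.e. tracking continues and position info is refreshed.
    new_positions: positions of faces detected for the first time; each is
                   assigned a fresh identification number (step S207).
    Faces tracked before but absent from `matches` are deleted simply by
    not being carried over (step S205).
    """
    detection_list = dict(matches)
    for pos in new_positions:
        detection_list[next_id] = pos
        next_id += 1
    return detection_list, next_id
```

For example, if faces 1 and 2 were listed, face 1 is re-found at (12, 11), face 2 has left the frame, and one new face appears at (80, 20), the updated list is {1: (12, 11), 3: (80, 20)}.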

  Also, new face images are detected from the image data (step S206), and an identification number and detection position information for each detected face image are added to the detection list 102 (step S207). The new face image detection method targets image areas where no face image has yet been detected: first a skin-color portion is detected, and then it is determined whether portions corresponding to eyes, a nose, a mouth, and the like exist within the detected skin-color area. If such portions exist, the skin-color portion is regarded as a face image.
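The two-stage check described above can be illustrated as follows. The RGB skin-color thresholds and the 50% coverage cutoff are assumptions made for the sketch (the patent gives no concrete criteria), and `has_features` stands in for the unspecified eye/nose/mouth detector.

```python
import numpy as np

def skin_mask(rgb):
    """Crude per-pixel skin-color test on an HxWx3 RGB array."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def looks_like_face(rgb, region, has_features):
    """A candidate region counts as a face only if it is predominantly
    skin-colored AND feature-like portions (eyes, nose, mouth, ...) are
    found inside it, mirroring the two-stage check of step S206."""
    x, y, w, h = region
    patch = rgb[y:y + h, x:x + w]
    return bool(skin_mask(patch).mean() > 0.5) and has_features(patch)
```

A region that passes the color stage but has no eye/nose/mouth-like portions is rejected, which is what prevents, say, a bare arm from being listed as a face.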

  Next, the selection control unit 48 determines whether or not the detection statuses of the plurality of face images shown in the detection list 102 match the automatic selection operation start condition set by the user (step S208). If they do not match ("No" in step S208), the process returns to step S203. On the other hand, if they match ("Yes" in step S208), the number of simultaneously selected persons is determined based on the determination condition set by the user (step S209), and face images for the determined number of persons are selected from the face images in the detection list 102 based on the automatic selection conditions set by the user (step S210). Then, the selected face image portions in the through image displayed on the display unit 106 are identified and displayed.

  Then, the shooting control unit 40 switches the autofocus target area to the area corresponding to the selected face image, the autofocus function moves the focus position accordingly, and a still image focused on the selected face image is shot and recorded (step S211); the process then returns to step S203.

  When the number of simultaneously selected persons is determined to be one, the shooting parameters including the aperture value are either set in the normal manner, or the aperture value is set so that the depth of field becomes shallower than the normal setting. When the number of simultaneously selected persons is determined to be plural, the shooting parameters including the aperture value are either left at the normal settings, with the focus moved to the center position of the selected persons, or the aperture value is set so that the depth of field becomes deeper than the normal setting.

<Specific examples of processing operations during automatic still image shooting>
A specific example of the processing operation at the time of automatic still image shooting will be described with reference to FIG.
Here, for example, it is assumed that seven persons, Mr. A through Mr. G, are at the party venue and that these seven persons are being photographed.

  Further, the shooting frame is as shown in FIG. 11, the determination condition of the number of simultaneously selected persons set by the user is "one person (fixed)", and the automatic selection condition is "the face image closest to the camera among face images not selected in the past". In this case, Mr. F is selected in the first processing operation, and Mr. E is selected in the second processing operation. Each time, an image focused on the selected face image is shot and recorded.
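The "closest unselected face" rule can be sketched as below. Since the text does not say how distance is measured, face-box area is used here as a stand-in proxy (a larger detected face is assumed to be nearer the camera); both that proxy and the dict representation are assumptions of the sketch.

```python
def pick_closest_unselected(faces, selected_before):
    """Among faces not selected in the past, pick the one nearest the
    camera, approximated by the largest bounding-box area.

    faces: {name: (x, y, w, h)} for currently detected faces.
    Returns None when every detected face was already selected.
    """
    candidates = {n: b for n, b in faces.items() if n not in selected_before}
    if not candidates:
        return None
    return max(candidates, key=lambda n: candidates[n][2] * candidates[n][3])
```

With Mr. F detected larger (nearer) than Mr. E, the first call picks F and the second call, after F is marked as selected, picks E, matching the order in the example above.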

  A specific example of the processing operation during automatic still image shooting under the conditions of FIG. 9 is almost the same as the processing operation during moving image shooting in terms of the situation, the various condition settings, and how the selected person switches. However, during automatic still image shooting, a still image is shot and recorded with focus on each person selected in turn as the composition of persons in the shooting frame changes, and the plurality of still images recorded by this automatic shooting includes still images in which various persons, in various combinations, are in focus.

  As in the above example, according to the present embodiment, a plurality of still images covering various combinations of a plurality of persons can be recorded evenly and efficiently in a situation where a plurality of persons are in the shooting frame.

  In this embodiment, the cases of moving image shooting and automatic still image shooting have been described. However, the present embodiment may also be applied to a shooting function such as still-in-movie, which shoots a still image during moving image shooting, or to an operation in which, without recording, the focus is simply moved or the face portion is identified and displayed during through image display.

  Further, by changing the various condition settings of the automatic selection operation, more appropriate shooting can be performed for various scenes. For example, if the start condition of the automatic selection operation is that the configuration of the plurality of persons detected in the shooting frame has changed by a predetermined amount or more, it is possible to prevent too many shots from being recorded with the same combination of persons, that is, to avoid redundant shooting and recording of unnecessary combinations.
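One simple reading of "changed by a predetermined amount or more" is to count how many persons entered or left the frame between checks; both this interpretation and the threshold value are assumptions of the sketch.

```python
def composition_changed_enough(prev_faces, cur_faces, threshold=2):
    """Trigger the automatic selection when the set of detected persons
    differs from the previous check by `threshold` or more entries
    (persons who entered the frame plus persons who left it)."""
    return len(set(prev_faces) ^ set(cur_faces)) >= threshold
```

Here one person leaving and two arriving counts as a change of three, which exceeds the assumed threshold of two, while an unchanged frame counts as zero.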

  In addition, if the start condition of the automatic selection operation is that a predetermined time or more has elapsed since the previous automatic selection operation, the focus is prevented from staying on the same persons only, even when the same group of persons remains in the shooting frame for a long time.

  Further, if the start condition of the automatic selection operation is that the face portion previously determined as the focus target has left the shooting frame or the playback frame, the same person is not continuously selected as the focus target.

  Further, if the start condition of the automatic selection operation is that a face portion that has not been determined as the focus target in the past exists in the shooting frame or the playback frame, the same person is likewise not continuously selected as the focus target.

  The embodiment of the present invention has been described in detail with reference to the drawings. However, the specific configuration is not limited to this embodiment and includes design changes and the like within a scope not departing from the gist of the present invention. For example, in the present embodiment, the case where the user sets the automatic selection start condition, the simultaneous selection number determination condition, and the automatic selection condition has been described; alternatively, a mode in which these conditions are set automatically may be provided. In such a mode, conditions set by the user in the past may be learned according to the type of captured image, and conditions that suit the user's preferences may be set automatically.

FIG. 1 is a front view and a rear view of the imaging device according to this embodiment. FIG. 2 is an internal configuration diagram of the imaging device according to this embodiment. FIG. 3 is a functional block diagram of the imaging device according to this embodiment. FIG. 4 is a diagram showing the data table of operation modes according to this embodiment. FIG. 5 is a diagram showing the data table relating to the start conditions of the automatic selection operation according to this embodiment. FIG. 6 is a diagram showing the data table relating to the conditions for determining the number of simultaneously selected persons according to this embodiment. FIG. 7 is a diagram showing the data table relating to the automatic selection conditions according to this embodiment. FIG. 8 is a processing flow at the time of moving image shooting according to this embodiment. FIG. 9 is a state transition diagram showing the relationship between changes in framing and automatic selection execution timing. FIG. 10 is a processing flow at the time of automatic still image shooting according to this embodiment. FIG. 11 is a diagram illustrating a still image.

Explanation of symbols

  DESCRIPTION OF SYMBOLS 1 ... Digital camera, 33 ... Electronic imaging part, 40 ... Shooting control part, 41 ... Control circuit, 43 ... User data memory, 48 ... Selection control part, 101 ... Face image detection unit, 102 ... detection list, 103 ... condition storage unit, 104 ... condition setting unit, 106 ... display unit

Claims (19)

  1. Imaging means for capturing an image of a subject and obtaining a captured image;
    Face detection means for detecting the face portions of a plurality of persons from the captured image;
    Selecting means for selecting a predetermined number of face portions from the detected plurality of face portions;
    Selection control means for repeatedly executing a selection operation of a new face portion by the selection means each time a change in the configuration of a plurality of persons detected in a shooting frame matches a predetermined change condition;
    Automation processing means for subjecting the face portion selected by the selection means to automation processing, and changing the contents of the automation processing in response to the change of the face portion selected by the control of the selection control means;
    An imaging apparatus comprising:
  2. Image recording means for recording an image that has been subjected to the automation processing with the selected face portion as its target;
    The imaging apparatus according to claim 1, further comprising:
  3. During shooting and recording of moving images, the selection control means is operated to sequentially change the contents of the automation processing by the automation processing means,
    The imaging apparatus according to claim 2, wherein the image recording unit records a moving image including a frame that has been subjected to the automation process with a face part that is sequentially changed as a target of the automation process.
  4.   The imaging apparatus according to claim 1, wherein during the shooting of a through image, the selection control unit is operated to sequentially change the contents of the automation processing by the automation processing unit.
  5. During shooting and displaying a through image, the selection control means is operated to sequentially change the contents of the automation processing by the automation processing means,
    The imaging apparatus according to claim 2, wherein the image recording unit shoots and records a still image each time the content of the automation processing by the automation processing unit is changed.
  6.   The imaging apparatus according to claim 1, wherein the change condition includes a condition in which a configuration of a plurality of persons detected in the shooting frame changes to a predetermined value or more.
  7.   The imaging apparatus according to claim 1, wherein the change condition includes a condition after a predetermined time has elapsed since the selection unit performed a previous selection operation.
  8.   The imaging apparatus according to claim 1, wherein the change condition includes a condition in which the face part previously selected by the selection unit is out of the shooting frame or the playback frame.
  9.   The imaging apparatus according to claim 1, wherein the change condition includes a condition in which a face portion that has not been selected in the past by the selection unit exists in the shooting frame.
  10.   The imaging apparatus according to claim 1, wherein the selection unit randomly selects a predetermined number of face parts from the plurality of detected face parts.
  11.   The imaging apparatus according to claim 1, wherein the selection unit preferentially selects a previously unselected face part from the detected plurality of face parts.
  12.   The imaging apparatus according to claim 1, wherein the selection unit selects one face portion from the plurality of detected face portions.
  13. The selecting means selects a plurality of face portions from the detected plurality of face portions,
    The imaging apparatus according to claim 1, wherein the automation processing unit sets an aperture value so that a depth of field is deep when a plurality of face portions are selected.
  14.   The automation processing unit sets the face portion selected by the selection unit as a target of autofocus processing, and changes the focus position in response to the change of the face portion selected by the control of the selection control unit. The imaging apparatus according to claim 1, wherein:
  15.   The automation processing unit sets the face portion selected by the selection unit as a target of automatic exposure processing, and changes the exposure condition in response to the change of the face portion selected by the control of the selection control unit. The imaging apparatus according to claim 1, wherein:
  16.   The automation processing unit sets the face portion selected by the selection unit as a target for automatic color adjustment processing, and performs color adjustment processing corresponding to the change of the face portion selected by the control of the selection control unit. The imaging apparatus according to claim 1, wherein the content is changed.
  17. Imaging means for capturing an image of a subject and obtaining a captured image;
    Image recording means for recording and saving a photographed image obtained by the imaging means;
    Face detection means for detecting face portions of a plurality of persons from a captured image obtained by the imaging means;
    Automatic recording means for causing the face detection means to detect face portions in captured images sequentially obtained by the imaging means, and for causing the image recording means to record and save the captured image corresponding to the shooting frame each time a change in the configuration of the plurality of persons detected in the shooting frame matches a predetermined change condition;
    An imaging apparatus comprising:
  18. A first step of imaging a subject to obtain a captured image;
    A second step of detecting face portions of a plurality of persons from the captured image;
    A third step of selecting a predetermined number of face portions from the detected plurality of face portions;
    A fourth step of subjecting the selected face portion to automation processing;
    A fifth step of executing the automation process on the face portion determined as the object of the automation process;
    A sixth step of repeatedly executing from the third step to the fifth step each time a change in the configuration of the plurality of persons detected in the shooting frame matches a predetermined change condition;
    A method for capturing and recording a captured image, comprising:
  19. In the computer with the imaging device,
    A first step of imaging a subject to obtain a captured image;
    A second step of detecting face portions of a plurality of persons from the captured image;
    A third step of selecting a predetermined number of face portions from the detected plurality of face portions;
    A fourth step of subjecting the selected face portion to automation processing;
    A fifth step of executing the automation process on the face portion determined as the object of the automation process;
    A sixth step of repeatedly executing from the third step to the fifth step each time a change in the configuration of the plurality of persons detected in the shooting frame matches a predetermined change condition;
    A program for causing the computer to execute the first to sixth steps.
JP2007083894A 2007-03-28 2007-03-28 Imaging apparatus, captured image recording method, and program Active JP4899982B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007083894A JP4899982B2 (en) 2007-03-28 2007-03-28 Imaging apparatus, captured image recording method, and program

Publications (2)

Publication Number Publication Date
JP2008244976A true JP2008244976A (en) 2008-10-09
JP4899982B2 JP4899982B2 (en) 2012-03-21

Family

ID=39915743

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007083894A Active JP4899982B2 (en) 2007-03-28 2007-03-28 Imaging apparatus, captured image recording method, and program

Country Status (1)

Country Link
JP (1) JP4899982B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010127995A (en) * 2008-11-25 2010-06-10 Samsung Digital Imaging Co Ltd Imaging apparatus and imaging method
WO2011048742A1 (en) * 2009-10-19 2011-04-28 パナソニック株式会社 Semiconductor integrated circuit, and image capturing device provided therewith

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003092699A (en) * 2001-09-17 2003-03-28 Ricoh Co Ltd Digital camera imaging apparatus
JP2005086682A (en) * 2003-09-10 2005-03-31 Omron Corp Object decision device
JP2006005662A (en) * 2004-06-17 2006-01-05 Nikon Corp Electronic camera and electronic camera system
JP2006237961A (en) * 2005-02-24 2006-09-07 Funai Electric Co Ltd Imaging apparatus and automatic photographic method
JP2006246361A (en) * 2005-03-07 2006-09-14 Sony Corp Imaging device and imaging method
JP2007006033A (en) * 2005-06-22 2007-01-11 Omron Corp Object determining apparatus, imaging device, and supervisory apparatus
JP2007028123A (en) * 2005-07-15 2007-02-01 Sony Corp Imaging device and imaging method
JP2007074394A (en) * 2005-09-07 2007-03-22 Canon Inc Image pickup device and control method of same



Also Published As

Publication number Publication date
JP4899982B2 (en) 2012-03-21


Legal Events

Date        Code  Title
2010-03-05  A621  Written request for application examination
2011-04-15  A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
2011-04-19  A131  Notification of reasons for refusal
2011-06-15  A521  Written amendment
2011-08-16  A131  Notification of reasons for refusal
2011-10-14  A521  Written amendment
n/a         TRDD  Decision of grant or rejection written
2011-12-06  A01   Written decision to grant a patent or to grant a registration (utility model)
2011-12-19  A61   First payment of annual fees (during grant procedure)
n/a         R150  Certificate of patent or registration of utility model
n/a         FPAY  Renewal fee payment (payment until 2015-01-13; year of fee payment: 3)