JP4196714B2 - Digital camera - Google Patents

Publication number
JP4196714B2
JP4196714B2 (application JP2003109886A)
Authority
JP
Japan
Prior art keywords
step
extracted
person
digital camera
feature point
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
JP2003109886A
Other languages
Japanese (ja)
Other versions
JP2004320287A (en)
Inventor
雅 太田
秀臣 日比野
弘剛 野崎
Original Assignee
Nikon Corporation (株式会社ニコン)
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Nikon Corporation (株式会社ニコン)
Priority to JP2003109886A
Priority claimed from US10/814,142 (US20040207743A1)
Publication of JP2004320287A
Application granted
Publication of JP4196714B2
Status: Active
Anticipated expiration

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a digital camera that identifies feature points of a person and operates according to the identification result.
[0002]
[Prior art]
Many techniques for identifying a person from image data are known, including systems that authenticate a person by registering fingerprint or iris feature points in advance and collating newly extracted feature points against them. Japanese Patent Application Laid-Open No. 9-251534 describes in detail how to identify a person by extracting the eyes, nose, mouth, and so on, registering them as feature points, and comparing them with feature points extracted from an input image. Japanese Patent Laid-Open No. 10-232934 discloses a method for increasing the accuracy of a dictionary image when registering feature points extracted in this way. Examples of applying these techniques to cameras are given below.
[0003]
Japanese Patent Application Laid-Open No. 2001-201779 discloses a camera in which the user is registered in advance as reference information, and operation is enabled only when that reference information matches identification information captured by pointing the camera at the user's own face. Japanese Patent Application Laid-Open No. 2001-309225 discloses a camera in which data such as face coordinates, dimensions, eye positions, and head pose recognized by a face recognition algorithm are recorded in an image memory together with the image data. Japanese Patent Application Laid-Open No. 2001-326841 discloses an imaging apparatus (digital camera) that stores in advance identification information (face, fingerprint, palm print) for identifying an authorized user. Japanese Patent Laid-Open No. 2002-232761 discloses an image recording apparatus that records a captured image in association with subject identification information read in advance. Japanese Patent Application Laid-Open No. 2002-333651 discloses a photographing apparatus that generates a recording signal by comparing appearance information stored in advance with photographed face information; this appearance information is recorded together with a priority.
[0004]
[Problems to be solved by the invention]
An object of the present invention is to provide a digital camera that offers a method, not achieved in the prior inventions described above, of easily selecting a desired feature point after the feature points have been extracted and displayed, and that reliably records information relating to the selected feature point.
[0005]
[Means for solving problems]
In order to solve the above problems, the invention of claim 1 comprises an extraction unit that extracts predetermined feature parts from image data; a reception unit that receives instructions from a user; a selection unit that, when a plurality of feature parts are extracted, selects each feature part in a predetermined order according to the instructions received by the reception unit; and a display unit that displays feature part information specifying the feature part selected by the selection unit. The user can thus easily specify and select a desired person or the like. In the invention of claim 2, the display unit displays the feature part information superimposed on the image data. The invention of claim 3 further comprises a determination unit that determines the size of a face from the feature parts extracted by the extraction unit, and the selection unit selects in order of the face size determined by the determination unit. The invention of claim 4 further comprises a determination unit that determines the distance to each feature part extracted by the extraction unit, and the selection unit selects in order of the distance determined by the determination unit, so that a desired subject can be easily selected. The invention of claim 5 further comprises a focus area setting unit that sets a predetermined area including a feature part extracted by the extraction unit as a focus area for detecting focus, and the invention of claim 6 further comprises a photometric area setting unit that sets a predetermined area including a feature part extracted by the extraction unit as a photometric area.
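The selection behavior of claims 1, 3, and 4, in which each press of a button steps through the extracted feature parts in a predetermined order (for example, by face size or by distance), can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the data structure and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FeaturePart:
    """One extracted feature part (hypothetical structure)."""
    label: str
    face_area_px: int    # used for claim 3's order-of-face-size selection
    distance_m: float    # used for claim 4's order-of-distance selection

def select_by_press(parts, order_key, presses, largest_first=True):
    """Cycle through feature parts in a predetermined order, one step per button press."""
    ordered = sorted(parts, key=order_key, reverse=largest_first)
    return ordered[presses % len(ordered)]

parts = [
    FeaturePart("A", face_area_px=900, distance_m=2.0),
    FeaturePart("B", face_area_px=400, distance_m=5.0),
    FeaturePart("C", face_area_px=1600, distance_m=1.2),
]
# Ordered by face size, the first press selects C (largest face), the next selects A.
first = select_by_press(parts, lambda p: p.face_area_px, presses=0)
second = select_by_press(parts, lambda p: p.face_area_px, presses=1)
# Ordered by distance (nearest first), the first press again selects C.
nearest = select_by_press(parts, lambda p: p.distance_m, presses=0, largest_first=False)
```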
[0014]
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of the present invention will be described below with reference to the drawings.
FIG. 1 is a block diagram illustrating the main functions of the digital camera of the present invention.
[0015]
The photographing lens 101 includes a zoom lens for continuously changing the focal length, a focusing lens for adjusting the focus, and a VR (Vibration Reduction) lens for correcting camera shake during photographing. These lenses are driven by a driver 113. Here, the driver 113 includes a zoom driving mechanism for the zoom lens and its driving circuit, a focusing driving mechanism for the focusing lens and its driving circuit, and a VR lens driving mechanism and its driving circuit, which are controlled by the CPU 112, respectively. The detector 121 detects the position of the focusing lens and the position of the zoom lens and transmits the respective lens positions to the CPU 112.
[0016]
The photographing lens 101 forms a subject image on the imaging surface of the image sensor 103. The image sensor 103 is a photoelectric conversion image sensor that outputs an electrical signal corresponding to the light intensity of the subject image formed on the image pickup surface, and a CCD type or MOS type solid state image sensor is used. The image sensor 103 is driven by a driver 115 that controls the timing of signal extraction. A diaphragm 102 is provided between the photographing lens 101 and the image sensor 103. The diaphragm 102 is driven by a driver 114 equipped with a diaphragm mechanism and its drive circuit. The imaging signal from the solid-state imaging device 103 is input to the analog signal processing circuit 104, and processing such as correlated double sampling processing (CDS) is performed in the analog signal processing circuit 104. The imaging signal processed by the analog signal processing circuit 104 is converted from an analog signal to a digital signal by the A / D converter 135.
[0017]
The A/D converted signal is subjected to various image processing, such as contour enhancement and gamma correction, in the digital signal processing circuit 106. A plurality of contour enhancement parameters are prepared in advance, and the optimum parameter is selected according to the image data. The digital signal processing circuit 106 also includes a luminance/color-difference signal generation circuit that performs processing for recording; a plurality of parameters for this generation are likewise prepared in advance, and the optimum color conversion parameter is selected from among them so as to obtain the best color reproduction for the captured image. These parameter sets for edge enhancement and color reproduction are stored in a storage unit 1127, described later, from which the CPU 112 selects the optimum parameter. The buffer memory 105 is a frame memory that can store data for a plurality of frames captured by the image sensor 103, and the A/D converted signal is temporarily stored there. The digital signal processing circuit 106 reads the data stored in the buffer memory 105, performs each of the processes described above, and stores the processed data in the buffer memory 105 again. The CPU 112 is connected to the digital signal processing circuit 106, the drivers 113 to 115, and so on, and performs sequence control of the camera's shooting operation. The AE calculation unit 1121 of the CPU 112 performs automatic exposure calculation based on the image signal from the image sensor, and the AWB calculation unit 1122 performs calculation for setting the white balance parameters. The feature point extraction calculation unit 1123 extracts, according to a predetermined algorithm, feature points such as the shape, position, and size of a person from the image data and stores them in the storage unit 1127.
From the detected face size, eye width, and the like, together with the focal length of the zoom lens detected by the detector 121, it also calculates the approximate distance to each extracted person and stores it in the storage unit 1127 together with the extraction date and time. This distance calculation method will be described with reference to FIG. 22, which shows the case where the distance to a person is calculated from the extracted eye width. Let A be the average actual eye width of a typical adult, a the eye width imaged on the image sensor, L the distance from the imaging lens to the person, and f the focal length of the lens. From the figure, the following proportional relation is easily derived.
[0018]
A / L = a / f
From this, the distance to the person is L = (A / a) · f. The storage unit 1127 temporarily stores the feature points extracted in this way and the distances calculated from them. From the stored feature points, the user selects and registers those to be kept. The contents to be registered and the registration method will be described later in detail with reference to FIG. 4.
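As a check of this formula, a minimal sketch of the distance calculation follows. The average adult eye width used here is an illustrative assumption, not a value taken from the patent.

```python
def estimate_distance_m(avg_eye_width_m, imaged_eye_width_m, focal_length_m):
    """From similar triangles A / L = a / f, so L = (A / a) * f."""
    return (avg_eye_width_m / imaged_eye_width_m) * focal_length_m

# A = 0.065 m (assumed average adult eye width), a = 0.0005 m on the sensor,
# f = 0.050 m focal length: the person is roughly 6.5 m away.
distance = estimate_distance_m(0.065, 0.0005, 0.050)
```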
[0019]
A bandpass filter (BPF) 1124 extracts high-frequency components in a predetermined band from the imaging signal in a focus detection area provided within the imaging region. The output of the BPF 1124 is input to the evaluation value calculation unit 1125, where the absolute values of the high-frequency components are integrated and the result is calculated as a focus evaluation value. The AF calculation unit 1126 performs contrast-method AF calculation based on these focus evaluation values. The CPU 112 adjusts the focusing lens of the photographing lens 101 using the calculation result of the AF calculation unit 1126 and performs the focusing operation.
[0020]
An operation unit 116 connected to the CPU 112 includes a power switch 1161 for turning the camera on and off, a half-press switch 1162 and a full-press switch 1163 that turn on and off in conjunction with the release button, a setting button 1164 for setting the various shooting modes, an up/down (U/D) button 1165 for stepping through reproduced images, and the like. The setting button 1164, in combination with the U/D button 1165, can select and set alphabetic characters, hiragana, katakana, simple kanji, and so on in order to name extracted feature points. The U/D button 1165 is also used to select a desired person from a plurality of extracted persons, and to manually drive the zoom lens toward the telephoto or wide side during shooting.
[0021]
When the subject brightness is low, the strobe 122 is caused to emit light. The strobe also has a pre-flash function that emits auxiliary light before the actual exposure, both to prevent or reduce the red-eye effect in a person's eyes when the strobe is used and to measure subject brightness in advance under low-brightness conditions. Reference numeral 123 denotes a sounding body, such as a buzzer, that gives an audible warning when the camera malfunctions. In addition to the feature point information described above, the storage unit 1127 also stores the peak value of the evaluation value detected from the AF calculation result, the corresponding lens position, and the like. Image data that has undergone the various processes in the digital signal processing circuit 106 is temporarily stored in the buffer memory 105 and then recorded on an external storage medium 111, such as a memory card, via the recording/reproducing signal processing circuit 110. When image data is recorded on the storage medium 111, it is generally compressed in a predetermined format, for example the JPEG format. The recording/reproducing signal processing circuit 110 performs data compression when recording image data on the external storage medium 111, and decompression of compressed image data transferred from the external storage medium 111 or from another camera. Reference numeral 121 denotes an interface circuit that performs data communication, wirelessly or by wire, with external devices such as other digital cameras; a plurality of these interfaces may be provided at the same time.
[0022]
The monitor 109 is a liquid crystal display (LCD) device that displays the captured subject image as well as various setting menus during shooting and playback. It is also used to reproduce and display image data recorded on the storage medium 111 and image data transferred from another camera. When an image is displayed on the monitor 109, the image data stored in the buffer memory 105 is read out and converted from digital image data into an analog video signal by the D/A converter 108; the image is then displayed on the monitor 109 using this analog video signal.
[0023]
The contrast-detection AF control method employed in this camera will now be described. This method exploits the correlation between the degree of image blur and contrast: the contrast of an image is maximized when the image is in focus, and focusing uses this fact. The magnitude of the contrast can be evaluated from the magnitude of the high-frequency components of the imaging signal. That is, the BPF 1124 extracts the high-frequency components of the imaging signal, and the evaluation value calculation unit 1125 integrates their absolute values to obtain the focus evaluation value. As described above, the AF calculation unit 1126 performs AF calculation based on this focus evaluation value, and the CPU 112 adjusts the focusing lens position of the photographing lens 101 using the result to perform the focusing operation.
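A minimal sketch of the contrast evaluation described above. First differences of a scanline stand in for the high-frequency extraction of the BPF 1124; this substitution is an assumption of the sketch, not the patent's filter design.

```python
def focus_evaluation(scanline):
    """Integrate the absolute high-frequency content of one scanline.
    The evaluation value peaks when the image is sharpest, which is what
    the contrast method climbs toward while moving the focusing lens."""
    return sum(abs(scanline[i + 1] - scanline[i]) for i in range(len(scanline) - 1))

sharp = [10, 200, 10, 200, 10]      # strong edges: in-focus image of a test pattern
blurred = [90, 110, 95, 108, 100]   # the same pattern, defocused: edges smeared out
```

An AF sweep would evaluate this at each lens position and keep the position with the largest value.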
[0024]
FIGS. 2 and 3 show the overall operation flow of a digital camera having a face recognition function. In FIG. 2, when it is detected in step S101 that the digital camera has been powered on by the power switch 1161, the operation mode of the digital camera is confirmed in step S102. Here, it is determined from the setting button 1164 whether the shooting mode for photographing a subject or the playback mode for reproducing and displaying image data recorded on the memory card is set. If the playback mode is set, the process proceeds to step S117 in FIG. 3; if the shooting mode is set, the process proceeds to step S103. In step S103, the subject image to be photographed is displayed as a moving image on the LCD monitor 109. In step S104, it is determined whether the camera is set to perform feature point extraction on the displayed image according to a predetermined algorithm; the setting button 1164 is used for this setting. If feature point extraction is not enabled, the process proceeds to step S113 and a normal shooting operation is performed. If it is enabled, the process proceeds to step S105, and feature points and their position information are extracted from the displayed image for each frame, or every two to three frames, of the moving image data displayed on the LCD monitor 109. The extracted feature points include the outline, direction, position, and size of a person's face, eyes, pupils, eyebrows, nose, mouth, ears, hands, feet, glasses, and the like. Furthermore, by extracting hairstyle, skeleton, and clothing types, it is possible to discriminate gender and race and to estimate age. Not only humans but also animals such as dogs, cats, and birds, and general subjects such as houses and cars, can be extracted.
In the following description, feature points are extracted mainly for humans.
[0025]
In step S106, it is determined whether any of the extracted feature points matches a feature point registered in advance in the storage unit 1127 of the digital camera. If there is no matching feature point, a marker indicating that a feature point has been detected is superimposed in step S107 on the image displayed on the LCD monitor 109. If a feature point matching a registered feature point is detected, it is superimposed in step S108 with a marker different from that of the other feature points so that it can be distinguished. FIG. 15 shows an example of the display result. Here, one of the five persons on the screen is too far away, and thus too small, for face feature points to be detected; face feature points are detected for the remaining four persons, and one of them is determined to be registered. The three faces whose feature points were merely detected are surrounded by wavy lines, and the one registered person is surrounded by a solid line. Furthermore, when personal name information such as a person's name is registered together with the feature point information, it is displayed at the same time, as shown in FIG. This makes confirmation of the subject even more certain.
[0026]
In this embodiment, the priority order used when selecting the AE area or AF area, described later, is also registered as feature point information. FIG. 13 shows an example of how feature points are recorded in the storage unit 1127. In FIG. 13, feature points that have been given names, such as Mr. A, B-ko, and C-chan, and unnamed feature points, registered in order as "no name", are stored. In Mr. A's registered contents, a priority of 1 is further set for use when selecting the AE area or AF area mentioned above. Thus, if, for example, Mr. A and C-chan are both extracted from the shooting screen, the area including Mr. A is preferentially set as the AE area or AF area. This priority can be changed arbitrarily. The date when Mr. A's feature points were registered is recorded next as the registration date. Here, the registration date indicated by (1) is the date when Mr. A was first registered, while (2) and (3) are the dates when additional feature points of Mr. A, photographed differently from (1), for example facing sideways, facing backwards, or wearing glasses, were added and registered.
[0027]
Registering a plurality of feature point sets for the same person in this way, with and without glasses, wrinkles, and so on, improves the accuracy of identifying that person from the extracted feature points. The contents of these feature points can be displayed on the LCD monitor 109 and arbitrarily added to or deleted. In addition to the priority and registration date, simple comments, processing methods that are effective during recording or playback when the feature point is detected (white balance settings, contour compensation settings, etc.), the distance to the feature point, and the like may also be recorded. The actual data of the feature points set to be registered in this way are recorded sequentially in the feature point data area below.
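The registration structure of FIG. 13, together with the priority-based choice of the AE/AF target, can be pictured roughly as follows. The field names and annotations are hypothetical; the patent does not specify a data layout.

```python
registry = {
    # One entry per person; several feature sets may be stored for the same
    # person (frontal, profile, with glasses, ...), as with dates (1) to (3).
    "Mr. A":  {"priority": 1, "registered": ["(1) first", "(2) profile", "(3) glasses"]},
    "C-chan": {"priority": 2, "registered": ["(1) first"]},
}

def pick_priority_target(extracted_names, registry):
    """Among the extracted persons that are registered, pick the one whose
    priority number is lowest (priority 1 beats priority 2)."""
    known = [n for n in extracted_names if n in registry]
    return min(known, key=lambda n: registry[n]["priority"], default=None)

# Mr. A and C-chan are both in the frame; Mr. A wins the AE/AF area.
target = pick_priority_target(["C-chan", "Mr. A", "stranger"], registry)
```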
[0028]
Steps S109 to S114 perform specific processing according to the extracted feature points. Even when a feature point is detected, which of these steps are adopted can of course be selected arbitrarily with the setting button 1164; in the following, the case where all of these steps are enabled is described. In step S109, the displayed extraction result is registered. The registration in step S109 will be described in detail with reference to FIG. 4. When registration is complete, the process proceeds to the shooting angle-of-view setting in step S110. With the setting made in step S110, even when there are a plurality of persons in the shooting screen, the camera can automatically determine the target subject and zoom in so that the person to be captured is at the center of the screen. This function is effective when shooting, for example, at a child's athletic meet or recital. Details of step S110 will be described with reference to FIG. 5. In step S111, the shooting conditions are set. Here, when there are a plurality of persons in the shooting screen, a predetermined area including the desired person is set as the AF area or AE area, or the aperture is set according to the size or number of persons. Details of step S111 will be described with reference to FIGS. In step S112, the strobe is set. Details of step S112 will be described with reference to FIG. Steps S109 to S112 are settings made before shooting; their order can be changed arbitrarily according to the shooting screen, and the contents set in each step can be reset.
[0029]
In step S113, the subject is photographed. Here, a person is detected and the number of shots is set automatically, or the actual exposure is performed in accordance with an operation at the time of shooting the person. This photographing step will be described in detail with reference to FIGS. After photographing is completed, recording processing is performed in step S114. Here, the face of the subject is detected and the white balance is changed, or facial spots and moles are automatically reduced. Details of step S114 will be described with reference to FIG. In step S115, the processed image data and the feature point information are recorded on the memory card as a single file. In step S116, it is determined whether the power switch is off. If it is not off, the process returns to step S102 to determine the operation mode of the digital camera again; if it is off, this sequence ends.
[0030]
If the playback mode is set in step S102, the image data recorded on the memory card 111 is reproduced in step S117 of FIG. 3 and displayed on the LCD monitor 109. The reproduced image may be a still image or a moving image. In step S118, it is determined whether the camera is set to perform feature point extraction on the reproduced image, as in step S104. If not, the process proceeds to step S126 and a normal playback operation is performed. If feature point extraction is enabled, the process proceeds to step S119, where it is determined whether feature point information has already been added to the reproduced image data. If it has not, feature points are extracted from the image data in step S120, as in step S105. If feature point information has been added, the process proceeds to step S121, and the feature point information added to the image data is read out. The extracted or read feature points and feature point information are then superimposed on the reproduced image. The marker display or icon display described above may be used in place of the feature points themselves.
[0031]
In step S123, it is determined whether the extracted or added feature points match feature points registered in the storage unit 1127. As in step S106 described above, if there is no matching feature point, a marker or icon indicating that a feature point has been detected is superimposed in step S124 on the image displayed on the LCD monitor 109. If a feature point matching a registered feature point is detected, it is marked as registered in step S125 and superimposed with a marker different from that of the other feature points. In step S126, the displayed extraction result is registered; this registration will also be described with reference to FIG. 4. When the registration in step S126 is complete, it is determined in step S127 whether the next image data is to be reproduced. If the next image is selected with the U/D button 1165, the process returns to step S117. If it is not selected, the process proceeds to step S128, where it is determined whether the power switch 1161 is off. If it is not off, the process returns to step S102 in FIG. 2; if it is off, this sequence ends.
[0032]
<< Registration of feature point information >>
The step of registering feature point information will be described with reference to FIG. 4. The registration step of FIG. 4 is common to step S109 of FIG. 2 and step S126 of FIG. 3. If the image data is captured image data, it is determined in step S151 whether feature points identical to those extracted by the feature point extraction calculation unit 1123 are registered in the storage unit 1127. If the image data is reproduced image data, the feature points and feature point information added to the reproduced image data are read in step S151, and it is determined whether feature points or feature point information identical to those read are stored in the storage unit 1127 in the recording form described with reference to FIG. 13. If no feature points or feature point information are added to the reproduced image data, feature points are extracted from the reproduced image just as for captured image data. Here, the feature point information added to the image data will be described with reference to FIG. 14. As shown in FIG. 14, the image data file DSC002 contains, in addition to the actual image data, appended feature point information and feature point data. In the case of FIG. 14, two persons, Mr. A and C-chan, are registered as feature point information. The registered contents include the priority, the date when Mr. A or C-chan was extracted from this image data, and the centroid of the feature points; since there are two feature point sets for Mr. A, including one extracted from image data other than DSC002, both are added and registered. Here, as in FIG. 13, simple comments and processing methods for recording or playback may also be recorded, and the distance to the feature point calculated by the feature point extraction calculation unit 1123 may be recorded.
The data content of the feature point information can be changed, added to, or deleted arbitrarily. The actual feature point data of Mr. A and C-chan are recorded in order in the feature point data area below.
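The file layout of FIG. 14 can be pictured roughly as follows. The patent does not specify an encoding, and every field name here is hypothetical; JSON is used purely for illustration of the round trip between the in-memory record and the data appended to the image file.

```python
import json

# Hypothetical record mirroring FIG. 14 for image file DSC002.
dsc002 = {
    "image": "DSC002",
    "feature_point_info": [
        {"name": "Mr. A",  "priority": 1, "extracted": "2003-04-20", "centroid": [812, 344]},
        {"name": "C-chan", "priority": 2, "extracted": "2003-04-20", "centroid": [388, 402]},
    ],
    # Actual feature point data follow in order, including Mr. A's second set
    # taken from an image other than DSC002.
    "feature_point_data": ["<A set 1>", "<A set 2>", "<C-chan set>"],
}

blob = json.dumps(dsc002)       # appended to (or stored beside) the image file
restored = json.loads(blob)     # read back when the image is reproduced
```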
[0033]
If, in step S151, the feature points of the captured image data, or the feature points and feature point information added to the reproduced image data, are already registered in the storage unit 1127, the process proceeds to step S152. Here, it is determined whether the registered feature points or feature point information are to be added to or changed; specifically, the extracted person's name and priority are input or changed. If nothing is added or changed in step S152, the process proceeds to step S156; if something is added or changed, the process proceeds to step S153.
[0034]
If the feature points and feature point information are not registered in step S151, the process proceeds to step S153. Here, the extracted feature points and the feature point information to be registered are displayed on the LCD monitor 109. In step S154, it is determined whether an instruction to register the displayed feature points or feature point information has been given. In principle, as long as a newly detected feature point is not exactly the same as an already registered one, the newly extracted feature point is added to the storage unit 1127 and stored in step S155. This storage instruction is given by selecting a registration-execution display shown on the screen of the LCD monitor 109 (not shown) with the setting button 1164. This gradually increases the accuracy of person identification. If the extracted feature points are already registered, or if feature points entirely irrelevant to the user have been extracted, no new registration instruction is given and the process proceeds to step S156. In step S156, it is determined whether other feature points in the same screen are to be registered. If another feature point is selected, the process returns to step S151 and registration proceeds in the same manner as before.
[0035]
If no other feature point is selected, the process proceeds to step S157 to determine the operation mode of the digital camera. If the shooting mode is set, this registration step ends; the registration operation is performed each time the display screen changes with a change of subject. If the camera is in playback mode, the process proceeds to step S158, where it is determined whether the card-recording-execution display (not shown) has been selected with the setting button 1164. When the recording instruction is selected, the changed or newly added feature points or feature point information are appended to the original image and recorded on the memory card. If it is not selected, the additional information is not updated and this registration step ends.
[0036]
《Shooting angle setting》
The setting of the shooting angle of view in step S110 of FIG. 2 will be described with reference to FIG. 5. This is a sequence suitable when, for example, the user wants to photograph his or her own child, C-chan, at an athletic meet. First, in step S171, the person to be photographed (for example, C-chan) is selected by name with the setting button 1164 from the feature point information stored in the storage unit 1127 and registered in advance as the priority subject. The person registered as the priority subject takes precedence over the priority order attached to the feature points described above. In step S172, it is determined whether a person (mainly a face) has been extracted from the shooting screen. If not, the process proceeds to step S173, and the CPU 112 drives the driver 113 to zoom the zoom lens toward the long-focus end; this zooming may be manual or automatic. In step S174, it is determined whether the zoom lens has reached the maximum zoom position; if not, the process returns to step S172, and this is repeated until a person is extracted. If the zoom lens reaches the maximum focal position in step S174, the process proceeds to step S175, a warning that no person has been detected is displayed on the LCD monitor 109 (not shown), and the angle-of-view setting step ends. When the photographer changes the shooting direction and the shooting screen changes, the steps from step S172 are repeated.
[0037]
If a face is detected in step S172, a marker is superimposed on the extracted face in step S176, as shown in FIG. 15. By looking at this display, the user can confirm whether the face of the preset person is in the shooting screen; if not, the screen can be panned so that the desired person is easily captured. In step S177, it is determined whether the set person in the screen is larger than a predetermined size. If so, this step ends; if not, the process proceeds to step S178, where the CPU 112 automatically zooms up the zoom lens. At this time, the VR lens described above is also driven by the driver 113 so that the center of gravity of the extracted subject does not deviate from the vicinity of the center of the screen.
[0038]
In step S179, it is determined whether the set person's face has exceeded the predetermined size. If not, the process returns to step S177 and the zoom lens and VR lens continue to be driven. If the maximum zoom position is reached in step S180, the process proceeds to step S181, where a warning is displayed on the LCD monitor 109 (not shown) and the buzzer 123 also sounds, and the sequence ends. If the face of the desired person exceeds the predetermined size in step S179, this sequence also ends. The predetermined size is set in advance with the setting button 1164 and is, for example, about 10% of the entire screen. It is also possible to skip the automatic zooming of step S178 and simply keep the desired person's face at the center of the screen with the VR lens; the user can then manually zoom up the centered subject to the preferred size. In this way, the user's own child can be reliably found and recorded among many children at an athletic meet, concert, or school presentation. In the above description, zooming up is performed automatically when the face is small; conversely, when the face is too large, the camera may automatically zoom down to the predetermined face size. Similarly, if the user changes the screen after the maximum zoom position is reached in step S174, the camera may zoom down until a face is extracted. Since the sequences in these cases are almost the same as that for zooming up, their description is omitted.
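The framing loop of steps S172 through S181 can be sketched as follows. This is an illustrative Python model, not part of the patent disclosure: `detect_face`, `zoom_step`, and the 10% size threshold are hypothetical stand-ins for the camera's feature-point extraction and lens-drive hardware.

```python
def frame_priority_person(detect_face, zoom_step, max_zoom=10,
                          min_face_fraction=0.10):
    """Zoom in until the registered person's face fills the target fraction.

    detect_face(zoom) -> fraction of the screen the face occupies
                         (0.0 when no face is extracted).
    zoom_step(zoom)   -> next zoom position toward the long-focus end.
    Returns (zoom, status), where status is 'framed' or 'warn'.
    """
    zoom = 1
    # S172-S175: zoom up until a face is extracted or max zoom is reached.
    while detect_face(zoom) == 0.0:
        if zoom >= max_zoom:
            return zoom, 'warn'      # S175: "no person detected" warning
        zoom = zoom_step(zoom)
    # S177-S181: keep zooming until the face exceeds the predetermined
    # size (about 10% of the whole screen in the text).
    while detect_face(zoom) < min_face_fraction:
        if zoom >= max_zoom:
            return zoom, 'warn'      # S181: warn on monitor and buzzer
        zoom = zoom_step(zoom)
    return zoom, 'framed'
```

In use, `detect_face` would be driven by the feature point extraction calculation unit, and `zoom_step` by the driver 113.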
[0039]
<Setting shooting conditions>
The setting of shooting conditions will be described with reference to FIGS. 6 to 8. FIG. 6 is a flow for setting an optimum depth of focus by changing the aperture value according to the distance to each subject when a plurality of subjects are extracted. In step S201, it is determined whether a human face outline or eyes have been detected. If neither is detected, it is judged that a distant view such as a landscape is being shot, and the process proceeds to step S208, where the aperture value is set large to increase the depth of focus. If a face outline or eyes are detected in step S201, the process proceeds to step S202, where the zoom lens position (focal length) at that time is detected by the detector 121 and stored in the storage unit 1127. In step S203, the distance to the subject is calculated from the size of the extracted face outline or the eye width together with the stored zoom lens position, and is stored in the storage unit 1127. In step S204, it is determined whether the distance calculation has been completed for all persons in the shooting screen; if not, the process returns to step S203 to calculate and store the distance for each remaining person.
[0040]
When the distance calculation is completed for all the extracted persons, the process proceeds to step S205 to determine the number of extracted persons. If the number of persons is equal to or greater than a predetermined value, the picture is judged to be a group photo, and the process proceeds to step S208, where the aperture value is increased to deepen the depth of focus so that all persons are in focus. Specifically, based on the distance to each person detected in step S203, the depth of focus needed to bring all the persons into focus is obtained, and the corresponding aperture value is set. If the number of persons is smaller than the predetermined value in step S205, the process proceeds to step S206, where the size of each face is determined. If the largest face is equal to or larger than a predetermined size, the picture is judged to be a portrait, and the process proceeds to step S207, where the aperture value is decreased to make the depth of focus shallower. If the face size is smaller than the predetermined size in step S206, the picture is judged to be a commemorative photo including a landscape, and the process proceeds to step S208, where the aperture value is increased to deepen the depth of focus. The predetermined number of persons is preset to about three or four.
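The decision tree of FIG. 6 (steps S205 through S208) can be condensed into a short sketch. The threshold values below are placeholders for the patent's "predetermined" values, and the labels 'deep' and 'shallow' stand in for the actual aperture settings:

```python
def choose_aperture(face_count, max_face_fraction,
                    group_threshold=4, portrait_fraction=0.10):
    """Return 'deep' (large aperture value) or 'shallow' per FIG. 6."""
    if face_count == 0:
        return 'deep'        # landscape / distant view -> S208
    if face_count >= group_threshold:
        return 'deep'        # S205 -> S208: group photo, everyone in focus
    if max_face_fraction >= portrait_fraction:
        return 'shallow'     # S206 -> S207: portrait, blur the background
    return 'deep'            # commemorative photo including landscape
```

A 'deep' result corresponds to increasing the aperture value; 'shallow' to decreasing it.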
[0041]
In this way, even if the user has set the shooting mode to a landscape mode in advance, shooting automatically switches to the shallow depth of focus suitable for portraits when a person is detected in the shooting screen. Conversely, if no person is detected when the portrait mode is set, the mode can be automatically changed to the deep-focus landscape mode. In the subject-distance calculation described here, face size and eye width differ between adults and children, and there are also individual differences; the result is therefore an approximate distance calculated from the average face size or eye width of an adult or a child. The accurate in-focus position is determined from the peak position obtained by the contrast method described above.
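The distance estimate from eye width and focal length (the geometry of FIG. 22) follows from simple lens magnification: distance is roughly the focal length times the ratio of the real eye spacing to its size on the sensor. The sketch below is illustrative; the 62 mm average interpupillary width and the pixel-pitch interface are assumptions, not values from the patent:

```python
def estimate_distance_mm(focal_length_mm, eye_width_px, pixel_pitch_mm,
                         assumed_eye_width_mm=62.0):
    """Approximate subject distance from the imaged eye spacing.

    Uses the thin-lens magnification relation d ~ f * W / w, where W is
    an assumed average eye spacing (an illustrative constant here) and
    w is the width the eyes span on the sensor.
    """
    w_mm = eye_width_px * pixel_pitch_mm   # eye spacing on the sensor
    return focal_length_mm * assumed_eye_width_mm / w_mm
```

As the text notes, this is only approximate because real eye spacing varies by person; the exact focus position still comes from the contrast-method peak.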
[0042]
Next, setting of the AF area or the AE area will be described with reference to FIGS. 7, 16, 17, and 18. FIG. 7 describes the AF area setting, but the AE area is set in exactly the same way. In step S221 of FIG. 7, it is first determined whether there is a person within a predetermined range of the shooting screen. Here, the presence or absence of a person is determined by whether a face outline has been extracted. If there is no person, the process proceeds to step S222, and a preset fixed area, such as the screen center, is set as the AF area. Even if a person is extracted, if the person is near a corner of the screen, it is judged that the photographer does not intend to shoot with emphasis on that person, and the person is excluded. FIG. 16 shows an example of the shooting screen in this case: the person marked with a thick wavy line lies outside the range indicated by the thin wavy line, so the preset thick solid-line frame at the screen center is set as the AF area. When multipoint distance measurement is available, this AF area can also be set at a position other than the screen center.
[0043]
If a person is extracted within the predetermined range of the screen in step S221, the process proceeds to step S223, where it is determined whether a plurality of faces have been extracted. If only one, the process proceeds to step S228; if more than one, to step S224. In step S224, the largest of the extracted faces is selected and set as the AF area, and this is indicated on the display. FIG. 17 shows a display example of the shooting screen in this case: the largest face, displayed with a solid line, is set as the AF area. In step S225, it is determined whether a person other than the one in the automatically set AF area is to be set as the AF area. When the photographer operates the setting button 1164 to select one of the other persons displayed with wavy lines, the AF area moves in order according to the operation. If the priority order described above has been stored, the persons are selected according to that order; otherwise they may be selected in order of extracted face size. When the selection is completed in step S227, the process proceeds to step S228 to determine whether the size of the extracted face area is equal to or larger than a first predetermined value. If it is smaller than the first predetermined value, the process proceeds to step S229, and the AF area is enlarged to a predetermined size (for example, the first predetermined value) containing the extracted face, because if the extracted face area is too small, the accuracy of the AF calculation described above deteriorates. FIG. 18 shows a display example in this case.
[0044]
If the face area extracted in step S228 is equal to or larger than the first predetermined value, the process proceeds to step S230, where it is further determined whether the face area is equal to or larger than a second predetermined value. If so, the picture is judged to be a portrait, and the process proceeds to step S231, where the extracted eye position, rather than the entire face, is set as the AF area. FIG. 19 shows a display example in this case. If the face area is smaller than the second predetermined value, the process proceeds to step S232, and the previously extracted face area is set as the AF area. Optimum values for the first and second predetermined values are set in advance by photographing various subjects.
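The AF-area sizing rules of steps S228 through S232 reduce to a three-way decision on face area. The sketch below is an illustrative model; the threshold values and the returned labels are placeholders for the patent's experimentally determined values:

```python
def af_area_for_face(face_area, first_threshold, second_threshold):
    """Pick the AF target per steps S228-S232 of FIG. 7.

    first_threshold < second_threshold are the patent's first and
    second predetermined values (set by experiment).
    """
    if face_area < first_threshold:
        # S229: enlarge the AF area to a minimum size so the contrast
        # data used in the AF calculation stays reliable.
        return 'fixed_min_area'
    if face_area >= second_threshold:
        return 'eyes'    # S231: portrait -> focus on the extracted eyes
    return 'face'        # S232: use the extracted face region as-is
```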
[0045]
In the above description, the largest face is selected first in step S224, but the person with the highest registered priority, or the priority photographed person described in the section on setting the shooting angle of view, may be displayed first. Alternatively, the distance to each person may be calculated at the same time as face extraction, and the persons selected in order of shortest distance. For the priority photographed person, the focus lens movement range may also be limited, based on the calculated distance, to a predetermined range before and after that distance; this makes the AF less susceptible to other subjects crossing the screen and makes the AF follow-up operation for the priority photographed person reliable and fast. In addition, when the continuous shooting mode is set for sports photography, the shooting distance for the first frame is determined from the evaluation value peak obtained by the contrast method; for subsequent frames, the distance to the subject can easily be calculated from the change in the extracted face outline or eye width relative to the previous frame, together with the zoom lens position at that time. In this way, AF control that follows subject movement at high speed can be realized.
[0046]
The AF area setting sequence described so far can be applied in the same way to the AE area setting. In this case as well, optimum values for the first and second predetermined values are determined in advance by experiment, as for the AF area.
[0047]
Next, the change of the shooting mode will be described with reference to FIG. 8. In step S241, it is determined whether a person shooting mode suitable for photographing people is set. In this portrait mode, for example, the aperture is set to a value close to full open to blur the background, the white balance is set to emphasize skin tones, and the distance measurement mode is set to an AF mode. If the person shooting mode is set, the process proceeds to step S242, where it is determined whether a person has been extracted. If not extracted, the process proceeds to step S243, where a warning is given by the buzzer or the monitor, and in step S244 the mode is changed to a landscape shooting mode suitable for distant views, ending this sequence. In this landscape mode, the aperture is set to a large value to increase the depth of focus, and the focus lens is driven to a fixed position that, given that depth of focus, keeps subjects in focus out to infinity. The white balance is set as for normal shooting, or set to emphasize the green of trees and the blue of the sky for daytime shooting. If a person is detected in step S242, this step ends. If it is determined in step S241 that the person shooting mode is not set, the process proceeds to step S245 to determine whether a person has been detected. If not, this sequence ends; if so, the process proceeds to step S246, where a warning is given by the buzzer or the monitor, and in step S247 the mode is changed to the person shooting mode, ending this sequence.
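The mode-change logic of FIG. 8 is symmetric and can be sketched in a few lines. The mode names are illustrative labels, not identifiers from the patent:

```python
def adjust_mode(current_mode, person_detected):
    """Auto-switch between portrait and landscape modes per FIG. 8.

    Returns (new_mode, warned): warned is True when the buzzer/monitor
    warning of steps S243 or S246 would be issued before switching.
    """
    if current_mode == 'portrait' and not person_detected:
        return 'landscape', True   # S243-S244: warn, switch to landscape
    if current_mode != 'portrait' and person_detected:
        return 'portrait', True    # S246-S247: warn, switch to portrait
    return current_mode, False     # mode already matches the scene
```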
[0048]
<Flash settings>
A method for setting the light emission amount of the strobe will be described with reference to FIG. 9. In step S251, it is determined whether the subject brightness measured by the AE calculation unit 1121 for the subject in the predetermined AE area is greater than a predetermined value; the subject here is not limited to a person. If the subject is darker than the predetermined value, the process proceeds to step S261; if brighter, to step S252. In step S252, it is determined whether a person has been extracted from the shooting screen; here again, a person is judged present if a face outline has been extracted. If no face outline is extracted, the process proceeds to step S253 and the strobe is set to non-emission. Based on this setting, the CPU 112 keeps the strobe from firing during shooting, and at the time of actual photographing the subject is exposed with the shutter speed and aperture value based on the calculation result of the AE calculation unit 1121.
[0049]
When a face outline is extracted in step S252, the process proceeds to step S254, and the brightness of the extracted face portion is measured. In step S255, if the measured face brightness is brighter than the predetermined value, the process proceeds to step S253; if darker, to step S256. In step S256, the distance to the extracted person is calculated from the detected face size or eye width and the zoom lens position, in the same manner as in step S203 of FIG. 6. In step S257, it is determined whether the distance to the person is within the proper exposure range of the strobe. If so, the process proceeds to step S258, where pre-emission for red-eye reduction before shooting is set, and in step S259 the strobe output is set, based on the calculated distance, so that the extracted person's face is properly exposed.
[0050]
Thereby, at the time of actual photographing, the CPU 112 sets the shutter speed and aperture value calculated by the AE calculation unit 1121, so the entire screen excluding the person is photographed with proper exposure, while for the person, who is darker than the surroundings, the strobe fires with the emission amount set from the distance. As a result, the person can also be photographed with proper exposure; this function is particularly effective in backlit photography. Prior to the main emission, the CPU 112 controls the pre-emission for red-eye reduction based on the setting in step S258; this pre-emission may be performed a plurality of times. If the distance is out of the proper exposure range in step S257, the process proceeds to step S260 to display a warning (not shown) that the person will not be properly exposed.
[0051]
If the subject is dark in step S251, the process proceeds to step S261, where it is determined whether a face outline has been extracted from the shooting screen. If a face outline is extracted, the process proceeds to step S262, and the distance to the extracted person is calculated in the same manner as in step S256. In step S263, it is determined whether the distance to the person is within the proper exposure range of the strobe. If it is out of range, the process proceeds to step S260 to display a warning that the person will not be properly exposed. If it is within range, the process proceeds to step S264, and the strobe is set to pre-emit before photographing. This pre-emission serves not only for red-eye reduction as in step S258, but also for determining the main emission amount from the light reflected from the person during pre-emission. In step S265, a setting is made to determine the strobe emission amount at the time of photographing based on the light reflected from the face portion during pre-emission. As before, the pre-emission may be performed a plurality of times, and the pre-emission for red-eye reduction may be separated from the pre-emission for reflected-light measurement. If no face is extracted in step S261, the process proceeds to step S266, where the strobe emission amount is set based on the AE calculation result for the subject brightness. Instead of setting the strobe to pre-emit for red-eye reduction in step S258 or step S264, the camera may be set to detect the pupils in the image after shooting and correct any red-eye portions in software.
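The full strobe decision tree of FIG. 9 (steps S251 through S266) can be condensed as follows. This is an illustrative sketch; the boolean inputs and the returned setting labels are stand-ins for the camera's measured values and internal states:

```python
def strobe_setting(scene_bright, face_found, face_bright,
                   dist_m, flash_range_m):
    """Condense the strobe decision tree of FIG. 9.

    scene_bright : AE-measured brightness above the threshold (S251)
    face_found   : a face outline was extracted (S252 / S261)
    face_bright  : the face portion itself is bright enough (S255)
    dist_m       : estimated distance to the person
    flash_range_m: proper exposure range of the strobe
    """
    if scene_bright:
        if not face_found or face_bright:
            return 'no_flash'                       # S253
        if dist_m > flash_range_m:
            return 'warn'                           # S260: out of range
        return 'redeye_pre_plus_distance_power'     # S258-S259
    if not face_found:
        return 'ae_based_power'                     # S266
    if dist_m > flash_range_m:
        return 'warn'                               # S260
    return 'pre_flash_metered_power'                # S264-S265
```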
[0052]
<Photographing>
A sequence of two photographing methods different from the normal one will be described with reference to FIGS. 10, 11, 20, and 21. FIG. 10 is a sequence that automatically captures images at a plurality of peak positions of the focus evaluation value obtained from the AF area when the full-press SW 1163 is pressed once; this yields a plurality of pieces of image data, one focused on the subject corresponding to each peak position. In step S301, when it is detected that the half-press SW 1162 is turned on, in step S302 the CPU 112 moves the focus lens from the nearest position to infinity, calculates the evaluation value, and detects its peaks. In step S303, it is determined whether there are a plurality of peaks. If there is only one, the process proceeds to step S306; if a plurality are detected, to step S304. In step S304, the feature point extraction calculation unit 1123 determines whether a person has been extracted. If so, the distance to the extracted person is calculated from the extracted eye width and the zoom lens position in the same manner as before, and it is determined which of the plurality of peaks this distance corresponds to. In step S305, the position of the closest person is selected as the first shooting position, and the CPU 112 drives the focus lens to the peak position corresponding to that person.
[0053]
If there is only one peak position in step S303, that peak position is selected in step S306 (in this case it is also the nearest peak position). Likewise, when a plurality of peaks are detected but no person is detected in step S304, the process proceeds to step S306 and the nearest position is selected as the photographing position.
[0054]
In step S307, it is determined whether the full-press SW 1163 is turned on. If not, the process proceeds to step S313; if so, to step S308. In step S308, exposure is performed at the peak position selected in step S305 or S306, and the image data accumulated by the exposure is read out. In step S309, it is determined whether there is a peak position corresponding to another person. If there is, the process returns to step S308, and the image data accumulated by a second exposure at that position is read out. If there is no other peak corresponding to a person, the process proceeds to step S311 to determine whether exposure at the nearest peak position has been completed. If not, the process proceeds to step S312 to expose at the nearest position; if it has, this sequence ends. If the full-press SW 1163 has not been pressed in step S307, the process proceeds to step S313, where it is determined whether the half-press SW 1162 is pressed. If it is, the process returns to step S307, locking the focus until the full-press SW 1163 is pressed. If the half-press SW 1162 is not pressed in step S313, this sequence ends.
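The exposure ordering of steps S305 through S312 can be sketched as a small helper. Peaks are represented here as focus distances (nearest = smallest); the representation is an assumption for illustration:

```python
def shoot_at_peaks(person_peaks, nearest_peak):
    """Order of exposures in FIG. 10 (steps S305-S312).

    person_peaks: focus positions of peaks judged to be persons.
    nearest_peak: the closest peak overall, person or not.
    Shoots the nearest person first (S305/S308), then the remaining
    person peaks (S309), and finishes at the overall nearest peak if
    it was not a person (S311-S312).
    Returns the list of focus positions exposed, in shooting order.
    """
    shots = sorted(person_peaks)       # nearest person first
    if nearest_peak not in shots:
        shots.append(nearest_peak)     # nearest non-person peak last
    return shots
```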
[0055]
An actual photographing example will be described with reference to FIGS. 20 and 21. FIG. 20 shows a shooting screen in which a flower is positioned in front of a person. In normal AF shooting, focusing is performed with closest-subject priority, so only one image, focused on the flower in front, is shot. FIG. 21 shows the change of the evaluation value with focus lens position in this case, with the entire screen taken as the AF area; two peaks (P1, P2) are detected in the focus evaluation value. In normal AF, the nearest peak P2 is selected as long as it exceeds a certain level, regardless of the relative sizes of P1 and P2. Simply detecting subject contrast therefore cannot determine whether the person is at position x1 corresponding to P1 or at position x2 corresponding to P2. By contrast, calculating the distance to the person from the face size or eye width makes it possible to determine that x1 is the person's peak. Focused image data can therefore be obtained by shooting twice, once at the nearest position x2 and once at the person position x1. Alternatively, shooting may be performed only at the person's peak position and skipped when the nearest peak is not a person. As with the setting of the shooting angle of view described above, a priority photographed person may also be registered in the camera in advance, and photographing performed only once at the peak corresponding to that person.
[0056]
As a result, even when a plurality of persons are in the AF area, an image in which the desired person is in focus can be reliably obtained. When there are many persons, the camera may shoot only at the person positions whose peaks exceed a certain evaluation value, rather than at every person, or a maximum number of continuous shots may be set. As described above, the distance calculated from the feature points is not exact, so it is used only to supplementarily identify the person-position peak when the contrast method yields multiple peaks; this allows accurate focusing.
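The supplementary role of the feature-point distance reduces to a nearest-match search over the exact contrast-method peaks, which can be sketched in one line of logic:

```python
def person_peak(peak_positions, estimated_distance):
    """Pick the contrast-method peak closest to the feature-point distance.

    The feature-point distance (from average face size or eye width) is
    only approximate, so it merely disambiguates among the exact peaks;
    the returned peak position is what the focus lens is driven to.
    """
    return min(peak_positions, key=lambda p: abs(p - estimated_distance))
```

For the FIG. 21 example, an estimate near x1 selects P1's position even though P2 is the nearest peak.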
[0057]
Next, a method for preventing photographing with the subject's eyes closed will be described based on FIG. 11. When the full-press SW 1163 is pressed in step S321, the feature point extraction calculation unit 1123 detects the subject's pupils in step S322 from the image data obtained before the full-press switch was pressed. If the subject's eyes are judged closed because no pupil is detected, the actual exposure is delayed in step S323 until the subject's pupils are detected, and the process returns to step S322. If pupils are detected, the actual exposure is performed in step S324, and the exposed image data is read out in step S325. In step S326, the feature point extraction calculation unit 1123 immediately detects the pupils again from the read image data. If no pupil is detected at this point, a warning is given by the buzzer 123 in step S327, and the process returns to step S322 to confirm that pupils are detected and immediately re-expose. If pupils are detected in step S326, this sequence ends. In this way, it is confirmed before shooting that the subject's eyes are open, and immediately after shooting it is checked whether the subject was photographed with eyes closed; if so, the picture can be retaken at once. Alternatively, instead of retaking the picture, only the closed-eye portion may be corrected by software after shooting: open eyes, extracted from a moving image of the subject displayed on the monitor after shooting, replace the closed eyes.
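The blink-avoidance loop of FIG. 11 (steps S321 through S327) can be sketched as below. The `detect_pupil` and `expose` callables and the retry cap are illustrative assumptions; the patent's flow loops without an explicit limit:

```python
def capture_with_open_eyes(detect_pupil, expose, max_tries=5):
    """Blink-avoidance loop of FIG. 11.

    detect_pupil() -> True when the subject's pupils are detected.
    expose()       -> the captured frame.
    Retries until a frame with open eyes is obtained, up to a
    hypothetical retry cap; returns None if the cap is reached.
    """
    for _ in range(max_tries):
        if not detect_pupil():   # S322-S323: wait for open eyes
            continue
        frame = expose()         # S324-S325: expose and read out
        if detect_pupil():       # S326: re-check right after exposure
            return frame
        # S327: warn with the buzzer, then loop to re-shoot
    return None
```

A fuller model would run the post-exposure pupil check on the captured frame itself rather than on a fresh detection.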
[0058]
In the explanation of FIG. 11, closed eyes are detected after shooting and the picture is retaken, but the camera can also be configured to obtain the best image by detecting other defects in the shot subject and shooting again. For example, if the subject moved during shooting, this can be determined by detecting blur in the reproduced image. Or, if a face is hidden in a group photo, the number of faces before shooting can be compared with the number after shooting, and the camera can also be set to shoot again when extraction of a face outline is insufficient. Furthermore, the warning of step S327 need not be only a buzzer; a voice can announce the specific defect, for example, "Shot with eyes closed," "Shot with blur," or "Someone's face is hidden."
[0059]
<Recording process>
The processing at the time of recording accompanying feature point extraction will be described based on FIG. 12. In step S401, the feature point extraction calculation unit 1123 first determines whether a face outline has been extracted. If not, recording is performed using preset parameters for color reproduction and contour enhancement. If extracted, the process proceeds to step S402 to determine the number of extracted faces. If the number of faces is equal to or smaller than a predetermined value, the process proceeds to step S406; if greater, to step S403. A value of about three or four is suitable as the predetermined value: if more faces than that are extracted, it is judged that a group photo was taken, and in step S403 the digital signal processing circuit 106 uses parameters that emphasize skin tones for color reproduction. Further, in step S404 specific parts of the face are detected, and in step S405 the contour enhancement is weakened for the face portions other than the vicinity of those specific parts. The specific parts are, for example, the eyes, nose, mouth, ears, hair, and eyebrows. A low-pass filter is thus applied to the frequency characteristics outside the vicinity of the specific parts, so that wrinkles, moles, spots, and the like on the cheeks and forehead become inconspicuous. If it is determined in step S402 that the number of faces is equal to or smaller than the predetermined value, the process proceeds to step S406 to determine the face size; if multiple faces were detected, the largest is used. If the face area is larger than a predetermined value, the picture is judged to be a portrait, and the process proceeds to step S403 for the skin-tone emphasis processing. If the face area is smaller than the predetermined value, it is judged to be a commemorative photo including a landscape, and normal recording processing is performed.
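The recording-parameter branch of FIG. 12 (steps S401 through S406) mirrors the aperture decision of FIG. 6 and can be sketched the same way. Thresholds and labels are illustrative placeholders:

```python
def recording_params(face_count, max_face_fraction,
                     group_threshold=4, portrait_fraction=0.10):
    """Color/edge parameter choice of FIG. 12.

    Group photos and portraits both route to the skin-tone emphasis of
    S403 (with the selective contour softening of S404-S405); anything
    else gets normal recording processing.
    """
    if face_count == 0:
        return 'normal'              # S401: no face extracted
    if face_count >= group_threshold or max_face_fraction >= portrait_fraction:
        return 'skin_tone_soft'      # S403-S405
    return 'normal'                  # commemorative photo with landscape
```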
[0060]
As described above, the skin-tone emphasis in step S403 is not applied only to the face portion; instead, a parameter emphasizing skin tones is selected in place of the normal color parameter and applied to the entire image data. This is because portions that are not skin-colored originally contain little skin-tone component, so applying the skin-tone-emphasis parameter there has little effect. This eliminates the need for complicated processing such as extracting the face portion and emphasizing only its skin tones.
[0061]
In the description so far, a sharp face can also be rendered by applying strong edge enhancement to the extracted eyes, nose, mouth, ears, hair, eyebrows, and their vicinity, contrary to the processing performed in step S405. Since the effect is small for a very small face, the contour emphasis may be applied only to faces above a certain size. Only one of the skin-tone processing of step S403 and the contour enhancement of step S405 may be made selectable. It is also easy to provide a plurality of parameters for the skin-tone or contour-emphasis processing and to select whichever gives the best skin-tone or contour-emphasis level. Beyond this, age and gender may be judged so that, for an elderly person or a woman, parameters that raise saturation and brightness in addition to adjusting hue are selected. A color balance optimal for one particular ethnicity, applied unchanged to others, reproduces unnatural skin tones, so it is also effective to select color parameters matched to the ethnicity; for this purpose, the ethnicity need only be judged from the shapes of the face, limbs, ears, and nose, the pupil or skin color, the lip shape, the clothes, the hairstyle, and so on. Further, in the above description these processes are performed before recording, but they may instead be performed at the time of reproduction. In that case, in addition to the feature point information and feature point data described above, information specific to each individual, white balance processing information, and edge enhancement processing information can be recorded simultaneously in the image file described with reference to FIG. 14, so that the same processing can be performed at reproduction.
[Brief description of the drawings]
FIG. 1 is a block diagram showing a configuration of a digital camera according to the present invention.
FIG. 2 is a flowchart illustrating an operation sequence of the digital camera according to the present invention.
FIG. 3 is a flowchart illustrating an operation sequence of the digital camera according to the present invention.
FIG. 4 is a flowchart illustrating a sequence when registering feature point information.
FIG. 5 is a flowchart illustrating a sequence for setting a shooting angle of view.
FIG. 6 is a flowchart illustrating a sequence for setting shooting conditions.
FIG. 7 is a flowchart illustrating a sequence for setting other shooting conditions.
FIG. 8 is a flowchart illustrating a sequence for setting other shooting conditions.
FIG. 9 is a flowchart for explaining a sequence when setting the light emission amount of a strobe.
FIG. 10 is a flowchart illustrating an imaging sequence.
FIG. 11 is a flowchart illustrating another imaging sequence.
FIG. 12 is a flowchart illustrating a recording processing sequence.
FIG. 13 is a diagram illustrating a recording state of feature points and feature point information.
FIG. 14 is a diagram illustrating a recording state of image data and feature point information added thereto.
FIG. 15 is a display example in which markers are displayed separately for each extracted feature point.
FIG. 16 is a display example showing setting of an AF area or an AE area.
FIG. 17 is a display example showing another setting of the AF area or the AE area.
FIG. 18 is a display example showing other settings of the AF area or the AE area.
FIG. 19 is a display example showing another setting of the AF area or the AE area.
FIG. 20 is a display example showing setting of an AF area.
FIG. 21 is a diagram for explaining the positional relationship of the subjects in FIG. 20.
FIG. 22 is an explanatory diagram for obtaining the distance to the person from the focal length of the lens and the eye width.
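The computation that FIG. 22 refers to — estimating the distance to a person from the lens focal length and the eye width — can be illustrated under a simple pinhole-camera assumption. The function name and the 62 mm interpupillary width are hypothetical example values, not figures taken from the patent.

```python
# Hedged sketch of the FIG. 22 geometry under a pinhole model (an
# assumption, not necessarily the patent's exact derivation): by similar
# triangles, distance D = f * W / w, where f is the focal length, W the
# assumed real-world eye width, and w the eye width measured on the sensor.

def distance_to_person(focal_mm: float, eye_width_mm: float,
                       eye_width_on_sensor_mm: float) -> float:
    """Estimate subject distance in millimetres from similar triangles."""
    return focal_mm * eye_width_mm / eye_width_on_sensor_mm

# Example: 50 mm lens, assumed 62 mm interpupillary width, 1 mm on sensor
print(distance_to_person(50.0, 62.0, 1.0))  # -> 3100.0 mm, about 3.1 m
```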
[Explanation of symbols]
101 Photography lens
102 Aperture
103 Image sensor
104 Analog signal processor
105 Buffer memory
106 Digital signal processor
108 D / A converter
109 LCD monitor
110 Recording / playback signal processing unit
111 External storage media
112 CPU
113 Lens drive unit
114 Aperture drive unit
115 Image sensor driving unit
116 Operation member
120 interface
121 Lens position detector
122 Strobe
123 Sound generator
135 A / D Converter
1121 AE calculation unit
1122 AWB calculation unit
1124 Bandpass filter
1125 Adder
1126 AF calculation unit
1127 storage unit
1161 Power switch
1162 Half-press switch
1163 Full-press switch
1164 Setting button
1165 Up / Down button

Claims (6)

  1. A digital camera comprising: extracting means for extracting predetermined feature portions from image data; receiving means for receiving an instruction from a user; selecting means for, when a plurality of the feature portions are extracted, sequentially selecting each feature portion in a predetermined order in accordance with the instruction received by the receiving means; and display means for displaying feature portion information that identifies the feature portion selected by the selecting means.
  2. The digital camera according to claim 1, wherein the display means displays the feature portion information superimposed on the image data.
  3. The digital camera according to claim 1, further comprising determining means for determining the size of each face among the feature portions extracted by the extracting means, wherein the selecting means selects the faces in descending order of the size determined by the determining means.
  4. The digital camera according to claim 1, further comprising determining means for determining the distance to each feature portion extracted by the extracting means, wherein the selecting means selects the feature portions in order of the distance determined by the determining means.
  5. The digital camera according to claim 1, further comprising focus area setting means for setting a predetermined area including the feature portion extracted by the extracting means as a focus area for focus detection.
  6. The digital camera according to claim 1, further comprising photometric area setting means for setting a predetermined area including the feature portion extracted by the extracting means as a photometric area.
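The selection scheme of claims 1 and 3 — stepping through a plurality of extracted faces in descending size order each time a user instruction is received — can be sketched as follows. The `FaceSelector` class and its interface are a hypothetical illustration, not the patented implementation.

```python
# Hypothetical sketch of claims 1 and 3: when several faces are extracted,
# each user instruction (e.g. a button press) selects the next face, in
# descending order of face size, wrapping around after the last one.

class FaceSelector:
    def __init__(self, face_sizes):
        # "Determining means" (claim 3): order face indices largest-first.
        self._order = sorted(range(len(face_sizes)),
                             key=lambda i: face_sizes[i], reverse=True)
        self._pos = 0

    def next(self):
        """An instruction was received: select and return the next face index."""
        chosen = self._order[self._pos]
        self._pos = (self._pos + 1) % len(self._order)  # wrap around
        return chosen

sel = FaceSelector([120, 300, 80])     # pixel sizes of three extracted faces
print([sel.next() for _ in range(4)])  # -> [1, 0, 2, 1]: size order, then wrap
```

The display means of claim 1 would then draw a marker (as in FIG. 15) over the face whose index `next()` returned; claim 4's distance-ordered variant only changes the sort key.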
JP2003109886A 2003-04-15 2003-04-15 Digital camera Active JP4196714B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003109886A JP4196714B2 (en) 2003-04-15 2003-04-15 Digital camera

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
JP2003109886A JP4196714B2 (en) 2003-04-15 2003-04-15 Digital camera
US10/814,142 US20040207743A1 (en) 2003-04-15 2004-04-01 Digital camera system
DE200460030390 DE602004030390D1 (en) 2003-04-15 2004-04-15 Digital camera
EP04252199A EP1471455B1 (en) 2003-04-15 2004-04-15 Digital camera
US12/289,689 US20090066815A1 (en) 2003-04-15 2008-10-31 Digital camera system
US13/067,502 US20110242363A1 (en) 2003-04-15 2011-06-06 Digital camera system
US13/964,648 US9147106B2 (en) 2003-04-15 2013-08-12 Digital camera system

Publications (2)

Publication Number Publication Date
JP2004320287A JP2004320287A (en) 2004-11-11
JP4196714B2 true JP4196714B2 (en) 2008-12-17

Family

ID=33470890

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003109886A Active JP4196714B2 (en) 2003-04-15 2003-04-15 Digital camera

Country Status (1)

Country Link
JP (1) JP4196714B2 (en)

Families Citing this family (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4130641B2 (en) 2004-03-31 2008-08-06 富士フイルム株式会社 Digital still camera and control method thereof
JP4489608B2 (en) * 2004-03-31 2010-06-23 富士フイルム株式会社 Digital still camera, image reproduction device, face image display device, and control method thereof
JP4506253B2 (en) * 2004-04-14 2010-07-21 カシオ計算機株式会社 Photo image extraction apparatus and program
US7733412B2 (en) 2004-06-03 2010-06-08 Canon Kabushiki Kaisha Image pickup apparatus and image pickup method
JP2006145629A (en) * 2004-11-16 2006-06-08 Fuji Photo Film Co Ltd Imaging apparatus
JP4720167B2 (en) * 2004-12-03 2011-07-13 株式会社ニコン Electronic camera and program
JP4674471B2 (en) * 2005-01-18 2011-04-20 株式会社ニコン Digital camera
JP4324170B2 (en) 2005-03-17 2009-09-02 キヤノン株式会社 Imaging apparatus and display control method
CN101010943B (en) * 2005-04-26 2010-04-21 佳能株式会社 Image taking device and its controlling method
JP4659569B2 (en) * 2005-09-13 2011-03-30 キヤノン株式会社 Imaging device
JP2007080184A (en) * 2005-09-16 2007-03-29 Canon Inc Image processor and method
JP4572815B2 (en) 2005-11-18 2010-11-04 富士フイルム株式会社 Imaging apparatus and imaging method
WO2007060980A1 (en) * 2005-11-25 2007-05-31 Nikon Corporation Electronic camera and image processing device
JP4315148B2 (en) * 2005-11-25 2009-08-19 株式会社ニコン Electronic camera
JP2007148691A (en) * 2005-11-25 2007-06-14 Nikon Corp Image processor
JP2007178543A (en) * 2005-12-27 2007-07-12 Samsung Techwin Co Ltd Imaging apparatus
JP4521360B2 (en) * 2006-01-18 2010-08-11 富士フイルム株式会社 Object detection device, image file recording device, and control method thereof
JP4427515B2 (en) 2006-01-27 2010-03-10 富士フイルム株式会社 Target image detection display control apparatus and control method thereof
WO2007097287A1 (en) 2006-02-20 2007-08-30 Matsushita Electric Industrial Co., Ltd. Imaging device and lens barrel
JP4413235B2 (en) 2006-02-22 2010-02-10 三洋電機株式会社 Electronic camera
JP4644883B2 (en) * 2006-02-27 2011-03-09 富士フイルム株式会社 Imaging device
JP4450799B2 (en) 2006-03-10 2010-04-14 富士フイルム株式会社 Method for controlling target image detection apparatus
JP4742927B2 (en) * 2006-03-17 2011-08-10 株式会社ニコン Electronic camera
JP4839908B2 (en) * 2006-03-20 2011-12-21 カシオ計算機株式会社 Imaging apparatus, automatic focus adjustment method, and program
JP2007295390A (en) * 2006-04-26 2007-11-08 Fujifilm Corp Imaging apparatus, and warning method
JP4182117B2 (en) 2006-05-10 2008-11-19 キヤノン株式会社 Imaging device, its control method, program, and storage medium
JP4683337B2 (en) * 2006-06-07 2011-05-18 富士フイルム株式会社 Image display device and image display method
JP4207980B2 (en) 2006-06-09 2009-01-14 ソニー株式会社 Imaging device, imaging device control method, and computer program
JP4432054B2 (en) 2006-06-20 2010-03-17 富士フイルム株式会社 Imaging apparatus and method
JP2008003335A (en) * 2006-06-23 2008-01-10 Casio Comput Co Ltd Imaging apparatus, focus control method, focus control program
JP4943769B2 (en) * 2006-08-15 2012-05-30 富士フイルム株式会社 Imaging apparatus and in-focus position search method
JP2008058553A (en) * 2006-08-31 2008-03-13 Casio Comput Co Ltd Imaging apparatus, imaging method, and imaging control program
JP4621992B2 (en) * 2006-09-19 2011-02-02 富士フイルム株式会社 Imaging apparatus and imaging method
JP4871691B2 (en) * 2006-09-29 2012-02-08 キヤノン株式会社 Imaging apparatus and control method thereof
JP4264663B2 (en) 2006-11-21 2009-05-20 ソニー株式会社 Imaging apparatus, image processing apparatus, image processing method therefor, and program causing computer to execute the method
KR101310230B1 (en) 2007-01-17 2013-09-24 삼성전자주식회사 Digital photographing apparatus, method for controlling the same, and recording medium storing program to implement the method
JP4898475B2 (en) * 2007-02-05 2012-03-14 富士フイルム株式会社 Imaging control apparatus, imaging apparatus, and imaging control method
JP4976160B2 (en) 2007-02-22 2012-07-18 パナソニック株式会社 Imaging device
JP4974704B2 (en) * 2007-02-22 2012-07-11 パナソニック株式会社 Imaging device
JP5251215B2 (en) * 2007-04-04 2013-07-31 株式会社ニコン Digital camera
EP1986421A3 (en) 2007-04-04 2008-12-03 Nikon Corporation Digital camera
JP4872785B2 (en) * 2007-05-02 2012-02-08 カシオ計算機株式会社 Imaging apparatus, subject selection method, and subject selection program
KR100894485B1 (en) * 2007-07-06 2009-04-22 캐논 가부시끼가이샤 Image capturing apparatus and its control method
US8237803B2 (en) 2007-07-09 2012-08-07 Panasonic Coporation Digital single-lens reflex camera including control section that performs camera shake correction and motion detecting section that detects speed of subject
JP4879127B2 (en) * 2007-09-21 2012-02-22 富士フイルム株式会社 Digital camera and digital camera focus area selection method
JP4518131B2 (en) 2007-10-05 2010-08-04 富士フイルム株式会社 Imaging method and apparatus
JP2009118009A (en) * 2007-11-02 2009-05-28 Sony Corp Imaging apparatus, method for controlling same, and program
JP5137622B2 (en) 2008-03-03 2013-02-06 キヤノン株式会社 Imaging apparatus and control method thereof, image processing apparatus and control method thereof
EP2104338A3 (en) 2008-03-19 2011-08-31 FUJIFILM Corporation Autofocus system
JP5027704B2 (en) * 2008-03-19 2012-09-19 富士フイルム株式会社 Auto focus system
JP2009229570A (en) * 2008-03-19 2009-10-08 Fujinon Corp Autofocus system
JP5043736B2 (en) 2008-03-28 2012-10-10 キヤノン株式会社 Imaging apparatus and control method thereof
JP5206095B2 (en) * 2008-04-25 2013-06-12 ソニー株式会社 Composition determination apparatus, composition determination method, and program
JP4848400B2 (en) * 2008-07-17 2011-12-28 富士フイルム株式会社 Digital camera
JP5207918B2 (en) * 2008-10-27 2013-06-12 キヤノン株式会社 Image processing apparatus, imaging apparatus, and image processing method
JP2009037263A (en) * 2008-11-04 2009-02-19 Canon Inc Point adjusting device, imaging device, control method for focus adjusting device, program, and recording medium
JP2010243843A (en) * 2009-04-07 2010-10-28 Fujifilm Corp Autofocus system
JP5300573B2 (en) * 2009-04-15 2013-09-25 キヤノン株式会社 Imaging apparatus, imaging method, and program
JP4577445B2 (en) * 2009-05-19 2010-11-10 カシオ計算機株式会社 Imaging apparatus, image recording method, and program
JP5573311B2 (en) * 2009-05-19 2014-08-20 株式会社ニコン camera
JP2010282107A (en) * 2009-06-08 2010-12-16 Canon Inc Imaging apparatus and control method therefor
JP5662670B2 (en) * 2009-10-27 2015-02-04 キヤノン株式会社 Image processing apparatus, image processing method, and program
CN102301693B (en) * 2009-12-01 2014-09-24 松下电器产业株式会社 Imaging device for recognition, and method for controlling same
JP5270597B2 (en) * 2010-03-10 2013-08-21 富士フイルム株式会社 Image processing apparatus for color image data and operation control method thereof
JP2011211291A (en) * 2010-03-29 2011-10-20 Sanyo Electric Co Ltd Image processing apparatus, imaging apparatus and display device
JP4991899B2 (en) * 2010-04-06 2012-08-01 キヤノン株式会社 Imaging apparatus and control method thereof
JP5071529B2 (en) * 2010-06-22 2012-11-14 セイコーエプソン株式会社 Digital camera, printing apparatus, method executed in digital camera, and method executed in printing apparatus
JP6500384B2 (en) * 2014-10-10 2019-04-17 リコーイメージング株式会社 Imaging apparatus, imaging method and program
JP6246705B2 (en) * 2014-12-16 2017-12-13 株式会社 日立産業制御ソリューションズ Focus control device, imaging device, and focus control method
JP6393296B2 (en) * 2016-08-30 2018-09-19 キヤノン株式会社 Imaging device and its control method, imaging control device, program, and storage medium

Also Published As

Publication number Publication date
JP2004320287A (en) 2004-11-11


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20060301

A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A711

Effective date: 20060412

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20080529

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20080529

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20080610

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080808

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20080909

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20080922

R150 Certificate of patent or registration of utility model

Ref document number: 4196714

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111010

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20141010

Year of fee payment: 6

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
