CN101382721A - Image pickup apparatus and focusing condition displaying method - Google Patents

Image pickup apparatus and focusing condition displaying method

Info

Publication number
CN101382721A
Authority
CN
China
Prior art keywords
focusing
image
situation
detected
animated image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102149316A
Other languages
Chinese (zh)
Other versions
CN101382721B (en)
Inventor
冈部雄生
三沢岳志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2007240012A (patent JP4852504B2)
Application filed by Fujifilm Corp
Publication of CN101382721A
Application granted
Publication of CN101382721B
Expired - Fee Related
Anticipated expiration

Abstract

In the image pickup apparatus of the present invention, an image of an object is picked up and an image signal representing the object is continuously captured, so that a through-the-lens image is displayed based on the captured image signal, and a display area for displaying focusing information is synthesized (S16, S18) into the through-the-lens image on the display device. In addition, automatic focus adjustment is performed based on the captured image signal so as to maximize the contrast of the object, and the result of the focus adjustment is detected. The focusing information synthesized into the display area is changed based on the detected focusing condition. The focusing information includes at least focusing information for an unfocused condition and focusing information for a focused condition. This enables the information that a desired area of the object has been focused to be displayed to the user in an easily understandable manner.

Description

Image pickup apparatus and focusing condition display method
Technical field
The present invention relates to an image pickup apparatus and a focusing condition display method, and more particularly to an image pickup apparatus and a focusing condition display method that display the focusing condition during an automatic focusing operation.
Background Art
Methods for displaying information indicating whether an object is in focus have been disclosed as follows.
Japanese Patent Application Publication No. 6-113184 proposes an image pickup apparatus that displays, below an electronic viewfinder, a bar graph representing the state of a focusing switch so that the user can check the focusing condition.
Japanese Patent Application Publication No. 6-301098 proposes an image pickup apparatus that displays a focusing assistance table so that the user can check whether the object is within the depth of focus.
Japanese Patent Application Publication No. 2002-311489 proposes an image pickup apparatus that changes the display color or display format of a cross-shaped target mark so that the user can check the focused/unfocused condition.
Summary of the invention
However, the image pickup apparatuses disclosed in the above patent documents disadvantageously cannot display the information indicating that the object has been focused in a manner that every user can easily understand.
In particular, considering the case where the image is taken by a user unfamiliar with the operation of the image pickup apparatus, such as a child, this information should be displayed in a manner that attracts the user's attention and is understandable at a glance.
There is a technique that detects the face of an object, displays a square frame around the detected face (see FIG. 28), and focuses on the detected face; however, when the image is taken by a user unfamiliar with the operation of the image pickup apparatus, such as a child, there is a problem that the user cannot easily be informed that a face has been detected and focused.
The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image pickup apparatus and a focusing condition display method capable of displaying, during a camera operation, information from which it can easily be determined whether the image has been focused or which region has been focused.
To achieve this object, an image pickup apparatus according to a first aspect of the present invention comprises: an image pickup device which picks up an image of an object; an image capturing device which continuously captures, via the image pickup device, an image signal representing the object; a display device which displays a through-the-lens image based on the captured image signal; an automatic focus adjustment device which performs automatic focus adjustment based on the captured image signal so as to maximize the contrast of the object; a focusing condition detection device which detects the focusing condition of the object after the adjustment by the automatic focus adjustment device; and a display control device which synthesizes a display area for displaying focusing information into the through-the-lens image on the display device and, in response to the focusing condition detected by the focusing condition detection device, synthesizes into the display area focusing information that differs at least between when the image is not focused and when the image is focused.
According to the image pickup apparatus of the first aspect, an image of the object is picked up and an image signal representing the object is continuously captured, so that a through-the-lens image is displayed on the display device based on the captured image signal, with a display area for displaying focusing information synthesized into the through-the-lens image. Meanwhile, automatic focus adjustment is performed based on the captured image signal so as to maximize the contrast of the object, and the result of the focusing operation is detected. The focusing information to be synthesized into the display area is changed based on the detected focusing condition. The focusing information includes at least information for the unfocused condition and information for the focused condition. This enables the information that the desired region of the object has been focused to be displayed to the user in an easily understandable manner.
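Purely as an illustration of this first aspect, the following Python sketch switches the focusing information synthesized into the display area according to the detected condition. The helper names and the caller-supplied rendering function are hypothetical, not taken from the patent.

```python
FOCUSING_INFO = {"unfocused": "?", "focused": "!"}  # stored focusing information per condition

def compose_through_image(frame, focus_condition, balloon_position, draw_balloon):
    """Synthesize the display area (a dialogue balloon) into the through-the-lens
    frame and fill it with the focusing information for the detected condition.
    draw_balloon is a caller-supplied rendering function (hypothetical)."""
    text = FOCUSING_INFO.get(focus_condition, FOCUSING_INFO["unfocused"])
    return draw_balloon(frame, balloon_position, text)
```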
The image pickup apparatus according to a second aspect of the present invention, in the image pickup apparatus according to the first aspect, further comprises a face detection device which detects the face of the object from the captured image signal, and when a face is detected by the face detection device, the automatic focus adjustment device performs automatic focus adjustment on the detected face.
According to the image pickup apparatus of the second aspect, the face of the object is detected from the captured image signal, and when a face is detected, automatic focus adjustment is performed on the detected face. This prevents the failure of leaving the main object unfocused.
In the image pickup apparatus according to a third aspect of the present invention, in the image pickup apparatus according to the second aspect, the display control device synthesizes the display area at a position near the face detected by the face detection device on the through-the-lens image displayed on the display device.
According to the image pickup apparatus of the third aspect, when a face is detected, the display area is synthesized at a position near the detected face on the through-the-lens image. This allows the information indicating which region has been focused to be displayed in a manner the user can easily understand.
In the image pickup apparatus according to a fourth aspect of the present invention, in the image pickup apparatus according to the third aspect, the display area displayed on the through-the-lens image on the display device by the display control device has a dialogue balloon shape.
According to the image pickup apparatus of the fourth aspect, a display area having a dialogue balloon shape is synthesized into the through-the-lens image. This makes the display attract the user's attention and allows the focusing condition to be conveyed to the user.
The image pickup apparatus according to a fifth aspect of the present invention, in the image pickup apparatus according to any one of the first to fourth aspects, further comprises a storage device which stores focusing information corresponding to the focusing condition, including at least information for the unfocused condition and information for the focused condition, and the display control device synthesizes the focusing information stored in the storage device into the display area.
According to the image pickup apparatus of the fifth aspect, focusing information including information for the unfocused condition and information for the focused condition is stored, and the stored focusing information corresponding to the current focusing condition is displayed. This displays the focusing condition in a manner the user can understand, and the display allows the user to easily judge the focusing condition.
The image pickup apparatus according to a sixth aspect of the present invention, in the image pickup apparatus according to the fifth aspect, further comprises an input device which inputs focusing information corresponding to a focusing condition, wherein the storage device stores the focusing information input through the input device, and when a plurality of pieces of focusing information corresponding to the same focusing condition are stored in the storage device, the display control device selects a desired piece of focusing information from the plurality of pieces and synthesizes it into the display area.
According to the image pickup apparatus of the sixth aspect, the focusing information input through the input device is additionally stored, so that a plurality of pieces of focusing information corresponding to the same focusing condition are stored. When a plurality of pieces of focusing information corresponding to the same focusing condition are stored, a desired piece is selected from the plurality of pieces to be displayed. This allows the display to be customized according to the user's preference.
In the image pickup apparatus according to a seventh aspect of the present invention, in the image pickup apparatus according to the fifth or sixth aspect, when the focusing condition detection device detects the focus adjustment, the display control device switches the focusing information from the information for the unfocused condition to the information for the focused condition.
According to the image pickup apparatus of the seventh aspect, the information for the unfocused condition is displayed before the focusing operation, and during the focus adjustment the information for the unfocused condition is switched to the information for the focused condition, so that the information for the focused condition is displayed when the image is focused. This allows the focusing condition to be conveyed to the user clearly by a display that attracts the user's attention.
In the image pickup apparatus according to an eighth aspect of the present invention, in the image pickup apparatus according to any one of the second to sixth aspects, the face detection device detects the face and the expression of the object, and the display control device synthesizes into the display area focusing information based on the expression detected by the face detection device.
According to the image pickup apparatus of the eighth aspect, the face and the expression of the object are detected, and focusing information is displayed based on the detected expression. This allows the focusing condition to be conveyed to the user in an understandable manner by a display that attracts the user's attention.
In the image pickup apparatus according to a ninth aspect of the present invention, in the image pickup apparatus according to any one of the first to eighth aspects, the display control device changes the size of the display area in response to the detection result of the focusing condition detection device.
According to the image pickup apparatus of the ninth aspect, the size of the display area is changed in response to the focusing condition, which allows the focusing condition to be conveyed to the user in an understandable manner.
An image pickup apparatus according to a tenth aspect of the present invention comprises: an image pickup device which picks up an image of an object; an image capturing device which continuously captures, via the image pickup device, an image signal representing the object; a display device which displays a through-the-lens image based on the captured image signal; an automatic focus adjustment device which performs automatic focus adjustment on a desired region of the object based on the captured image signal; a focusing condition detection device which detects the focusing condition of the object after the adjustment by the automatic focus adjustment device; an animated image generating device which generates an animated image having at least one of a variable position, a variable size, and a variable shape, and showing images that differ at least between when the image is not focused and when the image is focused; and a display control device which synthesizes the animated image into the through-the-lens image in response to the focusing condition detected by the focusing condition detection device.
According to the image pickup apparatus of the tenth aspect, an image of the object is picked up and an image signal representing the object is continuously captured, so that a through-the-lens image is displayed based on the captured image signal. Meanwhile, automatic focus adjustment is performed on the desired region of the object based on the captured image signal, the focusing condition of the object is then detected, and the animated image is synthesized into the through-the-lens image in response to the detected focusing condition. The animated image has at least one of a variable position, a variable size, and a variable shape, and is generated in advance so as to include images that differ at least between when the image is not focused and when the image is focused. In the automatic focus adjustment device, a contrast AF system is typically used to adjust the focus so as to maximize the contrast in the desired region of the object, but other systems may also be used. This allows the information that the desired region of the object has become focused to be displayed in a manner the user can easily understand.
The image pickup apparatus according to an eleventh aspect of the present invention, in the image pickup apparatus according to the tenth aspect, further comprises a face detection device which detects the face of the object from the captured image signal, and when a face is detected by the face detection device, the automatic focus adjustment device performs automatic focus adjustment on the detected face.
According to the image pickup apparatus of the eleventh aspect, the face of the object is detected from the captured image signal, and when a face is detected, the detected face is focused on. This prevents the failure of leaving the main object unfocused.
In the image pickup apparatus according to a twelfth aspect of the present invention, in the image pickup apparatus according to the tenth or eleventh aspect, the display control device changes at least one of the hue, brightness, and saturation of the animated image in response to the focusing condition detected by the focusing condition detection device.
According to the image pickup apparatus of the twelfth aspect, at least one of the hue, brightness, and saturation of the animated image is changed in response to the detected focusing condition. This allows the focusing condition to be displayed in a manner the user can easily understand.
In the image pickup apparatus according to a thirteenth aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates an animated image having a plurality of concentrically displayed frames, the plurality of frames having different sizes and rotating until the focusing condition detection device detects the focused condition, and when the focusing condition detection device detects the focused condition, the plurality of frames have the same size and stop.
According to the image pickup apparatus of the thirteenth aspect, an animated image having a plurality of concentric frames is synthesized into the through-the-lens image; the frames rotate with different sizes until the focused condition is detected, and when the focused condition is detected, the plurality of frames take the same size and stop. The frames may have any of various shapes, including geometric shapes (such as circles, ellipses, and rectangles) and hearts. This allows the focusing condition to be conveyed to the user clearly by a display that attracts the user's attention.
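As a rough sketch of this animation behavior (the size and speed values below are invented for illustration and are not taken from the patent):

```python
def update_concentric_frames(sizes, angle, focused, common_size=1.0, angular_speed=10.0):
    """One animation step for the concentric frames: while the focused condition
    is not detected, the different-sized frames keep rotating; once it is
    detected, all frames take the same size and the rotation stops."""
    if focused:
        return [common_size] * len(sizes), angle       # same size, rotation stopped
    return list(sizes), angle + angular_speed          # keep sizes, keep rotating (degrees per step)
```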
In the image pickup apparatus according to a fourteenth aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates an animated image having a plurality of frames that rotate in directions different from one another until the focusing condition detection device detects the focused condition. The rotation of the plurality of frames in different directions allows the focused condition to be conveyed to the user more clearly.
In the image pickup apparatus according to a fifteenth aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates an animated image having frames that continue rotating in a predetermined direction at a predetermined angular speed until the focusing condition detection device detects the focused condition.
According to the image pickup apparatus of the fifteenth aspect, the frames continue rotating in a predetermined direction at a predetermined angular speed until the focused condition is detected. This allows the focusing condition to be conveyed to the user clearly by a display that attracts the user's attention.
In the image pickup apparatus according to a sixteenth aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates an animated image having a frame that continues swaying near the desired region until the focused condition is detected by the focusing condition detection device. This allows the focusing condition to be conveyed to the user clearly by a display that attracts the user's attention.
In the image pickup apparatus according to a seventeenth aspect of the present invention, in the image pickup apparatus according to any one of the thirteenth to sixteenth aspects, the display control device changes the distance between the frames and the region on which the automatic focus adjustment has been performed in response to the focusing condition detected by the focusing condition detection device, and when the focusing condition detection device detects the focused condition, the frames are superimposed and displayed on the region on which the automatic focus adjustment has been performed.
According to the image pickup apparatus of the seventeenth aspect, the distance between the frames and the region on which the focus adjustment has been performed is changed in response to the focusing condition, and when the focusing condition detection device detects the focused condition, the frames are superimposed and displayed on the region on which the focus adjustment has been performed. This displays the information indicating which region has been focused in a manner the user can easily understand.
In the image pickup apparatus according to an eighteenth aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates, in response to the focusing condition detected by the focusing condition detection device, an animated image of an animal whose ears change their posture, the ears drooping when the image is not focused and standing up when the image is focused, and the display control device superimposes and displays the animated image on the region on which the focus adjustment is performed.
According to the image pickup apparatus of the eighteenth aspect, an animated image of an animal whose ears change their posture in response to the focusing condition is synthesized into the through-the-lens image, so that the ears droop when the image is not focused and stand up when the image is focused. This allows the focusing condition to be conveyed to the user clearly.
In the image pickup apparatus according to a nineteenth aspect of the present invention, in the image pickup apparatus according to the eighteenth aspect, when the automatic focus adjustment device performs focus adjustment on the face of the object, the display control device superimposes and displays the animated image on the face of the object.
According to the image pickup apparatus of the nineteenth aspect, the animation of the animal whose ears change their posture is superimposed and displayed on the detected face of the object, which allows the focusing condition to be conveyed to the user clearly.
In the image pickup apparatus according to a twentieth aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates, in response to the focusing condition detected by the focusing condition detection device, an animated image showing different parts of an animal, so that only a part of the animal is shown while the desired region of the object is not focused and the whole animal is shown when the desired region of the object is focused, and the display control device superimposes and displays the animated image on the region on which the automatic focus adjustment is performed.
According to the image pickup apparatus of the twentieth aspect, an animated image showing different parts of an animal in response to the focusing condition is superimposed and displayed on the region on which the automatic focus adjustment is performed, so that only a part of a character such as an animal is shown while the desired region of the object is not focused and the whole character is shown when the desired region of the object is focused. This displays the focusing condition in a manner the user can easily understand.
In the image pickup apparatus according to a twenty-first aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates, in response to the focusing condition detected by the focusing condition detection device, an animated image showing different states of a flying animal, so that a flying animal is shown while the desired region of the object is not focused and a resting animal is shown when the desired region of the object is focused, and when the focusing condition detection device detects the focused condition, the display control device positions the animated image of the flying animal near the region on which the automatic focus adjustment has been performed.
According to the image pickup apparatus of the twenty-first aspect, the animated image is synthesized at a position near the region on which the automatic focus adjustment is performed, so that a flying animal is shown while the desired region of the object is not focused, and when the desired region of the object is focused, the animal stops flying and settles near the region on which the automatic focus adjustment has been performed. This displays the focusing condition in a manner the user can easily understand. In addition, this lets the user know, in an easily understandable manner, that the position where the animal rests is focused, that is, where the focused region is.
In the image pickup apparatus according to a twenty-second aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates, in response to the focusing condition detected by the focusing condition detection device, an animated image showing different stages of a flower, so that a bud is shown while the desired region of the object is not focused and a flower in full bloom is shown when the desired region of the object is focused, and the display control device displays the animated image at a position near the region on which the automatic focus adjustment is performed.
According to the image pickup apparatus of the twenty-second aspect, an animated image showing different stages of a flower in response to the focusing condition is synthesized at a position near the region on which the automatic focus adjustment is performed, so that a bud is shown while the desired region of the object is not focused and the flower opens when the desired region of the object is focused. This allows the focusing condition to be displayed in a manner the user can easily understand.
In the image pickup apparatus according to a twenty-third aspect of the present invention, in the image pickup apparatus according to any one of the tenth to twelfth aspects, the animated image generating device generates, in response to the focusing condition detected by the focusing condition detection device, an animated image having dialogue balloons of different sizes, and the display control device displays the animated image at a position near the region on which the automatic focus adjustment is performed.
According to the image pickup apparatus of the twenty-third aspect, an animated image of dialogue balloons whose size changes in response to the focusing condition is synthesized at a position near the region on which the automatic focus adjustment is performed. This displays the focusing condition in a manner the user can easily understand.
In the image pickup apparatus according to a twenty-fourth aspect of the present invention, in the image pickup apparatus according to the twenty-third aspect, the animated image generating device generates an animated image of dialogue balloons containing images that differ at least between when the desired region of the object is focused and when the desired region of the object is not focused.
According to the image pickup apparatus of the twenty-fourth aspect, an animated image of dialogue balloons having different sizes and different images, at least between when the desired region of the object is focused and when it is not focused, is synthesized in response to the focusing condition at a position near the region on which the automatic focus adjustment is performed. This displays the focusing condition in a manner the user can easily understand.
A focusing condition display method according to a twenty-fifth aspect of the present invention comprises: a step of continuously capturing an image signal of an object; a step of displaying a through-the-lens image based on the captured image signal; a step of performing automatic focus adjustment on a desired region of the object based on the captured image signal; a step of detecting the focus adjustment condition; and a step of synthesizing a display area for displaying focusing information into the through-the-lens image and, in response to the detected focusing condition, synthesizing into the display area focusing information that differs at least between when the desired region of the object is not focused and when the desired region of the object is focused.
A focusing condition display method according to a twenty-sixth aspect of the present invention comprises: a step of continuously capturing an image signal of an object; a step of displaying a through-the-lens image based on the captured image signal; a step of performing automatic focus adjustment on a desired region of the object based on the captured image signal; a step of detecting the focus adjustment condition; and a step of synthesizing an animated image showing the focusing condition into the through-the-lens image in response to the detected focus adjustment condition.
In a focusing condition display method according to a twenty-seventh aspect of the present invention, in the focusing condition display method according to the twenty-sixth aspect, the step of performing automatic focus adjustment further comprises: a step of detecting the face of the object from the captured image signal; and a step of performing automatic focus adjustment on the detected face.
In a focusing condition display method according to a twenty-eighth aspect of the present invention, in the focusing condition display method according to the twenty-sixth or twenty-seventh aspect, the step of synthesizing the animated image into the through-the-lens image further comprises: a step of generating an animated image having at least one of a variable position, a variable size, and a variable shape, and showing images that differ at least between when the desired region of the object is not focused and when the desired region of the object is focused; and a step of synthesizing the generated animated image into the through-the-lens image.
In a focusing condition display method according to a twenty-ninth aspect of the present invention, in the focusing condition display method according to the twenty-eighth aspect, the step of synthesizing the animated image into the through-the-lens image synthesizes the animated image into the through-the-lens image after changing at least one of the hue, brightness, and saturation of the animated image in response to the detected focus adjustment condition.
According to the present invention, information from which it can easily be determined whether the image has been focused, or which region has been focused, can be displayed during a camera operation.
Brief Description of the Drawings
FIG. 1 is a perspective view of a first embodiment of a digital camera to which the present invention is applied;
FIG. 2 is a rear view of the first embodiment of the digital camera;
FIG. 3 is a block diagram showing the schematic structure of the first embodiment of the digital camera;
FIG. 4 is a flowchart of a first embodiment of the process of the focusing condition display of the digital camera;
FIG. 5 is an example of the focusing condition display before the focusing operation is completed;
FIG. 6 is an example of the focusing condition display before the focusing operation is completed;
FIG. 7 is an example of the focusing condition display before the focusing operation is completed;
FIG. 8 is an example of the focusing condition display when the focusing operation is completed;
FIG. 9 is a flowchart of a second embodiment of the process of the focusing condition display of the digital camera;
FIG. 10 is an example of the focusing condition display when the focusing operation is completed;
FIG. 11 is an example of the focusing condition display before the focusing operation is completed;
FIG. 12A is an example of the focusing condition display immediately after the focusing operation is started; FIG. 12B is an example of the focusing condition display during the focusing operation; and FIG. 12C is an example of the focusing condition display when the image is focused;
FIG. 13 is a block diagram showing the schematic structure of the digital camera;
FIG. 14 is a flowchart of a first embodiment of the process of the animated image display of the digital camera;
FIG. 15A is a display example of the first embodiment of the animated image of the digital camera before the focusing operation is completed, and FIG. 15B is a display example of the first embodiment of the animated image of the digital camera when the focusing operation is completed;
FIG. 16 is a flowchart of a second embodiment of the process of the animated image display of the digital camera;
FIG. 17A is a display example of the second embodiment of the animated image of the digital camera before the focusing operation is completed; and FIG. 17B is a display example of the second embodiment of the animated image of the digital camera when the focusing operation is completed;
FIG. 18 is a flowchart of a third embodiment of the process of the animated image display of the digital camera;
FIG. 19A is a display example of the third embodiment of the animated image of the digital camera before the focusing operation is completed; and FIG. 19B is a display example of the third embodiment of the animated image of the digital camera when the focusing operation is completed;
FIG. 20 is a flowchart of a fourth embodiment of the process of the animated image display of the digital camera;
FIG. 21A is a display example of the fourth embodiment of the animated image of the digital camera before the focusing operation is completed; and FIG. 21B is a display example of the fourth embodiment of the animated image of the digital camera when the focusing operation is completed;
FIG. 22 is a flowchart of a fifth embodiment of the process of the animated image display of the digital camera;
FIG. 23A is a display example of the fifth embodiment of the animated image of the digital camera before the focusing operation is completed; and FIG. 23B is a display example of the fifth embodiment of the animated image of the digital camera when the focusing operation is completed;
FIG. 24 is a flowchart of a sixth embodiment of the process of the animated image display of the digital camera;
FIG. 25A is a display example of the sixth embodiment of the animated image of the digital camera before the focusing operation is completed; and FIG. 25B is a display example of the sixth embodiment of the animated image of the digital camera when the focusing operation is completed;
FIG. 26 is a flowchart of a seventh embodiment of the process of the animated image display of the digital camera;
FIG. 27A to 27C are display examples of the seventh embodiment of the animated image of the digital camera; FIG. 27A is a display example when the focusing operation is started, FIG. 27B is a display example while the focusing operation is in progress, and FIG. 27C is a display example when the focusing operation is completed; and
FIG. 28 is a display example of the prior art.
Embodiment
Preferred embodiments of the camera according to the present invention will now be explained with reference to the accompanying drawings.
<First Embodiment>
FIG. 1 is a perspective view of an image pickup apparatus according to the first embodiment of the present invention. FIG. 2 is a rear view of the embodiment of the image pickup apparatus. The image pickup apparatus is a digital camera that receives light through a lens at its image pickup element, converts the light into a digital signal, and stores the signal in a storage medium.
The digital camera 10 has a horizontally long, box-shaped camera body 12. As shown in FIG. 1, the front of the camera body 12 is provided with a lens 14, an electronic flash 16, a viewfinder 18, a self-timer lamp 20, an AF auxiliary lamp 22, a flash adjustment sensor 24, and the like. The top of the camera body 12 is provided with a shutter release button 26, a power/mode switch 28, a mode dial 30, and the like. As shown in FIG. 2, the rear of the camera body 12 is further provided with a monitor 32, a viewfinder eyepiece 34, a speaker 36, a zoom button 38, a cross-shaped button 40, a MENU/OK button 42, a DISP button 44, a BACK button 46, and the like.
The bottom surface (not shown) of the camera body 12 is provided with a screw hole for a tripod, and with a battery chamber and a memory card slot under an openable/closable cover, into which a battery and a memory card are loaded, respectively.
The lens 14 is a collapsible zoom lens that extends from the camera body 12 when the image pickup mode is set with the power/mode switch 28. Since the zoom mechanism and collapsing mechanism of the lens 14 are based on known techniques, their specific structures will not be described in detail here.
The electronic flash 16 includes a light emitting portion configured to swing in the horizontal and vertical directions so that the flash light can be directed toward the main object. The structure of the electronic flash 16 will be explained below.
The viewfinder 18 is a window through which the object to be picked up is framed.
The self-timer lamp 20 is formed by, for example, an LED, and emits light for self-timer shooting, in which the image is taken a certain time after the shutter release button 26 is pressed, as will be described below.
The AF auxiliary lamp 22 is formed by, for example, a high-brightness LED, and emits light in response to AF.
The flash adjustment sensor 24 adjusts the light emission amount of the electronic flash 16, as described below.
The shutter release button 26 is a two-stage switch having a so-called "half press" and "full press". Half-pressing the shutter release button 26 causes the AE/AF operation, and fully pressing the shutter release button 26 causes the digital camera 10 to perform image pickup.
The power/mode switch 28 serves as a power switch that turns the digital camera 10 on and off, and also as a mode switch that sets the mode of the digital camera 10; it is slidably disposed among an "off position", a "reproduction position", and a "camera position". The digital camera 10 is turned on when the power/mode switch 28 is slid to the "reproduction position" or the "camera position", and turned off when it is slid to the "off position". Aligning the power/mode switch 28 with the "reproduction position" sets the "reproduction mode", and aligning it with the "camera position" sets the "image pickup mode".
The mode dial 30 serves as an image pickup mode setting device that sets the image pickup mode of the digital camera 10, and the setting position of the mode dial allows the image pickup mode of the digital camera 10 to be changed among various modes. The modes include, for example: an "automatic image pickup mode" in which the aperture, shutter speed, and the like of the digital camera 10 are set automatically; a "moving image pickup mode" for picking up moving images; a "portrait image pickup mode" suitable for picking up an image of a person; a "sports image pickup mode" suitable for picking up an image of a moving subject; a "landscape image pickup mode" suitable for picking up an image of a landscape; a "night scene image pickup mode" suitable for picking up an image of a night scene; an "aperture priority image pickup mode" in which the photographer sets the aperture value and the digital camera 10 automatically sets the shutter speed; a "shutter speed priority image pickup mode" in which the photographer sets the shutter speed and the digital camera 10 automatically sets the aperture value; a "manual image pickup mode" in which the photographer sets the aperture, shutter speed, and the like; and a "person detecting image pickup mode" in which a person is automatically detected and the flash is directed toward that person, which will be explained later.
The monitor 32 is a liquid crystal display capable of color display. The monitor 32 is used as an image display panel for displaying picked-up images in the reproduction mode, and is also used as a user interface display panel for various setting operations. In the image pickup mode, a through-the-lens image is displayed as required, so that the monitor 32 is used as an electronic viewfinder for checking the angle of view.
When sound output is turned on with the mode dial 30 or the like, the speaker 36 outputs predetermined sounds such as voices and buzzer tones.
The zoom button 38 serves as a zoom specifying device and comprises a zoom telephoto button 38T, which specifies zooming toward the telephoto end, and a zoom wide-angle button 38W, which specifies zooming toward the wide-angle end. In the digital camera 10, operating the zoom telephoto button 38T and the zoom wide-angle button 38W in the image pickup mode changes the focal length of the lens 14. In the reproduction mode, operating the zoom telephoto button 38T and the zoom wide-angle button 38W enlarges or reduces the reproduced image.
The cross-shaped button 40 serves as a direction specifying button through which the up, down, left, and right directions are input, and is used, for example, to select menu items on a menu screen.
The MENU/OK button 42 serves as a button for switching from the normal screen of each mode to the menu screen (MENU button), and also as a button for confirming a selection, executing a process, and the like (OK button).
The DISP button 44 serves as a button for switching the display on the monitor 32; during image pickup, pressing the DISP button 44 switches the display on the monitor 32 from ON -> framing guide -> OFF. During reproduction, pressing the DISP button 44 switches the display from normal reproduction -> reproduction without characters -> multi-image reproduction.
The BACK button 46 serves as a button for canceling an input operation or returning to the previous operation state.
FIG. 3 is a block diagram showing the schematic internal structure of the digital camera 10.
As shown in FIG. 3, the digital camera 10 comprises a CPU 110, an operating portion (the shutter release button 26, the power/mode switch 28, the mode dial 30, the zoom button 38, the cross-shaped button 40, the MENU/OK button 42, the DISP button 44, the BACK button 46, and the like) 112, a ROM 116, an EEPROM 118, a memory 120, a VRAM 122, an image pickup element 124, a timing generator (TG) 126, an analog processing portion (CDS/AMP) 128, an A/D converter 130, an image input control portion 132, an image signal processing portion 134, a video encoder 136, a character synthesizing portion 138, an AF detection portion 140, an AE/AWB detection portion 142, an aperture driving portion 144, a lens driving portion 146, a compression and decompression processing portion 148, a medium control portion 150, a storage medium 152, a face detection portion 154, a flash adjustment control portion 160, and the like.
The CPU 110 controls the entire digital camera 10 in accordance with a predetermined control program, based on operation signals input from the operating portion 112.
The ROM 116, connected to the CPU 110 via a bus 114, stores the control program executed by the CPU 110 and various data required for control, and the EEPROM 118 stores various configuration information about the operation of the digital camera 10, such as user setting information. The memory (SDRAM) 120 is used as a calculation work area for the CPU 110 and also as a temporary storage area for image data and the like, and the VRAM 122 is used as a temporary storage area dedicated to image data.
The image pickup element 124 is a color CCD having an array of predetermined color filters, and electronically picks up the image of the object formed by the lens 14. The timing generator (TG) 126 outputs timing signals for driving the image pickup element 124 in response to commands from the CPU 110.
The analog processing portion 128 samples and holds (correlated double sampling) the R, G, and B signals of each pixel of the image signal output from the image pickup element 124, amplifies the signals, and outputs them to the A/D converter 130.
The A/D converter 130 converts the analog R, G, and B signals output from the analog processing portion 128 into digital R, G, and B signals and outputs them.
The image input control portion 132 outputs the digital R, G, and B signals output from the A/D converter 130 to the memory 120.
The image signal processing portion 134 includes a synchronization circuit (a processing circuit that compensates for the spatial displacement of the color signals caused by the color filter array of the single CCD and converts the color signals into simultaneous signals), a white balance correction circuit, a gamma correction circuit, a contour correction circuit, a brightness/color-difference signal generation circuit, and the like, and performs required processing on the input image signal in accordance with commands from the CPU 110 to generate image data (YUV data) comprising brightness data (Y data) and color-difference data (Cr and Cb data).
The video encoder 136 controls the display on the monitor 32 in accordance with commands from the CPU 110. That is, in accordance with commands from the CPU 110, the video encoder 136 converts the input image signal into a video signal to be displayed on the monitor 32 (for example, an NTSC signal, a PAL signal, or a SECAM signal) and outputs the signal to the monitor 32, and also outputs to the monitor 32, as required, predetermined characters and graphic information synthesized by the character synthesizing portion 138.
The AF detection portion 140 comprises: a high-pass filter that passes only the high-frequency component of the G signal; an absolute value processing portion; an AF area extraction portion that extracts the signal in a predetermined focus area (for example, the central part of the screen); and an integration portion that integrates the absolute value data in the AF area.
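As a rough illustration of this contrast-type evaluation (a minimal sketch, not the camera's firmware; the filter choice is an assumption), the focus evaluation value can be computed by high-pass filtering the G channel, taking absolute values, and integrating over the AF area:

```python
import numpy as np

def af_evaluation_value(g_channel: np.ndarray, af_area: tuple) -> float:
    """Contrast-AF evaluation value: integrate |high-pass(G)| over the AF area.

    g_channel: 2-D array of G pixel values.
    af_area:   (top, bottom, left, right) bounds of the focus area.
    """
    top, bottom, left, right = af_area
    region = g_channel[top:bottom, left:right].astype(float)
    # Simple horizontal high-pass filter: difference between neighboring pixels.
    high_pass = np.abs(np.diff(region, axis=1))
    # The integrated absolute value is larger when the image is sharper.
    return float(high_pass.sum())
```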
The AE/AWB detection portion 142 calculates, in accordance with commands from the CPU 110, the physical quantities required for AE control and AWB control. For example, as a physical quantity required for AE control, it calculates the integrated values of the R, G, and B image signals in each of a plurality of areas (for example, 16 x 16) into which the screen is divided.
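For illustration only, a minimal sketch of this block integration (array shapes and names are assumptions); the white balance sketch further below consumes these per-area sums:

```python
import numpy as np

def block_integrals(rgb: np.ndarray, blocks: int = 16) -> np.ndarray:
    """Divide the screen into blocks x blocks areas and integrate R, G, B in each.

    rgb: array of shape (height, width, 3).
    Returns an array of shape (blocks, blocks, 3) containing per-area sums.
    """
    h, w, _ = rgb.shape
    bh, bw = h // blocks, w // blocks
    sums = np.zeros((blocks, blocks, 3))
    for i in range(blocks):
        for j in range(blocks):
            sums[i, j] = rgb[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].sum(axis=(0, 1))
    return sums
```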
In accordance with commands from the CPU 110, the aperture driving portion 144 and the lens driving portion 146 control the driving portion 124A of the image pickup element 124 and control the operation of the pickup lens 14 and the aperture 15.
The compression and decompression processing portion 148 performs, in accordance with commands from the CPU 110, compression processing of a predetermined format on input image data to generate compressed image data. It also performs, in accordance with commands from the CPU 110, decompression processing of a predetermined format on input compressed image data to generate decompressed image data.
The medium control portion 150 controls, in accordance with commands from the CPU 110, reading and writing of data on the storage medium 152 loaded in the media slot.
The face detection portion 154 extracts, in accordance with commands from the CPU 110, the face area of the image from input image data and detects the position of this area (for example, the center of gravity of the face area). The face area is extracted, for example, by extracting skin color data from the original image and extracting the clusters of photometric points determined to have skin color. Other known methods for extracting a face area from an image include: a method of converting photometric data into hue and saturation, generating a two-dimensional histogram of the hue and saturation, and analyzing the histogram to determine the face area; a method of extracting face candidate areas corresponding to the shape of a human face and determining the face area based on the feature quantities of the areas; a method of extracting the contour of a human face from the image to determine the face area; and a method of preparing a plurality of templates having the shape of a human face, calculating the correlation between the templates and the image, and determining face candidate areas based on the correlation values to extract the face. Any one of these methods may be used for the extraction.
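A deliberately simplified sketch of the skin-color-based approach; the thresholds and the centroid shortcut below are illustrative assumptions, not the patent's method:

```python
from typing import Optional, Tuple
import numpy as np

def detect_face_center(rgb: np.ndarray) -> Optional[Tuple[int, int]]:
    """Very rough skin-color heuristic: mark pixels whose R channel clearly
    dominates G and B, and return the center of gravity of the marked region."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (r > g + 15) & (r > b + 15)
    if not skin.any():
        return None
    ys, xs = np.nonzero(skin)
    return int(ys.mean()), int(xs.mean())  # (row, column) of the skin-region centroid
```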
The focusing condition display generation portion 156 generates dialogue balloons displaying characters and symbols. The CPU 110 recognizes the state of the AF performed by the AF detection portion 140 and issues commands to the focusing condition display generation portion 156. In accordance with the commands from the CPU 110, the focusing condition display generation portion 156 generates characters or graphics corresponding to the AF state. Then, based on the position information of the face detected by the face detection portion 154, the CPU 110 issues a command to the character synthesizing portion 138 so that the display generated by the focusing condition display generation portion 156 is shown near the face. The display generated by the focusing condition display generation portion 156 will be explained below.
The flash adjustment control portion 160 controls the light emission of the electronic flash 16 in accordance with commands from the CPU 110.
Next, the operation of the digital camera 10 according to the present embodiment, configured as described above, will be explained.
First, the process of general image pickup and recording will be explained. As described above, the digital camera 10 is set to the image pickup mode by aligning the power/mode switch 28 with the camera position, which makes image pickup possible. Setting the image pickup mode extends the lens 14 and puts the camera in an image pickup standby state.
In the image pickup mode, the light of the object passes through the lens 14 and the aperture 15 and is focused on the light receiving surface of the image pickup element 124. The light receiving surface of the image pickup element 124 has a plurality of photodiodes (light receiving elements) arranged two-dimensionally, with red (R), green (G), and blue (B) color filters arranged in a predetermined array structure (for example, the Bayer pattern or G-stripe pattern). Each photodiode receives the object light that has passed through the lens 14 and converts it into a signal charge corresponding to the amount of incident light.
Based on the driving pulses provided from the timing generator (TG) 126, the signal charges accumulated in each photodiode are successively read out as voltage signals (image signals) corresponding to the signal charges, and applied to the analog processing portion (CDS/AMP) 128.
The analog R, G, and B signals output from the analog processing portion 128 are converted into digital R, G, and B signals by the A/D converter 130 and applied to the image input control portion 132. The image input control portion 132 outputs the digital R, G, and B signals output from the A/D converter 130 to the memory 120.
When the picked-up image is output to the monitor 32, a brightness/color-difference signal is generated by the image signal processing portion 134 using the image signal output from the image input control portion 132 to the memory 120, and sent to the video encoder 136. The video encoder 136 converts the input brightness/color-difference signal into a signal format for display (for example, an NTSC color composite video signal) and outputs it to the monitor 32. In this way, the image picked up by the image pickup element 124 is displayed on the monitor 32.
Image signals are periodically captured from the image pickup element 124, the image data in the VRAM 122 is periodically rewritten with the brightness/color-difference signals generated from those image signals, and the data is output to the monitor 32, thereby realizing real-time display of the image picked up by the image pickup element 124. The photographer can check the (through-the-lens) image displayed in real time on the monitor 32 to confirm the angle of view of the shot.
As required, the brightness/color-difference signal applied from the VRAM 122 to the video encoder 136 is also applied to the character synthesizing portion 138 so as to be synthesized with predetermined characters or graphics, and then applied to the video encoder 136. This allows the required image pickup information to be superimposed and displayed on the through-the-lens image.
Image pickup is started by pressing the shutter release button 26. When the shutter release button 26 is half-pressed, an S1 ON signal is input to the CPU 110, and the CPU 110 performs the AE/AF process.
First, the image signal captured by the image pickup element 124 is input to the AF detection portion 140 and the AE/AWB detection portion 142 via the image input control portion 132.
The integration data obtained by the AF detection portion 140 is reported to the CPU 110.
The CPU 110 controls the lens driving portion 146 to move the focus lens group of the image pickup optical system including the lens 14, while calculating the focusing evaluation values (AF evaluation values) at a plurality of AF detection points, and determines the lens position with the local maximum evaluation value as the focusing position. The CPU 110 then controls the lens driving portion 146 so as to move the focus lens group to the obtained focusing position.
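A minimal sketch of this search, assuming a function that evaluates the focus at each candidate lens position (both names are hypothetical):

```python
def find_focus_position(lens_positions, evaluate):
    """Scan the focus lens over the given positions, record the AF evaluation
    value at each, and return the position with the maximum value."""
    best_pos, best_val = None, float("-inf")
    for pos in lens_positions:
        val = evaluate(pos)  # e.g. af_evaluation_value() on the frame captured at this position
        if val > best_val:
            best_pos, best_val = pos, val
    return best_pos  # the lens is then driven back to this focusing position
```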
Based on the integrated values obtained from the AE/AWB detection portion 142, the CPU 110 detects the brightness of the object (object brightness) and calculates an exposure value suitable for image pickup (EV value for image pickup). Then, the aperture value and the shutter speed are determined using the obtained EV value and a predetermined program chart for image pickup, and the CPU 110 controls the electronic shutter of the image pickup element 124 and the aperture driving portion 144 in accordance with these values to obtain an appropriate exposure. At the same time, using the detected object brightness, the CPU 110 determines whether light emission from the electronic flash is necessary.
For automatic white balance adjustment, the AE/AWB detection portion 142 calculates the average integrated values of the R, G, and B signals for each divided area and provides the calculation results to the CPU 110. Using the obtained integrated values for R, B, and G, the CPU 110 calculates the ratios R/G and B/G for each divided area, and determines the light source type based on the distribution of the obtained R/G and B/G values in the R/G and B/G color space. Then, in accordance with white balance adjustment values suitable for the determined light source type, the CPU 110 controls the gain values (white balance correction values) for the R, G, and B signals in the white balance adjustment circuit, for example so that the value of each ratio becomes approximately 1 (that is, the integrated ratio of RGB over a screen is R:G:B ≈ 1:1:1), and corrects the signal of each color channel.
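A simplified sketch of such a gain calculation, under the assumption that the gains are chosen so that the screen-wide R/G and B/G ratios become approximately 1 (illustrative only; the light-source-type lookup is omitted):

```python
import numpy as np

def white_balance_gains(sums: np.ndarray) -> tuple:
    """Compute R and B gains from per-area RGB integrals (shape (n, n, 3)) so
    that the overall integrated ratios R/G and B/G become approximately 1."""
    r_total, g_total, b_total = sums.reshape(-1, 3).sum(axis=0)
    gain_r = g_total / max(r_total, 1e-9)  # boost or cut R so that R/G -> 1
    gain_b = g_total / max(b_total, 1e-9)  # boost or cut B so that B/G -> 1
    return gain_r, gain_b
```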
As described above, half-pressing the shutter release button 26 causes the AE/AF process. During this process, the photographer operates the zoom button 38 as required to adjust the angle of view by zooming the lens 14.
Thereafter, when the shutter release button 26 is fully pressed, an S2 ON signal is input to the CPU 110, and the CPU 110 starts the image pickup and recording process. That is, the image pickup element 124 is exposed with the shutter speed and aperture value determined based on the photometry results. In this exposure, when the electronic flash 16 emits light, the flash adjustment control portion 160 controls the light emission of the electronic flash 16. When the flash adjustment sensor 24 receives a predetermined amount of light, the flash adjustment control portion 160 cuts off the current to the electronic flash 16 and stops the light emission of the electronic flash 16.
The image signal output from the image pickup element 124 is captured into the memory 120 via the analog processing portion 128, the A/D converter 130, and the image input control portion 132, converted into a brightness/color-difference signal by the image signal processing portion 134, and stored in the memory 120.
The image data stored in the memory 120 is applied to the compression and decompression processing portion 148, compressed in accordance with a predetermined compression format (for example, the JPEG format), stored in the memory 120 as an image file of a predetermined image file format (for example, the Exif format), and recorded on the storage medium 152 via the medium control portion 150.
The image recorded on the storage medium 152 in the manner described above can be reproduced and displayed on the monitor 32 by aligning the power/mode switch 28 with the reproduction position to set the digital camera 10 to the reproduction mode.
When the power/mode switch 28 is aligned with the reproduction position and the digital camera 10 is set to the reproduction mode, the CPU 110 outputs a command to the medium control portion 150 to read out the latest image file recorded on the storage medium 152.
The compressed image data included in the read image file is applied to the compression and decompression processing portion 148, decompressed into a brightness/color-difference signal, and output to the monitor 32 via the video encoder 136. In this way, the image recorded on the storage medium 152 is reproduced and displayed on the monitor 32. During reproduction, the brightness/color-difference signal of the reproduced image is applied to the character synthesizing portion 138 to be synthesized with predetermined characters or graphics as required, and applied to the video encoder 136. Thereby, predetermined image pickup information is superimposed on the picked-up image and displayed on the monitor 32.
Frame-by-frame playback of images is performed by operating the right and left keys of the cross-shaped button 40; pressing the right key of the cross-shaped button 40 causes the next image file to be read from the storage medium 152 and reproduced and displayed on the monitor 32, and pressing the left key of the cross-shaped button 40 causes the previous image file to be read from the storage medium 152 and reproduced and displayed on the monitor 32.
In the digital camera 10 of the present embodiment, in order to show the focusing condition to the user, the state of the focusing operation is determined and a display corresponding to that state (a focusing condition display) is presented. The process of displaying the focusing condition is explained below.
<First embodiment of the focusing condition display>
Fig. 4 is a flowchart showing the flow of the focusing condition display process of the digital camera 10. The following steps are generally executed by the CPU 110.
In the shooting standby state, in which the through the lens metering image is displayed on the monitor 32, the facial test section 154 detects the face of the object from the input object image and, when the image contains a face, extracts the face region and detects the face position (step S10).
It is determined whether half-pressing of the shutter release button (S1 ON) has been detected (step S12). When S1 ON is not detected (NO at step S12), step S12 is repeated.
When S1 ON is detected (YES at step S12), it is determined whether a face was detected from the object image at step S10 (step S14). When a face is detected (YES at step S14), as shown in Fig. 5, the focusing condition display generating unit 156 displays, adjacent to the extracted face region, a dialogue balloon serving as the display area for the focusing information (step S16), and the character "?" is displayed inside the dialogue balloon as the focusing information for the not-yet-focused state (step S16). When no face is detected (NO at step S14), as shown in Fig. 6, the focusing condition display generating unit 156 displays the dialogue balloon at the lower left of the screen (step S18), and the character "?" is displayed inside the dialogue balloon as the focusing information for the not-yet-focused state (step S20).
It is determined whether full pressing of the shutter release button 26 (S2 ON) has been detected (step S22). When S2 ON is detected (YES at step S22), AF cannot be performed accurately, so the focus lens group is moved to a predetermined position (step S40) for shooting (step S42).
When S2 ON is not detected (NO at step S22), the lens driving portion 146 is controlled to move the focus lens group while the focusing estimated values at a plurality of AF check points are calculated, and the lens location having the local maximum estimated value is determined as the focusing position (step S24). Then, in order to move the focus lens group to the obtained focusing position, the lens driving portion 146 is controlled to start the movement of the focus lens group.
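The following is a minimal, illustrative sketch of this contrast-type search (the function names and the discrete list of check points are assumptions introduced only for explanation):

    def find_focusing_position(check_points, focusing_estimated_value):
        """Return the lens location whose focusing estimated value (contrast
        of the AF area) is the maximum among the AF check points."""
        best_position, best_value = None, float("-inf")
        for position in check_points:
            value = focusing_estimated_value(position)  # contrast at this lens location
            if value > best_value:
                best_position, best_value = position, value
        return best_position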
It is determined whether the desired region has been focused (step S26). When a face was detected at step S10, it is determined whether the face is in focus. When no face was detected at step S10, it is determined whether the object in a predetermined region near the center of the image is in focus.
When it is determined that the desired region has not been focused (NO at step S26), the focus lens group is moved toward the obtained focusing position while, as shown in Fig. 7, the character "?" displayed in the dialogue balloon is rotated so as to be transformed into the character "!" (step S32), and step S26 is repeated. When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S26), as shown in Fig. 8, the character "!" is displayed inside the dialogue balloon as the focusing information for the focused state (step S28), and the brightness of the image is increased so that the dialogue balloon is shown clearly (step S30).
That is, before the focusing operation is started (step S20), the character "?" is displayed inside the dialogue balloon as the focusing information for the not-yet-focused state, and during the focusing operation (steps S24 to S32) the character "?" is rotated by 90 degrees so as to be converted step by step into the character "!", so that the character "!" is displayed inside the dialogue balloon as the focusing information for the focused state in synchronization with the end of the focusing process (step S28). The character "?" may be rotated continuously at a predetermined rotational speed, at a speed calculated from the approximate time required to reach the focusing position, or at a rotational speed obtained by calculating the time required to move the focus lens group to the focusing position obtained at step S24, so that the rotation finishes at the same time as the predetermined region comes into focus.
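As a hedged illustration of the last option (the constant-speed lens model and the 90-degree total rotation are assumptions used only to make the synchronization concrete), the rotational speed could be chosen as follows:

    def rotational_speed_deg_per_s(current_position, focusing_position,
                                   lens_speed, default_speed=90.0):
        """Speed at which to rotate the "?" so that the 90-degree turn into "!"
        ends when the focus lens group reaches the focusing position."""
        travel = abs(focusing_position - current_position)
        if lens_speed <= 0.0 or travel == 0.0:
            return default_speed          # fall back to the predetermined speed
        drive_time = travel / lens_speed  # estimated time until focusing completes
        return 90.0 / drive_time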
It is determined whether the voice output is turned on (step S34). When the voice output is turned on (YES at step S34), a sound indicating that focusing has been achieved, such as a voice, a tune or a call, is output through the loudspeaker 36 (step S36), and it is then determined whether S2 ON has been detected (step S38). When the voice output is not turned on (NO at step S34), it is determined whether S2 ON has been detected (step S38).
When S2 ON is not detected (NO at step S38), the process returns to the step of calculating the focusing estimated values (step S24). When S2 ON is detected (YES at step S38), the image is picked up (step S42).
According to the present embodiment, the user's attention is attracted by displaying the dialogue balloon and the focusing information, so that the user can be informed of the focusing condition in an easily understandable manner. When AF is not performed accurately, no focusing information is displayed, and when AF is performed accurately, focusing information corresponding to the focusing condition is displayed, which guides the user in performing the AF operation.
Also, according to the present embodiment, when a face is detected, the dialogue balloon and the like are displayed near the face, so that the focusing area is indicated to the user in an easily understandable manner. In addition to the display, a voice is output to inform the user that the face is in focus.
<Second embodiment of the focusing condition display>
In the second embodiment of the focusing condition display, not only is text displayed in the dialogue balloon in response to the focusing condition, but the message itself is also changed in response to the focusing condition. Fig. 9 is a flowchart showing the flow of the focusing condition display process of the digital camera 10. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
In the shooting standby state, in which the through the lens metering image is displayed on the monitor 32, the facial test section 154 detects the face of the object from the input object image and, when the image contains a face, extracts the face region and detects the face position (step S10).
It is determined whether S1 ON has been detected (step S12). When S1 ON is not detected (NO at step S12), step S12 is repeated.
When S1 ON is detected (YES at step S12), it is determined whether a face was detected from the object image at step S10 (step S14). When a face is detected (YES at step S14), the dialogue balloon is displayed near the face region (step S16), and words such as "Not yet..." indicating the not-yet-focused state are displayed inside the dialogue balloon (step S50). When no face is detected (NO at step S14), the dialogue balloon is displayed at the lower left of the screen (step S18), and the words "Not yet..." are displayed inside the dialogue balloon as the focusing information for the not-yet-focused state (step S50).
It is determined whether S2 ON has been detected (step S22). When S2 ON is detected (YES at step S22), the focus lens group is moved to a predetermined position (step S40) for shooting (step S42).
When S2 ON is not detected (NO at step S22), the focusing estimated values are calculated and the focusing position is determined (the focusing operation is started) (step S24), and it is determined whether the desired region has been focused (step S26).
When it is determined that the desired region has not been focused (NO at step S26), the focus lens group is moved toward the obtained focusing position (step S52), and step S26 is repeated. When the desired region has been focused (YES at step S26), it is determined whether a face was detected from the focused object image (step S54).
When no face is detected from the object image (NO at step S54), words indicating that focusing has been achieved are displayed inside the dialogue balloon (step S56), and the brightness of the image is increased so that the dialogue balloon is shown clearly (step S30).
When a face is detected from the object image (YES at step S54), the facial test section 154 detects the expression of the face detected at step S54 (step S58), and it is determined whether the expression is a smile (step S60).
When the expression is a smile (YES at step S60), words indicating that focusing has been achieved and that the object is smiling, such as "Take the smiling picture!", are displayed inside the dialogue balloon (step S62), and the brightness of the image is increased so that the dialogue balloon is shown clearly (step S30). When the expression is not a smile (NO at step S60), words indicating that focusing has been achieved are displayed inside the dialogue balloon, and the brightness of the image is increased so that the dialogue balloon is shown clearly (step S30).
Determine whether to open voice output (step S34).When opening voice output (step S34=is), through loudspeaker 36 outputs, represented the sound of focusing, such as voice, tune, calling etc. (step S36), then, determine whether to detect S2 ON (step S38).When not opening voice output (step S34=is no), determine whether to detect S2 ON (step S38).
When S2 ON not detected (step S38=is no), process turns back to the step (step S24) of calculating focusing estimated value.When S2 ON being detected (step S38=is), pickup image (step S42).
According to the present embodiment, the focusing condition is reported using a sentence, which allows the user to be informed of the focusing condition in an easily understandable manner. This also allows the user to judge the focusing condition easily.
In the above-described embodiments, the focusing information and the focusing conditions are associated with each other and recorded in advance in the ROM 116, so that the focusing information is displayed in the dialogue balloon; however, the focusing information may also be input by the user. In this case, when the user inputs focusing information corresponding to a focusing condition through the operating portion 112, the input focusing information is associated with that focusing condition and recorded in the ROM 116, so that, based on the result of determining whether the desired region has been focused (step S26), the focusing information corresponding to the focusing condition is selected from the focusing information input by the user and displayed in the dialogue balloon. If plural pieces of focusing information are recorded in the ROM 116 for the same focusing condition, the most recently recorded piece of focusing information may be selected automatically and displayed, or a piece preset by the user may be displayed.
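A minimal sketch of such a store, assuming a simple key-value table in place of the ROM 116 and purely illustrative condition names and messages:

    focusing_info_table = {"unfocused": ["?"], "focused": ["!"]}  # defaults recorded in advance

    def register_focusing_info(condition, text):
        """Associate user-entered focusing information with a focusing condition."""
        focusing_info_table.setdefault(condition, []).append(text)

    def select_focusing_info(condition, user_preset=None):
        """Pick the message to show: the user-preset entry if one exists,
        otherwise the most recently recorded entry for this condition."""
        entries = focusing_info_table.get(condition, [])
        if user_preset in entries:
            return user_preset
        return entries[-1] if entries else ""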
In the above-described embodiments, when a face is selected, the dialogue balloon and the like are displayed near the face; in addition, as shown in Fig. 10, a frame indicating the detected face may be superimposed on the face and displayed, so that the position being focused is indicated even more clearly. The position at which the dialogue balloon is displayed is not limited to the vicinity of the face; the facial test section 154 may detect the mouth, so that the dialogue balloon can be positioned as if it were coming out of the mouth.
In the above-described embodiments, the facial test section 154 detects the face, or the face and its expression, but the facial test section 154 may also detect the motion of the face, that is, the motion of the main object, so that focusing information responding to the motion of the object can be displayed. In other words, while the motion of the object is detected by the facial test section 154, the displayed focusing information, such as "Now!", may be shaken, and when the motion of the object is no longer detected, that is, when the object stops moving, the shaking of the focusing information is stopped to inform the user that the object has stopped moving.
In the above-described embodiments, the focusing information is displayed in the dialogue balloon to inform the user of the focusing condition, but the size of the dialogue balloon may be changed to indicate the focusing condition. For example, at first only a small dialogue balloon is displayed as an indication (see Fig. 12A), the dialogue balloon is expanded as the focusing operation proceeds (see Fig. 12B), and it reaches its maximum size when focusing is complete (see Fig. 12C).
In the above-described embodiments, when no face is detected, the dialogue balloon is displayed at the lower left of the screen, but other displays may be used, and a human face or an animated character of an animal such as a bird may be displayed. For example, as shown in Fig. 11, a bird is displayed at the lower left of the screen, and a dialogue balloon extending from the bird's beak is displayed, as if the bird were speaking. Alternatively, text information and graphic information may be displayed together in the dialogue balloon. Also, an animation corresponding to the focusing condition may be displayed in the dialogue balloon; for example, a face icon other than a smiling face may be displayed for the unfocused state, and a smiling face icon may be displayed for the focused state.
< the second embodiment >
In the first embodiment of the present invention, when a face is detected, the dialogue balloon, text information and the like are displayed near the face, so that the focusing area or the focusing condition is reported to the user in an easily understandable manner; however, the method of indicating the focusing area or the focusing condition to the user in an easily understandable manner is not limited to this.
In the second embodiment of the present invention, an animation is displayed to inform the user of the focusing area or the focusing condition in an easily understandable manner. The digital camera 11 of the second embodiment is explained below. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described here.
Figure 13 means the block diagram of schematic construction of the inside of digital camera 11.
As shown in figure 13, digital camera 11 comprises: CPU110, operating portion (shutter release button 26, power supply/mode switch 28, mode dial 30, zoom button 38, cruciform button 40, MENU/OK button 42, DISP button 44, BACK button 46 etc.) 112, ROM116, EEPROM118, storer 120, VRAM122, imaging apparatus 124, timing generator (TG) 126, simulation process portion (CDS/AMP) 128, A/D converter 130, image input control portion 132, picture signal handling part 134, video encoder 136, AF test section 140, AE/AWB test section 142, aperture drive division 144, lens driving portion 146, compression and decompression handling part 148, medium control part 150, storage medium 152, face test section 154, flashlamp is adjusted control part 160, animated image synthesizes portion 162, animated image generating unit 164 etc.
According to a command from the CPU 110, the video encoder 136 converts the input image signal into a video signal of the type displayed on the monitor 32 (for example an NTSC signal, a PAL signal or a SECAM signal) and outputs the video signal to the monitor 32, and, as required, outputs to the monitor 32 a signal into which predetermined character information or graphic information has been combined by the animated image synthetic portion 162.
The animated image generating unit 164 combines a plurality of still images into an animated image (dynamic image) and generates the animated image in a format such as animated GIF or MNG (both are animation file formats). The CPU 110 recognizes the state of the AF performed by the AF test section 140 and issues a command to the animated image generating unit 164. According to the command from the CPU 110, the animated image generating unit 164 selects still images corresponding to the AF state and generates the animated image. Then, based on the face position information detected by the facial test section 154, the CPU 110 issues a command to the animated image synthetic portion 162 to display the animated image generated by the animated image generating unit 164. A program by which the animated image generating unit 164 selects still images in response to the object state detected by the facial test section 154 and the like may be stored in the ROM 116, so that the animated image generating unit 164 can generate the animated image using this program. This makes it possible to generate an animated image with a realistic feel. The animated images generated by the animated image generating unit 164 are explained below.
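As a purely illustrative sketch of the still-image selection step (the state names, frame file names and frame rate are assumptions, not the disclosed program), the generating unit might map the AF state to a frame sequence like this:

    # Hypothetical frame sets for each AF state; the selected set would then be
    # encoded into an animated GIF/MNG by the animated image generating unit.
    AF_STATE_FRAMES = {
        "searching": ["frame_rotate_000.png", "frame_rotate_030.png", "frame_rotate_060.png"],
        "focused":   ["frame_locked.png"],
    }

    def select_animation_frames(af_state, frame_rate=10):
        frames = AF_STATE_FRAMES.get(af_state, AF_STATE_FRAMES["searching"])
        duration_ms = int(1000 / frame_rate)   # per-frame display time
        return frames, duration_ms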
Next, the operation of the digital camera 11 of the present embodiment configured as described above is explained. In the digital camera 11 of the present embodiment, in order to show the focusing condition to the user, the state of the focusing operation is determined and an animated image corresponding to that state is displayed. The process of displaying this animated image is explained below.
<First embodiment of the focusing condition display>
Fig. 14 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110.
In the waiting status for making a video recording, wherein, show through the lens metering image on monitor 32, facial test section 154 is from the face of input object image detection object, and when image comprises face, carry out the extraction of facial zone and the detection of facial positions (step S110).
Determine whether to detect half pressure shutter release button (S1ON) (step S112).When S1ON not detected (step S112=is no), repeating step S112.
When S1ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2 ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON being detected (step S116=is), can not accurately carry out AF, thus focus lens group is shifted to precalculated position (step S140), for shooting (step S138).
When S2 ON is not detected (NO at step S116), as shown in Fig. 15A, two circular frames of different sizes and different colors are displayed concentrically at an arbitrary position on the through the lens metering image (for example at the center) (step S120). In this case, the outer circular frame has a larger area than the face region detected at step S114. When the two circular frames come to have the same size, each circular frame is formed of a plurality of arcs arranged so that the two circular frames overlap to form a single circle.
The lens driving portion 146 is controlled to move the focus lens group while the focusing estimated values at a plurality of AF check points are calculated, and the lens location having the local maximum estimated value is determined as the focusing position (step S122). Then, in order to move the focus lens group to the obtained focusing position, the lens driving portion 146 is controlled to start the movement of the focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the inner circular frame and the outer circular frame rotate in directions opposite to each other, and the animated image is superimposed on the through the lens metering image and displayed (step S126). That is, the animated image generating unit 164 generates an animated image in which, in response to the focusing condition, the size of the outer circular frame is reduced and the size of the inner circular frame is increased, the outer circular frame rotating clockwise at a required speed and the inner circular frame rotating counterclockwise at a required speed, so that the two arcs have the same size as each other when the image is focused. Generating the animated image in this way, with the circular frames rotating in different directions, increases the visibility of the rotation of the frames. The CPU 110 synthesizes the generated animated image into the through the lens metering image and displays it therein so that the animated image moves from the position displayed at step S120 toward the face detected at step S114. Then, step S124 is repeated.
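The following is a hedged sketch of how such an animation frame could be parameterized (the linear interpolation of the radii and the single speed value are assumptions made only for illustration):

    def concentric_frame_parameters(progress, t, r_outer_start, r_inner_start,
                                    r_focused, speed_deg_per_s):
        """Radii and rotation angles of the two circular frames for focusing
        progress (0 = AF started, 1 = focused) at elapsed time t seconds."""
        progress = min(max(progress, 0.0), 1.0)
        r_outer = r_outer_start + (r_focused - r_outer_start) * progress  # shrinks
        r_inner = r_inner_start + (r_focused - r_inner_start) * progress  # grows
        angle_outer = (speed_deg_per_s * t) % 360.0    # clockwise rotation
        angle_inner = (-speed_deg_per_s * t) % 360.0   # counterclockwise rotation
        return r_outer, r_inner, angle_outer, angle_inner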
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 15B, the single circle formed by the two circular frames, which now have the same size and overlap each other, is superimposed on the face detected at step S114 (step S128). The two circular frames are then shown clearly, with higher brightness (step S130).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36, output has represented the sound of focusing, such as voice, tune, calling etc. (step S134), then determines whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step (step S120) that demonstration has two round frames of different size each other.When step S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, when the image is not focused, a plurality of rotating frames are displayed, and when the image is focused, these frames form a single stationary circle, which makes it possible to inform the user of the focusing condition and the focused condition in an easily understandable manner. When AF is not performed accurately, no frame is displayed, guiding the user to perform the AF operation. Also, when a face is detected, the frames are superimposed and displayed on the detected face, which indicates the focusing area to the user in an easily understandable manner. In addition to the display, a voice is output to indicate to the user, in an easily understandable manner, that this region is in focus.
In the present embodiment, the displayed frames are circular, but the frames may have various shapes other than a circle, including any geometrical shape (such as a triangle, a rectangle or an ellipse) and irregular shapes such as a heart. This makes it possible to inform the user of the focusing condition with a display that attracts the user's attention. Shapes such as triangles, rectangles, ellipses and hearts, whose outline changes as they rotate, have the advantage that their stationary state is easier to recognize than in the case of a circle.
Also, in the present embodiment, the two circular frames are displayed in colors different from each other, but when the image is focused, the two circular frames may be displayed in a neutral color between the two different colors. For example, when the outer frame is displayed in blue and the inner frame is displayed in yellow, the circle formed by the two frames when the image is focused is displayed in yellow-green. Displaying the plural frames in changing colors in this way increases their visibility. Changing the color between the rotating frames and the fixed frame makes the focused state easier to recognize than simply stopping the frames. Of course, the two frames may also be displayed in the same color.
Also, in the present embodiment, visibility is increased by displaying the frames more clearly when focusing is achieved, but this is not a limitation; any display that allows the user to be informed of the focused condition may be used. For example, when focusing is achieved, the frames may be given a darker color, or the frames may be drawn with wider lines.
Also, in the present embodiment, the frames are rotated at a predetermined rotational speed, but the rotational speed may be changed in response to the focusing condition. The frames may be rotated at a speed calculated from the approximate time required to reach the focusing position, or at a rotational speed obtained by calculating the time required to move the focus lens group from its current position to the focusing position, so that the rotation finishes at the same time as the predetermined region comes into focus.
Also, in the present embodiment, the plural frames are rotated in directions different from each other so that the focused condition can be reported to the user more clearly, but the frames may also be rotated in the same direction. In that case, the complexity can be reduced, which reduces the amount of processing.
<Second embodiment of the focusing condition display>
In the second embodiment of the focusing condition display, an animation in which a square frame shakes is displayed. Fig. 16 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
Determine whether to detect S1 ON (step S112).When S1 ON not detected (step S112=is no), repeating step S112.
When S1 ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON being detected (step S116=is), can not accurately carry out AF, thus focus lens group is shifted to precalculated position (step S140), for shooting (step S138).
When S2 ON is not detected (NO at step S116), as shown in Fig. 17A, an ordinary square frame is displayed at an arbitrary position on the through the lens metering image (for example at the center) (step S220). In the following, a single ordinary square frame is explained as an example, but as shown in Fig. 17A, one or more secondary frames (frames with narrower lines) that move in tandem with the main frame (the frame with the wider line) may also be displayed. In addition, marks representing the eyes, the mouth and the like may be displayed on the detected face.
Control lens driving portion 146 with mobile focus lens group, calculate the focusing estimated value at a plurality of AF check points, to the lens location with local maximum estimated value is defined as to focusing position (step S122) simultaneously.Then, for focus lens group being moved to obtained focusing position, control lens driving portion 146 to start the movement of focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the square frame keeps moving (that is, the frame shakes), and the animated image is superimposed on the through the lens metering image and displayed (step S226). That is, in response to the focusing condition, the animated image generating unit 164 generates an animated image in which the square frame keeps moving in arbitrary directions around a central region at a predetermined speed and over a predetermined distance. The CPU 110 synthesizes the generated animated image into the through the lens metering image and displays it therein so that the center of the movement is superimposed on the region of the face detected at step S114. Then, step S124 is repeated.
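A minimal sketch of such a shaking-frame animation, assuming (purely for illustration) that the frame circles the detected face at a fixed radius and angular speed until focus is achieved:

    import math

    def frame_center(face_center, t, radius=10.0, angular_speed=4.0, focused=False):
        """Position of the square frame at time t: it keeps moving around the
        face while unfocused, and stops on the face once focus is achieved."""
        cx, cy = face_center
        if focused:
            return cx, cy
        angle = angular_speed * t
        return cx + radius * math.cos(angle), cy + radius * math.sin(angle)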
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 17B, the square frame is superimposed on the face detected at step S114 (step S228). The square frame is then shown clearly, with higher brightness (step S230).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36, output represents the sound of having focused, and such as voice, tune, calling etc. (step S134), then, determines whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step (step S220) that shows common square frame.When S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, the moving frame stops, which makes it possible to inform the user of the focusing condition in an easily understandable manner. When AF is not yet performed accurately, no frame is displayed, guiding the user to perform the AF operation. Also, when a face is detected, the frame stops moving at the position of the detected face, which indicates the focusing area to the user in an easily understandable manner. In addition to the display, a voice is output to indicate to the user, in an easily understandable manner, that this region is in focus.
In the present embodiment, the displayed frame is an ordinary square, but the frame is not limited to this and may have various shapes, including any geometrical shape such as a triangle, a rectangle, a polygon, a circle or an ellipse, and irregular shapes such as a heart.
Also, in the present embodiment, visibility is increased by displaying the frame more clearly when focusing is achieved, but this is not a limitation; any display that allows the user to be informed of the focused condition may be used. For example, when focusing is achieved, the frame may be given a darker color, or the frame may be drawn with a wider line.
Also, in the present embodiment, the frame is moved at a predetermined speed, but the moving speed may be changed in response to the displacement.
<Third embodiment of the focusing condition display>
In the third embodiment of the focusing condition display, an animation in which the ears of an animal (for example a rabbit) move in response to the focusing condition is displayed. Fig. 18 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
Determine whether to detect S1 ON (step S112).When S1 ON not detected (step 112=is no), repeating step S112.
When S1 ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2 ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (step S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON being detected (step S116=is), can not accurately carry out AF, thus focus lens group is shifted to precalculated position (step S140), for shooting (step S138).
When S2 ON is not detected (NO at step S116), as shown in Fig. 19A, bent (waving) rabbit ears are displayed above the face detected at step S114 (step S320).
Control lens driving portion 146 with mobile focus lens group, calculate the focusing estimated value at a plurality of AF check points, to the lens location with local maximum estimated value is defined as to focusing position (step S122) simultaneously.Then, for focus lens group being moved to obtained focusing position, control lens driving portion 146 to start the movement of focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the waving rabbit ears, in response to the focusing condition, gradually rise from the horizontal direction and stand up fully when focusing is achieved, and the CPU 110 superimposes the animated image on the through the lens metering image at the same position as the waving rabbit ears of step S320 and displays it (step S326). Then, step S124 is repeated.
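As a hedged illustration of this posture change (the angle range and the small waving term are assumptions introduced only to make the behavior concrete), the ear angle could be driven by the focusing progress as follows:

    import math

    def rabbit_ear_angle(progress, t, wave_amplitude_deg=15.0, wave_hz=2.0):
        """Ear angle in degrees above horizontal: near-horizontal waving at the
        start of AF, fully erect (90 degrees) once the face is in focus."""
        progress = min(max(progress, 0.0), 1.0)
        angle = 90.0 * progress
        if progress < 1.0:
            angle += wave_amplitude_deg * math.sin(2.0 * math.pi * wave_hz * t)
        return max(0.0, min(90.0, angle))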
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 19B, erect, stationary rabbit ears are displayed at the same position as the waving rabbit ears of step S320 (step S328). The erect rabbit ears are then shown clearly, with higher brightness (step S330).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36 outputs, represent to have completed the sound of focusing, such as voice, tune, calling etc. (step S134), then determine whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step of waving ear (step S320) that shows rabbit.When S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, a display format that attracts the user's attention, namely the rabbit ears, is used to inform the user that the focusing operation is being performed. The degree to which the rabbit ears stand up visually represents the degree of progress of the focusing operation, which makes it possible to inform the user of the focusing condition and the focused condition in an easily understandable manner. Also, while AF is not yet performed accurately, the rabbit ears are not displayed, guiding the user to perform the AF operation. Also, the rabbit ears are displayed on the detected face, which indicates the region to be focused or the focused area to the user in an easily understandable manner.
In the present embodiment, an example using rabbit ears has been described, but the ears are not limited to rabbit ears; the ears of any animal whose ears normally stand up may be used, such as dog ears, elephant ears, giant panda ears and the like.
Also, in the present embodiment, visibility is increased by displaying the rabbit ears more clearly when focusing is achieved, but this is not a limitation; any display that allows the user to be easily informed of the focused condition may be used. For example, when focusing is achieved, the whole of the rabbit ears may be given a darker color, or the rabbit ears may be drawn with wider lines.
<Fourth embodiment of the focusing condition display>
In the fourth embodiment of the focusing condition display, an animation in which a character such as an animal (for example a bear) gradually appears in response to the focusing condition is displayed. Fig. 20 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
Determine whether to detect S1 ON (step S112).When S1 ON not detected (step 112=is no), repeating step S112.
When S1 ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2 ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (step S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON being detected (step S116=is), can not accurately carry out AF, thus focus lens group is shifted to precalculated position (step S140), for shooting (step S138).
When S2 ON not detected (step S116=is no), control lens driving portion 146 with mobile focus lens group, calculate the focusing estimated value at a plurality of AF check points, to the lens location with local maximum estimated value is defined as to focusing position (step S122) simultaneously.Then, for focus lens group being moved to obtained focusing position, control lens driving portion 146 to start the movement of focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the bear character gradually appears as the focusing operation proceeds, and the animated image is superimposed on the through the lens metering image and displayed (step S426). That is, as shown in Fig. 21A, the animated image generating unit 164 generates an animated image in which the bear character gradually appears, for example beside the face, as the focusing operation proceeds, and, as shown in Fig. 21B, the whole bear character is displayed beside the face when focusing is achieved. The CPU 110 synthesizes the generated animated image into the through the lens metering image and displays it therein so that the bear character appears from the side of the face detected at step S114. Then, step S124 is repeated.
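A minimal sketch of the gradual-appearance effect, assuming (for illustration only) that the character is a sprite revealed column by column in proportion to the focusing progress:

    def visible_sprite_columns(progress, sprite_width):
        """Number of columns of the character sprite drawn beside the face:
        none at the start of AF, the whole character once focusing is achieved."""
        progress = min(max(progress, 0.0), 1.0)
        return int(round(sprite_width * progress))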
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 21B, the whole bear character is displayed beside the face detected at step S114, with part of the bear character overlapping the front of the face (step S428). The bear character is then shown clearly, with higher brightness (step S430).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36 outputs, represented the sound of focusing, such as voice, tune, calling etc. (step S134), then, determine whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step (step S122) of calculating focusing estimated value.When S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, a display format that attracts the user's attention, namely the bear character, is used to show the user that the focusing operation is being performed. The amount of the bear character that has appeared visually represents the degree of progress of the focusing operation, which makes it possible to inform the user of the focusing condition and the focused condition in an easily understandable manner. Also, while AF is not yet performed accurately, the bear character is not displayed, guiding the user to perform the AF operation. Also, the bear character is displayed near the detected face, which indicates the region to be focused or the focused area to the user in an easily understandable manner.
Also, in the present embodiment, visibility is increased by displaying the bear character more clearly when focusing is achieved, but this is not a limitation; any display that allows the user to be easily informed of the focused condition may be used. For example, when focusing is achieved, the bear character may be given a darker color, or the bear character may be drawn with wider lines.
<Fifth embodiment of the focusing condition display>
In the fifth embodiment of the focusing condition display, an animation in which an animal (for example a bird) moves in response to the focusing condition is displayed. Fig. 22 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
Determine whether to detect S1 ON (step S112).When S1 ON not detected (step 112=is no), repeating step S112.
When S1 ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2 ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (step S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON being detected (step S116=is), can not accurately carry out AF, thus focus lens group is moved on to precalculated position (step S140), for shooting (step S138).
When S2 ON is not detected (NO at step S116), as shown in Fig. 23A, a flying bird is displayed at a predetermined position (for example at the lower left of the screen) (step S520).
Control lens driving portion 146 with mobile focus lens group, calculate the focusing estimated value at a plurality of AF check points, to the lens location with local maximum estimated value is defined as to focusing position (step S122) simultaneously.Then, for focus lens group being moved to obtained focusing position, control lens driving portion 146, to start the movement of focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the bird flies around the screen, and the CPU 110 superimposes the animated image on the through the lens metering image and displays it (step S526). Then, step S124 is repeated.
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 23B, the bird perches above the face detected at step S114 (step S528). The stationary bird is then shown clearly, with higher brightness (step S530).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36 outputs, represented the sound of focusing, such as voice, tune, calling etc. (step S134), then, determine whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step (step S520) that shows the bird circling in the air in precalculated position.When S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, a display format that attracts the user's attention, namely the flying bird, is used to show the user that the focusing operation is being performed. The flying bird coming to rest visually represents, in an easily understandable manner, that focusing has been achieved. Also, while AF is not yet performed accurately, the bird is not displayed, guiding the user to perform the AF operation. Also, the bird comes to rest on the detected face, which indicates the focused position to the user in an easily understandable manner.
Also, in the present embodiment, an example using a bird has been described, but any animal that can fly may be used, such as a butterfly, a dragonfly, a bee, a bat and the like. In the case where an animated image of a butterfly is used, an animated image may be generated in which the butterfly flies around the screen at step S526, stops flying and comes to rest on the face detected at step S114 at step S528, and the stationary butterfly is shown clearly with higher brightness at step S530.
Also, in the present embodiment, visibility is increased by displaying the bird that has stopped flying and come to rest, but this is not a limitation; any display that allows the user to be easily informed of the focused condition may be used. For example, when focusing is achieved, the bird may be given a darker color, or the bird may be drawn with wider lines.
<Sixth embodiment of the focusing condition display>
In the sixth embodiment of the focusing condition display, an animation in which a flower blooms in response to the focusing condition is displayed. Fig. 24 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
Determine whether to detect S1 ON (step S112).When S1 ON not detected (step 112=is no), repeating step S112.
When S1 ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2 ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (step S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON is detected (YES at step S116), AF cannot be performed accurately, so the focus lens group is moved to a predetermined position (step S140) for shooting (step S138).
When S2 ON is not detected (NO at step S116), as shown in Fig. 25A, petals are displayed on the face detected at step S114 (step S620).
Control lens driving portion 146 with mobile focus lens group, calculate the focusing estimated value at a plurality of AF check points, to the lens location with local maximum estimated value is defined as to focusing position (step S122) simultaneously.Then, when focus lens group is moved on to obtained focusing position, control lens driving portion 146 to start the movement of focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the flower gradually blooms in response to the focusing condition, and the CPU 110 synthesizes the animated image into the through the lens metering image at the same position as the petals of step S620 and displays it (step S626). Then, step S124 is repeated.
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 25B, a flower in full bloom is displayed at the same position as the petals of step S620 (step S628). The flower in full bloom is then shown clearly, with higher brightness (step S630).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36 outputs, represented the sound of focusing, such as voice, tune, calling etc. (step S134), then, determine whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step (step S620) that shows petal.When S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, a display format that attracts the user's attention, namely the flower, is used to show the user visually that the focusing operation is being performed. The degree to which the flower has bloomed visually represents the degree of progress of the focusing operation, which makes it possible to inform the user of the focusing condition and the focused condition in an easily understandable manner. Also, while AF is not yet performed accurately, the flower is not displayed, guiding the user to perform the AF operation. Also, the flower is displayed near the detected face, which indicates the position to be focused or the focused position to the user in an easily understandable manner.
Also, in the present embodiment, visibility is increased by displaying the flower in full bloom when focusing is achieved, but this is not a limitation; any display that allows the user to be easily informed of the focused condition may be used. For example, when focusing is achieved, the flower may be given a darker color, or the flower may be drawn with wider lines.
<Seventh embodiment of the focusing condition display>
In the seventh embodiment of the focusing condition display, an animation in which a dialogue balloon expands in response to the focusing condition is displayed. Fig. 26 is a flowchart showing the flow of the focusing condition display process of the digital camera 11. The following steps are generally executed by the CPU 110. Parts identical to those of the first embodiment are denoted by the same reference numerals and are not described in detail here.
Determine whether to detect S1 ON (step S112).When S1 ON not detected (step 112=is no), repeating step S112.
When S1 ON being detected (step S112=is), determine whether from the object image detection of step S110 to face (step S114).When face being detected (step S114=is), determine whether to detect S2 ON (step S116).
When face not detected (step S114=is no), determine whether to detect S2 ON (step S118).When S2 ON not detected (step S118=is no), detect facial step (step S114) and be repeated, and when S2 ON being detected (step S118=is), pickup image (step S138).
When S2 ON being detected (step S116=is), can not accurately carry out AF, thus focus lens group is moved on to precalculated position (step S140), for shooting (step S138).
When S2 ON not detected (step S116=is no), control lens driving portion 146 with mobile focus lens group, calculate the focusing estimated value at a plurality of AF check points, to the lens location with local maximum estimated value is defined as to focusing position (step S122) simultaneously.Then, for focus lens group being moved to obtained focusing position, control lens driving portion 146 to start the movement of focus lens group.
It is determined whether the face detected at step S114 has been focused (step S124). When it is determined that the face has not yet been focused (NO at step S124), the focus lens group is moved toward the obtained focusing position while the animated image generating unit 164 generates an animated image in which the dialogue balloon expands in response to the focusing condition, and the animated image is superimposed on the through the lens metering image and displayed (step S726). That is, the animated image generating unit 164 generates an animated image in which, as shown in Fig. 27A, an extremely small dialogue balloon (or small circle) is displayed just after the focusing operation starts, as shown in Fig. 27B, the size of the dialogue balloon gradually increases as the focusing operation proceeds, and, as shown in Fig. 27C, the dialogue balloon reaches its maximum size when the focusing operation is completed. The CPU 110 displays the generated animated image near the face detected at step S114 (for example beside it). Then, step S124 is repeated. The position at which the dialogue balloon is displayed is not limited to the vicinity of the face; the facial test section 154 may detect the mouth, so that the dialogue balloon can be displayed as if it were coming out of the detected mouth.
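A minimal sketch of this size change, assuming (purely for illustration) a linear interpolation between a starting and a maximum balloon size driven by the focusing progress:

    def balloon_size(progress, start_size=8, max_size=64):
        """Diameter of the dialogue balloon: very small just after AF starts,
        growing with the focusing operation, maximum when focusing completes."""
        progress = min(max(progress, 0.0), 1.0)
        return int(round(start_size + (max_size - start_size) * progress))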
When the desired region has been focused, that is, when the movement of the focus lens group is complete (YES at step S124), as shown in Fig. 27C, the dialogue balloon of the maximum size is displayed near the face detected at step S114 (step S728). The dialogue balloon is then shown clearly, with higher brightness (step S730).
Determine whether to open voice output (step S132).When opening voice output (step S132=is), through loudspeaker 36 outputs, represented the sound of focusing, such as voice, tune, calling etc. (step S134), then, determine whether to detect S2 ON (step S136).When not opening voice output (step S132=is no), determine whether to detect S2 ON (step S136).
When S2 ON not detected (step S136=is no), process turns back to the step (step S122) of calculating focusing estimated value.When S2 ON being detected (step S136=is), pickup image (step S138).
According to the present embodiment, the dialogue balloon, which attracts the user's attention by changing its size, is used to show the user that the focusing operation is being performed. The size of the dialogue balloon visually represents the degree of progress of the focusing operation, which makes it possible to inform the user of the focusing condition and the focused condition in an easily understandable manner. Also, while AF is not yet performed accurately, the dialogue balloon is not displayed, guiding the user to perform the AF operation. Also, the dialogue balloon is displayed near the detected face, which indicates the focused position to the user in an easily understandable manner.
In the present embodiment, visibility is increased by displaying the balloon more clearly when focusing is achieved, but this is not a limitation; any display that allows the user to be easily informed of the focused condition may be used. For example, when focusing is achieved, the dialogue balloon may be drawn with a wider line.
Also, in the present embodiment, an animated image in which a dialogue balloon containing the character "!" changes its size is synthesized into the through the lens metering image, but an animated image may also be generated in which the dialogue balloon contains the character "?" until the focusing operation is completed and contains the character "!" when the focusing operation is completed, and this animated image may be synthesized into the through the lens metering image. This makes it possible to inform the user clearly of the focused condition.
In the present embodiment, a face is detected and focused, but the target to be detected is not limited to a face; a whole person, an animal such as a dog, a cat or a rabbit, an automobile or the like may be detected and focused. A whole person, an animal, an automobile and the like can be detected by various known techniques.
The application of the present invention is not limited to a digital camera; the present invention can also be applied to other image pickup apparatuses, such as a mobile phone with a camera and a video camera.

Claims (29)

1. a camera head, comprising:
Picture pick-up device, the image of picked-up object;
Picture catching device, through described picture pick-up device, catches the picture signal that represents described object continuously;
Display device, the picture signal based on captured, shows through the lens metering image;
Automatically device is adjusted in focusing, and the picture signal based on captured is carried out focusing automatically and adjusted to maximize the contrast of described object;
Focusing condition detection device, is adjusting after device adjustment by described automatic focusing, detects the focusing situation of described object; And
Display control device, which synthesizes a viewing area for showing focusing information into the through-the-lens image of described display device, and which, responding to the focusing situation detected by described focusing condition detection device, synthesizes into described viewing area focusing information that differs at least between when the image is not focused and when the image is focused.
2. camera head as claimed in claim 1, further comprises:
Face detector part detects the face of described object from captured picture signal,
Wherein, when described face being detected by described face detector part, described automatic focusing is adjusted device and on detected face, is carried out focusing adjustment automatically.
3. camera head as claimed in claim 2, wherein,
Described display control device is synthesized to the viewing area showing on the through the lens metering image on described display device near the position described face being detected by described face detector part.
4. camera head as claimed in claim 3, wherein,
The described viewing area showing on described through the lens metering image by described display control device on described display device has dialogue balloons shape.
5. camera head as claimed in claim 1, further comprises,
Memory device, storage is corresponding to the focusing information of focusing situation, and described focusing information at least comprises the information of the situation of focusing and the information of the situation of having focused,
Wherein, described display control device is synthesized to the focusing information of storing in described memory device in described viewing area.
6. camera head as claimed in claim 5, further comprises
Entering apparatus, response focusing situation input focusing information,
Wherein, the described focusing information that described memory device stores is inputted by described entering apparatus, and
When plural pieces of focusing information corresponding to the same focusing situation are stored in described memory device, described display control device selects one desired piece of focusing information from the plural pieces of focusing information and synthesizes it into described viewing area.
7. camera head as claimed in claim 5, wherein,
When described focusing condition detection device detects that the focusing adjustment has been achieved, described display control device switches described focusing information from the information of the unfocused situation to the information of the focused situation.
8. camera head as claimed in claim 2, wherein,
Described face detector part detects face and the expression of object, and described display control device is synthesized to the focusing information of the described expression based on being detected by described face detector part in described viewing area.
9. camera head as claimed in claim 1, wherein,
Described display control device responds the testing result of described focusing condition detection device, changes the size of described viewing area.
10. a camera head, comprising:
Picture pick-up device, the image of picked-up object;
Picture catching device, through described picture pick-up device, catches the picture signal that represents described object continuously;
Display device, the picture signal based on captured, shows through the lens metering image;
Automatically device is adjusted in focusing, and the picture signal based on captured is carried out focusing automatically to the desired region of described object and adjusted;
Focusing condition detection device, is adjusting after device adjustment by described automatic focusing, detects the focusing situation of described object;
Animated image generating device, which generates an animated image having at least one of the following characteristics: variable position, variable size and variable shape, the animated image showing different images at least between when the image is not focused and when the image is focused; And
Display control device, the focusing situation that response is detected by described focusing condition detection device, is synthesized to described animated image in described through the lens metering image.
11. The camera head as claimed in claim 10, further comprising:
a face detection device which detects the face of the object from the captured image signal,
wherein, when the face is detected by the face detection device, the automatic focusing adjustment device performs automatic focusing adjustment on the detected face.
12. The camera head as claimed in claim 10, wherein
the display control device changes at least one of the hue, brightness, and saturation of the animated image in response to the focusing condition detected by the focusing condition detection device.
13. The camera head as claimed in claim 10, wherein
the animated image generating device generates an animated image having a plurality of concentrically displayed frames, the plurality of frames having different sizes and rotating until the focusing condition detection device detects the focused condition, and, when the focusing condition detection device detects the focused condition, the plurality of frames become the same size as one another and stop.
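To illustrate the behavior in claim 13, the sketch below computes sizes and rotation angles for a set of concentric frames at a given animation tick: while unfocused the frames have different sizes and keep rotating, and once the focused condition is detected they collapse to a common size and stop. The specific radii and angular speeds are arbitrary example values, not anything prescribed by the claim.

```python
# Illustrative sketch: sizes and rotation angles of concentric frames for one
# animation tick, converging to equal, motionless frames once focus is detected.

import math

def concentric_frames(tick: int, focused: bool, base_radius: float = 40.0,
                      n_frames: int = 3):
    """Return (radius, angle_deg) per frame for the current animation tick."""
    frames = []
    for i in range(n_frames):
        if focused:
            radius = base_radius  # all frames the same size...
            angle = 0.0           # ...and rotation stopped (claim 13)
        else:
            radius = base_radius * (1.0 + 0.3 * i)            # different sizes
            angle = math.degrees(tick * 0.2 * (i + 1)) % 360  # still rotating
        frames.append((round(radius, 1), round(angle, 1)))
    return frames

if __name__ == "__main__":
    print(concentric_frames(tick=10, focused=False))
    print(concentric_frames(tick=10, focused=True))
```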
14. The camera head as claimed in claim 10, wherein
the animated image generating device generates an animated image having a plurality of frames, the plurality of frames rotating in directions different from one another until the focusing condition detection device detects the focused condition.
15. The camera head as claimed in claim 10, wherein
the animated image generating device generates an animated image having a frame, the frame continuing to rotate at a predetermined angular speed in a predetermined direction until the focusing condition detection device detects the focused condition.
16. The camera head as claimed in claim 10, wherein
the animated image generating device generates an animated image having a frame, the frame rocking near the desired region until the focused condition is detected by the focusing condition detection device.
17. The camera head as claimed in claim 13, wherein,
in response to the focusing condition detected by the focusing condition detection device, the display control device changes the distance between the frames and the region on which focusing adjustment has been performed, and, when the focusing condition detection device detects the focused condition, superimposes the frames on the region on which focusing adjustment has been performed and displays them there.
18. The camera head as claimed in claim 10, wherein,
in response to the focusing condition detected by the focusing condition detection device, the animated image generating device generates an animated image of animal ears whose posture changes, the ears drooping when the image is not focused and standing up when the image is focused, and
the display control device superimposes the animated image on the region on which focusing adjustment is performed and displays it there.
19. The camera head as claimed in claim 18, wherein,
when the automatic focusing adjustment device performs focusing adjustment on the face of the object, the display control device superimposes the animated image on the face of the object and displays it there.
20. The camera head as claimed in claim 10, wherein,
in response to the focusing condition detected by the focusing condition detection device, the animated image generating device generates an animated image showing different parts of an animal, so that only a part of the animal is shown when the desired region of the object is not focused, and the whole animal is shown when the desired region of the object is focused, and
the display control device superimposes the animated image on the region on which automatic focusing adjustment is performed and displays it there.
21. The camera head as claimed in claim 10, wherein,
in response to the focusing condition detected by the focusing condition detection device, the animated image generating device generates an animated image showing different states of a flying animal, so that the animal in flight is shown when the desired region of the object is not focused, and the animal at rest is shown when the desired region of the object is focused, and,
when the focusing condition detection device detects the focused condition, the display control device positions the animated image of the flying animal near the region on which automatic focusing adjustment has been performed.
22. The camera head as claimed in claim 10, wherein,
in response to the focusing condition detected by the focusing condition detection device, the animated image generating device generates an animated image showing different stages of a flower, so that petals are shown when the desired region of the object is not focused, and the flower in full bloom is shown when the desired region of the object is focused, and
the display control device displays the animated image at a position near the region on which automatic focusing adjustment has been performed.
23. The camera head as claimed in claim 10, wherein,
in response to the focusing condition detected by the focusing condition detection device, the animated image generating device generates an animated image of dialogue balloons of different sizes, and
the display control device displays the animated image at a position near the region on which automatic focusing adjustment is performed by the automatic focusing adjustment device.
24. The camera head as claimed in claim 23, wherein
the animated image generating device generates an animated image of a dialogue balloon showing different images at least between when the desired region of the object is focused and when the desired region of the object is not focused.
25. A focusing condition displaying method, comprising:
a step of continuously capturing an image signal of an object;
a step of displaying a through-the-lens image based on the captured image signal;
a step of performing automatic focusing adjustment on a desired region of the object based on the captured image signal;
a step of detecting the focusing adjustment condition; and
a step of synthesizing a display area for displaying focusing information into the through-the-lens image, wherein, in response to the detected focusing condition, focusing information that differs at least between when the desired region of the object is not focused and when the desired region of the object is focused is synthesized into the display area.
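Read as a sequence, the steps of claim 25 form a simple loop: capture, display, focus, detect, then synthesize condition-dependent information. The sketch below walks through that loop with stand-in data; the capture source, the contrast test, and the rendering are all assumptions for illustration, not the claimed method itself.

```python
# Illustrative sketch of the step sequence in claim 25: continuously capture,
# display the through-the-lens image, adjust focus, detect the focusing
# condition, and synthesize a display area with condition-dependent text.
# The capture source and rendering are stand-ins, not a real camera API.

def capture_frames():
    yield from ([0.2, 0.4], [0.6, 0.9])  # toy image signals

def auto_focus(frame) -> bool:
    return max(frame) > 0.5  # toy "contrast is high enough" test

def synthesize(frame, info: str) -> str:
    return f"through-the-lens view of {frame} [{info}]"

if __name__ == "__main__":
    for frame in capture_frames():
        focused = auto_focus(frame)                         # AF + detection step
        info = "focused" if focused else "not focused yet"  # differing information
        print(synthesize(frame, info))                      # display step
```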
26. A focusing condition displaying method, comprising:
a step of continuously capturing an image signal of an object;
a step of displaying a through-the-lens image based on the captured image signal;
a step of performing automatic focusing adjustment on a desired region of the object based on the captured image signal;
a step of detecting the focusing adjustment condition; and
a step of synthesizing an animated image showing the focusing condition into the through-the-lens image in response to the detected focusing adjustment condition.
27. The focusing condition displaying method as claimed in claim 26, wherein
the step of performing automatic focusing adjustment further comprises:
a step of detecting the face of the object from the captured image signal; and
a step of performing automatic focusing adjustment on the detected face.
28. The focusing condition displaying method as claimed in claim 26, wherein
the step of synthesizing the animated image into the through-the-lens image further comprises:
a step of generating an animated image having at least one of the following characteristics: a variable position, a variable size, and a variable shape, the animated image differing at least between when the desired region of the object is not focused and when the desired region of the object is focused; and
a step of synthesizing the generated animated image into the through-the-lens image.
29. The focusing condition displaying method as claimed in claim 28, wherein
the step of synthesizing the animated image into the through-the-lens image changes at least one of the hue, brightness, and saturation of the animated image in response to the detected focusing adjustment condition before synthesizing the animated image into the through-the-lens image.
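Claims 12 and 29 both involve changing the hue, brightness, or saturation of the animated image according to the detected focusing condition before it is synthesized into the through-the-lens image. A minimal sketch of such an adjustment, assuming an arbitrary choice of colours (muted red while unfocused, bright green once focused), is given below.

```python
# Illustrative sketch for claims 12 and 29: adjust the animated image's hue,
# brightness, and saturation according to the detected focusing condition
# before it is synthesized into the through-the-lens image. The specific
# colour choices are illustrative assumptions.

import colorsys

def tint_for_condition(focused: bool,
                       base_rgb=(1.0, 0.0, 0.0)) -> tuple:
    """Return an RGB colour: shift toward green and brighten when focused."""
    h, s, v = colorsys.rgb_to_hsv(*base_rgb)
    if focused:
        h = 1.0 / 3.0  # hue shifted to green
        s = 1.0        # fully saturated
        v = 1.0        # full brightness
    else:
        s *= 0.5       # washed out while still hunting for focus
        v *= 0.7
    return tuple(round(c, 2) for c in colorsys.hsv_to_rgb(h, s, v))

if __name__ == "__main__":
    print(tint_for_condition(False))  # muted red while unfocused
    print(tint_for_condition(True))   # bright green once focused
```

The green-when-focused choice simply mirrors a common camera UI convention; the claims themselves do not prescribe particular colours.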
CN2008102149316A 2007-08-31 2008-08-29 Image pickup apparatus and focusing condition displaying method Expired - Fee Related CN101382721B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2007-227110 2007-08-31
JP2007227110A JP2009058834A (en) 2007-08-31 2007-08-31 Imaging apparatus
JP2007227110 2007-08-31
JP2007-240012 2007-09-14
JP2007240012 2007-09-14
JP2007240012A JP4852504B2 (en) 2007-09-14 2007-09-14 Imaging apparatus and focus state display method

Publications (2)

Publication Number Publication Date
CN101382721A true CN101382721A (en) 2009-03-11
CN101382721B CN101382721B (en) 2012-09-19

Family

ID=40462628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102149316A Expired - Fee Related CN101382721B (en) 2007-08-31 2008-08-29 Image pickup apparatus and focusing condition displaying method

Country Status (2)

Country Link
JP (1) JP2009058834A (en)
CN (1) CN101382721B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5385032B2 (en) * 2009-07-08 2014-01-08 ソニーモバイルコミュニケーションズ株式会社 Imaging apparatus and imaging control method
TWI407235B (en) * 2009-07-22 2013-09-01 Hon Hai Prec Ind Co Ltd Auto focus system and method same
JP4930564B2 (en) 2009-09-24 2012-05-16 カシオ計算機株式会社 Image display apparatus and method, and program
JP6148431B2 (en) 2010-12-28 2017-06-14 キヤノン株式会社 Imaging apparatus and control method thereof
TWI585507B (en) * 2015-12-02 2017-06-01 鴻海精密工業股份有限公司 Adjusting method and device of focusing curve of camera lens
JP6659148B2 (en) * 2016-02-03 2020-03-04 キヤノン株式会社 Display control device, control method therefor, program, and storage medium
JP7264051B2 (en) * 2017-06-13 2023-04-25 ソニーグループ株式会社 Image processing device and image processing method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4364465B2 (en) * 2001-09-18 2009-11-18 株式会社リコー Imaging device
JP4344299B2 (en) * 2004-09-16 2009-10-14 富士通マイクロエレクトロニクス株式会社 Imaging apparatus and autofocus focusing time notification method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104954680A (en) * 2015-06-16 2015-09-30 深圳市金立通信设备有限公司 Camera focusing method and terminal
CN104967778A (en) * 2015-06-16 2015-10-07 广东欧珀移动通信有限公司 Focusing reminding method and terminal
CN104967778B (en) * 2015-06-16 2018-03-02 广东欧珀移动通信有限公司 One kind focusing reminding method and terminal
CN108322652A (en) * 2015-06-16 2018-07-24 广东欧珀移动通信有限公司 A kind of focusing reminding method and terminal

Also Published As

Publication number Publication date
JP2009058834A (en) 2009-03-19
CN101382721B (en) 2012-09-19

Similar Documents

Publication Publication Date Title
CN101382721B (en) Image pickup apparatus and focusing condition displaying method
US8106998B2 (en) Image pickup apparatus and focusing condition displaying method
CN101415074B (en) Imaging device and imaging control method
CN102783136B (en) For taking the imaging device of self-portrait images
CN103327248B (en) Photographing unit
CN104683723B (en) Display device, camera and display methods
CN101076997B (en) Image processing and image processing method used therein
JP2008028955A (en) Method and apparatus for automatic reproduction
JP2005269562A (en) Photographing apparatus
CN106797434A (en) Camera head, image capture method, processing routine
CN101115139A (en) Photographing apparatus and exposure control method
CN103248815A (en) Image pickup apparatus and image pickup method
King Digital Photography for Dummies
JP2009267831A (en) Image recording device and method
JP5635450B2 (en) Imaging apparatus, finder and display method thereof
JP4717840B2 (en) Imaging apparatus and control method thereof
JP2010245613A (en) Camera, display image control method and display image control program
CN104980654A (en) Image pickup apparatus including a plurality of image pickup units and method of controlling the same
JP2007259487A (en) Stereoscopic image recording apparatus
JP2007183512A (en) Photographing apparatus and illuminating device
JP4852504B2 (en) Imaging apparatus and focus state display method
JP4885084B2 (en) Imaging apparatus, imaging method, and imaging program
US11558593B2 (en) Scene-based automatic white balance
JP5658580B2 (en) Imaging apparatus, control method therefor, program, and recording medium
JP2005039401A (en) Camera and photographing method of stereoscopic image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120919

Termination date: 20210829

CF01 Termination of patent right due to non-payment of annual fee