JP6159055B2 - Automatic focusing device, imaging device, and automatic focusing method - Google Patents


Publication number
JP6159055B2
Authority
JP
Japan
Prior art keywords
subject
focus
size
area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2012000850A
Other languages
Japanese (ja)
Other versions
JP2013140288A (en)
Inventor
英育 本宮
Original Assignee
キヤノン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by キヤノン株式会社
Priority to JP2012000850A
Publication of JP2013140288A
Application granted
Publication of JP6159055B2
Legal status: Active
Anticipated expiration


Description

The present invention relates to an automatic focusing apparatus and method for an imaging apparatus that, when the imaging target includes a predetermined subject such as a person, extracts a characteristic part of the subject (such as a face) and performs focus adjustment taking the extraction result into account.
In autofocus (AF) control, which is automatic focus adjustment for video cameras and the like, the following TV-AF system is the mainstream: an AF evaluation value signal indicating the sharpness (contrast state) of the video signal, generated by photoelectrically converting light from the subject with an image sensor, is produced, and the focus lens position that maximizes this AF evaluation value signal is searched for. However, when a person is photographed with an imaging apparatus having a face detection function, it is difficult to obtain a sufficient AF evaluation value because a human face generally has low contrast. Consequently, when the distance to the subject changes, it is difficult to identify the in-focus direction from the change in the AF evaluation value, and the subject image becomes blurred because the focus lens position cannot follow the subject.
To solve this problem, methods have been proposed in which the detected face sizes are compared over time: if the face size increases or decreases, it is determined that the subject has moved toward the near side or the infinity side, and the focus lens is driven so as to follow the subject (see Patent Document 1 and Patent Document 2).
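The idea behind these prior-art methods can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name and the fractional-change threshold are hypothetical, introduced only to show the direction inference.

```python
def movement_direction(prev_face_size, curr_face_size, threshold=0.1):
    """Infer subject movement along the optical axis from face-size change.

    A growing face suggests the subject moved nearer; a shrinking face
    suggests it moved toward infinity. `threshold` (fractional change)
    is a hypothetical value used to suppress detection jitter.
    """
    if prev_face_size <= 0:
        return "unknown"
    change = (curr_face_size - prev_face_size) / prev_face_size
    if change > threshold:
        return "near"   # subject approached the camera
    if change < -threshold:
        return "far"    # subject moved away
    return "none"       # no significant distance change
```

A real implementation would smooth the detected sizes before comparing them, since raw face-detection output fluctuates frame to frame.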
JP 2009-31760 A
JP 2008-276214 A
However, because the autofocus using face detection described above takes no account of shooting parameters such as the depth of field or of the state of the subject, the following problems can occur. Depending on the variation in the detected face size and on the parameter settings at the time of shooting, the direction of the subject's distance change may be misjudged, or an unnecessary movement determination may be made even though the subject is not blurred, reducing AF accuracy. Furthermore, when shooting at a wide angle, the focus lens may mistakenly follow a subject other than the intended one when that subject crosses the screen.
The present invention has been made in view of the above points. Its object is to provide a focus adjustment apparatus and method that, when performing AF on a human subject by the TV-AF method in moving image shooting and the like, determine the determination area in consideration of the movement of the subject when a change in subject distance occurs, thereby improving the stability and accuracy of AF.
The focusing apparatus of the present invention comprises: detection means for detecting a predetermined subject from the output of an image sensor that photoelectrically converts light from the subject, having passed through an imaging optical system including a focus lens, into an electrical signal; generation means for generating a focus evaluation value from a focus detection signal among the outputs of the image sensor corresponding to a focus detection region including the subject; determination means for determining, from the output of the detection means, a change in the distance of the subject in the optical axis direction; determination area determining means for determining, according to the focal length of the imaging optical system, the size of a determination area in the imaging screen within which a change in the distance of the subject in the optical axis direction is determined; focusing means for performing focus adjustment using the focus evaluation value; and control means for moving the focus lens in accordance with the direction of the distance change in the optical axis direction of the subject when it is determined that a distance change in the optical axis direction has occurred for a subject moving within the determination area. The determination area determining means determines the size of the determination area such that the size when the number of subjects detected by the detection means is large is smaller than the size when the number of detected subjects is small. Alternatively, the determination area determining means determines the size of the determination area such that the size when the movement amount of the subject detected by the detection means is large is smaller than the size when the movement amount of the subject is small.
The focus adjustment method of the present invention comprises: a detection step of detecting a predetermined subject from the output of an image sensor that photoelectrically converts light from the subject, having passed through an imaging optical system including a focus lens, into an electrical signal; a generation step of generating a focus evaluation value from a focus detection signal among the outputs of the image sensor corresponding to a focus detection region including the subject; a determination step of determining, from the output of the detection step, a change in the distance of the subject in the optical axis direction; a determination area determination step of determining, according to the focal length of the imaging optical system, the size of a determination area in the imaging screen within which a change in the distance of the subject in the optical axis direction is determined; a focusing step of performing focus adjustment using the focus evaluation value; and a control step of moving the focus lens in accordance with the direction of the distance change in the optical axis direction of the subject when it is determined that a distance change in the optical axis direction has occurred for a subject moving within the determination area. In the determination area determination step, the size of the determination area is determined such that the size when the number of subjects detected in the detection step is large is smaller than the size when the number of detected subjects is small. Alternatively, in the determination area determination step, the size of the determination area is determined such that the size when the movement amount of the subject detected in the detection step is large is smaller than the size when the movement amount of the subject is small.
As described above, according to the present invention, misjudgment can be reduced because the region used for subject movement determination is made variable in accordance with the subject conditions, the shooting parameters, and the like. The stability and accuracy of autofocus can thereby be improved.
FIG. 1 is a block diagram illustrating the configuration of a video camera that is an embodiment of the present invention. FIG. 2 is a flowchart showing the processing of the camera/AF microcomputer of an embodiment of the present invention. FIG. 3 is a flowchart showing the processing of the minute driving operation. FIG. 4 is a diagram showing the minute driving operation. FIG. 5 is a flowchart showing the processing of the hill-climbing operation. FIG. 6 is a diagram showing the hill-climbing operation. FIG. 7 is a flowchart showing the processing of the movement determination. FIG. 8 is a flowchart showing the processing of the stability determination. FIG. 9 is a flowchart showing the processing of the movement determination threshold setting. FIG. 10 is a diagram showing the threshold according to the zoom lens position. FIG. 11 is a diagram showing the correction coefficient K according to the reference face size. FIG. 12 is a diagram showing the correction coefficient K''' according to the shooting mode. FIG. 13 is a diagram showing the method of calculating the reliability representing a profile (side face). FIG. 14 is a flowchart showing the processing of the shake detection. FIG. 15 is a diagram showing an example of shake detected in the shake detection processing. FIG. 16 is a diagram showing the shake detection. FIG. 17 is a flowchart showing the end determination processing.
FIG. 18 is a flowchart illustrating the subject determination region determination processing. FIG. 19 is a graph showing the ratio of the subject determination region to the screen. FIG. 20 is a diagram showing the ratio of the subject determination region to the screen.
The feature of the present invention is that, when automatic focusing is performed, a change in the size of a predetermined subject (such as a person's face) is monitored to determine whether there is a change in the distance of the subject in the optical axis direction, and the area in which the movement of the subject is determined can be changed according to the shooting parameters. When there is no distance change, focus adjustment is performed while restricting the follow-up driving operation that causes the focus lens to follow the change in subject distance; when there is a distance change, the follow-up driving operation is permitted and focus adjustment is performed. Based on this concept, each of the focus adjustment apparatuses and methods of the present invention has the basic configuration described in the section on means for solving the problems.
Embodiments of the present invention will be described below. FIG. 1 shows the configuration of a video camera (imaging apparatus) including an automatic focus adjustment apparatus according to an embodiment of the present invention. Although a video camera is described in the following embodiments, the present invention can also be applied to other imaging apparatuses such as digital still cameras.
In FIG. 1, reference numeral 101 denotes a first fixed lens; 102, a variable magnification lens that moves in the optical axis direction to perform zooming; and 103, a diaphragm. Reference numeral 104 denotes a second fixed lens, and 105 denotes a focus compensator lens (also referred to as a focus lens in this specification) that combines a function of correcting movement of the focal plane due to zooming with a focusing function. The first fixed lens 101, the variable magnification lens 102, the diaphragm 103, the second fixed lens 104, and the focus lens 105 constitute an imaging optical system that forms an image of light from the subject. Reference numeral 106 denotes an image sensor serving as a photoelectric conversion element, constituted by a CCD sensor or a CMOS sensor. The image sensor 106 photoelectrically converts the imaged light into an electrical signal. Reference numeral 107 denotes a CDS/AGC circuit that samples the output of the image sensor 106 and adjusts its gain. A camera signal processing circuit 108 performs various kinds of image processing on the output signal of the CDS/AGC circuit 107 to generate a video signal. Reference numeral 109 denotes a monitor, constituted by an LCD or the like, which displays the video signal from the camera signal processing circuit 108. A recording unit 115 records the video signal from the camera signal processing circuit 108 on a recording medium such as a magnetic tape, an optical disk, or a semiconductor memory.
Reference numeral 110 denotes a zoom drive source for moving the variable magnification lens 102, and 111 denotes a focusing drive source for moving the focus lens 105. The zoom drive source 110 and the focusing drive source 111 are each constituted by an actuator such as a stepping motor, a DC motor, a vibration type motor, or a voice coil motor. Reference numeral 112 denotes an AF gate that passes only the signals of the region used for focus detection (the focus detection region) among the output signals of all pixels from the CDS/AGC circuit 107. The AF signal processing circuit 113 extracts a high-frequency component from the signal that has passed through the AF gate 112 and generates an AF evaluation value; that is, it constitutes a generation unit that generates an AF evaluation value from the focus detection region set on the electrical video signal. The AF evaluation value is output to the camera/AF microcomputer 114, which is the control means. The AF evaluation value represents the sharpness (contrast state) of the image generated from the output signal of the image sensor 106; since the sharpness changes with the focus state (degree of focusing) of the imaging optical system, it serves as a signal representing that focus state. The camera/AF microcomputer 114 controls the operation of the entire video camera, and also controls the focusing drive source 111 based on the AF evaluation value to drive the focus lens 105 and perform focus adjustment.
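The relationship between sharpness and the AF evaluation value can be sketched with a toy contrast metric. This is only an illustration of the principle, assuming grayscale pixel rows; the actual AF signal processing circuit 113 extracts a high-frequency component with filter hardware, not a simple neighbor difference.

```python
def af_evaluation_value(pixels):
    """Crude AF evaluation value: the sum of absolute horizontal
    differences over the focus detection region. A sharp (in-focus)
    image has strong high-frequency content, hence a larger value;
    a blurred image of the same scene has a smaller one."""
    total = 0
    for row in pixels:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
    return total
```

For example, a hard black/white edge scores far higher than the same edge after blurring, which is exactly why the focus lens position that maximizes this value is the in-focus position.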
The face detection unit 116 according to the present embodiment performs known face detection processing on the image signal to detect a human face area within the shooting screen; that is, it constitutes subject detection means for detecting a predetermined subject (here, a face) from the electrical signal. The detection result is transmitted to the camera/AF microcomputer 114, which, based on the result, sends information to the AF gate 112 so as to set the focus detection area at a position including the face area in the shooting screen. Known face detection methods include extracting a skin-color region from the gradation colors of the pixels represented by the image data and detecting the face from its degree of matching with a face contour template prepared in advance, and detecting the face by extracting facial feature points such as the eyes, nose, and mouth using a known pattern recognition technique.
Further, the face detection unit 116 calculates the reliability of the face and the reliability of the profile (side face). The face reliability is calculated based on, for example, the degree of matching with the face contour template, and is expressed in five levels in descending order of matching degree. The reliability indicating the likelihood of a profile is calculated as shown in FIG. 13: the face detection frame is divided into left and right regions with reference to the center position between both eyes of the detected face (the dotted line in the figure), the skin-color region contained in each of the left and right regions is extracted, and its area is calculated by counting the number of skin-color pixels. The reliability representing the likelihood of a profile is then calculated from the ratio of the skin-color areas contained in the left and right regions. If the left and right skin-color areas are about equal, the probability of a profile is assumed to be low and the reliability is set low; if the ratio differs significantly, the probability of a profile is assumed to be high and the reliability is set high. In this embodiment, the reliability is set to one of five levels according to this ratio.
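This area-ratio test can be sketched as follows. The patent only states that the ratio is mapped to five levels; the specific bucket boundaries below are hypothetical values chosen for illustration.

```python
def profile_reliability(left_skin_pixels, right_skin_pixels):
    """Reliability (1..5) that a detected face is a profile (side face),
    from the skin-color pixel counts left and right of the eye-center
    line. Nearly equal areas -> frontal face likely -> low reliability;
    strongly unequal -> profile likely -> high reliability. The bucket
    thresholds are hypothetical; the source only specifies 5 levels."""
    small, large = sorted((left_skin_pixels, right_skin_pixels))
    if large == 0:
        return 1
    ratio = small / large  # 1.0 means perfectly balanced (frontal)
    if ratio > 0.9:
        return 1
    if ratio > 0.7:
        return 2
    if ratio > 0.5:
        return 3
    if ratio > 0.3:
        return 4
    return 5
```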
Reference numeral 117 denotes an aperture drive source, which includes an actuator for driving the diaphragm 103 and its driver. To obtain the luminance value of the photometric frame in the screen, a photometric value is obtained by the luminance information detection/calculation circuit 118 from the signal read by the CDS/AGC circuit 107, and the photometric value is normalized by calculation. The camera/AF microcomputer 114 then calculates the difference between the photometric value and a target value set so that proper exposure is obtained. Thereafter, the correction drive amount of the diaphragm is calculated from this difference, and the driving of the aperture drive source 117 is controlled by the camera/AF microcomputer 114.
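The correction drive amount computed from that difference can be sketched as a simple proportional step; this is an assumption for illustration, since the patent does not specify the control law, and the loop gain is hypothetical.

```python
def aperture_correction(photometric_value, target_value, gain=0.5):
    """Proportional diaphragm correction from the difference between the
    normalized photometric value and the target for proper exposure.
    A positive result means close the aperture (scene too bright); a
    negative result means open it. `gain` is a hypothetical loop gain."""
    return gain * (photometric_value - target_value)
```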
Next, the AF control performed by the camera/AF microcomputer 114 will be described with reference to FIG. 2. This AF control is executed according to a computer program stored in the camera/AF microcomputer 114. Step 201 indicates the start of processing. In Step 202, a minute driving operation is performed to determine whether the lens is in focus and, if not, in which direction the in-focus position lies; the detailed operation will be described later. In Step 203, if it was determined in Step 202 that the lens is in focus, the process proceeds to Step 209 to perform the in-focus processing; otherwise, the process proceeds to Step 204. In Step 204, if the direction could be determined in Step 202, the process proceeds to Step 205 to perform the hill-climbing driving operation; if the direction could not be determined, the process returns to Step 202 and the minute driving operation continues. In Step 205, hill-climbing driving is performed to drive the lens at high speed in the direction in which the AF evaluation value increases; the detailed operation will be described later.
In Step 206, if it is determined that the peak of the AF evaluation value was exceeded during the hill-climbing driving of Step 205, the process proceeds to Step 207; otherwise, the process returns to Step 205 and hill-climbing driving continues. In Step 207, the lens is returned toward the lens position at which the AF evaluation value peaked during hill-climbing driving. In Step 208, if the lens has returned to the peak position in Step 207, the process returns to Step 202 and the minute driving operation is performed again; if it has not yet returned to the peak position, the process returns to Step 207 and the return operation continues.
Next, the in-focus operation from Step 209 will be described. In Step 209, the AF evaluation value is held. In Step 210, the latest AF evaluation value is acquired. In Step 211, the AF evaluation value held in Step 209 is compared with the AF evaluation value newly acquired in Step 210; if there is a difference of a predetermined level or more, it is determined that the operation should be restarted, and the process proceeds to Step 202 to resume the minute driving operation. If a restart is not determined in Step 211, the process proceeds to Step 212, where the lens is kept stopped, and then returns to Step 210 to continue the restart determination.
Next, the minute driving operation will be described with reference to FIG. 3. Step 301 indicates the start of processing. In Step 302, the latest face detection position and size are acquired, an AF frame serving as the focus detection area is set based on that information, and an AF evaluation value is acquired. In Step 303, processing for determining movement of the subject from the acquired face size is performed; this processing will be described with reference to FIG. 7. In Step 304, it is determined whether the subject has moved toward the near side or the infinity side; if the subject has moved, the process proceeds to Step 305, and otherwise to Step 306. In the present embodiment, the subject size whose change is monitored is the size of a person's face (it may be another part of the body) obtained by the face detection unit 116 serving as the subject detection means. Therefore, as will be described later, when the amount of change in face size is within a predetermined amount, it is determined that there is no change in subject distance, and when the amount of change exceeds the predetermined amount, it is determined that there is a change in subject distance. When the camera/AF microcomputer 114 determines that there is no change in subject distance, it performs focus adjustment while prohibiting the tracking control by the tracking control means; when it determines that there is a change, it performs focus adjustment while permitting the tracking control. This switching is performed in Step 304 above.
In Step 305, it is determined whether the result of the movement determination is approaching or moving away. If the subject is approaching, the process proceeds to Step 314, hill climbing in the near direction is decided, and the process proceeds to Step 313 and ends. Otherwise, the process proceeds to Step 315, hill climbing in the infinity direction is decided, and the process proceeds to Step 313 and ends. Driving that switches the hill-climbing direction in accordance with the moving direction of the subject in this way is the subject tracking drive, or tracking control; performing the tracking drive improves AF responsiveness and accuracy. To execute this function, the camera/AF microcomputer 114 serving as the control means includes tracking control means that determines the presence or absence of a subject distance change based on the detection result of the face detection unit 116 serving as the subject detection means and causes the focus lens to follow the change in subject distance according to the determination result.
Subsequently, in Step 306, if the AF evaluation value captured in Step 302 is larger than the previous AF evaluation value, the process proceeds to Step 307; if it is smaller, the process proceeds to Step 308. In Step 307, the focus lens is driven by a predetermined amount in the same direction as the previous time; in Step 308, it is driven by a predetermined amount in the direction opposite to the previous time. FIG. 4 shows the time course of the lens operation. Here, the AF evaluation value A for the electric charge accumulated in the CCD (image sensor) during period A is captured at time TA, and the AF evaluation value B for the charge accumulated during period B is captured at time TB. At TB, the AF evaluation values A and B are compared: if A < B, the lens continues in the forward direction, whereas if A > B, it moves in the reverse direction. In Step 309, if the direction determined to be the in-focus direction has been the same a predetermined number of consecutive times, the process proceeds to Step 310; otherwise, it proceeds to Step 311. In Step 311, if the focus lens has repeated reciprocation within a predetermined range a predetermined number of times, the process proceeds to Step 312; if the focus lens has not stayed within the predetermined range for the predetermined time, the process proceeds to Step 313 and ends, whereupon operation returns to the minute driving operation of Step 202 in FIG. 2. In Step 310, since the direction could be determined, the process proceeds to Step 313, where the processing ends, and then proceeds to the hill-climbing driving of Step 205 in FIG. 2. In Step 312, since the in-focus determination has been made, the processing ends and the process proceeds to the restart determination routine from Step 210 onward in FIG. 2.
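The core direction decision of one minute-driving cycle (compare evaluation value B against A, keep or reverse direction) can be sketched as follows; the function name and the +1/-1 direction encoding are ours, not the patent's.

```python
def micro_drive_step(prev_value, curr_value, prev_direction):
    """One decision of the minute driving operation: compare the current
    AF evaluation value B with the previous value A. If B > A, the
    previous drive direction was correct, so keep it; otherwise reverse.
    Directions are encoded as +1 (e.g. near side) and -1 (infinity side)."""
    if curr_value > prev_value:
        return prev_direction    # A < B: keep moving the same way
    return -prev_direction       # A > B: reverse the drive direction
```

Repeating this step and counting how often the chosen direction stays the same (Step 309) or how often the lens reciprocates in place (Step 311) yields the direction determination and the in-focus determination, respectively.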
Next, the hill-climbing driving operation will be described with reference to FIG. 5. Step 501 indicates the start of processing. In Step 502, the latest face detection position and size are acquired, an AF frame (focus detection area) is set based on that information, and an AF evaluation value is acquired. In Step 503, if the AF evaluation value captured in Step 502 is larger than the previous AF evaluation value, the process proceeds to Step 504; if it is smaller, the process proceeds to Step 505. In Step 504, the focus lens is driven at a predetermined speed in the same direction as the previous time, and the process proceeds to Step 508 and ends. In Step 505, if the AF evaluation value has decreased past its peak, the process proceeds to Step 506, where it is determined that the peak has been exceeded; the process then proceeds to Step 508, the processing ends, and operation proceeds through Step 206 in FIG. 2 toward Step 202. If the AF evaluation value has decreased without passing a peak in Step 505, the process proceeds to Step 507, the focus lens is driven at a predetermined speed in the direction opposite to the previous time, and the process proceeds to Step 508.
FIG. 6 illustrates the lens operation. In case A, the AF evaluation value has decreased past the peak, so the hill-climbing driving operation ends on the assumption that an in-focus point exists, and operation shifts to the minute driving operation. In case B, the AF evaluation value decreases without passing a peak, so the direction is judged to be wrong, the drive direction is reversed, and the hill-climbing driving operation continues. The movement amount per unit time, that is, the driving speed, is larger than that of the minute driving. When the detected face size increases continuously, AF responsiveness can be improved by setting a large driving speed for the hill-climbing driving.
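The two cases of FIG. 6 — stop at the peak (case A) or reverse on a wrong initial direction (case B) — can be condensed into one sketch. This is an idealized noiseless model, assuming `evaluate(pos)` stands in for reading the AF evaluation value at a lens position; real control would use the minute driving operation to settle precisely.

```python
def hill_climb(evaluate, start, step=1, max_steps=50):
    """Hill-climbing drive: move the lens in one direction while the AF
    evaluation value keeps increasing; once the value falls past the
    peak, return the lens to the peak position (case A). If the value
    falls on the very first move, the initial direction was wrong, so
    reverse it (case B)."""
    pos = start
    direction = +1
    best_pos, best_val = pos, evaluate(pos)
    for _ in range(max_steps):
        nxt = pos + direction * step
        val = evaluate(nxt)
        if val < best_val:
            if pos == start:           # case B: wrong direction, reverse
                direction = -direction
                continue
            return best_pos            # case A: peak exceeded, return to it
        pos, best_pos, best_val = nxt, nxt, val
    return best_pos
```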
As described above, the camera/AF microcomputer 114 moves the focus lens so as to increase the AF evaluation value while cycling through restart determination → minute driving → hill-climbing driving → minute driving → restart determination.
Next, the processing for determining whether the subject has moved toward the near side or the infinity side will be described with reference to FIGS. 7 to 9. First, the movement determination processing of Step 303 in FIG. 3 will be described with reference to FIG. 7. Step 701 indicates the start of the movement determination processing. In Step 702, processing for calculating the average of the detected face sizes is performed. The detected face sizes are stored in a memory in the camera/AF microcomputer, and the average of the 10 stored values is calculated by a moving average method. The averaged face size is stored in FaceAveSize. FaceAveSize has 20 array elements, and a history is stored in FaceAveSize[0] to [19] each time an average value is calculated, with the latest average value stored in FaceAveSize[0]. In Step 721, it is determined whether the face detection position acquired in Step 302 is shaking. Details of this processing will be described with reference to FIGS. 14 to 16.
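The averaging of Step 702 can be sketched in Python. This is a hedged illustration: the class and attribute names are ours; the source only specifies a 10-sample moving average and a 20-entry FaceAveSize history with the newest value at index 0.

```python
from collections import deque

class FaceSizeHistory:
    """Smoothed face-size tracking as used by the movement determination:
    a moving average over the last 10 detected sizes, with the latest 20
    averaged values kept as history (face_ave_size[0] is the newest,
    mirroring FaceAveSize[0])."""

    def __init__(self):
        self.raw = deque(maxlen=10)            # last 10 detected face sizes
        self.face_ave_size = deque(maxlen=20)  # history of averaged sizes

    def add(self, size):
        """Record a new detected face size and update the average history."""
        self.raw.append(size)
        avg = sum(self.raw) / len(self.raw)
        self.face_ave_size.appendleft(avg)     # newest average at index 0
        return avg
```

Smoothing first, then keeping a history of the smoothed values, lets the later stability determination (Step 707) ask whether the averaged size itself has settled before any movement decision is made.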
In Step 703, it is determined whether the movement determination flag is set. If the movement determination flag, determined in processing described later, is set, the process proceeds to Step 704; otherwise, the process proceeds to Step 703' and the movement determination area setting processing. The processing of Step 703' will be described later with reference to FIG. 18. When the setting of the in-screen area in which the movement determination is performed has been completed by the movement determination area processing, the process proceeds to Step 703''. In Step 703'', it is determined whether the subject position FacePos lies inside the movement determination area JudgeFaceArea. If it is within the area, the process proceeds to Step 707; otherwise, the process proceeds to Step 720 and ends.
In Step 707, it is determined whether the average face size FaceAveSize calculated in Step 702 is a stable value; details of this processing will be described later with reference to FIG. 8. In Step 708, the stability flag determined in Step 707 and the shaking flag determined in Step 721 are examined. If both the stability flag and the shaking flag are set, the process proceeds to Step 709; otherwise, the process proceeds to Step 720 and ends. Here, if the stability flag or the shaking flag is CLEAR, it means that the movement cannot be determined because the state of the subject is not stable; in this case, the process proceeds, via Step 304 in FIG. 3, to Step 306 and the subsequent steps. In Step 709, it is determined whether the face reference size is set. The face reference size FaceBaseSize is the face size that serves as the reference when performing the movement determination. When FaceBaseSize is 0, no value has been set, so the process proceeds to Step 719, where FaceAveSize[0] is substituted into FaceBaseSize. In this case as well, the process proceeds, via Step 304 in FIG. 3, to Step 306 and the subsequent steps.
If the face reference size FaceBaseSize is not 0, the process proceeds to Step 710, where the movement determination threshold setting processing is performed. This processing sets the face size change threshold used for the movement determination according to the shooting parameters and the state of the subject; details will be described later with reference to FIG. 9. Subsequently, in Step 711, the face reference size FaceBaseSize is compared with the current face size FaceAveSize[0]. If the current face size is smaller, the subject may be moving away, so the process proceeds to Step 712; otherwise, the subject may be approaching, so the process proceeds to Step 713. In Step 712, the moving-away determination is performed: if the difference between FaceBaseSize and FaceAveSize[0] is greater than or equal to THfar, it is determined that the subject has moved away, and the process proceeds to Step 717; if THfar is not reached, the process proceeds to Step 714. In Step 717, the moving-away flag of the movement determination flag, which means that the subject has moved away, is set, and the process proceeds to Step 720 and ends. In this case, the process proceeds, via Step 304 in FIG. 3, to Step 305 and the subsequent steps. In Step 713, the approach determination is performed: if the difference between FaceBaseSize and FaceAveSize[0] is greater than or equal to THnear, it is determined that the subject has approached, and the process proceeds to Step 718; if THnear is not reached, the process proceeds to Step 714. In Step 718, the approach flag of the movement determination flag, which means that the subject has approached, is set, and the process proceeds to Step 720 and ends. In this case as well, the process proceeds, via Step 304 in FIG. 3, to Step 305 and the subsequent steps.
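The threshold comparison of Steps 711 to 718 can be sketched as a single function. The threshold values themselves are set elsewhere (the Step 710 processing of FIG. 9); here they are plain parameters, and the string return values are our own encoding of the approach/moving-away flags.

```python
def movement_flag(face_base_size, face_ave_size0, th_near, th_far):
    """Movement determination against the reference face size
    (FaceBaseSize) using thresholds THnear/THfar. Returns 'approach' if
    the face grew by at least th_near, 'away' if it shrank by at least
    th_far, or None if neither threshold is reached (keep watching)."""
    diff = face_ave_size0 - face_base_size
    if diff >= 0:
        # Face is the same size or larger: approach determination
        if diff >= th_near:
            return "approach"
    else:
        # Face is smaller: moving-away determination
        if -diff >= th_far:
            return "away"
    return None   # within thresholds: continue the determination loop
```

Because the thresholds are tuned per shooting parameters (zoom position, reference face size, shooting mode), the same physical movement can legitimately trigger at different size changes in different conditions.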
In Step 714, which is reached when the face reference size is set and the state is stable, the timer FaceJudgeTimer, which indicates how many times the movement determination has been repeated, is incremented, and the process proceeds to Step 715. In Step 715, it is determined whether FaceJudgeTimer is equal to or less than TimerTH. This processing is intended to initialize the determination process when the subject shows no movement even though the movement determination has continued for about 2 seconds. Therefore, TimerTH is set to 120, a value corresponding to 2 seconds, and it is determined whether this value has been exceeded. In this embodiment, a system that performs processing 60 times per second is assumed, so the value corresponding to 2 seconds is 120. If FaceJudgeTimer exceeds TimerTH, the process proceeds to Step 716 to perform initialization processing; otherwise, the process proceeds to Step 720 and ends. In the latter case, the process returns to Step 702 to continue the movement determination. In Step 716, FaceJudgeTimer is initialized to 0, and the process proceeds to Step 705, where FaceBaseSize is initialized to 0; the process then proceeds to Step 706, where the movement determination flag is cleared. After the initialization processing is completed, the process proceeds to Step 720 and ends. In this case, the process proceeds to Step 306 and subsequent steps through Step 304 in FIG. 3.
If the movement determination flag is set in Step 703, the process proceeds to Step 704 in order to determine whether or not the movement of the subject has ended. The processing of Step 704 will be described later with reference to FIG. 17. Next, in Step 704′, it is determined whether Step 704 judged that the movement of the subject has ended or that it is continuing. If the movement has ended, the process moves to Step 705 to perform initialization processing; otherwise, the process proceeds to Step 720 and ends. In the latter case, the process returns to Step 702 to continue the movement determination.
Next, the movement end determination process will be described with reference to FIG. 17. Step 1701 in FIG. 17 indicates the start of processing. Next, in Step 1702, the camera/AF microcomputer 114 calculates the second-order differential value of the average face size. The specific calculation method is as follows. First, the camera/AF microcomputer 114 calculates the first-order differential value as the difference between the average face size calculated in Step 702 of FIG. 7 and the average face size calculated a predetermined time earlier, and records it in the memory in the camera/AF microcomputer 114. Next, the camera/AF microcomputer 114 calculates the second-order differential value as the amount of change between the first-order differential value just recorded and the first-order differential value calculated and recorded a predetermined time earlier. The calculated second-order differential value indicates how much the first-order differential value of the average face size calculated this time has changed with respect to the first-order differential value of the average face size from a predetermined time earlier.
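The first- and second-order differences described above can be sketched as follows, assuming a history of average face sizes stored newest-first (FaceAveSize[0] being the current value); the function name and list layout are illustrative, not from the patent.

```python
def second_derivative(sizes, lag):
    """Second-order difference of the average face size.

    sizes: history of average face sizes, newest first.
    lag: the 'predetermined time' expressed in samples (e.g. 15 frames).
    The first-order value is the difference between the current size and the
    size `lag` samples earlier; the second-order value is the change between
    the current first-order value and the one from `lag` samples earlier.
    """
    d1_now = sizes[0] - sizes[lag]          # current first-order value
    d1_prev = sizes[lag] - sizes[2 * lag]   # first-order value `lag` ago
    return d1_now - d1_prev
```

A positive result means the size change is accelerating; zero means the size is changing at a constant rate.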
Next, in Step 1703, the face sizes are compared to determine whether the currently detected face size is smaller or larger than the previously detected one. To obtain the difference between the current FaceAveSize[0] and the previous history FaceAveSize[1], FaceAveSizeDiff = FaceAveSize[0] - FaceAveSize[1] is calculated, and the process proceeds to Step 1704. Next, in Step 1704, it is determined whether the movement determination flag is currently set due to approach or due to moving away. If the flag is set due to approach, the process proceeds to Step 1705; if it is set due to moving away, the process proceeds to Step 1709.
Next, in Step 1705, when the face size has decreased (FaceAveSizeDiff is a negative value), it is determined that the movement has decelerated or ended, and the process proceeds to Step 1706. On the other hand, if the face size has not decreased (FaceAveSizeDiff is 0 or more), the process proceeds to Step 1707. In Step 1706, the camera/AF microcomputer 114 determines that the movement of the subject has decelerated or ended, and proceeds to Step 1713 to end the processing. In Step 1707, the second-order differential value of the average face size obtained in Step 1702 is compared with the approach deceleration threshold. If the second-order differential value is less than or equal to the approach deceleration threshold, the process proceeds to Step 1706; if it is larger than the approach deceleration threshold, the process proceeds to Step 1708. In Step 1708, the camera/AF microcomputer 114 determines that the movement of the subject is continuing, transitions to Step 1713, and ends the processing.
Next, in Step 1709, when the face size has increased (FaceAveSizeDiff is a positive value), it is determined that the movement has decelerated or ended, and the process proceeds to Step 1710. On the other hand, if the face size has not increased (FaceAveSizeDiff is 0 or less), the process proceeds to Step 1711. In Step 1710, the camera/AF microcomputer 114 determines that the movement of the subject has decelerated or ended, and proceeds to Step 1713 to end the processing. In Step 1711, the second-order differential value of the average face size obtained in Step 1702 is compared with the away deceleration threshold. If the second-order differential value is greater than or equal to the away deceleration threshold, the process proceeds to Step 1710; if it is smaller than the away deceleration threshold, the process proceeds to Step 1712. In Step 1712, the camera/AF microcomputer 114 determines that the movement of the subject is continuing, transitions to Step 1713, and ends the processing.
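Under the embodiment's sign conventions (approach deceleration threshold of 0 or less, away deceleration threshold of 0 or more), the branching of Steps 1703 to 1712 can be condensed into a sketch like the following; all names are illustrative.

```python
def movement_ended(flag, size_diff, d2, near_decel_th, far_decel_th):
    """True when the subject's movement is judged decelerated or ended.

    flag: 'near' if the movement flag was set by approach, 'far' if by
    moving away.  size_diff: FaceAveSizeDiff = FaceAveSize[0] - FaceAveSize[1].
    d2: second-order differential value of the average face size.
    """
    if flag == "near":
        if size_diff < 0:            # Step 1705: face shrank while approaching
            return True              # Step 1706: decelerated or ended
        return d2 <= near_decel_th   # Step 1707: growth is slowing down
    else:
        if size_diff > 0:            # Step 1709: face grew while moving away
            return True              # Step 1710: decelerated or ended
        return d2 >= far_decel_th    # Step 1711: shrinkage is slowing down
```

With thresholds of -10 and +10 (roughly 10% of a 100-pixel face), an approaching subject whose size growth drops sharply is reported as ended even while the face is still growing.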
Here, the approach deceleration threshold is the rate of change at which the camera/AF microcomputer 114 determines that a subject that has been approaching at a certain speed has decelerated or finished moving. The away deceleration threshold is the rate of change at which the camera/AF microcomputer 114 determines that a subject that has been moving away at a certain speed has decelerated or finished moving. In this embodiment, the approach deceleration threshold and the away deceleration threshold are empirically set to 10% of the face size; the approach deceleration threshold is a value of 0 or less, and the away deceleration threshold is a value of 0 or more. The absolute values of the two thresholds may be the same or different. Further, the above-mentioned predetermined time is defined as 15 frames, a value at which the second-order differential value allows deceleration or stopping to be determined by the approach and away deceleration thresholds. However, the approach deceleration threshold, the away deceleration threshold, and the predetermined time are values that can be freely changed by the system.
In the processing of FIG. 17 described above, whether the subject has decelerated or finished moving is determined based on the average face size, but a method that calculates the second-order differential value of the position of the face, the human body, or the like may be used instead. In a system that can detect eyes and mouth, a method that calculates the second-order differential value of the distance between both eyes (or between the eyes and the mouth) may be used. Furthermore, a method may be used in which the ratio of the eye-to-mouth distance to the distance between both eyes is calculated, and the second-order differential value of that ratio is used to determine whether the movement has decelerated or ended. In short, any shape information about the subject that allows the deceleration or end of movement of the predetermined subject to be determined may be used, such as the size of a predetermined part, the distance between a plurality of parts, or relative information such as the ratio between a plurality of sizes and distances.
Subsequently, the subject stability determination process in Step 707 of FIG. 7 will be described with reference to FIG. 8. Step 801 in FIG. 8 indicates the start of processing. Next, in Step 802, the camera/AF microcomputer 114 calculates the second-order differential value of the average face size. In Step 803, the camera/AF microcomputer 114 determines whether the second-order differential value calculated in Step 802 is equal to or less than a threshold. When it is equal to or less than the threshold, the amount of change of the current first-order differential value of the average face size relative to the first-order differential value from a predetermined time earlier is small, so the subject is judged to be stable, and the process proceeds to Step 804. On the other hand, when the second-order differential value is larger than the threshold, that amount of change is large, so the subject is judged not to be stable, and the process proceeds to Step 807.
In this embodiment, the threshold is empirically set to 10% of the average face size, and the predetermined time is defined as 15 frames, a value at which the second-order differential value allows stability to be determined by the threshold. However, in the present embodiment, the threshold and the predetermined time are values that can be freely changed by the system. Next, in Step 804, the camera/AF microcomputer 114 counts up StableTimer and proceeds to Step 805. StableTimer is a variable that counts the period during which the calculated average face size has been continuously stable. Therefore, once StableTimer exceeds a predetermined period (hereinafter referred to as StableTH), the average face size can be judged to be a stable average face size. In this embodiment, StableTH is 15 frames. Since StableTimer counts the period during which the average face size is continuously stable, StableTH needs to be a period equal to or shorter than the predetermined time (15 frames in the present embodiment) used when calculating the second-order differential value. In Step 807, since the second-order differential value of the average face size is larger than the threshold, the camera/AF microcomputer 114 clears StableTimer (to 0 in this embodiment).
Next, in Step 805, the camera/AF microcomputer 114 determines whether or not StableTimer is less than StableTH. If it is less than StableTH, the process proceeds to Step 806; if it is equal to or greater than StableTH, the process proceeds to Step 808. In Step 806, because StableTimer has not yet reached StableTH, the camera/AF microcomputer 114 determines that the average face size calculated in Step 702 has not been continuously stable, turns the stability flag OFF, and ends the processing of FIG. 8. On the other hand, in Step 808, because StableTimer is equal to or greater than StableTH, the camera/AF microcomputer 114 determines that the average face size calculated in Step 702 has been stable for a sufficient continuous period, turns the stability flag ON, and ends the processing.
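Steps 802 to 808 amount to a counter that is incremented while the size change stays small and cleared otherwise. A sketch under the embodiment's example values (threshold of 10% of the average face size, StableTH of 15 frames) follows; treating the threshold comparison as a magnitude test is one interpretation, and the class name is hypothetical.

```python
class StabilityJudge:
    """Stability flag of FIG. 8, driven once per frame."""

    def __init__(self, stable_th=15):
        self.stable_timer = 0       # StableTimer
        self.stable_th = stable_th  # StableTH

    def update(self, d2, avg_face_size):
        """d2: second-order differential value of the average face size."""
        threshold = 0.10 * avg_face_size   # embodiment: 10% of average size
        if abs(d2) <= threshold:           # Step 803 (magnitude interpretation)
            self.stable_timer += 1         # Step 804
        else:
            self.stable_timer = 0          # Step 807: clear the timer
        # Steps 805/806/808: the stability flag turns ON once the size has
        # been continuously stable for StableTH consecutive frames.
        return self.stable_timer >= self.stable_th
```

A single large size change resets the count, so the flag only turns ON after an unbroken run of stable frames.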
In the processing of FIG. 8 described above, whether the subject is stable is determined based on the average face size, but a method that calculates the second-order differential value of the position of the face or the human body may be used instead. In a system that can detect eyes and mouth, a method that calculates the second-order differential value of the distance between both eyes (or between the eyes and the mouth) may be used. Furthermore, a method may be used in which the ratio of the eye-to-mouth distance to the distance between both eyes is calculated, and the second-order differential value of that ratio is used to determine stability. In short, any shape information about the subject that allows the stability of the movement of the predetermined subject to be determined may be used, such as the size of a predetermined part, the distance between a plurality of parts, or relative information such as the ratio between a plurality of sizes and distances.
Next, the movement determination threshold setting process in Step 710 of FIG. 7 will be described with reference to FIG. 9. Step 901 in FIG. 9 indicates the start of processing. Step 902 acquires the lens position of the variable magnification lens (zoom lens) 102. If the driving source of the variable magnification lens is a step motor, this is the number of steps, and it is used to determine where the lens is between the tele side and the wide side. Subsequently, in Step 903, the reference movement determination threshold Th corresponding to the current zoom position is acquired from the graph of the relationship between the zoom lens position and the reference movement determination threshold Th shown in FIG. 10. In FIG. 10, the vertical axis represents the change rate (%) of the face size, and the horizontal axis represents the zoom lens position. The relationship between the zoom lens position and the reference movement determination threshold Th is a graph calculated from the relationship between the front end of the depth of field and the amount of change in face size at each zoom lens position, with the aperture value held constant at Fbase. If the subject moves while the predetermined subject is in focus, blur starts to be recognized on the screen from the point where the depth of field is exceeded, so the reference movement determination threshold is set smaller than the depth of field. Accordingly, it is possible to determine whether the subject has moved, and to drive the focus lens, before the subject starts to blur, thereby improving the follow-up performance for the subject. When the actual reference movement determination threshold Th is calculated, a table of reference movement determination thresholds Th corresponding to each zoom lens position is held in the camera/AF microcomputer 114, and the threshold Th corresponding to the current zoom lens position is determined from it.
The depth of field is calculated using Equation 1 below. With the subject distance s, and the near and far ends of the depth of field Dn and Df, respectively:
Dn = s(H - f) / (H + s - 2f) ... Equation 1a
Df = s(H - f) / (H - s) ... Equation 1b
The hyperfocal distance H is expressed by Equation 2, where f is the focal length of the lens, N is the aperture value of the lens, and c is the diameter of the permissible circle of confusion:
H = f * f / (N * c) ... Equation 2
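Equations 1a, 1b, and 2 can be evaluated directly. The following sketch uses millimetres throughout and adds a guard for the far end, which goes to infinity once the subject distance reaches the hyperfocal distance (the guard is an assumption, not stated in the text).

```python
def depth_of_field(s, f, n, c):
    """Near and far ends (Dn, Df) of the depth of field.

    s: subject distance, f: focal length, n: aperture value (f-number),
    c: permissible circle of confusion diameter -- all in the same unit.
    """
    h = f * f / (n * c)                 # hyperfocal distance, Equation 2
    dn = s * (h - f) / (h + s - 2 * f)  # near end, Equation 1a
    df = s * (h - f) / (h - s) if s < h else float("inf")  # far end, Eq. 1b
    return dn, df
```

For instance, at f = 50 mm, F8, and c = 0.03 mm, a subject at 2 m is sharp from roughly 1.68 m to 2.46 m.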
In Step 904, the current aperture value Fno is acquired. The aperture value is obtained from the drive amount of the aperture 103. In Step 905, the reference movement determination threshold is corrected according to the aperture. Since the depth of field changes with the aperture value, the acquired current aperture value Fno is compared with the aperture value Fbase used when the reference movement determination threshold of FIG. 10 was calculated, and the corrected movement determination threshold Th2 is calculated based on Equation 3.
Th2 = (current aperture value Fno / reference aperture value Fbase) * reference movement determination threshold Th ... Equation 3
As described above, the threshold value or the predetermined amount is determined by the depth of field, and is set smaller than the amount of change in the size of the subject that occurs when the subject moves by the depth of field.
Subsequently, in Step 906, the reference face size FaceBaseSize is acquired. In Step 907, the movement determination threshold is corrected according to the reference face size FaceBaseSize. FIG. 11 is a graph showing the relationship between the reference face size and the correction coefficient K; the vertical axis represents the change rate (%) of the face size, and the horizontal axis represents the reference face size. As the reference face size FaceBaseSize increases, the detected face size varies more, so correction is required. From the acquired reference face size and the correction coefficient K read from the graph, the corrected movement determination threshold Th3 is calculated based on Equation 4.
Th3 = correction coefficient K * movement determination threshold Th2 ... Equation 4
Thus, the subject situation here is the size of the subject detected by the detecting means, and the threshold value or the predetermined amount is changed according to the size of the subject, that is, according to the difference from the reference value of the subject size.
Subsequently, in Step 908, the face reliability is obtained from the signal of the face detection unit 116. As described above, the reliability of the face is evaluated in five levels from 1 to 5, where evaluation 5 has the highest reliability and evaluation 1 the lowest. In Step 909, the movement determination threshold is corrected using the face reliability. When the reliability is low, the detected face size varies greatly, or the detection may even be erroneous, so the threshold must be corrected upward. In this embodiment, when the face reliability is 3 or less, the correction coefficient K′ is set to 1.5, and the corrected movement determination threshold Th4 is calculated based on Equation 5 using the acquired face reliability.
Th4 = correction coefficient K′ * movement determination threshold Th3 ... Equation 5
Thus, the subject situation here is the reliability indicating the likelihood that the detection means has detected a face, and the threshold value or the predetermined amount is changed according to this reliability; when the reliability is low, it is changed to a larger amount.
Next, in Step 910, the reliability of profile (side-face) likelihood is acquired from the signal of the face detection unit 116. As described above, this reliability is evaluated in five levels from 1 to 5, where evaluation 5 indicates the highest possibility of a profile and evaluation 1 the lowest. In Step 911, the movement determination threshold is corrected using the reliability of the profile likelihood. When this reliability is high, the detected face size varies greatly, or the detection may even be erroneous, so the threshold must be corrected upward. In the present embodiment, when the reliability of the profile likelihood is 3 or more, the correction coefficient K″ is set to 1.5, and the corrected movement determination threshold Th5 is calculated based on Equation 6 using the acquired reliability of the profile likelihood.
Th5 = correction coefficient K″ * movement determination threshold Th4 ... Equation 6
In this way, the subject situation here is the reliability indicating whether the face detected by the detection means is facing sideways, and the threshold value or the predetermined amount is changed according to this reliability; when the reliability is high, it is changed to a larger amount.
Subsequently, in Step 912, the shooting mode is acquired from the information in the camera/AF microcomputer 114. A general imaging apparatus has a plurality of shooting modes, as shown in FIG. 12, in order to set shooting parameters optimal for the shooting scene. In Step 913, the movement determination threshold is corrected according to the shooting mode. FIG. 12 is a table showing the relationship between the shooting mode and the correction coefficient K‴. In a shooting mode in which the movement of the subject is assumed to be large, K‴ is set to a value of 1 or less so that the threshold is lowered, thereby increasing the autofocus responsiveness. In a shooting mode in which the movement of the subject is assumed to be small, K‴ is set high so as to raise the threshold for movement determination, giving importance to the stability of autofocus. Based on the acquired shooting mode and the correction coefficient K‴ determined from FIG. 12, the corrected movement determination threshold Th6 is calculated based on Equation 7.
Th6 = correction coefficient K‴ * movement determination threshold Th5 ... Equation 7
Thus, the shooting parameter here is the shooting mode, and the threshold value or the predetermined amount is changed according to the shooting mode: it is changed to a small amount when the shooting mode is for shooting a subject with much movement, and to a large amount when the shooting mode is for shooting a subject with little movement.
Subsequently, in Step 914, the degree of focus is acquired. The degree of focus is calculated by dividing the peak-hold value of the AF evaluation value in the evaluation frame by the difference between the maximum and minimum luminance levels of each line. The degree of focus ranges from 0 to 1; for an in-focus subject, the peak hold of the AF evaluation value and the luminance level difference tend toward the same value, so the degree of focus approaches 1. A subject with a low degree of focus is likely to be blurred, and the reliability of the face size may then be low, so the threshold for movement determination is set high. For example, using the obtained degree of focus, when the degree of focus is 0.5 or less, the correction coefficient K″″ is set to 1.5, and the corrected movement determination threshold Th7 is calculated based on Equation 8.
Th7 = correction coefficient K″″ * movement determination threshold Th6 ... Equation 8
In this way, the imaging parameter here is the degree of focus estimated from the AF evaluation value level and the driving state of the focus lens, and the threshold value or the predetermined amount is changed according to the degree of focus; when the degree of focus is low, it is changed to a larger amount.
Note that these correction coefficients are determined after sufficient measurement according to the camera, and are not limited to these values.
Subsequently, in Step 916, a limit is applied to the movement determination threshold Th7 calculated in Step 915. With the correction method described above, the accumulated corrections can make the movement determination threshold several times the reference movement determination threshold, which may no longer be appropriate as a movement determination threshold; therefore, an upper limit is set. If the movement determination threshold Th7 is equal to or greater than twice the reference movement determination threshold Th, the process proceeds to Step 918; otherwise, it proceeds to Step 917. In this embodiment, the limit is set to double in order to maintain the accuracy of the movement determination, but this value can be set arbitrarily to any value verified by sufficient measurement to reduce erroneous determination. In Step 917, Th7 is substituted into the movement determination threshold THface, and the process proceeds to Step 919. In Step 918, since the movement determination threshold is too large, the limit is applied: twice the reference movement determination threshold Th is substituted into the final movement determination threshold THface, and the process proceeds to Step 919.
In Step 919, the thresholds for the moving-away determination and the approach determination are set based on the movement determination threshold THface. Some face detection methods determine the face size from the interval between the eyes. When a person turns sideways, the apparent distance between the eyes decreases, so the detected face size decreases even though the distance to the subject has not changed, and the subject may be erroneously judged to have moved away. Therefore, by setting the away detection threshold THfar larger than the approach detection threshold THnear, erroneous determination due to size changes during profile shooting is reduced. In the present embodiment, the away detection threshold is made larger by a factor of 1.5, but this value should be set, after sufficient measurement, to a value that reduces erroneous determination. In this way, the subject situation here is the result of the determination, and the threshold value or the predetermined amount is set to different values for moving away and approaching, with a larger amount for moving away than for approaching.
The processing flow for setting the movement determination threshold has been described above; it is the processing that calculates the approach detection threshold THnear and the away detection threshold THfar. The coefficients and formulas used in this processing are examples, and the present invention is not limited to them.
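Under the embodiment's example values, the whole cascade of Equations 3 to 8, the Step 916 limit, and the Step 919 asymmetry can be condensed as follows. The graph of FIG. 11 and the table of FIG. 12 are not reproduced here, so the coefficients K (face size) and K‴ (shooting mode) are taken as inputs; all names are illustrative, not the patented implementation.

```python
def movement_thresholds(th_base, fno, fbase, k_size, face_reliability,
                        profile_reliability, k_mode, focus_degree):
    """Return (THnear, THfar) from the reference threshold Th.

    th_base: reference movement determination threshold Th for the zoom position.
    k_size / k_mode: correction coefficients read from FIGS. 11 / 12.
    """
    th = th_base * (fno / fbase)      # Equation 3: aperture correction
    th *= k_size                      # Equation 4: reference face size
    if face_reliability <= 3:         # Equation 5: low face reliability
        th *= 1.5
    if profile_reliability >= 3:      # Equation 6: likely a profile
        th *= 1.5
    th *= k_mode                      # Equation 7: shooting mode
    if focus_degree <= 0.5:           # Equation 8: low degree of focus
        th *= 1.5
    th_face = min(th, 2.0 * th_base)  # Steps 916-918: upper limit of 2x
    return th_face, 1.5 * th_face     # Step 919: THfar set 1.5x larger
```

Note how the limit matters: with a low-reliability profile face at a stopped-down aperture, the raw product would be several times the reference threshold, but THface is capped at double.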
Next, the movement determination area setting process will be described with reference to FIG. 18. Step 1801 in FIG. 18 indicates the start of processing. In Step 1802, the following processing is performed. When the user has intentionally prohibited transitions that would make another face the main face, using the so-called main face lock function, main face fixing function, or personal authentication function for the detected subject, there is no need to set a movement determination area. Therefore, in Step 1802, if the main face is fixed, the process proceeds to Step 1810; otherwise, it proceeds to Step 1803. In Step 1803, the face detection position is acquired from the camera/AF microcomputer 114. The acquired face position is stored in the arrays FacePosX[0] and FacePosY[0]; at the same time, the data from the previous cycle is shifted to FacePosX[1] and FacePosY[1]. Subsequently, in Step 1804, the focal length is acquired. In this embodiment, the 35 mm equivalent focal length FocalLength is acquired from the camera/AF microcomputer 114.
Subsequently, in Step 1805, the reference determination areas FaceAreaX and FaceAreaY are obtained from the graph shown in FIG. 19. Note that the determination area is smaller on the wide side and larger on the tele side, for three reasons:
- On the wide side, the angle of view is wider, so objects other than the target subject may be present on the screen.
- On the wide side, the depth of field is deeper, so there is no need to actively perform movement determination.
- On the wide side, the subject is less likely to deviate significantly from the center of the screen due to camera shake.
In the case of the present embodiment, on the wide-angle side, considering a size that allows a bust-up shot of a subject at a distance of several meters, the area is limited to about 30% in the X direction and about 70% in the Y direction. Further, as shown in FIG. 19, it is assumed that there is little possibility of a subject other than the main subject entering the frame when the focal length is around 120 mm, and the determination area is then not limited.
Subsequently, in Step 1806, the number of detected faces is acquired from the camera/AF microcomputer 114. In Step 1807, a correction coefficient J is set according to the number of detected faces. In the present embodiment, it is assumed that there are generally one or two main subjects; therefore, when five or more faces are detected, shooting in a crowd is predicted to be likely, and the determination area is set small by using 0.8 as the correction coefficient. Otherwise, the correction coefficient is set to 1.0 and no correction is performed. The movement determination area is then calculated by the following equations.
FaceAreaX′ = correction coefficient J * FaceAreaX ... Equation 9a
FaceAreaY′ = correction coefficient J * FaceAreaY ... Equation 9b
Subsequently, in Step 1808, the lateral movement amount of the face is calculated as the difference between the two stored face positions: FaceDiffX = FacePosX[0] - FacePosX[1]. In Step 1809, a correction coefficient J′ is set according to the lateral movement amount of the face. In this embodiment, if the movement amount FaceDiffX is 20% or more of the screen width, the subject may move out of the screen, so the determination area is set small by using 0.8 as the correction coefficient. Otherwise, the correction coefficient is set to 1.0 and no correction is performed. The movement determination area is then calculated by the following equations.
FaceAreaX″ = correction coefficient J′ * FaceAreaX′ ... Equation 10a
FaceAreaY″ = correction coefficient J′ * FaceAreaY′ ... Equation 10b
On the other hand, in Step 1810, since the main face is locked, 1.0 is set to both FaceAreaX″ and FaceAreaY″, and the determination area is not limited. In Step 1811, the positions of the four corners of the determination area are calculated using the following equations. First, the center position (x, y) of the determination frame is set. This value may be arbitrary, but in this embodiment it is set slightly above the center of the screen, as shown in FIG., because the subject's face is highly likely to be located there.
JudgeFaceAreaX_Left = center position x - (number of pixels in X direction × FaceAreaX″ / 2)
JudgeFaceAreaX_Right = center position x + (number of pixels in X direction × FaceAreaX″ / 2)
JudgeFaceAreaY_Up = center position y - (number of pixels in Y direction × FaceAreaY″ / 2)
JudgeFaceAreaY_Down = center position y + (number of pixels in Y direction × FaceAreaY″ / 2)
... Equation 11
The range surrounded by the obtained coordinates is the movement determination area, and movement determination is performed only when a face is detected within this area. When the above processing ends, the process proceeds to Step 1812 and ends. As described above, in this embodiment, the discrimination area determination means in the camera/AF microcomputer 114 serving as the control means determines the discrimination area so that it becomes smaller as the number of subjects, which is a subject condition detected by the detection means, increases. The discrimination area determination means also determines the discrimination area so that it becomes smaller as the movement amount of the subject, which is a state of the subject detected by the detection means, increases. Further, the discrimination area determination means has a mode that sets part of the image sensor as the discrimination area and a mode that sets the entire area of the image sensor as the discrimination area, and switches to the latter mode when the photographer has specified a predetermined subject. Here, specification of the predetermined subject by the photographer means a state in which at least one of the shooting settings of individual authentication, subject tracking, and subject lock has been made.
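Equations 9 to 11 combine into a short computation of the determination area. A sketch assuming normalized area ratios and the embodiment's example coefficients follows; the function and parameter names are hypothetical.

```python
def judge_area_corners(center_x, center_y, pixels_x, pixels_y,
                       area_x, area_y, n_faces=1, face_diff_ratio=0.0):
    """Four corners (left, right, up, down) of the movement determination area.

    area_x / area_y: reference ratios FaceAreaX / FaceAreaY from FIG. 19.
    n_faces: detected face count (>= 5 shrinks the area, Equations 9a/9b).
    face_diff_ratio: lateral face movement as a fraction of the screen
    width (>= 0.2 shrinks the area, Equations 10a/10b).
    """
    j = 0.8 if n_faces >= 5 else 1.0              # Step 1807 coefficient J
    j_prime = 0.8 if face_diff_ratio >= 0.2 else 1.0  # Step 1809 coefficient J'
    ax = area_x * j * j_prime                     # FaceAreaX''
    ay = area_y * j * j_prime                     # FaceAreaY''
    left = center_x - pixels_x * ax / 2           # Equation 11
    right = center_x + pixels_x * ax / 2
    up = center_y - pixels_y * ay / 2
    down = center_y + pixels_y * ay / 2
    return left, right, up, down
```

For a 1920 x 1080 frame with a 50% area ratio and the frame center at (960, 480), the area spans x = 480 to 1440 and y = 210 to 750; detecting six faces shrinks it by the 0.8 coefficient.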
Next, the shake detection process, which is a feature of the present invention, will be described with reference to FIGS. In this embodiment, the shake detection means included in the camera/AF microcomputer 114 of the control means executes the shake detection process. The shake detection process exploits the fact that a person's face and body vibrate periodically, both vertically and horizontally, when the person approaches the camera by walking or running. The process decomposes the acquired face detection position into horizontal and vertical coordinates and detects whether the position is vibrating in each direction. First, the vibration of the face and body when a person walks or runs toward the camera will be described with reference to FIG. In this embodiment, the horizontal coordinate of the face detection position is the x coordinate and the vertical coordinate is the y coordinate. With the upper left corner of the image as the origin, the x coordinate increases toward the right edge of the image and the y coordinate increases toward the bottom edge. In this embodiment, the monitored position of the subject is the face detection position (for example, the center of the face), but it may instead be another part of the body, an organ located near the center of the face (an eye, ear, nose, mouth, or the like), or the center position between at least two such organs.
When a person walks on two legs, the face and body sway left and right and up and down. The locus of the detected position of the face and body draws a so-called figure-of-eight shape as shown in FIG. 15A, or a so-called U-shape as shown in FIG. 15B. When FIG. 15A is decomposed into the x coordinate and the y coordinate and plotted against time, the loci shown in FIG. 15C are obtained. From FIG. 15C, it can be seen that the vertical vibration has twice the frequency (half the period) of the horizontal vibration: while the body sways through one reciprocation in the left-right direction, it sways through two reciprocations in the vertical direction. Furthermore, if the y-coordinate locus in FIG. 15C is delayed by, for example, half a period, the locus in the x-y plane periodically follows the path A → B → C → D → A → ... in FIG. 15B. The phase relationship between the x-coordinate and y-coordinate loci varies from person to person, so in order to detect a walking person, the locus shape is not limited to those shown in FIGS. 15A and 15B; it is necessary to capture the features that the position vibrates periodically and that the vertical period is half of the horizontal period. That is, the shake detection means can determine that the subject is shaking when the position of the subject obtained from the subject detection means satisfies such predetermined motion characteristics in the video signal. In the following, a system for detecting horizontal shaking from the x coordinate of the acquired face detection position will be described; it is clear that vertical shaking can be detected by applying the same system to the acquired y coordinate of the face detection position.
In the following description, the system for detecting vertical shaking is described only where it differs from the system for detecting horizontal shaking.
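The motion model described above, in which the vertical oscillation has twice the frequency of the horizontal one, can be illustrated with a short sketch. The amplitudes, the 2-second horizontal period, and the image coordinates below are illustrative assumptions, and the phase offset varies from person to person as noted above.

```python
# Illustrative model of the walking motion described above (not from the
# patent): the vertical period is half the horizontal period, so the face
# position traces a figure-of-eight.  All numeric values are assumptions.
import math

T_HORIZONTAL = 2.0               # assumed horizontal period (s)
T_VERTICAL = T_HORIZONTAL / 2.0  # vertical period is half the horizontal one

def face_position(t, cx=960.0, cy=480.0, ax=40.0, ay=20.0, phase=0.0):
    """Image coordinates of the face at time t (origin at top-left,
    x grows to the right, y grows downward)."""
    x = cx + ax * math.sin(2.0 * math.pi * t / T_HORIZONTAL)
    y = cy + ay * math.sin(2.0 * math.pi * t / T_VERTICAL + phase)
    return x, y

# While x reaches its first extreme (a quarter of its period), y has
# already completed half of its own period and returned to center:
print(face_position(0.5))  # close to (1000.0, 480.0)
```

Sampling this model over one horizontal period shows x completing one reciprocation while y completes two, which is exactly the feature the shake detection process looks for.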
Next, the processing flow of the shake detection process will be described with reference to the flowchart of FIG. Step 1401 indicates the start of processing. In Step 1402, the latest face detection position and size (hereinafter, the current position and current face size) are acquired. In Step 1403, if the current position is larger than the stored maximum position, the process proceeds to Step 1405; otherwise, it proceeds to Step 1404. In Step 1404, if the current position is smaller than the stored minimum position, the process proceeds to Step 1406; otherwise, it proceeds to Step 1407. In Step 1405, the maximum position is updated with the current position, and the process proceeds to Step 1413. In Step 1406, the minimum position is updated with the current position, and the process proceeds to Step 1413.
In Step 1407, it is determined whether a local maximum position has been detected. Here, the local maximum position is the largest x coordinate reached when a face that was moving rightward in the image reverses its direction of movement to leftward. The method for detecting it is as follows. First, the difference between the current position and the stored maximum position is calculated. Next, the calculated difference is compared with a predetermined threshold; if it exceeds the threshold, the stored maximum position is determined to be a local maximum, and the process proceeds to Step 1408. That is, it is determined that the detected face has turned back from moving rightward to moving leftward in the image. If the calculated difference is equal to or smaller than the threshold in Step 1407, the stored maximum is determined not to be a local maximum, and the process proceeds to Step 1410. The threshold compared with the difference is derived from the current face size; in this embodiment it is 10% of the current face size. This value was determined through sufficient measurement so that variations in the face detection position caused by the shooting environment, facial expression, and the like do not lead to a false detection of a local maximum.
In Step 1408, in order to subsequently search for a minimum, the stored minimum position is cleared, and the process proceeds to Step 1409. In this embodiment, the minimum position is cleared by substituting the current position into it; however, the minimum position may instead be set to the largest settable value, or the x coordinate may be set to the right edge of the screen and the y coordinate to the bottom edge. In Step 1409, the maximum position count is started; if the count has already started, the count value is reset and counting restarts. The maximum position count measures the elapsed time since the local maximum was detected in Step 1407, and while it is counting up, it is judged that the movement direction reversed from rightward to leftward within the predetermined period. In this embodiment, the predetermined period for the horizontal direction is 2 seconds, determined on the basis of sufficient measurements of human walking. Since the period of the vertical vibration is approximately half that of the horizontal vibration, the predetermined period for the vertical direction is set to 1 second.
In Step 1410, it is determined whether a local minimum position has been detected. Here, the local minimum position is the smallest x coordinate reached when a face that was moving leftward in the image reverses its direction of movement to rightward. The method for detecting it is as follows. First, the difference between the current position and the stored minimum position is calculated. Next, the calculated difference is compared with a predetermined threshold; if it exceeds the threshold, the stored minimum position is determined to be a local minimum, and the process proceeds to Step 1411. That is, it is determined that the detected face has turned back from moving leftward to moving rightward. If the calculated difference is equal to or smaller than the threshold in Step 1410, the stored minimum is determined not to be a local minimum, and the process proceeds to Step 1413. The threshold compared with the difference is derived from the current face size; in this embodiment it is 10% of the current face size. This value was determined through sufficient measurement so that variations in the face detection position caused by the shooting environment, facial expression, and the like do not lead to a false detection of a local minimum.
In Step 1411, in order to subsequently search for a maximum, the stored maximum position is cleared, and the process proceeds to Step 1412. In this embodiment, the maximum position is cleared by substituting the current position into it; however, the maximum position may instead be set to the smallest settable value, or the x coordinate may be set to the left edge of the screen and the y coordinate to the top edge. In Step 1412, the minimum position count is started; if the count has already started, the count value is reset and counting restarts. The minimum position count measures the elapsed time since the local minimum was detected in Step 1410, and while it is counting up, it is judged that the movement direction reversed from leftward to rightward within the predetermined period. In this embodiment, the predetermined period for the horizontal direction is 2 seconds, determined on the basis of sufficient measurements of human walking. Since the period of the vertical vibration is approximately half that of the horizontal vibration, the predetermined period for the vertical direction is set to 1 second.
Next, in Step 1413, it is determined whether the maximum position count exceeds 2 seconds (1 second in the vertical direction). If it does, the process proceeds to Step 1414; otherwise, it proceeds to Step 1416. In Step 1414, the maximum position count is cleared to its initial value, counting up ends, and the process proceeds to Step 1415. In Step 1415, the minimum position is updated with the current position, and the process proceeds to Step 1417. In Step 1416, the maximum position count is incremented, and the process proceeds to Step 1417. Next, in Step 1417, it is determined whether the minimum position count exceeds 2 seconds (1 second in the vertical direction). If it does, the process proceeds to Step 1418; otherwise, it proceeds to Step 1420. In Step 1418, the minimum position count is cleared to its initial value, counting up ends, and the process proceeds to Step 1419. In Step 1419, the maximum position is updated with the current position, and the process proceeds to Step 1421. In Step 1420, the minimum position count is incremented, and the process proceeds to Step 1421.
Next, in Step 1421, it is determined whether both the maximum position count and the minimum position count are counting. If both are counting, the process proceeds to Step 1422; otherwise, it proceeds to Step 1423. In Step 1422, the shaking flag is turned on, and the process ends. In Step 1423, the shaking flag is turned off, and the process ends.
As described above, in this embodiment, the discriminating means monitors the amount of change in the size of the subject; when the size has changed and the shake detecting means has detected shaking of the subject, it determines that the distance to the subject has changed. Conversely, even when the size has changed, it determines that the distance to the subject has not changed if the shake detecting means has not detected shaking of the subject. When it is determined that the distance to the subject has not changed, the control means performs the focus adjustment while prohibiting tracking control by the tracking control means; when it is determined that the distance has changed, it performs the focus adjustment while permitting tracking control by the tracking control means. The predetermined motion characteristic is that the position of the subject in the electrical signal exhibits a vibration of at least a predetermined amplitude, including a maximum and a minimum, in the vertical direction within a first predetermined period, and a vibration of at least a predetermined amplitude, including a maximum and a minimum, in the horizontal direction within a second predetermined period. The predetermined amplitude has a predetermined ratio to the size of the subject obtained from the subject detection means. The first predetermined period (cycle) is shorter than the second predetermined period (cycle); for example, the first predetermined period is approximately half of the second.
FIG. 16 illustrates the shake detection process. First, at timing T1, the difference between the current position and the maximum position is equal to or greater than the turn-back detection threshold, so the maximum position stored at T1 can be determined to be a local maximum, and the maximum position count is started from T1. At timing T2, the difference between the current position and the minimum position is equal to or greater than the turn-back detection threshold, so the minimum position stored at T2 can be determined to be a local minimum, and the minimum position count is started from T2. Next, at timing T3, the difference between the current position and the maximum position is again equal to or greater than the turn-back detection threshold, so the maximum position stored at T3 can be determined to be a local maximum; the maximum position count is initialized at T3 and counting restarts. At timing T4, the minimum position count exceeds the threshold, so it is cleared and counting ends. Likewise, at timing T5, the maximum position count exceeds the threshold, so it is cleared and counting ends. As a result of the above processing, both the maximum position count and the minimum position count are counting during the period from T2 to T4 in FIG. 16, so it can be determined that the detected face is shaking during that period.
As described above, in the above-described embodiment, the subject person is brought into focus using the AF evaluation value together with face detection, and the presence or absence of a change in subject distance is determined based on the subject detection result. The subject is determined to be approaching only when the detected face position exhibits the predetermined motion. This makes it possible to determine the movement of the subject with high accuracy and stability, and by using this information to drive the focus to follow the subject's movement toward infinity or close range, the focusing accuracy can be improved.
The processing in the above-described embodiments may also be realized by supplying a system or apparatus with a storage medium storing software program code that implements each function. The functions of the above-described embodiments are realized when the computer (or CPU or MPU) of the system or apparatus reads and executes the program code stored in the storage medium. In this case, the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the storage medium storing the program code constitutes the present invention. As the storage medium for supplying such program code, for example, a floppy (registered trademark) disk, hard disk, optical disk, magneto-optical disk, CD-ROM, CD-R, magnetic tape, nonvolatile memory card, ROM, or the like can be used. The functions of the above-described embodiments are not realized only by the computer executing the read program code; also included is the case where the OS (operating system) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the functions of the above-described embodiments are thereby realized. Furthermore, the program code read from the storage medium may be written into a memory provided on a function expansion board inserted into the computer or in a function expansion unit connected to the computer, after which the CPU of the function expansion board or function expansion unit performs part or all of the actual processing based on the instructions of the program code, and the functions of the above-described embodiments are thereby realized.
Although preferred embodiments and examples of the present invention have been described above, the present invention is not limited to these embodiments and examples, and various modifications and changes are possible within the scope of the gist of the invention.
105: focus lens; 106: CCD (image sensor); 113: AF signal processing circuit (generation means); 114: camera/AF microcomputer (control means, discrimination means, discrimination area determination means); 116: face detector (detection means)

Claims (7)

  1. Detecting means for detecting a predetermined subject from an output of an image sensor that photoelectrically converts light from the subject that has passed through an imaging optical system including a focus lens into an electrical signal;
    Generating means for generating a focus evaluation value from a focus detection signal among outputs of the image sensor corresponding to a focus detection area including the subject;
    Discriminating means for discriminating a change in distance of the subject in the optical axis direction from an output of the detecting means;
    Discriminating area determining means for determining the size of the discriminating area in the imaging screen for discriminating a change in distance of the subject in the optical axis direction according to the focal length of the imaging optical system;
    Focus adjusting means for performing focus adjustment using the focus evaluation value;
Control means for moving the focus lens in accordance with the direction of the distance change of the subject in the optical axis direction when it is determined that there is a distance change of the subject moving in the discrimination area,
the automatic focusing apparatus being characterized in that the discrimination area determining means determines the size of the discrimination area such that the size of the discrimination area when the number of subjects detected by the detecting means is large is smaller than the size of the discrimination area when the number of detected subjects is small.
2. The automatic focusing apparatus according to claim 1, wherein the discrimination area determining means determines the size of the discrimination area such that the larger the number of subjects detected by the detecting means, the smaller the size of the discrimination area.
  3. Detecting means for detecting a predetermined subject from an output of an image sensor that photoelectrically converts light from the subject that has passed through an imaging optical system including a focus lens into an electrical signal;
    Generating means for generating a focus evaluation value from a focus detection signal among outputs of the image sensor corresponding to a focus detection area including the subject;
    Discriminating means for discriminating a change in distance of the subject in the optical axis direction from an output of the detecting means;
    Discriminating area determining means for determining the size of the discriminating area in the imaging screen for discriminating a change in distance of the subject in the optical axis direction according to the focal length of the imaging optical system;
    Focus adjusting means for performing focus adjustment using the focus evaluation value;
Control means for moving the focus lens in accordance with the direction of the distance change of the subject in the optical axis direction when it is determined that there is a distance change of the subject moving in the discrimination area,
the automatic focusing apparatus being characterized in that the discrimination area determining means determines the size of the discrimination area such that the size of the discrimination area when the amount of movement of the subject detected by the detecting means is large is smaller than the size of the discrimination area when the amount of movement of the subject is small.
4. The automatic focusing apparatus according to claim 3, wherein the discrimination area determining means determines the size of the discrimination area such that the larger the amount of movement of the subject detected by the detecting means, the smaller the size of the discrimination area.
5. An image pickup apparatus comprising: the automatic focus adjustment apparatus according to any one of claims 1 to 4; and the image pickup element.
  6. A detection step of detecting a predetermined subject from an output of an image sensor that photoelectrically converts light from the subject that has passed through an imaging optical system including a focus lens into an electrical signal;
    Generating a focus evaluation value from a focus detection signal among outputs of the image sensor corresponding to a focus detection region including the subject;
    A determination step of determining a distance change in the optical axis direction of the subject from an output in the detection step;
    A determination area determining step for determining a size of the determination area in the imaging screen for determining a distance change in the optical axis direction of the subject according to a focal length of the imaging optical system;
    A focus adjustment step of performing focus adjustment using the focus evaluation value;
A control step of moving the focus lens in accordance with the direction of the distance change of the subject in the optical axis direction when it is determined that there is a distance change of the subject moving in the determination area,
the automatic focusing method being characterized in that, in the determination area determining step, the size of the determination area is determined such that the size of the determination area when the number of subjects detected in the detecting step is large is smaller than the size of the determination area when the number of detected subjects is small.
  7. A detection step of detecting a predetermined subject from an output of an image sensor that photoelectrically converts light from the subject that has passed through an imaging optical system including a focus lens into an electrical signal;
    Generating a focus evaluation value from a focus detection signal among outputs of the image sensor corresponding to a focus detection region including the subject;
    A determination step of determining a distance change in the optical axis direction of the subject from an output in the detection step;
    A determination area determining step for determining a size of the determination area in the imaging screen for determining a distance change in the optical axis direction of the subject according to a focal length of the imaging optical system;
    A focus adjustment step of performing focus adjustment using the focus evaluation value;
A control step of moving the focus lens in accordance with the direction of the distance change of the subject in the optical axis direction when it is determined that there is a distance change of the subject moving in the determination area,
the automatic focusing method being characterized in that, in the determination area determining step, the size of the determination area is determined such that the size of the determination area when the amount of movement of the subject detected in the detecting step is large is smaller than the size of the determination area when the amount of movement of the subject is small.
JP2012000850A 2012-01-06 2012-01-06 Automatic focusing device, imaging device, and automatic focusing method Active JP6159055B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012000850A JP6159055B2 (en) 2012-01-06 2012-01-06 Automatic focusing device, imaging device, and automatic focusing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2012000850A JP6159055B2 (en) 2012-01-06 2012-01-06 Automatic focusing device, imaging device, and automatic focusing method

Publications (2)

Publication Number Publication Date
JP2013140288A JP2013140288A (en) 2013-07-18
JP6159055B2 true JP6159055B2 (en) 2017-07-05

Family

ID=49037760

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012000850A Active JP6159055B2 (en) 2012-01-06 2012-01-06 Automatic focusing device, imaging device, and automatic focusing method

Country Status (1)

Country Link
JP (1) JP6159055B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5939904B2 (en) 2012-06-25 2016-06-22 富士フイルム株式会社 Tracking adjustment device, tracking adjustment method and program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5967505A (en) * 1982-10-12 1984-04-17 Asahi Optical Co Ltd Automatic focusing device of video camera
JP3109819B2 (en) * 1990-09-24 2000-11-20 キヤノン株式会社 Automatic focusing device
JP2957800B2 (en) * 1992-04-17 1999-10-06 三菱電機株式会社 Automatic focusing device
JPH0622196A (en) * 1992-06-30 1994-01-28 Canon Inc Automatic focus adjustment device
JP5088118B2 (en) * 2007-12-07 2012-12-05 株式会社ニコン Focus adjustment device
JP2011199526A (en) * 2010-03-18 2011-10-06 Fujifilm Corp Object tracking device, and method of controlling operation of the same

Also Published As

Publication number Publication date
JP2013140288A (en) 2013-07-18

Similar Documents

Publication Publication Date Title
JP5049899B2 (en) Imaging apparatus and control method thereof
US9794467B2 (en) Focus adjustment apparatus, method for controlling focus adjustment apparatus, and storage medium
JP5294647B2 (en) Focus adjustment apparatus and control method thereof
JP2010139666A (en) Imaging device
US9160916B2 (en) Focus adjustment apparatus with a size detector
JP6184189B2 (en) Subject detecting device and its control method, imaging device, subject detecting device control program, and storage medium
JP5335408B2 (en) Focus adjustment apparatus and method
JP5873336B2 (en) Focus adjustment device
US10165174B2 (en) Control apparatus, image capturing apparatus, control method, and non-transitory computer-readable storage medium
JP2014106411A (en) Automatic focus adjustment device and automatic focus adjustment method
JP6159055B2 (en) Automatic focusing device, imaging device, and automatic focusing method
JP5075110B2 (en) Focus adjustment apparatus and method
JP5913902B2 (en) Automatic focusing device and automatic focusing method
JP2010044210A (en) Focusing device and focusing method
JP5882745B2 (en) Focus adjustment device
JP5898455B2 (en) Automatic focusing device and automatic focusing method
US9357124B2 (en) Focusing control device and controlling method of the same
JP5164711B2 (en) Focus adjustment apparatus and control method thereof
JP5932340B2 (en) Focus adjustment device
JP5017195B2 (en) Focus adjustment apparatus and method
JP6120478B2 (en) Automatic focusing device and automatic focusing method
JP6053287B2 (en) Automatic focusing apparatus and automatic focusing method
JP2015194648A (en) Imaging apparatus, method for controlling imaging apparatus, program, and storage medium
JP2014126857A (en) Focus adjustment device and method, and imaging device
US20210058543A1 (en) Lens control apparatus, control method, and storage medium

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20141225

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20150910

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20150924

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20151118

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20160419

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20160620

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20161025

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20161226

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20170511

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20170609

R151 Written notification of patent or utility model registration

Ref document number: 6159055

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

RD03 Notification of appointment of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: R3D03