CN101399915B - Image taking apparatus and face region determining method in image taking apparatus - Google Patents
Image taking apparatus and face region determining method in image taking apparatus
- Publication number
- CN101399915B, CN2008101680191A, CN200810168019A
- Authority
- CN
- China
- Prior art keywords
- face
- region
- motion
- neighboring area
- detection unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Studio Devices (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides an image taking apparatus and a face region determining method for the apparatus, capable of continuing to estimate the face position appropriately even when the face of the subject turns from the front to the side or to the rear. A face region is detected by template matching in the image data obtained from the imaging unit (S200). While the face region is being detected (S201: Yes), its position is determined from the detection result (S202). A body region that moves together with the face region is also detected (S203-S205). The time-series motion of the detected face region and body region between frames is then detected by pattern matching (S207, S209). Even when the face region can no longer be detected (S201: No), the position of the face region can continue to be estimated appropriately by using the motion detection result for the face region (S211: Yes; S221: Yes) or the motion detection result for the body region (S210: Yes; S211: No).
Description
Technical field
The present invention relates to an image taking apparatus such as a compact digital camera, and to a face region determining method used in such an apparatus.
Background art
Conventionally, the following technique is known for detecting a face from a captured input image: a face position is detected by template matching, and feature points of the face are then detected from the detected face position, whereby the size, position, and orientation of the face are detected. It is also known that focusing accuracy for the face of a subject can be improved by performing autofocus control (AF control) and the like so that the range of the face position detected by this face detection technique is brought into focus (see, for example, Patent Document 1).
[Patent Document 1] Japanese Unexamined Patent Application Publication No. 2006-227080
However, the face detection technique of Patent Document 1 and the like detects a face by template matching based on facial feature points such as the eyes, nose, and mouth. It therefore works when the face is frontal, but the face detection rate drops when the face turns sideways or to the rear. Consequently, even though the face is present in the photographed image, the face can no longer be detected once it turns from the front to the side or rear, and there is the problem that focus is set on a position other than the face.
Summary of the invention
The present invention has been made in view of the above circumstances, and an object of the invention is to provide an image taking apparatus, and a face region determining method for the apparatus, capable of continuing to estimate the face position appropriately even when the face of the subject turns from the front to the side or to the rear.
To solve the above problem and achieve the object, the invention provides an image taking apparatus that determines the face region of a moving subject, characterized by comprising: an imaging unit that receives subject light, performs photoelectric conversion, and obtains image data in units of frames; a face detection unit that detects a region where a face exists from the obtained image data; a face periphery detection unit that detects a peripheral region of the detected face region from the obtained image data; a motion detection unit that detects the time-series motion of each of the detected face region and peripheral region between image frames; and a face position determining unit that determines the face region of the current image frame based on the detection result of the face detection unit and the detection result of the motion detection unit.
The image taking apparatus according to the invention is further characterized in that it has an imaging condition setting unit that sets imaging conditions based on the image data of the determined face region.
The apparatus is further characterized in that the face periphery detection unit detects, as the peripheral region, a body region predicted to move together with the motion of the detected face region.
The apparatus is further characterized in that the face periphery detection unit detects, as the peripheral region, a region set so as to move together with the motion of the detected face region.
The apparatus is further characterized in that the peripheral region detected by the face periphery detection unit is set according to the position of the face region within the camera frame or the size of the face region.
The apparatus is further characterized in that the motion detection unit divides each of the face region and the peripheral region into a plurality of sub-regions, and detects, as the time-series motion, the amount of positional change based on the changes of those divided sub-regions whose positions have changed.
The apparatus is further characterized in that, when the face detection unit detects a face region in the current image frame, the face position determining unit determines the face region in the current image frame from the detection result of the face detection unit; and when the face detection unit does not detect a face region in the current image frame but the motion detection unit detects motion of the face region and the peripheral region between image frames, the face position determining unit determines the face region in the current image frame as the face position based on at least the motion detection result for the face region.
The apparatus is further characterized in that it has a reliability judgment unit that judges the reliability of the relative motion between the motion detection result for the face region and the motion detection result for the peripheral region, and resets the peripheral region in the current image frame when the reliability of the relative motion is judged to be below a prescribed level.
The apparatus is further characterized in that, when the face detection unit detects a face region in the current image frame, the face position determining unit determines the face region in the current image frame from the detection result of the face detection unit; and when the face detection unit does not detect a face region in the current image frame, and the motion detection unit does not detect motion of the face region between image frames but does detect motion of the peripheral region between image frames, the face position determining unit estimates the face region in the current image frame from the motion detection result for the peripheral region and determines it as the face position.
The invention also provides a face region determining method for determining the face region of a captured moving subject, characterized by comprising: an imaging step of receiving subject light with an imaging unit, performing photoelectric conversion, and obtaining image data in units of frames; a face detection step of detecting a region where a face exists from the obtained image data; a face periphery detection step of detecting a peripheral region of the detected face region from the obtained image data; a motion detection step of detecting the time-series motion of each of the detected face region and peripheral region between image frames; and a face position determining step of determining the face region of the current image frame based on the detection result of the face detection step and the detection result of the motion detection step.
With the image taking apparatus and the face region determining method of the invention, a face region is detected from the image data obtained by the imaging unit, a peripheral region of the detected face region is detected, the time-series motion of each of the detected face region and peripheral region between frames is detected, and the face region in the current frame is determined from the detection results of the face detection unit (or face detection step) and the motion detection unit (or motion detection step). This yields the effect that the position of the face region can continue to be estimated appropriately by using both the motion detection of the face region itself and the motion detection of the peripheral region of the face region.
Description of drawings
Fig. 1 is a schematic block diagram showing an example electrical system configuration of the image taking apparatus of Embodiment 1 of the present invention.
Fig. 2 is a schematic timing chart showing an example operation of the main parts of Fig. 1 during shooting.
Fig. 3-1 is an explanatory diagram showing an example frame image of the N-th frame.
Fig. 3-2 is an explanatory diagram showing an example frame image of the (N+1)-th frame.
Fig. 3-3 is an explanatory diagram showing an example frame image of the (N+2)-th frame.
Fig. 3-4 is an explanatory diagram showing an example frame image of the (N+3)-th frame.
Fig. 4 is a general flowchart showing an example of basic operation control of the electronic camera from power-on to power-off.
Fig. 5 is a general flowchart showing an example of the face detection processing in Embodiment 1.
Fig. 6 is a general flowchart showing an example of the body region prediction processing.
Fig. 7-1 is an explanatory diagram showing an example frame image for the body region prediction processing.
Fig. 7-2 is an explanatory diagram showing another example frame image for the body region prediction processing.
Fig. 7-3 is an explanatory diagram showing yet another example frame image for the body region prediction processing.
Fig. 8-1 is a schematic diagram showing an example division of the face region and the body region into macroblocks.
Fig. 8-2 is a schematic diagram showing another example division of the face region and the body region into macroblocks.
Fig. 9 is a general flowchart showing an example of the face region pattern matching processing.
Fig. 10 is a general flowchart showing an example of the body region pattern matching processing.
Fig. 11 is a general flowchart showing an example of the relative body region vector reliability judgment processing.
Fig. 12 is a schematic block diagram showing an example electrical system configuration of the image taking apparatus of Embodiment 2 of the present invention.
Fig. 13-1 is an explanatory diagram showing an example frame image of the N-th frame.
Fig. 13-2 is an explanatory diagram showing an example frame image of the (N+1)-th frame.
Fig. 13-3 is an explanatory diagram showing an example frame image of the (N+2)-th frame.
Fig. 14 is a general flowchart showing an example of the face detection processing in Embodiment 2.
Fig. 15 is a general flowchart showing an example of the peripheral region prediction and setting processing.
Fig. 16-1 is an explanatory diagram showing an example frame image for the peripheral region prediction and setting.
Fig. 16-2 is an explanatory diagram showing another example frame image for the peripheral region prediction and setting.
Fig. 16-3 is an explanatory diagram showing yet another example frame image for the peripheral region prediction and setting.
Fig. 16-4 is an explanatory diagram showing a further example frame image for the peripheral region prediction and setting.
Fig. 17-1 is a schematic diagram showing an example division of the face region and the peripheral regions into macroblocks.
Fig. 17-2 is a schematic diagram showing another example division of the face region and the peripheral regions into macroblocks.
Fig. 18 is a general flowchart showing an example of the peripheral region pattern matching processing.
Fig. 19 is a general flowchart showing an example of the relative peripheral region vector reliability judgment processing.
Label declaration
12: imaging element; 15: motion detection unit; 18: face detection unit; 19: body region prediction unit; 27: system controller; 32: peripheral region setting unit.
Embodiment
Embodiments of an image taking apparatus, and of a face region determining method for the apparatus, in the best mode for carrying out the present invention will now be described with reference to the drawings. The invention is not limited to these embodiments, and various modifications can be made without departing from the gist of the invention.
(execution mode 1)
Fig. 1 is a schematic block diagram showing an example electrical system configuration of the image taking apparatus of Embodiment 1 of the present invention. The image taking apparatus 1 of Embodiment 1 is an electronic camera such as a compact digital camera and, as shown in Fig. 1, has: an imaging optical system 11, an imaging element 12, an AFE (Analog Front End) 13, a frame memory 14, a motion detection unit 15, a RAM 16, an image processing unit 17, a face detection unit 18, a body region prediction unit 19, a recording medium interface 20, a recording medium holding unit 21, a recording medium 22, a video encoder 23, a video signal output terminal 23a, an LCD driver 24, an LCD 25, a ROM 26, and a system controller 27.
The imaging optical system 11 includes a photographic lens and the like, and forms an image of incident subject light on the imaging element 12. The imaging element 12, serving as the imaging unit, is composed of a solid-state image sensor such as a CCD or CMOS sensor, and obtains image data in units of frames by photoelectrically converting the light received from the subject through the imaging optical system 11. The AFE 13 reads out the image data (analog electrical signal) obtained by the imaging element 12, applies AGC (Automatic Gain Control) processing, A/D conversion processing, and the like, and outputs image data as digital data. The image data digitized by the AFE 13 is input to the frame memory 14, the motion detection unit 15, and the RAM 16.
The face detection unit 18 is composed of a template matching module. Using known template matching methods that apply contour templates, mesh templates, and the like to the image data obtained by the imaging element 12 (see, for example, Japanese Unexamined Patent Application Publication No. H8-63597), this module judges whether a face image exists in the image data, and detects the face region when a face image exists. The face detection unit 18 saves information such as the coordinates of the detected face region, the orientation of the face, and the facial components (eyes, nose, mouth, etc.) in the RAM 16.
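The patent does not give the template matching itself; a minimal sketch of the general idea, assuming a grayscale frame, a single fixed-size template, and normalized cross-correlation as the similarity measure (the actual module applies contour and mesh templates over multiple positions, scales, and orientations), might look like this:

```python
import numpy as np

def match_template(frame: np.ndarray, template: np.ndarray, threshold: float = 0.7):
    """Slide the template over the frame and return the best-matching
    region (x, y, w, h) if its normalized cross-correlation exceeds the
    threshold, else None.  Illustrative only: the threshold value is an
    assumption, and real face detection would scan several scales."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_pos = -1.0, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            win = frame[y:y + th, x:x + tw]
            w = (win - win.mean()) / (win.std() + 1e-9)
            score = float((t * w).mean())      # correlation in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (x, y, tw, th)
    return best_pos if best_score >= threshold else None
```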
The body region prediction unit 19 predicts and calculates the position and size of the body region from the information on the coordinates of the face region detected by the face detection unit 18, the orientation of the face, the facial components, and so on. The calculated position and size of the body region are stored in the RAM 16 as information on a peripheral region that moves together with the face region detected by the face detection unit 18.
The motion detection unit 15 detects the time-series motion of each of the detected face region and body region between frames by a pattern matching method. That is, the motion detection unit 15 obtains motion vectors by pattern matching, using the image data of the previous frame stored in the frame memory 14 and the image data of the current frame input from the AFE 13. The ranges for which motion vectors are obtained at this time are the face region and the body region (face peripheral region) obtained by the face detection unit 18 and the body region prediction unit 19 from the image data of the previous frame. The information on the motion vectors of the face region and the body region detected by the motion detection unit 15 is stored in the RAM 16.
In addition to the control program executed by the system controller 27, the ROM 26 stores in advance the template data and the like used by the face detection unit 18 for template matching.
Fig. 2 is a schematic timing chart showing an example operation of the main parts of Fig. 1 during shooting. In Fig. 2, "A" to "F" denote frame images captured successively by the imaging element 12 with each exposure operation. Each captured frame image is digitized by the AFE 13 and then stored in the RAM 16 or the frame memory 14 under DMA (direct memory access) control. The frame images stored in the RAM 16 are successively processed by the image processing unit 17 and become the targets of the face detection processing of the face detection unit 18. In the motion detection unit 15, motion vectors are obtained by representative-point matching: from the previous frame image stored in the frame memory 14 and the image data of the current frame obtained from the AFE 13, the motion vectors of the face region and body region supplied by the face detection unit 18 and the body region prediction unit 19 are obtained by representative-point matching within the pattern matching method. Then, in the face position determination of the system controller 27, the final face region position in the current frame is determined from the face region detected by representative-point matching and from the detection result of the face region detected by the face detection unit 18. AE, AF, and AWB operations are performed according to the determined final face region position, and the imaging conditions are set. For LCD display, each frame image to be displayed is shown on the LCD 25 as a live-view image.
Next, an outline of the face region determining method characteristic of Embodiment 1 will be described with reference to Figs. 3-1 to 3-4, which are explanatory diagrams showing example time-series frame images from the N-th frame to the (N+3)-th frame. Parts (a) and (b) of each figure show the same frame image: part (a) shows the face detection by the template matching module (face detection unit 18), and part (b) shows the face detection by the motion detection module (body region prediction unit 19 and motion detection unit 15).
First, as shown in Fig. 3-1(a), it is assumed that for the frame image of the N-th frame the face region has been detected from the image data by the template matching in the face detection unit 18. When the face detection unit 18 detects the face region, the AE operation and the like of the N-th frame image are performed with the detected face region as the target. Then, as shown in Fig. 3-1(b), the information on the detected face region is notified to the body region prediction unit 19. As shown by the rectangular frame in Fig. 3-1(b), the body region prediction unit 19 predicts the body region from the notified face region.
Next, for the frame image of the (N+1)-th frame shown in Fig. 3-2(a), suppose that the face region cannot be detected by the template matching in the face detection unit 18 because of the orientation of the face or the like. In this case, as shown in Fig. 3-2(b), if the motion of the face region between the N-th and (N+1)-th frames can be detected by the pattern matching in the motion detection unit 15, the AE operation and the like of the (N+1)-th frame image are performed with the face region detected by the motion detection unit 15 as the target. At this time it does not matter whether the motion of the body region can also be detected by the pattern matching in the motion detection unit 15.
Further, for the frame image of the (N+2)-th frame shown in Fig. 3-3(a), suppose again that the face region cannot be detected by the template matching in the face detection unit 18 because of the orientation of the face or the like. Suppose also that, as shown in Fig. 3-3(b), the motion of the face region between the (N+1)-th and (N+2)-th frames cannot be detected by the pattern matching in the motion detection unit 15, but the motion of the body region can. In that case, the position of the face region is estimated from the motion of the body region, and the AE operation and the like of the (N+2)-th frame image are performed with the estimated face region as the target.
On the other hand, for the frame image of the (N+3)-th frame shown in Fig. 3-4(a), suppose that the face region can again be detected by the template matching in the face detection unit 18. When the face detection unit 18 detects the face region, the AE operation and the like of the (N+3)-th frame image are performed with the detected face region as the target. Then, as shown in Fig. 3-4(b), the information on the detected face region is notified to the body region prediction unit 19. As shown by the rectangular frame in Fig. 3-4(b), the body region prediction unit 19 predicts the body region again from the notified face region.
Thus, in Embodiment 1, the face region is detected by the face detection unit 18, and the position and size of the body are predicted by the body region prediction unit 19 from the size of the detected face region, the orientation of the face, and the information on the facial components. Then, the motion detection unit 15 detects, by pattern matching, the time-series motion between frames of the face region detected by the face detection unit 18 and of the predicted body region. When the face image remains within the frame image but the face cannot be detected by the template matching of the face detection unit 18 because of the orientation of the face, the position of the face region in the current frame image is updated from the motion of the face region whenever that motion can be detected by the pattern matching in the motion detection unit 15. When the motion of the face region cannot be detected by the pattern matching in the motion detection unit 15, for example because the face has turned to the rear, but the motion of the body region can be detected, the motion of the face region in the current frame image is predicted from the motion of the detected body region and the position of the face region is updated. That is, since the motion of the face roughly coincides with the motion of the body, attention is paid to the fact that the motion of the face can also be estimated from the motion of the body, and the detected motion of the body region can be used effectively when the face region itself or its motion cannot be detected. As a result, even when the orientation of the face changes, as when photographing a sports scene or a child running about, the face position can be tracked and set accurately.
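This per-frame fallback cascade can be condensed into a short sketch. All names here (Tracker, detect_face, track_motion, predict_body) are hypothetical stand-ins for the face detection unit 18, motion detection unit 15, and body region prediction unit 19; the relative reliability check of step S213 is omitted for brevity:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Region = Tuple[int, int, int, int]  # x, y, w, h
Vec = Tuple[int, int]

@dataclass
class Tracker:
    """Illustrative decision cascade of Embodiment 1 (Figs. 3-1 to 3-4)."""
    face: Optional[Region] = None
    body: Optional[Region] = None

    def update(self, frame, detect_face: Callable, track_motion: Callable,
               predict_body: Callable) -> Optional[Region]:
        detected = detect_face(frame)                       # S200
        if detected is not None:                            # S201: Yes
            self.face = detected                            # S202
            self.body = predict_body(detected)              # S203-S205
            return self.face
        if self.face is None:                               # S206: No history
            return None
        face_vec = track_motion(frame, self.face)           # S207
        body_vec = track_motion(frame, self.body) if self.body else None  # S209
        vec = face_vec if face_vec is not None else body_vec
        if vec is None:                                     # neither motion found
            return None
        dx, dy = vec                                        # body motion stands in
        x, y, w, h = self.face                              # for face motion when
        self.face = (x + dx, y + dy, w, h)                  # needed (S218-S219)
        if body_vec is not None and self.body:
            bx, by, bw, bh = self.body
            self.body = (bx + body_vec[0], by + body_vec[1], bw, bh)
        return self.face
```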
Next, an example of operation control in Embodiment 1 will be described in detail with reference to Figs. 4 to 11. Fig. 4 is a general flowchart showing an example of basic operation control of the electronic camera from power-on to power-off. First, when the electronic camera is powered on, it is judged whether the shooting mode is set (step S100). When the shooting mode is set (step S100: Yes — the imaging step), the face detection processing is started (step S101); its details are described later. It is then judged whether the face detection processing has detected a face region in the current frame image (step S102). When no face region is detected (step S102: No), the AF/AE/AWB operations are performed with the normal range in the current frame image as the target, and the imaging conditions are set (step S103). When a face region is detected (step S102: Yes), the AF/AE/AWB operations are performed with the image data of the face detection range (face region) in the current frame image as the target, and the imaging conditions are set (step S104 — the imaging condition setting unit). It is then judged whether the first release switch is on (step S105); when it is off (step S105: No), the processing of steps S101 to S104 is repeated.
When the first release switch is on (step S105: Yes), the face detection processing is started again (step S106); its details are described later. It is judged whether a face region has been detected in the current frame image (step S107). When no face region is detected (step S107: No), the AF/AE/AWB operations are performed with the normal range in the current frame image as the target, and the imaging conditions are set (step S108). When a face region is detected (step S107: Yes), the AF/AE/AWB operations are performed with the image data of the face detection range (face region) in the current frame image as the target, and the imaging conditions are set (step S109 — the imaging condition setting unit). It is then judged whether the second release switch is on (step S110); when it is off (step S110: No), the processing of steps S106 to S109 is repeated.
When the second release switch is on (step S110: Yes), shooting is performed according to the set imaging conditions (step S111 — the imaging step). It is then judged whether the shooting mode has ended (step S112); if not (step S112: No), the flow returns to step S101. When the shooting mode has ended (step S112: Yes), it is judged whether the playback mode has been selected (step S113). When the playback mode is not selected (step S113: No), the camera power is turned off and the processing ends.
When the shooting mode is not set in step S100, or when the playback mode is selected in step S113, an image to be played back is selected by user operation (step S114), and the selected recorded still image or moving image is played back on the screen of the LCD 25 (step S115). It is then judged whether the playback mode has ended (step S116); if not (step S116: No), the flow returns to step S114. When the playback mode has ended (step S116: Yes), it is judged whether the camera power is to be turned off (step S117); if not (step S117: No), the shooting mode is set and the flow returns to step S101. When the camera power is turned off (step S117: Yes), the processing ends.
Next, the face detection processing of step S101 or step S106 will be described with reference to Fig. 5, a general flowchart showing an example of the face detection processing. First, the system controller 27 uses the face detection unit 18 to perform face detection processing based on a known template matching method or the like (step S200). When a face is detected in the face detection processing of the face detection unit 18 (step S201: Yes), the detected face region is saved in the RAM 16 as the face region in the current frame (step S202 — the face position determining unit, the face position determining step). This corresponds to the examples shown in Fig. 3-1(a) and Fig. 3-4(a). After this face detection, the system controller 27 uses the body region prediction unit 19 to perform the body region prediction processing (step S203), corresponding to the examples shown in Fig. 3-1(b) and Fig. 3-4(b). The body region prediction processing is described later.
After the body region prediction processing, it is judged whether the body region has been detected (could be predicted) in the current frame (step S204). When it has been detected (step S204: Yes), the detected body region is saved in the RAM 16 as the body region in the current frame (step S205), and this round of face detection processing ends. When it has not been detected (could not be predicted), this round of face detection processing ends immediately. In the face detection processing of the face detection unit 18, as long as the face continues to be detected in subsequent frame images, the processing of steps S201: Yes through S205 is simply repeated. The information on the face region detected by the face detection unit 18 is therefore determined to be the face region in the current frame, saved in the RAM 16, and used as the target region for setting the imaging conditions.
On the other hand, when the face region cannot be detected in the face detection processing of the face detection unit 18 (step S201: No), the system controller 27 performs the face region detection processing using the pattern matching of the motion detection unit 15 and determines the position of the face region. First, it is judged whether a face region for the previous frame image is stored in the RAM 16 (step S206). This is because the processing of the motion detection unit 15 presupposes that a face region was once detected by the face detection unit 18 in at least a preceding frame image and is temporarily stored in the RAM 16. When no face region is stored in the RAM 16 (step S206: No), the processing of the motion detection unit 15 cannot be performed, so this round of face detection processing ends.
When a face region exists for the previous frame image (step S206: Yes), the motion detection unit 15 performs pattern matching processing on the face region and detects the time-series motion of the face region between the previous frame and the current frame (step S207); this pattern matching processing is described later. Next, it is judged whether a body region for the previous frame image is stored in the RAM 16 (step S208). When a body region exists for the previous frame image (step S208: Yes), the motion detection unit 15 performs pattern matching processing on the body region and detects the time-series motion of the body region between the previous frame and the current frame (step S209); this pattern matching processing is described later. When no body region is stored in the RAM 16 (step S208: No), the processing of step S209 is skipped.
Then, from the pattern matching result for the body region of the motion detection unit 15, it is judged whether a motion vector for the body region has been detected (step S210). When the motion vector of the body region has been detected (step S210: Yes), it is judged from the pattern matching result for the face region whether a motion vector for the face region has been detected (step S211). When the motion vector of the face region has been detected (step S211: Yes), the face region in the current frame is calculated from the detected motion of the face region and saved in the RAM 16 (step S212 — the face position determining unit, the face position determining step). This corresponds to the example shown in Fig. 3-2(b). Thus, when the face detection unit 18 does not detect the face region but the motion detection unit 15 detects the motion of the face region, the face region information calculated from the motion of the face region is determined to be the face region in the current frame, saved in the RAM 16, and used as the target region for setting the imaging conditions.
Then, based on the motion detection result for the body region detected at the same time, the relative body region vector reliability judgment processing is performed (step S213 — the reliability judgment unit); this processing is described later. From its result, it is judged whether the reliability of the relative body region vector is at or above the prescribed level (step S214). When there is reliability at or above the prescribed level (step S214: Yes), the body region in the current frame is calculated from the detected motion of the body region and saved in the RAM 16 (step S217). When there is no reliability at or above the prescribed level (step S214: No), the system controller 27 uses the body region prediction unit 19 to perform the body region prediction processing again (step S215), setting the body region anew; this processing is described later. After the body region prediction processing, it is judged whether the body region has been detected (could be predicted) in the current frame (step S216). When it has been detected (step S216: Yes), the detected body region is saved in the RAM 16 as the body region in the current frame (step S217), and this round of face detection processing ends. When it has not been detected (could not be predicted), this round of face detection processing ends immediately.
On the other hand, when the motion vector of the face region is not detected in step S211, the motion vector of the detected body region is regarded as the motion vector of the face region (step S218), and the face region in the current frame is calculated from this motion vector (the motion vector of the body region) and saved in the RAM 16 (step S219 — the face position determining unit, the face position determining step). This corresponds to the example shown in Fig. 3-3(b). Thus, when the face detection unit 18 does not detect the face region, and the motion detection unit 15 does not detect the motion of the face region but does detect the motion of the body region, the face region information estimated and calculated from the motion of the body region is determined to be the face region in the current frame, saved in the RAM 16, and used as the target region for setting the imaging conditions. Further, the body region in the current frame is calculated from the detected motion of the body region and saved in the RAM 16 (step S220).
When the motion vector of the body region is not detected in step S210 (step S210: No), it is judged from the pattern matching result for the face region of the motion detection unit 15 whether a motion vector for the face region has been detected (step S221). When the motion vector of the face region has been detected (step S221: Yes), the face region in the current frame is calculated from the detected motion of the face region and saved in the RAM 16 (step S222 — the face position determining unit, the face position determining step). Thus, when the face detection unit 18 does not detect the face region but the motion detection unit 15 detects the motion of the face region, the face region information calculated from the motion of the face region is determined to be the face region in the current frame, saved in the RAM 16, and used as the target region for setting the imaging conditions.
Next, the system controller 27 uses the body region prediction unit 19 to perform the body region prediction processing (step S223), described later. After the body region prediction processing, it is judged whether the body region has been detected (could be predicted) in the current frame (step S224). When it has been detected (step S224: Yes), the detected body region is saved in the RAM 16 as the body region in the current frame (step S225), and this round of face detection processing ends. When it has not been detected (could not be predicted), this round of face detection processing ends immediately.
Next, the body region prediction processing of steps S203, S215, and S223 will be described. Fig. 6 is a general flowchart showing an example of the body region prediction processing performed by the body region prediction unit 19 under the control of the system controller 27. First, from the face region information, the body width is calculated as face width × face orientation coefficient (step S300). Then, from the face region information, the body height is calculated as face height × body height coefficient (step S301). The body region is calculated from these results (step S302). It is then judged whether the entire calculated body region falls within the range of the shooting angle of view (step S303). When it falls within the range of the shooting angle of view (step S303: Yes), the body region prediction processing ends.
On the other hand, when it does not fall within the range of the shooting angle of view (step S303: No), it is judged whether at least a prescribed threshold percentage of the body falls within the range of the shooting angle of view (step S304). When it does (step S304: Yes), the body region confined to the range of the shooting angle of view is calculated again (step S305), and the body region prediction processing ends.
When the prescribed threshold percentage of the body does not fall within the range of the shooting angle of view (step S304: No), the calculated body region is discarded (step S306), and the body region prediction processing ends. In this case the body region is treated as not detected, and the subsequent body region detection judgments (steps S204, S216, S224) give a negative result.
Here, an example of the body region prediction processing of Fig. 6 will be described with reference to Figs. 7-1 to 7-3. Fig. 7-1 shows an example frame image in which the face is sideways and the entire body falls within the range of the shooting angle of view. In this case, since the body is also likely to be sideways, the body width is considered to be about the same as the face width; that is, it is calculated with a face orientation coefficient of 1. The body height is taken to be 5 times the face height; that is, it is calculated with a body height coefficient of 5. The body position is considered to be below the face, along the orientation of the face. From the above, the position of the body region is predicted and calculated. In this example the entire body falls within the range of the shooting angle of view, so the obtained position is used as the body region.
Fig. 7-2 shows an example frame image in which the face is frontal and more than half of the body falls within the range of the shooting angle of view. In this case, since the body is also likely to be frontal, the body width is considered to be about 1.5 times the face width; that is, it is calculated with a face orientation coefficient of 1.5. The body height is taken to be 5 times the face height (body height coefficient of 5). The body position is considered to be below the face, along the orientation of the face. From the above, the position of the body region is predicted and calculated. In this example the proportion of the body region falling within the range of the shooting angle of view is at or above the prescribed threshold percentage, for example 30%, so it is judged suitable as the body region, and the body region confined to the range of the shooting angle of view is calculated again.
Fig. 7-3 shows an example frame image in which the face is frontal and more than half of the body does not fall within the range of the shooting angle of view. Here too, since the body is also likely to be frontal, the body width is considered to be about 1.5 times the face width (face orientation coefficient of 1.5), and the body height is taken to be 5 times the face height (body height coefficient of 5). The body position is considered to be below the face, along the orientation of the face. From the above, the position of the body region is predicted and calculated. In this example, however, the proportion of the body region falling within the range of the shooting angle of view is below the prescribed threshold percentage, for example 30%, so it is judged unsuitable as the body region and the obtained body region is discarded.
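Since the coefficients are given concretely above (orientation coefficient 1.0 for a sideways face, 1.5 for a frontal face, height coefficient 5, visibility threshold e.g. 30%), steps S300 to S306 can be sketched directly; the tuple region representation and the clipping arithmetic are illustrative assumptions:

```python
from typing import Optional, Tuple

Region = Tuple[int, int, int, int]  # x, y, w, h

def predict_body_region(face: Region, frontal: bool,
                        frame_w: int, frame_h: int,
                        threshold: float = 0.30) -> Optional[Region]:
    """Predict the body region below the face (steps S300-S306).
    Coefficients follow the examples of Figs. 7-1 to 7-3."""
    fx, fy, fw, fh = face
    coeff = 1.5 if frontal else 1.0           # face orientation coefficient
    bw = fw * coeff                           # S300: body width
    bh = fh * 5                               # S301: body height coefficient = 5
    bx = fx + fw / 2 - bw / 2                 # centered below the face
    by = fy + fh
    # clip the predicted region to the shooting angle of view
    cx0, cy0 = max(bx, 0), max(by, 0)
    cx1, cy1 = min(bx + bw, frame_w), min(by + bh, frame_h)
    visible = max(cx1 - cx0, 0) * max(cy1 - cy0, 0)
    if visible == bw * bh:                    # S303: entirely inside
        return (int(bx), int(by), int(bw), int(bh))
    if visible >= threshold * bw * bh:        # S304/S305: clip and keep
        return (int(cx0), int(cy0), int(cx1 - cx0), int(cy1 - cy0))
    return None                               # S306: discard
```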
Next, the face region pattern matching processing of step S207 between the previous frame and the current frame, and the body region pattern matching processing of step S209, will be described. Here, to improve the accuracy of the matching processing, the face region and the body region are each divided into a plurality of macroblocks (divided regions) and the pattern matching processing is performed per macroblock. Fig. 8-1 is a schematic diagram showing an example division into macroblocks when the face region is small: the face region is divided into 4 macroblocks and the body region into 20 macroblocks. Fig. 8-2 shows an example division when the face region is large: the face region is divided into 30 macroblocks and the body region into 54 macroblocks. The macroblock sizes need not be the same for the face region and the body region; they may differ, and suitable division counts can be set.
Fig. 9 is a general flowchart showing an example of the face region pattern matching processing performed by the motion detection unit 15 under the control of the system controller 27, and Fig. 10 is a general flowchart showing an example of the body region pattern matching processing performed likewise. The basis of these pattern matching processes is that the amount of positional change is detected, as the time-series motion of the face region and the body region, from the changes of the macroblocks within each region whose positions have changed.
First, when the face region pattern matching processing starts, the size and number of macroblocks are decided for the face region (step S310). A correlation operation is then performed for each macroblock (step S311), and the motion vector of each macroblock is calculated by comprehensively judging the correlation results (step S312). When performing the correlation operation for each macroblock, a search range somewhat larger than the block's position in the previous frame is set in the current frame, and the operation proceeds by judging where each pixel has moved. Then, the reliability of the calculated motion vector is judged for each macroblock (step S313); this judgment is made, for example, from whether the directions of the calculated motion vectors are consistent, from the magnitude of the correlation value, and so on. Next, the motion vector of the face region is calculated from the macroblocks that have reliability (step S314); it can be obtained, for example, by taking the mean of the motion vectors or by adopting the most frequent motion vector. Then, the reliability of the face region as a whole is judged for the calculated motion vector (step S315). When the face region as a whole has reliability (step S316: Yes), the motion vector of the face region obtained in step S314 is regarded as valid and the face region pattern matching processing ends. When the face region as a whole has no reliability (step S316: No), the motion vector obtained in step S314 is discarded (step S317) and the face region pattern matching processing ends.
The body region pattern matching processing is the same as the face region pattern matching processing. First, when it starts, the size and number of macroblocks are decided for the body region (step S320). A correlation operation is performed for each macroblock (step S321), and the motion vector of each macroblock is calculated by comprehensively judging the correlation results (step S322); here too, a search range somewhat larger than the block's position in the previous frame is set in the current frame, and the operation proceeds by judging where each pixel has moved. Then, the reliability of the calculated motion vector is judged for each macroblock (step S323), for example from whether the directions of the calculated motion vectors are consistent, from the magnitude of the correlation value, and so on. Next, the motion vector of the body region is calculated from the macroblocks that have reliability (step S324), for example as the mean of the motion vectors or as the most frequent motion vector. Then, the reliability of the body region as a whole is judged for the calculated motion vector (step S325). When the body region as a whole has reliability (step S326: Yes), the motion vector of the body region obtained in step S324 is regarded as valid and the body region pattern matching processing ends. When the body region as a whole has no reliability (step S326: No), the motion vector obtained in step S324 is discarded (step S327) and the body region pattern matching processing ends.
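A minimal sketch of the per-macroblock matching and aggregation of Figs. 9 and 10, using sum-of-absolute-differences block matching as the correlation operation; the block size, search radius, and agreement test here are assumptions, not the patent's actual parameters:

```python
import numpy as np

def block_motion(prev: np.ndarray, curr: np.ndarray,
                 region, block: int = 16, search: int = 8):
    """Estimate the motion vector of a region by dividing it into
    macroblocks, matching each block by SAD within a search window,
    and averaging the vectors that agree (steps S310-S317 / S320-S327)."""
    x0, y0, w, h = region
    vectors = []
    for by in range(y0, y0 + h - block + 1, block):
        for bx in range(x0, x0 + w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(np.int32)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = by + dy, bx + dx
                    if yy < 0 or xx < 0 or yy + block > curr.shape[0] \
                            or xx + block > curr.shape[1]:
                        continue
                    cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
                    sad = int(np.abs(ref - cand).sum())
                    if best is None or sad < best:
                        best, best_v = sad, (dx, dy)
            vectors.append(best_v)
    if not vectors:
        return None
    v = np.array(vectors)
    # per-block reliability: keep vectors close to the consensus direction
    ok = np.abs(v - v.mean(axis=0)).max(axis=1) <= 2
    if ok.sum() < len(v) // 2:     # whole-region reliability test
        return None                # discard (S317 / S327)
    return tuple(v[ok].mean(axis=0))
```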
Next, the relative body region vector reliability judgment processing of step S213 will be described. Fig. 11 is a general flowchart showing an example of this processing, performed by the system controller 27. In this processing, when the motion vector of the face region and the motion vector of the body region have been obtained, each is decomposed into horizontal and vertical components, and the reliability of the motion vector of the detected body region is judged from whether the motion in each direction is consistent.
First, the horizontal difference vector is calculated as |horizontal body region vector − horizontal face region vector| (step S330), and the vertical difference vector as |vertical body region vector − vertical face region vector| (step S331). It is then judged whether the horizontal difference vector is smaller than a preset horizontal vector threshold (step S332), and whether the vertical difference vector is smaller than a preset vertical vector threshold (step S333). When the horizontal difference vector is smaller than the horizontal vector threshold (step S332: Yes) and the vertical difference vector is smaller than the vertical vector threshold (step S333: Yes), the relative body region vector is judged to have reliability (step S334), and the relative body region vector reliability judgment processing ends. On the other hand, when the horizontal difference vector is at or above the horizontal vector threshold (step S332: No), or the vertical difference vector is at or above the vertical vector threshold (step S333: No), the relative body region vector is judged to have no reliability (step S335), and the processing ends.
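Steps S330 to S335 amount to a component-wise comparison; a sketch with assumed threshold values (the patent does not specify them):

```python
def relative_body_vector_reliable(face_vec, body_vec,
                                  h_thresh: float = 4.0,
                                  v_thresh: float = 4.0) -> bool:
    """Steps S330-S335: the body motion is trusted only if it stays
    close to the face motion in both axes.  Threshold values are
    assumptions for illustration, in pixels per frame."""
    h_diff = abs(body_vec[0] - face_vec[0])          # S330
    v_diff = abs(body_vec[1] - face_vec[1])          # S331
    return h_diff < h_thresh and v_diff < v_thresh   # S332-S335
```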
Thus, according to Embodiment 1, the face region is detected by the face detection unit 18 from the image data obtained by the imaging element 12, the body region that moves together with the face region is detected under the prediction of the body region prediction unit 19, the time-series motion of each of the detected face region and body region between frames is detected by the motion detection unit 15, and the face region in the current frame is determined from the detection results of the face detection unit 18 and the motion detection unit 15. Therefore, even when the face of the subject turns from the front to the side, the rear, or the like and the face detection unit 18 can no longer detect the face region, the position of the face region can continue to be estimated appropriately by using the results of the motion detection of the face region itself and the motion detection of the body region.
(execution mode 2)
Embodiment 2 of the present invention will now be described with reference to Figs. 12 to 19. Parts identical to those described in Embodiment 1 are denoted by the same reference numerals and their description is omitted. Fig. 12 is a schematic block diagram showing an example electrical system configuration of the image taking apparatus of Embodiment 2. The image taking apparatus 31 of Embodiment 2 uses peripheral regions, instead of the body region, as the peripheral region that moves together with the face region, and has a peripheral region setting unit 32 in place of the body region prediction unit 19.
The peripheral region setting unit 32 sets peripheral regions for the face region detected by the face detection unit 18. The position and size of the set peripheral regions are stored in the RAM 16 as information on peripheral regions that move together with the face region detected by the face detection unit 18.
The motion detection unit 15 in Embodiment 2 detects, by the pattern matching method, the time-series motion of each of the detected face region and peripheral regions between frames. That is, the motion detection unit 15 obtains motion vectors by pattern matching, using the image data of the previous frame stored in the frame memory 14 and the image data of the current frame input from the AFE 13. The ranges for which motion vectors are obtained at this time are the face region obtained by the face detection unit 18 from the image data of the previous frame and the peripheral regions (face peripheral regions) set by the peripheral region setting unit 32. The information on the motion vectors of the face region and the peripheral regions detected by the motion detection unit 15 is stored in the RAM 16.
Next, an outline of the face region determining method characteristic of Embodiment 2 will be described with reference to Figs. 13-1 to 13-3, which are explanatory diagrams showing example time-series frame images from the N-th frame to the (N+2)-th frame. Parts (a) and (b) of each figure show the same frame image: part (a) shows the face detection by the template matching module (face detection unit 18), and part (b) shows the face detection by the motion detection module (peripheral region setting unit 32 and motion detection unit 15).
First, as shown in Fig. 13-1(a), it is assumed that for the frame image of the N-th frame the face region has been detected from the image data by the template matching in the face detection unit 18. When the face detection unit 18 detects the face region, the AE operation and the like of the N-th frame image are performed with the detected face region as the target. Then, as shown in Fig. 13-1(b), the information on the detected face region is notified to the peripheral region setting unit 32. As shown by the rectangular frames in Fig. 13-1(b), the peripheral region setting unit 32 tentatively sets a plurality of peripheral regions around the face region based on the notified face region. Here, for example, eight peripheral regions of the same size as the detected face region are set around the face region. Then the motion detection unit 15 starts the pattern matching processing on the face region and its surrounding peripheral regions. At this time, the motion of the frame image as a whole is also tracked by pattern matching.
Fig. 13-2(a) shows the case where the face region is detected by the face detection unit 18 for the frame image of the (N+1)-th frame; the AE operation and the like of the (N+1)-th frame image are performed with the detected face region as the target. Then, in Fig. 13-2(b), from the results of the pattern matching performed by the motion detection unit 15 on the motion of the face region, the peripheral regions, and the overall image between the past N-th frame and the current frame, those peripheral regions that move together with the face region are extracted from the tentatively set peripheral regions and set as face sub-regions. That is, when, as shown by the arrows in Fig. 13-2(b), the detected amounts of motion show that some peripheral regions move consistently with the face region while the picture as a whole moves differently from the face region, the peripheral regions consistent with the motion of the face region are set as face sub-regions. The example of Fig. 13-2(b) shows the subject person moving by car: the three regions in the row below the face region and the one region to its right, four peripheral regions in total, move identically with the face region and are set as face sub-regions (face peripheral regions) that move together with the face region, while the remaining four peripheral regions are judged to be the background of the picture and are excluded from the motion detection targets of the motion detection unit 15.
For the frame image of the (N+2)-th frame shown in Fig. 13-3(a), suppose the face region cannot be detected by the template matching in the face detection unit 18 because of the orientation of the face or the like. At this time, the motion detection unit 15 detects the motion of the previously detected face region and of the face sub-regions among the peripheral regions by the pattern matching method. As shown in Fig. 13-3(b), when the motion of both the face region and the face sub-regions can be detected, the position of the face region in the (N+2)-th frame is calculated from the detected motion of the face region, and the AE operation and the like of the (N+2)-th frame image are performed with that face region as the target.
When the motion detection unit 15 cannot detect the motion of the face region and can detect only the motion of the face sub-regions, the position of the face region is predicted from the motion of the face sub-regions, and the position of the face region in the (N+2)-th frame is calculated. The AE operation and the like of the (N+2)-th frame image are then performed with the calculated face region as the target.
Thus, in Embodiment 2, the face region is detected by the face detection unit 18, peripheral regions are tentatively set around the detected face region, and the face sub-regions that show motion equivalent to the face region are extracted and set from among the peripheral regions. Then, the motion detection unit 15 detects, by pattern matching, the time-series motion between frames of the face region detected by the face detection unit 18 and of the face sub-regions among the peripheral regions. When the face image remains within the frame image but the face cannot be detected by the template matching of the face detection unit 18 because the orientation of the face has changed, the position of the face region in the current frame image is updated from the motion of the face region whenever that motion can be detected by the pattern matching in the motion detection unit 15. When the motion of the face region cannot be detected by the pattern matching in the motion detection unit 15, for example because the face has turned to the rear, but the motion of the face sub-regions can be detected, the motion of the face region in the current frame image is predicted from the motion of the detected face sub-regions and the position of the face region is updated. That is, face sub-regions that move together with the face region are extracted and set from among the peripheral regions, attention is paid to the fact that the motion of the face region can be estimated by tracking the motion of the face sub-regions, and the detected motion of the face sub-regions can be used effectively when the face region itself or its motion cannot be detected.
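A sketch of the tentative eight-neighbour layout and the sub-region selection of Figs. 13-1(b) and 13-2(b); the tolerance value and helper names are assumptions, and the motion-tracking callable stands in for the motion detection unit 15:

```python
from typing import Callable, List, Optional, Tuple

Region = Tuple[int, int, int, int]  # x, y, w, h
Vec = Tuple[float, float]

def eight_neighbours(face: Region) -> List[Region]:
    """Tentatively set eight peripheral regions of the same size
    around the face region (Fig. 13-1(b))."""
    x, y, w, h = face
    return [(x + dx * w, y + dy * h, w, h)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def select_sub_regions(face_vec: Vec, global_vec: Vec,
                       regions: List[Region],
                       track: Callable[[Region], Optional[Vec]],
                       tol: float = 2.0) -> List[Region]:
    """Keep only the peripheral regions whose motion matches the face
    and differs from the overall image motion (background)."""
    def close(a: Vec, b: Vec) -> bool:
        return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol
    subs = []
    for r in regions:
        v = track(r)
        if v is not None and close(v, face_vec) and not close(v, global_vec):
            subs.append(r)   # moves with the face, not with the background
    return subs
```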
The operation control examples in Embodiment 2 are described in detail below with reference to Figure 14 to Figure 19. Since the basic operation control example accompanying power-on/off of the electronic camera is the same as in the case of Fig. 4, its illustration and description are omitted.
Next, the face detection processing of Embodiment 2 at step S101 or step S106 in Fig. 4 is described with reference to Figure 14. Figure 14 is a general flowchart showing an example of the face detection processing in Embodiment 2. First, the system controller 27 causes the face detection unit 18 to perform face detection processing using a known template matching method or the like (step S400). When a face is detected by the face detection processing of the face detection unit 18 (step S401; Yes), the detected face region is stored in the RAM 16 as the face region in the current frame (step S402; face position determining unit, face position determining step). This corresponds to the examples shown in Figure 13-1(a) and Figure 13-2(a). After such face detection, the system controller 27 causes the neighboring-region setting unit 32 to execute neighboring-region setting processing (step S403). This corresponds to the examples shown in Figure 13-1(b) and Figure 13-2(b). The neighboring-region setting processing is described later.
After the neighboring-region setting processing, it is judged whether the neighboring regions have been detected (could be set) in the current frame (step S404). If they have been detected (step S404; Yes), the detected neighboring regions are stored in the RAM 16 as the neighboring regions in the current frame (step S405), and this face detection processing ends. If they have not been detected (could not be set), this face detection processing ends as it is. During the face detection processing of the face detection unit 18, as long as a face continues to be detected in subsequent frame images, steps S401; Yes through S405 are repeated in the same way. Accordingly, the information of the face region detected by the face detection unit 18 is determined as the face region in the current frame, stored in the RAM 16, and used as the object region for setting the imaging conditions.
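The detection branch of steps S400 to S405 can be summarized in the following hedged Python sketch; the plain dictionary `ram` and the two callables stand in for the face detection unit 18, the neighboring-region setting unit 32, and the RAM 16, and are assumptions made for illustration only.

```python
def face_detection_step(frame, ram, detect_face, set_neighbors):
    """One pass of the detection branch (cf. steps S400-S405); ram is a dict."""
    face = detect_face(frame)                    # template matching (S400)
    if face is None:                             # S401; No -> motion-based path
        return False
    ram["face_region"] = face                    # store face region (S402)
    neighbors = set_neighbors(frame, face)       # neighboring-region setting (S403)
    if neighbors:                                # could they be set? (S404)
        ram["neighboring_regions"] = neighbors   # store them (S405)
    return True
```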
On the other hand, when the face region cannot be detected by the face detection processing of the face detection unit 18 (step S401; No), the system controller 27 executes face region detection processing using the pattern matching of the motion detection unit 15 and determines the position of the face region. First, it is judged whether a face region for the preceding frame image is stored in the RAM 16 (step S406). This is because the processing of the motion detection unit 15 presupposes that the face detection unit 18 has once detected a face region in at least a preceding frame image and that the face region has been temporarily stored in the RAM 16. If no face region is stored in the RAM 16 (step S406; No), the processing of the motion detection unit 15 cannot be performed, so this face detection processing ends.
When there is a face region for the preceding frame image (step S406; Yes), the motion detection unit 15 performs pattern matching processing on the face region and detects the time-series motion of the face region between the preceding frame and the current frame (step S407). Next, it is judged whether neighboring regions for the preceding frame image are stored in the RAM 16 (step S408). When there are neighboring regions for the preceding frame image (step S408; Yes), the motion detection unit 15 performs pattern matching processing on the neighboring regions and detects the time-series motion of the neighboring regions between the preceding frame and the current frame (step S409). The pattern matching processing of the neighboring regions is described later. When no neighboring regions are stored in the RAM 16 (step S408; No), the processing of step S409 is skipped.
Then, from the pattern matching result of the motion detection unit 15 for the neighboring regions, it is judged whether a motion vector for the neighboring regions has been detected (step S410). When a motion vector of the neighboring regions is detected (step S410; Yes), it is judged, from the pattern matching result of the motion detection unit 15 for the face region, whether a motion vector for the face region has been detected (step S411). When a motion vector of the face region is detected (step S411; Yes), judgment processing concerning the reliability of the relative neighboring-region vector is performed based on the simultaneously detected motion detection result of the neighboring regions (step S412; reliability judgment unit). This relative neighboring-region vector reliability judgment processing is described later. Then, the face region in the current frame is calculated from the detected motion of the face region and stored in the RAM 16 (step S413; face position determining unit, face position determining step). This corresponds to Figure 13-3(b). Accordingly, when the face detection unit 18 does not detect the face region but the motion detection unit 15 detects the motion of the face region, the information of the face region calculated from the motion of the face region is determined as the face region in the current frame, stored in the RAM 16, and used as the object region for setting the imaging conditions.
On the other hand, when no motion vector of the face region is detected in step S411, the motion vector of the detected neighboring regions is regarded as the motion vector of the face region (step S413), and the face region in the current frame is calculated from this motion vector and stored in the RAM 16 (step S414; face position determining unit, face position determining step). Accordingly, when the face detection unit 18 does not detect the face region and the motion detection unit 15 does not detect the motion of the face region but does detect the motion of the neighboring regions, the information of the face region estimated and calculated from the motion of the neighboring regions is determined as the face region in the current frame. The information of the face region determined in this way is stored in the RAM 16 and used as the object region for setting the imaging conditions.
Afterwards, regarding the result of the relative neighboring-region vector reliability judgment processing, it is judged whether the reliability of the relative neighboring-region vector is at or above a prescribed level (step S415). When there is reliability at or above the prescribed level (step S415; Yes), the neighboring regions in the current frame are calculated from the detected motion of the neighboring regions and stored in the RAM 16 (step S416). When there is no reliability at or above the prescribed level (step S415; No), the system controller 27 causes the neighboring-region setting unit 32 to execute the neighboring-region setting processing (step S417) and sets the neighboring regions again. This neighboring-region setting processing is described later. After the neighboring-region setting processing, it is further judged whether the neighboring regions have been detected (could be set) in the current frame (step S418). If they have been detected (step S418; Yes), the detected neighboring regions are stored in the RAM 16 as the neighboring regions in the current frame (step S416), and this face detection processing ends. If they have not been detected (could not be set), this face detection processing ends as it is.
When no motion vector of the neighboring regions is detected in step S410, it is judged, from the pattern matching result of the motion detection unit 15 for the face region, whether a motion vector for the face region has been detected (step S419). When a motion vector of the face region is detected (step S419; Yes), the face region in the current frame is calculated from the detected motion of the face region and stored in the RAM 16 (step S420; face position determining unit, face position determining step). Accordingly, when the face detection unit 18 does not detect the face region but the motion detection unit 15 detects the motion of the face region, the information of the face region calculated from the motion of the face region is determined as the face region in the current frame, stored in the RAM 16, and used as the object region for setting the imaging conditions.
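Putting steps S406 to S420 together, the fallback branch can be sketched as follows. The callables and the dictionary-based storage are illustrative stand-ins for the motion detection unit 15, the reliability judgment, the neighboring-region setting unit 32, and the RAM 16, and the exact convergence of steps S413/S414 is an assumption of this sketch.

```python
def shift(region, vec):
    """Translate an (x, y, w, h) region by a (dx, dy) motion vector."""
    x, y, w, h = region
    return (x + vec[0], y + vec[1], w, h)

def motion_fallback(frame, ram, match_face_motion, match_neighbor_motion,
                    vector_reliable, set_neighbors):
    """Motion-based face region update when template matching fails (S406-S420)."""
    prev_face = ram.get("face_region")
    if prev_face is None:                          # S406; No: nothing to track
        return
    face_vec = match_face_motion(prev_face, frame)           # S407
    prev_nbrs = ram.get("neighboring_regions")
    nbr_vec = match_neighbor_motion(prev_nbrs, frame) if prev_nbrs else None  # S408-S409
    if nbr_vec is not None:                        # S410; Yes
        if face_vec is not None:                   # S411; Yes
            reliable = vector_reliable(face_vec, nbr_vec)    # S412
        else:                                      # S411; No
            face_vec, reliable = nbr_vec, True     # neighbor motion stands in (S413)
        ram["face_region"] = shift(prev_face, face_vec)      # S413/S414: update face
        if reliable:                               # S415; Yes
            ram["neighboring_regions"] = [shift(r, nbr_vec) for r in prev_nbrs]  # S416
        else:                                      # S415; No: re-set the regions
            nbrs = set_neighbors(frame, ram["face_region"])  # S417
            if nbrs:                               # S418; Yes
                ram["neighboring_regions"] = nbrs
    elif face_vec is not None:                     # S410; No -> S419; Yes
        ram["face_region"] = shift(prev_face, face_vec)      # S420
```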
Next, the neighboring-region setting processing of steps S403 and S417 is described. Figure 15 is a general flowchart showing an example of the neighboring-region prediction setting processing executed by the neighboring-region setting unit 32 under the control of the system controller 27. First, based on the face region information, the neighboring width is calculated as face width × neighboring width coefficient (step S500). In this case, the neighboring width coefficient is set to 1 when the face width is at or below a given size; when the face width is greater than the given size, the coefficient is set smaller as the face width becomes larger. Then, based on the face region information, the neighboring height is calculated as face height × neighboring height coefficient (step S501). The neighboring regions are calculated from these calculation results (step S502). Then, it is judged whether all of the calculated neighboring regions fall within the range of the photographing angle of view (step S503). When they fall within the range of the photographing angle of view (step S503; Yes), the neighboring-region prediction setting processing ends.
On the other hand, when they do not fall within the range of the photographing angle of view (step S503; No), it is judged whether a prescribed threshold percentage of each such neighboring region falls within the range of the photographing angle of view (step S504). When it falls within the range of the photographing angle of view (step S504; Yes), the calculated neighboring region is updated (step S505), and the neighboring-region prediction setting processing ends.
When it does not fall within the range of the photographing angle of view (step S504; No), the neighboring region that does not fall within the range is removed (step S506), and the neighboring-region prediction setting processing ends. In this case, if no region at all falls within the range of the photographing angle of view, the neighboring regions are treated as not detected, and the result of the next neighboring-region detection judgment (steps S404, S418) is No.
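The geometry of this setting processing can be illustrated with the sketch below; the base size of 80 pixels and the 50% visibility threshold are assumed values, since the description only specifies a coefficient of 1 up to a given size and a prescribed threshold percentage.

```python
def width_coeff(face_w, base=80.0):
    # Coefficient is 1 up to a given face size, then shrinks as the face grows;
    # the reciprocal schedule is one plausible choice, not mandated by the text.
    return 1.0 if face_w <= base else base / face_w

def visible_fraction(region, view_w, view_h):
    """Fraction of an (x, y, w, h) region inside the photographing angle of view."""
    x, y, w, h = region
    vis_w = max(0, min(x + w, view_w) - max(x, 0))
    vis_h = max(0, min(y + h, view_h) - max(y, 0))
    return (vis_w * vis_h) / float(w * h)

def clip(region, view_w, view_h):
    """Shrink a region so that it fits inside the angle of view."""
    x, y, w, h = region
    nx, ny = max(x, 0), max(y, 0)
    return (nx, ny, min(x + w, view_w) - nx, min(y + h, view_h) - ny)

def select_neighbors(candidates, view_w, view_h, threshold=0.5):
    kept = []
    for r in candidates:
        frac = visible_fraction(r, view_w, view_h)
        if frac >= 1.0:                      # fully inside the angle of view (S503)
            kept.append(r)
        elif frac >= threshold:              # mostly inside: resize to fit (S504-S505)
            kept.append(clip(r, view_w, view_h))
        # else: drop this region entirely (S506)
    return kept
```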
Here, examples of the neighboring-region prediction setting shown in Figure 15 are described with reference to Figure 16-1 to Figure 16-4. Figure 16-1 shows an example of a frame image in which all 8 neighboring regions fall within the range of the photographing angle of view. In this case, the motion detection unit 15 performs pattern matching on all 8 neighboring regions and the face region. That is, all 8 neighboring regions are set as effective neighboring regions.
Figure 16-2 shows an example of a frame image in which some of the 8 neighboring regions do not fall within the range of the photographing angle of view. In the illustrated example, the 3 neighboring regions of the bottom row do not fall entirely within the range of the photographing angle of view, but, for example, 50% or more of each falls within the range; therefore the sizes of the 3 neighboring regions of the bottom row are changed so that they fall within the range of the photographing angle of view. Then, all 8 neighboring regions are set as effective neighboring regions, and the motion detection unit 15 performs pattern matching on all 8 neighboring regions and the face region.
Figure 16-3 likewise shows an example of a frame image in which some of the 8 neighboring regions do not fall within the range of the photographing angle of view. In the illustrated example, the 3 neighboring regions of the bottom row do not fall within the range of the photographing angle of view, and, for example, less than 50% of each falls within the range; therefore the 3 neighboring regions of the bottom row are excluded from the neighboring regions. Then, the remaining 5 neighboring regions are set as effective neighboring regions, and the motion detection unit 15 performs pattern matching on the 5 neighboring regions and the face region.
Figure 16-4 shows an example of a frame image in which the face region is larger than a given size. In this case, the size of each neighboring region is set smaller than the face region (for example, 1/4 of it), and the number of neighboring regions that fall within the range of the photographing angle of view is increased accordingly. The illustrated example shows the number of neighboring regions increased from the standard 8 to 12.
Next, the face region pattern matching processing of step S407 and the neighboring-region pattern matching processing of step S409 between the preceding frame and the current frame are described. Here, in order to improve the precision of the matching processing, the pattern matching is performed with the face region and the neighboring regions each divided into a plurality of macroblocks (divided regions). Figure 17-1 is a schematic diagram showing an example of the macroblock division of the face region and the neighboring regions when the face region is small: the face region and each neighboring region are each divided into 4 macroblocks. Figure 17-2 is a schematic diagram showing an example of the macroblock division when the face region is large: the face region and each neighboring region are each divided into 20 macroblocks. The macroblock sizes of the face region and the neighboring regions need not all be identical; they may differ, and a suitable division count (number) can be set. Since the face region pattern matching processing of step S407 is the same as in the case of Fig. 9, its illustration and description are omitted.
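A region can be partitioned as in the following sketch; the 2×2 and 4×5 grids mirror the 4- and 20-macroblock examples of Figures 17-1 and 17-2, and the row-major layout is an illustrative choice.

```python
def divide_into_macroblocks(region, cols, rows):
    """Split an (x, y, w, h) region into cols*rows macroblocks, row-major."""
    x, y, w, h = region
    bw, bh = w // cols, h // rows
    return [(x + c * bw, y + r * bh, bw, bh)
            for r in range(rows) for c in range(cols)]

small_face = divide_into_macroblocks((100, 80, 40, 40), 2, 2)   # 4 macroblocks
large_face = divide_into_macroblocks((60, 40, 200, 250), 4, 5)  # 20 macroblocks
```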
Figure 18 is a general flowchart showing an example of the neighboring-region pattern matching processing executed by the motion detection unit 15 under the control of the system controller 27. This pattern matching processing is based on detecting, as the time-series motion of a neighboring region, the positional change amount from the changes of the plurality of macroblocks whose positions have changed within each region.
First, when the neighboring-region pattern matching processing starts, the size/number of macroblocks is decided for the neighboring regions (step S520). Then, a correlation operation is performed for each macroblock (step S521), and the motion vector of each macroblock is calculated by comprehensively judging the correlation results (step S522). When performing the correlation operation of each macroblock, a search range wider than the block in the preceding frame is set in the current frame, and the operation is performed by judging where each pixel has moved. Then, the reliability of the calculated motion vector is judged for each macroblock (step S523). The reliability judgment is performed based, for example, on whether the directions of the calculated motion vectors agree, on the magnitude of the correlation value, and so on. Then, the motion vector of each neighboring region is calculated from the macroblocks that have reliability (step S524). The motion vector in this case can be obtained by, for example, taking the mean value of the motion vectors or adopting the motion vector of the highest frequency. Then, with respect to the calculated motion vector, the reliability of the neighboring region as a whole is judged (step S525). When the neighboring region as a whole has reliability (step S526; Yes), the motion vector of the neighboring region obtained in step S524 is regarded as valid, and the neighboring-region pattern matching processing ends. When the neighboring region as a whole has no reliability (step S526; No), the motion vector of the neighboring region obtained in step S524 is discarded (step S527), and the neighboring-region pattern matching processing ends.
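Steps S522 to S527 amount to aggregating the per-macroblock vectors into one region vector while discarding outliers. The sketch below uses the most frequent vector as the agreement reference and the mean of the agreeing vectors as the result; the spread and majority thresholds are assumed values, and the text also allows adopting the most frequent vector instead of the mean.

```python
from collections import Counter

def region_motion_vector(block_vectors, max_spread=2):
    """Aggregate per-macroblock (dx, dy) vectors into one region vector (S522-S527)."""
    if not block_vectors:
        return None
    # Per-block reliability: keep vectors close to the most frequent one (S523).
    mode = Counter(block_vectors).most_common(1)[0][0]
    reliable = [v for v in block_vectors
                if abs(v[0] - mode[0]) <= max_spread and abs(v[1] - mode[1]) <= max_spread]
    # Whole-region reliability: require a majority of agreeing blocks (S525-S527).
    if len(reliable) * 2 < len(block_vectors):
        return None
    dx = sum(v[0] for v in reliable) / len(reliable)  # mean vector (S524)
    dy = sum(v[1] for v in reliable) / len(reliable)
    return (dx, dy)

print(region_motion_vector([(5, 1), (5, 1), (4, 1), (6, 2)]))  # ~(5.0, 1.25)
```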
Next, the relative neighboring-region vector reliability judgment processing of step S412 is described. Figure 19 is a general flowchart showing an example of the relative neighboring-region vector reliability judgment processing executed by the system controller 27. This processing is such that, when both the motion vector of the face region and the motion vector of the neighboring regions have been obtained, the motion is separated into horizontal and vertical components, and the reliability is judged according to whether the motions in the respective directions are homogeneous.
First, the horizontal difference vector is calculated as |horizontal neighboring-region vector − horizontal face region vector| (step S530), and the vertical difference vector is calculated as |vertical neighboring-region vector − vertical face region vector| (step S531). Then, it is judged whether the horizontal difference vector is smaller than a predetermined horizontal vector threshold (step S532), and whether the vertical difference vector is smaller than a predetermined vertical vector threshold (step S533). When the horizontal difference vector is smaller than the horizontal vector threshold (step S532; Yes) and the vertical difference vector is smaller than the vertical vector threshold (step S533; Yes), the relative neighboring-region vector is judged to have reliability (step S534), and the relative neighboring-region vector reliability judgment processing ends. On the other hand, when the horizontal difference vector is at or above the horizontal vector threshold (step S532; No) or the vertical difference vector is at or above the vertical vector threshold (step S533; No), the relative neighboring-region vector is judged to have no reliability (step S535), and the relative neighboring-region vector reliability judgment processing ends.
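This judgment reduces to a two-threshold predicate, sketched below; the threshold values of 4 pixels are illustrative assumptions, the text calling them only predetermined.

```python
def relative_vector_reliable(face_vec, nbr_vec, h_thresh=4.0, v_thresh=4.0):
    """Reliability of the relative neighboring-region vector (cf. Figure 19)."""
    h_diff = abs(nbr_vec[0] - face_vec[0])            # S530
    v_diff = abs(nbr_vec[1] - face_vec[1])            # S531
    return h_diff < h_thresh and v_diff < v_thresh    # S532-S535

print(relative_vector_reliable((5, 1), (6, 2)))    # True: motions agree
print(relative_vector_reliable((5, 1), (-8, 1)))   # False: horizontal mismatch
```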
As described above, according to Embodiment 2, the face detection unit 18 detects the face region from the image data obtained by the image sensor 12, the neighboring regions that move equivalently with the face region are detected under the setting of the neighboring-region setting unit 32, the motion detection unit 15 detects the time-series motion of the detected face region and of each neighboring region between frames, and the face region in the current frame is determined from the detection result of the face detection unit 18 and the detection result of the motion detection unit 15. Therefore, even when the face of the subject turns from the front to the side or to the back and the face region cannot be detected by the face detection unit 18, the position of the face region can be suitably and continuously estimated by using the result of the motion detection of the face region itself and the motion detection of the neighboring regions.
Claims (10)
1. An image taking apparatus that determines a face region of a subject in motion, characterized in that the image taking apparatus comprises:
an image pickup unit that receives subject light, performs photoelectric conversion, and obtains image data in units of frames;
a face detection unit that detects a region where a face exists from the obtained image data;
a face periphery detection unit that detects neighboring regions of the detected face region from the obtained image data;
a motion detection unit that detects the time-series motion of each of the detected face region and the neighboring regions between image frames; and
a face position determining unit that determines the face region of the current image frame according to the detection result of the face detection unit and the detection result of the motion detection unit.
2. The image taking apparatus according to claim 1, characterized in that the image taking apparatus has an imaging condition setting unit that sets imaging conditions according to the image data of the determined face region.
3. The image taking apparatus according to claim 1, characterized in that the face periphery detection unit detects, as the neighboring regions, a body region that is predicted to move equivalently with the motion of the detected face region.
4. The image taking apparatus according to claim 1, characterized in that the face periphery detection unit detects, as the neighboring regions, regions that are set so as to move equivalently with the motion of the detected face region.
5. The image taking apparatus according to claim 1, characterized in that the neighboring regions detected by the face periphery detection unit are set according to the position of the face region within the camera picture or the size of the face region.
6. The image taking apparatus according to claim 1, characterized in that the motion detection unit divides each of the face region and the neighboring regions into a plurality of divided regions, and detects, as the time-series motion, a positional change amount based on the changes of those divided regions whose positions have changed within each region.
7. The image taking apparatus according to claim 1, characterized in that, when the face detection unit detects the face region in the current image frame, the face position determining unit determines the face region in the current image frame according to the detection result of the face detection unit, and when the face detection unit does not detect the face region in the current image frame and the motion detection unit detects the motion of the face region and of the neighboring regions between image frames, the face position determining unit determines the face region in the current image frame as the face position according at least to the detection result of the motion detection unit for the motion of the face region.
8. The image taking apparatus according to claim 7, characterized in that the image taking apparatus has a reliability judgment unit that judges the reliability of the relative motion between the detection result of the motion detection unit for the motion of the face region and its detection result for the motion of the neighboring regions,
and when the reliability of the relative motion is judged not to be at or above a prescribed level, the neighboring regions in the current image frame are set again.
9. The image taking apparatus according to claim 1, characterized in that, when the face detection unit detects the face region in the current image frame, the face position determining unit determines the face region in the current image frame according to the detection result of the face detection unit, and when the face detection unit does not detect the face region in the current image frame, the motion detection unit does not detect the motion of the face region between image frames, and the motion detection unit detects the motion of the neighboring regions between image frames, the face position determining unit estimates the face region in the current image frame according to the detection result of the motion detection unit for the motion of the neighboring regions and determines it as the face position.
10. A face region determining method that determines a face region of a photographed subject in motion, characterized in that the face region determining method comprises:
an image pickup step of receiving subject light by an image pickup unit, performing photoelectric conversion, and obtaining image data in units of frames;
a face detection step of detecting a region where a face exists from the obtained image data;
a face periphery detection step of detecting neighboring regions of the detected face region from the obtained image data;
a motion detection step of detecting the time-series motion of each of the detected face region and the neighboring regions between image frames; and
a face position determining step of determining the face region of the current image frame according to the detection result of the face detection step and the detection result of the motion detection step.
Applications Claiming Priority (3)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2007249969 | 2007-09-26 | | |
| JP2007-249969 | 2007-09-26 | | |
| JP2007249969A (JP2009081714A) | 2007-09-26 | 2007-09-26 | Imaging device and face region determination method thereof |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN101399915A | 2009-04-01 |
| CN101399915B | 2011-06-29 |
Family
ID=40518142
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2008101680191A (granted as CN101399915B; Expired - Fee Related) | 2008-09-25 | 2008-09-25 | Image taking apparatus and face region determining method in image taking apparatus |
Country Status (2)

| Country | Link |
|---|---|
| JP (1) | JP2009081714A (en) |
| CN (1) | CN101399915B (en) |
Families Citing this family (12)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5814557B2 (en) | 2011-02-07 | 2015-11-17 | Canon Inc. | Image display control device, imaging device, display control method, and control program |
| JP5829679B2 (en) | 2011-04-18 | 2015-12-09 | Panasonic Intellectual Property Corporation of America | Imaging device, focusing control method of imaging device, and integrated circuit |
| CN102831430B * | 2011-06-14 | 2015-02-04 | Altek Corporation | Method for predicting photographing time point and device adopting same |
| WO2013121711A1 * | 2012-02-15 | 2013-08-22 | NEC Corporation | Analysis processing device |
| WO2013121713A1 * | 2012-02-15 | 2013-08-22 | NEC Corporation | Analysis processing device |
| KR101257207B1 * | 2012-02-23 | 2013-04-22 | Intel Corporation | Method, apparatus and computer-readable recording medium for head tracking |
| JP5959923B2 (en) | 2012-04-26 | 2016-08-02 | Canon Inc. | Detection device, control method thereof, control program, imaging device and display device |
| JP5963525B2 (en) * | 2012-04-27 | 2016-08-03 | Canon Inc. | Recognition device, control method thereof, control program, imaging device and display device |
| JP6033044B2 (en) * | 2012-11-06 | 2016-11-30 | Canon Inc. | Image display apparatus, control method thereof, control program, and imaging apparatus |
| JP6188452B2 (en) * | 2013-06-28 | 2017-08-30 | Canon Inc. | Image processing apparatus, image processing method, and program |
| JP7360304B2 (en) * | 2019-11-08 | 2023-10-12 | Denso Ten Ltd. | Image processing device and image processing method |
| WO2021090943A1 * | 2019-11-08 | 2021-05-14 | Denso Ten Ltd. | Image processing device and image processing method |
- 2007-09-26: JP application JP2007249969A filed, published as JP2009081714A (status: active, pending)
- 2008-09-25: CN application CN2008101680191A filed, granted as CN101399915B (status: not active, Expired - Fee Related)
Also Published As

| Publication number | Publication date |
|---|---|
| JP2009081714A (en) | 2009-04-16 |
| CN101399915A (en) | 2009-04-01 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | C41 | Transfer of patent application or patent right or utility model | |
| 2015-11-18 | TR01 | Transfer of patent right | Patentee changed from Olympus Imaging Corp. (Tokyo, Japan) to Olympus Corporation (Tokyo, Japan) |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2011-06-29; termination date: 2020-09-25 |