Embodiment
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
(First Embodiment)
Fig. 1 is a block diagram showing the circuit configuration of an imaging apparatus according to the first embodiment of the present invention.
In Fig. 1, an imaging apparatus 10 has an imaging unit 2 composed of an imaging element such as a CCD or CMOS sensor. The imaging unit 2 is configured so that its aperture, focus, zoom, and the like can be controlled by a signal processing and control unit 1, and it can perform shooting suited to various compositions and subjects.
The imaging unit 2 is driven and controlled by the signal processing and control unit 1; it shoots a subject and outputs a picked-up image. The signal processing and control unit 1 outputs a drive signal for the imaging element to the imaging unit 2 and reads the picked-up image from the imaging unit 2. This acquisition of the picked-up image is performed by a reading unit 1a. The reading unit 1a supplies the picked-up image read from the imaging unit 2 to a temporary recording unit 3.
The temporary recording unit 3 has a capacity for holding the picked-up images from the imaging unit 2 for a predetermined period, and stores and retains the picked-up images (movies and still images) read from the imaging unit 2.
The signal processing and control unit 1 has two systems of image processing, namely image processing units 1b and 1c, which read the picked-up images recorded in the temporary recording unit 3 and perform image processing. The image processing units 1b and 1c apply predetermined signal processing to an input image, for example color signal generation processing, matrix conversion processing, and various other kinds of signal processing. The image processing units 1b and 1c are also configured to perform various kinds of image processing such as resizing of the input image and generation of an image of a partial area of the input image (hereinafter called a partial image). Hereinafter, an image of the entire area of the input image is called an overall image, regardless of whether resizing has been applied. That is, relative to the overall image, the partial image is an image with a narrower angle of view, i.e., a close-up image.
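The relationship between an overall image and a partial image can be sketched as a simple rectangular crop. The following is a minimal illustration, assuming the image is a row-major list of pixel rows and the partial area is an axis-aligned rectangle; the function name and representation are hypothetical, not part of the embodiment:

```python
def crop_partial_image(image, x, y, width, height):
    """Cut the rectangle (x, y, width, height) out of a row-major image.

    The result covers a narrower angle of view than the overall image,
    i.e. it is a close-up of the designated partial area.
    """
    return [row[x:x + width] for row in image[y:y + height]]

# A 4x4 "overall image" whose pixels encode their own coordinates.
overall = [[(x, y) for x in range(4)] for y in range(4)]
partial = crop_partial_image(overall, x=1, y=2, width=2, height=2)
```

In the actual apparatus the crop would be followed by resizing for display or compression; the sketch only shows the area extraction itself.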
When the user has performed an operation of designating a partial area of the input image in order to generate a partial image (hereinafter, a partial designation operation; the area is hereinafter called a designated partial area), or when image analysis of the picked-up image detects an area judged to contain a specific subject such as a person (hereinafter called a specific partial area), the image information of the designated partial area or the specific partial area (these areas are hereinafter collectively called partial areas) is supplied from the image processing unit 1b to a feature determination unit 1d. The feature determination unit 1d determines feature quantities of a target subject such as a person's face, and outputs the determination result, as information on the target subject detected in the partial area, to a tracking change determination unit 1e1 and an other-candidate determination unit 1e2. The feature determination unit 1d also supplies the feature quantity information of each subject to a partial image feature storage unit 3a for storage.
The tracking change determination unit 1e1 reads the feature quantities of the target subject from the partial image feature storage unit 3a, tracks in the successively input images the area containing the portion whose feature quantities match those of the target subject, and outputs the tracking result to the image processing unit 1c. Thus, the image processing unit 1c can continuously track the image portion of the partial area containing the target subject while generating that image as the partial image.
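The matching step of such tracking can be sketched as a nearest-neighbour search over candidate areas in the current frame, each described by a feature vector compared against the stored features of the target subject. This is an illustrative simplification under assumed names (`track`, Euclidean distance, a fixed threshold); the embodiment does not specify the matching algorithm:

```python
def track(target_features, candidate_areas, threshold=1.0):
    """Return the candidate area whose features best match the target.

    candidate_areas: list of (area, feature_vector) pairs, e.g. areas
    proposed by face detection in the current frame.  Returns None when
    no candidate is close enough, i.e. tracking has failed.
    """
    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    best = min(candidate_areas,
               key=lambda c: distance(target_features, c[1]),
               default=None)
    if best is None or distance(target_features, best[1]) > threshold:
        return None
    return best[0]

stored = [0.2, 0.8, 0.5]                    # features read from storage 3a
frame_candidates = [((10, 10, 32, 32), [0.9, 0.1, 0.4]),
                    ((60, 20, 32, 32), [0.25, 0.75, 0.5])]
found = track(stored, frame_candidates)     # → (60, 20, 32, 32)
```

Returning `None` models the "cannot track" condition that, in the embodiment, triggers the inquiry to the other-candidate determination unit 1e2.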
Furthermore, in the present embodiment, an other-candidate determination unit 1e2 is provided. The other-candidate determination unit 1e2 exchanges information with the feature determination unit 1d and the tracking change determination unit 1e1 to determine other candidates. When the partial image of a partial area cannot be tracked, the other-candidate determination unit 1e2 provides information on the subjects in one or more other partial areas, selects one of those subjects as the target subject, and supplies the selection result to the tracking change determination unit 1e1. In accordance with this selection result, the tracking change determination unit 1e1 reads the feature quantities of the new target subject from the partial image feature storage unit 3a, performs tracking processing, and outputs the tracking result to the image processing unit 1c.
Thus, even when, for example, the partial image based on the partial designation operation can no longer be tracked, the image processing unit 1c can still generate a partial image of a specific partial area. In the present embodiment, moreover, even while the tracking processing of the tracking change determination unit 1e1 is operating, tracking is judged to be impossible when, for example, most of the partial image has disappeared from the partial area.
The movies and still images from the image processing units 1b and 1c are supplied to an image selection unit 1f. The image selection unit 1f supplies the selected input movies and still images to a display control unit 1g, and also to an S compression unit 1h and M compression units 1i and 1j.
The display control unit 1g performs display processing for supplying the input movies and still images to a display unit 5 for display. The display unit 5 is composed of an LCD or the like and displays the movies and still images supplied from the display control unit 1g.
Meanwhile, the S compression unit 1h compresses the input still image and supplies it to a recording control unit 1k, and the M compression units 1i and 1j compress the input movie and supply it to the recording control unit 1k. The recording control unit 1k supplies the compressed movie and the compressed still image to a recording unit 4 for recording. The recording unit 4 is controlled by the recording control unit 1k and records the input compressed movie and compressed still image. As the recording unit 4, a card interface, for example, can be adopted; the recording unit 4 can record image information, sound information, and the like on a recording medium such as a memory card.
The imaging apparatus 10 is also provided with an operation unit 6. The operation unit 6 has various switches and buttons for shooting mode setting and the like, generates operation signals based on user operations, and supplies them to the signal processing and control unit 1. For example, Fig. 1 shows a movie shooting operation unit 6a and a still image shooting operation unit 6b as concrete examples of the operation unit 6. The movie shooting operation unit 6a is used to instruct movie shooting; when it is operated, an operation signal for starting movie shooting is supplied to the signal processing and control unit 1. Likewise, the still image shooting operation unit 6b is used to instruct still image shooting; when it is operated, an operation signal for starting still image shooting is supplied to the signal processing and control unit 1. The signal processing and control unit 1 controls each unit in accordance with these operation signals.
A touch panel can also be adopted as the operation unit 6. For example, a touch panel serving as the operation unit 6 can be provided on the display screen of the display unit 5 and generate an operation signal corresponding to the position on the display screen that the user indicates with a finger. This allows the user, by a simple operation, to designate a predetermined area in the image displayed on the display screen of the display unit 5 as the designated partial area.
By controlling each unit, the signal processing and control unit 1 sets a shooting mode based on the user's operation and realizes the camera functions corresponding to each shooting mode. In this case, the signal processing and control unit 1 can change the shooting mode merely through operation of the movie shooting operation unit 6a and the still image shooting operation unit 6b.
In the present embodiment, the other-candidate determination unit 1e2 can register a plurality of target subjects in advance through the user's partial designation operations on the operation unit 6, and can assign them priority orders. By supplying the priority order information of each target subject to the tracking change determination unit 1e1, the other-candidate determination unit 1e2 can cause the tracking change determination unit 1e1 to track the subject with the highest priority order among the subjects that can be tracked.
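The registration of candidates with priority orders can be modeled as a small registry that, given the set of currently trackable subjects, returns the highest-priority one. This is a sketch under assumed names (`CandidateRegistry`, `best`), standing in for the bookkeeping of the other-candidate determination unit 1e2:

```python
class CandidateRegistry:
    """Registered target subjects with user-assigned priority orders."""

    def __init__(self):
        self._priority = {}   # subject name -> priority order (1 = highest)

    def register(self, subject, priority):
        self._priority[subject] = priority

    def best(self, trackable, exclude=()):
        """Highest-priority registered subject that is currently trackable."""
        usable = [s for s in trackable
                  if s in self._priority and s not in exclude]
        return min(usable, key=lambda s: self._priority[s], default=None)

reg = CandidateRegistry()
reg.register("B", 3)   # priority orders as set by the operations of Fig. 7
reg.register("A", 1)
reg.register("C", 2)
```

The `exclude` argument models removing the subject whose tracking just failed before choosing the next candidate.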
Next, the operation of the embodiment configured as described above will be explained with reference to Figs. 2 to 10.
First, with reference to Figs. 2 to 5, the shooting modes of the imaging apparatus of the present embodiment, namely photo-in-movie, review photo, and multi-frame, will be described. Figs. 2 to 4 are explanatory diagrams showing the operations of the photo-in-movie, review photo, and multi-frame modes, respectively. Fig. 5 is an explanatory diagram showing examples of an input image and a partial image. In Figs. 2 to 4, each frame of the movie output 11a, which the reading unit 1a reads from the image of the imaging unit 2, is represented by a series of rectangular frames.
In the photo-in-movie mode shown in Fig. 2, a movie 11c obtained by resizing each frame of the movie output 11a is supplied to the M compression units 1i and 1j. The M compression units 1i and 1j compress the input movie 11c to obtain a compressed movie. This compressed movie is recorded as the movie of the photo-in-movie mode.
Also in the photo-in-movie mode, when the user operates the still image shooting operation unit 6b during movie shooting, the still image 11b of the frame in the movie output 11a corresponding to the still image shooting operation is supplied to the S compression unit 1h. The S compression unit 1h compresses the input still image 11b to obtain a compressed still image. This compressed still image is recorded as the still image of the photo-in-movie mode.
In the review photo mode shown in Fig. 3, each frame of the movie output 11a is temporarily recorded. In the review photo mode, when the user operates the still image shooting operation unit 6b during movie shooting, the still image 11b of the frame in the movie output 11a corresponding to the still image shooting operation is supplied to the S compression unit 1h. The S compression unit 1h compresses the input still image 11b to obtain a compressed still image. This compressed still image is recorded as the still image of the review photo mode.
Meanwhile, the frames before and after the still image 11b are resized and then supplied to the M compression units 1i and 1j as a review movie 11d associated with the still image 11b. The M compression units 1i and 1j compress the input review movie 11d to obtain a compressed movie. This compressed movie is recorded as the review movie of the review photo mode.
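Collecting the frames before and after the shutter operation relies on the temporary recording: a bounded buffer retains the most recent frames, and after the shutter it keeps accepting frames until enough trailing ones have arrived. The following is an illustrative model only; buffer sizes and names (`ReviewBuffer`, `shutter`) are assumptions, not taken from the embodiment:

```python
from collections import deque

class ReviewBuffer:
    """Temporary recording that yields the frames around a still capture."""

    def __init__(self, before=3, after=3):
        self._pre = deque(maxlen=before)   # most recent frames, pre-shutter
        self._post, self._after = [], after
        self.pending = False

    def push(self, frame):
        if self.pending and len(self._post) < self._after:
            self._post.append(frame)       # trailing frames after the shutter
        elif not self.pending:
            self._pre.append(frame)        # ring of frames before the shutter

    def shutter(self):
        self.pending = True

    def review_movie(self):
        return list(self._pre) + self._post

buf = ReviewBuffer(before=2, after=2)
for f in ["f1", "f2", "f3"]:
    buf.push(f)
buf.shutter()                  # still image taken at frame f3
for f in ["f4", "f5", "f6"]:
    buf.push(f)
```

Here the review movie associated with the still would be the frames `f2`–`f5`; in the apparatus these frames would additionally be resized before compression.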
In the multi-frame mode shown in Fig. 4, an overall movie 11e obtained by resizing each frame of the movie output 11a is supplied to the M compression units 1i and 1j. The M compression units 1i and 1j compress the input overall movie 11e to obtain a compressed movie. This compressed movie is recorded as the overall movie of the multi-frame mode.
Fig. 5(a) shows an input image. In the example of Fig. 5, three persons A to C appear in the input image. In the multi-frame mode, the user performs a partial designation operation designating the designated partial area containing the face of person A, shown by the dashed area in Fig. 5(a). The image of the designated partial area shown in Fig. 5(b) (the partial image) is then extracted from the input image. Then, while the partial image is tracked in the input images using the feature determination unit 1d and the tracking change determination unit 1e1, the partial image of person A is extracted from each frame. A partial movie 11f composed of the partial images of the respective frames of the movie output 11a is supplied to the M compression units 1i and 1j. The M compression units 1i and 1j compress the input partial movie 11f to obtain a compressed movie. This compressed movie is recorded as the partial movie of the multi-frame mode.
Next, camera shooting control will be explained with reference to the flowchart of Fig. 6.
When the power of the imaging apparatus 10 is turned on by operating the operation unit 6, the signal processing and control unit 1 starts live view display and temporary recording in step S1. That is, the signal processing and control unit 1 drives the imaging unit 2 to shoot the subject. The reading unit 1a reads picked-up images from the imaging unit 2 and successively records them in the temporary recording unit 3. The temporary recording unit 3 successively records the picked-up images of a predetermined period.
The signal processing and control unit 1 successively reads the picked-up images recorded in the temporary recording unit 3 and supplies them to the image processing unit 1b or 1c. After applying predetermined image signal processing to the picked-up image, the image processing unit 1b or 1c supplies it to the display control unit 1g via the image selection unit 1f. The display control unit 1g supplies the input image to the display unit 5 for display. Thus, the live view image is displayed on the display screen of the display unit 5.
In the present embodiment, priority orders are designated in step S19 described below. Fig. 7 is an explanatory diagram for explaining the designation of priority orders. During movie shooting, particularly in the multi-frame mode, the other-candidate determination unit 1e2 determines priority orders for the target subjects to be tracked as partial images. For example, the other-candidate determination unit 1e2 controls the display control unit 1g to cause the display unit 5 to display target subject candidates. Fig. 7(a) shows this display: the display screen 5a of the display unit 5 is provided on the back of the main body 21 of the imaging apparatus 10, a photographing lens 22 is provided on the front surface, and a shutter button 23 is provided on the top surface. A target subject candidate display 24 is shown at the lower side of the display screen 5a. In the example of Fig. 7(a), persons B, A, and C are shown arranged in order from left to right as target subject candidates.
The target subjects may be designated by the user through a partial designation operation of the operation unit 6 on the live-view picked-up image, or may be detected automatically from the picked-up image by the feature determination unit 1d based on the feature quantities of, for example, a preset person's face.
With the target subject candidate display 24 shown, the user touches the image portion of a target subject candidate with a finger 25. Fig. 7(a) shows the state in which the image portion of person B is touched. The operation unit 6 constituting the touch panel outputs an operation signal indicating that the user has designated person B to the other-candidate determination unit 1e2. Upon this operation, the other-candidate determination unit 1e2 causes a selection display 26, indicating that person B has been selected, to appear on the target subject candidate display 24, and displays a priority order display 27 at the upper side of the display screen 5a. Fig. 7(b) shows the priority orders "1", "2", and "3" being displayed. In this state, the user touches the image portion of priority order "3" in the priority order display 27 with the finger 25. This operation signal is supplied to the other-candidate determination unit 1e2, which sets the priority order of person B to third. Thereafter, by repeating the operation of touching the image portion of each person in the target subject candidate display 24 and the operation of touching the image portion of each number in the priority order display 27, the user sets priority orders for the target subject candidates.
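The two-touch interaction of Fig. 7 (touch a candidate, then touch a priority number) can be sketched as a small state holder. All names are illustrative, and the choice that a re-used number is taken away from its previous holder is an assumption of this sketch, not something the embodiment specifies:

```python
class PrioritySetter:
    """Two-touch priority assignment: touch a candidate, then a number."""

    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.selected = None       # corresponds to selection display 26
        self.priority = {}         # subject -> assigned priority order

    def touch_candidate(self, name):
        if name in self.candidates:
            self.selected = name

    def touch_priority(self, order):
        if self.selected is not None:
            # Assumption: a number touched again is reassigned to the
            # newly selected subject.
            self.priority = {s: p for s, p in self.priority.items()
                             if p != order}
            self.priority[self.selected] = order
            self.selected = None

ui = PrioritySetter(["B", "A", "C"])
ui.touch_candidate("B"); ui.touch_priority(3)   # the sequence of Fig. 7
ui.touch_candidate("A"); ui.touch_priority(1)
ui.touch_candidate("C"); ui.touch_priority(2)
```

After these touches the registry holds B→3, A→1, C→2, matching the example described above.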
Next, the signal processing and control unit 1 determines in step S2 whether the still image shooting operation unit 6b has been operated. It further determines in step S5 whether the movie shooting operation unit 6a has been operated.
If the signal processing and control unit 1 judges in step S2 that the still image shooting operation unit 6b has been operated, it sets the review photo mode; if it judges in step S5 that the movie shooting operation unit 6a has been operated, it sets the photo-in-movie mode.
When the still image shooting operation unit 6b is operated in step S2, the signal processing and control unit 1 performs the still image shooting of the review photo mode in step S3 and records the review movie of the review photo mode in step S4. That is, the signal processing and control unit 1 reads from the temporary recording unit 3 the picked-up image (still image) acquired by the reading unit 1a at the moment the still image shooting operation unit 6b was operated, and supplies it to the image processing unit 1b or 1c. The image processing unit 1b or 1c applies image processing to the input still image and supplies it to the S compression unit 1h via the image selection unit 1f. The S compression unit 1h compresses the input still image, and the recording control unit 1k supplies the compressed still image to the recording unit 4 for recording (step S3).
The signal processing and control unit 1 also reads the movie recorded before and after the still image read from the temporary recording unit 3 and supplies it to the image processing unit 1b or 1c. The image processing unit 1b or 1c resizes the input movie and supplies the review movie to the M compression unit 1i or 1j via the image selection unit 1f. The M compression unit 1i or 1j compresses the input review movie, and the recording control unit 1k supplies the compressed review movie to the recording unit 4 for recording (step S4).
When the movie shooting operation unit 6a is operated in step S5, the signal processing and control unit 1 performs the movie shooting of the photo-in-movie mode in step S6. That is, the signal processing and control unit 1 successively reads the movie recorded in the temporary recording unit 3 and supplies it to the image processing unit 1b or 1c. The image processing unit 1b or 1c resizes the input movie and supplies it to the M compression unit 1i or 1j via the image selection unit 1f. The M compression unit 1i or 1j compresses the input photo-in-movie movie, and the recording control unit 1k supplies the compressed movie to the recording unit 4 for recording (step S6).
When the still image shooting operation unit 6b is operated in this photo-in-movie mode, the signal processing and control unit 1 shifts the processing from step S7 via step S16 to step S8 and performs still image shooting. That is, the signal processing and control unit 1 reads from the temporary recording unit 3 the picked-up image (still image) acquired by the reading unit 1a at the moment the still image shooting operation unit 6b was operated, and supplies it to the image processing unit 1b or 1c. The image processing unit 1b or 1c applies image processing to the input still image and supplies it to the S compression unit 1h via the image selection unit 1f. The S compression unit 1h compresses the input still image, and the recording control unit 1k supplies the compressed still image to the recording unit 4 for recording (step S8). Thus, the still image recording of the photo-in-movie mode is performed.
If the signal processing and control unit 1 does not detect an operation of the still image shooting operation unit 6b in step S7 while in the photo-in-movie mode, it determines in step S9 whether an end operation has been performed; if no end operation has been performed, the processing shifts to step S10 to determine whether a partial designation operation has occurred.
In the present embodiment, when a partial designation operation occurs in the photo-in-movie mode, the apparatus shifts to the multi-frame mode; when no partial designation operation occurs, photo-in-movie shooting continues. That is, if the signal processing and control unit 1 does not detect the occurrence of a partial designation operation in step S10 during the photo-in-movie mode, it judges in step S11 not to shift to the multi-frame mode and returns the processing to step S6 to continue photo-in-movie shooting; if it detects the occurrence of a partial designation operation, it shifts the processing to step S12 and sets the multi-frame mode.
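The mode-transition branches of the flowchart (steps S7 and S9–S12, together with the multi-frame exit via step S16 described later) can be summarized as one decision function. This is a simplified sketch with assumed state names; the real control also performs the recording work of each step:

```python
def next_mode(mode, still_op=False, end_op=False, partial_op=False):
    """Decide the next shooting mode, mirroring the branches of Fig. 6.

    In the photo-in-movie mode a partial designation operation triggers
    the multi-frame mode; a still image shooting operation during the
    multi-frame mode returns to the photo-in-movie mode.
    """
    if end_op:
        return "idle"
    if mode == "photo_in_movie":
        return "multi_frame" if partial_op else "photo_in_movie"
    if mode == "multi_frame" and still_op:
        return "photo_in_movie"
    return mode
```

A call such as `next_mode("photo_in_movie", partial_op=True)` yields `"multi_frame"`, corresponding to the shift from step S10 to step S12.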
In the multi-frame mode, movie recording of the overall image obtained by resizing the input image is performed in the same manner as in the photo-in-movie mode. Accordingly, even after shifting to the multi-frame mode, the signal processing and control unit 1 reads the movie from the temporary recording unit 3, resizes it in the image processing unit 1b to obtain the overall image, supplies it via the image selection unit 1f to the M compression unit 1i or 1j for compression, and records it in the recording unit 4 through the recording control unit 1k.
Also in the multi-frame mode, the signal processing and control unit 1 supplies the image read from the temporary recording unit 3 to the image processing unit 1c, which cuts out the partial image corresponding to the partial designation operation. In this case, the image processing unit 1c cuts out the designated partial area containing the partial image tracked by the tracking change determination unit 1e1. The movie composed of the partial images output from the image processing unit 1c is thus supplied via the image selection unit 1f to the M compression unit 1i or 1j for compression, and is recorded in the recording unit 4 by the recording control unit 1k (step S13).
Furthermore, in the present embodiment, convenience in the multi-frame mode is improved by displaying the overall image and the partial image in a picture-in-picture arrangement. Fig. 8 is an explanatory diagram for explaining the screen display of the multi-frame mode, showing an example in which the same input image as in Fig. 5 is input.
Assume now that the input image shown in Fig. 5(a) is input to the image processing units 1b and 1c. In the example of Fig. 5(a), three persons A to C have been captured in the input image. The user performs a partial designation operation designating the designated partial area containing the face of person A, shown by the dashed area in Fig. 5(a). Since this partial designation operation is the operation that determines the partial image of the multi-frame mode, it is assumed that the user is designating the image portion of the candidate given priority order 1 in step S19. However, when a partial designation operation designates a designated partial area containing a person different from the candidate given priority order 1 in step S19, this partial designation operation may be treated merely as a transition operation to the multi-frame mode before proceeding to the next step; alternatively, the target subject candidate based on this partial designation operation may be set to priority order 1, the priority orders of the other target subject candidates may be reset, and the priority order designation processing of step S19 may be executed again to set the priority orders anew.
Assume now that person A of Fig. 5(a) is the target subject candidate of priority order 1. In this case, the image (partial image) of the designated partial area shown by the dashed portion of Fig. 5(a) is extracted from the input image. Then, while the partial image is tracked using the feature determination unit 1d and the tracking change determination unit 1e1, the image processing unit 1c designates the area of the input image containing the partial image of person A as the designated partial area.
In accordance with the instructions of the tracking change determination unit 1e1, the image processing unit 1c cuts out the partial image while tracking it. The image processing unit 1c outputs the partial image shown in Fig. 5(b) directly to the M compression unit 1i or 1j via the image selection unit 1f, and after combining the partial image with the overall image, outputs the composite image to the display control unit 1g via the image selection unit 1f.
That is, as shown in Fig. 8(a), the image processing unit 1c combines the overall image and the partial image so that the main image based on the overall image is arranged in a main display area 5b occupying substantially the entire display screen 5a of the display unit 5, while the partial image is arranged as a sub-image in a partial area 5c of the display screen 5a (hereinafter called the sub display area). In other words, in this case the image processing unit 1c displays the partial image as the sub-image (step S12).
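The picture-in-picture synthesis itself amounts to overlaying the (already resized) sub-image onto a region of the main image. A minimal sketch with row-major lists, under assumed names (`compose_pip`, an `origin` giving the top-left corner of the sub display area):

```python
def compose_pip(main, sub, origin=(0, 0)):
    """Overlay `sub` onto a copy of `main` at `origin` (row, column).

    `sub` is assumed to be already resized to the sub display area;
    `main` is left unmodified.
    """
    out = [row[:] for row in main]
    oy, ox = origin
    for dy, srow in enumerate(sub):
        for dx, pixel in enumerate(srow):
            out[oy + dy][ox + dx] = pixel
    return out

main = [["M"] * 4 for _ in range(4)]   # main image in area 5b
sub = [["s"] * 2 for _ in range(2)]    # partial image in sub area 5c
pip = compose_pip(main, sub, origin=(2, 2))
```

Swapping the arguments (`compose_pip(partial, overall_small, ...)`) yields the reversed arrangement used after the touch operation described below in connection with Fig. 8(b).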
In the multi-frame mode, the signal processing and control unit 1 judges in step S14 whether the user has touched the sub display area 5c. If the user has touched the sub display area 5c, the signal processing and control unit 1 makes the current sub-image the main image and the current main image the sub-image in step S15.
Fig. 8(b) shows this state. When the sub display area 5c is touched in the state of Fig. 8(a), the partial image that was displayed as the sub-image in the sub display area 5c is displayed as the main image in the main display area 5b, and the overall image that was displayed as the main image in the main display area 5b is displayed as the sub-image in the sub display area 5c. When the sub display area 5c is touched in the state of Fig. 8(b), the overall image displayed as the sub-image in the sub display area 5c is displayed as the main image in the main display area 5b, and the partial image displayed as the main image in the main display area 5b is displayed as the sub-image in the sub display area 5c. That is, the display returns to that of Fig. 8(a).
When the signal processing and control unit 1 detects in step S7 that the user has operated the still image shooting operation unit 6b during the multi-frame mode, it shifts to the photo-in-movie mode. In this case, since the shift is from the multi-frame mode to the photo-in-movie mode, the processing moves from step S16 to step S17 and still image shooting is performed.
Before the still image shooting of step S17, the overall image and the partial image are compositely displayed on the display screen 5a of the display unit 5 as shown in Figs. 8(a) and 8(b). Accordingly, in the photo-in-movie still image shooting of step S17, all of the still images — the overall image, the partial image, and the composite images of the overall image and the partial image — are recorded. That is, the overall image from the image processing units 1b and 1c is supplied via the image selection unit 1f to the S compression unit 1h for compression and is recorded in the recording unit 4 by the recording control unit 1k. The image processing unit 1c cuts out the partial image, which is supplied via the image selection unit 1f to the S compression unit 1h for compression and is recorded in the recording unit 4 by the recording control unit 1k. The image processing unit 1c also generates the composite images of the overall image and the partial image. That is, the two composite images — the one with the partial image as the sub-image and the one with the overall image as the sub-image — are supplied via the image selection unit 1f to the S compression unit 1h for compression and are recorded in the recording unit 4 by the recording control unit 1k. Thus, when shifting from the multi-frame mode to the photo-in-movie mode, one operation of the still image shooting operation unit 6b records four still images: the overall image, the partial image, and the two composite images of the overall image and the partial image.
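The set of four still images produced by one shutter operation in this transition can be enumerated directly. The sketch below uses placeholder values and a stand-in `compose` for the picture-in-picture synthesis; all names are illustrative:

```python
def stills_to_record(overall, partial):
    """The four still images recorded by one shutter operation when
    shifting from the multi-frame mode to the photo-in-movie mode."""
    def compose(main, sub):
        # Stand-in for picture-in-picture synthesis of two images.
        return {"main": main, "sub": sub}
    return [overall,                     # overall image alone
            partial,                     # partial image alone
            compose(overall, partial),   # partial as the sub-image
            compose(partial, overall)]   # overall as the sub-image

recorded = stills_to_record("overall_image", "partial_image")
```

Each element of `recorded` would then be compressed by the S compression unit and written by the recording control unit, as described above.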
Figs. 9 and 10 are explanatory diagrams showing the overall images and partial images in the multi-frame mode. They show the case where the input image changes from the state of Fig. 9(a) through the state of Fig. 9(b) to the state of Fig. 9(c). Fig. 9 shows the case where three persons A to C are captured as subjects, where person A is the target subject of priority order 1 and the area containing the face of person A has been designated as the partial area. Fig. 10 shows the partial images: Figs. 10(a) to 10(c) show the partial images of the partial areas indicated by the dashed lines in the input images of Figs. 9(a) to 9(c), respectively.
In the multi-frame mode, a movie of the overall image obtained by resizing the input image and a movie of the partial image based on the partial area in the input image are recorded. Assume now that the partial area 36a shown by the dashed line is set in the input image 35a shown in Fig. 9(a). The tracking change determination unit 1e1 continues to track the designated areas 36b and 36c even when the image changes as in the input images 35b and 35c. In the multi-frame mode, the movie of the overall image obtained by resizing the input images 35a to 35c and the partial images 37a to 37c (Fig. 10) corresponding to the designated partial areas 36a to 36c are recorded.
With this multi-frame mode, the overall image can record the surroundings of the target subject the user wishes to shoot, namely person A, while the partial image can record the expression and the like of person A. In Figs. 10(a) to 10(c), the faces 38a to 38c of person A corresponding to the respective input images 35a to 35c are captured, and facial expressions 39a to 39c and the like can be recorded. Thus, the user can record both the overall motion and the expression of the subject of interest, and can view the part he or she wants to observe.
For example, when photographing a scene such as a chorus of a plurality of persons, each person's motion is small, and the effective recording described above can be performed. However, when photographing a scene such as a footrace of a plurality of persons, in which the relative positions of the persons change, the image of the target subject being tracked may become unobservable because of other subjects and the like. In the present embodiment, therefore, an effective partial image can be recorded at all times by switching the subject recorded as the partial image.
That is, in the present embodiment, the target subject to be tracked is switched according to the priority order. When the tracking change determination unit 1e1 can no longer track the current target subject, it queries the other-candidate determination unit 1e2 for the target subject of the next priority rank. The other-candidate determination unit 1e2 excludes the current target subject and sets, in the tracking change determination unit 1e1, the target subject with the highest priority rank among the plurality of target subject candidates.
The tracking change determination unit 1e1 then takes the newly set target subject as the new tracking target, reads the feature quantities of this target subject from the partial-image feature storage unit 3a to perform tracking, and outputs the tracking result to the image processing unit 1c. If the newly set target subject cannot be tracked either, the tracking change determination unit 1e1 again queries the other-candidate determination unit 1e2 for the target subject of the next priority rank. In this way, the tracking change determination unit 1e1 tracks the target subject that has been given the highest priority rank among the trackable target subjects.
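The exchange between units 1e1 and 1e2 amounts to choosing the highest-ranked candidate that is currently trackable. A minimal sketch, assuming each candidate carries an explicit priority rank (lower number meaning higher rank); the names are illustrative, not taken from the specification:

```python
def select_tracking_target(candidates, is_trackable, failed=()):
    """Return the trackable subject with the highest priority rank.

    candidates: list of (rank, subject_id); failed: subjects given up on.
    """
    for rank, subject in sorted(candidates):
        if subject in failed:
            continue  # excluded, like the subject 1e1 could no longer track
        if is_trackable(subject):
            return subject
    return None  # no trackable candidate remains

ranks = [(1, "A"), (2, "B"), (3, "C")]
# Person A is occluded: tracking falls back to the rank-2 subject.
target = select_tracking_target(ranks, lambda s: s != "A", failed={"A"})
```

Repeated queries to the next rank correspond to calling this selection again with the newly failed subject added to `failed`.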
The image processing unit 1c generates the partial image from the input image according to the tracking result of the tracking change determination unit 1e1. This partial image includes the target subject with the highest priority rank among the trackable target subjects, so a partial image the user desires can be recorded.
As described above, in the present embodiment, priority ranks set by the user are given to a plurality of target subjects in the multi-frame mode, and a partial image including the target subject with the highest priority rank among the trackable target subjects is recorded. Even when the target subject of the 1st priority rank cannot be tracked, a partial image can therefore be obtained by tracking the target subject of the 2nd, 3rd, ... priority rank, and the photography the user desires can be performed. For example, when photographing a footrace in which the user's own child participates, not only is the expression of the user's own child, who has been given the 1st priority rank, recorded as a partial image, but while the child's image is hidden by the images of other children, the expressions of other children set by the user are recorded as partial images during that period, which is an effective recording for the user.
(The 2nd Embodiment)
Figs. 11 to 17 relate to the 2nd embodiment of the present invention, and Fig. 11 is a flowchart showing the processing flow adopted in the 2nd embodiment. The hardware configuration of the present embodiment is the same as that of the 1st embodiment; only the processing flow of the signal processing and control unit 1 differs from the 1st embodiment.
In the 1st embodiment, an example was described in which priority ranks are given to a plurality of target subjects, and the target subject given the highest priority rank among the trackable target subjects is tracked to generate the partial image. In contrast, in the present embodiment, the other-candidate determination unit 1e2 selects one target subject from one or more target subject candidates according to preset conditions, based on the output of the feature detection unit 1d, and sets it in the tracking change determination unit 1e1. For example, the other-candidate determination unit 1e2 selects one target subject according to its positional relationship with the target subject being tracked, its motion, the number of times it has been selected, the selection time, and the like, and designates this target subject in the tracking change determination unit 1e1.
As in the 1st embodiment, the other-candidate determination unit 1e2 may also give priority ranks to a plurality of target subjects and, when no trackable target subject exists among those given priority ranks, select one target subject from the target subjects not given priority ranks according to the preset conditions and set it in the tracking change determination unit 1e1.
The other-candidate determination unit 1e2 may also control the display control unit 1g so that a display indicating the range of the partial region including each target subject candidate and a display indicating the target subject being tracked are shown in the live view image.
Fig. 11 shows processing corresponding to the multi-frame moving-image photographing processing of step S13 in Fig. 6. Fig. 12 is an explanatory diagram for describing the switching of the target subject. Figs. 12(a1) to 12(a3) show an example of live view images based on photographed images obtained in time sequence by photographing a footrace of three persons A to C, with the whole image shown as the main image and the partial image composited as the sub-image. Figs. 12(b1) and 12(b2) show live view images corresponding to Fig. 12(a1), Figs. 12(b3) and 12(b4) show those corresponding to Fig. 12(a2), and Figs. 12(b5) and 12(b6) show those corresponding to Fig. 12(a3), with the whole image shown as the sub-image and the partial image composited as the main image. Fig. 12 shows an example in which no priority ranks are given and the mode shifts to the multi-frame mode through a part designation operation that designates the partial region including person A.
The input image changes from the state of Fig. 12(a1) through the state of Fig. 12(a2) to the state of Fig. 12(a3). In the live view images 41a to 41c of the input images shown in Figs. 12(a1) to 12(a3), frames (broken lines) indicating the partial regions including the respective target subjects are displayed, together with sub-images 42a to 42c showing the partial image based on the target subject currently being tracked.
It is also conceivable that the user wants to photograph while observing the partial image (close-up image) rather than the whole image. In this case, by touching the image portion of the sub-images 42a to 42c of Figs. 12(a1) to 12(a3), the partial image can be made the main image as in Figs. 12(b1) to 12(b6), and photography can be performed while checking the facial expression. Images photographed in the multi-frame mode can also be reproduced and displayed in the manner shown in Fig. 12.
In step S21 of Fig. 11, the signal processing and control unit 1 starts recording the whole image and the partial image as separate moving images. The feature detection unit 1d obtains, for each target subject in the input photographed image, feature quantities such as those of the face and positional information on the screen, and records them in the partial-image feature storage unit 3a (step S22). Thus, for each photographed image, the photographed person, his or her face, the position at the time of photographing, and the like can be recorded.
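The per-frame recording of step S22 can be pictured as appending, for each detected subject, a feature quantity and a position keyed by subject. One hypothetical way to organize the contents of the partial-image feature storage unit 3a, assuming face features are plain vectors (all names are assumptions for illustration):

```python
def record_frame_features(storage, frame_no, detections):
    """Append (frame, feature, position) per subject; storage is a dict."""
    for subject_id, face_feature, position in detections:
        storage.setdefault(subject_id, []).append(
            (frame_no, face_feature, position))

storage = {}
record_frame_features(storage, 0, [("A", [0.1, 0.9], (120, 40)),
                                   ("C", [0.7, 0.2], (300, 60))])
record_frame_features(storage, 1, [("A", [0.1, 0.8], (130, 42))])
# storage["A"] now holds person A's feature and position for frames 0 and 1.
```

A history organized this way supports both re-identifying a subject by its stored features and computing positional changes between frames, as the later steps require.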
In the following step S23, the other-candidate determination unit 1e2 judges whether the image of the partial region currently being tracked satisfies the conditions required of the partial image to be recorded. For example, when tracking is impossible, when part of the face of the person in the tracked partial region is hidden, or when the period during which that subject serves as the tracking target has been exceeded, the other-candidate determination unit 1e2 judges that the partial region does not satisfy the required conditions.
In the live view image 41a of Fig. 12(a1), the partial regions including the images of persons A to C hardly overlap, and the other-candidate determination unit 1e2 sets the partial region including the image of person A in the tracking change determination unit 1e1. Thus, as shown in Figs. 12(b1) and 12(b2), in live view images 43a1 and 43a2, the image of the partial region including person A is used as the main image as partial images 44a1 and 44a2, and the whole image 44 is used as the sub-image.
That is, in the state of the live view image 41a of Fig. 12(a1), the other-candidate determination unit 1e2 judges in step S23 that the image in the partial region satisfies the required conditions. In this case, the signal processing and control unit 1 continues the moving-image photographing of the tracked partial region in step S24, and records the moving image in step S25.
Then, at the moment the live view image 41b shown in Fig. 12(a2) is displayed, the partial region including the tracked person A is hidden by the image of person C. The same applies at the moment the live view image 41c shown in Fig. 12(a3) is displayed.
The other-candidate determination unit 1e2 therefore advances the processing to step S31 and judges whether another candidate exists. When a target subject given a priority rank exists besides the target subject of person A, the processing proceeds to step S38, and the other-candidate determination unit 1e2 sets the target subject of the next priority rank in the tracking change determination unit 1e1. The tracking change determination unit 1e1 tracks the set target subject by reading the information of the partial-image feature storage unit 3a, and sets the partial region including the tracked target subject in the image processing unit 1c. The partial image of the new partial region is thus recorded as a moving image (steps S24, S25).
In the example of Fig. 12, no priority ranks are set, so the other-candidate determination unit 1e2 advances the processing to the following step S32 and judges whether there is a target subject hiding the partial region of the target subject being tracked. In the live view image 41b of Fig. 12(a2), the partial region of the tracked person A is hidden by the image of person C, so the other-candidate determination unit 1e2 takes person C as the target subject and performs the processing of step S35.
In step S35, it is judged whether the target subject of person C has been selected excessively. For example, when photographing a footrace in which the user's own child participates, the user wants to record only his or her own child as the partial image. In a footrace, however, the persons' motions are violent, and the image of the user's own child is sometimes hidden by the image of another child. In this case, by photographing as the partial image the image of the other child hiding the user's own child, not only the user's own child but also changes in the other child's expression and the like can be recorded, enabling varied viewing. For this reason, partial images are also recorded for target subjects other than those given a priority rank or designated by the user.
However, although other children as well as the user's own child are recorded in the whole image, what the user originally wants to record as the partial image is not another child but his or her own child. Therefore, when photographing a subject other than the target subject designated by the user, restrictions such as the number of recordings, the recording frequency, and the recording time are imposed. In step S35, the other-candidate determination unit 1e2 judges whether these restrictions on the number of recordings or the recording frequency have been exceeded for the reselected target subject.
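The restriction checked in step S35 can be modelled as a per-subject quota within a time window. A sketch under assumed limits (at most two selections per 30-second window; both values and all names are hypothetical, as the specification does not fix them):

```python
class SelectionLimiter:
    """Reject a substitute subject once it has been picked too often."""

    def __init__(self, max_count=2, window_s=30.0):
        self.max_count = max_count
        self.window_s = window_s
        self._picks = {}  # subject_id -> list of selection times

    def may_select(self, subject_id, now):
        recent = [t for t in self._picks.get(subject_id, [])
                  if now - t < self.window_s]
        if len(recent) >= self.max_count:
            return False  # step S35: selected excessively, skip this subject
        recent.append(now)
        self._picks[subject_id] = recent
        return True

limiter = SelectionLimiter(max_count=2, window_s=30.0)
first = limiter.may_select("C", now=0.0)   # allowed
second = limiter.may_select("C", now=5.0)  # allowed
third = limiter.may_select("C", now=10.0)  # rejected: quota reached
later = limiter.may_select("C", now=45.0)  # allowed again: old picks expired
```

A recording-time cap could be enforced the same way by accumulating seconds recorded per subject instead of counting selections.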
When the selected target subject has not been selected excessively in step S35, the other-candidate determination unit 1e2 takes the selected target subject as the tracking target for a predetermined time (for example, a period of several seconds) in the following step S36. To obtain information on the number of recordings and the recording frequency, the other-candidate determination unit 1e2 stores information on the newly set tracking target in a storage unit (not shown). The other-candidate determination unit 1e2 then sets the new target subject in the tracking change determination unit 1e1 (step S38).
Thus, as shown in Fig. 12(b3), the live view image 43b1 becomes a composite image in which the image of the partial region including person C is used as the main image as partial image 44b1 and the whole image 44 is used as the sub-image. That is, the tracking change determination unit 1e1 tracks person C as the target subject to be tracked within the predetermined period, and outputs the tracking result to the image processing unit 1c. The signal processing and control unit 1 thus continues the moving-image photographing of the partial image of person C in step S24, and records the moving image in step S25.
Like Fig. 12(b3), the live view image 43b2 of Fig. 12(b4) shows the state in which the image of the partial region including person C is used as the main image as partial image 44b2 and the whole image 44 is used as the sub-image. If the partial region of person A, which was hidden by person C, ceases to be hidden before the predetermined period set in step S36 elapses, the other-candidate determination unit 1e2 may advance from step S23 through step S31 to step S38, set person A as the target subject, and take the image of the partial region of person A as the partial image.
Then, at the moment the live view image 41c shown in Fig. 12(a3) is displayed, the partial region including the user-designated person A is not hidden by the images of the other persons B and C, but a comparison of the live view images 41b and 41c shows that the image of person B moves greatly. In this case, the other-candidate determination unit 1e2 judges in step S33 that a target subject whose motion exceeds a predetermined threshold exists. The other-candidate determination unit 1e2 then judges whether the greatly moving person B has been selected excessively (step S35).
Thus, as shown in Fig. 12(b5), the live view image 43c1 becomes a composite image in which the image of the partial region including person B is used as the main image as partial image 44c1 and the whole image 44 is used as the sub-image. That is, the tracking change determination unit 1e1 tracks person B as the target subject to be tracked within the predetermined period, and outputs the tracking result to the image processing unit 1c. The signal processing and control unit 1 thus continues the moving-image photographing of the partial image of person B in step S24, and records the moving image in step S25.
When the tracking period of person B ends, the other-candidate determination unit 1e2 advances the processing from step S31 to step S32. When no applicable target subject exists in steps S32 and S33, the other-candidate determination unit 1e2 proceeds to step S34 and judges whether an unselected subject exists. When an unselected subject exists, the other-candidate determination unit 1e2 selects one target subject from among them and repeats step S36 and the subsequent processing. When no unselected subject exists, the other-candidate determination unit 1e2 instructs the image processing unit 1c to set the whole image as the partial image (step S39).
Even when no priority ranks are given, if the user designates a target subject by the part designation operation, the other-candidate determination unit 1e2 may take that target subject as the new tracking target. Fig. 12(b6) shows this state: the live view image 43c2 becomes a composite image in which the image of the partial region including person A is used as the main image as partial image 44c2 and the whole image 44 is used as the sub-image.
In this way, varied viewing can be performed by selecting, as the partial image, a subject hiding the user-designated target subject, a greatly moving subject, or another subject that changes within the screen.
Fig. 13 is a flowchart specifically showing an example of the processing of step S32 of Fig. 11, and Fig. 14 is an explanatory diagram specifically showing an example of the processing of step S32.
Figs. 14(a) to 14(c) show changes in the photographed image: the photographed image 51a shown in Fig. 14(a) changes to the photographed image 51b shown in Fig. 14(b), and further to the photographed image 51c shown in Fig. 14(c). In Fig. 14, persons A to C are photographed in the photographed images 51a and 51b, and persons B and C are photographed in the photographed image 51c. The description assumes that in the photographed image 51a, person A is the target subject being tracked.
In step S41, the other-candidate determination unit 1e2 judges, according to preset facial features, the faces of persons present around the partial region including the target subject being tracked. For example, in the photographed image 51a of Fig. 14(a), the other-candidate determination unit 1e2 detects the face of person C around the partial region 52a (broken line) of person A. In Fig. 14(a), the partial region 53c including person C is indicated by a broken line.
Next, the other-candidate determination unit 1e2 judges whether the detected face approaches the partial region including the target subject being tracked. When it does not approach, the other-candidate determination unit 1e2 proceeds to the "No" branch of step S32 of Fig. 11.
For example, when the face of person C approaches the partial region 52b of person A as in the photographed image 51b of Fig. 14(b), the other-candidate determination unit 1e2 advances the processing to step S43 and judges whether the facial features in the tracked partial region have disappeared. When the face of person A is hidden by person C as in the photographed image 51b, person A cannot be identified from the feature quantities of the face of person A obtained from the photographed image 51b. In this case, the other-candidate determination unit 1e2 takes the approaching face, that is, person C, as the target subject (step S44), and enters the "Yes" branch of step S32. Thus, as shown in Fig. 14(c), person C becomes the tracking target in the photographed image 51c, and the partial image is generated according to the partial region 53c including person C.
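The branch of steps S41 to S44 switches the tracking target to an adjacent face once the tracked face's features vanish. A sketch with faces and regions as axis-aligned boxes (x1, y1, x2, y2) and a hypothetical proximity margin; none of these names or values come from the specification:

```python
def is_near(face, region, margin=10):
    """True if the face box overlaps the tracked region grown by margin."""
    fx1, fy1, fx2, fy2 = face
    rx1, ry1, rx2, ry2 = region
    return not (fx2 < rx1 - margin or fx1 > rx2 + margin or
                fy2 < ry1 - margin or fy1 > ry2 + margin)

def occlusion_switch(tracked_face_visible, tracked_region, nearby_faces):
    """Steps S41-S44: pick an adjacent face once the tracked face vanishes."""
    if tracked_face_visible:
        return None  # features still present, no switch (step S43 "No")
    for face_id, box in nearby_faces:
        if is_near(box, tracked_region):
            return face_id  # the approaching face becomes the target (S44)
    return None

region_a = (100, 50, 160, 120)           # partial region of person A
face_c = ("C", (150, 55, 200, 115))      # person C's face box, overlapping
switch_to = occlusion_switch(False, region_a, [face_c])  # person A hidden
```

The proximity test doubles as a plausibility check: only a face close enough to have caused the occlusion is adopted as the new target.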
Figs. 15 to 17 specifically show an example of the processing of step S33 of Fig. 11: Fig. 15 is a flowchart, Fig. 16 is an explanatory diagram, and Fig. 17 is a graph.
Figs. 16(a) and 16(b) show changes in the photographed image: the photographed image 61 shown in Fig. 16(a) changes to the photographed image 62 shown in Fig. 16(b). Faces 63a to 63c of persons A to C are photographed in the photographed image 61 of Fig. 16(a), and faces 64a to 64d of persons A to D are photographed in the photographed image 62. The description assumes that in the photographed images 61 and 62, the target subject serving as the tracking target is person A.
In step S51, the other-candidate determination unit 1e2 judges, according to preset facial features, whether face images are present in the photographed image. The faces of persons are photographed in the photographed images 61 and 62 of Fig. 16, so the other-candidate determination unit 1e2 advances the processing to the following step S52.
In step S52, the other-candidate determination unit 1e2 calculates the change from the previous position for each face. Fig. 17 plots the lateral position on the image on the horizontal axis and time on the vertical axis, showing the lateral positions of the faces 63a to 63c and 64a to 64d. Time T1 of Fig. 17 corresponds to the photographing moment of the photographed image 61, and time T2 corresponds to the photographing moment of the photographed image 62.
Fig. 17 shows the position of each face with the left side of the image as the reference; at time T1, face B is positioned to the right of face C. At time T2, the faces are positioned further toward the right of the image in the order of faces D, C, B. The other-candidate determination unit 1e2 obtains, for each face, the difference between its positions at times T1 and T2. In the example of Fig. 16, the positional changes of faces B and C differ, and as shown in Fig. 17, the positional change Δb of face B is greater than the positional change Δc of face C.
In step S53, the other-candidate determination unit 1e2 judges whether the positional changes of the faces differ. When they do not differ, the other-candidate determination unit 1e2 proceeds to the "No" branch of step S32 of Fig. 11. In the example of Fig. 16, the other-candidate determination unit 1e2 advances the processing to step S54, takes face B, whose positional change is large, as the target subject of the tracking target (step S54), and enters the "Yes" branch of step S32. Thus, in the example of the photographed images 61 and 62, person B becomes the tracking target, and the partial image is generated according to the partial region including person B.
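Steps S52 to S54 select the face whose lateral position changed most between T1 and T2, provided that change exceeds a threshold. A sketch with hypothetical positions measured from the left edge of the image (the threshold value and all names are assumptions, not values from the specification):

```python
def pick_fast_mover(pos_t1, pos_t2, threshold):
    """Return the face with the largest position change above threshold."""
    deltas = {face: abs(pos_t2[face] - pos_t1[face])
              for face in pos_t2 if face in pos_t1}  # new faces are skipped
    if not deltas:
        return None
    face = max(deltas, key=deltas.get)
    return face if deltas[face] > threshold else None  # step S53 check

pos_t1 = {"A": 100, "B": 260, "C": 220}             # lateral positions at T1
pos_t2 = {"A": 105, "B": 180, "C": 235, "D": 320}   # lateral positions at T2
mover = pick_fast_mover(pos_t1, pos_t2, threshold=30)
```

Face D, which appears only at T2, has no previous position and is excluded; with these sample positions face B shows the largest change and is selected.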
As described above, in the present embodiment, one target subject is selected according to the set conditions and used for the partial image in the multi-frame mode. Varied viewing can thus be performed, and an image the user desires can be recorded.