CN103002207A - Camera shooting device - Google Patents

Camera shooting device

Info

Publication number
CN103002207A
Authority
CN
China
Prior art keywords
image
record
images
control part
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210262811XA
Other languages
Chinese (zh)
Other versions
CN103002207B (en)
Inventor
望月良祐
野中修
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aozhixin Digital Technology Co ltd
Original Assignee
Olympus Imaging Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Imaging Corp filed Critical Olympus Imaging Corp
Publication of CN103002207A
Application granted
Publication of CN103002207B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides an imaging device capable of always recording the image a user expects in a multiframe mode. The imaging device comprises: an imaging section that outputs a captured image obtained by shooting a subject; a first image acquiring section that acquires an image of a first angle of view from the captured image; a second image acquiring section that acquires one or more second images of a second angle of view narrower than the first angle of view; a recording control section that performs moving-image recording of one of the one or more second images; and a control section that, when the second image being recorded by the recording control section does not satisfy a first subject condition, switches to another second image, among the one or more second images acquired by the second image acquiring section, that satisfies a second subject condition, and causes the recording control section to perform moving-image recording.

Description

Imaging device
Technical field
The present invention relates to an imaging device having a multiframe mode.
Background art
In recent years, portable devices with camera functions (imaging devices) that process images, such as digital cameras, have come to offer a wide variety of shooting functions. For example, Patent Document 1 discloses a camera with a function that composites and records an image taken at the wide-angle end, to capture the overall atmosphere, together with an image taken at the telephoto end, to capture a decisive moment, and displays the two images on one screen.
In addition, many imaging devices have not only a still-image shooting function but also a moving-image shooting function. For example, imaging devices with a so-called photo-in-movie function, which captures a still image during moving-image shooting, are well known. There are also imaging devices with a so-called review-photo function, which records a still image and the moving images before and after it when the shutter button is pressed. Further, there are imaging devices with a so-called multiframe function, which records two moving images: a resized moving image of the whole frame, and a moving image of a part of the frame.
[Patent Document 1] Japanese Patent Application Laid-Open No. 2010-268019
In the multiframe mode, a moving image of a part of the frame, for example the image of a particular person, is recorded while being tracked. However, this multiframe mode has the following problem: when the image being tracked moves outside the imaging range or is masked by another image, the image the user expects cannot be recorded as the moving image of that part of the frame.
Summary of the invention
An object of the present invention is to provide an imaging device that can always record the image a user expects, even in the multiframe mode.
An imaging device of the present invention comprises: an imaging section that shoots a subject and outputs a captured image; a first image acquiring section that acquires an image of a first angle of view from the captured image; a second image acquiring section that acquires, from the captured image, one or more second images of a second angle of view narrower than the first angle of view; a recording control section that performs moving-image recording of one of the one or more second images; and a control section that, when the second image being recorded by the recording control section does not satisfy a first subject condition, switches to another second image, among the one or more second images acquired by the second image acquiring section, that satisfies a second subject condition, and causes the recording control section to perform moving-image recording.
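The control section's switching behavior can be sketched as a small Python model. This is illustrative only: the patent does not fix the concrete subject conditions, so the predicate `satisfies_condition` (a visible-fraction threshold) and all names here are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SecondImage:
    """A candidate second image (narrow angle of view) and its tracked subject."""
    subject_id: str
    visible_fraction: float  # fraction of the subject still inside the frame

def satisfies_condition(img: SecondImage, threshold: float = 0.5) -> bool:
    # Stand-in for the patent's "subject condition": the tracked subject
    # must still occupy enough of its partial region to be recordable.
    return img.visible_fraction >= threshold

def select_recording_image(current: SecondImage,
                           candidates: List[SecondImage]) -> Optional[SecondImage]:
    """Keep recording `current` while it meets the first subject condition;
    otherwise switch to another candidate meeting the second condition."""
    if satisfies_condition(current):
        return current
    for other in candidates:
        if other.subject_id != current.subject_id and satisfies_condition(other):
            return other
    return None  # no recordable second image at all

a = SecondImage("A", 0.1)   # person A has mostly left the frame
b = SecondImage("B", 0.9)
chosen = select_recording_image(a, [a, b])
print(chosen.subject_id)    # recording switches to person B
```

The point of the sketch is that recording never simply stops: as long as some second image satisfies its condition, moving-image recording continues on it.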
According to the present invention, there is the effect that the image the user expects can always be recorded, even in the multiframe mode.
Description of drawings
Fig. 1 is a block diagram showing the circuit configuration of an imaging device according to the first embodiment of the present invention.
Fig. 2 is an explanatory diagram showing the photo-in-movie operation.
Fig. 3 is an explanatory diagram showing the review-photo operation.
Fig. 4 is an explanatory diagram showing the multiframe operation.
Fig. 5 is an explanatory diagram showing an example of an input image and a partial image.
Fig. 6 is a flowchart showing shooting control.
Fig. 7 is an explanatory diagram for explaining the designation of priority order.
Fig. 8 is an explanatory diagram for explaining the screen display in the multiframe mode.
Fig. 9 is an explanatory diagram showing the whole image in the multiframe mode.
Fig. 10 is an explanatory diagram showing the partial image in the multiframe mode.
Fig. 11 is a flowchart showing the processing flow adopted in the second embodiment.
Fig. 12 is an explanatory diagram for explaining switching of the target subject.
Fig. 13 is a flowchart specifically showing an example of the processing of step S32 of Fig. 11.
Fig. 14 is an explanatory diagram specifically showing an example of the processing of step S32.
Fig. 15 is a flowchart specifically showing an example of the processing of step S33 of Fig. 11.
Fig. 16 is an explanatory diagram specifically showing an example of the processing of step S33 of Fig. 11.
Fig. 17 is a graph specifically showing an example of the processing of step S33 of Fig. 11.
Reference numerals
1: signal processing and control section; 1a: reading section; 1b, 1c: image processing sections; 1d: feature determination section; 1e1: tracking-change determination section; 1e2: other-candidate determination section; 1f: image selection section; 1g: display control section; 1h: S compression section; 1i, 1j: M compression sections; 1k: recording control section; 2: imaging section; 3: temporary recording section; 4: recording section; 5: display section; 6: operation section; 6a: moving-image shooting operation section; 6b: still-image shooting operation section.
Embodiment
Embodiments of the present invention will be described in detail below with reference to the drawings.
(First embodiment)
Fig. 1 is a block diagram showing the circuit configuration of an imaging device according to the first embodiment of the present invention.
In Fig. 1, the imaging device 10 has an imaging section 2 constituted by an image pickup element such as a CCD or CMOS sensor. The imaging section 2 is configured so that its aperture, focus, zoom, and the like can be controlled by the signal processing and control section 1, allowing shooting suited to various compositions and subjects.
The imaging section 2 is driven and controlled by the signal processing and control section 1; it shoots a subject and outputs a captured image. The signal processing and control section 1 outputs drive signals for the image pickup element to the imaging section 2 and reads out the captured image from the imaging section 2. This readout of the captured image is performed by the reading section 1a. The reading section 1a supplies the captured image read out from the imaging section 2 to the temporary recording section 3.
The temporary recording section 3 has the capacity to hold the captured images from the imaging section 2 for a predetermined period, and stores the captured images (moving images and still images) read out from the imaging section 2.
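A buffer that holds frames only for a predetermined period behaves like a fixed-capacity ring buffer. The following is a minimal sketch of that behavior, assuming the period is expressed as a frame count (the class name and interface are illustrative, not from the patent):

```python
from collections import deque

class TemporaryRecorder:
    """Sketch of the temporary recording section 3: keeps only the most
    recent `capacity` frames, discarding the oldest as new ones arrive."""
    def __init__(self, capacity: int):
        self.frames = deque(maxlen=capacity)

    def store(self, frame):
        self.frames.append(frame)   # deque drops the oldest automatically

    def recent(self):
        return list(self.frames)

rec = TemporaryRecorder(capacity=3)
for i in range(5):                  # frames 0..4 arrive from the imaging section
    rec.store(i)
print(rec.recent())                 # only the last 3 frames survive
```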
The signal processing and control section 1 has two systems of image processing sections 1b, 1c, which read out the captured images recorded in the temporary recording section 3 and perform image processing. The image processing sections 1b, 1c perform predetermined signal processing on the input image, for example color-signal generation processing, matrix conversion processing, and various other signal processing. They are also configured to perform various image processing such as resize processing, which resizes the input image, and processing that generates an image of a partial region of the input image (hereinafter called a partial image). In the following, the image of the whole region of the input image is called the whole image, regardless of whether resize processing has been performed. That is, relative to the whole image, the partial image is an image with a narrower angle of view, i.e., a close-up image.
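The resize processing mentioned here is not specified further in the patent; as one simple possibility, a nearest-neighbor downscale over a frame given as rows of pixel values can be sketched like this (purely illustrative):

```python
def resize_nearest(frame, out_w, out_h):
    """Minimal nearest-neighbor resize, standing in for the resize
    processing of the image processing sections 1b/1c."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

frame = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 test frame
small = resize_nearest(frame, 2, 2)
print(small)
```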
When the user performs an operation designating a partial region of the input image for generating a partial image (hereinafter, partial designation operation; the region is called a partially designated region), or when image analysis of the captured image detects a region judged to contain a specific subject such as a person (hereinafter called a partial specific region), image information of the partially designated region or partial specific region (hereinafter these regions are called partial regions) is supplied from the image processing section 1b to the feature determination section 1d. The feature determination section 1d determines feature quantities of a target subject, such as a person's face, and outputs the determination result, as information on the target subject detected in the partial region, to the tracking-change determination section 1e1 and the other-candidate determination section 1e2. In addition, the feature determination section 1d supplies the feature-quantity information of each subject to the partial-image feature storage section 3a for storage.
The tracking-change determination section 1e1 reads out the feature quantity of the target subject from the partial-image feature storage section 3a, tracks, in the sequentially input images, the region containing a part whose feature quantity matches that of the target subject, and outputs the tracking result to the image processing section 1c. Thus, the image processing section 1c can always track the image portion of the partial region containing the target subject while generating that image as a partial image.
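Tracking by matching a stored feature quantity against candidate regions can be sketched as a nearest-feature search. The feature vectors, distance metric, and threshold below are assumptions for illustration; the patent does not specify how feature quantities are compared.

```python
import math

def track_subject(target_feature, frame_regions, max_dist=0.3):
    """Return the region whose feature vector best matches the stored
    target feature, or None when nothing is close enough (tracking lost)."""
    best_name, best_dist = None, float("inf")
    for name, vec in frame_regions.items():
        d = math.dist(target_feature, vec)   # Euclidean distance in feature space
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_dist else None

target = (0.9, 0.1, 0.4)                 # stored feature of person A's face
regions = {
    "A": (0.88, 0.12, 0.41),             # person A, slightly changed pose
    "B": (0.2, 0.8, 0.5),
}
print(track_subject(target, regions))    # person A is still trackable
```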
In the present embodiment, an other-candidate determination section 1e2 is also provided; it exchanges information with the feature determination section 1d and the tracking-change determination section 1e1 to determine other candidates. When the partial image of a partial region can no longer be tracked, the other-candidate determination section 1e2 is supplied with information on the subjects in one or more other partial regions, selects one of those subjects as the target subject, and supplies the selection result to the tracking-change determination section 1e1. Based on this selection result, the tracking-change determination section 1e1 reads the feature quantity of the new target subject from the partial-image feature storage section 3a, performs tracking processing, and outputs the tracking result to the image processing section 1c.
Thus, even when, for example, the partial image based on the partial designation operation can no longer be tracked, the image processing section 1c can still generate a partial image of a partial specific region. In the present embodiment, even while the tracking processing of the tracking-change determination section 1e1 is being carried out, tracking is judged impossible when, for example, most of the partial image has disappeared from the partial region.
The moving images and still images from the image processing sections 1b, 1c are supplied to the image selection section 1f. The image selection section 1f selects among the input moving images and still images and supplies them to the display control section 1g, and also to the S compression section 1h and the M compression sections 1i, 1j.
The display control section 1g performs display processing to supply the input moving images and still images to the display section 5 for display. The display section 5 is constituted by an LCD or the like, and displays the moving images and still images supplied from the display control section 1g.
Meanwhile, the S compression section 1h compresses the input still image and supplies it to the recording control section 1k, and the M compression sections 1i, 1j compress the input moving images and supply them to the recording control section 1k. The recording control section 1k supplies the compressed moving images and compressed still images to the recording section 4 for recording. The recording section 4 is controlled by the recording control section 1k and records the input compressed moving images and compressed still images. As the recording section 4, a card interface, for example, can be adopted; the recording section 4 can record image information, audio information, and the like to a recording medium such as a memory card.
The imaging device 10 is also provided with an operation section 6. The operation section 6 has various switches and buttons for shooting-mode setting and the like, and generates operation signals based on user operations, which it supplies to the signal processing and control section 1. For example, Fig. 1 shows a moving-image shooting operation section 6a and a still-image shooting operation section 6b as concrete examples of the operation section 6. The moving-image shooting operation section 6a is used to instruct moving-image shooting; when it is operated, an operation signal for starting moving-image shooting is supplied to the signal processing and control section 1. Likewise, the still-image shooting operation section 6b is used to instruct still-image shooting; when it is operated, an operation signal for starting still-image shooting is supplied to the signal processing and control section 1. The signal processing and control section 1 controls each section according to these operation signals.
A touch panel can also be adopted as the operation section 6. For example, a touch panel serving as the operation section 6 can be provided on the display screen of the display section 5, generating operation signals corresponding to the positions the user indicates with a finger. Thus, the user can simply designate a predetermined region of the image displayed on the display screen of the display section 5 as the partially designated region.
The signal processing and control section 1 controls each section to set the shooting mode based on the user's operation, realizing the shooting function corresponding to each shooting mode. In this case, the signal processing and control section 1 can change the shooting mode solely by operation of the moving-image shooting operation section 6a and the still-image shooting operation section 6b.
In the present embodiment, the other-candidate determination section 1e2 can register a plurality of target subjects in advance, with priority order assigned, through the user's partial designation operations on the operation section 6. By supplying the priority-order information of each target subject to the tracking-change determination section 1e1, the other-candidate determination section 1e2 can cause the tracking-change determination section 1e1 to track the highest-priority subject among those that can currently be tracked.
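The priority-ordered fallback just described reduces to a simple first-match search over the registered order. A minimal sketch, assuming subjects are identified by name and trackability is given as a set (both assumptions for illustration):

```python
def pick_by_priority(priority_order, trackable):
    """Sketch of the other-candidate determination: among currently
    trackable subjects, choose the one with the highest registered priority."""
    for subject in priority_order:            # priority_order[0] is priority 1
        if subject in trackable:
            return subject
    return None                               # nothing trackable at all

priority = ["A", "C", "B"]                    # as registered by the user
print(pick_by_priority(priority, {"B", "C"}))  # A lost: falls back to C
```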
Next, the operation of the embodiment configured as above will be described with reference to Figs. 2 to 10.
First, with reference to Figs. 2 to 5, the shooting modes of the imaging device of the present embodiment — photo-in-movie, review photo, and multiframe — will be described. Figs. 2 to 4 are explanatory diagrams showing the photo-in-movie, review-photo, and multiframe operations, respectively. Fig. 5 is an explanatory diagram showing an example of an input image and a partial image. In Figs. 2 to 4, each frame of the moving-image output 11a of the reading section 1a, which reads images from the imaging section 2, is shown by a series of square frames.
In the photo-in-movie mode shown in Fig. 2, the moving image 11c obtained by resizing each frame of the moving-image output 11a is supplied to the M compression sections 1i, 1j. The M compression sections 1i, 1j compress the input moving image 11c to obtain a compressed moving image. This compressed moving image is recorded as the moving image in photo-in-movie.
In photo-in-movie, when the user operates the still-image shooting operation section 6b during moving-image shooting, the still image 11b of the frame corresponding to the still-image shooting operation, among the frames of the moving-image output 11a, is supplied to the S compression section 1h. The S compression section 1h compresses the input still image 11b to obtain a compressed still image. This compressed still image is recorded as the still image in photo-in-movie.
In the review-photo mode shown in Fig. 3, each frame of the moving-image output 11a is temporarily recorded. In review photo, when the user operates the still-image shooting operation section 6b during moving-image shooting, the still image 11b of the frame corresponding to the still-image shooting operation, among the frames of the moving-image output 11a, is supplied to the S compression section 1h. The S compression section 1h compresses the input still image 11b to obtain a compressed still image. This compressed still image is recorded as the still image in review photo.
Meanwhile, the frames before and after the still image 11b are resized and supplied to the M compression sections 1i, 1j as the review moving image 11d associated with the still image 11b. The M compression sections 1i, 1j compress the input review moving image 11d to obtain a compressed moving image. This compressed moving image is recorded as the review moving image in review photo.
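Selecting the still frame plus its surrounding frames from the temporary buffer can be sketched as a slice around the shutter index. How many frames before and after are kept is not stated in the patent, so the `before`/`after` parameters below are assumptions:

```python
def review_clip(frames, shutter_index, before=2, after=2):
    """Sketch of review-photo recording: keep the still frame plus the
    frames recorded just before and after the shutter operation."""
    start = max(0, shutter_index - before)
    end = min(len(frames), shutter_index + after + 1)
    still = frames[shutter_index]
    return still, frames[start:end]

frames = list(range(10))                  # frame indices from the buffer
still, clip = review_clip(frames, shutter_index=5)
print(still, clip)
```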
In the multiframe mode shown in Fig. 4, the whole moving image 11e obtained by resizing each frame of the moving-image output 11a is supplied to the M compression sections 1i, 1j. The M compression sections 1i, 1j compress the input whole moving image 11e to obtain a compressed moving image. This compressed moving image is recorded as the whole moving image in multiframe.
Fig. 5(a) shows an input image. In the example of Fig. 5, three persons A to C are captured in the input image. In multiframe, the user performs a partial designation operation designating the partially designated region containing the face of person A, shown by the dashed region in Fig. 5(a). The image (partial image) of the partially designated region shown in Fig. 5(b) is then extracted from the input image. After the partial image is tracked in the input image by the feature determination section 1d and the tracking-change determination section 1e1, the partial image of person A is extracted from each frame. The partial moving image 11f, consisting of the partial images of each frame of the moving-image output 11a, is supplied to the M compression sections 1i, 1j. The M compression sections 1i, 1j compress the input partial moving image 11f to obtain a compressed moving image. This compressed moving image is recorded as the partial moving image in multiframe.
Next, shooting control will be described with reference to the flowchart of Fig. 6.
When the power of the imaging device 10 is turned on by operation of the operation section 6, the signal processing and control section 1 starts live-view display and temporary recording in step S1. That is, the signal processing and control section 1 drives the imaging section 2 to shoot the subject. The reading section 1a reads the captured images from the imaging section 2 and sequentially records them in the temporary recording section 3. The temporary recording section 3 sequentially records the captured images for a predetermined period.
The signal processing and control section 1 sequentially reads out the captured images recorded in the temporary recording section 3 and supplies them to the image processing section 1b or 1c. After performing predetermined image-signal processing on the captured image, the image processing section 1b or 1c supplies it to the display control section 1g via the image selection section 1f. The display control section 1g supplies the input image to the display section 5 for display. Thus, the live-view image is displayed on the display screen of the display section 5.
In the present embodiment, priority order is designated in step S19, described later. Fig. 7 is an explanatory diagram for explaining the designation of priority order. The other-candidate determination section 1e2 determines the priority order of the target subjects to be tracked as partial images during moving-image shooting, particularly in the multiframe mode. For example, the other-candidate determination section 1e2 controls the display control section 1g to make the display section 5 display the target-subject candidates. Fig. 7(a) shows this display: the display screen 5a of the display section 5 is provided on the back of the main body 21 of the imaging device 10, a photographic lens 22 on the front surface, and a shutter button 23 on the top surface. A target-subject candidate display 24 is shown at the bottom of the display screen 5a. In the example of Fig. 7(a), persons B, A, and C are shown as target-subject candidates, arranged in order from left to right.
The target subject can be designated by the user through a partial designation operation of the operation section 6 on the live-view captured image, or can be detected automatically from the captured image by the feature determination section 1d according to the feature quantities of a preset person's face or the like.
With the target-subject candidate display 24 shown, the user touches the image portion of a target-subject candidate with a finger 25. Fig. 7(a) shows the state in which the image portion of person B is touched. The operation section 6 constituting the touch panel outputs, to the other-candidate determination section 1e2, an operation signal indicating that the user has designated person B. By this operation, the other-candidate determination section 1e2 causes a selection display 26, indicating that person B has been selected, to appear on the target-subject candidate display 24, and causes a priority-order display 27 to appear at the top of the display screen 5a. Fig. 7(b) shows priority orders "1", "2", "3" displayed. In this state, the user touches the image portion for priority order "3" in the priority-order display 27 with the finger 25. This operation signal is supplied to the other-candidate determination section 1e2, which sets the priority order of person B to third. Thereafter, by repeating the operation of touching the image portion of each person in the target-subject candidate display 24 and the operation of touching the numeral of each priority order in the priority-order display 27, the user sets a priority order for each target-subject candidate.
Next, in step S2, the signal processing and control section 1 determines whether the still-image shooting operation section 6b has been operated. Then, in step S5, it determines whether the moving-image shooting operation section 6a has been operated.
When the signal processing and control section 1 determines in step S2 that the still-image shooting operation section 6b has been operated, it sets the review-photo mode; when it determines in step S5 that the moving-image shooting operation section 6a has been operated, it sets the photo-in-movie mode.
When the still-image shooting operation section 6b is operated in step S2, the signal processing and control section 1 performs still-image shooting in review photo in step S3, and records the review moving image in review photo in step S4. That is, the signal processing and control section 1 reads from the temporary recording section 3 the captured image (still image) obtained from the reading section 1a at the moment the still-image shooting operation section 6b was operated, and supplies it to the image processing section 1b or 1c. The image processing section 1b or 1c performs image processing on the input still image and supplies it to the S compression section 1h via the image selection section 1f. The S compression section 1h compresses the input still image, and the recording control section 1k supplies the compressed still image to the recording section 4 for recording (step S3).
The signal processing and control section 1 also reads the moving images recorded before and after the still image read from the temporary recording section 3 and supplies them to the image processing section 1b or 1c. The image processing section 1b or 1c resizes the input moving images and supplies the review moving image to the M compression section 1i or 1j via the image selection section 1f. The M compression section 1i or 1j compresses the input review moving image, and the recording control section 1k supplies the compressed review moving image to the recording section 4 for recording (step S4).
When the moving-image shooting operation section 6a is operated in step S5, the signal processing and control section 1 performs moving-image shooting in photo-in-movie in step S6. That is, the signal processing and control section 1 sequentially reads the moving image recorded in the temporary recording section 3 and supplies it to the image processing section 1b or 1c. The image processing section 1b or 1c resizes the input moving image and supplies it to the M compression section 1i or 1j via the image selection section 1f. The M compression section 1i or 1j compresses the input moving image for photo-in-movie, and the recording control section 1k supplies the compressed moving image to the recording section 4 for recording (step S6).
In this photo-in-movie mode, when the still-image shooting operation section 6b is operated, the signal processing and control section 1 moves processing from step S7 through step S16 to step S8 and performs still-image shooting. That is, the signal processing and control section 1 reads from the temporary recording section 3 the captured image (still image) obtained from the reading section 1a at the moment the still-image shooting operation section 6b was operated, and supplies it to the image processing section 1b or 1c. The image processing section 1b or 1c performs image processing on the input still image and supplies it to the S compression section 1h via the image selection section 1f. The S compression section 1h compresses the input still image, and the recording control section 1k supplies the compressed still image to the recording section 4 for recording (step S8). Thus, still-image recording in photo-in-movie is performed.
If the signal processing and control section 1 does not detect operation of the still-image shooting operation section 6b in step S7 in photo-in-movie mode, it determines in step S9 whether an end operation has been performed; if not, processing moves to step S10, where it determines whether a partial designation operation has occurred.
In the present embodiment, when a partial designation operation occurs in photo-in-movie mode, the device transfers to the multiframe mode; when no partial designation operation occurs, photo-in-movie continues. That is, when the signal processing and control section 1 does not detect a partial designation operation in step S10 during photo-in-movie mode, it determines in step S11 not to transfer to the multiframe mode and returns processing to step S6 to continue photo-in-movie; when it detects a partial designation operation, processing moves to step S12 and the multiframe mode is set.
In the multiframe mode, as in photo-in-movie mode, moving-image recording is performed on the whole image obtained by resizing the input image. Therefore, even after transferring to the multiframe mode, the signal processing and control section 1 reads the moving image from the temporary recording section 3, resizes it in the image processing section 1b to obtain the whole image, supplies it via the image selection section 1f to the M compression section 1i or 1j for compression, and records it in the recording section 4 through the recording control section 1k.
Also in the multiframe mode, the signal processing and control section 1 supplies the image read from the temporary recording section 3 to the image processing section 1c, which crops out the partial image corresponding to the partial designation operation. In this case, the image processing section 1c crops out the partially designated region containing the partial image tracked by the tracking-change determination section 1e1. The moving image consisting of the partial images output from the image processing section 1c is then supplied via the image selection section 1f to the M compression section 1i or 1j for compression, and is recorded in the recording section 4 by the recording control section 1k (step S13).
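The per-frame crop of the tracked region can be sketched over a frame represented as rows of pixel values; the `(x, y, w, h)` region format is an assumption for illustration:

```python
def crop_partial_image(frame, region):
    """Sketch of the image processing section 1c's crop: cut the tracked
    partial region (x, y, w, h) out of a frame given as pixel rows."""
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]

# A 4x6 "frame" of pixel values; the tracked region is a 2x2 patch.
frame = [[r * 10 + c for c in range(6)] for r in range(4)]
patch = crop_partial_image(frame, (2, 1, 2, 2))
print(patch)
```

Applied to every frame at the region the tracker reports, this yields the partial moving image that is compressed alongside the whole image.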
Furthermore, in the present embodiment, convenience is improved in the multi-frame mode by displaying the overall image and the partial image picture-in-picture. Fig. 8 is an explanatory diagram of the screen display in the multi-frame mode, showing an example in which the same input image as in Fig. 5 is input.
Assume now that the input image shown in Fig. 5(a) is input to the image processing units 1b and 1c. In the example of Fig. 5(a), three persons A to C have been captured in the input image. As indicated by the dashed region in Fig. 5(a), the user performs a partial designation operation designating the partial designated region that contains the face of person A. This partial designation operation determines the partial image used in the multi-frame mode, and it is assumed that the user designates the image portion of the target subject candidate given the first priority in step S19. If, however, the partial designation operation designates a partial designated region containing a person different from the target subject candidate given the first priority in step S19, the operation may be treated merely as a shift operation to the multi-frame mode before proceeding to the next step, or the target subject candidate of this operation may be set to the first priority and the priorities of the other target subject candidates reset, for example by executing the priority designation processing of step S19 again.
Assume now that person A in Fig. 5(a) is the target subject candidate of priority 1. In this case, the image (partial image) of the partial designated region indicated by the dotted line in Fig. 5(a) is extracted from the input image. While tracking the partial image using the feature detection unit 1d and the tracking change determination unit 1e1, the image processing unit 1c designates, as the partial designated region, the region of the input image containing the partial image of person A.
Following the instruction of the tracking change determination unit 1e1, the image processing unit 1c cuts out the partial image while tracking it. The image processing unit 1c outputs the partial image shown in Fig. 5(b) directly to the M compression unit 1i or 1j via the image selection unit 1f, and, after synthesizing the partial image with the overall image, outputs the composite image to the display control unit 1g via the image selection unit 1f.
That is, as shown in Fig. 8(a), the image processing unit 1c synthesizes the overall image and the partial image so that the main image based on the overall image is placed in the main display area 5b occupying substantially the whole display screen 5a of the display unit 5, and the partial image is placed as a sub-image in a partial region 5c of the display screen 5a (hereinafter, the sub display area). In other words, the image processing unit 1c displays the partial image as the sub-image (step S12).
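Purely as an illustrative aid (no code appears in the disclosure), the picture-in-picture composition of the overall image in the main display area and the partial image in the sub display area can be sketched as follows. Images are stood in for by lists of lists of characters, and the lower-right placement and sizes are assumptions:

```python
def compose_pip(main_img, sub_img):
    """Return a copy of main_img with sub_img pasted into the lower-right corner."""
    frame = [row[:] for row in main_img]
    h, w = len(main_img), len(main_img[0])
    sh, sw = len(sub_img), len(sub_img[0])
    for y in range(sh):
        for x in range(sw):
            frame[h - sh + y][w - sw + x] = sub_img[y][x]
    return frame

# Toy stand-ins: "." cells form the overall image, "A" cells the cut-out partial image.
overall = [["." for _ in range(16)] for _ in range(12)]
partial = [["A" for _ in range(4)] for _ in range(3)]
pip = compose_pip(overall, partial)
print(pip[11][15], pip[0][0])  # -> A .
```

Swapping which image is passed as `main_img` and which as `sub_img` gives the exchanged display of Fig. 8(b).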
In the multi-frame mode, the signal processing and control unit 1 determines in step S14 whether the user has touched the sub display area 5c. When the user has touched the sub display area 5c, the signal processing and control unit 1 makes the current sub-image the main image and the current main image the sub-image in step S15.
Fig. 8(b) shows this state: when the sub display area 5c is touched in the state of Fig. 8(a), the partial image that was displayed as the sub-image in the sub display area 5c is displayed as the main image in the main display area 5b, and the overall image that was displayed as the main image in the main display area 5b is displayed as the sub-image in the sub display area 5c. When the sub display area 5c is touched again in the state of Fig. 8(b), the overall image displayed as the sub-image in the sub display area 5c is displayed as the main image in the main display area 5b, and the partial image displayed as the main image in the main display area 5b is displayed as the sub-image in the sub display area 5c; that is, the display returns to that of Fig. 8(a).
When the signal processing and control unit 1 detects in step S7, during the multi-frame mode, that the user has operated the still image shooting operation unit 6b, it shifts to the moving image shooting mode. In this case, since the shift from the multi-frame mode to the moving image shooting mode has occurred, the processing moves from step S16 to step S17 and still image photography is performed.
Immediately before the still image photography of step S17, the overall image and the partial image are displayed compositely on the display screen 5a of the display unit 5, as shown in Figs. 8(a) and 8(b). Therefore, in the still image photography of step S17 performed during moving image shooting, still images of the overall image, the partial image, and the composite images of the overall image and the partial image are all recorded. That is, the overall image from the image processing units 1b and 1c is supplied via the image selection unit 1f to the S compression unit 1h for compression and recorded in the recording unit 4 by the recording control unit 1k. The image processing unit 1c cuts out the partial image, supplies it via the image selection unit 1f to the S compression unit 1h for compression, and records it in the recording unit 4 through the recording control unit 1k. In addition, the image processing unit 1c generates the composite images of the overall image and the partial image; the two composite images, namely the one in which the partial image is the sub-image and the one in which the overall image is the sub-image, are supplied via the image selection unit 1f to the S compression unit 1h for compression and recorded in the recording unit 4 by the recording control unit 1k. Thus, in the case of the shift from the multi-frame mode to the moving image shooting mode, one operation of the still image shooting operation unit 6b records four still images: the overall image, the partial image, and the two composite images of the overall image and the partial image.
Figs. 9 and 10 are explanatory diagrams showing the overall image and the partial images in the multi-frame mode. They show a case in which the input image changes from the state of Fig. 9(a) through the state of Fig. 9(b) to the state of Fig. 9(c). Fig. 9 shows a case in which three persons A to C are captured as subjects, person A among them is the target subject of priority 1, and the region containing the face of person A is designated as the partial region. Fig. 10 shows the partial images: Figs. 10(a) to 10(c) show the partial images of the partial regions indicated by the dotted lines in the input images of Figs. 9(a) to 9(c), respectively.
In the multi-frame mode, the moving image of the overall image obtained by resizing the input image and the moving image of the partial image of the partial region in the input image are recorded. Assume now that the partial region 36a indicated by the dotted line is set for the input image 35a shown in Fig. 9(a). Even when the input image changes as in the input images 35b and 35c, the tracking change determination unit 1e1 keeps tracking the designated region, as partial regions 36b and 36c. In the multi-frame mode, the moving image of the overall image obtained by resizing the input images 35a to 35c and the partial images 37a to 37c (Fig. 10) corresponding to the partial designated regions 36a to 36c are recorded.
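As an illustrative sketch (not the apparatus's actual implementation), cutting the partial image out of each input frame from a tracked region can be expressed as below; the frame data, region coordinates, and `(x, y, w, h)` convention are assumptions:

```python
def crop_partial(image, region):
    """Cut out the partial image for a tracked designated region (x, y, w, h)."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

# One toy "input image" per frame; the tracked region is re-set each frame so
# that it keeps following the subject "A" as it moves.
frames = [
    [list("....."), list(".A..."), list(".....")],
    [list("....."), list("...A."), list(".....")],
]
regions = [(1, 1, 1, 1), (3, 1, 1, 1)]
partials = [crop_partial(f, r) for f, r in zip(frames, regions)]
print(partials)  # -> [[['A']], [['A']]]
```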
According to this multi-frame mode, not only can the overall image record the surroundings of person A, the target subject the user wishes to shoot, but the partial images can also record the expressions of person A and the like. In Figs. 10(a) to 10(c), the faces 38a to 38c of person A corresponding to the input images 35a to 35c are captured, and expressions of the faces, such as those indicated by 39a to 39c, can be recorded. The user can thus record both the overall motion and the expression of the subject of interest and enjoy exactly the part he or she wants to observe.
Such effective recording is possible, for example, when shooting a scene in which a plurality of persons sing in a chorus, since each person's motion is small. However, in a case where the positions of a plurality of persons change relative to one another, such as when shooting a scene of a race among a plurality of persons, the image of the target subject being tracked may become unobservable because of other subjects and the like. In the present embodiment, therefore, an effective partial image can be recorded at all times by switching the subject recorded as the partial image.
That is, in the present embodiment, the target subject to be tracked is switched according to the priority order. When the tracking change determination unit 1e1 can no longer track the current target subject, it inquires of the other-candidate determination unit 1e2 about the target subject of the next priority. The other-candidate determination unit 1e2 excludes the currently tracked target subject and sets, in the tracking change determination unit 1e1, the target subject of the highest priority among the plurality of target subject candidates.
The tracking change determination unit 1e1 then takes the set target subject as the new tracking target, reads the feature quantity of this target subject from the partial image feature storage unit 3a to perform tracking, and outputs the tracking result to the image processing unit 1c. When the newly set target subject cannot be tracked either, the tracking change determination unit 1e1 further inquires of the other-candidate determination unit 1e2 about the target subject of the next priority. The tracking change determination unit 1e1 thus tracks the target subject that has been given the highest priority among the trackable target subjects.
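The priority fallback described above can be sketched, purely for illustration, as the following Python; the candidate list, the field names, and the `trackable` predicate are assumptions, not part of the disclosure:

```python
def select_target(candidates, trackable):
    """Pick the trackable candidate with the highest priority (lowest rank number),
    mirroring the inquiry from the tracking unit to the other-candidate unit."""
    for subject in sorted(candidates, key=lambda s: s["priority"]):
        if trackable(subject):
            return subject["name"]
    return None  # no trackable candidate at all

candidates = [{"name": "A", "priority": 1},
              {"name": "B", "priority": 2},
              {"name": "C", "priority": 3}]
# Suppose subject A is currently hidden and cannot be tracked.
print(select_target(candidates, lambda s: s["name"] != "A"))  # -> B
```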
The image processing unit 1c generates the partial image from the input image according to the tracking result of the tracking change determination unit 1e1. This partial image contains the target subject of the highest priority among the trackable target subjects, so that the partial image the user expects can be recorded.
As described above, in the present embodiment, priorities set by the user are given to a plurality of target subjects in the multi-frame mode, and the partial image containing the target subject of the highest priority among the trackable target subjects is recorded. Even when the target subject of the first priority cannot be tracked, the partial image can be obtained by tracking the target subject of the second, third, or subsequent priority, so that the shooting the user wishes can be performed. For example, when shooting a race in which the user's own child participates, not only is the expression of the child, who has been given the first priority, recorded as a partial image, but also, when the child's image is hidden by other children's images, the expressions of the other children set by the user are recorded as partial images during that period, which is effective recording for the user.
(Second Embodiment)
Figs. 11 to 17 relate to a second embodiment of the present invention, and Fig. 11 is a flowchart showing the processing flow adopted in the second embodiment. The hardware configuration of the present embodiment is the same as that of the first embodiment; only the processing flow of the signal processing and control unit 1 differs.
In the first embodiment, an example was described in which priorities are given to a plurality of target subjects and the target subject given the highest priority among the trackable target subjects is tracked to generate the partial image. In the present embodiment, by contrast, the other-candidate determination unit 1e2 selects one target subject from one or more target subject candidates according to a preset condition, based on the output of the feature detection unit 1d, and sets it in the tracking change determination unit 1e1. For example, the other-candidate determination unit 1e2 selects one target subject according to its positional relationship with the target subject being tracked, its motion, the number of times it has been selected, its selection time, and the like, and designates this target subject in the tracking change determination unit 1e1.
Alternatively, as in the first embodiment, the other-candidate determination unit 1e2 may give priorities to a plurality of target subjects and, when no trackable target subject exists among those given priorities, select one target subject from the target subjects not given priorities according to the preset condition and set it in the tracking change determination unit 1e1.
The other-candidate determination unit 1e2 may also control the display control unit 1g so that the live view image contains a display indicating the range of each target subject candidate's partial region and a display indicating the target subject being tracked.
Fig. 11 shows the processing corresponding to the multi-frame moving image shooting processing of step S13 of Fig. 6, and Fig. 12 is an explanatory diagram for describing the switching of the target subject. Figs. 12(a1) to 12(a3) show an example of live view images obtained from temporally successive captured images of a race among three persons A to C, as composite images with the overall image as the main image and the partial image as the sub-image. Figs. 12(b1) and 12(b2) show live view images corresponding to Fig. 12(a1), Figs. 12(b3) and 12(b4) show those corresponding to Fig. 12(a2), and Figs. 12(b5) and 12(b6) show those corresponding to Fig. 12(a3), as composite images with the overall image as the sub-image and the partial image as the main image. Fig. 12 shows an example in which no priorities are given and the mode shifts to the multi-frame mode through a partial designation operation designating the partial region containing person A.
The input image changes from the state of Fig. 12(a1) through the state of Fig. 12(a2) to the state of Fig. 12(a3). The live view images 41a to 41c of the input images shown in Figs. 12(a1) to 12(a3) each contain frame displays (dotted lines) indicating the partial regions containing the respective target subjects, and a sub-image 42a to 42c showing the partial image of the target subject currently being tracked.
It is also conceivable that the user wants to shoot while observing the partial image (close-up image) rather than the overall image. In this case, touching the image portion of the sub-image 42a to 42c of Figs. 12(a1) to 12(a3) makes the partial image the main image, as in Figs. 12(b1) to 12(b6), allowing photography that emphasizes confirming the expression. The images shot in the multi-frame mode may also be reproduced and displayed in the manner shown in Fig. 12.
In step S21 of Fig. 11, the signal processing and control unit 1 starts recording the overall image and the partial image as separate moving images. For each target subject in the input captured image, the feature detection unit 1d obtains the feature quantity of the face and the like and the positional information on the screen, and records them in the partial image feature storage unit 3a (step S22). The captured persons, their faces, and their positions at the time of shooting can thus be recorded for each captured image.
In the following step S23, the other-candidate determination unit 1e2 determines whether the image of the partial region currently being tracked satisfies the condition required of the partial image to be recorded. For example, when tracking is impossible, when the face of the person in the tracked partial region is hidden, or when the period during which the subject is the tracking target has been exceeded, the other-candidate determination unit 1e2 determines that the partial region does not satisfy the required condition.
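The step S23 check can be written down, as an illustrative sketch under assumed parameter names, as a simple predicate over the tracking state:

```python
def partial_region_ok(trackable, face_hidden, elapsed_s, limit_s):
    """Step S23 sketch: the tracked region qualifies as a recordable partial
    image only if it can still be tracked, its face is not hidden, and its
    allotted tracking period has not been exceeded."""
    return trackable and not face_hidden and elapsed_s <= limit_s

print(partial_region_ok(True, False, 2.0, 5.0))  # True: keep recording (steps S24/S25)
print(partial_region_ok(True, True, 2.0, 5.0))   # False: look for another candidate (step S31)
```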
In the live view image 41a of Fig. 12(a1), the partial regions containing the images of persons A to C hardly overlap, so the other-candidate determination unit 1e2 sets the partial region containing the image of person A in the tracking change determination unit 1e1. As shown in Figs. 12(b1) and 12(b2), in the live view images 43a1 and 43a2, the image of the partial region containing person A is therefore displayed as the main image as partial images 44a1 and 44a2, and the overall image 44 is displayed as the sub-image.
That is, in the state of the live view image 41a of Fig. 12(a1), the other-candidate determination unit 1e2 determines in step S23 that the image of the partial region satisfies the required condition. In this case, the signal processing and control unit 1 continues the moving image shooting of the tracked partial region in step S24 and records the moving image in step S25.
Then, at the moment the live view image 41b shown in Fig. 12(a2) is displayed, the partial region containing the tracked person A is hidden by the image of person C. The same applies at the moment the live view image 41c shown in Fig. 12(a3) is displayed.
The other-candidate determination unit 1e2 therefore shifts the processing to step S31 and determines whether another candidate exists. When a target subject given a priority exists besides the target subject of person A, the processing shifts to step S38, and the other-candidate determination unit 1e2 sets the target subject of the next priority in the tracking change determination unit 1e1. The tracking change determination unit 1e1 tracks the set target subject by reading the information of the partial image feature storage unit 3a, and sets, in the image processing unit 1c, the partial region containing the tracked target subject. The partial image of the new partial region is thus recorded as a moving image (steps S24 and S25).
In the example of Fig. 12, no priorities have been set, so the other-candidate determination unit 1e2 shifts the processing to the following step S32 and determines whether there is a target subject hiding the partial region of the target subject being tracked. In the live view image 41b of Fig. 12(a2), the partial region of the tracked person A is hidden by the image of person C, so the other-candidate determination unit 1e2 performs the processing of step S35 with person C as the target subject.
In step S35, it is determined whether the target subject of person C has not been selected excessively. For example, when shooting a race in which the user's own child participates, the user may want to record only his or her own child as the partial image. In a race, however, the persons' motions are violent, and the image of the user's child is sometimes hidden by the image of another child. In this case, by shooting as the partial image the image of the other child hiding the user's child, not only the user's child but also the changing expression of the other child and the like can be recorded, enabling richly varied shooting. For this reason, partial images are also recorded for target subjects other than the target subject given a priority or designated by the user.
However, the other child is also recorded in the overall image, and what the user originally wanted to record as the partial image is not the other child but his or her own child. Therefore, when a subject other than the target subject designated by the user is shot, restrictions are placed on the number of recordings, the recording frequency, the recording time, and the like. In step S35, the other-candidate determination unit 1e2 determines whether these restrictions on the number of recordings or the recording frequency have been exceeded for the reselected target subject.
When the selected target subject has not been selected excessively in step S35, the other-candidate determination unit 1e2 makes the selected target subject the tracking target for a predetermined time (for example, a period of several seconds) in the following step S36. To obtain the information on the number of recordings and the frequency, the other-candidate determination unit 1e2 stores the information on the newly set tracking target in a storage unit, not shown. The other-candidate determination unit 1e2 then sets the new target subject in the tracking change determination unit 1e1 (step S38).
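One way to picture the interplay of steps S35 and S36, offered only as a sketch under assumed limits (a selection count cap and a hold period measured in ticks rather than seconds), is a small scheduler:

```python
class CandidateScheduler:
    """Sketch of steps S35/S36: a non-designated subject may take over tracking
    for a fixed hold period, and only a limited number of times overall."""

    def __init__(self, max_selections, hold_ticks):
        self.max_selections = max_selections
        self.hold_ticks = hold_ticks
        self.counts = {}       # per-subject selection counts (the "not shown" storage)
        self.current = None
        self.remaining = 0

    def try_select(self, name):  # step S35: over-selection check, then S36: hold
        if self.counts.get(name, 0) >= self.max_selections:
            return False
        self.counts[name] = self.counts.get(name, 0) + 1
        self.current, self.remaining = name, self.hold_ticks
        return True

    def tick(self, fallback):    # revert to the designated subject when the period ends
        if self.remaining > 0:
            self.remaining -= 1
        if self.remaining == 0:
            self.current = fallback
        return self.current

sched = CandidateScheduler(max_selections=2, hold_ticks=2)
print(sched.try_select("C"), sched.tick("A"), sched.tick("A"))  # True C A
print(sched.try_select("C"), sched.try_select("C"))             # True False
```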
As shown in Fig. 12(b3), the live view image 43b1 thus becomes a composite image in which the image of the partial region containing person C is displayed as the main image as partial image 44b1 and the overall image 44 is displayed as the sub-image. That is, the tracking change determination unit 1e1 tracks person C as the tracking target subject during the predetermined period and outputs the tracking result to the image processing unit 1c. The signal processing and control unit 1 thus continues the moving image shooting of the partial image of person C in step S24 and records the moving image in step S25.
Like Fig. 12(b3), the live view image 43b2 of Fig. 12(b4) shows the state in which the image of the partial region containing person C is displayed as the main image as partial image 44b2 and the overall image 44 is displayed as the sub-image. If the partial region of person A that was hidden by person C is no longer hidden before the predetermined period set in step S36 elapses, the other-candidate determination unit 1e2 may shift from step S23 through step S31 to step S38, set person A as the target subject, and make the image of the partial region of person A the partial image.
Then, at the moment the live view image 41c shown in Fig. 12(a3) is displayed, the partial region containing person A designated by the user is not hidden by the images of the other persons B and C, but a comparison of the live view images 41b and 41c shows that the image of person B moves greatly. In this case, the other-candidate determination unit 1e2 determines in step S33 that there is a target subject whose motion is greater than a predetermined threshold. The other-candidate determination unit 1e2 then determines whether person B, whose motion is large, has not been selected excessively (step S35).
As shown in Fig. 12(b5), the live view image 43c1 thus becomes a composite image in which the image of the partial region containing person B is displayed as the main image as partial image 44c1 and the overall image 44 is displayed as the sub-image. That is, the tracking change determination unit 1e1 tracks person B as the tracking target subject during the predetermined period and outputs the tracking result to the image processing unit 1c. The signal processing and control unit 1 thus continues the moving image shooting of the partial image of person B in step S24 and records the moving image in step S25.
When the tracking period of person B ends, the other-candidate determination unit 1e2 shifts the processing from step S31 to step S32. When no applicable target subject exists in steps S32 and S33, the other-candidate determination unit 1e2 shifts to step S34 and determines whether there is an unselected subject. When an unselected subject exists, the other-candidate determination unit 1e2 selects one target subject from among them and repeats step S36 and the subsequent processing. When no unselected subject exists, the other-candidate determination unit 1e2 instructs the image processing unit 1c to set the overall image as the partial image (step S39).
Even when no priorities are given, when the user designates a target subject by a partial designation operation, the other-candidate determination unit 1e2 may make that target subject the new tracking target. Fig. 12(b6) shows this state: the live view image 43c2 becomes a composite image in which the image of the partial region containing person A is displayed as the main image as partial image 44c2 and the overall image 44 is displayed as the sub-image.
Richly varied shooting is thus possible by selecting, as the partial image, the subject hiding the target subject designated by the user, a subject moving greatly in the screen, or the like.
Fig. 13 is a flowchart specifically showing an example of the processing of step S32 of Fig. 11, and Fig. 14 is an explanatory diagram specifically showing an example of that processing.
Figs. 14(a) to 14(c) show changes in the captured image: the captured image 51a shown in Fig. 14(a) changes to the captured image 51b shown in Fig. 14(b), and further to the captured image 51c shown in Fig. 14(c). In Fig. 14, persons A to C are captured in the captured images 51a and 51b, and persons B and C are captured in the captured image 51c. The description assumes that, in the captured image 51a, person A is the target subject being tracked.
In step S41, the other-candidate determination unit 1e2 determines, based on preset facial features, whether a person's face is present around the partial region containing the target subject being tracked. For example, in the captured image 51a of Fig. 14(a), the other-candidate determination unit 1e2 detects the face of person C around the partial region 52a (dotted line) of person A. In Fig. 14(a), the partial region 53c containing person C is also indicated by a dotted line.
The other-candidate determination unit 1e2 then determines whether the face is approaching the partial region containing the target subject being tracked. When it is not approaching, the other-candidate determination unit 1e2 proceeds to the No branch of step S32 of Fig. 11.
For example, when the face of person C approaches the partial region 52b of person A as in the captured image 51b of Fig. 14(b), the other-candidate determination unit 1e2 shifts the processing to step S43 and determines whether the facial features in the tracked partial region have disappeared. When the face of person A is hidden by person C as in the captured image 51b, person A cannot be identified from the facial feature quantity of person A obtained from the captured image 51b. In this case, the other-candidate determination unit 1e2 makes the approaching face, that of person C, the target subject (step S44) and proceeds to the Yes branch of step S32. As shown in Fig. 14(c), person C thus becomes the tracking target in the captured image 51c, and the partial image is generated from the partial region 53c containing person C.
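The Fig. 13 flow can be sketched, purely for illustration (the coordinate scheme, the proximity radius, and all names are assumptions), as:

```python
def pick_occluder(tracked_features_visible, region_center, nearby_faces, radius):
    """Fig. 13 sketch: when a nearby face has come close to the designated
    region (S42) and the tracked subject's facial features have disappeared
    (S43), adopt that face as the new target subject (S44)."""
    if tracked_features_visible:
        return None  # S43 No: keep tracking the designated subject
    cx, cy = region_center
    for name, (fx, fy) in nearby_faces.items():
        if abs(fx - cx) <= radius and abs(fy - cy) <= radius:
            return name
    return None

# Person C's face has moved next to person A's region and A's face is no longer found.
print(pick_occluder(False, (10, 10), {"C": (12, 9)}, radius=5))  # -> C
print(pick_occluder(True, (10, 10), {"C": (12, 9)}, radius=5))   # -> None
```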
Figs. 15 to 17 specifically show an example of the processing of step S33 of Fig. 11: Fig. 15 is a flowchart, Fig. 16 is an explanatory diagram, and Fig. 17 is a graph.
Figs. 16(a) and 16(b) show a change in the captured image: the captured image 61 shown in Fig. 16(a) changes to the captured image 62 shown in Fig. 16(b). Fig. 16 shows that the faces 63a to 63c of persons A to C are captured in the captured image 61, and the faces 64a to 64d of persons A to D are captured in the captured image 62. The description assumes that, for the captured images 61 and 62, the target subject being tracked is person A.
In step S51, the other-candidate determination unit 1e2 determines, based on preset facial features, whether face images are present in the captured image. Since persons' faces are captured in the captured images 61 and 62 of Fig. 16, the other-candidate determination unit 1e2 shifts the processing to the following step S52.
In step S52, the other-candidate determination unit 1e2 calculates the change in position of each face from its previous position. Fig. 17 plots time on the horizontal axis and the horizontal position in the image on the vertical axis, showing the horizontal positions of the faces 63a to 63c and 64a to 64d. Time T1 of Fig. 17 corresponds to the shooting moment of the captured image 61, and time T2 to that of the captured image 62.
Fig. 17 shows the position of each face with the left side of the image as the reference; at time T1, face B is located to the right of face C. At time T2, the faces are located further toward the right of the image in the order of faces D, C, and B. The other-candidate determination unit 1e2 obtains, for each face, the difference between its positions at times T1 and T2. In the example of Fig. 16, the positional changes of faces B and C differ, and, as shown in Fig. 17, the positional change Δb of face B is greater than the positional change Δc of face C.
In step S53, the other-candidate determination unit 1e2 determines whether the positional changes of the faces differ. When they do not differ, the other-candidate determination unit 1e2 proceeds to the No branch of step S33 of Fig. 11. In the example of Fig. 16, the other-candidate determination unit 1e2 shifts the processing to step S54, makes face B, whose positional change is large, the target subject to be tracked (step S54), and proceeds to the Yes branch of step S33. In the example of the captured images 61 and 62, person B thus becomes the tracking target, and the partial image is generated from the partial region containing person B.
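The position-change comparison of Fig. 15 can be put, as an illustrative sketch only (positions, the minimum gap, and all names are assumptions), like this:

```python
def pick_fast_mover(pos_t1, pos_t2, min_gap):
    """Fig. 15 sketch (steps S52 to S54): compare each face's horizontal
    position at times T1 and T2 and pick the face whose positional change
    clearly exceeds the others'; return None when the changes do not differ
    enough (step S53, No)."""
    deltas = {n: abs(pos_t2[n] - pos_t1[n]) for n in pos_t1 if n in pos_t2}
    ranked = sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) >= 2 and ranked[0][1] - ranked[1][1] >= min_gap:
        return ranked[0][0]
    return None

# Horizontal positions of faces B and C at T1 and T2 (face A is the tracked subject).
print(pick_fast_mover({"B": 30, "C": 50}, {"B": 80, "C": 60}, min_gap=10))  # -> B
```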
As described above, in the present embodiment, one target subject is selected according to the set condition in the multi-frame mode and its image is set as the partial image. Richly varied shooting can thus be performed, and the image the user expects can be recorded.

Claims (5)

1. An imaging apparatus, characterized in that the imaging apparatus comprises:
an image pickup unit that outputs a captured image obtained by shooting a subject;
a first image acquisition unit that acquires an image of a first angle of view from the captured image;
a second image acquisition unit that acquires, from the captured image, one or more second images of a second angle of view narrower than the first angle of view;
a recording control unit that performs moving image recording of one image among the one or more second images; and
a control unit that, when the second image being recorded by the recording control unit does not satisfy a first subject condition, switches to another second image satisfying a second subject condition among the one or more second images acquired by the second image acquisition unit and causes the recording control unit to perform the moving image recording.
2. The photographic apparatus according to claim 1, characterized in that
the control section sets, as the first subject condition, at least one of: whether the second image being recorded can be tracked; whether the second image being recorded is not occluded; and whether the recording period of the second image being recorded has been exceeded.
3. The photographic apparatus according to claim 1 or 2, characterized in that
the control section sets, as the second subject condition, at least one of: whether the image has been given a priority order; whether the image occludes the second image being recorded; whether the image's motion in the photographed image is greater than a predetermined threshold; and whether the image has not yet been selected.
4. The photographic apparatus according to claim 1, characterized in that
the control section gives a priority order to the one or more second images acquired by the second image acquiring section, selects the second image of higher priority order among the other second images satisfying the second subject condition, and causes the recording control section to perform the moving-image recording.
5. The photographic apparatus according to claim 1, characterized in that,
when the second image being recorded does not satisfy the first subject condition, the control section selects the other second image to be recorded next according to the priority order given to the second images, and when the other second image to be recorded next cannot be selected according to the priority order, selects the other second image to be recorded next using, as the second subject condition, at least one of: whether the image occludes the second image being recorded; whether the image's motion in the photographed image is greater than a predetermined threshold; and whether the image has not yet been selected.
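The switching behavior recited in claims 1 and 5 — record one cropped second image, and when it fails the first subject condition switch to another candidate, preferring the assigned priority order and falling back to the second subject condition — can be sketched roughly as follows. All class, field, and function names are illustrative assumptions, not terminology from the patent, and of the fallback conditions only the "not yet selected" test is modeled here.

```python
# Hypothetical sketch of the claim-5 switching logic. "SecondImage"
# stands in for one cropped second image and its per-frame state.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SecondImage:
    name: str
    trackable: bool = True          # first condition: still trackable?
    occluded: bool = False          # first condition: not occluded?
    priority: Optional[int] = None  # lower value = higher priority
    already_selected: bool = False

def satisfies_first_condition(img: SecondImage) -> bool:
    # First subject condition from claim 2 (recording-period check omitted).
    return img.trackable and not img.occluded

def pick_next(current: SecondImage,
              candidates: List[SecondImage]) -> Optional[SecondImage]:
    """Choose the other second image to record next once `current` fails."""
    others = [c for c in candidates if c is not current]
    # Claim 5: first try the priority order given to the second images.
    # Requiring the candidate itself to pass the first condition is an
    # assumption added here as a sanity guard.
    prioritized = [c for c in others
                   if c.priority is not None and satisfies_first_condition(c)]
    if prioritized:
        return min(prioritized, key=lambda c: c.priority)
    # Fallback second subject condition: a not-yet-selected image.
    unselected = [c for c in others
                  if not c.already_selected and satisfies_first_condition(c)]
    return unselected[0] if unselected else None
```

For example, if the image being recorded becomes untrackable and two candidates carry priorities 2 and 1, the priority-1 candidate is chosen; with no prioritized candidates, the first unselected trackable image is chosen instead.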
CN201210262811.XA 2011-09-08 2012-07-26 Photographic equipment Expired - Fee Related CN103002207B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2011-196257 2011-09-08
JP2011196257A JP5694097B2 (en) 2011-09-08 2011-09-08 Photography equipment

Publications (2)

Publication Number Publication Date
CN103002207A true CN103002207A (en) 2013-03-27
CN103002207B CN103002207B (en) 2016-04-27

Family

ID=47930304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210262811.XA Expired - Fee Related CN103002207B (en) 2011-09-08 2012-07-26 Photographic equipment

Country Status (2)

Country Link
JP (1) JP5694097B2 (en)
CN (1) CN103002207B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454135A (en) * 2016-11-29 2017-02-22 维沃移动通信有限公司 Photographing reminding method and mobile terminal

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014236405A (en) * 2013-06-04 2014-12-15 NEC Casio Mobile Communications, Ltd. Imaging control device, imaging control method, program therefor and electronic equipment
EP4325879A1 (en) * 2018-10-15 2024-02-21 Huawei Technologies Co., Ltd. Method for displaying image in photographic scene and electronic device
JP7325180B2 (en) * 2018-12-11 2023-08-14 キヤノン株式会社 Tracking device and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090238550A1 (en) * 2008-03-19 2009-09-24 Atsushi Kanayama Autofocus system
CN101894375A * 2009-05-21 2010-11-24 Fujifilm Corp. Person tracking method and person tracking apparatus
JP2010268019A (en) * 2009-05-12 2010-11-25 Nikon Corp Photographing apparatus
CN101960834A * 2008-03-03 2011-01-26 Sanyo Electric Co., Ltd. Imaging device
CN102148931A * 2010-02-09 2011-08-10 Sanyo Electric Co., Ltd. Image sensing device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4228391B2 * 2003-10-07 2009-02-25 Victor Company of Japan, Ltd. Monitoring system
JP2006109119A (en) * 2004-10-06 2006-04-20 Omron Corp Moving image recorder and moving image reproducing apparatus
JP2009229586A (en) * 2008-03-19 2009-10-08 Fujinon Corp Autofocus system


Also Published As

Publication number Publication date
JP5694097B2 (en) 2015-04-01
CN103002207B (en) 2016-04-27
JP2013058921A (en) 2013-03-28

Similar Documents

Publication Publication Date Title
CN103037164B (en) Camera head and image capture method
CN1972414B (en) Information processing apparatus, imaging device, information processing method
US20110085778A1 (en) Imaging device, image processing method, and program thereof
CN101419666A (en) Image processing apparatus, image capturing apparatus, image processing method and recording medium
JP2007104529A (en) Digital camera and time lag setting method
JP5940394B2 (en) Imaging device
CN103002207A (en) Camera shooting device
JP2007142565A (en) Imaging apparatus and method thereof
CN103369244A (en) Image synthesis apparatus and image synthesis method
CN102469267A (en) Image producing apparatus
JP2014017665A (en) Display control unit, control method for display control unit, program, and recording medium
JP6529108B2 (en) Image processing apparatus, image processing method and image processing program
JP2015091048A (en) Imaging apparatus, imaging method, and program
JP6918605B2 (en) Imaging control device, control method, program, and storage medium
JP5785038B2 (en) Photography equipment
JP2019047270A (en) Recording controller and control method thereof
CN107800956B (en) Image pickup apparatus, control method, and storage medium
KR20160022247A (en) Image extracting apparatus, image extracting method and computer program stored in recording medium
JP5965037B2 (en) Photography equipment
JP5942002B2 (en) Imaging device and method for controlling imaging device
CN108132705A (en) Electronic equipment, control method and storage medium
JP2018026608A (en) Imaging device, control method and program of imaging device, and storage medium
JP5293769B2 (en) Imaging apparatus and program thereof
KR100917166B1 (en) Lecture shooting device and lecture shooting method
JP7532042B2 (en) IMAGING CONTROL DEVICE, CONTROL METHOD FOR IMAGING CONTROL DEVICE, PROG

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20151208

Address after: Tokyo, Japan

Applicant after: OLYMPUS Corp.

Address before: Tokyo, Japan

Applicant before: Olympus Imaging Corp.

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211206

Address after: Tokyo, Japan

Patentee after: Aozhixin Digital Technology Co.,Ltd.

Address before: Tokyo, Japan

Patentee before: OLYMPUS Corp.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160427

CF01 Termination of patent right due to non-payment of annual fee