CN102265597B - Image pickup equipment - Google Patents
Image pickup equipment
- Publication number
- CN102265597B CN102265597B CN2009801522398A CN200980152239A CN102265597B CN 102265597 B CN102265597 B CN 102265597B CN 2009801522398 A CN2009801522398 A CN 2009801522398A CN 200980152239 A CN200980152239 A CN 200980152239A CN 102265597 B CN102265597 B CN 102265597B
- Authority
- CN
- China
- Prior art keywords
- subject
- image
- display part
- zone
- mark
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
- G03B17/18—Signals indicating condition of a camera member or suitability of light
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B15/00—Special procedures for taking photographs; Apparatus therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- Indication In Cameras, And Counting Of Exposures (AREA)
Abstract
The present invention provides image pickup equipment provided with: an image pickup section (110) that generates image data by picking up an image of a subject; a system control section (150) that automatically determines the chief subject from among the subjects contained in the image indicated by the generated image data; an operation section (160) that accepts a selection made by the user as to which of the subjects included in the image indicated by the generated image data should be selected as the chief subject; and a display section (170) that displays, in superimposed fashion, the image indicated by the generated image data and a display of the subject designated as the chief subject. The subject display used when a subject automatically determined by the system control section (150) is displayed as the chief subject differs from the subject display used when a subject selected by the user with the operation section (160) is displayed as the chief subject.
Description
Technical field
The present invention relates to an image pickup apparatus that performs subject recognition and can display an indication of the recognized subject.
Background art
Conventionally, there has been a technique of obtaining a feature quantity of the subject of greatest interest to the photographer (hereinafter called the "main subject") and, based on this feature quantity, estimating from image data the region in which the main subject exists. In particular, since this processing sequentially obtains the region in which the main subject exists from continuously input image data, i.e. moving image data, and thereby moves so as to keep up with a moving main subject, it is in most cases called follow processing or tracking processing.
There are many image pickup apparatuses that obtain the position of the main subject by this tracking processing and perform shooting control so that the main subject can be photographed appropriately. Such shooting control includes focus control that brings the main subject into focus, exposure control that adjusts the brightness of the main subject to an appropriate level, and framing control such as panning, tilting and zooming so that the main subject is kept at the center of the picture.
Patent Document 1 discloses an image pickup apparatus in which, when the main subject for this tracking processing is selected, the photographer can simply select the desired region. In this camera, when a subject is recognized, a frame or the like is displayed on the recognized subject in the picture, and the user selects the desired one of the frame-displayed subjects by an external operation or based on line-of-sight information.
Patent Document 1: Japanese Laid-Open Patent Publication No. H5-328197
However, in the image pickup apparatus described above, the regions in which subjects have been recognized are displayed in advance when a range centered on the position designated by the user is to be selected as the subject region. Therefore, even though regions other than the displayed regions can also be set as the subject region, the photographer may be misled into thinking that only the displayed recognized-subject regions can be selected.
Summary of the invention
An object of the present invention is to provide an image pickup apparatus that improves the user's convenience when the photographer selects a subject region.
In order to solve the above problem, the image pickup apparatus of the present invention includes: an imaging section that captures a subject image to generate an image; a subject detection section that detects a subject from the generated image; a display section capable of displaying the generated image and a mark indicating the subject detected by the subject detection section; an operation section that accepts an operation for setting the operating mode to a subject selection mode in which an arbitrary position on the screen of the display section can be selected; and a control section that, when the operation section has accepted the operation for changing to the subject selection mode while the display section is displaying the mark, controls the display section to erase the display of the mark.
When a subject is to be selected, the image pickup apparatus of the present invention first stops displaying the marks of the detected regions. The photographer can thereby recognize that not only the marked regions but also any subject region in the display screen can be selected. In this way, the photographer is not constrained by the subject extraction result and can freely designate a subject. That is, the user's convenience in the photographer's selection of a subject region can be improved.
Description of drawings
Fig. 1 is a block diagram showing the structure of the video camera according to Embodiment 1 of the present invention.
Fig. 2A is a flowchart showing the processing at the time of subject selection in Embodiment 1 of the present invention.
Fig. 2B is a flowchart showing the processing at the time of subject selection in Embodiment 1 of the present invention.
Fig. 3 shows display examples of the display device at the time of main-subject candidate display in Embodiment 1 of the present invention.
Fig. 4 shows display examples of the display device at the time of main-subject designation in Embodiment 1 of the present invention.
In the figures:
110 optical system
130 feature detection section
140 recording I/F section
150 system control section
160 operation section
170 display section
Embodiment
Embodiments of the present invention are described in detail below with reference to the drawings.
Embodiment 1
1-1. Structure of the image pickup apparatus
Fig. 1 is a block diagram showing the structure of the video camera 100 according to Embodiment 1 of the present invention. In Fig. 1, the video camera 100 is indicated by the dashed outline. Only the parts of the video camera 100 that are relevant to the present invention are shown in Fig. 1.
The A/D conversion section 115 converts the analog video signal from the optical system 110 into a digital video signal. The video signal processing section 120 applies known video signal processing, such as gain adjustment, noise removal, gamma correction, aperture processing and knee processing, to the digital video signal output from the A/D conversion section 115. The Y/C conversion section 125 converts the digital video signal from the RGB format into the Y/C format. The digital video signal converted into the Y/C format by the Y/C conversion section 125 is stored in the buffer memory 135 as digital video information via the system bus 132, subjected to irreversible compression by the CODEC 137, and then recorded, again via the system bus 132, by the recording I/F section 140 onto the recording medium 147 that is electrically connected to the slot 145.
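As a rough illustration of the data path just described (A/D output, video signal processing, Y/C conversion, buffer memory), the following Python sketch runs one frame through simplified stand-ins for those stages. The gain/gamma-only processing stage and the BT.601 conversion coefficients are assumptions for illustration; the patent does not specify the actual algorithms of the video signal processing section 120 or the Y/C conversion section 125.

```python
import numpy as np

def process_frame(raw_frame, gain=1.0, gamma=2.2):
    """Video-signal-processing stand-in: gain adjustment followed by gamma correction."""
    frame = np.clip(raw_frame * gain, 0.0, 1.0)
    return frame ** (1.0 / gamma)

def rgb_to_ycbcr(rgb):
    """Y/C conversion stand-in using BT.601 coefficients; input is an HxWx3 array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return np.stack([y, cb, cr], axis=-1)

# One frame through the path: digitized sensor output -> processing -> Y/C -> buffer memory.
buffer_memory = []
raw = np.random.rand(480, 640, 3)                      # stand-in for the A/D conversion output
buffer_memory.append(rgb_to_ycbcr(process_frame(raw)))
```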
The CODEC 137 compresses the image data by performing DCT (discrete cosine transform), Huffman coding and the like. The CODEC 137 compresses the image data in a compression format based on, for example, the MPEG-2 or H.264 specification. A format other than MPEG-2 and H.264 may also be used as the compression format. When compressed image data is to be reproduced on the display section 170 or the like, the CODEC 137 decodes the image data into an uncompressed state. The CODEC 137 generates the AV data to be recorded on the recording medium 147 on the basis of the compressed image data and compressed audio data (audio is not described in Embodiment 1). Furthermore, the CODEC 137 decodes the AV data recorded on the recording medium 147 to generate compressed image data and compressed audio data.
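The DCT mentioned above is the standard 2-D DCT-II; a short, didactic sketch follows. This is not the actual transform or entropy-coding implementation of the CODEC 137, whose internals are not given here, only an illustration of the transform stage that precedes quantization and coding in DCT-based codecs.

```python
import numpy as np

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (unoptimized, for illustration only)."""
    n = 8
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))  # basis[u, k]
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    c = scale[:, None] * basis
    return c @ block @ c.T

block = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 pixel block
coeffs = dct2_8x8(block)
print(coeffs[0, 0])                                # DC coefficient = 8 * mean of the block
```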
The recording I/F section 140 can be mechanically and electrically connected to the recording medium 147. The recording I/F section 140 reads data from the recording medium 147 and writes data to the recording medium 147. The recording medium 147 can store various data such as AV data. The recording medium 147 may be removably attached to the video camera 100 or built into the video camera 100. The recording medium 147 is, for example, a semiconductor memory card, a hard disk, a DVD (Digital Versatile Disc) or a BD (Blu-ray Disc).
1-2. Extraction processing for the main subject
Next, the extraction processing for the main subject region is described. The feature detection section 130 extracts, from the image, regions with a high possibility of containing the main subject, based on the feature quantity of the main subject. In the present embodiment the main subject is a face, and the feature quantity of the main subject is, for example, a feature quantity expressing face-likeness. The feature quantity of the main subject is, for example, fixed and held in a ROM (Read Only Memory) in the feature detection section 130. The feature detection section 130 compares the input digital video signal with the feature quantity of the main subject stored in the ROM. The input digital video signal is appropriately subjected to image reduction processing, stored as image data in a memory (not shown) in the feature detection section 130, and then compared with the feature quantity of the main subject. One extraction method for the main subject is the method called "template matching". In this method, the pixel group of the main subject region itself is used as the subject feature quantity, pixel-level comparison is performed against candidate regions for the subject set at a plurality of positions in the image data, and the candidate region with the highest similarity is set as the main subject region. Here, the candidate regions for the subject are selected, for example, within a fixed range centered on the position of the main subject region detected the previous time. This is based on the assumption that the main subject cannot move greatly in a short time; by ignoring similar subjects existing at positions far from the main subject, the amount of processing can be reduced and the detection accuracy improved.
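A minimal sketch of the template-matching step described above, assuming a grayscale frame, a sum-of-squared-differences similarity measure, and a fixed search radius around the previously detected position; the actual similarity measure and search-range policy of the feature detection section 130 are not specified in the text.

```python
import numpy as np

def match_main_subject(frame, template, last_pos, search_radius=32):
    """Template matching restricted to a window around the previous main-subject position.

    frame:    HxW grayscale image (float); template: hxw pixel group of the main subject;
    last_pos: (row, col) top-left corner found in the previous frame.
    Returns the (row, col) of the candidate region with the lowest sum of squared differences.
    """
    h, w = template.shape
    best_pos, best_score = last_pos, np.inf
    r0, c0 = last_pos
    for r in range(max(0, r0 - search_radius), min(frame.shape[0] - h, r0 + search_radius) + 1):
        for c in range(max(0, c0 - search_radius), min(frame.shape[1] - w, c0 + search_radius) + 1):
            candidate = frame[r:r + h, c:c + w]
            score = np.sum((candidate - template) ** 2)   # lower score = higher similarity
            if score < best_score:
                best_pos, best_score = (r, c), score
    return best_pos
```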
In addition, in the present embodiment the main subject may also be a subject other than a face. The feature quantity of the main subject in that case is the color of the non-face position selected by the user. The feature quantity of the main subject is stored as image data in a memory (not shown) in the system control section 150 when a position other than a face is selected by the user. In this case, the feature detection section 130 compares the input digital video signal with the image data stored in the memory (not shown) in the system control section 150, and thereby performs the extraction processing for the main subject region.
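For the non-face case, a color feature of the user-selected patch could look like the following sketch. The normalized per-channel histogram and histogram-intersection similarity are assumptions for illustration; the text only states that the color at the selected position is used as the feature quantity.

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Color feature of a patch: concatenated, normalized per-channel histograms (values in [0, 1])."""
    hist = np.concatenate([np.histogram(patch[..., ch], bins=bins, range=(0.0, 1.0))[0]
                           for ch in range(patch.shape[-1])]).astype(float)
    return hist / hist.sum()

def color_similarity(patch_a, patch_b):
    """Histogram intersection; 1.0 means identical color distributions."""
    return float(np.minimum(color_histogram(patch_a), color_histogram(patch_b)).sum())

selected = np.random.rand(40, 40, 3)     # patch around the user-selected, non-face position
candidate = np.random.rand(40, 40, 3)    # candidate region in a later frame
print(color_similarity(selected, candidate))
```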
Various techniques related to this kind of subject detection have been, and continue to be, researched and developed. Various methods can be used for the subject feature quantity and the subject detection method, and a plurality of techniques may be used in combination.
1-3. Correspondence
The structure composed of the optical system 110 and the video signal processing section 120 is an example of the imaging section. The feature detection section 130 is an example of the subject detection section. The display section 170 is an example of the display section. The operation section 160 is an example of the operation section. The system control section 150 is an example of the control section. The operation section 160 is also an example of a subject selection operation section.
2. Operation
The operation at the time of subject selection in the video camera 100 is described with reference to the flowcharts of Fig. 2A and Fig. 2B. The display section 170 of the video camera 100 displays the image being captured, which is generated through the optical system 110, the A/D conversion section 115, the video signal processing section 120 and the Y/C conversion section 125.
In Fig. 2A, when the face detection button constituting the operation section 160 is pressed by the photographer, the feature detection section 130 starts the face detection operation (Yes in S200). The feature detection section 130 then performs the face detection operation on the image data from the video signal processing section 120 (S201).
Next, the system control section 150 checks whether there is a region in which a face has been detected by the feature detection section 130 (S202). When there is no region in which a face has been detected by the feature detection section 130 (No in S202), the system control section 150 continues the face detection operation (S201). When there is a region in which a face has been detected (Yes in S202), the system control section 150 uses the region display section 171 to display a mark on the detected region (step S203). Specifically, the region display section 171 outputs to the display section 170 an image on which the mark display has been superimposed. For example, as shown in Fig. 3(a), marks 304 and 305 are displayed on the regions 301 and 302 in which faces have been detected.
Then, the system control section 150 detects whether the subject selection key constituting the operation section 160 has been pressed (S204). When the subject selection key has been pressed (Yes in S204), the system control section 150 shifts the video camera 100 to the subject selection mode and erases the marks 304 and 305 of the detected regions that were superimposed on the display section 170 by the region display section 171 (S205). That is, when the subject selection key is pressed, the system control section 150 does not display the regions indicated by the marks 304 and 305 of Fig. 3(a), as shown in Fig. 3(b). By erasing the marks 304 and 305 in this way, the photographer can recognize that, on the display screen of the display section 170 shown in Fig. 3(b), it is possible to select not only the subjects 301 and 302 indicated by the marks 304 and 305 but also the subject 303 for which no mark is displayed. When the subject selection key has not been pressed (No in S204), the system control section 150 returns to step S201 and performs the above operation. With this structure, by pressing the subject selection key the photographer erases the display of the marks 304 and 305 on the regions in which faces have been detected by the feature detection section 130, and can therefore freely select not only the marked regions but also subjects on which no mark is displayed. In addition, the subject selected by the photographer in the subject selection mode becomes the target of automatic focusing and the target of the tracking processing.
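The control flow of steps S200–S205 can be summarized in a small state sketch; the class and method names below are hypothetical and only mirror the behaviour described above (show marks when faces are found, erase all marks on entering the subject selection mode).

```python
from dataclasses import dataclass, field

@dataclass
class MarkerUI:
    """On-screen mark state following steps S200-S205 above (illustrative names only)."""
    detected_faces: list = field(default_factory=list)   # regions found by the feature detector
    marks_visible: bool = False
    selection_mode: bool = False

    def on_face_detection(self, face_regions):
        self.detected_faces = list(face_regions)
        self.marks_visible = bool(self.detected_faces)    # S202/S203: show marks only if faces exist

    def on_subject_select_key(self):
        # S204/S205: entering the subject selection mode erases every displayed mark,
        # so the photographer sees that any position on the screen can be chosen.
        self.selection_mode = True
        self.marks_visible = False

ui = MarkerUI()
ui.on_face_detection([(40, 60, 80, 80), (200, 90, 70, 70)])   # two detected face rectangles
ui.on_subject_select_key()
assert ui.selection_mode and not ui.marks_visible
```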
The details of the control in this step S206 are described with reference to the flowchart of Fig. 2B.
When the selected region is a face region (Yes in S403), a mark is displayed on the region representing the face (S404). At this time, the region display section 171 outputs to the display section 170 an image on which the mark has been superimposed. For example, when the photographer selects a partial region of the face of the subject 302 on the display screen shown in Fig. 3(b), the system control section 150 displays, as shown in Fig. 3(c), the mark 306 on the whole face region of the subject 302 (the region detected as a face region in advance in step S201), irrespective of the position actually selected. Here, the system control section 150 may make the display format of the mark different between the case where the region selected by the photographer is a face region (Yes in S403) and the case where it is not a face region (No in S403). Specifically, the shape of the mark or the color of the frame may be made different. For example, the mark 306 shown in Fig. 3(c) may be displayed when a face region is selected, and the mark 307 shown in Fig. 3(d) may be displayed when a region that is not a face region is selected. The photographer can thus select a face region easily and can readily recognize that a face region has been selected.
On the other hand, when there is no region representing a face detected in advance by the feature detection section 130 (No in step S402), or when the selected position is not included in any face detection region (No in S403), the system control section 150 displays a mark on the region selected by the photographer (S405). In particular, at this time, the system control section 150 may make the display format (shape, frame color, etc.) of the mark displayed on the subject selected by the photographer different from that of the mark displayed on an automatically detected subject. For example, the mark displayed on an automatically detected subject may be set to the marks 304 and 305 shown in Fig. 3(a), and the mark displayed on the subject selected by the photographer may be set to the mark 306 shown in Fig. 3(c). The photographer can thus easily recognize that the marked part is not an automatically detected subject but the part the photographer selected.
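The branch in steps S402–S405 — mark the whole pre-detected face region when the selected point falls inside one, otherwise mark a region around the selected point in a different style — could be sketched as follows; the rectangle format and the 40×40 fallback region size are assumptions for illustration.

```python
def point_in_region(point, region):
    """region is (x, y, w, h); point is (x, y)."""
    x, y, w, h = region
    px, py = point
    return x <= px < x + w and y <= py < y + h

def mark_for_selection(selected_point, face_regions):
    """Steps S402-S405: choose what to mark and which display style to use."""
    for face in face_regions:
        if point_in_region(selected_point, face):
            return face, "face_style"            # S404: mark the whole pre-detected face region
    # S405: no face at the selected position -> mark a region around the point itself,
    # drawn in a different style than marks on automatically detected subjects.
    x, y = selected_point
    return (x - 20, y - 20, 40, 40), "user_selected_style"

print(mark_for_selection((70, 90), [(40, 60, 80, 80)]))   # -> ((40, 60, 80, 80), 'face_style')
print(mark_for_selection((300, 10), [(40, 60, 80, 80)]))  # -> ((280, -10, 40, 40), 'user_selected_style')
```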
In addition, when the display shifts to the state shown in Fig. 3(b), a message 310 prompting the photographer to select a region may be displayed as shown in Fig. 4(a). The photographer is thereby not only prompted to make a selection but can also recognize that regions other than the detected regions 301 and 302 can be designated.
In addition, after the mark 306 indicating that the subject region 302 has been selected is displayed as shown in Fig. 3(c), if, for example, the subject region 302 moves out of the imaging range and the system control section 150 can no longer identify the subject region 302, the message 310 shown in Fig. 4(a) may be displayed again. The photographer can thereby be prompted to select a subject.
In addition, when a subject is selected on the display screen of the display section 170 shown in Fig. 3(b) and the system control section 150 cannot identify the subject 311 because, as shown in Fig. 4(b), the subject 311 selected by the photographer is the background or the like, the system control section 150 may change the display format of the mark. For example, as shown in Fig. 4(b), the mark 308 indicating the subject 311 may be drawn with a dotted line. Alternatively, the color of the solid or dotted line may be dulled or set to a different color. By displaying a mark different from the usual one, the photographer can recognize that the video camera 100 cannot identify the selected subject. This also prevents the photographer from repeatedly selecting the same region. After a predetermined time has elapsed from the display of the mark 308, the system control section 150 may make the mark 308 blink and then erase it. In this case, the system control section 150 does not display the message 310 described above. This is because blinking and then erasing the mark 308 is itself enough to prompt the photographer to make a selection, and displaying the message 310 in addition after the mark 308 has blinked and been erased could annoy the photographer.
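The hold-blink-erase behaviour for an unidentifiable selection could be sketched as follows; the timing values and the display interface (draw_mark/hide_mark) are hypothetical stand-ins for the region display section 171.

```python
import time

class StubDisplay:
    """Minimal stand-in for the display path: just logs mark operations."""
    def draw_mark(self, region, style):
        print(f"draw {style} mark at {region}")
    def hide_mark(self, region):
        print(f"hide mark at {region}")

def show_unrecognized_mark(display, region, hold_seconds=1.0, blink_period=0.25, blinks=3):
    """Dashed mark for an unidentifiable selection: hold, blink, then erase (no prompt message)."""
    display.draw_mark(region, style="dashed")
    time.sleep(hold_seconds)
    for _ in range(blinks):
        display.hide_mark(region)
        time.sleep(blink_period)
        display.draw_mark(region, style="dashed")
        time.sleep(blink_period)
    display.hide_mark(region)

show_unrecognized_mark(StubDisplay(), region=(120, 80, 60, 60))
```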
In addition, at the moment a subject is selected on the display screen shown in Fig. 3(b), marks 312 and 313 may be displayed simultaneously on both the automatically detected regions and the selected region, as shown in Fig. 4(c).
3. Summary
As described above, in the video camera 100 of the present embodiment, when a subject is to be selected, the marks of the regions detected by the feature detection section 130 are first not displayed. The photographer can thereby recognize that not only the marked regions but also any subject region in the display screen can be selected. The photographer is therefore not mistakenly constrained by the extraction result of the subject candidates and can freely designate a subject.
In addition, in the video camera 100 of the present embodiment, the display of the subject differs between the case where an automatically determined subject is displayed as the main subject and the case where a subject selected by the user is displayed as the main subject. The photographer can thereby readily recognize that the selected subject is the one he or she selected. That is, the user's convenience in the photographer's selection of a subject region can be improved.
A preferred embodiment of the present invention has been described above. The present invention is not limited to the embodiment described above, and it goes without saying that various modifications can be made without departing from the scope of the gist of the present invention.
Industrial applicability
The image pickup apparatus according to the present invention can improve the selectability of the subject region by the photographer, and is useful for various image pickup apparatuses such as digital still cameras and digital video cameras.
Claims (4)
1. An image pickup apparatus comprising:
an imaging section that captures a subject image to generate an image;
a subject detection section that detects a subject from the generated image;
a display section capable of displaying the generated image and a mark indicating the subject detected by the subject detection section;
an operation section that accepts an operation for setting the operating mode to a subject selection mode in which an arbitrary position on the screen of the display section can be selected; and
a control section that, when the operation section has accepted the operation for changing to the subject selection mode while the display section is displaying the mark, controls the display section to erase the display of all of the marks.
2. The image pickup apparatus according to claim 1, wherein
along with erasing the display of the mark, the control section causes the display section to display a message prompting the user to select a subject in the generated image.
3. The image pickup apparatus according to claim 2, wherein
after displaying the message, the control section controls the display section to erase the message once a predetermined time has elapsed, and
after controlling the display section to erase the message, displays the message again when the subject selected by the user can no longer be identified.
4. The image pickup apparatus according to claim 1, further comprising
a tracking section that identifies and tracks the subject selected in the subject selection mode, wherein
when the tracking section can no longer track the subject after the subject has been selected in the subject selection mode, the control section causes the display section to display a message prompting the user to select a subject in the generated image.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008332551 | 2008-12-26 | ||
JP2008332547 | 2008-12-26 | ||
JP2008-332551 | 2008-12-26 | ||
JP2008-332547 | 2008-12-26 | ||
PCT/JP2009/007103 WO2010073608A1 (en) | 2008-12-26 | 2009-12-22 | Image pickup equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102265597A CN102265597A (en) | 2011-11-30 |
CN102265597B true CN102265597B (en) | 2013-10-09 |
Family
ID=42287255
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009801522398A Active CN102265597B (en) | 2008-12-26 | 2009-12-22 | Image pickup equipment |
Country Status (4)
Country | Link |
---|---|
US (1) | US8493494B2 (en) |
JP (2) | JP5623915B2 (en) |
CN (1) | CN102265597B (en) |
WO (1) | WO2010073608A1 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8520131B2 (en) * | 2009-06-18 | 2013-08-27 | Nikon Corporation | Photometric device, imaging device, and camera |
US20110141225A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama Imaging Based on Low-Res Images |
US8294748B2 (en) * | 2009-12-11 | 2012-10-23 | DigitalOptics Corporation Europe Limited | Panorama imaging using a blending map |
US20110141224A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama Imaging Using Lo-Res Images |
US10080006B2 (en) | 2009-12-11 | 2018-09-18 | Fotonation Limited | Stereoscopic (3D) panorama creation on handheld device |
US20110141229A1 (en) * | 2009-12-11 | 2011-06-16 | Fotonation Ireland Limited | Panorama imaging using super-resolution |
US9118832B2 (en) | 2010-08-17 | 2015-08-25 | Nokia Technologies Oy | Input method |
CN107105157B (en) * | 2010-11-29 | 2020-02-14 | 快图有限公司 | Portrait image synthesis from multiple images captured by a handheld device |
CN103210332B (en) | 2011-04-18 | 2015-09-30 | 松下电器(美国)知识产权公司 | The focusing control method of camera head, camera head and integrated circuit |
JP5907738B2 (en) * | 2012-01-23 | 2016-04-26 | オリンパス株式会社 | Imaging apparatus, display method, and program |
JP2013219541A (en) * | 2012-04-09 | 2013-10-24 | Seiko Epson Corp | Photographing system and photographing method |
DE102012008986B4 (en) * | 2012-05-04 | 2023-08-31 | Connaught Electronics Ltd. | Camera system with adapted ROI, motor vehicle and corresponding method |
JP5966778B2 (en) * | 2012-09-05 | 2016-08-10 | カシオ計算機株式会社 | Imaging apparatus, imaging method, and program |
CN103607538A (en) * | 2013-11-07 | 2014-02-26 | 北京智谷睿拓技术服务有限公司 | Photographing method and photographing apparatus |
JP6519205B2 (en) * | 2015-01-30 | 2019-05-29 | キヤノンマーケティングジャパン株式会社 | Program, information processing apparatus and processing method |
JP6862202B2 (en) | 2017-02-08 | 2021-04-21 | キヤノン株式会社 | Image processing device, imaging device and control method |
JP7049195B2 (en) * | 2018-06-28 | 2022-04-06 | キヤノン株式会社 | Electronic devices and their control methods, programs, and storage media |
EP3904956A4 (en) * | 2018-12-28 | 2022-02-16 | Sony Group Corporation | Imaging device, imaging method, and program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1460186A (en) * | 2001-03-28 | 2003-12-03 | 皇家菲利浦电子有限公司 | Method for assisting automated video tracking system in reaquiring target |
CN101364029A (en) * | 2007-08-10 | 2009-02-11 | 佳能株式会社 | Image capturing apparatus and control method therefor |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6118484A (en) * | 1992-05-22 | 2000-09-12 | Canon Kabushiki Kaisha | Imaging apparatus |
JP3192483B2 (en) | 1992-05-22 | 2001-07-30 | キヤノン株式会社 | Optical equipment |
US7298412B2 (en) * | 2001-09-18 | 2007-11-20 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US7362368B2 (en) * | 2003-06-26 | 2008-04-22 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US7616233B2 (en) * | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
US20050104958A1 (en) * | 2003-11-13 | 2005-05-19 | Geoffrey Egnal | Active camera video-based surveillance systems and methods |
JP2006023384A (en) * | 2004-07-06 | 2006-01-26 | Konica Minolta Photo Imaging Inc | Imaging device |
JP2007184683A (en) * | 2006-01-04 | 2007-07-19 | Eastman Kodak Co | Image data processing apparatus and method |
JP2008085737A (en) * | 2006-09-28 | 2008-04-10 | Nikon Corp | Electronic camera |
EP1986421A3 (en) * | 2007-04-04 | 2008-12-03 | Nikon Corporation | Digital camera |
JP4883413B2 (en) * | 2007-06-28 | 2012-02-22 | ソニー株式会社 | Imaging apparatus, image display control method, and program |
JP4953971B2 (en) * | 2007-08-03 | 2012-06-13 | キヤノン株式会社 | Image processing apparatus, image processing apparatus control method, and program for executing the same |
-
2009
- 2009-12-22 CN CN2009801522398A patent/CN102265597B/en active Active
- 2009-12-22 WO PCT/JP2009/007103 patent/WO2010073608A1/en active Application Filing
- 2009-12-22 JP JP2010543848A patent/JP5623915B2/en active Active
- 2009-12-23 US US12/645,940 patent/US8493494B2/en active Active
- 2009-12-25 JP JP2009295236A patent/JP2010171964A/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1460186A (en) * | 2001-03-28 | 2003-12-03 | 皇家菲利浦电子有限公司 | Method for assisting automated video tracking system in reaquiring target |
CN101364029A (en) * | 2007-08-10 | 2009-02-11 | 佳能株式会社 | Image capturing apparatus and control method therefor |
Also Published As
Publication number | Publication date |
---|---|
JP5623915B2 (en) | 2014-11-12 |
US8493494B2 (en) | 2013-07-23 |
WO2010073608A1 (en) | 2010-07-01 |
JP2010171964A (en) | 2010-08-05 |
JPWO2010073608A1 (en) | 2012-06-07 |
US20100188560A1 (en) | 2010-07-29 |
CN102265597A (en) | 2011-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102265597B (en) | Image pickup equipment | |
US8704914B2 (en) | Apparatus to automatically tag image and method thereof | |
CN102334332B (en) | Imaging apparatus, image display apparatus, imaging method, method of displaying image and method of correcting position of focusing-area frame | |
CN101124816B (en) | Image processing apparatus and image processing method | |
CN101079964B (en) | Image reproducing device, image reproducing method, and image capturing device | |
CN100414975C (en) | Imaging apparatus and control method therefor | |
US20050219383A1 (en) | Electronic camera | |
CN101945212A (en) | Image capture device, image processing method and program | |
KR20150061277A (en) | image photographing apparatus and photographing method thereof | |
EP2947870A2 (en) | Recording apparatus, reproducing apparatus, recording/reproducing apparatus, image pickup apparatus, recording method, and program | |
CN102265602A (en) | Image capture device | |
CN106257914A (en) | Focus detection device and focus detecting method | |
US20100253801A1 (en) | Image recording apparatus and digital camera | |
CN105027553A (en) | Image processing device, image processing method, and storage medium on which image processing program is stored | |
CN102404503A (en) | Automatic focusing apparatus and imaging apparatus | |
US20120087636A1 (en) | Moving image playback apparatus, moving image management apparatus, method, and storage medium for controlling the same | |
EP2053540B1 (en) | Imaging apparatus for detecting a scene where a person appears and a detecting method thereof | |
JP4277592B2 (en) | Information processing apparatus, imaging apparatus, and content selection method | |
JP4986264B2 (en) | External recording medium management apparatus, external recording medium management method, and program | |
CN101631221B (en) | Video recording apparatus, video recording method, and recording medium | |
US8345124B2 (en) | Digital camera controlled by a control circuit | |
WO2006016461A1 (en) | Imaging device | |
CN103986865A (en) | Imaging apparatus, control method, and program | |
KR101276724B1 (en) | Method for controlling digital photographing apparatus, and digital photographing apparatus adopting the method | |
JP5262695B2 (en) | Information processing apparatus, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
ASS | Succession or assignment of patent right |
Owner name: MATSUSHITA ELECTRIC (AMERICA) INTELLECTUAL PROPERT Free format text: FORMER OWNER: MATSUSHITA ELECTRIC INDUSTRIAL CO, LTD. Effective date: 20140717 |
|
C41 | Transfer of patent application or patent right or utility model | ||
TR01 | Transfer of patent right |
Effective date of registration: 20140717 Address after: California, USA Patentee after: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA Address before: Osaka Japan Patentee before: Matsushita Electric Industrial Co.,Ltd. |