CN101415076A - Composition determining apparatus, composition determining method, and program - Google Patents
- Publication number
- CN101415076A (publication); application CN200810167942A / CNA2008101679423A
- Authority
- CN
- China
- Prior art keywords
- composition
- image
- composition determining
- individual subject
- control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2101/00—Still video cameras
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
- Measuring Or Testing Involving Enzymes Or Micro-Organisms (AREA)
Abstract
The invention provides a composition determining apparatus, a composition determining method, and a program. The composition determining apparatus includes a subject detecting unit configured to detect the existence of one or more specific subjects in an image based on image data; and a composition determining unit configured to determine a composition in accordance with the number of subjects detected by the subject detecting unit.
Description
Technical field
The present invention relates to a composition determining apparatus that determines the composition (framing) of the image content of still image data or the like, and to a composition determining method. The present invention also relates to a program executed by such an apparatus.
Background art
One of the technical factors in shooting a photo that leaves a favorable impression is the setting of the composition. The term "composition" here, also referred to as "framing", is the layout of subjects in an image such as a photo.
There are some typical, basic methods for obtaining a good composition. However, it is very difficult for an ordinary camera user to take a photo with a good composition unless he or she has sufficient knowledge of and skill in photography. For this reason, there is a need for a technical configuration that enables users to obtain photographic images with good composition quickly and easily.
For example, Patent Document 1 (Japanese Unexamined Patent Application Publication No. 59-208983) discloses a technical configuration for an automatic tracking apparatus. In this configuration, the difference between images at a fixed time interval is detected, the center of gravity of the difference between the images is computed, the amount and direction of motion of the subject image with respect to the imaging screen are detected from the amount and direction of motion of the center of gravity, and the imaging apparatus is controlled so that the subject image is set within a reference area of the imaging screen.
In addition, Patent Document 2 (Japanese Unexamined Patent Application Publication No. 2001-268425) discloses a technical configuration for an automatic tracking apparatus in which a person is automatically tracked such that the area of the upper 20% of the whole person on the screen is positioned at the center of the screen, so that the person's face is positioned at the center of the screen; the person can thereby be tracked while his or her face is reliably captured.
Viewed from the standpoint of composition determination, these technical configurations can automatically search for a subject, such as a person, and position the subject in the imaging screen with a certain predetermined composition.
Summary of the invention
The optimum composition may differ depending on a predetermined state or condition of the subject. However, the techniques disclosed in the above-described patent documents can only place a tracked subject with a certain fixed composition. In other words, shooting cannot be performed while changing the composition in accordance with the situation of the subject.
Accordingly, the present invention proposes a technique for easily obtaining good compositions for images such as photos. In particular, the present invention is directed to determining a composition more appropriately and flexibly in accordance with changes in the situation and condition of subjects.
According to an embodiment of the present invention, there is provided a composition determining apparatus including: subject detecting means for detecting, based on image data, the existence of one or more specific subjects in an image; and composition determining means for determining a composition in accordance with the number of subjects detected by the subject detecting means.
In the above-described configuration, an optimum composition is determined in accordance with the number of subjects detected in an image based on image data. For example, the optimum composition differs depending on the number of subjects present in the screen. According to the embodiment of the present invention, an optimum composition can be obtained in accordance with a change in condition, i.e., a change in the number of subjects.
According to the embodiment of the present invention, an optimum composition for the image content of image data can be obtained in accordance with the number of subjects. That is, a composition is determined automatically, and more appropriately and flexibly than in the case where subjects are simply placed in a fixed composition. Therefore, a user of an apparatus adopting the embodiment of the present invention can obtain an image of optimum composition without troublesome operations, and greater convenience can thus be provided.
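As an informal illustration of this idea, the mapping from the detected subject count to a target composition can be sketched as a simple lookup. The function name and all parameter values below are hypothetical; the patent does not prescribe any concrete figures.

```python
# Hedged sketch: choose target framing parameters from the number of detected
# subjects. The specific ratios and offsets are invented for illustration.

def determine_composition(num_subjects: int) -> dict:
    """Return target framing parameters for the detected subject count."""
    if num_subjects <= 0:
        # No subject found: keep searching; there is no composition to set.
        return {"valid": False}
    if num_subjects == 1:
        # A single subject: fill more of the frame, offset from center.
        return {"valid": True, "zoom_fill_ratio": 0.5, "horizontal_offset": 0.25}
    if num_subjects == 2:
        # Two subjects: slightly wider view, centered as a pair.
        return {"valid": True, "zoom_fill_ratio": 0.35, "horizontal_offset": 0.0}
    # Three or more subjects: widen the angle of view to include everyone.
    return {"valid": True, "zoom_fill_ratio": 0.25, "horizontal_offset": 0.0}

print(determine_composition(1)["zoom_fill_ratio"])  # 0.5
```

In this sketch the branch structure mirrors the one-subject / two-subject / three-or-more cases that the embodiment's figures (Figs. 8A to 10B) distinguish.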
Description of drawings
Fig. 1 is a diagram illustrating an external configuration example of an imaging system including a digital still camera and a pan/tilt head according to an embodiment of the present invention;
Figs. 2A and 2B are diagrams schematically illustrating motion of the imaging system according to the embodiment, i.e., examples of motion of the digital still camera attached to the pan/tilt head in the pan and tilt directions;
Fig. 3 is a diagram illustrating a configuration example of the digital still camera according to the embodiment;
Fig. 4 is a diagram illustrating a configuration example of the pan/tilt head according to the embodiment;
Fig. 5 is a diagram illustrating, in block units, functions corresponding to composition control provided in the digital still camera according to the embodiment;
Figs. 6A and 6B are diagrams illustrating the center of gravity of an individual subject and the center of gravity of a synthetic subject composed of a plurality of individual subjects;
Fig. 7 is a diagram illustrating the origin set on the screen of captured image data;
Figs. 8A and 8B are diagrams schematically illustrating an example of first composition control in the case where the number of detected individual subjects is one;
Figs. 9A and 9B are diagrams schematically illustrating an example of first composition control in the case where the number of detected individual subjects is two;
Figs. 10A and 10B are diagrams schematically illustrating an example of first composition control in the case where the number of detected individual subjects is three or more;
Fig. 11 is a flowchart illustrating an example of the processing procedure of the first composition control;
Figs. 12A and 12B are diagrams schematically illustrating an example of second composition control in the case where the number of detected individual subjects is one;
Figs. 13A and 13B are diagrams schematically illustrating an example of second composition control in the case where the number of detected individual subjects is two and the detected (captured) distance between the individual subjects is equal to or smaller than a predetermined value;
Fig. 14 is a flowchart illustrating an example of the processing procedure of the second composition control;
Figs. 15A and 15B are diagrams illustrating subject distinction according to the embodiment;
Fig. 16 is a flowchart illustrating an example of a processing procedure for realizing subject distinction according to the embodiment;
Fig. 17 is a diagram illustrating a configuration example of a modification of the imaging system according to the embodiment;
Fig. 18 is a diagram illustrating a configuration example of another modification of the imaging system according to the embodiment; and
Figs. 19 to 26 are diagrams illustrating application examples of composition determination based on the embodiment of the present invention.
Embodiment
Embodiments of the present invention are described below. Specifically, a description is given of a case where a configuration based on an embodiment of the present invention is applied to an imaging system including a digital still camera and a pan/tilt head to which the digital still camera is attached.
Fig. 1 is a front view illustrating an external configuration example of the imaging system according to the embodiment.
As shown in Fig. 1, the imaging system of this embodiment includes a digital still camera 1 and a pan/tilt head 10.
The digital still camera 1 can generate still image data based on imaging light obtained through a lens unit provided on a front panel of a main body 3, and can store the still image data in a storage medium loaded therein. That is, the digital still camera 1 has a function of storing images captured as photos in a storage medium in the form of still image data. When such photography is performed manually, the user presses a shutter (release) button 2 provided on the upper surface of the main body.
The digital still camera 1 can be attached to the pan/tilt head 10 by being fixed thereto. That is, the pan/tilt head 10 and the digital still camera 1 have mechanism parts that enable them to be attached to each other.
The pan/tilt head 10 has a pan/tilt mechanism for moving the digital still camera 1 attached thereto in both the pan (horizontal) direction and the tilt direction.
Examples of the motion of the digital still camera 1 in the pan and tilt directions realized by the pan/tilt mechanism of the pan/tilt head 10 are shown in Figs. 2A and 2B, which show the digital still camera 1 attached to the pan/tilt head 10 as viewed from the plane direction and from the side, respectively.
Regarding the pan direction, the positional state in which the lateral direction of the main body of the digital still camera 1 matches the straight line X1 in Fig. 2A is regarded as the reference state. For example, when rotation about a rotation axis Ct1 is performed in rotation direction +α, a rightward panning motion is given; when rotation is performed in rotation direction -α, a leftward panning motion is given.
Regarding the tilt direction, on the other hand, the positional state in which the vertical direction of the main body of the digital still camera 1 matches the straight line Y1 in Fig. 2B is regarded as the reference state. For example, when rotation about a rotation axis Ct2 is performed in rotation direction +β, a downward tilting motion is given; when rotation is performed in rotation direction -β, an upward tilting motion is given.
The maximum movable rotation angles in the ±α and ±β directions shown in Figs. 2A and 2B are not specified here. However, the maximum movable rotation angles are preferably as large as possible so that the user has more opportunities to capture subjects.
Fig. 3 is a diagram illustrating an internal configuration example of the digital still camera 1 according to this embodiment.
Referring to Fig. 3, an optical system unit 21 includes a predetermined number of imaging lenses, e.g., a zoom lens and a focus lens, and an aperture. The optical system unit 21 forms an image on the light receiving surface of an image sensor 22 based on incident light.
The optical system unit 21 also includes drive mechanisms for driving the zoom lens, the focus lens, the aperture, and so on. The operation of these drive mechanisms is controlled by so-called camera control, e.g., zoom (angle of view) control, autofocus control, and automatic exposure control, performed by a control unit 27.
An imaging signal output from the image sensor 22 is input to an A/D converter 23 and converted into a digital signal, which is then input to a signal processing unit 24.
When captured image data generated in the above-described manner by the signal processing unit 24 is to be stored as image information in a memory card 40 serving as a storage medium, the captured image data corresponding to one still image is output from the signal processing unit 24 to an encoding/decoding unit 25.
The encoding/decoding unit 25 performs compression encoding on the captured image data of the still image output from the signal processing unit 24 by a predetermined still image compression encoding method, adds a header and so on under the control of the control unit 27, and thereby converts the captured image data into compressed captured image data of a predetermined format. The encoding/decoding unit 25 then transfers the captured image data generated in this way to a media controller 26. Under the control of the control unit 27, the media controller 26 writes the transferred captured image data onto the memory card 40, whereby the captured image data is stored in the memory card 40.
The memory card 40 used in this case is a storage medium that has a card-shaped outer form conforming to a predetermined standard and that includes a nonvolatile semiconductor storage device such as a flash memory. A storage medium of another type and form may be used for storing image data instead of the memory card 40.
The signal processing unit 24 according to this embodiment can perform image processing for detecting subjects using the captured image data obtained in the above-described manner. Details of the subject detection process in this embodiment are described below.
In addition, the digital still camera 1 can display an image on a display unit 33 using the captured image data obtained in the signal processing unit 24, that is, display a so-called through image, which is the image currently being captured. Specifically, the signal processing unit 24 takes in the imaging signal output from the A/D converter 23 and generates captured image data corresponding to one still image, as described above. By continuing this operation, the signal processing unit 24 sequentially generates captured image data corresponding to the frame images of a moving image. Then, under the control of the control unit 27, the signal processing unit 24 transfers the sequentially generated captured image data to a display driver 32. Thereby, through images are displayed.
Furthermore, the digital still camera 1 can reproduce captured image data recorded on the memory card 40 and display the image on the display unit 33.
For that purpose, the control unit 27 specifies captured image data and instructs the media controller 26 to read the data from the memory card 40. In response to the instruction, the media controller 26 accesses the address on the memory card 40 at which the specified captured image data is recorded, reads the data, and transfers the read data to the encoding/decoding unit 25.
Under the control of the control unit 27, the encoding/decoding unit 25 extracts the substantial data, as compressed still image data, from the captured image data transferred from the media controller 26, performs a decoding process corresponding to the compression encoding on the compressed still image data, and thereby obtains captured image data corresponding to one still image. The encoding/decoding unit 25 then transfers the captured image data to the display driver 32. Thereby, the image of the captured image data recorded on the memory card 40 is reproduced and displayed on the display unit 33.
A user interface image can also be displayed on the display unit 33, together with the above-described through image or a reproduced image of captured image data. In that case, the control unit 27 generates image data to be displayed as a necessary user interface image in accordance with the operation state at that time, and outputs the generated image data to the display driver 32. Thereby, the user interface image is displayed on the display unit 33. The user interface image can be displayed on the display screen of the display unit 33 separately from the through image or the reproduced image of captured image data, e.g., as a specific menu screen. Alternatively, the user interface image can be displayed superimposed on, or combined with part of, the through image or the reproduced image of captured image data.
The control unit 27 actually includes a CPU (central processing unit) and constitutes a microcomputer together with a ROM (read-only memory) 28 and a RAM (random access memory) 29. The ROM 28 stores programs to be executed by the CPU of the control unit 27, various pieces of setting information related to the operation of the digital still camera 1, and so on. The RAM 29 serves as the main storage device of the CPU.
In this case, a flash memory 30 is provided as a nonvolatile storage area for storing various pieces of setting information that should be changed (rewritten) in accordance with user operations or operation history. When a nonvolatile memory such as a flash memory is used as the ROM 28, part of the storage area in the ROM 28 may be used instead of the flash memory 30.
An operation unit 31 includes various operation elements provided on the digital still camera 1, and an operation information signal output section that generates operation information signals corresponding to operations performed on those operation elements and outputs the generated signals to the CPU. The control unit 27 performs predetermined processing in response to each operation information signal input from the operation unit 31. Thereby, the digital still camera 1 operates in accordance with user operations.
A pan/tilt-head-compatible communication unit 34 performs communication between the pan/tilt head 10 and the digital still camera 1 in accordance with a predetermined communication method. It has a physical-layer configuration that enables wired or wireless transmission and reception of communication signals to and from the communication unit of the pan/tilt head 10 in a state where the digital still camera 1 is attached to the pan/tilt head 10, and a configuration for realizing communication processing corresponding to predetermined upper layers.
Fig. 4 is a block diagram illustrating a configuration example of the pan/tilt head 10.
As described above, the pan/tilt head 10 includes a pan/tilt mechanism. As elements corresponding to this mechanism, the pan/tilt head 10 includes a pan mechanism unit 53, a pan motor 54, a tilt mechanism unit 56, and a tilt motor 57.
Similarly, when the motion of the tilt mechanism unit 56 is controlled, a control unit 51 outputs, to a tilt drive unit 58, a control signal corresponding to the amount and direction of motion required of the tilt mechanism unit 56. The tilt drive unit 58 generates a motor drive signal corresponding to the input control signal and outputs the motor drive signal to the tilt motor 57. The motor drive signal rotates the tilt motor 57 in the necessary rotation direction by the necessary rotation angle. As a result, the tilt mechanism unit 56 is driven so as to move by the corresponding amount of motion in the corresponding direction of motion.
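The control-signal chain described above (control unit → tilt drive unit → tilt motor) can be modeled minimally as follows. Representing the drive signal as a signed stepper step count, and the 10-steps-per-degree resolution, are assumptions of this sketch, not details from the document.

```python
# Hedged model of the tilt drive unit: it receives an amount and direction of
# motion and emits a motor drive signal, here a signed step count. The step
# resolution is an assumed value for illustration.

STEPS_PER_DEGREE = 10  # assumed drive resolution

def tilt_drive_signal(amount_deg: float, direction: str) -> int:
    """Return a signed step count: positive = +β (downward tilt)."""
    if direction not in ("down", "up"):
        raise ValueError(direction)
    steps = round(amount_deg * STEPS_PER_DEGREE)
    return steps if direction == "down" else -steps

print(tilt_drive_signal(12.5, "up"))  # -125
```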
In the imaging system including the digital still camera 1 having the above-described configuration and the pan/tilt head 10, when a person is regarded as a main subject (hereinafter simply referred to as a subject) and the existence of a subject is detected after a subject search, the pan/tilt mechanism of the pan/tilt head 10 is driven so as to obtain an optimum composition of an image including the subject (to perform optimum framing). Then, at the timing when the optimum composition is obtained, the captured image data at that time is recorded on the storage medium (the memory card 40).
That is, in the imaging system according to this embodiment, during photography performed by the digital still camera 1, an operation of determining (judging) an optimum composition for a found subject and performing capturing and recording is executed automatically. In this way, photo images of moderately good quality can be obtained without the user determining the composition and capturing the image. Furthermore, in such a system, shooting can be performed without anyone holding the camera, so everyone at the shooting site can become a subject. In addition, a subject can be captured in a photo even without consciously entering the camera's angular field of view. That is, the chance of photographing the natural appearance of the people present at the shooting site increases, so many photos with an unprecedented atmosphere can be obtained.
An optimum composition may differ depending on the number of subjects. In composition determination according to this embodiment, however, different optimum compositions can be determined based on the number of detected subjects. Therefore, compared with the case of determining a composition without considering the number of subjects, images of better quality can be obtained in this embodiment from a comprehensive viewpoint.
Composition control according to this embodiment is described below.
Fig. 5 shows a configuration example of functional units, provided in the digital still camera 1, that correspond to composition control according to this embodiment.
Referring to Fig. 5, a subject detection block 61 performs a subject detection process, including search control for subjects, using the captured image data obtained in the signal processing unit 24 based on the imaging signal obtained in the image sensor 22. Here, the subject detection process refers to processing for distinguishing and detecting subjects that are people in the image content of captured image data. The information obtained as a detection result (detection information) includes the number of subjects that are people, position information of each individual subject in the screen, and the size (occupied area) of each individual subject in the image. Depending on the structure of the composition determination algorithm, composition control according to this embodiment can be realized by obtaining only the number of subjects as the detection information.
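A sketch of what the detection information listed above might look like as a data structure, together with the synthetic center of gravity over all individual subjects that the later figures (Figs. 6A and 6B) refer to. The field names and the use of a simple mean of the individual centers of gravity are assumptions for illustration, not the patent's data format.

```python
# Hedged sketch of the detection information: subject count, per-subject
# position, per-subject occupied area, plus a synthetic center of gravity.

from dataclasses import dataclass

@dataclass
class IndividualSubject:
    cx: float   # center-of-gravity x in screen coordinates
    cy: float   # center-of-gravity y in screen coordinates
    area: int   # occupied area in pixels

def detection_info(subjects: list) -> dict:
    """Summarize a list of IndividualSubject into detection information."""
    n = len(subjects)
    if n == 0:
        return {"count": 0, "synthetic_cg": None}
    # Synthetic subject: treat all individual subjects as one group and take
    # the mean of their centers of gravity (an assumed weighting).
    sx = sum(s.cx for s in subjects) / n
    sy = sum(s.cy for s in subjects) / n
    return {"count": n, "synthetic_cg": (sx, sy)}

info = detection_info([IndividualSubject(100, 80, 900),
                       IndividualSubject(300, 120, 1100)])
print(info)  # {'count': 2, 'synthetic_cg': (200.0, 100.0)}
```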
As a specific method for the subject detection process, a face detection technique can be used. Several face detection methods are used in the related art; the method to be adopted in this embodiment is not particularly limited, and an appropriate method may be adopted in consideration of detection accuracy, design difficulty, and so on.
During subject search control, control signals for driving the above-described pan/tilt mechanism are output via a communication control block 63 so as to control the pan/tilt mechanism of the pan/tilt head 10.
The detection information generated by the subject detection block 61 as a result of the subject detection process is input to a composition control block 62.
Composition controll block 62 utilizes the input detection information about object wherein to decide the composition (best composition) that is considered to best.Then, composition controll block 62 is carried out the best composition (composition control) of control to obtain to be determined.The control of in the case composition comprise change the visual angle (in this embodiment, it refer to according to the control of zoom lens and the changeable visual field) control, along the control (shaking control) of the shooting direction of shaking (a right or left side) direction and along the control (control of fascinating) of the shooting direction of (go up or the following) direction of fascinating.In order to change the visual angle, carry out in the following at least any one: the zoom control of the zoom lens in the optical system unit 21 of mobile digital still camera 1; And cutting is taken the photograph to such an extent that the picture signal of the image on the view data is handled.Shake control and the control of fascinating is to carry out by control and the shaking of mobile The Cloud Terrace 10/tilting equipment.When to shake/when the control of tilting equipment is performed, composition controll block 62 allow to be used for shake/control signal that tilting equipment is set in desired location is sent to The Cloud Terrace 10 via communications control block 63.
Can carry out based on program by control unit 27 (CPU) by the decision of above-mentioned composition controll block 62 execution and the processing of control composition.Perhaps, the processing of being carried out based on program by signal processing unit 24 can be used together.Communications control block 63 is handled according to communication unit 52 executive communications of predetermined agreement and The Cloud Terrace 10, and serves as and The Cloud Terrace compatible communication unit 34 corresponding functional units.
Next, an example of the subject detection processing executed by the subject detection block 61 is described with reference to Figs. 6A and 6B.
Assume that the subject detection block 61 has taken in captured image data having the picture content shown in Fig. 6A. The picture content of the captured image data is obtained by capturing an image in which one subject that is a person exists. Fig. 6A (and Fig. 6B) shows a state in which the screen is divided in a matrix pattern. This schematically illustrates that the screen as the captured image data is composed of a set of a predetermined number of horizontal and vertical pixels.
By performing subject detection (face detection) on the captured image data having the picture content shown in Fig. 6A, the face of the one individual subject SBJ shown in the figure is detected. That is, detection of a face by the face detection processing is equivalent to detection of an individual subject. As a result of detecting the individual subject, information on the number, position, and size of individual subjects is obtained, as described above.
As for the number of individual subjects, the number of faces detected by face detection can be used. In the case shown in Fig. 6A, the number of detected faces is one, so the number of individual subjects is one.
As the position information of each individual subject, at least the center of gravity G(X, Y) of the individual subject SBJ in the image as the captured image data is obtained. In this case, the origin P(0, 0) on the screen of the captured image data, serving as the reference of the center of gravity G(X, Y), is the intersection of the midpoint of the width (horizontal image size) Cx in the X-axis direction (horizontal direction) corresponding to the screen size and the midpoint of the width (vertical image size) Cy in the Y-axis direction (vertical direction), as shown in Fig. 7.
As a method of defining the position of the center of gravity G of an individual subject in an image, a method for detecting the center of gravity of an object according to the related art can be adopted.
The size of each individual subject can be obtained, for example, by calculating the number of pixels in the region specified and detected as the face portion by the face detection processing or the like.
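As a rough sketch of how this detection information (number, centroid position, size) might be assembled from face-detection output, consider the following. The face detector itself, the box format, and the function name are assumptions for illustration; only the coordinate convention (origin P(0, 0) at the screen center, per Fig. 7) follows the description above.

```python
# Hypothetical helper: derive detection information from face boxes.
# Faces are assumed to be (left, top, width, height) boxes in pixel
# coordinates whose origin is the top-left corner, y increasing downward.

def detection_info(face_boxes, cx, cy):
    """Return (count, centroids, sizes).

    Centroids are expressed relative to the screen-center origin P(0, 0)
    as in Fig. 7; cx and cy are the horizontal/vertical image sizes.
    """
    centroids = []
    sizes = []
    for (left, top, w, h) in face_boxes:
        # Center of the face region, shifted so P(0, 0) is the screen center.
        gx = (left + w / 2.0) - cx / 2.0
        gy = (top + h / 2.0) - cy / 2.0
        centroids.append((gx, gy))
        sizes.append(w * h)  # occupied area in pixels
    return len(face_boxes), centroids, sizes
```

For a 200x120 screen, a 20x20 face box at (90, 60) yields one subject with centroid (0.0, 10.0) and size 400.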
On the other hand, if the captured image data shown in Fig. 6B is taken in and the subject detection block 61 performs subject detection processing, the existence of two faces is detected by face detection, so that a result indicating that the number of individual subjects is two is obtained. Here, the two individual subjects are distinguished from each other: the one on the left is individual subject SBJ0, and the one on the right is individual subject SBJ1. The coordinates of the centers of gravity of the individual subjects SBJ0 and SBJ1 are G0(X0, Y0) and G1(X1, Y1), respectively.
When two or more individual subjects are detected in this way, the center of gravity of the composite subject composed of the plurality of individual subjects, i.e., the composite subject center of gravity Gt(Xg, Yg), is calculated.
There are several ways to set the composite subject center of gravity Gt. In this case, the simplest way is adopted: the midpoint of a line connecting the centers of gravity of the leftmost and rightmost individual subjects on the screen among the plurality of detected individual subjects is set as the composite subject center of gravity Gt. The composite subject center of gravity Gt is information used in composition control, as described below, and is information that can be obtained by calculation after the information on the centers of gravity of the individual subjects has been obtained. Therefore, the composite subject center of gravity Gt may be obtained by the subject detection block 61 and output as detection information. Alternatively, the composite subject center of gravity Gt may be obtained by the composition control block 62 using the information about the centers of gravity of the leftmost and rightmost individual subjects among the information, obtained as detection information, indicating the positions of the centers of gravity of the detected individual subjects.
In addition to the above method, the following setting method can also be used. That is, weight coefficients are assigned according to the sizes of the plurality of individual subjects, and the weight coefficients are used so that the position of the composite subject center of gravity Gt is closer to an individual subject having a larger size among the individual subjects.
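The two ways of setting the composite subject center of gravity Gt described above can be sketched as follows, assuming each centroid is an (x, y) pair with x increasing to the right (function names are illustrative):

```python
def composite_centroid_simple(centroids):
    """Midpoint between the leftmost and rightmost individual centroids."""
    leftmost = min(centroids, key=lambda g: g[0])
    rightmost = max(centroids, key=lambda g: g[0])
    return ((leftmost[0] + rightmost[0]) / 2.0,
            (leftmost[1] + rightmost[1]) / 2.0)

def composite_centroid_weighted(centroids, sizes):
    """Weight each centroid by subject size, pulling Gt toward larger subjects."""
    total = float(sum(sizes))
    return (sum(s * g[0] for g, s in zip(centroids, sizes)) / total,
            sum(s * g[1] for g, s in zip(centroids, sizes)) / total)
```

With centroids at x = -30 and x = 10, the simple method gives Gt at x = -10, while weighting the right-hand subject three times heavier moves Gt to x = 0, closer to the larger subject.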
The size of each of the individual subjects SBJ0 and SBJ1 can be obtained by calculating the number of pixels occupied by the detected face of that subject.
Next, the compositions that can be obtained by composition control as a first example of this embodiment are described with reference to Figs. 8A to 10B.
Fig. 8A shows a case where, as a result of subject detection before composition control, picture content including the individual subject SBJ0 is obtained as captured image data. In this embodiment, when the pan/tilt head 10 to which the digital still camera 1 is attached is set normally, the orientation of the digital still camera 1 is set so that a landscape-oriented image is captured. Thus, the first example is based on the assumption that a landscape-oriented image is obtained by imaging.
As shown in Fig. 8A, when one individual subject has been detected, zoom control for reducing the angle of view is executed so that the occupancy of this individual subject SBJ0 in the screen of the captured image data reaches a predetermined value, whereby the size of the individual subject is changed, as shown in the transition from Fig. 8A to 8B. Figs. 8A and 8B show the case where the angle of view is reduced to increase the size of the individual subject SBJ0. However, if the occupancy of the individual subject in the screen exceeds the above predetermined value at the stage when the individual subject is detected, zoom control for increasing the angle of view is performed so that the occupancy is reduced to the predetermined value.
When the number of individual subjects is one, in this embodiment, the individual subject is positioned at almost the center in the horizontal direction of the screen. For this purpose, the horizontal position of the center of gravity G of the individual subject SBJ0 is placed at almost the center of the screen.
Then, shown in Fig. 9 A, under the situation that has detected two individual subject, at first, as composition control, calculates between two individual subject SBJ0 and the SBJ1 apart from K (object arrives object distance).For example can represent apart from K by the distance (X1-X0) between the X coordinate (X1) of the image G1 of the X coordinate (X0) of the center of gravity G0 of individual subject SBJ0 and individual subject SBJ1.Then, adjust the visual angle, so that the object of Ji Suaning is 1/3rd (K=Cx/3) of horizontal image size Cx to object distance K in the above described manner, shown in Fig. 9 B.In the case, the zone of two individual subject SBJ0 and SBJ1 also is positioned in the almost center on the horizontal direction of screen.For this reason, be positioned in the horizontal direction center of the center of gravity Gt of the synthetic object that constitutes by individual subject SBJ0 and SBJ1.
By the way, object is set at 1/3rd of Cx to object distance K and is based on the composition establishing method that is called as " three fens rules (the rule of thirds) ".Rule was one of the most basic composition establishing method in three minutes.In the method, object is positioned in rectangular screen is divided into respectively in three sections the dummy line any one, so that obtain good composition in the horizontal and vertical directions.By as mentioned above object being set at Cx/3 and with center in the horizontal direction, the center of gravity Gt of synthetic object location to object distance K, the center of gravity G0 of individual subject SBJ0 is positioned in the left dummy line along the vertical direction of screen substantially, and the center of gravity G1 of individual subject SBJ1 is positioned in substantially along the right dummy line of the vertical direction of screen.That is to say, can obtain composition based on three fens rules.
In addition, under the situation that has detected three individual subject shown in Figure 10 A, as composition control, the most left individual subject SBJ0 and the object between the rightest object SBJ2 that calculate in the screen arrive object distance K.Particularly, come calculating object to arrive object distance K in the following manner.That is to say, be under the situation of " n " at the number of detected individual subject, assigns number 0 to n-1 to the right to individual subject from the left side of screen.The X coordinate of the center of gravity G0 of the most left individual subject SBJ0 in the screen is by (X0) expression, and the X coordinate of the center of gravity Gn-1 of the rightest individual subject SBJ (n-1) is represented by (Xn-1).Then, can utilize general formula (Xn-1)-(X0) calculate apart from K.
In the case, the control visual angle is so that object is half of the big or small Cx of horizontal image to object distance K, shown in Figure 10 B.As for the object's position on the horizontal direction, the center of gravity Gt of synthetic object is positioned in the position at the almost center on the horizontal direction of screen, is positioned in the position at the almost center on the horizontal direction of screen so that comprise the area part of three individual subject.According to this embodiment, if detect three or more individual subject, the then control of the composition shown in execution graph 10A and the 10B.
In screen, exist under the situation of three or more individual subject, meeting three fens with loyalty compares object regularly to 1/3rd the situation that object distance K is set at horizontal image size Cx, when object is higher to the ratio of object distance K and horizontal image size Cx, generally can obtain better composition.Thereby, in this embodiment,, then form object is set to Cx/2 to object distance K composition if detect three or more individual subject in above-mentioned situation.
As mentioned above, in according to the control of the composition of this embodiment, in the number of detected individual subject is each situation of 1,2 and 3, carry out difference adjustment to the visual angle.
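The per-count angle-of-view targets described above can be sketched as follows: the subject-to-subject distance K is taken between the leftmost and rightmost centroid x-coordinates, i.e. K = (Xn-1) - (X0), and the target K is Cx/3 for two subjects and Cx/2 for three or more (the function names are illustrative; the single-subject case, which uses an occupancy ratio instead, is omitted here):

```python
def subject_distance(centroids):
    """K = (Xn-1) - (X0): span between leftmost and rightmost centroids."""
    xs = [g[0] for g in centroids]
    return max(xs) - min(xs)

def target_distance(num_subjects, cx):
    """Target K: Cx/3 for two subjects (rule of thirds), Cx/2 for three or more."""
    if num_subjects < 2:
        raise ValueError("K is only defined for two or more subjects")
    return cx / 3.0 if num_subjects == 2 else cx / 2.0
```

For a horizontal image size Cx = 600, the target is 200 with two subjects and 300 with three or more; zoom control then adjusts the angle of view until the measured K reaches this target.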
Fig. 11 shows an example of the procedure for the first example of composition control described above with reference to Figs. 8A to 10B, executed by the subject detection block 61, the composition control block 62, and the communication control block 63 shown in Fig. 5. The processing shown in Fig. 11 is realized when the signal processing unit 24 as a DSP and the CPU in the control unit 27 execute programs. Such a program is, for example, written and stored in a ROM or the like at the time of manufacture. Alternatively, the program may be stored in a removable storage medium and then installed (including updating) from the storage medium so as to be stored in a DSP-compatible nonvolatile storage area or the flash memory 30. Also, the program may be installed via a data interface such as USB or IEEE 1394 under the control of another host apparatus. Furthermore, when the digital still camera 1 is provided with a network function, the program may be stored in a storage device of a server on a network or the like and obtained by being downloaded from the server.
Steps S101 to S106 correspond to a procedure for searching for and detecting subjects, and are mainly executed by the subject detection block 61.
In step S101, captured image data based on the imaging signal from the image sensor 22 is taken in and acquired. In step S102, subject detection processing is performed using the captured image data obtained in step S101. In the subject detection processing, whether an individual subject exists in the picture content of the captured image data is determined using the above-described face detection method or the like. If an individual subject exists, at least the number of individual subjects and the position (center of gravity) and size of each individual subject are obtained as detection information.
In step S103, it is determined whether the existence of an individual subject has been detected as a result of the subject detection processing in step S102. If a negative determination result is obtained, that is, if the existence of an individual subject has not yet been detected (the number of detected individual subjects is zero), the process proceeds to step S104, in which zoom lens movement control for increasing the angle of view (zoom-out control) is executed. By increasing the angle of view, an image of a wider range can be captured, so that an individual subject can be caught more easily. At the same time, in step S105, control for moving the pan/tilt mechanism of the pan/tilt head 10 (pan/tilt control) is executed in order to search for subjects. At this time, control is performed so that the subject detection block 61 supplies a control signal for pan/tilt control to the communication control block 63, and the control signal is transmitted to the communication unit 52 of the pan/tilt head 10.
The pattern in which the pan/tilt mechanism of the pan/tilt head 10 is moved in the pan/tilt control for the subject search may be determined so that the search is performed efficiently.
In step S106, the mode flag "f" is set to 0 (f = 0), and the process returns to step S101.
In this way, the procedure of steps S101 to S106 is repeated until at least one individual subject is detected in the picture content of the captured image data. At this time, the system including the digital still camera 1 and the pan/tilt head 10 is in a state where the digital still camera 1 is moved in the pan and tilt directions to search for subjects.
If a positive determination result is obtained in step S103, that is, if the existence of an individual subject has been detected, the process proceeds to step S107. The procedure from step S107 onward is mainly executed by the composition control block 62.
In step S107, the value currently set in the mode flag "f" is determined.
If it is determined that f == 0, this value indicates that an initial rough subject capturing mode should be executed as composition control, and the procedure beginning with step S108 is executed, as shown in Fig. 11.
In step S108, it is determined whether the composite subject center of gravity Gt is positioned at the origin P(0, 0) on the screen of the captured image data (the screen obtained by displaying the picture content of the captured image data) (see Fig. 7). If a negative determination result is obtained, that is, if the composite subject center of gravity Gt is not yet positioned at the origin, the process proceeds to step S109, in which control for moving the pan/tilt mechanism of the pan/tilt head 10 is executed so that the composite subject center of gravity Gt is positioned at the origin, and then the process returns to step S101. As described above, in the capturing mode, which is the first procedure of composition control in a state where the existence of an individual subject has been detected, the pan/tilt mechanism of the pan/tilt head 10 is controlled so that the composite subject center of gravity Gt is positioned at the origin as the initial reference position, whereby the image region including the detected individual subjects is positioned at the center of the screen.
Now, an example of an algorithm for actually executing the pan/tilt control in step S109 is described.
In a state where individual subjects have been detected and the mode flag f == 0, the subject detection block 61 performs calculation with the following formula (1) to obtain the necessary movement amount Span in the pan direction and the necessary movement amount Stilt in the tilt direction. In formula (1), n represents the number of detected individual subjects, and p(Xi, Yi) represents the X and Y coordinates of the center of gravity of the i-th individual subject among the individual subjects assigned numbers 0 to n-1. For confirmation, the origin (0, 0) in this case is the intersection of the midpoint in the horizontal direction of the screen and the midpoint in the vertical direction, as shown in Fig. 7.
For example, in step S108, whether the composite subject center of gravity Gt is positioned at the origin P is determined by determining whether the absolute values of the necessary movement amounts Span and Stilt calculated in the above manner are within a predetermined range (strictly speaking, the range should be 0, but the value may be allowed to be greater than 0). Then, in step S109, pan/tilt control is executed so that the absolute values of the necessary movement amounts Span and Stilt fall within the predetermined range. The speeds of the pan mechanism unit 53 and the tilt mechanism unit 56 during the pan/tilt control at this time may be constant. Alternatively, the speeds may be varied; for example, the speeds may be increased as the necessary movement amounts Span and Stilt become larger. In this way, even if the necessary amount of pan or tilt movement is large, the composite subject center of gravity Gt can be positioned at the origin in a relatively short time.
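Formula (1) itself is not reproduced in this excerpt; the sketch below assumes, consistently with the surrounding description, that the necessary movement amounts (Span, Stilt) are the mean of the n individual centroids p(Xi, Yi), i.e. the composite subject center of gravity relative to the screen-center origin P(0, 0). Function names and the tolerance value are illustrative.

```python
def necessary_movement(centroids):
    """Assumed form of formula (1): mean of the n centroids p(Xi, Yi)."""
    n = len(centroids)
    span = sum(g[0] for g in centroids) / n    # pan-direction movement
    stilt = sum(g[1] for g in centroids) / n   # tilt-direction movement
    return span, stilt

def centroid_at_origin(centroids, tolerance=4.0):
    """Step S108: |Span| and |Stilt| within a small range, not exactly 0."""
    span, stilt = necessary_movement(centroids)
    return abs(span) <= tolerance and abs(stilt) <= tolerance
```

Step S109 would then drive the pan/tilt mechanism until `centroid_at_origin` returns true, optionally scaling the mechanism speed with the magnitudes of Span and Stilt.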
If a positive determination result is obtained in step S108, that is, if the composite subject center of gravity Gt is positioned at the origin, the mode flag "f" is set to 1 (f = 1) in step S110, and the process returns to step S101. The state in which the mode flag "f" has been set to 1 in step S110 is a state in which the capturing mode as the first procedure of composition control has been completed and the first composition adjustment control (composition adjustment mode) should be executed.
When the mode flag is set to f == 1 and the first composition adjustment mode should be executed, the process proceeds from step S107 to step S111. In the first composition adjustment mode, as can be understood from the following description, zoom (angle of view) adjustment is performed in order to obtain an optimal composition according to the number of detected individual subjects. Note that, depending on the angle-of-view adjustment, the size of each individual subject in the screen and the distance between individual subjects may change.
In step S111, the number of detected individual subjects is determined. If the number is one, the procedure beginning with step S112 is executed.
In step S112, it is determined whether the size of the detected individual subject is appropriate. The state in which the size of the individual subject is appropriate refers to the state in which the occupancy of the image portion as the individual subject in the screen has a value within a predetermined range, as shown in Fig. 8B. If a negative determination result is obtained in step S112, the process proceeds to step S113, in which zoom lens drive control (zoom control) is executed so that the occupancy reaches a value within the predetermined range, and the process returns to step S101. At this time, zoom control is executed while the horizontal position of the center of gravity G of the individual subject (the composite subject center of gravity Gt) is kept at the position corresponding to the X coordinate (X = 0) set in step S109. In this way, the state in which the individual subject is positioned at almost the center in the horizontal direction can be maintained. Also, since zoom-out control is executed in step S104 during the subject search and detection operation, the zoom control executed in step S113 is likely to be zoom-in control. However, if for some reason the occupancy in the screen is above the predetermined range and a negative determination result is obtained in step S112, zoom-out control is executed in step S113 so that the occupancy reaches a value within the predetermined range.
If a positive determination result is obtained in step S112, the process proceeds to step S114, in which the mode flag "f" is set to 2. Then, the process returns to step S101. The state in which the mode flag is set to f == 2 is a state in which the first composition adjustment has been completed and a release operation should be executed after the second composition adjustment is performed, as can be understood from the following description.
If it is determined in step S111 that the number of detected individual subjects is two, the procedure beginning with step S115 is executed.
In step S115, it is determined whether the distance K between the two individual subjects in the screen of the captured image data is one third of the horizontal image size Cx (K == Cx/3), as shown in Fig. 9B. If a negative determination result is obtained here, the process proceeds to step S116, in which zoom control is executed so that K == Cx/3 is satisfied. At this time, zoom control is likewise executed so that the horizontal position of the composite subject center of gravity Gt is maintained at the X coordinate (X = 0) set in step S109. The same applies to step S119 described below. Then, if a positive determination result is obtained in step S115, that is, if K == Cx/3 is satisfied, the process proceeds to step S117, in which the mode flag "f" is set to 2. Then, the process returns to step S101.
If it is determined in step S111 that the number of detected individual subjects is three, the procedure beginning with step S118 is executed.
In step S118, it is determined whether the subject-to-subject distance K in the screen of the captured image data (in this case, the distance between the center of gravity of the leftmost individual subject in the screen and the center of gravity of the rightmost individual subject in the screen) is half of the horizontal image size Cx (K == Cx/2), as shown in Fig. 10B. If a negative determination result is obtained here, the process proceeds to step S119, in which zoom control is executed so that K == Cx/2 is satisfied. Then, if a positive determination result is obtained in step S118, that is, if K == Cx/2 is satisfied, the process proceeds to step S120, in which the mode flag "f" is set to 2. Then, the process returns to step S101.
In the state where the mode flag "f" is set to 2, the procedure of composition control corresponding to the cases where the number of individual subjects is one, two, or three, described above with reference to Figs. 8A to 10B, has been completed. Thus, if it is determined in step S107 that the mode flag "f" is 2, the second composition adjustment mode is executed in the procedure beginning with step S121.
For example, in the description with reference to Figs. 8A to 10B, for simplicity, how to set the position of the center of gravity of the individual subjects in the vertical direction of the screen was not described. In practice, however, a better composition may be obtained by moving (offsetting) this position upward from the center of the screen by a certain necessary amount. Thus, in actual composition control according to this embodiment, an offset amount in the vertical direction of the composite subject center of gravity Gt can also be set so that a better composition is obtained as the optimal composition. The procedure for this setting is the second composition adjustment mode, which is executed as steps S121 and S122 described below.
In step S121, it is determined whether the position of the composite subject center of gravity Gt (the center of gravity G of the individual subject when the number of individual subjects is one) is displaced by a predetermined offset amount from the horizontal line (X axis) passing through the origin P on the screen (that is, whether the center-of-gravity offset is appropriate).
If a negative determination result is obtained in step S121, the process proceeds to step S122, in which tilt control is executed to move the tilt mechanism of the pan/tilt head 10 so that the center of gravity is displaced by the set offset amount, and the process returns to step S101. At the stage when a positive determination result is obtained in step S121, an optimal composition corresponding to the number of individual subjects has been obtained.
There are several methods for setting the value of the offset amount for the center-of-gravity offset corresponding to steps S121 and S122, and the method is not particularly limited. As one of the simplest setting methods, an offset value of a length corresponding to one sixth of the vertical image size Cy from the center in the vertical direction may be given based on the rule of thirds. Of course, different offset values depending on the number of individual subjects may be set according to a predetermined rule.
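The simplest offset rule above can be sketched as follows. Shifting the centroid upward by Cy/6 places it Cy/3 from the top edge, i.e. on the upper rule-of-thirds line. The sign convention (negative y = upward) and the tolerance check for step S121 are assumptions for illustration.

```python
def vertical_offset(cy):
    """Target vertical offset: Cy/6 above the center (negative = upward)."""
    return -cy / 6.0

def offset_ok(gt_y, cy, tolerance=2.0):
    """Step S121 sketch: is Gt displaced by the set offset from the X axis?"""
    return abs(gt_y - vertical_offset(cy)) <= tolerance
```

For a vertical image size Cy = 600, the target offset is -100; step S122 would tilt the pan/tilt head until the centroid's y-coordinate reaches that value.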
If a positive determination result is obtained in step S121, the procedure corresponding to a release operation, beginning with step S123, is executed. Here, the release operation refers to an operation of storing the captured image data obtained at that time in the storage medium (the memory card 40) as still image data. Specifically, in the case where a manual shutter operation is performed, the release operation refers to an operation of recording the captured image data obtained at that time in the storage medium as still image data in response to the shutter operation.
In step S123, it is determined whether the conditions for currently executing the release operation are satisfied. These conditions include, for example, that a focused state has been established (when autofocus control is effective) and that the pan/tilt mechanism of the pan/tilt head 10 is in a stopped state.
If a negative determination result is obtained in step S123, the process returns to step S101 so as to wait until the conditions for executing the release operation are satisfied. If a positive determination result is obtained in step S123, the release operation is executed in step S124. In this way, captured image data with an optimal composition can be recorded in this embodiment.
After the release operation ends, initial setting of necessary parameters is performed in step S125. With this setting, the mode flag "f" is set to the initial value 0. Also, the position of the zoom lens is returned to a preset initial position.
After step S125, the process returns to step S101. By returning the process from step S125 to S101, the operation of searching for subjects, obtaining an optimal composition according to the number of individual subjects detected by the search, and performing imaging and recording (the release operation) is automatically repeated.
The release operation described above with reference to Fig. 11 is an operation of recording a still image based on the captured image in a recording medium. In a broader sense, the release operation according to this embodiment includes the operation of recording the above-described still image on a recording medium and, more generally, an operation of obtaining necessary still image data from the captured image. Thus, the release operation also includes an operation in which the digital still camera 1 of this embodiment obtains still image data from the captured image and transmits the still image data to another recording apparatus via the data interface.
Referring to Fig. 11, the configuration in which the zoom control corresponding to steps S112 and S113, the zoom control corresponding to steps S115 and S116, or the zoom control corresponding to steps S118 and S119 is executed according to the determination result obtained in step S111 can be regarded as changing the composition determination method according to the number of detected individual subjects.
Here, a change of the composition determination method refers to a change of the algorithm used for composition determination and composition control, or a change of the parameters used for composition determination and composition control. If it is determined in step S111 that the number of detected individual subjects is one, zoom control is executed in steps S112 and S113 based on the occupancy of the image portion of the individual subject in the screen. On the other hand, if it is determined in step S111 that the number of detected individual subjects is two or more, zoom control is executed based on the subject-to-subject distance K rather than on the occupancy. This means that the algorithm of composition determination and composition control regarding the size adjustment of each individual subject is changed according to the number of detected individual subjects. Furthermore, when the number of detected individual subjects is two or more, the different values Cx/3 and Cx/2 are set as the subject-to-subject distance K of the optimal composition for the cases where the number of individual subjects is two and three, respectively. This means that the parameters of composition determination and composition control regarding the size adjustment of the individual subjects are changed according to the number of detected individual subjects.
Next, second composition control according to this embodiment is described. In the second composition control, the screen setting (composition) of the captured image data is switched between a portrait composition and a landscape composition according to the number of detected individual subjects, as described below.
In the second composition control, detection of subjects is first performed in an initial state in which a landscape composition is set.
Then, assume that the individual subject SBJ0 is detected in the screen of the captured image data, as shown in Fig. 12A. Since the number of detected individual subjects is one in this case, a portrait composition is set in the second composition control, as shown in the transition from Fig. 12A to Fig. 12B.
Then, size adjustment (zoom) control is executed so that the occupancy of the individual subject SBJ0 in the screen has a value within a predetermined range. In this case, the position of the individual subject SBJ0 in the horizontal direction is almost at the center. The position in the vertical direction is displaced upward from the center according to a predetermined rule.
When the number of subjects is one, particularly when the subject is a person, a portrait composition rather than a landscape composition is regarded as a better composition from a comprehensive point of view. Based on this point of view, when the number of individual subjects is one, a portrait composition is adopted in the second composition control, and then the size and position of the individual subject are adjusted.
In this embodiment, a landscape composition can be changed to a portrait composition by extracting an image region of a portrait composition size from the captured image data obtained with the landscape composition. The image data portion of the portrait composition size extracted in this way can then be used.
Perhaps, the mechanism that Digital Still Camera 1 can be switched to horizontal state orientation and vertical state orientation can be set in The Cloud Terrace 10, so that can change composition by the driving of controlling this mechanism.
Furthermore, assume that two individual subjects SBJ0 and SBJ1 are detected in the screen of the captured image data, as shown in Figure 13A. In the second composition control, when two individual subjects are detected in this way, it is determined whether the subject-to-subject distance K at the angle of view at the time of detection is equal to or smaller than a predetermined threshold.
If the subject-to-subject distance K is equal to or smaller than the threshold, it can be determined that the two individual subjects are considerably close to each other. In this state, the vertical composition is preferably adopted rather than the horizontal composition. Accordingly, in this case, the composition is changed to the vertical composition, as shown in the transition from Figure 13A to Figure 13B. The method used to change the composition has been described above. Then, zoom control or pan/tilt control is performed so that the individual subjects SBJ0 and SBJ1 have an appropriate size and are positioned at appropriate positions. In this case, the position in the horizontal direction of the image portion composed of the individual subjects SBJ0 and SBJ1 in the screen is also set approximately at the center, and the position in the vertical direction is shifted upward with respect to the center according to a predetermined rule.
On the other hand, if the subject-to-subject distance K between the two detected individual subjects SBJ0 and SBJ1 exceeds the threshold, it can be determined that the two individual subjects are separated from each other by a corresponding distance. In this case, the horizontal composition is preferably adopted. Accordingly, in this case, the same composition control as that described above with reference to Figures 9A and 9B is performed.
Furthermore, assume that three or more individual subjects SBJ0 to SBJn (n is a natural number equal to or greater than 3) are detected in the screen of the captured image data. In this case, the horizontal composition is preferably adopted as the overall composition. Accordingly, in this case, the same composition control as that described above with reference to Figures 10A and 10B is performed as the second composition control.
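The orientation decision just described (one subject, two close or distant subjects, three or more subjects) can be sketched as follows; the function name and the threshold value are illustrative assumptions, since the patent does not fix concrete numbers.

```python
# Sketch (not the patent's literal logic) of the second composition
# control's orientation decision: one subject or two close subjects ->
# vertical composition; two distant subjects or three or more subjects ->
# horizontal composition.

def choose_orientation(num_subjects, distance_k=None, threshold=100):
    if num_subjects == 1:
        return "vertical"
    if num_subjects == 2:
        # K is the horizontal subject-to-subject distance at detection time
        return "vertical" if distance_k <= threshold else "horizontal"
    return "horizontal"  # three or more subjects
```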
Figure 14 shows an example of a process corresponding to the second composition control, performed by the subject detection block 61, the composition control block 62, and the communication control block 63 shown in Figure 5.
In Figure 14, the process of steps S201 to S210 is the same as the process of steps S101 to S110 in Figure 11. However, in step S204, the zoom-out control performed in step S104 is performed, and, when the currently set composition is the vertical composition, control for returning the composition to the initial state (horizontal composition) is also performed.
In the state where the mode flag f == 1, the number of detected individual subjects is determined in step S211; that is, it is determined whether the number is 1, 2, or 3 or more.
If it is determined in step S211 that the number of individual subjects is 1, the process starting from step S212 is performed.
In step S212, if the currently set composition is the horizontal composition, control for changing it to the vertical composition is performed. As this control, the signal processing of extracting an image region of vertical-composition size from the captured image data of the horizontal composition can be performed, as described above. This control is realized by the function of the composition control block 62 in the signal processing unit 24. After step S212, the process advances to step S213.
Steps S213 to S215 are the same as steps S112 to S114 in Figure 11.
By performing the process of steps S212 to S215, the composition control described above with reference to Figures 12A and 12B (except for the upward shift of the individual subject) can be performed.
If it is determined in step S211 that the number of individual subjects is 2, the process starting from step S216 is performed.
In step S216, it is determined whether the subject-to-subject distance K between the two detected individual subjects is equal to or smaller than the threshold. If a positive determination result is obtained, the process advances to step S217. In step S217, if the currently set composition is the horizontal composition, control for changing it to the vertical composition is performed. Then, the process of steps S213 to S215 is performed.
Note that, when the process advances from step S217 to step S213, a predetermined range of occupation rates corresponding to two individual subjects, which differs from the range corresponding to a single individual subject, is used in the process of steps S213 to S215. If it is determined that the occupation rate of the two individual subjects has a value within this predetermined range, it is determined that an appropriate size of the individual subjects has been obtained, and the mode flag f is set to 2 in step S215.
On the other hand, if a negative determination result is obtained in step S216, the process starting from step S218 is performed.
In step S218, if the currently set composition is the vertical composition, control for changing it to the horizontal composition is performed. Steps S219 to S221 thereafter are the same as steps S115 to S117 in Figure 11.
Through the process of steps S216 to S221 and the process of steps S213 to S215 following step S217, the second composition control for the case where the number of individual subjects is 2 is performed. That is, two types of composition control are used: composition control that sets the vertical composition when the subject-to-subject distance K is shorter, and composition control that sets the horizontal composition when the subject-to-subject distance K is longer.
In step S216, only the subject-to-subject distance K in the horizontal direction of the screen is used as the factor for determining the composition (horizontal or vertical) to be set. In practice, however, in addition to the subject-to-subject distance K in the horizontal direction of the screen, a subject-to-subject distance Kv in the vertical direction of the screen can also be used as a determination factor. The subject-to-subject distance Kv can be defined as the distance in the screen between the center of gravity of the uppermost individual subject and the center of gravity of the lowermost individual subject.
For example, there are cases where the distance between two individual subjects in the vertical direction is considerably long in the actual screen. In such a case, even if the distance between the two individual subjects in the horizontal direction is long to some extent, a better composition may be obtained by adopting the vertical composition.
Next, a description is given of an example of an algorithm for the case where, in step S216, the subject-to-subject distance Kv in the vertical direction of the screen is used as a determination factor in addition to the subject-to-subject distance K in the horizontal direction of the screen.
For example, the ratio K/Kv of the subject-to-subject distance K in the horizontal direction of the screen to the subject-to-subject distance Kv in the vertical direction of the screen is obtained. Then, it is determined whether K/Kv is equal to or greater than a predetermined threshold. If it is determined that K/Kv is equal to or greater than the threshold, it can be determined that the distance between the two individual subjects in the horizontal direction is longer to some extent than the distance between the individual subjects in the vertical direction. In this case, the horizontal composition is set in step S218. On the other hand, if it is determined that K/Kv is smaller than the threshold, it can be determined that the distance between the individual subjects in the vertical direction is long to some extent. In this case, the vertical composition is set in step S217.
Alternatively, as in the above-described case, the subject-to-subject distance K in the horizontal direction of the screen may first be compared with a predetermined threshold; if the subject-to-subject distance K is equal to or smaller than the threshold, the vertical composition is set in step S217. On the other hand, if the subject-to-subject distance K exceeds the threshold, the subject-to-subject distance Kv in the vertical direction of the screen is compared with a predetermined threshold. The threshold to be compared with the subject-to-subject distance Kv need not be equal to the threshold used for the subject-to-subject distance K; a value set appropriately for the subject-to-subject distance Kv may be used. If the subject-to-subject distance Kv is equal to or smaller than the threshold, the horizontal composition is set in step S218. If the subject-to-subject distance Kv exceeds the threshold, the vertical composition is set in step S217.
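The two alternative decision rules using K and Kv might be sketched as follows; all threshold values are illustrative assumptions, since the patent fixes no concrete numbers.

```python
# Sketch of the two alternative decision rules for step S216, combining
# the horizontal subject-to-subject distance K and the vertical
# subject-to-subject distance Kv. Threshold values are placeholders.

def decide_by_ratio(k, kv, ratio_threshold=1.0):
    # Variant 1: compare the ratio K/Kv with a threshold.
    return "horizontal" if k / kv >= ratio_threshold else "vertical"

def decide_sequential(k, kv, k_threshold=100, kv_threshold=80):
    # Variant 2: test K first; only if K exceeds its threshold, test Kv
    # against its own (possibly different) threshold.
    if k <= k_threshold:
        return "vertical"
    return "horizontal" if kv <= kv_threshold else "vertical"
```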
If it is determined in step S211 that the number of individual subjects is 3 or more, the process starting from step S222 is performed. In step S222, if the currently set composition is the vertical composition, control for changing it to the horizontal composition is performed. The process of steps S223 to S225 thereafter is the same as the process of steps S118 to S120 in Figure 11.
In the state where the mode flag f is set to 2 as a result of the above-described process, the process starting from step S226 is performed.
The process of steps S226 and S227 is the same as that of steps S121 and S122 in Figure 11. By performing this process, a composition in which the individual subjects are shifted upward with respect to the center of the screen can be obtained, as described above with reference to Figures 12A to 13B.
As with steps S123 to S125 in Figure 11, steps S228 to S230 correspond to the process concerning the release operation. By performing this process, captured image data of the optimum composition obtained through the composition control can be recorded in the storage medium.
In the entire flow of the process of each composition control shown in Figures 11 and 14, the composition regarded as optimum is determined according to the number of detected individual subjects, and zoom control and pan/tilt control are appropriately performed so that captured image data of the determined composition is actually obtained (reflected).
In the process of each composition control shown in Figures 11 and 14, the composition is basically determined according to which of three conditions the number of detected individual subjects satisfies: 1, 2, or 3 or more. However, this is merely an example; when the number of individual subjects is 3 or more, the composition may be determined based on more specific numbers of individual subjects.
For example, regarding the algorithm of composition determination for deciding which of the vertical composition and the horizontal composition to set, in Figure 14, if the number of detected individual subjects is 2, one of the vertical composition and the horizontal composition is selected according to the subject-to-subject distance K, whereas if the number of detected individual subjects is 3 or more, the horizontal composition is uniformly set. Alternatively, even when the number of detected individual subjects is 3 or more, one of the vertical composition and the horizontal composition may be selected based on the result of comparison between the current subject-to-subject distance K and a threshold set for each number of detected individual subjects. That is, if the number of detected individual subjects is 2 or more, the determination of the vertical composition or the horizontal composition can be performed based on the subject-to-subject distance K. Furthermore, the subject-to-subject distance Kv in the vertical direction, described above in connection with step S216, may also be added as a determination factor.
When the imaging system according to this embodiment is used, the following situation may occur. That is, in an environment where many people are present around the imaging system, composition control should be performed only for one or more specific people. In this case, on the assumption that the subject detection process is based on a face detection technique, if an algorithm that simply recognizes all detected faces as individual subjects is used, appropriate composition control cannot be performed for the specific people. In particular, in the composition control according to this embodiment, different compositions are set according to the number of individual subjects, so that the possibility of setting a composition undesirable for the user becomes high.
In order to take a measure against the above-described situation in this embodiment, the following subject distinguishing process can be performed during the subject detection process of step S102 in Figure 11 or step S202 in Figure 14.
In this case, a setting is made so that the maximum number of individual subjects to be the targets of composition control (target individual subjects) can be set through an operation on the Digital Still Camera 1. The information of the set maximum number of target individual subjects is held by, for example, the subject detection block 61. In this case, assume that 2 is set as the maximum number of target individual subjects.
Then, assume that captured image data of the image content shown in Figure 15A is obtained as a result of the subject search operation (step S105 or S205). In the subject detection process of the corresponding step S102 or S202, the presence of four individual subjects is detected through face detection. The individual subjects detected at this stage are regarded as "candidate subjects." In Figure 15A, the four candidate subjects in the screen are denoted by reference symbols DSBJ0, DSBJ1, DSBJ2, and DSBJ3 from left to right.
In this way, four subjects (candidate subjects) are detected as a result of simple face detection. In this case, however, 2 is set as the maximum number of target individual subjects, as described above. Based on this maximum number, the subject detection block 61 selects two candidate subjects from the four candidate subjects DSBJ0, DSBJ1, DSBJ2, and DSBJ3 in descending order of size. The selected subjects are regarded as the target individual subjects. In this case, the two individual subjects having the largest sizes among the candidate subjects DSBJ0, DSBJ1, DSBJ2, and DSBJ3 are the candidate subjects DSBJ2 and DSBJ3. Accordingly, the subject detection block 61 regards the candidate subjects DSBJ2 and DSBJ3 as target individual subjects SBJ0 and SBJ1, respectively, and ignores the candidate subjects DSBJ0 and DSBJ1 as non-target individual subjects. Then, in the process for composition control starting from step S107 in Figure 11 or step S207 in Figure 14, control is performed only for the target individual subjects. By performing such subject distinguishing, even in an environment or situation where many people are present around the imaging system, imaging based on appropriate composition control for specific people can be performed when the people to be the targets of composition control are placed at positions closest to the imaging system.
The flowchart in Figure 16 shows an example of a process for the above-described subject distinguishing, performed as part of the subject detection process of step S102 in Figure 11 or step S202 in Figure 14.
In this process, all subjects detected in the face detection process are regarded as candidate subjects. In step S301, it is determined whether at least one candidate subject has been detected. If it is determined that at least one candidate subject has been detected, the process advances to step S302.
In step S302, it is determined whether the currently set maximum number of target individual subjects is equal to or greater than the number of candidate subjects detected in step S301.
If a positive determination result is obtained in step S302, it can be determined that the number of candidate subjects does not exceed the maximum number of target individual subjects. Accordingly, the process advances to step S303, in which all of the detected individual subjects are set as target individual subjects.
On the other hand, if a negative determination result is obtained in step S302, it can be determined that the number of candidate subjects is greater than the maximum number of target individual subjects. In this case, the process advances to step S304, in which candidate subjects corresponding in number to the maximum number of target individual subjects are selected from the detected candidate subjects in descending order of size. Then, in step S305, the selected candidate subjects are set as target individual subjects. In this way, subject distinguishing can be performed.
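The selection in steps S302 to S305 amounts to a keep-the-largest-N rule, which might be sketched as follows; modeling a candidate as a (label, size) pair is an assumption of this sketch.

```python
# Sketch of the subject-distinguishing flow of Figure 16: if more
# candidates are detected than the configured maximum number of target
# individual subjects, keep only the largest ones.

def select_targets(candidates, max_targets):
    if len(candidates) <= max_targets:           # steps S302 / S303
        return list(candidates)
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return ranked[:max_targets]                  # steps S304 / S305

# The Figure 15A example: four detected faces, maximum of two targets.
detected = [("DSBJ0", 20), ("DSBJ1", 25), ("DSBJ2", 60), ("DSBJ3", 55)]
targets = select_targets(detected, 2)            # keeps DSBJ2 and DSBJ3
```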
By performing the process shown in Figure 16, as a result of the subject detection process performed in step S102 of Figure 11 or step S202 of Figure 14, detection information including the number of target individual subjects set in step S303 or S305 and the size and position of each target individual subject is output to the composition control block 62. The composition control block 62 performs the composition control starting from step S107 in Figure 11 or step S207 in Figure 14 by using this detection information.
Figure 17 shows a configuration example as a modification of the imaging system according to this embodiment.
In Figure 17, captured image data generated by the signal processing unit 24 based on imaging is transmitted from the Digital Still Camera 1 to the pan/tilt head 10 via the communication control block 63.
In Figure 17, the pan/tilt head 10 includes a communication control block 71, a pan/tilt control block 72, a subject detection block 73, and a composition control block 74.
The communication control block 71 is a functional unit corresponding to the communication unit 52 shown in Figure 4, and performs communication with the communication control block 63 (the pan/tilt-head-compatible communication unit 34) on the Digital Still Camera 1 side in accordance with a predetermined protocol.
The captured image data received by the communication control block 71 is supplied to the subject detection block 73. The subject detection block 73 includes a signal processing unit capable of performing at least a subject detection process equivalent to that performed by the subject detection block 61 shown in Figure 5, performs the subject detection process on the captured image data supplied thereto, and outputs detection information to the composition control block 74.
The pan/tilt control block 72 corresponds to the function of executing, among the control processing performed by the control unit 51 shown in Figure 4, the processing concerning pan/tilt control, and outputs, in response to a control signal input thereto, signals for controlling the motion of the pan mechanism unit 53 or the tilt mechanism unit 56 to the pan driving unit 55 or the tilt driving unit 58. Panning or tilting is thus performed so as to obtain the composition determined by the composition control block 74.
As described above, in the imaging system shown in Figure 17, captured image data is transmitted from the Digital Still Camera 1 to the pan/tilt head 10, and the subject detection process and the composition control based on the captured image data are performed on the pan/tilt head 10 side.
Figure 18 shows a configuration example as another modification of the imaging system according to this embodiment. In Figure 18, parts identical to those in Figure 17 are denoted by the same reference numerals, and the corresponding descriptions are omitted.
In this system, an imaging unit 75 is provided in the pan/tilt head 10. The imaging unit 75 includes an optical system and an image device (imager) for imaging, so as to obtain a signal (imaging signal) based on imaging. The imaging unit 75 also includes a signal processing unit for generating captured image data based on the imaging signal. This configuration corresponds to the unit of signal processing stages for obtaining captured image data, which includes the optical system unit 21, the image sensor 22, the A/D converter 23, and the signal processing unit 24 shown in Figure 3. The captured image data generated by the imaging unit 75 is output to the subject detection block 73. Incidentally, the direction in which the imaging unit 75 takes in imaging light (the imaging direction) is set so as to match as closely as possible the imaging direction of the optical system unit 21 (lens unit 3) of the Digital Still Camera 1 placed on the pan/tilt head 10.
In this case, the subject detection block 73 and the composition control block 74 perform the subject detection process and the composition control process in the same manner as in Figure 17. However, the composition control block 74 in this case performs pan/tilt control, and also causes the communication control block 71 to transmit a release instruction signal to the Digital Still Camera 1 at the timing of performing the release operation. In the Digital Still Camera 1, the release operation is performed upon reception of the release instruction signal.
As described above, in this modification, all of the subject detection process and the composition control other than the release operation itself can be performed on the pan/tilt head 10 side.
In addition, the subject detection and the composition control in the imaging system according to this embodiment can be modified in the following manner.
Composition control in the horizontal (left/right) direction has not been specifically described above. However, according to the rule of thirds, for example, a good composition may be obtained by shifting the subject in either the left or right direction with respect to the center. Accordingly, as composition control according to the number of individual subjects, the center of gravity of the subject (the center of gravity of an individual subject or the total subject center of gravity) may actually be moved rightward or leftward by a necessary amount.
The pan control and the tilt control performed in the composition control shown in Figures 11 and 14 are performed by controlling the motion of the pan/tilt mechanisms of the pan/tilt head 10. Alternatively, another configuration may be adopted instead of the pan/tilt head 10. For example, imaging light reflected by a mirror may be made to enter the lens unit 3 of the Digital Still Camera 1, and the reflected light may be moved so that a panning/tilting result is obtained in the image obtained based on the imaging.
Furthermore, a result equivalent to panning/tilting can be obtained by performing control to shift, in the horizontal direction and the vertical direction, the pixel region to be effectively taken in as the imaging signal from the image sensor 22 of the Digital Still Camera 1. In this case, the pan/tilt head 10 or a similar alternative pan/tilt device other than the Digital Still Camera 1 is unnecessary, and the entire composition control according to this embodiment can be performed by the Digital Still Camera 1 alone.
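The sensor-crop alternative to mechanical panning/tilting could be sketched as follows; the coordinate convention and all sizes are illustrative assumptions, with clamping standing in for the limits of the sensor's pixel array.

```python
# Sketch of "digital" pan/tilt: the effective pixel region from which the
# imaging signal is taken is shifted within the full image-sensor frame.
# (x, y) is the top-left corner of the effective window, in sensor pixels.

def shift_crop_window(x, y, dx, dy, win_w, win_h, sensor_w, sensor_h):
    """Return the new top-left corner after a pan (dx) / tilt (dy) step,
    clamped so the window never leaves the sensor frame."""
    nx = min(max(x + dx, 0), sensor_w - win_w)
    ny = min(max(y + dy, 0), sensor_h - win_h)
    return nx, ny
```

Because the window is smaller than the sensor frame, repeated calls emulate pan/tilt steps without any mechanical movement, at the cost of the unused sensor margin.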
In addition, panning/tilting can also be performed by providing a mechanism capable of changing the optical axis of the lenses in the optical system unit 21 in the horizontal direction and the vertical direction and by controlling the motion of this mechanism.
The configuration for composition determination based on the embodiment of the present invention can be applied to systems and devices other than the imaging system described above as the embodiment. Hereinafter, application examples of the composition determination according to the embodiment of the present invention are described.
First, with reference to Figure 19, the composition determination according to the embodiment of the present invention is applied to a single imaging device, such as a digital still camera. For example, when an image captured by the imaging device in the imaging mode has an appropriate composition, this fact is notified to the user through a display.
The configuration to be provided in the imaging device for this purpose includes a subject detection/composition determining block 81, a notification control block 82, and a display unit 83.
The subject detection/composition determining block 81 takes in captured image data, performs a subject detection process equivalent to that performed by the subject detection block 61 shown in Figure 5, and performs, by using the detection information obtained as the result of the subject detection process, a composition determining process equivalent to that performed by the composition control block 62 shown in Figure 5.
For example, assume that the user holds the imaging device, which is set in the imaging mode, in his or her hand, and that he or she can record a captured image at any time by performing a release operation (shutter button operation).
In this state, the subject detection/composition determining block 81 takes in the captured image data obtained by imaging at that time and performs subject detection. Then, in the composition determining process, the optimum composition is specified according to the number of detected individual subjects and so forth. Note that, in this composition determining process, the consistency and similarity between the composition of the image content of the captured image data obtained at that time and the optimum composition are determined. If a predetermined degree or more of similarity is obtained, it is determined that the image content of the captured image data actually obtained by imaging has the optimum composition. In practice, an algorithm is configured so that the determination that the optimum composition has been obtained is made if a predetermined degree or more of similarity is obtained and the composition of the image content of the captured image data is regarded as matching the optimum composition. Various algorithms for calculating consistency and similarity exist, and thus no specific example is described here.
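Because the text deliberately leaves the similarity algorithm open, the following is only one conceivable stand-in for the gate described above; the similarity formula, the field names, and the threshold are inventions of this sketch, labeled as such.

```python
# Hypothetical similarity gate: score the current frame's composition
# against the specified optimum composition and issue the "optimum
# composition obtained" determination only when the similarity reaches a
# predetermined degree. The scoring formula here is a stand-in only.

def is_optimum(current, optimum, min_similarity=0.9):
    # current / optimum: dicts with a subject center of gravity (cx, cy)
    # and a size, all in normalized screen coordinates (assumed layout).
    pos_err = abs(current["cx"] - optimum["cx"]) + abs(current["cy"] - optimum["cy"])
    size_err = abs(current["size"] - optimum["size"])
    similarity = 1.0 / (1.0 + pos_err + size_err)  # 1.0 means identical
    return similarity >= min_similarity
```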
Information indicating the determination result that the image content of the captured image data has the optimum composition is output to the notification control block 82. Upon receiving this information, the notification control block 82 performs display control so that a notification indicating to the user that the image currently being captured has the optimum composition is displayed in a predetermined manner in the display unit 83. The notification control block 82 is realized by a display control function of a microcomputer (CPU) included in the imaging device and a display image processing function for displaying images in the display unit 83. The notification to the user indicating that the optimum composition has been obtained may also be made with sound, such as an electronic sound or a synthesized voice.
The display unit 83 corresponds to the display unit 33 of the Digital Still Camera 1 of this embodiment. Usually, the display panel of the display unit is provided at a predetermined position of the imaging device so as to be exposed, and the image currently being captured (a so-called through image) is displayed thereon in the shooting mode. Accordingly, in an actual imaging device, the image notifying the user of the optimum composition is displayed in the display unit 83 while being superimposed on the through image. When this notification image appears, the user performs a release operation. In this way, even a user without sufficient photographic knowledge and technique can easily take a photograph of a good composition.
Figure 20 shows another example in which, as in Figure 19, the composition determination according to the embodiment of the present invention is applied to a single imaging device, such as a digital still camera.
In the configuration shown in Figure 20, as in Figure 19, the subject detection/composition determining block 81 takes in the captured image data obtained by imaging at that time, performs the subject detection process, and determines, based on the subject detection information, whether the image content of the captured image data has the optimum composition. After determining that the image content has the optimum composition, the subject detection/composition determining block 81 notifies a release control block 84 of the determination result.
With this configuration, the imaging device can automatically record a captured image at the time when an image of the optimum composition is captured.
The configurations shown in Figures 19 and 20 can be applied to a digital still camera, in the category of still cameras, having the configuration shown in Figure 1. In addition, by providing an image sensor that takes in imaging light divided by a light-dividing optical system and a digital image signal processing unit that receives and processes a signal from the image sensor, these configurations can also be applied to a so-called silver-halide film camera, which records a captured image on silver-halide film.
Figure 21 shows an example in which the embodiment of the present invention is applied to an editing device for editing existing image data.
Figure 21 shows an editing device 90. The editing device 90 obtains, as existing image data, image data reproduced from a storage medium (reproduced image data). In addition to image data reproduced from a storage medium, image data downloaded via a network may also be obtained. That is, the route through which the editing device 90 obtains captured image data is not particularly limited.
The reproduced captured image data obtained by the editing device 90 is input to each of a trimming block 91 and a subject detection/composition determining block 92.
First, the subject detection/composition determining block 92 performs the subject detection process as in Figures 19 and 20, and outputs detection information. Then, as the composition determining process using the detection information, the subject detection/composition determining block 92 specifies, in the entire screen of the input reproduced captured image data, the image portion with a predetermined aspect ratio in which the optimum composition is obtained (the image portion of the optimum composition). After specifying the image portion of the optimum composition, the subject detection/composition determining block 92 outputs information indicating the position of this image portion (trimming instruction information) to the trimming block 91.
In response to the input of the trimming instruction information, the trimming block 91 performs image processing to extract the image portion indicated by the trimming instruction information from the reproduced captured image data input thereto, and outputs the extracted image portion as a piece of independent image data. This is the edited captured image data.
Utilize this configuration,, automatically perform the pruning of being undertaken by a part of from the picture material of raw image data, extracting best composition the view data of new acquisition as editing and processing to view data.This editting function can be used the application that is used as being installed to the edited image data among personal computer or the like or as the image editing function in the application of managing image data.
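The behavior of the trimming block can be sketched as follows. This is a minimal illustration, assuming the trimming instruction information is a (left, top, width, height) rectangle in pixel coordinates; the function name and the row-major list representation of the image are illustrative, not taken from the patent.

```python
# Minimal sketch of the trimming block (Figure 21), assuming the
# trimming instruction information is a (left, top, width, height)
# rectangle. The "image" is a row-major list of pixel rows.

def trim(image, instruction):
    """Extract the image portion indicated by the trimming
    instruction and return it as independent image data."""
    left, top, width, height = instruction
    return [row[left:left + width] for row in image[top:top + height]]

# 4x4 grayscale image; extract the 2x2 optimal-composition portion.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
print(trim(image, (1, 1, 2, 2)))  # [[5, 6], [9, 10]]
```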
Figure 22 shows an example configuration in which the composition determination according to the embodiment of the invention is applied to an imaging apparatus such as a digital still camera.
Captured image data obtained through imaging by an imaging unit (not shown) is input to a subject detection/composition determination block 101 and a file generation block 103 in an imaging apparatus 100. In this case, the captured image data input to the imaging apparatus 100 is captured image data that is to be stored on a storage medium in response to a release operation or the like, and is generated based on the imaging signal obtained through imaging by the imaging unit (not shown).
First, the subject detection/composition determination block 101 performs subject detection on the captured image data input thereto and determines the optimal composition based on the detection information. Specifically, as in the case shown in Figure 21, information specifying the image portion of the optimal composition within the entire screen of the input captured image data can be obtained. Then, the block 101 outputs information indicating the determination result of the optimal composition obtained in this way to a metadata generation block 102.
Based on the input information, the metadata generation block 102 generates metadata (composition-editing metadata) including information necessary for obtaining the optimal composition from the corresponding captured image data, and outputs the metadata to the file generation block 103. The composition-editing metadata includes, for example, position information indicating the image-region portion to be trimmed within the screen of the corresponding captured image data.
In the imaging apparatus 100 shown in Figure 22, captured image data is recorded on a storage medium so that the data is managed as a still image file of a predetermined format. For this purpose, the file generation block 103 converts the captured image data into the still image file format (generates a still image file).
First, the file generation block 103 performs image compression encoding corresponding to the image file format on the captured image data input thereto, so as to generate a file body composed of the captured image data. In addition, the file generation block 103 generates a header and a data portion including an additional-information block, with the composition-editing metadata received from the metadata generation block 102 stored at a predetermined storage position. Then, the file generation block 103 generates a still image file from the file body, the header, and the additional-information block, and outputs the still image file. Thus, as shown in Figure 22, a still image file having a configuration including captured image data and metadata (composition-editing metadata) can be obtained and recorded on the storage medium.
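The structure of such a still image file can be sketched as below. This is only an illustration under the assumption of a simple dict-shaped container; a real camera would carry the composition-editing metadata in a format such as an Exif application segment, and all names here are hypothetical.

```python
# Sketch of the file generation block (Figure 22): pack the image
# body together with composition-editing metadata into one still
# image "file". The dict layout is illustrative, not the patent's
# actual file format.

def generate_still_image_file(image_bytes, crop_rect):
    """Build a file structure whose additional-information block
    carries the composition-editing metadata (the region to trim)."""
    return {
        "header": {"format": "still-image/v1"},
        "metadata": {"composition_edit": {"crop": crop_rect}},
        "body": image_bytes,  # compressed captured image data
    }

f = generate_still_image_file(b"\x00" * 16, (1, 1, 2, 2))
print(f["metadata"]["composition_edit"]["crop"])  # (1, 1, 2, 2)
```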
Figure 23 shows an example configuration of an editing apparatus that edits a still image file generated by the apparatus shown in Figure 22.
The editing apparatus 110 shown in Figure 23 takes in the data of a still image file and inputs the data to a metadata separation block 111. The metadata separation block 111 separates the captured image data corresponding to the file body from the metadata in the data of the still image file. The metadata obtained by the separation is output to a metadata analysis block 112, whereas the captured image data is output to a trimming block 113.
The metadata analysis block 112 analyzes the obtained metadata. In the analysis, the metadata analysis block 112 refers to the information for obtaining the optimal composition included in the composition-editing metadata and specifies at least the image region of the corresponding captured image data on which trimming should be performed. Then, the metadata analysis block 112 outputs trimming instruction information to the trimming block 113 so as to instruct trimming of the specified image region.
Like the trimming block 91 shown in Figure 21, the trimming block 113 performs image processing to extract, from the captured image data input from the metadata separation block 111, the image portion indicated by the trimming instruction information input from the metadata analysis block 112, and outputs the extracted image portion as edited captured image data, which is a piece of independent image data.
According to the system including the imaging apparatus and the editing apparatus shown in Figures 22 and 23, the original still image data obtained by capturing (captured image data) can be stored in an unprocessed state, while the metadata can be used to perform editing that extracts the image of the optimal composition from the original still image data. Furthermore, the image portion to be extracted in accordance with the optimal composition is decided automatically.
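The round trip between the imaging side and the editing side can be sketched as follows, again assuming the illustrative dict-shaped file: the imaging side records only the crop region as metadata and leaves the body unprocessed, while the editing side reads the metadata back and performs the trim. All names are hypothetical.

```python
# Sketch of the Figure 22/23 round trip: metadata separation,
# metadata analysis, and trimming, over an illustrative file layout.

def separate(still_image_file):
    return still_image_file["metadata"], still_image_file["body"]

def analyze(metadata):
    # Yields the trimming instruction information.
    return metadata["composition_edit"]["crop"]

def trim(image, rect):
    left, top, width, height = rect
    return [row[left:left + width] for row in image[top:top + height]]

file = {
    "metadata": {"composition_edit": {"crop": (0, 0, 2, 1)}},
    "body": [[1, 2, 3], [4, 5, 6]],  # unprocessed original image
}
metadata, body = separate(file)
print(trim(body, analyze(metadata)))  # [[1, 2]]
```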
Figure 24 shows an example in which the embodiment of the invention is applied to an imaging apparatus capable of capturing and recording moving images, such as a video camera.
Moving image data is input to the imaging apparatus 120 shown in Figure 24. The moving image data is generated based on the imaging signal obtained through imaging by an imaging unit included in the imaging apparatus 120. The moving image data is input to a subject detection/composition determination block 122 and a moving image recording block 124 in the imaging apparatus 120.
The subject detection/composition determination block 122 in this case determines whether the composition of the moving image data input thereto is good or bad. For example, the block 122 holds parameters defining a good composition (good-composition parameters). These parameters include the in-screen occupancy set as needed for each detected individual subject, and the inter-subject distance K. The block 122 continuously performs composition determination on the moving image data input thereto (for example, calculates composition parameters such as the actual occupancy of the individual subjects and the inter-subject distance K in the moving image data), and compares the composition parameters of the moving image data obtained as the determination result with the above good-composition parameters. If the composition parameters of the moving image data have a predetermined degree of similarity or higher to the good-composition parameters, the moving image data is determined to have a good composition; otherwise, it is determined to have a bad composition.
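The good/bad decision can be sketched as a threshold comparison. This is a sketch under assumptions: the absolute-difference similarity measure, the reference values, and the tolerances are all illustrative, since the patent does not specify the similarity computation.

```python
# Sketch of the good/bad composition decision (Figure 24), assuming
# an absolute-difference similarity over two composition parameters
# (occupancy and inter-subject distance K). Reference values and
# tolerances are illustrative, not from the patent.

GOOD = {"occupancy": 0.4, "distance_k": 120.0}
TOLERANCE = {"occupancy": 0.1, "distance_k": 30.0}

def is_good_composition(measured):
    """True if every measured composition parameter lies within
    tolerance of the good-composition reference value."""
    return all(abs(measured[k] - GOOD[k]) <= TOLERANCE[k] for k in GOOD)

print(is_good_composition({"occupancy": 0.45, "distance_k": 110.0}))  # True
print(is_good_composition({"occupancy": 0.10, "distance_k": 110.0}))  # False
```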
If the subject detection/composition determination block 122 determines that the moving image data has a good composition, it outputs, to a metadata generation block 123, information indicating the image section in the moving image data that has been determined to have a good composition (good-composition image section indication information). The good-composition image section indication information is, for example, information indicating the start position and end position of the good-composition image section in the moving image data.
The metadata generation block 123 in this case generates various metadata necessary for the moving image data that is to be recorded as a file on the storage medium by the moving image recording block 124 described below. Upon receiving good-composition image section indication information from the block 122 in the above manner, the metadata generation block 123 generates metadata indicating that the image section indicated by the input indication information has a good composition, and outputs the metadata to the moving image recording block 124.
The moving image recording block 124 performs control so that the input moving image data is recorded on the storage medium and is managed as a moving image file of a predetermined format. When metadata is output from the metadata generation block 123, the moving image recording block 124 performs control so that the metadata is recorded while being included in the metadata attached to the moving image file.
Thus, as shown in Figure 24, the moving image file recorded on the storage medium includes the moving image data obtained by imaging, with metadata indicating the image section having a good composition attached to it.
The image section having a good composition indicated by the metadata in the above manner may be an image section of the moving image having a certain time width, or a still image extracted from the moving image data. Alternatively, moving image data or still image data of the image section having a good composition may be generated instead of the above metadata, and the generated data may be recorded as secondary still image data added to the moving image file (or as a file independent of the moving image file).
Furthermore, in the configuration in which the imaging apparatus 120 shown in Figure 24 includes the subject detection/composition determination block 122, only the moving image sections determined by the block 122 to be good-composition image sections may be recorded as a moving image file. Moreover, image data corresponding to the image sections determined by the block 122 to have a good composition may be output to an external device via a data interface or the like.
Figure 25 shows an example in which the embodiment of the invention is applied to a printing apparatus that performs printing.
In this case, a printing apparatus 130 takes in image data (a still image) having the image content to be printed. The data taken in is input to a trimming block 131 and a subject detection/composition determination block 132.
First, the subject detection/composition determination block 132 performs the same subject detection/composition determination processing as the block 92 shown in Figure 21 so as to specify the image portion of the optimal composition within the entire screen of the input image data, generates trimming instruction information according to the processing result, and outputs the information to the trimming block 131.
In the same manner as the trimming block 91 shown in Figure 21, the trimming block 131 performs image processing to extract, from the input image data, the image portion indicated by the trimming instruction information. Then, the trimming block 131 outputs the data of the extracted image portion to a print control block 133 as the image data to be printed.
The print control block 133 performs control using the input image data to be printed so as to operate a printing mechanism (not shown).
With this operation, in the printing apparatus 130, the image portion having the optimal composition is automatically extracted from the image content of the input image data and printed on paper.
The example shown in Figure 26 is preferably applied to an apparatus or a system that stores a large number of still image files and provides services using those still image files.
A large number of still image files are stored in a storage unit 141.
A subject detection/composition determination block 142 takes in, at a predetermined timing, a still image file stored in the storage unit 141 and extracts the still image data stored in its file body. Then, the block 142 performs the same processing on the still image data as the block 101 shown in Figure 22 to obtain information indicating the determination result of the optimal composition, and outputs this information to a metadata generation block 143.
Like the metadata generation block 102 shown in Figure 22, the metadata generation block 143 generates metadata (composition-editing metadata) based on the input information. Then, in this case, the metadata generation block 143 registers the generated metadata in a metadata table stored in the storage unit 141. The metadata table is an information unit that stores metadata so as to indicate its correspondence with the still image data stored in the storage unit 141. That is, the metadata table indicates the correspondence between each piece of metadata (composition-editing metadata) and the still image file that was the target of the subject detection processing and composition determination processing performed by the block 142 to generate that metadata.
When a still image file stored in the storage unit 141 is to be output in response to an external request for the file (for example, in a server, a still image file is downloaded in response to a download request from a client), a still image file output block 144 searches the storage unit 141 for the requested still image file and takes in the file, and also searches the metadata table for the metadata (composition-editing metadata) corresponding to that still image file and takes in the metadata.
The still image file output block 144 includes at least functional blocks corresponding to the metadata analysis block 112 and the trimming block 113 shown in Figure 23.
In the still image file output block 144, the internal metadata analysis block analyzes the obtained metadata to obtain trimming instruction information. Then, the internal trimming block performs trimming on the still image data stored in the obtained still image file according to the trimming instruction information. The still image file output block 144 then generates new still image data based on the image portion obtained by the trimming, and outputs the new still image data.
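The storage-and-output flow can be sketched as a table lookup followed by a trim. This is an illustration only: the dict-based metadata table keyed by file name, and all other names, are assumptions, not the patent's data structures.

```python
# Sketch of the Figure 26 flow, assuming a plain dict as the
# metadata table keyed by file name. On request, the file and its
# composition-editing metadata are looked up and the trim applied.

storage = {"photo1": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}
metadata_table = {"photo1": {"crop": (1, 0, 2, 2)}}  # left, top, w, h

def output_still_image(name):
    """Return the optimal-composition portion of the requested file
    as new still image data."""
    image = storage[name]
    left, top, width, height = metadata_table[name]["crop"]
    return [row[left:left + width] for row in image[top:top + height]]

print(output_still_image("photo1"))  # [[2, 3], [5, 6]]
```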
The system configuration shown in Figure 26 is applicable to various services.
For example, it is applicable to a photo print service provided via a network. Specifically, a user uploads the image data to be printed (a still image file) via the network to a server of the print service. In the server, the uploaded still image file is stored in the storage unit 141, and metadata corresponding to the file is generated and registered in the metadata table. Then, at the time of actual print output, the still image file output block 144 outputs, as the image data to be printed, the still image data generated by extracting the optimal composition. That is, with this service, a print image whose composition has been corrected to the optimal composition is delivered in response to a photo print request.
Furthermore, this system configuration can be applied to a server for blogs or the like. The text data of a blog and uploaded image data are stored in the storage unit 141. Thereby, an image of the optimal composition can be extracted from the image data uploaded by a user, and the extracted image can be pasted on a page of the blog.
The configurations described above with reference to Figures 17 to 26 are examples, and the composition determination according to the embodiment of the invention can also be applied to other apparatuses, systems, and application software.
The description of the embodiment given above is based on the assumption that the subject (individual subject) is a person, but the embodiment of the invention can also be applied to cases where the subject is not a person but is, for example, an animal or a plant.
Furthermore, the image data serving as the target of subject detection should not be limited to data obtained by imaging (captured image data). Image data having image content such as a drawing or a design drawing may also be used.
The composition determined based on the embodiment of the invention (the optimal composition) is not necessarily limited to a composition decided based only on the rule of thirds. For example, another method may be adopted, such as a composition setting method based on the golden ratio. Furthermore, the optimal composition is not limited to compositions generally considered good based on the rule of thirds or the golden ratio. For example, depending on the setting of the composition, even a composition generally considered bad may be evaluated by a user as interesting or good. Therefore, the composition determined based on the embodiment of the invention (the optimal composition) may be set arbitrarily in consideration of practicality and entertainment characteristics, and is not particularly limited in practice.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
The present invention contains subject matter related to Japanese Patent Application JP 2007-270392 filed in the Japan Patent Office on October 17, 2007, the entire contents of which are incorporated herein by reference.
Claims (12)
1. A composition determining apparatus comprising:
subject detecting means for detecting the presence of one or more specific subjects in an image based on image data; and
composition determining means for determining a composition according to the number of subjects detected by the subject detecting means.
2. The composition determining apparatus according to claim 1,
wherein the composition determining means changes a manner of composition determination according to the number of the detected subjects.
3. The composition determining apparatus according to claim 1,
wherein the composition determining means changes a composition determining algorithm according to the number of the detected subjects.
4. The composition determining apparatus according to claim 2,
wherein, when the number of the detected subjects is two or more, the composition determining means sets, according to the number of the detected subjects, different values as the ratio of the distance between the subjects positioned at the right end and the left end to the horizontal length of the image.
5. The composition determining apparatus according to claim 2,
wherein the composition determining means determines, according to the number of the detected subjects, the orientation of the rectangular image to be either vertical or horizontal.
6. The composition determining apparatus according to claim 5,
wherein, when the number of the detected subjects is equal to or larger than a predetermined value, the composition determining means determines the orientation of the rectangular image to be either vertical or horizontal based on the distance between the subjects positioned at the right end and the left end.
7. The composition determining apparatus according to claim 6,
wherein, when the number of the detected subjects is equal to or larger than the predetermined value, the predetermined value being two or more, the composition determining means determines the orientation of the rectangular image to be either vertical or horizontal based on the distance between the subjects positioned at the right end and the left end.
8. The composition determining apparatus according to claim 5,
wherein, when the number of the detected subjects is equal to or larger than a predetermined value, the composition determining means determines the orientation of the rectangular image to be either vertical or horizontal based on the distance between the subjects positioned at the right end and the left end and the distance between the subjects positioned at the top end and the bottom end.
9. The composition determining apparatus according to claim 8,
wherein, when the number of the detected subjects is equal to or larger than the predetermined value, the predetermined value being two or more, the composition determining means determines the orientation of the rectangular image to be either vertical or horizontal based on the distance between the subjects positioned at the right end and the left end and the distance between the subjects positioned at the top end and the bottom end.
10. A composition determining method comprising the steps of:
detecting the presence of one or more specific subjects in an image based on image data; and
determining a composition according to the number of subjects detected in the subject detecting step.
11. A program allowing a composition determining apparatus to execute:
detecting the presence of one or more specific subjects in an image based on image data; and
determining a composition according to the number of subjects detected in the subject detecting step.
12. A composition determining apparatus comprising:
a subject detecting unit configured to detect the presence of one or more specific subjects in an image based on image data; and
a composition determining unit configured to determine a composition according to the number of subjects detected by the subject detecting unit.
Publications (2)
Publication Number | Publication Date |
---|---|
CN101415076A true CN101415076A (en) | 2009-04-22 |
CN101415076B CN101415076B (en) | 2011-04-20 |
Family
ID=40010825
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008101679423A Expired - Fee Related CN101415076B (en) | 2007-10-17 | 2008-10-16 | Composition determining apparatus, composition determining method |
CN201110045756.4A Expired - Fee Related CN102158650B (en) | 2007-10-17 | 2008-10-16 | Image processing equipment and image processing method |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110045756.4A Expired - Fee Related CN102158650B (en) | 2007-10-17 | 2008-10-16 | Image processing equipment and image processing method |
Country Status (6)
Country | Link |
---|---|
US (1) | US8164643B2 (en) |
EP (1) | EP2051505A3 (en) |
JP (1) | JP4894712B2 (en) |
KR (1) | KR20090039595A (en) |
CN (2) | CN101415076B (en) |
TW (1) | TWI410125B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102111540A (en) * | 2009-12-28 | 2011-06-29 | 索尼公司 | Image pickup control apparatus, image pickup control method and program |
CN102316269A (en) * | 2010-07-05 | 2012-01-11 | 索尼公司 | Imaging control apparatus, image formation control method and program |
CN104883506A (en) * | 2015-06-26 | 2015-09-02 | 重庆智韬信息技术中心 | Self-service shooting method based on face identification technology |
CN103197491B (en) * | 2013-03-28 | 2016-03-30 | 华为技术有限公司 | The method of fast automatic focusing and image collecting device |
CN108702448A (en) * | 2017-09-27 | 2018-10-23 | 深圳市大疆创新科技有限公司 | Unmanned plane image-pickup method and unmanned plane |
CN109863745A (en) * | 2017-05-26 | 2019-06-07 | 深圳市大疆创新科技有限公司 | Mobile platform, flying body support device, portable terminal, camera shooting householder method, program and recording medium |
CN111756998A (en) * | 2020-06-22 | 2020-10-09 | 维沃移动通信有限公司 | Composition method, composition device and electronic equipment |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4904243B2 (en) * | 2007-10-17 | 2012-03-28 | 富士フイルム株式会社 | Imaging apparatus and imaging control method |
JP5206095B2 (en) | 2008-04-25 | 2013-06-12 | ソニー株式会社 | Composition determination apparatus, composition determination method, and program |
KR101599871B1 (en) * | 2009-02-11 | 2016-03-04 | 삼성전자주식회사 | Photographing apparatus and photographing method |
JP2010268184A (en) * | 2009-05-14 | 2010-11-25 | Hoya Corp | Imaging apparatus |
JP5577623B2 (en) * | 2009-05-14 | 2014-08-27 | リコーイメージング株式会社 | Imaging device |
JP5347802B2 (en) * | 2009-07-27 | 2013-11-20 | ソニー株式会社 | Composition control apparatus, imaging system, composition control method, and program |
JP5446546B2 (en) * | 2009-07-28 | 2014-03-19 | ソニー株式会社 | Imaging control apparatus, imaging control method, program, imaging system |
JP5434339B2 (en) | 2009-07-29 | 2014-03-05 | ソニー株式会社 | Imaging control apparatus, imaging system, imaging method, program |
JP5434338B2 (en) | 2009-07-29 | 2014-03-05 | ソニー株式会社 | Imaging control apparatus, imaging method, and program |
WO2011043104A1 (en) * | 2009-10-09 | 2011-04-14 | シャープ株式会社 | Liquid crystal display device, image display method, program and recording medium |
JP2011082881A (en) * | 2009-10-09 | 2011-04-21 | Sanyo Electric Co Ltd | Electronic camera |
JP5533048B2 (en) * | 2010-03-08 | 2014-06-25 | ソニー株式会社 | Imaging control apparatus and imaging control method |
JP5648298B2 (en) * | 2010-03-15 | 2015-01-07 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
JP5625443B2 (en) * | 2010-03-30 | 2014-11-19 | ソニー株式会社 | Imaging system and imaging apparatus |
US9526156B2 (en) * | 2010-05-18 | 2016-12-20 | Disney Enterprises, Inc. | System and method for theatrical followspot control interface |
JP2013214858A (en) * | 2012-04-02 | 2013-10-17 | Sony Corp | Imaging apparatus, imaging apparatus control method, and computer program |
JP6171628B2 (en) * | 2013-06-28 | 2017-08-02 | 株式会社ニコン | Imaging apparatus, control program, and electronic device |
KR102469694B1 (en) * | 2015-08-24 | 2022-11-23 | 삼성전자주식회사 | Scheme for supporting taking picture in apparatus equipped with camera |
CN106331508B (en) * | 2016-10-19 | 2020-04-03 | 深圳市道通智能航空技术有限公司 | Method and device for shooting composition |
CN108235816B (en) * | 2018-01-10 | 2020-10-16 | 深圳前海达闼云端智能科技有限公司 | Image recognition method, system, electronic device and computer program product |
JP7049632B2 (en) * | 2018-03-31 | 2022-04-07 | 株式会社ウォンツ | Vehicle imaging system |
JP6873186B2 (en) * | 2019-05-15 | 2021-05-19 | 日本テレビ放送網株式会社 | Information processing equipment, switching systems, programs and methods |
JP7544043B2 (en) | 2019-06-04 | 2024-09-03 | ソニーグループ株式会社 | Imaging device and imaging control method |
CN112154654A (en) * | 2019-08-21 | 2020-12-29 | 深圳市大疆创新科技有限公司 | Match shooting method, electronic equipment, unmanned aerial vehicle and storage medium |
CN111277763A (en) * | 2020-03-06 | 2020-06-12 | 维沃移动通信有限公司 | Electronic equipment and shooting prompting method |
KR20220152019A (en) * | 2021-05-07 | 2022-11-15 | 에스케이하이닉스 주식회사 | Image sensing device and operating method thereof |
JP6970945B1 (en) * | 2021-06-18 | 2021-11-24 | パナソニックIpマネジメント株式会社 | Imaging device |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0614698B2 (en) | 1983-05-13 | 1994-02-23 | 株式会社日立製作所 | Automatic tracking camera |
JP2000098456A (en) * | 1998-09-28 | 2000-04-07 | Minolta Co Ltd | Camera provided with automatic composing function |
JP2001268425A (en) | 2000-03-16 | 2001-09-28 | Fuji Photo Optical Co Ltd | Automatic tracking device |
EP1107584B1 (en) * | 1999-12-03 | 2008-08-27 | Fujinon Corporation | Automatic following device |
JP2001167253A (en) * | 1999-12-10 | 2001-06-22 | Fuji Photo Film Co Ltd | Image pickup device for evaluating picked-up image and recording medium |
US7035477B2 (en) * | 2000-12-22 | 2006-04-25 | Hewlett-Packard Development Comapny, L.P. | Image composition evaluation |
CA2445171A1 (en) * | 2001-04-24 | 2002-10-31 | Mcgill University | A 150 kda tgf-beta 1 accessory receptor acting as a negative modulator of tgf-beta signaling |
DE10261295A1 (en) * | 2002-12-27 | 2004-07-08 | Bernhard Rauscher | Method for detection of position of camera when picture was taken, to be used for later digital processing, comprising position recording sensor |
JP4046079B2 (en) * | 2003-12-10 | 2008-02-13 | ソニー株式会社 | Image processing device |
US8045007B2 (en) * | 2004-12-24 | 2011-10-25 | Fujifilm Corporation | Image capturing system and image capturing method |
JP4404805B2 (en) * | 2005-05-10 | 2010-01-27 | 富士フイルム株式会社 | Imaging device |
JP4630749B2 (en) * | 2005-07-26 | 2011-02-09 | キヤノン株式会社 | Image output apparatus and control method thereof |
JP4555197B2 (en) | 2005-09-16 | 2010-09-29 | 富士フイルム株式会社 | Image layout apparatus and method, and program |
WO2007072663A1 (en) | 2005-12-22 | 2007-06-28 | Olympus Corporation | Photographing system and photographing method |
JP4466585B2 (en) * | 2006-02-21 | 2010-05-26 | セイコーエプソン株式会社 | Calculating the number of images that represent the object |
JP4894328B2 (en) | 2006-03-31 | 2012-03-14 | 東レ株式会社 | Crimp yarn for aliphatic polyester carpet |
TW200828988A (en) * | 2006-12-22 | 2008-07-01 | Altek Corp | System and method for image evaluation |
JP2009088710A (en) * | 2007-09-27 | 2009-04-23 | Fujifilm Corp | Photographic apparatus, photographing method, and photographing program |
2007
- 2007-10-17 JP JP2007270392A patent/JP4894712B2/en not_active Expired - Fee Related

2008
- 2008-08-26 KR KR1020080083200A patent/KR20090039595A/en active IP Right Grant
- 2008-09-02 TW TW097133600A patent/TWI410125B/en not_active IP Right Cessation
- 2008-10-09 US US12/287,412 patent/US8164643B2/en not_active Expired - Fee Related
- 2008-10-15 EP EP08166659A patent/EP2051505A3/en not_active Withdrawn
- 2008-10-16 CN CN2008101679423A patent/CN101415076B/en not_active Expired - Fee Related
- 2008-10-16 CN CN201110045756.4A patent/CN102158650B/en not_active Expired - Fee Related
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102111540A (en) * | 2009-12-28 | 2011-06-29 | Sony Corporation | Image pickup control apparatus, image pickup control method and program |
CN102316269A (en) * | 2010-07-05 | 2012-01-11 | Sony Corporation | Imaging control apparatus, imaging control method and program |
CN103197491B (en) * | 2013-03-28 | 2016-03-30 | Huawei Technologies Co., Ltd. | Fast automatic focusing method and image acquisition apparatus |
US9521311B2 (en) | 2013-03-28 | 2016-12-13 | Huawei Technologies Co., Ltd. | Quick automatic focusing method and image acquisition apparatus |
CN104883506A (en) * | 2015-06-26 | 2015-09-02 | Chongqing Zhitao Information Technology Center | Self-service shooting method based on face recognition technology |
CN109863745A (en) * | 2017-05-26 | 2019-06-07 | SZ DJI Technology Co., Ltd. | Mobile platform, flying object support apparatus, portable terminal, imaging assistance method, program and recording medium |
CN108702448A (en) * | 2017-09-27 | 2018-10-23 | SZ DJI Technology Co., Ltd. | Unmanned aerial vehicle image capturing method and unmanned aerial vehicle |
CN111756998A (en) * | 2020-06-22 | 2020-10-09 | Vivo Mobile Communication Co., Ltd. | Composition method, composition device and electronic equipment |
CN111756998B (en) * | 2020-06-22 | 2021-07-13 | Vivo Mobile Communication Co., Ltd. | Composition method, composition device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
US8164643B2 (en) | 2012-04-24 |
EP2051505A2 (en) | 2009-04-22 |
EP2051505A3 (en) | 2012-01-25 |
CN102158650B (en) | 2015-11-25 |
JP4894712B2 (en) | 2012-03-14 |
CN101415076B (en) | 2011-04-20 |
JP2009100301A (en) | 2009-05-07 |
TW200934228A (en) | 2009-08-01 |
CN102158650A (en) | 2011-08-17 |
KR20090039595A (en) | 2009-04-22 |
TWI410125B (en) | 2013-09-21 |
US20090102942A1 (en) | 2009-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101415076B (en) | Composition determining apparatus, composition determining method, and program | |
CN101415077B (en) | Composition determining apparatus, composition determining method, and program | |
CN101616261B (en) | Image recording apparatus, image recording method, image processing apparatus, and image processing method | |
CN100462999C (en) | Image processing apparatus and method | |
CN102906745B (en) | Determining key video snippets to form a video summary using selection criteria | |
CN102006409B (en) | Photographing condition setting apparatus, photographing condition setting method, and photographing condition setting program | |
US8594390B2 (en) | Composition determination device, composition determination method, and program | |
CN104378547B (en) | Imaging device, image processing equipment, image processing method and program | |
US20110292288A1 (en) | Method for determining key video frames | |
CN102883104A (en) | Automatic image capture | |
CN101335835B (en) | Image pickup device, image display control method | |
CN102783136A (en) | Imaging device for capturing self-portrait images | |
CN103220463A (en) | Image capture apparatus and control method of image capture apparatus | |
US20110292229A1 (en) | Ranking key video frames using camera fixation | |
CN103605720B (en) | Retrieval device, retrieval method, and interface screen display method | |
CN101841654B (en) | Image processing apparatus and image processing method | |
JP2008035125A (en) | Image pickup device, image processing method, and program | |
CN101141565A (en) | Image processing apparatus and image processing method, computer program, and imaging apparatus | |
KR101140414B1 (en) | Digital Camera Apparatus for Supporting Deblurring and Method thereof | |
JP7264675B2 (en) | processor and program | |
JP2012029338A (en) | Composition determination apparatus, composition determination method, and program | |
US11653087B2 (en) | Information processing device, information processing system, and information processing method | |
JP6375114B2 (en) | Image reproducing apparatus and method for controlling image reproducing apparatus | |
JP2005260596A (en) | Photographic system, digital camera and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 2011-04-20 | Termination date: 2021-10-16 |