CN104113702A - Flash control method and device and image collection method and device

Publication number: CN104113702A
Authority: CN (China)
Legal status: Granted
Application number: CN201410361259.9A
Other languages: Chinese (zh)
Other versions: CN104113702B (en)
Inventors: 王正翔, 杜琳
Assignee: Beijing Zhigu Ruituo Technology Services Co Ltd
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201410361259.9A
Publication of CN104113702A; application granted and published as CN104113702B
Legal status: Active


Abstract

An embodiment of the invention discloses a flash control method and device. The flash control method comprises: obtaining distribution information of at least one subject in a scene to be shot; determining, according to the distribution information, a plurality of subject areas corresponding respectively to a plurality of depth ranges of the scene to be shot; and determining a plurality of groups of flash parameters corresponding respectively to the plurality of subject areas. An embodiment of the invention also discloses an image collection method and device. The image collection method comprises: obtaining the plurality of groups of flash parameters corresponding respectively to the subject areas in the plurality of depth ranges; in response to a shooting instruction, flashing the scene to be shot multiple times with the plurality of groups of flash parameters, and shooting the scene multiple times to obtain a plurality of initial images; and synthesizing the plurality of initial images. Because the plurality of groups of flash parameters are determined according to the distribution information of the subjects in the scene to be shot, images of the scene to be shot can be collected with good exposure.

Description

Flash control method and control device, and image collection method and collection device
Technical field
The present application relates to the field of image collection technology, and in particular to a flash control method and control device, and an image collection method and collection device.
Background art
When ambient light is poor, particularly at night, photography requires supplementary lighting of the scene with a flash; the light emitted by the flash during shooting illuminates the scene and yields a better photographic result. Some flashes are mounted directly on the camera: mobile phones and consumer cameras, for example, generally have a built-in flash module, while some more professional cameras use external flash units to provide better fill light for the scene.
Summary of the invention
An object of the present application is to provide a flash control solution and a related image collection solution.
In a first aspect, a possible embodiment of the present application provides a flash control method, comprising:
obtaining distribution information of at least one subject in a scene to be shot;
determining, according to the distribution information, a plurality of depth ranges of the scene to be shot relative to a shooting reference position, and a plurality of subject areas corresponding to the plurality of depth ranges;
determining a plurality of groups of flash parameters corresponding to the plurality of subject areas.
In a second aspect, a possible embodiment of the present application provides a flash control apparatus, comprising:
a distribution information obtaining submodule, configured to obtain distribution information of at least one subject in a scene to be shot;
a subject area determination submodule, configured to determine, according to the distribution information, a plurality of depth ranges of the scene to be shot relative to a shooting reference position, and a plurality of subject areas corresponding to the plurality of depth ranges;
a parameter determination submodule, configured to determine a plurality of groups of flash parameters corresponding to the plurality of subject areas.
In a third aspect, a possible embodiment of the present application provides an image collection method, comprising:
obtaining a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges of a scene to be shot;
in response to a shooting instruction, flashing the scene to be shot multiple times with the plurality of groups of flash parameters, and shooting the scene multiple times to obtain a plurality of initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes;
synthesizing the plurality of initial images.
In a fourth aspect, a possible embodiment of the present application provides an image collection apparatus, comprising:
a parameter obtaining module, configured to obtain a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges of a scene to be shot;
a flash module, configured to flash the scene to be shot multiple times with the plurality of groups of flash parameters in response to a shooting instruction;
an image capture module, configured to shoot the scene to be shot multiple times in response to the shooting instruction to obtain a plurality of initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes;
a processing module, configured to synthesize the plurality of initial images.
In at least one embodiment of the present application, a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges are determined according to the distribution information of at least one subject in a scene to be shot; when the scene is shot, the flash can then provide suitable fill light, according to the groups of flash parameters, for subjects at multiple different depths in the scene, so that images of the scene with good exposure can be collected.
Brief description of the drawings
Fig. 1 is a flowchart of a flash control method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an application scenario of a flash control method according to an embodiment of the present application;
Fig. 3a and Fig. 3b are schematic diagrams of application scenarios of two flash control methods according to embodiments of the present application;
Fig. 4 is a schematic structural block diagram of a flash control apparatus according to an embodiment of the present application;
Fig. 5a is a schematic structural block diagram of another flash control apparatus according to an embodiment of the present application;
Fig. 5b-5d are schematic structural block diagrams of the distribution information obtaining submodules of three flash control apparatuses according to embodiments of the present application;
Fig. 5e is a schematic structural block diagram of the depth range determination unit of a flash control apparatus according to an embodiment of the present application;
Fig. 5f is a schematic structural block diagram of the subject area determination unit of a flash control apparatus according to an embodiment of the present application;
Fig. 5g is a schematic structural block diagram of the parameter determination submodule of a flash control apparatus according to an embodiment of the present application;
Fig. 6 is a schematic structural block diagram of yet another flash control apparatus according to an embodiment of the present application;
Fig. 7 is a flowchart of an image collection method according to an embodiment of the present application;
Fig. 8a-8d are schematic diagrams of image synthesis in an image collection method according to an embodiment of the present application;
Fig. 9 is a schematic structural block diagram of an image collection apparatus according to an embodiment of the present application;
Fig. 10a is a schematic structural block diagram of another image collection apparatus according to an embodiment of the present application;
Fig. 10b is a schematic structural block diagram of yet another image collection apparatus according to an embodiment of the present application;
Fig. 10c is a schematic structural block diagram of the second determination submodule of an image collection apparatus according to an embodiment of the present application;
Fig. 11 is a schematic structural block diagram of still another image collection apparatus according to an embodiment of the present application.
Detailed description
The embodiments of the present application are described in further detail below with reference to the accompanying drawings (in which identical reference numerals denote identical elements) and to the embodiments. The following embodiments are used to illustrate the present application, not to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the present application are used only to distinguish different steps, devices, or modules; they denote neither any particular technical meaning nor any necessary logical order between them.
The inventors of the present application have found that when a scene to be shot contains multiple subjects at different depths from the shooting position, it is often difficult to obtain a suitable flash effect. For example, when the metering point is far from the shooting position, nearby subjects may receive too much flash light and be overexposed; when the metering point is close to the shooting position, distant subjects may be underexposed for lack of flash light. In view of this, as shown in Fig. 1, a possible implementation of the embodiments of the present application provides a flash control method, comprising:
S110: obtaining distribution information of at least one subject in a scene to be shot;
S120: determining, according to the distribution information, a plurality of depth ranges of the scene to be shot relative to a shooting reference position, and a plurality of subject areas corresponding to the plurality of depth ranges;
S130: determining a plurality of groups of flash parameters corresponding to the plurality of subject areas.
For example, a flash control apparatus provided by the present application serves as the execution subject of this embodiment and performs S110-S130. Specifically, the flash control apparatus may be arranged in a user device in the form of software, hardware, or a combination of both; the user device includes, but is not limited to: a camera, a mobile phone with an image collection function, smart glasses, and the like.
In the technical solution of the embodiments of the present application, a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges are determined according to the distribution information of at least one subject in the scene to be shot; when the scene is shot, the flash can then provide suitable fill light, according to the groups of flash parameters, for subjects at multiple different depths in the scene, so that images of the scene with good exposure can be collected.
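As a minimal sketch only (in Python; all numbers, names, and the power law are illustrative assumptions, since the embodiment fixes no particular formula or API), the S110-S130 flow might look like:

```python
def flash_parameter_groups(subject_depths, margin=0.2):
    """S110-S130 in miniature: from the subjects' depths (the
    distribution information of S110) to one group of flash parameters
    per depth range. Margin and power law are illustrative assumptions.
    """
    # S120: one depth range per distinct subject depth, padded by a margin
    depth_ranges = [(d - margin, d + margin) for d in sorted(set(subject_depths))]
    # S130: one parameter group per range; power grows with the square of
    # the mean depth under an inverse-square fill-light assumption
    return [{"range_m": r, "power": ((r[0] + r[1]) / 2) ** 2} for r in depth_ranges]

# Fig. 2 style scene: a person at 2 m, a scenic object at 3 m, a wall at 4 m
for group in flash_parameter_groups([2.0, 3.0, 4.0]):
    print(group)
```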
Each step of the embodiments of the present application is further detailed through the implementations below.
S110: obtaining distribution information of at least one subject in a scene to be shot.
In the embodiments of the present application, the distribution information comprises depth information of the at least one subject relative to a shooting reference position.
In the embodiments of the present application, the shooting reference position is a position fixed relative to the position of an image collection device used to shoot the scene, and can be set as required. For example, in one possible implementation, the shooting reference position may be the imaging surface or the lens position of the image collection device; in another possible implementation, it may be, for example, the position of a depth information acquisition module; in yet another possible implementation, it may be, for example, the position of the flash.
In the embodiments of the present application, the scene to be shot generally contains at least one subject spanning a relatively large depth range. For example, in one possible implementation, the scene comprises a target object, a background object behind the target object, and a foreground object in front of the target object. Each such object may be an independent object, for example a person, or may be part of an object: for example, a palm that a person stretches forward may be a target object while the person's body is a background object.
The embodiments of the present application may obtain the depth information in multiple ways, for example:
The depth information may be obtained by depth acquisition.
In one possible implementation, the depth information may be obtained by a depth sensor of the flash control apparatus. The depth sensor may be, for example, an infrared ranging sensor, an ultrasonic ranging sensor, a stereo-camera ranging sensor, or the like.
In another possible implementation, the depth information may also be obtained from at least one external device. For example, in one possible implementation, the flash control apparatus has no depth sensor while another user device, for example the user's smart glasses, has one; the depth information can then be obtained from that other user device. In this implementation, the flash control apparatus may communicate with the external device through a communication device to obtain the depth information.
In the embodiments of the present application, the distribution information further comprises lateral distribution information of the at least one subject along directions substantially perpendicular to the depth direction.
Optionally, in one possible implementation, the lateral distribution information may be two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject.
Optionally, in one possible implementation, the distribution information may be obtained from a depth map of the scene to be shot relative to the shooting reference position.
Those skilled in the art will appreciate that the depth map contains a depth value for each subject in the scene, so it carries both the depth information described above and the lateral distribution information. For example, in one possible implementation, the lateral distribution information may be the two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject.
In the embodiments of the present application, the two kinds of two-dimensional distribution information described above can be obtained by information collection. For example, in one possible implementation, an image of the scene may be pre-shot and the two-dimensional distribution information of the image regions corresponding to the at least one subject obtained by image processing; in another possible implementation, the depth map may be obtained by a depth sensor, so that the depth information and the two-dimensional distribution information are obtained at the same time.
In yet another possible implementation of the embodiments of the present application, similarly to the depth information, the two-dimensional distribution information may also be obtained from at least one external device.
S120: determining, according to the distribution information, a plurality of depth ranges of the scene to be shot relative to a shooting reference position, and a plurality of subject areas corresponding to the plurality of depth ranges.
In one possible implementation of the embodiments of the present application, determining the plurality of subject areas according to the distribution information comprises:
determining the plurality of depth ranges according to the depth information;
determining, according to the two-dimensional distribution information, at least one subject area, among the plurality of subject areas, corresponding to each of the plurality of depth ranges.
In one possible implementation, determining the plurality of depth ranges according to the depth information comprises:
determining the depth distribution of the at least one subject according to the depth information;
determining the plurality of depth ranges according to the depth distribution.
For example, as shown in Fig. 2, in one possible implementation the scene to be shot contains three subjects whose depth distribution is as follows: the first object 211 is a person at a depth d1 of 2 meters relative to the shooting reference position 220; the second object 212 is a scenic object at a depth d2 of 3 meters; the third object is a city-wall background 213 at a depth d3 of 4 meters. Three depth ranges, for example, can then be determined from this depth distribution: a first depth range of 1.8-2.2 meters, a second depth range of 2.8-3.2 meters, and a third depth range of 3.8-4.2 meters.
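Purely as an illustration (the embodiment does not prescribe how ranges are derived from the depth distribution), a short Python sketch that reproduces the three ranges of this example might cluster nearby depths and pad each cluster with a margin:

```python
def determine_depth_ranges(subject_depths, margin=0.2):
    """Cluster subject depths into depth ranges (S120, first part):
    sort the depths, merge any depth that falls within an existing
    range, and pad each cluster by `margin` on both sides. The 0.2 m
    margin is illustrative, chosen to reproduce this example.
    """
    ranges = []
    for d in sorted(subject_depths):
        if ranges and d - margin <= ranges[-1][1]:
            ranges[-1] = (ranges[-1][0], d + margin)  # extend the last range
        else:
            ranges.append((d - margin, d + margin))   # start a new range
    return ranges

print(determine_depth_ranges([2.0, 3.0, 4.0]))
# [(1.8, 2.2), (2.8, 3.2), (3.8, 4.2)]
```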
In one possible implementation of the embodiments of the present application, determining, according to the two-dimensional distribution information, the at least one subject area corresponding to each depth range may comprise:
determining, according to the two-dimensional distribution information, the lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
determining the at least one subject area according to the lateral distribution.
The following description takes as an example obtaining the depth information and the lateral distribution from a depth map of the scene to be shot.
Fig. 3a shows the depth map of the scene of Fig. 2 relative to the shooting reference position 220, in which the first object 211 corresponds to a first region 311, the second object 212 corresponds to a second region 312, and the third object 213 corresponds to a third region 313; in Fig. 3a, different types of shading represent different distances to the shooting reference position 220.
By processing the depth map, the two-dimensional distribution information of the first region 311 on the depth map can be obtained; for example, from the shape and position of the first region 311 on the depth map, the lateral distribution of the first object 211 within the first depth range can be obtained. Likewise, the lateral distributions of the subjects in the depth ranges corresponding to the second object 212 and the third object 213 can be obtained.
The subject areas in each depth range are then obtained from the lateral distribution. In the embodiment of Fig. 3a, each depth range contains only one subject area; in other possible implementations of the embodiments of the present application, one depth range may also contain multiple subject areas. In the example implementation of Fig. 3b, the first depth range contains two subjects, corresponding respectively to a first subregion 311a and a second subregion 311b that are laterally separated; accordingly, the first depth range can contain two subject areas.
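The following sketch shows one way to split a depth range's mask into laterally separated subject areas, as in the Fig. 3b case of two subregions in one range. It assumes the depth map is a plain 2-D list of depth values; a real implementation would more likely use an image-processing library.

```python
from collections import deque

def subject_areas_in_range(depth_map, d_min, d_max):
    """S120, second part: mask the pixels whose depth falls in the
    range, then split the mask into 4-connected components; each
    component is one subject area."""
    h, w = len(depth_map), len(depth_map[0])
    in_range = [[d_min <= depth_map[y][x] <= d_max for x in range(w)]
                for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if in_range[y][x] and not seen[y][x]:
                component, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:                       # flood fill (BFS)
                    cy, cx = queue.popleft()
                    component.append((cy, cx))
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           in_range[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(component)            # one subject area
    return areas
```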
Of course, those skilled in the art will appreciate that, optionally, the capability of the flash module may also be taken into account when determining the depth ranges and the subject areas. For example, when the flash module cannot pan its direction, the two subject areas in the first depth range of Fig. 3b may be merged into one large subject area containing both.
S130: determining a plurality of groups of flash parameters corresponding to the plurality of subject areas.
In the embodiments of the present application, the plurality of subject areas correspond one-to-one with the plurality of groups of flash parameters.
When determining the group of flash parameters corresponding to a subject area, step S130 may satisfy the following condition: when a flash module flashes with that group of parameters, the flash covers the subject area with a light intensity meeting a set standard.
In the embodiments of the present application, each group among the plurality of groups of flash parameters comprises:
a flash distance parameter.
The flash distance parameter here corresponds to the flash distance at which the flash light intensity meets the set standard (for example, a fill-light standard of neither underexposure nor overexposure). After the flash module flashes according to the flash distance parameter corresponding to a subject area, the flash distance of that flash suits the corresponding depth range. In one possible implementation, since a depth range is generally a range of values, the flash distance may be determined from the mean depth of the range.
Optionally, in one possible implementation of the embodiments of the present application, the flash distance parameter may comprise:
flash power.
In general, the larger the flash power of the flash module, the farther its flash distance.
Optionally, in another possible implementation, the flash distance parameter may comprise:
flash focal length.
In general, the larger the flash focal length of the flash module, the more concentrated its light and the farther its flash distance.
Optionally, in yet another possible implementation, the flash module comprises multiple external flash submodules located at different depths along the shooting direction. In this case the flash distance of the flash module is also determined by the flash position, so in this implementation the flash distance parameter further comprises:
flash position.
For example, five external flash submodules are arranged along the shooting direction at depths of 0.5 m, 1 m, 2 m, 3 m, and 5 m relative to the shooting reference position. In the implementation with three subjects described above, the flash position corresponding to the person may, for example, be 1 m, the flash position corresponding to the scenic object may, for example, be 2 m, and the flash position corresponding to the city-wall background may, for example, be 3 m. Of course, when determining the flash position, factors such as the flash capability and mounting state of the external flash submodules may also be taken into account. In another possible implementation, the position of the flash module itself may also be adjustable.
Of course, those skilled in the art will appreciate that in other possible implementations of the embodiments of the present application, the flash distance parameter may comprise several of the flash power, the flash focal length, and the flash position; for example, the flash distance of the flash module may be determined by adjusting the flash power and the flash focal length at the same time. Other parameters capable of adjusting the flash distance of the flash module may also be applied in the implementations of the embodiments of the present application.
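As a hedged numeric sketch only (the inverse-square relation between power and reach, the 1 m standoff, and the submodule positions are assumptions chosen to reproduce the example above, not anything the embodiment prescribes), a flash distance parameter for one depth range might be derived as:

```python
def flash_distance_params(depth_range, base_power=1.0, base_reach=1.0,
                          standoff=1.0,
                          positions=(0.5, 1.0, 2.0, 3.0, 5.0)):
    """Map one depth range to a flash distance parameter: power under
    an inverse-square fill-light assumption, plus the external flash
    submodule whose distance to the subject is closest to the desired
    standoff. All constants are illustrative assumptions.
    """
    target = sum(depth_range) / 2            # mean depth of the range
    # inverse-square law: reach scales with the square root of power
    power = base_power * ((target - standoff) / base_reach) ** 2
    # person at 2 m -> submodule at 1 m, scenic object at 3 m -> 2 m, etc.
    position = min(positions, key=lambda p: abs((target - p) - standoff))
    return {"power": power, "flash_position": position}
```

With the Fig. 2 depth ranges this yields submodule positions of 1 m, 2 m, and 3 m for the person, the scenic object, and the city wall respectively, matching the example.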
In one possible implementation of the embodiments of the present application, for a better fill-light effect on the corresponding subject area, each group of flash parameters further comprises:
a flash direction.
In one possible implementation of the embodiments of the present application, the flash module may have limited coverage within a depth range. For example, when the depth range is far away, the flash focal length must be increased for the flash light intensity to reach the set standard; as the flash focal length increases, the flash coverage angle decreases, so the flash direction must be adjusted to keep the subject area within the coverage angle and achieve a better fill-light effect. Taking the embodiment of Fig. 3b as an example, those skilled in the art will appreciate that the two flash directions corresponding to the two subject areas of the first subregion 311a and the second subregion 311b may be one to the left and the other to the right.
In another possible implementation of the embodiments of the present application, optionally, each group of flash parameters further comprises:
a flash coverage angle.
As described above, the lateral coverage, at a given depth range, of the light emitted by a flash module can be set by adjusting its flash coverage angle: the larger the coverage angle, the larger the spot area the flash covers at that depth range, and vice versa. The flash coverage angle can therefore be determined from the size of the subject area. Those skilled in the art will appreciate that, when the flash light intensity at a subject area meets the set standard, a smaller spot that still covers the subject area consumes less energy, so determining a suitable coverage angle from the subject area saves energy. In addition, as described above, the flash distance and the flash coverage angle can be adjusted simultaneously through the flash focal length of the flash module: the larger the flash focal length, the larger the flash distance and the smaller the coverage angle. Hence, for a given flash power, when the subject area is small, a larger flash distance can be reached by narrowing the coverage angle until it just covers the subject area.
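A minimal geometric sketch of deriving a flash direction and a just-covering flash coverage angle from a subject area, taking an area as the list of (row, column) pixels produced by the earlier sketch. The pinhole-style pixel-to-angle conversion, with principal point cx0 and focal length fx in pixels, is an assumption for illustration.

```python
import math

def aim_and_coverage(area_pixels, depth, cx0, fx):
    """Aim the flash at the subject area's centroid and open the beam
    just wide enough to span the area's lateral extent at its depth."""
    xs = [x for _, x in area_pixels]
    centroid_x = sum(xs) / len(xs)
    # direction: horizontal angle from the optical axis to the centroid
    pan = math.degrees(math.atan((centroid_x - cx0) / fx))
    # lateral width of the area in metres at the given depth
    width_m = (max(xs) - min(xs) + 1) * depth / fx
    # coverage angle that just spans that width at that depth
    coverage = 2 * math.degrees(math.atan(width_m / (2 * depth)))
    return pan, coverage
```

For the two subregions of Fig. 3b this yields one pan angle to the left (negative) and one to the right (positive), consistent with the description above.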
Of course, those skilled in the art will appreciate that in other possible implementations of the embodiments of the present application, factors such as the color and brightness of the scene to be shot may also be taken into account when determining the groups of flash parameters.
In the implementations above, the flash control apparatus does not include a flash module; it only generates the groups of flash parameters, which can then be provided to one or more flash modules. In another possible implementation of the embodiments of the present application, the flash control apparatus may also include the flash module, in which case the flash control method further comprises:
in response to a shooting instruction, flashing multiple times with the plurality of groups of flash parameters.
In this implementation, each flash among the multiple flashes corresponds to one group among the plurality of groups of flash parameters. In another implementation, optionally, for example when the flash module comprises multiple flash submodules, one flash may also correspond to multiple groups of flash parameters for multiple flash submodules: in the example implementation of Fig. 3b, within a single flash, two flash submodules facing different directions may flash simultaneously, one with the first group of flash parameters corresponding to the first subregion 311a and the other with the second group corresponding to the second subregion 311b, so as to fill-light, respectively, the person on the left and the person on the right of the first depth region in the image to be shot.
Those skilled in the art will see that because the groups of flash parameters correspond to multiple subject areas in multiple different depth ranges, the multiple flashes can also correspond to different flash distances and/or different flash coverages; in the embodiments of the present application, the multiple flashes can thus provide suitable fill light for subjects at different depths and different lateral distributions, avoiding uneven exposure.
Those skilled in the art will appreciate that, in the above methods of the embodiments of the present application, the sequence numbers of the steps do not imply an execution order; the execution order of the steps should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
As shown in Fig. 4, a possible embodiment of the present application provides a flash control apparatus 400, comprising:
a distribution information obtaining submodule 410, configured to obtain distribution information of at least one subject in a scene to be shot;
a subject area determination submodule 420, configured to determine, according to the distribution information, a plurality of depth ranges of the scene to be shot relative to a shooting reference position, and a plurality of subject areas corresponding to the plurality of depth ranges;
a parameter determination submodule 430, configured to determine a plurality of groups of flash parameters corresponding to the plurality of subject areas.
In the technical solution of the embodiments of the present application, a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges are determined according to the distribution information of at least one subject in the scene to be shot; when the scene is shot, the flash can then provide suitable fill light, according to the groups of flash parameters, for subjects at multiple different depths in the scene, so that images of the scene with good exposure can be collected.
Each module of the embodiments of the present application is further detailed through the implementations below.
As shown in Fig. 5a, in one possible implementation of the embodiments of the present application, the distribution information obtaining submodule 410 may comprise:
an information collection unit 414, configured to obtain the distribution information by information collection.
This is further detailed below according to the content of the distribution information.
As shown in Fig. 5b, in another possible implementation of the embodiments of the present application, the distribution information obtaining submodule 410 may comprise:
a communication unit 415, configured to obtain the distribution information from at least one external device.
In one possible implementation of the embodiments of the present application, the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position.
As shown in Fig. 5c, the distribution information obtaining submodule 410 may comprise:
a depth information obtaining unit 411, configured to obtain the depth information.
In the embodiments of the present application, the shooting reference position is a position fixed relative to the position of an image collection device used to shoot the scene, and can be set as required. For details, refer to the corresponding description in the method embodiments above, which is not repeated here.
In the embodiments of the present application, the scene to be shot generally contains at least one subject spanning a relatively large depth range. For details, refer to the corresponding description in the method embodiments above, which is not repeated here.
In one possible implementation of the embodiments of the present application, the depth information obtaining unit 411 may be a depth sensor configured to collect the depth information; in another possible implementation, it may be a communication device configured to obtain the depth information from an external device. For details, refer to the corresponding description in the method embodiments above, which is not repeated here.
In the embodiments of the present application, the distribution information further comprises lateral distribution information of the at least one subject along directions substantially perpendicular to the depth direction.
Optionally, in one possible implementation, the lateral distribution information may be two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject. As shown in Fig. 5c, in this implementation the distribution information obtaining submodule 410 further comprises:
a two-dimensional distribution information obtaining unit 412, configured to obtain the two-dimensional distribution information.
In one possible implementation of the embodiments of the present application, the two-dimensional distribution information obtaining unit 412 may comprise an image sensor that obtains an image of the scene to be shot, the two-dimensional distribution information then being obtained by image processing; of course, in another possible implementation, the unit 412 may also be a communication device configured to obtain the two-dimensional distribution information from an external device.
As shown in Fig. 5d, optionally, in one possible implementation, the distribution information obtaining submodule 410 may comprise:
a depth map processing unit 413, configured to obtain the distribution information from a depth map of the scene to be shot relative to the shooting reference position.
In this implementation, the distribution information comprises, in addition to the depth information described above:
the two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject.
Accordingly, in this implementation, the depth map processing unit 413 may further be configured to obtain, from the depth map, both the depth information described above and the two-dimensional distribution information on the depth map.
In another possible implementation of the embodiments of the present application, the two-dimensional distribution information may likewise be obtained from at least one external device.
As shown in Fig. 5a, in one possible implementation of the embodiments of the present application, the subject area determination submodule 420 comprises:
a depth range determination unit 421, configured to determine the plurality of depth ranges according to the depth information;
a subject area determination unit 422, configured to determine, according to the two-dimensional distribution information, at least one subject area, among the plurality of subject areas, corresponding to each of the plurality of depth ranges.
As shown in Fig. 5e, in this implementation the depth range determination unit 421 comprises:
a depth distribution determination subunit 4211, configured to determine the depth distribution of the at least one subject according to the depth information;
a depth range determination subunit 4212, configured to determine the plurality of depth ranges according to the depth distribution.
For the functions of the subunits of the depth range determination unit 421, refer to the corresponding description in the method embodiments above, which is not repeated here.
As shown in Fig. 5f, in this implementation the subject area determination unit 422 comprises:
a lateral distribution determination subunit 4221, configured to determine, according to the two-dimensional distribution information, the lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
a subject area determination subunit 4222, configured to determine the at least one subject area according to the lateral distribution.
For the functions of the subunits of the subject area determination unit 422, refer to the corresponding description in the method embodiments above, which is not repeated here.
As shown in Fig. 5a, in one possible implementation of the embodiments of the present application, the parameter determination submodule 430 comprises:
a flash distance parameter determination unit 431, configured to determine a flash distance parameter corresponding to each of the plurality of subject areas;
a flash direction determination unit 432, configured to determine a flash direction corresponding to each subject area.
The flash distance parameter may consist of one or more of the following parameters:
flash power, flash focal length, and flash position.
For the functions of the flash distance parameter determination unit 431 and the flash direction determination unit 432, refer to the corresponding description in the method embodiments above.
As shown in Fig. 5g, in another possible implementation of the embodiments of the present application, in addition to the flash distance parameter determination unit 431 and the flash direction determination unit 432, the parameter determination submodule 430 further comprises:
a flash angle determination unit 433, configured to determine a flash coverage angle corresponding to each subject area.
For the function of the flash angle determination unit 433, refer to the corresponding description in the method embodiments above.
As shown in Fig. 5a, in one possible implementation of the embodiments of the present application, the apparatus 400 may further comprise:
a flash module 440, configured to flash multiple times with the plurality of groups of flash parameters in response to a shooting instruction.
For the function of the flash module 440, refer to the corresponding description in the method embodiments above.
Those skilled in the art will see that because the groups of flash parameters correspond to multiple subject areas in multiple different depth ranges, the multiple flashes can also correspond to different flash distances and/or different flash coverages; in the embodiments of the present application, the multiple flashes can thus provide suitable fill light for subjects at different depths and different lateral distributions, avoiding uneven exposure.
Fig. 6 is a schematic structural diagram of yet another flash control apparatus 500 provided by an embodiment of the present application; the specific embodiments of the present application do not limit the specific implementation of the flash control apparatus 500. As shown in Fig. 6, the flash control apparatus 500 may comprise:
a processor 510, a communications interface 520, a memory 530, and a communication bus 540, wherein:
the processor 510, the communications interface 520, and the memory 530 communicate with one another via the communication bus 540;
the communications interface 520 is used for communicating with network elements such as clients;
the processor 510 is configured to execute a program 532, and may specifically perform the relevant steps in the method embodiments above.
In particular, the program 532 may comprise program code, the program code including computer operation instructions.
The processor 510 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 530 is used for storing the program 532. The memory 530 may comprise high-speed RAM, and may further comprise non-volatile memory, for example at least one disk memory. The program 532 may specifically cause the flash control apparatus 500 to perform the following steps:
obtaining distribution information of at least one subject in a scene to be shot;
determining, according to the distribution information, a plurality of depth ranges of the scene to be shot relative to a shooting reference position, and a plurality of subject areas corresponding to the plurality of depth ranges;
determining a plurality of groups of flash parameters corresponding to the plurality of subject areas.
For the specific implementation of each step in the program 532, refer to the corresponding description of the corresponding steps and units in the embodiments above, which is not repeated here. Those skilled in the art will clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments; they are not repeated here.
As shown in Fig. 7, a possible embodiment of the present application provides an image collection method, comprising:
S610: obtaining a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges of a scene to be shot;
S620: in response to a shooting instruction, flashing the scene to be shot multiple times with the plurality of groups of flash parameters, and shooting the scene multiple times to obtain a plurality of initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes;
S630: synthesizing the plurality of initial images.
For example, an image collection apparatus provided by the present application serves as the execution subject of this embodiment and performs S610-S630. Specifically, the image collection apparatus may be arranged in a user device in the form of software, hardware, or a combination of both, or may itself be the user device; the user device includes, but is not limited to: a camera, a mobile phone with an image collection function, smart glasses, and the like.
In the technical solution of the embodiments of the present application, a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges are determined according to the distribution information of at least one subject in the scene to be shot; when the scene is shot, the flash can then provide suitable fill light, according to the groups of flash parameters, for subjects at multiple different depths in the scene, so that images of the scene with good exposure can be collected.
Each step of the embodiments of the present application is further detailed through the implementations below.
S610: obtaining a plurality of groups of flash parameters corresponding to a plurality of subject areas in a plurality of depth ranges of a scene to be shot.
In the embodiments of the present application, step S610 may obtain the groups of flash parameters in multiple ways, for example:
In one possible implementation, the groups of flash parameters are obtained from at least one external device.
In one possible implementation, the image collection apparatus may be a digital camera, while another user device of the user, for example a mobile phone or smart glasses, obtains the current depth information of the scene to be shot through its own depth sensor and derives the groups of flash parameters from that depth information; the image collection apparatus then obtains the groups of flash parameters by communicating with that external device.
In another possible implementation, step S610 obtains the groups of flash parameters in the same way as in the flash control method embodiment shown in Fig. 1, comprising:
obtaining distribution information of at least one subject in the scene to be shot;
determining, according to the distribution information, the plurality of depth ranges of the scene to be shot relative to a shooting reference position, and the plurality of subject areas corresponding to the plurality of depth ranges;
determining the plurality of groups of flash parameters corresponding to the plurality of subject areas.
Optionally, in one possible implementation, the distribution information may comprise:
depth information of the at least one subject relative to the shooting reference position.
Optionally, in one possible implementation, the distribution information may further comprise:
two-dimensional distribution information, on a shooting imaging surface, of the image regions corresponding to the at least one subject.
Optionally, in one possible implementation, obtaining the distribution information may comprise:
obtaining the distribution information from a depth map of the scene to be shot relative to the shooting reference position.
In this implementation, the distribution information may comprise, in addition to the depth information:
the two-dimensional distribution information, on the depth map, of the regions corresponding to the at least one subject.
Optionally, in one possible implementation, determining the plurality of subject areas according to the distribution information comprises:
determining the plurality of depth ranges according to the depth information;
determining, according to the two-dimensional distribution information, at least one subject area, among the plurality of subject areas, corresponding to each of the plurality of depth ranges.
Optionally, in one possible implementation, determining the plurality of depth ranges according to the depth information comprises:
determining the depth distribution of the at least one subject according to the depth information;
determining the plurality of depth ranges according to the depth distribution.
Optionally, in one possible implementation, determining, according to the two-dimensional distribution information, the at least one subject area corresponding to each depth range comprises:
determining, according to the two-dimensional distribution information, the lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
determining the at least one subject area according to the lateral distribution.
For further details on obtaining the groups of flash parameters, refer to the corresponding description in the embodiments of Fig. 1 to Fig. 3b, which is not repeated here.
S620: in response to a shooting instruction, flashing the scene to be shot multiple times with the plurality of groups of flash parameters, and shooting the scene multiple times to obtain a plurality of initial images, wherein each shot among the multiple shots corresponds to one flash among the multiple flashes.
In one possible implementation of the embodiments of the present application, the shooting instruction may be generated by a user's action, for example when the user presses the shutter or gives a voice shooting command; in another possible implementation, it may be generated when a preset shooting condition is met, for example, in a surveillance scenario, taking one photo every 5 minutes, or taking a photo whenever a moving object enters the scene.
In the embodiments of the present application, corresponding to the plurality of groups of flash parameters, the flash fires multiple times; each flash is accompanied by one shot that obtains one initial image of the scene, so that after the multiple flashes the multiple shots are also complete and the plurality of initial images are obtained.
In this implementation, the parameters of each shot may be identical. Of course, in other possible implementations of the embodiments of the present application, they may also be adjusted according to the groups of flash parameters as required by the desired shooting effect, for example: the focal length of each shot may match the flash distance of the corresponding flash.
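A sketch of S620 under the assumption of hypothetical `camera` and `flash` device handles (the embodiment leaves the concrete device API open):

```python
def acquire(camera, flash, flash_param_groups):
    """Flash the scene once per parameter group and take one shot per
    flash, collecting one initial image for each group."""
    initial_images = []
    for params in flash_param_groups:
        flash.configure(params)              # one group of flash parameters
        image = camera.capture(flash=flash)  # shot synchronized with flash
        initial_images.append(image)
    return initial_images
```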
S630: synthesizing the plurality of initial images.
In one possible implementation of the embodiments of the present application, step S630 may comprise:
determining at least one qualified image region of each of the plurality of initial images according to at least one exposure standard;
synthesizing the plurality of initial images according to the at least one qualified image region of each initial image.
On the initial image corresponding to a given flash, the subjects within the depth range corresponding to that flash are properly exposed, so the image regions corresponding to those subjects on that initial image should meet at least one exposure standard (for example: a luminance standard, a resolution standard, etc.); in this implementation, therefore, the qualified image regions of each initial image are determined purely according to the exposure quality of each region of the obtained initial images.
After the qualified image regions of each of the plurality of initial images are obtained, suitable regions can be selected, spliced, and fused; in one possible implementation, the boundary pixels between regions may be blurred or averaged by a fusion technique to keep the whole photograph continuous.
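One possible numeric sketch of this exposure-driven synthesis, assuming 8-bit image arrays and using a soft per-pixel weighting as the fusion technique (both assumptions; the embodiment only requires selecting well-exposed regions and smoothing their boundaries):

```python
import numpy as np

def synthesize(initial_images, target_luma=0.5):
    """For every pixel, favor whichever initial image is best exposed
    there (closest to a target luminance), blending softly so region
    boundaries stay continuous."""
    stack = np.stack([img.astype(np.float64) / 255.0
                      for img in initial_images])       # (n, h, w[, 3])
    luma = stack.mean(axis=-1) if stack.ndim == 4 else stack
    # weight each image by how close its local exposure is to the target
    error = np.abs(luma - target_luma)                  # (n, h, w)
    weights = np.exp(-error / 0.1)
    weights /= weights.sum(axis=0, keepdims=True)
    if stack.ndim == 4:
        weights = weights[..., None]
    return (weights * stack).sum(axis=0)                # blended composite
```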
Besides synthesizing the image according to the exposure effect of the obtained initial images, in another possible implementation of the embodiment of the present application, the image can also be synthesized from the target image subregions of the initial images, i.e. the subregions corresponding to the target areas in the scene to be shot at which each shot is aimed. For example, step S630 may comprise:
determining at least one target image subregion of each initial image in the multiple initial images according to the multiple subject areas;
synthesizing the multiple initial images according to the at least one target image subregion of each initial image.
Optionally, in one possible implementation, determining the at least one target image subregion of each initial image according to the multiple subject areas comprises:
determining, according to the multiple subject areas, at least one target subject of each shot in the multiple shots;
determining the at least one target image subregion of each initial image according to the at least one target subject of each shot.
In the example implementation shown in Fig. 2, the image regions corresponding to the first object 211, the second object 212 and the third object 213 on each initial image can be determined from the depth map of the scene to be shot. After the three groups of flash parameters have been determined from the three subject areas of the three objects, it can be determined, for example, that the target subject of the first group of flash parameters is the first object 211, the target subject of the second group is the second object 212, and the target subject of the third group is the third object 213. Accordingly, as shown in Figs. 8a-8c, in the first initial image 710, shot while flashing with the first group of flash parameters, the target image subregion is the first target image subregion 711 corresponding to the first object 211 (target image subregions are drawn with diagonal hatching); likewise, the target image subregion in the second initial image 720 corresponding to the second group of flash parameters is the second target image subregion 721 corresponding to the second object 212; and the target image subregion in the third initial image 730 corresponding to the third group of flash parameters is the third target image subregion 731 corresponding to the third object 213. As shown in Fig. 8d, synthesizing these three target image subregions yields a composite image 740 in which every depth is properly exposed.
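Assuming a depth map registered to the initial images and one (near, far) depth range per flash group, the composition of Figs. 8a-8d can be sketched as follows; this is an illustrative sketch under those assumptions, not the application's implementation:

    import numpy as np

    def synthesize_by_depth(initial_images, depth_map, depth_ranges):
        """Fill each pixel from the image whose flash covered that pixel's depth.

        depth_map: H x W array of depths; depth_ranges: list of (near, far)
        pairs, one per initial image, in the same units as the depth map.
        """
        result = np.zeros_like(initial_images[0])
        for image, (near, far) in zip(initial_images, depth_ranges):
            mask = (depth_map >= near) & (depth_map < far)
            result[mask] = image[mask]  # this range's target image subregion
        return result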
Those skilled in the art will see that, by applying to the scene to be shot multiple flashes aimed at different depths and/or different lateral distributions, the method of the embodiment of the present application can suitably fill-light subjects at different depths and/or different lateral positions, avoiding uneven exposure.
Those skilled in the art will also appreciate that, in the above methods of the embodiments of the present application, the numbering of the steps does not imply an order of execution; the execution order of the steps should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present application in any way.
As shown in Fig. 9, one possible implementation of the embodiment of the present application provides an image collecting device 800, comprising:
a parameter acquisition module 810, for obtaining multiple groups of flash parameters corresponding to multiple subject areas of multiple depth ranges of a scene to be shot;
a flash module 820, for flashing the scene to be shot multiple times with the multiple groups of flash parameters in response to a shooting instruction;
an image capture module 830, for shooting the scene to be shot multiple times in response to the shooting instruction to obtain multiple initial images, wherein each shot of the multiple shots corresponds to one flash of the multiple flashes;
a processing module 840, for synthesizing the multiple initial images.
The technical scheme of the embodiment of the present application determines, from the distribution information of at least one subject in the scene to be shot, the multiple groups of flash parameters corresponding to the multiple subject areas of the multiple depth ranges, so that when the scene is shot the flash can suitably fill-light the subjects at the different depths in the scene according to those groups of flash parameters, and an image of the scene with a good exposure effect can be collected.
Each module of the embodiment of the present application is further detailed below:
Optionally, as shown in Fig. 10a, in one possible implementation of the embodiment of the present application, the parameter acquisition module 810 may comprise:
a communication submodule 811, for obtaining the multiple groups of flash parameters from at least one external device.
For example, in one possible implementation, the image collecting device 800 may be a digital camera; another user equipment of the user, for example a mobile phone or a pair of smart glasses, obtains the current depth information of the scene to be shot through its own depth sensor and derives the multiple groups of flash parameters from that depth information, and the image collecting device 800 obtains the multiple groups of flash parameters by communicating with that external device.
Optionally, as shown in Fig. 10b, in one possible implementation of the embodiment of the present application, the parameter acquisition module 810 may comprise:
a distribution information acquisition submodule 812, for obtaining the distribution information of at least one subject in the scene to be shot;
a subject area determination submodule 813, for determining, according to the distribution information, the multiple depth ranges of the scene to be shot relative to a shooting reference position and the multiple subject areas corresponding to the multiple depth ranges;
a parameter determination submodule 814, for determining the multiple groups of flash parameters corresponding to the multiple subject areas.
In the present embodiment, the structure and function of the parameter acquisition module 810 may be identical to those of the flash control device 400 described above; that is, the structure and function of the distribution information acquisition submodule 812, the subject area determination submodule 813 and the parameter determination submodule 814 are identical to those of the distribution information acquisition submodule 410, the subject area determination submodule 420 and the parameter determination submodule 430 of the flash control device 400. The structure and function of the parameter acquisition module 810 are therefore not repeated here; refer to the description of the flash control device 400 in the embodiment shown in Fig. 4.
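For orientation only, a rough Python sketch of the pipeline such a parameter acquisition module could follow is given below; the quantile-based split and the mid-depth flash distance are illustrative assumptions, not the depth-distribution clustering or parameter rules of the flash control device 400:

    import numpy as np

    def depth_ranges_from_map(depth_map, num_ranges=3):
        """Split the observed depths into contiguous ranges (quantile-based)."""
        valid = depth_map[np.isfinite(depth_map)]
        edges = np.quantile(valid, np.linspace(0.0, 1.0, num_ranges + 1))
        return list(zip(edges[:-1], edges[1:]))

    def flash_parameters_for_ranges(depth_ranges):
        """Derive one group of flash parameters per depth range."""
        groups = []
        for near, far in depth_ranges:
            groups.append({
                "flash_distance": 0.5 * (near + far),  # aim at the mid-depth
                "flash_direction": (0.0, 0.0),         # straight ahead; the lateral
                                                       # distribution would refine this
                "coverage_angle": None,                # optionally narrowed per area
            })
        return groups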
Optionally, as shown in Fig. 10a, in one possible implementation, the processing module 840 comprises:
a first determination submodule 841, for determining, according to at least one exposure standard, at least one qualified image region of each initial image in the multiple initial images;
a first synthesis submodule 842, for synthesizing the multiple initial images according to the at least one qualified image region of each initial image.
On the initial image corresponding to one flash, the subjects within the depth range corresponding to that flash are properly exposed, and the image regions corresponding to those subjects meet at least one exposure standard (for example: a luminance standard, a resolution standard, and so on); therefore, in the present embodiment, the first determination submodule 841 can determine the at least one qualified image region on each initial image solely from the exposure effect of each region of the multiple initial images obtained.
After the first determination submodule 841 has obtained the at least one qualified image region of each initial image in the multiple initial images, the first synthesis submodule 842 can select suitable qualified image regions to splice and fuse; in one possible implementation, the boundary pixels between the image regions can be blurred or averaged by a fusion technique to preserve the continuity of the whole photograph.
Optionally, as shown in Fig. 10b, in another possible implementation, the processing module 840 comprises:
a second determination submodule 843, for determining at least one target image subregion of each initial image in the multiple initial images according to the multiple subject areas;
a second synthesis submodule 844, for synthesizing the multiple initial images according to the at least one target image subregion of each initial image.
Optionally, as shown in Fig. 10c, in one possible implementation, the second determination submodule 843 comprises:
a target determination unit 8431, for determining, according to the multiple subject areas, at least one target subject of each shot in the multiple shots;
a subregion determination unit 8432, for determining the at least one target image subregion of each initial image according to the at least one target subject of each shot.
In the embodiment shown in Fig. 10c, for the functions of the modules and units, refer to the corresponding description of the embodiment shown in Figs. 8a-8d; they are not repeated here.
Fig. 11 is a schematic structural diagram of another image collecting device 1000 provided by an embodiment of the present application; the specific embodiments of the present application do not limit the specific implementation of the image collecting device 1000. As shown in Fig. 11, the image collecting device 1000 may comprise:
a processor 1010, a communications interface 1020, a memory 1030 and a communication bus 1040, wherein:
the processor 1010, the communications interface 1020 and the memory 1030 communicate with one another through the communication bus 1040;
the communications interface 1020 is used for communicating with network elements such as a client;
the processor 1010 is used for executing a program 1032, and specifically can perform the relevant steps of the method embodiments above.
In particular, the program 1032 may comprise program code, and the program code comprises computer operation instructions.
The processor 1010 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 1030 is used for storing the program 1032. The memory 1030 may comprise high-speed RAM memory and may also comprise non-volatile memory, for example at least one disk memory. Specifically, the program 1032 may cause the image collecting device 1000 to perform the following steps:
obtaining multiple groups of flash parameters corresponding to multiple subject areas of multiple depth ranges in a scene to be shot;
in response to a shooting instruction, flashing the scene to be shot multiple times with the multiple groups of flash parameters, and shooting the scene to be shot multiple times to obtain multiple initial images, wherein each shot of the multiple shots corresponds to one flash of the multiple flashes;
synthesizing the multiple initial images.
For the specific implementation of each step in the program 1032, refer to the descriptions of the corresponding steps and units in the above embodiments; they are not repeated here. Those skilled in the art will clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments; they are not repeated here.
Those of ordinary skill in the art will recognize that the units and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical scheme. Skilled practitioners may implement the described functions differently for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium. Based on this understanding, the part of the technical scheme of the present application that in essence contributes over the prior art, or a part of the technical scheme, can be embodied in the form of a software product; the computer software product is stored in a storage medium and comprises instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above implementations are intended only to illustrate the present application and not to limit it; those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present application, so all equivalent technical schemes also belong to the scope of the present application, and the scope of patent protection of the present application shall be defined by the claims.

Claims (52)

1. A flash control method, characterized in that it comprises:
obtaining distribution information of at least one subject in a scene to be shot;
determining, according to the distribution information, multiple depth ranges of the scene to be shot relative to a shooting reference position, and multiple subject areas corresponding to the multiple depth ranges;
determining multiple groups of flash parameters corresponding to the multiple subject areas.
2. The method of claim 1, characterized in that the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position.
3. The method of claim 2, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image region of the at least one subject on that imaging surface.
4. The method of claim 2, characterized in that obtaining the distribution information comprises:
obtaining the distribution information according to a depth map of the scene to be shot relative to the shooting reference position.
5. The method of claim 4, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the region of the at least one subject on the depth map.
6. The method of claim 1, characterized in that obtaining the distribution information comprises:
obtaining the distribution information by information collection.
7. The method of claim 1, characterized in that obtaining the distribution information comprises:
obtaining the distribution information from at least one external device.
8. The method of claim 3 or 5, characterized in that determining the multiple subject areas according to the distribution information comprises:
determining the multiple depth ranges according to the depth information;
determining, according to the two-dimensional distribution information, at least one subject area, among the multiple subject areas, corresponding to each depth range of the multiple depth ranges.
9. The method of claim 8, characterized in that determining the multiple depth ranges according to the depth information comprises:
determining a depth distribution of the at least one subject according to the depth information;
determining the multiple depth ranges according to the depth distribution.
10. The method of claim 8, characterized in that determining, according to the two-dimensional distribution information, the at least one subject area corresponding to each depth range comprises:
determining, according to the two-dimensional distribution information, a lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
determining the at least one subject area according to the lateral distribution.
11. The method of claim 1, characterized in that each group of flash parameters in the multiple groups of flash parameters comprises:
a flash distance parameter and a flash direction.
12. The method of claim 11, characterized in that each group of flash parameters further comprises:
a flash angle of coverage.
13. The method of claim 1, characterized in that the method further comprises:
in response to a shooting instruction, flashing multiple times with the multiple groups of flash parameters.
14. A flash control device, characterized in that it comprises:
a distribution information acquisition submodule, for obtaining distribution information of at least one subject in a scene to be shot;
a subject area determination submodule, for determining, according to the distribution information, multiple depth ranges of the scene to be shot relative to a shooting reference position, and multiple subject areas corresponding to the multiple depth ranges;
a parameter determination submodule, for determining multiple groups of flash parameters corresponding to the multiple subject areas.
15. The device of claim 14, characterized in that the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position;
and the distribution information acquisition submodule comprises:
a depth information acquisition unit, for obtaining the depth information.
16. The device of claim 15, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image region of the at least one subject on that imaging surface;
and the distribution information acquisition submodule further comprises:
a two-dimensional distribution information acquisition unit, for obtaining the two-dimensional distribution information.
17. The device of claim 14, characterized in that the distribution information acquisition submodule comprises:
a depth map processing unit, for obtaining the distribution information according to a depth map of the scene to be shot relative to the shooting reference position.
18. The device of claim 17, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the region of the at least one subject on the depth map;
and the depth map processing unit is further used for obtaining the two-dimensional distribution information according to the depth map.
19. The device of claim 14, characterized in that the distribution information acquisition submodule comprises:
an information collection unit, for obtaining the distribution information by information collection.
20. The device of claim 14, characterized in that the distribution information acquisition submodule comprises:
a communication unit, for obtaining the distribution information from at least one external device.
21. The device of claim 16 or 18, characterized in that the subject area determination submodule comprises:
a depth range determination unit, for determining the multiple depth ranges according to the depth information;
a subject area determination unit, for determining, according to the two-dimensional distribution information, at least one subject area, among the multiple subject areas, corresponding to each depth range of the multiple depth ranges.
22. The device of claim 21, characterized in that the depth range determination unit comprises:
a depth distribution determination subunit, for determining a depth distribution of the at least one subject according to the depth information;
a depth range determination subunit, for determining the multiple depth ranges according to the depth distribution.
23. The device of claim 21, characterized in that the subject area determination unit comprises:
a lateral distribution determination subunit, for determining, according to the two-dimensional distribution information, a lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
a subject area determination subunit, for determining the at least one subject area according to the lateral distribution.
24. The device of claim 14, characterized in that the parameter determination submodule comprises:
a flash distance parameter determination unit, for determining the flash distance parameter corresponding to each subject area of the multiple subject areas;
a flash direction determination unit, for determining the flash direction corresponding to each subject area.
25. The device of claim 24, characterized in that the parameter determination submodule further comprises:
a flash angle determination unit, for determining the flash angle of coverage corresponding to each subject area.
26. The device of claim 14, characterized in that the device further comprises:
a flash module, for flashing multiple times with the multiple groups of flash parameters in response to a shooting instruction.
27. An image collection method, characterized in that it comprises:
obtaining multiple groups of flash parameters corresponding to multiple subject areas of multiple depth ranges in a scene to be shot;
in response to a shooting instruction, flashing the scene to be shot multiple times with the multiple groups of flash parameters, and shooting the scene to be shot multiple times to obtain multiple initial images, wherein each shot of the multiple shots corresponds to one flash of the multiple flashes;
synthesizing the multiple initial images.
28. The method of claim 27, characterized in that obtaining the multiple groups of flash parameters comprises:
obtaining the multiple groups of flash parameters from at least one external device.
29. The method of claim 27, characterized in that obtaining the multiple groups of flash parameters comprises:
obtaining distribution information of at least one subject in the scene to be shot;
determining, according to the distribution information, the multiple depth ranges of the scene to be shot relative to a shooting reference position, and the multiple subject areas corresponding to the multiple depth ranges;
determining the multiple groups of flash parameters corresponding to the multiple subject areas.
30. The method of claim 29, characterized in that the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position.
31. The method of claim 30, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image region of the at least one subject on that imaging surface.
32. The method of claim 30, characterized in that obtaining the distribution information comprises:
obtaining the distribution information according to a depth map of the scene to be shot relative to the shooting reference position.
33. The method of claim 32, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the region of the at least one subject on the depth map.
34. The method of claim 31 or 33, characterized in that determining the multiple subject areas according to the distribution information comprises:
determining the multiple depth ranges according to the depth information;
determining, according to the two-dimensional distribution information, at least one subject area, among the multiple subject areas, corresponding to each depth range of the multiple depth ranges.
35. The method of claim 34, characterized in that determining the multiple depth ranges according to the depth information comprises:
determining a depth distribution of the at least one subject according to the depth information;
determining the multiple depth ranges according to the depth distribution.
36. The method of claim 34, characterized in that determining, according to the two-dimensional distribution information, the at least one subject area corresponding to each depth range comprises:
determining, according to the two-dimensional distribution information, a lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
determining the at least one subject area according to the lateral distribution.
37. The method of claim 27, characterized in that synthesizing the multiple initial images comprises:
determining, according to at least one exposure standard, at least one qualified image region of each initial image in the multiple initial images;
synthesizing the multiple initial images according to the at least one qualified image region of each initial image.
38. The method of claim 27, characterized in that synthesizing the multiple initial images comprises:
determining at least one target image subregion of each initial image in the multiple initial images according to the multiple subject areas;
synthesizing the multiple initial images according to the at least one target image subregion of each initial image.
39. The method of claim 38, characterized in that determining the at least one target image subregion of each initial image according to the multiple subject areas comprises:
determining, according to the multiple subject areas, at least one target subject of each shot in the multiple shots;
determining the at least one target image subregion of each initial image according to the at least one target subject of each shot.
40. An image collecting device, characterized in that it comprises:
a parameter acquisition module, for obtaining multiple groups of flash parameters corresponding to multiple subject areas of multiple depth ranges in a scene to be shot;
a flash module, for flashing the scene to be shot multiple times with the multiple groups of flash parameters in response to a shooting instruction;
an image capture module, for shooting the scene to be shot multiple times in response to the shooting instruction to obtain multiple initial images, wherein each shot of the multiple shots corresponds to one flash of the multiple flashes;
a processing module, for synthesizing the multiple initial images.
41. The device of claim 40, characterized in that the parameter acquisition module comprises:
a communication submodule, for obtaining the multiple groups of flash parameters from at least one external device.
42. The device of claim 40, characterized in that the parameter acquisition module comprises:
a distribution information acquisition submodule, for obtaining distribution information of at least one subject in the scene to be shot;
a subject area determination submodule, for determining, according to the distribution information, the multiple depth ranges of the scene to be shot relative to a shooting reference position, and the multiple subject areas corresponding to the multiple depth ranges;
a parameter determination submodule, for determining the multiple groups of flash parameters corresponding to the multiple subject areas.
43. The device of claim 42, characterized in that the distribution information comprises:
depth information of the at least one subject relative to the shooting reference position;
and the distribution information acquisition submodule comprises:
a depth information acquisition unit, for obtaining the depth information.
44. The device of claim 43, characterized in that the distribution information further comprises:
two-dimensional distribution information, on a shooting imaging surface, of the image region of the at least one subject on that imaging surface;
and the distribution information acquisition submodule further comprises:
a two-dimensional distribution information acquisition unit, for obtaining the two-dimensional distribution information.
45. The device of claim 42, characterized in that the distribution information acquisition submodule comprises:
a depth map processing unit, for obtaining the distribution information according to a depth map of the scene to be shot relative to the shooting reference position.
46. The device of claim 45, characterized in that the distribution information further comprises:
two-dimensional distribution information, on the depth map, of the region of the at least one subject on the depth map;
and the depth map processing unit is further used for obtaining the two-dimensional distribution information according to the depth map.
47. The device of claim 43 or 45, characterized in that the subject area determination submodule comprises:
a depth range determination unit, for determining the multiple depth ranges according to the depth information;
a subject area determination unit, for determining, according to the two-dimensional distribution information, at least one subject area, among the multiple subject areas, corresponding to each depth range of the multiple depth ranges.
48. The device of claim 47, characterized in that the depth range determination unit comprises:
a depth distribution determination subunit, for determining a depth distribution of the at least one subject according to the depth information;
a depth range determination subunit, for determining the multiple depth ranges according to the depth distribution.
49. The device of claim 47, characterized in that the subject area determination unit comprises:
a lateral distribution determination subunit, for determining, according to the two-dimensional distribution information, a lateral distribution, perpendicular to the depth direction, of the at least one subject within each depth range;
a subject area determination subunit, for determining the at least one subject area according to the lateral distribution.
50. The device of claim 40, characterized in that the processing module comprises:
a first determination submodule, for determining, according to at least one exposure standard, at least one qualified image region of each initial image in the multiple initial images;
a first synthesis submodule, for synthesizing the multiple initial images according to the at least one qualified image region of each initial image.
51. The device of claim 40, characterized in that the processing module comprises:
a second determination submodule, for determining at least one target image subregion of each initial image in the multiple initial images according to the multiple subject areas;
a second synthesis submodule, for synthesizing the multiple initial images according to the at least one target image subregion of each initial image.
52. The device of claim 51, characterized in that the second determination submodule comprises:
a target determination unit, for determining, according to the multiple subject areas, at least one target subject of each shot in the multiple shots;
a subregion determination unit, for determining the at least one target image subregion of each initial image according to the at least one target subject of each shot.
CN201410361259.9A 2014-07-25 2014-07-25 Flash control method and control device, image collection method and collection device Active CN104113702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410361259.9A CN104113702B (en) Flash control method and control device, image collection method and collection device

Publications (2)

Publication Number Publication Date
CN104113702A true CN104113702A (en) 2014-10-22
CN104113702B CN104113702B (en) 2018-09-04

Family

ID=51710325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410361259.9A Active CN104113702B (en) Flash control method and control device, image collection method and collection device

Country Status (1)

Country Link
CN (1) CN104113702B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6654062B1 (en) * 1997-11-13 2003-11-25 Casio Computer Co., Ltd. Electronic camera
CN101051168A (en) * 2006-04-03 2007-10-10 三星Techwin株式会社 Photographing apparatus and method
CN102065223A (en) * 2009-11-11 2011-05-18 卡西欧计算机株式会社 Image capture apparatus and image capturing method
CN103118163A (en) * 2011-11-16 2013-05-22 中兴通讯股份有限公司 Method and terminal of controlling photographing flash light
CN103685875A (en) * 2012-08-28 2014-03-26 株式会社理光 Imaging apparatus

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107623819B (en) * 2015-04-30 2019-08-09 Oppo广东移动通信有限公司 A kind of method taken pictures and mobile terminal and related media production
CN107623819A (en) * 2015-04-30 2018-01-23 广东欧珀移动通信有限公司 A kind of method taken pictures and mobile terminal and related media production
CN104866191B (en) * 2015-04-30 2017-09-19 广东欧珀移动通信有限公司 A kind of method taken pictures and mobile terminal
CN104866191A (en) * 2015-04-30 2015-08-26 广东欧珀移动通信有限公司 Photography method and mobile terminal
CN106303267A (en) * 2015-05-25 2017-01-04 北京智谷睿拓技术服务有限公司 Image capture device and method
CN106303267B (en) * 2015-05-25 2019-06-04 北京智谷睿拓技术服务有限公司 Image capture device and method
CN105049704A (en) * 2015-06-17 2015-11-11 青岛海信移动通信技术股份有限公司 Shooting method and equipment
WO2018205229A1 (en) * 2017-05-11 2018-11-15 深圳市大疆创新科技有限公司 Supplemental light control device, system, method, and mobile device
CN109218623A (en) * 2018-11-05 2019-01-15 浙江大华技术股份有限公司 A kind of light compensation method and device, computer installation and readable storage medium storing program for executing
CN111246119A (en) * 2018-11-29 2020-06-05 杭州海康威视数字技术股份有限公司 Camera and light supplement control method and device
CN111866373A (en) * 2020-06-19 2020-10-30 北京小米移动软件有限公司 Method, device and medium for displaying shooting preview image
CN111866373B (en) * 2020-06-19 2021-12-28 北京小米移动软件有限公司 Method, device and medium for displaying shooting preview image
US11617023B2 (en) 2020-06-19 2023-03-28 Beijing Xiaomi Mobile Software Co., Ltd. Method for brightness enhancement of preview image, apparatus, and medium
CN112312113A (en) * 2020-10-29 2021-02-02 贝壳技术有限公司 Method, device and system for generating three-dimensional model
CN112312113B (en) * 2020-10-29 2022-07-15 贝壳技术有限公司 Method, device and system for generating three-dimensional model
WO2022095012A1 (en) * 2020-11-09 2022-05-12 深圳市大疆创新科技有限公司 Shutter adjustment method and device, photography device, and movable platform

Also Published As

Publication number Publication date
CN104113702B (en) 2018-09-04

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant