Detailed description of embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
First embodiment
This embodiment discloses a light metering method applied to a mobile terminal that includes a first camera and a second camera. Referring to Fig. 1, the method includes:
Step 101: while the first camera captures a preview image and outputs it to the photographing preview interface, acquire a target metering region of the preview image.
When the camera is turned on, the mobile terminal enters the photographing preview interface, and the camera continuously captures the external scene to obtain a preview image, which is output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface thus includes: the first camera capturing the external scene, processing the captured data into a preview image, conveying the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
The target metering region is a part of the display area of the preview interface. It may be acquired automatically by the mobile terminal, or acquired according to a user's selection instruction. The subsequent metering processing is performed based on this target metering region.
Specifically, in a preferred implementation, the step of acquiring the target metering region of the preview image includes: detecting a user's tap operation in the photographing preview interface; when a tap operation is detected, acquiring the position of the tap; and determining, within the preview image, the region within a preset first range around the tap position as the target metering region.
This corresponds to the case in which, while the first camera outputs the preview image to the photographing preview interface, the user taps the interface to select a target area. When the mobile terminal detects the tap, it obtains the target metering region selected by the user: the region is derived automatically from the position where the tap occurred, so that the target metering region matches the user's actual intent.
Specifically, in another preferred implementation, the step of acquiring the target metering region of the preview image includes: determining the region within a preset second range of the preview image as the target metering region.
In this case the target metering region is locked directly in the preview image. It may be a default region preset in the system, and the preset second range may be the central area of the display screen, which suits common shooting scenarios; automatic selection by the system keeps the metering process simple to use.
Specifically, in yet another preferred implementation, the step of acquiring the target metering region of the preview image includes: performing face detection on the preview image; and when a face is detected, determining the region where the face is located as the target metering region.
This addresses the metering needs of portrait shooting. Taking the face region of the preview image directly as the target metering region makes the face the automatic and efficient source of the main metering parameters whenever the subject contains a person. The subsequent comparison and matching are performed according to the parameters of the face region, priority is given to the person being photographed, and the final exposure of the face is enhanced, maximally ensuring a good metering result, and hence a good final image, for the face region, thereby meeting the user's needs.
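A minimal sketch of the region-selection logic for the last two implementations, assuming face boxes come from some external detector (for example a Haar cascade) and assuming an arbitrary default center fraction; none of these names or values come from the disclosure:

```python
def choose_metering_region(faces, img_w, img_h, default_frac=0.25):
    """Pick the target metering region: the largest detected face box if
    any face was found, otherwise a centered default region whose sides
    are default_frac of the image (an assumed 'preset second range').

    faces is a list of (x, y, w, h) boxes from an external face detector.
    Returns (left, top, right, bottom).
    """
    if faces:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return (x, y, x + w, y + h)
    mw, mh = int(img_w * default_frac), int(img_h * default_frac)
    cx, cy = img_w // 2, img_h // 2
    return (cx - mw // 2, cy - mh // 2, cx + mw // 2, cy + mh // 2)
```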
Step 102: acquire the target depth-of-field range, detected by the second camera, of the target metering region.
This step is implemented through the cooperation of the second camera and the first camera. By combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, triangulation between the two cameras and the photographed object yields depth-of-field information for the subjects in the preview image, from which the target depth-of-field range of the target metering region can be obtained.
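The "triangulation between the two cameras and the photographed object" corresponds to the classic two-view relation depth = focal length × baseline / disparity. A sketch with illustrative numbers (the disclosure does not specify any camera parameters):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Two-view triangulation: depth = f * B / d.

    focal_px: focal length in pixels; baseline_m: separation of the two
    cameras in meters; disparity_px: horizontal pixel shift of the same
    scene point between the two camera images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For instance, a 1000 px focal length, 2 cm baseline, and 10 px disparity place the point 2 m away.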
Step 103: adjust the metering weights of the preview image based on the target depth-of-field range.
Because the subjects shown in the preview image differ in spatial position, their depth-of-field values differ as well. The target depth-of-field range corresponds to the subject shown in the target metering region, which is only part of the subjects shown in the whole preview image. Adjusting the metering weights of the whole preview image based on the depth-of-field range of this partial region means that the depth information of a local area determines the metering strategy of the entire image: subjects at different spatial positions in the preview image are metered with different weights, which ensures metering accuracy and improves the metering result.
Step 104: perform metering on the preview image based on the adjusted metering weights.
The metering weights of the preview image having been adjusted based on the target depth-of-field range, the final metering operation on the preview image is performed according to those weights.
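One way to picture the weight adjustment of Step 103 is a per-pixel weight map driven by the target depth-of-field range; the high/low weight values here are illustrative assumptions only:

```python
def metering_weight_map(depth_map, target_range, hi=0.8, lo=0.2):
    """Per-pixel metering weights: pixels whose depth lies inside the
    target depth-of-field range get a high weight, all others a low one.

    depth_map is a 2-D list of per-pixel depths; hi and lo are assumed
    values, not taken from the disclosure.
    """
    dmin, dmax = target_range
    return [[hi if dmin <= d <= dmax else lo for d in row]
            for row in depth_map]
```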
In the light metering method of this embodiment of the invention, while the first camera captures a preview image and outputs it to the photographing preview interface, the target metering region of the preview image is acquired, the target depth-of-field range of that region as detected by the second camera is obtained, the metering weights of the preview image are adjusted based on that range, and metering is finally performed on the preview image with the adjusted weights. Through the cooperation of the two cameras, this process obtains the preview image and the target depth-of-field range of its target metering region, and adjusts the metering weights of the whole preview image based on the depth-of-field range of that region, so that the depth information of a local area determines the metering strategy of the entire image. Subjects at different spatial positions in the preview image are metered with different weights, ensuring metering accuracy, improving the metering result, and yielding metering data that better matches the actual imaging needs, so that subsequent exposure is reasonable and a good photograph is obtained.
Second embodiment
This embodiment discloses a light metering method applied to a mobile terminal that includes a first camera and a second camera. Referring to Fig. 2, the method includes:
Step 201: while the first camera captures a preview image and outputs it to the photographing preview interface, acquire a target metering region of the preview image.
When the camera is turned on, the mobile terminal enters the photographing preview interface, and the camera continuously captures the external scene to obtain a preview image, which is output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface thus includes: the first camera capturing the external scene, processing the captured data into a preview image, conveying the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
The target metering region is a part of the display area of the photographing preview interface. It may be acquired automatically by the mobile terminal, or acquired according to a user's selection instruction. The subsequent metering processing is performed based on this target metering region.
Step 202: acquire the target depth-of-field range, detected by the second camera, of the target metering region.
This step is implemented through the cooperation of the second camera and the first camera. By combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, triangulation between the two cameras and the photographed object yields depth-of-field information for the subjects in the preview image, from which the target depth-of-field range of the target metering region can be obtained.
Step 203: control the second camera to detect the depth-of-field ranges of all pixels in the edge region of the preview image, the edge region being all of the image area other than the target metering region.
As above, this step is implemented through the cooperation of the second camera and the first camera: combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, triangulation between the two cameras and the photographed object yields depth-of-field information for the subjects in the preview image, from which the depth-of-field range of each pixel outside the target metering region can be obtained.
Step 204: extract, from the edge region, at least one object edge region whose depth-of-field range is the target depth-of-field range.
In this step the edge region of the preview image is matched against the target depth-of-field range of the target metering region, and the object edge regions whose depth of field falls within that range are extracted: every pixel of the at least one extracted object edge region has a depth value within the target depth-of-field range.
Step 205: determine the target metering region together with the at least one object edge region as the center metering region.
After the object edge regions whose depth of field matches the target depth-of-field range have been extracted from the edge region, the target metering region and the at least one object edge region together become the center metering region: the set of pixels of the preview image that share the same (target) depth-of-field range.
In other words, the target depth-of-field range of the selected target metering region serves as the comparison condition; the regions of the displayed preview image whose pixel depth-of-field ranges match it are identified, and the display area consistent with the target depth-of-field range is the center metering region.
For example, if a kitten and a puppy in the preview image lie at the same depth of field and the user taps the kitten, both the kitten and the puppy become part of the center metering region and are metered with the larger weight. By selecting a single region, the image areas sharing its depth-of-field range are determined automatically, completing the determination and division of the center metering region and making the metering process more accurate and intelligent.
Step 206: determine all image areas of the preview image outside the center metering region as the auxiliary metering region.
Accordingly, the auxiliary metering region is likewise a part of the preview image: once the center metering region is determined, the remaining image area of the preview image is the auxiliary metering region.
Step 207: adjust the metering weights of the center metering region and the auxiliary metering region.
Specifically, the metering weight of the auxiliary metering region is set lower than that of the center metering region.
The metering weights range from 0 to 100%. Assigning different weights to the different regions obtained by the division gives the metering of the preview image its emphasis, ensuring that important areas, such as a face or the picture at the focus point, receive metering data that is reasonable, properly weighted, and better matched to the actual imaging needs, so that subsequent exposure is reasonable and a good photograph is obtained.
Step 208: perform metering on the preview image based on the adjusted metering weights.
The metering weights of the preview image having been adjusted based on the target depth-of-field range, the metering operation on the center metering region and the auxiliary metering region of the preview image is performed according to those weights.
Specifically, metering the preview image based on the adjusted weights means performing weighted metering on the luminance information of the pixels in the different regions: the luminance values of the pixels in each determined region are collected, and the average luminances of the different regions are combined in a weighted calculation using each region's metering weight, yielding the metering result. This enables subsequent luminance compensation and improves metering accuracy.
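The weighted calculation just described can be sketched directly: average the luminance of the pixels collected in each region, then combine the region averages with each region's percent weight. Input values are illustrative:

```python
def weighted_metering(region_pixels, region_weights_pct):
    """Weighted metering over regions: compute each region's mean pixel
    luminance, then sum the means weighted by the region's metering
    weight (in percent; the weights sum to 100).
    """
    means = [sum(px) / len(px) for px in region_pixels]
    assert sum(region_weights_pct) == 100
    return sum(m * w for m, w in zip(means, region_weights_pct)) / 100.0
```

A bright center region and a dark auxiliary region weighted 80/20 thus produce a metering value dominated by the center.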
Further, different ways of adjusting the weights of the center metering region and the auxiliary metering region lead to correspondingly different implementations of metering the preview image based on the adjusted weights.
In one case, the step of adjusting the metering weights of the center metering region and the auxiliary metering region includes: setting the metering weight of the center metering region to 100% and the metering weight of the auxiliary metering region to 0.
The step of metering the preview image based on the adjusted weights then includes: metering the center metering region of the preview image according to its 100% metering weight.
In this case metering is performed entirely on the center metering region: only the areas of the preview image whose depth of field matches the target depth-of-field range of the target metering region are metered, and display areas at other depths are ignored, so that metering concentrates on the center metering region, which meets the metering needs of particular shooting situations.
In another case, the step of adjusting the metering weights of the center metering region and the auxiliary metering region includes: setting the metering weight of the center metering region to a% and the metering weight of the auxiliary metering region to b%.
The step of metering the preview image based on the adjusted weights then includes: metering the center metering region and the auxiliary metering region according to the weight a% of the center metering region and the weight b% of the auxiliary metering region.
Here 50 < a < 100 and a + b = 100; that is, the metering weights span 0 to 100%, the image area of the preview image is divided into two parts whose weights sum to 100%, and the weight of the center metering region exceeds that of the auxiliary metering region. Preferably, a is 80 and b is 20.
This realizes a differentiated, emphasized division of the preview image and performs metering according to the different weights, so that the different areas of the displayed picture are metered more appropriately, metering accuracy improves, and metering data better matched to the actual imaging needs is obtained, enabling reasonable subsequent exposure and a good photograph.
In the light metering method of this embodiment of the invention, while the first camera captures a preview image and outputs it to the photographing preview interface, the target metering region of the preview image is acquired; the target depth-of-field range of that region as detected by the second camera, together with the depth-of-field ranges of all pixels in the edge region, determines the center metering region, the auxiliary metering region, and their metering weights; and metering is finally performed on the preview image with the adjusted weights. Through the cooperation of the two cameras, this process obtains the preview image and the target depth-of-field range of its target metering region, and adjusts the metering weights of the whole preview image based on that range, so that the depth information of a local area determines the metering strategy of the entire image. Subjects at different spatial positions in the preview image are metered with different weights, ensuring metering accuracy and improving the metering result.
Third embodiment
This embodiment discloses a light metering method applied to a mobile terminal that includes a first camera and a second camera. Referring to Fig. 3, the method includes:
Step 301: while the first camera captures a preview image and outputs it to the photographing preview interface, acquire a target metering region of the preview image.
When the camera is turned on, the mobile terminal enters the photographing preview interface, and the camera continuously captures the external scene to obtain a preview image, which is output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface thus includes: the first camera capturing the external scene, processing the captured data into a preview image, conveying the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
Step 302: acquire the target depth-of-field range, detected by the second camera, of the target metering region.
This step is implemented through the cooperation of the second camera and the first camera. By combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, triangulation between the two cameras and the photographed object yields depth-of-field information for the subjects in the preview image, from which the target depth-of-field range of the target metering region can be obtained.
Step 303: based on the target depth-of-field range, calculate the average depth of field c of all pixels in the target metering region.
The target metering region corresponding to the target depth-of-field range contains many pixels, each associated with the depth value of the part of the preview image it displays. The average depth of field is obtained as the arithmetic mean of these depth values: they are summed and divided by their number.
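The arithmetic mean of Step 303 is straightforward; only the function name is an assumption:

```python
def average_depth(depth_values):
    """Average depth of field c of the target metering region: the
    arithmetic mean of the per-pixel depth values inside the region.
    """
    return sum(depth_values) / len(depth_values)
```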
Step 304: control the second camera to detect the depth-of-field ranges of all pixels in the edge region of the preview image, the edge region being all of the image area other than the target metering region.
As above, this step is implemented through the cooperation of the second camera and the first camera: combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, triangulation between the two cameras and the photographed object yields depth-of-field information for the subjects in the preview image, from which the depth-of-field range of each pixel outside the target metering region can be obtained.
Step 305: in the target metering region and the edge region, successively extract the image regions whose depth of field falls within [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm], where 0 ≤ w1 < w2 < w3 < ... < wn < wm.
In this step, weighted metering is performed according to the depth information of each pixel. The average depth of field of all pixels in the target metering region is c; for example, pixels whose depth of field equals c may be given the highest metering weight, pixels whose depth of field is greater than c-5 and less than c+5 the next-highest weight, and so on: the larger the gap between a pixel's depth value and c, the smaller the metering weight of the region that pixel belongs to.
Step 306: adjust the metering weights of the image regions whose depth of field falls within [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm] to m1%, m2%, m3%, ..., mn%, respectively, where mn < ... < m3 < m2 < m1 and m1% + m2% + m3% + ... + mn% = 100%.
Specifically, w1 may be 0: when the difference between a pixel's depth value and the average depth of field is 0, the metering weight of the corresponding area is set to the maximum. When w1 is 0 and w2 is 5, the image region whose depth of field falls within [c-5, c) ∪ (c, c+5] is the region whose pixel depth values differ from the average depth of field by an amount in the first range, i.e. [-5, 0) ∪ (0, 5]; this region is given the second-highest metering weight. Similarly, the regions of pixels whose depth values differ from the average depth of field by an amount in the second range (for example, -10 to -5 and 5 to 10) are given a smaller metering weight, every value in the second range having a larger absolute value than every value in the first range, and so on with decreasing weights, so that metering follows the main subject and its accuracy improves.
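The band-to-weight assignment of Steps 305 and 306 can be sketched with the assumed edges w1 = 0 and w2 = 5 from the example above; the three concrete weight values are illustrative, since the disclosure only requires the weights to decrease as the gap grows and to sum to 100% across bands:

```python
def band_weight(pixel_depth, c, band_weights=(50, 30, 20)):
    """Metering weight by depth band around the average depth c:
    gap == 0 gets the highest weight, a gap in (0, 5] the next-highest,
    anything beyond the lowest. Band edges and weights are assumptions.
    """
    gap = abs(pixel_depth - c)
    if gap == 0:
        return band_weights[0]
    if gap <= 5:
        return band_weights[1]
    return band_weights[2]
```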
For example, suppose the preview image contains a kitten and grass and the user taps the kitten. The average depth of field c of all pixels in the kitten's region is calculated, and the depth-of-field ranges within the kitten's image area and the grass's image area are matched: the pixel regions of the preview image whose depth of field equals c are given the highest metering weight, pixel regions whose depth differs from the average by more than 0 and at most 5 the next-highest weight, pixel regions differing by more than 5 and at most 10 a smaller weight, and so on. A single region selection thus divides the preview image automatically according to that region's average depth of field, making the metering process more accurate and intelligent.
Step 307: perform metering on each of the image regions whose depth of field falls within [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm], according to the corresponding metering weights m1%, m2%, m3%, ..., mn%.
Based on the target depth-of-field range, the average depth of field of all pixels in the target metering region is calculated, the different image regions of the target metering region and the edge region are extracted, different metering weights are assigned to the different regions, and metering is finally performed on each region according to its weight. This realizes a differentiated, emphasized division of the preview image and meters according to the different weights, so that the different areas of the displayed picture are metered more appropriately, metering accuracy improves, and metering data better matched to the actual imaging needs is obtained, enabling reasonable subsequent exposure and a good photograph.
In the light metering method of this embodiment of the invention, the target metering region of the preview image and its target depth-of-field range are acquired, the average depth of field of all pixels in the target metering region is calculated, the different image regions of the target metering region and the edge region are extracted and assigned different metering weights, and metering is finally performed on the preview image with the adjusted weights. Through the cooperation of the two cameras, this process obtains the preview image and the target depth-of-field range of its target metering region, and adjusts the metering weights of the whole preview image based on that range, so that the depth information of a local area determines the metering strategy of the entire image. Subjects at different spatial positions in the preview image are metered with different weights, ensuring metering accuracy and improving the metering result.
Fourth embodiment
This embodiment discloses a light metering method applied to a mobile terminal that includes a first camera and a second camera. Referring to Fig. 4, the method includes:
Step 401: while the first camera captures a preview image and outputs it to the photographing preview interface, acquire a target metering region of the preview image.
When the camera is turned on, the mobile terminal enters the photographing preview interface, and the camera continuously captures the external scene to obtain a preview image, which is output to and displayed in the photographing preview interface. The process in which the first camera captures the preview image and outputs it to the photographing preview interface thus includes: the first camera capturing the external scene, processing the captured data into a preview image, conveying the preview image to the display screen, and displaying the preview image captured by the first camera in the photographing preview interface.
The target metering region is a part of the display area of the photographing preview interface. It may be acquired automatically by the mobile terminal, or acquired according to a user's selection instruction. The subsequent metering processing is performed based on this target metering region.
Step 402: acquire the target depth-of-field range, detected by the second camera, of the target metering region.
This step is implemented through the cooperation of the second camera and the first camera. By combining the image data captured by the second camera with the image data obtained while the first camera captures the preview image, triangulation between the two cameras and the photographed object yields depth-of-field information for the subjects in the preview image, from which the target depth-of-field range of the target metering region can be obtained.
Step 403: based on the target depth-of-field range, calculate the average depth of field d of all pixels in the target metering region.
The target metering region corresponding to the target depth-of-field range contains many pixels, each associated with the depth value of the part of the preview image it displays. The average depth of field is obtained as the arithmetic mean of these depth values: they are summed and divided by their number.
Step 404: determine the target value range within which the average depth of field d falls.
After the average depth of field of all pixels in the target metering region has been calculated, the target value range in which d falls is determined, and the subsequent metering weight is determined according to that range.
Step 405: determine the metering weight s% corresponding to the target value range according to a preset correspondence between value ranges and metering weights.
The metering weight corresponds to the value range within which the average depth of field d falls: once the target value range of d has been determined, the weight corresponding to that range is obtained from the correspondence between value ranges and metering weights.
The larger the value of d, the larger s%: the average depth of field and the metering weight are positively correlated, so that when the depth values are large, the subsequent exposure compensation of the corresponding areas is increased and the photographing result improves.
Step 406: determine the metering weight of the edge region of the preview image as t%.
The edge region is all of the image area other than the target metering region, and s + t = 100: the image area of the preview image is divided into two parts whose metering weights sum to 100%. Preferably, s > t.
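Steps 404 through 406 amount to a table lookup. In this sketch the middle table rows mirror the spans used in this embodiment's worked example (4 ≤ d < 6 yields 70, 6 ≤ d < 8 yields 80); the first row and the out-of-table cap are assumed extensions chosen only to keep weights increasing with d and s > t:

```python
def weights_from_mean_depth(d, table=((0, 4, 60), (4, 6, 70), (6, 8, 80))):
    """Find the value range (lo, hi) containing the average depth d and
    return (s, t): the center-region weight s% and edge-region weight
    t% = (100 - s)%. Table rows are (lo, hi, s).
    """
    for lo, hi, s in table:
        if lo <= d < hi:
            return s, 100 - s
    return 90, 10  # assumed cap for d beyond the table, keeping s > t
```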
For example, suppose the preview image contains a kitten and grass, and the user taps the kitten. The average depth of field c of all pixels in the kitten's region is then calculated. If c is 5, it falls in the value range 4 ≤ c < 6, so the light metering weight of the kitten region is determined as the 70% corresponding to the range 4 ≤ c < 6, and the light metering weight of the grass region is 30%. If c is 7, it falls in the value range 6 ≤ c < 8, so the light metering weight of the kitten region is the 80% corresponding to the range 6 ≤ c < 8, and the light metering weight of the grass region is 20%; and so on. By selecting a region, the light metering weights of the selected region and of the other regions can be determined from the value range in which the region's average depth of field lies, improving the reasonableness and intelligence of light metering.
Step 407: According to the light metering weight s% of the center photometry region and the light metering weight t% of the fringe region, perform light metering on the center photometry region and the fringe region.
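The range-to-weight lookup of steps 404 through 406 can be sketched as below. The range boundaries and weights mirror the kitten/grass example above; the fallback weight for depths outside every preset range is an invented assumption.

```python
# Hedged sketch of steps 404-406: look up the light metering weight s%
# of the target region from the value range containing the average depth
# d; the fringe region receives the remainder t% = 100 - s.

DEPTH_RANGES = [  # (lower inclusive, upper exclusive, target weight s)
    (4, 6, 70),
    (6, 8, 80),
]

def metering_weights(d):
    for low, high, s in DEPTH_RANGES:
        if low <= d < high:
            return s, 100 - s  # (target region s%, fringe region t%)
    return 50, 50  # assumed fallback when d lies outside every preset range

print(metering_weights(5))  # -> (70, 30)
print(metering_weights(7))  # -> (80, 20)
```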
The center photometry region is the target photometry region. After the target field depth of the target photometry region is obtained, the average depth of field d of all pixels in the target photometry region is calculated, the target value range in which d lies is determined, and the light metering weight s% corresponding to the target value range and the light metering weight t% of the fringe region are determined; the light metering operation can then be performed on the center photometry region and the auxiliary photometry region of the preview image according to these light metering weights.
In the light measuring method of this embodiment of the present invention, the target photometry region of the preview image and the target field depth of that region are obtained, the average depth of field of all pixels in the target photometry region is calculated, and different light metering weights are then obtained for the target photometry region and the fringe region, finally performing light metering on the preview image based on the adjusted light metering weights. Through the cooperation between the two cameras, this process obtains the preview image and the target field depth of the target photometry region in the image, and adjusts the light metering weights of the overall preview image based on the field depth of the target photometry region. The light metering strategy of the entire image is determined from the field depth information of a local region, so that shooting objects at different spatial positions in the preview image undergo differentiated light metering with differentiated light metering weights, ensuring the accuracy of light metering and improving the light metering result.
Fifth embodiment
This embodiment of the present invention discloses a mobile terminal that can realize the details of the light measuring method in the first to fourth embodiments and achieve the same effect. Referring to Fig. 5 and Fig. 6, in addition to the first camera and the second camera, the mobile terminal includes: a first acquisition module 501, a second acquisition module 502, an adjusting module 503 and a light-metering module 504.
The first acquisition module 501 is configured to obtain the target photometry region of the preview image during the process in which the first camera captures the preview image and outputs it to the photographing preview interface.
The second acquisition module 502 is configured to obtain the target field depth, detected by the second camera, of the target photometry region obtained by the first acquisition module 501.
The adjusting module 503 is configured to adjust the light metering weight of the preview image based on the target field depth obtained by the second acquisition module 502.
The light-metering module 504 is configured to perform light metering on the preview image based on the light metering weight adjusted by the adjusting module 503.
The adjusting module 503 includes: a first control submodule 5031, a first extracting submodule 5032, a first determination submodule 5033, a second determination submodule 5034 and a first adjustment submodule 5035.
The first control submodule 5031 is configured to control the second camera to detect the field depth of all pixels in the fringe region of the preview image.
The first extracting submodule 5032 is configured to extract, from the fringe region, at least one object edge region whose field depth, as detected by the second camera under the control of the first control submodule 5031, is the target field depth.
The first determination submodule 5033 is configured to determine the target photometry region and the at least one object edge region extracted by the first extracting submodule 5032 as the center photometry region.
The second determination submodule 5034 is configured to determine all image regions in the preview image other than the center photometry region determined by the first determination submodule 5033 as the auxiliary photometry region.
The first adjustment submodule 5035 is configured to adjust the light metering weights of the center photometry region determined by the first determination submodule 5033 and the auxiliary photometry region determined by the second determination submodule 5034; the fringe region is all of the image area other than the target photometry region.
The first adjustment submodule 5035 includes: a first adjustment unit 50351 and a second adjustment unit 50352.
The first adjustment unit 50351 is configured to adjust the light metering weight of the center photometry region to 100%.
The second adjustment unit 50352 is configured to adjust the light metering weight of the auxiliary photometry region to 0.
The light-metering module 504 then includes a first light-metering submodule 5041.
The first light-metering submodule 5041 is configured to perform light metering on the center photometry region of the preview image according to the 100% light metering weight of the center photometry region.
Alternatively, the first adjustment submodule 5035 includes: a third adjustment unit 50353 and a fourth adjustment unit 50354.
The third adjustment unit 50353 is configured to adjust the light metering weight of the center photometry region to a%.
The fourth adjustment unit 50354 is configured to adjust the light metering weight of the auxiliary photometry region to b%.
The light-metering module 504 then includes a second light-metering submodule 5042.
The second light-metering submodule 5042 is configured to perform light metering on the center photometry region and the auxiliary photometry region according to the light metering weight a% of the center photometry region and the light metering weight b% of the auxiliary photometry region; wherein 50 < a < 100 and a + b = 100.
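The a%/b% weighting can be illustrated with a small sketch that blends the mean luminance of the two regions into a single metering value; the luminance figures and the exposure-target interpretation are assumptions for the example, not part of the disclosure.

```python
# Illustrative sketch of the a%/b% weighting (50 < a < 100, a + b = 100):
# the metering result is a weighted mean of the mean luminance of the
# center photometry region and of the auxiliary photometry region.

def weighted_metering(center_luma, aux_luma, a):
    assert 50 < a < 100, "the center weight must dominate (50 < a < 100)"
    b = 100 - a                       # auxiliary weight, so a + b = 100
    return (center_luma * a + aux_luma * b) / 100.0

# Center region averages 120, darker auxiliary region averages 40,
# center weight 75%: the result is pulled toward the center region.
print(weighted_metering(120, 40, 75))  # -> 100.0
```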
Alternatively, the adjusting module 503 includes: a first calculating submodule 5036, a second control submodule 5037, a second extracting submodule 5038 and a second adjustment submodule 5039.
The first calculating submodule 5036 is configured to calculate, based on the target field depth, the average depth of field c of all pixels in the target photometry region.
The second control submodule 5037 is configured to control the second camera to detect the field depth of all pixels in the fringe region of the preview image.
The second extracting submodule 5038 is configured to successively extract, from the target photometry region and the fringe region, the image regions whose field depths, as detected by the second camera under the control of the second control submodule 5037, are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm].
The second adjustment submodule 5039 is configured to respectively adjust the light metering weights of the image regions extracted by the second extracting submodule 5038, whose field depths are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm], to m1%, m2%, m3%, ..., mn%.
The light-metering module 504 then includes a third light-metering submodule 5043.
The third light-metering submodule 5043 is configured to perform light metering on each of the image regions whose field depths are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm] according to their corresponding light metering weights m1%, m2%, m3%, ..., mn%; wherein 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the fringe region is all of the image area other than the target photometry region.
Alternatively, the adjusting module 503 includes: a second calculating submodule 50310, a third determination submodule 50311 and a fourth determination submodule 50312.
The second calculating submodule 50310 is configured to calculate, based on the target field depth, the average depth of field d of all pixels in the target photometry region.
The third determination submodule 50311 is configured to determine the target value range in which the average depth of field d calculated by the second calculating submodule 50310 lies.
The fourth determination submodule 50312 is configured to determine, according to a preset correspondence between value ranges and light metering weights, the light metering weight s% corresponding to the target value range determined by the third determination submodule 50311.
The light-metering module 504 then includes: a fifth determination submodule 5044 and a fourth light-metering submodule 5045.
The fifth determination submodule 5044 is configured to determine the light metering weight of the fringe region in the preview image as t%.
The fourth light-metering submodule 5045 is configured to perform light metering on the center photometry region and the fringe region according to the light metering weight s% of the center photometry region and the light metering weight t% of the fringe region determined by the fifth determination submodule 5044; wherein the center photometry region is the target photometry region, the fringe region is all of the image area other than the target photometry region, the larger the value of the average depth of field d the larger s% is, and s + t = 100.
The first acquisition module 501 includes: a detection submodule 5011, an acquisition submodule 5012 and a sixth determination submodule 5013.
The detection submodule 5011 is configured to detect a click operation of the mobile terminal user on the photographing preview interface.
The acquisition submodule 5012 is configured to, when a click operation is detected, obtain the position of the click operation detected by the detection submodule 5011.
The sixth determination submodule 5013 is configured to determine the region of the preview image within a preset first range of the position of the click operation obtained by the acquisition submodule 5012 as the target photometry region.
Alternatively, the first acquisition module 501 includes a seventh determination submodule 5014.
The seventh determination submodule 5014 is configured to determine a region of a preset second range in the preview image as the target photometry region.
Alternatively, the first acquisition module 501 includes: a detection submodule 5015 and an eighth determination submodule 5016.
The detection submodule 5015 is configured to perform face detection on the preview image.
The eighth determination submodule 5016 is configured to, when a face is detected, determine the region where the face is located as the target photometry region.
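The three ways of picking the target photometry region described above (around a detected tap, a preset default area, or a detected face rectangle) can be sketched as below; the rectangle geometry, tap radius, and default central area are all hypothetical.

```python
# Sketch of target-region selection: tap position takes priority, then a
# detected face rectangle, otherwise a preset central area. Regions are
# (x0, y0, x1, y1) rectangles; all sizes here are invented.

def target_region(frame_size, tap=None, face=None, radius=50):
    w, h = frame_size
    if tap is not None:          # detection + acquisition submodules:
        x, y = tap               # preset first range around the tap point
        return (max(0, x - radius), max(0, y - radius),
                min(w, x + radius), min(h, y + radius))
    if face is not None:         # eighth determination submodule:
        return face              # region where the face was detected
    # seventh determination submodule: preset second range (central area)
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

print(target_region((640, 480), tap=(100, 100)))  # -> (50, 50, 150, 150)
print(target_region((640, 480)))                  # -> (160, 120, 480, 360)
```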
In the mobile terminal of this embodiment of the present invention, while the first camera captures the preview image and outputs it to the photographing preview interface, the target photometry region of the preview image is obtained, the target field depth of the target photometry region detected by the second camera is obtained, the light metering weight of the preview image is adjusted based on the target field depth, and light metering is finally performed on the preview image based on the adjusted light metering weight. Through the cooperation between the two cameras, this process obtains the preview image and the target field depth of the target photometry region in the image, and adjusts the light metering weights of the overall preview image based on the field depth of the target photometry region. The light metering strategy of the entire image is determined from the field depth information of a local region, so that shooting objects at different spatial positions in the preview image undergo differentiated light metering with differentiated light metering weights, ensuring the accuracy of light metering and improving the light metering result.
Sixth embodiment
As shown in Fig. 7, the mobile terminal 600 includes: at least one processor 601, a memory 602, at least one network interface 604, a user interface 603 and cameras, the cameras including a first camera 606 and a second camera 607. The components of the mobile terminal 600 are coupled together by a bus system 605. It can be understood that the bus system 605 is used to realize connection and communication between these components. In addition to a data bus, the bus system 605 includes a power bus, a control bus and a status signal bus. However, for the sake of clarity, the various buses are all designated as the bus system 605 in Fig. 7.
The user interface 603 may include a display, a keyboard or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad or a touch screen, etc.).
It can be understood that the memory 602 in this embodiment of the present invention may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of illustrative but non-restrictive description, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synch Link DRAM, SLDRAM) and direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 602 of the systems and methods described in this embodiment of the present invention is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 602 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 6021 and application programs 6022.
The operating system 6021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 6022 contain various application programs, such as a media player (Media Player) and a browser (Browser), for realizing various application services. A program implementing the method of this embodiment of the present invention may be contained in the application programs 6022.
In this embodiment of the present invention, by calling a program or instructions stored in the memory 602, specifically a program or instructions stored in the application programs 6022, the processor 601 is configured to: obtain the target photometry region of the preview image during the process in which the first camera captures the preview image and outputs it to the photographing preview interface; obtain the target field depth, detected by the second camera, of the target photometry region; adjust the light metering weight of the preview image based on the target field depth; and perform light metering on the preview image based on the adjusted light metering weight.
The methods disclosed in the above embodiments of the present invention may be applied to the processor 601 or realized by the processor 601. The processor 601 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can realize or perform the methods, steps and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being performed and completed by a hardware decoding processor, or performed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 602, and the processor 601 reads the information in the memory 602 and completes the steps of the above methods in combination with its hardware.
It can be understood that the embodiments described in this embodiment of the present invention can be realized by hardware, software, firmware, middleware, microcode or a combination thereof. For hardware implementation, the processing unit can be realized in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For software implementation, the techniques described in the embodiments of the present invention can be realized by modules (such as processes and functions) that perform the functions described in the embodiments of the present invention. The software code can be stored in a memory and executed by a processor. The memory can be realized within the processor or outside the processor.
Alternatively, the processor 601 is further configured to: obtain overall distance parameter information between a display object shown in the preview interface and a reference plane; select a first area in the preview interface; and determine, according to the overall distance parameter information, the distance information value of the display object in the first area.
As another embodiment, the mobile terminal includes at least two cameras, and the processor 601 is further configured to: control the second camera to detect the field depth of all pixels in the fringe region of the preview image; extract, from the fringe region, at least one object edge region whose field depth is the target field depth; determine the target photometry region and the at least one object edge region as the center photometry region; determine all image regions in the preview image other than the center photometry region as the auxiliary photometry region; and adjust the light metering weights of the center photometry region and the auxiliary photometry region; wherein the fringe region is all of the image area other than the target photometry region.
Alternatively, as another embodiment, the processor 601 is further configured to: adjust the light metering weight of the center photometry region to 100%; adjust the light metering weight of the auxiliary photometry region to 0; and perform light metering on the center photometry region of the preview image according to the 100% light metering weight of the center photometry region.
Alternatively, as another embodiment, the processor 601 is further configured to: adjust the light metering weight of the center photometry region to a%; adjust the light metering weight of the auxiliary photometry region to b%; and perform light metering on the center photometry region and the auxiliary photometry region according to the light metering weight a% of the center photometry region and the light metering weight b% of the auxiliary photometry region; wherein 50 < a < 100 and a + b = 100.
Alternatively, as another embodiment, the processor 601 is further configured to: calculate, based on the target field depth, the average depth of field c of all pixels in the target photometry region; control the second camera to detect the field depth of all pixels in the fringe region of the preview image; successively extract, from the target photometry region and the fringe region, the image regions whose field depths are [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm]; respectively adjust the light metering weights of these image regions to m1%, m2%, m3%, ..., mn%; and perform light metering on each of these image regions according to their corresponding light metering weights m1%, m2%, m3%, ..., mn%; wherein 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the fringe region is all of the image area other than the target photometry region.
Alternatively, as another embodiment, the processor 601 is further configured to: calculate, based on the target field depth, the average depth of field d of all pixels in the target photometry region; determine the target value range in which the average depth of field d lies; determine, according to a preset correspondence between value ranges and light metering weights, the light metering weight s% corresponding to the target value range; determine the light metering weight of the fringe region in the preview image as t%; and perform light metering on the center photometry region and the fringe region according to the light metering weight s% of the center photometry region and the light metering weight t% of the fringe region; wherein the center photometry region is the target photometry region, the fringe region is all of the image area other than the target photometry region, the larger the value of the average depth of field d the larger s% is, and s + t = 100.
Alternatively, as another embodiment, the processor 601 is further configured to: detect a click operation of the mobile terminal user on the photographing preview interface; when a click operation is detected, obtain the position of the click operation; and determine the region of the preview image within a preset first range of the position of the click operation as the target photometry region.
Alternatively, as another embodiment, the processor 601 is further configured to determine a region of a preset second range in the preview image as the target photometry region.
Alternatively, as another embodiment, the processor 601 is further configured to: perform face detection on the preview image; and when a face is detected, determine the region where the face is located as the target photometry region.
The mobile terminal can realize each process realized by the terminal in the foregoing embodiments; to avoid repetition, details are not repeated here.
In the mobile terminal of this embodiment of the present invention, while the first camera captures the preview image and outputs it to the photographing preview interface, the target photometry region of the preview image is obtained, the target field depth of the target photometry region detected by the second camera is obtained, the light metering weight of the preview image is adjusted based on the target field depth, and light metering is finally performed on the preview image based on the adjusted light metering weight. Through the cooperation between the two cameras, this process obtains the preview image and the target field depth of the target photometry region in the image, and adjusts the light metering weights of the overall preview image based on the field depth of the target photometry region. The light metering strategy of the entire image is determined from the field depth information of a local region, so that shooting objects at different spatial positions in the preview image undergo differentiated light metering with differentiated light metering weights, ensuring the accuracy of light metering and improving the light metering result.
Seventh embodiment
As shown in Fig. 8, the mobile terminal 700 may be a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a vehicle-mounted computer, etc.
The mobile terminal 700 in Fig. 8 includes a radio frequency (Radio Frequency, RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 760, an audio circuit 770, a WiFi (Wireless Fidelity) module 780, a power source 790 and cameras, the cameras including a first camera 751 and a second camera 752.
The input unit 730 can be used to receive numeric or character information input by the user, and to generate signal inputs related to user settings and function control of the mobile terminal 700. Specifically, in this embodiment of the present invention, the input unit 730 may include a touch panel 731. The touch panel 731, also referred to as a touch screen, can collect touch operations of the user on or near it (such as operations performed by the user on the touch panel 731 with a finger, a stylus or any other suitable object or accessory), and drive the corresponding connection apparatus according to a preset formula. Optionally, the touch panel 731 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 760, and can receive and execute commands sent by the processor 760. In addition, the touch panel 731 can be realized in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 731, the input unit 730 may also include other input devices 732, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control buttons and switch keys), a trackball, a mouse and a joystick.
The display unit 740 can be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 700. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD or an organic light-emitting diode (Organic Light-Emitting Diode, OLED), etc.
It should be noted that the touch panel 731 can cover the display panel 741 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 760 to determine the type of the touch event, and the processor 760 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of the application program interface display area and the common control display area is not limited; they can be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application program interface display area can be used to display the interfaces of application programs. Each interface can contain interface elements such as icons of at least one application program and/or widget desktop controls. The application program interface display area can also be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as a settings button, interface numbers, a scroll bar and a phone directory icon.
The processor 760 is the control center of the mobile terminal 700. It connects the various parts of the whole mobile phone using various interfaces and lines, and performs the various functions of the mobile terminal 700 and processes data by running or executing software programs and/or modules stored in a first memory 721 and calling data stored in a second memory 722, so as to monitor the mobile terminal 700 as a whole. Optionally, the processor 760 may include one or more processing units.
In this embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 721 and/or the data stored in the second memory 722, the processor 760 is configured to: obtain the target photometry region of the preview image during the process in which the first camera captures the preview image and outputs it to the photographing preview interface; obtain the target field depth, detected by the second camera, of the target photometry region; adjust the light metering weight of the preview image based on the target field depth; and perform light metering on the preview image based on the adjusted light metering weight.
Alternatively, as another embodiment, the processor 760 is further configured to: control the second camera to detect the field depth of all pixels in the fringe region of the preview image; extract, from the fringe region, at least one object edge region whose field depth is the target field depth; determine the target photometry region and the at least one object edge region as the center photometry region; determine all image regions in the preview image other than the center photometry region as the auxiliary photometry region; and adjust the light metering weights of the center photometry region and the auxiliary photometry region; wherein the fringe region is all of the image area other than the target photometry region.
Alternatively, as another embodiment, the processor 760 is further configured to: adjust the light metering weight of the center photometry region to 100%; adjust the light metering weight of the auxiliary photometry region to 0; and perform light metering on the center photometry region of the preview image according to the 100% light metering weight of the center photometry region.
Alternatively, as another embodiment, the processor 760 is further configured to: adjust the light metering weight of the center photometry region to a%; adjust the light metering weight of the auxiliary photometry region to b%; and perform light metering on the center photometry region and the auxiliary photometry region according to the light metering weight a% of the center photometry region and the light metering weight b% of the auxiliary photometry region; wherein 50 < a < 100 and a + b = 100.
Alternatively, as another embodiment, the processor 760 is further configured to: calculate, based on the target field depth range, the average field depth c of all pixel points in the target photometry region; control the second camera to detect the field depth of all pixel points in the edge region of the preview image; extract, from the target photometry region and the edge region in turn, the image regions whose field depths lie in the ranges [c-w1, c+w1], [c-w2, c-w1) ∪ (c+w1, c+w2], [c-w3, c-w2) ∪ (c+w2, c+w3], ..., [c-wm, c-wn) ∪ (c+wn, c+wm]; adjust the light metering weights of these image regions to m1%, m2%, m3%, ..., mn% respectively; and perform light metering on each of these image regions according to its corresponding light metering weight m1%, m2%, m3%, ..., mn%; wherein 0 ≤ w1 < w2 < w3 < ... < wn < wm, mn < ... < m3 < m2 < m1, m1% + m2% + m3% + ... + mn% = 100%, and the edge region is: all image regions other than the target photometry region.
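For illustration only, the banded weighting above can be sketched by binning pixels by their distance |depth − c| from the average field depth c, with nearer bands receiving larger weights; the band half-widths and weights below are illustrative assumptions satisfying 0 ≤ w1 < w2 < w3 and m3 < m2 < m1 with m1 + m2 + m3 = 100:

```python
import numpy as np

def banded_metering(depth_map, luma, c, w, m):
    """Meter with depth bands around the average field depth c.

    w: strictly increasing band half-widths [w1, w2, ..., wm]
    m: decreasing band weights in percent, summing to 100
    Band 0 covers |depth - c| in [0, w1]; band i covers (w[i-1], w[i]].
    """
    dist = np.abs(depth_map - c)
    total, lower = 0.0, -np.inf              # -inf so band 0 includes distance 0
    for wi, mi in zip(w, m):
        band = (dist > lower) & (dist <= wi)  # pixels falling in this band
        if band.any():
            total += mi / 100.0 * luma[band].mean()
        lower = wi
    return total

depth = np.array([1.0, 1.05, 1.5, 3.0])
luma = np.array([100.0, 100.0, 60.0, 20.0])
# c = 1.0; bands |d-1| <= 0.1, (0.1, 1.0], (1.0, 3.0] weighted 60%, 30%, 10%
value = banded_metering(depth, luma, 1.0, [0.1, 1.0, 3.0], [60, 30, 10])
```

With these values the three bands contribute 0.6·100 + 0.3·60 + 0.1·20 = 80.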
Alternatively, as another embodiment, the processor 760 is further configured to: calculate, based on the target field depth range, the average field depth d of all pixel points in the target photometry region; determine the target value span in which the average field depth d falls; determine, according to a preset correspondence between value spans and light metering weights, the light metering weight s% corresponding to the target value span; determine the light metering weight of the edge region in the preview image to be t%; and perform light metering on the center photometry region and the edge region according to the light metering weight s% of the center photometry region and the light metering weight t% of the edge region; wherein the center photometry region is the target photometry region, the edge region is: all image regions other than the target photometry region, the larger the value of the average field depth d, the larger the value of s%, and s + t = 100.
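For illustration only, the preset correspondence between value spans and light metering weights could be realized as a small lookup table in which a larger average field depth d maps to a larger center weight s% (the particular spans and weights below are illustrative assumptions; the embodiment only requires that s% grow with d and that s + t = 100):

```python
# Illustrative preset correspondence: average-field-depth span -> center weight s%
SPAN_TABLE = [
    (0.0, 1.0, 60),          # d in [0.0, 1.0) -> s = 60
    (1.0, 3.0, 75),          # d in [1.0, 3.0) -> s = 75
    (3.0, float("inf"), 90), # d in [3.0, inf) -> s = 90
]

def center_weight(d):
    """Return (s, t): center and edge light metering weights in percent, s + t = 100."""
    for lo, hi, s in SPAN_TABLE:
        if lo <= d < hi:
            return s, 100 - s
    raise ValueError("average field depth out of range")
```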
Alternatively, as another embodiment, the processor 760 is further configured to detect a click operation of the mobile terminal user on the photographing preview interface; when a click operation is detected, obtain the position of the click operation; and determine, as the target photometry region, the region of the preview image within a preset first range containing the position of the click operation.
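For illustration only, mapping a tap position to such a target photometry region might look like the clamped square window below; the window half-size and image dimensions stand in for the "preset first range" and are illustrative assumptions:

```python
def region_around_click(x, y, width, height, half):
    """Return the (left, top, right, bottom) rectangle of the preset first
    range centered on the click position, clamped to the preview image."""
    left = max(0, x - half)
    top = max(0, y - half)
    right = min(width, x + half)
    bottom = min(height, y + half)
    return left, top, right, bottom

# A tap near a corner yields a window clamped to the image bounds.
corner = region_around_click(10, 10, 640, 480, 50)
middle = region_around_click(320, 240, 640, 480, 50)
```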
Alternatively, as another embodiment, the processor 760 is further configured to determine a region of a preset second range in the preview image as the target photometry region.
Alternatively, as another embodiment, the processor 760 is further configured to perform face detection on the preview image; when a face is detected, the region where the face is located is determined as the target photometry region.
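For illustration only, choosing the target photometry region from face detection can be sketched with the detector injected as a callable; the stub detector and its box coordinates are illustrative assumptions (a real implementation might use, for example, a cascade classifier):

```python
def target_region_from_faces(image, detect_faces, default_region):
    """Use the first detected face box as the target photometry region,
    falling back to a default region when no face is found."""
    faces = detect_faces(image)         # list of (x, y, w, h) boxes
    if faces:
        x, y, w, h = faces[0]
        return (x, y, x + w, y + h)     # region where the face is located
    return default_region

stub = lambda img: [(40, 30, 80, 100)]  # pretend detector, for illustration
region = target_region_from_faces(None, stub, (0, 0, 64, 64))
```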
The mobile terminal can implement each process implemented by the terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
In the mobile terminal of the embodiment of the present invention, while the first camera captures the preview image and outputs it to the photographing preview interface, the target photometry region of the preview image is obtained, the target field depth range at the target photometry region detected by the second camera is obtained, the light metering weights of the preview image are adjusted based on the target field depth range, and light metering is finally performed on the preview image based on the adjusted light metering weights. Through the cooperation between the two cameras, this process obtains the preview image and the target field depth range at the target photometry region in the image, adjusts the light metering weights of the overall preview image based on the field depth range where the target photometry region is located, and determines the light metering strategy of the entire range from the field depth information of a local range. Differentiated light metering with differentiated light metering weights is thereby achieved for photographed objects at different spatial positions in the preview image, ensuring the accuracy of light metering and improving the light metering effect.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It may be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on the differences from the other embodiments, and for identical or similar parts between the embodiments, reference may be made to one another.
Although preferred embodiments of the embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, in the embodiments of the present invention, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device that includes a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article or terminal device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device that includes the element.
The above are preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can also be made without departing from the principles of the present invention, and these improvements and modifications also fall within the protection scope of the present invention.