CN109246362A - Image processing method and mobile terminal - Google Patents

Image processing method and mobile terminal

Info

Publication number
CN109246362A
CN109246362A (application CN201710297673.1A)
Authority
CN
China
Prior art keywords
depth
field
layer
exposure amount
mobile terminal
Prior art date
Legal status
Granted
Application number
CN201710297673.1A
Other languages
Chinese (zh)
Other versions
CN109246362B (en)
Inventor
杜月荣
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201710297673.1A
Publication of CN109246362A
Application granted
Publication of CN109246362B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention disclose an image processing method and a mobile terminal. The method comprises: obtaining, through a first camera and a second camera, N depth-of-field layers corresponding to an image to be processed, where N is a natural number greater than or equal to 2; determining a target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts corresponding to at least two depth-of-field layers are different; and processing each corresponding depth-of-field layer according to its target exposure amount.

Description

Image processing method and mobile terminal
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method and a mobile terminal.
Background art
With the continuous development of mobile terminal technologies such as smartphones and tablet computers, the mobile terminal is no longer limited to the single function of communication, but has become a device integrating functions such as leisure, communication, and entertainment. For example, a typical mobile terminal is equipped with a camera to meet the user's demand for taking photos and videos.
The camera on the mobile terminal has gradually evolved from a single camera to dual cameras, and mobile terminal manufacturers now treat dual cameras as a standard configuration. The dual-camera hardware can record the depth-of-field information of a photo, perceive the distance of the photographed object, separate the object from the background, and even blur the background. In addition, it can take photos with a 3D effect and provide functions such as 360-degree panoramic shooting.
With the dual cameras on existing mobile terminals, under normal daytime light the exposure of the captured image is relatively uniform: the brightness of distant and near objects differs little, the contrast of the image is not high, and the image lacks a sense of depth. Under dim light at night, objects that the human eye can still distinguish clearly are difficult for the camera to resolve; the exposure of the captured image is uneven, distant objects are generally very dark, and near objects fare only slightly better.
In existing image processing methods, when an image to be processed undergoes exposure processing, the same processing is applied to all of its depth-of-field layers; for example, the exposure amount of every depth-of-field layer of the image is adjusted from the current exposure amount to a single target exposure amount.
In implementing the present invention, the inventor found at least the following problem in the prior art:
Existing image processing methods apply the same exposure processing to every depth-of-field layer of the image to be processed, so the image processing effect is poor.
Summary of the invention
To solve the existing technical problem, embodiments of the present invention provide an image processing method and a mobile terminal that apply different exposure processing to each depth-of-field layer of an image to be processed, achieving a better image processing effect.
To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides an image processing method, the method comprising:
obtaining, through a first camera and a second camera, N depth-of-field layers corresponding to an image to be processed, where N is a natural number greater than or equal to 2;
determining a target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts corresponding to at least two depth-of-field layers are different; and
processing each corresponding depth-of-field layer according to its target exposure amount.
In the above embodiment, determining the target exposure amount corresponding to each depth-of-field layer comprises:
determining a reference exposure amount corresponding to each depth-of-field layer;
setting an exposure factor corresponding to each depth-of-field layer; and
obtaining the target exposure amount of each depth-of-field layer according to its reference exposure amount and exposure factor.
In the above embodiment, determining the reference exposure amount corresponding to each depth-of-field layer comprises:
obtaining an object brightness parameter corresponding to each depth-of-field layer; and
looking up, in a pre-saved correspondence between object brightness parameters and reference exposure amounts, the reference exposure amount corresponding to each object brightness parameter.
In the above embodiment, setting the exposure factor corresponding to each depth-of-field layer comprises:
dividing the N depth-of-field layers into M groups of depth-of-field layers, where M is a natural number greater than or equal to 2;
obtaining an exposure index corresponding to the image to be processed; and
setting, according to the exposure index and a preset exposure index threshold, the exposure factor corresponding to each depth-of-field layer in the M groups.
In the above embodiment, dividing the N depth-of-field layers into M groups comprises:
determining a depth-of-field boundary layer among the N depth-of-field layers according to a pre-saved average brightness of the image to be processed and the reference exposure amount of each depth-of-field layer;
dividing the N depth-of-field layers into first-class depth-of-field layers and second-class depth-of-field layers according to the boundary layer; and
dividing all first-class depth-of-field layers into P groups and all second-class depth-of-field layers into Q groups, where P and Q are natural numbers greater than or equal to 1 and P + Q = M.
An embodiment of the present invention also provides a mobile terminal, comprising an acquiring unit, a determining unit, and a processing unit, wherein:
the acquiring unit is configured to obtain, through a first camera and a second camera, N depth-of-field layers corresponding to an image to be processed, where N is a natural number greater than or equal to 2;
the determining unit is configured to determine a target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts corresponding to at least two depth-of-field layers are different; and
the processing unit is configured to process each corresponding depth-of-field layer according to its target exposure amount.
In the above embodiment, the determining unit comprises a determining subunit, a setting subunit, and an obtaining subunit, wherein:
the determining subunit is configured to determine the reference exposure amount corresponding to each depth-of-field layer;
the setting subunit is configured to set the exposure factor corresponding to each depth-of-field layer; and
the obtaining subunit is configured to obtain the target exposure amount of each depth-of-field layer according to its reference exposure amount and exposure factor.
In the above embodiment, the determining subunit is specifically configured to obtain the object brightness parameter corresponding to each depth-of-field layer, and to look up, in the pre-saved correspondence between object brightness parameters and reference exposure amounts, the reference exposure amount corresponding to each object brightness parameter.
In the above embodiment, the setting subunit is specifically configured to divide the N depth-of-field layers into M groups, where M is a natural number greater than or equal to 2; to obtain the exposure index corresponding to the image to be processed; and to set, according to the exposure index and the preset exposure index threshold, the exposure factor corresponding to each depth-of-field layer in the M groups.
In the above embodiment, the setting subunit is specifically configured to determine the depth-of-field boundary layer among the N depth-of-field layers according to the pre-saved average brightness of the image to be processed and the reference exposure amount of each depth-of-field layer; to divide the N depth-of-field layers into first-class and second-class depth-of-field layers according to the boundary layer; and to divide all first-class depth-of-field layers into P groups and all second-class depth-of-field layers into Q groups, where P and Q are natural numbers greater than or equal to 1 and P + Q = M.
It can be seen that in the technical solutions of the embodiments of the present invention, the N depth-of-field layers corresponding to the image to be processed are first obtained through the first camera and the second camera, the target exposure amount of each depth-of-field layer is then determined, and finally each depth-of-field layer is processed according to its target exposure amount. That is, the mobile terminal can apply different exposure processing to each depth-of-field layer of the image, whereas in the prior art the mobile terminal applies the same exposure processing to every depth-of-field layer. Compared with the prior art, the image processing method and mobile terminal proposed by the embodiments of the present invention therefore achieve a better image processing effect by applying different exposure processing to each depth-of-field layer of the image to be processed. Moreover, the technical solutions of the embodiments of the present invention are simple and convenient to implement, easy to popularize, and widely applicable.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the image processing method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the composition of the N depth-of-field layers corresponding to the image to be processed in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of the method for determining the target exposure amount corresponding to each depth-of-field layer in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of the method for determining the reference exposure amount corresponding to each depth-of-field layer in an embodiment of the present invention;
Fig. 5 is a schematic flowchart of the method for setting the exposure factor corresponding to each depth-of-field layer in an embodiment of the present invention;
Fig. 6 is a first schematic structural diagram of the mobile terminal in an embodiment of the present invention;
Fig. 7 is a second schematic structural diagram of the mobile terminal in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of the image processing method in an embodiment of the present invention. As shown in Fig. 1, the image processing method may comprise the following steps:
Step 101: obtain, through the first camera and the second camera, N depth-of-field layers corresponding to the image to be processed, where N is a natural number greater than or equal to 2.
In a specific embodiment of the present invention, the mobile terminal can obtain the image to be processed and its N depth-of-field layers through the first camera and the second camera. Specifically, two rear-mounted cameras can be placed on the mobile terminal: the first camera serves as the main camera, and the second camera serves as the auxiliary camera. The first camera uses a color sensor, characterized by good color reproduction; the image it captures is the image to be processed. The second camera uses a monochrome sensor, characterized by strong light sensitivity and a wide dynamic range, suited to capturing detail; the image it captures is primarily used as a reference for computing the depth of field. The two cameras are arranged in parallel and separated by a certain distance.
Specifically, in a specific embodiment of the present invention, the mobile terminal can obtain the N depth-of-field layers corresponding to the image to be processed through the dual cameras. Fig. 2 is a schematic diagram of the composition of the N depth-of-field layers corresponding to the image to be processed. As shown in Fig. 2, the N depth-of-field layers are denoted depth_1, depth_2, depth_3, …, depth_N, where N is a natural number greater than or equal to 2; depth_1 denotes the first depth-of-field layer, depth_2 the second, depth_3 the third, …, and depth_N the N-th. For example, if the depth of field of the image to be processed is 20 meters, the mobile terminal can set the layer at a depth of 1 m as depth_1, the layer at a depth of 2 m as depth_2, …, and the layer at a depth of 20 m as depth_20; these 20 layers are then all the depth-of-field layers corresponding to the image to be processed.
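The layering described above can be sketched as binning a per-pixel depth map into N layers. The function below is a minimal illustration, not the patent's implementation; the depth map input and the uniform one-meter-per-layer binning are assumptions drawn from the 20-meter example:

```python
import numpy as np

def split_into_depth_layers(depth_map, n_layers, max_depth):
    """Return one boolean pixel mask per depth-of-field layer.

    depth_map: hypothetical per-pixel depth in meters, as computed from the
    dual cameras; layer i covers depths in [i*max_depth/N, (i+1)*max_depth/N).
    """
    edges = np.linspace(0.0, max_depth, n_layers + 1)
    edges[-1] = np.inf  # fold the farthest pixels into depth_N
    return [(depth_map >= edges[i]) & (depth_map < edges[i + 1])
            for i in range(n_layers)]
```

For the 20-meter example, `split_into_depth_layers(depth_map, 20, 20.0)` yields the masks for depth_1 through depth_20, and every pixel falls into exactly one layer.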
Step 102: determine the target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts corresponding to at least two depth-of-field layers are different.
In a specific embodiment of the present invention, after obtaining the N depth-of-field layers corresponding to the image to be processed, the mobile terminal can determine the target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts of at least two layers are different. Preferably, the target exposure amount corresponding to each depth-of-field layer can include a gain Gain and an exposure time Exp.
Fig. 3 is a schematic flowchart of the method for determining the target exposure amount corresponding to each depth-of-field layer. As shown in Fig. 3, the method by which the mobile terminal determines the target exposure amount of each depth-of-field layer may comprise the following steps:
Step 102a: determine the reference exposure amount corresponding to each depth-of-field layer.
In a specific embodiment of the present invention, the mobile terminal can first determine the reference exposure amount corresponding to each depth-of-field layer, which can include a gain gain, an exposure time exp, and a brightness lum.
Preferably, in a specific embodiment of the present invention, each depth-of-field layer may contain relatively sharp and relatively blurred regions. The mobile terminal can select in advance a local area within the relatively sharp region of each depth-of-field layer and then determine the reference exposure amount of that selected local area; the reference exposure amount of the local area selected in each layer serves as the reference exposure amount of that layer.
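The patent does not specify how the "relatively sharp" local area is chosen. One plausible sketch, under the stated assumption that local intensity variance is used as a sharpness proxy, scans fixed-size patches and keeps the one with the highest variance:

```python
import numpy as np

def sharpest_patch(layer_gray, patch=16):
    """Pick the local area with the highest intensity variance as the
    'relatively sharp' region of a layer.  Variance as a sharpness proxy
    and the fixed patch size are assumptions, not from the patent."""
    h, w = layer_gray.shape
    best, best_var = (0, 0), -1.0
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            v = float(np.var(layer_gray[y:y + patch, x:x + patch]))
            if v > best_var:
                best, best_var = (y, x), v
    return best  # top-left corner of the chosen patch
```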
Fig. 4 is a schematic flowchart of the method for determining the reference exposure amount corresponding to each depth-of-field layer. As shown in Fig. 4, the method may comprise the following steps:
Step 401: obtain the object brightness parameter corresponding to each depth-of-field layer.
In a specific embodiment of the present invention, the mobile terminal can read from a specific memory location the object brightness parameter corresponding to each depth-of-field layer. Preferably, the mobile terminal can read the object brightness parameter of the selected local area of each depth-of-field layer, which then serves as the object brightness parameter of that layer.
Step 402: look up, in the pre-saved correspondence between object brightness parameters and reference exposure amounts, the reference exposure amount corresponding to each object brightness parameter.
In a specific embodiment of the present invention, after obtaining each object brightness parameter, the mobile terminal can look it up in the pre-saved correspondence between object brightness parameters and reference exposure amounts to obtain the reference exposure amount corresponding to each object brightness parameter.
Specifically, in a specific embodiment of the present invention, the mobile terminal can pre-save the correspondence between object brightness parameters and reference exposure amounts, where the object brightness parameter takes values in the interval 0–255 and each value corresponds to one set of reference exposure amounts. The pre-saved correspondence can be as shown in Table 1:
Y      gain       exp       lum
0      gain0      exp0      lum0
1      gain1      exp1      lum1
2      gain2      exp2      lum2
…      …          …         …
255    gain255    exp255    lum255

Table 1
As shown in Table 1 above, the object brightness parameter is Y, and the reference exposure amount includes the gain gain, the exposure time exp, and the brightness lum. gain0, gain1, gain2, …, gain255 are the gains corresponding to object brightness parameters 0, 1, 2, …, 255, respectively; exp0, exp1, exp2, …, exp255 are the corresponding exposure times; and lum0, lum1, lum2, …, lum255 are the corresponding brightnesses.
In a specific embodiment of the present invention, the reference exposure amount corresponding to each object brightness parameter can be found directly from Table 1. When an object brightness parameter is greater than 255, that parameter can either be discarded or be normalized into the 0–255 range.
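As a minimal sketch of this lookup, assuming a hypothetical 256-entry table `REF_EXPOSURE` holding placeholder (gain, exp, lum) triples, an out-of-range brightness can be clamped into the table rather than discarded:

```python
# Hypothetical lookup table indexed by object brightness Y in 0..255; the
# placeholder values stand in for the pre-saved gainY / expY / lumY entries.
REF_EXPOSURE = [(1.0 + y / 255, 1 / 60, float(y)) for y in range(256)]

def reference_exposure(y):
    """Return (gain, exp, lum) for object brightness y, normalizing any
    out-of-range value into the 0..255 interval first."""
    y = max(0, min(255, int(y)))
    return REF_EXPOSURE[y]
```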
From the above analysis, through steps 401–402 the mobile terminal can read the object brightness parameter of each depth-of-field layer from a specific memory location and, according to the pre-saved correspondence between object brightness parameters and reference exposure amounts, obtain the reference exposure amount corresponding to each object brightness parameter.
Step 102b: set the exposure factor corresponding to each depth-of-field layer.
In a specific embodiment of the present invention, the mobile terminal can set a corresponding exposure factor for each depth-of-field layer and then obtain each layer's target exposure amount according to its reference exposure amount and exposure factor. Specifically, the mobile terminal can first divide the N depth-of-field layers into first-class and second-class depth-of-field layers according to the brightness corresponding to each layer, then further divide all first-class layers and all second-class layers into multiple groups, and finally set the exposure factor of each layer in the M groups according to the exposure index corresponding to the image to be processed and a preset exposure index threshold. Fig. 5 is a schematic flowchart of the method for setting the exposure factor corresponding to each depth-of-field layer. As shown in Fig. 5, the method may comprise the following steps:
Step 501: divide the N depth-of-field layers into M groups, where M is a natural number greater than or equal to 2.
In a specific embodiment of the present invention, the mobile terminal first determines the depth-of-field boundary layer among the N layers according to the pre-saved average brightness of the image to be processed and the reference exposure amount of each layer, divides the N layers at the boundary layer into two classes, namely first-class and second-class depth-of-field layers, and then divides all first-class layers into P groups and all second-class layers into Q groups, where P and Q are natural numbers greater than or equal to 1 and P + Q = M.
Specifically, in a specific embodiment of the present invention, the mobile terminal can designate one of the N depth-of-field layers as the boundary layer, which serves as the dividing line between the first-class and second-class layers. The boundary layer can be determined from the average brightness of the image to be processed, which the mobile terminal can read from a specific memory location. Each layer whose depth is less than that of the boundary layer is assigned to the first class; each layer whose depth is greater than that of the boundary layer is assigned to the second class; the boundary layer itself belongs to neither class.
Specifically, in a specific embodiment of the present invention, the mobile terminal searches all the layers' brightnesses for the one equal or closest to the average brightness, and designates the layer corresponding to that brightness as the boundary layer. For example, if the brightness in the reference exposure amount of depth_i is found to equal the average brightness of the image to be processed, then depth_i is determined to be the boundary layer; depth_1, depth_2, …, depth_i-1 form the first-class depth-of-field layers and depth_i+1, depth_i+2, …, depth_N form the second-class depth-of-field layers, where i is a natural number greater than or equal to 1. Preferably, when the brightness equal or closest to the average brightness is not unique, the layer corresponding to any one of those brightnesses can be designated as the boundary layer.
Specifically, in a specific embodiment of the present invention, after dividing the N layers into first-class and second-class depth-of-field layers, the mobile terminal can further divide all first-class layers into P groups and all second-class layers into Q groups, where P and Q are natural numbers greater than or equal to 1 and P + Q = M. In this grouping, each group may contain an arbitrary number of layers. For example, suppose the mobile terminal has obtained 10 depth-of-field layers of the image to be processed and determined that the 6th layer is the boundary layer, so that layers 1–5 are first-class and layers 7–10 are second-class. The mobile terminal can then place layers 1–3 in the first group, layers 4–5 in the second group, and layers 7–10 in the third group.
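The boundary-layer selection and two-class split above can be sketched as follows; the function takes the per-layer brightness lum values (from the reference exposure amounts) in depth order, and ties are broken by taking the first match, consistent with "any one" being acceptable:

```python
def split_by_boundary(layer_lums, avg_lum):
    """Find the boundary layer (the one whose lum is closest to the image's
    average brightness) and split the remaining layer indices into
    first-class (nearer) and second-class (farther) layers."""
    boundary = min(range(len(layer_lums)),
                   key=lambda i: abs(layer_lums[i] - avg_lum))
    first_class = list(range(boundary))                        # depth_1 .. depth_{i-1}
    second_class = list(range(boundary + 1, len(layer_lums)))  # depth_{i+1} .. depth_N
    return boundary, first_class, second_class
```

Note that the boundary layer index appears in neither class, matching the rule that the boundary layer belongs to neither.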
Step 502: obtain the exposure index corresponding to the image to be processed.
In a specific embodiment of the present invention, the mobile terminal can read from a specific memory location the exposure index corresponding to the image to be processed. The exposure index indicates the degree of exposure of the image; by comparing it against a preset exposure index threshold, the mobile terminal can judge the light condition of the image to be processed, which is either bright light or dim light.
Step 503: set, according to the exposure index and the preset exposure index threshold, the exposure factor corresponding to each depth-of-field layer in the M groups.
In a specific embodiment of the present invention, after obtaining the exposure index and the preset exposure index threshold, the mobile terminal can set the exposure factor corresponding to each depth-of-field layer in the M groups accordingly.
Specifically, in a specific embodiment of the present invention, the mobile terminal can first determine the setting range of the exposure factor for each layer in the first-class and second-class depth-of-field layers. When the exposure index is greater than or equal to the threshold, that is, when the image to be processed is in dim light, the mobile terminal can set the exposure factor of each first-class layer to a value between 0 and 1 and the factor of each second-class layer to a value greater than 1. Conversely, when the exposure index is less than the threshold, that is, when the image is in bright light, it can set the factor of each first-class layer to a value greater than 1 and the factor of each second-class layer to a value between 0 and 1. For example, suppose the first-class layers are depth_1, depth_2, …, depth_i-1 and the second-class layers are depth_i+1, depth_i+2, …, depth_N. In dim light, the mobile terminal sets the first-class factors ratio_1, ratio_2, …, ratio_i-1 to values between 0 and 1 and the second-class factors ratio_i+1, ratio_i+2, …, ratio_N to values greater than 1; in bright light, it sets ratio_1, ratio_2, …, ratio_i-1 to values greater than 1 and ratio_i+1, ratio_i+2, …, ratio_N to values between 0 and 1. The exposure factor of the boundary layer depth_i is set to 1 under both bright and dim light; that is, the boundary layer is left unchanged during exposure processing.
Specifically, after determining the setting ranges of the exposure factors for the first-class and second-class depth-of-field layers, the mobile terminal may further assign different setting ranges to different groups of depth-of-field layers, so that the exposure factor of each layer is set more precisely. For example, when the image to be processed was captured under dim light, the mobile terminal divides all depth-of-field layers into 6 groups, where all layers in groups 1 through 4 are first-class depth-of-field layers and all layers in groups 5 and 6 are second-class depth-of-field layers. The mobile terminal may then set the exposure factor of each layer in group 1 to a value in 0~0.3, in group 2 to a value in 0.3~0.5, in group 3 to a value in 0.5~0.8, in group 4 to a value in 0.8~1, in group 5 to a value in 1~1.5, and in group 6 to a value in 1.5~1.8.
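The group-based assignment described above can be sketched as follows. The 6 group ranges mirror the dim-light example; picking the midpoint of each range as the concrete factor is an assumption for illustration:

```python
# Illustrative factor ranges for the 6 groups under dim light, matching the
# example above. Groups 1-4 hold first-class layers, groups 5-6 second-class.
DIM_LIGHT_RANGES = [
    (0.0, 0.3), (0.3, 0.5), (0.5, 0.8), (0.8, 1.0),
    (1.0, 1.5), (1.5, 1.8),
]

def assign_factors(layer_groups):
    """Assign each layer the midpoint of its group's factor range
    (midpoint choice is a hypothetical policy, not from the patent)."""
    factors = {}
    for group_idx, layers in enumerate(layer_groups):
        lo, hi = DIM_LIGHT_RANGES[group_idx]
        for layer in layers:
            factors[layer] = (lo + hi) / 2.0
    return factors

groups = [["depth_1"], ["depth_2"], ["depth_3"],
          ["depth_4"], ["depth_5"], ["depth_6"]]
print(assign_factors(groups)["depth_1"])  # 0.15
```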
Step 102c: obtain the target exposure amount corresponding to each depth-of-field layer according to the reference exposure amount and the exposure factor corresponding to that layer.
In an embodiment of the present invention, once the mobile terminal has set the exposure factor corresponding to each depth-of-field layer and obtained the reference exposure amount corresponding to each layer, the target exposure amount corresponding to each layer can be calculated from its exposure factor and reference exposure amount.
Specifically, in an embodiment of the present invention, the target exposure amount corresponding to each depth-of-field layer is the product of that layer's exposure factor and its reference exposure amount. The target exposure amount corresponding to each layer is as shown in Table 2:
Gain                         Exp
Gain_1 = ratio_1 × gain_1    Exp_1 = ratio_1 × exp_1
Gain_2 = ratio_2 × gain_2    Exp_2 = ratio_2 × exp_2
Gain_3 = ratio_3 × gain_3    Exp_3 = ratio_3 × exp_3
...                          ...
Gain_i = ratio_i × gain_i    Exp_i = ratio_i × exp_i
...                          ...
Gain_N = ratio_N × gain_N    Exp_N = ratio_N × exp_N

Table 2
In the expressions of Table 2 above, ratio_1, ratio_2, ratio_3, ..., ratio_i, ..., ratio_N are the exposure factors corresponding to the respective depth-of-field layers; gain_1, gain_2, gain_3, ..., gain_i, ..., gain_N are the gains in the reference exposure amounts of the respective layers; exp_1, exp_2, exp_3, ..., exp_i, ..., exp_N are the exposure times in the reference exposure amounts of the respective layers; Gain_1, Gain_2, Gain_3, ..., Gain_i, ..., Gain_N are the gains in the target exposure amounts of the respective layers; and Exp_1, Exp_2, Exp_3, ..., Exp_i, ..., Exp_N are the exposure times in the target exposure amounts of the respective layers.
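The per-layer computation of Table 2 is a component-wise scaling of the reference exposure. A minimal sketch, where the `Exposure` record and the sample values are assumptions for illustration:

```python
from typing import NamedTuple

class Exposure(NamedTuple):
    gain: float       # sensor gain
    exp_time: float   # exposure time (e.g., in milliseconds)

def target_exposure(ref: Exposure, ratio: float) -> Exposure:
    """Scale both components of the reference exposure by the layer's factor,
    as in Table 2: Gain_i = ratio_i * gain_i, Exp_i = ratio_i * exp_i."""
    return Exposure(gain=ratio * ref.gain, exp_time=ratio * ref.exp_time)

ref = Exposure(gain=2.0, exp_time=10.0)
print(target_exposure(ref, 0.5))  # Exposure(gain=1.0, exp_time=5.0)
```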
From the above analysis, it can be seen that through steps 102a~102c the mobile terminal can determine the target exposure amount corresponding to each depth-of-field layer of the image to be processed. Specifically, the mobile terminal obtains the target exposure amount corresponding to each layer according to that layer's reference exposure amount and exposure factor.
Step 103: process the corresponding depth-of-field layer according to each target exposure amount.
In an embodiment of the present invention, the mobile terminal may process each corresponding depth-of-field layer according to its target exposure amount. Specifically, after obtaining the target exposure amount corresponding to each layer, the mobile terminal performs exposure processing on each depth-of-field layer of the image to be processed: each layer's gain (Gain) and exposure time (Exp) are adjusted according to that layer's target exposure amount, yielding a new depth-of-field layer. The mobile terminal then fuses all of the new layers together to obtain a new image, thereby applying different exposure processing to the different depth-of-field layers of the image to be processed.
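The per-layer adjustment and fusion described above can be sketched as follows, assuming each depth-of-field layer is represented by a binary pixel mask over the image and a simple linear gain models the exposure adjustment (both representations are assumptions; the patent does not specify them):

```python
import numpy as np

def process_and_fuse(image, masks, gains):
    """Apply a per-layer gain to the pixels of each depth-of-field layer,
    then fuse the adjusted layers back into a single image."""
    out = np.zeros_like(image, dtype=np.float32)
    for mask, gain in zip(masks, gains):
        layer = image.astype(np.float32) * mask[..., None]  # isolate layer
        out += layer * gain  # exposure adjustment for this layer
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)
near = np.array([[1, 0], [0, 0]], dtype=np.float32)  # mask of layer 1
far = 1.0 - near                                     # mask of layer 2
fused = process_and_fuse(img, [near, far], [0.5, 1.5])
print(fused[0, 0, 0], fused[0, 1, 0])  # 50 150
```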
In the image processing method proposed by the embodiment of the present invention, the mobile terminal first obtains the N depth-of-field layers corresponding to the image to be processed via the first camera and the second camera, then determines the target exposure amount corresponding to each layer, and finally processes each layer according to its target exposure amount. That is, with this method the mobile terminal can apply different exposure processing to each depth-of-field layer of the image, whereas in the prior art the mobile terminal applies identical exposure processing to every depth-of-field layer. Compared with the prior art, the proposed method applies distinct exposure processing to each depth-of-field layer of the image to be processed and achieves a better image processing effect; moreover, the technical solution of the embodiment is simple to implement, easy to popularize, and widely applicable.
Fig. 6 is a first schematic structural diagram of the mobile terminal in an embodiment of the present invention. As shown in Fig. 6, the mobile terminal includes: an acquiring unit 601, a determination unit 602, and a processing unit 603; wherein,
the acquiring unit 601 is configured to obtain, via the first camera and the second camera, the N depth-of-field layers corresponding to the image to be processed, where N is a natural number greater than or equal to 2;
the determination unit 602 is configured to determine the target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts corresponding to at least two depth-of-field layers differ;
the processing unit 603 is configured to process the corresponding depth-of-field layer according to each target exposure amount.
Fig. 7 is a second schematic structural diagram of the mobile terminal in an embodiment of the present invention. As shown in Fig. 7, the determination unit 602 comprises: a determining subelement 6021, a setting subelement 6022, and an acquisition subelement 6023; wherein,
the determining subelement 6021 is configured to determine the reference exposure amount corresponding to each depth-of-field layer;
the setting subelement 6022 is configured to set the exposure factor corresponding to each depth-of-field layer;
the acquisition subelement 6023 is configured to obtain the target exposure amount corresponding to each depth-of-field layer according to the reference exposure amount and the exposure factor corresponding to that layer.
Further, the determining subelement 6021 is specifically configured to obtain the target brightness parameter corresponding to each depth-of-field layer, and to look up the reference exposure amount corresponding to each target brightness parameter according to a pre-saved correspondence between target brightness parameters and reference exposure amounts.
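Such a lookup might be sketched as follows, assuming the pre-saved correspondence is a table of brightness buckets mapped to (gain, exposure time) pairs; the bucket boundaries and values are illustrative assumptions, not taken from the patent:

```python
# Pre-saved correspondence: (exclusive max brightness, (gain, exposure_time_ms)).
# Buckets and values are hypothetical.
BRIGHTNESS_TO_REF_EXPOSURE = [
    (64,  (8.0, 33.0)),  # very dark layers: high gain, long exposure
    (128, (4.0, 16.0)),
    (192, (2.0, 8.0)),
    (256, (1.0, 4.0)),   # bright layers: low gain, short exposure
]

def lookup_reference_exposure(brightness):
    """Return the (gain, exposure time) pair for a layer's brightness parameter."""
    for max_b, ref in BRIGHTNESS_TO_REF_EXPOSURE:
        if brightness < max_b:
            return ref
    return BRIGHTNESS_TO_REF_EXPOSURE[-1][1]

print(lookup_reference_exposure(100))  # (4.0, 16.0)
```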
Further, the setting subelement 6022 is specifically configured to divide the N depth-of-field layers into M groups of depth-of-field layers, where M is a natural number greater than or equal to 2; to obtain the exposure index corresponding to the image to be processed; and to set the exposure factor corresponding to each depth-of-field layer in the M groups according to the exposure index and a preset exposure index threshold.
Further, the setting subelement 6022 is specifically configured to determine the depth-of-field boundary layer among the N depth-of-field layers according to the pre-saved average brightness of the image to be processed and the reference exposure amount corresponding to each layer; to divide the N layers into first-class depth-of-field layers and second-class depth-of-field layers according to the boundary layer; and to divide all first-class layers and all second-class layers into P groups and Q groups respectively, where P and Q are natural numbers greater than or equal to 1 and the sum of P and Q is M.
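The split around the boundary layer can be sketched as follows. Chunking each class into contiguous, roughly equal groups is an assumed policy; the patent only requires P groups of first-class layers and Q groups of second-class layers:

```python
def split_layers(layer_ids, boundary_idx, p, q):
    """Split N layer indices around the boundary layer into a first class
    (layers before the boundary) and a second class (layers after it), then
    chunk each class into p and q contiguous groups respectively."""
    first = layer_ids[:boundary_idx]       # first-class depth-of-field layers
    second = layer_ids[boundary_idx + 1:]  # second-class depth-of-field layers

    def chunk(seq, n):
        size = max(1, -(-len(seq) // n))   # ceiling division
        return [seq[i:i + size] for i in range(0, len(seq), size)]

    return chunk(first, p), chunk(second, q)

layers = list(range(7))                    # depth_0 ... depth_6
first_groups, second_groups = split_layers(layers, boundary_idx=3, p=2, q=2)
print(first_groups, second_groups)  # [[0, 1], [2]] [[4, 5], [6]]
```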
In practical applications, the acquiring unit 601, the determination unit 602, and the processing unit 603 may be implemented by a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like within the mobile terminal.
In addition, each unit in this embodiment may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented either in the form of hardware or in the form of a software functional module.
If the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it may be stored in a readable storage medium of a mobile terminal. Based on this understanding, the technical solution of this embodiment — in essence, the part that contributes over the existing technology, or all or part of the technical solution — may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a mobile terminal device (which may be an individual mobile terminal, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
Specifically, program instructions corresponding to the image processing method of this embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash disk. When the program instructions corresponding to the image processing method on the storage medium are read or executed by an electronic device, the following steps are performed:
obtaining, via the first camera and the second camera, the N depth-of-field layers corresponding to the image to be processed, where N is a natural number greater than or equal to 2;
determining the target exposure amount corresponding to each depth-of-field layer, where the target exposure amounts corresponding to at least two depth-of-field layers differ;
processing the corresponding depth-of-field layer according to each target exposure amount.
The mobile terminal proposed by the embodiment of the present invention first obtains the N depth-of-field layers corresponding to the image to be processed via the first camera and the second camera, then determines the target exposure amount corresponding to each layer, and finally processes each layer according to its target exposure amount. That is, the mobile terminal of the embodiment of the present invention can apply different exposure processing to each depth-of-field layer of the image, whereas in the prior art the mobile terminal applies identical exposure processing to every depth-of-field layer. Compared with the prior art, the mobile terminal proposed by the embodiment of the present invention applies distinct exposure processing to each depth-of-field layer of the image to be processed and achieves a better image processing effect; moreover, the technical solution of the embodiment is simple to implement, easy to popularize, and widely applicable.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention.

Claims (10)

1. An image processing method, characterized in that the method comprises:
obtaining, via a first camera and a second camera, N depth-of-field layers corresponding to an image to be processed, wherein N is a natural number greater than or equal to 2;
determining a target exposure amount corresponding to each depth-of-field layer, wherein the target exposure amounts corresponding to at least two depth-of-field layers are different;
processing the corresponding depth-of-field layer according to each target exposure amount.
2. The method according to claim 1, characterized in that determining the target exposure amount corresponding to each depth-of-field layer comprises:
determining a reference exposure amount corresponding to each depth-of-field layer;
setting an exposure factor corresponding to each depth-of-field layer;
obtaining the target exposure amount corresponding to each depth-of-field layer according to the reference exposure amount and the exposure factor corresponding to that layer.
3. The method according to claim 2, characterized in that determining the reference exposure amount corresponding to each depth-of-field layer comprises:
obtaining a target brightness parameter corresponding to each depth-of-field layer;
looking up the reference exposure amount corresponding to each target brightness parameter according to a pre-saved correspondence between target brightness parameters and reference exposure amounts.
4. The method according to claim 2, characterized in that setting the exposure factor corresponding to each depth-of-field layer comprises:
dividing the N depth-of-field layers into M groups of depth-of-field layers, wherein M is a natural number greater than or equal to 2;
obtaining an exposure index corresponding to the image to be processed;
setting the exposure factor corresponding to each depth-of-field layer in the M groups according to the exposure index and a preset exposure index threshold.
5. The method according to claim 4, characterized in that dividing the N depth-of-field layers into M groups of depth-of-field layers comprises:
determining a depth-of-field boundary layer among the N depth-of-field layers according to a pre-saved average brightness of the image to be processed and the reference exposure amount corresponding to each depth-of-field layer;
dividing the N depth-of-field layers into first-class depth-of-field layers and second-class depth-of-field layers according to the depth-of-field boundary layer;
dividing all first-class depth-of-field layers and all second-class depth-of-field layers into P groups and Q groups respectively, wherein P and Q are natural numbers greater than or equal to 1 and the sum of P and Q is M.
6. A mobile terminal, characterized in that the mobile terminal comprises: an acquiring unit, a determination unit, and a processing unit; wherein,
the acquiring unit is configured to obtain, via a first camera and a second camera, N depth-of-field layers corresponding to an image to be processed, wherein N is a natural number greater than or equal to 2;
the determination unit is configured to determine a target exposure amount corresponding to each depth-of-field layer, wherein the target exposure amounts corresponding to at least two depth-of-field layers are different;
the processing unit is configured to process the corresponding depth-of-field layer according to each target exposure amount.
7. The mobile terminal according to claim 6, characterized in that the determination unit comprises: a determining subelement, a setting subelement, and an acquisition subelement; wherein,
the determining subelement is configured to determine a reference exposure amount corresponding to each depth-of-field layer;
the setting subelement is configured to set an exposure factor corresponding to each depth-of-field layer;
the acquisition subelement is configured to obtain the target exposure amount corresponding to each depth-of-field layer according to the reference exposure amount and the exposure factor corresponding to that layer.
8. The mobile terminal according to claim 7, characterized in that the determining subelement is specifically configured to obtain a target brightness parameter corresponding to each depth-of-field layer, and to look up the reference exposure amount corresponding to each target brightness parameter according to a pre-saved correspondence between target brightness parameters and reference exposure amounts.
9. The mobile terminal according to claim 7, characterized in that the setting subelement is specifically configured to divide the N depth-of-field layers into M groups of depth-of-field layers, wherein M is a natural number greater than or equal to 2; to obtain an exposure index corresponding to the image to be processed; and to set the exposure factor corresponding to each depth-of-field layer in the M groups according to the exposure index and a preset exposure index threshold.
10. The mobile terminal according to claim 9, characterized in that the setting subelement is specifically configured to determine a depth-of-field boundary layer among the N depth-of-field layers according to a pre-saved average brightness of the image to be processed and the reference exposure amount corresponding to each depth-of-field layer; to divide the N depth-of-field layers into first-class depth-of-field layers and second-class depth-of-field layers according to the depth-of-field boundary layer; and to divide all first-class depth-of-field layers and all second-class depth-of-field layers into P groups and Q groups respectively, wherein P and Q are natural numbers greater than or equal to 1 and the sum of P and Q is M.
CN201710297673.1A 2017-04-28 2017-04-28 Image processing method and mobile terminal Active CN109246362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710297673.1A CN109246362B (en) 2017-04-28 2017-04-28 Image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN109246362A true CN109246362A (en) 2019-01-18
CN109246362B CN109246362B (en) 2021-03-16

Family

ID=65082756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710297673.1A Active CN109246362B (en) 2017-04-28 2017-04-28 Image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN109246362B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621629A (en) * 2008-06-30 2010-01-06 睿致科技股份有限公司 Method of automatic exposure
CN104010212A (en) * 2014-05-28 2014-08-27 华为技术有限公司 Method and device for synthesizing multiple layers
CN105578026A (en) * 2015-07-10 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Photographing method and user terminal
US20160191896A1 (en) * 2014-12-31 2016-06-30 Dell Products, Lp Exposure computation via depth-based computational photography
CN106408518A (en) * 2015-07-30 2017-02-15 展讯通信(上海)有限公司 Image fusion method and apparatus, and terminal device
CN106550184A (en) * 2015-09-18 2017-03-29 中兴通讯股份有限公司 Photo processing method and device

Also Published As

Publication number Publication date
CN109246362B (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN101356546B (en) Image high-resolution upgrading device, image high-resolution upgrading method image high-resolution upgrading system
CN109961406B (en) Image processing method and device and terminal equipment
CN106600686A (en) Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
CN104205827B (en) Image processing apparatus and method and camera head
CN108830892A (en) Face image processing process, device, electronic equipment and computer readable storage medium
CN108848367B (en) Image processing method and device and mobile terminal
CN103634588A (en) Image composition method and electronic apparatus
JP2022515798A (en) Lighting rendering methods, equipment, electronic equipment and computer programs
US20150146082A1 (en) Specular and diffuse image generator using polarized light field camera and control method thereof
CN104092956A (en) Flash control method, flash control device and image acquisition equipment
CN104205825A (en) Image processing device and method, and imaging device
CN106231411A (en) The switching of main broadcaster's class interaction platform client scene, loading method and device, client
CN109064526A (en) Method and device for generating jigsaw puzzle
CN110012236A (en) A kind of information processing method, device, equipment and computer storage medium
CN109445569A (en) Information processing method, device, equipment and readable storage medium storing program for executing based on AR
CN104345423A (en) Image collecting method and image collecting equipment
CN117456076A (en) Material map generation method and related equipment
CN108388636A (en) Streetscape method for retrieving image and device based on adaptive segmentation minimum enclosed rectangle
CN117218273A (en) Image rendering method and device
Liu et al. Stereo-based bokeh effects for photography
CN105893578A (en) Method and device for selecting photos
CN109246362A (en) A kind of image processing method and mobile terminal
CN107527323A (en) The scaling method and device of lens distortion
CN114005066B (en) HDR-based video frame image processing method and device, computer equipment and medium
Faluvégi et al. A 3D convolutional neural network for light field depth estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant