CN107707819A - Image capturing method, device and storage medium - Google Patents

Image capturing method, device and storage medium

Info

Publication number
CN107707819A
CN107707819A (application CN201710908032.5A)
Authority
CN
China
Prior art keywords
image
shooting image
shooting
camera assembly
light intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710908032.5A
Other languages
Chinese (zh)
Other versions
CN107707819B (en)
Inventor
Peng Bo (彭波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201710908032.5A
Publication of CN107707819A
Application granted
Publication of CN107707819B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals

Abstract

The present disclosure relates to an image capturing method, a device, and a storage medium, and belongs to the field of terminal technology. The method is applied to a terminal configured with a first camera assembly and a second camera assembly, where the resolution of the first camera assembly is higher than the resolution of the second camera assembly and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly. In a background blurring mode, the light intensity value of the environment in which the terminal is currently located is determined. When a shooting instruction is received, an object to be shot is shot through the first camera assembly to obtain a first shooting image, and through the second camera assembly to obtain a second shooting image. A final output image is then determined based on the light intensity value, the first shooting image, and the second shooting image, thereby implementing image shooting. In this way, the quality of the final output image is not affected by the ambient light of the external environment, and image shooting quality is improved.

Description

Image capturing method, device and storage medium
Technical field
The present disclosure relates to the field of terminal technology, and in particular to an image capturing method, a device, and a storage medium.
Background art
With the rapid development of terminal technology, terminals such as mobile phones and tablet computers can provide increasingly rich functions. For example, terminals are provided with a shooting function. At present, in order to improve the shooting effect, shooting is usually performed in a background blurring mode. Background blurring makes the depth of field shallow, so that the focus falls on the subject being shot. The depth of field refers to the distance range in front of and behind the subject within which the camera assembly can produce a clear image; that is, after focusing is completed, a clear image is formed within a range in front of and behind the focal point, and this front-to-back distance range is called the depth of field.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides an image capturing method, a device, and a storage medium.
According to a first aspect, an image capturing method is provided, applied to a terminal. The terminal is configured with a first camera assembly and a second camera assembly, the resolution of the first camera assembly is higher than the resolution of the second camera assembly, and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly. The method includes:
in a background blurring mode, determining a light intensity value of an environment in which the terminal is currently located;
when a shooting instruction is received, shooting an object to be shot through the first camera assembly to obtain a first shooting image, and shooting the object to be shot through the second camera assembly to obtain a second shooting image;
determining a final output image based on the light intensity value, the first shooting image, and the second shooting image, so as to implement image shooting.
Optionally, determining the final output image based on the light intensity value, the first shooting image, and the second shooting image includes:
selecting a target shooting image from the first shooting image and the second shooting image based on the light intensity value, where the target shooting image refers to the image to be blurred;
determining a depth of field based on the first shooting image and the second shooting image;
blurring the target shooting image based on the depth of field to obtain the final output image.
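For illustration only, the three operations above can be composed as in the following Python sketch; the helper functions (select_target_image, estimate_depth, blur_background) and the preset intensity value are assumptions introduced here, not part of the disclosure, and possible implementations of them are sketched later in the detailed description.

```python
# A minimal sketch of the optional output-determination flow, assuming the
# hypothetical helpers select_target_image, estimate_depth and blur_background.
def determine_final_output(light_value, image_first, image_second, preset_strength=300.0):
    # 1. Choose the image to be blurred according to the ambient light intensity value.
    target, _auxiliary = select_target_image(light_value, image_first, image_second, preset_strength)
    # 2. Determine the depth of field from the two shooting images (binocular stereo vision).
    depth = estimate_depth(image_first, image_second)
    # 3. Blur the background of the target shooting image based on the depth of field.
    return blur_background(target, depth)
```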
Optionally, selecting the target shooting image from the first shooting image and the second shooting image based on the light intensity value includes:
when the light intensity value is greater than a preset intensity, determining the first shooting image as the target shooting image;
when the light intensity value is less than or equal to the preset intensity, determining the second shooting image as the target shooting image.
Optionally, determining the depth of field based on the first shooting image and the second shooting image includes:
determining the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
Optionally, determining the light intensity value of the environment in which the terminal is currently located includes:
detecting the light intensity through a CMOS photosensitive device configured in the terminal, and determining, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
According to a second aspect, an image capturing device is provided, configured in a terminal. The terminal is configured with a first camera assembly and a second camera assembly, the resolution of the first camera assembly is higher than the resolution of the second camera assembly, and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly. The device includes:
a first determining module, configured to determine, in a background blurring mode, a light intensity value of an environment in which the terminal is currently located;
a shooting module, configured to, when a shooting instruction is received, shoot an object to be shot through the first camera assembly to obtain a first shooting image, and shoot the object to be shot through the second camera assembly to obtain a second shooting image;
a second determining module, configured to determine a final output image based on the light intensity value, the first shooting image, and the second shooting image, so as to implement image shooting.
Optionally, the second determining module includes:
a selecting unit, configured to select a target shooting image from the first shooting image and the second shooting image based on the light intensity value, where the target shooting image refers to the image to be blurred;
a determining unit, configured to determine a depth of field based on the first shooting image and the second shooting image;
a processing unit, configured to blur the target shooting image based on the depth of field to obtain the final output image.
Optionally, the selecting unit is configured to:
determine the first shooting image as the target shooting image when the light intensity value is greater than a preset intensity;
determine the second shooting image as the target shooting image when the light intensity value is less than or equal to the preset intensity.
Optionally, the determining unit is configured to:
determine the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
Optionally, the first determining module is configured to:
detect the light intensity through a CMOS photosensitive device configured in the terminal, and determine, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
According to a third aspect, an image capturing device is provided, the device including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the image capturing method described in the first aspect above.
According to a fourth aspect, a computer-readable storage medium is provided, the storage medium storing instructions that, when executed by a processor, implement the image capturing method described in the first aspect above.
According to a fifth aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to perform the image capturing method described in the first aspect above.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the background blurring mode, in order to ensure that the captured image can adapt to the light intensity of the external environment, the light intensity value of the environment in which the terminal is located is determined. When a shooting instruction is received, it indicates that an object to be shot is about to be shot; at this time, the object to be shot is shot through the first camera assembly and the second camera assembly of the terminal respectively, so as to obtain a corresponding first shooting image and second shooting image. Afterwards, a final output image is determined based on the light intensity value, the first shooting image, and the second shooting image. In this way, two shooting images are obtained through the two camera assemblies, and the final output image is determined in combination with the determined light intensity value, so that the final output image can adapt to the light intensity of the external environment. This ensures that the quality of the final output image is not affected by ambient light and improves image shooting quality.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of an image capturing method according to an exemplary embodiment.
Fig. 2 is a flow chart of an image capturing method according to another exemplary embodiment.
Fig. 3 is a block diagram of an image capturing device according to an exemplary embodiment.
Fig. 4 is a block diagram of an image capturing device according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Before the image capturing method according to the embodiments of the present disclosure is described in detail, the terms, application scenarios, and execution entity involved in the embodiments of the present disclosure are briefly introduced.
First, the terms involved in the embodiments of the present disclosure are briefly introduced.
Depth of field: the distance range in front of and behind the subject within which the camera assembly can produce a clear image. That is, after focusing is completed, a clear image is formed within a range in front of and behind the focal point, and this front-to-back distance range is called the depth of field. Factors that generally affect the depth of field include the lens focal length, the distance to the subject, and the aperture size.
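For background only (the patent itself gives no formula), a commonly used far-field approximation of the depth of field of a thin lens is:

$$\mathrm{DOF} \approx \frac{2\,N\,c\,u^{2}}{f^{2}}, \qquad u \gg f,$$

where $f$ is the lens focal length, $N$ the aperture f-number, $c$ the acceptable circle of confusion, and $u$ the subject distance. It reflects the factors listed above: a longer focal length, a closer subject, or a wider aperture (smaller $N$) all yield a shallower depth of field.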
Resolution of a camera assembly: its magnitude determines the definition of the image captured by the camera assembly. In general, the higher the resolution, the higher the definition of the image captured by the camera assembly; conversely, the lower the resolution, the lower the definition of the captured image.
Pixel size of a camera assembly: its magnitude determines the brightness of the image captured by the camera assembly. In general, the larger the pixel size, the higher the brightness of the image captured by the camera assembly; conversely, the smaller the pixel size, the lower the brightness of the captured image.
Background blurring: making the depth of field shallow so that the focus falls on the subject. In practice, its main purpose is to blur the background of an image so as to clearly bring out the subject in the image.
Next, the application scenarios involved in the embodiments of the present disclosure are briefly introduced.
At present, in order to improve the image shooting effect, terminals provide a background blurring mode. When shooting is performed in the background blurring mode, the background of the shot image can be blurred so that the subject of the shot image stands out. However, shooting in the background blurring mode has the following problems:
When the ambient light is strong and shooting is performed in the background blurring mode, the definition of the shot image may be insufficient because, for example, the resolution of the camera assembly is not high enough. When the ambient light is weak and shooting is performed in the background blurring mode, the brightness of the shot picture may be insufficient and the noise may be high because the light intensity is insufficient. How to solve both problems at the same time has therefore become a focus of research.
Accordingly, the embodiments of the present disclosure provide an image capturing method that can solve the above problems, ensuring that the quality of the final output image is not affected by ambient light and thereby improving image shooting quality. Its specific implementation may refer to the image capturing methods provided by the embodiments shown in Fig. 1 and Fig. 2 below.
Next, the execution entity involved in the embodiments of the present disclosure is briefly introduced.
The image capturing method provided by the embodiments of the present disclosure may be performed by a terminal. The terminal is configured with a first camera assembly and a second camera assembly, that is, the terminal is configured with dual camera assemblies, and the dual camera assemblies are usually arranged on the same side of the terminal; for example, the dual camera assemblies are the rear cameras of the terminal. In addition, the resolution of the first camera assembly is higher than the resolution of the second camera assembly, and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly. Further, the terminal may be configured with a CMOS photosensitive device, through which the terminal can detect the light intensity of the external environment so as to determine the light intensity value of the external environment.
In practical implementations, the terminal may be a device such as a mobile phone, a tablet computer, a computer, or a smart camera, which is not limited in the embodiments of the present disclosure.
Having introduced the terms, application scenarios, and execution entity involved in the embodiments of the present disclosure, the image capturing method according to the embodiments of the present disclosure is next described in detail with reference to the embodiments shown in Fig. 1 and Fig. 2 below.
Fig. 1 is a flow chart of an image capturing method according to an exemplary embodiment. As shown in Fig. 1, the image capturing method is applied to the above terminal and may include the following implementation steps:
In step 101, in the background blurring mode, the light intensity value of the environment in which the terminal is currently located is determined.
In step 102, when a shooting instruction is received, the object to be shot is shot through the first camera assembly to obtain a first shooting image, and the object to be shot is shot through the second camera assembly to obtain a second shooting image.
In step 103, a final output image is determined based on the light intensity value, the first shooting image, and the second shooting image, so as to implement image shooting.
In the embodiments of the present disclosure, in the background blurring mode, in order to ensure that the captured image can adapt to the light intensity of the external environment, the light intensity value of the environment in which the terminal is located is determined. When a shooting instruction is received, it indicates that an object to be shot is about to be shot; at this time, the object to be shot is shot through the first camera assembly and the second camera assembly of the terminal respectively, so as to obtain a corresponding first shooting image and second shooting image. Afterwards, a final output image is determined based on the light intensity value, the first shooting image, and the second shooting image. In this way, two shooting images are obtained through the two camera assemblies, and the final output image is determined in combination with the determined light intensity value, so that the final output image can adapt to the light intensity of the external environment. This ensures that the quality of the final output image is not affected by ambient light and improves image shooting quality.
Optionally, determining the final output image based on the light intensity value, the first shooting image, and the second shooting image includes:
selecting a target shooting image from the first shooting image and the second shooting image based on the light intensity value, where the target shooting image refers to the image to be blurred;
determining a depth of field based on the first shooting image and the second shooting image;
blurring the target shooting image based on the depth of field to obtain the final output image.
Optionally, selecting the target shooting image from the first shooting image and the second shooting image based on the light intensity value includes:
when the light intensity value is greater than a preset intensity, determining the first shooting image as the target shooting image;
when the light intensity value is less than or equal to the preset intensity, determining the second shooting image as the target shooting image.
Optionally, determining the depth of field based on the first shooting image and the second shooting image includes:
determining the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
Optionally, determining the light intensity value of the environment in which the terminal is currently located includes:
detecting the light intensity through a CMOS photosensitive device configured in the terminal, and determining, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
All of the above optional technical solutions can be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
Fig. 2 is a flow chart of an image capturing method according to another exemplary embodiment. As shown in Fig. 2, the image capturing method is applied to the above terminal, that is, the terminal is configured with a first camera assembly and a second camera assembly, the resolution of the first camera assembly is higher than the resolution of the second camera assembly, and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly. The image capturing method may include the following implementation steps:
In step 201, in the background blurring mode, the light intensity value of the environment in which the terminal is currently located is determined.
In a practical application scenario, the terminal may be provided with a background blurring option, which may be a physical button or a virtual button. When a user needs to shoot in the background blurring mode, the user may click the background blurring option to trigger a background blurring instruction. Accordingly, after receiving the background blurring instruction, the terminal starts the background blurring mode, so that the user can shoot in the background blurring mode.
Of course, starting the background blurring mode after receiving a background blurring instruction is only taken as an example here. In a practical application scenario, the terminal may also start the background blurring mode by default, that is, the terminal automatically starts the background blurring mode after the shooting mode is opened, which is not limited in the embodiments of the present disclosure.
As noted above, when shooting in the background blurring mode, if the light of the external environment is strong, the definition of the shot image may be insufficient because, for example, the resolution of the camera assembly is not high enough; conversely, if the light of the external environment is weak, the brightness of the shot picture may be insufficient and the noise may be high because the light intensity is insufficient. In order to adapt to the light intensity of the external environment, the embodiments of the present disclosure determine the light intensity value of the environment in which the terminal is currently located.
In a specific implementation, determining the light intensity value of the environment in which the terminal is currently located may include: detecting the light intensity through the CMOS photosensitive device configured in the terminal, and determining, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
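As a rough illustration of this step, the following Python sketch assumes a hypothetical driver call read_cmos_light_sensor() and a hypothetical calibration factor; the patent does not specify how the photosensitive device is read out.

```python
# Minimal sketch of step 201, under the assumptions stated above.
LUX_PER_COUNT = 0.25  # hypothetical calibration factor from raw sensor counts to a light intensity value

def get_ambient_light_value():
    raw_count = read_cmos_light_sensor()  # hypothetical call returning a raw reading from the CMOS photosensitive device
    return raw_count * LUX_PER_COUNT      # light intensity value of the environment the terminal is currently in
```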
In step 202, when a shooting instruction is received, the object to be shot is shot through the first camera assembly to obtain a first shooting image, and the object to be shot is shot through the second camera assembly to obtain a second shooting image.
The shooting instruction may be triggered by the user through a designated operation, and the designated operation may include a click operation, a slide operation, and the like, which is not limited in the embodiments of the present disclosure.
For example, a shooting option is provided in the terminal, and the shooting option may be a physical button or a virtual button. When the user wants to shoot the object to be shot, the user may click the shooting option to trigger the above shooting instruction.
When the terminal receives the shooting instruction, it may be determined that the user is going to shoot the object to be shot. Here, in order to improve the shooting effect, the terminal shoots the object to be shot through the two configured camera assemblies respectively, and correspondingly obtains the first shooting image and the second shooting image. For example, the obtained first shooting image and second shooting image are A and B, respectively.
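A possible way to obtain the two shooting images is sketched below with OpenCV in Python; the device indices 0 and 1 are assumptions, and on a real terminal the two rear camera assemblies would be opened through the platform's own camera interface rather than cv2.VideoCapture.

```python
import cv2

def capture_pair():
    cam_first = cv2.VideoCapture(0)   # first camera assembly: higher resolution, smaller pixel size (assumed index)
    cam_second = cv2.VideoCapture(1)  # second camera assembly: lower resolution, larger pixel size (assumed index)
    ok_first, image_first = cam_first.read()
    ok_second, image_second = cam_second.read()
    cam_first.release()
    cam_second.release()
    if not (ok_first and ok_second):
        raise RuntimeError("failed to capture from both camera assemblies")
    return image_first, image_second
```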
In step 203, a target shooting image is selected from the first shooting image and the second shooting image based on the light intensity value, where the target shooting image refers to the image to be blurred.
In practical implementations, since only one shooting image is ultimately output, after the terminal obtains the first shooting image and the second shooting image through the above two camera assemblies, a main shooting image serving as the main output needs to be selected from the first shooting image and the second shooting image, so that the final output image can be obtained by subsequently blurring this main shooting image; that is, the terminal selects, from the first shooting image and the second shooting image, the target shooting image to be blurred.
In a specific implementation, selecting the target shooting image from the first shooting image and the second shooting image based on the light intensity value may include the following two cases:
First case: when the light intensity value is greater than a preset intensity, the first shooting image is determined as the target shooting image.
The preset intensity may be set by the user in a customized manner according to actual needs, or may be set by default by the terminal, which is not limited in the embodiments of the present disclosure.
When the light intensity value is greater than the preset intensity, it indicates that the light of the environment in which the terminal is currently located is strong. As noted above, since the pixel size of a camera assembly determines the brightness of the shot image, if the second shooting image obtained by the second camera assembly (which has a relatively lower resolution and a relatively larger pixel size) were determined as the main shooting image in this case, the definition of the second shooting image might be relatively low because the resolution of that camera assembly is not high enough, resulting in a relatively low definition of the final output image.
Therefore, when the light intensity value is greater than the preset intensity, that is, when the light of the environment in which the terminal is currently located is strong, the first shooting image obtained by the first camera assembly may be selected as the main shooting image, that is, the first shooting image obtained by the first camera assembly is determined as the target shooting image; for example, the target shooting image determined in this case is A. In other words, the first camera assembly serves as the main camera assembly, and the second camera assembly serves as the secondary camera assembly.
Second case: when the light intensity value is less than or equal to the preset intensity, the second shooting image is determined as the target shooting image.
When the light intensity value is less than or equal to the preset intensity, it indicates that the illumination of the environment in which the terminal is currently located is weak. As noted above, since the pixel size of a camera assembly determines the brightness of the shot image, if the first shooting image obtained by the first camera assembly (which has a relatively higher resolution and a relatively smaller pixel size) were determined as the main shooting image in this case, the first shooting image might have high noise and low brightness due to insufficient light, resulting in a relatively low brightness of the final output image.
Therefore, when the light intensity value is less than or equal to the preset intensity, that is, when the light of the environment in which the terminal is currently located is weak, the second shooting image obtained by the second camera assembly may be selected as the main shooting image, that is, the second shooting image obtained by the second camera assembly is determined as the target shooting image; for example, the target shooting image determined in this case is B. In other words, the second camera assembly serves as the main camera assembly, and the first camera assembly serves as the secondary camera assembly.
It should be noted that the above only takes comparing the light intensity value with a preset intensity as an example. In practical implementations, the target shooting image may also be selected from the first shooting image and the second shooting image by judging the range in which the light intensity value falls.
For example, when the light intensity value falls within a first preset range, the first shooting image is determined as the target shooting image; when the light intensity value falls within a second preset range, the second shooting image is determined as the target shooting image. The lower limit of the first preset range may be greater than the upper limit of the second preset range, and both the first preset range and the second preset range may be set by the user in a customized manner according to actual needs or set by default by the terminal, which is not limited in the embodiments of the present disclosure.
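Both selection strategies described above, the single-threshold comparison and the range-based variant, can be sketched as follows; the threshold and the range bounds are illustrative values only.

```python
def select_target_image(light_value, image_first, image_second, preset_strength=300.0):
    """Return (target, auxiliary); single-threshold comparison from the first strategy."""
    if light_value > preset_strength:
        return image_first, image_second   # strong light: favour the higher-resolution assembly
    return image_second, image_first       # weak light: favour the larger-pixel assembly

def select_target_image_by_range(light_value, image_first, image_second,
                                 first_range=(500.0, float("inf")),
                                 second_range=(0.0, 200.0)):
    """Range-based variant; behaviour between the two ranges is not specified by the patent."""
    if first_range[0] <= light_value <= first_range[1]:
        return image_first, image_second
    if second_range[0] <= light_value <= second_range[1]:
        return image_second, image_first
    return image_first, image_second       # assumed fallback for values between the two ranges
```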
In step 204, the depth of field is determined based on the first shooting image and the second shooting image.
To facilitate the subsequent background blurring of the selected target shooting image, the depth of field needs to be determined here based on the first shooting image and the second shooting image, that is, the first shooting image and the second shooting image are compared with each other to determine the depth of field. In other words, after the target shooting image has been determined from the first shooting image and the second shooting image, the other shooting image actually plays an auxiliary computing role. For example, if the first shooting image is determined as the target shooting image, the second shooting image is actually used to assist in calculating the depth of field; or, if the second shooting image is determined as the target shooting image, the first shooting image is actually used to assist in calculating the depth of field.
Further, in practical implementations, determining the depth of field based on the first shooting image and the second shooting image may include: determining the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
It should be noted that the specific implementation of determining the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm may refer to the related art, and is not described in detail in the embodiments of the present disclosure.
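As one concrete (but non-limiting) possibility, a standard binocular stereo matcher such as OpenCV's semi-global block matching can be used; the matcher parameters, the focal length in pixels, and the baseline are placeholders, and rectification of the two views is omitted for brevity.

```python
import cv2
import numpy as np

def estimate_depth(image_first, image_second, focal_px=1000.0, baseline_m=0.012):
    """Estimate per-pixel depth (in metres) from the two shooting images."""
    # The two assemblies differ in resolution, so resize the second capture to match the first.
    h, w = image_first.shape[:2]
    image_second = cv2.resize(image_second, (w, h))

    gray_left = cv2.cvtColor(image_first, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(image_second, cv2.COLOR_BGR2GRAY)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(gray_left, gray_right).astype(np.float32) / 16.0  # fixed-point -> pixels
    disparity[disparity <= 0] = 0.1            # guard against invalid or zero disparities

    return focal_px * baseline_m / disparity   # depth = focal length * baseline / disparity
```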
In step 205, the target shooting image is blurred based on the depth of field to obtain the final output image, so as to implement image shooting.
After the depth of field is determined, the background of the determined main shooting image is blurred based on the depth of field, that is, the target shooting image is blurred based on the depth of field, so as to obtain the final output image.
For example, if the light intensity value of the environment in which the terminal is currently located is greater than the above preset intensity, that is, the light of the environment in which the terminal is currently located is strong, it can be determined through the above execution process that the selected target shooting image is A. Here, the terminal blurs the target shooting image based on the depth of field to obtain the final output image.
In this way, the terminal automatically switches the main camera assembly between the first camera assembly and the second camera assembly according to the light intensity of the external environment. This solves the problems that, in the background blurring mode, the definition of the shot image is poor when the light of the external environment is strong because the resolution of the camera assembly is not high enough, or the brightness of the shot picture is insufficient and the noise is high when the light of the external environment is weak, and thus improves the background blurring effect.
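One simple way to realise the blurring step is to keep pixels whose estimated depth lies inside an in-focus range and replace the rest with a Gaussian-blurred copy, as in the sketch below; the in-focus range and kernel size are placeholders, and a production implementation would typically blend the two layers more gradually.

```python
import cv2
import numpy as np

def blur_background(target, depth_m, near_m=0.5, far_m=2.0, ksize=21):
    """Blur every pixel of the target shooting image whose depth lies outside [near_m, far_m]."""
    blurred = cv2.GaussianBlur(target, (ksize, ksize), 0)
    in_focus = (depth_m >= near_m) & (depth_m <= far_m)   # boolean mask, same H x W as the image
    mask = in_focus.astype(np.uint8)[:, :, None]          # broadcast over the colour channels
    return target * mask + blurred * (1 - mask)           # sharp subject, blurred background
```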
It should be noted that the above steps 203 to 205 implement the step of determining the final output image based on the light intensity value, the first shooting image, and the second shooting image.
Further, after the final output image is obtained, the terminal may store the final output image locally. That is, for the user, after the terminal has performed the shooting operation on the object to be shot, the shooting image that can be found locally is the final output image; in other words, the above first shooting image and second shooting image are not visible to the user.
In the embodiments of the present disclosure, in the background blurring mode, in order to ensure that the captured image can adapt to the light intensity of the external environment, the light intensity value of the environment in which the terminal is located is determined. When a shooting instruction is received, it indicates that an object to be shot is about to be shot; at this time, the object to be shot is shot through the first camera assembly and the second camera assembly of the terminal respectively, so as to obtain a corresponding first shooting image and second shooting image. Afterwards, a final output image is determined based on the light intensity value, the first shooting image, and the second shooting image. In this way, two shooting images are obtained through the two camera assemblies, and the final output image is determined in combination with the determined light intensity value, so that the final output image can adapt to the light intensity of the external environment. This ensures that the quality of the final output image is not affected by ambient light and improves image shooting quality.
Fig. 3 is a block diagram of an image capturing device according to an exemplary embodiment. Referring to Fig. 3, the device includes a first determining module 310, a shooting module 320, and a second determining module 330.
The first determining module 310 is configured to determine, in the background blurring mode, the light intensity value of the environment in which the terminal is currently located.
The shooting module 320 is configured to, when a shooting instruction is received, shoot the object to be shot through the first camera assembly to obtain a first shooting image, and shoot the object to be shot through the second camera assembly to obtain a second shooting image.
The second determining module 330 is configured to determine a final output image based on the light intensity value, the first shooting image, and the second shooting image, so as to implement image shooting.
Optionally, the second determining module includes:
a selecting unit, configured to select a target shooting image from the first shooting image and the second shooting image based on the light intensity value, where the target shooting image refers to the image to be blurred;
a determining unit, configured to determine a depth of field based on the first shooting image and the second shooting image;
a processing unit, configured to blur the target shooting image based on the depth of field to obtain the final output image.
Optionally, the selecting unit is configured to:
determine the first shooting image as the target shooting image when the light intensity value is greater than a preset intensity;
determine the second shooting image as the target shooting image when the light intensity value is less than or equal to the preset intensity.
Optionally, the determining unit is configured to:
determine the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
Optionally, the first determining module is configured to:
detect the light intensity through a CMOS photosensitive device configured in the terminal, and determine, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
In the embodiments of the present disclosure, in the background blurring mode, in order to ensure that the captured image can adapt to the light intensity of the external environment, the light intensity value of the environment in which the terminal is located is determined. When a shooting instruction is received, it indicates that an object to be shot is about to be shot; at this time, the object to be shot is shot through the first camera assembly and the second camera assembly of the terminal respectively, so as to obtain a corresponding first shooting image and second shooting image. Afterwards, a final output image is determined based on the light intensity value, the first shooting image, and the second shooting image. In this way, two shooting images are obtained through the two camera assemblies, and the final output image is determined in combination with the determined light intensity value, so that the final output image can adapt to the light intensity of the external environment. This ensures that the quality of the final output image is not affected by ambient light and improves image shooting quality.
With regard to the device in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiments relating to the method, and will not be elaborated here.
Fig. 4 is a block diagram of an image capturing device 400 according to an exemplary embodiment. For example, the device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, a smart camera, or the like.
Referring to Fig. 4, the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communication, camera operation, and recording operation. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support the operation of the device 400. Examples of such data include instructions for any application or method operated on the device 400, contact data, phonebook data, messages, pictures, videos, and the like. The memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power component 406 provides power to the various components of the device 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC), which is configured to receive external audio signals when the device 400 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 further includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 may detect the open/closed state of the device 400 and the relative positioning of components, such as the display and the keypad of the device 400; the sensor component 414 may also detect a change in position of the device 400 or a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and a change in temperature of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the image capturing method provided by the embodiments shown in Fig. 1 or Fig. 2 above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is further provided, such as the memory 404 including instructions, which can be executed by the processor 420 of the device 400 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the image capturing method provided by the embodiment shown in Fig. 1 or Fig. 2 above.
A computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the image capturing method provided by the embodiment shown in Fig. 1 or Fig. 2 above.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structure described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image capturing method, applied to a terminal, characterized in that the terminal is configured with a first camera assembly and a second camera assembly, the resolution of the first camera assembly is higher than the resolution of the second camera assembly, and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly, the method comprising:
in a background blurring mode, determining a light intensity value of an environment in which the terminal is currently located;
when a shooting instruction is received, shooting an object to be shot through the first camera assembly to obtain a first shooting image, and shooting the object to be shot through the second camera assembly to obtain a second shooting image;
determining a final output image based on the light intensity value, the first shooting image, and the second shooting image, so as to implement image shooting.
2. The method according to claim 1, characterized in that determining the final output image based on the light intensity value, the first shooting image, and the second shooting image comprises:
selecting a target shooting image from the first shooting image and the second shooting image based on the light intensity value, the target shooting image referring to the image to be blurred;
determining a depth of field based on the first shooting image and the second shooting image;
blurring the target shooting image based on the depth of field to obtain the final output image.
3. The method according to claim 2, characterized in that selecting the target shooting image from the first shooting image and the second shooting image based on the light intensity value comprises:
when the light intensity value is greater than a preset intensity, determining the first shooting image as the target shooting image;
when the light intensity value is less than or equal to the preset intensity, determining the second shooting image as the target shooting image.
4. The method according to claim 2, characterized in that determining the depth of field based on the first shooting image and the second shooting image comprises:
determining the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
5. The method according to claim 1, characterized in that determining the light intensity value of the environment in which the terminal is currently located comprises:
detecting the light intensity through a CMOS photosensitive device configured in the terminal, and determining, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
6. An image capturing device, configured in a terminal, characterized in that the terminal is configured with a first camera assembly and a second camera assembly, the resolution of the first camera assembly is higher than the resolution of the second camera assembly, and the pixel size of the first camera assembly is smaller than the pixel size of the second camera assembly, the device comprising:
a first determining module, configured to determine, in a background blurring mode, a light intensity value of an environment in which the terminal is currently located;
a shooting module, configured to, when a shooting instruction is received, shoot an object to be shot through the first camera assembly to obtain a first shooting image, and shoot the object to be shot through the second camera assembly to obtain a second shooting image;
a second determining module, configured to determine a final output image based on the light intensity value, the first shooting image, and the second shooting image, so as to implement image shooting.
7. The device according to claim 6, characterized in that the second determining module comprises:
a selecting unit, configured to select a target shooting image from the first shooting image and the second shooting image based on the light intensity value, the target shooting image referring to the image to be blurred;
a determining unit, configured to determine a depth of field based on the first shooting image and the second shooting image;
a processing unit, configured to blur the target shooting image based on the depth of field to obtain the final output image.
8. The device according to claim 7, characterized in that the selecting unit is configured to:
determine the first shooting image as the target shooting image when the light intensity value is greater than a preset intensity;
determine the second shooting image as the target shooting image when the light intensity value is less than or equal to the preset intensity.
9. The device according to claim 7, characterized in that the determining unit is configured to:
determine the depth of field from the first shooting image and the second shooting image through a binocular stereo vision algorithm.
10. The device according to claim 6, characterized in that the first determining module is configured to:
detect the light intensity through a CMOS photosensitive device configured in the terminal, and determine, through the photosensitive device, the light intensity value corresponding to the detected light intensity.
11. An image capturing device, characterized in that the device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the steps of the method according to any one of claims 1-5.
12. A computer-readable storage medium having instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the steps of the method according to any one of claims 1-5.
CN201710908032.5A 2017-09-29 2017-09-29 Image shooting method, device and storage medium Active CN107707819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710908032.5A CN107707819B (en) 2017-09-29 2017-09-29 Image shooting method, device and storage medium


Publications (2)

Publication Number Publication Date
CN107707819A (en) 2018-02-16
CN107707819B (en) 2021-04-13

Family

ID=61175601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710908032.5A Active CN107707819B (en) 2017-09-29 2017-09-29 Image shooting method, device and storage medium

Country Status (1)

Country Link
CN (1) CN107707819B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300860A1 (en) * 2012-05-11 2013-11-14 Canon Kabushiki Kaisha Depth measurement apparatus, image pickup apparatus, depth measurement method, and depth measurement program
CN103888669A (en) * 2012-12-21 2014-06-25 辉达公司 Approach for camera control
US9007490B1 (en) * 2013-03-14 2015-04-14 Amazon Technologies, Inc. Approaches for creating high quality images
WO2016061757A1 (en) * 2014-10-22 2016-04-28 宇龙计算机通信科技(深圳)有限公司 Image generation method based on dual camera module and dual camera module
CN104410785A (en) * 2014-11-17 2015-03-11 联想(北京)有限公司 An information processing method and electronic device
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
CN105245775A (en) * 2015-09-25 2016-01-13 小米科技有限责任公司 Method and device for camera imaging, and mobile terminal
CN106603896A (en) * 2015-10-13 2017-04-26 三星电机株式会社 Camera module and method of manufacturing the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xiao Jinsheng et al.: "Background Blurring Display Based on Depth Information Extraction from Multi-Focus Images", Acta Automatica Sinica (自动化学报) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113206960A (en) * 2021-03-24 2021-08-03 上海闻泰电子科技有限公司 Photographing method, photographing apparatus, computer device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN107707819B (en) 2021-04-13

Similar Documents

Publication Publication Date Title
US10284773B2 (en) Method and apparatus for preventing photograph from being shielded
CN106454336B (en) The method and device and terminal that detection terminal camera is blocked
CN105245775B (en) camera imaging method, mobile terminal and device
CN108234873A (en) A kind of method and apparatus for obtaining image
CN104090735B (en) The projecting method and device of a kind of picture
CN104182313B (en) Be delayed the method and apparatus taken pictures
CN107155060A (en) Image processing method and device
CN105744133B (en) Video light compensation method and device
CN104506772A (en) Method and device for regulating shooting parameters
CN104967788A (en) Shooting method and shooting device
CN104216525B (en) Method and device for mode control of camera application
CN104035674B (en) Picture displaying method and device
CN107122693A (en) Two-dimensional code identification method and device
CN107426502A (en) Image pickup method and device, electronic equipment
CN107347136A (en) Photographic method, device and terminal device
CN107015648B (en) Picture processing method and device
CN107820006A (en) Control the method and device of camera shooting
CN107463052A (en) Shoot exposure method and device
CN108040204A (en) A kind of image capturing method based on multi-cam, device and storage medium
CN105631804A (en) Image processing method and device
CN107426489A (en) Processing method, device and terminal during shooting image
CN104243829A (en) Self-shooting method and self-shooting device
CN106210495A (en) Image capturing method and device
CN105528765A (en) Method and device for processing image
CN107566750A (en) Control method, device and the storage medium of flash lamp

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant