CN108200351A - Image pickup method, terminal and computer-readable medium - Google Patents

Image pickup method, terminal and computer-readable medium

Info

Publication number
CN108200351A
CN108200351A
Authority
CN
China
Prior art keywords
colour
skin
image
terminal
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201711392242.XA
Other languages
Chinese (zh)
Inventor
李逸超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201711392242.XA
Publication of CN108200351A
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/71 Circuitry for evaluating the brightness variation
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/743 Bracketing, i.e. taking a series of images with varying exposure conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

Embodiments of the invention disclose an image pickup method, a terminal and a computer-readable medium. The method includes: obtaining ambient light data and object colour data of an environment to be captured; determining a current shooting scene according to the ambient light data and/or the object colour data; if the current shooting scene is a preset shooting scene, setting at least two groups of exposure parameters according to the ambient light data and the object colour data, and shooting the environment to be captured according to the set exposure parameters; and synthesizing the captured images to obtain a target image. With the embodiments of the invention, an image in which both the object region and the background region have suitable brightness can be captured.

Description

Image pickup method, terminal and computer-readable medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image pickup method, a terminal and a computer-readable medium.
Background art
At present, in order to obtain an appropriate grey scale, or so-called target brightness level, under different lighting conditions and in different scenes, so that the captured video or image is neither too dark nor too bright, users generally shoot in automatic exposure mode. In automatic exposure mode, the camera automatically sets the shutter speed and the aperture value according to the exposure value of the scene measured by the metering system and the shutter-aperture exposure combinations preset by the manufacturer, thereby preventing over-exposure or under-exposure.
However, for a scene with a large contrast between light and dark, one that contains both very bright and very dark areas, automatic exposure can hardly take every part of the scene into account, and local over-exposure or under-exposure easily results. For example, when a user with a darker skin tone takes a selfie in daylight, the face region tends to be too bright; when several people are photographed together and their skin tones differ greatly (for example, a black person and a white person in the same photo), it is also difficult to accommodate everyone, and some face regions are likely to be too bright or too dark.
Summary of the invention
Embodiments of the present invention provide an image pickup method by which an image with suitable brightness in both the object region and the background region can be captured.
In a first aspect, an embodiment of the present invention provides an image pickup method, the method comprising:
obtaining ambient light data and object colour data of an environment to be captured;
determining a current shooting scene according to the ambient light data and/or the object colour data;
if the current shooting scene is a preset shooting scene, setting at least two groups of exposure parameters according to the ambient light data and the object colour data, and shooting the environment to be captured according to the set exposure parameters;
synthesizing the captured images to obtain a target image.
In a second aspect, an embodiment of the present invention provides a terminal, the terminal comprising units for performing the method of the first aspect.
In a third aspect, an embodiment of the present invention provides another terminal, the terminal comprising a processor, an input device, an output device and a memory which are connected to one another, wherein the memory is configured to store a computer program that supports the terminal in performing the above method, the computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect.
When detecting that the current shooting scene is a complex scene in which different objects differ greatly in skin brightness, or in which the brightness of the environment and that of the object skin colour differ greatly, the embodiments of the present invention set several groups of exposure parameters according to the specific ambient light data and object skin-colour data, capture several images according to the set exposure parameters, and then synthesize the captured images, so as to obtain a target image whose exposure accommodates different environments and different skin colours and in which the brightness of the object region and the background region is suitable.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flow diagram of an image pickup method provided by an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of another image pickup method provided by an embodiment of the present invention;
Fig. 3 is a schematic block diagram of a terminal provided by an embodiment of the present invention;
Fig. 4 is a schematic block diagram of another terminal provided by an embodiment of the present invention;
Fig. 5 is a schematic block diagram of another terminal provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be understood that the term "comprising", when used in this specification and the appended claims, indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once". Similarly, the phrase "if it is determined" or "if [the described condition or event] is received" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is received" or "in response to receiving [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, devices having a camera and a touch-sensitive surface (for example, a touch-screen display and/or a touchpad), such as a camera (for example, a digital camera or an SLR camera), a mobile phone, a laptop computer, a tablet computer or other portable devices. It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a camera and a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the discussion that follows, a terminal including a display, a camera and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a video player application.
The various applications that can be executed on the terminal may use at least one common physical user interface device such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within the corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
In order to obtain an appropriate grey scale, or so-called target brightness level, under different lighting conditions and in different scenes, so that the captured video or image is neither too dark nor too bright, the terminal needs to adjust the lens aperture, the sensor exposure time, the sensor analog gain and the sensor/Image Signal Processing (ISP) digital gain; this process is known as automatic exposure (Automatic Exposure). Specifically, there are three main automatic exposure modes: fully automatic exposure, aperture-priority exposure and shutter-priority exposure. In fully automatic exposure mode, the user does not need to adjust the shutter speed or the aperture value manually, but only needs to focus in the image preview interface of the terminal and press the shutter button; the terminal automatically sets the shutter speed and the aperture value according to the exposure value of the framed picture measured by a metering system such as a light sensor and the shutter-aperture exposure combinations preset by the manufacturer. In aperture-priority exposure mode, the user needs to set the aperture value manually, and the terminal then automatically sets the shutter speed according to the exposure value of the framed picture measured by the metering system such as the light sensor. Since a smaller aperture gives a larger depth of field, this mode is particularly suitable for shooting scenes that require a specific depth of field (for example, landscapes and portraits). In shutter-priority exposure mode, the user needs to set the shutter speed manually, and the terminal then automatically sets the aperture value according to the exposure value of the framed picture measured by the metering system such as the light sensor.
Although automatic exposure provides great convenience during shooting with a terminal and allows the user to easily obtain a video or image with suitable brightness, for a scene with a large contrast between light and dark, one containing both very bright and very dark areas, automatic exposure can hardly take every part of the scene into account, and local over-exposure or under-exposure easily results. For example, when a user with a darker skin tone takes a selfie in daylight, the face region tends to be too bright; when several people are photographed together and their skin tones differ greatly (for example, a black person and a white person in the same photo), it is also difficult to accommodate everyone, and some face regions are likely to be too bright or too dark.
To solve the above problems, embodiments of the present invention provide an image pickup method in which face detection is added to the shooting process and the exposure parameters of the terminal are adjusted according to the brightness of the facial skin colour, so that the brightness of the face becomes more suitable. Specifically, the terminal adds facial skin-colour recognition to the shooting process. When a complex scene is detected, the specific scene is determined according to a preset method; for that specific scene, the terminal captures several images with different exposure parameters respectively, and then synthesizes the several captured images to obtain an image whose exposure accommodates different skin colours and different scenes.
More specifically, after the user turns on the shooting function of the terminal, the terminal can display a normal preview picture on the screen while performing exposure parameter setting in the background. During exposure parameter setting, the terminal first obtains the ambient brightness value (i.e. the ambient light data) of the current environment (i.e. the environment to be captured) with its built-in light sensor. According to the obtained ambient brightness value and a preset environment classification rule, the terminal can determine the environment type of the current environment. The environment type may include, but is not limited to, daytime outdoor, daytime indoor, low light (night), and so on.
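As an illustrative sketch only (not part of the patent's disclosure), such an environment classification rule could be a simple threshold test on the measured brightness; the lux thresholds and type names below are assumptions chosen for illustration.

```python
def classify_environment(ambient_lux: float) -> str:
    """Map a measured ambient brightness value to an environment type.

    The thresholds are illustrative assumptions, not values taken from the patent.
    """
    if ambient_lux >= 10000:      # strong daylight
        return "daytime_outdoor"
    elif ambient_lux >= 300:      # typical indoor lighting
        return "daytime_indoor"
    else:                         # dim light / night
        return "low_light"
```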
Further, the terminal can obtain the facial skin-colour data in the preview picture. According to the obtained facial skin-colour data, a preset skin-colour classification rule and a preset skin-colour level division rule, the terminal can determine the skin-colour type and skin-colour level of each face in the preview picture. The skin-colour type may include, but is not limited to, white, yellow, black, and so on. In a specific embodiment, the skin-colour levels may, for example, comprise levels 1 to 15, where levels 1 to 5 correspond to the white skin-colour type, levels 6 to 10 correspond to the yellow skin-colour type, and levels 11 to 15 correspond to the black skin-colour type.
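A minimal sketch of the level-to-type division just given; how a level is derived from the measured skin-colour data is not specified in this passage, so only the stated 1–15 split is encoded.

```python
def skin_level_to_type(level: int) -> str:
    """Map a skin-colour level (1-15) to a skin-colour type, per the division above."""
    if 1 <= level <= 5:
        return "white"
    if 6 <= level <= 10:
        return "yellow"
    if 11 <= level <= 15:
        return "black"
    raise ValueError("skin-colour level must be in the range 1..15")
```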
Further, according to the obtained environment type, skin-colour type and skin-colour level, the terminal can determine whether the current shooting scene is a complex scene. It should be noted that shooting scenes can be divided into simple scenes and complex scenes. In a simple scene there is no area whose brightness differs greatly from the rest of the scene, and no local over-exposure or under-exposure occurs when the terminal shoots with automatic exposure. In a complex scene there are both very bright and very dark areas, and local over-exposure or under-exposure easily occurs when the terminal shoots with automatic exposure. Specifically, when the difference between the skin-colour data of the brightest skin and those of the darkest skin in the preview picture reaches a preset value, the terminal may determine that the current shooting scene is a complex scene. For example, if the preview picture contains both a white person and a black person, the terminal may determine that the current shooting scene is a complex scene. Alternatively, when the environment type and the skin-colour type satisfy a first preset relation, the terminal may determine that the current shooting scene is a complex scene; the terminal can determine that the environment type and the skin-colour type satisfy the first preset relation when the environment type is daytime and the skin-colour type is black, or when the environment type is low light and the skin-colour type is white.
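A sketch of the two complex-scene tests just described (the spread between the brightest and darkest skin, and the first preset relation); the level-gap threshold and the data representation are assumptions made for illustration.

```python
def is_complex_scene(env_type: str, skin_levels: list[int], skin_types: list[str],
                     level_gap_threshold: int = 5) -> bool:
    """Decide whether the current shooting scene is a complex scene.

    Test 1: the spread between the brightest and darkest skin-colour levels reaches a preset value.
    Test 2: the environment type and a skin-colour type satisfy the first preset relation
            (daytime with a black skin type, or low light with a white skin type).
    The threshold of 5 levels is an assumed value, not taken from the patent.
    """
    if skin_levels and max(skin_levels) - min(skin_levels) >= level_gap_threshold:
        return True
    if env_type in ("daytime_outdoor", "daytime_indoor") and "black" in skin_types:
        return True
    if env_type == "low_light" and "white" in skin_types:
        return True
    return False
```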
Further, when the current shooting scene is determined to be a complex scene, the terminal can set several (at least two) groups of exposure parameters for the current shooting scene. Specifically, the terminal can set different exposure parameters for objects with different skin-colour levels. For example, when the preview picture contains one object of the yellow skin-colour type and one object of the black skin-colour type, the terminal can set two different groups of exposure parameters for the two objects. The exposure parameters may include, but are not limited to, exposure time, aperture value, shutter speed, and so on. Further, the terminal can also set exposure parameters for the current environment. Specifically, when the environment type and the skin-colour type satisfy a second preset relation, the terminal can set a separate group of exposure parameters for the current environment, this group being different from every group of exposure parameters set for the objects; the terminal can determine that the environment type and the skin-colour type satisfy the second preset relation when the environment type is daytime and the skin-colour type of the objects is black, or when the environment type is low light and the skin-colour type of the objects is white. When the environment type and the skin-colour type do not satisfy the second preset relation, the terminal can set for the current environment a group of exposure parameters identical to one of the groups set for the objects. For example, when the environment type is daytime and the preview picture contains two objects of the yellow skin-colour type whose skin-colour levels differ, the terminal can set identical exposure parameters for the current environment and for the object with the higher skin-colour level.
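An illustrative sketch of how the exposure-parameter groups described above could be assembled; the data structures and the placeholder parameter values are assumptions, and a real implementation would derive the values from light metering rather than from the formulas used here.

```python
def build_exposure_groups(env_type: str, objects: list[dict]):
    """Assemble exposure-parameter groups for a complex scene.

    `objects` is a list of dicts such as {"skin_type": "yellow", "skin_level": 8}.
    One group is created per distinct skin-colour level; the environment either gets
    its own group (when the second preset relation holds) or reuses the group of the
    object with the higher skin-colour level.
    """
    groups = {}
    for obj in objects:
        level = obj["skin_level"]
        if level not in groups:
            # placeholder values only; real values would come from light metering
            groups[level] = {"exposure_time_ms": 10 + level, "aperture": 2.0, "shutter_s": 1 / 125}

    skin_types = {obj["skin_type"] for obj in objects}
    second_relation = (
        (env_type.startswith("daytime") and skin_types == {"black"})
        or (env_type == "low_light" and skin_types == {"white"})
    )
    if second_relation:
        env_group = {"exposure_time_ms": 5, "aperture": 2.8, "shutter_s": 1 / 250}  # separate group
    else:
        env_group = groups[max(groups)]  # reuse the group of the higher skin-colour level
    return groups, env_group
```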
Further, the terminal can quickly capture one image under each group of set exposure parameters, obtaining several captured images.
Further, the terminal can combine the several captured images into one image. Specifically, through object detection processing, the terminal can obtain the background region and the object region(s) (one or more) in each captured image; according to the relation between the exposure parameters and the objects and the environment, the terminal can determine the interpolation weights of the background region and of each object region in each captured image, and superimpose the several captured images according to the determined interpolation weights to obtain the final target image. Optionally, the terminal can also display the target image on the screen and store it. The image pickup method and the terminal provided by the embodiments of the present invention are described in detail below with reference to Fig. 1 to Fig. 5.
Referring first to Fig. 1, which is a schematic flow diagram of an image pickup method provided by an embodiment of the present invention. In a specific implementation, each step of the method can be performed by a terminal. The image pickup method shown in Fig. 1 may include the following steps.
S11: obtaining ambient light data and object colour data of an environment to be captured.
In a specific embodiment, the terminal can obtain the ambient light data of the environment to be captured with a built-in light sensor. The ambient light data may, for example, be an ambient brightness value.
It should be noted that the objects described in the embodiments of the present invention specifically refer to human bodies. In a specific embodiment, the object colour data may specifically be object skin-colour data. In a specific implementation, a preview picture of the environment to be captured can be displayed on the terminal screen. Specifically, the terminal can perform face recognition processing on the preview picture to determine the face region(s) in the preview picture. Further, the terminal can determine the facial skin-colour data of a face region according to the colour data of the pixels of that face region.
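A sketch of deriving skin-colour data from a detected face region by averaging its pixel values; the bounding-box representation and the averaging itself are assumptions, since the passage only states that the skin-colour data are determined from the pixel colour data of the face region.

```python
import numpy as np

def face_skin_colour_data(frame: np.ndarray, face_box: tuple[int, int, int, int]) -> np.ndarray:
    """Estimate skin-colour data for one face region of an RGB preview frame.

    `face_box` is (x, y, width, height) from a face detector. Returning the mean RGB
    value of the region is an illustrative choice, not the patent's prescribed method.
    """
    x, y, w, h = face_box
    region = frame[y:y + h, x:x + w]
    return region.reshape(-1, 3).mean(axis=0)
```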
Optionally, the terminal can also perform object detection processing on the preview picture (human body detection processing in the embodiments of the present invention) to determine the object region(s) (human body region(s)) in the preview picture. Further, the terminal can also use the facial skin-colour data as the object skin-colour data.
S12: determining a current shooting scene according to the ambient light data and/or the object colour data.
Specifically, when the object colour data are object skin-colour data, determining the current shooting scene according to the ambient light data and/or the object colour data may specifically include: determining the environment type of the environment to be captured according to the ambient light data; determining the skin-colour attribute of each object in the environment to be captured according to the object skin-colour data; and determining the current shooting scene according to the environment type and the skin-colour attribute.
The environment type may include, but is not limited to, daytime outdoor, daytime indoor, low light (night), and so on. The skin-colour attribute includes a skin-colour type and/or a skin-colour level. The skin-colour type may include, but is not limited to, white, yellow, black, and so on. The skin-colour level can be used to characterize the brightness of the skin colour. It can be understood that different skin-colour types have different skin-colour levels: since a given skin-colour type also varies in brightness, each skin-colour type has its corresponding skin-colour levels. In a specific embodiment, the skin-colour levels may, for example, comprise levels 1 to 15, where levels 1 to 5 correspond to the white skin-colour type, levels 6 to 10 correspond to the yellow skin-colour type, and levels 11 to 15 correspond to the black skin-colour type. It should be noted that a higher skin-colour level indicates a brighter skin colour.
It should be noted that shooting scenes can be divided into simple scenes and complex scenes. In a simple scene there is no area whose brightness differs greatly from the rest of the scene, and no local over-exposure or under-exposure occurs when the terminal shoots with automatic exposure. In a complex scene there are both very bright and very dark areas, and local over-exposure or under-exposure easily occurs when the terminal shoots with automatic exposure.
In a specific embodiment, when objects with different skin-colour levels are present in the environment to be captured, the terminal may determine that the current shooting scene is a complex scene. For example, if the environment to be captured contains both a white person and a black person, the terminal may determine that the current shooting scene is a complex scene. As another example, if the environment to be captured contains two objects of the yellow skin-colour type whose skin-colour levels are level 6 and level 10 respectively, the terminal may determine that the current shooting scene is a complex scene.
In a specific embodiment, when the environment type and the skin-colour type satisfy a first preset relation, the terminal may determine that the current shooting scene is a complex scene. The terminal can determine that the environment type and the skin-colour type satisfy the first preset relation when the brightness of the environment to be captured and the skin brightness of one of the objects differ greatly. For example, when the environment type is daytime and the environment to be captured contains an object of the black skin-colour type, or when the environment type is low light and the environment to be captured contains an object of the white skin-colour type, the terminal can determine that the environment type and the skin-colour type satisfy the first preset relation.
S13: if the current shooting scene is a preset shooting scene, setting at least two groups of exposure parameters according to the ambient light data and the object colour data, and shooting the environment to be captured according to the set exposure parameters.
The preset shooting scene is the above-mentioned complex scene. It can be understood that the terminal needs to set several (at least two) groups of exposure parameters only when the current shooting scene is a complex scene.
Specifically, the terminal can set different exposure parameters for objects with different skin-colour levels. For example, when the environment to be captured contains two objects whose skin-colour types are yellow and black respectively, the terminal can set two different groups of exposure parameters for the two objects. The exposure parameters may include, but are not limited to, exposure time, aperture value, shutter speed, and so on.
Further, the terminal can also set exposure parameters for the environment to be captured. Specifically, when the environment type and the skin-colour type satisfy a second preset relation, the terminal can set a separate group of exposure parameters for the environment to be captured, this group being different from every group of exposure parameters set for the objects. The terminal can determine that the environment type and the skin-colour type satisfy the second preset relation when the brightness of the environment to be captured and the skin brightness of all the objects differ greatly. For example, when the environment type is daytime and the skin-colour type of all objects in the environment to be captured is black, or when the environment type is low light and the skin-colour type of all objects in the environment to be captured is white, the terminal can determine that the environment type and the skin-colour type satisfy the second preset relation. When the environment type and the skin-colour type do not satisfy the second preset relation, the terminal can set for the current environment a group of exposure parameters identical to one of the groups set for the objects. For example, when the environment type is daytime and the environment to be captured contains in total two objects of the yellow skin-colour type whose skin-colour levels differ, the terminal can set identical exposure parameters for the current environment and for the object with the higher skin-colour level.
Further, the terminal can quickly capture one image under each group of set exposure parameters, obtaining several captured images. Optionally, the terminal can also store the several captured images.
S14: synthesizing the captured images to obtain a target image.
Specifically, synthesizing the captured images to obtain the target image may specifically include: performing object detection processing on the captured images to obtain a background region and at least one object region; determining the interpolation weights of the background region and of each object region in each captured image; and superimposing the captured images according to the determined interpolation weights to obtain the target image. Further, the terminal can also display the target image on the screen and store it.
The object detection processing may, for example, be human body detection processing.
It can be understood that each captured image corresponds to a unique group of exposure parameters. It should be noted that, for a given captured image, the exposure parameters may have been set only for an object region (i.e. exposure parameters set for one object, or for several objects with the same skin-colour level), only for the background region (i.e. exposure parameters set for the environment to be captured), or for both an object region and the background region. Thus, for each captured image, the exposure parameters correspond to an object region and/or the background region.
Specifically, for a given captured image, the terminal can set the interpolation weight of the region corresponding to its exposure parameters (including an object region and/or the background region) to a first weight value, and set the interpolation weights of the other regions in that captured image to a second weight value. For example, suppose the environment to be captured contains two objects, a first object and a second object, whose skin-colour levels are the same, while the brightness of the environment to be captured differs greatly from that of the two objects. The terminal can then set a first exposure parameter for the first object and the second object, set a second exposure parameter for the environment to be captured, and shoot the environment to be captured under the first exposure parameter and under the second exposure parameter respectively to obtain a first captured image and a second captured image. After performing object detection processing on the first captured image and/or the second captured image, the terminal can obtain the background region, a first object region and a second object region. For the first captured image, the first shooting parameter corresponds to the first object region and the second object region, so the terminal can set the interpolation weights of the first object region and the second object region to the first weight value, and set the interpolation weight of the background region to the second weight value. For the second captured image, the second shooting parameter corresponds to the background region, so the terminal can set the interpolation weight of the background region to the first weight value, and set the interpolation weights of the first object region and the second object region to the second weight value.
In a specific embodiment, the first weight value is 1 and the second weight value is 0.
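A minimal sketch of the weighted superposition with the binary weights just described; the per-pixel weight maps and the normalization step are assumptions about how the superposition could be realized on aligned frames.

```python
import numpy as np

def superimpose(images: list[np.ndarray], weight_maps: list[np.ndarray]) -> np.ndarray:
    """Superimpose captured images according to per-pixel interpolation weights.

    `weight_maps[i]` holds the interpolation weight of every pixel of `images[i]`
    (1 in the regions its exposure parameters were set for, 0 elsewhere, per the
    specific embodiment above). Weights are normalized so that they sum to 1 per pixel.
    """
    weights = np.stack(weight_maps).astype(np.float32)            # (N, H, W)
    weights /= np.clip(weights.sum(axis=0, keepdims=True), 1e-6, None)
    stack = np.stack(images).astype(np.float32)                   # (N, H, W, 3)
    target = (stack * weights[..., None]).sum(axis=0)
    return target.astype(np.uint8)
```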
In the embodiments of the present invention, when detecting that the current shooting scene is a complex scene in which different objects differ greatly in skin brightness, or in which the brightness of the environment and that of the object skin colour differ greatly, the terminal sets several groups of exposure parameters according to the specific ambient light data and object skin-colour data, captures several images according to the set exposure parameters, and then synthesizes the several captured images, so as to obtain a target image whose exposure accommodates different environments and different skin colours and in which the brightness of the object region and the background region is suitable.
Referring to Fig. 2, which is a schematic flow diagram of another image pickup method provided by an embodiment of the present invention. In a specific implementation, each step of the method can be performed by a terminal. The image pickup method shown in Fig. 2 may include the following steps.
S21: obtaining ambient light data and object colour data of an environment to be captured.
The ambient light data may, for example, be an ambient brightness value, and the object colour data may specifically be object skin-colour data.
It should be noted that, for the specific technical details of step S21, reference may be made to the description of step S11 shown in Fig. 1, which is not repeated here.
S22: determining a current shooting scene according to the ambient light data and/or the object colour data.
Shooting scenes can be divided into simple scenes and complex scenes. In a simple scene there is no area whose brightness differs greatly from the rest of the scene, and no local over-exposure or under-exposure occurs when the terminal shoots with automatic exposure. In a complex scene there are both very bright and very dark areas, and local over-exposure or under-exposure easily occurs when the terminal shoots with automatic exposure.
It should be noted that, for the specific technical details of step S22, reference may be made to the description of step S12 shown in Fig. 1, which is not repeated here.
S23: if the current shooting scene is a preset shooting scene, setting a first exposure parameter set for the objects in the environment to be captured.
The preset shooting scene is the above-mentioned complex scene.
The number of exposure parameters in the first exposure parameter set is determined according to the skin-colour levels. Specifically, the terminal can set different exposure parameters for objects with different skin-colour levels. For example, when the environment to be captured contains two objects whose skin-colour types are yellow and black respectively, the terminal can set two different groups of exposure parameters for the two objects. The exposure parameters may include, but are not limited to, exposure time, aperture value, shutter speed, and so on.
S24: judging whether the environment type and the skin-colour type satisfy a second preset relation.
The terminal can determine that the environment type and the skin-colour type satisfy the second preset relation when the brightness of the environment to be captured and the skin brightness of all the objects differ greatly; otherwise, it can determine that the environment type and the skin-colour type do not satisfy the second preset relation. For example, when the environment type is daytime and the skin-colour type of all objects in the environment to be captured is black, or when the environment type is low light and the skin-colour type of all objects in the environment to be captured is white, the terminal can determine that the environment type and the skin-colour type satisfy the second preset relation.
When the environment type and the skin-colour type satisfy the second preset relation, the terminal can perform step S25; when the environment type and the skin-colour type do not satisfy the second preset relation, the terminal can perform step S26.
S25: setting a second exposure parameter for the environment type.
The second exposure parameter is different from every exposure parameter in the first exposure parameter set.
S26: setting a third exposure parameter for the environment type.
The third exposure parameter is identical to one of the exposure parameters in the first exposure parameter set. For example, when the environment type is daytime and the environment to be captured contains in total two objects of the yellow skin-colour type whose skin-colour levels differ, the terminal can set identical exposure parameters for the environment type and for the object with the higher skin-colour level.
S27: shooting the environment to be captured according to the set exposure parameters, and synthesizing the captured images to obtain a target image.
In a specific embodiment, synthesizing the captured images to obtain the target image may specifically include: performing object detection processing on the captured images to obtain a background region and at least one object region; determining the interpolation weights of the background region and of each object region in each captured image; and superimposing the captured images according to the determined interpolation weights to obtain the target image.
The object detection processing may, for example, be human body detection processing.
It can be understood that each captured image corresponds to a unique group of exposure parameters. It should be noted that, for a given captured image, the exposure parameters may have been set only for an object region (i.e. exposure parameters set for one object, or for several objects with the same skin-colour level), only for the background region (i.e. exposure parameters set for the environment to be captured), or for both an object region and the background region. Thus, for each captured image, the exposure parameters correspond to an object region and/or the background region.
Specifically, for a given captured image, the terminal can set the interpolation weight of the region corresponding to its exposure parameters (including an object region and/or the background region) to a first weight value, and set the interpolation weights of the other regions in that captured image to a second weight value. For example, suppose the environment to be captured contains two objects, a first object and a second object, whose skin-colour levels are the same, while the brightness of the environment to be captured differs greatly from that of the two objects. The terminal can then set a first exposure parameter for the first object and the second object, set a second exposure parameter for the environment to be captured, and shoot the environment to be captured under the first exposure parameter and under the second exposure parameter respectively to obtain a first captured image and a second captured image. After performing object detection processing on the first captured image and/or the second captured image, the terminal can obtain the background region, a first object region and a second object region. For the first captured image, the first shooting parameter corresponds to the first object region and the second object region, so the terminal can set the interpolation weights of the first object region and the second object region to the first weight value, and set the interpolation weight of the background region to the second weight value. For the second captured image, the second shooting parameter corresponds to the background region, so the terminal can set the interpolation weight of the background region to the first weight value, and set the interpolation weights of the first object region and the second object region to the second weight value.
In a specific embodiment, the first weight value is 1 and the second weight value is 0.
In another specific embodiment, synthesizing the captured images to obtain the target image may specifically include: performing object detection processing on the captured images to obtain a background region and at least one object region; determining the effective region in each captured image, the effective region including the background region and/or an object region; and stitching the determined effective regions to obtain the target image.
Specifically, for a given captured image, the terminal can use the region corresponding to its exposure parameters (including an object region and/or the background region) as the effective region. For example, suppose the environment to be captured contains one object and the brightness of the environment to be captured differs greatly from that of the object. The terminal can then set a third exposure parameter for the object and a fourth exposure parameter for the environment to be captured, and shoot the environment to be captured under the third exposure parameter and under the fourth exposure parameter respectively to obtain a third captured image and a fourth captured image. After performing object detection processing on the third captured image and/or the fourth captured image, the terminal can obtain the background region and the object region. For the third captured image, the third shooting parameter corresponds to the object region, so the terminal can determine the object region as the effective region of the third captured image. For the fourth captured image, the fourth shooting parameter corresponds to the background region, so the terminal can determine the background region as the effective region of the fourth captured image. Further, the terminal can stitch the effective region of the third captured image and the effective region of the fourth captured image to obtain the target image.
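A sketch of the effective-region stitching variant, assuming the effective regions are represented as boolean masks over aligned frames of equal size; the mask representation and the fallback behaviour are assumptions made for illustration.

```python
import numpy as np

def stitch_effective_regions(images: list[np.ndarray], effective_masks: list[np.ndarray]) -> np.ndarray:
    """Stitch the effective region of each captured image into one target image.

    `effective_masks[i]` is a boolean (H, W) mask marking the effective region of
    `images[i]` (the region its exposure parameters were set for). Pixels covered by
    several masks take the value of the last image that covers them; uncovered pixels
    keep the values of the first image as a fallback.
    """
    target = images[0].copy()
    for image, mask in zip(images, effective_masks):
        target[mask] = image[mask]
    return target
```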
Further, the terminal can also display the target image on the screen and store it.
In the embodiments of the present invention, when detecting that the current shooting scene is a complex scene in which different objects differ greatly in skin brightness, or in which the brightness of the environment and that of the object skin colour differ greatly, the terminal sets several groups of exposure parameters according to the specific ambient light data and object skin-colour data, captures several images according to the set exposure parameters, and then synthesizes the several captured images, so as to obtain a target image whose exposure accommodates different environments and different skin colours and in which the brightness of the object region and the background region is suitable.
An embodiment of the present invention further provides a terminal, the terminal comprising units for performing the method shown in Fig. 1. Specifically, referring to Fig. 3, which is a schematic block diagram of a terminal provided by an embodiment of the present invention. The terminal shown in Fig. 3 may include: an obtaining unit 31, a determining unit 32, a setting unit 33, a shooting unit 34 and a synthesizing unit 35, wherein:
the obtaining unit 31 is configured to obtain ambient light data and object colour data of an environment to be captured;
the determining unit 32 is configured to determine a current shooting scene according to the ambient light data and/or the object colour data obtained by the obtaining unit 31;
the setting unit 33 is configured to, when the current shooting scene determined by the determining unit 32 is a preset shooting scene, set at least two groups of exposure parameters according to the ambient light data and the object colour data;
the shooting unit 34 is configured to shoot the environment to be captured according to the exposure parameters set by the setting unit 33; and
the synthesizing unit 35 is configured to synthesize the images captured by the shooting unit 34 to obtain a target image.
It should be noted that, for the specific working procedure of the terminal provided by this embodiment of the present invention, reference may be made to the method flow provided by the embodiments of the present invention, which is not repeated here.
In the embodiments of the present invention, when detecting that the current shooting scene is a complex scene in which different objects differ greatly in skin brightness, or in which the brightness of the environment and that of the object skin colour differ greatly, the terminal sets several groups of exposure parameters according to the specific ambient light data and object skin-colour data, captures several images according to the set exposure parameters, and then synthesizes the several captured images, so as to obtain a target image that accommodates different environments and different skin colours and in which the brightness of the object region and the background region is suitable.
An embodiment of the present invention further provides another terminal, the terminal comprising units for performing the method shown in Fig. 2. Specifically, referring to Fig. 4, which is a schematic block diagram of another terminal provided by an embodiment of the present invention. The terminal shown in Fig. 4 may include: an obtaining unit 41, a first determining unit 42, a setting unit 43, a shooting unit 44 and a synthesizing unit 45, wherein:
the obtaining unit 41 is configured to obtain ambient light data and object colour data of an environment to be captured;
the first determining unit 42 is configured to determine a current shooting scene according to the ambient light data and/or the object colour data obtained by the obtaining unit 41.
Optionally, the object colour data include object skin-colour data. When determining the current shooting scene according to the ambient light data and/or the object colour data obtained by the obtaining unit 41, the first determining unit 42 is specifically configured to: determine the environment type of the environment to be captured according to the ambient light data obtained by the obtaining unit 41; determine the skin-colour attribute of each object in the environment to be captured according to the object skin-colour data obtained by the obtaining unit 41; and determine the current shooting scene according to the environment type and the skin-colour attribute.
Still optionally, when determining the current shooting scene according to the environment type and the skin-colour attribute, the first determining unit 42 is specifically configured to determine that the current shooting scene is a preset shooting scene when the environment type and the skin-colour type satisfy a first preset relation.
Optionally, the environment to be captured includes at least two objects, and the object colour data include object skin-colour data. When determining the current shooting scene according to the ambient light data and/or the object colour data obtained by the obtaining unit 41, the first determining unit 42 is specifically configured to determine that the current shooting scene is a preset shooting scene when the difference between the maximum value and the minimum value of the object skin-colour data exceeds a preset value.
The setting unit 43 is configured to, when the current shooting scene determined by the first determining unit 42 is a preset shooting scene, set at least two groups of exposure parameters according to the ambient light data and the object colour data obtained by the obtaining unit 41.
Optionally, the skin-colour attribute includes a skin-colour type and a skin-colour level. When setting at least two groups of exposure parameters according to the ambient light data and the object colour data obtained by the obtaining unit 41, the setting unit 43 is specifically configured to: set a first exposure parameter set for the objects in the environment to be captured, the number of exposure parameters in the first exposure parameter set being determined according to the skin-colour levels; when the environment type and the skin-colour type satisfy a second preset relation, set a second exposure parameter for the environment type, the second exposure parameter being different from every exposure parameter in the first exposure parameter set; and when the environment type and the skin-colour type do not satisfy the second preset relation, set a third exposure parameter for the environment type, the third exposure parameter being identical to one of the exposure parameters in the first exposure parameter set.
The shooting unit 44 is configured to shoot the environment to be captured according to the exposure parameters set by the setting unit 43.
The synthesizing unit 45 is configured to synthesize the images captured by the shooting unit 44 to obtain a target image.
Optionally, the synthesizing unit 45 may specifically include a detecting unit 451, a second determining unit 452 and a superimposing unit 453, wherein:
the detecting unit 451 is configured to perform object detection processing on the captured images to obtain a background region and at least one object region;
the second determining unit 452 is configured to determine the interpolation weights of the background region and of each object region in each captured image; and
the superimposing unit 453 is configured to superimpose the captured images according to the interpolation weights determined by the second determining unit 452 to obtain the target image.
Optionally, the synthesizing unit 45 may also specifically include a third determining unit 454 and a stitching unit 455, wherein:
the third determining unit 454 is configured to determine the effective region in each captured image, the effective region including the background region and/or an object region; and
the stitching unit 455 is configured to stitch the effective regions determined by the third determining unit 454 to obtain the target image.
It should be noted that, for the specific working procedure of the terminal provided by this embodiment of the present invention, reference may be made to the method flow provided by the embodiments of the present invention, which is not repeated here.
In the embodiments of the present invention, when detecting that the current shooting scene is a complex scene in which different objects differ greatly in skin brightness, or in which the brightness of the environment and that of the object skin colour differ greatly, the terminal sets several groups of exposure parameters according to the specific ambient light data and object skin-colour data, captures several images according to the set exposure parameters, and then synthesizes the several captured images, so as to obtain a target image whose exposure accommodates different environments and different skin colours and in which the brightness of the object region and the background region is suitable.
Referring to Fig. 5, which is a schematic block diagram of another terminal provided by an embodiment of the present invention. The terminal in this embodiment, as shown in Fig. 5, may include: one or more processors 51, one or more input devices 52, one or more output devices 53 and a memory 54. The processor 51, the input device 52, the output device 53 and the memory 54 are connected via a bus 55. The memory 54 is configured to store a computer program, the computer program comprising program instructions.
Specifically, the processor 51 is configured to invoke the program instructions to perform:
obtaining ambient light data and object colour data of an environment to be captured;
determining a current shooting scene according to the ambient light data and/or the object colour data;
if the current shooting scene is a preset shooting scene, setting at least two groups of exposure parameters according to the ambient light data and the object colour data, and shooting the environment to be captured according to the set exposure parameters;
synthesizing the captured images to obtain a target image.
Optionally, the object colour data includes object skin colour data. The processor 51 is configured to call the program instructions to specifically perform the following when determining the current shooting scene according to the ambient light data and/or the object colour data:
determining an environment type of the environment to be captured according to the ambient light data;
determining a skin colour attribute of each object in the environment to be captured according to the object skin colour data; and
determining the current shooting scene according to the environment type and the skin colour attribute.
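A minimal Python sketch of this scene decision is given below. The lux threshold, the environment types and the particular combination taken as the "first preset relationship" (a dark environment paired with a light skin colour type) are illustrative assumptions, not values from the disclosure.

```python
def determine_scene(ambient_lux, skin_colour_types,
                    dark_threshold_lux=50, light_skin_types=("light", "fair")):
    """Decide whether the current shooting scene is the preset (complex) scene.

    ambient_lux:       ambient brightness reported by the light sensor.
    skin_colour_types: skin colour type determined for each detected object.
    """
    environment_type = "dark" if ambient_lux < dark_threshold_lux else "bright"
    # Assumed first preset relationship: dark environment plus at least one light skin tone.
    relation_met = environment_type == "dark" and any(
        t in light_skin_types for t in skin_colour_types)
    return "preset" if relation_met else "normal"
```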
Optionally, the skin colour attribute includes a skin colour type. The processor 51 is configured to call the program instructions to specifically perform the following when determining the current shooting scene according to the environment type and the skin colour attribute:
when the environment type and the skin colour type satisfy a first preset relationship, determining that the current shooting scene is the preset shooting scene.
Optionally, the environment to be captured includes at least two objects, and the object colour data includes object skin colour data. The processor 51 is configured to call the program instructions to specifically perform the following when determining the current shooting scene according to the ambient light data and/or the object colour data:
when the difference between the maximum value and the minimum value of the object skin colour data exceeds a preset value, determining that the current shooting scene is the preset shooting scene.
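The check on the spread of the object skin colour data can be written in a few lines; the preset value of 60 and the use of 0-255 skin-region luminance as the skin colour measure below are our assumptions.

```python
def differs_too_much(object_skin_values, preset_value=60):
    """True when the detected objects' skin colour values (e.g. mean skin-region
    luminance on a 0-255 scale) span more than the preset value."""
    return max(object_skin_values) - min(object_skin_values) > preset_value
```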
Optionally, the skin colour attribute includes a skin colour type and a skin colour rank. The processor 51 is configured to call the program instructions to specifically perform the following when setting the at least two groups of exposure parameters according to the ambient light data and the object colour data:
setting a first exposure parameter set for the objects in the environment to be captured, where the number of exposure parameters in the first exposure parameter set is determined according to the skin colour ranks;
when the environment type and the skin colour type satisfy a second preset relationship, setting a second exposure parameter for the environment type, where the second exposure parameter differs from every exposure parameter in the first exposure parameter set; and
when the environment type and the skin colour type do not satisfy the second preset relationship, setting a third exposure parameter for the environment type, where the third exposure parameter is identical to one of the exposure parameters in the first exposure parameter set.
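One way this parameter-group construction might look in code is sketched below. The exposure values, the step size and the flag standing in for the "second preset relationship" are placeholders; the disclosure only constrains how many parameters are derived (one per skin colour rank) and whether the environment parameter is distinct from, or identical to, a member of the first set.

```python
def build_exposure_parameters(skin_colour_ranks, second_relation_met,
                              base_ev=0.0, step_ev=0.5, env_ev=1.5):
    """Return the list of exposure values (EV offsets) to shoot with.

    skin_colour_ranks:   skin colour rank of each detected object (integers).
    second_relation_met: whether the environment type and the skin colour type
                         satisfy the (assumed) second preset relationship.
    """
    # First exposure parameter set: one entry per distinct skin colour rank.
    first_set = [base_ev + step_ev * rank for rank in sorted(set(skin_colour_ranks))]
    if second_relation_met:
        # Second exposure parameter: must differ from every entry of the first set.
        environment_ev = env_ev
        while environment_ev in first_set:
            environment_ev += step_ev
        return first_set + [environment_ev]
    # Third exposure parameter: identical to one entry of the first set,
    # so no additional capture is needed for the environment.
    return first_set
```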
Optionally, the processor 51 is configured to call the program instructions to specifically perform the following when synthesizing the captured images to obtain the target image:
performing object detection processing on the captured images to obtain a background region and at least one object region;
determining interpolation weights for the background region and each object region in each captured image; and
superimposing the captured images according to the determined interpolation weights to obtain the target image.
Optionally, the processor 51 is configured to call the program instructions to specifically perform the following when synthesizing the captured images to obtain the target image:
performing object detection processing on the captured images to obtain a background region and at least one object region;
determining a valid region in each captured image, where the valid region includes the background region and/or an object region; and
splicing the determined valid regions to obtain the target image.
It should be appreciated that, in the embodiments of the present invention, the processor 51 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 52 may include a trackpad, a fingerprint sensor (for acquiring a user's fingerprint information and the orientation of the fingerprint), a microphone and the like, and the output device 53 may include a display (such as a liquid crystal display (Liquid Crystal Display, LCD)), a loudspeaker and the like.
The memory 54 may include a read-only memory (Read-Only Memory, ROM) and a random access memory (Random Access Memory, RAM), and provides the computer program and data to the processor 51. A part of the memory 54 may also include a non-volatile random access memory. For example, the memory 54 may also store device type information.
In a specific implementation, the processor 51, the input device 52 and the output device 53 described in the embodiments of the present invention may perform the implementations of the image pickup method of the present application shown in Fig. 1 or Fig. 2, and details are not repeated here.
In the embodiments of the present invention, the processor 51 calls the program instructions stored in the memory 54 and, upon detecting that the current shooting scene is a complex scene in which different objects differ greatly in skin colour brightness, or in which the brightness of the environment differs greatly from that of the object skin colour, sets multiple groups of exposure parameters according to the specific ambient light data and object skin colour data, shoots according to the set exposure parameters to obtain multiple captured images, and then synthesizes the multiple captured images, thereby obtaining a target image whose exposure suits the brightness of the object regions and background regions of different environments and different skin colours.
An embodiment of the present invention further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to perform the image pickup method of the present application shown in Fig. 1 or Fig. 2.
The computer-readable storage medium may be an internal storage unit of the terminal described in any of the foregoing embodiments, for example a hard disk or a memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, for example a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) equipped on the terminal. Further, the computer-readable storage medium may include both an internal storage unit of the terminal and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented in electronic hardware, computer software or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; the division of the units is merely a division of logical functions, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the terminal and the units described above may refer to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several computer programs used to cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above description is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image pickup method, characterized by comprising:
obtaining ambient light data and object colour data of an environment to be captured;
determining a current shooting scene according to the ambient light data and/or the object colour data;
if the current shooting scene is a preset shooting scene, setting at least two groups of exposure parameters according to the ambient light data and the object colour data, and shooting the environment to be captured according to the set exposure parameters; and
synthesizing the captured images to obtain a target image.
2. The method according to claim 1, characterized in that the object colour data comprises object skin colour data, and the determining a current shooting scene according to the ambient light data and/or the object colour data comprises:
determining an environment type of the environment to be captured according to the ambient light data;
determining a skin colour attribute of each object in the environment to be captured according to the object skin colour data; and
determining the current shooting scene according to the environment type and the skin colour attribute.
3. The method according to claim 2, characterized in that the skin colour attribute comprises a skin colour type, and the determining the current shooting scene according to the environment type and the skin colour attribute comprises:
when the environment type and the skin colour type satisfy a first preset relationship, determining that the current shooting scene is the preset shooting scene.
4. The method according to claim 1, characterized in that the environment to be captured comprises at least two objects, the object colour data comprises object skin colour data, and the determining a current shooting scene according to the ambient light data and/or the object colour data comprises:
when the difference between the maximum value and the minimum value of the object skin colour data exceeds a preset value, determining that the current shooting scene is the preset shooting scene.
5. The method according to claim 2, characterized in that the skin colour attribute comprises a skin colour type and a skin colour rank, and the setting at least two groups of exposure parameters according to the ambient light data and the object colour data comprises:
setting a first exposure parameter set for the objects in the environment to be captured, wherein the number of exposure parameters in the first exposure parameter set is determined according to the skin colour ranks;
when the environment type and the skin colour type satisfy a second preset relationship, setting a second exposure parameter for the environment type, wherein the second exposure parameter differs from every exposure parameter in the first exposure parameter set; and
when the environment type and the skin colour type do not satisfy the second preset relationship, setting a third exposure parameter for the environment type, wherein the third exposure parameter is identical to one of the exposure parameters in the first exposure parameter set.
6. The method according to claim 1, characterized in that the synthesizing the captured images to obtain a target image comprises:
performing object detection processing on the captured images to obtain a background region and at least one object region;
determining interpolation weights for the background region and each object region in each captured image; and
superimposing the captured images according to the determined interpolation weights to obtain the target image.
7. The method according to claim 1, characterized in that the synthesizing the captured images to obtain a target image comprises:
performing object detection processing on the captured images to obtain a background region and at least one object region;
determining a valid region in each captured image, wherein the valid region comprises the background region and/or an object region; and
splicing the determined valid regions to obtain the target image.
8. A terminal, characterized by comprising units configured to perform the method according to any one of claims 1-7.
9. A terminal, characterized by comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being connected to one another, wherein the memory is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to perform the method according to any one of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program, the computer program comprises program instructions, and the program instructions, when executed by a processor, cause the processor to perform the method according to any one of claims 1-7.
CN201711392242.XA 2017-12-21 2017-12-21 Image pickup method, terminal and computer-readable medium Withdrawn CN108200351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711392242.XA CN108200351A (en) 2017-12-21 2017-12-21 Image pickup method, terminal and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711392242.XA CN108200351A (en) 2017-12-21 2017-12-21 Image pickup method, terminal and computer-readable medium

Publications (1)

Publication Number Publication Date
CN108200351A true CN108200351A (en) 2018-06-22

Family

ID=62577355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711392242.XA Withdrawn CN108200351A (en) 2017-12-21 2017-12-21 Image pickup method, terminal and computer-readable medium

Country Status (1)

Country Link
CN (1) CN108200351A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11810277B2 (en) 2018-07-20 2023-11-07 Huawei Technologies Co., Ltd. Image acquisition method, apparatus, and terminal
EP3816848A4 (en) * 2018-07-20 2022-01-19 Huawei Technologies Co., Ltd. Image acquiring method and device, and terminal
CN109510941A (en) * 2018-12-11 2019-03-22 努比亚技术有限公司 A kind of shooting processing method, equipment and computer readable storage medium
CN109510941B (en) * 2018-12-11 2021-08-03 努比亚技术有限公司 Shooting processing method and device and computer readable storage medium
CN109727212A (en) * 2018-12-24 2019-05-07 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN109919891A (en) * 2019-03-14 2019-06-21 Oppo广东移动通信有限公司 Imaging method, device, terminal and storage medium
WO2020244374A1 (en) * 2019-06-06 2020-12-10 Oppo广东移动通信有限公司 High dynamic range (hdr) image generation method and apparatus, and electronic device and computer-readable storage medium
CN110335216A (en) * 2019-07-09 2019-10-15 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing
CN110335216B (en) * 2019-07-09 2021-11-30 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device, and readable storage medium
CN111050081A (en) * 2019-12-27 2020-04-21 维沃移动通信有限公司 Shooting method and electronic equipment
CN111402341A (en) * 2020-03-10 2020-07-10 创新奇智(广州)科技有限公司 Camera parameter determination method and device, electronic equipment and readable storage medium
CN111654643A (en) * 2020-07-22 2020-09-11 苏州臻迪智能科技有限公司 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium
CN111654643B (en) * 2020-07-22 2021-08-31 苏州臻迪智能科技有限公司 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium
CN112860372A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Method and device for shooting image, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108200351A (en) Image pickup method, terminal and computer-readable medium
CN104488258B (en) Method and apparatus for double camera shutter
CN108933899B (en) Panorama shooting method, device, terminal and computer readable storage medium
WO2020038109A1 (en) Photographing method and device, terminal, and computer-readable storage medium
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN107920211A (en) A kind of photographic method, terminal and computer-readable recording medium
CN104333708B (en) Photographic method, camera arrangement and terminal
CN110505411A (en) Image capturing method, device, storage medium and electronic equipment
CN104580922B (en) A kind of control method and device for shooting light filling
CN110198417A (en) Image processing method, device, storage medium and electronic equipment
CN105227857B (en) A kind of method and apparatus of automatic exposure
US20170332009A1 (en) Devices, systems, and methods for a virtual reality camera simulator
US8558935B2 (en) Scene information displaying method and apparatus and digital photographing apparatus using the scene information displaying method and apparatus
CN109040603A (en) High-dynamic-range image acquisition method, device and mobile terminal
CN109194882A (en) Image processing method, device, electronic equipment and storage medium
KR101930460B1 (en) Photographing apparatus and method for controlling thereof
CN109218627A (en) Image processing method, device, electronic equipment and storage medium
US10958825B2 (en) Electronic apparatus and method for controlling the same
CN108200335A (en) Photographic method, terminal and computer readable storage medium based on dual camera
CN107637063A (en) Method and filming apparatus for the gesture control function based on user
US9313375B1 (en) Software-implemented graduated neutral density filter for balancing exposure of a photograph
CN115689963A (en) Image processing method and electronic equipment
CN109089045A (en) A kind of image capture method and equipment and its terminal based on multiple photographic devices
CN105554366A (en) Multimedia photographing processing method and device and intelligent terminal
CN107155000B (en) Photographing behavior analysis method and device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication (Application publication date: 20180622)