WO2019091487A1 - Method, apparatus, terminal, and storage medium for capturing an image - Google Patents

Method, apparatus, terminal, and storage medium for capturing an image

Info

Publication number
WO2019091487A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
parameter value
target
preset
parameter
Prior art date
Application number
PCT/CN2018/115227
Other languages
English (en)
French (fr)
Inventor
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP18877131.5A (published as EP3713213A4)
Publication of WO2019091487A1
Priority to US16/848,761 (published as US11102397B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/02: Constructional features of telephone sets
    • H04M1/0202: Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026: Details of the structure or mounting of specific components
    • H04M1/0264: Details of the structure or mounting of specific components for a camera module assembly
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/676: Bracketing for image capture at varying focusing conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681: Motion detection
    • H04N23/6812: Motion detection based on additional sensors, e.g. acceleration sensors
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/71: Circuitry for evaluating the brightness variation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/743: Bracketing, i.e. taking a series of images with varying exposure conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M2250/00: Details of telephonic subscriber devices
    • H04M2250/52: Details of telephonic subscriber devices including functional features of a camera
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/65: Control of camera operation in relation to power supply

Definitions

  • the present invention relates to the field of electronic technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for capturing an image.
  • the camera application is a very popular application; users can take photos through it.
  • Embodiments of the present invention provide a method, an apparatus, a terminal, and a storage medium for capturing an image, which can improve the efficiency of photographing.
  • the technical solution is as follows:
  • a method of capturing an image, comprising: acquiring a preview image through the capturing component while the terminal is in the image-to-be-captured state, and acquiring an exposure parameter value corresponding to the preview image; predicting an image capturing parameter value in the current blurred scene according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model, wherein the image capturing parameter in the current blurred scene includes the number of frames to be composited; and, when a shooting instruction is received, performing captured-image processing based on the predicted image capturing parameter value.
  • an apparatus for capturing an image comprising:
  • a first acquiring module, configured to acquire a preview image through the capturing component while the terminal is in the image-to-be-captured state, and to acquire an exposure parameter value corresponding to the preview image;
  • a prediction module, configured to predict an image capturing parameter value in the current blurred scene according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model with image data parameters, exposure parameters, and image capturing parameters as variables, wherein the image capturing parameter in the current blurred scene includes the number of frames to be composited;
  • an execution module, configured to perform captured-image processing according to the predicted image capturing parameter value when a shooting instruction is received.
  • a terminal, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method of capturing an image described in the first aspect.
  • a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method of capturing an image described in the first aspect.
  • FIG. 1 is a flowchart of a method for capturing an image according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a target preview image set corresponding to multiple preset composite numbers and multiple exposure parameter values according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a composite image corresponding to a preset composite number, an exposure parameter value, and a plurality of preset terminal performance parameter values according to an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for capturing an image. While the terminal is in the image-to-be-captured state, a preview image is acquired through the capturing component, together with the exposure parameter value corresponding to the preview image; an image capturing parameter value in the current blurred scene, including the number of frames to be composited, is then predicted; and when a shooting instruction is received, captured-image processing is performed based on the predicted image capturing parameter value.
  • the image capturing parameter in the current blurred scene further includes an exposure parameter.
  • the method further includes: training the image capturing parameter prediction model according to the correspondence between each preview image, exposure parameter value, and image capturing parameter value in a pre-stored training set, under the training principle that the image capturing parameter value predicted by the model should approach the pre-stored image capturing parameter value corresponding to that preview image and exposure parameter value; after training, the image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables is obtained.
  • the method further includes: acquiring a first preview image through the capturing component, and acquiring a first exposure parameter value corresponding to the first preview image;
  • performing image synthesis processing on the target preview image set to obtain a composite image corresponding to the preset composite number and the exposure parameter value includes:
  • determining, among all obtained composite images, the target composite image with the best image quality, and determining the target preset composite number and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter values, includes:
  • the method further includes: recording the power consumption value consumed when obtaining the composite image corresponding to the preset composite number and the exposure parameter value;
  • the terminal further includes a motion sensor; in the image-to-be-captured state, acquiring the preview image through the capturing component and acquiring the exposure parameter value corresponding to the preview image includes:
  • when the motion parameter indicates that the terminal is in a motion state,
  • acquiring the preview image through the capturing component, and acquiring the exposure parameter value corresponding to the preview image.
  • acquiring a preview image through the capturing component, and acquiring an exposure parameter value corresponding to the preview image, includes:
  • the motion parameter model includes at least one of a walking motion model, a riding motion model, and a boarding vehicle motion model.
  • the embodiment of the invention provides a method for capturing an image, and the execution body of the method is a terminal.
  • the terminal may be a terminal having a function of capturing an image, for example, the terminal may be a terminal installed with a camera application.
  • the terminal may include components such as a processor, a memory, a photographing component, a screen, and the like.
  • the processor may be a CPU (Central Processing Unit) or the like, and may be used to determine image capturing parameter values and perform related processing of capturing images.
  • the memory can be RAM (Random Access Memory), Flash memory, etc., and can be used to store received data, data required for processing, and data generated during processing, such as the image capturing parameter prediction model.
  • the shooting component can be a camera that can be used to capture a preview image.
  • the screen may be a touch screen, which may be used to display a preview image acquired by the shooting component, and may also be used to detect a touch signal or the like.
  • the terminal can include a motion sensor.
  • the motion sensor can be at least one of a gravity sensor, a speed sensor, a gyroscope, and an acceleration sensor.
  • when the user takes a picture through the camera application, the camera application can also provide a jitter synthesis function to make it easier to photograph moving objects.
  • when the user wants to photograph a moving object, the user can find the switch button of the jitter synthesis function and tap it to turn the function on.
  • when the user presses the shutter button, the terminal performs captured-image processing based on the preset number of frames to be composited.
  • the terminal can continuously acquire the preset number of images through a capturing component (such as a camera).
  • each such image may be referred to as a preview image, and the captured preset number of images are subjected to image synthesis processing to obtain a composite image; that is, the terminal obtains an image composited from multiple preview images and stores it in the terminal's gallery. With this processing method, whenever the user wants to photograph a moving object, the user must first find the switch button of the jitter synthesis function and then manually tap it.
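  • The jitter-synthesis step described above can be sketched as follows. This is a minimal illustration only: the patent does not disclose the actual synthesis algorithm, so simple per-pixel averaging of the preset number of preview frames stands in for it, and all names below are invented.

```python
# Minimal sketch of multi-frame "jitter synthesis": average a preset
# number of preview frames. Per-pixel averaging is an assumption used
# purely for illustration; the document does not specify the algorithm.

def synthesize(frames):
    """Average equally sized frames, each a list of pixel rows."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    out = [[0] * width for _ in range(height)]
    for frame in frames:
        for y in range(height):
            for x in range(width):
                out[y][x] += frame[y][x]
    return [[v / n for v in row] for row in out]

frames = [
    [[10, 20], [30, 40]],   # hypothetical 2x2 preview frames
    [[20, 30], [40, 50]],
]
print(synthesize(frames))   # [[15.0, 25.0], [35.0, 45.0]]
```

  A real implementation would operate on sensor buffers and typically align the frames before blending; the averaging here only conveys the idea of combining the preset number of previews into one composite image.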
  • Step 101: Acquire a preview image through the capturing component while the terminal is in the image-to-be-captured state, and acquire the exposure parameter value corresponding to the preview image.
  • the exposure parameter value corresponding to the preview image may be an exposure parameter value determined when the preview image is acquired.
  • a camera application may be installed in the terminal, and when the user wants to take a photo, the user may tap the icon of the camera application.
  • the terminal will receive a startup command corresponding to the camera application, and in turn, the camera application can be launched.
  • the terminal will be in the state of the image to be captured, that is, the shooting component in the terminal is in the on state at this time.
  • the terminal can acquire a preview image through the capturing component.
  • the preview image may be an image acquired by the capturing component and displayed in the terminal without having been synthesized; that is, the preview image may be an image acquired by the capturing component before the user presses the shutter button.
  • the terminal can determine the exposure parameter value in real time according to the ambient brightness and the color of the light source in the environment, so that captured-image processing is performed according to that exposure parameter value.
  • the exposure parameter value may include parameter values such as exposure duration and white balance.
  • the terminal can also obtain the exposure parameter value corresponding to the preview image, that is, the exposure parameter value in effect when the preview image was captured.
  • an acquisition period may be set in the terminal.
  • the terminal may obtain a preview image through the shooting component and obtain an exposure parameter value corresponding to the preview image.
  • the motion sensor is further included in the terminal.
  • the terminal may determine whether to perform step 101 based on the motion parameter acquired by the motion sensor.
  • the current motion parameter of the terminal is collected by the motion sensor in a state where the terminal is in the image to be captured, and the motion parameter is used to indicate the motion state of the terminal.
  • the motion parameter indicates that the terminal is in the motion state
  • the preview image is acquired by the photographing component, and the exposure parameter value corresponding to the preview image is acquired. That is, when the terminal is in the state of the image to be captured, the terminal will collect the current motion parameters of the terminal through the motion sensor.
  • the motion parameters include at least one of velocity, acceleration, angular velocity, and angular acceleration.
  • the terminal acquires a preview image through the capturing component, and acquires the exposure parameter value corresponding to the preview image.
  • the terminal compares the motion parameter with the motion parameter model to obtain a similarity, which is used to indicate the degree of similarity between the motion parameter and the motion parameter model.
  • the similarity may be expressed as a fraction or a percentage.
  • when the similarity is greater than the similarity threshold, the terminal determines that it is in a motion state. For example, when the similarity threshold is 80% and the similarity is 86%, the terminal determines that it is in motion.
  • the motion parameter model may be a function model in which a motion parameter preset in the terminal changes on a time axis, or may be a function model in which a preset set of motion parameters in the terminal changes on a time axis.
  • the motion parameter model includes at least one of a walking motion model, a riding motion model, and a boarding vehicle motion model.
  • the walking motion model is used to indicate how the motion parameter of the terminal changes over time while the user is walking;
  • the riding motion model is used to indicate how the motion parameter of the terminal changes over time while the user is riding;
  • the specific motion models described above are merely illustrative and do not limit the protection scope of the present invention; any motion model capable of indicating that the terminal is in motion falls within that scope.
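  • The motion-state check above can be sketched as follows. The similarity measure and the stored model traces are assumptions: the document only says the similarity may be expressed as a fraction or percentage and gives an 80% threshold as an example.

```python
# Hedged sketch of the motion-state check: compare sampled motion
# parameters against each stored motion parameter model and declare the
# terminal "in motion" when the best similarity exceeds the threshold.
# The similarity function below (1 - normalized mean absolute
# difference) is an invented stand-in, not the patented measure.

def similarity(sampled, model):
    """Similarity in [0, 1] between two equal-length parameter traces."""
    diffs = [abs(a - b) for a, b in zip(sampled, model)]
    scale = max(max(abs(v) for v in model), 1e-9)
    return max(0.0, 1.0 - (sum(diffs) / len(diffs)) / scale)

MOTION_MODELS = {             # hypothetical acceleration traces (m/s^2)
    "walking": [1.0, 1.2, 0.9, 1.1],
    "riding":  [2.5, 2.8, 2.4, 2.6],
}
SIMILARITY_THRESHOLD = 0.80   # the 80% threshold from the example

def is_in_motion(sampled):
    return any(similarity(sampled, model) > SIMILARITY_THRESHOLD
               for model in MOTION_MODELS.values())

print(is_in_motion([1.05, 1.15, 0.95, 1.05]))  # True  (close to walking)
print(is_in_motion([0.0, 0.0, 0.0, 0.0]))      # False (terminal at rest)
```

  On a real terminal the sampled trace would come from the accelerometer or gyroscope over a short window, and the models would be calibrated per device.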
  • in this way, the algorithm flow of the present disclosure is executed only when a jitter scene actually needs to be handled, so the power consumption of the terminal can be reduced while the effect of capturing the image is ensured.
  • Step 102: Predict the image capturing parameter value in the current blurred scene according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model with image data parameters, exposure parameters, and image capturing parameters as variables, wherein the image capturing parameters include the number of frames to be composited.
  • the trained image capturing parameter prediction model may be pre-stored in the terminal; it is used to predict the image capturing parameter value in the current blurred scene based on the preview image currently acquired by the terminal and the exposure parameter value corresponding to it.
  • after obtaining the preview image and its corresponding exposure parameter value, the terminal may input them into the pre-trained image capturing parameter prediction model; the output of the model is the image capturing parameter value in the current blurred scene.
  • specifically, the terminal may use the data of the preview image as the value of the image data parameter and the exposure parameter value corresponding to the preview image as the value of the exposure parameter, substitute them into the prediction model, and obtain the image capturing parameter value in the current blurred scene, including the number of frames to be composited.
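  • A minimal sketch of this prediction step, assuming the trained model is available as an ordinary callable; the stand-in model, its inputs (a scalar brightness instead of raw image data), and its coefficients are all hypothetical. The document suggests the real model is a convolutional neural network over the preview image data.

```python
# Sketch of the prediction step: feed the preview image data and its
# exposure parameter value into the trained model; the output is the
# image capturing parameter value (here, the number of frames to be
# composited). Everything below is an illustrative stand-in.

def predict_composite_count(preview_brightness, exposure_value, model):
    """model is any callable trained offline (a CNN in the document)."""
    raw = model(preview_brightness, exposure_value)
    return max(1, round(raw))   # at least one frame is always captured

# hypothetical trained model: darker previews and longer exposures
# call for more frames to composite
trained = lambda brightness, exposure: 1.0 + 2.0 * (1.0 - brightness) + 2.0 * exposure

print(predict_composite_count(0.9, 0.05, trained))  # 1
print(predict_composite_count(0.2, 0.5, trained))   # 4
```

  Clamping to a minimum of one frame mirrors the document's convention that a composite number of 1 means a single, uncomposited capture.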
  • the image capturing parameter prediction model may be pre-trained by the terminal or the server.
  • the training process may be as follows: according to the correspondence between the preview images, exposure parameter values, and image capturing parameter values in a pre-stored training set, and under the training principle that the image capturing parameter value predicted by the model should approach the pre-stored image capturing parameter value corresponding to the preview image and exposure parameter value, the image capturing parameter prediction model is trained to obtain the model with the image data parameter, the exposure parameter, and the image capturing parameter as variables.
  • a training set may be pre-stored in the terminal.
  • the training set may include the correspondence between each preview image, exposure parameter value, and image capturing parameter value, where the image capturing parameter value in each correspondence item may be the value that yields the best quality of the image synthesized from the preview image in that item.
  • the terminal may train the image capturing parameter prediction model containing the parameters to be determined according to the pre-stored training set; that is, the model is trained under the principle that the image capturing parameter value it predicts for a preview image and exposure parameter value should approach the stored image capturing parameter value corresponding to them.
  • for each correspondence item, the terminal may input the preview image and the exposure parameter value into the prediction model containing the parameters to be determined, obtaining an image capturing parameter value expressed in those parameters; an objective function can then be built from the training principle that this value should approach the image capturing parameter value in the correspondence item.
  • for example, the objective function may be the difference between the obtained image capturing parameter value (containing the parameters to be determined) and the image capturing parameter value in the correspondence item.
  • the terminal can then obtain training values of the parameters to be determined by gradient descent, and use those values as the parameter values when training on the next correspondence item; after all correspondence items have been processed, the final training values of the parameters to be determined are obtained.
  • the image capturing parameter prediction model may be a convolutional neural network model.
  • the parameters to be determined may be the convolution kernels of the neural network model.
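  • The training loop described above (objective: predicted value minus stored value, minimized by gradient descent) can be sketched with a toy stand-in for the CNN. The one-parameter-per-feature linear model and the training data below are invented purely for illustration.

```python
# Toy sketch of training the image capturing parameter prediction model.
# A tiny linear model stands in for the convolutional network; the
# (brightness, exposure) -> composite-count pairs are hypothetical.

# training set: (mean preview brightness, exposure value) -> composite count
training_set = [
    ((0.2, 0.5), 3.0),   # dark preview, long exposure -> more frames
    ((0.8, 0.1), 1.0),   # bright preview, short exposure -> single frame
    ((0.5, 0.3), 2.0),
]

w_img, w_exp, bias = 0.0, 0.0, 0.0   # the "parameters to be determined"
lr = 0.1                             # learning rate

for _ in range(2000):                # gradient descent on squared error
    for (brightness, exposure), target in training_set:
        pred = w_img * brightness + w_exp * exposure + bias
        err = pred - target          # objective: predicted minus stored value
        w_img -= lr * err * brightness
        w_exp -= lr * err * exposure
        bias  -= lr * err

pred_dark = w_img * 0.2 + w_exp * 0.5 + bias
print(round(pred_dark))              # 3
```

  Each pass updates the parameters so the prediction approaches the stored value, exactly the training principle stated in the text, just with a model small enough to read at a glance.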
  • each correspondence item may be selected according to the image quality of the composite images synthesized for multiple preset composite numbers and multiple exposure parameter values. Correspondingly, the processing may be as follows: the image capturing parameter in the current blurred scene further includes an exposure parameter; a first preview image is acquired through the capturing component, together with a first exposure parameter value corresponding to it; a preset number of exposure parameter values are determined according to the first exposure parameter value and a preset number of attenuation percentages; for each of the preset number of exposure parameter values, a preview image is acquired through the capturing component, giving a preview image corresponding to each exposure parameter value; for each of a plurality of pre-stored preset composite numbers and each of the preset number of exposure parameter values, the first preview image is selected together with (preset composite number minus one) copies of the preview image corresponding to that exposure parameter value, yielding a target preview image set for that preset composite number and exposure parameter value; image synthesis processing is performed on each target preview image set to obtain the corresponding composite image; among all composite images obtained, the target composite image with the best image quality is determined, and the target preset composite number and target exposure parameter value corresponding to it are determined as the target image capturing parameter values; and the first preview image, the first exposure parameter value, and the target image capturing parameter values are correspondingly stored in the training set.
  • each correspondence item may be determined by the terminal from the first preview image and the preview images acquired at the preset number of different exposure parameter values; different correspondence items may be determined from first preview images and preset numbers of preview images acquired in different scenes. For example, a first correspondence item may be determined from the first preview image and the preset number of preview images captured in a scene where an object moves at a first speed, and a second correspondence item from those captured in a scene where an object moves at a second speed.
  • in each correspondence item whose composite number in the image capturing parameter value is 1, the exposure parameter value in the image capturing parameter value may be a preset value (for example 0); in each correspondence item whose composite number is greater than 1, the exposure parameter value may be one of the preset number of exposure parameter values determined from the first exposure parameter value.
  • the determination process of one correspondence item is described in detail below; the other correspondence items are determined in the same way.
  • the terminal may acquire a preview image through the capturing component (this preview image may be referred to as the first preview image), where the preview image is an image acquired directly by the capturing component without image processing. The exposure parameter value corresponding to the first preview image (the first exposure parameter value) may also be obtained; it is the unadjusted value determined by the terminal according to the ambient brightness and the color of the light source in the environment.
  • a preset number of attenuation percentages may be pre-stored in the terminal; after the first exposure parameter value corresponding to the first preview image is obtained, each of the attenuation percentages is multiplied by the first exposure parameter value to obtain the preset number of exposure parameter values.
  • for example, if the attenuation percentages are 80%, 60%, and 40% and the first exposure parameter value is A, the resulting exposure parameter values are A*80%, A*60%, and A*40%.
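  • This attenuation step is plain arithmetic and can be written directly; the percentages follow the example above, and the function name is illustrative.

```python
# Derive the preset number of exposure parameter values by multiplying
# the first exposure parameter value by each attenuation percentage,
# as in the example (80%, 60%, 40% of A).

def attenuated_exposures(first_exposure, percentages=(0.80, 0.60, 0.40)):
    return [first_exposure * p for p in percentages]

print(attenuated_exposures(100))  # [80.0, 60.0, 40.0]
```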
  • the preview image may then be acquired through the capturing component at each of the preset number of exposure parameter values, giving a preview image corresponding to each exposure parameter value; that is, the preset number of preview images are obtained at the adjusted exposure parameter values, rather than at values determined from the ambient brightness and the color of the light source in the environment.
  • a plurality of preset composite numbers may be pre-stored in the terminal. After the first preview image and the preset number of preview images are acquired, for each preset composite number greater than 1 and each of the preset number of exposure parameter values, the first preview image is selected and the preview image corresponding to that exposure parameter value is repeatedly selected (preset composite number minus one) times, giving the target preview image set corresponding to that preset composite number and exposure parameter value; that is, each preset composite number greater than 1 corresponds to a preset number of target preview image sets. For a preset composite number of 1, the first preview image alone is determined as the target preview image set, and the exposure parameter value corresponding to the composite image obtained from that set may be a preset value.
  • for example, the first preview image acquired by the terminal is image 1 and the first exposure parameter value is A;
  • the preset number of preview images obtained are image 2, image 3, and image 4;
  • their corresponding exposure parameter values are B, C, and D;
  • the preset composite numbers are 1, 2, and 3.
• The terminal can select image 1 to obtain the target preview image set corresponding to the preset composite number 1. For the preset composite number 2, the terminal may select image 1 and one image 2 to obtain the target preview image set corresponding to the preset composite number 2 and the exposure parameter value B; select image 1 and one image 3 to obtain the set corresponding to the preset composite number 2 and the exposure parameter value C; and select image 1 and one image 4 to obtain the set corresponding to the preset composite number 2 and the exposure parameter value D. For the preset composite number 3, the terminal may select image 1 and repeatedly select two images 2 to obtain the target preview image set corresponding to the preset composite number 3 and the exposure parameter value B; select image 1 and two images 3 to obtain the set corresponding to the preset composite number 3 and the exposure parameter value C; and select image 1 and two images 4 to obtain the set corresponding to the preset composite number 3 and the exposure parameter value D, as shown in Figure 2.
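• The set-building rule above can be sketched in code; the function name, the dictionary layout, and the use of image labels as stand-ins for pixel data are illustrative assumptions, not part of the patent:

```python
def build_target_sets(first_image, previews_by_exposure, composite_counts):
    """Enumerate candidate target preview image sets.

    first_image: the first preview image (e.g. "image1").
    previews_by_exposure: dict mapping exposure value -> its preview image.
    composite_counts: the preset composite numbers, e.g. [1, 2, 3].
    Returns a dict keyed by (composite_count, exposure_value).
    """
    target_sets = {}
    for n in composite_counts:
        if n == 1:
            # A composite number of 1 uses the first preview image alone;
            # its exposure value is a preset placeholder (None here).
            target_sets[(1, None)] = [first_image]
        else:
            for exposure, preview in previews_by_exposure.items():
                # The first image plus (n - 1) repeats of the preview
                # captured at this exposure value.
                target_sets[(n, exposure)] = [first_image] + [preview] * (n - 1)
    return target_sets
```

With three exposure values and composite numbers 1, 2, and 3, this yields the seven candidate sets enumerated in the example above.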
• After the target preview image set corresponding to each preset composite number and each exposure parameter value is obtained, image synthesis processing may be performed on the target preview image set, that is, the preview images in the target preview image set may be synthesized to obtain the composite image corresponding to that preset composite number and exposure parameter value.
• After all the composite images are obtained (all composite images include the composite images corresponding to all preset composite numbers), the image quality (such as sharpness) corresponding to each composite image can be calculated, and the composite image with the optimal image quality (which may be called the target composite image) can be determined among the composite images. The preset composite number corresponding to the target composite image (which may be called the target preset composite number) and the exposure parameter value (which may be called the target exposure parameter value) can then be determined, wherein, when the target preset composite number is 1, the target exposure parameter value is a preset value, and when the target preset composite number is greater than 1, the target exposure parameter value is one of the preset number of exposure parameter values.
• After the target preset composite number and the target exposure parameter value are determined, they may be determined as the target image capturing parameter value.
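• The quality-based selection can be sketched as follows; the gradient-based sharpness metric is an assumption for illustration (the patent only names sharpness as one possible quality measure), and images are modeled as 2D lists of grayscale values:

```python
def sharpness(image):
    """Rough sharpness metric: mean absolute horizontal gradient.
    A real implementation might use variance of a Laplacian instead."""
    total, count = 0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            count += 1
    return total / count if count else 0.0

def pick_target_parameters(composites):
    """composites: dict mapping (composite_count, exposure) -> composite image.
    Returns the (count, exposure) key whose composite image scores best,
    i.e. the target preset composite number and target exposure value."""
    return max(composites, key=lambda key: sharpness(composites[key]))
```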
• For other preview images and exposure parameter values, the terminal may also obtain the corresponding image capturing parameter values according to the above processing procedure, thereby obtaining the respective correspondence items in the training set.
• Optionally, the image capturing parameter may further include a terminal performance parameter. When determining the training set, the correspondence between the preview image, the exposure parameter value, the composite number, and the terminal performance parameter value may be determined. Correspondingly, the terminal may perform the following processing: performing image synthesis processing on the target preview image set based on each preset terminal performance parameter value of a plurality of preset terminal performance parameter values, respectively, to obtain a composite image corresponding to the preset composite number, the exposure parameter value, and each preset terminal performance parameter value.
• Correspondingly, the terminal may perform the following processing: determining, among all the obtained composite images, a target composite image with optimal image quality, and determining the target preset composite number, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image as the target image capturing parameter value.
• The plurality of preset terminal performance parameter values may be two preset terminal performance parameter values, or may be at least three preset terminal performance parameter values.
  • the terminal performance parameter may be a parameter that affects the performance of the terminal, such as a CPU running frequency, and the CPU running frequency may also be referred to as a CPU frequency.
• A plurality of preset terminal performance parameter values may be pre-stored in the terminal. For the case where the image capturing parameter further includes the terminal performance parameter, after the target preview image set corresponding to each preset composite number and each exposure parameter value is determined, the terminal may perform image synthesis processing on the target preview image set based on each of the plurality of preset terminal performance parameter values, respectively, to obtain a composite image corresponding to the preset composite number, the exposure parameter value, and each preset terminal performance parameter value. That is to say, each combination of preset composite number, exposure parameter value, and preset terminal performance parameter value corresponds to one composite image.
• For example, for the target preview image set corresponding to the preset composite number 2 and the exposure parameter value B, the terminal may perform image synthesis processing with the terminal performance parameter set to a (i.e., the terminal may set the terminal performance parameter to a), obtaining the composite image corresponding to the preset composite number 2, B, and a; and may perform image synthesis processing with the terminal performance parameter set to b, obtaining the composite image corresponding to the preset composite number 2, B, and b, as shown in FIG.
• Subsequently, the terminal can calculate the image quality of each composite image, determine the target composite image with the optimal image quality among the plurality of composite images, and determine the target preset composite number, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image as the target image capturing parameter value. The three items (i.e., the first preview image, the first exposure parameter value, and the target image capturing parameter value) may then be correspondingly stored in the training set.
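• The enumeration over terminal performance parameter values can be sketched as follows; `synthesize` is an assumed callable standing in for the terminal's image synthesis routine, not an API named in the patent:

```python
def enumerate_composites(target_sets, perf_values, synthesize):
    """Produce one composite image per combination of composite number,
    exposure value, and preset terminal performance parameter value.

    target_sets: dict (composite_count, exposure) -> list of preview images.
    perf_values: preset terminal performance parameter values, e.g. ['a', 'b'].
    synthesize: callable(images, perf) -> composite image (assumed interface).
    """
    composites = {}
    for (count, exposure), images in target_sets.items():
        for perf in perf_values:
            # Each (count, exposure, perf) triple keys exactly one composite.
            composites[(count, exposure, perf)] = synthesize(images, perf)
    return composites
```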
• Optionally, when the terminal performs image synthesis processing on each target preview image set, the terminal may also record the power consumption value consumed by the image synthesis processing. Correspondingly, the process of determining the target image capturing parameter value by the terminal may be as follows: recording the power consumption value consumed when the composite image corresponding to the preset composite number and the exposure parameter value is obtained; determining, among all the obtained composite images, the target composite image with the optimal combination of image quality and power consumption value; and determining the target preset composite number and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter value.
• After the target preview image set corresponding to each preset composite number and each exposure parameter value is obtained, image synthesis processing may be performed on the target preview image set to obtain the corresponding composite image, and the power consumption value consumed by the image synthesis processing may be recorded, that is, the power consumption value consumed when obtaining the composite image corresponding to that preset composite number and exposure parameter value, wherein the power consumption value may be one or more of the following: the amount of power consumed and the length of time consumed.
• After all the composite images are obtained, the terminal can determine the target composite image among them according to the corresponding image quality and power consumption value (for example, the composite image with the largest quotient of image quality divided by power consumption value may be determined as the target composite image), and the target preset composite number and the target exposure parameter value corresponding to the target composite image may be determined as the target image capturing parameter value.
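• A minimal sketch of the quotient rule above, assuming each candidate composite is recorded as a (quality, power) pair:

```python
def pick_by_quality_per_power(records):
    """records: dict mapping (composite_count, exposure) to a
    (quality, power) tuple, where power may be consumed energy or
    elapsed time. Returns the key whose quality / power quotient
    is largest, matching the quotient rule described above."""
    def quotient(key):
        quality, power = records[key]
        return quality / power
    return max(records, key=quotient)
```

Note the quotient can prefer a cheaper composite over a marginally sharper one, trading a little quality for power.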
• When the preset composite number is 1, the first preview image in the target preview image set may be used as the composite image.
• In step 103, when the shooting instruction is received, the captured image processing is performed based on the predicted image capturing parameter value.
• In implementation, when the terminal is in the to-be-captured state and the user wants to take a photo, the user may click the shooting button. At this time, the terminal receives the click instruction of the shooting button and then performs the captured image processing according to the predicted image capturing parameter value in the current blurred scene and the jitter synthesis algorithm, wherein the predicted image capturing parameter value in the current blurred scene may be used as the value of the image capturing parameter in the jitter synthesis algorithm. When the predicted composite number is 1, the terminal can acquire one preview image through the photographing component; in this case, the obtained preview image is the final composite image. It should be noted that this situation is equivalent to the terminal not having the jitter synthesis function enabled.
• When the predicted composite number is greater than 1, the terminal may acquire the second preview image, capture (the composite number minus one) preview images based on the predicted exposure parameter value, perform image synthesis processing on the acquired second preview image and the (composite number minus one) preview images, obtain the final image, and store it in the gallery.
• For the case where the image capturing parameter further includes the terminal performance parameter, the terminal may acquire the second preview image, capture (the composite number minus one) preview images according to the predicted exposure parameter value, set the terminal performance parameter to the predicted terminal performance parameter value, and then perform image synthesis processing on the acquired second preview image and the (composite number minus one) preview images based on the predicted terminal performance parameter value, to obtain the final image and store it in the gallery. It can be seen that if the predicted composite number is 1, the terminal may acquire only the second preview image and store it as the final image.
• In step 101, for each preset acquisition period, the preview image is acquired by the photographing component and the exposure parameter value corresponding to the preview image is obtained. Each time a preview image is acquired, the image capturing parameter value in the current blurred scene (i.e., corresponding to the current acquisition period) may be determined according to step 102. Each time a shooting instruction is received in the current acquisition period, the terminal may perform the captured image processing according to the predicted image capturing parameter value corresponding to the current acquisition period.
• Optionally, for the case where the image capturing parameter further includes the exposure parameter, the processing of step 103 may be as follows: when the shooting instruction is received, acquiring the second preview image by the photographing component; continuously acquiring (the predicted composite number minus one) preview images by the photographing component based on the predicted exposure parameter value; and performing image synthesis processing on the second preview image and the (composite number minus one) preview images to obtain the composite image.
• The longer the exposure duration, the greater the brightness of the captured image.
• Through the image capturing parameter prediction model, the exposure parameter value on which the terminal performs the captured image processing can be reduced, so that the sharpness of the captured image can be enhanced.
  • the image capturing parameter may further include an exposure parameter.
• The input of the image capturing parameter prediction model may be the preview image and the exposure parameter value corresponding to the preview image, where the exposure parameter value is determined by the terminal according to the ambient brightness and the color of the light source in the environment, and is not adjusted.
  • the output of the image capturing parameter prediction model may be the predicted composite number of sheets and the predicted exposure parameter value, wherein the predicted exposure parameter value may be an exposure parameter value for the terminal to perform the captured image processing, and the predicted exposure parameter value is smaller than the preview image. Corresponding exposure parameter value.
• When the user wants to take a photo, the shooting button may be clicked. At this time, the terminal receives the shooting instruction and then acquires a preview image (which may be referred to as the second preview image) through the photographing component, and obtains the predicted exposure parameter value and composite number, wherein the second preview image may be the image acquired by the terminal through the photographing component when the shooting instruction is received.
• Then, the terminal may continuously acquire (the predicted composite number minus one) preview images through the photographing component based on the predicted exposure parameter value, and may further perform image synthesis processing on the second preview image and the (composite number minus one) preview images to obtain the composite image, which is stored in the gallery for viewing by the user.
• For the (composite number minus one) preview images, the terminal may not store the acquired preview images; in this case, the user does not see these preview images, and the terminal only uses them to obtain the composite image. It can be seen that if the composite number is 1, the terminal only acquires the second preview image; in this case, the predicted exposure parameter value has no substantial effect.
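• The capture-time flow described above can be sketched as follows; `acquire`, `predict`, and `synthesize` are assumed interfaces standing in for the photographing component, the prediction model, and the jitter synthesis algorithm, none of which are APIs named in the patent:

```python
def capture(acquire, predict, synthesize, gallery):
    """Sketch of step 103: acquire(exposure=None) returns one preview
    frame, predict(frame) returns (composite_count, exposure_value),
    and synthesize(frames) merges frames into the final image."""
    second_preview = acquire()                 # frame at shutter press
    count, exposure = predict(second_preview)
    if count == 1:
        # Equivalent to jitter synthesis being disabled: the single
        # preview is stored as the final image.
        gallery.append(second_preview)
        return second_preview
    # Continuously acquire (count - 1) frames at the predicted exposure.
    extras = [acquire(exposure) for _ in range(count - 1)]
    final = synthesize([second_preview] + extras)
    gallery.append(final)                      # only the result is stored
    return final
```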
• In the embodiment of the present invention, when the terminal is in the to-be-captured state, the preview image is acquired by the photographing component, and the exposure parameter value corresponding to the preview image is obtained; according to the preview image, the exposure parameter value, and the pre-trained image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables, the image capturing parameter value in the current blurred scene is predicted, wherein the image capturing parameter in the current blurred scene includes the composite number; when the shooting instruction is received, the captured image processing is performed according to the predicted image capturing parameter value.
• In this way, the terminal can automatically calculate the composite number in the current blurred scene, and the captured image processing can be performed based on the composite number without the user manually turning on the jitter synthesis function, thereby improving the efficiency of photographing.
  • an embodiment of the present invention further provides an apparatus for capturing an image.
  • the apparatus includes:
• the first obtaining module 410 is configured to acquire a preview image by using a photographing component in a state where the terminal is to capture an image, and acquire an exposure parameter value corresponding to the preview image;
  • the prediction module 420 is configured to predict an image in a current blurred scene according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model with image data parameters, exposure parameters, and image capturing parameters as variables Taking a parameter value, wherein the image capturing parameter includes a composite number of sheets;
  • the execution module 430 is configured to perform the captured image processing according to the predicted image capturing parameter value when receiving the shooting instruction.
  • the image capturing parameter further includes an exposure parameter.
  • the device further includes:
• The training module 440 is configured to train the image capturing parameter prediction model according to the correspondence between each preview image, exposure parameter value, and image capturing parameter value in the pre-stored training set, based on the training principle that the image capturing parameter value predicted by the image capturing parameter prediction model approximates the image capturing parameter value corresponding to the pre-stored preview image and exposure parameter value, to obtain the trained image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables.
  • the apparatus further includes:
  • a second acquiring module 450 configured to acquire a first preview image by using a shooting component, and acquire a first exposure parameter value corresponding to the first preview image
  • the first determining module 460 is configured to determine a preset number of exposure parameter values according to the first exposure parameter value and a preset number of attenuation percentages;
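• Assuming the attenuation percentages are applied multiplicatively to the first exposure parameter value (the patent does not spell out the formula in this passage), the first determining module's derivation might look like:

```python
def attenuated_exposures(first_exposure, attenuation_percentages):
    """Derive the preset number of exposure parameter values from the
    first exposure value and the preset attenuation percentages.
    Each percentage is assumed to mean "reduce the first value by
    that fraction"; the percentages themselves are illustrative."""
    return [first_exposure * (1 - pct) for pct in attenuation_percentages]
```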
  • the third obtaining module 470 is configured to obtain a preview image corresponding to each exposure parameter value by acquiring a preview image by using a shooting component according to each of the preset number of exposure parameter values;
• a second determining module 480 configured to: for each preset composite number of the preset plurality of composite numbers and the preview image corresponding to each exposure parameter value of the preset number of exposure parameter values, select the first preview image and repeatedly select (the preset composite number minus one) preview images corresponding to the exposure parameter value, to obtain a target preview image set corresponding to the preset composite number and the exposure parameter value; and perform image synthesis processing on the target preview image set to obtain a composite image corresponding to the preset composite number and the exposure parameter value;
• the third determining module 490 is configured to determine, among the obtained composite images, a target composite image with optimal image quality, and determine the target preset composite number and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter value;
  • the storage module 4100 is configured to store the first preview image, the first exposure parameter value, and the target image capturing parameter value in a corresponding manner in the training set.
  • the second determining module 480 is configured to:
  • the third determining module 490 is configured to:
  • the apparatus further includes:
• the recording module 4110 is configured to record a power consumption value consumed when the composite image corresponding to the preset composite number and the exposure parameter value is obtained;
  • the third determining module 490 is configured to:
  • the terminal includes a motion sensor, and the first acquiring module is configured to:
  • the preview image is acquired by the imaging component, and an exposure parameter value corresponding to the preview image is acquired.
  • the first acquiring module is configured to:
  • the motion parameter model includes at least one of a walking motion model, a riding motion model, and a boarding vehicle motion model.
• In the embodiment of the present invention, when the terminal is in the to-be-captured state, the preview image is acquired by the photographing component, and the exposure parameter value corresponding to the preview image is obtained; according to the preview image, the exposure parameter value, and the pre-trained image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables, the image capturing parameter value in the current blurred scene is predicted, wherein the image capturing parameter includes the composite number; when the shooting instruction is received, the captured image processing is performed according to the predicted image capturing parameter value.
• In this way, the terminal can automatically calculate the composite number in the current blurred scene, and the captured image processing can be performed based on the composite number without the user manually turning on the jitter synthesis function, thereby improving the efficiency of photographing.
• It should be noted that the device for capturing an image provided by the foregoing embodiment is only illustrated by the division of the above functional modules when capturing an image. In actual applications, the functions may be distributed to different functional modules as needed, that is, the internal structure of the terminal may be divided into different functional modules to complete all or part of the functions described above.
• The device for capturing an image provided by the above embodiment belongs to the same concept as the method for capturing an image; the specific implementation process is described in detail in the method embodiment and is not repeated here.
  • the terminal 100 can be a mobile phone, a tablet computer, a notebook computer, an e-book, and the like.
  • the terminal 100 in this application may include one or more of the following components: a processor 110, a memory 120, and a touch display screen 130.
  • Processor 110 can include one or more processing cores.
• The processor 110 connects various portions of the entire terminal 100 using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120.
• Optionally, the processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
• The CPU mainly processes the operating system, the user interface, applications, and the like; the GPU is responsible for rendering and drawing the content that the display screen 130 needs to display; the modem is used to process wireless communication. It can be understood that the above modem may also not be integrated into the processor 110 and may be implemented by a single chip.
• The memory 120 may include a random access memory (RAM), and may also include a read-only memory (ROM).
  • the memory 120 includes a non-transitory computer-readable storage medium.
  • Memory 120 can be used to store instructions, programs, code, code sets, or sets of instructions.
  • the memory 120 may include a storage program area and a storage data area, wherein the storage program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), The instructions for implementing the various method embodiments described below, etc.; the storage data area can store data (such as audio data, phone book) created according to the use of the terminal 100, and the like.
  • the memory 120 stores a Linux kernel layer 220, a system runtime layer 240, an application framework layer 260, and an application layer 280.
  • the Linux kernel layer 220 provides the underlying drivers for various hardware of the terminal 100, such as display drivers, audio drivers, camera drivers, Bluetooth drivers, Wi-Fi drivers, power management, and the like.
• The system runtime layer 240 provides main feature support for the Android system through some C/C++ libraries. For example, the SQLite library provides support for databases, the OpenGL/ES library provides support for 3D graphics, and the Webkit library provides support for the browser kernel.
  • the Android runtime library (Android Runtime) is also provided in the system runtime layer 240. It mainly provides some core libraries, which can allow developers to write Android applications using the Java language.
• The application framework layer 260 provides various APIs that may be used when building applications. Developers can build their own applications by using these APIs, such as event management, window management, view management, notification management, content providers, package management, call management, resource management, and location management.
• The application layer 280 runs at least one application, which may be an application provided by the operating system, such as a contact program, an SMS program, a clock program, or a camera application; or an application developed by a third-party developer, such as an instant communication program or a photo beautification program.
• The IOS system includes: a core operating system layer 320 (Core OS layer), a core service layer 340 (Core Services layer), a media layer 360 (Media layer), and a touchable layer 380 (Cocoa Touch Layer).
  • the core operating system layer 320 includes an operating system kernel, drivers, and an underlying program framework that provide functionality closer to the hardware for use by the program framework located at the core service layer 340.
  • the core service layer 340 provides system services and/or program frameworks required by the application, such as a Foundation framework, an account framework, an advertising framework, a data storage framework, a network connectivity framework, a geographic location framework, a motion framework, and the like.
  • the media layer 360 provides an interface for the audiovisual aspect of the application, such as a graphic image related interface, an audio technology related interface, a video technology related interface, and an audio and video transmission technology wireless play (AirPlay) interface.
• The touchable layer 380 provides various commonly used interface-related frameworks for application development, and is responsible for user touch interaction operations on the terminal 100, such as the local notification service, the remote push service, the advertising framework, the game tool framework, the message user interface (UI) framework, the UIKit framework, and the map framework.
  • the frameworks associated with most applications include, but are not limited to, the base framework in the core service layer 340 and the UIKit framework in the touchable layer 380.
• The base framework provides many basic object classes and data types, providing the most basic system services for all applications, independent of the UI.
• The classes provided by the UIKit framework are the basic UI class libraries for creating touch-based user interfaces. iOS applications can provide UI based on the UIKit framework, so it provides the application infrastructure for building user interfaces: drawing, handling user interaction events, responsive gestures, and more.
• The touch display screen 130 is for receiving a touch operation by a user on or near it using any suitable object such as a finger or a stylus, and for displaying the user interface of each application.
• The touch display screen 130 is typically disposed at the front panel of the terminal 100.
• The touch display screen 130 can be designed as a full screen, a curved screen, or a profiled screen. It can also be designed as a combination of a full screen and a curved screen, or a combination of a profiled screen and a curved screen, which is not limited in this embodiment. Wherein:
• The full screen may refer to a screen design in which the screen ratio of the touch display screen 130 on the front panel of the terminal 100 exceeds a threshold (e.g., 80%, 90%, or 95%).
• One calculation method of the screen ratio is: (the area of the touch display screen 130 / the area of the front panel of the terminal 100) * 100%; another calculation method is: (the area of the actual display area in the touch display screen 130 / the area of the front panel of the terminal 100) * 100%; another calculation method is: (the diagonal of the touch display screen 130 / the diagonal of the front panel of the terminal 100) * 100%.
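• The three screen ratio formulas can be expressed directly; the argument names are illustrative:

```python
def screen_ratios(screen_area, panel_area, display_area,
                  screen_diagonal, panel_diagonal):
    """The three screen-to-body ratio formulas above, as percentages:
    screen area / panel area, actual display area / panel area, and
    screen diagonal / panel diagonal."""
    return {
        "screen_area":  screen_area * 100 / panel_area,
        "display_area": display_area * 100 / panel_area,
        "diagonal":     screen_diagonal * 100 / panel_diagonal,
    }
```

For example, a 90 cm² screen on a 100 cm² panel gives a 90% ratio by the first formula, while the diagonal formula for a 15 cm screen diagonal on a 16 cm panel diagonal gives 93.75%.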
  • the full screen may also be a screen design that integrates at least one front panel component inside or below the touch display screen 130.
  • the at least one front panel component comprises: a camera, a fingerprint sensor, a proximity light sensor, a distance sensor, and the like.
• That is, other components on the front panel of a conventional terminal are integrated in all or part of the area of the touch display screen 130; for example, the photosensitive element in the camera is split into a plurality of photosensitive pixels, and each photosensitive pixel is integrated in a black area in each display pixel of the touch display screen 130. Since at least one front panel component is integrated inside the touch display screen 130, the full screen has a higher screen ratio.
• In other embodiments, the front panel components on the front panel of a conventional terminal may also be disposed on the side or back of the terminal 100; for example, an ultrasonic fingerprint sensor is disposed under the touch display screen 130, a bone-conduction earpiece is disposed inside the terminal 100, and the camera is disposed on the side of the terminal 100 in a pluggable manner.
• Optionally, when the terminal 100 adopts a full screen, an edge touch sensor 120 may be disposed on a single side, two sides (such as the left and right sides), or four sides (such as the upper, lower, left, and right sides) of the middle frame of the terminal 100, and the edge touch sensor 120 is configured to detect at least one of a user's touch operation, click operation, pressing operation, sliding operation, and the like on the middle frame.
  • the edge touch sensor 120 may be any one of a touch sensor, a thermal sensor, a pressure sensor, and the like. The user can apply an operation on the edge touch sensor 120 to control the application in the terminal 100.
  • the curved screen refers to a screen design in which the screen area of the touch display screen 130 is not in one plane.
• For example, the curved screen has at least one cross section in a curved shape, and the projection of the curved screen in a plane perpendicular to the plane of the cross section is flat, wherein the curved shape may be U-shaped.
  • a curved screen refers to a screen design in which at least one side is curved.
  • the curved screen means that at least one side of the touch display screen 130 extends over the middle frame of the terminal 100.
• Optionally, the curved screen refers to a screen design in which the left and right side edges 42 are curved; or the curved screen refers to a screen design in which the upper and lower sides are curved; or the curved screen refers to a screen design in which all four sides (upper, lower, left, and right) are curved.
  • the curved screen is fabricated using a touch screen material having a certain flexibility.
  • a profiled screen is a touch display screen whose outline is an irregular shape, where the irregular shape is neither a rectangle nor a rounded rectangle.
  • optionally, the profiled screen refers to a screen design in which protrusions, notches, and/or holes are provided on a rectangular or rounded-rectangular touch display screen 130.
  • the protrusions, notches, and/or holes may be located at the edge of the touch display screen 130, at the center of the screen, or both. When provided at one edge, they may be disposed at the middle or at both ends of that edge; when provided at the center of the screen, they may be disposed in one or more of the upper, upper-left, left, lower-left, lower, lower-right, right, and upper-right regions of the screen.
  • when disposed in multiple regions, the protrusions, notches, and holes may be concentrated or dispersed; they may be distributed symmetrically or asymmetrically.
  • the number of the protrusions, the notches and/or the holes is also not limited.
  • because the profiled screen turns the upper and/or lower forehead area of the touch display screen into a displayable area and/or an operable area, the touch display screen occupies more space on the front panel of the terminal, so the profiled screen also has a larger screen-to-body ratio.
  • the notch and/or the hole are used to receive at least one front panel component, including at least one of a camera, a fingerprint sensor, a proximity light sensor, a distance sensor, an earpiece, an ambient light brightness sensor, and a physical button.
  • the notch may be provided on one or more edges, and may be a semicircular notch, a right-angled rectangular notch, a rounded-rectangular notch, or an irregularly shaped notch.
  • schematically, in the example shown in FIG. 12, the profiled screen may be a screen design in which a semicircular notch 43 is provided at the central position of the upper edge of the touch display screen 130, and the position vacated by the semicircular notch 43 is used to accommodate at least one front panel component among a camera, a distance sensor (also referred to as a proximity sensor), an earpiece, and an ambient light brightness sensor.
  • schematically, as shown in FIG. 13, the profiled screen may be a screen design in which a semicircular notch 44 is provided at the central position of the lower edge of the touch display screen 130, and the position vacated by the semicircular notch 44 is used to accommodate at least one of a physical button, a fingerprint sensor, and a microphone.
  • schematically, in the example shown in FIG. 14, the profiled screen may be a screen design in which a semi-elliptical notch 45 is provided at the central position of the lower edge of the touch display screen 130, while another semi-elliptical notch is formed on the front panel of the terminal 100; the two semi-elliptical notches enclose an elliptical area used to accommodate a physical button or a fingerprint recognition module.
  • schematically, in the example shown in FIG. 15, the profiled screen may be a screen design in which at least one small hole 45 is provided in the upper half of the touch display screen 130, and the position vacated by the small hole 45 is used to accommodate at least one front panel component among a camera, a distance sensor, an earpiece, and an ambient light brightness sensor.
  • in addition, a person skilled in the art can understand that the structure of the terminal 100 shown in the above figures does not constitute a limitation on the terminal 100; the terminal may include more or fewer components than illustrated, a combination of certain components, or a different arrangement of components.
  • the terminal 100 further includes components such as a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a Bluetooth module, and the like, and details are not described herein.
  • in the embodiments of the present invention, when the terminal is in a state of waiting to capture an image, a preview image is acquired by the photographing component, and the exposure parameter value corresponding to the preview image is obtained; according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model that takes an image data parameter, an exposure parameter, and an image capturing parameter as variables, the image capturing parameter value in the current blurred scene is predicted, wherein the image capturing parameter includes the number of frames to composite; when a shooting instruction is received, image capturing processing is performed according to the predicted image capturing parameter value.
  • in this way, the terminal can automatically calculate the number of frames to composite in the current blurred scene.
  • the image capturing processing can then be performed based on that number, without the user manually turning on the jitter synthesis function, thereby improving the efficiency of photographing.
  • a person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disk, or the like.
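The end-to-end flow summarized in the bullets above can be sketched as follows; the `camera` and `model` interfaces and the frame list handed off at the end are illustrative assumptions, not interfaces defined by this application:

```python
def auto_capture(camera, model, shoot_requested):
    """Sketch: predict the number of frames to composite from the preview
    image and its exposure value, then gather that many frames when the
    shooting instruction arrives (no manual jitter-synthesis toggle)."""
    preview = camera.preview()
    exposure = camera.exposure_value()
    count = model.predict(preview, exposure)  # predicted composite count
    if not shoot_requested():
        return None
    # count - 1 further previews are captured and composited with the first
    frames = [preview] + [camera.preview() for _ in range(count - 1)]
    return frames  # handed to the jitter-synthesis algorithm
```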

Abstract

实施例公开了一种拍摄图像的方法、装置、终端和存储介质,属于电子技术领域。所述方法包括:在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述当前模糊场景下的图像拍摄参数包括合成张数;当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。可以提高拍照的效率。

Description

拍摄图像的方法、装置、终端和存储介质
本申请实施例要求于2017年11月13日提交、申请号为201711117448.1、发明名称为“拍摄图像的方法、装置、终端和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请实施例中。
技术领域
本发明涉及电子技术领域,特别涉及一种拍摄图像的方法、装置、终端和存储介质。
背景技术
随着电子技术的发展,手机、计算机等终端得到了广泛的应用,相应的终端上的应用程序的种类越来越多、功能越来越丰富。拍照应用程序即是一种很常用的应用程序。用户可以通过拍照应用程序进行拍照。
发明内容
本发明实施例提供了一种拍摄图像的方法、装置、终端和存储介质,可以提高拍照的效率。所述技术方案如下:
一方面,提供了一种拍摄图像的方法,所述方法包括:
在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;
根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述当前模糊场景下的图像拍摄参数包括合成张数;
当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。
另一方面,提供了一种拍摄图像的装置,所述装置包括:
第一获取模块,用于在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;
预测模块,用于根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述当前模糊场景下的图像拍摄参数包括合成张数;
执行模块,用于当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。
另一方面,提供了一种终端,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如第一方面所述的拍摄图像的方法。
另一方面,提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如第一方面所述的拍摄图像的方法。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种拍摄图像的方法流程图;
图2是本发明实施例提供的一种多个预设合成张数和多个曝光参数值对应的目标预览图像集合的示意图;
图3是本发明实施例提供的一种预设合成张数、曝光参数值和多个预设终端性能参数值对应的合成图像的示意图;
图4是本发明实施例提供的一种拍摄图像的装置结构示意图;
图5是本发明实施例提供的一种拍摄图像的装置结构示意图;
图6是本发明实施例提供的一种拍摄图像的装置结构示意图;
图7是本发明实施例提供的一种拍摄图像的装置结构示意图;
图8是本发明实施例提供的一种终端结构示意图;
图9是本发明实施例提供的一种终端结构示意图;
图10是本发明实施例提供的一种终端结构示意图;
图11是本发明实施例提供的一种终端结构示意图;
图12是本发明实施例提供的一种终端结构示意图;
图13是本发明实施例提供的一种终端结构示意图;
图14是本发明实施例提供的一种终端结构示意图;
图15是本发明实施例提供的一种终端结构示意图。
具体实施方式
为使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明实施方式作进一步地详细描述。
本发明实施例提供了一种拍摄图像的方法,在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;
根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述当前模糊场景下的图像拍摄参数包括合成张数;
当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。
可选地,所述当前模糊场景下的图像拍摄参数还包括曝光参数。
可选地,该拍摄图像的方法还包括:根据预先存储的训练集中的各个预览图像、曝光参数值、图像拍摄参数值的对应关系,基于通过图像拍摄参数预测模型预测得到的图像拍摄参数值趋近于预先存储的与预览图像、曝光参数值相对应的图像拍摄参数值的训练原则,对所述图像拍摄参数预测模型进行训练,得到训练后的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型。
可选地,该方法还包括:通过所述拍摄部件获取第一预览图像,并获取所述第一预览图像对应的第一曝光参数值;
根据所述第一曝光参数值以及预设数目个衰减百分比,确定预设数目个曝光参数值;
分别根据预设数目个曝光参数值中的每个曝光参数值,通过所述拍摄部件获取预览图像,得到每个曝光参数值对应的预览图像;
对于预先存储的多个预设合成张数中的每个预设合成张数和所述预设数目个曝光参数值中的每个曝光参数值对应的预览图像,选取所述第一预览图像和重复选取所述预设合成张数减一个所述曝光参数值对应的预览图像,得到所述预设合成张数与所述曝光参数值对应的目标预览图像集合;对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数与所述曝光参数值对应的合成图像;
在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值;
将所述第一预览图像、所述第一曝光参数值、所述目标图像拍摄参数值对应存储到所述训练集中。
可选地,在该方法中,所述对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数与所述曝光参数值对应的合成图像,包括:
分别基于多个预设终端性能参数值中的每个预设终端性能参数值,对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数、所述曝光参数值与每个预设终端性能参数值对应的合成图像;
所述在得到的所有合成图像中，确定对应的图像质量最优的目标合成图像，将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值，包括：
在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值和目标预设终端性能参数值确定为目标图像拍摄参数值。
可选地,该方法还包括:记录得到所述预设合成张数与所述曝光参数值对应的合成图像时所消耗的功耗值;
所述在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值,包括:
在得到的所有合成图像中,确定对应的图像质量和功耗值综合最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。
可选地,终端还包括运动传感器,所述在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值,包括:
在终端处于待拍摄图像的状态下,通过所述运动传感器采集所述终端当前的运动参数,所述运动参数用于指示所述终端的运动状态;
当所述运动参数指示终端处于运动状态时，通过所述拍摄部件获取预览图像，并获取所述预览图像对应的曝光参数值。
可选地,所述当所述运动参数指示终端处于运动状态时,通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值,包括:
将所述运动参数和运动参数模型进行比较，获取相似度，所述相似度用于指示所述运动参数和所述运动参数模型之间的相似程度；
当所述相似度大于相似阈值时,确定所述终端处于运动状态;
通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
可选地,所述运动参数模型包括步行运动模型、骑行运动模型和搭乘载具运动模型中至少一种。
本发明实施例提供了一种拍摄图像的方法,该方法的执行主体为终端。其中,该终端可以是具有拍摄图像功能的终端,比如该终端可以是安装有拍照应用程序的终端。终端可以包括处理器、存储器、拍摄部件、屏幕等部件。处理器可以为CPU(Central Processing Unit,中央处理单元)等,可以用于确定图像拍摄参数值和执行拍摄图像的相关处理。存储器可以为RAM(Random Access Memory,随机存取存储器)、Flash(闪存)等,可以用于存储接收到的数据、处理过程所需的数据、处理过程中生成的数据等,如图像拍摄参数预测模型等。拍摄部件可以是摄像头,可以用于获取预览图像。屏幕可以是触控屏,可以用于显示通过拍摄部件获取到的预览图像,还可以用于检测触碰信号等。可选地,终端可以包括运动传感器。在一种可能的方式中,运动传感器可以是重力传感器、速度传感器、陀螺仪和加速度传感器中至少一种。
在本发明的一种实施场景中,当用户通过拍照应用程序拍照时,为方便用户对运动的物体拍照,拍照应用程序还可以提供抖动合成功能。可选的,当用户想要对运动的物体拍照时,用户可以找到抖动合成功能的开关按钮,然后通过点击开关按钮,开启抖动合成功能,当用户按下拍照按钮时,终端可以基于预设合成张数,执行拍摄图像处理。可选的,终端可以通过拍摄部件(比如摄像头)连续获取预设合成张数个图像。其中,该图像可称为预览图像,并将拍摄的预设合成张数个图像进行图像合成处理,得到合成图像。即终端得到多张预览图像合成后的图像,并将其存储在终端的图库中。基于上述处理方式,每当用户想要对运动的物体拍照时,需要先找到抖动合成功能的开关按钮,再手动点击该开关按钮。
下面将结合具体实施方式,对图1所示的处理流程进行详细的说明,内容可以如下:
步骤101,在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取该预览图像对应的曝光参数值。
其中,预览图像对应的曝光参数值可以是获取该预览图像时确定出的曝光参数值。
在本实施例中,终端中可以安装有拍照应用程序,当用户想要进行拍照时,可以点击拍照应用程序的图标。此时,终端将会接收到对应拍照应用程序的启动指令,进而,可以启动拍照应用程序。此时,终端将会处于待拍摄图像的状态,即此时终端中的拍摄部件处于开启状态。在终端处于待拍摄图像的状态下,终端可以通过拍摄部件获取预览图像。其中,预览图像可以是拍摄部件获取的终端中显示的图像,即未经合成处理的图像,也即预览图像可以是用户按下拍摄按键前拍摄部件获取的图像。另外,拍照应用程序开启后,终端可以实时根据环境亮度和环境中光源颜色,确定曝光参数值,以便终端根据曝光参数值执行拍摄图像处理。其中,曝光参数值可以包括曝光时长、白平衡等参数值。此种情况下,终端通过拍摄部件获取到预览图像的同时,终端还可以获取预览图像对应的曝光参数值,即获取拍摄预览图像时基于的曝光参数值。
另外,终端中可以设置有获取周期,在终端处于待拍摄图像的状态下,每经过预设的获取周期时,终端可以通过拍摄部件获取预览图像,并获取预览图像对应的曝光参数值。
在一种可能的实施方式中,终端中还包含运动传感器。终端可以通过运动传感器获取的运动参数,确定是否执行步骤101所示的步骤。在本实施例所提供的一种实现方式中,在终端处于待拍摄图像的状态下,通过运动传感器采集终端当前的运动参数,运动参数用于指示终端的运动状态。当运动参数指示终端处于运动状态时,通过拍摄部件获取预览图像,并获取预览图像对应的曝光参数值。即,当终端处于待拍摄图像的状态下,终端将通过运动传感器采集终端当前的运动参数。在一种可能的方式中,运动参数包括速度、加速度、角速度和角加速度中至少一种。当运动参数指示终端处于运动状态时,终端通过拍摄部件获取图像,并获取预览图像对应的曝光参数值。
在一种通过运动参数确定终端处于运动状态的方式中,终端将运动参数和运动参数模型进行比较,获取相似度,该相似度用于指示运动参数和运动参数模型之间的相似程度。在一种相似度可能表示的方式中,运动参数可以通过小数或者百分比的形式表示。当该相似度大于相似阈值时,终端确定终端自身处于运动状态。例如,当相似阈值为80%,相似度为86%时,终端确定自身处于运动状态。
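A minimal sketch of the similarity check described above, assuming the motion parameters are sampled as a fixed-length numeric sequence and each motion parameter model is a stored reference sequence; the cosine-similarity measure is an illustrative assumption (the text only says the similarity may be expressed as a decimal or a percentage), and the 0.8 threshold is the text's own 80% example:

```python
import math

SIMILARITY_THRESHOLD = 0.8  # the 80% threshold from the text's example

def similarity(motion_params, model_params):
    """Cosine similarity between a sampled motion-parameter sequence and a
    stored motion-parameter model (illustrative similarity measure)."""
    dot = sum(a * b for a, b in zip(motion_params, model_params))
    na = math.sqrt(sum(a * a for a in motion_params))
    nb = math.sqrt(sum(b * b for b in model_params))
    return dot / (na * nb) if na and nb else 0.0

def is_moving(motion_params, models):
    """The terminal is judged to be moving when the similarity to any model
    (walking, cycling, riding a vehicle) exceeds the threshold."""
    return any(similarity(motion_params, m) > SIMILARITY_THRESHOLD
               for m in models)
```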
需要说明的是，运动参数模型可以是终端中预设的一个运动参数在时间轴上变化的函数模型，也可以是终端中预设的一组运动参数在时间轴上变化的函数模型。在一种可能的方式中，运动参数模型包括步行运动模型、骑行运动模型和搭乘载具运动模型中至少一种。其中，步行运动模型用于指示终端随用户步行时的运动参数随时间变化的情况；骑行运动模型用于指示终端随用户骑行时的运动参数随时间变化的情况；搭乘载具运动模型用于指示终端随用户乘坐或驾驶车辆、有轨列车、船舶或飞行器时的运动参数随时间变化的情况。需要说明的是，能够指示终端处于运动状态的运动模型都在本发明的保护范围内，上述具体的运动模型仅为示例性说明，不对本发明的保护范围形成限定。
当终端在确定自身处于运动状态时，再启动本发明提供的图像拍摄方法，能够避免在无需处理抖动场景时执行本公开的算法流程，在保证拍摄图像效果的前提下，节省了终端的能耗。
步骤102,根据预览图像、曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,图像拍摄参数包括合成张数。
在本实施例中,终端中可以预先存储有经过训练的图像拍摄参数预测模型,其中,图像拍摄参数预测模型可以用于根据终端当前获取到的预览图像和预览图像对应的曝光参数值,预测当前模糊场景下的图像拍摄参数值。每获取到预览图像和该预览图像对应的曝光参数值后,终端可以将该预览图像和其对应的曝光参数值输入预先训练好的图像拍摄参数预测模型中,得到该图像拍摄参数预测模型的输出,即可以得到当前模糊场景下的图像拍摄参数值。可选的,获取到预览图像和该预览图像对应的曝光参数值后,终端可以将预览图像的数据作为图像数据参数的参数值,将该预览图像对应的曝光参数值作为曝光参数的参数值,带入图像拍摄参数预测模型,得到当前模糊场景下的图像拍摄参数值,得到合成张数。
可选的,上述图像拍摄参数预测模型可以是终端或服务器预先训练出的,相应的,训练过程可以如下:根据预先存储的训练集中的各个预览图像、曝光参数值、图像拍摄参数值的对应关系,基于通过图像拍摄参数预测模型预测得到的图像拍摄参数值,趋近于预先存储的与预览图像、曝光参数值相对应的图像拍摄参数值的训练原则,对图像拍摄参数预测模型进行训练,得到训练后的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型。
在本实施例中,终端中可以预先存储有训练集。其中,训练集中可以包括各个预览图像、曝光参数值和图像拍摄参数值的对应关系,其中,对应关系中的每个对应关系项中的图像拍摄参数值可以是,在该对应关系项中的预览图像和曝光参数值表达的场景下,能使合成后的图像质量达到最优的图像拍摄参数的数值。终端可以根据预先存储的训练集对包含待定参数的图像拍摄参数预测模型进行训练,即终端基于通过图像拍摄参数预测模型预测得到的预览图像、曝光参数值对应的图像拍摄参数值,趋近于预先存储的预览图像、曝光参数值对应的图像拍摄参数值的训练原则,对图像拍摄参数预测模型进行训练。可选的,对于预览图像、曝光参数值和图像拍摄参数值的对应关系中的每个对应关系项,终端可以将该对应关系项中的预览图像和曝光参数值输入到包含待定参数的图像拍摄参数预测模型中,得到包含待定参数的图像拍摄参数值,进而,可以基于得到的待定参数的图像拍摄参数值趋近于该对应关系项中的图像拍摄参数值的训练原则,得到目标函数。比如,该目标函数可以是得到的包含待定参数的图像拍摄参数值减去该对应关系项中的图像拍摄参数值的函数。得到目标函数后,终端可以通过梯度下降法,得到待定参数的训练值,并将该训练值作为根据下一个对应关系项进行训练时待定参数对应的参数值。以此类推,训练结束后,即可得到待定参数的训练值。另外,上述图像拍摄参数预测模型可以为卷积神经网络模型,此种情况下,待定参数可以是神经网络模型中的各个卷积核。
可选的,上述每个对应关系中的每个对应关系项可以是根据预设合成张数和多个曝光参数值对应的合成后的合成图像的图像质量选取出的,相应的,处理过程可以如下:当前模糊场景下的图像拍摄参数还包括曝光参数;通过拍摄部件获取第一预览图像,并获取第一预览图像对应的第一曝光参数值;根据第一曝光参数值以及预设数目个衰减百分比,确定预设数目个曝光参数值;分别根据预设数目个曝光参数值中的每个曝光参数值,通过拍摄部件获取预览图像,得到每个曝光参数值对应的预览图像;对于预先存储的多个预设合成张数中的每个预设合成张数和预设数目个曝光参数值中的每个曝光参数值对应的预览图像,选取第一预览图像和重复选取预设合成张数减一个该曝光参数值对应的预览图像,得到预设合成张数与该曝光参数值中的每个曝光参数值对应的目标预览图像集合;对目标预览图像集合,进行图像合成处理,得到预设合成张数与该曝光参数值对应的合成图像;在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值;将第一预览图像、第一曝光参数值、目标图像拍摄参数值对应存储到训练集中。
在实施中,上述对应关系中的每个对应关系项,可以是由终端根据获取到第一预览图像和预设数目个不同曝光参数值对应的预览图像确定出的,其中,不同对应关系项可以是终端分别在不同场景下获取到的第一预览图像和预设数目个预览图像确定出的,比如,第一对应关系项是终端拍摄以第一运动速度运动的物体的场景下获取到的第一预览图像和预设数目个预览图像确定出的,第二对应关系项是终端拍摄以第二运动速度运动的物体的场景下获取到的第一预览图像和预设数目个预览图像确定出的。
需要说明的是,上述对应关系中的各对应关系项中,对应的图像拍摄参数中的合成张数是1的各对应关系项,图像拍摄参数值中的曝光参数值可以是预设数值(比如为0),对应的图像拍摄参数中的合成张数大于1的各对应关系项,图像拍摄参数值中的曝光参数值可以是根据第一曝光参数值确定出的预设数目个曝光参数值中的某曝光参数值。下面对其中的某个对应关系项的确定过程进行详细表述,其他对应关系项的确定过程与之相同。
可选的,在该场景下,终端可以通过拍摄部件获取预览图像(该预览图像可以称为第一预览图像),其中,预览图像是拍摄部件直接获取的,未经图像合成处理的图像,进而,可以获取第一预览图像对应的曝光参数值(可称为第一曝光参数值),其中,第一曝光参数值是终端根据环境亮度和环境中光源颜色确定出的,未经调整的数值。终端中可以预先存储有预设数目个衰减百分比,当获取到第一预览图像对应的第一曝光参数值后,可以分别将预设数目个衰减百分比中的每个衰减百分比与第一曝光参数值相乘,得到预设数目个曝光参数值。例如,预设数目个衰减百分比分别是80%、60%、40%,第一曝光参数值为A,则可以得到的预设数目个曝光参数值分别是A*80%、A*60%、A*40%。得到预设数目个曝光参数值后,可以分别基于预设数目个曝光参数值中的每个曝光参数值,通过拍摄部件获取预览图像,得到预设数目个曝光参数值中的每个曝光参数值对应的预览图像,即得到的预设数目个预览图像是基于调整后的曝光参数值拍摄得到的,不是根据环境亮度和环境中光源颜色确定出的。
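The derivation of the preset number of exposure values can be written directly; the attenuation percentages 80%, 60%, 40% are the text's own example:

```python
def attenuated_exposure_values(first_exposure, attenuation_percentages):
    """Multiply the first exposure parameter value by each preset
    attenuation percentage to obtain the preset number of exposure
    parameter values (e.g. A*80%, A*60%, A*40%)."""
    return [first_exposure * p for p in attenuation_percentages]
```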
终端中可以预先存储有多个预设合成张数,获取到第一预览图像和预设数目个预览图像后,对于多个预设合成张数中的数值大于1的每个预设合成张数和预设数目个曝光参数值中的每个曝光参数值对应的预览图像,可以选取第一预览图像和重复选取该预设合成张数减一个该曝光参数值对应的预览图像,得到该预设合成张数和该曝光参数值对应的目标预览图像集合,即数值大于1的每个预设合成张数对应预设数目个目标预览图像集合,对于数值为1的预设合成张数,将第一预览图像确定为该预设合成张数对应的目标预览图像集合,其中,对该目标预览图像集合进行图像合成处理时得到的合成图像对应的曝光参数值可以是预设数值。例如,终端获取到的第一预览图像为图像1和第一曝光参数值为A,得到的预设数目个预览图像分别为图像2、图像3、图像4,图像2、图像3、图像4分别对应的曝光参数值为B、C、D,多个预设合成张数分别为1、2、3,则对于预设合成张数1,终端可以选取图像1,得到预设合成张数1对应的目标预览图像集合;对于预设合成张数2,终端可以选取图像1和选取1个图像2,得到预设合成张数2和曝光参数值B对应的目标预览图像集合,终端可以选取图像1和选取1个图像3,得到预设合成张数2和曝光参数值C对应的目标预览图像集合,终端可以选取图像1和选取1个图像4,得到预设合成张数2和曝光参数值D对应的目标预览图像集合;对于预设合成张数3,终端可以选取图像1和重复选取2个图像2,得到预设合成张数3和曝光参数值B对应的目标预览图像集合,终端可以选取图像1和重复选取2个图像3,得到预设合成张数3和曝光参数值C对应的目标预览图像集合,终端可以选取图像1和重复选取2个图像4,得到预设合成张数3和曝光参数值D对应的目标预览图像集合,如图2所示。
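The construction of target preview image sets in the example above (the first preview image plus the preset count minus one repetitions of the preview image for each attenuated exposure value; a count of 1 yields the first preview image alone) can be sketched as:

```python
def build_target_sets(first_preview, previews_by_exposure, preset_counts):
    """previews_by_exposure: {exposure_value: preview_image}.
    Returns {(count, exposure_value): [images to composite]}.
    For count == 1 the set is just the first preview image; the text pairs
    it with a preset exposure value, represented here as None."""
    sets = {}
    for n in preset_counts:
        if n == 1:
            sets[(1, None)] = [first_preview]
            continue
        for exposure, preview in previews_by_exposure.items():
            # first preview plus n-1 repetitions of this exposure's preview
            sets[(n, exposure)] = [first_preview] + [preview] * (n - 1)
    return sets
```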
对于预先存储的多个预设合成张数中的每个预设合成张数和预设数目个曝光参数值中的每个曝光参数值对应的预览图像,得到预设合成张数与该曝光参数值对应的目标预览图像集合后,可以对目标预览图像集合进行图像合成处理,即可以对目标预览图像集合中的各预览图像进行图像合成处理,得到该预设合成张数与该曝光参数值对应的合成图像。得到所有合成图像(所有合成图像包括所有预设合成张数对应的合成图像)后,可以计算各个合成图像对应的图像质量(比如清晰度),进而,可以在各个合成图像中,确定对应的图像质量最优的合成图像(可称为目标合成图像),并可以确定目标合成图像对应的预设合成张数(可称为目标预设合成张数)、曝光参数值(可称为目标曝光参数值),其中,当目标预设合成张数为1时,目标曝光参数值为预设数值,当目标预设合成张数大于1时,目标曝光参数值是预设数目个曝光参数值中的某曝光参数值。确定出目标预设合成张数和目标曝光参数值后,可以将确定出的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。得到第一预览图像、第一曝光参数值和目标图像拍摄参数值后,可以将三者对应存储到训练集中。对于其他场景,终端也可以按照上述处理过程,得到相应的预览图像、曝光参数值和对应的图像拍摄参数值、以此得到训练集中的各个对应关系项。
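Selecting the target image capturing parameter values — the preset count and exposure value whose composite image has the best quality — and storing the training-set entry can be sketched as follows; the quality scores are assumed inputs from some image-quality metric such as sharpness, which the text does not specify:

```python
def select_target_params(composites):
    """composites: {(count, exposure_value): quality_score}.
    Returns the (count, exposure_value) pair whose composite image has
    the best (highest) quality."""
    return max(composites, key=composites.get)

def add_training_entry(training_set, first_preview, first_exposure, composites):
    """Store (first preview image, first exposure value, target capturing
    parameter values) as one correspondence entry of the training set."""
    training_set.append(
        (first_preview, first_exposure, select_target_params(composites)))
```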
可选的,图像拍摄参数还可以包括终端性能参数,相应的,在确定训练集时,可以确定预览图像、曝光参数对应的合成张数和终端性能参数值,相应的,终端可以进行如下处理:分别基于多个预设终端性能参数值中的每个预设终端性能参数值,对目标预览图像集合,进行图像合成处理,得到预设合成张数、该曝光参数值与每个预设终端性能参数值对应的合成图像。相应的,终端可以进行如下处理:在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将目标合成图像对应的目标预设合成张数、目标曝光参数值和目标预设终端性能参数值确定为目标图像拍摄参数值。
需要说明的是，在一种可能的实现方式中，多个预设终端性能参数值中的每个预设终端性能参数值，可以是两个预设终端性能参数值中的每个预设终端性能参数值。在另一种可能的实现方式中，多个预设终端性能参数值中的每个预设终端性能参数值，可以是至少三个预设终端性能参数值中的每个预设终端性能参数值。
其中,终端性能参数可以是影响终端性能的参数,比如可以是CPU运行频率,CPU运行频率也可称为CPU主频。
在本实施例中,终端中可以预先存储有多个预设终端性能参数值。针对图像拍摄参数还包括终端性能参数的情况,对于多个预设合成张数中的每个预设合成张数和预设数目个曝光参数值中的每个曝光参数值,确定出该预设合成张数与该曝光参数值对应的目标预览图像集合后,终端可以分别基于多个预设终端性能参数值中的每个预设终端性能参数值,对目标预览图像集合进行图像合成处理,得到该预设合成张数、该曝光参数值与每个预设终端性能参数值对应的合成图像。也就是说,每个预设合成张数、每个曝光参数值与每个预设终端性能参数值均对应有合成图像。例如,某预设合成张数为2、曝光参数值为B,多个预设终端性能参数值分别为a、b,则对于预设合成张数2和曝光参数值为B对应的目标预览图像集合,终端可以在终端性能参数为a的情况下(即终端可以将终端性能参数设置为a),对目标预览图像集合进行图像合成处理,得到预设合成张数2、B和a对应的合成图像,终端还可以在终端性能参数为b的情况下,对目标预览图像集合进行图像合成处理,得到预设合成张数2、B和b对应的合成图像,如图3所示。
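Generating one composite per preset terminal performance value for the same target preview image set can be sketched as follows; the `synthesize` callable is an illustrative stand-in for running the image synthesis with the terminal performance parameter (e.g. CPU frequency) set to a given value:

```python
def composites_by_performance(target_set, perf_values, synthesize):
    """For each preset terminal performance value, run the synthesis on
    the same target preview image set, yielding one composite image per
    (count, exposure, performance) combination."""
    return {perf: synthesize(target_set, perf) for perf in perf_values}
```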
得到所有合成图像后,终端可以计算每个合成图像的图像质量,并可以在多个合成图像中,确定对应的图像质量最优的目标合成图像,进而,可以将目标合成图像对应的目标预设合成张数、目标曝光参数值和目标预设终端性能参数值确定为目标图像拍摄参数值,进而,可以将三者(即第一预览图像、第一曝光参数值、目标图像拍摄参数值)对应存储到训练集中。
可选的,终端在对每个目标预览图像集合进行图像合成处理时,还可以记录此次图像合成处理所消耗的功耗值。相应的,终端确定目标图像拍摄参数值的处理可以如下:记录得到预设合成张数与曝光参数值对应的合成图像时所消耗的功耗值。在得到的所有合成图像中,确定对应的图像质量和功耗值综合最优的目标合成图像,将目标合成图像对应的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。
在实施中，对于预先存储的多个预设合成张数中的每个预设合成张数和预设数目个曝光参数值中的每个曝光参数值对应的预览图像，得到该预设合成张数与该曝光参数值对应的目标预览图像集合后，可以对目标预览图像集合进行图像合成处理，得到该预设合成张数与该曝光参数值对应的合成图像，并可以记录此次图像合成处理所消耗的功耗值，即可以记录得到该预设合成张数与该曝光参数值对应的合成图像时所消耗的功耗值，其中，功耗值可以是以下数值中的一个或多个：消耗的电量、消耗的时长。此种情况下，得到所有合成图像和对应的功耗值后，终端可以在得到的合成图像中，确定对应的图像质量和功耗值综合最优的目标合成图像（其中，可以将图像质量与功耗值的商最大的合成图像确定为目标合成图像），进而，可以将目标合成图像对应的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。
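The combined quality/power selection — the text takes the composite whose quotient of image quality over power consumption is largest — can be sketched as:

```python
def select_by_quality_and_power(composites):
    """composites: {(count, exposure_value): (quality, power_consumed)}.
    The target composite maximizes quality / power, i.e. the best image
    quality per unit of consumed power (or consumed time)."""
    return max(composites,
               key=lambda k: composites[k][0] / composites[k][1])
```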
另外,需要说明的是,对预设合成张数1对应的目标预览图像集合进行图像合成处理时,即可将目标预览图像集合中的第一预览图像作为合成图像。
步骤103,当接收到拍摄指令时,根据预测出的图像拍摄参数值,执行拍摄图像处理。
在本实施例中,在终端处于待拍摄图像的状态下,当用户想要进行拍照时,可以点击拍摄按键,此时,终端将会接收到拍摄按键的点击指令,进而,可以根据预测出的当前模糊场景下的图像拍摄参数值和抖动合成算法,执行拍摄图像处理,其中,预测出的当前模糊场景下的图像拍摄参数值可以是抖动合成算法中的图像拍摄参数的数值,其中,当预测出的合成张数为1时,终端可以通过拍摄部件获取一个预览图像,此种情况下,获取到的该预览图像即是最终的合成图像。需要说明的是,此种情况相当于终端没有开启抖动合成功能。可选的,当图像拍摄参数包括合成张数时,终端可以获取第二预览图像,并基于合成张数减一个预设的曝光参数值中的每个曝光参数值,拍摄一个预览图像,并对获取的第二预览图像和合成张数减一个预览图像进行图像合成处理,得到最终的图像,并将其存储到图库中。当图像拍摄参数包括合成张数和终端性能参数时,终端可以获取第二预览图像,并基于合成张数减一个预设的曝光参数值中的每个曝光参数值,拍摄一个预览图像,并可以将终端性能参数设置为预测出的终端性能参数值,进而,可以基于预测出的终端性能参数值,对获取的第二预览图像和合成张数减一个预览图像进行图像合成处理,得到最终的图像,并将其存储到图库中。由此可知,如果预测出的合成张数为1时,终端即可只获取第二预览图像,并将其作为最终的图像,对其进行存储。
另外，针对步骤101中，每到预设的获取周期，通过拍摄部件获取预览图像，并获取预览图像对应的曝光参数值的情况，每当获取到预览图像时，均可按照步骤102的方式，确定当前模糊场景下(即当前获取周期对应)的图像拍摄参数值，每当在当前获取周期内接收到拍摄指令时，终端均可以根据预测出的当前获取周期对应的图像拍摄参数值，执行拍摄图像处理。
可选的,针对图像拍摄参数还包括曝光参数的情况,相应的,步骤103的处理过程可以如下:当接收到拍摄指令时,通过拍摄部件获取第二预览图像,并基于预测出的曝光参数值,通过拍摄部件连续获取预测出的合成张数减一个预览图像;对第二预览图像和合成张数减一个预览图像进行图像合成处理,得到合成图像。
在本实施例中,一般情况下,曝光时长越大,拍摄的图像的亮度越大。对于运动的物体,曝光时长越长,拍摄的图像越模糊,曝光时长越短,拍摄的图像的亮度偏暗。因此,本方案中,还可以通过图像拍摄参数预测模型预测终端执行拍摄图像处理时基于的曝光参数值,以便可以增强拍摄的图像的清晰度。也就是说,图像拍摄参数还可以包括曝光参数,此种情况下,图像拍摄参数预测模型的输入可以是预览图像、预览图像对应的曝光参数值。其中,该曝光参数值是终端根据环境亮度和环境中光源颜色确定出的,未经调整。图像拍摄参数预测模型的输出可以是预测的合成张数和预测的曝光参数值,其中,预测的曝光参数值可以是用于终端执行拍摄图像处理的曝光参数值,预测的曝光参数值小于预览图像对应的曝光参数值。
可选的,用户打开拍照应用程序后,想要进行拍照时,可以点击拍照按键,此时,终端将会接收到拍摄指令,进而,终端通过拍摄部件获取预览图像(可称为第二预览图像),并可以获取预测出的曝光参数值和合成张数,其中,第二预览图像可以是接收到拍摄指令时终端通过拍摄部件获取到的图像。获取到预测出的曝光参数值和合成张数后,终端可以基于预测出的曝光参数值,通过拍摄部件连续获取预测出的合成张数减一个预览图像,进而,可以对第二预览图像和合成张数减一个预览图像进行图像合成处理,得到合成图像,并将其存储到图库中,以便用户查看。其中,终端可以不对获取的预览图像进行存储。在此情况下,用户查看不到预览图像,终端利用预览图像得到合成图像。由此可知,如果合成张数为1,则终端只获取第二预览图像,此种情况下,预测出的曝光参数值没有实质意义。
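The capture flow of step 103 can be sketched as follows; the `camera.capture` interface and the pixel-averaging composite are illustrative assumptions (the actual jitter-synthesis algorithm is not specified by the text):

```python
def shoot(camera, predicted_count, predicted_exposure):
    """On a shooting instruction: grab the second preview image, then
    continuously capture predicted_count - 1 previews at the predicted
    exposure value, and composite them into the final image."""
    frames = [camera.capture()]                # the second preview image
    for _ in range(predicted_count - 1):
        frames.append(camera.capture(exposure=predicted_exposure))
    if len(frames) == 1:                       # count of 1: no synthesis
        return frames[0]
    # illustrative composite: pixel-wise average of the captured frames
    return [sum(px) / len(frames) for px in zip(*frames)]
```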
本发明实施例中,在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取预览图像对应的曝光参数值;根据预览图像,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,当前模糊场景下的图像拍摄参数包括合成张数;当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。这样,终端可以自动计算出当前模糊场景下的合成张数,进而,可以基于合成张数执行拍摄图像处理,无需用户手动开启抖动合成功能,从而,可以提高拍照的效率。
基于相同的技术构思,本发明实施例还提供了一种拍摄图像的装置,如图4所示,该装置包括:
第一获取模块410,用于在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;
预测模块420,用于根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述图像拍摄参数包括合成张数;
执行模块430,用于当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。
可选的,所述图像拍摄参数还包括曝光参数。
可选的,如图5所示,所述装置还包括:
训练模块440,用于根据预先存储的训练集中的各个预览图像、曝光参数值、图像拍摄参数值的对应关系,基于通过图像拍摄参数预测模型预测得到的图像拍摄参数值趋近于预先存储的与预览图像、曝光参数值相对应的图像拍摄参数值的训练原则,对所述图像拍摄参数预测模型进行训练,得到训练后的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型。
可选的,如图6所示,所述装置还包括:
第二获取模块450,用于通过拍摄部件获取第一预览图像,并获取所述第一预览图像对应的第一曝光参数值;
第一确定模块460,用于根据所述第一曝光参数值以及预设数目个衰减百分比,确定预设数目个曝光参数值;
第三获取模块470,用于分别根据预设数目个曝光参数值中的每个曝光参数值,通过拍摄部件获取预览图像,得到每个曝光参数值对应的预览图像;
第二确定模块480,用于对于预先存储的多个预设合成张数中的每个预设合成张数和所述预设数目个曝光参数值中的每个曝光参数值对应的预览图像,选取所述第一预览图像和重复选取所述预设合成张数减一个所述曝光参数值对应的预览图像,得到所述预设合成张数与所述曝光参数值对应的目标预览图 像集合;对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数与所述曝光参数值对应的合成图像;
第三确定模块490,用于在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值;
存储模块4100,用于将所述第一预览图像、所述第一曝光参数值、所述目标图像拍摄参数值对应存储到所述训练集中。
可选的,所述第二确定模块480,用于:
分别基于多个预设终端性能参数值中的每个预设终端性能参数值,对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数、所述曝光参数值与每个预设终端性能参数值对应的合成图像;
所述第三确定模块490,用于:
在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值和目标预设终端性能参数值确定为目标图像拍摄参数值。
可选的,如图7所示,所述装置还包括:
记录模块4110,用于记录得到所述预设合成张数与所述曝光参数值对应的合成图像时所消耗的功耗值;
所述第三确定模块490,用于:
在得到的所有合成图像中,确定对应的图像质量和功耗值综合最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。
可选的,所述终端中包含运动传感器,所述第一获取模块,用于:
在终端处于待拍摄图像的状态下,通过所述运动传感器采集所述终端当前的运动参数,所述运动参数用于指示所述终端的运动状态;
当所述运动参数指示终端处于运动状态时,通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
可选的,所述第一获取模块,用于:
将所述运动参数和运动参数模型进行比较，获取相似度，所述相似度用于指示所述运动参数和所述运动参数模型之间的相似程度；
当所述相似度大于相似阈值时,确定所述终端处于运动状态;
通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
可选的，所述运动参数模型包括步行运动模型、骑行运动模型和搭乘载具运动模型中至少一种。
本发明实施例中,在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取预览图像对应的曝光参数值;根据预览图像,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,图像拍摄参数包括合成张数;当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。这样,终端可以自动计算出当前模糊场景下的合成张数,进而,可以基于合成张数执行拍摄图像处理,无需用户手动开启抖动合成功能,从而,可以提高拍照的效率。
需要说明的是:上述实施例提供的拍摄图像的装置在拍摄图像时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将终端的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的拍摄图像的装置与拍摄图像的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
参考图8和图9所示,其示出了本申请一个示例性实施例提供的终端100的结构方框图。该终端100可以是手机、平板电脑、笔记本电脑和电子书等。本申请中的终端100可以包括一个或多个如下部件:处理器110、存储器120和触摸显示屏130。
处理器110可以包括一个或者多个处理核心。处理器110利用各种接口和线路连接整个终端100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行终端100的各种功能和处理数据。可选地,处理器110可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器110可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责触摸显示屏130所需要显示的内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器110中,单独通过一块芯片进行实现。
存储器120可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。可选地,该存储器120包括非瞬时性计算机可读介质(non-transitory computer-readable storage medium)。存储器120可用于存储指令、程序、代码、代码集或指令集。存储器120可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等;存储数据区可存储根据终端100的使用所创建的数据(比如音频数据、电话本)等。
以操作系统为安卓(Android)系统为例,存储器120中存储的程序和数据如图8所示,存储器120中存储有Linux内核层220、系统运行库层240、应用框架层260和应用层280。Linux内核层220为终端100的各种硬件提供了底层的驱动,如显示驱动、音频驱动、摄像头驱动、蓝牙驱动、Wi-Fi驱动、电源管理等。系统运行库层240通过一些C/C++库来为Android系统提供了主要的特性支持。如SQLite库提供了数据库的支持,OpenGL/ES库提供了3D绘图的支持,Webkit库提供了浏览器内核的支持等。在系统运行库层240中还提供有Android运行时库(Android Runtime),它主要提供了一些核心库,能够允许开发者使用Java语言来编写Android应用。应用框架层260提供了构建应用程序时可能用到的各种API,开发者也可以通过使用这些API来构建自己的应用程序,比如活动管理、窗口管理、视图管理、通知管理、内容提供者、包管理、通话管理、资源管理、定位管理。应用层280中运行有至少一个应用程序,这些应用程序可以是操作系统自带的联系人程序、短信程序、时钟程序、相机应用等;也可以是第三方开发者所开发的应用程序,比如即时通信程序、相片美化程序等。
以操作系统为IOS系统为例,存储器120中存储的程序和数据如图9所示,IOS系统包括:核心操作系统层320(Core OS layer)、核心服务层340(Core Services layer)、媒体层360(Media layer)、可触摸层380(Cocoa Touch Layer)。核心操作系统层320包括了操作系统内核、驱动程序以及底层程序框架,这些底层程序框架提供更接近硬件的功能,以供位于核心服务层340的程序框架所使用。核心服务层340提供给应用程序所需要的系统服务和/或程序框架,比如基础(Foundation)框架、账户框架、广告框架、数据存储框架、网络连接框架、地理位置框架、运动框架等等。媒体层360为应用程序提供有关视听方面的接口,如图形图像相关的接口、音频技术相关的接口、视频技术相关的接口、音视频传输技术的无线播放(AirPlay)接口等。可触摸层380为应用程序开发提供了各种常用的界面相关的框架,可触摸层380负责用户在终端100上的触摸交互操作。比如本地通知服务、远程推送服务、广告框架、游戏工具框架、消息用户界面接口(User Interface,UI)框架、用户界面UIKit框架、地图框架等等。
在图9所示出的框架中,与大部分应用程序有关的框架包括但不限于:核心服务层340中的基础框架和可触摸层380中的UIKit框架。基础框架提供许多基本的对象类和数据类型,为所有应用程序提供最基本的系统服务,和UI无关。而UIKit框架提供的类是基础的UI类库,用于创建基于触摸的用户界面,iOS应用程序可以基于UIKit框架来提供UI,所以它提供了应用程序的基础架构,用于构建用户界面,绘图、处理和用户交互事件,响应手势等等。
触摸显示屏130用于接收用户使用手指、触摸笔等任何适合的物体在其上或附近的触摸操作,以及显示各个应用程序的用户界面。触摸显示屏130通常设置在终端130的前面板。触摸显示屏130可被设计成为全面屏、曲面屏或异型屏。触摸显示屏130还可被设计成为全面屏与曲面屏的结合,异型屏与曲面屏的结合,本实施例对此不加以限定。其中:
全面屏
全面屏可以是指触摸显示屏130占用终端100的前面板的屏占比超过阈值(比如80%或90%或95%)的屏幕设计。屏占比的一种计算方式为:(触摸显示屏130的面积/终端100的前面板的面积)*100%;屏占比的另一种计算方式为:(触摸显示屏130中实际显示区域的面积/终端100的前面板的面积)*100%;屏占比的再一种计算方式为:(触摸显示屏130的对角线/在终端100的前面板的对角线)*100%。示意性的如图10所示的例子中,终端100的前面板上近乎所有区域均为触摸显示屏130,在终端100的前面板40上,除中框41所产生的边缘之外的其它区域,全部为触摸显示屏130。该触摸显示屏130的四个角可以是直角或者圆角。
全面屏还可以是将至少一种前面板部件集成在触摸显示屏130内部或下层的屏幕设计。可选地,该至少一种前面板部件包括:摄像头、指纹传感器、接近光传感器、距离传感器等。在一些实施例中,将传统终端的前面板上的其他部件集成在触摸显示屏130的全部区域或部分区域中,比如将摄像头中的感光元件拆分为多个感光像素后,将每个感光像素集成在触摸显示屏130中每个显示像素中的黑色区域中。由于将至少一种前面板部件集成在了触摸显示屏130的内部,所以全面屏具有更高的屏占比。
当然在另外一些实施例中,也可以将传统终端的前面板上的前面板部件设置在终端100的侧边或背面,比如将超声波指纹传感器设置在触摸显示屏130的下方、将骨传导式的听筒设置在终端100的内部、 将摄像头设置成位于终端100的侧边且可插拔的结构。
在一些可选的实施例中,当终端100采用全面屏时,终端100的中框的单个侧边,或两个侧边(比如左、右两个侧边),或四个侧边(比如上、下、左、右四个侧边)上设置有边缘触控传感器120,该边缘触控传感器120用于检测用户在中框上的触摸操作、点击操作、按压操作和滑动操作等中的至少一种操作。该边缘触控传感器120可以是触摸传感器、热力传感器、压力传感器等中的任意一种。用户可以在边缘触控传感器120上施加操作,对终端100中的应用程序进行控制。
曲面屏
曲面屏是指触摸显示屏130的屏幕区域不处于一个平面内的屏幕设计。一般的,曲面屏至少存在这样一个截面:该截面呈弯曲形状,且曲面屏在沿垂直于该截面的任意平面方向上的投影为平面的屏幕设计,其中,该弯曲形状可以是U型。可选地,曲面屏是指至少一个侧边是弯曲形状的屏幕设计方式。可选地,曲面屏是指触摸显示屏130的至少一个侧边延伸覆盖至终端100的中框上。由于触摸显示屏130的侧边延伸覆盖至终端100的中框,也即将原本不具有显示功能和触控功能的中框覆盖为可显示区域和/或可操作区域,从而使得曲面屏具有了更高的屏占比。可选地,如图11所示的例子中,曲面屏是指左右两个侧边42是弯曲形状的屏幕设计;或者,曲面屏是指上下两个侧边是弯曲形状的屏幕设计;或者,曲面屏是指上、下、左、右四个侧边均为弯曲形状的屏幕设计。在可选的实施例中,曲面屏采用具有一定柔性的触摸屏材料制备。
异型屏
异型屏是外观形状为不规则形状的触摸显示屏,不规则形状不是矩形或圆角矩形。可选地,异型屏是指在矩形或圆角矩形的触摸显示屏130上设置有凸起、缺口和/或挖孔的屏幕设计。可选地,该凸起、缺口和/或挖孔可以位于触摸显示屏130的边缘、屏幕中央或两者均有。当凸起、缺口和/或挖孔设置在一条边缘时,可以设置在该边缘的中间位置或两端;当凸起、缺口和/或挖孔设置在屏幕中央时,可以设置在屏幕的上方区域、左上方区域、左侧区域、左下方区域、下方区域、右下方区域、右侧区域、右上方区域中的一个或多个区域中。当设置在多个区域中时,凸起、缺口和挖孔可以集中分布,也可以分散分布;可以对称分布,也可以不对称分布。可选地,该凸起、缺口和/或挖孔的数量也不限。
由于异型屏将触摸显示屏的上额区和/或下额区覆盖为可显示区域和/或可操作区域,使得触摸显示屏在终端的前面板上占据更多的空间,所以异型屏也具有更大的屏占比。在一些实施例中,缺口和/或挖孔中用于容纳至少一种前面板部件,该前面板部件包括摄像头、指纹传感器、接近光传感器、距离传感器、听筒、环境光亮度传感器、物理按键中的至少一种。
示例性的,该缺口可以设置在一个或多个边缘上,该缺口可以是半圆形缺口、直角矩形缺口、圆角矩形缺口或不规则形状缺口。示意性的如图12所示的例子中,异型屏可以是在触摸显示屏130的上边缘的中央位置设置有半圆形缺口43的屏幕设计,该半圆形缺口43所空出的位置用于容纳摄像头、距离传感器(又称接近传感器)、听筒、环境光亮度传感器中的至少一种前面板部件;示意性的如图13所示,异型屏可以是在触摸显示屏130的下边缘的中央位置设置有半圆形缺口44的屏幕设计,该半圆形缺口44所空出的位置用于容纳物理按键、指纹传感器、麦克风中的至少一种部件;示意性的如图14所示的例子中,异型屏可以是在触摸显示屏130的下边缘的中央位置设置有半椭圆形缺口45的屏幕设计,同时在终端100的前面板上还形成有一个半椭圆型缺口,两个半椭圆形缺口围合成一个椭圆形区域,该椭圆形区域用于容纳物理按键或者指纹识别模组;示意性的如图15所示的例子中,异型屏可以是在触摸显示屏130中的上半部中设置有至少一个小孔45的屏幕设计,该小孔45所空出的位置用于容纳摄像头、距离传感器、听筒、环境光亮度传感器中的至少一种前面板部件。
除此之外,本领域技术人员可以理解,上述附图所示出的终端100的结构并不构成对终端100的限定,终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。比如,终端100中还包括射频电路、输入单元、传感器、音频电路、无线保真(Wireless Fidelity,WiFi)模块、电源、蓝牙模块等部件,在此不再赘述。
本发明实施例中,在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取预览图像对应的曝光参数值;根据预览图像,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,图像拍摄参数包括合成张数;当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。这样,终端可以自动计算出当前模糊场景下的合成张数,进而,可以基于合成张数执行拍摄图像处理,无需用户手动开启抖动合成功能,从而,可以提高拍照的效率。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质 可以是只读存储器,磁盘或光盘等。
以上所述仅为本发明的较佳实施例,并不用以限制本发明,凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。

Claims (20)

  1. 一种拍摄图像的方法,其特征在于,所述方法包括:
    在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;
    根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述当前模糊场景下的图像拍摄参数包括合成张数;
    当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。
  2. 根据权利要求1所述的方法,其特征在于,所述当前模糊场景下的图像拍摄参数还包括曝光参数。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    根据预先存储的训练集中的各个预览图像、曝光参数值、图像拍摄参数值的对应关系,基于通过图像拍摄参数预测模型预测得到的图像拍摄参数值趋近于预先存储的与预览图像、曝光参数值相对应的图像拍摄参数值的训练原则,对所述图像拍摄参数预测模型进行训练,得到训练后的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    通过所述拍摄部件获取第一预览图像,并获取所述第一预览图像对应的第一曝光参数值;
    根据所述第一曝光参数值以及预设数目个衰减百分比,确定预设数目个曝光参数值;
    分别根据预设数目个曝光参数值中的每个曝光参数值,通过所述拍摄部件获取预览图像,得到每个曝光参数值对应的预览图像;
    对于预先存储的多个预设合成张数中的每个预设合成张数和所述预设数目个曝光参数值中的每个曝光参数值对应的预览图像,选取所述第一预览图像和重复选取所述预设合成张数减一个所述曝光参数值对应的预览图像,得到所述预设合成张数与所述曝光参数值对应的目标预览图像集合;对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数与所述曝光参数值对应的合成图像;
    在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值;
    将所述第一预览图像、所述第一曝光参数值、所述目标图像拍摄参数值对应存储到所述训练集中。
  5. 根据权利要求4所述的方法,其特征在于,所述对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数与所述曝光参数值对应的合成图像,包括:
    分别基于多个预设终端性能参数值中的每个预设终端性能参数值,对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数、所述曝光参数值与每个预设终端性能参数值对应的合成图像;
    所述在得到的所有合成图像中，确定对应的图像质量最优的目标合成图像，将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值，包括：
    在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值和目标预设终端性能参数值确定为目标图像拍摄参数值。
  6. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    记录得到所述预设合成张数与所述曝光参数值对应的合成图像时所消耗的功耗值;
    所述在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值,包括:
    在得到的所有合成图像中,确定对应的图像质量和功耗值综合最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。
  7. 根据权利要求1所述的方法,其特征在于,所述终端中包含运动传感器,所述在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值,包括:
    在终端处于待拍摄图像的状态下,通过所述运动传感器采集所述终端当前的运动参数,所述运动参数用于指示所述终端的运动状态;
    当所述运动参数指示终端处于运动状态时,通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
  8. 根据权利要求7所述的方法,其特征在于,所述当所述运动参数指示终端处于运动状态时,通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值,包括:
    将所述运动参数和运动参数模型进行比较,获取相似度,所述相似度用于指示所述运动参数和所述运动参数模型之间的相似程度;
    当所述相似度大于相似阈值时,确定所述终端处于运动状态;
    通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
  9. 根据权利要求8所述的方法,其特征在于,所述运动参数模型包括步行运动模型、骑行运动模型和搭乘载具运动模型中至少一种。
  10. 一种拍摄图像的装置,其特征在于,所述装置包括:
    第一获取模块,用于在终端处于待拍摄图像的状态下,通过拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值;
    预测模块,用于根据所述预览图像、所述曝光参数值,以及预先训练出的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型,预测当前模糊场景下的图像拍摄参数值,其中,所述当前模糊场景下的图像拍摄参数包括合成张数;
    执行模块,用于当接收到拍摄指令时,根据预测出的所述图像拍摄参数值,执行拍摄图像处理。
  11. 根据权利要求10所述的装置,其特征在于,所述图像拍摄参数还包括曝光参数。
  12. 根据权利要求10或11所述的装置,其特征在于,所述装置还包括:
    训练模块,用于根据预先存储的训练集中的各个预览图像、曝光参数值、图像拍摄参数值的对应关系,基于通过图像拍摄参数预测模型预测得到的图像拍摄参数值趋近于预先存储的与预览图像、曝光参数值相对应的图像拍摄参数值的训练原则,对所述图像拍摄参数预测模型进行训练,得到训练后的以图像数据参数、曝光参数、图像拍摄参数为变量的图像拍摄参数预测模型。
  13. 根据权利要求12所述的装置,其特征在于,所述装置还包括:
    第二获取模块,用于通过所述拍摄部件获取第一预览图像,并获取所述第一预览图像对应的第一曝光参数值;
    第一确定模块,用于根据所述第一曝光参数值以及预设数目个衰减百分比,确定预设数目个曝光参数值;
    第三获取模块,用于分别根据预设数目个曝光参数值中的每个曝光参数值,通过拍摄部件获取预览图像,得到每个曝光参数值对应的预览图像;
    第二确定模块,用于对于预先存储的多个预设合成张数中的每个预设合成张数和所述预设数目个曝光参数值中的每个曝光参数值对应的预览图像,选取所述第一预览图像和重复选取所述预设合成张数减一个所述曝光参数值对应的预览图像,得到所述预设合成张数与所述曝光参数值对应的目标预览图像集合;对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数与所述曝光参数值对应的合成图像;
    第三确定模块,用于在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值确定为目标图像拍摄参数值;
    存储模块,用于将所述第一预览图像、所述第一曝光参数值、所述目标图像拍摄参数值对应存储到所述训练集中。
  14. 根据权利要求13所述的装置,其特征在于,所述第二确定模块,用于:
    分别基于多个预设终端性能参数值中的每个预设终端性能参数值,对所述目标预览图像集合,进行图像合成处理,得到所述预设合成张数、所述曝光参数值与每个预设终端性能参数值对应的合成图像;
    所述第三确定模块,用于:
    在得到的所有合成图像中,确定对应的图像质量最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数、目标曝光参数值和目标预设终端性能参数值确定为目标图像拍摄参数值。
  15. 根据权利要求13所述的装置,其特征在于,所述装置还包括:
    记录模块,用于记录得到所述预设合成张数与所述曝光参数值对应的合成图像时所消耗的功耗值;
    所述第三确定模块,用于:
    在得到的所有合成图像中,确定对应的图像质量和功耗值综合最优的目标合成图像,将所述目标合成图像对应的目标预设合成张数和目标曝光参数值确定为目标图像拍摄参数值。
  16. 根据权利要求10所述的装置,其特征在于,所述终端中包含运动传感器,所述第一获取模块,用于:
    在终端处于待拍摄图像的状态下,通过所述运动传感器采集所述终端当前的运动参数,所述运动参数用于指示所述终端的运动状态;
    当所述运动参数指示终端处于运动状态时,通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
  17. 根据权利要求16所述的装置,其特征在于,所述第一获取模块,用于:
    将所述运动参数和运动参数模型进行比较，获取相似度，所述相似度用于指示所述运动参数和所述运动参数模型之间的相似程度；
    当所述相似度大于相似阈值时,确定所述终端处于运动状态;
    通过所述拍摄部件获取预览图像,并获取所述预览图像对应的曝光参数值。
  18. 根据权利要求17所述的装置，其特征在于，所述运动参数模型包括步行运动模型、骑行运动模型和搭乘载具运动模型中至少一种。
  19. 一种终端,其特征在于,所述终端包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至9任一所述的拍摄图像的方法。
  20. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至9任一所述的拍摄图像的方法。
PCT/CN2018/115227 2017-11-13 2018-11-13 拍摄图像的方法、装置、终端和存储介质 WO2019091487A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18877131.5A EP3713213A4 (en) 2017-11-13 2018-11-13 PHOTOGRAPHING METHOD AND DEVICE, TERMINAL DEVICE AND STORAGE MEDIUM
US16/848,761 US11102397B2 (en) 2017-11-13 2020-04-14 Method for capturing images, terminal, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711117448.1A CN107809592B (zh) 2017-11-13 2017-11-13 拍摄图像的方法、装置、终端和存储介质
CN201711117448.1 2017-11-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/848,761 Continuation US11102397B2 (en) 2017-11-13 2020-04-14 Method for capturing images, terminal, and storage medium

Publications (1)

Publication Number Publication Date
WO2019091487A1 true WO2019091487A1 (zh) 2019-05-16

Family

ID=61592058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115227 WO2019091487A1 (zh) 2017-11-13 2018-11-13 拍摄图像的方法、装置、终端和存储介质

Country Status (4)

Country Link
US (1) US11102397B2 (zh)
EP (1) EP3713213A4 (zh)
CN (1) CN107809592B (zh)
WO (1) WO2019091487A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110475072B (zh) 2017-11-13 2021-03-09 Oppo广东移动通信有限公司 拍摄图像的方法、装置、终端和存储介质
CN107809592B (zh) * 2017-11-13 2019-09-17 Oppo广东移动通信有限公司 拍摄图像的方法、装置、终端和存储介质
CN115514875A (zh) * 2021-06-22 2022-12-23 北京小米移动软件有限公司 图像处理方法和装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101909150A (zh) * 2009-06-03 2010-12-08 索尼公司 成像设备和成像控制方法
US8446481B1 (en) * 2012-09-11 2013-05-21 Google Inc. Interleaved capture for high dynamic range image acquisition and synthesis
CN103455170A (zh) * 2013-08-22 2013-12-18 西安电子科技大学 一种基于传感器的移动终端运动识别装置及方法
CN107205120A (zh) * 2017-06-30 2017-09-26 维沃移动通信有限公司 一种图像的处理方法和移动终端
CN107809592A (zh) * 2017-11-13 2018-03-16 广东欧珀移动通信有限公司 拍摄图像的方法、装置、终端和存储介质
CN107809593A (zh) * 2017-11-13 2018-03-16 广东欧珀移动通信有限公司 拍摄图像的方法、装置、终端和存储介质
CN107809591A (zh) * 2017-11-13 2018-03-16 广东欧珀移动通信有限公司 拍摄图像的方法、装置、终端和存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7697836B2 * 2006-10-25 2010-04-13 Zoran Corporation Control of artificial lighting of a scene to reduce effects of motion in the scene on an image being acquired
JP5163031B2 (ja) * 2007-09-26 2013-03-13 Nikon Corp. Electronic camera
KR101023946B1 (ko) * 2007-11-02 2011-03-28 Core Logic Inc. Apparatus and method for correcting camera shake in digital images using object tracking
US20090244301A1 * 2008-04-01 2009-10-01 Border John N Controlling multiple-image capture
WO2015104236A1 * 2014-01-07 2015-07-16 Dacuda Ag Adaptive camera control for reducing motion blur during real-time image capture
CN105635559B (zh) * 2015-07-17 2018-02-13 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Photographing control method and apparatus for a terminal
CN105827971B (zh) * 2016-03-31 2019-01-11 Vivo Mobile Communication Co., Ltd. Image processing method and mobile terminal
CN107172296A (zh) 2017-06-22 2017-09-15 Vivo Mobile Communication Co., Ltd. Image capturing method and mobile terminal
CN111147739A (zh) * 2018-03-27 2020-05-12 Huawei Technologies Co., Ltd. Photographing method, photographing apparatus, and mobile terminal

Non-Patent Citations (1)

Title
See also references of EP3713213A4 *

Also Published As

Publication number Publication date
CN107809592B (zh) 2019-09-17
US11102397B2 (en) 2021-08-24
EP3713213A1 (en) 2020-09-23
EP3713213A4 (en) 2020-12-16
CN107809592A (zh) 2018-03-16
US20200244869A1 (en) 2020-07-30

Similar Documents

Publication Publication Date Title
US11412153B2 (en) Model-based method for capturing images, terminal, and storage medium
WO2019091411A1 (zh) Method and apparatus for capturing images, terminal, and storage medium
CN107623793B (zh) Method and apparatus for image capture processing
US11418702B2 (en) Method and device for displaying shooting interface, and terminal
CN107087101B (zh) Apparatus and method for providing a dynamic panorama function
CN107566579B (zh) Shooting method, apparatus, terminal, and storage medium
US11102397B2 (en) Method for capturing images, terminal, and storage medium
CN106254807B (zh) Electronic device and method for extracting still images
CN111225138A (zh) Camera control method and apparatus, storage medium, and terminal
US11961278B2 (en) Method and apparatus for detecting occluded image and medium
CN110881104A (zh) Photographing method and apparatus, storage medium, and terminal
WO2019071618A1 (zh) Image processing method and device
CN112770173A (zh) Live-streaming picture processing method and apparatus, computer device, and storage medium
CN111866372A (zh) Selfie method and apparatus, storage medium, and terminal
CN107864333B (zh) Image processing method and apparatus, terminal, and storage medium
CN110968362A (zh) Application running method and apparatus, and storage medium
CN116916151B (zh) Shooting method, electronic device, and storage medium
US20220264176A1 (en) Digital space management method, apparatus, and device
WO2016044983A1 (zh) Image processing method and apparatus, and electronic device
CN115456895A (zh) Image acquisition method and apparatus for foggy scenes
CN116095498A (zh) Image acquisition method and apparatus, terminal, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18877131

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018877131

Country of ref document: EP

Effective date: 20200615