WO2019091412A1 - Method, apparatus, terminal, and storage medium for capturing an image - Google Patents

Method, apparatus, terminal, and storage medium for capturing an image

Info

Publication number
WO2019091412A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
composite
parameter value
preset
Prior art date
Application number
PCT/CN2018/114423
Other languages
English (en)
French (fr)
Inventor
Chen Yan (陈岩)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority application: EP18875585.4A (granted as EP3713212B1)
Publication of WO2019091412A1
Priority application: US 16/846,054 (granted as US11412153B2)


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/65 Control of camera operation in relation to power supply
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20208 High dynamic range [HDR] image processing

Definitions

  • the present application relates to the field of electronic technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for capturing an image.
  • The camera application is a very popular application; users can take photos through it.
  • the embodiment of the present application provides a method, an apparatus, a terminal, and a storage medium for capturing an image, which can improve the efficiency of photographing.
  • the technical solution is as follows:
  • a method of capturing an image comprising:
  • HDR: High Dynamic Range
  • the captured image processing is performed based on the predicted image capturing parameter value.
  • an apparatus for capturing an image comprising:
  • a first acquiring module configured to acquire a preview image through the photographing component while the terminal is in the image-to-be-captured state, and to acquire an exposure parameter value corresponding to the preview image;
  • a prediction module configured to predict an image capturing parameter value in the current HDR scene according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model that takes image data parameters, exposure parameters, and image capturing parameters as variables, wherein the image capturing parameters include the number of composite frames;
  • an execution module configured to perform captured-image processing according to the predicted image capturing parameter value when a shooting instruction is received.
  • a terminal comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set or a set of instructions, the at least one instruction, the at least one program
  • the code set or set of instructions is loaded and executed by the processor to implement the method of capturing an image as described in the first aspect.
  • a computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or a set of instructions, the at least one instruction, the at least one program, the code
  • the set or set of instructions is loaded and executed by the processor to implement the method of capturing an image as described in the first aspect.
  • In the embodiments of the present application, the preview image is acquired by the photographing component, and the exposure parameter value corresponding to the preview image is obtained; according to the preview image, the exposure parameter value, and the pre-trained image capturing parameter prediction model, an image capturing parameter value in the current HDR scene is predicted, and when a shooting instruction is received, captured-image processing is performed based on the predicted image capturing parameter value.
  • In this way, the terminal can automatically determine the number of frames to composite in the current HDR scene and perform captured-image processing accordingly, without requiring the user to manually turn on the HDR composition function, thereby improving photographing efficiency.
  • FIG. 1 is a flowchart of a method for capturing an image according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of target preview image sets corresponding to multiple preset composite counts and multiple exposure parameter values according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of composite images corresponding to a preset composite count, an exposure parameter value, and multiple preset terminal performance parameter values according to an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of an apparatus for capturing an image according to an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • Image capture parameter prediction model: a mathematical model for predicting image capture parameter values in the current HDR scene based on input data.
  • the image capturing parameter prediction model includes, but is not limited to, a Convolutional Neural Network (CNN) model, a Deep Neural Network (DNN) model, and a Recurrent Neural Networks (RNN) model.
  • CNN: Convolutional Neural Network
  • DNN: Deep Neural Network
  • RNN: Recurrent Neural Network
  • GBDT: Gradient Boosting Decision Tree
  • LR: Logistic Regression
  • the CNN model is a network model for identifying object categories in an image.
  • The CNN model can also extract data features from labeled or unlabeled image data.
  • CNN models are divided into neural network models that can be trained on unlabeled image data and those that cannot.
  • the DNN model is a deep learning framework.
  • the DNN model includes an input layer, at least one hidden layer (or intermediate layer), and an output layer.
  • the input layer, the at least one hidden layer (or intermediate layer), and the output layer each include at least one neuron for processing the received data.
  • The number of neurons in different layers may be the same or different.
  • the RNN model is a neural network model with a feedback structure.
  • The output of a neuron can be fed back to itself at the next time step; that is, the input of an i-th layer neuron at time m includes, in addition to the output of the (i-1)-th layer neurons at time m, its own output at time (m-1).
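The feedback described above can be made concrete with a minimal numerical sketch (not part of the patent); `rnn_step` and the weights are illustrative names, and a scalar tanh cell stands in for a full RNN layer:

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One recurrent step: h_t = tanh(w_x * x_t + w_h * h_prev + b)."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(xs, w_x=0.5, w_h=0.3, b=0.0):
    """Run the cell over a sequence; each output feeds back into the next step."""
    h = 0.0  # initial hidden state
    hs = []
    for x in xs:
        h = rnn_step(x, h, w_x, w_h, b)  # input at time m includes h from time (m-1)
        hs.append(h)
    return hs
```

With a zero input at the second step, the output is still non-zero because the cell's own previous output feeds back into it.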
  • The embedding model is based on distributed vector representations of entities and relations, and the relation in each triple instance is treated as a translation from the head entity to the tail entity.
  • A triple instance includes a subject, a relation, and an object, and can be represented as (subject, relation, object); the subject is the head entity and the object is the tail entity.
  • For example, "Xiao Zhang's father is Da Zhang" is represented by the triple instance (Xiao Zhang, father, Da Zhang).
  • the GBDT model is an iterative decision tree algorithm consisting of multiple decision trees, and the results of all trees are added together as the final result.
  • Each node of a decision tree yields a predicted value; taking age as an example, the predicted value of a node is the average age of all the people assigned to that node.
  • The LR model is built by applying a logistic function on top of linear regression.
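As an illustration of that definition (not taken from the patent), a logistic-regression prediction is just a linear combination passed through the logistic (sigmoid) function; the weights below are hypothetical:

```python
import math

def logistic_predict(x, w, b):
    """Logistic regression: linear regression z = w.x + b passed through the
    logistic function, yielding a probability in (0, 1)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights the model is maximally uncertain: probability 0.5
p = logistic_predict([1.0, 2.0], w=[0.0, 0.0], b=0.0)
```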
  • the embodiment of the present application provides a method for capturing an image, and the execution body of the method is a terminal.
  • the terminal may be a terminal having a function of capturing images, such as a terminal installed with a camera application.
  • the terminal may include components such as a processor, a memory, a photographing component, a screen, and the like.
  • the processor may be a CPU (Central Processing Unit) or the like, and may be used to determine image capturing parameter values and perform related processing of capturing images.
  • The memory may be RAM (Random Access Memory), Flash, etc., and may be used to store received data, data required for processing, and data generated during processing, such as the image capture parameter prediction model.
  • the shooting component can be a camera that can be used to capture a preview image.
  • the screen may be a touch screen, which may be used to display a preview image acquired by the shooting component, and may also be used to detect a touch signal or the like.
  • When the user takes a picture through the camera application, the application can also provide an HDR synthesis function so that the user can obtain a clearer image in a backlit scene.
  • When the user wants to use the HDR compositing function, the user can find the switch button for that function and turn it on by tapping the button; then, when the user presses the shutter button, the terminal performs captured-image processing based on a preset number of composite frames.
  • Specifically, the terminal may continuously acquire a preset number of images through a shooting component (such as a camera); each such image may be referred to as a preview image. The terminal then performs image synthesis processing on the preset number of preview images.
  • That is, the multiple preview images are combined into a single composite image, which is stored in the terminal's gallery.
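The patent does not specify the fusion algorithm, so the sketch below stands in for it with simple per-pixel averaging over same-size frames represented as flat pixel lists; `composite` is an illustrative name, not the terminal's actual routine:

```python
def composite(frames):
    """Fuse several same-size preview frames into one image by per-pixel
    averaging -- a stand-in for whatever HDR fusion the terminal applies."""
    assert frames and all(len(f) == len(frames[0]) for f in frames)
    n = len(frames)
    return [sum(pix) / n for pix in zip(*frames)]

# Three bracketed "frames" as flat pixel lists (values 0-255)
under, normal, over = [40, 60], [120, 140], [220, 250]
merged = composite([under, normal, over])
```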
  • An embodiment of the present application provides a method for capturing an image, the method comprising:
  • the captured image processing is performed based on the predicted image capturing parameter value.
  • the image capture parameters also include exposure parameters.
  • the method further includes:
  • The image capturing parameter prediction model is trained so that the image capturing parameter value it predicts approaches the pre-stored training value of the image capturing parameter value corresponding to the preview image and the exposure parameter value; after training, an image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables is obtained.
  • the method further includes:
  • The first preview image is selected, and from the first preset number of preview images and the second preset number of preview images, (preset composite count minus one) preview images are selected, to obtain target preview image sets corresponding to the plurality of exposure parameter values, wherein the plurality of exposure parameter values include the first preset number of exposure parameter values and the second preset number of exposure parameter values; image synthesis processing is performed on each target preview image set to obtain composite images corresponding to the plurality of exposure parameter values;
  • the first preview image, the first exposure parameter value, and the target image capturing parameter value are correspondingly stored in the training set.
  • performing image synthesis processing on the target preview image set, and obtaining a composite image corresponding to the preset number of composites and the plurality of exposure parameter values including:
  • Determining the target composite image with the optimal image quality, and determining the target preset composite count and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter values, includes:
  • determining the target composite image with the optimal image quality, and determining the target preset composite count, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image as the target image capturing parameter values.
  • the method further includes:
  • Determining the target composite image with the optimal image quality, and determining the target preset composite count and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter values, includes:
  • determining the target composite image with the optimal combination of image quality and power consumption, and determining the target preset composite count and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter values.
  • Step 101 Acquire a preview image by the photographing component in a state where the terminal is in an image to be captured, and acquire an exposure parameter value corresponding to the preview image.
  • the exposure parameter value corresponding to the preview image may be an exposure parameter value determined when the preview image is acquired.
  • The image-to-be-captured state indicates that the terminal has opened its shooting component after starting the camera application.
  • The preview image is an image obtained by the shooting component.
  • the exposure parameter value is a parameter value of the exposure parameter, and the exposure parameter value includes at least one of an exposure duration, a white balance, a shutter value, an aperture value, and a sensitivity.
  • A camera application is installed in the terminal. When the user wants to take a photo and taps the application's icon, the terminal receives a startup instruction for the camera application and starts it. The terminal is then in the image-to-be-captured state, that is, the terminal has opened its shooting component, and it acquires a preview image through that component.
  • The preview image may be an image acquired by the photographing component and displayed on the terminal, that is, an image that has not been synthesized; in other words, it may be an image acquired by the photographing component before the user presses the shutter button.
  • The terminal can determine the exposure parameter value in real time according to the ambient brightness and the color of the light source in the environment (the exposure parameter value may include parameter values such as exposure duration and white balance), so that the terminal performs captured-image processing according to that value. The terminal can thus also obtain the exposure parameter value corresponding to the preview image, that is, the exposure parameter value in effect when the preview image was captured.
  • the terminal acquires a preview image through the shooting component in real time, or acquires a preview image through the shooting component every acquisition cycle.
  • the acquisition period is set by the user, or is set by the terminal by default. This embodiment does not limit this.
  • an acquisition period may be set in the terminal.
  • the terminal may acquire a preview image through the shooting component and obtain an exposure parameter value corresponding to the preview image every time the preset acquisition period is reached.
  • Step 102: Predict an image capturing parameter value in the current HDR scene according to the preview image, the exposure parameter value, and a pre-trained image capturing parameter prediction model with image data parameters, exposure parameters, and image capturing parameters as variables, where the image capturing parameters include the number of composite frames.
  • a pre-trained image capturing parameter prediction model is pre-stored in the terminal.
  • the image capturing parameter prediction model may be used to predict an image capturing parameter value in the current scene according to the exposure parameter value corresponding to the preview image and the preview image currently acquired by the terminal.
  • Specifically, the terminal may input the preview image and its corresponding exposure parameter value into the pre-trained image capturing parameter prediction model; the output of the model is the image capturing parameter value in the current scene.
  • That is, the terminal may take the preview image as the value of the image data parameter and the exposure parameter value corresponding to the preview image as the value of the exposure parameter, feed both into the image capturing parameter prediction model, and obtain the image capturing parameter value in the current HDR scene, including the number of composite frames.
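The real predictor is a trained model; the toy function below (a hypothetical heuristic, not the patent's CNN) only illustrates the interface implied above: preview data and an exposure value go in, a composite frame count comes out:

```python
def predict_composite_count(preview_pixels, exposure_value):
    """Toy stand-in for the trained prediction model. It infers how many
    frames to composite from the preview's brightness spread; the real model
    in the patent is a trained neural network, and these thresholds are
    purely illustrative."""
    spread = max(preview_pixels) - min(preview_pixels)
    if spread > 200:   # strong backlight / very high dynamic range
        return 3
    if spread > 100:   # moderate dynamic range
        return 2
    return 1           # ordinary scene: no HDR composition needed

n = predict_composite_count([5, 30, 250], exposure_value=0.02)
```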
  • the image capturing parameter prediction model is a model obtained by training a convolutional neural network by using a corresponding relationship between each preview image, an exposure parameter value, and an image capturing parameter value in the training set.
  • the image capturing parameter prediction model described above is pre-trained by the terminal or the server.
  • the training process of the image capture parameter prediction model is introduced below by taking the terminal training image capture parameter prediction model as an example.
  • the terminal acquires a training set, where the training set includes at least one set of sample data sets; and according to at least one set of sample data sets, the original parameter model is trained by using an error back propagation algorithm to obtain an image capturing parameter prediction model.
  • Each set of sample data sets includes: a correspondence relationship between each preview image, an exposure parameter value, and an image capture parameter value.
  • The training process may be as follows: according to the correspondence between each preview image, exposure parameter value, and image capturing parameter value in the pre-stored training set, the image capturing parameter value predicted by the model is driven toward the pre-stored training value of the image capturing parameter value corresponding to the preview image and the exposure parameter value; after training, an image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables is obtained.
  • a training set is pre-stored in the terminal.
  • The training set may include the correspondence between each preview image, exposure parameter value, and image capturing parameter value, where the image capturing parameter value in each correspondence item is the value that yields the optimal synthesized image quality for the preview image in that item.
  • The terminal may train the image capturing parameter prediction model containing undetermined parameters according to the pre-stored training set; that is, the terminal drives the image capturing parameter value predicted by the model for each preview image and exposure parameter value toward the stored training value of the corresponding image capturing parameter value.
  • For each correspondence item, the terminal may input the preview image and the exposure parameter value of the item into the image capturing parameter prediction model containing the undetermined parameters, obtaining a predicted image capturing parameter value expressed in terms of those parameters; an objective function can then be built from the requirement that this predicted value approach the training value in the correspondence item (for example, the objective function may be the predicted value minus the training value).
  • By minimizing the objective function with gradient descent, a training value for each undetermined parameter is obtained and used as that parameter's value when training on the next correspondence item; after all items are processed, the final training values of the undetermined parameters are obtained.
  • the image capturing parameter prediction model may be a convolutional neural network model.
  • the undetermined parameter may be each convolution kernel in the neural network model.
  • each of the corresponding relationship items in each of the foregoing correspondence relationships is selected according to a preset composite number of frames and an image quality of the synthesized composite image corresponding to the plurality of exposure parameter values.
  • The processing may be as follows. The image capturing parameters may further include an exposure parameter. A first preview image is acquired through the photographing component, and a first exposure parameter value corresponding to it is obtained. According to the first exposure parameter value, a first preset number of attenuation percentages, and a second preset number of enhancement percentages, a first preset number of exposure parameter values and a second preset number of exposure parameter values are determined. For each of the first preset number of exposure parameter values, a preview image is acquired through the photographing component, giving a first preset number of preview images; likewise, a second preset number of preview images is acquired for the second preset number of exposure parameter values. For each of a plurality of preset composite counts, target preview image sets are built, wherein the plurality of exposure parameter values include the first preset number of exposure parameter values and the second preset number of exposure parameter values. Image synthesis processing is performed on each target preview image set to obtain composite images corresponding to the preset composite counts and the plurality of exposure parameter values. Among all the composite images obtained, the target composite image with the optimal image quality is determined, and the target preset composite count and the target exposure parameter value corresponding to it are determined as the target image capturing parameter values. The first preview image, the first exposure parameter value, and the target image capturing parameter values are correspondingly stored in the training set.
  • Each correspondence item in the foregoing correspondence is determined by the terminal according to an acquired first preview image and preview images acquired under a preset number of different exposure parameter values.
  • Different correspondence items are determined from first preview images acquired by the terminal in different HDR scenes together with their preset numbers of preview images.
  • For a correspondence item whose composite count in the image capturing parameter value is 1, the exposure parameter value in the image capturing parameter value may be a preset value (for example, 0); for a correspondence item whose composite count is greater than 1, the exposure parameter value may be one of the first preset number of exposure parameter values and the second preset number of exposure parameter values determined from the first exposure parameter value.
  • the determination process of one of the corresponding relationship items is described in detail below, and the determination process of other corresponding relationship items is the same.
  • The terminal may acquire a preview image through the photographing component (this preview image may be referred to as the first preview image), where the preview image is an image directly acquired by the photographing component without image synthesis processing. The terminal may also obtain the exposure parameter value corresponding to the first preview image (the first exposure parameter value), which is the unadjusted value determined by the terminal according to the ambient brightness and the color of the light source in the environment.
  • The terminal may pre-store a first preset number of attenuation percentages (each less than 100%) and a second preset number of enhancement percentages (each greater than 100%), where the first preset number and the second preset number may be the same or different.
  • Each attenuation percentage of the first preset number of attenuation percentages may be multiplied by the first exposure parameter value to obtain the first preset number of exposure parameter values, and each enhancement percentage of the second preset number of enhancement percentages may be multiplied by the first exposure parameter value to obtain the second preset number of exposure parameter values.
  • For example, if the first preset number of attenuation percentages are 80% and 60% and the first exposure parameter value is A, the first preset number of exposure parameter values are A*80% and A*60%.
  • If the second preset number of enhancement percentages is 110% and the first exposure parameter value is A, the second preset number of exposure parameter values is A*110%.
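The arithmetic in this example is straightforward; the sketch below reproduces it with A = 1.0 (an assumed unit exposure value):

```python
def bracket_exposures(base, attenuation_pcts, enhancement_pcts):
    """Derive attenuated and enhanced exposure values by multiplying the
    first exposure parameter value by each stored percentage."""
    attenuated = [base * p for p in attenuation_pcts]
    enhanced = [base * p for p in enhancement_pcts]
    return attenuated, enhanced

# Worked example from the text: attenuation 80% and 60%, enhancement 110%
low, high = bracket_exposures(1.0, [0.80, 0.60], [1.10])
```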
  • Then, based on each of the first preset number of exposure parameter values, a preview image may be acquired through the shooting component, yielding the first preset number of preview images; similarly, the second preset number of preview images is acquired based on the second preset number of exposure parameter values. That is, these preview images are taken with the adjusted exposure parameter values rather than with values determined from the ambient brightness and the color of the light source in the environment.
  • A plurality of preset composite counts (for example 1, 2, and 3) may be pre-stored in the terminal. After acquiring the first preview image, the first preset number of preview images, and the second preset number of preview images: for the preset composite count 1, the first preview image alone is determined as the target preview image set, and the exposure parameter value corresponding to the composite image obtained from this set may be a preset value; for the preset composite count 2, the first preview image is selected together with one preview image from the first preset number of preview images or the second preset number of preview images, giving target preview image sets corresponding to composite count 2 and each of the first and second preset numbers of exposure parameter values.
  • for example, suppose the first preview image acquired by the terminal is image 1, with first exposure parameter value A; the first preset number of preview images are image 2 and image 3, with corresponding exposure parameter values B and C; the second preset number of preview images is image 4, with corresponding exposure parameter value D; and the plurality of preset composite numbers are 1, 2, and 3, respectively.
  • for the preset composite number 1, the terminal can select image 1 to obtain the corresponding target preview image set. For the preset composite number 2, the terminal can select image 1 together with image 2 to obtain the target preview image set corresponding to the preset composite number 2 and the exposure parameter value B;
  • the terminal may select image 1 together with image 3 to obtain the target preview image set corresponding to the preset composite number 2 and the exposure parameter value C, and may select image 1 together with image 4 to obtain the target preview image set corresponding to the preset composite number 2 and the exposure parameter value D. For the preset composite number 3, the terminal may select image 1, image 2, and image 4 to obtain
  • the target preview image set corresponding to the preset composite number 3 and the exposure parameter values B and D, and may select image 1, image 3, and image 4 to obtain the target preview image set corresponding to the preset composite number 3 and the exposure parameter values C and D, as shown in Figure 2.
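The enumeration of target preview image sets in the example above can be sketched as follows; the function name and data layout are illustrative assumptions rather than the patent's API, and the call mirrors the image 1 to image 4 scenario:

```python
from itertools import product

def build_target_sets(first_img, darker_imgs, brighter_imgs):
    # first_img: the first preview image (exposure A in the example)
    # darker_imgs / brighter_imgs: {exposure_value: image} for the
    # attenuated and enhanced exposures; all names are illustrative.
    others = {**darker_imgs, **brighter_imgs}
    sets = []
    sets.append(((1, ()), [first_img]))                      # composite number 1
    for ev, img in others.items():                           # composite number 2
        sets.append(((2, (ev,)), [first_img, img]))
    for (ev_d, img_d), (ev_b, img_b) in product(             # composite number 3
            darker_imgs.items(), brighter_imgs.items()):
        sets.append(((3, (ev_d, ev_b)), [first_img, img_d, img_b]))
    return sets

sets = build_target_sets("image1", {"B": "image2", "C": "image3"},
                         {"D": "image4"})
# six candidate sets: {1}, {1,2}, {1,3}, {1,4}, {1,2,4}, {1,3,4}
```

Each entry pairs a (composite number, exposure parameter values) key with its target preview image set, matching the correspondence stored in the example.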
  • after the target preview image sets corresponding to the plurality of exposure parameter values are obtained, image synthesis processing may be performed on each target preview image set, that is, the preview images in each target preview image set are composited to obtain the composite images corresponding to the plurality of exposure parameter values.
  • the image quality (such as sharpness) of each composite image can be calculated, and the composite image with the best image quality (which may be called the target composite image) can then be determined among the composite images, together with the preset composite number corresponding to the target composite image (which may be referred to as the target preset composite number) and the corresponding exposure parameter value (which may be referred to as the target exposure parameter value). When the target preset composite number is 1, the target exposure parameter value is a preset value (for example, 0); when the target preset composite number is greater than 1, the target exposure parameter value is one of the first preset number of exposure parameter values and/or the second preset number of exposure parameter values.
  • the determined target preset composite number and the target exposure parameter value may be determined as the target image capturing parameter value.
  • the three items (the first preview image, the first exposure parameter value, and the target image capturing parameter value) may be stored correspondingly in the training set.
  • the terminal may also obtain corresponding preview images, exposure parameter values, and corresponding image capture parameter values according to the above processing process, thereby obtaining respective correspondence items in the training set.
  • the image capturing parameter further includes a terminal performance parameter.
  • when determining the training set, the terminal may determine the correspondence among the preview image, the exposure parameter value, the composite number, and the terminal performance parameter value.
  • the terminal may perform the following processing: performing image synthesis processing on the target preview image set based on each of the plurality of preset terminal performance parameter values, to obtain composite images corresponding to the preset composite number, the plurality of exposure parameter values, and the plurality of preset terminal performance parameter values.
  • the terminal may perform the following processing: determining, among the composite images corresponding to the plurality of preset composite numbers, the plurality of exposure parameter values, and the plurality of preset terminal performance parameter values, the target composite image with the optimal image quality, and determining the target preset composite number, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image as the target image capturing parameter value.
  • the terminal performance parameter value is a parameter value of the terminal performance parameter
  • the terminal performance parameter is a parameter that affects the performance of the terminal.
  • the terminal performance parameter is a CPU running frequency (also referred to as a CPU frequency).
  • a plurality of preset terminal performance parameter values are pre-stored in the terminal.
  • the image capturing parameter further includes the terminal performance parameter
  • the target preview image set corresponding to the plurality of exposure parameter values is determined.
  • the terminal may perform image synthesis processing on the target preview image set based on each of the preset terminal performance parameter values, and obtain the preset composite number, multiple exposure parameter values, and multiple A composite image corresponding to a preset terminal performance parameter value.
  • for example, suppose a preset composite number is 3, the corresponding exposure parameter values are B and D, and the plurality of preset terminal performance parameter values are a and b, respectively. For the target preview image set corresponding to the preset composite number 3 and the exposure parameter values B and D, the terminal may perform image synthesis processing with the terminal performance parameter set to a (that is, the terminal may set the terminal performance parameter to a) to obtain the composite image corresponding to the preset composite number 3, B, D, and a, and may likewise perform image synthesis processing with the terminal performance parameter set to b to obtain the composite image corresponding to the preset composite number 3, B, D, and b, as shown in Figure 3.
  • the image quality of each composite image can then be calculated, the target composite image with the optimal image quality can be determined among the composite images, and the target preset composite number, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image can be determined as the target image capturing parameter value; the three items (that is, the first preview image, the first exposure parameter value, and the target image capturing parameter value) can then be stored correspondingly in the training set.
  • the terminal adds the first preview image, the first exposure parameter value, and the target image capturing parameter value to the training set, obtains the updated training set, and trains the image capturing parameter prediction model according to the updated training set to obtain Updated image capture parameter prediction model.
  • the process of training the image capturing parameter prediction model according to the updated training set to obtain the updated image capturing parameter prediction model is analogous to the training process of the reference image capturing parameter prediction model, and details are not described herein again.
  • when the terminal performs image synthesis processing on each target preview image set, the terminal also records the power consumption value consumed by the image synthesis processing.
  • the process of determining, by the terminal, the target image capturing parameter value may be as follows: recording the power consumption value consumed when obtaining each composite image corresponding to the plurality of exposure parameter values; determining, among the composite images corresponding to the plurality of preset composite numbers and the plurality of exposure parameter values, the target composite image whose image quality and power consumption value are jointly optimal; and determining the target preset composite number and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter value.
  • after the target preview image sets are obtained and subjected to image synthesis processing to obtain the composite images corresponding to the plurality of exposure parameter values, the power consumption value consumed by each image synthesis processing can be recorded, that is, the power consumption value consumed when obtaining the composite image corresponding to each preset composite number and each of the plurality of exposure parameter values, wherein the power consumption value may be one or more of the following: the amount of power consumed and the duration consumed.
  • the terminal can then determine, among the obtained composite images, the target composite image whose image quality and power consumption value are jointly optimal (for example, the composite image with the best trade-off between image quality and power consumption may be determined as the target composite image), and may determine the target preset composite number and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter value.
  • the first preview image in the target preview image set may be used as the composite image.
  • Step 103: when a shooting instruction is received, captured image processing is performed based on the predicted image capturing parameter value.
  • when the terminal is in the image-to-be-captured state and the user wants to take a photo, the user may click the shooting button. The terminal then receives the click instruction of the shooting button and performs captured image processing according to the image capturing parameter value predicted for the current scene and the HDR synthesis algorithm, wherein the predicted image capturing parameter value in the current HDR scene may serve as the value of the image capturing parameter in the HDR synthesis algorithm.
  • when the predicted composite number is 1, the terminal can obtain a preview image through the shooting component, and the obtained preview image is the final composite image (this case is equivalent to the terminal not turning on the HDR synthesis function).
  • otherwise, the terminal may acquire the second preview image, take the composite number minus one additional preview images based on the preset exposure parameter values, perform image synthesis processing on the acquired second preview image and the composite number minus one preview images, and obtain the final image and store it in the gallery.
  • alternatively, the terminal may acquire the second preview image, take the composite number minus one additional preview images based on the preset exposure parameter values, set the value of the terminal performance parameter to the predicted terminal performance parameter value, and then, based on the predicted terminal performance parameter value, perform image synthesis processing on the acquired second preview image and the composite number minus one preview images, to obtain the final image and store it in the gallery. It can be seen that if the predicted composite number is 1, the terminal may acquire only the second preview image and store it as the final image.
  • according to step 101, for each preset acquisition period, the preview image is acquired by the shooting component and the exposure parameter value corresponding to the preview image is obtained; each time a preview image is acquired, the image capturing parameter value in the current HDR scene (that is, corresponding to the current acquisition period) may be determined according to step 102; and each time a shooting instruction is received in the current acquisition period, the terminal may perform captured image processing according to the predicted image capturing parameter value corresponding to the current acquisition period.
  • optionally, the image capturing parameter further includes an exposure parameter.
  • correspondingly, the processing of step 103 may be as follows: when the shooting instruction is received, the second preview image is acquired by the shooting component, and a preview image is obtained by the shooting component based on each predicted exposure parameter value; image synthesis processing is then performed on the second preview image and the composite number minus one preview images to obtain the composite image.
  • the image capturing parameter may further include an exposure parameter.
  • in implementation, the input of the image capturing parameter prediction model may be the preview image and the exposure parameter value corresponding to the preview image (where the exposure parameter value is determined by the terminal according to the ambient brightness and the color of the light source in the environment, and is unadjusted), and the output of the image capturing parameter prediction model may be the predicted composite number and the predicted exposure parameter values, wherein the predicted exposure parameter values may be used by the terminal to perform the captured image processing, and each predicted exposure parameter value is smaller than the exposure parameter value corresponding to the preview image.
  • when the user wants to take a photo, the user may click the camera button; the terminal then receives the shooting instruction and acquires a preview image through the shooting component (which may be referred to as a second preview image), and the predicted exposure parameter values and composite number can be obtained, wherein the second preview image may be the image acquired by the terminal through the shooting component when the shooting instruction is received.
  • the terminal may set the value of the exposure parameter to each predicted exposure parameter value in turn and, based on the predicted composite number, obtain a preview image through the shooting component at each predicted exposure parameter value (that is, the composite number minus one preview images); image synthesis processing is then performed on the second preview image and the composite number minus one preview images obtained at the predicted exposure parameter values, and the composite image is obtained and stored in the gallery. The terminal may not store the acquired preview images (that is, the user does not see them), but only use them to obtain the composite image. It can be seen that if the composite number is 1, the terminal only acquires the second preview image; in this case, the predicted exposure parameter value has no substantial meaning.
  • in the embodiment of the present application, the preview image is acquired by the shooting component and the exposure parameter value corresponding to the preview image is obtained; the image capturing parameter value in the current HDR scene is predicted according to the preview image, the exposure parameter value, and the pre-trained image capturing parameter prediction model; and when the shooting instruction is received, captured image processing is performed based on the predicted image capturing parameter value.
  • in this way, the terminal can automatically determine the composite number in the current HDR scene and perform captured image processing based on it, without requiring the user to manually turn on the HDR synthesis function, thereby improving the efficiency of photographing.
  • the embodiment of the present application further provides an apparatus for capturing an image.
  • the apparatus includes:
  • the first obtaining module 410 is configured to acquire a preview image by using a shooting component in a state where the terminal is in an image to be captured, and obtain an exposure parameter value corresponding to the preview image;
  • the prediction module 420 is configured to predict an image capturing parameter value in the current HDR scene according to the preview image, the exposure parameter value, and the pre-trained image capturing parameter prediction model that takes the image data parameter, the exposure parameter, and the image capturing parameter as variables,
  • the image capturing parameter includes a composite number of sheets;
  • the execution module 430 is configured to perform the captured image processing according to the predicted image capturing parameter value when the shooting instruction is received.
  • the image capturing parameter further includes an exposure parameter.
  • the device further includes:
  • the training module 440 is configured to train the image capturing parameter prediction model according to the correspondence between each preview image, exposure parameter value, and image capturing parameter value in the pre-stored training set, so that the image capturing parameter value predicted by the image capturing parameter prediction model approximates the pre-stored image capturing parameter value corresponding to the preview image and the exposure parameter value, obtaining after training the image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables.
  • the device further includes:
  • the second acquiring module 450 is configured to acquire a first preview image by using the shooting component, and acquire a first exposure parameter value corresponding to the first preview image;
  • the first determining module 460 is configured to determine a first preset number of exposure parameter values and a second preset number of exposure parameter values according to the first exposure parameter value, the first preset number of attenuation percentages, and the second preset number of enhancement percentages;
  • the third obtaining module 470 is configured to obtain a preview image by the shooting component according to each of the first preset number of exposure parameter values, to obtain a first preset number of preview images, and to obtain a preview image by the shooting component according to each of the second preset number of exposure parameter values, to obtain a second preset number of preview images;
  • the second determining module 480 is configured to, for each of the plurality of pre-stored preset composite numbers, select the first preview image and select, from the first preset number of preview images and the second preset number of preview images, the preset composite number minus one preview images, to obtain the target preview image sets corresponding to the plurality of exposure parameter values, wherein the plurality of exposure parameter values include the first preset number of exposure parameter values and the second preset number of exposure parameter values; and to perform image synthesis processing on the target preview image sets to obtain the composite images corresponding to the plurality of exposure parameter values;
  • the third determining module 490 is configured to determine, among the obtained composite images, the target composite image with the optimal image quality, and determine the target preset composite number and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter value;
  • the storage module 4100 is configured to store the first preview image, the first exposure parameter value, and the target image capturing parameter value in a training set.
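The first determining module's derivation of the bracketed exposure values from the first exposure value and the attenuation/enhancement percentages could, under a simple linear-scaling assumption (the text does not fix the formula), look like:

```python
def bracket_exposures(first_ev, attenuation_pcts, enhancement_pcts):
    # Derive the first/second preset numbers of exposure parameter values
    # from the first exposure parameter value; linear percentage scaling
    # is an assumption, since the text does not fix the formula.
    darker = [first_ev * (1 - p / 100.0) for p in attenuation_pcts]
    brighter = [first_ev * (1 + p / 100.0) for p in enhancement_pcts]
    return darker, brighter

darker, brighter = bracket_exposures(100.0, [25, 50], [30])
# darker -> [75.0, 50.0], brighter -> [130.0]
```

The darker list corresponds to the first preset number of exposure parameter values and the brighter list to the second, ready for the third obtaining module to shoot against.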
  • the second determining module 480 is configured to:
  • the third determining module 490 is configured to:
  • the target composite image with the optimal image quality is determined, and the target preset composite number, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image are determined as the target image capturing parameter value.
  • the device further includes:
  • the recording module 4110 is configured to record the power consumption value consumed when the composite image corresponding to each preset composite number and the plurality of exposure parameter values is obtained;
  • the third determining module 490 is configured to:
  • the target composite image whose image quality and power consumption value are jointly optimal is determined, and the target preset composite number and the target exposure parameter value corresponding to the target composite image are determined as the target image capturing parameter values.
  • in the embodiment of the present application, the preview image is acquired by the shooting component, and the exposure parameter value corresponding to the preview image is obtained; the image capturing parameter value in the current HDR scene is predicted according to the preview image, the exposure parameter value, and the pre-trained image capturing parameter prediction model with the image data parameter, the exposure parameter, and the image capturing parameter as variables, wherein the image capturing parameter includes the composite number; and when the shooting instruction is received, captured image processing is performed according to the predicted image capturing parameter value.
  • in this way, the terminal can automatically determine the composite number in the current HDR scene and then perform captured image processing based on it, without the user manually turning on the HDR synthesis function, thereby improving the efficiency of photographing.
  • it should be noted that the device for capturing an image provided by the foregoing embodiment is only illustrated by the above division of functional modules when capturing an image; in practical applications, the above functions may be distributed to different functional modules as needed, that is, the internal structure of the terminal may be divided into different functional modules to complete all or part of the functions described above.
  • the device for capturing an image provided by the above embodiment belongs to the same concept as the method for capturing an image; the specific implementation process is described in detail in the method embodiments, and details are not described herein again.
  • the embodiment of the present application further provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a command set, and the at least one instruction At least a portion of the program, the set of codes, or a set of instructions is loaded and executed by the processor to implement a method of capturing an image as described in various method embodiments above.
  • the embodiment of the present application further provides a computer readable storage medium, where the storage medium stores at least one instruction, at least one program, a code set, or a set of instructions, the at least one instruction, the at least one segment
  • the program, the set of codes, or the set of instructions is loaded and executed by the processor to implement a method of capturing an image as described in various method embodiments above.
  • the terminal 100 can be a mobile phone, a tablet computer, a notebook computer, an e-book, and the like.
  • the terminal 100 in this application may include one or more of the following components: a processor 110, a memory 120, and a touch display screen 130.
  • Processor 110 can include one or more processing cores.
  • the processor 110 connects various portions of the entire terminal 100 using various interfaces and lines, and executes the terminal by running or executing an instruction, program, code set or instruction set stored in the memory 120, and calling data stored in the memory 120. 100 various functions and processing data.
  • the processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface and application, etc.; the GPU is responsible for rendering and rendering of the content that needs to be displayed on the touch screen 130; the modem is used to process wireless communication. It can be understood that the above modem may also be integrated into the processor 110 and implemented by a single chip.
  • the memory 120 may include a random access memory (RAM), and may also include a read-only memory (ROM).
  • the memory 120 includes a non-transitory computer-readable storage medium.
  • Memory 120 can be used to store instructions, programs, code, code sets, or sets of instructions.
  • the memory 120 may include a storage program area and a storage data area, wherein the storage program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), The instructions for implementing the various method embodiments described below, etc.; the storage data area can store data (such as audio data, phone book) created according to the use of the terminal 100, and the like.
  • the memory 120 stores a Linux kernel layer 220, a system runtime layer 240, an application framework layer 260, and an application. Layer 280.
  • the Linux kernel layer 220 provides the underlying drivers for various hardware of the terminal 100, such as display drivers, audio drivers, camera drivers, Bluetooth drivers, Wi-Fi drivers, power management, and the like.
  • the system runtime layer 240 provides major feature support for the Android system through some C/C++ libraries. For example, the SQLite library provides support for databases, the OpenGL/ES library provides support for 3D graphics, and the Webkit library provides support for the browser kernel.
  • the Android runtime (Android Runtime) is also provided in the system runtime layer 240; it mainly provides some core libraries that allow developers to write Android applications in the Java language.
  • the application framework layer 260 provides various APIs that may be used when building applications; developers can build their own applications by using these APIs, such as event management, window management, view management, notification management, content providers, package management, call management, resource management, and location management.
  • the application layer 280 runs at least one application, which may be a contact program, an SMS program, a clock program, a camera application, etc. that is provided by the operating system; or an application developed by a third-party developer, such as an instant. Communication programs, photo landscaping programs, etc.
  • the iOS system includes: a core operating system layer 320 (Core OS layer), a core service layer 340 (Core Services layer), a media layer 360 (Media layer), and a touchable layer 380 (Cocoa Touch Layer).
  • the core operating system layer 320 includes an operating system kernel, drivers, and an underlying program framework that provide functionality closer to the hardware for use by the program framework located at the core service layer 340.
  • the core service layer 340 provides the system services and/or program frameworks required by the application, such as the Foundation framework, account framework, advertising framework, data storage framework, network connection framework, geographic location framework, motion framework, etc. .
  • the media layer 360 provides interfaces for the audio-visual aspects of applications, such as graphics and image related interfaces, audio technology related interfaces, video technology related interfaces, and the AirPlay interface for audio and video transmission.
  • the touchable layer 380 provides various commonly used interface-related frameworks for application development, such as the local notification service, remote push service, advertising framework, game tool framework, message user interface (UI) framework, UIKit framework, and map framework, and is responsible for the user's touch interaction operations on the terminal 100.
  • the frameworks associated with most applications include, but are not limited to, the base framework in the core service layer 340 and the UIKit framework in the touchable layer 380.
  • the underlying framework provides many basic object classes and data types, providing the most basic system services for all applications, regardless of the UI.
  • the classes provided by the UIKit framework are the basic UI class libraries for creating touch-based user interfaces. iOS applications can provide a UI based on the UIKit framework, which therefore supplies the application infrastructure for building user interfaces: drawing, handling user interaction events, responding to gestures, and more.
  • the touch display screen 130 is used for receiving a touch operation by a user on or near it using any suitable object such as a finger or a stylus, and for displaying the user interface of each application.
  • the touch display screen 130 is typically disposed on the front panel of the terminal 100.
  • the touch display 130 can be designed as a full screen, a curved screen, or a profiled screen.
  • the touch display screen 130 can also be designed as a combination of a full screen and a curved screen, or a combination of a profiled screen and a curved screen, which is not limited in this embodiment, wherein:
  • the full screen may refer to a screen design in which the touch display 130 occupies a proportion of the front panel of the terminal 100 that exceeds a threshold (for example, 80%, 90%, or 95%).
  • One way to calculate the screen ratio is: (area of the touch display 130 / area of the front panel of the terminal 100) × 100%; another is: (area of the actual display region in the touch display 130 / area of the front panel of the terminal 100) × 100%; another is: (diagonal of the touch display 130 / diagonal of the front panel of the terminal 100) × 100%.
  • the full screen may also be a screen design that integrates at least one front panel component inside or below the touch display screen 130.
  • the at least one front panel component comprises: a camera, a fingerprint sensor, a proximity light sensor, a distance sensor, and the like.
  • other components on the front panel of a conventional terminal are integrated in all or part of the area of the touch display screen 130; for example, the photosensitive element in the camera may be split into a plurality of photosensitive pixels, each of which is integrated in a black area of a display pixel in the touch display screen 130. Since at least one front panel component is integrated inside the touch display 130, the full screen has a higher screen ratio.
  • the front panel components of a conventional terminal may instead be disposed on the side or back of the terminal 100, such as an ultrasonic fingerprint sensor disposed under the touch display screen 130, a bone-conduction earpiece disposed inside the terminal 100, and a camera disposed on the side of the terminal 100 in a pluggable structure.
  • when the terminal 100 adopts a full screen, an edge touch sensor 120 may be disposed on a single side of the middle frame of the terminal 100, or on two sides (such as the left and right sides), or on four sides (such as the upper, lower, left, and right sides); the edge touch sensor 120 is configured to detect at least one of a user's touch operation, click operation, pressing operation, sliding operation, and the like on the middle frame.
  • the edge touch sensor 120 may be any one of a touch sensor, a thermal sensor, a pressure sensor, and the like. The user can apply an operation on the edge touch sensor 120 to control the application in the terminal 100.
  • the curved screen refers to a screen design in which the screen area of the touch display screen 130 is not in one plane.
  • the curved screen has at least one cross section of a curved shape (for example, U-shaped), and its projection in any plane direction perpendicular to that cross section is planar.
  • a curved screen refers to a screen design in which at least one side is curved.
  • the curved screen means that at least one side of the touch display screen 130 extends over the middle frame of the terminal 100.
  • the curved screen refers to a screen design in which the left and right side edges 42 are curved; or a screen design in which the upper and lower sides are curved; or a screen design in which all four sides (upper, lower, left, and right) are curved.
  • the curved screen is fabricated using a touch screen material having a certain flexibility.
  • a profiled screen is a touch display screen with an irregular outline, the irregular shape being neither a rectangle nor a rounded rectangle.
  • the profiled screen refers to a screen design in which protrusions, notches, and/or holes are provided on a rectangular or rounded-rectangular touch display screen 130.
  • the protrusions, notches, and/or holes may be located at the edge of the touch display screen 130, in the center of the screen, or both. When provided on one edge, they may be disposed at the middle or at either end of that edge; when provided in the center of the screen, they may be disposed in one or more of the upper, upper-left, left, lower-left, lower, lower-right, right, and upper-right regions of the screen.
  • when provided in multiple regions, the protrusions, notches, and holes may be concentrated or dispersed, and may be distributed symmetrically or asymmetrically.
  • the number of the protrusions, the notches and/or the holes is also not limited.
  • since the profiled screen turns the upper and/or lower forehead area of the touch display screen into a displayable area and/or an operable area, the touch display screen occupies more space on the front panel of the terminal, so the profiled screen also has a larger screen-to-body ratio.
  • the notches and/or holes are for receiving at least one front panel component, including at least one of a camera, a fingerprint sensor, a proximity light sensor, a distance sensor, an earpiece, an ambient light brightness sensor, and a physical button.
  • the notch may be provided on one or more edges, and may be a semicircular notch, a right-angled rectangular notch, a rounded rectangular notch, or an irregularly shaped notch.
  • as shown in FIG. 12, the profiled screen may be a screen design in which a semicircular notch 43 is provided at the central position of the upper edge of the touch display screen 130, the position vacated by the semicircular notch 43 accommodating at least one front panel component among a camera, a distance sensor (also called a proximity sensor), an earpiece, and an ambient light brightness sensor.
  • as shown in FIG. 13, the profiled screen may be a screen design in which a semicircular notch 44 is provided at the central position of the lower edge of the touch display screen 130, the vacated position accommodating at least one of a physical button, a fingerprint sensor, and a microphone.
  • as shown in FIG. 14, the profiled screen may be a screen design in which a semi-elliptical notch 45 is provided at the central position of the lower edge of the touch display screen 130, with another semi-elliptical notch formed on the front panel of the terminal 100; the two semi-elliptical notches enclose an elliptical area for accommodating a physical button or a fingerprint recognition module.
  • as shown in FIG. 15, the profiled screen may be a screen design in which at least one small hole 45 is provided in the upper half of the touch display screen 130, the position vacated by the small hole 45 accommodating at least one front panel component among a camera, a distance sensor, an earpiece, and an ambient light brightness sensor.
  • the structure of the terminal 100 shown in the above figures does not constitute a limitation on the terminal 100; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components.
  • the terminal 100 further includes components such as a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a Bluetooth module, and the like, and details are not described herein.
  • the preview image is acquired through the capture component, and the exposure parameter value corresponding to the preview image is obtained; the image capture parameter values in the current HDR scene are predicted according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite; when a shooting instruction is received, capture processing is performed according to the predicted image capture parameter values.
  • the terminal can automatically compute the number of frames to composite in the current HDR scene and then perform capture processing based on that number, without the user manually enabling the HDR compositing function, thereby improving the efficiency of taking photos.
  • a person skilled in the art may understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Exposure Control For Cameras (AREA)

Abstract

Embodiments of this application disclose a method, apparatus, terminal, and storage medium for capturing images, belonging to the field of electronic technology. The method includes: while a terminal is in a ready-to-capture state, acquiring a preview image through a capture component and obtaining an exposure parameter value corresponding to the preview image; predicting image capture parameter values for the current high dynamic range (HDR) scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite; and, when a shooting instruction is received, performing capture processing according to the predicted image capture parameter values. This application can improve the efficiency of taking photos.

Description

Method, apparatus, terminal, and storage medium for capturing images
The embodiments of this application claim priority to Chinese patent application No. 201711117449.6, filed on November 13, 2017 and entitled "Method, apparatus, terminal, and storage medium for capturing images", the entire contents of which are incorporated by reference into the embodiments of this application.
Technical Field
This application relates to the field of electronic technology, and in particular to a method, apparatus, terminal, and storage medium for capturing images.
Background
With the development of electronic technology, terminals such as mobile phones and computers have been widely used, and the applications on them have become ever more varied and feature-rich. The camera application is a very commonly used application, through which users can take photos.
Summary
Embodiments of this application provide a method, apparatus, terminal, and storage medium for capturing images that can improve the efficiency of taking photos. The technical solutions are as follows:
In one aspect, a method for capturing images is provided, the method including:
while a terminal is in a ready-to-capture state, acquiring a preview image through a capture component, and obtaining an exposure parameter value corresponding to the preview image;
predicting image capture parameter values for the current high dynamic range (HDR) scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite;
when a shooting instruction is received, performing capture processing according to the predicted image capture parameter values.
In another aspect, an apparatus for capturing images is provided, the apparatus including:
a first acquisition module, configured to acquire a preview image through a capture component while the terminal is in a ready-to-capture state, and obtain an exposure parameter value corresponding to the preview image;
a prediction module, configured to predict image capture parameter values for the current HDR scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite;
an execution module, configured to perform capture processing according to the predicted image capture parameter values when a shooting instruction is received.
In another aspect, a terminal is provided. The terminal includes a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for capturing images described in the first aspect.
In another aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for capturing images described in the first aspect.
The beneficial effects of the technical solutions provided by the embodiments of this application include at least the following:
In the embodiments of this application, while the terminal is in a ready-to-capture state, a preview image is acquired through the capture component and the exposure parameter value corresponding to the preview image is obtained; image capture parameter values for the current HDR scene are predicted according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model; and when a shooting instruction is received, capture processing is performed according to the predicted image capture parameter values. In this way, the terminal can automatically compute the number of frames to composite in the current HDR scene and perform capture processing based on it, without the user manually enabling the HDR compositing function, thereby improving the efficiency of taking photos.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for capturing images provided by an embodiment of this application;
FIG. 2 is a schematic diagram of target preview image sets corresponding to multiple preset composite frame counts and multiple exposure parameter values provided by an embodiment of this application;
FIG. 3 is a schematic diagram of composite images corresponding to a preset composite frame count, an exposure parameter value, and multiple preset terminal performance parameter values provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of an apparatus for capturing images provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of an apparatus for capturing images provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of an apparatus for capturing images provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of an apparatus for capturing images provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 11 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 12 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 13 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 14 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 15 is a schematic structural diagram of a terminal provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the embodiments of this application are further described in detail below with reference to the drawings.
First, the terms involved in this application are introduced.
Image capture parameter prediction model: a mathematical model used to predict the image capture parameter values for the current HDR scene from the input data.
Optionally, the image capture parameter prediction model includes, but is not limited to, at least one of: a convolutional neural network (CNN) model, a deep neural network (DNN) model, a recurrent neural network (RNN) model, an embedding model, a gradient boosting decision tree (GBDT) model, or a logistic regression (LR) model.
The CNN model is a network model for recognizing the categories of objects in images. The CNN model can also extract data features from labeled or unlabeled image data. CNN models divide into neural network models that can be trained with unlabeled image data and those that cannot.
The DNN model is a deep learning framework. A DNN model includes an input layer, at least one hidden layer (also called an intermediate layer), and an output layer. Optionally, each of these layers includes at least one neuron, and the neurons process the data they receive. Optionally, the numbers of neurons in different layers may be the same or different.
The RNN model is a neural network model with a feedback structure. In an RNN model, a neuron's output can act directly on itself at the next time step; that is, the input of a layer-i neuron at time m includes, besides the output of the layer-(i-1) neurons at that time, its own output at time (m-1).
The embedding model is based on distributed vector representations of entities and relations, treating the relation in each triple instance as a translation from the head entity to the tail entity. A triple instance includes a subject, a relation, and an object, and can be represented as (subject, relation, object), where the subject is the head entity and the object is the tail entity. For example, "Xiao Zhang's father is Da Zhang" is represented by the triple instance (Xiao Zhang, father, Da Zhang).
The GBDT model is an iterative decision tree algorithm consisting of multiple decision trees, with the results of all trees summed as the final result. Each node of a decision tree yields a predicted value; taking age as an example, the predicted value is the average age of all people belonging to that node.
The LR model is a model built by applying a logistic function on top of linear regression.
Embodiments of this application provide a method for capturing images, executed by a terminal. The terminal may be any terminal with an image capture function, for example one with a camera application installed. The terminal may include components such as a processor, a memory, a capture component, and a screen. The processor may be a CPU (Central Processing Unit) or the like, and may be used to determine image capture parameter values and perform capture-related processing. The memory may be RAM (Random Access Memory), Flash, or the like, and may be used to store received data, data needed by processing, and data generated during processing, such as the image capture parameter prediction model. The capture component may be a camera, used to acquire preview images. The screen may be a touchscreen, used to display the preview images acquired by the capture component and to detect touch signals.
In the related art, when a user takes photos through a camera application, the application may provide an HDR compositing function so that the user can obtain clearer images in backlit scenes. Optionally, when the user wants to use HDR compositing, the user finds the function's toggle button and taps it to enable it; when the user presses the shutter button, the terminal performs capture processing based on a preset composite frame count. Optionally, the terminal may continuously acquire that preset number of images through the capture component (such as a camera) — each such image may be called a preview image — and composite them to obtain a composite image, i.e., an image composited from multiple preview images, which is stored in the terminal's gallery. With this approach, whenever the user wants to use HDR compositing, the user must first find the toggle button and manually tap it, which makes taking photos inefficient.
Embodiments of this application provide a method for capturing images, the method including:
while the terminal is in a ready-to-capture state, acquiring a preview image through the capture component, and obtaining an exposure parameter value corresponding to the preview image;
predicting image capture parameter values for the current HDR scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite;
when a shooting instruction is received, performing capture processing according to the predicted image capture parameter values.
Optionally, the image capture parameters further include exposure parameters.
Optionally, the method further includes:
training the image capture parameter prediction model according to the correspondences among preview images, exposure parameter values, and image capture parameter values in a pre-stored training set, such that the image capture parameter values predicted by the model approach the pre-stored training values of the image capture parameter values corresponding to the preview images and exposure parameter values, to obtain the trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters.
Optionally, the method further includes:
acquiring a first preview image through the capture component, and obtaining a first exposure parameter value corresponding to the first preview image;
determining a first preset number of exposure parameter values and a second preset number of exposure parameter values according to the first exposure parameter value, a first preset number of attenuation percentages, and a second preset number of enhancement percentages;
acquiring a preview image through the capture component according to each of the first preset number of exposure parameter values to obtain a first preset number of preview images, and acquiring a preview image through the capture component according to each of the second preset number of exposure parameter values to obtain a second preset number of preview images;
for each of multiple pre-stored preset composite frame counts, selecting the first preview image and selecting, from the first and second preset numbers of preview images, the preset composite frame count minus one preview images, to obtain target preview image sets corresponding to the preset composite frame count and multiple exposure parameter values, where the multiple exposure parameter values include the first preset number of exposure parameter values and the second preset number of exposure parameter values; and performing image compositing on the target preview image sets to obtain composite images corresponding to the preset composite frame count and the multiple exposure parameter values;
determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count and target exposure parameter value corresponding to the target composite image as the target image capture parameter values;
storing the first preview image, the first exposure parameter value, and the target image capture parameter values correspondingly into the training set.
Optionally, performing image compositing on the target preview image sets to obtain the composite images corresponding to the preset composite frame count and multiple exposure parameter values includes:
performing image compositing on the target preview image sets based on each of multiple preset terminal performance parameter values, to obtain composite images corresponding to the preset composite frame count, the multiple exposure parameter values, and the multiple preset terminal performance parameter values;
and determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count and target exposure parameter value corresponding to the target composite image as the target image capture parameter values includes:
determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count, the target exposure parameter value, and the target preset terminal performance parameter value corresponding to the target composite image as the target image capture parameter values.
Optionally, the method further includes:
recording the power consumption incurred in obtaining the composite images corresponding to the preset composite frame count and the multiple exposure parameter values;
and determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count and target exposure parameter value corresponding to the target composite image as the target image capture parameter values includes:
determining, among all the composite images obtained, the target composite image that is jointly optimal in image quality and power consumption, and determining the target preset composite frame count and target exposure parameter value corresponding to the target composite image as the target image capture parameter values.
The processing flow shown in FIG. 1 is described in detail below with reference to specific implementations, as follows:
Step 101: while the terminal is in a ready-to-capture state, acquire a preview image through the capture component, and obtain the exposure parameter value corresponding to the preview image.
The exposure parameter value corresponding to the preview image may be the exposure parameter value determined when the preview image was acquired.
Optionally, the ready-to-capture state indicates that the terminal has started the camera application and turned on its capture component. A preview image is an image acquired through the capture component.
Optionally, the exposure parameter value is the value of an exposure parameter, and includes at least one of exposure duration, white balance, shutter value, aperture value, and sensitivity. Optionally, a camera application is installed on the terminal. When the user wants to take a photo and taps the application's icon, the terminal receives the corresponding start instruction and launches the camera application; the terminal is then in the ready-to-capture state, i.e., it turns on its capture component. In this state, the terminal acquires a preview image through the capture component.
The preview image may be the image acquired by the capture component and displayed on the terminal, i.e., an image without compositing; in other words, a preview image may be an image acquired by the capture component before the user presses the shutter button. In addition, after the camera application starts, the terminal may determine exposure parameter values (which may include exposure duration, white balance, and other parameter values) in real time from the ambient brightness and the color of the light sources in the environment, so that it can perform capture processing accordingly. In this case, while acquiring a preview image through the capture component, the terminal may also obtain the exposure parameter value corresponding to it, i.e., the exposure parameter value on which the preview image capture was based.
Optionally, the terminal acquires preview images through the capture component in real time, or once every acquisition period.
The acquisition period is set by the user or by the terminal's defaults; this embodiment does not limit it.
In addition, an acquisition period may be configured in the terminal: while in the ready-to-capture state, at every preset acquisition period the terminal may acquire a preview image through the capture component and obtain its corresponding exposure parameter value.
Step 102: predict the image capture parameter values for the current HDR scene according to the preview image, the exposure parameter value, and the pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite.
Optionally, a pre-trained image capture parameter prediction model is stored in the terminal in advance.
The image capture parameter prediction model may be used to predict the image capture parameter values for the current scene from the preview image currently acquired by the terminal and its corresponding exposure parameter value. Each time a preview image and its exposure parameter value are obtained, the terminal can feed them into the pre-trained model and take its output as the image capture parameter values for the current scene. Optionally, the terminal uses the preview image as the value of the image data parameter and the corresponding exposure parameter value as the value of the exposure parameter, substitutes them into the model, and obtains the image capture parameter values for the current HDR scene, i.e., the number of frames to composite.
Optionally, the image capture parameter prediction model is a model obtained by training a convolutional neural network with the correspondences among preview images, exposure parameter values, and image capture parameter values in the training set.
Optionally, the above image capture parameter prediction model is pre-trained by the terminal or by a server. The training process is introduced below, taking terminal-side training as an example.
Optionally, the terminal obtains a training set including at least one group of sample data, and trains the original parameter model with the error backpropagation algorithm according to the at least one group of sample data to obtain the image capture parameter prediction model. Each group of sample data includes the correspondences among preview images, exposure parameter values, and image capture parameter values.
Accordingly, the training process may be as follows: train the image capture parameter prediction model according to the stored correspondences among preview images, exposure parameter values, and image capture parameter values in the training set, driving the model's predicted image capture parameter values toward the pre-stored training values corresponding to each preview image and exposure parameter value, to obtain the trained model whose variables are image data parameters, exposure parameters, and image capture parameters. Optionally, a training set is pre-stored in the terminal, containing the correspondences among preview images, exposure parameter values, and image capture parameter values; in each correspondence entry, the image capture parameter values may be those that make the composited image quality optimal in the scene expressed by that entry's preview image and exposure parameter value. The terminal may train a model containing undetermined parameters against this pre-stored training set: for each correspondence entry, the terminal may feed the entry's preview image and exposure parameter value into the model containing undetermined parameters to obtain image capture parameter values expressed in those undetermined parameters, and build an objective function from driving these toward the entry's training values (for example, the objective function may be the predicted image capture parameter values containing undetermined parameters minus the entry's stored image capture parameter values). Given the objective function, the gradient descent method yields training values for the undetermined parameters, which serve as the parameter values when training on the next correspondence entry, and so on; when training ends, the final training values of the undetermined parameters are obtained. In addition, the model may be a convolutional neural network model, in which case the undetermined parameters may be the convolution kernels of the neural network model.
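The training idea above — drive the model's prediction toward the stored target value for each (preview image, exposure value) pair — can be sketched in miniature. A one-layer linear model trained by plain gradient descent stands in for the CNN plus backpropagation described in the text; the feature (mean preview brightness), the toy data, and all names are illustrative assumptions, not the patent's actual model.

```python
# Toy data: (mean preview brightness, exposure value) -> stored target composite count.
samples = [((0.2, 0.5), 1.0), ((0.8, 0.5), 3.0)]

def train(samples, lr=0.05, epochs=20000):
    """Fit a tiny linear predictor so its output approaches each stored target."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = w1 * x1 + w2 * x2 + b
            err = pred - target          # drive prediction toward the training value
            w1 -= lr * err * x1          # gradient-descent parameter updates
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

w1, w2, b = train(samples)
predict = lambda x1, x2: w1 * x1 + w2 * x2 + b
```

After training, `predict` returns roughly 1 for the dark preview and roughly 3 for the bright one, mirroring how the trained model maps a scene to a composite frame count.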
Optionally, each entry in the above correspondences is selected according to the image quality of the composite images corresponding to the preset composite frame counts and multiple exposure parameter values. The processing may be as follows. The image capture parameters may further include exposure parameters. Acquire a first preview image through the capture component and obtain its corresponding first exposure parameter value; determine a first preset number of exposure parameter values and a second preset number of exposure parameter values from the first exposure parameter value, a first preset number of attenuation percentages, and a second preset number of enhancement percentages; acquire a preview image through the capture component for each of the first preset number of exposure parameter values to obtain a first preset number of preview images, and likewise for each of the second preset number of exposure parameter values to obtain a second preset number of preview images. For each pre-stored preset composite frame count, select the first preview image and, from the first and second preset numbers of preview images, select the preset composite frame count minus one preview images, obtaining the target preview image sets corresponding to the preset composite frame count and the multiple exposure parameter values (which include the first and second preset numbers of exposure parameter values); perform image compositing on the target preview image sets to obtain the composite images corresponding to the preset composite frame count and the multiple exposure parameter values. Among all composite images obtained, determine the target composite image with the best image quality and determine its target preset composite frame count and target exposure parameter value as the target image capture parameter values; then store the first preview image, the first exposure parameter value, and the target image capture parameter values correspondingly into the training set.
Optionally, each entry in the above correspondences is determined by the terminal from the acquired first preview image and the preset numbers of preview images corresponding to different exposure parameter values. Different entries are determined from first preview images and preset numbers of preview images acquired by the terminal in different HDR scenes.
It should be noted that among the entries of the above correspondences, for entries whose composite frame count in the image capture parameters is 1, the exposure parameter value in the image capture parameter values may be a preset value (such as 0); for entries whose composite frame count is greater than 1, the exposure parameter value may be one of the first and second preset numbers of exposure parameter values determined from the first exposure parameter value. The determination of one such entry is described in detail below; the other entries are determined in the same way.
Illustratively, in a given HDR scene the terminal may acquire a preview image through the capture component (which may be called the first preview image) — a preview image being acquired directly by the capture component, without image compositing — and then obtain its exposure parameter value (which may be called the first exposure parameter value), an unadjusted value the terminal determines from the ambient brightness and the color of the light sources in the environment. The terminal may pre-store a first preset number of attenuation percentages (percentages below 100%) and a second preset number of enhancement percentages (percentages above 100%); the first and second preset numbers may be equal or different. Having obtained the first exposure parameter value, the terminal may multiply it by each of the attenuation percentages to obtain the first preset number of exposure parameter values, and by each of the enhancement percentages to obtain the second preset number of exposure parameter values. For example, if the attenuation percentages are 80% and 60% and the first exposure parameter value is A, the first preset number of exposure parameter values are A*80% and A*60%; if the enhancement percentage is 110%, the second preset number of exposure parameter values is A*110%. The terminal then acquires one preview image through the capture component for each of these exposure parameter values, obtaining the first and second preset numbers of preview images; these are captured with the adjusted exposure parameter values, not with values determined from ambient brightness and light-source color.
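The exposure bracketing just described is simple multiplication; the sketch below reproduces the worked example (A = 1.0, attenuations 80% and 60%, enhancement 110%). The function name and data layout are illustrative.

```python
def bracket_exposures(first_exposure, attenuations, enhancements):
    """Derive the candidate exposure values from the first preview image's
    exposure value: attenuation percentages (< 100%) lower it, enhancement
    percentages (> 100%) raise it."""
    lowered = [first_exposure * p for p in attenuations]
    raised = [first_exposure * p for p in enhancements]
    return lowered, raised

# Worked example from the text: A = 1.0, attenuations 80%/60%, enhancement 110%
lowered, raised = bracket_exposures(1.0, [0.80, 0.60], [1.10])
```

With these values the terminal would capture two previews at reduced exposure (0.8 and 0.6) and one at increased exposure (1.1), in addition to the first preview.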
The terminal may pre-store multiple preset composite frame counts (for example 1, 2, and 3). Having obtained the first preview image and the first and second preset numbers of preview images: for the preset count of 1, the first preview image is taken as that count's target preview image set, and the exposure parameter value corresponding to the composite image obtained from that set may be the preset value. For the preset count of 2, the terminal may select the first preview image together with one preview image from the first or second preset number of preview images — that is, for each exposure parameter value among the first and second preset numbers of exposure parameter values, it selects the first preview image plus the preview image corresponding to that exposure parameter value, obtaining a target preview image set corresponding to the preset count and that exposure parameter value. For the preset count of 3, the terminal may select the first preview image together with one preview image from the first preset number of preview images and one from the second preset number of preview images, obtaining target preview image sets corresponding to that count and pairs of exposure parameter values.
For example, suppose the first preview image acquired by the terminal is image 1 with first exposure parameter value A; the first preset number of preview images are images 2 and 3, with exposure parameter values B and C respectively; the second preset number of preview images is image 4, with exposure parameter value D; and the preset composite frame counts are 1, 2, and 3. For the preset count 1, the terminal may select image 1, obtaining the target preview image set for count 1. For the preset count 2, the terminal may select image 1 and image 2, obtaining the set for count 2 and exposure parameter value B; or image 1 and image 3, obtaining the set for count 2 and C; or image 1 and image 4, obtaining the set for count 2 and D. For the preset count 3, the terminal may select image 1, image 2, and image 4, obtaining the set for count 3 and exposure parameter values B and D; or image 1, image 3, and image 4, obtaining the set for count 3 and C and D, as shown in FIG. 2.
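The enumeration of candidate sets can be sketched with `itertools.combinations`. Note one simplifying assumption: the sketch enumerates every combination of re-exposed previews, which is a superset of the FIG. 2 example (there, count 3 pairs one attenuated with one enhanced frame); names are illustrative.

```python
from itertools import combinations

def target_preview_sets(first_image, exposure_to_preview, preset_counts):
    """For each preset composite count n, build candidate preview-image sets:
    always the first preview image plus n-1 of the re-exposed previews."""
    sets = {}
    for n in preset_counts:
        if n == 1:
            sets[n] = [[first_image]]        # single frame: no compositing
        else:
            sets[n] = [[first_image] + [exposure_to_preview[e] for e in chosen]
                       for chosen in combinations(exposure_to_preview, n - 1)]
    return sets

# The example's images: image 1 (first), images 2/3 at attenuated exposures B/C,
# image 4 at enhanced exposure D, with preset counts 1, 2, 3.
sets = target_preview_sets("image1",
                           {"B": "image2", "C": "image3", "D": "image4"},
                           [1, 2, 3])
```

For count 2 this yields the three sets of the example; for count 3 it yields the example's two sets plus the all-attenuated pairing.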
For each of the multiple pre-stored preset composite frame counts, once the target preview image sets corresponding to that count and the multiple exposure parameter values are obtained, image compositing may be performed on each target preview image set — that is, the preview images within a set are composited — yielding the composite images corresponding to that count and the multiple exposure parameter values. Once all composite images are obtained, the image quality (such as clarity) of each composite image can be computed, and the composite image with the best quality (which may be called the target composite image) determined, along with its corresponding preset composite frame count (the target preset composite frame count) and exposure parameter value (the target exposure parameter value). When the target preset composite frame count is 1, the target exposure parameter value is the preset value (such as 0); when it is greater than 1, the target exposure parameter value is one of the first and/or second preset numbers of exposure parameter values. The determined target preset composite frame count and target exposure parameter value may then be taken as the target image capture parameter values, and the first preview image, the first exposure parameter value, and the target image capture parameter values stored correspondingly into the training set. For other HDR scenes, the terminal can likewise obtain the corresponding preview images, exposure parameter values, and image capture parameter values, thereby obtaining all entries of the training set.
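Picking the target composite image is an argmax over a quality score. The patent only says the quality may be clarity, so the sketch substitutes a toy gradient-energy score on nested-list grayscale images; the metric, names, and sample images are all illustrative assumptions.

```python
def sharpness(img):
    """Toy clarity score: sum of squared neighbour differences. Stands in for
    whatever image-quality metric the terminal actually uses."""
    h, w = len(img), len(img[0])
    horiz = sum((img[y][x + 1] - img[y][x]) ** 2
                for y in range(h) for x in range(w - 1))
    vert = sum((img[y + 1][x] - img[y][x]) ** 2
               for y in range(h - 1) for x in range(w))
    return horiz + vert

def pick_target(composites):
    """composites maps (preset_count, exposure_values) -> composite image;
    return the key of the best-scoring composite, i.e. the target values."""
    return max(composites, key=lambda k: sharpness(composites[k]))

flat = [[5, 5], [5, 5]]      # low-detail composite
crisp = [[0, 9], [9, 0]]     # high-detail composite
best = pick_target({(1, ()): flat, (2, ("B",)): crisp})
```

Here `best` comes out as the key of the crisper composite, whose count and exposure values would be stored as the target image capture parameter values.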
Optionally, the image capture parameters further include terminal performance parameters. Accordingly, when building the training set, the composite frame count and terminal performance parameter value corresponding to a preview image and exposure parameter may be determined, and the terminal may proceed as follows: perform image compositing on the target preview image sets based on each of multiple preset terminal performance parameter values, obtaining composite images corresponding to the preset composite frame count, the multiple exposure parameter values, and the multiple preset terminal performance parameter values. Correspondingly, among the composite images so obtained, determine the target composite image with the best image quality and determine its target preset composite frame count, target exposure parameter value, and target preset terminal performance parameter value as the target image capture parameter values.
Optionally, the terminal performance parameter value is the value of a terminal performance parameter, which is a parameter affecting terminal performance, such as the CPU operating frequency (also called the CPU clock frequency).
Optionally, multiple preset terminal performance parameter values are pre-stored in the terminal. When the image capture parameters also include terminal performance parameters, for each preset composite frame count, after the target preview image sets corresponding to that count and the multiple exposure parameter values are determined, the terminal may composite each target preview image set under each preset terminal performance parameter value in turn, obtaining the composite images corresponding to that count, the exposure parameter values, and the performance parameter values. For example, with a preset count of 3, exposure parameter values B and D, and preset performance parameter values a and b, the terminal may composite the corresponding target preview image set with the terminal performance parameter set to a, obtaining the composite image for preset count 3, B, D, and a; and likewise with the performance parameter set to b, obtaining the composite image for preset count 3, B, D, and b, as shown in FIG. 3.
After all composite images are obtained, the image quality of each can be computed, the target composite image with the best quality determined among them, and its target preset composite frame count, target exposure parameter value, and target preset terminal performance parameter value taken as the target image capture parameter values; the triplet (first preview image, first exposure parameter value, target image capture parameter values) can then be stored correspondingly into the training set.
Optionally, the terminal adds the first preview image, the first exposure parameter value, and the target image capture parameter values to the training set to obtain an updated training set, and trains the image capture parameter prediction model on the updated training set to obtain an updated model.
The training on the updated training set is analogous to the model training process described above and is not repeated here.
Optionally, when compositing each target preview image set, the terminal also records the power consumed by that compositing. Accordingly, determining the target image capture parameter values may proceed as follows: record the power consumption incurred in obtaining the composite images corresponding to the preset composite frame count and the multiple exposure parameter values; among the composite images so obtained, determine the target composite image that is jointly optimal in image quality and power consumption, and take its target preset composite frame count and target exposure parameter value as the target image capture parameter values.
Optionally, for each of the multiple pre-stored preset composite frame counts, after the target preview image sets corresponding to that count and the multiple exposure parameter values are obtained and composited into the corresponding composite images, the power consumed by the compositing may be recorded — that is, the power consumption incurred in obtaining the composite images for that count and the multiple exposure parameter values — where the power consumption value may be one or more of: energy consumed, time consumed. In this case, after all composite images and their power consumption values are obtained, the terminal may determine among the composite images the target composite image that is jointly optimal in image quality and power consumption (for example, the composite image with the largest quotient of image quality over power consumption may be determined as the target composite image), and then take its target preset composite frame count and target exposure parameter value as the target image capture parameter values.
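The quality/power quotient rule mentioned parenthetically above reduces to a one-line argmax; the candidate data here is hypothetical.

```python
def pick_by_quality_and_power(candidates):
    """candidates maps (preset_count, exposure_values) -> (quality, power_cost).
    Selects the composite whose quality-to-power quotient is largest, as the
    text suggests for the jointly optimal target composite image."""
    return max(candidates, key=lambda k: candidates[k][0] / candidates[k][1])

choice = pick_by_quality_and_power({
    (2, ("B",)): (0.90, 3.0),      # slightly lower quality, much cheaper
    (3, ("B", "D")): (0.95, 6.0),  # best quality, but twice the power
})
```

Here the two-frame composite wins (0.90/3.0 = 0.30 versus 0.95/6.0 ≈ 0.16), illustrating how the joint criterion can prefer a cheaper composite over the highest-quality one.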
It should also be noted that when compositing the target preview image set corresponding to the preset count of 1, the first preview image in the set is taken directly as the composite image.
Step 103: when a shooting instruction is received, perform capture processing according to the predicted image capture parameter values.
Optionally, while the terminal is in the ready-to-capture state, the user may tap the shutter button to take a photo; the terminal then receives the shutter button's tap instruction and can perform capture processing according to the predicted image capture parameter values for the current scene and the HDR compositing algorithm, where the predicted values for the current HDR scene may be the values of the image capture parameters in that algorithm. When the predicted composite frame count is 1, the terminal may acquire one preview image through the capture component; in that case the acquired preview image is itself the final composite image (equivalent to the terminal not enabling the HDR compositing function). Optionally, when the image capture parameters include the composite frame count, the terminal may acquire a second preview image and, for each of the composite frame count minus one preset exposure parameter values, capture one preview image; it then composites the second preview image with the composite frame count minus one preview images to obtain the final image, which is stored in the gallery. When the image capture parameters include the composite frame count and terminal performance parameters, the terminal may likewise acquire the second preview image and the extra frames, set the terminal performance parameter to the predicted value, and composite the acquired second preview image with the composite frame count minus one preview images based on that predicted terminal performance parameter value to obtain and store the final image. It follows that if the predicted composite frame count is 1, the terminal may simply acquire the second preview image, take it as the final image, and store it.
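The capture flow of step 103 can be sketched as follows. All callables are placeholders for the terminal's camera pipeline (acquiring a preview at a given exposure, HDR compositing, saving to the gallery); the stubs at the bottom exist only to make the sketch runnable.

```python
def capture(predicted_count, predicted_exposures, get_preview, composite, save):
    """Capture processing per the predicted parameters: a count of 1 stores the
    single preview as-is (HDR compositing effectively off); otherwise the first
    frame is composited with count-1 frames taken at the predicted exposures."""
    first = get_preview(None)                      # terminal's own exposure value
    if predicted_count == 1:
        save(first)
        return first
    extra = [get_preview(e) for e in predicted_exposures[:predicted_count - 1]]
    result = composite([first] + extra)
    save(result)
    return result

# Stub pipeline: previews are tagged tuples, "compositing" just counts frames.
log = []
result = capture(3, [0.8, 0.6],
                 get_preview=lambda e: ("frame", e),
                 composite=lambda frames: ("composite", len(frames)),
                 save=log.append)
```

With a predicted count of 3 and exposures 0.8 and 0.6, the stub composites three frames and saves one result, matching the flow described above.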
In addition, for the case in step 101 where a preview image and its corresponding exposure parameter value are obtained at every preset acquisition period: each time a preview image is acquired, the image capture parameter values for the current HDR scene (i.e., corresponding to the current acquisition period) can be determined as in step 102, and whenever a shooting instruction is received within the current acquisition period, the terminal can perform capture processing according to the image capture parameter values predicted for that acquisition period.
Optionally, for the case where the image capture parameters further include exposure parameters, step 103 may proceed as follows: when a shooting instruction is received, acquire a second preview image through the capture component and, for each predicted exposure parameter value, acquire one preview image through the capture component; composite the second preview image with the composite frame count minus one preview images to obtain the composite image.
Optionally, the image capture parameters may further include exposure parameters. In that case, the inputs of the image capture parameter prediction model may be the preview image and its corresponding exposure parameter value (determined by the terminal, unadjusted, from ambient brightness and light-source color), and the model's outputs may be the predicted composite frame count and predicted exposure parameter values, where the predicted exposure parameter values may be the exposure parameter values used by the terminal for capture processing and are smaller than the preview image's exposure parameter value.
Optionally, after opening the camera application, when the user wants to take a photo, the user taps the shutter button; the terminal then receives the shooting instruction, acquires a preview image through the capture component (which may be called the second preview image, i.e., the image acquired by the capture component when the shooting instruction is received), and obtains the predicted exposure parameter values and composite frame count. The terminal may then set the exposure parameter to each predicted exposure parameter value in turn, acquiring one preview image through the capture component for each of the composite frame count minus one exposure parameter values; it then composites the second preview image with these composite frame count minus one preview images acquired at the predicted exposure parameter values, obtains the composite image, and stores it in the gallery for the user to view. The terminal need not store the intermediate preview images (the user cannot see them); they are only used to produce the composite image. It follows that if the composite frame count is 1, the terminal acquires only the second preview image, in which case the predicted exposure parameter values have no practical effect.
In the embodiments of this application, while the terminal is in the ready-to-capture state, a preview image is acquired through the capture component and the exposure parameter value corresponding to it is obtained; the image capture parameter values for the current HDR scene are predicted from the preview image, the exposure parameter value, and the pre-trained image capture parameter prediction model; and when a shooting instruction is received, capture processing is performed according to the predicted image capture parameter values. In this way, the terminal can automatically compute the number of frames to composite in the current HDR scene and perform capture processing based on it, without the user manually enabling the HDR compositing function, thereby improving the efficiency of taking photos.
Based on the same technical concept, an embodiment of this application further provides an apparatus for capturing images. As shown in FIG. 4, the apparatus includes:
a first acquisition module 410, configured to acquire a preview image through a capture component while the terminal is in a ready-to-capture state, and obtain the exposure parameter value corresponding to the preview image;
a prediction module 420, configured to predict the image capture parameter values for the current HDR scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite;
an execution module 430, configured to perform capture processing according to the predicted image capture parameter values when a shooting instruction is received.
Optionally, the image capture parameters further include exposure parameters.
Optionally, as shown in FIG. 5, the apparatus further includes:
a training module 440, configured to train the image capture parameter prediction model according to the correspondences among preview images, exposure parameter values, and image capture parameter values in a pre-stored training set, such that the image capture parameter values predicted by the model approach the pre-stored training values of the image capture parameter values corresponding to the preview images and exposure parameter values, to obtain the trained model whose variables are image data parameters, exposure parameters, and image capture parameters.
Optionally, as shown in FIG. 6, the apparatus further includes:
a second acquisition module 450, configured to acquire a first preview image through the capture component and obtain the first exposure parameter value corresponding to it;
a first determination module 460, configured to determine a first preset number of exposure parameter values and a second preset number of exposure parameter values according to the first exposure parameter value, a first preset number of attenuation percentages, and a second preset number of enhancement percentages;
a third acquisition module 470, configured to acquire a preview image through the capture component according to each of the first preset number of exposure parameter values to obtain a first preset number of preview images, and according to each of the second preset number of exposure parameter values to obtain a second preset number of preview images;
a second determination module 480, configured to, for each of multiple pre-stored preset composite frame counts, select the first preview image and, from the first and second preset numbers of preview images, select the preset composite frame count minus one preview images, obtaining the target preview image sets corresponding to the preset composite frame count and multiple exposure parameter values, where the multiple exposure parameter values include the first and second preset numbers of exposure parameter values; and to perform image compositing on the target preview image sets to obtain the composite images corresponding to the preset composite frame count and the multiple exposure parameter values;
a third determination module 490, configured to determine, among all the composite images obtained, the target composite image with the best image quality, and determine the target preset composite frame count and target exposure parameter value corresponding to it as the target image capture parameter values;
a storage module 4100, configured to store the first preview image, the first exposure parameter value, and the target image capture parameter values correspondingly into the training set.
Optionally, the second determination module 480 is configured to:
perform image compositing on the target preview image sets based on each of multiple preset terminal performance parameter values, obtaining the composite images corresponding to the preset composite frame count, the multiple exposure parameter values, and the multiple preset terminal performance parameter values;
and the third determination module 490 is configured to:
determine, among all the composite images obtained, the target composite image with the best image quality, and determine the target preset composite frame count, target exposure parameter value, and target preset terminal performance parameter value corresponding to it as the target image capture parameter values.
Optionally, as shown in FIG. 7, the apparatus further includes:
a recording module 4110, configured to record the power consumption incurred in obtaining the composite images corresponding to the preset composite frame count and the multiple exposure parameter values;
and the third determination module 490 is configured to:
determine, among all the composite images obtained, the target composite image that is jointly optimal in image quality and power consumption, and determine the target preset composite frame count and target exposure parameter value corresponding to it as the target image capture parameter values.
In the embodiments of this application, while the terminal is in the ready-to-capture state, a preview image is acquired through the capture component and its corresponding exposure parameter value obtained; the image capture parameter values for the current HDR scene are predicted according to the preview image, the exposure parameter value, and the pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite; and when a shooting instruction is received, capture processing is performed according to the predicted image capture parameter values. In this way, the terminal can automatically compute the number of frames to composite in the current HDR scene and then perform capture processing based on it, without the user manually enabling the HDR compositing function, thereby improving the efficiency of taking photos.
It should be noted that when the apparatus for capturing images provided in the above embodiments captures images, the division into the above functional modules is only an example; in practice, the above functions can be assigned to different functional modules as needed, i.e., the internal structure of the terminal can be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments for capturing images belong to the same concept; see the method embodiments for the specific implementation process, which is not repeated here.
Optionally, an embodiment of this application further provides a terminal including a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for capturing images described in the method embodiments above.
Optionally, an embodiment of this application further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for capturing images described in the method embodiments above.
Referring to FIG. 8 and FIG. 9, structural block diagrams of a terminal 100 provided by an exemplary embodiment of this application are shown. The terminal 100 may be a mobile phone, a tablet computer, a laptop, an e-reader, and so on. The terminal 100 in this application may include one or more of the following components: a processor 110, a memory 120, and a touch display screen 130.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire terminal 100 using various interfaces and lines, and executes the various functions of the terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, applications, and so on; the GPU is responsible for rendering and drawing the content to be displayed on the touch display screen 130; the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and instead be implemented by a separate chip.
The memory 120 may include random access memory (RAM) and may also include read-only memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable storage medium. The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area. The program storage area may store instructions for implementing the operating system, instructions for at least one function (such as touch, audio playback, and image playback functions), instructions for implementing the method embodiments below, and so on; the data storage area may store data created through the use of the terminal 100 (such as audio data and a phone book).
Taking the Android operating system as an example, the programs and data stored in the memory 120 are shown in FIG. 8: the memory 120 stores a Linux kernel layer 220, a system runtime library layer 240, an application framework layer 260, and an application layer 280. The Linux kernel layer 220 provides low-level drivers for the various hardware of the terminal 100, such as display, audio, camera, Bluetooth, Wi-Fi, and power management drivers. The system runtime library layer 240 provides the main feature support for the Android system through C/C++ libraries: for example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, and the Webkit library provides browser kernel support. The system runtime library layer 240 also provides the Android Runtime, which mainly supplies core libraries allowing developers to write Android applications in Java. The application framework layer 260 provides the various APIs that may be used when building applications, which developers can also use to build their own applications — such as activity management, window management, view management, notification management, content providers, package management, call management, resource management, and location management. At least one application runs in the application layer 280; these may be the contacts, SMS, clock, or camera applications that come with the operating system, or applications developed by third parties, such as instant messaging or photo beautification applications.
Taking the iOS operating system as an example, the programs and data stored in the memory 120 are shown in FIG. 9. The iOS system includes: the Core OS layer 320, the Core Services layer 340, the Media layer 360, and the Cocoa Touch layer 380. The Core OS layer 320 includes the operating system kernel, drivers, and low-level program frameworks; these low-level frameworks provide functionality closer to the hardware for use by the program frameworks in the Core Services layer 340. The Core Services layer 340 provides the system services and/or program frameworks needed by applications, such as the Foundation framework, account framework, advertising framework, data storage framework, networking framework, geolocation framework, motion framework, and so on. The Media layer 360 provides audiovisual interfaces for applications, such as graphics and imaging interfaces, audio technology interfaces, video technology interfaces, and the AirPlay interface for wireless audio/video transmission. The Cocoa Touch layer 380 provides various commonly used interface-related frameworks for application development and is responsible for the user's touch interactions on the terminal 100, such as the local notification service, remote push service, advertising framework, game tools framework, message user interface (UI) framework, UIKit framework, map framework, and so on.
In the framework shown in FIG. 9, the frameworks relevant to most applications include, but are not limited to: the Foundation framework in the Core Services layer 340 and the UIKit framework in the Cocoa Touch layer 380. The Foundation framework provides many basic object classes and data types, offering the most basic system services for all applications, independent of the UI. The classes provided by the UIKit framework are the basic UI class library for creating touch-based user interfaces; iOS applications can provide their UI based on the UIKit framework, which therefore supplies the application's infrastructure for building user interfaces, drawing, handling user interaction events, responding to gestures, and so on.
The touch display screen 130 is used to receive the user's touch operations on or near it with a finger, a stylus, or any suitable object, and to display the user interfaces of the various applications. The touch display screen 130 is usually disposed on the front panel of the terminal 100. The touch display screen 130 may be designed as a full screen, a curved screen, or a profiled screen. It may also be designed as a combination of a full screen and a curved screen, or a combination of a profiled screen and a curved screen, which is not limited in this embodiment. Among them:
Full screen
A full screen may refer to a screen design in which the screen-to-body ratio of the touch display screen 130 on the front panel of the terminal 100 exceeds a threshold (such as 80%, 90%, or 95%). One calculation of the screen ratio is: (area of the touch display screen 130 / area of the front panel of the terminal 100) * 100%; another calculation is: (area of the actual display region of the touch display screen 130 / area of the front panel of the terminal 100) * 100%; yet another calculation is: (diagonal of the touch display screen 130 / diagonal of the front panel of the terminal 100) * 100%. Illustratively, in the example shown in FIG. 10, nearly the entire area of the front panel of the terminal 100 is the touch display screen 130; on the front panel 40, all regions other than the edge produced by the middle frame 41 belong to the touch display screen 130. The four corners of the touch display screen 130 may be right angles or rounded corners.
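The three screen-ratio formulas above are plain arithmetic; the sketch below evaluates each (all measurements are hypothetical sample values).

```python
def screen_ratios(screen_area, display_area, panel_area, screen_diag, panel_diag):
    """The three screen-to-body ratio formulas described above, in percent."""
    by_area = screen_area / panel_area * 100        # screen area / front panel area
    by_display = display_area / panel_area * 100    # actual display area / front panel area
    by_diagonal = screen_diag / panel_diag * 100    # screen diagonal / front panel diagonal
    return by_area, by_display, by_diagonal

# Hypothetical 90 cm^2 screen (85 cm^2 actual display area, 15.2 cm diagonal)
# on a 100 cm^2 front panel with a 16 cm diagonal:
ratios = screen_ratios(90.0, 85.0, 100.0, 15.2, 16.0)
```

The three formulas generally give different numbers for the same device (here 90%, 85%, and 95%), which is why the text lists them as alternative calculations rather than one definition.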
A full screen may also be a screen design that integrates at least one front panel component inside or beneath the touch display screen 130. Optionally, the at least one front panel component includes: a camera, a fingerprint sensor, a proximity light sensor, a distance sensor, and so on. In some embodiments, the other components on the front panel of a conventional terminal are integrated in all or part of the area of the touch display screen 130; for example, the photosensitive element of the camera is split into a plurality of photosensitive pixels, and each photosensitive pixel is integrated in a black area of each display pixel of the touch display screen 130. Because at least one front panel component is integrated inside the touch display screen 130, the full screen has a higher screen-to-body ratio.
Of course, in other embodiments, the front panel components of a conventional terminal may instead be disposed on the side or back of the terminal 100, such as an ultrasonic fingerprint sensor disposed under the touch display screen 130, a bone-conduction earpiece disposed inside the terminal 100, and a camera disposed on the side of the terminal 100 in a pluggable structure.
In some optional embodiments, when the terminal 100 adopts a full screen, an edge touch sensor 120 may be disposed on a single side, two sides (such as the left and right sides), or four sides (such as the upper, lower, left, and right sides) of the middle frame of the terminal 100. The edge touch sensor 120 is used to detect at least one of the user's touch, click, press, and slide operations on the middle frame. The edge touch sensor 120 may be any of a touch sensor, a thermal sensor, a pressure sensor, and the like. The user can apply operations on the edge touch sensor 120 to control the applications in the terminal 100.
Curved screen
A curved screen refers to a screen design in which the screen area of the touch display screen 130 does not lie in one plane. Generally, a curved screen has at least one cross section of a curved shape, and its projection in any plane direction perpendicular to that cross section is planar; the curved shape may be U-shaped. Optionally, a curved screen refers to a screen design in which at least one side is curved. Optionally, a curved screen means that at least one side of the touch display screen 130 extends over the middle frame of the terminal 100. Because the side of the touch display screen 130 extends over the middle frame of the terminal 100 — turning a middle frame that originally had no display or touch function into a displayable and/or operable area — the curved screen has a higher screen-to-body ratio. Optionally, in the example shown in FIG. 11, a curved screen refers to a screen design in which the left and right side edges 42 are curved; or a screen design in which the upper and lower sides are curved; or a screen design in which all four sides (upper, lower, left, and right) are curved. In optional embodiments, the curved screen is made of a touchscreen material with a certain flexibility.
Profiled screen
A profiled screen is a touch display screen with an irregular outline; an irregular shape is neither a rectangle nor a rounded rectangle. Optionally, a profiled screen refers to a screen design in which protrusions, notches, and/or holes are provided on a rectangular or rounded-rectangular touch display screen 130. Optionally, the protrusions, notches, and/or holes may be located at the edge of the touch display screen 130, in the center of the screen, or both. When provided on one edge, they may be disposed at the middle or at either end of that edge; when provided in the center of the screen, they may be disposed in one or more of the upper, upper-left, left, lower-left, lower, lower-right, right, and upper-right regions of the screen. When provided in multiple regions, the protrusions, notches, and holes may be concentrated or dispersed, and may be distributed symmetrically or asymmetrically. Optionally, the number of protrusions, notches, and/or holes is also not limited.
Because a profiled screen turns the upper and/or lower forehead areas of the touch display screen into displayable and/or operable areas — letting the touch display screen occupy more space on the front panel of the terminal — the profiled screen also has a larger screen-to-body ratio. In some embodiments, the notches and/or holes accommodate at least one front panel component, including at least one of a camera, a fingerprint sensor, a proximity light sensor, a distance sensor, an earpiece, an ambient light brightness sensor, and a physical button.
Illustratively, the notch may be provided on one or more edges, and may be a semicircular, right-angled rectangular, rounded-rectangular, or irregularly shaped notch. In the example shown in FIG. 12, the profiled screen may be a screen design with a semicircular notch 43 at the central position of the upper edge of the touch display screen 130, the position vacated by the semicircular notch 43 accommodating at least one front panel component among a camera, a distance sensor (also called a proximity sensor), an earpiece, and an ambient light brightness sensor. Illustratively, as shown in FIG. 13, the profiled screen may be a screen design with a semicircular notch 44 at the central position of the lower edge of the touch display screen 130, the vacated position accommodating at least one of a physical button, a fingerprint sensor, and a microphone. In the example shown in FIG. 14, the profiled screen may be a screen design with a semi-elliptical notch 45 at the central position of the lower edge of the touch display screen 130, with another semi-elliptical notch formed on the front panel of the terminal 100; the two semi-elliptical notches enclose an elliptical area for accommodating a physical button or a fingerprint recognition module. In the example shown in FIG. 15, the profiled screen may be a screen design with at least one small hole 45 in the upper half of the touch display screen 130, the position vacated by the small hole 45 accommodating at least one front panel component among a camera, a distance sensor, an earpiece, and an ambient light brightness sensor.
In addition, those skilled in the art will appreciate that the structure of the terminal 100 shown in the above figures does not constitute a limitation on the terminal 100; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. For example, the terminal 100 further includes components such as a radio frequency circuit, an input unit, sensors, an audio circuit, a wireless fidelity (WiFi) module, a power supply, and a Bluetooth module, which are not described here.
In the embodiments of this application, while the terminal is in the ready-to-capture state, a preview image is acquired through the capture component and its corresponding exposure parameter value obtained; the image capture parameter values for the current HDR scene are predicted according to the preview image, the exposure parameter value, and the pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, where the image capture parameters include the number of frames to composite; and when a shooting instruction is received, capture processing is performed according to the predicted image capture parameter values. In this way, the terminal can automatically compute the number of frames to composite in the current HDR scene and then perform capture processing based on it, without the user manually enabling the HDR compositing function, thereby improving the efficiency of taking photos.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of protection of this application.

Claims (14)

  1. A method for capturing images, characterized in that the method comprises:
    while a terminal is in a ready-to-capture state, acquiring a preview image through a capture component, and obtaining an exposure parameter value corresponding to the preview image;
    predicting image capture parameter values for a current high dynamic range (HDR) scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, wherein the image capture parameters comprise a number of frames to composite;
    when a shooting instruction is received, performing capture processing according to the predicted image capture parameter values.
  2. The method according to claim 1, characterized in that the image capture parameters further comprise exposure parameters.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    training the image capture parameter prediction model according to correspondences among preview images, exposure parameter values, and image capture parameter values in a pre-stored training set, such that the image capture parameter values predicted by the image capture parameter prediction model approach pre-stored training values of the image capture parameter values corresponding to the preview images and exposure parameter values, to obtain the trained image capture parameter prediction model whose variables are the image data parameters, the exposure parameters, and the image capture parameters.
  4. The method according to claim 3, characterized in that the method further comprises:
    acquiring a first preview image through the capture component, and obtaining a first exposure parameter value corresponding to the first preview image;
    determining a first preset number of exposure parameter values and a second preset number of exposure parameter values according to the first exposure parameter value, a first preset number of attenuation percentages, and a second preset number of enhancement percentages;
    acquiring a preview image through the capture component according to each of the first preset number of exposure parameter values to obtain a first preset number of preview images, and acquiring a preview image through the capture component according to each of the second preset number of exposure parameter values to obtain a second preset number of preview images;
    for each of a plurality of pre-stored preset composite frame counts, selecting the first preview image and selecting, from the first preset number of preview images and the second preset number of preview images, the preset composite frame count minus one preview images, to obtain target preview image sets corresponding to the preset composite frame count and a plurality of exposure parameter values, wherein the plurality of exposure parameter values comprise the first preset number of exposure parameter values and the second preset number of exposure parameter values; and performing image compositing on the target preview image sets to obtain composite images corresponding to the preset composite frame count and the plurality of exposure parameter values;
    determining, among all the composite images obtained, a target composite image with the best image quality, and determining a target preset composite frame count and a target exposure parameter value corresponding to the target composite image as target image capture parameter values;
    storing the first preview image, the first exposure parameter value, and the target image capture parameter values correspondingly into the training set.
  5. The method according to claim 4, characterized in that performing image compositing on the target preview image sets to obtain the composite images corresponding to the preset composite frame count and the plurality of exposure parameter values comprises:
    performing image compositing on the target preview image sets based on each of a plurality of preset terminal performance parameter values, to obtain composite images corresponding to the preset composite frame count, the plurality of exposure parameter values, and the plurality of preset terminal performance parameter values;
    and determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count and the target exposure parameter value corresponding to the target composite image as the target image capture parameter values comprises:
    determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count, the target exposure parameter value, and a target preset terminal performance parameter value corresponding to the target composite image as the target image capture parameter values.
  6. The method according to claim 4, characterized in that the method further comprises:
    recording the power consumption incurred in obtaining the composite images corresponding to the preset composite frame count and the plurality of exposure parameter values;
    and determining, among all the composite images obtained, the target composite image with the best image quality, and determining the target preset composite frame count and the target exposure parameter value corresponding to the target composite image as the target image capture parameter values comprises:
    determining, among all the composite images obtained, the target composite image that is jointly optimal in image quality and power consumption, and determining the target preset composite frame count and the target exposure parameter value corresponding to the target composite image as the target image capture parameter values.
  7. An apparatus for capturing images, characterized in that the apparatus comprises:
    a first acquisition module, configured to acquire a preview image through a capture component while a terminal is in a ready-to-capture state, and obtain an exposure parameter value corresponding to the preview image;
    a prediction module, configured to predict image capture parameter values for a current high dynamic range (HDR) scene according to the preview image, the exposure parameter value, and a pre-trained image capture parameter prediction model whose variables are image data parameters, exposure parameters, and image capture parameters, wherein the image capture parameters comprise a number of frames to composite;
    an execution module, configured to perform capture processing according to the predicted image capture parameter values when a shooting instruction is received.
  8. The apparatus according to claim 7, characterized in that the image capture parameters further comprise exposure parameters.
  9. The apparatus according to claim 7 or 8, characterized in that the apparatus further comprises:
    a training module, configured to train the image capture parameter prediction model according to correspondences among preview images, exposure parameter values, and image capture parameter values in a pre-stored training set, such that the image capture parameter values predicted by the image capture parameter prediction model approach pre-stored training values of the image capture parameter values corresponding to the preview images and exposure parameter values, to obtain the trained image capture parameter prediction model whose variables are the image data parameters, the exposure parameters, and the image capture parameters.
  10. The apparatus according to claim 9, characterized in that the apparatus further comprises:
    a second acquisition module, configured to acquire a first preview image through the capture component and obtain a first exposure parameter value corresponding to the first preview image;
    a first determination module, configured to determine a first preset number of exposure parameter values and a second preset number of exposure parameter values according to the first exposure parameter value, a first preset number of attenuation percentages, and a second preset number of enhancement percentages;
    a third acquisition module, configured to acquire a preview image through the capture component according to each of the first preset number of exposure parameter values to obtain a first preset number of preview images, and according to each of the second preset number of exposure parameter values to obtain a second preset number of preview images;
    a second determination module, configured to, for each of a plurality of pre-stored preset composite frame counts, select the first preview image and select, from the first preset number of preview images and the second preset number of preview images, the preset composite frame count minus one preview images, to obtain target preview image sets corresponding to the preset composite frame count and a plurality of exposure parameter values, wherein the plurality of exposure parameter values comprise the first preset number of exposure parameter values and the second preset number of exposure parameter values; and to perform image compositing on the target preview image sets to obtain composite images corresponding to the preset composite frame count and the plurality of exposure parameter values;
    a third determination module, configured to determine, among all the composite images obtained, a target composite image with the best image quality, and determine a target preset composite frame count and a target exposure parameter value corresponding to the target composite image as target image capture parameter values;
    a storage module, configured to store the first preview image, the first exposure parameter value, and the target image capture parameter values correspondingly into the training set.
  11. The apparatus according to claim 10, characterized in that the second determination module is configured to:
    perform image compositing on the target preview image sets based on each of a plurality of preset terminal performance parameter values, to obtain composite images corresponding to the preset composite frame count, the plurality of exposure parameter values, and the plurality of preset terminal performance parameter values;
    and the third determination module is configured to:
    determine, among all the composite images obtained, the target composite image with the best image quality, and determine the target preset composite frame count, the target exposure parameter value, and a target preset terminal performance parameter value corresponding to the target composite image as the target image capture parameter values.
  12. The apparatus according to claim 10, wherein the apparatus further comprises:
    a recording module, configured to record the power consumption value consumed when obtaining the composite image corresponding to the preset synthesis count and the multiple exposure parameter values;
    and the third determination module is configured to:
    determine, among all obtained composite images, the target composite image that is jointly optimal in image quality and power consumption value, and determine the target preset synthesis count and the target exposure parameter value corresponding to the target composite image as the target image capturing parameter values.
  13. A terminal, wherein the terminal comprises a processor and a memory, the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for capturing an image according to any one of claims 1 to 6.
  14. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for capturing an image according to any one of claims 1 to 6.
PCT/CN2018/114423 2017-11-13 2018-11-07 Method, apparatus, terminal and storage medium for capturing images WO2019091412A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18875585.4A EP3713212B1 (en) 2017-11-13 2018-11-07 Image capture method, terminal, and storage medium
US16/846,054 US11412153B2 (en) 2017-11-13 2020-04-10 Model-based method for capturing images, terminal, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711117449.6 2017-11-13
CN201711117449.6A CN107809593B (zh) 2017-11-13 2017-11-13 Method, apparatus, terminal and storage medium for capturing images

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/846,054 Continuation US11412153B2 (en) 2017-11-13 2020-04-10 Model-based method for capturing images, terminal, and storage medium

Publications (1)

Publication Number Publication Date
WO2019091412A1 true WO2019091412A1 (zh) 2019-05-16

Family

ID=61592090

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/114423 WO2019091412A1 (zh) 2017-11-13 2018-11-07 Method, apparatus, terminal and storage medium for capturing images

Country Status (4)

Country Link
US (1) US11412153B2 (zh)
EP (1) EP3713212B1 (zh)
CN (2) CN107809593B (zh)
WO (1) WO2019091412A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022005126A1 (en) 2020-06-29 2022-01-06 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
CN107809592B (zh) 2017-11-13 2019-09-17 Oppo广东移动通信有限公司 Method, apparatus, terminal and storage medium for capturing images
CN107809593B (zh) * 2017-11-13 2019-08-16 Oppo广东移动通信有限公司 Method, apparatus, terminal and storage medium for capturing images
KR102412591B1 (ko) * 2017-12-21 2022-06-24 삼성전자주식회사 Method for generating a composite image using multiple images with different exposure values, and electronic device supporting the same
CN111418201B (zh) * 2018-03-27 2021-10-15 华为技术有限公司 Photographing method and device
CN110708468B (zh) * 2018-07-10 2021-10-12 瑞芯微电子股份有限公司 Image capturing method and apparatus
CN109618094A (zh) * 2018-12-14 2019-04-12 深圳市华星光电半导体显示技术有限公司 Image processing method and image processing system
CN109547699A (zh) * 2018-12-24 2019-03-29 成都西纬科技有限公司 Photographing method and apparatus
CN109729269B (zh) * 2018-12-28 2020-10-30 维沃移动通信有限公司 Image processing method, terminal device, and computer-readable storage medium
CN110072051B (zh) * 2019-04-09 2021-09-03 Oppo广东移动通信有限公司 Image processing method and apparatus based on multi-frame images
CN110298810A (zh) * 2019-07-24 2019-10-01 深圳市华星光电技术有限公司 Image processing method and image processing system
CN114650361B (zh) * 2020-12-17 2023-06-06 北京字节跳动网络技术有限公司 Shooting mode determination method, apparatus, electronic device, and storage medium
CN112804464B (zh) * 2020-12-30 2023-05-09 北京格视科技有限公司 HDR image generation method, apparatus, electronic device, and readable storage medium
CN112822426B (zh) * 2020-12-30 2022-08-30 上海掌门科技有限公司 Method and device for generating high dynamic range images
CN113916445A (zh) * 2021-09-08 2022-01-11 广州航新航空科技股份有限公司 Method, system, apparatus, and storage medium for measuring rotor coning
CN117714835A (zh) * 2023-08-02 2024-03-15 荣耀终端有限公司 Image processing method, electronic device, and readable storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103327221A (zh) * 2012-03-20 2013-09-25 华晶科技股份有限公司 Image capturing device and image preview system and image preview method thereof
US20150124147A1 (en) * 2013-11-01 2015-05-07 Samsung Electronics Co., Ltd. Method of displaying high dynamic range (hdr) image, computer-readable storage medium for recording the method, and digital imaging apparatus
WO2016061011A2 (en) * 2014-10-15 2016-04-21 Microsoft Technology Licensing, Llc Camera capture recommendation for applications
CN105827754A (zh) * 2016-03-24 2016-08-03 维沃移动通信有限公司 Method for generating high dynamic range images and mobile terminal
CN107231530A (zh) * 2017-06-22 2017-10-03 维沃移动通信有限公司 Photographing method and mobile terminal
CN107809593A (zh) * 2017-11-13 2018-03-16 广东欧珀移动通信有限公司 Method, apparatus, terminal and storage medium for capturing images

Family Cites Families (33)

Publication number Priority date Publication date Assignee Title
JP5163031B2 (ja) * 2007-09-26 2013-03-13 株式会社ニコン Electronic camera
KR101023946B1 (ko) 2007-11-02 2011-03-28 주식회사 코아로직 Apparatus and method for correcting hand-shake of digital images using object tracking
US20090244301A1 (en) 2008-04-01 2009-10-01 Border John N Controlling multiple-image capture
TW201036453A (en) * 2009-03-25 2010-10-01 Micro Star Int Co Ltd Method and electronic device to produce high dynamic range image
WO2010118177A1 (en) * 2009-04-08 2010-10-14 Zoran Corporation Exposure control for high dynamic range image capture
CN101859430B (zh) * 2009-04-09 2014-01-01 恩斯迈电子(深圳)有限公司 Method and apparatus for generating high dynamic range images
JP5397068B2 (ja) 2009-06-03 2014-01-22 ソニー株式会社 Imaging device, imaging control method, exposure control device, and exposure control method
JP5713752B2 (ja) * 2011-03-28 2015-05-07 キヤノン株式会社 Image processing apparatus and control method thereof
CN104067608B (zh) * 2012-01-18 2017-10-24 英特尔公司 Intelligent computational imaging system
KR101890305B1 (ko) * 2012-08-27 2018-08-21 삼성전자주식회사 Photographing apparatus, control method thereof, and computer-readable recording medium
US8446481B1 (en) 2012-09-11 2013-05-21 Google Inc. Interleaved capture for high dynamic range image acquisition and synthesis
JP6218389B2 (ja) * 2013-02-07 2017-10-25 キヤノン株式会社 Image processing apparatus and image processing method
US9066017B2 (en) * 2013-03-25 2015-06-23 Google Inc. Viewfinder display based on metering images
CN103413285A (zh) * 2013-08-02 2013-11-27 北京工业大学 HDR and HR image reconstruction method based on sample prediction
CN103455170B (zh) 2013-08-22 2017-03-01 西安电子科技大学 Sensor-based mobile terminal motion recognition apparatus and method
CN103747189A (zh) * 2013-11-27 2014-04-23 杨新锋 A digital image processing method
WO2016197307A1 (en) * 2015-06-08 2016-12-15 SZ DJI Technology Co., Ltd. Methods and apparatus for image processing
CN105635559B (zh) 2015-07-17 2018-02-13 宇龙计算机通信科技(深圳)有限公司 Photographing control method and apparatus for a terminal
US10129477B2 (en) * 2015-08-19 2018-11-13 Google Llc Smart image sensor having integrated memory and processor
US20170163903A1 (en) * 2015-12-08 2017-06-08 Le Holdings (Beijing) Co., Ltd. Method and electronic device for processing image
CN105827971B (zh) 2016-03-31 2019-01-11 维沃移动通信有限公司 Image processing method and mobile terminal
CN106454096A (zh) * 2016-10-29 2017-02-22 深圳市金立通信设备有限公司 Image processing method and terminal
CN107169507A (zh) * 2017-01-06 2017-09-15 华南理工大学 ID-photo exposure direction detection algorithm based on grid feature extraction
CN106895916B (zh) * 2017-01-09 2018-10-30 浙江大学 Method for acquiring multispectral images with a single exposure
CN106657798A (zh) 2017-02-28 2017-05-10 上海传英信息技术有限公司 Photographing method for a smart terminal
US10334141B2 (en) * 2017-05-25 2019-06-25 Denso International America, Inc. Vehicle camera system
CN107172296A (zh) 2017-06-22 2017-09-15 维沃移动通信有限公司 Image capturing method and mobile terminal
WO2019001701A1 (en) * 2017-06-28 2019-01-03 Huawei Technologies Co., Ltd. APPARATUS AND METHOD FOR IMAGE PROCESSING
CN107205120B (zh) * 2017-06-30 2019-04-09 维沃移动通信有限公司 Image processing method and mobile terminal
US11094043B2 (en) * 2017-09-25 2021-08-17 The Regents Of The University Of California Generation of high dynamic range visual media
CN107809591B (zh) 2017-11-13 2019-09-10 Oppo广东移动通信有限公司 Method, apparatus, terminal and storage medium for capturing images
CN107809592B (zh) * 2017-11-13 2019-09-17 Oppo广东移动通信有限公司 Method, apparatus, terminal and storage medium for capturing images
US11182884B2 (en) * 2019-07-30 2021-11-23 Nvidia Corporation Enhanced high-dynamic-range imaging and tone mapping


Non-Patent Citations (1)

Title
See also references of EP3713212A4 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2022005126A1 (en) 2020-06-29 2022-01-06 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device
EP4097964A4 (en) * 2020-06-29 2023-07-12 Samsung Electronics Co., Ltd. ELECTRONIC DEVICE AND ELECTRONIC DEVICE CONTROL METHOD
US11928799B2 (en) 2020-06-29 2024-03-12 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device

Also Published As

Publication number Publication date
CN110475072A (zh) 2019-11-19
CN107809593B (zh) 2019-08-16
CN110475072B (zh) 2021-03-09
EP3713212A1 (en) 2020-09-23
EP3713212B1 (en) 2023-06-21
CN107809593A (zh) 2018-03-16
US20200244865A1 (en) 2020-07-30
US11412153B2 (en) 2022-08-09
EP3713212A4 (en) 2021-01-06

Similar Documents

Publication Publication Date Title
WO2019091412A1 (zh) Method, apparatus, terminal and storage medium for capturing images
WO2019091411A1 (zh) Method, apparatus, terminal and storage medium for capturing images
US11418702B2 (en) Method and device for displaying shooting interface, and terminal
US20230057566A1 (en) Multimedia processing method and apparatus based on artificial intelligence, and electronic device
US11070717B2 (en) Context-aware image filtering
CN109729274B (zh) Image processing method, apparatus, electronic device, and storage medium
CN110874217A (zh) Interface display method and apparatus for quick apps, and storage medium
US11102397B2 (en) Method for capturing images, terminal, and storage medium
CN114640783B (zh) Photographing method and related device
CN110971974B (zh) Configuration parameter creation method, apparatus, terminal, and storage medium
CN115689963A (zh) Image processing method and electronic device
CN113825002B (zh) Display device and focal length control method
CN107864333B (zh) Image processing method, apparatus, terminal, and storage medium
CN110968362A (zh) Application running method, apparatus, and storage medium
JP2023553706A (ja) Shooting mode determination method, apparatus, electronic device, and storage medium
CN113923367B (zh) Shooting method and shooting apparatus
WO2024099353A1 (zh) Video processing method, apparatus, electronic device, and storage medium
CN114513690B (zh) Display device and image acquisition method
CN116320799A (zh) Shooting method, apparatus, electronic device, and readable storage medium
CN116095498A (zh) Image acquisition method, apparatus, terminal, and storage medium
CN117768768A (zh) Auxiliary composition method, apparatus, and electronic device
CN113259583A (zh) Image processing method, apparatus, terminal, and storage medium

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 18875585
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2018875585
    Country of ref document: EP
    Effective date: 20200615