WO2022179256A1 - Image processing method, image processing apparatus, electronic device, and readable storage medium - Google Patents


Info

Publication number
WO2022179256A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
pixel
original raw
multiple frames
Prior art date
Application number
PCT/CN2021/137887
Other languages
English (en)
French (fr)
Inventor
邹涵江
何慕威
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2022179256A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control

Definitions

  • the present application relates to the field of image technology, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
  • a RAW file encoded by one image processing application may not be parsed by another image processing application.
  • single-frame RAW images have high noise and poor dynamic range.
  • Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
  • Embodiments of the present application provide an image processing method.
  • the image processing method includes: acquiring multiple frames of original RAW images; performing high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; acquiring label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generating a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • Embodiments of the present application provide an image processing apparatus.
  • the image processing device includes an image sensor and one or more processors.
  • the pixel array in the image sensor is exposed to acquire multiple frames of original RAW images.
  • one or more of the processors are configured to: perform high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; acquire label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generate a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • Embodiments of the present application provide an electronic device.
  • the electronic device includes a lens and an image processing device, and the lens cooperates with an image sensor of the image processing device to form an image.
  • the image processing device includes an image sensor and one or more processors.
  • the pixel array in the image sensor is exposed to acquire multiple frames of original RAW images.
  • one or more of the processors are configured to: perform high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; acquire label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generate a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • Embodiments of the present application provide a non-volatile computer-readable storage medium containing a computer program.
  • when executed by a processor, the computer program causes the processor to execute the image processing method according to any one of claims 1 to 11.
  • the image processing method includes: acquiring multiple frames of original RAW images; performing high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; acquiring label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generating a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 2 is a schematic structural diagram of an image processing apparatus according to some embodiments of the present application.
  • FIG. 3 is a schematic diagram of an image sensor in an image processing apparatus according to some embodiments of the present application.
  • FIG. 10 is a schematic diagram of obtaining a first grayscale image by performing first grayscale processing on a first reference image in some embodiments of the present application;
  • FIGS. 11 and 12 are schematic flowcharts of image processing methods according to some embodiments of the present application.
  • FIG. 13 is a schematic diagram of a motion area of a registered grayscale image and the corresponding motion area of a registered original RAW image in some embodiments of the present application;
  • FIG. 15 is a schematic diagram of the principle of acquiring an intermediate RAW image according to some embodiments of the present application.
  • FIG. 16 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 17 is a schematic diagram of the principle of acquiring a ghosting-removed intermediate RAW image according to some embodiments of the present application.
  • FIG. 19 is a schematic diagram of obtaining a second grayscale image by performing second grayscale processing on a second reference image in some embodiments of the present application;
  • FIG. 26 is a schematic diagram of a DNG image and a target image obtained in some embodiments of the present application.
  • FIG. 27 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • FIG. 28 is a schematic diagram of interaction between a non-volatile computer-readable storage medium and a processor according to some embodiments of the present application.
  • Embodiments of the present application provide an image processing method.
  • the image processing method includes: acquiring multiple frames of original RAW images; performing high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; acquiring label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generating a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • the multiple frames of original RAW images are exposed at at least two different exposure values.
  • the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
  • the image processing method further includes: metering the environment and obtaining a calibrated exposure value according to the measured ambient brightness; or obtaining a calibrated exposure value according to an exposure parameter determined by a user, wherein the exposure parameter includes at least one of an exposure value, a sensitivity, and an exposure duration.
  • performing high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image includes: performing image registration on the multiple frames of original RAW images; fusing the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images; obtaining weights corresponding to all pixels in each frame of the intermediate RAW images; and fusing the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
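The four steps above (registration, same-exposure fusion, per-pixel weighting, weighted merge) can be sketched as follows. This is an illustrative outline only: the registration step is a stand-in identity, and the weighting uses a generic well-exposedness curve rather than the reference-image statistics the patent describes later.

```python
import numpy as np

def hdr_merge(raw_frames, exposure_values):
    """Sketch of the claimed HDR flow: group frames by exposure value,
    fuse each group into one intermediate RAW image, then blend the
    intermediate images with per-pixel weights."""
    # 1. Image registration (stand-in: identity; a real pipeline would
    #    align every frame to a chosen reference frame).
    registered = [np.asarray(f, dtype=np.float64) for f in raw_frames]

    # 2. Fuse frames sharing an exposure value (stand-in: simple mean).
    groups = {}
    for frame, ev in zip(registered, exposure_values):
        groups.setdefault(ev, []).append(frame)
    intermediates = {ev: np.mean(fs, axis=0) for ev, fs in groups.items()}

    # 3. Per-pixel weights (stand-in: favour mid-range pixel values).
    weights = {ev: np.exp(-((img / img.max() - 0.5) ** 2) / 0.08)
               for ev, img in intermediates.items()}

    # 4. Weighted merge of the intermediate RAW images.
    num = sum(weights[ev] * intermediates[ev] for ev in intermediates)
    den = sum(weights[ev] for ev in intermediates) + 1e-12
    return num / den
```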
  • performing high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image further includes: detecting a motion area of the registered original RAW images.
  • fusing the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images includes: for each frame of the registered original RAW images with the same exposure value, applying a first fusion process to the pixels located inside the motion area and a second fusion process to the pixels located outside the motion area, so as to obtain the multiple frames of intermediate RAW images; the first fusion process is different from the second fusion process.
  • fusing the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images further includes: selecting any one frame from the multiple registered original RAW images with the same exposure value as a first reference image, with the other frames serving as first non-reference images.
  • applying the first fusion process to the pixels located inside the motion area includes: if all the pixels at a given position of the first non-reference images are located inside the motion area, taking the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image.
  • applying the second fusion process to the pixels located outside the motion area includes: if at least one pixel among the pixels at a given position of the first non-reference images is located outside the motion area, taking the average of the pixel value at the corresponding position of the first reference image and the pixel values at the corresponding positions of the first non-reference images that lie outside the motion area as the pixel value of the corresponding pixel of the fused intermediate RAW image.
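The two fusion branches can be written down directly from these rules. A minimal sketch, assuming boolean motion masks aligned with each non-reference frame (the function name and mask layout are illustrative, not from the patent):

```python
import numpy as np

def fuse_same_exposure(reference, non_references, motion_masks):
    """Two-branch fusion of registered frames sharing one exposure value.
    motion_masks[i] is True where non_references[i] falls inside the
    detected motion area."""
    ref = np.asarray(reference, dtype=np.float64)

    # Accumulate non-reference pixels that lie OUTSIDE the motion area,
    # counting how many contribute at each position.
    static_sum = np.zeros_like(ref)
    static_cnt = np.zeros_like(ref)
    for img, mask in zip(non_references, motion_masks):
        outside = ~np.asarray(mask)
        static_sum += np.where(outside, img, 0.0)
        static_cnt += outside.astype(np.float64)

    # First fusion: where every non-reference pixel moved, keep the
    # reference pixel. Second fusion: otherwise average the reference
    # pixel with the static (non-moving) non-reference pixels.
    return np.where(static_cnt == 0, ref, (ref + static_sum) / (1.0 + static_cnt))
```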
  • the multiple frames of original RAW images include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value. Acquiring the weights corresponding to all pixels in each frame of the intermediate RAW images includes: selecting the intermediate RAW image obtained by fusing the first original RAW images as a second reference image, with the remaining intermediate RAW images serving as second non-reference images; performing second grayscale processing on the second reference image to obtain a second grayscale image; and obtaining the weight corresponding to each pixel to be calculated according to the average brightness and variance of the second grayscale image and the pixel value of that pixel in the intermediate RAW image.
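The patent states only that a pixel's weight depends on the second grayscale image's average brightness and variance together with the pixel's value; the exact formula is not published. A common choice consistent with that description is a Gaussian centred on the mean brightness, sketched here purely as an assumption:

```python
import numpy as np

def pixel_weight(pixel_values, gray_mean, gray_var, eps=1e-6):
    """Hypothetical weight: a Gaussian centred on the reference image's
    mean brightness, with its spread set by the brightness variance.
    This is an illustrative formula, not the patent's."""
    p = np.asarray(pixel_values, dtype=np.float64)
    return np.exp(-((p - gray_mean) ** 2) / (2.0 * gray_var + eps))
```

With this choice, pixels near the reference image's average brightness get weight close to 1, and strongly over- or under-exposed pixels are down-weighted.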
  • the metadata parameter information includes shooting parameters of the original RAW images, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source.
  • the label parameters include a first color matrix and a second color matrix.
  • obtaining the label parameters of the DNG image according to the metadata parameter information recorded when shooting the multiple frames of original RAW images includes: obtaining the first color matrix according to the first color conversion matrix, a first matrix, and a second matrix; and obtaining the second color matrix according to the second color conversion matrix and the first matrix.
  • the first matrix is a conversion matrix from a first space to a second space with the second light source as the reference light source, where the first space is different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
  • the label parameters further include a first forward matrix and a second forward matrix.
  • obtaining the label parameters of the DNG image according to the metadata parameter information recorded when shooting the multiple frames of original RAW images further includes: calculating the first forward matrix according to the first color conversion matrix and a third matrix; and calculating the second forward matrix according to the second color conversion matrix and the third matrix.
  • the third matrix is a conversion matrix from the first space to the second space with the third light source as a reference light source.
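The claims say the color matrices are obtained "according to" the color conversion matrices and the space/white-point conversion matrices, without spelling out the composition. A plausible sketch, assuming plain matrix multiplication (the function names and the multiplication order are assumptions for illustration):

```python
import numpy as np

def first_color_matrix(ccm1, m1, m2):
    # Chain the space conversion (m1) and the white-point adaptation (m2)
    # with the first colour conversion matrix. The order is assumed.
    return np.asarray(ccm1) @ np.asarray(m2) @ np.asarray(m1)

def second_color_matrix(ccm2, m1):
    # The second color matrix uses only the space conversion.
    return np.asarray(ccm2) @ np.asarray(m1)

def forward_matrix(ccm, m3):
    # Same pattern for the forward matrices, using the third matrix.
    return np.asarray(ccm) @ np.asarray(m3)
```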
  • generating a DNG file according to the target RAW image, the label parameters, and the metadata parameter information includes: writing the label parameters, the metadata parameter information, and the data of the target RAW image into a blank file according to the DNG encoding specification to generate the DNG file.
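DNG is a TIFF-based container, so "writing the label parameters into a blank file according to the DNG encoding specification" amounts to serialising the image data together with numbered TIFF/DNG tags. The sketch below only assembles the tag dictionary; actual TIFF serialisation is omitted. The four matrix tag codes are the real ones from the public DNG specification, while the function and its metadata handling are illustrative.

```python
# Tag codes from the public DNG specification.
DNG_TAGS = {
    "ColorMatrix1": 50721,    # 0xC621
    "ColorMatrix2": 50722,    # 0xC622
    "ForwardMatrix1": 50964,  # 0xC714
    "ForwardMatrix2": 50965,  # 0xC715
}

def build_dng_tag_dict(color_matrix_1, color_matrix_2,
                       forward_matrix_1, forward_matrix_2, metadata):
    """Collect the label parameters and metadata into a
    tag-code -> value mapping, ready for TIFF serialisation."""
    tags = {
        DNG_TAGS["ColorMatrix1"]: color_matrix_1,
        DNG_TAGS["ColorMatrix2"]: color_matrix_2,
        DNG_TAGS["ForwardMatrix1"]: forward_matrix_1,
        DNG_TAGS["ForwardMatrix2"]: forward_matrix_2,
    }
    # Shooting parameters go in as further tag codes, e.g. the EXIF
    # ExposureTime tag (33434).
    tags.update(metadata)
    return tags
```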
  • Embodiments of the present application further provide an image processing apparatus.
  • An image processing apparatus includes an image sensor and one or more processors. The pixel array in the image sensor is exposed to obtain multiple frames of original RAW images; the one or more processors are configured to: perform high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtain label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generate a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • the multiple frames of original RAW images are exposed at at least two different exposure values.
  • the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
  • one or more processors are further configured to: perform light metering on the environment and obtain a calibrated exposure value according to the measured ambient brightness; or obtain a calibrated exposure value according to an exposure parameter determined by a user, wherein the exposure parameter includes at least one of an exposure value, a sensitivity, and an exposure duration.
  • the one or more processors are further configured to: perform image registration on the multiple frames of original RAW images; fuse the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images; obtain the weights corresponding to all pixels in each frame of the intermediate RAW images; and fuse the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
  • the one or more processors are further configured to: detect a motion area of the registered original RAW images; and for each frame of the registered original RAW images with the same exposure value, apply a first fusion process to the pixels located inside the motion area and a second fusion process to the pixels located outside the motion area, so as to obtain multiple frames of intermediate RAW images, where the first fusion process is different from the second fusion process.
  • the one or more processors are further configured to: select any one frame from the multiple registered original RAW images with the same exposure value as a first reference image, with the other frames serving as first non-reference images; if all the pixels at a given position of the first non-reference images are located inside the motion area, take the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image; and if at least one pixel among the pixels at a given position of the first non-reference images is located outside the motion area, take the average of the pixel value at the corresponding position of the first reference image and the pixel values at the corresponding positions of the first non-reference images that lie outside the motion area as the pixel value of the corresponding pixel of the fused intermediate RAW image.
  • the multiple frames of original RAW images include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value; the one or more processors are further configured to: select the intermediate RAW image obtained by fusing the first original RAW images as a second reference image, with the remaining intermediate RAW images serving as second non-reference images; perform second grayscale processing on the second reference image to obtain a second grayscale image; and obtain the weight corresponding to each pixel to be calculated according to the average brightness and variance of the second grayscale image and the pixel value of that pixel in the intermediate RAW image.
  • the metadata parameter information includes shooting parameters of the original RAW images, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source.
  • the label parameters include a first color matrix and a second color matrix.
  • the one or more processors are further configured to: obtain a first color matrix according to the first color conversion matrix, the first matrix and the second matrix; and obtain a second color matrix according to the second color conversion matrix and the first matrix.
  • the first matrix is a conversion matrix from a first space to a second space with the second light source as the reference light source, where the first space is different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
  • the label parameters further include a first forward matrix and a second forward matrix.
  • the one or more processors are further configured to: calculate the first forward matrix according to the first color conversion matrix and the third matrix; and calculate the second forward matrix according to the second color conversion matrix and the third matrix.
  • the third matrix is a conversion matrix from the first space to the second space with the third light source as a reference light source.
  • the one or more processors are further configured to: write the label parameters, the metadata parameter information, and the data of the target RAW image into a blank file according to the DNG encoding specification to generate a DNG file.
  • An electronic device includes a lens, and an image processing device according to any one of the above embodiments.
  • the lens cooperates with the image sensor of the image processing device to form an image.
  • the present application also provides a non-volatile computer-readable storage medium containing a computer program. When executed by a processor, the computer program causes the processor to execute the image processing method of any one of the above embodiments.
  • The image processing method includes:
  • an embodiment of the present application further provides an image processing apparatus 100 .
  • the image processing apparatus 100 includes an image sensor 10 and one or more processors 20 .
  • Step 01 may be implemented by the image sensor 10, and steps 02, 03, and 04 may be executed by the one or more processors 20. That is, the pixel array 11 in the image sensor 10 is exposed to acquire multiple frames of original RAW images.
  • the one or more processors 20 are configured to: perform high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtain label parameters of a DNG image according to metadata parameter information recorded when shooting the multiple frames of original RAW images; and generate a DNG file according to the target RAW image, the label parameters, and the metadata parameter information.
  • the image processing method and the image processing apparatus 100 of the present application obtain the target RAW image by high-dynamic-range fusion of multiple frames of RAW images, and then convert the target RAW image into a DNG file.
  • the target RAW image synthesized from multiple frames of RAW images has more image information, a wider dynamic range, and higher definition than a single-frame RAW image; and the DNG file, with its unified encoding and parsing format, makes it easy for users to export the file to post-processing software.
  • the image sensor 10 includes a pixel array 11 , wherein the pixel array 11 is exposed to obtain an original RAW image.
  • the pixel array 11 includes a plurality of photosensitive pixels (not shown) arranged in a two-dimensional matrix. Each photosensitive pixel converts incident light into electric charge according to the intensity of the light falling on it.
  • the multiple frames of original RAW images are exposed with at least two different exposure values; that is, the pixel array 11 is exposed with at least two different exposure values to obtain the multiple frames of original RAW images, and at least two of those frames are obtained by exposure with different exposure values.
  • the multiple frames of original RAW images include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value. That is, the pixel array 11 acquires the first original RAW image by exposure at the calibrated exposure value and acquires the second original RAW image by exposure at an exposure value different from the calibrated exposure value.
  • the multiple frames of original RAW images include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value greater than the calibrated exposure value.
  • the number of first original RAW images may be greater than, less than, or equal to the number of second original RAW images, which is not limited here.
  • the multiple frames of the second original RAW image may themselves be exposed with at least two different exposure values; however, no matter how many exposure values are used, the exposure values of all frames of the second original RAW image are greater than the calibrated exposure value.
  • the multiple frames of original RAW images may also include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value smaller than the calibrated exposure value.
  • the number of first original RAW images may be greater than, less than, or equal to the number of second original RAW images, which is not limited here.
  • the multiple frames of the second original RAW image may themselves be exposed with at least two different exposure values; however, no matter how many exposure values are used, the exposure values of all frames of the second original RAW image are less than the calibrated exposure value.
  • the multiple frames of original RAW images include a first original RAW image exposed at the calibrated exposure value, a second original RAW image exposed at an exposure value greater than the calibrated exposure value, and a third original RAW image exposed at an exposure value less than the calibrated exposure value.
  • the numbers of first, second, and third original RAW images may be equal or may differ, which is not limited here.
  • the multiple frames of the second original RAW image may themselves be exposed with at least two different exposure values; however, no matter how many exposure values are used, the exposure values of all frames of the second original RAW image are greater than the calibrated exposure value.
  • the multiple frames of the third original RAW image may themselves be exposed with at least two different exposure values; however, no matter how many exposure values are used, the exposure values of all frames of the third original RAW image are less than the calibrated exposure value.
  • since the image processing apparatus 100 performs high-dynamic fusion processing on the multiple frames of original RAW images after acquiring them, a target RAW image fused from original RAW images exposed at three different exposure values has a higher dynamic range and better image quality than a target RAW image fused from original RAW images exposed at only two different exposure values.
  • the image processing apparatus 100 presets and stores multiple preset exposure strategies for obtaining multiple frames of original RAW images, and the user can select a preset exposure strategy according to actual needs. In this way, the target RAW image finally obtained can better meet the user's needs.
  • the preset exposure strategies include, but are not limited to, at least one of the following: (1) exposing at the calibrated exposure value to obtain multiple frames of the first original RAW image, and exposing at an exposure value less than the calibrated exposure value to obtain one frame of the second original RAW image; (2) exposing at the calibrated exposure value to obtain one frame of the first original RAW image, and exposing at an exposure value less than the calibrated exposure value to obtain multiple frames of the second original RAW image; (3) exposing at the calibrated exposure value to obtain a first original RAW image, exposing at an exposure value greater than the calibrated exposure value to obtain a second original RAW image, and exposing at an exposure value smaller than the calibrated exposure value to obtain a third original RAW image.
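The three example strategies can be encoded as simple data, e.g. lists of (relative exposure, frame count) pairs. The counts used for "multiple frames" here (three) are arbitrary placeholders, not values from the patent:

```python
# 0 denotes the calibrated exposure value; -1/+1 denote exposure values
# below/above it. Frame counts for "multiple frames" are placeholders.
PRESET_STRATEGIES = {
    1: [(0, 3), (-1, 1)],           # several calibrated + one under
    2: [(0, 1), (-1, 3)],           # one calibrated + several under
    3: [(0, 1), (+1, 1), (-1, 1)],  # calibrated + over + under
}

def frames_to_capture(strategy_id):
    """Expand a preset strategy into the per-frame exposure offsets."""
    return [off for off, n in PRESET_STRATEGIES[strategy_id]
            for _ in range(n)]
```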
  • the user may also directly set the exposure strategy to obtain multiple frames of original RAW images, which is not limited herein.
  • the image processing method further includes:
  • Step 01, acquiring multiple frames of original RAW images, includes:
  • step 011 may be implemented by the image sensor 10, and step 051 may be performed by the one or more processors 20. That is to say, the one or more processors 20 are also used to perform light metering on the environment and obtain a calibrated exposure value according to the measured ambient brightness.
  • the image sensor 10 is also used to acquire a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
  • the processor 20 detects the brightness of the environment surrounding the image processing apparatus 100, or the electronic device 1000 (shown in FIG. 27) in which it is installed, to obtain the ambient brightness. After obtaining the ambient brightness, the processor 20 obtains the calibrated exposure value according to the ambient brightness. It should be pointed out that, under this ambient brightness, exposing at the calibrated exposure value yields an original RAW image with better image quality.
  • the image processing apparatus 100 stores a preset correspondence table between ambient brightness and exposure value; the processor 20 looks up the exposure value corresponding to the acquired ambient brightness in this table and uses it as the calibrated exposure value.
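A lookup of this kind can be sketched with a sorted table and a binary search; the brightness/EV pairs below are invented placeholders, since the patent does not publish its calibration table:

```python
import bisect

# Hypothetical correspondence table: ambient brightness (lux) mapped to
# a calibrated exposure value. The numbers are illustrative only.
BRIGHTNESS_TO_EV = [(10, 4), (50, 6), (200, 8), (1000, 10), (10000, 13)]

def calibrated_ev(ambient_lux):
    """Return the calibrated exposure value for a measured brightness,
    using the nearest table entry at or below the measurement."""
    keys = [lux for lux, _ in BRIGHTNESS_TO_EV]
    i = bisect.bisect_right(keys, ambient_lux) - 1
    i = max(i, 0)  # clamp very dark scenes to the first entry
    return BRIGHTNESS_TO_EV[i][1]
```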
  • the image processing method further includes:
  • 052 Acquire a calibrated exposure value according to an exposure parameter determined by the user, where the exposure parameter includes at least one of an exposure value, a sensitivity, and an exposure duration.
  • step 052 is performed by one or more processors 20 . That is to say, the one or more processors 20 are further configured to obtain a calibrated exposure value according to an exposure parameter determined by a user, wherein the exposure parameter includes at least one of exposure value, sensitivity and exposure duration.
  • step 052 , obtaining a calibrated exposure value according to an exposure parameter determined by a user, wherein the exposure parameter includes at least one of exposure value, sensitivity, and exposure duration, includes:
  • step 0521 , step 0522 and step 0523 may be implemented by one or more processors 20 . That is to say, the one or more processors 20 are also used for metering the environment; obtaining initial parameters of exposure according to the measured brightness of the environment; and adjusting the initial parameters according to user input to obtain exposure parameters.
  • the processor 20 detects the brightness of the surrounding environment of the image processing apparatus 100 or the electronic device 1000 (shown in FIG. 27 ) in which the image processing apparatus 100 is installed to obtain the ambient brightness. After obtaining the ambient brightness, the processor 20 obtains initial parameters according to the ambient brightness. Exemplarily, in some embodiments, the image processing device 100 stores a preset correspondence table between ambient brightness and exposure initial parameters, and the processor 20 looks up the corresponding exposure initial parameters in this table according to the acquired ambient brightness. It should be noted that the exposure parameter includes at least one of exposure value, exposure time and sensitivity.
  • the processor 20 takes the initial parameters adjusted by the user as the exposure parameters determined by the user. After acquiring the exposure parameter determined by the user, the processor 20 acquires the calibration exposure value according to the exposure parameter determined by the user.
  • the exposure parameters include an exposure value, an exposure time, and an initial sensitivity, that is, the initial parameters include an initial exposure value, an initial exposure time, and an initial sensitivity.
  • if the user does not adjust the initial parameters, the initial exposure value is used as the calibration exposure value; if the user only adjusts the initial exposure value, the initial exposure value adjusted by the user is used as the calibration exposure value; if the user adjusts the initial exposure time and the initial sensitivity, the exposure value obtained by combining the user-adjusted initial exposure time and the user-adjusted initial sensitivity is used as the calibration exposure value. It should be noted that, in some embodiments, if after obtaining the initial parameters the user only adjusts the initial exposure time, this indicates that the adjusted exposure time is the exposure time expected by the user, and the exposure times of the multiple frames of original RAW images are all the adjusted exposure time.
  • the exposure value combined with the initial sensitivity and the initial exposure time adjusted by the user is used as the calibration exposure value.
  • the original RAW image exposed at a different exposure value than the calibration exposure value can be obtained.
  • if the user has only adjusted the initial sensitivity, it means that the adjusted sensitivity is the sensitivity expected by the user; then the sensitivity of the multiple frames of original RAW images is the adjusted sensitivity, and the exposure value obtained by combining the initial exposure time and the user-adjusted sensitivity is used as the calibration exposure value. By adjusting the exposure time, the original RAW image exposed at a different exposure value than the calibration exposure value can be obtained. Since the initial parameters are obtained according to the ambient brightness, the user only needs to adjust the initial parameters as required. Compared with the user directly inputting the exposure parameters, this allows the final target image to meet the user's needs while reducing the difficulty of the user's operation.
  • the user can also directly input the exposure parameter, and the processor 20 obtains the calibrated exposure value according to the exposure parameter input by the user. For example, if the user only inputs the exposure value, the exposure value input by the user is used as the calibration exposure value; if the user inputs the sensitivity and the exposure time, the exposure value obtained by combining the sensitivity and the exposure time input by the user is used as the calibration exposure value.
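  • The combination of sensitivity and exposure time into a single exposure value can be illustrated with the conventional ISO-100-referenced EV formula; the disclosure does not give its combination rule, so the formula below and the fixed f-number are assumptions for illustration only:

```python
import math

def combined_exposure_value(iso, exposure_time_s, f_number=1.8):
    """Combine sensitivity (ISO) and exposure time into one exposure value.

    Uses the conventional ISO-100-referenced EV formula as an assumption; the
    f-number is treated as fixed (phone cameras typically have a fixed
    aperture). Doubling the ISO lowers the EV by one stop; halving the
    exposure time raises it by one stop."""
    ev_at_iso100 = math.log2(f_number ** 2 / exposure_time_s)
    return ev_at_iso100 - math.log2(iso / 100.0)
```
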
  • the pixel array 11 in the image sensor 10 is exposed at the calibration exposure value to obtain the first original RAW image, and exposed at a different exposure value to obtain the second original RAW image.
  • the specific acquisition method is the same as that described in the above embodiments for obtaining the first original RAW image by exposing with the calibrated exposure value and obtaining the second original RAW image by exposing with a different exposure value, and will not be repeated here.
  • the image processing method further includes:
  • 06 Pre-process the original RAW images, the pre-processing including at least one of linear correction, dead pixel correction, black level correction and lens shading correction;
  • Step 02 Perform high dynamic range image processing on multiple frames of original RAW images to obtain target RAW images, including:
  • 021 Perform high dynamic range image processing on the multi-frame processed original RAW image to obtain the target RAW image.
  • both step 06 and step 021 may be performed by one or more processors 20 . That is to say, the one or more processors 20 are further used for: pre-processing the original RAW image; and performing high dynamic range image processing on the original RAW image after multi-frame processing, so as to obtain the target RAW image.
  • the processor 20 performs preprocessing on the original RAW images to obtain processed original RAW images.
  • the pre-processing includes at least one of linear correction, dead pixel correction, black level correction and lens shading correction.
  • the pre-processing includes only linear correction; or, the pre-processing includes only linear correction and dead pixel correction; or, the pre-processing includes only linear correction, dead pixel correction and black level correction; or, the pre-processing includes linear correction, dead pixel correction, black level correction and lens shading correction, which is not limited here.
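  • A minimal sketch of such a pre-processing chain is shown below; the black level, white level, and the particular correction strategies (neighbour averaging for dead pixels, a multiplicative gain map for lens shading) are illustrative assumptions rather than the disclosure's parameters:

```python
import numpy as np

def preprocess_raw(raw, black_level=64, white_level=1023, lsc_gain=None,
                   dead_pixel_mask=None):
    """Illustrative pre-processing for one RAW frame: black level correction,
    simple dead pixel replacement, and lens shading correction."""
    img = raw.astype(np.float32)
    # Black level correction: subtract the sensor's pedestal value.
    img = np.clip(img - black_level, 0, None)
    # Dead pixel correction: replace marked pixels with the mean of the four
    # same-colour neighbours (offset 2 keeps the same Bayer channel).
    if dead_pixel_mask is not None:
        neighbour_mean = (np.roll(img, 2, 0) + np.roll(img, -2, 0)
                          + np.roll(img, 2, 1) + np.roll(img, -2, 1)) / 4.0
        img = np.where(dead_pixel_mask, neighbour_mean, img)
    # Lens shading correction: multiply by a per-pixel gain map.
    if lsc_gain is not None:
        img = img * lsc_gain
    return np.clip(img, 0, white_level - black_level)
```
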
  • the target RAW image obtained by high-dynamic-range fusion of the multiple frames of pre-processed original RAW images has higher definition and better image quality than the target RAW image obtained by directly fusing the original RAW images.
  • the processor 20 includes an image signal processor (Image Signal Processor, ISP), and the image pre-processing for multiple frames of original RAW is performed in the ISP.
  • the image pre-processing of the multi-frame original RAW can also be performed in other processors 20, that is, the image pre-processing of the multi-frame original RAW is not performed in the ISP, which is not limited here.
  • step 02 performing high dynamic range image processing on multiple frames of original RAW images to obtain target RAW images, including:
  • step 022 , step 023 , step 024 and step 025 may be performed by one or more processors 20 . That is to say, the one or more processors 20 are further configured to: perform image registration on multiple frames of original RAW images; fuse the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images; Obtain weights corresponding to all pixels in the intermediate RAW image of each frame; and fuse multiple frames of intermediate RAW images according to the weights to obtain a target RAW image.
  • image registration is performed on the multiple frames of original RAW images.
  • registration can also be performed on the original RAW images after multi-frame processing.
  • the following is an example of performing image registration on multiple frames of original RAW images.
  • registering multiple frames of original RAW images further includes:
  • step 0221 , step 0222 , step 0223 and step 0224 may all be implemented by one or more processors 20 . That is to say, the one or more processors 20 are further configured to: select a frame of the first original RAW image as the first reference image; perform first grayscale processing on the original RAW image to be registered and the first reference image to obtain a grayscale image to be registered and a first grayscale image; obtain a first array corresponding to the RAW image to be registered according to the grayscale image to be registered and the first grayscale image; and obtain the registered original RAW image according to the coordinates of the pixel points on the original RAW image to be registered and the first array.
  • the processor 20 selects one frame of the first original RAW image as the first reference image, that is, selects one frame of the original RAW image exposed at the calibrated exposure value among the multiple frames of original RAW images as the first reference image.
  • One frame of the original RAW image is selected from the remaining multiple frames of the original RAW image as the original RAW image to be registered.
  • since the first reference image is used as a reference, it is not necessary to perform image registration on the first reference image itself.
  • the first reference image itself is the original RAW image that has been registered, that is, the original RAW image after registration includes the image obtained by performing image registration on the original RAW image to be registered, and the first reference image.
  • the first grayscale processing is performed on the first reference image to obtain a first grayscale image.
  • the first reference image includes a plurality of pixel grids, and each pixel grid includes four pixel points in a 2 × 2 arrangement.
  • a pixel in the first grayscale image corresponds to a pixel grid in the first reference image, and the average value of all pixels in a pixel grid in the first reference image is used as the corresponding pixel in the first grayscale image pixel value. For example, as shown in FIG.
  • a pixel grid U1 in the first reference image includes: the pixel point P11 arranged in the first row and first column of the first reference image, the pixel point P12 in the first row and second column, the pixel point P21 in the second row and first column, and the pixel point P22 in the second row and second column.
  • the pixel points arranged in the first row and the first column p11 of the first grayscale image correspond to the pixel grid U1.
  • the pixel value of the pixel point arranged in the first row and first column p11 of the first grayscale image is equal to the average value of the pixel values of the pixel point P11 in the first row and first column, the pixel point P12 in the first row and second column, the pixel point P21 in the second row and first column, and the pixel point P22 in the second row and second column of the first reference image.
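  • The first grayscale processing described above (averaging each pixel grid of four pixels into one grayscale pixel) can be sketched as follows, assuming even image dimensions:

```python
import numpy as np

def first_grayscale(raw):
    """Downscale a RAW frame to a grayscale image by averaging each 2x2
    pixel grid (four pixels), so one grayscale pixel corresponds to one
    pixel grid of the RAW frame."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    # Group the image into 2x2 blocks and average over each block.
    blocks = raw.astype(np.float32).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))
```
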
  • the specific method for performing the first grayscale processing on the original RAW image to be registered to obtain the grayscale image to be registered is the same as the specific method for performing the first grayscale processing on the first reference image to obtain the first grayscale image, and is not repeated here.
  • the processor 20 can calculate the feature points in the grayscale image according to the Harris corner algorithm. Of course, other methods can also be used to calculate the feature points in the grayscale image, which is not limited here.
  • a first array corresponding to the RAW image to be registered is acquired according to the corresponding feature points.
  • the first array is obtained by obtaining the mapping relationship between the feature points on the first grayscale image and the corresponding feature points on the grayscale image to be registered.
  • the first array may be a homography matrix (Homography matrix), and the first array refers to the pixel mapping relationship between the grayscale image to be registered and the first reference grayscale image.
  • the mapping relationship between the original RAW image to be registered and the first reference image is the same as the pixel mapping relationship between the grayscale image to be registered and the first reference grayscale image, and the same first array is also used.
  • the registered original RAW image is acquired according to the coordinates of the pixel points on the original RAW image to be registered and the first array.
  • select a pixel point in the original RAW image to be registered, obtain the coordinates of the selected pixel point, calculate the registered coordinates of the pixel point by affine transformation according to the coordinates of the pixel point and the first array, and move the pixel point to the registered coordinates; then select the next pixel point and repeat the above process until all the pixels in the original RAW image to be registered have been moved to their registered coordinates, obtaining the registered original RAW image.
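  • Mapping a pixel's coordinates through the first array can be sketched as follows, assuming the array is a 3×3 homography matrix applied in homogeneous coordinates (an affine transform is the special case whose last row is [0, 0, 1]):

```python
import numpy as np

def warp_coordinates(points, H):
    """Map pixel coordinates through a 3x3 homography (the 'first array').

    points: (N, 2) array of (x, y) coordinates on the image to be registered.
    Returns the (N, 2) registered coordinates."""
    pts = np.asarray(points, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    homogeneous = np.hstack([pts, ones])   # lift to homogeneous coordinates
    mapped = homogeneous @ H.T             # apply H to every point
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale
```
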
  • After obtaining the multiple frames of registered original RAW images, the processor 20 first fuses the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images.
  • the multiple frames of registered original RAW images include multiple frames of registered first original RAW images exposed at the calibrated exposure value, and multiple frames of registered second original RAW images exposed at the first exposure value.
  • the processor 20 fuses the registered first original RAW images of the multiple frames to obtain a first intermediate RAW image, and fuses the registered second original RAW images of the multiple frames to obtain a second intermediate RAW image.
  • performing high dynamic range image processing on multiple frames of original RAW images to obtain a target RAW image further comprising:
  • Step 023 Fusion of the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images, including:
  • both step 026 and step 0231 may be implemented by one or more processors 20 . That is to say, the one or more processors 20 are also used to: detect the motion area of the registered original RAW image; and perform a first fusion process on the pixels located in the motion area and a second fusion process on the pixels located outside the motion area, so as to obtain multiple frames of intermediate RAW images, the first fusion process being different from the second fusion process.
  • the registered grayscale image is acquired according to the coordinates of the pixel points on the grayscale image to be registered and the first array. That is to say, image registration is also performed on multiple frames of grayscale images (including the first grayscale image and the grayscale image to be registered). It should be noted that the specific method of obtaining the registered grayscale image according to the coordinates of the pixel points on the grayscale image to be registered and the first array is the same as that of obtaining the registered original RAW image according to the coordinates of the pixel points on the original RAW image to be registered and the first array, and is not repeated here.
  • detecting the motion area of the registered original RAW image includes:
  • if the mapping difference value is greater than the preset threshold, it is determined that the corresponding pixel point in the registered original RAW image is located in the motion area.
  • step 0261 , step 0262 and step 0263 may be implemented by one or more processors 20 . That is to say, the one or more processors 20 are further configured to obtain the mapping value of the pixel value of each pixel in the first grayscale image and the pixel of each pixel in the registered grayscale image according to the preset mapping relationship. The mapping value of the value; calculate the mapping difference between the mapping value of each pixel in the first grayscale image and the mapping value of the corresponding pixel in the grayscale image after registration; and if the mapping difference is greater than the preset threshold, then It is determined that the corresponding pixels in the original RAW image after registration are located in the motion area.
  • according to the preset mapping relationship, obtain the mapping value of the pixel value of each pixel in the first grayscale image and the mapping value of the pixel value of each pixel in the registered grayscale image. Specifically, the pixel value of each pixel point of the registered grayscale image is acquired, and the mapping value corresponding to that pixel value is looked up in the preset mapping relationship. Similarly, the pixel value of each pixel point in the first grayscale image is obtained, and the mapping value corresponding to that pixel value is looked up in the preset mapping relationship. It should be noted that, in some embodiments, the preset mapping relationship may be a denoising lookup table.
  • the de-noising lookup table records each obtainable pixel value together with the corresponding pixel value after the influence of noise has been removed.
  • the pixel mapping value obtained in this way can reduce the influence of noise and improve the accuracy of determining the motion area in subsequent processing.
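  • The map-then-threshold motion test can be sketched as follows, assuming the denoising lookup table is a 256-entry array indexed by integer grayscale values; the table contents and the threshold are illustrative:

```python
import numpy as np

def motion_mask(gray_ref, gray_reg, lut, threshold):
    """Detect motion pixels between the first grayscale image and a
    registered grayscale image.

    Both images are mapped through a denoising lookup table (lut), then the
    absolute difference of the mapping values is compared to a threshold.
    Assumes integer-valued grayscale images within the lut's index range."""
    mapped_ref = lut[gray_ref]   # lut indexed by integer pixel value
    mapped_reg = lut[gray_reg]
    diff = np.abs(mapped_ref.astype(np.int32) - mapped_reg.astype(np.int32))
    return diff > threshold      # True where the pixel is in the motion area
```
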
  • if the mapping difference is greater than the preset threshold, it means that the area where the pixel is located in the registered grayscale image is a motion area; that is, in the registered original RAW image corresponding to the registered grayscale image, all the pixels in the pixel grid corresponding to that pixel are located in the motion area. For example, see the registered grayscale image on the right side in FIG.
  • a pixel grid U1 in the registered original RAW image includes: the pixel point P11 arranged in the first row and first column, the pixel point P12 in the first row and second column, the pixel point P21 in the second row and first column, and the pixel point P22 in the second row and second column; the pixel point in the first row and first column p11 of the registered grayscale image corresponds to the pixel grid U1 of the registered original RAW image.
  • if the pixel in the first row and first column p11 of the registered grayscale image is located in the motion area, then all the pixels in the pixel grid U1 of the registered original RAW image are located in the motion area; that is, the pixel point P11 in the first row and first column, the pixel point P12 in the first row and second column, the pixel point P21 in the second row and first column, and the pixel point P22 in the second row and second column are all located in the motion area.
  • the processor 20 further performs image morphological processing such as erosion and dilation on the motion region determined in the registered original RAW image, so as to make the detected motion region more accurate.
  • other ways can also be used to detect the motion area in the registered original RAW image. For example, directly calculate the difference between the pixel value of each pixel in the registered original RAW image and the pixel value of the corresponding pixel in the first reference image; if the difference is greater than a preset value, the pixel is located in the motion area. This is not limited here.
  • the registered original RAW images with the same exposure value are fused to obtain multiple frames of intermediate RAW images, further including:
  • step 0232 may be implemented by one or more processors 20 . That is to say, the one or more processors 20 are further configured to select any one frame of the registered original RAW images of the multiple frames with the same exposure value as the first reference image, and other frames as the first non-reference image.
  • the processor 20 selects any one frame from the registered original RAW images with the same exposure value of multiple frames as the first reference image, and other frames as the first non-reference image.
  • in some embodiments, the registered original RAW images of multiple frames with the same exposure value are sorted in order of acquisition time, and the first frame is selected as the first reference image; that is, among the multiple frames of registered original RAW images with the same exposure value, using the first acquired registered original RAW image as the fusion reference image can make the final image better meet the user's needs.
  • in other embodiments, the registered original RAW image with the highest definition among the multiple frames of registered original RAW images with the same exposure value is selected as the first reference image, so that the final image has higher definition.
  • the first fusion processing is applied to the pixels located in the motion area, including:
  • the second fusion processing is applied to the pixels outside the motion area, including:
  • both steps 0311 and 0312 may be implemented by one or more processors 20 . That is to say, the one or more processors 20 are further configured to: if all the pixels at the same position of the first non-reference images are located in the motion area, take the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image; and if at least one of the pixels at the same position of the first non-reference images is located outside the motion area, take the average value of the pixel value of the pixel at the corresponding position of the first reference image and the pixel values of the pixels at the corresponding position of the first non-reference images that are outside the motion area as the pixel value of the corresponding pixel of the fused intermediate RAW image.
  • After confirming the first reference image and the first non-reference images among the multiple frames of registered original RAW images with the same exposure value, if all the pixels at a given position of the first non-reference images are located in the motion area, the pixel value of the pixel at the corresponding position of the first reference image is taken as the pixel value of the corresponding pixel of the fused intermediate RAW image. For example, as shown in FIG. 15 , the exposure values of the original RAW image after the first registration, the original RAW image after the second registration, and the original RAW image after the third registration are all the same.
  • the original RAW image after the first registration is the first reference image
  • the original RAW image after the second registration is the first non-reference image
  • the original RAW image after the third registration is also the first non-reference image.
  • the pixel point a2 located in the third row and third column of the original RAW image after the second registration is located in the motion area
  • the pixel point a3 located in the third row and third column of the original RAW image after the third registration is also located in the motion area.
  • since all the pixels in the third row and third column of the first non-reference images are located in the motion area, the pixel value of the pixel a1 located in the third row and third column of the first reference image (the original RAW image after the first registration) is used as the pixel value of the pixel point A of the fused intermediate RAW image, and the pixel point A is set in the third row and third column of the fused intermediate RAW image.
  • the exposure values of the original RAW image after the first registration, the original RAW image after the second registration, and the original RAW image after the third registration are all the same.
  • the original RAW image after the first registration is the first reference image
  • the original RAW image after the second registration is the first non-reference image
  • the original RAW image after the third registration is also the first non-reference image.
  • the pixel b2 located in the first row and first column of the original raw image after the second registration is located outside the motion area
  • the pixel b3 located in the first row and first column of the original RAW image after the third registration is also located outside the motion area; that is, all the pixels in the first row and first column of the first non-reference images are located outside the motion area. Then the average value of the pixel value of the pixel b1 located in the first row and first column of the first reference image (the original RAW image after the first registration), the pixel value of the pixel b2, and the pixel value of the pixel b3 is taken as the pixel value of the pixel point B of the fused intermediate RAW image, and the pixel point B is set in the first row and first column of the fused intermediate RAW image.
  • the pixel point c2 located in the first row and second column of the original RAW image after the second registration is located outside the motion area
  • the pixel point c3 located in the first row and second column of the original RAW image after the third registration is located in the motion area; that is, at least one of the pixels at the same position of the first non-reference images is located outside the motion area. Then the average value of the pixel value of the pixel point c1 located in the first row and second column of the first reference image (the original RAW image after the first registration) and the pixel value of the pixel point c2 located in the first row and second column of the original RAW image after the second registration is taken as the pixel value of the pixel point C of the fused intermediate RAW image, and the pixel point C is set in the first row and second column of the fused intermediate RAW image.
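  • The two fusion rules above can be sketched jointly: where every first non-reference pixel at a position lies in the motion area, only the reference pixel survives; otherwise the reference pixel is averaged with the non-motion non-reference pixels. A minimal sketch:

```python
import numpy as np

def fuse_same_exposure(ref, non_refs, motion_masks):
    """Fuse registered frames that share the same exposure value.

    ref: the first reference image.
    non_refs: list of first non-reference frames.
    motion_masks: matching list of boolean masks, True where a pixel of that
    non-reference frame lies in the motion area."""
    total = ref.astype(np.float64).copy()
    count = np.ones_like(total)
    for frame, mask in zip(non_refs, motion_masks):
        valid = ~mask                       # pixels outside the motion area
        total += np.where(valid, frame, 0)
        count += valid
    # Where every non-reference pixel is in motion, count == 1 and the result
    # is just the reference pixel (first fusion process); otherwise it is the
    # average of the reference pixel and all non-motion non-reference pixels
    # (second fusion process).
    return total / count
```
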
  • performing high dynamic range image processing on multiple frames of original RAW images to obtain target RAW images further comprising:
  • step 028 , step 0241 and step 0251 may all be implemented in one or more processors 20 . That is to say, the one or more processors 20 are further configured to: perform de-ghosting processing on the multiple frames of intermediate RAW images to obtain de-ghosted intermediate RAW images; obtain the weights corresponding to all the pixels in each frame of de-ghosted intermediate RAW image; and fuse the multiple frames of de-ghosted intermediate RAW images according to the weights.
  • the processor 20 selects, from the multiple frames of intermediate RAW images, the intermediate RAW image obtained by fusing the first original RAW images as the second reference image; that is, the exposure value of the second reference image is the calibrated exposure value.
  • a frame of an intermediate RAW image is selected, and a motion area of the intermediate RAW image is detected.
  • for a pixel located in the motion area, the pixel value of the pixel at the corresponding position of the second reference image, the brightness of the second reference image, and the brightness of the intermediate RAW image are used to calculate the pixel value of the corresponding pixel in the de-ghosted intermediate RAW image.
  • the exposure values of the first intermediate RAW image and the second intermediate RAW image are not the same.
  • the first intermediate RAW image is the second reference image.
  • for example, the pixel value of the pixel point d1 located in the third row and third column of the second reference image, multiplied by the ratio of the average brightness of the second intermediate RAW image to the average brightness of the second reference image, is taken as the pixel value of the pixel d' of the de-ghosted second intermediate RAW image, and the pixel d' is set in the third row and third column of the de-ghosted second intermediate RAW image.
  • for a pixel located outside the motion area, the pixel value of the pixel in the intermediate RAW image is directly used as the pixel value of the pixel at the corresponding position of the de-ghosted intermediate RAW image.
  • for example, if the pixel point e2 located in the first row and first column of the second intermediate RAW image is located outside the motion area, the pixel value of the pixel point e2 is used as the pixel value of the pixel point e' of the de-ghosted second intermediate RAW image, and the pixel point e' is set in the first row and first column of the de-ghosted second intermediate RAW image.
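  • The de-ghosting rule illustrated by pixels d' and e' can be sketched as follows, assuming "brightness" means the frame's average pixel value (an assumption; the disclosure does not define it precisely):

```python
import numpy as np

def deghost(intermediate, reference, motion_mask):
    """De-ghost one intermediate RAW frame against the second reference image.

    Pixels inside the motion area are replaced by the reference pixel scaled
    by the ratio of the intermediate frame's average brightness to the
    reference frame's average brightness; pixels outside the motion area are
    kept as-is."""
    ratio = intermediate.mean() / reference.mean()
    replaced = reference * ratio            # brightness-matched reference
    return np.where(motion_mask, replaced, intermediate)
```
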
  • the final target RAW image can be obtained with higher definition and better image quality.
  • the weights corresponding to all the pixels in each frame of de-ghosted intermediate RAW image are acquired, and the multiple frames of de-ghosted intermediate RAW images are fused according to the weights to obtain the target RAW image.
  • the specific method of obtaining the weights corresponding to all the pixels in each frame of de-ghosted intermediate RAW image and fusing the multiple frames of de-ghosted intermediate RAW images according to the weights to obtain the target RAW image is the same as the specific implementation of obtaining the weights corresponding to all the pixels in each frame of intermediate RAW image and fusing multiple frames of intermediate RAW images according to the weights to obtain the target RAW image. The following is an example of obtaining the weights corresponding to all the pixels in each frame of intermediate RAW image, and fusing multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
  • the weights corresponding to all pixels in the intermediate RAW image of each frame are obtained, including:
  • step 0242 , step 0243 and step 0244 may be implemented by one or more processors 20 . That is to say, the one or more processors 20 are configured to: select the intermediate RAW image obtained by fusing the first original RAW images as the second reference image, with the remaining intermediate RAW images used as the second non-reference images; perform second grayscale processing on the second reference image to obtain a second grayscale image; and obtain the weight corresponding to the pixel to be calculated according to the average brightness and variance of the second grayscale image and the pixel value of the pixel to be calculated in the intermediate RAW image.
  • the processor 20 selects the intermediate RAW image obtained by fusing the first original RAW images from the multiple frames of intermediate RAW images as the second reference image (that is, the exposure value of the second reference image is the calibrated exposure value), and the remaining intermediate RAW images are used as the second non-reference images. As shown in FIG. 19 , the processor 20 performs second grayscale processing on the second reference image to obtain a second grayscale image. For example, the processor 20 performs interpolation processing on the second reference image to obtain the second grayscale image, and the length and width of the second grayscale image are the same as those of the second reference image.
  • the average brightness and variance of the second grayscale image are obtained.
  • the weight corresponding to the pixel to be calculated may be obtained according to the average brightness and variance of the second grayscale image and the pixel value of the pixel to be calculated in the intermediate RAW image.
  • For example, the weight corresponding to each pixel to be calculated in each frame of intermediate RAW image can be obtained according to a calculation formula (shown as a figure in the original and not reproduced here), where weight is the weight corresponding to the pixel to be calculated, mean is the average brightness of the second grayscale image, and sigma is the brightness variance of the second grayscale image.
  • In some embodiments, the weight corresponding to a pixel can be adjusted according to the actual needs of the user, for example, calculated according to a calculation formula (likewise shown as a figure in the original), where M and N are gain values adjusted by the user.
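The weight formula itself appears only as an image in the original publication and is not reproduced in this text. As a hedged illustration, a Gaussian-style well-exposedness weight is one plausible reading of a weight built from the grayscale image's average brightness (mean) and variance (sigma), with hypothetical user gains `m` and `n` standing in for M and N:

```python
import numpy as np

def pixel_weight(pixel_value, mean, sigma, m=1.0, n=1.0):
    """Gaussian-style well-exposedness weight.

    The patent's formula is not reproduced in the text; this Gaussian
    form is only one plausible reading of a weight computed from the
    grayscale image's average brightness (mean) and variance (sigma).
    `m` and `n` are hypothetical stand-ins for the user gains M and N.
    """
    return m * np.exp(-((pixel_value - mean) ** 2) / (2.0 * (n * sigma) ** 2))

# Pixels near the average brightness receive weights close to 1,
# while strongly under- or over-exposed pixels are down-weighted.
w_mid = pixel_weight(128.0, mean=128.0, sigma=40.0)
w_dark = pixel_weight(5.0, mean=128.0, sigma=40.0)
```

With this shape, raising `n` broadens the band of pixel values that are treated as well exposed, which matches the text's description of M and N as user-adjustable gains.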
  • In some other embodiments, obtaining the weights corresponding to all pixels in each frame of intermediate RAW image includes:
  • Steps 0245 and 0246 may each be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to: perform second grayscale processing on all intermediate RAW images to obtain corresponding third grayscale images; and obtain the weight corresponding to each pixel to be calculated according to the average brightness and variance of the third grayscale image and the pixel value of the pixel to be calculated in the corresponding intermediate RAW image.
  • The processor 20 performs the second grayscale processing on all the intermediate RAW images to obtain the corresponding third grayscale images, where the specific implementation of performing the second grayscale processing on the intermediate RAW images is the same as that of performing the second grayscale processing on the second reference image, and is not repeated here.
  • After multiple frames of third grayscale images are obtained, one frame of third grayscale image is selected, and the average brightness and variance of that third grayscale image are obtained.
  • The weight corresponding to each pixel is then calculated according to the average brightness and variance of the third grayscale image and the pixel value of the pixel to be calculated in the intermediate RAW image corresponding to that third grayscale image.
  • For example, the weight corresponding to each pixel to be calculated in each frame of intermediate RAW image can be obtained according to a calculation formula (shown as a figure in the original), where weight is the weight corresponding to the pixel to be calculated, mean is the average brightness of the third grayscale image corresponding to the intermediate RAW image in which the pixel is located, and sigma is the brightness variance of that third grayscale image.
  • multiple frames of intermediate RAW images are fused according to weights to obtain a target RAW image, including:
  • 02512: Use the sum of the products of the pixel values at the corresponding positions of all intermediate RAW images and the corresponding weights as the pixel values of the corresponding pixels of the fused target RAW image.
  • Step 0251 may be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to use the sum of the products of the pixel values at the corresponding positions of all intermediate RAW images and the corresponding weights as the pixel values of the corresponding pixels of the fused target RAW image.
  • After obtaining the weights corresponding to all the pixels in all the intermediate RAW images, the processor 20 takes the sum of the products of the pixel values of the pixels at corresponding positions of the intermediate RAW images and the corresponding weights as the pixel value of the corresponding pixel of the fused target RAW image.
  • For example, suppose the weight corresponding to the pixel in the first row and first column of the first intermediate RAW image is a first weight, and the weight corresponding to the pixel in the first row and first column of the second intermediate RAW image is a second weight. Then the pixel value of the pixel in the first row and first column of the first intermediate RAW image multiplied by the first weight, plus the pixel value of the pixel in the first row and first column of the second intermediate RAW image multiplied by the second weight, is used as the pixel value of the pixel in the first row and first column of the fused target RAW image.
  • In some embodiments, the weights corresponding to the pixels at corresponding positions of all the intermediate RAW images may also be normalized, that is, the weights corresponding to the pixels at a given position across all the intermediate RAW images sum to 1. The sum of the products of the pixel values of the pixels at the corresponding positions of the intermediate RAW images and the corresponding normalized weights is then used as the pixel value of the corresponding pixel of the fused target RAW image.
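The normalized weighted sum above can be sketched directly; this is a minimal illustration of the described step, not the patent's implementation, with two tiny hypothetical frames:

```python
import numpy as np

def fuse_frames(frames, weights):
    """Fuse intermediate RAW frames as a per-pixel weighted sum.

    `frames` and `weights` are lists of same-shaped arrays. The weights
    at each pixel position are first normalized so they sum to 1, then
    the fused pixel value is the sum of pixel-value x weight products
    across frames, as described in the text.
    """
    frames = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    weights = np.stack([np.asarray(w, dtype=np.float64) for w in weights])
    weights = weights / weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (frames * weights).sum(axis=0)

f1 = np.array([[100.0, 200.0]])
f2 = np.array([[300.0, 400.0]])
w1 = np.array([[1.0, 3.0]])
w2 = np.array([[1.0, 1.0]])
fused = fuse_frames([f1, f2], [w1, w2])  # [[200.0, 250.0]]
```

The first pixel averages its two inputs equally; the second leans toward the first frame, whose weight is three times larger.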
  • In some embodiments, the target RAW image has a higher dynamic range, and the bit width of the target RAW image is higher than the bit width of the original RAW images. For example, multiple frames of original RAW images with a bit width of 12 bits are subjected to high-dynamic fusion to obtain a target RAW image with a bit width of 16 bits.
  • the bit width of the target RAW image may also be equal to the bit width of the original RAW image.
  • multi-frame original RAW images with a bit width of 12 bits are subjected to high dynamic fusion to obtain a target RAW image with a bit width of 12 bits.
  • In some embodiments, the acquired multiple frames of original RAW images may all be exposed with the same exposure value, and the processor 20 fuses the multiple frames of original RAW images exposed with the same exposure value to directly obtain the target RAW image; this is not limited herein.
  • The processor 20 obtains the tag parameters in the DNG image based on the metadata parameter information recorded when the multiple frames of original RAW images were shot. The parameter information includes but is not limited to at least one of: shooting parameters, image height, black level, white level, color conversion matrix, white balance parameters, and lens shading correction parameters.
  • the DNG format is an open RAW file format, mainly to unify the RAW formats of different manufacturers.
  • the DNG specification defines the organization of data, color space conversion, etc.
  • the tag parameter (Tag) used is an extension based on the TIFF/EP specification.
  • The necessary part of the tags of the DNG image is not taken directly from the captured metadata information but is calculated through metadata conversion.
  • For example, the color matrix (ColorMatrix) and the forward matrix (ForwardMatrix) in the tag parameters need to be calculated from the metadata parameter information, which is further described below.
  • the metadata parameter information includes shooting parameters of the original RAW image, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source, and the tag parameters include the first color matrix, and The second color matrix.
  • Steps 031 and 032 may each be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to obtain the first color matrix according to the first color conversion matrix, the first matrix, and the second matrix, and to obtain the second color matrix according to the second color conversion matrix and the first matrix.
  • The first matrix is a conversion matrix from the first space to the second space with the second light source as the reference light source, where the first space is different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
  • the first light source may be a low color temperature light source (eg, A light)
  • the second light source may be a high color temperature light source (eg, D65 light)
  • the first space may be sRGB space
  • the second space may be XYZ space.
  • In some embodiments, the tag parameters further include a first forward matrix and a second forward matrix, and obtaining the tag parameters in the DNG image according to the metadata parameter information when the multiple frames of RAW images were shot further includes:
  • Steps 033 and 034 may each be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to calculate the first forward matrix according to the first color conversion matrix and the third matrix, and to calculate the second forward matrix according to the second color conversion matrix and the third matrix.
  • For example, the first forward matrix is equal to the product of the first color conversion matrix and the third matrix, and the second forward matrix is equal to the product of the second color conversion matrix and the third matrix.
  • the third matrix is a conversion matrix from the first space to the second space with the third light source as a reference light source.
  • the first light source may be a low color temperature light source (such as A light)
  • the second light source may be a high color temperature light source (such as D65 light)
  • the third light source may be D50 light
  • the first space may be sRGB space
  • the second space may be an XYZ space.
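The forward-matrix computation above is just a matrix product. The sketch below illustrates it with hypothetical values: `ccm_d65` stands in for the second color conversion matrix, and `srgb_to_xyz_d50` stands in for the third matrix (sRGB to XYZ with D50 as the reference light source); neither value is taken from the patent.

```python
import numpy as np

# Hypothetical 3x3 matrices standing in for the patent's quantities:
# ccm_d65          - second color conversion matrix (high color temperature, D65)
# srgb_to_xyz_d50  - third matrix: first space (sRGB) -> second space (XYZ), D50 reference
ccm_d65 = np.eye(3)  # placeholder; a real CCM comes from sensor calibration
srgb_to_xyz_d50 = np.array([
    [0.4361, 0.3851, 0.1431],
    [0.2225, 0.7169, 0.0606],
    [0.0139, 0.0971, 0.7139],
])

# Per the text, the second forward matrix is the product of the second
# color conversion matrix and the third matrix; the first forward matrix
# is obtained the same way from the first color conversion matrix.
forward_matrix_2 = ccm_d65 @ srgb_to_xyz_d50
```

With the identity placeholder CCM the product simply reproduces the third matrix; a calibrated CCM would rotate it into the sensor's color space.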
  • a DNG file is generated according to the target RAW image, tag parameters and metadata parameter information, including:
  • 041: Write the tag parameters, the metadata parameter information, and the data of the target RAW image into a blank file according to the DNG encoding specification to generate the DNG file.
  • step 041 may be implemented by one or more processors 20 . That is to say, one or more processors 20 are further configured to write tag parameters, metadata parameter information, and data of the target RAW image into a blank file according to the DNG encoding specification to generate a DNG file.
  • Since the original RAW image exposed at the calibrated exposure value is used as the reference image for fusion, the shooting parameters in the metadata parameter information used when creating the DNG file are the shooting parameters of the original RAW image exposed at the calibrated exposure value.
  • the image processing method further includes:
  • Step 07 may be implemented by one or more processors 20. That is, the one or more processors 20 are also configured to parse the DNG file to generate a DNG image.
  • the processor 20 may perform analysis according to the tag parameters, metadata parameter information and data of the target RAW image in the DNG file to obtain an image in DNG format.
  • The processor 20 outputs the acquired DNG file to an application (e.g., a photo album); the application opens the DNG file, parses it to generate a DNG image, and displays the DNG image.
  • the DNG image can be imported into post-processing software, and the DNG image can be post-adjusted to obtain the target image.
  • Post-adjustment includes, but is not limited to, at least one of brightness adjustment, chromaticity adjustment, and size adjustment.
  • In summary, the target RAW image obtained in this application by high-dynamic fusion of multiple frames of original RAW images has a higher dynamic range and higher definition than a single frame of original RAW image, and converting the target RAW image into a DNG file makes it convenient for users to export it for processing in post-processing software.
  • the present application further provides an electronic device 1000 .
  • the electronic device 1000 according to the embodiment of the present application includes the lens 300 , the casing 200 , and the image processing apparatus 100 according to any one of the above-mentioned embodiments.
  • the lens 300 and the image processing device 100 are combined with the casing 200 .
  • the lens 300 cooperates with the image sensor 10 of the image processing apparatus 100 to form an image.
  • The electronic device 1000 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (e.g., a smart watch, a smart bracelet, smart glasses, or a smart helmet), a drone, a head-mounted display device, etc., which is not limited herein.
  • The electronic device 1000 in this application obtains a target RAW image by performing high-dynamic fusion of multiple frames of RAW images through the image processing apparatus 100 and then converts the target RAW image into a DNG file.
  • On the one hand, the target RAW image synthesized from multiple frames of RAW images carries more image information, a wider dynamic range, and higher definition than a single-frame RAW image;
  • on the other hand, the DNG file, with its unified encoding and parsing format, helps users export the file to post-processing software for processing.
  • the present application also provides a non-volatile computer-readable storage medium 400 containing a computer program.
  • When the computer program is executed by a processor 60, the processor 60 is caused to execute the image processing method of any one of the above embodiments.
  • The processor 60 may be the same processor as the processor 20 disposed in the image processing apparatus 100, or the processor 60 may be disposed in the electronic device 1000, that is, the processor 60 and the processor 20 disposed in the image processing apparatus 100 may not be the same processor; this is not limited herein.
  • Any description of a process or method in the flowcharts or otherwise described herein may be understood to represent a module, segment, or portion of code comprising one or more executable instructions for implementing a specified logical function or step of the process, and the scope of the preferred embodiments of this application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of this application belong.


Abstract

An image processing method, an image processing apparatus (100), an electronic device (1000), and a computer-readable storage medium (400). The image processing method includes: (01) acquiring multiple frames of original RAW images; (02) performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; (03) obtaining tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and (04) generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.

Description

Image processing method, image processing apparatus, electronic device, and readable storage medium
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 202110221004.2, filed with the China National Intellectual Property Administration on February 26, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image technology, and in particular to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
Because different image processing applications in a terminal parse or encode RAW-format files in different ways, a RAW file encoded by one image processing application may not be parsable by another image processing application. Moreover, a single frame of RAW image has relatively high noise and a poor dynamic range.
Summary
Embodiments of this application provide an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Embodiments of this application provide an image processing method. The image processing method includes: acquiring multiple frames of original RAW images; performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtaining tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
Embodiments of this application provide an image processing apparatus. The image processing apparatus includes an image sensor and one or more processors. A pixel array in the image sensor is exposed to acquire multiple frames of original RAW images. The one or more processors are configured to: perform high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtain tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
Embodiments of this application provide an electronic device. The electronic device includes a lens and an image processing apparatus, the lens cooperating with an image sensor of the image processing apparatus to form images. The image processing apparatus includes an image sensor and one or more processors. A pixel array in the image sensor is exposed to acquire multiple frames of original RAW images. The one or more processors are configured to: perform high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtain tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
Embodiments of this application provide a non-volatile computer-readable storage medium containing a computer program. When the computer program is executed by a processor, the processor is caused to execute the image processing method of any one of claims 1 to 11. The image processing method includes: acquiring multiple frames of original RAW images; performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtaining tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
Additional aspects and advantages of the embodiments of this application will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and readily understandable from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of this application;
FIG. 2 is a schematic structural diagram of an image processing apparatus according to some embodiments of this application;
FIG. 3 is a schematic diagram of an image sensor in an image processing apparatus according to some embodiments of this application;
FIGS. 4 to 9 are schematic flowcharts of image processing methods according to some embodiments of this application;
FIG. 10 is a schematic diagram of performing first grayscale processing on the first registration image to obtain a first grayscale image in some embodiments of this application;
FIGS. 11 to 12 are schematic flowcharts of image processing methods according to some embodiments of this application;
FIG. 13 is a schematic diagram of the motion region of a registered grayscale image and the motion region of the corresponding registered original RAW image in some embodiments of this application;
FIG. 14 is a schematic flowchart of an image processing method according to some embodiments of this application;
FIG. 15 is a schematic diagram of the principle of obtaining an intermediate RAW image according to some embodiments of this application;
FIG. 16 is a schematic flowchart of an image processing method according to some embodiments of this application;
FIG. 17 is a schematic diagram of the principle of obtaining a de-ghosted intermediate RAW image according to some embodiments of this application;
FIG. 18 is a schematic flowchart of an image processing method according to some embodiments of this application;
FIG. 19 is a schematic diagram of performing second grayscale processing on the second reference image to obtain a second grayscale image in some embodiments of this application;
FIGS. 20 to 25 are schematic flowcharts of image processing methods according to some embodiments of this application;
FIG. 26 is a schematic diagram of a DNG image and a target image acquired in some embodiments of this application;
FIG. 27 is a schematic structural diagram of an electronic device according to some embodiments of this application;
FIG. 28 is a schematic diagram of the interaction between a non-volatile computer-readable storage medium and a processor according to some embodiments of this application.
Detailed Description
Embodiments of this application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the embodiments of this application, and should not be construed as limiting them.
Embodiments of this application provide an image processing method. The image processing method includes: acquiring multiple frames of original RAW images; performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtaining tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
In some embodiments, the multiple frames of original RAW images are exposed with at least two different exposure values.
In some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
In some embodiments, the image processing method further includes: metering the environment and obtaining the calibrated exposure value according to the measured ambient brightness; or obtaining the calibrated exposure value according to exposure parameters determined by the user, where the exposure parameters include at least one of an exposure value, a sensitivity, and an exposure duration.
In some embodiments, performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain the target RAW image includes: performing image registration on the multiple frames of original RAW images; fusing registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images; obtaining the weights corresponding to all pixels in each frame of intermediate RAW image; and fusing the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
In some embodiments, performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain the target RAW image further includes: detecting motion regions of the registered original RAW images. Fusing registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images includes: for each frame of registered original RAW image with the same exposure value, applying first fusion processing to pixels inside the motion region and second fusion processing to pixels outside the motion region to obtain the multiple frames of intermediate RAW images, the first fusion processing being different from the second fusion processing.
In some embodiments, fusing registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images further includes: selecting any one frame of the multiple registered original RAW images with the same exposure value as a first reference image, the other frames serving as first non-reference images. Applying the first fusion processing to pixels inside the motion region includes: if the pixels at the same position in all first non-reference images all lie in the motion region, taking the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image. Applying the second fusion processing to pixels outside the motion region includes: if at least one of the pixels at the same position in all first non-reference images lies outside the motion region, taking the mean of the pixel value of the pixel at the corresponding position of the first reference image and the pixel values of the pixels at the corresponding position of the first non-reference images that lie outside the motion region as the pixel value of the corresponding pixel of the fused intermediate RAW image.
In some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value, and obtaining the weights corresponding to all pixels in each frame of intermediate RAW image includes: selecting the intermediate RAW image obtained by fusing the first original RAW images as a second reference image, the remaining intermediate RAW images serving as second non-reference images; performing second grayscale processing on the second reference image to obtain a second grayscale image; and obtaining the weight corresponding to each pixel to be calculated according to the average brightness and variance of the second grayscale image and the pixel value of the pixel to be calculated in the intermediate RAW image.
In some embodiments, the metadata parameter information includes shooting parameters of the original RAW images, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source, and the tag parameters include a first color matrix and a second color matrix. Obtaining the tag parameters in the DNG image according to the metadata parameter information recorded when the multiple frames of original RAW images were shot includes: obtaining the first color matrix according to the first color conversion matrix, a first matrix, and a second matrix; and obtaining the second color matrix according to the second color conversion matrix and the first matrix. The first matrix is a conversion matrix from a first space to a second space with the second light source as the reference light source, the first space being different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
In some embodiments, the tag parameters further include a first forward matrix and a second forward matrix, and obtaining the tag parameters in the DNG image according to the metadata parameter information when the multiple frames of RAW images were shot further includes: calculating the first forward matrix according to the first color conversion matrix and a third matrix; and calculating the second forward matrix according to the second color conversion matrix and the third matrix. The third matrix is a conversion matrix from the first space to the second space with a third light source as the reference light source.
In some embodiments, generating the DNG file according to the target RAW image, the tag parameters, and the metadata parameter information includes: writing the tag parameters, the metadata parameter information, and the data of the target RAW image into a blank file according to the DNG encoding specification to generate the DNG file.
Embodiments of this application further provide an image processing apparatus. The image processing apparatus includes an image sensor and one or more processors. A pixel array in the image sensor is exposed to acquire multiple frames of original RAW images. The one or more processors are configured to: perform high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtain tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
In some embodiments, the multiple frames of original RAW images are exposed with at least two different exposure values.
In some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
In some embodiments, the one or more processors are further configured to meter the environment and obtain the calibrated exposure value according to the measured ambient brightness, or to obtain the calibrated exposure value according to exposure parameters determined by the user, where the exposure parameters include at least one of an exposure value, a sensitivity, and an exposure duration.
In some embodiments, the one or more processors are further configured to: perform image registration on the multiple frames of original RAW images; fuse registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images; obtain the weights corresponding to all pixels in each frame of intermediate RAW image; and fuse the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
In some embodiments, the one or more processors are further configured to: detect motion regions of the registered original RAW images; and for each frame of registered original RAW image with the same exposure value, apply first fusion processing to pixels inside the motion region and second fusion processing to pixels outside the motion region to obtain the multiple frames of intermediate RAW images, the first fusion processing being different from the second fusion processing.
In some embodiments, the one or more processors are further configured to: select any one frame of the multiple registered original RAW images with the same exposure value as a first reference image, the other frames serving as first non-reference images; if the pixels at the same position in all first non-reference images all lie in the motion region, take the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image; and if at least one of the pixels at the same position in all first non-reference images lies outside the motion region, take the mean of the pixel value of the pixel at the corresponding position of the first reference image and the pixel values of the pixels at the corresponding position of the first non-reference images that lie outside the motion region as the pixel value of the corresponding pixel of the fused intermediate RAW image.
In some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value, and the one or more processors are further configured to: select the intermediate RAW image obtained by fusing the first original RAW images as a second reference image, the remaining intermediate RAW images serving as second non-reference images; perform second grayscale processing on the second reference image to obtain a second grayscale image; and obtain the weight corresponding to each pixel to be calculated according to the average brightness and variance of the second grayscale image and the pixel value of the pixel to be calculated in the intermediate RAW image.
In some embodiments, the metadata parameter information includes shooting parameters of the original RAW images, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source, and the tag parameters include a first color matrix and a second color matrix. The one or more processors are further configured to: obtain the first color matrix according to the first color conversion matrix, a first matrix, and a second matrix; and obtain the second color matrix according to the second color conversion matrix and the first matrix. The first matrix is a conversion matrix from a first space to a second space with the second light source as the reference light source, the first space being different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
In some embodiments, the tag parameters further include a first forward matrix and a second forward matrix. The one or more processors are further configured to: calculate the first forward matrix according to the first color conversion matrix and a third matrix; and calculate the second forward matrix according to the second color conversion matrix and the third matrix. The third matrix is a conversion matrix from the first space to the second space with a third light source as the reference light source.
In some embodiments, the one or more processors are further configured to write the tag parameters, the metadata parameter information, and the data of the target RAW image into a blank file according to the DNG encoding specification to generate the DNG file.
This application further provides an electronic device. The electronic device of the embodiments of this application includes a lens and the image processing apparatus of any one of the above embodiments. The lens cooperates with the image sensor of the image processing apparatus to form images.
This application further provides a non-volatile computer-readable storage medium containing a computer program. When the computer program is executed by a processor, the processor is caused to execute the image processing method of any one of the above embodiments.
Further description is given below with reference to the accompanying drawings.
Referring to FIG. 1, embodiments of this application provide an image processing method. The image processing method includes:
01: acquiring multiple frames of original RAW images;
02: performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image;
03: obtaining tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and
04: generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
Referring to FIGS. 1 and 2, embodiments of this application further provide an image processing apparatus 100. The image processing apparatus 100 includes an image sensor 10 and one or more processors 20. Step 01 is implemented by the image sensor 10, and steps 02, 03, and 04 may each be executed by the one or more processors 20. That is, the pixel array 11 (shown in FIG. 3) in the image sensor 10 is exposed to acquire multiple frames of original RAW images, and the one or more processors 20 are configured to: perform high-dynamic-range image processing on the multiple frames of original RAW images to obtain a target RAW image; obtain tag parameters in a DNG image according to metadata parameter information recorded when the multiple frames of original RAW images were shot; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
The image processing method and image processing apparatus 100 of this application obtain a target RAW image by high-dynamic fusion of multiple frames of RAW images and then convert the target RAW image into a DNG file. In this way, on the one hand, the target RAW image synthesized from multiple frames of RAW images carries more image information, a wider dynamic range, and higher definition than a single-frame RAW image; on the other hand, because the target RAW image is converted into a DNG file with a unified encoding and parsing format, the user can conveniently export the DNG file to post-processing software for processing.
Specifically, referring to FIG. 3, the image sensor 10 includes a pixel array 11, and the pixel array 11 is exposed to acquire original RAW images. It should be noted that, in some embodiments, the pixel array 11 includes a plurality of photosensitive pixels (not shown) arranged two-dimensionally in an array (i.e., in a two-dimensional matrix), and each photosensitive pixel converts incident light into charge according to the intensity of the light incident on it.
In some embodiments, the multiple frames of original RAW images are exposed with at least two different exposure values; that is, the pixel array 11 is exposed with at least two different exposure values to acquire the multiple frames of original RAW images, and at least two of the frames of original RAW images are acquired with different exposure values.
Specifically, in some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at a calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value. That is, the pixel array 11 is exposed at the calibrated exposure value to acquire the first original RAW image, and exposed at an exposure value different from the calibrated exposure value to acquire the second original RAW image.
For example, the multiple frames of original RAW images may include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value greater than the calibrated exposure value. It should be noted that the number of first original RAW images may be greater than, less than, or equal to the number of second original RAW images; this is not limited herein. In addition, in some embodiments the multiple frames of second original RAW images may themselves be exposed with at least two different exposure values, but however many exposure values are used to acquire the second original RAW images, all of them are greater than the calibrated exposure value.
As another example, in some embodiments the multiple frames of original RAW images may include a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value less than the calibrated exposure value. Again, the number of first original RAW images may be greater than, less than, or equal to the number of second original RAW images; this is not limited herein. Likewise, the multiple frames of second original RAW images may be exposed with at least two different exposure values, but all of the exposure values used to acquire the second original RAW images are less than the calibrated exposure value.
In some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at the calibrated exposure value, a second original RAW image exposed at an exposure value greater than the calibrated exposure value, and a third original RAW image exposed at an exposure value less than the calibrated exposure value. The numbers of first, second, and third original RAW images may all be equal or may differ; this is not limited herein. In addition, the second original RAW images may be exposed with at least two different exposure values, all greater than the calibrated exposure value, and the third original RAW images may be exposed with at least two different exposure values, all less than the calibrated exposure value. Since the image processing apparatus 100 performs high-dynamic fusion on the multiple frames of original RAW images after acquiring them, a target RAW image fused from original RAW images exposed with three different exposure values has a higher dynamic range and better image quality than one fused from original RAW images exposed with two different exposure values.
It should be noted that, in some embodiments, the image processing apparatus 100 pre-stores a plurality of preset exposure strategies for acquiring the multiple frames of original RAW images, and the user can select a preset exposure strategy according to actual needs, so that the final target RAW image better meets the user's needs. The preset exposure strategies include but are not limited to at least one of the following: (1) exposing at the calibrated exposure value to acquire multiple frames of first original RAW images, and at an exposure value less than the calibrated exposure value to acquire one frame of second original RAW image; (2) exposing at the calibrated exposure value to acquire one frame of first original RAW image, and at an exposure value less than the calibrated exposure value to acquire multiple frames of second original RAW images; (3) exposing at the calibrated exposure value to acquire first original RAW images, at an exposure value greater than the calibrated exposure value to acquire second original RAW images, and at an exposure value less than the calibrated exposure value to acquire third original RAW images. Of course, in some embodiments the user may also set the exposure strategy directly to acquire the multiple frames of original RAW images; this is not limited herein.
Referring to FIG. 4, in some embodiments the image processing method further includes:
051: metering the environment, and obtaining the calibrated exposure value according to the measured ambient brightness.
Step 01, acquiring multiple frames of original RAW images, includes:
011: acquiring a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
Referring to FIGS. 2 and 4, in some embodiments step 011 is implemented by the image sensor 10 and step 051 is executed by the one or more processors 20. That is, the one or more processors 20 are further configured to meter the environment and obtain the calibrated exposure value according to the measured ambient brightness, and the image sensor 10 is further configured to acquire a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
Specifically, the processor 20 detects the brightness of the surroundings of the image processing apparatus 100, or of the electronic device 1000 (shown in FIG. 27) in which the image processing apparatus 100 is installed, to obtain the ambient brightness. After obtaining the ambient brightness, the processor 20 obtains the calibrated exposure value according to the ambient brightness. It should be pointed out that, at this ambient brightness, exposing at the calibrated exposure value can produce a relatively clear original RAW image of relatively good quality. For example, in some embodiments the image processing apparatus 100 stores a preset table mapping ambient brightness to exposure values; the processor 20 looks up the exposure value corresponding to the acquired ambient brightness in this table and uses it as the calibrated exposure value.
Referring to FIG. 5, in some embodiments the image processing method further includes:
052: obtaining the calibrated exposure value according to exposure parameters determined by the user, where the exposure parameters include at least one of an exposure value, a sensitivity, and an exposure duration.
Referring to FIGS. 2 and 5, in some embodiments step 052 is executed by the one or more processors 20. That is, the one or more processors 20 are further configured to obtain the calibrated exposure value according to exposure parameters determined by the user, where the exposure parameters include at least one of an exposure value, a sensitivity, and an exposure duration.
Specifically, referring to FIGS. 5 and 6, in some embodiments step 052, obtaining the calibrated exposure value according to exposure parameters determined by the user, further includes:
0521: metering the environment;
0522: obtaining initial exposure parameters according to the measured ambient brightness; and
0523: adjusting the initial parameters according to user input to obtain the exposure parameters.
Referring to FIGS. 2 and 6, in some embodiments steps 0521, 0522, and 0523 may each be implemented by the one or more processors 20. That is, the one or more processors 20 are further configured to: meter the environment; obtain initial exposure parameters according to the measured ambient brightness; and adjust the initial parameters according to user input to obtain the exposure parameters.
Specifically, the processor 20 detects the brightness of the surroundings of the image processing apparatus 100, or of the electronic device 1000 (shown in FIG. 27) in which it is installed, to obtain the ambient brightness, and then obtains the initial parameters according to the ambient brightness. For example, in some embodiments the image processing apparatus 100 stores a preset table mapping ambient brightness to initial exposure parameters, and the processor 20 looks up the initial exposure parameters corresponding to the acquired ambient brightness in this table. The exposure parameters include at least one of an exposure value, an exposure time, and a sensitivity.
The user can adjust the initial parameters according to actual needs, and the processor 20 takes the user-adjusted initial parameters as the user-determined exposure parameters. After obtaining the user-determined exposure parameters, the processor 20 obtains the calibrated exposure value from them. Specifically, in some embodiments the exposure parameters include the exposure value, exposure time, and sensitivity, i.e., the initial parameters include an initial exposure value, an initial exposure time, and an initial sensitivity. After the initial parameters are obtained: if the user does not adjust them, the initial exposure value is used as the calibrated exposure value; if the user adjusts only the initial exposure value, the adjusted initial exposure value is used as the calibrated exposure value; if the user adjusts the initial exposure time and the initial sensitivity, the exposure value obtained by combining the adjusted exposure time and the adjusted sensitivity is used as the calibrated exposure value. It should be noted that, in some embodiments, if the user adjusts only the initial exposure time, the adjusted exposure time is the exposure time the user desires, so all frames of original RAW images use the adjusted exposure time; the exposure value obtained by combining the initial sensitivity with the adjusted exposure time is used as the calibrated exposure value, and original RAW images exposed at values different from the calibrated exposure value are obtained by adjusting the sensitivity. Likewise, if the user adjusts only the initial sensitivity, the adjusted sensitivity is the one the user desires, so all frames use the adjusted sensitivity; the exposure value obtained by combining the initial exposure time with the adjusted sensitivity is used as the calibrated exposure value, and original RAW images exposed at values different from the calibrated exposure value are obtained by adjusting the exposure time. Because the user only needs to adjust initial parameters obtained from the ambient brightness, compared with entering exposure parameters directly, the final target image can meet the user's needs while the user's operation is simplified.
Of course, in some embodiments the user may also enter exposure parameters directly, and the processor 20 obtains the calibrated exposure value from the user's input. For example, if the user enters only an exposure value, that exposure value is used as the calibrated exposure value; if the user enters a sensitivity and an exposure time, the exposure value obtained by combining them is used as the calibrated exposure value.
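The text says only that the calibrated exposure value is "the exposure value obtained by combining" the sensitivity and the exposure time, without giving the combination. As a hedged sketch, one conventional way to fold both into a single exposure value uses the standard photographic EV definition, with a hypothetical fixed f-number typical of phone lenses:

```python
import math

def combined_ev(exposure_time_s, iso, f_number=2.0):
    """Combine shutter time and ISO into a single exposure value.

    This is an assumption, not the patent's formula: it uses the standard
    EV definition log2(N^2 / t) with an ISO term folded in, and a
    hypothetical fixed f-number (phone cameras typically have fixed
    apertures). Higher ISO lowers the EV the scene must supply.
    """
    return math.log2(f_number ** 2 / exposure_time_s) - math.log2(iso / 100.0)

# Doubling the ISO at the same shutter speed shifts the combined EV by one stop.
ev_a = combined_ev(1 / 50, 100)
ev_b = combined_ev(1 / 50, 200)
```

Under this convention, adjusting only the sensitivity (as the text describes) moves the frame's effective exposure away from the calibrated value while the shutter time stays fixed.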
After the calibrated exposure value is obtained, the pixel array 11 in the image sensor 10 is exposed at the calibrated exposure value to acquire the first original RAW image and at an exposure value different from the calibrated exposure value to acquire the second original RAW image. The specific acquisition manner is the same as that described in the embodiments above and is not repeated here.
Referring to FIG. 7, in some embodiments the image processing method further includes:
06: preprocessing the original RAW images, the preprocessing including at least one of linearization correction, dead-pixel correction, black-level correction, and lens shading correction.
Step 02, performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain the target RAW image, includes:
021: performing high-dynamic-range image processing on the multiple frames of preprocessed original RAW images to obtain the target RAW image.
Referring to FIGS. 2 and 7, in some embodiments steps 06 and 021 may each be executed by the one or more processors 20. That is, the one or more processors 20 are further configured to preprocess the original RAW images and to perform high-dynamic-range image processing on the multiple frames of preprocessed original RAW images to obtain the target RAW image.
Specifically, in some embodiments, after the image sensor 10 is exposed to acquire the multiple frames of original RAW images, the processor 20 preprocesses the original RAW images to obtain the preprocessed original RAW images. The preprocessing includes at least one of linearization correction, dead-pixel correction, black-level correction, and lens shading correction. For example, the preprocessing may include only linearization correction; or only linearization correction and dead-pixel correction; or only linearization correction, dead-pixel correction, and black-level correction; or all of linearization correction, dead-pixel correction, black-level correction, and lens shading correction; this is not limited herein.
Because the original RAW images are preprocessed, the target RAW image obtained by high-dynamic fusion of the preprocessed frames has higher definition and better image quality than one obtained by fusing the original RAW images directly.
It should be noted that in some embodiments the processor 20 includes an image signal processor (ISP), and the image preprocessing of the multiple frames of original RAW images is performed in the ISP. Of course, in some embodiments the preprocessing of the multiple frames of original RAW images may also be performed in another processor 20, i.e., not in the ISP; this is not limited herein.
Referring to FIGS. 1 and 8, in some embodiments step 02, performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain the target RAW image, includes:
022: performing image registration on the multiple frames of original RAW images;
023: fusing the registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images;
024: obtaining the weights corresponding to all pixels in each frame of intermediate RAW image; and
025: fusing the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
Referring to FIGS. 2 and 8, in some embodiments steps 022, 023, 024, and 025 may each be executed by the one or more processors 20. That is, the one or more processors 20 are further configured to: perform image registration on the multiple frames of original RAW images; fuse the registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images; obtain the weights corresponding to all pixels in each frame of intermediate RAW image; and fuse the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image.
Specifically, after the multiple frames of original RAW images are acquired, image registration is performed on them. Of course, in some instances registration may also be performed on the multiple frames of preprocessed original RAW images. The following description takes registering the multiple frames of original RAW images as an example.
Referring to FIGS. 8 and 9, in some embodiments, registering the multiple frames of original RAW images further includes:
0221: selecting one frame of first original RAW image as a first registration image;
0222: performing first grayscale processing on the original RAW image to be registered and on the first registration image to obtain a grayscale image to be registered and a first grayscale image;
0223: obtaining, from the grayscale image to be registered and the first grayscale image, a first array corresponding to the RAW image to be registered; and
0224: obtaining the registered original RAW image according to the coordinates of the pixels of the original RAW image to be registered and the first array.
Referring to FIGS. 2 and 9, in some embodiments steps 0221, 0222, 0223, and 0224 may each be implemented by the one or more processors 20. That is, the one or more processors 20 are further configured to: select one frame of first original RAW image as the first registration image; perform first grayscale processing on the original RAW image to be registered and on the first registration image to obtain the grayscale image to be registered and the first grayscale image; obtain the first array corresponding to the RAW image to be registered from the two grayscale images; and obtain the registered original RAW image from the pixel coordinates of the original RAW image to be registered and the first array.
Specifically, the processor 20 selects one frame of first original RAW image as the first registration image, i.e., selects from the multiple frames of original RAW images one frame exposed at the calibrated exposure value. From the remaining frames, one original RAW image is selected as the image to be registered. It should be noted that, since the first registration image serves as the baseline, the first registration image itself does not need to be registered; it can be understood as already registered, so the registered original RAW images include the images obtained by registering the images to be registered, plus the first registration image.
Referring to FIG. 10, first grayscale processing is performed on the first registration image to obtain the first grayscale image. For example, in some embodiments the first registration image includes a plurality of pixel grids, each containing four pixels arranged in a 2x2 pattern. One pixel of the first grayscale image corresponds to one pixel grid of the first registration image, and the mean of all pixels in that grid is used as the pixel value of the corresponding pixel of the first grayscale image. For example, as shown in FIG. 10, one pixel grid U1 of the first registration image contains the pixel P11 in the first row and first column, the pixel P12 in the first row and second column, the pixel P21 in the second row and first column, and the pixel P22 in the second row and second column, and the pixel p11 in the first row and first column of the first grayscale image corresponds to grid U1. The pixel value of p11 equals the mean of the pixel values of P11, P12, P21, and P22. The grayscale image to be registered is obtained from the original RAW image to be registered in the same way, which is not repeated here.
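The 2x2 block averaging described above can be sketched as follows; this is a minimal illustration of the example in the text, assuming even image dimensions:

```python
import numpy as np

def first_grayscale(raw):
    """Average each 2x2 pixel grid of a RAW image into one gray pixel.

    Mirrors the FIG. 10 example: the gray pixel p11 is the mean of the
    RAW pixels P11, P12, P21, and P22. Assumes the RAW image height and
    width are even so the grids tile the image exactly.
    """
    raw = np.asarray(raw, dtype=np.float64)
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.array([[10.0, 20.0],
                [30.0, 40.0]])
gray = first_grayscale(raw)  # [[25.0]]
```

The resulting grayscale image has half the height and half the width of the RAW frame, which also makes the subsequent feature-point and motion computations cheaper.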
After the first grayscale image and the grayscale image to be registered are obtained, feature points are extracted from both. In some embodiments, the processor 20 may compute the feature points of a grayscale image with the Harris corner algorithm; of course, other methods of computing feature points may also be used, and this is not limited here.
After the feature points of the grayscale images (including the first grayscale image and the grayscale image to be registered) are obtained, the first array corresponding to the RAW image to be registered is obtained from the corresponding feature points. For example, in some embodiments the mapping between the feature points of the first grayscale image and the corresponding feature points of the grayscale image to be registered is computed to obtain the first array. The first array may be a homography matrix and describes the pixel mapping between the grayscale image to be registered and the first registration grayscale image. It should be noted that the mapping between the original RAW image to be registered and the first registration image is the same as the mapping between the corresponding grayscale images, so the same first array is used.
After the first array is obtained, the registered original RAW image is obtained from the pixel coordinates of the original RAW image to be registered and the first array. For example, a pixel of the original RAW image to be registered is selected, its coordinates are obtained, its registered coordinates are computed from those coordinates and the first array by an affine transformation, and the pixel is moved to the registered coordinates; the next pixel is then selected and the process repeated until all pixels of the image to be registered have been moved to their registered coordinates, yielding the registered original RAW image.
Of course, in some embodiments other methods may also be used to register the multiple frames of original RAW images, which are not enumerated here. Registering the multiple frames of original RAW images facilitates their subsequent processing.
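The coordinate mapping through the first array can be sketched for a single pixel; this is an illustrative assumption about the form of the mapping (a 3x3 homography applied in homogeneous coordinates), and estimating the matrix itself from Harris corners is left to a library such as OpenCV's `findHomography` and is not shown:

```python
import numpy as np

def warp_point(h_matrix, x, y):
    """Map a pixel coordinate through the first array (a 3x3 homography).

    The text computes each registered coordinate from the pixel's
    original coordinate and the first array; this shows that mapping
    for one point, using homogeneous coordinates.
    """
    v = h_matrix @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# A pure translation homography shifts every pixel by (2, 3).
h = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
warp_point(h, 10.0, 10.0)  # (12.0, 13.0)
```

In practice the whole frame would be warped at once (e.g. with a remapping function) rather than pixel by pixel, but the per-pixel formulation matches the step-by-step description in the text.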
After the multiple frames of registered original RAW images are obtained, the processor 20 first fuses the registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images. For example, the multiple frames of registered original RAW images include multiple frames of registered first original RAW images exposed at the calibrated exposure value and multiple frames of registered second RAW images exposed at a first exposure value. The processor 20 fuses the registered first original RAW images to obtain a first intermediate RAW image and fuses the registered second original RAW images to obtain a second intermediate RAW image.
Specifically, referring to FIGS. 1 and 11, in some embodiments, performing high-dynamic-range image processing on the multiple frames of original RAW images to obtain the target RAW image further includes:
026: detecting motion regions of the registered original RAW images.
Step 023, fusing the registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images, includes:
0231: for each frame of registered original RAW image with the same exposure value, applying first fusion processing to pixels inside the motion region and second fusion processing to pixels outside the motion region to obtain the multiple frames of intermediate RAW images, the first fusion processing being different from the second fusion processing.
Referring to FIGS. 2 and 11, in some embodiments steps 026 and 0231 may each be implemented by the one or more processors 20. That is, the one or more processors 20 are further configured to detect motion regions of the registered original RAW images, and, for each frame of registered original RAW image with the same exposure value, apply first fusion processing to pixels inside the motion region and second fusion processing to pixels outside the motion region to obtain the multiple frames of intermediate RAW images, the first fusion processing being different from the second fusion processing.
After the first grayscale image and the grayscale image to be registered are obtained, the registered grayscale image is obtained from the pixel coordinates of the grayscale image to be registered and the first array. That is, the multiple frames of grayscale images (including the first grayscale image and the grayscale images to be registered) are also registered. It should be noted that obtaining the registered grayscale image from the pixel coordinates of the grayscale image to be registered and the first array is the same as obtaining the registered original RAW image from the pixel coordinates of the original RAW image to be registered and the first array, and is not repeated here.
Then, the motion region of the original RAW image corresponding to a registered grayscale image is determined from the first grayscale image and the registered grayscale image. Specifically, referring to FIGS. 11 and 12, in some embodiments, detecting the motion region of a registered original RAW image includes:
0261: obtaining, according to a preset mapping relationship, the mapped value of the pixel value of each pixel of the first grayscale image and the mapped value of the pixel value of each pixel of the registered grayscale image;
0262: calculating the mapping difference between the mapped value of each pixel of the first grayscale image and the mapped value of the corresponding pixel of the registered grayscale image; and
0263: if the mapping difference is greater than a preset threshold, determining that the corresponding pixel of the registered original RAW image lies inside the motion region.
Referring to FIGS. 2 and 12, in some embodiments steps 0261, 0262, and 0263 may each be implemented by the one or more processors 20. That is, the one or more processors 20 are further configured to: obtain, according to the preset mapping relationship, the mapped values of the pixel values of the pixels of the first grayscale image and of the registered grayscale image; calculate the mapping difference between the mapped value of each pixel of the first grayscale image and that of the corresponding pixel of the registered grayscale image; and, if the mapping difference is greater than the preset threshold, determine that the corresponding pixel of the registered original RAW image lies inside the motion region.
After the first grayscale image and the registered grayscale image are obtained, the mapped values of their pixel values are obtained according to the preset mapping relationship. Specifically, for each pixel of the registered grayscale image, its pixel value is obtained and the corresponding mapped value is looked up in the preset mapping; the same is done for each pixel of the first grayscale image. It should be noted that in some embodiments the preset mapping relationship may be a de-noise lookup table, which records acquired pixel values and the corresponding pixel values with the influence of noise removed; mapped values obtained this way reduce the influence of noise in subsequent processing and improve the accuracy of motion-region detection.
After the mapped values of all pixels of the first grayscale image and of the registered grayscale image are obtained, the mapping difference between the mapped value of each pixel of the first grayscale image and that of the corresponding pixel of the registered grayscale image is calculated. If the mapping difference is greater than the preset threshold, the region where that pixel lies in the registered grayscale image is a motion region, i.e., in the registered original RAW image corresponding to that grayscale image, all pixels in the pixel grid corresponding to that grayscale pixel lie in the motion region. For example, in FIG. 13, the right side is a registered grayscale image and the left side is the corresponding registered original RAW image. One pixel grid U1 of the registered original RAW image contains the pixel P11 in the first row and first column, the pixel P12 in the first row and second column, the pixel P21 in the second row and first column, and the pixel P22 in the second row and second column, and the pixel p11 in the first row and first column of the registered grayscale image corresponds to grid U1. If p11 lies in the motion region, then all pixels of grid U1 of the registered original RAW image lie in the motion region, i.e., P11, P12, P21, and P22 all lie in the motion region.
It should be noted that in some embodiments the processor 20 also applies morphological operations such as erosion and dilation to the motion region determined in the registered original RAW image, making the detected motion region more accurate. Of course, in some embodiments other methods of detecting the motion region of the registered original RAW image may also be used; for example, directly computing the difference between the pixel value of a pixel of the registered original RAW image and that of the corresponding pixel of the first registration image, the pixel lying in the motion region if the difference exceeds a preset value. This is not limited here.
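The lookup-table comparison in steps 0261 to 0263 can be sketched as follows; the identity table below is a placeholder for a real de-noise lookup table, and the threshold is an arbitrary illustrative value:

```python
import numpy as np

def motion_mask(ref_gray, reg_gray, lut, threshold):
    """Mark motion pixels by comparing lookup-table-mapped gray values.

    `lut` plays the role of the preset mapping (e.g. a de-noise lookup
    table indexed by pixel value); a pixel whose mapped difference from
    the reference exceeds `threshold` is treated as motion.
    """
    ref_m = lut[np.asarray(ref_gray, dtype=np.intp)]
    reg_m = lut[np.asarray(reg_gray, dtype=np.intp)]
    return np.abs(ref_m.astype(np.int64) - reg_m.astype(np.int64)) > threshold

lut = np.arange(256)  # identity mapping, standing in for a de-noise table
ref = np.array([[100, 100]])
reg = np.array([[102, 180]])
mask = motion_mask(ref, reg, lut, threshold=20)  # [[False, True]]
```

A real de-noise table would flatten small, noise-sized differences before the comparison, which is exactly why the text says the mapped values make the motion decision more robust.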
Referring to FIGS. 8 and 14, in some embodiments, fusing the registered original RAW images having the same exposure value to obtain multiple frames of intermediate RAW images further includes:
0232: selecting any one frame of the multiple registered original RAW images with the same exposure value as the first reference image, the other frames serving as first non-reference images.
Referring to FIGS. 2 and 14, in some embodiments step 0232 may be implemented by the one or more processors 20. That is, the one or more processors 20 are further configured to select any one frame of the multiple registered original RAW images with the same exposure value as the first reference image, the other frames serving as first non-reference images.
The processor 20 selects any one of the registered original RAW images with the same exposure value as the first reference image, the other frames serving as first non-reference images. In some embodiments, the registered frames with the same exposure value are sorted by acquisition time and the first frame is selected as the first reference image, i.e., the earliest-acquired of the registered frames with the same exposure value. Since the moment the user presses the shutter represents the image the user most wants, it can be understood that the earlier a frame is acquired, the closer it is to that moment and thus to the image the user expects; using the earliest-acquired registered original RAW image as the fusion reference therefore makes the final image better meet the user's needs. Of course, in some embodiments the sharpest of the registered frames with the same exposure value may be selected as the first reference image, which gives the final image higher definition.
Referring to FIGS. 8 and 14, in some embodiments, applying the first fusion processing to pixels in the motion region includes:
02311: if the pixels at the same position in all first non-reference images all lie in the motion region, taking the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image.
Applying the second fusion processing to pixels outside the motion region includes:
02312: if at least one of the pixels at the same position in all first non-reference images lies outside the motion region, taking the mean of the pixel value of the pixel at the corresponding position of the first reference image and the pixel values of the pixels at the corresponding position of the first non-reference images that lie outside the motion region as the pixel value of the corresponding pixel of the fused intermediate RAW image.
Referring to FIGS. 2 and 14, in some embodiments steps 02311 and 02312 may each be implemented by the one or more processors 20. That is, the one or more processors 20 are further configured to: if the pixels at the same position in all first non-reference images all lie in the motion region, take the pixel value of the pixel at the corresponding position of the first reference image as the pixel value of the corresponding pixel of the fused intermediate RAW image; and if at least one of those pixels lies outside the motion region, take the mean of the pixel value of the pixel at the corresponding position of the first reference image and the pixel values of the pixels at the corresponding position of the first non-reference images that lie outside the motion region as the pixel value of the corresponding pixel of the fused intermediate RAW image.
After the first reference image and the first non-reference images are determined among the registered frames with the same exposure value, if the pixels at the same position in all first non-reference images lie in the motion region, the pixel value of the pixel at the corresponding position of the first reference image is used as the pixel value of the corresponding pixel of the fused intermediate RAW image. For example, as shown in FIG. 15, the first, second, and third registered original RAW images all have the same exposure value; the first registered original RAW image is the first reference image, and the second and third are first non-reference images. Suppose pixel a2 in the third row and third column of the second registered image lies in the motion region and pixel a3 in the third row and third column of the third registered image also lies in the motion region, i.e., the pixels in the third row and third column of all first non-reference images lie in the motion region; then the pixel value of pixel a1 in the third row and third column of the first reference image is used as the pixel value of pixel A of the fused intermediate RAW image, and pixel A is placed in the third row and third column of the fused intermediate RAW image.
If at least one of the pixels at the same position in all first non-reference images lies outside the motion region, the mean of the pixel value of the reference pixel and the pixel values of the non-reference pixels outside the motion region is used. For example, in FIG. 15, suppose pixel b2 in the first row and first column of the second registered image lies outside the motion region and pixel b3 in the first row and first column of the third registered image also lies outside it, i.e., the pixels in the first row and first column of all first non-reference images lie outside the motion region; then the mean of the pixel values of b1 (in the first row and first column of the first reference image), b2, and b3 is used as the pixel value of pixel B of the fused intermediate RAW image, and pixel B is placed in the first row and first column. As another example, suppose pixel c2 in the first row and second column of the second registered image lies outside the motion region while pixel c3 in the first row and second column of the third registered image lies inside it, i.e., at least one of the non-reference pixels at that position lies outside the motion region; then the mean of the pixel values of c1 (in the first row and second column of the first reference image) and c2 is used as the pixel value of pixel C of the fused intermediate RAW image, and pixel C is placed in the first row and second column.
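The two fusion rules above can be sketched together; this is an illustrative implementation of the described logic, with one non-reference frame and a hand-made motion mask:

```python
import numpy as np

def fuse_same_exposure(base, others, motion_masks):
    """Fuse same-exposure registered frames with motion handling.

    Per the text: if a pixel lies inside the motion region in ALL
    non-reference frames, keep the first reference image's value (first
    fusion processing); otherwise average the reference value with the
    non-reference values that lie outside the motion region (second
    fusion processing).
    """
    base = np.asarray(base, dtype=np.float64)
    total = base.copy()
    count = np.ones_like(base)
    for frame, mask in zip(others, motion_masks):
        ok = ~np.asarray(mask)  # pixels outside this frame's motion region
        total[ok] += np.asarray(frame, dtype=np.float64)[ok]
        count[ok] += 1
    return total / count

base = np.array([[10.0, 10.0]])
other = np.array([[30.0, 30.0]])
mask = np.array([[False, True]])  # second pixel is in the motion region
fused = fuse_same_exposure(base, [other], [mask])  # [[20.0, 10.0]]
```

The first pixel averages both frames; the second, being motion in every non-reference frame, falls back to the reference value, which is what suppresses moving-object artifacts in the intermediate RAW image.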
请参阅图1及图16,在一些实施例中,对多帧原始RAW图像进行高动态范围图像处理,以获取目标RAW图像,还包括:
028:对多帧中间RAW图像进行去鬼影处理,以获得去鬼影后的中间RAW图像;
0241:获取每帧去鬼影后的中间RAW图像中所有像素对应的权重;
0251:根据权重融合多帧去鬼影后的中间RAW图像以获取目标RAW图像。
请参阅图2及图16，在一些实施例中，步骤028、步骤0241及步骤0251均可以由一个或多个处理器20实现。也即是说，一个或多个处理器20还用于对多帧中间RAW图像进行去鬼影处理，以获得去鬼影后的中间RAW图像；获取每帧去鬼影后的中间RAW图像中所有像素对应的权重；及根据权重融合多帧去鬼影后的中间RAW图像以获取目标RAW图像。
在获得多帧不同曝光值的中间RAW图像后,处理器20从多帧中间RAW图像中,选取由第一原始RAW图像融合后获得的中间RAW图像作为第二基准图像,即第二基准图像的曝光值为标定曝光值。
在一些实施例中，选取一帧中间RAW图像，并检测该中间RAW图像的运动区域。针对曝光值相同的每一帧中间RAW图像，对于位于运动区域内的像素点，根据在第二基准图像上与该像素点对应的像素点的像素值、第二基准图像的亮度及该中间RAW图像的亮度，计算去鬼影后的中间RAW图像与该像素点对应的像素点的像素值。例如，如图17所示，第一中间RAW图像与第二中间RAW图像的曝光值不相同。其中，第一中间RAW图像为第二基准图像。假设位于第二中间RAW图像第三行第三列的像素点d2位于运动区域内，则将位于第二基准图像第三行第三列的像素点d1的像素值，与第二中间RAW图像的平均亮度与第二基准图像的平均亮度的比值的乘积，作为去鬼影后第二中间RAW图像的像素点d’的像素值，并将该像素点d’设置在去鬼影后第二中间RAW图像的第三行第三列。对于位于运动区域外的像素点，则直接将该中间RAW图像的像素点的像素值，作为去鬼影后的中间RAW图像对应位置的像素点的像素值。例如，如图17所示，位于第二中间RAW图像第一行第一列的像素点e2位于运动区域外，则将位于第二中间RAW图像第一行第一列的像素点e2的像素值作为去鬼影后第二中间RAW图像的像素点e’的像素值，并将该像素点e’设置在去鬼影后第二中间RAW图像的第一行第一列。
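上述去鬼影处理（运动区域内按亮度比缩放基准像素，运动区域外保留原像素）可以草绘如下，其中以图像均值近似"平均亮度"，为示例假设：

```python
import numpy as np

def deghost(inter, base, motion_mask):
    # inter: 待去鬼影的中间RAW图像; base: 第二基准图像
    # motion_mask: True表示该像素位于运动区域内
    inter = inter.astype(float)
    base = base.astype(float)
    # 亮度比 = 该中间RAW图像的平均亮度 / 第二基准图像的平均亮度
    ratio = inter.mean() / base.mean()
    # 运动区域内: 基准像素值 * 亮度比; 运动区域外: 保留原像素值
    return np.where(motion_mask, base * ratio, inter)
```

与图17的示例对应：运动区域内的d2被替换为d1乘以亮度比，运动区域外的e2原样保留。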
由于对中间RAW图像进行去鬼影处理后再进行融合,相较于直接对多帧中间RAW图像进行融合,能够使最后获得的目标RAW图像具有更高的清晰度及更好的图像品质。
在获得去鬼影后的中间RAW图像后，获取每帧去鬼影后的中间RAW图像中所有像素对应的权重，并根据权重融合多帧去鬼影后的中间RAW图像以获取目标RAW图像。其中，获取每帧去鬼影后的中间RAW图像中所有像素对应的权重，并根据权重融合多帧去鬼影后的中间RAW图像以获取目标RAW图像的具体实施方式，与获取每帧中间RAW图像中所有像素对应的权重，并根据权重融合多帧中间RAW图像以获取目标RAW图像的具体实施方式相同。以下以获取每帧中间RAW图像中所有像素对应的权重，并根据权重融合多帧中间RAW图像以获取目标RAW图像为例进行说明。
请参阅图8及图18,在一些实施例中,获取每帧中间RAW图像中所有像素对应的权重,包括:
0242：选取第一原始RAW图像融合后获得的中间RAW图像作为第二基准图像，其余中间RAW图像作为第二非基准图像；
0243:对第二基准图像进行第二灰度处理,以获取第二灰度图像;
0244:根据第二灰度图像的平均亮度和方差、及中间RAW图像中待计算像素点的像素值,获取待计算的像素点对应的权重。
请参阅图2及图18，在一些实施例中，步骤0242、步骤0243及步骤0244均可以由一个或多个处理器20实现。也即是说，一个或多个处理器20用于选取第一原始RAW图像融合后获得的中间RAW图像作为第二基准图像，其余中间RAW图像作为第二非基准图像；对第二基准图像进行第二灰度处理，以获取第二灰度图像；及根据第二灰度图像的平均亮度和方差、及中间RAW图像中待计算像素点的像素值，获取待计算的像素点对应的权重。
处理器20从多帧中间RAW图像中，选取由第一原始RAW图像融合后获得的中间RAW图像作为第二基准图像，即第二基准图像的曝光值为标定曝光值，将其余中间RAW图像作为第二非基准图像。如图19所示，处理器20对第二基准图像进行第二灰度处理，以获取第二灰度图像。示例地，处理器20对第二基准图像进行插值处理以获得第二灰度图像，并且第二灰度图像的长度及宽度与第二基准图像的长度及宽度相同。
在获得第二灰度图像后,获取第二灰度图像的平均亮度和方差。选取多帧中间RAW图像(包括第二基准图像及第二非基准图像)中的任意一帧,计算选取的中间RAW图像中所有像素点对应的权重。示例地,在一些实施例中,可以根据第二灰度图像的平均亮度和方差、及中间RAW图像中待计算像素点的像素值,获取待计算的像素点对应的权重。具体地,每一帧中间RAW图像中待计算像素点对应的权重可以根据计算公式
Figure PCTCN2021137887-appb-000001
计算获得,其中weight为待计算像素点对应的权重、mean为第二灰度图像的平均亮度及sigma为第二灰度图像的亮度方差。在一些实施例中,可以根据用户实际需求对像素点对应的权重进行调整,示例地,可以根据计算公式
Figure PCTCN2021137887-appb-000002
计算获得，其中M和N为用户调节的增益值。在获得选取的中间RAW图像中所有像素点对应的权重后，获取下一帧中间RAW图像中所有像素点对应的权重，直至获取到所有中间RAW图像中所有像素点对应的权重。
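原文的权重计算公式以图片形式给出，此处按HDR融合中常见的高斯型权重作一个假设性的草图：像素值越接近第二灰度图像的平均亮度mean，权重越大；M缩放权重幅度、N缩放方差。该函数形式为假设，并非专利原文公式：

```python
import math

def pixel_weight(x, mean, sigma, M=1.0, N=1.0):
    # x: 待计算像素点的像素值; mean/sigma: 第二灰度图像的平均亮度/亮度方差
    # 高斯型权重(假设形式): 像素值越接近mean, 权重越大
    # M、N为用户调节的增益值(示例假设)
    return M * math.exp(-((x - mean) ** 2) / (2.0 * (N * sigma) ** 2))
```

例如mean=128、sigma=20时，像素值恰为128的点取得最大权重M，偏离越远权重越小。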
请参阅图8及图20,在一些实施例中,获取每帧中间RAW图像中所有像素对应的权重,包括:
0245:对所有中间RAW图像进行第二灰度处理,以获取对应的第三灰度图像;
0246：根据第三灰度图像的平均亮度和方差、及对应的中间RAW图像中待计算像素点的像素值，获取待计算的像素点对应的权重。
请参阅图2及图20，在一些实施例中，步骤0245及步骤0246均可以由一个或多个处理器20实现。也即是说，一个或多个处理器20还用于：对所有中间RAW图像进行第二灰度处理，以获取对应的第三灰度图像；及根据第三灰度图像的平均亮度和方差、及对应的中间RAW图像中待计算像素点的像素值，获取待计算的像素点对应的权重。
处理器20对所有中间RAW图像进行第二灰度处理以获得对应的第三灰度图像,其中对所有中间RAW图像进行第二灰度处理的具体实施方式,与对第二基准图像进行第二灰度处理的具体实施方式相同,在此不做赘述。
在获得多帧第三灰度图像后，选取其中一帧第三灰度图像，并获取该第三灰度图像的平均亮度和方差。根据该第三灰度图像的平均亮度和方差，及与该第三灰度图像对应的中间RAW图像中待计算像素点的像素值，计算该像素点对应的权重。示例地，每一帧中间RAW图像中待计算像素点对应的权重可以根据计算公式
Figure PCTCN2021137887-appb-000003
计算获得，其中weight为待计算像素点对应的权重、mean为与待计算像素点所在的中间RAW图像对应的第三灰度图像的平均亮度，及sigma为与待计算像素点所在的中间RAW图像对应的第三灰度图像的亮度方差。在获得该中间RAW图像中所有像素点对应的权重后，获取下一帧中间RAW图像中所有像素点对应的权重，直至获取到所有中间RAW图像中所有像素点对应的权重。
在获得所有中间RAW图像中所有像素点对应的权重后,根据权重融合多帧中间RAW图像以获取目标RAW图像。具体地,请参阅图1及图21,在一些实施例中,根据权重融合多帧中间RAW图像以获取目标RAW图像,包括:
02512:将所有中间RAW图像对应位置的像素点的像素值与对应权重乘积之和作为融合后的目标RAW图像对应像素点的像素值。
请参阅图2及图21，在一些实施例中，步骤02512可以由一个或多个处理器20实现。也即是说，一个或多个处理器20还用于将所有中间RAW图像对应位置的像素点的像素值与对应权重乘积之和作为融合后的目标RAW图像对应像素点的像素值。
在获得所有中间RAW图像中所有像素点对应的权重后，处理器20将所有中间RAW图像对应位置的像素点的像素值与对应权重乘积之和，作为融合后的目标RAW图像对应像素点的像素值。例如，假设位于第一中间RAW图像第一行第一列的像素点的对应权重为第一权重、位于第二中间RAW图像第一行第一列的像素点的对应权重为第二权重，则将位于第一中间RAW图像第一行第一列的像素点的像素值乘第一权重，与位于第二中间RAW图像第一行第一列的像素点的像素值乘第二权重的和，作为融合后的目标RAW图像位于第一行第一列的像素点的像素值。需要说明的是，在一些实施例中，还可以先对所有中间RAW图像对应位置的像素点的对应权重作归一化处理，即多帧中间RAW图像对应位置的像素点的对应权重相加等于1，再将所有中间RAW图像对应位置的像素点的像素值与对应的归一化后的权重乘积之和，作为融合后的目标RAW图像对应像素点的像素值。
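上述按权重加权求和（含权重归一化）的融合步骤可以草绘如下（使用NumPy，变量名为示例假设）：

```python
import numpy as np

def fuse_by_weight(images, weights):
    # images: 各帧中间RAW图像; weights: 对应的逐像素权重(同尺寸)
    w = np.stack([wi.astype(float) for wi in weights])
    # 归一化: 各帧对应位置的权重相加等于1
    w = w / w.sum(axis=0)
    imgs = np.stack([im.astype(float) for im in images])
    # 对应位置像素值与归一化权重乘积之和, 作为目标RAW图像的像素值
    return (imgs * w).sum(axis=0)
```

例如两帧图像对应位置权重为1与3时，归一化后分别为0.25与0.75，目标像素值为两者的加权和。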
需要说明的是,在一些实施例中,多帧原始RAW图像经过高动态融合处理获得目标RAW图像后,目标RAW图像具有更高的动态范围,并且目标RAW图像的位宽高于原始RAW图像的位宽。例如,多帧位宽为12bit的原始RAW图像经过高动态融合,获得位宽为16bit的目标RAW图像。当然,在一些实施例中,目标RAW图像的位宽也可以等于原始RAW图像的位宽。例如,多帧位宽为12bit的原始RAW图像经过高动态融合,获得位宽为12bit的目标RAW图像。此外,在一些实施例中,获取的多帧原始RAW图像均是以相同曝光值曝光的,处理器20对多帧相同曝光值曝光的原始RAW图像进行融合,直接获得目标RAW图像,在此不做限制。
处理器20根据拍摄多帧原始RAW图像时的元数据参数信息获取DNG图像中的标签参数。元数据参数信息包括但不限于：拍摄参数、图像高度、黑电平、白电平、色彩转换矩阵、白平衡参数、镜头阴影校正参数中的至少一种。
需要说明的是，DNG格式是一种开放的RAW文件格式，主要为了统一不同厂商的RAW格式。DNG规范中定义了数据的组织方式、颜色空间转换等，使用的标签参数（Tag）是基于TIFF/EP规范的拓展。DNG图像必需的部分Tag不是直接从拍摄的元数据信息中取的，而是通过元数据转换计算得到。例如，标签参数中的颜色矩阵（ColorMatrix）和前部矩阵（ForwardMatrix）需要通过元数据参数信息进行计算获得，下面做进一步说明。
在一些实施例中，元数据参数信息包括原始RAW图像的拍摄参数、第一光源下的第一色彩转换矩阵、及第二光源下的第二色彩转换矩阵，标签参数包括第一颜色矩阵、及第二颜色矩阵。请参阅图1及图22，根据拍摄多帧原始RAW图像时的元数据参数信息获取DNG图像中的标签参数，包括：
031:根据第一色彩转换矩阵、第一矩阵及第二矩阵获取第一颜色矩阵;及
032:根据第二色彩转换矩阵及第一矩阵获取第二颜色矩阵。
请参阅图2及图22,在一些实施例中,步骤031及步骤032均可以由一个或多个处理器20实现。也即是说,一个或多个处理器20还用于根据第一色彩转换矩阵、第一矩阵及第二矩阵获取第一颜色矩阵;及根据第二色彩转换矩阵及第一矩阵获取第二颜色矩阵。
具体地，第一颜色矩阵可以根据计算公式ColorMatrix1=Inv(CCM1*sRGB2XYZ_D65*D65toA)计算获得，其中ColorMatrix1表示第一颜色矩阵，CCM1表示在第一光源下的第一色彩转换矩阵，sRGB2XYZ_D65表示第一矩阵，D65toA表示第二矩阵。也即是说，第一颜色矩阵可以通过第一色彩转换矩阵、第一矩阵及第二矩阵乘积的逆矩阵计算获得。第二颜色矩阵可以根据计算公式ColorMatrix2=Inv(CCM2*sRGB2XYZ_D65)计算获得，其中ColorMatrix2表示第二颜色矩阵，CCM2表示在第二光源下的第二色彩转换矩阵，sRGB2XYZ_D65表示第一矩阵。也即是说，第二颜色矩阵可以通过第二色彩转换矩阵及第一矩阵乘积的逆矩阵计算获得。需要说明的是，第一矩阵为以第二光源为参考光源，从第一空间到第二空间的转换矩阵，第一空间与第二空间不同；第二矩阵为从第二光源的参考白到第一光源的参考白的转换矩阵。在一些实施例中，第一光源可以为低色温光源（例如A光），第二光源可以为高色温光源（例如D65光），第一空间可以为sRGB空间，第二空间可以为XYZ空间。
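上述第一颜色矩阵与第二颜色矩阵的计算，可以用如下示意性的Python草图表示（使用NumPy，矩阵变量名为示例假设，各矩阵均为3×3）：

```python
import numpy as np

def color_matrices(ccm1, ccm2, srgb2xyz_d65, d65_to_a):
    # ColorMatrix1 = Inv(CCM1 * sRGB2XYZ_D65 * D65toA)
    color_matrix1 = np.linalg.inv(ccm1 @ srgb2xyz_d65 @ d65_to_a)
    # ColorMatrix2 = Inv(CCM2 * sRGB2XYZ_D65)
    color_matrix2 = np.linalg.inv(ccm2 @ srgb2xyz_d65)
    return color_matrix1, color_matrix2
```

即先按顺序作矩阵乘积，再取逆矩阵作为对应的颜色矩阵标签。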
请参阅图1及图23，在一些实施例中，标签参数还包括第一前部矩阵及第二前部矩阵，根据拍摄多帧原始RAW图像时的元数据参数信息获取DNG图像中的标签参数，还包括：
033:根据第一色彩转换矩阵、及第三矩阵,计算获得第一前部矩阵;
034:根据第二色彩转换矩阵、及第三矩阵,计算获得第二前部矩阵。
请参阅图2及图23,在一些实施例中,步骤033及步骤034均可以由一个或多个处理器20实现。也即是说,一个或多个处理器20还用于根据第一色彩转换矩阵、及第三矩阵,计算获得第一前部矩阵;及根据第二色彩转换矩阵、及第三矩阵,计算获得第二前部矩阵。
具体地，第一前部矩阵可以根据计算公式ForwardMatrix1=CCM1*sRGB2XYZ_D50计算获得，其中ForwardMatrix1表示第一前部矩阵，CCM1表示在第一光源下的第一色彩转换矩阵，sRGB2XYZ_D50表示第三矩阵。也即是说，第一前部矩阵等于第一色彩转换矩阵与第三矩阵的乘积。第二前部矩阵可以根据计算公式ForwardMatrix2=CCM2*sRGB2XYZ_D50计算获得，其中ForwardMatrix2表示第二前部矩阵，CCM2表示在第二光源下的第二色彩转换矩阵，sRGB2XYZ_D50表示第三矩阵。也即是说，第二前部矩阵等于第二色彩转换矩阵与第三矩阵的乘积。需要说明的是，第三矩阵为以第三光源为参考光源，从第一空间到第二空间的转换矩阵。在一些实施例中，第一光源可以为低色温光源（例如A光），第二光源可以为高色温光源（例如D65光），第三光源可以为D50光，第一空间可以为sRGB空间，第二空间可以为XYZ空间。
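第一前部矩阵与第二前部矩阵的计算同样可以草绘如下（假设各矩阵均为3×3的NumPy数组，变量名为示例假设）：

```python
import numpy as np

def forward_matrices(ccm1, ccm2, srgb2xyz_d50):
    # ForwardMatrix1 = CCM1 * sRGB2XYZ_D50
    # ForwardMatrix2 = CCM2 * sRGB2XYZ_D50
    return ccm1 @ srgb2xyz_d50, ccm2 @ srgb2xyz_d50
```

与颜色矩阵不同，前部矩阵直接取乘积而不取逆。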
请参阅图24,在一些实施例中,根据目标RAW图像、标签参数及元数据参数信息生成DNG文件,包括:
041:将标签参数、元数据参数信息、目标RAW图像的数据按照DNG编码规范写入空白文件以生成DNG文件。
请参阅图2及图24,在一些实施例中,步骤041可以由一个或多个处理器20实现。也即是说,一个或多个处理器20还用于将标签参数、元数据参数信息、目标RAW图像的数据按照DNG编码规范写入空白文件以生成DNG文件。
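步骤041的标签组织方式可以用如下示意性草图表示。该草图仅示意将标签参数、元数据与目标RAW图像数据组织在一起，并非完整的DNG编码器；meta中的字段名为示例假设，标签名取自DNG规范中的常见Tag：

```python
def build_dng_tags(color_matrix1, color_matrix2, forward_matrix1,
                   forward_matrix2, meta, raw_bytes):
    # 按DNG编码规范的思路组织标签参数、元数据参数信息与图像数据(示意)
    tags = {
        "ColorMatrix1": color_matrix1,
        "ColorMatrix2": color_matrix2,
        "ForwardMatrix1": forward_matrix1,
        "ForwardMatrix2": forward_matrix2,
        "BlackLevel": meta.get("black_level"),    # 黑电平
        "WhiteLevel": meta.get("white_level"),    # 白电平
        "AsShotNeutral": meta.get("white_balance"),  # 白平衡参数
    }
    return {"tags": tags, "image_data": raw_bytes}
```

实际写入文件时，这些标签与图像数据需按DNG（TIFF/EP）的IFD结构编码到空白文件中。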
需要说明的是，由于在合成多帧原始RAW图像时，均是以标定曝光值曝光的原始RAW图像为基准图像进行融合的，因此创建DNG文件时，元数据参数信息中的拍摄参数采用以标定曝光值曝光的原始RAW图像的拍摄参数。
请参阅图25,在一些实施例中,图像处理方法还包括:
07:解析DNG文件以生成DNG图像。
请参阅图2及图25,在一些实施例中,步骤07可以由一个或多个处理器20实现。也即是说,一个或多个处理器20还用于解析DNG文件以生成DNG图像。
获得DNG文件后,处理器20可以根据DNG文件中的标签参数、元数据参数信息及目标RAW图像的数据进行解析,以获得DNG格式的图像。在一些实施例中,处理器20将获取到的DNG文件输出到应用程序(例如相册),应用程序打开DNG文件并对DNG文件进行解析,以生成DNG图像,并且显示DNG图像。
请参阅图26,在一些实施例中,生成DNG图像后,可以将DNG图像导入后期处理软件中,对DNG图像进行后期调整以获得目标图像。后期调整包括但不限于亮度调整、色度调整及尺寸调整中的至少一种。
本申请中将多帧原始RAW图像进行高动态融合获得的目标RAW图像，相较于单帧原始RAW图像具有更高的动态范围及更高的清晰度；并且将目标RAW图像转换为DNG文件，方便用户导出在后期软件中进行处理。
请参阅图27,本申请还提供一种电子设备1000。本申请实施方式的电子设备1000包括镜头300、壳体200及上述任意一项实施方式的图像处理装置100。镜头300、图像处理装置100与壳体200结合。镜头300与图像处理装置100的图像传感器10配合成像。
电子设备1000可以是手机、平板电脑、笔记本电脑、智能穿戴设备(例如智能手表、智能手环、智能眼镜、智能头盔)、无人机、头显设备等,在此不作限制。
本申请中的电子设备1000，通过图像处理装置100对多帧RAW图像进行高动态融合获得目标RAW图像后，再将目标RAW图像转换为DNG文件。如此，一方面由多帧RAW图像合成的目标RAW图像相较于单帧RAW图像，图像信息量更大、动态范围更广并且清晰度更高；另一方面，由于将目标RAW图像转换为具有统一编码及解析格式的DNG文件，有利于用户将该DNG文件导出至后期软件中进行处理。
请参阅图28，本申请还提供一种包含计算机程序的非易失性计算机可读存储介质400。该计算机程序被处理器60执行时，使得处理器60执行上述任意一个实施方式的图像处理方法。
例如，请参阅图1及图28，计算机程序被处理器60执行时，使得处理器60执行以下步骤：
01:获取多帧原始RAW图像;
02:对多帧原始RAW图像进行高动态范围图像处理,以获取目标RAW图像;
03:根据拍摄多帧原始RAW图像时的元数据参数信息获取DNG图像中的标签参数;及
04:根据目标RAW图像、标签参数及元数据参数信息生成DNG文件。
需要说明的是,处理器60可以与设置在图像处理装置100内的处理器20为同一个处理器,处理器60也可以设置在电子设备1000内,即处理器60也可以与设置在图像处理装置100内的处理器20不为同一个处理器,在此不作限制。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (24)

  1. 一种图像处理方法,其特征在于,包括:
    获取多帧原始RAW图像;
    对多帧所述原始RAW图像进行高动态范围图像处理,以获取目标RAW图像;
    根据拍摄多帧所述原始RAW图像时的元数据参数信息获取DNG图像中的标签参数;及
    根据所述目标RAW图像、所述标签参数及所述元数据参数信息生成DNG文件。
  2. 根据权利要求1所述的图像处理方法,其特征在于,多帧所述原始RAW图像以至少两种不同的曝光值曝光。
  3. 根据权利要求1所述的图像处理方法,其特征在于,多帧所述原始RAW图像包括以标定曝光值曝光的第一原始RAW图像及以不同于所述标定曝光值曝光的第二原始RAW图像。
  4. 根据权利要求3所述的图像处理方法,其特征在于,所述图像处理方法还包括:
    对环境进行测光,及根据测得的环境亮度获取所述标定曝光值;或
    根据用户确定的曝光参数获取所述标定曝光值,其中所述曝光参数包括曝光值、感光度及曝光时长中的至少一种。
  5. 根据权利要求1所述的图像处理方法,其特征在于,所述对多帧所述原始RAW图像进行高动态范围图像处理,以获取目标RAW图像,包括:
    将多帧所述原始RAW图像进行图像配准;
    将曝光值相同的配准后的所述原始RAW图像进行融合,以获得多帧中间RAW图像;
    获取每帧所述中间RAW图像中所有像素对应的权重;及
    根据所述权重融合多帧所述中间RAW图像以获取所述目标RAW图像。
  6. 根据权利要求5所述的图像处理方法,其特征在于,所述对多帧所述原始RAW图像进行高动态范围图像处理,以获取目标RAW图像,还包括:
    检测配准后的所述原始RAW图像的运动区域;
    所述将曝光值相同的配准后的所述原始RAW图像进行融合,以获得多帧中间RAW图像,包括:
    针对曝光值相同的每一帧配准后的所述原始RAW图像,对位于所述运动区域内的像素点采用第一融合处理,及对位于所述运动区域外的像素点采用第二融合处理,以获得多帧中间RAW图像,所述第一融合处理与所述第二融合处理不同。
  7. 根据权利要求6所述的图像处理方法,其特征在于,所述将曝光值相同的配准后的所述原始RAW图像进行融合,以获得多帧中间RAW图像,还包括:
    选取曝光值相同的多帧配准后的所述原始RAW图像中的任意一帧作为第一基准图像,其他帧作为第一非基准图像;
    所述对位于所述运动区域的像素点采用第一融合处理,包括:
    若所有所述第一非基准图像相同位置的像素点均位于所述运动区域,则所述第一基准图像对应位置的像素点的像素值作为融合后的中间RAW图像对应像素点的像素值;
    所述对位于所述运动区域外的像素点采用第二融合处理,包括:
    若所有所述第一非基准图像相同位置的像素点中至少一个像素点位于所述运动区域外,则将所述第一基准图像对应位置的像素点的像素值与所述第一非基准图像对应位置且处于所述运动区域外的像素点的像素值的均值作为融合后的中间RAW图像对应像素点的像素值。
  8. 根据权利要求5所述的图像处理方法,其特征在于,多帧所述原始RAW图像包括以标定曝光值曝光的第一原始RAW图像及以不同于所述标定曝光值曝光的第二原始RAW图像,所述获取每帧所述中间RAW图像中所有像素对应的权重,包括:
    选取所述第一原始RAW图像融合后获得的所述中间RAW图像作为第二基准图像；
    对所述第二基准图像进行第二灰度处理,以获取第二灰度图像;
    根据所述第二灰度图像的平均亮度和方差、及所述中间RAW图像中待计算像素点的像素值,获取待计算的所述像素点对应的权重。
  9. 根据权利要求1所述的图像处理方法，其特征在于，所述元数据参数信息包括所述原始RAW图像的拍摄参数、第一光源下的第一色彩转换矩阵、及第二光源下的第二色彩转换矩阵，所述标签参数包括第一颜色矩阵、及第二颜色矩阵，所述根据拍摄多帧所述原始RAW图像时的元数据参数信息获取DNG图像中的标签参数，包括：
    根据所述第一色彩转换矩阵、第一矩阵及第二矩阵获取所述第一颜色矩阵;
    根据所述第二色彩转换矩阵及第一矩阵获取所述第二颜色矩阵;其中:
    所述第一矩阵为以第二光源为参考光源，从第一空间到第二空间的转换矩阵，所述第一空间与所述第二空间不同；所述第二矩阵为从第二光源的参考白到第一光源的参考白的转换矩阵。
  10. 根据权利要求9所述的图像处理方法，其特征在于，所述标签参数还包括第一前部矩阵及第二前部矩阵，所述根据拍摄多帧所述原始RAW图像时的元数据参数信息获取DNG图像中的标签参数，还包括：
    根据所述第一色彩转换矩阵、及第三矩阵,计算获得所述第一前部矩阵;
    根据所述第二色彩转换矩阵、及第三矩阵,计算获得所述第二前部矩阵;其中:
    所述第三矩阵为以第三光源为参考光源,从所述第一空间到所述第二空间的转换矩阵。
  11. 根据权利要求1所述的图像处理方法,其特征在于,所述根据所述目标RAW图像、所述标签参数及所述元数据参数信息生成DNG文件,包括:
    将所述标签参数、所述元数据参数信息、所述目标RAW图像的数据按照DNG编码规范写入空白文件以生成所述DNG文件。
  12. 一种图像处理装置,其特征在于,所述图像处理装置包括图像传感器及一个或多个处理器;所述图像传感器中的像素阵列曝光以获取多帧原始RAW图像;
    一个或多个所述处理器用于:
    对多帧所述原始RAW图像进行高动态范围图像处理,以获取目标RAW图像;
    根据拍摄多帧所述原始RAW图像时的元数据参数信息获取DNG图像中的标签参数;及
    根据所述目标RAW图像、所述标签参数及所述元数据参数信息生成DNG文件。
  13. 根据权利要求12所述的图像处理装置,其特征在于,多帧所述原始RAW图像以至少两种不同的曝光值曝光。
  14. 根据权利要求12所述的图像处理装置,其特征在于,多帧所述原始RAW图像包括以标定曝光值曝光的第一原始RAW图像及以不同于所述标定曝光值曝光的第二原始RAW图像。
  15. 根据权利要求14所述的图像处理装置,其特征在于,所述一个或多个处理器还用于:
    对环境进行测光,及根据测得的环境亮度获取所述标定曝光值;或
    根据用户确定的曝光参数获取所述标定曝光值,其中所述曝光参数包括曝光值、感光度及曝光时长中的至少一种。
  16. 根据权利要求12所述的图像处理装置,其特征在于,所述一个或多个处理器还用于:
    将多帧所述原始RAW图像进行图像配准;
    将曝光值相同的配准后的所述原始RAW图像进行融合,以获得多帧中间RAW图像;
    获取每帧所述中间RAW图像中所有像素对应的权重;及
    根据所述权重融合多帧所述中间RAW图像以获取所述目标RAW图像。
  17. 根据权利要求16所述的图像处理装置，其特征在于，所述一个或多个处理器还用于：
    检测配准后的所述原始RAW图像的运动区域;及
    针对曝光值相同的每一帧配准后的所述原始RAW图像,对位于所述运动区域内的像素点采用第一融合处理,及对位于所述运动区域外的像素点采用第二融合处理,以获得多帧中间RAW图像,所述第一融合处理与所述第二融合处理不同。
  18. 根据权利要求17所述的图像处理装置,其特征在于,所述一个或多个处理器还用于:
    选取曝光值相同的多帧配准后的所述原始RAW图像中的任意一帧作为第一基准图像,其他帧作为第一非基准图像;
    若所有所述第一非基准图像相同位置的像素点均位于所述运动区域,则所述第一基准图像对应位置的像素点的像素值作为融合后的中间RAW图像对应像素点的像素值;
    若所有所述第一非基准图像相同位置的像素点中至少一个像素点位于所述运动区域外，则将所述第一基准图像对应位置的像素点的像素值与所述第一非基准图像对应位置且处于所述运动区域外的像素点的像素值的均值作为融合后的中间RAW图像对应像素点的像素值。
  19. 根据权利要求16所述的图像处理装置,其特征在于,多帧所述原始RAW图像包括以标定曝光值曝光的第一原始RAW图像及以不同于所述标定曝光值曝光的第二原始RAW图像,所述一个或多个处理器还用于:
    选取所述第一原始RAW图像融合后获得的所述中间RAW图像作为第二基准图像；
    对所述第二基准图像进行第二灰度处理,以获取第二灰度图像;
    根据所述第二灰度图像的平均亮度和方差、及所述中间RAW图像中待计算像素点的像素值,获取待计算的所述像素点对应的权重。
  20. 根据权利要求12所述的图像处理装置,其特征在于,所述元数据参数信息包括所述原始RAW图像的拍摄参数、第一光源下的第一色彩转换矩阵、及第二光源下的第二色彩转换矩阵,所述标签参数包括第一颜色矩阵、及第二颜色矩阵,所述一个或多个处理器还用于:
    根据所述第一色彩转换矩阵、第一矩阵及第二矩阵获取所述第一颜色矩阵;
    根据所述第二色彩转换矩阵及第一矩阵获取所述第二颜色矩阵;其中:
    所述第一矩阵为以第二光源为参考光源，从第一空间到第二空间的转换矩阵，所述第一空间与所述第二空间不同；所述第二矩阵为从第二光源的参考白到第一光源的参考白的转换矩阵。
  21. 根据权利要求20所述的图像处理装置,其特征在于,所述标签参数还包括第一前部矩阵及第二前部矩阵,所述一个或多个处理器还用于:
    根据所述第一色彩转换矩阵、及第三矩阵,计算获得所述第一前部矩阵;
    根据所述第二色彩转换矩阵、及第三矩阵,计算获得所述第二前部矩阵;其中:
    所述第三矩阵为以第三光源为参考光源,从所述第一空间到所述第二空间的转换矩阵。
  22. 根据权利要求12所述的图像处理装置,其特征在于,所述一个或多个处理器还用于:
    将所述标签参数、所述元数据参数信息、所述目标RAW图像的数据按照DNG编码规范写入空白文件以生成所述DNG文件。
  23. 一种电子设备,其特征在于,包括:
    镜头;及
    权利要求12至22任意一项所述的图像处理装置，所述镜头与所述图像处理装置的图像传感器配合成像。
  24. 一种包含计算机程序的非易失性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时,使得所述处理器执行权利要求1至11任意一项所述的图像处理方法。
PCT/CN2021/137887 2021-02-26 2021-12-14 图像处理方法、图像处理装置、电子设备及可读存储介质 WO2022179256A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110221004.2 2021-02-26
CN202110221004.2A CN114979500B (zh) 2021-02-26 2021-02-26 图像处理方法、图像处理装置、电子设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2022179256A1 true WO2022179256A1 (zh) 2022-09-01

Family

ID=82974260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137887 WO2022179256A1 (zh) 2021-02-26 2021-12-14 图像处理方法、图像处理装置、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN114979500B (zh)
WO (1) WO2022179256A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135293B (zh) * 2023-02-24 2024-05-24 荣耀终端有限公司 图像处理方法和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979235A (zh) * 2016-05-30 2016-09-28 努比亚技术有限公司 一种图像处理方法及终端
US20180183988A1 (en) * 2016-12-27 2018-06-28 Canon Kabushiki Kaisha Image processing device, image processing method, imaging device, and storage medium
CN109993722A (zh) * 2019-04-09 2019-07-09 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN110022469A (zh) * 2019-04-09 2019-07-16 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN110198417A (zh) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111418201A (zh) * 2018-03-27 2020-07-14 华为技术有限公司 一种拍摄方法及设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198419A (zh) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN110430370B (zh) * 2019-07-30 2021-01-15 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111726516A (zh) * 2019-10-23 2020-09-29 北京小米移动软件有限公司 图像处理方法及装置


Also Published As

Publication number Publication date
CN114979500A (zh) 2022-08-30
CN114979500B (zh) 2023-08-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927680

Country of ref document: EP

Kind code of ref document: A1