CN114979500A - Image processing method, image processing apparatus, electronic device, and readable storage medium


Info

Publication number
CN114979500A
Authority
CN
China
Prior art keywords
image
raw image
matrix
pixel
raw
Prior art date
Legal status
Granted
Application number
CN202110221004.2A
Other languages
Chinese (zh)
Other versions
CN114979500B (en)
Inventor
邹涵江 (Zou Hanjiang)
何慕威 (He Muwei)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110221004.2A
Priority to PCT/CN2021/137887 (published as WO2022179256A1)
Publication of CN114979500A
Application granted
Publication of CN114979500B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises the following steps: acquiring a plurality of frames of original RAW images; carrying out high dynamic range image processing on the plurality of frames of original RAW images to obtain a target RAW image; acquiring tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information. According to the method and the apparatus, after the target RAW image is obtained by high-dynamic fusion of the multiple RAW frames, the target RAW image is converted into a DNG file. In this way, a DNG-format image with a larger amount of image information, a wider dynamic range, and higher definition can be obtained, and the image can also be conveniently exported to post-processing software for the user to edit.

Description

Image processing method, image processing apparatus, electronic device, and readable storage medium
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
Because different image processing applications in a terminal parse or encode RAW-format files differently, a RAW file encoded by one image processing application may fail to be parsed by another image processing application. Moreover, a single-frame RAW image has high noise and a narrow dynamic range.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a non-volatile computer readable storage medium.
The embodiment of the application provides an image processing method. The image processing method comprises the following steps: acquiring a plurality of frames of original RAW images; carrying out high dynamic range image processing on the plurality of frames of original RAW images to obtain a target RAW image; acquiring tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
The embodiment of the application provides an image processing device. The image processing device includes an image sensor and one or more processors. A pixel array in the image sensor is exposed to acquire a plurality of frames of original RAW images. The one or more processors are configured to: carry out high dynamic range image processing on the plurality of frames of original RAW images to obtain a target RAW image; acquire tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
The embodiment of the application provides electronic equipment. The electronic equipment comprises a lens and an image processing device, wherein the lens cooperates with an image sensor of the image processing device for imaging. The image processing device includes the image sensor and one or more processors. A pixel array in the image sensor is exposed to acquire a plurality of frames of original RAW images. The one or more processors are configured to: carry out high dynamic range image processing on the plurality of frames of original RAW images to obtain a target RAW image; acquire tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
The present embodiments provide a non-transitory computer-readable storage medium containing a computer program. The computer program, when executed by a processor, causes the processor to perform the image processing method described above. The image processing method comprises the following steps: acquiring a plurality of frames of original RAW images; carrying out high dynamic range image processing on the plurality of frames of original RAW images to obtain a target RAW image; acquiring tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
According to the image processing method, the image processing device, the electronic equipment, and the computer-readable storage medium described above, after the target RAW image is obtained by high-dynamic fusion of the multiple frames of original RAW images, the target RAW image is converted into a DNG file. On one hand, compared with a single-frame RAW image, a target RAW image synthesized from multiple RAW frames has a larger amount of image information, a wider dynamic range, and higher definition; on the other hand, the target RAW image is converted into a DNG file with a unified encoding and parsing format, so the file can be exported to post-processing software for the user to edit.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic diagram of an image sensor in an image processing apparatus according to some embodiments of the present application;
FIGS. 4-9 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 10 is a schematic diagram of a first grayscale image obtained by performing a first grayscale process on a first reference image in some embodiments of the present application;
FIGS. 11-12 are schematic flow charts of image processing methods according to certain embodiments of the present application;
FIG. 13 is a schematic illustration of a motion region of a registered gray scale image and a corresponding motion region of a registered RAW image in some embodiments of the present application;
FIG. 14 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 15 is a schematic illustration of the acquisition of an intermediate RAW image according to certain embodiments of the present application;
FIG. 16 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 17 is a schematic diagram of acquiring a de-ghosted intermediate RAW image according to certain embodiments of the present application;
FIG. 18 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 19 is a diagram illustrating a second grayscale image obtained by performing a second grayscale process on a second reference image in some embodiments of the present application;
FIGS. 20-25 are schematic flow charts of image processing methods according to certain embodiments of the present application;
FIG. 26 is a schematic illustration of a DNG image and a target image acquired in some embodiments of the present application;
FIG. 27 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 28 is a schematic diagram of the interaction of a non-volatile computer readable storage medium and a processor of certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, an embodiment of the present application provides an image processing method. The image processing method comprises the following steps:
01: acquiring a plurality of frames of original RAW images;
02: carrying out high dynamic range image processing on a plurality of frames of original RAW images to obtain a target RAW image;
03: acquiring tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and
04: generating a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
Referring to fig. 1 and fig. 2, an image processing apparatus 100 is further provided in the present embodiment. The image processing apparatus 100 includes an image sensor 10 and one or more processors 20; step 01 is implemented by the image sensor 10, and step 02, step 03, and step 04 can be executed by the one or more processors 20. That is, the pixel array 11 (shown in fig. 3) in the image sensor 10 is exposed to acquire a plurality of frames of original RAW images, and the one or more processors 20 are configured to: carry out high dynamic range image processing on the plurality of frames of original RAW images to obtain a target RAW image; acquire tag parameters for a DNG image according to metadata parameter information recorded when the plurality of frames of original RAW images were captured; and generate a DNG file according to the target RAW image, the tag parameters, and the metadata parameter information.
The image processing method and the image processing apparatus 100 in the present application obtain a target RAW image by high-dynamic fusion of multiple frames of original RAW images, and then convert the target RAW image into a DNG file. On one hand, compared with a single-frame RAW image, a target RAW image synthesized from multiple RAW frames has a larger amount of image information, a wider dynamic range, and higher definition; on the other hand, the target RAW image is converted into a DNG file with a unified encoding and parsing format, so the file can be exported to post-processing software for the user to edit.
Specifically, referring to fig. 3, the image sensor 10 includes a pixel array 11, wherein the pixel array 11 is exposed to obtain a RAW image. It should be noted that, in some embodiments, the pixel array 11 includes a plurality of photosensitive pixels (not shown) arranged two-dimensionally in an array form (i.e., arranged in a two-dimensional matrix form), and each photosensitive pixel converts light into electric charges according to the intensity of light incident thereon.
In some embodiments, the multiple frames of original RAW images are exposed at no fewer than two different exposure values; that is, the pixel array 11 is exposed at at least two different exposure values to obtain the multiple frames of original RAW images, so that at least two of the frames are obtained at different exposure values.
Specifically, in some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at a nominal exposure value and a second original RAW image exposed at an exposure value different from the nominal exposure value. That is, the pixel array 11 acquires a first original RAW image exposed at the nominal exposure value and a second original RAW image exposed at an exposure value different from the nominal exposure value.
For example, the multiple frames of original RAW images include a first original RAW image exposed at the nominal exposure value and a second original RAW image exposed at an exposure value greater than the nominal exposure value. It should be noted that the number of first original RAW images may be greater than, less than, or equal to the number of second original RAW images, which is not limited herein. Further, in some embodiments, the multiple frames of second original RAW images may also be exposed at no fewer than two different exposure values; however, no matter how many exposure values are used to obtain the second original RAW images, all of those exposure values are greater than the nominal exposure value.
For another example, in some embodiments, the multiple frames of original RAW images may also include a first original RAW image exposed at the nominal exposure value and a second original RAW image exposed at an exposure value smaller than the nominal exposure value. Likewise, the number of first original RAW images may be greater than, less than, or equal to the number of second original RAW images, which is not limited herein. Further, in some embodiments, the multiple frames of second original RAW images may also be exposed at no fewer than two different exposure values; however, no matter how many exposure values are used, all of them are smaller than the nominal exposure value.
In some embodiments, the multiple frames of original RAW images include a first original RAW image exposed at the nominal exposure value, a second original RAW image exposed at an exposure value greater than the nominal exposure value, and a third original RAW image exposed at an exposure value smaller than the nominal exposure value. The numbers of first, second, and third original RAW images may be equal or may differ, which is not limited herein. Further, in some embodiments, the multiple frames of second original RAW images may be exposed at no fewer than two different exposure values, all of which are greater than the nominal exposure value; likewise, the multiple frames of third original RAW images may be exposed at no fewer than two different exposure values, all of which are smaller than the nominal exposure value. Since the image processing apparatus 100 performs high-dynamic fusion on the multiple frames of original RAW images after acquiring them, a target RAW image fused from original RAW images exposed at three different exposure values has a higher dynamic range and better image quality than a target RAW image fused from original RAW images exposed at two different exposure values.
It should be noted that, in some embodiments, a plurality of preset exposure strategies for acquiring the multiple frames of original RAW images are preset and stored in the image processing apparatus 100, and the user may select a preset exposure strategy according to actual needs. In this way, the finally obtained target RAW image can better meet the user's requirements. The preset exposure strategies include, but are not limited to, at least one of the following (sketched in code after this paragraph): (1) exposing at the calibrated exposure value to obtain multiple frames of first original RAW images, and exposing at an exposure value smaller than the calibrated exposure value to obtain one frame of second original RAW image; (2) exposing at the calibrated exposure value to obtain one frame of first original RAW image, and exposing at an exposure value smaller than the calibrated exposure value to obtain multiple frames of second original RAW images; (3) exposing at the calibrated exposure value to obtain a first original RAW image, at an exposure value greater than the calibrated exposure value to obtain a second original RAW image, and at an exposure value smaller than the calibrated exposure value to obtain a third original RAW image. Of course, in some embodiments, the user may also directly set the exposure strategy for acquiring the multiple frames of original RAW images, which is not limited herein.
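For illustration only, these strategies can be written as frame plans relative to the calibrated exposure value. A minimal Python sketch follows; the frame counts and the plus/minus 2 EV offsets are assumptions, not values given in this application:

    # Illustrative frame plans for the three preset exposure strategies above.
    # "EV0" denotes the calibrated exposure value; the counts and the +/-2 EV
    # offsets are assumed for illustration only.
    PRESET_EXPOSURE_STRATEGIES = {
        "strategy_1": [("EV0", 3), ("EV0-2", 1)],                # several nominal + one darker frame
        "strategy_2": [("EV0", 1), ("EV0-2", 3)],                # one nominal + several darker frames
        "strategy_3": [("EV0", 1), ("EV0+2", 1), ("EV0-2", 1)],  # bracket around the calibrated value
    }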
Referring to fig. 4, in some embodiments, the image processing method further includes:
051: performing light metering on the environment, and acquiring a calibrated exposure value according to the measured ambient brightness;
step 01: acquiring a plurality of frames of original RAW images, comprising:
011: acquiring a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
Referring to fig. 2 and 4, in some embodiments, step 011 is implemented by the image sensor 10 and step 051 is performed by one or more processors 20. That is, the one or more processors 20 are also configured to perform light metering on the environment and obtain a calibrated exposure value based on the measured ambient brightness. The image sensor 10 is also used to acquire a first original RAW image exposed at the calibrated exposure value and a second original RAW image exposed at an exposure value different from the calibrated exposure value.
Specifically, the processor 20 detects the brightness of the environment around the image processing apparatus 100, or around the electronic device 1000 (shown in fig. 27) in which the image processing apparatus 100 is installed, to obtain the ambient brightness. After obtaining the ambient brightness, the processor 20 obtains the calibrated exposure value according to the ambient brightness. It should be noted that, under that ambient brightness, exposure at the calibrated exposure value yields a clearer original RAW image with better image quality. For example, in some embodiments, the image processing apparatus 100 stores a preset correspondence table between ambient brightness and exposure values, and the processor 20 looks up the exposure value corresponding to the measured ambient brightness in this table and uses it as the calibrated exposure value.
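A minimal sketch of such a table lookup, assuming illustrative brightness bands and exposure values (none of the numbers below come from this application):

    import bisect

    # Assumed correspondence table: ambient brightness bands (e.g. in lux)
    # mapped to calibrated exposure values. All numbers are illustrative.
    BRIGHTNESS_BREAKPOINTS = [10.0, 50.0, 200.0, 1000.0]
    CALIBRATED_EVS = [4.0, 6.0, 8.0, 10.0, 12.0]

    def calibrated_exposure_value(ambient_brightness: float) -> float:
        # Pick the table entry for the band the measured brightness falls into.
        band = bisect.bisect_right(BRIGHTNESS_BREAKPOINTS, ambient_brightness)
        return CALIBRATED_EVS[band]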
Referring to fig. 5, in some embodiments, the image processing method further includes:
052: and acquiring a calibration exposure value according to an exposure parameter determined by a user, wherein the exposure parameter comprises at least one of the exposure value, the sensitivity and the exposure duration.
Referring to fig. 2 and 5, in some embodiments, step 052 is performed by one or more processors 20. That is, the one or more processors 20 are further configured to obtain a calibration exposure value according to a user-determined exposure parameter, wherein the exposure parameter includes at least one of an exposure value, a sensitivity, and an exposure duration.
Specifically, referring to fig. 5 and 6, in some embodiments, step 052: obtaining a calibration exposure value according to an exposure parameter determined by a user, wherein the exposure parameter comprises at least one of an exposure value, a sensitivity, and an exposure duration, further comprises:
0521: performing light measurement on the environment;
0522: acquiring initial parameters of exposure according to the measured ambient brightness; and
0523: the initial parameters are adjusted according to user input to obtain exposure parameters.
Referring to fig. 2 and 6, in some embodiments, steps 0521, 0522 and 0523 may be implemented by one or more processors 20. That is, the one or more processors 20 are also used to perform light metering on the environment; acquiring initial parameters of exposure according to the measured ambient brightness; and adjusting the initial parameters according to the user input to obtain the exposure parameters.
Specifically, the processor 20 detects the brightness of the environment around the image processing apparatus 100, or around the electronic device 1000 (shown in fig. 27) in which the image processing apparatus 100 is installed, to obtain the ambient brightness. After obtaining the ambient brightness, the processor 20 obtains the initial parameters according to the ambient brightness. For example, in some embodiments, the image processing apparatus 100 stores a preset correspondence table between ambient brightness and initial exposure parameters, and the processor 20 looks up the initial exposure parameters corresponding to the measured ambient brightness in this table. The initial parameters include at least one of an exposure value, an exposure time, and a sensitivity.
The user can adjust the initial parameters according to actual requirements, and the processor 20 uses the user-adjusted initial parameters as the user-determined exposure parameters. After acquiring the user-determined exposure parameters, the processor 20 acquires the calibration exposure value from them. Specifically, in some embodiments, the exposure parameters include an exposure value, an exposure time, and a sensitivity, i.e., the initial parameters include an initial exposure value, an initial exposure time, and an initial sensitivity. After the initial parameters are obtained: if the user does not adjust them, i.e., the initial exposure value, initial exposure time, and initial sensitivity are left unchanged, the initial exposure value is used as the calibration exposure value; if the user adjusts only the initial exposure value, the user-adjusted initial exposure value is used as the calibration exposure value; and if the user adjusts the initial exposure time and the initial sensitivity, the exposure value obtained by combining the user-adjusted initial exposure time and the user-adjusted initial sensitivity is used as the calibration exposure value. It should be noted that, in some embodiments, if the user adjusts only the initial exposure time, the adjusted exposure time is the exposure time the user desires; the exposure times of the multiple frames of original RAW images are then all the adjusted initial exposure time, the exposure value obtained by combining the initial sensitivity with the user-adjusted initial exposure time is used as the calibration exposure value, and the sensitivity is varied to obtain the original RAW images exposed at exposure values different from the calibration exposure value. Similarly, if the user adjusts only the initial sensitivity, the adjusted sensitivity is the sensitivity the user desires; the sensitivities of the multiple frames of original RAW images are then all the adjusted sensitivity, the exposure value obtained by combining the initial exposure time with the user-adjusted initial sensitivity is used as the calibration exposure value, and the exposure time is varied to obtain the original RAW images exposed at exposure values different from the calibration exposure value. Because the initial parameters are obtained from the ambient brightness and the user only needs to adjust them as required, compared with having the user directly input the exposure parameters, the finally obtained target image can meet the user's requirements while reducing the difficulty of operation.
Of course, in some embodiments, the user may also directly input the exposure parameters, and the processor 20 obtains the calibration exposure value according to the exposure parameters input by the user. Exemplarily, if the user inputs only an exposure value, the exposure value input by the user is used as the calibration exposure value; if the user inputs a sensitivity and an exposure time, the exposure value obtained by combining the sensitivity and the exposure time input by the user is used as the calibration exposure value.
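The branching described above can be summarized in a short sketch. The formula used to combine exposure time and sensitivity into an exposure value is an assumption; this application only states that the two are combined:

    import math

    def combine_ev(exposure_time_s: float, iso: float) -> float:
        # Assumed combination rule: longer exposure time or higher sensitivity
        # corresponds to a lower EV number. Not specified by this application.
        return -math.log2(exposure_time_s * iso / 100.0)

    def calibration_exposure_value(initial: dict, adjusted: dict) -> float:
        # initial/adjusted: {"ev": float, "time": float, "iso": float}
        if adjusted == initial:
            return initial["ev"]          # nothing adjusted: keep the initial EV
        if adjusted["ev"] != initial["ev"]:
            return adjusted["ev"]         # the user adjusted the EV directly
        # The user adjusted exposure time and/or sensitivity: combine them.
        return combine_ev(adjusted["time"], adjusted["iso"])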
After the calibration exposure value is acquired, the pixel array 11 in the image sensor 10 is exposed at the calibration exposure value to acquire a first RAW image and is exposed at a different value from the calibration exposure value to acquire a second RAW image. The specific obtaining manner is the same as that of the above embodiment in which the first original RAW image is obtained by exposure with the calibration exposure value, and the second original RAW image is obtained by exposure with the exposure value different from the calibration exposure value, which is not described herein again.
Referring to fig. 7, in some embodiments, the image processing method further includes:
06: preprocessing an original RAW image, wherein the preprocessing comprises the following steps: at least one of linear correction, dead pixel correction processing, black level correction processing, and lens shading correction processing;
step 02: carrying out high dynamic range image processing on the multiple frames of original RAW images to obtain a target RAW image, comprises the following steps:
021: and carrying out high dynamic range image processing on the multiple frames of preprocessed original RAW images to obtain a target RAW image.
Referring to fig. 2 and 7, in some embodiments, step 06 and step 021 may be performed by one or more processors 20. That is, the one or more processors 20 are also configured to: preprocess the original RAW images; and perform high dynamic range image processing on the multiple frames of preprocessed original RAW images to obtain a target RAW image.
Specifically, in some embodiments, after the image sensor 10 has acquired a plurality of frames of original RAW images by exposure, the processor 20 preprocesses the original RAW images to obtain processed original RAW images. The preprocessing comprises at least one of linear correction, dead pixel correction processing, black level correction processing, and lens shading correction processing. For example, the preprocessing includes only linear correction; or only linear correction and dead pixel correction; or only linear correction, dead pixel correction processing, and black level correction processing; or linear correction, dead pixel correction processing, black level correction processing, and lens shading correction processing, which is not limited herein.
Because the original RAW images are preprocessed, a target RAW image obtained by high-dynamic fusion of the multiple frames of preprocessed original RAW images has higher definition and better image quality than a target RAW image obtained by directly fusing the unprocessed original RAW images.
It should be noted that, in some embodiments, the processor 20 includes an Image Signal Processor (ISP), and the image preprocessing of the multiple frames of original RAW images is performed in the ISP. Of course, in some embodiments, the image preprocessing of the multiple frames of original RAW images can also be performed in another processor 20 rather than in the ISP, which is not limited herein.
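As a rough sketch of two of the listed corrections (black level and lens shading only; the black level constant and the gain map are assumptions, not values from this application):

    import numpy as np

    def preprocess(raw: np.ndarray, black_level: float = 64.0,
                   shading_gain: np.ndarray = None) -> np.ndarray:
        # Black level correction: subtract the sensor's dark-signal offset.
        out = raw.astype(np.float32) - black_level
        # Lens shading correction: multiply by a per-pixel gain map that
        # compensates vignetting (assumed to be calibrated elsewhere).
        if shading_gain is not None:
            out = out * shading_gain
        return np.clip(out, 0.0, None)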
Referring to fig. 1 and 8, in some embodiments, step 02: the method for processing the high dynamic range image of the multi-frame original RAW image to obtain the target RAW image comprises the following steps:
022: carrying out image registration on a plurality of frames of original RAW images;
023: fusing the registered original RAW images with the same exposure value to obtain a multi-frame intermediate RAW image;
024: acquiring weights corresponding to all pixels in each frame of intermediate RAW image; and
025: and fusing the multiple frames of intermediate RAW images according to the weights to obtain a target RAW image.
Referring to fig. 2 and 8, in some embodiments, step 022, step 023, step 024, and step 025 can all be performed by one or more processors 20. That is, the one or more processors 20 are also configured to: perform image registration on the multiple frames of original RAW images; fuse the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images; acquire the weights corresponding to all pixels in each frame of intermediate RAW image; and fuse the multiple frames of intermediate RAW images according to the weights to obtain a target RAW image.
Specifically, after acquiring a plurality of frames of original RAW images, image registration is performed on the plurality of frames of original RAW images. Of course, in some instances, the multiple frames of preprocessed original RAW images may be registered instead. The following description takes image registration of the multiple frames of original RAW images as an example.
Referring to fig. 8 and 9, in some embodiments, registering the plurality of original RAW images further includes:
0221: selecting a frame of first original RAW image as a first reference image;
0222: performing first gray processing on an original RAW image to be registered and a first reference image to obtain a gray image to be registered and a first gray image;
0223: acquiring a first array corresponding to the RAW image to be registered according to the gray image to be registered and the first gray image;
0224: and acquiring the registered original RAW image according to the coordinates of the pixel points on the original RAW image to be registered and the first array.
Referring to fig. 2 and 9, in some embodiments, step 0221, step 0222, step 0223, and step 0224 may all be implemented by one or more processors 20. That is, the one or more processors 20 are also configured to: selecting a frame of first original RAW image as a first reference image; performing first gray processing on an original RAW image to be registered and a first reference image to obtain a gray image to be registered and a first gray image; acquiring a first array corresponding to the RAW image to be registered according to the gray image to be registered and the first gray image; and acquiring the registered original RAW image according to the coordinates of the pixel points on the original RAW image to be registered and the first array.
Specifically, the processor 20 selects a frame of first original RAW image as the first reference image, that is, selects one frame of original RAW image exposed at the nominal exposure value from the plurality of frames of original RAW images as the first reference image. One frame of original RAW image is then selected from the remaining frames as an original RAW image to be registered. It should be noted that, since the first reference image serves as the reference, image registration of the first reference image itself is not required. It can be understood that the first reference image counts as an already-registered original RAW image; that is, the registered original RAW images include the images obtained by registering the original RAW images to be registered, as well as the first reference image itself.
Referring to fig. 10, first grayscale processing is performed on the first reference image to obtain a first grayscale image. Illustratively, in some embodiments, the first reference image includes a plurality of pixel grids, each pixel grid including four pixel points in a 2 x 2 arrangement. One pixel point in the first grayscale image corresponds to one pixel grid in the first reference image, and the mean of all pixel points in a pixel grid of the first reference image is used as the pixel value of the corresponding pixel point of the first grayscale image. For example, as shown in fig. 10, a pixel grid U1 in the first reference image includes: the pixel P11 in the first row and the first column, the pixel P12 in the first row and the second column, the pixel P21 in the second row and the first column, and the pixel P22 in the second row and the second column of the first reference image. The pixel point p11 in the first row and the first column of the first grayscale image corresponds to the pixel grid U1, and its pixel value equals the mean of the pixel values of the pixels P11, P12, P21, and P22 of the first reference image. Similarly, the specific method of performing first grayscale processing on the original RAW image to be registered to obtain the grayscale image to be registered is the same as the specific method of performing first grayscale processing on the first reference image to obtain the first grayscale image, and is not repeated here.
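A minimal numpy sketch of this first grayscale processing, assuming a single-channel RAW frame whose height and width are even:

    import numpy as np

    def first_grayscale(raw: np.ndarray) -> np.ndarray:
        # Average each 2 x 2 pixel grid of the RAW frame into one grayscale
        # pixel, e.g. p11 = (P11 + P12 + P21 + P22) / 4.
        h, w = raw.shape
        blocks = raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return blocks.mean(axis=(1, 3))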
After the first grayscale image and the grayscale image to be registered are obtained, the feature points in the first grayscale image and the grayscale image to be registered are obtained, and in some embodiments, the processor 20 may calculate the feature points in the grayscale image according to a Harris corner algorithm. Of course, the feature points in the grayscale image may be calculated in other manners, which is not limited herein.
After the feature points of the grayscale images (including the first grayscale image and the grayscale image to be registered) are obtained, the first array corresponding to the original RAW image to be registered is obtained according to the corresponding feature points. For example, in some embodiments, the mapping relationship between the feature points on the first grayscale image and the corresponding feature points on the grayscale image to be registered is obtained to obtain the first array. The first array may be a homography matrix, and it describes the pixel mapping relationship between the grayscale image to be registered and the first grayscale image. It should be noted that the pixel mapping relationship between the original RAW image to be registered and the first reference image is the same as that between the grayscale image to be registered and the first grayscale image, so the same first array is used for both.
After the first array is acquired, the registered original RAW image is acquired according to the coordinates of the pixel points on the original RAW image to be registered and the first array. Exemplarily, one pixel point in the original RAW image to be registered is selected, the coordinates of the selected pixel point are obtained, the registered coordinates of the pixel point are calculated by transforming its coordinates with the first array, and the pixel point is moved to the registered coordinates; then the next pixel point is selected, and the process is repeated until every pixel point in the original RAW image to be registered has been moved to its registered coordinates, yielding the registered original RAW image.
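A sketch of this step using OpenCV, assuming matched corner coordinates are already available (the application computes Harris corners on the grayscale images but does not specify how correspondences are matched). The estimated homography plays the role of the first array:

    import cv2
    import numpy as np

    def register_raw(raw_to_align: np.ndarray,
                     pts_to_align: np.ndarray,
                     pts_reference: np.ndarray) -> np.ndarray:
        # pts_*: N x 2 float32 arrays of matched feature coordinates in RAW
        # coordinates (grayscale corner coordinates scaled by 2, since the
        # grayscale images are half resolution).
        first_array, _ = cv2.findHomography(pts_to_align, pts_reference, cv2.RANSAC)
        h, w = raw_to_align.shape
        # Move every pixel of the image to be registered to its registered
        # coordinates under the estimated mapping.
        return cv2.warpPerspective(raw_to_align, first_array, (w, h))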
Of course, in some embodiments, the image registration may be performed on the plurality of original RAW images in other manners, which are not enumerated here. Because image registration is performed on the multiple frames of original RAW images, their subsequent processing is facilitated.
After obtaining the multiple frames of registered original RAW images, the processor 20 first fuses the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images. For example, the multiple frames of registered original RAW images include multiple frames of registered first original RAW images exposed at the nominal exposure value and multiple frames of registered second original RAW images exposed at a first exposure value different from the nominal exposure value. The processor 20 fuses the multiple frames of registered first original RAW images to obtain a first intermediate RAW image, and fuses the multiple frames of registered second original RAW images to obtain a second intermediate RAW image.
Specifically, referring to fig. 1 and fig. 11, in some embodiments, the performing high dynamic range image processing on a plurality of frames of original RAW images to obtain a target RAW image further includes:
026: detecting a motion area of the original RAW image after the registration;
step 023: fusing the registered original RAW images with the same exposure value to obtain a multi-frame intermediate RAW image, comprising:
0231: aiming at each frame of the registered original RAW image with the same exposure value, performing first fusion processing on pixel points located in the motion area and performing second fusion processing on pixel points located outside the motion area to obtain a multi-frame intermediate RAW image, wherein the first fusion processing is different from the second fusion processing.
Referring to fig. 2 and 11, in some embodiments, steps 026 and 0231 can be implemented by one or more processors 20. That is, the one or more processors 20 are also configured to detect motion regions of the registered original RAW images; and, for each frame of registered original RAW image with the same exposure value, to perform first fusion processing on pixel points located in the motion region and second fusion processing on pixel points located outside the motion region to obtain multiple frames of intermediate RAW images, wherein the first fusion processing is different from the second fusion processing.
After the first grayscale image and the grayscale image to be registered are obtained, the registered grayscale image is obtained according to the coordinates of the pixel points on the grayscale image to be registered and the first array. That is, image registration is also performed on the multiple frames of grayscale images (including the first grayscale image and the grayscale images to be registered). It should be noted that the manner of obtaining the registered grayscale image from the coordinates of the pixel points on the grayscale image to be registered and the first array is the same as the manner of obtaining the registered original RAW image from the coordinates of the pixel points on the original RAW image to be registered and the first array, and details are not repeated here.
Then, a motion region of the original RAW corresponding to the registered grayscale image is determined according to the first grayscale image and the registered grayscale image. Specifically, referring to fig. 11 and 12, in some embodiments, detecting the motion region of the registered RAW image includes:
0261: acquiring a mapping value of a pixel value of each pixel point in the first gray level image and a mapping value of a pixel value of each pixel point in the registered gray level image according to a preset mapping relation;
0262: calculating a mapping difference value between the mapping value of each pixel point in the first gray level image and the mapping value of the corresponding pixel point in the registered gray level image;
0263: and if the mapping difference is larger than a preset threshold, determining that the corresponding pixel point in the original RAW image after registration is located in the motion area.
Referring to fig. 2 and 12, in some embodiments, step 0261, step 0262, and step 0263 can be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to obtain, according to a preset mapping relationship, a mapping value of a pixel value of each pixel point in the first grayscale image and a mapping value of a pixel value of each pixel point in the registered grayscale image; calculating a mapping difference value between the mapping value of each pixel point in the first gray level image and the mapping value of the corresponding pixel point in the registered gray level image; and if the mapping difference is larger than a preset threshold, determining that the corresponding pixel point in the original RAW image after registration is located in the motion area.
After the first grayscale image and the registered grayscale image are obtained, the mapping value of the pixel value of each pixel point in the first grayscale image and the mapping value of the pixel value of each pixel point in the registered grayscale image are obtained according to a preset mapping relationship. Specifically, the pixel value of a pixel point of the registered grayscale image is obtained, and the mapping value corresponding to that pixel value is looked up in the preset mapping relationship. Similarly, the pixel value of a pixel point in the first grayscale image is obtained, and the mapping value corresponding to that pixel value is looked up in the preset mapping relationship. It should be noted that, in some embodiments, the preset mapping relationship may be a denoising lookup table. The denoising lookup table records, for each acquired pixel value, the corresponding pixel value after denoising, so that the influence of noise is reduced when the acquired mapping values are used in subsequent processing, which improves the accuracy of motion-region detection.
After the mapping values of the pixel values of all pixel points in the first grayscale image and in the registered grayscale image are obtained, the mapping difference between the mapping value of each pixel point in the first grayscale image and the mapping value of the corresponding pixel point in the registered grayscale image is calculated. If the mapping difference is greater than the preset threshold, the region where that pixel point is located in the registered grayscale image is a motion region; that is, in the registered original RAW image corresponding to the registered grayscale image, the pixel points in the pixel grid corresponding to that pixel point are all located in the motion region. For example, in fig. 13, the image on the right is the registered original RAW image corresponding to the registered grayscale image on the left. One pixel grid U1 in the registered original RAW image comprises: the pixel P11 in the first row and the first column, the pixel P12 in the first row and the second column, the pixel P21 in the second row and the first column, and the pixel P22 in the second row and the second column, and the pixel point p11 in the first row and the first column of the registered grayscale image corresponds to the pixel grid U1 of the registered original RAW image. Assuming that the pixel point p11 in the first row and the first column of the registered grayscale image is located in the motion region, all the pixels in the pixel grid U1 of the registered original RAW image are located in the motion region, that is, the pixels P11, P12, P21, and P22 are all located in the motion region.
It should be noted that, in some embodiments, the processor 20 further performs image morphological processing such as erosion and dilation on the motion region determined in the registered original RAW image, so as to make the detected motion region more accurate. Of course, in some embodiments, motion regions in the registered original RAW image can be detected in other ways. For example, the difference between the pixel value of a pixel point in the registered original RAW image and the pixel value of the corresponding pixel point in the first reference image is calculated directly, and if the difference is greater than a preset value, the pixel point is determined to be located in the motion region, which is not limited herein.
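A minimal sketch of this test, assuming 8-bit grayscale images, a 256-entry denoising lookup table, and an illustrative threshold:

    import numpy as np

    def motion_mask(gray_ref: np.ndarray, gray_reg: np.ndarray,
                    denoise_lut: np.ndarray, threshold: int = 12) -> np.ndarray:
        # Map both uint8 grayscale images through the denoising LUT, then
        # flag pixels whose mapped values differ by more than the threshold.
        diff = np.abs(denoise_lut[gray_ref].astype(np.int32) -
                      denoise_lut[gray_reg].astype(np.int32))
        return diff > threshold

    def mask_to_raw(mask: np.ndarray) -> np.ndarray:
        # Each grayscale pixel covers a 2 x 2 pixel grid of the registered
        # RAW frame, so the mask is upsampled by 2 in each dimension.
        return np.kron(mask, np.ones((2, 2), dtype=bool))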
Referring to fig. 8 and 14, in some embodiments, fusing the registered original RAW images with the same exposure value to obtain multiple frames of intermediate RAW images, further includes:
0232: selecting any one frame from the multiple frames of registered original RAW images with the same exposure value as a first reference image, and using the other frames as first non-reference images.
Referring to fig. 2 and 14, in some embodiments, step 0232 may be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to select any one of the frames of the registered original RAW image with the same exposure value as the first reference image, and the other frames as the first non-reference image.
The processor 20 selects any one frame from the multiple frames of registered original RAW images with the same exposure value as a first reference image, and uses the other frames as first non-reference images. In some embodiments, the multiple frames of registered original RAW images with the same exposure value are ordered by acquisition time, and the first frame is selected as the first reference image; that is, the earliest obtained image among them is selected as the first reference image. Since the moment the user presses the shutter is the moment whose image the user most wants to capture, it can be understood that the closer an image's acquisition time is to that moment, i.e., the earlier the image is obtained, the closer it is to the image the user desires. Therefore, taking the first registered original RAW image as the reference image for fusion helps the finally obtained image meet the user's needs. Of course, in some embodiments, the registered original RAW image with the highest definition among the multiple frames with the same exposure value is selected as the first reference image, so that the finally obtained image has higher definition.
Referring to fig. 8 and 14, in some embodiments, a first fusion process is performed on the pixels located in the motion region, including:
02311: if the pixel points at the same position in all the first non-reference images are located in the motion region, using the pixel value of the pixel point at the corresponding position of the first reference image as the pixel value of the corresponding pixel point of the fused intermediate RAW image;
and performing second fusion processing on the pixel points outside the motion area, wherein the second fusion processing comprises the following steps:
02312: and if at least one of the pixel points at the same position in all the first non-reference images is located outside the motion region, taking the mean of the pixel value of the pixel point at the corresponding position of the first reference image and the pixel values of the pixel points at the corresponding positions of the first non-reference images that lie outside the motion region as the pixel value of the corresponding pixel point of the fused intermediate RAW image.
Referring to fig. 2 and 14, in some embodiments, step 02311 and step 02312 may be implemented by one or more processors 20. That is, the one or more processors 20 are also configured to: if the pixel points at the same position in all the first non-reference images are located in the motion region, use the pixel value of the pixel point at the corresponding position of the first reference image as the pixel value of the corresponding pixel point of the fused intermediate RAW image; and if at least one of the pixel points at the same position in all the first non-reference images is located outside the motion region, use the mean of the pixel value of the pixel point at the corresponding position of the first reference image and the pixel values of the pixel points at the corresponding positions of the first non-reference images that lie outside the motion region as the pixel value of the corresponding pixel point of the fused intermediate RAW image.
After confirming a first reference image and the first non-reference images in the multiple frames of registered original RAW images with the same exposure value, if the pixel points at the same position in all the first non-reference images are located in the motion region, the pixel value of the pixel point at the corresponding position of the first reference image is used as the pixel value of the corresponding pixel point of the fused intermediate RAW image. For example, as shown in fig. 15, the exposure values of the first, second, and third registered original RAW images are all the same. The first registered original RAW image is the first reference image, and the second and third registered original RAW images are first non-reference images. Assuming that the pixel point a2 located in the third row and the third column of the second registered original RAW image is located in the motion region, and the pixel point a3 located in the third row and the third column of the third registered original RAW image is also located in the motion region, that is, the pixel points in the third row and the third column of all the first non-reference images are located in the motion region, the pixel value of the pixel point a1 located in the third row and the third column of the first reference image (the first registered original RAW image) is used as the pixel value of the pixel point a of the fused intermediate RAW image, where the pixel point a is located in the third row and the third column of the fused intermediate RAW image.
After a first reference image and the first non-reference images are confirmed in the multiple frames of registered original RAW images with the same exposure value, if at least one of the pixel points at the same position in all the first non-reference images is located outside the motion region, the mean of the pixel value of the pixel point at the corresponding position of the first reference image and the pixel values of the pixel points at the corresponding positions of the first non-reference images that lie outside the motion region is used as the pixel value of the corresponding pixel point of the fused intermediate RAW image. For example, as shown in fig. 15, the exposure values of the first, second, and third registered original RAW images are all the same. The first registered original RAW image is the first reference image, and the second and third registered original RAW images are first non-reference images. Assuming that the pixel B2 in the first row and the first column of the second registered original RAW image is located outside the motion region, and the pixel B3 in the first row and the first column of the third registered original RAW image is also located outside the motion region, that is, the pixels in the first row and the first column of all the first non-reference images are located outside the motion region, the mean of the pixel values of the pixel B1 in the first row and the first column of the first reference image (the first registered original RAW image), the pixel B2 in the first row and the first column of the second registered original RAW image, and the pixel B3 in the first row and the first column of the third registered original RAW image is used as the pixel value of the pixel B of the fused intermediate RAW image, where the pixel B is located in the first row and the first column of the fused intermediate RAW image. For another example, if the pixel point C2 in the first row and the second column of the second registered original RAW image is located outside the motion region while the pixel point C3 in the first row and the second column of the third registered original RAW image is located inside the motion region, that is, at least one of the pixel points at the same position in all the first non-reference images is located outside the motion region, then the mean of the pixel value of the pixel point C1 in the first row and the second column of the first reference image (the first registered original RAW image) and the pixel value of the pixel point C2 in the first row and the second column of the second registered original RAW image is used as the pixel value of the pixel point C of the fused intermediate RAW image, where the pixel point C is located in the first row and the second column of the fused intermediate RAW image.
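The two fusion rules can be sketched together in numpy; the motion masks are assumed to come from the detection step above:

    import numpy as np

    def fuse_same_exposure(first_ref: np.ndarray, non_refs: list,
                           motion_masks: list) -> np.ndarray:
        # non_refs[i] is a first non-reference frame; motion_masks[i] is True
        # where that frame lies inside the motion region.
        total = first_ref.astype(np.float64)
        count = np.ones(first_ref.shape, dtype=np.float64)
        for frame, mask in zip(non_refs, motion_masks):
            still = ~mask                 # second fusion: average pixels outside motion
            total[still] += frame[still]
            count[still] += 1.0
        # Where every non-reference pixel is in motion, count stays 1 and the
        # first-reference pixel is kept unchanged (first fusion).
        return total / count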
Referring to fig. 1 and 16, in some embodiments, the performing high dynamic range image processing on a plurality of original RAW images to obtain a target RAW image further includes:
028: carrying out ghost-removing processing on the multi-frame intermediate RAW image to obtain a ghost-removed intermediate RAW image;
0241: acquiring weights corresponding to all pixels in each frame of the intermediate RAW image after ghost shadow removal;
0251: and fusing the multi-frame ghost-removed intermediate RAW image according to the weight to obtain a target RAW image.
Referring to fig. 2 and 16, in some embodiments, step 028, step 0241, and step 0251 can be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to perform ghost-removing processing on the multiple frames of intermediate RAW images to obtain ghost-removed intermediate RAW images; acquire the weights corresponding to all pixels in each frame of ghost-removed intermediate RAW image; and fuse the multiple frames of ghost-removed intermediate RAW images according to the weights to obtain the target RAW image.
After obtaining the multiple frames of intermediate RAW images with different exposure values, the processor 20 selects the intermediate RAW image obtained by fusing the first original RAW image from the multiple frames of intermediate RAW images as a second reference image, that is, the exposure value of the second reference image is the calibration exposure value.
In some embodiments, a frame of intermediate RAW image is selected, and the motion region of the intermediate RAW image is detected. For each frame of intermediate RAW image, for a pixel point located in the motion region, the pixel value of the corresponding pixel point of the ghost-removed intermediate RAW image is calculated according to the pixel value of the pixel point at the corresponding position of the second reference image, the brightness of the second reference image, and the brightness of the intermediate RAW image. For example, as shown in fig. 17, the exposure values of the first intermediate RAW image and the second intermediate RAW image are different, and the first intermediate RAW image is the second reference image. Assuming that the pixel point d2 in the third row and third column of the second intermediate RAW image is located in the motion region, the product of the pixel value of the pixel point d1 in the third row and third column of the second reference image and the ratio of the average brightness of the second intermediate RAW image to the average brightness of the second reference image is used as the pixel value of the pixel point d' of the ghost-removed second intermediate RAW image, where the pixel point d' is located in the third row and third column of the ghost-removed second intermediate RAW image. For a pixel point located outside the motion region, the pixel value of that pixel point of the intermediate RAW image is directly taken as the pixel value of the pixel point at the corresponding position of the ghost-removed intermediate RAW image. For example, as shown in fig. 17, if the pixel point e2 in the first row and first column of the second intermediate RAW image is located outside the motion region, the pixel value of the pixel point e2 is taken as the pixel value of the pixel point e' of the ghost-removed second intermediate RAW image, where the pixel point e' is located in the first row and first column of the ghost-removed second intermediate RAW image.
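A minimal sketch of this de-ghosting rule, again assuming the motion region is available as a boolean mask (names are illustrative, not from the patent):

```python
import numpy as np

def deghost(frame, reference, motion_mask):
    """De-ghost one intermediate RAW frame against the second reference image.

    frame:       (H, W) intermediate RAW image
    reference:   (H, W) second reference image (calibrated exposure value)
    motion_mask: (H, W) bool array, True where a pixel lies in the motion region
    """
    frame = np.asarray(frame, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    # The brightness ratio compensates for the exposure difference between
    # this frame and the reference image.
    ratio = frame.mean() / reference.mean()
    # In the motion region, use the brightness-scaled reference pixel;
    # outside it, keep the frame's own pixel value.
    return np.where(motion_mask, reference * ratio, frame)
```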
Because the intermediate RAW images are de-ghosted before being fused, the target RAW image finally obtained has higher definition and better image quality than one obtained by directly fusing the multiple frames of intermediate RAW images.
After the ghost-removed intermediate RAW images are obtained, the weights corresponding to all pixels in each frame of ghost-removed intermediate RAW image are acquired, and the multiple frames of ghost-removed intermediate RAW images are fused according to the weights to obtain the target RAW image. The specific implementation of acquiring the weights corresponding to all pixels in each frame of ghost-removed intermediate RAW image and fusing the multiple frames of ghost-removed intermediate RAW images according to the weights to obtain the target RAW image is the same as that of acquiring the weights corresponding to all pixels in each frame of intermediate RAW image and fusing the multiple frames of intermediate RAW images according to the weights to obtain the target RAW image. The following description therefore takes the latter as an example.
Referring to fig. 8 and 18, in some embodiments, obtaining the weights corresponding to all pixels in each frame of the intermediate RAW image includes:
0242: selecting a first original RAW image, fusing to obtain a middle RAW image as a second reference image, and taking the rest middle RAW images as second non-reference images;
0243: performing second gray level processing on the second reference image to obtain a second gray level image;
0244: and acquiring the weight corresponding to the pixel point to be calculated according to the average brightness and variance of the second gray image and the pixel value of the pixel point to be calculated in the intermediate RAW image.
Referring to fig. 2 and 18, in some embodiments, step 0242, step 0243, and step 0244 may be implemented by one or more processors 20. That is, the one or more processors 20 are configured to select the intermediate RAW image obtained by fusing the first original RAW images as a second reference image and take the remaining intermediate RAW images as second non-reference images; perform second gray processing on the second reference image to obtain a second gray image; and acquire the weight corresponding to the pixel point to be calculated according to the average brightness and variance of the second gray image and the pixel value of the pixel point to be calculated in the intermediate RAW image.
The processor 20 selects, from the multiple frames of intermediate RAW images, the intermediate RAW image obtained by fusing the first original RAW images as the second reference image, that is, the exposure value of the second reference image is the calibrated exposure value, and the remaining intermediate RAW images are used as second non-reference images. As shown in fig. 19, the processor 20 performs second gray processing on the second reference image to obtain a second gray image. Illustratively, the processor 20 interpolates the second reference image to obtain the second gray image, and the length and width of the second gray image are the same as those of the second reference image.
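The text above does not fix the interpolation used for the gray processing; one simple possibility is to average each 2×2 Bayer cell while keeping the original size, as in this hypothetical sketch:

```python
import numpy as np

def raw_to_gray(raw_bayer):
    """Approximate a gray image from a Bayer-mosaic RAW frame.

    Averages each pixel with its right, lower, and lower-right neighbours
    (edge-padded), so every output pixel mixes one full RGGB cell and the
    output keeps the same length and width as the input RAW image.
    """
    padded = np.pad(raw_bayer.astype(np.float64), ((0, 1), (0, 1)), mode="edge")
    return (padded[:-1, :-1] + padded[:-1, 1:] +
            padded[1:, :-1] + padded[1:, 1:]) / 4.0
```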
After the second gray image is obtained, the average brightness and variance of the second gray image are acquired. Any one frame of the multiple frames of intermediate RAW images (including the second reference image and the second non-reference images) is selected, and the weights corresponding to all pixel points in the selected intermediate RAW image are calculated. For example, in some embodiments, the weight corresponding to a pixel point to be calculated may be obtained according to the average brightness and variance of the second gray image and the pixel value of the pixel point to be calculated in the intermediate RAW image. Specifically, the weight corresponding to the pixel point to be calculated in each frame of intermediate RAW image may be calculated according to a calculation formula of the form
weight = exp(−(p − mean)² / (2·sigma)),

where weight is the weight corresponding to the pixel point to be calculated, p is the pixel value of that pixel point in the intermediate RAW image, mean is the average brightness of the second gray image, and sigma is the brightness variance of the second gray image; a Gaussian-type weighting of this form gives the largest weights to pixels whose values lie near the average brightness. In some embodiments, the weights corresponding to the pixel points may be adjusted according to the actual needs of the user, for example according to a calculation formula of the form
weight = M · exp(−(p − mean)² / (2·N·sigma)),

where M and N are gain values adjusted by the user, M scaling the overall weight and N scaling the spread of the weighting. The weights corresponding to all pixel points in the next frame of intermediate RAW image are then obtained, until the weights corresponding to all pixel points in all the intermediate RAW images have been obtained.
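A compact sketch of this weighting step, using the Gaussian-type form given above; the placement of the gain values M and N inside the formula is an assumption, as are the function and parameter names:

```python
import numpy as np

def pixel_weights(raw, gray, m_gain=1.0, n_gain=1.0):
    """Per-pixel fusion weights for one intermediate RAW frame.

    raw:    (H, W) intermediate RAW image whose weights are being computed
    gray:   (H, W) gray image from which mean and sigma are taken
    m_gain, n_gain: user-adjusted gain values (their placement in the
                    formula is an assumption, as discussed above)
    """
    mean = gray.mean()              # average brightness of the gray image
    sigma = max(gray.var(), 1e-12)  # brightness variance, kept non-zero
    # Gaussian-type weight: pixel values near the average brightness of the
    # gray image receive the largest weights.
    return m_gain * np.exp(-((raw - mean) ** 2) / (2.0 * n_gain * sigma))
```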
Referring to fig. 8 and 20, in some embodiments, obtaining the weights corresponding to all pixels in each frame of the intermediate RAW image includes:
0245: performing second gray level processing on all the intermediate RAW images to obtain corresponding third gray level images;
0246: and acquiring the weight corresponding to the pixel point to be calculated according to the average brightness and the variance of the third gray level image and the pixel value of the pixel point to be calculated in the corresponding intermediate RAW image.
Referring to fig. 2 and 20, in some embodiments, step 0245 and step 0246 may be implemented by one or more processors 20. That is, the one or more processors 20 are also configured to: performing second gray level processing on all the intermediate RAW images to obtain corresponding third gray level images; and acquiring the weight corresponding to the pixel point to be calculated according to the average brightness and the variance of the third gray level image and the pixel value of the pixel point to be calculated in the corresponding intermediate RAW image.
The processor 20 performs second gray scale processing on all intermediate RAW images to obtain corresponding third gray scale images, where a specific implementation manner of performing the second gray scale processing on all intermediate RAW images is the same as a specific implementation manner of performing the second gray scale processing on the second reference image, and is not described herein again.
After the multiple frames of third gray images are obtained, one frame of third gray image is selected, and the average brightness and variance of that third gray image are acquired. The weight corresponding to a pixel point is calculated according to the average brightness and variance of the third gray image and the pixel value of the pixel point to be calculated in the intermediate RAW image corresponding to the third gray image. For example, the weight corresponding to the pixel point to be calculated in each frame of intermediate RAW image may be calculated according to a calculation formula of the form
weight = exp(−(p − mean)² / (2·sigma)),

where weight is the weight corresponding to the pixel point to be calculated, p is the pixel value of that pixel point, mean is the average brightness of the third gray image corresponding to the intermediate RAW image in which the pixel point to be calculated is located, and sigma is the brightness variance of that third gray image. After the weights corresponding to all pixel points in this intermediate RAW image are obtained, the weights corresponding to all pixel points in the next frame of intermediate RAW image are obtained, until the weights corresponding to all pixel points in all the intermediate RAW images have been obtained.
After the weights corresponding to all pixel points in all the intermediate RAW images are obtained, fusing the multi-frame intermediate RAW images according to the weights to obtain a target RAW image. Specifically, referring to fig. 1 and 21, in some embodiments, fusing the multi-frame intermediate RAW images according to the weight to obtain the target RAW image includes:
02512: and taking the sum of the pixel values of the pixel points at the corresponding positions of all the intermediate RAW images and the product of the corresponding weights as the pixel value of the pixel point corresponding to the target RAW image after fusion.
Referring to fig. 2 and 21, in some embodiments, step 02512 can be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to take the sum of the products of the pixel values of the pixel points at the corresponding positions of all the intermediate RAW images and the corresponding weights as the pixel value of the corresponding pixel point of the fused target RAW image.
After obtaining the weights corresponding to all the pixel points in all the intermediate RAW images, the processor 20 takes the sum of the products of the pixel values of the pixel points at the corresponding positions of all the intermediate RAW images and the corresponding weights as the pixel value of the corresponding pixel point of the fused target RAW image. For example, assuming that the weight corresponding to the pixel point in the first row and first column of the first intermediate RAW image is a first weight, and the weight corresponding to the pixel point in the first row and first column of the second intermediate RAW image is a second weight, the sum of the pixel value of the pixel point in the first row and first column of the first intermediate RAW image multiplied by the first weight and the pixel value of the pixel point in the first row and first column of the second intermediate RAW image multiplied by the second weight is used as the pixel value of the pixel point in the first row and first column of the fused target RAW image. It should be noted that, in some embodiments, the weights corresponding to the pixel points at the corresponding positions of all the intermediate RAW images may be normalized, that is, the sum of the weights corresponding to the pixel points at the corresponding positions of all the multiple frames of intermediate RAW images is made equal to 1, and the sum of the products of the pixel values of the pixel points at the corresponding positions of all the intermediate RAW images and the corresponding normalized weights is used as the pixel value of the corresponding pixel point of the fused target RAW image.
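The weighted fusion, including the optional normalization of the per-pixel weights, can be sketched as follows (illustrative names; a minimal implementation of the rule described above):

```python
import numpy as np

def fuse_weighted(frames, weights, normalize=True):
    """Fuse intermediate RAW frames into the target RAW image.

    frames:  (N, H, W) stack of intermediate RAW images
    weights: (N, H, W) per-pixel weights, one weight map per frame
    """
    frames = np.asarray(frames, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    if normalize:
        # Normalize so the weights at each pixel position sum to 1.
        weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-12)
    # Per-pixel weighted sum over all frames.
    return (frames * weights).sum(axis=0)
```

With normalization enabled, the weights at each pixel position sum to 1, so the fused pixel value stays within the range of the input frames.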
It should be noted that, in some embodiments, after the target RAW image is obtained by performing high dynamic fusion processing on multiple frames of original RAW images, the target RAW image has a higher dynamic range, and the bit width of the target RAW image is higher than the bit width of the original RAW image. For example, a target RAW image with a bit width of 16 bits is obtained by high dynamic fusion of a plurality of original RAW images with a bit width of 12 bits. Of course, in some embodiments, the bit width of the target RAW image may also be equal to the bit width of the original RAW image. For example, a target RAW image with a bit width of 12 bits is obtained by high dynamic fusion of a plurality of original RAW images with a bit width of 12 bits. In addition, in some embodiments, the acquired multiple frames of original RAW images are all exposed with the same exposure value, and the processor 20 performs fusion on the multiple frames of original RAW images exposed with the same exposure value to directly obtain the target RAW image, which is not limited herein.
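As a small illustration of the bit-width point, a fused floating-point result can be quantized to a chosen bit width as follows; the assumption that the fused data is normalized to [0, 1] is ours, not the patent's:

```python
import numpy as np

def quantize(fused, out_bits=16):
    """Represent a fused floating-point RAW image at a chosen bit width.

    fused: floating-point target RAW data, assumed normalized to [0, 1]
           (this normalization is our assumption, not the patent's).
    """
    max_code = (1 << out_bits) - 1   # 65535 for 16 bits, 4095 for 12 bits
    return np.clip(np.round(fused * max_code), 0, max_code).astype(np.uint16)
```

With out_bits=16, the fused result uses the higher bit width described above; with out_bits=12 it matches the bit width of the original RAW images.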
The processor 20 is configured to obtain parameter information from the original RAW images, where the parameter information includes, but is not limited to: at least one of a photographing parameter, an image height, a black level, a white level, a color conversion matrix, a white balance parameter, and a lens shading correction parameter.
It should be noted that the DNG format is an open RAW file format, mainly used to unify the RAW formats of different manufacturers. The DNG specification defines the organization of the data, color space conversion, and the like, and the tag parameters (Tags) it uses are based on extensions of the TIFF/EP specification. Some tags required by a DNG image are not taken directly from the captured metadata information but are calculated from it by conversion. For example, the color matrix (ColorMatrix) and the front matrix (ForwardMatrix) among the tag parameters need to be calculated from the metadata parameter information, as further described below.
In some embodiments, the metadata parameter information includes shooting parameters of the original RAW image, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source, and the tag parameters include a first color matrix and a second color matrix. Referring to fig. 1 and fig. 22, acquiring tag parameters in a DNG image according to metadata parameter information when multiple frames of original RAW images are captured includes:
031: obtaining a first color matrix according to the first color conversion matrix, the first matrix and the second matrix; and
032: and acquiring a second color matrix according to the second color conversion matrix and the first matrix.
Referring to fig. 2 and 22, in some embodiments, step 031 and step 032 can be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to obtain a first color matrix from the first color conversion matrix, the first matrix, and the second matrix; and acquiring a second color matrix according to the second color conversion matrix and the first matrix.
In particular, the first color matrix may be calculated according to the formula ColorMatrix1 = Inv(CCM1 * sRGB2XYZ_D65 * D65toA), where ColorMatrix1 denotes the first color matrix, CCM1 denotes the first color conversion matrix under the first light source, sRGB2XYZ_D65 denotes the first matrix, and D65toA denotes the second matrix. That is, the first color matrix may be obtained as the matrix inverse of the product of the first color conversion matrix, the first matrix, and the second matrix. The second color matrix may be calculated according to the formula ColorMatrix2 = Inv(CCM2 * sRGB2XYZ_D65), where ColorMatrix2 denotes the second color matrix, CCM2 denotes the second color conversion matrix under the second light source, and sRGB2XYZ_D65 denotes the first matrix. That is, the second color matrix may be obtained as the matrix inverse of the product of the second color conversion matrix and the first matrix. It should be noted that the first matrix is a conversion matrix from the first space to the second space with the second light source as the reference light source, the first space being different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source. In some embodiments, the first light source may be a low color temperature light source (e.g., standard illuminant A), the second light source may be a high color temperature light source (e.g., D65 light), the first space may be the sRGB space, and the second space may be the XYZ space.
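As a concrete illustration of these two tag computations, the sketch below assumes 3×3 NumPy matrices; the actual matrix values would come from calibration and are not specified here:

```python
import numpy as np

def color_matrix_tags(ccm1, ccm2, srgb2xyz_d65, d65_to_a):
    """Compute the ColorMatrix1/ColorMatrix2 DNG tags.

    ccm1:         3x3 color conversion matrix under the first light source
    ccm2:         3x3 color conversion matrix under the second light source
    srgb2xyz_d65: 3x3 sRGB-to-XYZ matrix referenced to the second light source (D65)
    d65_to_a:     3x3 adaptation matrix from D65 reference white to A reference white
    """
    # ColorMatrix1 = Inv(CCM1 * sRGB2XYZ_D65 * D65toA)
    color_matrix_1 = np.linalg.inv(ccm1 @ srgb2xyz_d65 @ d65_to_a)
    # ColorMatrix2 = Inv(CCM2 * sRGB2XYZ_D65)
    color_matrix_2 = np.linalg.inv(ccm2 @ srgb2xyz_d65)
    return color_matrix_1, color_matrix_2
```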
Referring to fig. 1 and 23, in some embodiments, the tag parameters further include a first front matrix and a second front matrix, and acquiring the tag parameters in the DNG image according to the metadata parameter information when multiple frames of original RAW images are captured further includes:
033: calculating to obtain a first front matrix according to the first color conversion matrix and the third matrix;
034: and calculating to obtain a second front matrix according to the second color conversion matrix and the third matrix.
Referring to fig. 2 and 23, in some embodiments, step 033 and step 034 may be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to calculate a first front matrix according to the first color conversion matrix and the third matrix; and calculating to obtain a second front matrix according to the second color conversion matrix and the third matrix.
In particular, the first front matrix may be calculated according to the formula ForwardMatrix1 = CCM1 * sRGB2XYZ_D50, where ForwardMatrix1 denotes the first front matrix, CCM1 denotes the first color conversion matrix under the first light source, and sRGB2XYZ_D50 denotes the third matrix. That is, the first front matrix is equal to the product of the first color conversion matrix and the third matrix. The second front matrix may be calculated according to the formula ForwardMatrix2 = CCM2 * sRGB2XYZ_D50, where ForwardMatrix2 denotes the second front matrix, CCM2 denotes the second color conversion matrix under the second light source, and sRGB2XYZ_D50 denotes the third matrix. That is, the second front matrix is equal to the product of the second color conversion matrix and the third matrix. It should be noted that the third matrix is a conversion matrix from the first space to the second space with the third light source as the reference light source. In some embodiments, the first light source may be a low color temperature light source (e.g., standard illuminant A), the second light source may be a high color temperature light source (e.g., D65 light), the third light source may be D50 light, the first space may be the sRGB space, and the second space may be the XYZ space.
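Following the same pattern, the front matrix tags are plain matrix products (again a sketch with assumed 3×3 NumPy inputs):

```python
import numpy as np

def forward_matrix_tags(ccm1, ccm2, srgb2xyz_d50):
    """Compute the ForwardMatrix1/ForwardMatrix2 DNG tags.

    ccm1, ccm2:   3x3 color conversion matrices under the first/second light source
    srgb2xyz_d50: 3x3 sRGB-to-XYZ matrix referenced to the third light source (D50)
    """
    forward_matrix_1 = ccm1 @ srgb2xyz_d50  # ForwardMatrix1 = CCM1 * sRGB2XYZ_D50
    forward_matrix_2 = ccm2 @ srgb2xyz_d50  # ForwardMatrix2 = CCM2 * sRGB2XYZ_D50
    return forward_matrix_1, forward_matrix_2
```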
Referring to fig. 24, in some embodiments, generating a DNG file according to a target RAW image, a tag parameter, and metadata parameter information includes:
041: and writing the label parameters, the metadata parameter information and the data of the target RAW image into a blank file according to the DNG coding specification to generate a DNG file.
Referring to fig. 2 and 24, in some embodiments, step 041 may be implemented by one or more processors 20. That is, the one or more processors 20 are further configured to write the tag parameters, the metadata parameter information, and the data of the target RAW image into the blank file according to the DNG encoding specification to generate a DNG file.
It should be noted that, when synthesizing the multiple frames of original RAW images, the original RAW images exposed with the calibrated exposure value are taken as the reference images for fusion; therefore, the shooting parameters in the metadata parameter information used when creating the DNG file are the shooting parameters of the original RAW images exposed with the calibrated exposure value.
Referring to fig. 25, in some embodiments, the image processing method further includes:
07: the DNG file is parsed to generate a DNG image.
Referring to fig. 2 and 25, in some embodiments, step 07 may be implemented by one or more processors 20. That is, the one or more processors 20 are also operative to parse the DNG file to generate a DNG image.
After obtaining the DNG file, the processor 20 may perform parsing according to the tag parameter, the metadata parameter information, and the data of the target RAW image in the DNG file to obtain an image in the DNG format. In some embodiments, processor 20 outputs the retrieved DNG file to an application (e.g., an album), which opens and parses the DNG file to generate a DNG image, and displays the DNG image.
Referring to fig. 26, in some embodiments, after the DNG image is generated, the DNG image may be imported into post-processing software to perform post-adjustment on the DNG image to obtain the target image. The post-adjustment includes, but is not limited to, at least one of a luma adjustment, a chroma adjustment, and a resize adjustment.
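Outside dedicated post-processing software, a DNG file produced this way can also be opened programmatically. A minimal Python sketch using the rawpy library is shown below; the file path and adjustment values are illustrative only:

```python
import rawpy
from PIL import Image

# Open the generated DNG file and apply a simple post-adjustment.
# "target.dng" is an illustrative path, not a name used by the patent.
with rawpy.imread("target.dng") as raw:
    # Demosaic with the camera white balance and a slight brightness gain.
    rgb = raw.postprocess(use_camera_wb=True, bright=1.2)

Image.fromarray(rgb).save("target_adjusted.png")  # save the adjusted image
```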
Compared with a single-frame original RAW image, the target RAW image obtained by high-dynamic fusion of the multi-frame original RAW image has a higher dynamic range and higher definition. And the target RAW image is converted into a DNG file, so that the user can conveniently export the DNG file to be processed in later software.
Referring to fig. 27, the present application further provides an electronic device 1000. The electronic device 1000 according to the present embodiment includes the lens 300, the housing 200, and the image processing apparatus 100 according to any of the above embodiments. The lens 300 and the image processing apparatus 100 are combined with the housing 200. The lens 300 cooperates with the image sensor 10 of the image processing apparatus 100 to form an image.
The electronic device 1000 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (e.g., a smart watch, a smart bracelet, smart glasses, or a smart helmet), an unmanned aerial vehicle, a head-mounted display device, etc., without limitation.
In the electronic device 1000 of the present application, after the image processing apparatus 100 obtains a target RAW image by high-dynamic fusion of multiple frames of original RAW images, the target RAW image is converted into a DNG file. Therefore, on one hand, compared with a single-frame original RAW image, the target RAW image synthesized from multiple frames of original RAW images carries a larger amount of image information, a wider dynamic range, and higher definition; on the other hand, converting the target RAW image into a DNG file, a format with unified encoding and parsing, allows the user to export the file to post-processing software.
Referring to fig. 28, the present application also provides a non-volatile computer readable storage medium 400 containing a computer program. The computer program, when executed by the processor 60, causes the processor 60 to perform the image processing method of any of the above embodiments.
For example, referring to fig. 1 and 28, the computer program, when executed by the processor 60, causes the processor 60 to perform the steps of:
01: acquiring a plurality of frames of original RAW images;
02: carrying out high dynamic range image processing on a plurality of frames of original RAW images to obtain a target RAW image;
03: acquiring a label parameter in the DNG image according to metadata parameter information when a plurality of frames of original RAW images are shot; and
04: and generating a DNG file according to the target RAW image, the label parameter and the metadata parameter information.
The processor 60 may be the same as the processor 20 provided in the image processing apparatus 100, or the processor 60 may be provided in the electronic device 1000, that is, the processor 60 may not be the same as the processor 20 provided in the image processing apparatus 100, which is not limited herein.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (24)

1. An image processing method, characterized by comprising:
acquiring a plurality of frames of original RAW images;
carrying out high dynamic range image processing on a plurality of frames of original RAW images to obtain a target RAW image;
acquiring a label parameter in the DNG image according to metadata parameter information when a plurality of frames of the original RAW image are shot; and
and generating a DNG file according to the target RAW image, the label parameter and the metadata parameter information.
2. The image processing method according to claim 1, wherein a plurality of frames of the original RAW image are exposed with at least two different exposure values.
3. The image processing method according to claim 1, wherein the plurality of frames of the original RAW image comprise a first original RAW image exposed with a calibrated exposure value and a second original RAW image exposed with an exposure value different from the calibrated exposure value.
4. The image processing method according to claim 3, characterized in that the image processing method further comprises:
performing light measurement on the environment, and acquiring the calibrated exposure value according to the measured environment brightness; or
And acquiring the calibrated exposure value according to an exposure parameter determined by a user, wherein the exposure parameter comprises at least one of an exposure value, sensitivity and exposure duration.
5. The image processing method according to claim 1, wherein the performing high dynamic range image processing on a plurality of frames of the original RAW image to obtain a target RAW image comprises:
carrying out image registration on a plurality of frames of original RAW images;
fusing the registered original RAW images with the same exposure value to obtain a multi-frame intermediate RAW image;
acquiring weights corresponding to all pixels in each frame of the intermediate RAW image; and
and fusing the multi-frame intermediate RAW images according to the weight to acquire the target RAW image.
6. The method according to claim 5, wherein the performing high dynamic range image processing on a plurality of frames of the original RAW image to obtain a target RAW image further comprises:
detecting a motion region of the original RAW image after registration;
the fusing the registered original RAW images with the same exposure value to obtain a multi-frame intermediate RAW image, including:
and aiming at the original RAW image after registration of each frame with the same exposure value, performing first fusion processing on pixel points located in the motion area, and performing second fusion processing on pixel points located outside the motion area to obtain a multi-frame intermediate RAW image, wherein the first fusion processing is different from the second fusion processing.
7. The image processing method according to claim 6, wherein the fusing the registered original RAW images having the same exposure value to obtain a plurality of frames of intermediate RAW images, further comprises:
selecting any one frame of the original RAW images after the registration of the multiple frames with the same exposure value as a first reference image, and using other frames as a first non-reference image;
the first fusion processing is adopted for the pixel points in the motion area, and the first fusion processing comprises the following steps:
if all the pixel points at the same position of the first non-reference image are located in the motion area, the pixel value of the pixel point at the corresponding position of the first reference image is used as the pixel value of the pixel point corresponding to the intermediate RAW image after fusion;
and the second fusion processing is adopted for the pixel points outside the motion area, and the second fusion processing comprises the following steps:
and if at least one pixel point of the pixel points at the same position of the first non-reference image is located outside the motion area, taking the mean value of the pixel value of the pixel point at the corresponding position of the first reference image and the pixel values of the pixel points which are at the corresponding positions of the first non-reference image and outside the motion area as the pixel value of the pixel point corresponding to the intermediate RAW image after fusion.
8. The image processing method according to claim 5, wherein the plurality of frames of the original RAW image comprise a first original RAW image exposed with a calibrated exposure value and a second original RAW image exposed with an exposure value different from the calibrated exposure value, and the acquiring the weights corresponding to all pixels in each frame of the intermediate RAW image comprises:
selecting the intermediate RAW image obtained after the first original RAW image is fused as a second reference image;
performing second gray processing on the second reference image to obtain a second gray image;
and acquiring the weight corresponding to the pixel point to be calculated according to the average brightness and the variance of the second gray image and the pixel value of the pixel point to be calculated in the intermediate RAW image.
9. The method according to claim 1, wherein the metadata parameter information includes shooting parameters of the original RAW image, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source, the tag parameters include a first color matrix and a second color matrix, and the obtaining tag parameters in the DNG image according to the metadata parameter information when shooting a plurality of frames of the original RAW image includes:
acquiring the first color matrix according to the first color conversion matrix, the first matrix and the second matrix;
acquiring a second color matrix according to the second color conversion matrix and the first matrix; wherein:
the first matrix is a conversion matrix from a first space to a second space by taking a second light source as a reference light source, wherein the first space is different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
10. The image processing method according to claim 9, wherein the tag parameters further include a first front matrix and a second front matrix, and the acquiring the tag parameters in the DNG image according to the metadata parameter information at the time of capturing the plurality of frames of the original RAW image further includes:
calculating to obtain the first front matrix according to the first color conversion matrix and the third matrix;
calculating to obtain the second front matrix according to the second color conversion matrix and the third matrix; wherein:
the third matrix is a conversion matrix from the first space to the second space by taking a third light source as a reference light source.
11. The method according to claim 1, wherein the generating a DNG file according to the target RAW image, the tag parameter, and the metadata parameter information includes:
and writing the label parameters, the metadata parameter information and the data of the target RAW image into a blank file according to a DNG coding specification to generate the DNG file.
12. An image processing apparatus, characterized in that the image processing apparatus comprises an image sensor and one or more processors; exposing a pixel array in the image sensor to acquire a plurality of frames of original RAW images;
one or more of the processors to:
carrying out high dynamic range image processing on a plurality of frames of original RAW images to obtain a target RAW image;
acquiring a label parameter in the DNG image according to metadata parameter information when a plurality of frames of the original RAW image are shot; and
and generating a DNG file according to the target RAW image, the label parameter and the metadata parameter information.
13. The image processing apparatus according to claim 12, wherein a plurality of frames of the original RAW image are exposed with at least two different exposure values.
14. The image processing apparatus according to claim 12, wherein the plurality of frames of the original RAW image comprise a first original RAW image exposed with a calibrated exposure value and a second original RAW image exposed with an exposure value different from the calibrated exposure value.
15. The image processing apparatus of claim 14, wherein the one or more processors are further configured to:
performing light measurement on the environment, and acquiring the calibrated exposure value according to the measured environment brightness; or
And acquiring the calibrated exposure value according to an exposure parameter determined by a user, wherein the exposure parameter comprises at least one of the exposure value, the sensitivity and the exposure duration.
16. The image processing apparatus of claim 12, wherein the one or more processors are further configured to:
carrying out image registration on a plurality of frames of original RAW images;
fusing the registered original RAW images with the same exposure value to obtain a multi-frame intermediate RAW image;
acquiring weights corresponding to all pixels in each frame of the intermediate RAW image; and
and fusing a plurality of frames of the intermediate RAW images according to the weight to acquire the target RAW image.
17. The image processing apparatus of claim 16, wherein the one or more processors are further configured to:
detecting a motion region of the original RAW image after registration; and
and aiming at the original RAW image after registration of each frame with the same exposure value, performing first fusion processing on pixel points located in the motion area, and performing second fusion processing on pixel points located outside the motion area to obtain a multi-frame intermediate RAW image, wherein the first fusion processing is different from the second fusion processing.
18. The image processing apparatus of claim 17, wherein the one or more processors are further configured to:
selecting any one frame of the original RAW images after the registration of the multiple frames with the same exposure value as a first reference image, and using other frames as a first non-reference image;
if all the pixel points at the same position of the first non-reference image are located in the motion area, the pixel value of the pixel point at the corresponding position of the first reference image is used as the pixel value of the pixel point corresponding to the intermediate RAW image after fusion;
and if at least one pixel point of the pixel points at the same position of the first non-reference image is located outside the motion area, taking the mean value of the pixel value of the pixel point at the corresponding position of the first reference image and the pixel values of the pixel points which are at the corresponding positions of the first non-reference image and outside the motion area as the pixel value of the pixel point corresponding to the intermediate RAW image after fusion.
19. The image processing apparatus of claim 16, wherein the plurality of frames of the original RAW image comprise a first original RAW image exposed with a calibrated exposure value and a second original RAW image exposed with an exposure value different from the calibrated exposure value, the one or more processors further configured to:
selecting the intermediate RAW image obtained after the first original RAW image is fused as a second reference image;
performing second gray scale processing on the second reference image to obtain a second gray scale image;
and acquiring the weight corresponding to the pixel point to be calculated according to the average brightness and the variance of the second gray image and the pixel value of the pixel point to be calculated in the intermediate RAW image.
20. The apparatus of claim 12, wherein the metadata parameter information comprises capture parameters of the original RAW image, a first color conversion matrix under a first light source, and a second color conversion matrix under a second light source, wherein the tag parameters comprise a first color matrix and a second color matrix, and wherein the one or more processors are further configured to:
obtaining the first color matrix according to the first color conversion matrix, the first matrix and the second matrix;
acquiring the second color matrix according to the second color conversion matrix and the first matrix; wherein:
the first matrix is a conversion matrix from a first space to a second space by taking a second light source as a reference light source, wherein the first space is different from the second space; the second matrix is a conversion matrix from the reference white of the second light source to the reference white of the first light source.
21. The image processing apparatus of claim 20, wherein the label parameters further comprise a first front matrix and a second front matrix, the one or more processors further to:
calculating to obtain the first front matrix according to the first color conversion matrix and the third matrix;
calculating to obtain the second front matrix according to the second color conversion matrix and the third matrix; wherein:
the third matrix is a conversion matrix from the first space to the second space by taking a third light source as a reference light source.
22. The image processing apparatus of claim 12, wherein the one or more processors are further configured to:
and writing the label parameters, the metadata parameter information and the data of the target RAW image into a blank file according to a DNG coding specification to generate the DNG file.
23. An electronic device, comprising:
a lens; and
the image processing apparatus of one of claims 12 to 22, the lens cooperating with an image sensor of the image processing apparatus for imaging.
24. A non-transitory computer-readable storage medium containing a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the image processing method of any one of claims 1 to 11.
CN202110221004.2A 2021-02-26 2021-02-26 Image processing method, image processing apparatus, electronic device, and readable storage medium Active CN114979500B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110221004.2A CN114979500B (en) 2021-02-26 2021-02-26 Image processing method, image processing apparatus, electronic device, and readable storage medium
PCT/CN2021/137887 WO2022179256A1 (en) 2021-02-26 2021-12-14 Image processing method, image processing apparatus, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110221004.2A CN114979500B (en) 2021-02-26 2021-02-26 Image processing method, image processing apparatus, electronic device, and readable storage medium

Publications (2)

Publication Number Publication Date
CN114979500A true CN114979500A (en) 2022-08-30
CN114979500B CN114979500B (en) 2023-08-08

Family

ID=82974260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110221004.2A Active CN114979500B (en) 2021-02-26 2021-02-26 Image processing method, image processing apparatus, electronic device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN114979500B (en)
WO (1) WO2022179256A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135293A (en) * 2023-02-24 2023-11-28 荣耀终端有限公司 Image processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979235A (en) * 2016-05-30 2016-09-28 努比亚技术有限公司 Image processing method and terminal
CN110198417A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110198419A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110430370A (en) * 2019-07-30 2019-11-08 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111726516A (en) * 2019-10-23 2020-09-29 北京小米移动软件有限公司 Image processing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018107664A (en) * 2016-12-27 2018-07-05 キヤノン株式会社 Image processing device, image processing method, imaging apparatus, and program
CN111418201B (en) * 2018-03-27 2021-10-15 华为技术有限公司 Shooting method and equipment
CN109993722B (en) * 2019-04-09 2023-04-18 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110022469B (en) * 2019-04-09 2021-03-02 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979235A (en) * 2016-05-30 2016-09-28 努比亚技术有限公司 Image processing method and terminal
CN110198417A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110198419A (en) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110430370A (en) * 2019-07-30 2019-11-08 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111726516A (en) * 2019-10-23 2020-09-29 北京小米移动软件有限公司 Image processing method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135293A (en) * 2023-02-24 2023-11-28 荣耀终端有限公司 Image processing method and electronic device

Also Published As

Publication number Publication date
WO2022179256A1 (en) 2022-09-01
CN114979500B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
US11849224B2 (en) Global tone mapping
US10007967B2 (en) Temporal and spatial video noise reduction
KR101643122B1 (en) Image processing device, image processing method and recording medium
CN110022469B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108668093B (en) HDR image generation method and device
CN108012078A (en) Brightness of image processing method, device, storage medium and electronic equipment
CN110213502A (en) Image processing method, device, storage medium and electronic equipment
CN107911683B (en) Image white balancing treatment method, device, storage medium and electronic equipment
KR20120114899A (en) Image processing method and image processing apparatus
WO2019104047A1 (en) Global tone mapping
CN107948511B (en) Brightness of image processing method, device, storage medium and brightness of image processing equipment
CN107920205B (en) Image processing method, device, storage medium and electronic equipment
CN113643214A (en) Image exposure correction method and system based on artificial intelligence
CN114979500B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
US11640654B2 (en) Image processing method and apparatus
JP2012109849A (en) Imaging device
JP6210772B2 (en) Information processing apparatus, imaging apparatus, control method, and program
JP2007312294A (en) Imaging apparatus, image processor, method for processing image, and image processing program
JP2007293686A (en) Imaging apparatus, image processing apparatus, image processing method and image processing program
JP2007221678A (en) Imaging apparatus, image processor, image processing method and image processing program
CN109447925B (en) Image processing method and device, storage medium and electronic equipment
US11153467B2 (en) Image processing
CN115037915B (en) Video processing method and processing device
JP7214407B2 (en) Image processing device, image processing method, and program
US20240137658A1 (en) Global tone mapping

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant