CN115063333A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN115063333A
CN115063333A (application CN202210751338.5A)
Authority
CN
China
Prior art keywords
image
raw
tone mapping
mode
dynamic range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210751338.5A
Other languages
Chinese (zh)
Inventor
何慕威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Oppo Communication Technology Co., Ltd.
Original Assignee
Xi'an Oppo Communication Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Oppo Communication Technology Co., Ltd.
Priority to CN202210751338.5A
Publication of CN115063333A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method, an apparatus, an electronic device, a storage medium and a computer program product. The method comprises: acquiring a first image and a second image, where the exposure of the first image is less than the exposure of the second image; demosaicing the first image to obtain a third image, and demosaicing the second image to obtain a fourth image; performing high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image; and performing image signal processing on the high dynamic range image to obtain a fifth image. The method can improve image sharpness.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to image technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of imaging technology, an electronic device captures multiple images and fuses them in order to obtain a clearer result image.
However, during image processing, the image obtained by fusing multiple images often suffers from low sharpness.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment, a computer readable storage medium and a computer program product, which can improve the definition of an image.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
demosaicing the first image to obtain a third image, and demosaicing the second image to obtain a fourth image;
carrying out high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and carrying out image signal processing on the high dynamic range image to obtain a fifth image.
In a second aspect, the present application further provides an image processing apparatus. The device comprises:
the acquisition module is used for acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
the demosaicing processing module is used for demosaicing the first image to obtain a third image and demosaicing the second image to obtain a fourth image;
the fusion module is used for carrying out high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and the image signal processing module is used for carrying out image signal processing on the high dynamic range image to obtain a fifth image.
In a third aspect, the present application further provides an electronic device. The electronic device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
demosaicing the first image to obtain a third image, and demosaicing the second image to obtain a fourth image;
performing high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and carrying out image signal processing on the high dynamic range image to obtain a fifth image.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
demosaicing the first image to obtain a third image, and demosaicing the second image to obtain a fourth image;
performing high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and carrying out image signal processing on the high dynamic range image to obtain a fifth image.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
demosaicing the first image to obtain a third image, and demosaicing the second image to obtain a fourth image;
carrying out high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and carrying out image signal processing on the high dynamic range image to obtain a fifth image.
With the image processing method, the image processing apparatus, the electronic device, the computer-readable storage medium and the computer program product, a first image and a second image are acquired, where the exposure of the first image is less than that of the second image. The first and second images are each demosaiced to obtain a third and a fourth image with mosaic artifacts removed, the third and fourth images are fused by high dynamic range imaging to obtain a high dynamic range image, and image signal processing is then applied to the high dynamic range image. Because image signal processing is not performed before demosaicing, the loss of demosaicing accuracy that results from demosaicing an image after image signal processing is avoided, and a sharper, more accurate fifth image is obtained.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be derived from them without creative effort.
FIG. 1 is a flow diagram of a method of image processing in one embodiment;
FIG. 2 is a flow diagram of image signal processing in one embodiment;
FIG. 3 is a flow diagram of acquiring a high dynamic range image in one embodiment;
FIG. 4 is a flow diagram of acquiring an eighth image in YUV mode in one embodiment;
FIG. 5 is a flow chart of a method of image processing in another embodiment;
FIG. 6 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In an embodiment, as shown in fig. 1, an image processing method is provided. This embodiment illustrates the method as applied to an electronic device, which may be a terminal or a server; the method may also be applied to a system comprising a terminal and a server and be realized through interaction between the two.
The terminal can be but not limited to various personal computers, notebook computers, smart phones, tablet computers, internet of things equipment and portable wearable equipment, and the internet of things equipment can be smart sound boxes, smart televisions, smart air conditioners, smart vehicle-mounted equipment and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In this embodiment, the image processing method includes steps 102 to 108:
102, acquiring a first image and a second image; the exposure amount of the first image is smaller than the exposure amount of the second image.
The exposure amount is the integral over the exposure time t of the illuminance E received by a surface element of the subject: exposure amount = illuminance × exposure time. The illuminance is determined by the aperture, and the exposure time is controlled by the shutter, so the exposure amount is controlled jointly by the aperture and the shutter.
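The product relationship above can be sketched numerically (a hypothetical illustration; the lux and shutter values are made up):

```python
def exposure_amount(illuminance_lux, shutter_seconds):
    """Exposure amount H = illuminance E x exposure time t (lux-seconds)."""
    return illuminance_lux * shutter_seconds

# Halving the shutter time at a fixed aperture halves the exposure amount,
# which is why the EV- frame is darker than the EV0 frame below.
h_ev0 = exposure_amount(100.0, 1 / 50)     # normally exposed frame
h_minus = exposure_amount(100.0, 1 / 100)  # one stop under-exposed
```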
The exposure of the first image is less than the exposure of the second image; that is, the first image is a dark frame, or an under-exposed image, relative to the second image. Optionally, the first image is an under-exposed image (EV-) and the second image is a normally exposed image (EV0).
And 104, performing demosaicing processing on the first image to obtain a third image, and performing demosaicing processing on the second image to obtain a fourth image.
Demosaicing (Demosaic), i.e. color interpolation, restores the Bayer data obtained from the image sensor to real-world colors suitable for a color display device; that is, it converts the image from RAW mode to RGB mode, in which colors can be adjusted and processed more accurately.
Optionally, the electronic device performs demosaicing on the first image by using a demosaicing algorithm to obtain a third image, and performs demosaicing on the second image by using a demosaicing algorithm to obtain a fourth image. The demosaicing algorithm may be a traditional demosaicing algorithm or an artificial-intelligence demosaicing algorithm (AI Demosaic); the latter can demosaic the image more accurately.
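For intuition only, here is a deliberately crude half-resolution demosaic of an RGGB Bayer mosaic in Python; real demosaicing algorithms (traditional or AI-based) interpolate to full resolution and are far more sophisticated:

```python
def demosaic_rggb_halfres(bayer, width, height):
    """Crude half-resolution demosaic: each 2x2 RGGB tile of the row-major
    RAW sample list `bayer` becomes one (R, G, B) pixel, with the two green
    samples averaged. Illustrative only, not a production algorithm."""
    rgb = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r = bayer[y * width + x]
            g1 = bayer[y * width + x + 1]
            g2 = bayer[(y + 1) * width + x]
            b = bayer[(y + 1) * width + x + 1]
            row.append((r, (g1 + g2) / 2, b))
        rgb.append(row)
    return rgb
```

For a single 2x2 tile with R=200, G=100 and 120, B=60, the output pixel is (200, 110.0, 60).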
And 106, carrying out high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image.
Optionally, the electronic device performs High Dynamic Range Imaging fusion on the third image and the fourth image by using a High Dynamic Range Imaging (HDR) algorithm to obtain a High Dynamic Range image.
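A per-pixel sketch of exposure fusion in Python; the knee threshold and linear blend below are assumptions for illustration, not the patent's actual HDR algorithm:

```python
def hdr_fuse_pixel(short_px, long_px, ratio, white=1023, knee=0.875):
    """Fuse one pixel of a 10-bit exposure pair. Below knee*white the
    normally exposed (long) value is kept; toward `white` we blend into
    the under-exposed (short) value scaled by the exposure `ratio`, which
    recovers detail in highlights the long frame clips. The knee value
    and linear weighting are illustrative choices."""
    lin_short = short_px * ratio                        # match brightness scales
    t = (long_px - knee * white) / ((1 - knee) * white)
    t = min(1.0, max(0.0, t))                           # blend factor in [0, 1]
    return (1 - t) * long_px + t * lin_short
```

A mid-tone pixel (long = 500) is kept from the long frame, while a clipped pixel (long = 1023) is replaced by the scaled short-frame value.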
And step 108, carrying out image signal processing on the high dynamic range image to obtain a fifth image.
Optionally, the electronic device inputs the high dynamic range image into an image signal processor, which performs Image Signal Processing (ISP) on the high dynamic range image to obtain a fifth image.
The image signal processing includes black level subtraction, Lens Shading Correction (LSC), White Balance processing (WB gain), or Color Correction Matrix processing (CCM). In other embodiments, the image signal processing may include other operations, such as, but not limited to, deblurring and sharpening.
Optionally, as shown in fig. 2, the electronic device performs black level subtraction, lens shading correction compensation, white balance processing, and color correction matrix processing on the 16-bit RGB high dynamic range image in sequence to obtain a 16-bit fifth image.
Optionally, the high dynamic range image is an image in RGB mode, and black level subtraction is performed on the R channel, the G channel, and the B channel of the high dynamic range image in RGB mode respectively by using the following formulas:
Rout[i]=Rin[i]-Blacklevel
Gout[i]=Gin[i]-Blacklevel
Bout[i]=Bin[i]-Blacklevel
wherein Rout[i] is the pixel value of the ith R-channel pixel of the output, Rin[i] is the pixel value of the ith R-channel pixel of the input, and Blacklevel is the black level value; Gout[i] is the pixel value of the ith G-channel pixel of the output, Gin[i] is the pixel value of the ith G-channel pixel of the input; Bout[i] is the pixel value of the ith B-channel pixel of the output, and Bin[i] is the pixel value of the ith B-channel pixel of the input.
Further, when the high dynamic range image is an image in RGB mode, a normalized black level subtraction may instead be applied to the R channel, the G channel, and the B channel respectively:
Rout[i] = (Rin[i] - Blacklevel) / (Whitelevel - Blacklevel) × Whitelevel
Gout[i] = (Gin[i] - Blacklevel) / (Whitelevel - Blacklevel) × Whitelevel
Bout[i] = (Bin[i] - Blacklevel) / (Whitelevel - Blacklevel) × Whitelevel
where Whitelevel is the maximum pixel value of the image, a fixed value; for example, the maximum value of a 10-bit image is 1023.
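Both variants of the black-level step can be sketched per pixel in Python (the black level of 64 is an illustrative value, not taken from the patent):

```python
def subtract_black_level(px, black=64):
    """Plain black-level subtraction: out = in - Blacklevel, clamped at 0."""
    return max(0, px - black)

def subtract_black_level_normalized(px, black=64, white=1023):
    """Normalized variant: rescale the remaining [Blacklevel, Whitelevel]
    range back onto [0, Whitelevel], so a full-scale input stays full scale."""
    return max(0, px - black) * white / (white - black)
```

A pixel at the black level maps to 0 in both variants, and a full-scale 10-bit pixel (1023) stays at 1023 under the normalized form.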
Optionally, the lens shading correction compensation comprises: first interpolating the pre-obtained R table, G table, and B table of the LSC table to the same size as the input RGB image, and then applying them to the input image as follows:
Rout[i] = Rin[i] × LSC_Rtable[i]
Gout[i] = Gin[i] × LSC_Gtable[i]
Bout[i] = Bin[i] × LSC_Btable[i]
where Rout[i] is the pixel value of the ith R-channel pixel of the output, Rin[i] is the pixel value of the ith R-channel pixel of the input, LSC_Rtable[i] is the ith value in the R table of the LSC table, LSC_Gtable[i] is the ith value in the G table of the LSC table, and LSC_Btable[i] is the ith value in the B table of the LSC table.
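The two LSC sub-steps, interpolating the coarse gain table up to image size and then multiplying per pixel, can be sketched in one dimension (the table values are illustrative; real LSC tables are 2-D):

```python
def interp_table(table, n):
    """Linearly interpolate a coarse 1-D LSC gain table to length n,
    mimicking the resize-to-image-size step described above."""
    out = []
    for i in range(n):
        pos = i * (len(table) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(table) - 1)
        frac = pos - lo
        out.append(table[lo] * (1 - frac) + table[hi] * frac)
    return out

def apply_lsc_gain(channel, gains):
    """Per-pixel multiply of one colour channel by its interpolated gains:
    out[i] = in[i] * LSC_table[i]."""
    return [px * g for px, g in zip(channel, gains)]
```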
Optionally, the white balance processing is performed on the input RGB image as follows:
Rout[i] = Rin[i] × R_gain
Bout[i] = Bin[i] × B_gain
where Rout[i] is the output pixel value of the ith R-channel pixel, Rin[i] is the input pixel value of the ith R-channel pixel, R_gain is the white balance gain value of the R channel, and B_gain is the white balance gain value of the B channel.
Since the human eye is more sensitive to green (the G channel), the white balance is computed relative to green; the white balance gain value G_gain of the G channel is therefore 1.0, i.e. the G channel is left unchanged.
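A minimal sketch of this white-balance step (the gain values in the usage note are illustrative):

```python
def white_balance(rgb, r_gain, b_gain):
    """Apply per-channel white-balance gains to one RGB pixel. G is the
    reference channel (g_gain = 1.0), so it is left untouched."""
    r, g, b = rgb
    return (r * r_gain, g, b * b_gain)
```

For example, gains of r_gain = 1.5 and b_gain = 1.25 map (100, 150, 200) to (150.0, 150, 250.0).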
Alternatively, the electronic device obtains a color correction matrix, multiplies the color correction matrix with the input RGB image, and outputs the RGB image.
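The CCM step multiplies each RGB pixel by a 3x3 matrix; a sketch follows (the example matrix is illustrative, chosen so each row sums to 1 and neutral grey is preserved):

```python
def apply_ccm(rgb, ccm):
    """Multiply one RGB pixel (as a 3-tuple) by a 3x3 colour-correction
    matrix, returning the corrected (R, G, B) tuple."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in ccm)

# Illustrative saturation-boosting CCM; rows sum to 1, so grey is unchanged.
EXAMPLE_CCM = [[1.5, -0.25, -0.25],
               [-0.25, 1.5, -0.25],
               [-0.25, -0.25, 1.5]]
```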
Optionally, the image signal processing is performed in the first color depth domain; that is, the input images are all 16-bit RGB images and the output images are all 16-bit RGB images.
With the image processing method, a first image and a second image are acquired, where the exposure of the first image is less than that of the second image. The first and second images are each demosaiced to obtain a third and a fourth image with mosaic artifacts removed, the third and fourth images are fused by high dynamic range imaging to obtain a high dynamic range image, and image signal processing is then applied to the high dynamic range image. Because image signal processing is not performed before demosaicing, the loss of demosaicing accuracy that results from demosaicing an image after image signal processing is avoided, and a sharper, more accurate fifth image is obtained.
In one embodiment, acquiring a first image and a second image comprises: acquiring a first RAW image and a plurality of frames of second RAW images; the exposure amount of the first RAW image is smaller than that of the second RAW image; registering a target RAW image in the first RAW image and the multi-frame second RAW images to obtain a first image, and registering the multi-frame second RAW images to obtain a second RAW image after multi-frame registration; and performing spatial domain fusion on the multi-frame registration processed second RAW image to obtain a second image.
A RAW image is raw image data from the sensor that has not yet been processed.
Optionally, the electronic device performs registration processing on the first RAW image and a target RAW image in the multiple second RAW images by using a multiple-frame registration algorithm to obtain a first image, and performs registration processing on the multiple second RAW images by using a multiple-frame registration algorithm to obtain a multiple-frame registered second RAW image.
The processing of the multi-frame registration algorithm comprises the following steps: extracting features from at least two images to obtain feature points; searching for and finding matched feature point pairs based on the feature points; obtaining image space coordinate transformation parameters from the matched feature point pairs; and performing image registration using the coordinate transformation parameters.
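As a toy stand-in for these steps, the following Python sketch recovers a pure integer translation by exhaustive search; a real multi-frame registration pipeline matches feature points and solves for full spatial coordinate-transform parameters instead:

```python
def estimate_shift(ref, mov, max_shift=2):
    """Find the integer offset (dx, dy) such that mov[y+dy][x+dx] best
    matches ref[y][x], minimising the mean squared difference over the
    overlapping region. Translation-only; illustrative, not the patent's
    feature-based registration algorithm."""
    h, w = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        err += (ref[y][x] - mov[y + dy][x + dx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

# `mov` is `ref` shifted one pixel to the right (zero-filled border),
# so the recovered offset is (dx, dy) = (1, 0).
ref = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
mov = [[0, 1, 2, 3], [0, 5, 6, 7], [0, 9, 10, 11], [0, 13, 14, 15]]
```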
In an optional implementation manner, the electronic device uses a sharpness detection algorithm to take a second RAW image with the highest sharpness in the multiple second RAW images as a target RAW image, and performs registration processing on the first RAW image and the target RAW image in the multiple second RAW images to obtain the first image.
In another optional implementation, the electronic device uses the first exposed second RAW image of the multiple frames of second RAW images as the target RAW image, and performs registration processing on the first RAW image and the target RAW image of the multiple frames of second RAW images to obtain the first image.
In other embodiments, the electronic device may also determine the target RAW image in other manners, so as to perform registration processing with the first RAW image to obtain the first image.
Optionally, the electronic device averages the multiple frames of registered second RAW images in the spatial domain to fuse them into a second image. During spatial-domain fusion, ghosting in the image can be removed and noise reduced, so a clearer second image is obtained.
In another optional implementation manner, the electronic device converts the second RAW image after the multi-frame registration processing into a frequency domain, performs frequency domain fusion, and converts an image obtained by the frequency domain fusion into a spatial domain to obtain a second image. The transform is performed from the spatial domain to the frequency domain, i.e., fourier transform, and from the frequency domain to the spatial domain, i.e., inverse fourier transform.
In this embodiment, the electronic device acquires a first RAW image and multiple frames of second RAW images and performs registration processing on them, obtaining a registered first image and multiple registered second RAW images; the registered images can be fused more accurately in subsequent processing, yielding a more accurate and clearer second image. Performing the spatial-domain fusion of the registered second RAW images in the RAW domain preserves the original noise characteristics of the image data, which in turn helps ensure the sharpness of the fused second image.
In one embodiment, as shown in fig. 3, different exposure parameters are set in the image sensor, resulting in one frame of a first RAW image and a plurality of frames of a second RAW image, the exposure amount of the first RAW image being smaller than that of the second RAW image; registering the multiple frames of second RAW images by adopting a multiple-frame registration algorithm, performing spatial domain fusion on the multiple frames of registered second RAW images by adopting a multiple-frame fusion algorithm to obtain a second image; performing registration processing on a target RAW image in the first RAW image and a target RAW image in the multiple frames of second RAW images by adopting a multi-frame registration algorithm to obtain a first image; demosaicing the first image to obtain a third image with the color depth of 10 bits, and demosaicing the second image to obtain a fourth image with the color depth of 10 bits; and carrying out high dynamic range imaging fusion on the third image with the color depth of 10 bits and the fourth image with the color depth of 10 bits to obtain a high dynamic range image with the color depth of 16 bits.
In one embodiment, after the image signal processing is performed on the high dynamic range image to obtain a fifth image, the method further includes: and carrying out tone mapping on the fifth image to obtain a sixth image.
Tone mapping (Tonemapping) is a computer graphics technique for approximating the appearance of high dynamic range images on media with a limited dynamic range.
Optionally, the tone mapping includes Global tone mapping (Global tone mapping) or Local tone mapping (Local tone mapping).
In an optional implementation manner, the electronic device performs global tone mapping on the fifth image to obtain a tone-mapped image; and carrying out local tone mapping on the tone mapping image to obtain a sixth image. The electronic equipment performs global tone mapping on the whole fifth image and then performs local tone mapping on local pixels of the tone-mapped image, so that tone mapping accuracy of the whole fifth image can be guaranteed, and partial pixels in the image can be processed more accurately to obtain a more accurate sixth image.
In another optional implementation, the electronic device performs global tone mapping or local tone mapping on the fifth image to obtain a sixth image.
In another optional implementation, the electronic device performs local tone mapping on the fifth image, and then performs global tone mapping to obtain a sixth image.
In other embodiments, the electronic device may also perform tone mapping in other manners to obtain the sixth image, which is not limited herein.
In this embodiment, the electronic device performs tone mapping on the fifth image only after performing image signal processing on the high dynamic range image, which allows the tone mapping effect to be debugged and output better. This avoids the problem that, if tone mapping were performed before image signal processing, the lens shading correction in the subsequent image signal processing would affect the tone mapping result; the tone mapping effect is therefore improved and a more accurate sixth image is obtained.
In one embodiment, the fifth image is in RGB mode; tone mapping the fifth image to obtain a sixth image, including: converting the fifth image into a gray mode, and performing tone mapping on the fifth image of the gray mode to obtain a sixth image of the gray mode; the method further comprises the following steps: dividing the sixth image of the gray scale mode and the fifth image of the gray scale mode to obtain a tone mapping mask image; and obtaining a seventh image in the RGB mode based on the tone mapping mask map and the fifth image in the RGB mode.
RGB mode uses the well-known three primary colors: R stands for red, G for green and B for blue. Grayscale mode represents an image with a single tone: the value of one pixel is represented by eight bits, which can express 256 levels of gray (including black and white), i.e. 256 gray lightness values.
Optionally, the electronic device divides each pixel in the sixth image in the grayscale mode by a pixel in a corresponding position in the fifth image in the grayscale mode to obtain a tone mapping mask map. The tone mapping mask is in RGB mode. The tone mapping mask map is a floating point type image.
In an alternative embodiment, the electronic device multiplies the tone mapping mask map and the fifth image in RGB mode to obtain a seventh image in RGB mode.
In another alternative embodiment, the electronic device performs dot multiplication on the tone mapping mask map and the R channel, the G channel, and the B channel in the fifth image in the RGB mode to obtain a seventh image in the RGB mode. In this embodiment, the electronic device may perform processing on each channel in the fifth image to obtain a more accurate seventh image in the RGB mode.
In other embodiments, the electronic device may obtain the seventh image in other manners, which is not limited herein.
In this embodiment, the electronic device converts the fifth image into the grayscale mode, performs tone mapping on the fifth image of the grayscale mode to obtain a sixth image of the grayscale mode, divides the sixth image of the grayscale mode and the fifth image of the grayscale mode to accurately obtain a tone mapping mask, and obtains a seventh image of the RGB mode based on the tone mapping mask and the fifth image of the RGB mode.
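The grey-mask flow just described, convert to grey, tone-map the grey image, divide mapped by original to get a gain mask, then multiply each RGB channel by the mask, can be sketched per pixel (BT.601 luma weights are an assumed choice for the grey conversion, and the halving curve in the test is a toy tone curve):

```python
def to_gray(rgb):
    """Single-channel 'grey mode' value for one RGB pixel, using BT.601
    luma weights (an assumption; the text does not fix the weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def tone_map_pixel(rgb, curve):
    """Tone-map the grey value with `curve`, form the gain mask
    mapped/original, and multiply each RGB channel by that gain,
    following the division-then-dot-multiplication scheme above."""
    y = to_gray(rgb)
    gain = curve(y) / y if y > 0 else 1.0
    return tuple(c * gain for c in rgb)
```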
In one embodiment, the color depth of the seventh image is a first color depth; the method further comprises the following steps: compressing the color depth of the seventh image to the second color depth to obtain an eighth image; the second color depth is less than the first color depth.
The color depth is the number of colors that can be displayed per pixel. The greater the color depth, the more colors are available and the more accurate the color representation of the image.
Illustratively, the first color depth is 16 bits and the second color depth is 10 bits.
It is understood that the fifth image of RGB mode, the fifth image of gray scale mode, the sixth image of gray scale mode and the seventh image of RGB mode are all the first color depth. The first image, the second image and the eighth image are all of the second color depth.
It can be understood that performing the intermediate image processing in the first color depth domain preserves the accuracy of the data.
In this embodiment, the electronic device compresses the color depth of the seventh image so that displays that cannot support the first color depth are accommodated; the compressed eighth image is compatible with the displays of the terminals.
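A sketch of the 16-bit to 10-bit compression (one simple convention; rounding or rescaling variants also exist):

```python
def compress_16_to_10(px16):
    """Compress a 16-bit sample (0..65535) to 10 bits (0..1023) by dropping
    the 6 least-significant bits; a rounding variant would scale by
    1023/65535 instead."""
    return px16 >> 6
```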
Optionally, the eighth image is in RGB mode; after compressing the color depth of the seventh image to the second color depth to obtain the eighth image, the method further includes: converting the eighth image in RGB mode into an eighth image in YUV mode.
And the eighth image in the RGB mode and the eighth image in the YUV mode are the second color depth.
In this embodiment, the electronic device converts the eighth image in the RGB mode into the YUV mode, so as to obtain the eighth image in the YUV mode more suitable for the display process.
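A per-pixel sketch of the RGB-to-YUV conversion, using the full-range BT.601 analog matrix (one common convention; the text does not specify which conversion matrix the device uses):

```python
def rgb_to_yuv(r, g, b):
    """Convert one full-range RGB pixel to YUV with BT.601 analog
    coefficients (an assumed convention). Y is luma; U and V are the
    blue- and red-difference chroma components, zero for neutral grey."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v
```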
In one embodiment, as shown in fig. 4, taking a first color depth of 16 bits and a second color depth of 10 bits as an example, the electronic device performs image signal processing on the high dynamic range image to obtain a 16-bit fifth image in RGB mode, converts it into grayscale mode to obtain a 16-bit fifth image in grayscale mode, and performs global tone mapping and local tone mapping on it to obtain a 16-bit sixth image in grayscale mode; divides the 16-bit sixth image in grayscale mode by the 16-bit fifth image in grayscale mode to obtain a 16-bit tone mapping mask map; performs dot multiplication of the 16-bit tone mapping mask map with the 16-bit fifth image in RGB mode to obtain a 16-bit seventh image in RGB mode; compresses the color depth of the 16-bit seventh image in RGB mode to 10 bits to obtain a 10-bit eighth image in RGB mode; and converts the 10-bit eighth image in RGB mode into YUV mode to obtain a 10-bit eighth image in YUV mode.
In one embodiment, as shown in fig. 5, different exposure parameters are set in the image sensor, resulting in one frame of a first RAW image and a plurality of frames of a second RAW image, the exposure amount of the first RAW image being smaller than that of the second RAW image; performing airspace fusion on the multi-frame second RAW image by adopting a multi-frame fusion algorithm to obtain a second image, and performing demosaicing processing on the second image to obtain a fourth image in an RGB mode; demosaicing the first RAW image to obtain a third image in an RGB mode; carrying out high dynamic range imaging fusion on the third image of the RGB mode and the fourth image of the RGB mode to obtain a high dynamic range image of the RGB mode; carrying out image signal processing on the high dynamic range image in the RGB mode to obtain a fifth image in the RGB mode; carrying out tone mapping on the fifth image in the RGB mode to obtain a sixth image in the RGB mode; and converting the sixth image in the RGB mode into a YUV mode to obtain the sixth image in the YUV mode.
The electronic device may further compress the color depth of the sixth image in the RGB mode.
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily executed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides an image processing apparatus for implementing the image processing method described above. The solution provided by the apparatus is similar to that described in the method above, so for the specific limitations in the one or more embodiments of the image processing apparatus provided below, reference may be made to the limitations of the image processing method described above; details are not repeated here.
In one embodiment, as shown in fig. 6, there is provided an image processing apparatus including: an acquisition module 602, a demosaicing processing module 604, a fusion module 606, and an image signal processing module 608, wherein:
an obtaining module 602, configured to obtain a first image and a second image; the exposure amount of the first image is smaller than the exposure amount of the second image.
The demosaicing processing module 604 is configured to perform demosaicing processing on the first image to obtain a third image, and to perform demosaicing processing on the second image to obtain a fourth image.
And a fusion module 606, configured to perform high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image.
The image signal processing module 608 is configured to perform image signal processing on the high dynamic range image to obtain a fifth image.
With the image processing apparatus above, a first image and a second image are acquired, the exposure amount of the first image being smaller than that of the second image; the first image and the second image are demosaiced separately to obtain a third image and a fourth image from which mosaic noise has been removed; high dynamic range imaging fusion is performed on the third image and the fourth image to obtain a high dynamic range image; and image signal processing is then performed on the high dynamic range image. Performing demosaicing before image signal processing avoids the loss of demosaicing accuracy that occurs when demosaicing is performed after image signal processing, so a clearer and more accurately denoised fifth image can be obtained.
In an embodiment, the obtaining module 602 is further configured to obtain a first RAW image and multiple frames of second RAW images, the exposure amount of the first RAW image being smaller than that of the second RAW images; register the first RAW image against a target RAW image among the multiple frames of second RAW images to obtain the first image, and register the multiple frames of second RAW images with one another to obtain multi-frame-registered second RAW images; and perform spatial-domain fusion on the multi-frame-registered second RAW images to obtain the second image.
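The registration and fusion steps can be sketched as below. Phase correlation with a global integer translation is an assumed, simplified stand-in; the application does not specify a registration algorithm:

```python
import numpy as np

def register(ref, mov):
    """Align `mov` to `ref` by the global integer translation estimated
    with phase correlation (an assumed, simplified registration)."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(mov, (dy, dx), axis=(0, 1))

def align_and_fuse(first_raw, second_raws, target):
    first = register(target, first_raw)                   # first RAW vs. target
    aligned = [register(target, f) for f in second_raws]  # multi-frame registration
    second = np.mean(np.stack(aligned), axis=0)           # spatial-domain fusion
    return first, second
```

Averaging the registered high-exposure frames is one common spatial-domain fusion rule; weighted or motion-masked averaging would fit the same slot.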
In an embodiment, the obtaining module 602 is further configured to take the second RAW image with the highest definition among the multiple frames of second RAW images as the target RAW image, or to take the second RAW image obtained by the first exposure among the multiple frames of second RAW images as the target RAW image.
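The highest-definition selection rule can be sketched as follows; Laplacian variance is an assumed sharpness metric, since the application does not define how definition is measured (taking the first-exposed frame is its stated alternative):

```python
import numpy as np

def pick_target(frames):
    """Return the index of the sharpest frame, scored by Laplacian variance
    (an assumed stand-in for the application's 'definition')."""
    def sharpness(img):
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        return lap.var()
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```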
In one embodiment, the apparatus further comprises a tone mapping module configured to perform tone mapping on the fifth image to obtain a sixth image.
In one embodiment, the tone mapping module is further configured to perform global tone mapping on the fifth image to obtain a tone mapped image; and carrying out local tone mapping on the tone mapping image to obtain a sixth image.
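The global-then-local ordering can be sketched on a luminance image as below. The Reinhard-style global curve and the unsharp-mask-style local contrast step are illustrative choices; the application does not fix particular operators:

```python
import numpy as np

def tone_map(lum):
    """Global then local tone mapping on a luminance image. Both operators
    here are assumptions standing in for the unspecified ones."""
    g = lum / (1.0 + lum)                       # global: compress the full range
    pad = np.pad(g, 1, mode='edge')
    # local: 3x3 box-blur base layer, then boost detail relative to it
    base = np.zeros_like(g)
    for i in range(3):
        for j in range(3):
            base += pad[i:i + g.shape[0], j:j + g.shape[1]] / 9.0
    return np.clip(base + 1.5 * (g - base), 0.0, 1.0)
```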
In one embodiment, the fifth image is in the RGB mode, and the apparatus further comprises a mode conversion module and a calculation module. The mode conversion module is configured to convert the fifth image into the grayscale mode, and the tone mapping module is further configured to perform tone mapping on the fifth image in the grayscale mode to obtain a sixth image in the grayscale mode. The calculation module is configured to divide the sixth image in the grayscale mode by the fifth image in the grayscale mode to obtain a tone mapping mask map, and to obtain a seventh image in the RGB mode based on the tone mapping mask map and the fifth image in the RGB mode.
In an embodiment, the calculation module is further configured to perform dot multiplication on the tone mapping mask map and each of the R, G, and B channels of the fifth image in the RGB mode to obtain the seventh image in the RGB mode.
In one embodiment, the color depth of the seventh image is a first color depth; the device also comprises a compression module; the compression module is used for compressing the color depth of the seventh image to the second color depth to obtain an eighth image; the second color depth is less than the first color depth.
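The depth compression can be sketched as a rescale-and-round; this concrete rule is an assumption, since dithered or perceptual requantization would equally satisfy the described compression from a first color depth to a smaller second one:

```python
import numpy as np

def compress_depth(img, first_bits=16, second_bits=10):
    """Compress color depth from `first_bits` to the smaller `second_bits`.
    Rescale-and-round is an assumed concrete rule."""
    hi = (1 << first_bits) - 1
    lo = (1 << second_bits) - 1
    return np.round(img.astype(np.float64) * lo / hi).astype(np.uint16)
```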
In one embodiment, the eighth image is in RGB mode; the mode conversion module is further configured to convert the eighth image in the RGB mode into an eighth image in the YUV mode.
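The RGB-to-YUV conversion can be sketched with a fixed matrix. The full-range BT.601 coefficients are an assumption, since the application names the YUV mode but not a specific conversion standard:

```python
import numpy as np

# Full-range BT.601 coefficients -- an assumed choice of standard.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image with values in [0, 1] to YUV."""
    return rgb @ RGB2YUV.T
```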
Each of the modules in the image processing apparatus described above may be implemented wholly or partly by software, by hardware, or by a combination thereof. The modules may be embedded, in hardware form, in or independent of a processor in the electronic device, or may be stored in software form in a memory of the electronic device, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, an electronic device is provided, which may be a terminal; its internal structure may be as shown in fig. 7. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected by a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the electronic device is used to exchange information between the processor and external devices. The communication interface of the electronic device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through Wi-Fi, a mobile cellular network, near-field communication (NFC), or other technologies. The computer program, when executed by the processor, implements an image processing method. The display unit of the electronic device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the electronic device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the electronic device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art will appreciate that the structure shown in fig. 7 is a block diagram of only part of the structure related to the solution of the present application and does not limit the electronic devices to which the solution may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
An embodiment of the present application further provides a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image processing method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, or the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (13)

1. An image processing method, comprising:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
demosaicing the first image to obtain a third image, and demosaicing the second image to obtain a fourth image;
carrying out high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and carrying out image signal processing on the high dynamic range image to obtain a fifth image.
2. The method of claim 1, wherein the acquiring the first image and the second image comprises:
acquiring a first RAW image and a plurality of frames of second RAW images; the exposure amount of the first RAW image is smaller than the exposure amount of the second RAW image;
registering the first RAW image with a target RAW image among the multiple frames of second RAW images to obtain the first image, and registering the multiple frames of second RAW images with one another to obtain multi-frame-registered second RAW images;
and performing spatial-domain fusion on the multi-frame-registered second RAW images to obtain the second image.
3. The method according to claim 2, wherein after acquiring the first RAW image and the plurality of frames of the second RAW image, further comprising:
taking a second RAW image with the highest definition in the plurality of frames of second RAW images as a target RAW image; or
taking the second RAW image obtained by the first exposure in the plurality of frames of second RAW images as the target RAW image.
4. The method of claim 1, wherein after the image signal processing the high dynamic range image to obtain a fifth image, further comprising:
and carrying out tone mapping on the fifth image to obtain a sixth image.
5. The method of claim 4, wherein tone mapping the fifth image to obtain a sixth image comprises:
carrying out global tone mapping on the fifth image to obtain a tone mapping image;
and carrying out local tone mapping on the tone mapping image to obtain a sixth image.
6. The method of claim 4, wherein the fifth image is in RGB mode; performing tone mapping on the fifth image to obtain a sixth image, including:
converting the fifth image into a gray mode, and performing tone mapping on the fifth image of the gray mode to obtain a sixth image of the gray mode;
the method further comprises the following steps:
dividing the sixth image in the gray scale mode by the fifth image in the gray scale mode to obtain a tone mapping mask map;
and obtaining a seventh image of the RGB mode based on the tone mapping mask map and the fifth image of the RGB mode.
7. The method of claim 6, wherein obtaining a seventh image in RGB mode based on the tone mapping mask and the fifth image in RGB mode comprises:
and performing dot multiplication on the tone mapping mask image and the R channel, the G channel and the B channel in the fifth image of the RGB mode to obtain a seventh image of the RGB mode.
8. The method according to claim 6 or 7, wherein the color depth of the seventh image is a first color depth; the method further comprises the following steps:
compressing the color depth of the seventh image to a second color depth to obtain an eighth image; the second color depth is less than the first color depth.
9. The method of claim 8, wherein the eighth image is in RGB mode; after the compressing the color depth of the seventh image to the second color depth to obtain the eighth image, the method further includes:
and converting the eighth image in the RGB mode into an eighth image in a YUV mode.
10. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
the demosaicing processing module is used for conducting demosaicing processing on the first image to obtain a third image and conducting demosaicing processing on the second image to obtain a fourth image;
the fusion module is used for carrying out high dynamic range imaging fusion on the third image and the fourth image to obtain a high dynamic range image;
and the image signal processing module is used for carrying out image signal processing on the high dynamic range image to obtain a fifth image.
11. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image processing method according to any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
13. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 9 when executed by a processor.
CN202210751338.5A 2022-06-29 2022-06-29 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN115063333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210751338.5A CN115063333A (en) 2022-06-29 2022-06-29 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115063333A true CN115063333A (en) 2022-09-16



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination