WO2022116988A1 - Image processing method, apparatus, device and storage medium - Google Patents

Image processing method, apparatus, device and storage medium

Info

Publication number: WO2022116988A1
Application number: PCT/CN2021/134712 (CN2021134712W)
Authority: WIPO (PCT)
Prior art keywords: image, brightness, brightness correction, target, dynamic
Other languages: English (en), French (fr)
Inventor: 姜文杰
Applicant: 影石创新科技股份有限公司


Classifications

    • G06T5/70
    • G06T5/90
    • G06T7/90: Determination of colour characteristics (G: Physics; G06: Computing; Calculating or Counting; G06T: Image Data Processing or Generation, in General; G06T7/00: Image analysis)
    • G06T2207/20081: Training; Learning (G06T2207/00: Indexing scheme for image analysis or image enhancement; G06T2207/20: Special algorithmic details)
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20221: Image fusion; Image merging (G06T2207/20212: Image combination)

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, apparatus, device and storage medium.
  • Raw-format images contain the most original image data; the JPEG images we see in daily life are produced by compressing Raw-format images, a process that loses a great deal of important information.
  • High-dynamic-range images are widely used because they provide a wider dynamic range and more image detail.
  • However, the display devices that need to show images often have a limited or low dynamic range, where "low" is defined relative to high dynamic range; for example, CRT (Cathode Ray Tube) displays, LCD displays and projectors all have a limited dynamic range.
  • When high-dynamic-range images must be shown on such limited-dynamic-range devices, they need to be processed into images those devices can display while retaining image detail, so that users see an HDR-like effect on a display device with limited dynamic range.
  • However, currently output JPEG images suffer from high noise and cannot preserve image detail well on display devices with limited dynamic range.
  • the present invention provides an image processing method, the method comprising:
  • performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image specifically includes:
  • converting the noise-reduced Raw image to obtain an RGB image specifically includes:
  • the noise-reduced Raw image is processed by an interpolation algorithm to obtain an RGB image.
  • performing brightness correction processing on the RGB image to obtain a brightness corrected image specifically includes:
  • the statistical value of ambient light intensity is a statistical value of light intensity of the shooting environment where the RGB image is located;
  • the determining of the target brightness correction coefficient according to the target brightness statistic value and the image brightness statistic value includes at least one of the following steps:
  • a brightness reduction coefficient is obtained as a target brightness correction coefficient.
  • the target brightness correction coefficient includes a first brightness correction coefficient
  • the determining the target brightness correction coefficient according to the target brightness statistics value and the image brightness statistics value includes:
  • the target brightness correction coefficient further includes a second brightness correction coefficient and a third brightness correction coefficient; and the determining the target brightness correction coefficient according to the target brightness statistics value and the image brightness statistics value includes:
  • the performing brightness correction on the RGB image according to the target brightness correction coefficient to obtain the target brightness correction image includes:
  • the performing pixel dynamic range mapping on the target brightness correction image to obtain the target dynamic image includes:
  • Fusion processing is performed on the first mapping dynamic image, the second mapping dynamic image and the third mapping dynamic image to obtain a target dynamic image.
  • performing fusion processing on the first mapping dynamic image, the second mapping dynamic image, and the third mapping dynamic image to obtain the target dynamic image includes:
  • Tone-mapping processing is performed on the RGB image according to the local mapping gain value to obtain a target dynamic image.
  • the present invention provides an image processing device, the device comprising:
  • the image acquisition module is used to acquire the Raw image to be processed
  • a noise reduction module configured to perform noise reduction processing on the Raw image to be processed to obtain a noise reduction Raw image
  • a conversion processing module for converting the noise reduction Raw image to obtain an RGB image
  • a brightness correction module for performing brightness correction processing on the RGB image to obtain a target brightness correction image
  • the dynamic mapping module is used for performing pixel dynamic range mapping on the target brightness correction image to obtain the target dynamic image.
  • the present invention provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, wherein the processor implements the following steps when executing the computer program:
  • a pixel dynamic range mapping process is performed on the target brightness correction image to obtain a target dynamic image, where the pixel dynamic range of the target dynamic image is smaller than the pixel dynamic range of the RGB image.
  • the present invention provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the following steps are implemented:
  • The above image processing method, apparatus, computer device and storage medium acquire the Raw image to be processed, perform noise reduction on it, convert the result to obtain an RGB image, perform brightness correction on the RGB image to obtain a target brightness-corrected image, and perform pixel dynamic range mapping on the target brightness-corrected image to obtain a target dynamic image, where the pixel dynamic range of the target dynamic image is smaller than that of the RGB image.
  • Fig. 1 is the application environment diagram of the image processing method in one embodiment
  • FIG. 2 is a schematic flowchart of an image processing method in one embodiment
  • FIG. 3 is a schematic flowchart of a method for generating an image noise reduction model in another embodiment
  • FIG. 4 is a schematic flowchart of a step of determining a target brightness correction coefficient according to a target brightness statistic value and an image brightness statistic value in another embodiment
  • FIG. 5 is a schematic flowchart of a method for obtaining a target dynamic image by performing fusion processing on a first mapping dynamic image, a second mapping dynamic image, and a third mapping dynamic image in another embodiment
  • FIG. 6 is a schematic flowchart of obtaining a statistical value of ambient light intensity in another embodiment
  • FIG. 7 is a structural block diagram of an image processing apparatus in one embodiment
  • FIG. 8 is a diagram of the internal structure of a computer device in one embodiment.
  • the image processing method provided in this application can be applied to the application environment shown in FIG. 1 .
  • The application environment includes an image capture device 102 and a terminal 104, which are communicatively connected. After the image capture device 102 captures an image, it transmits the image to the terminal 104, and the terminal 104 obtains the Raw image to be processed.
  • The terminal 104 determines the target brightness correction coefficient according to the target brightness statistic and the image brightness statistic, performs brightness correction on the RGB image according to the target brightness correction coefficient to obtain the target brightness-corrected image, and performs pixel dynamic range mapping on the target brightness-corrected image to obtain the target dynamic image.
  • the pixel dynamic range of the target dynamic image is smaller than the pixel dynamic range of the RGB image.
  • the image capturing device 102 may be, but is not limited to, various devices having an image capturing function, and may be distributed outside the terminal 104 or inside the terminal 104 .
  • For example, cameras, scanners and image capture cards may be distributed outside the terminal 104, and the terminal 104 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer or a portable wearable device.
  • an image processing method is provided, and the method is applied to the terminal in FIG. 1 as an example for description, including the following steps:
  • the Raw image is the original data that the CMOS or CCD image sensor converts the captured light source signal into a digital signal, which is lossless and contains the original color information of the object.
  • the data format of the raw image generally adopts the Bayer array arrangement.
  • The color filter array (CFA) is produced by the filter sheet. Since the human eye is most sensitive to the green band, the Bayer array data format contains 50% green information and 25% each of red and blue information.
  • A Bayer array is a 4×4 array consisting of 8 green, 4 blue and 4 red pixels. When converting the grayscale mosaic into a color image, nine operations are performed with a 2×2 matrix to finally generate the color image.
  • Bayer arrays generally have four formats: RGGB(a), BGGR(b), GBRG(c), GRBG(d).
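  • As an illustration of the sampling above, the following minimal numpy sketch builds an RGGB Bayer mosaic from an RGB image; the function name and layout assumptions (even height and width, R at the origin) are illustrative, not taken from the patent.

```python
import numpy as np

def rgb_to_bayer_rggb(rgb: np.ndarray) -> np.ndarray:
    """Sample an H x W x 3 RGB image into a single-channel RGGB mosaic:
    50% of the sites keep green, 25% red and 25% blue."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return bayer
```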
  • the image processing instruction carries the image identifier of the image to be processed, and through the image identifier, the terminal can obtain the raw image to be processed from the stored image.
  • the terminal acquires the Raw image to be processed so as to perform subsequent processing work on the Raw image to be processed.
  • the raw image to be processed can be obtained by shooting with any device having a shooting function, such as a digital camera, a panoramic camera, a mobile phone, a tablet computer, a motion camera, and the like.
  • the Raw image to be processed may also be a Raw image obtained by any image processing method, such as image transformation, image stitching, image segmentation, image synthesis, image compression, image enhancement, image restoration, and the like.
  • the Raw image to be processed may be a panoramic image or a common plane image.
  • For the noise reduction of the noisy Raw image to be processed, a variety of noise reduction methods can be used.
  • a method for generating a preset image noise reduction model comprising the following steps:
  • the first Raw image can be obtained by shooting any device with a shooting function, such as a digital camera, a panoramic camera, a mobile phone, a tablet computer, a motion camera, and the like.
  • the first Raw image may also be a Raw image obtained by any image processing method, such as image transformation, image stitching, image segmentation, image synthesis, image compression, image enhancement, image restoration, and the like.
  • the first Raw image may be a panoramic image or a common plane image.
  • The N first Raw images can be obtained in several ways: first, one capture device shoots multiple Raw images of the same scene, and N first Raw images with the same Bayer array are selected from them; second, one capture device continuously shoots N Raw images of the same scene, or multiple capture devices shoot multiple Raw images of the same scene, and N first Raw images with the same Bayer array are then obtained through Bayer array conversion; third, one or more capture devices continuously shoot N first Raw images of different Bayer arrays in the same scene.
  • the first Raw images of N different Bayer arrays in the same scene can also be obtained by adopting different combinations of the above listed methods or adopting any other methods, which are not listed one by one here.
  • S302 specifically includes the following steps:
  • S3024 Perform pixel-by-pixel weighted fusion processing on the to-be-fused image to obtain a corresponding second Raw image.
  • Each of the N first Raw images is selected in turn as the reference image. One possible method: mark the N first Raw images in sequence as the first image, the second image, the third image, ..., the Nth image, and then select the first image, the second image, the third image, ..., the Nth image in turn as the reference image.
  • step S3022 the difference points between the reference image and the other N-1 first Raw images are respectively detected, which specifically includes the following steps:
  • the alignment algorithm is used to calculate the difference point between each grid content of the reference image and the grid content corresponding to the other N-1 first Raw images.
  • the N first Raw images are divided into G grids, where G is an integer greater than or equal to 2, and the images can be divided into multiple grids in various ways.
  • Grids fall into two categories: structured and unstructured. In a structured grid, unit nodes can be connected in only a limited number of ways, and every interior node of the generated grid region has the same number of adjacent cells and the same number of adjacent nodes; structured grids correspond to two-dimensional planes or three-dimensional surfaces, the generated cells are generally quadrilaterals, and in three-dimensional solid space mainly hexahedra.
  • In an unstructured grid, cell nodes may be connected in any form and need not have the same adjacent cells, so the number of connections at different nodes in the grid region may differ; unstructured grids likewise correspond to two-dimensional planes or three-dimensional surfaces, the generated cells are generally triangles, and in three-dimensional solid space mainly tetrahedra.
  • Common structured grid generation methods generally include mapping method, geometric decomposition method, etc.
  • unstructured grid generation methods generally include octree method, quadtree method, Delaunay triangulation method, etc.
  • Embodiments of the present invention are not limited to these methods; any one or more of the above may be used to divide the N first Raw images evenly into G grids, and other grid-generation methods in the prior art may also be used, which are not described again here.
  • The G grids may be divided in any shape; for example, an A×A grid or an A×B grid, where A and B are unequal integers greater than or equal to 2.
  • an alignment algorithm is used to calculate the difference point between each grid content of the reference image and the grid content corresponding to the other N-1 first Raw images
  • The alignment algorithm may be any image alignment algorithm, such as a method based on pixel-value cross-correlation, the sequential similarity detection algorithm, the Fourier-transform-based phase correlation algorithm, a mutual-information-based alignment algorithm, or an optimization-based method such as the Lucas-Kanade algorithm and its refinements; a sketch follows.
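  • As one concrete possibility (not mandated by the text), the per-grid offset could be estimated with the Fourier-based phase correlation mentioned above, here via scikit-image; the variable names are assumptions.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# ref_grid: a grid cut from the reference image; cand_grid: the corresponding
# grid from one of the other N-1 first Raw images (same size, single channel).
shift, error, _ = phase_cross_correlation(ref_grid, cand_grid)
dy, dx = shift  # offset of cand_grid relative to ref_grid in Y and X
# Movement along exactly one axis = "single direction"; along both = "disordered".
single_direction = (abs(dy) > 0) != (abs(dx) > 0)
```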
  • step S3023 according to the difference point, the reference image is processed correspondingly to obtain the image to be fused, which specifically includes the following steps:
  • the grid corresponding to the N-1 first Raw images is translated in the opposite direction of the single direction relative to the reference image, and the N-1 images after translation and the reference image are used as N images to be fused.
  • A difference caused by movement in a disordered direction is defined relative to a difference caused by movement in a single direction: it contains movement differences in at least two directions.
  • The difference caused by movement refers to the offset between two grids in a set direction. For example, let grid 1 be a grid on the corresponding first Raw image and grid 2 the corresponding grid on the reference image, with difference points detected in both the X and Y directions: if grid 1 is offset from grid 2 in both directions, the difference between grid 1 and grid 2 is caused by movement in a disordered direction, and grid 1 and grid 2 are deleted.
  • the image to be fused is subjected to pixel-by-pixel weighted fusion processing to obtain a corresponding second Raw image, wherein the pixel-by-pixel weighted fusion processing generally directly weights the pixels of the image.
  • The weight-selection algorithm directly affects the quality of the fused image. Classified by how the fusion weight coefficients are chosen, common methods include the average weighted fusion algorithm (which directly averages corresponding pixels of the images), multi-scale weighted-gradient fusion algorithms, and Principal Component Analysis (PCA) based fusion, etc.
  • the pixel-by-pixel weighted fusion processing in step S3024 is not the only method of fusion processing in step S302, but only one of the optional methods.
  • Other pixel-level image fusion methods may be used instead, for example simple pixel-by-pixel additive fusion or fusion methods based on multi-scale transforms; pixel-by-pixel additive fusion generally refers to adding two or more images of the same size to generate a new image containing information from all of them.
  • Specifically, the images to be fused may undergo pixel-by-pixel average weighted fusion: the mean of the summed gray values is taken as the result, i.e. as the gray value of the corresponding pixel of the second Raw image. Because the mean of the summed gray values is used, random noise between the images to be fused is effectively reduced, as shown in the sketch below.
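  • A minimal sketch of the equal-weight (average) special case described above; equal weights are only one choice of fusion weight coefficients.

```python
import numpy as np

def fuse_average(frames: list) -> np.ndarray:
    """Pixel-by-pixel average of N aligned first Raw images -> second Raw image.
    Averaging N frames reduces zero-mean random noise roughly by sqrt(N)."""
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames], axis=0)
    return stack.mean(axis=0)
```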
  • the corresponding second Raw image is subtracted pixel by pixel from the first Raw image to obtain a corresponding residual noise image
  • Pixel-by-pixel subtraction generally operates on two images: for two 255-gray-level images, when the difference of the pixel values at corresponding coordinates is greater than or equal to zero, it is taken as the gray value of the corresponding pixel in the result image; when the difference is less than zero, the negative value may be kept as the result, or alternatively its absolute value may be taken as the result value.
  • the corresponding second Raw image is subtracted pixel by pixel from the first Raw image to obtain a corresponding residual noise image, wherein the residual noise image includes the difference information between the first Raw image and the second Raw image.
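  • A one-line sketch of the signed residual, assuming first_raw and second_raw are same-sized arrays; np.abs(...) would implement the absolute-value option instead.

```python
import numpy as np

# Signed difference information between the first Raw image and the fused
# second Raw image; negative values are kept, per one option described above.
residual = first_raw.astype(np.int32) - second_raw.astype(np.int32)
```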
  • establishing a sample image database based on the first Raw image and the residual noise image specifically includes:
  • The conversion process performs Bayer array format conversion on the first Raw image and the corresponding residual noise image, so that the Bayer array formats of the first Raw image and the corresponding residual noise image in the sample image database stay the same. To illustrate: if the Bayer array format of the first Raw image is "RGGB" and the Bayer array format of the corresponding residual noise image is not "RGGB", the conversion can proceed as follows.
  • For example, when the Bayer array format of the corresponding residual noise image is "GBRG", the upper and lower boundaries of the corresponding residual noise image are expanded, shifting the row phase so that the pattern becomes "RGGB", as sketched below.
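  • A minimal sketch of the boundary-expansion idea for the "GBRG" to "RGGB" case; the patent does not fix the padding mode, so duplicating the edge rows here is an assumption.

```python
import numpy as np

def gbrg_to_rggb(mosaic: np.ndarray) -> np.ndarray:
    """Shift a GBRG mosaic to RGGB phase by expanding the top and bottom
    boundaries by one duplicated row each and reading from the new origin.

    GBRG rows alternate (G B / R G); starting one row lower, the 2x2 pattern
    read from the origin becomes (R G / G B), i.e. RGGB, at unchanged size."""
    padded = np.pad(mosaic, ((1, 1), (0, 0)), mode="edge")
    return padded[2:, :]  # origin now falls on an R G row
```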
  • image data sets in M groups of scenes can be obtained by the above method of obtaining N images in the same scene, which will not be repeated here, where M is an integer greater than or equal to 2.
  • model training is performed based on the sample image database to obtain an image noise reduction model, which specifically includes:
  • The first Raw images with a consistent Bayer array are used as input image data, the residual noise images with a consistent Bayer array corresponding to the first Raw images are used as the corresponding output image data, and model training is performed to obtain the image noise reduction model.
  • Specifically, the set of first Raw images can be used as the training input and the corresponding set of residual noise images as the target output, and the model can be trained according to a preset training algorithm to obtain an image denoising model for image denoising processing.
  • the training algorithm is a machine learning algorithm, and the machine learning algorithm can process data through continuous feature learning.
  • Machine learning algorithms may include: decision tree algorithms, logistic regression algorithms, Bayesian algorithms, neural network algorithms (which may include deep neural network algorithms, convolutional neural network algorithms, and recurrent neural network algorithms, etc.), clustering algorithms, and the like.
  • The sample image data may include image data sets under M groups of scenes; in that case each group of scenes includes N first Raw images with inconsistent Bayer arrays, and the sample image data includes the first Raw images under the M groups of scenes and the corresponding residual noise images.
  • When the sample image database contains a large amount of training data, the image noise reduction model obtained becomes more and more accurate.
  • In step S304, it is first judged whether the Bayer array of the Raw image to be processed is consistent with that of the preset image noise reduction model obtained in steps S301-S305; if not, the conversion process of step S304 is applied (not repeated here) so that the Bayer array of the Raw image to be processed matches the Bayer array of the image noise reduction model. The Raw image to be processed is then input into the preset image noise reduction model obtained in steps S301-S305 to obtain the residual noise image corresponding to the Raw image to be processed.
  • The Bayer array of the residual noise image is converted to be consistent with the Raw image to be processed, and the final noise-reduced Raw image is then obtained through pixel-by-pixel weighted fusion processing.
  • Alternatively, the obtained second Raw image may be used as the output image and the corresponding first Raw image as the input image, and model training performed according to a preset training algorithm to obtain the image noise reduction model for image noise reduction; it should be understood that the model training here is the same as in step S305 and is not repeated.
  • As in step S304 (not repeated here), the Bayer array of the Raw image to be processed is made consistent with the Bayer array of the preset image noise reduction model; the Raw image to be processed is then input into the preset image noise reduction model obtained in this embodiment, and the final noise-reduced Raw image can be obtained directly, for example as sketched below.
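  • A hedged PyTorch-style sketch of the residual variant: the model (architecture unspecified in the text) predicts the residual noise image, and subtracting it is one simple realization of recombining the prediction with the input; all names are assumptions.

```python
import torch

def denoise_raw(model: torch.nn.Module, raw: torch.Tensor) -> torch.Tensor:
    """raw: 1 x 1 x H x W tensor already in the model's Bayer phase (e.g. RGGB)."""
    model.eval()
    with torch.no_grad():
        predicted_residual = model(raw)  # residual noise image
    # Remove the predicted residual noise to obtain the noise-reduced Raw image.
    return raw - predicted_residual
```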
  • The above embodiments enumerate only two ways of generating the image noise reduction model and the corresponding Raw image noise reduction methods; the preset image noise reduction model and the Raw image noise reduction method in S202 are not limited to those of the above embodiments, and those skilled in the art may use other Raw image noise reduction methods to achieve Raw image noise reduction.
  • An interpolation algorithm is used to process the noise-reduced Raw image to obtain an RGB image.
  • The interpolation algorithm here may be nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, cubic interpolation, a demosaicing algorithm, etc. Because each pixel in the original Raw image contains only one of the three R/G/B components, the two missing components of each pixel must be supplemented by an interpolation algorithm to obtain the RGB image; a minimal sketch follows.
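  • A minimal nearest-neighbor demosaic sketch for the RGGB case (even H and W assumed); bilinear or bicubic interpolation would replace the in-cell copies with neighborhood averages.

```python
import numpy as np

def demosaic_rggb_nearest(bayer: np.ndarray) -> np.ndarray:
    """Fill each pixel's two missing components from within its 2x2 RGGB cell."""
    h, w = bayer.shape
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2].astype(np.float32) + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    rgb = np.empty((h, w, 3), dtype=np.float32)
    rgb[..., 0] = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
    rgb[..., 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)
    return rgb
```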
  • brightness correction processing is performed on the RGB image to obtain a target brightness correction image. Specifically, the following steps may be adopted:
  • S2042 Obtain a statistical value of ambient light intensity, and obtain a statistical value of standard brightness corresponding to the shooting environment according to the statistical value of ambient light intensity; the statistical value of ambient light intensity is a statistical value of light intensity of the shooting environment where the RGB image is located;
  • The image brightness statistic is a comprehensive quantitative measure of image brightness; the overall brightness of the image can be read from it. The statistic is obtained by computation, for example as a mean or a median.
  • After acquiring the RGB image, the terminal converts it into a grayscale image and obtains a brightness histogram, which represents the number of pixels at each brightness level of the image; the average brightness can then be computed from the histogram, as sketched below.
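  • A minimal sketch of the mean-brightness statistic via the histogram, assuming an 8-bit image and Rec.601 luma for the grayscale conversion (a common choice, not specified in the text); np.median(gray) would give the median variant.

```python
import numpy as np

def image_brightness_statistic(rgb: np.ndarray) -> float:
    """Average brightness V0 from the brightness histogram of the RGB image."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))  # pixels per level
    levels = np.arange(256)
    return float((hist * levels).sum() / hist.sum())
```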
  • The ambient light intensity statistic is a comprehensive quantitative measure of the ambient light intensity; the overall lighting situation can be known from it, and it too is obtained by computation, for example as a mean or a median.
  • The terminal can obtain the target brightness statistic corresponding to the ambient light intensity statistic according to the correspondence between the ambient light intensity statistic and the target brightness statistic.
  • The exposure parameter can be obtained from the sensitivity, shutter speed and aperture value; the image brightness statistic divided by the exposure parameter is used as the argument of a logarithmic function, and taking the logarithm yields the ambient light intensity statistic.
  • Denote the ambient light intensity statistic by EE, the image brightness statistic by V0, the target brightness statistic by V, the sensitivity by I, the shutter speed in seconds by s, and the aperture value by a. Writing the exposure parameter as c = c(I, s, a), the relationship between the ambient light intensity statistic and the image brightness statistic can be expressed as:

    EE = log_b(V0 / c), with base b > 1
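  • A sketch of EE = log_b(V0 / c). The exact form of c(I, s, a) is not given in the text; c = I * s / a^2 is one plausible assumption, since halving the shutter speed, opening the aperture one stop, or doubling the sensitivity each scales the collected light by a factor of two.

```python
import math

def ambient_light_statistic(v0: float, iso: float, shutter_s: float,
                            aperture: float, base: float = 2.0) -> float:
    """EE = log_b(V0 / c) with an assumed exposure parameter c = I*s/a**2."""
    c = iso * shutter_s / aperture ** 2
    return math.log(v0 / c, base)
```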
  • Different ambient light intensity statistics correspond to different target brightness statistics. Let the target brightness statistic be V; there is a one-to-one correspondence between the ambient light intensity statistic EE and the target brightness statistic V. Understandably, users prefer different image brightness for images taken under different ambient light intensities: for photos taken on a sunny day with ample light, people prefer brighter pictures, so the target brightness statistic V corresponding to that EE is relatively large, while for images taken outdoors at night, the target brightness statistic V corresponding to that EE is relatively small.
  • The one-to-one correspondence between the ambient light intensity statistic EE and the target brightness statistic V may be denoted f.
  • f can be a mapping table; Table 1 shows part of the data in the mapping table f.
  • Table 1: Mapping table between the ambient light intensity statistic and the target brightness statistic
  • From the table, the numerical relationship between each ambient light intensity statistic EE and the corresponding target brightness statistic V can be obtained, and this relationship is unique; thus the target brightness statistic can be expressed as f(EE). By adjusting the values in this correspondence, the target brightness statistic f(EE) under a given illumination intensity can be tuned, making images lighter or darker.
  • the brightness correction coefficient is a parameter for performing brightness correction on the image.
  • the image can be adjusted to the appropriate brightness through the brightness correction factor, such as brightening or darkening.
  • After the terminal has obtained the target brightness statistic and the image brightness statistic, the target brightness correction coefficient can be determined through a functional relationship between them, for example a ratio-type relationship between the target brightness statistic and the image brightness statistic.
  • With the target brightness statistic expressed as f(EE) and the image brightness statistic as V0, the target brightness correction coefficient, denoted α, is obtained through the functional relationship among the target brightness correction coefficient, the target brightness statistic and the image brightness statistic.
  • In step S2044, specifically, the terminal performs brightness correction on the RGB image according to the determined target brightness correction coefficient, and the corrected RGB image is the target brightness-corrected image.
  • each pixel value of the RGB image can be multiplied by the target brightness correction coefficient to obtain the brightness-corrected pixel value, and the image composed of the corrected pixel values is the target brightness-corrected image.
  • The target brightness correction coefficient may be used directly to perform brightness correction on the RGB image, so that the image brightness statistic of the corrected RGB image approaches the target brightness statistic. For example, if each pixel of the RGB image has value X and the target brightness correction coefficient e1 is applied directly, the pixel at the corresponding position of the target brightness-corrected image has value X*e1.
  • Alternatively, a final correction coefficient may be calculated from a function of the target brightness correction coefficient and used to correct the brightness of the RGB image. When that function is an exponential function, the brightness correction coefficient is used as the exponent and a preset value greater than 1 as the base. For example, if the preset value is 2, the final correction coefficient is 2^e1, and the pixel at the corresponding position of the target brightness-corrected image has value X*2^e1, as in the sketch below.
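  • A minimal sketch of the base-2 example above, clipping to the 16-bit working range; the clip bound follows the 0-65535 range used elsewhere in the text.

```python
import numpy as np

def apply_brightness_correction(rgb: np.ndarray, e1: float) -> np.ndarray:
    """Scale every pixel by the final correction coefficient 2**e1."""
    alpha = 2.0 ** e1
    return np.clip(rgb.astype(np.float32) * alpha, 0.0, 65535.0)
```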
  • dynamic range mapping refers to mapping an image from one dynamic range to another dynamic range.
  • the pixel dynamic range of the RGB image is larger than the pixel dynamic range of the target dynamic image.
  • The target brightness-corrected image at this point is still a high-dynamic-range image; it needs to be converted into a low-dynamic-range image by dynamic range mapping, so that the low-dynamic-range image can serve as the target dynamic image and be used on low-dynamic-range devices.
  • The transformation from a high-dynamic-range image to a low-dynamic-range image can be achieved with a gamma transform, for example converting a high-dynamic-range image with pixel values from 0 to 65535 into a low-dynamic-range image with pixel values from 0 to 255, as sketched below.
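  • A minimal gamma-mapping sketch for the 0-65535 to 0-255 conversion; gamma = 1/2.2 is a conventional choice, not one mandated by the text.

```python
import numpy as np

def gamma_map_16_to_8(hdr: np.ndarray, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Gamma transform a 16-bit-range image down to an 8-bit-range image."""
    normalized = hdr.astype(np.float32) / 65535.0
    return np.round(255.0 * normalized ** gamma).astype(np.uint8)
```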
  • In summary, the RGB image to be processed is obtained and its image brightness statistic computed; the ambient light intensity statistic is obtained, and from it the target brightness statistic corresponding to the shooting environment.
  • The target brightness correction coefficient is determined from the target brightness statistic and the image brightness statistic, and the RGB image is brightness-corrected with this coefficient to obtain the target brightness-corrected image.
  • In this way, the image correction parameters can be determined from the ambient light intensity, and correcting the brightness of the RGB image with them yields an image with suitable brightness and more detail.
  • Finally, the target dynamic image is obtained, whose pixel dynamic range is smaller than that of the RGB image, realizing the conversion from a high-dynamic-range image to a low-dynamic-range image.
  • Because the image correction parameters are determined from the ambient light intensity and the image brightness statistic at capture time, and those parameters are used to correct the image, the processing pipeline from high-dynamic-range to low-dynamic-range images improves the image processing effect.
  • Determining the target brightness correction coefficient according to the target brightness statistic and the image brightness statistic includes at least one of the following steps: when the target brightness statistic is greater than the image brightness statistic, obtaining a brightness enhancement coefficient as the target brightness correction coefficient; when the target brightness statistic is less than the image brightness statistic, obtaining a brightness reduction coefficient as the target brightness correction coefficient.
  • The brightness enhancement coefficient enhances brightness: when the target brightness correction coefficient will raise the brightness of the image, it may be called a brightness enhancement coefficient, and it can enhance the image brightness linearly or nonlinearly.
  • The brightness enhancement coefficient and the brightness reduction coefficient may be preset, or may be calculated by a preset algorithm. For example, the brightness ratio of the target brightness statistic to the image brightness statistic can be calculated, and this brightness ratio used as the argument of a logarithmic function whose base is greater than 1; taking the logarithm gives the first brightness correction coefficient.
  • Concretely, let e = log(f(EE)/V0) and let α be the final correction coefficient obtained from e (for example α = 2^e). When the target brightness statistic is greater than the image brightness statistic, f(EE)/V0 is greater than 1, so e is greater than 0 and α is greater than 1; multiplying the pixel values by α increases the brightness, and e may then be called the brightness enhancement coefficient. When the target brightness statistic is less than the image brightness statistic, f(EE)/V0 is a positive number less than 1, e is less than 0 and α is a positive number less than 1, and e may be called the brightness reduction coefficient, decreasing the image brightness.
  • the purpose of determining the target brightness parameter can be achieved through the target brightness statistical value and the image brightness statistical value, and the brightness of the image can be enhanced or attenuated by using the target brightness parameter.
  • the target brightness correction coefficient includes a first brightness correction coefficient
  • Determining the target brightness correction coefficient according to the target brightness statistic and the image brightness statistic includes: calculating the brightness ratio between the target brightness statistic and the image brightness statistic, and using this ratio as the argument of a logarithmic function whose base is greater than 1 to obtain the first brightness correction coefficient.
  • Specifically, the terminal may obtain the first brightness correction coefficient from the target brightness statistic and the image brightness statistic. First the brightness ratio of the target brightness statistic f(EE) to the image brightness statistic V0 is calculated and denoted a; the first brightness correction coefficient is then e1 = log_b(a), where the base b is greater than 1 (for example an integer), so e1 varies with the chosen base.
  • Because the first brightness correction coefficient is computed from the brightness ratio between the target brightness statistic and the image brightness statistic, the ratio reflects the relationship between the two statistics.
  • When the target brightness statistic is greater than the image brightness statistic, the first brightness correction coefficient is a brightness enhancement coefficient; when the target brightness statistic is smaller than the image brightness statistic, the first brightness correction coefficient is a brightness reduction coefficient. In either case the adjusted image is matched to the ambient brightness of the shooting environment.
  • the target brightness correction coefficient further includes a second brightness correction coefficient and a third brightness correction coefficient; and determining the target brightness correction coefficient according to the target brightness statistical value and the image brightness statistical value includes:
  • The second brightness correction coefficient may be obtained by reducing the first brightness correction coefficient, for example by decreasing the corresponding coefficient value; the third brightness correction coefficient may be obtained by increasing the first brightness correction coefficient, for example by increasing the corresponding coefficient value.
  • S403: Perform brightness correction on the RGB image according to the first, second and third brightness correction coefficients respectively, to obtain the first brightness-corrected image produced by the first brightness correction coefficient, the second brightness-corrected image produced by the second brightness correction coefficient, and the third brightness-corrected image produced by the third brightness correction coefficient.
  • That is, the RGB image is processed with the first, second and third brightness correction coefficients respectively to obtain the first, second and third brightness-corrected images.
  • The first brightness correction coefficient is used as the target correction coefficient, and the corresponding first brightness-corrected image as the target corrected image. Since the second brightness-corrected image is produced by reducing the brightness correction coefficient, it is a darker image; similarly, the third brightness-corrected image is a brighter image.
  • Optionally, when the first brightness correction coefficient is large, the third brightness correction coefficient can be configured closer to the first brightness correction coefficient; when the first brightness correction coefficient is small, the second brightness correction coefficient can be configured closer to the first brightness correction coefficient. This better balances the details at each brightness level, so that the images corrected by these coefficients reflect more details of the target image; see the sketch below.
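  • A sketch of deriving the three corrected images, reusing apply_brightness_correction from the earlier sketch; the symmetric spread delta is an assumption (the text only says the second coefficient is lower and the third higher than the first).

```python
# Hypothetical spread; per the note above, the spread toward e3 could shrink
# when e1 is already large, and the spread toward e2 when e1 is small.
delta = 1.0
e2, e3 = e1 - delta, e1 + delta
dark = apply_brightness_correction(rgb, e2)    # second (darker) corrected image
normal = apply_brightness_correction(rgb, e1)  # first (target) corrected image
bright = apply_brightness_correction(rgb, e3)  # third (brighter) corrected image
```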
  • S404: Perform pixel dynamic range mapping on the first, second and third brightness-corrected images respectively, to obtain the first mapped dynamic image corresponding to the first brightness-corrected image, the second mapped dynamic image corresponding to the second brightness-corrected image, and the third mapped dynamic image corresponding to the third brightness-corrected image.
  • Specifically, pixel dynamic range mapping may be performed on the first, second and third brightness-corrected images by gamma transformation, converting each into a low-dynamic-range mapped dynamic image of different brightness: the first mapped dynamic image corresponding to the first brightness-corrected image, the second mapped dynamic image corresponding to the second brightness-corrected image, and the third mapped dynamic image corresponding to the third brightness-corrected image.
  • The low-dynamic-range image may be an eight-bit low-dynamic-range image and the high-dynamic-range corrected image a sixteen-bit high-dynamic-range image: the high-dynamic-range image has red, green and blue channels with pixel values between 0 and 65535, while the low-dynamic-range image has red, green and blue channels with pixel values between 0 and 255.
  • S405 Perform fusion processing on the first mapping dynamic image, the second mapping dynamic image, and the third mapping dynamic image to obtain a target dynamic image.
  • the fusion processing refers to the fusion of images with different brightness according to a certain image fusion method, so that the processed image has more abundant image details.
  • Specifically, the first, second and third mapped dynamic images are first down-sampled. From the down-sampled first, second and third mapped dynamic images, the first weight map corresponding to the first mapped dynamic image, the second weight map corresponding to the second mapped dynamic image, and the third weight map corresponding to the third mapped dynamic image are obtained.
  • The down-sampled first, second and third mapped dynamic images are converted into grayscale images, and multi-resolution fusion is performed on the three grayscale images together with the first, second and third weight maps to obtain a multi-resolution fused grayscale image. From this fused grayscale image, the three grayscale images and the first, second and third weight maps, new weight maps (the fourth, fifth and sixth weight maps) are obtained. Denote the multi-resolution fused grayscale image by I_f; the grayscale images converted from the first, second and third mapped dynamic images by I_1, I_2 and I_3; the first, second and third weight maps by w_i, i ∈ {1,2,3}; and the new weight maps by w_i'. The new weight maps can then be obtained by the following formulas:

    I_f' = w_1·I_1 + w_2·I_2 + w_3·I_3
    w_i' = w_i · I_f / I_f', i ∈ {1,2,3}

  • The new weight maps, i.e. the fourth, fifth and sixth weight maps, are up-sampled to the same size as the first, second and third mapped dynamic images. Weighted fusion of the fourth, fifth and sixth weight maps with the first, second and third mapped dynamic images then yields the final fused target dynamic image (a minimal sketch follows). It is understood that other fusion methods achieving the same effect may be used instead.
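  • A minimal sketch of the reconstructed weight-update formulas above; the epsilon guard against division by zero is an added assumption.

```python
import numpy as np

def adjust_weights(w, gray, fused_gray, eps=1e-6):
    """w: [w1, w2, w3]; gray: [I1, I2, I3]; fused_gray: I_f.
    Implements I_f' = sum_i w_i * I_i and w_i' = w_i * I_f / I_f'."""
    naive = sum(wi * gi for wi, gi in zip(w, gray)) + eps  # I_f'
    return [wi * fused_gray / naive for wi in w]           # w1', w2', w3'
```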
  • The multi-resolution fusion may also adopt the multi-resolution fusion method of the biorthogonal wavelet transform, which exploits the redundant and complementary information of multiple images so that the fused image contains richer and more comprehensive information.
  • the first mapped dynamic image, the second mapped dynamic image, and the third mapped dynamic image may be fused by a Laplacian pyramid weighted fusion method to obtain the target dynamic image.
  • In this embodiment, the second and third brightness correction coefficients are derived from the first brightness correction coefficient; the three correction coefficients produce the first, second and third brightness-corrected images; the corresponding mapped dynamic images are obtained from these corrected images; and the three mapped dynamic images are fused to obtain the target dynamic image.
  • In the fusion, brighter images draw more on the values of darker images, and darker images draw more on the values of brighter images, so that the target dynamic image preserves more image detail and the image processing effect is improved.
  • performing fusion processing on the first mapping dynamic image, the second mapping dynamic image and the third mapping dynamic image, and obtaining the target dynamic image includes:
  • S501 Perform fusion processing on the first mapping dynamic image, the second mapping dynamic image, and the third mapping dynamic image to obtain a fusion processed image.
  • the first mapping dynamic image, the second mapping dynamic image and the third mapping dynamic image are respectively used for fusion processing to obtain a fusion processed image.
  • the first mapping dynamic image, the second mapping dynamic image, and the third mapping dynamic image are respectively low dynamic images with pixel values between 0 and 255, and fusion processing is performed on the above three mapping dynamic images Afterwards, the obtained fusion image is also a low dynamic image with pixel values between 0 and 255.
  • Although the fusion-processed image retains the highlight and shadow details of the high-dynamic-range image, in some highlight areas that originally carry color information, such as colored signboard light boxes, the images corrected with different brightness corrections show different colors because of highlight clipping.
  • For example, a brighter image becomes overexposed after brightness correction, causing the image to turn white; therefore, the image obtained by this fusion method often has a color cast in the highlights.
  • The image area may be a partial area of the fused image or the entire fused image. Since the fused image has the same size as the second mapped dynamic image, the second mapped dynamic image contains an image area corresponding to the one in the fused image, which is taken as the reference image area.
  • The reference image area may likewise be a partial area of the second mapped dynamic image or the entire second mapped dynamic image.
  • S504 Calculate the local mapping gain value of the image area of the fused image relative to the reference image area.
  • The local mapping gain value is the mapping gain corresponding to the image area.
  • To compute it, inverse gamma transformation is applied to the image area of the fused image and to the reference image area of the second mapped dynamic image, and the ratio of the two inverse-gamma-transformed luminance values gives the linear gain of the luminance of each pixel of the fused image relative to the luminance of the corresponding pixel of the second mapped dynamic image. In this way, the local mapping gain value of each pixel of the fused image can be obtained.
  • S505 Perform tone mapping processing on the RGB image according to the local mapping gain value to obtain a target dynamic image.
  • tone mapping is performed on the RGB image, and after gamma transformation, the RGB image is converted into an eight-bit low dynamic range image with a pixel value between 0 and 255.
  • Specifically, the pixel value of the RGB image area to be processed is multiplied by the second brightness correction coefficient corresponding to the second mapped dynamic image pixel and by the local mapping gain value at the corresponding pixel position; gamma transformation is then applied to the gained pixel value, converting it into an eight-bit low-dynamic-range target dynamic image with pixel values between 0 and 255.
  • If a gained pixel value exceeds the preset pixel value, it is limited to the preset pixel value; for example, with a preset pixel value of 65535, any part of the gained pixel value exceeding 65535 is clipped to 65535, as in the sketch below.
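  • A sketch of the gain-clip-gamma step, assuming the base-2 convention for the second brightness correction coefficient used in the earlier examples; names and gamma value are assumptions.

```python
import numpy as np

def local_gain_tonemap(rgb16: np.ndarray, gain: np.ndarray, e2: float,
                       gamma: float = 1.0 / 2.2, limit: float = 65535.0) -> np.ndarray:
    """Multiply by the second-coefficient scale and the local gain map,
    clip to the preset pixel value, then gamma-map to eight bits."""
    gained = rgb16.astype(np.float32) * (2.0 ** e2) * gain
    gained = np.minimum(gained, limit)  # values above 65535 are clipped
    return np.round(255.0 * (gained / limit) ** gamma).astype(np.uint8)
```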
  • In this embodiment, the fused image is obtained from the first, second and third mapped dynamic images; the image area to be processed in the fused image is corrected using the local mapping gain value, and gamma transformation then yields the target dynamic image, so that the processed target image preserves more image detail.
  • obtaining the statistical value of ambient light intensity includes:
  • Sensitivity refers to how sensitive the camera is to light when acquiring the RGB image. If the sensitivity is too high, image quality suffers: although the acquired image is brighter, the image noise is also higher.
  • Shutter speed refers to how long the shutter stays open when the camera acquires an image. The faster the shutter speed, the shorter the open time, the less light enters the camera, and the darker the image; the slower the shutter speed, the more light enters the camera, and the brighter the image.
  • The aperture value is the relative measure of the light passing through the camera lens. The smaller the aperture value, the greater the amount of light admitted per unit time; conversely, the larger the aperture value, the smaller the amount of light admitted per unit time.
  • the statistical value of ambient light intensity has a functional relationship with the sensitivity, shutter speed, and aperture value. To obtain the statistical value of ambient light intensity, it is necessary to first obtain the above-mentioned parameters of sensitivity, shutter speed, and aperture value.
  • The first parameter value is obtained from the sensitivity, shutter speed and aperture value. Each halving of the shutter speed, for example following the sequence 1 s, 1/2 s, 1/4 s, 1/8 s, halves the light flux through the lens; each one-stop increase of the aperture value, for example 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, also halves the light flux, the aperture value increasing by a factor of the square root of 2 per stop; and each doubling of the sensitivity halves the amount of light required. Underexposure can be remedied by setting a larger aperture, slower shutter speed and higher sensitivity; overexposure can be avoided by setting a smaller aperture, faster shutter speed and lower ISO value.
  • The first parameter value can be expressed by a formula involving the sensitivity, shutter speed and aperture value: with the sensitivity expressed as I, the shutter speed as s, and the aperture value as a, the first parameter value is denoted c = c(I, s, a).
  • S603 Calculate the parameter ratio between the image brightness statistic value and the first parameter.
  • The image brightness statistic is denoted V0; the parameter ratio V0/c between the image brightness statistic and the first parameter is calculated, preliminarily establishing the functional relationship between the image brightness statistic and the acquired image parameter values.
  • step S604 the parameter ratio is used as the true number of the logarithmic function to perform logarithmic calculation to obtain a statistical value of the ambient light intensity.
  • the above-mentioned parameter ratio is used as the true number of the logarithmic function to perform logarithmic calculation, and the statistical value of the ambient light intensity can be obtained.
  • the ambient light intensity EE is a value greater than 0, so b, which is the true number of the logarithmic function, is greater than 1, and the preprocessed image brightness statistic value is greater than the parameter value of the first parameter.
  • the purpose of obtaining the statistical value of ambient light intensity can be achieved through the sensitivity, shutter speed, and aperture value corresponding to the RGB image, as well as the functional relationship between the three and the statistical value of image brightness.
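  • For illustration only, a minimal Python sketch of this EE computation follows; numpy and the function name are assumptions of the sketch, not part of the original disclosure:

```python
import numpy as np

def estimate_ambient_light(v0, iso, shutter_s, aperture):
    """Ambient light intensity statistic: EE = log2(V0 / (I*s/a^2)).

    v0        -- image brightness statistic (e.g. mean luminance), > 0
    iso       -- sensitivity I
    shutter_s -- shutter speed s, in seconds
    aperture  -- aperture value a (f-number)
    """
    c = iso * shutter_s / aperture ** 2  # first parameter value
    b = v0 / c                           # parameter ratio, the log argument
    return np.log2(b)

# Example: EE for a mid-gray frame shot at ISO 100, 1/60 s, f/2.0.
print(estimate_ambient_light(v0=0.18, iso=100, shutter_s=1 / 60, aperture=2.0))
```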
  • The terminal first obtains the RGB image to be processed. From the image brightness statistic of the RGB image and the ambient light intensity statistic it obtains the target brightness statistic of the shooting environment, and from the target brightness statistic and the image brightness statistic it determines the target brightness correction coefficient, which serves as the first brightness correction coefficient. Two further coefficients, the second and the third, are then formed by increasing or decreasing brightness on the basis of the first.
  • The high dynamic range image is processed with the three brightness correction coefficients to obtain three brightness-corrected images: the normal image corrected by the first coefficient, the darker image corrected by the second and the brighter image corrected by the third.
  • After gamma transformation, the pixel values are mapped to red, green and blue three-channel low-dynamic images with pixel values in the range 0 to 255; the three mapped low-dynamic images are then fused into a single image.
  • Although the fused image retains the highlight and shadow detail of the original high dynamic range image, in some highlight regions that originally carried color information, such as colored signboard light boxes, images corrected by different brightness correction coefficients show different colors because of highlight clipping; a brighter image, for example, is overexposed by the brightness correction and turns white there.
  • The fused image therefore often shows a color cast in the highlights. Taking the local brightness of the fused picture as a reference and combining it with the corresponding local brightness of the darker image yields a local tone-mapping gain map: the pixel luminances of the fused image and of the darker of the three low-dynamic-range images are extracted, inverse gamma transforms are applied, and the ratio of the two luminances gives a per-pixel linear gain. Multiplying the RGB image's pixel values by the darker image's correction coefficient and then by the local gain at each pixel produces the gain image.
  • When a pixel value in the gain image exceeds the preset pixel value, it is clamped to the preset pixel value; a gamma transform is then applied to the gain image's pixel values, converting it to a red, green and blue three-channel, eight-bit low-dynamic-range image with pixel values between 0 and 255.
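  • As a hedged illustration of this clamping and gamma step, a minimal Python sketch follows; the 1/2.2 exponent is an assumption, since the text says only "gamma transform" without fixing the curve:

```python
import numpy as np

def gain_image_to_8bit(gain_img, preset=65535.0, gamma=1.0 / 2.2):
    """Clamp a gained high-dynamic image to the preset pixel value and
    gamma-map it to an 8-bit, three-channel low-dynamic-range image."""
    img = np.clip(gain_img.astype(np.float32), 0.0, preset) / preset
    img = np.power(img, gamma)                   # gamma transform to [0, 1]
    return (img * 255.0 + 0.5).astype(np.uint8)  # pixel values 0..255
```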
  • Although the steps in the flowcharts of FIGS. 1-6 are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution is not strictly ordered and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-6 may comprise multiple sub-steps or stages; these need not be executed at the same moment but may run at different times, and their execution order likewise need not be sequential: they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages within other steps.
  • An image processing apparatus 700 is provided, including an image acquisition module 701, a noise reduction module 702, a conversion processing module 703, a brightness correction module 704 and a dynamic mapping module 705, wherein:
  • the image acquisition module 701 is configured to acquire a Raw image to be processed;
  • the noise reduction module 702 is configured to perform noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
  • the conversion processing module 703 is configured to perform conversion processing on the noise-reduced Raw image to obtain an RGB image;
  • the brightness correction module 704 is configured to perform brightness correction processing on the RGB image to obtain a target brightness-corrected image;
  • the dynamic mapping module 705 is configured to perform pixel dynamic range mapping on the target brightness-corrected image to obtain a target dynamic image.
  • Each module in the above-mentioned image processing apparatus may be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided, and the computer device may be a server, and its internal structure diagram may be as shown in FIG. 8 .
  • the computer device includes a processor, memory, and a network interface connected by a system bus. Among them, the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium, an internal memory.
  • the nonvolatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer device is used to store image processing data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program implements an image processing method when executed by a processor.
  • FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • A computer device is provided, including a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the steps of the image processing method described above.
  • The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the steps of the image processing method described above.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Abstract

An image processing method, apparatus, computer device and storage medium. The method includes: acquiring a Raw image to be processed; performing noise reduction processing on the Raw image and then conversion processing to obtain an RGB image; performing brightness correction processing on the RGB image to obtain a target brightness-corrected image; and performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image. This process keeps the noise of the output image low while preserving more image detail on display devices of limited dynamic range, improving the image quality of the output image.

Description

Image processing method, apparatus, device and storage medium
Technical Field
The present application relates to the technical field of image processing, and in particular to an image processing method, apparatus, device and storage medium.
Background
A picture in Raw format is the most original image data information, while the JPEG pictures we see in daily life come from Raw pictures after image compression, which discards a great deal of important information.
In the prior art, noise reduction is generally performed on JPEG images, but owing to the limitations of existing JPEG noise reduction algorithms and the information lost through JPEG compression, the output JPEG images still suffer from considerable noise.
With the development of image processing technology, high-dynamic images are widely used because they provide a larger dynamic range and more image detail. In some applications, however, the display device on which images must be shown has a limited or low dynamic range (low dynamic range being relative to high dynamic range), for example CRT (Cathode Ray Tube) displays, LCD displays or projectors. Scenarios in which high-dynamic images must be displayed on such limited-dynamic-range devices frequently arise; in that case the high-dynamic image must be processed into an image that can be displayed on the limited-dynamic-range device while retaining image detail, so that the user sees the same effect as the high-dynamic image on a display of limited dynamic range.
Technical Problem
Given the above limitations in image processing, currently output JPEG images suffer from high noise and cannot preserve image detail well on display devices of limited dynamic range.
Technical Solution
Accordingly, it is necessary to provide, for the above technical problem, an image processing method, apparatus, computer device and storage medium capable of reducing the noise of the output image while preserving more image detail on display devices of limited dynamic range.
In a first aspect, the present invention provides an image processing method, the method comprising:
acquiring a Raw image to be processed;
performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
performing conversion processing on the noise-reduced Raw image to obtain an RGB image;
performing brightness correction processing on the RGB image to obtain a target brightness-corrected image;
performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
In one embodiment, performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image specifically includes:
inputting the Raw image to be processed into a preset image noise reduction model for noise reduction processing to obtain a noise-reduced Raw image.
In one embodiment, performing conversion processing on the noise-reduced Raw image to obtain an RGB image specifically includes:
processing the noise-reduced Raw image with an interpolation algorithm to obtain an RGB image.
In one embodiment, performing brightness correction processing on the RGB image to obtain a brightness-corrected image specifically includes:
acquiring an image brightness statistic corresponding to the RGB image;
acquiring an ambient light intensity statistic and obtaining from it the target brightness statistic corresponding to the shooting environment, the ambient light intensity statistic being the light intensity statistic of the shooting environment of the RGB image;
determining a target brightness correction coefficient from the target brightness statistic and the image brightness statistic;
performing brightness correction on the RGB image according to the target brightness correction coefficient to obtain a target brightness-corrected image.
In one embodiment, determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic includes at least one of the following steps:
when the target brightness statistic is greater than the image brightness statistic, acquiring a brightness enhancement coefficient as the target brightness correction coefficient;
when the target brightness statistic is smaller than the image brightness statistic, acquiring a brightness attenuation coefficient as the target brightness correction coefficient.
In one embodiment, the target brightness correction coefficient includes a first brightness correction coefficient, and determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic includes:
computing the brightness ratio of the target brightness statistic to the image brightness statistic;
taking the brightness ratio as the argument of a logarithmic function and performing a logarithmic calculation to obtain the first brightness correction coefficient, the base of the logarithmic function being greater than 1.
In one embodiment, the target brightness correction coefficient further includes a second brightness correction coefficient and a third brightness correction coefficient, and determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic includes:
decreasing the first brightness correction coefficient to obtain the second brightness correction coefficient;
increasing the second brightness correction coefficient to obtain the third brightness correction coefficient;
performing brightness correction on the RGB image according to the target brightness correction coefficient to obtain the target brightness-corrected image includes:
performing brightness correction on the RGB image according to the first, second and third brightness correction coefficients respectively, to obtain a first brightness-corrected image corrected by the first coefficient, a second brightness-corrected image corrected by the second coefficient and a third brightness-corrected image corrected by the third coefficient;
performing pixel dynamic range mapping on the target brightness-corrected image to obtain the target dynamic image includes:
performing pixel dynamic range mapping on the first, second and third brightness-corrected images respectively, to obtain a first mapped dynamic image corresponding to the first brightness-corrected image, a second mapped dynamic image corresponding to the second brightness-corrected image and a third mapped dynamic image corresponding to the third brightness-corrected image;
fusing the first, second and third mapped dynamic images to obtain the target dynamic image.
In one embodiment, fusing the first, second and third mapped dynamic images to obtain the target dynamic image includes:
fusing the first, second and third mapped dynamic images to obtain a fused image;
acquiring an image region of the fused image;
acquiring a reference image region;
computing the local mapping gain value of the image region of the fused image relative to the reference image region;
performing tone mapping processing on the RGB image according to the local mapping gain value to obtain the target dynamic image.
In a second aspect, the present invention provides an image processing apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a Raw image to be processed;
a noise reduction module, configured to perform noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
a conversion processing module, configured to perform conversion processing on the noise-reduced Raw image to obtain an RGB image;
a brightness correction module, configured to perform brightness correction processing on the RGB image to obtain a target brightness-corrected image;
a dynamic mapping module, configured to perform pixel dynamic range mapping on the target brightness-corrected image to obtain a target dynamic image.
In a third aspect, the present invention provides a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
acquiring a Raw image to be processed;
performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
performing conversion processing on the noise-reduced Raw image to obtain an RGB image;
performing brightness correction processing on the RGB image to obtain a target brightness-corrected image;
performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
In a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the following steps:
acquiring a Raw image to be processed;
performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
performing conversion processing on the noise-reduced Raw image to obtain an RGB image;
performing brightness correction processing on the RGB image to obtain a target brightness-corrected image;
performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
Beneficial Effects
The above image processing method, apparatus, computer device and storage medium acquire a Raw image to be processed, perform noise reduction processing on it and then conversion processing to obtain an RGB image, perform brightness correction processing on the RGB image to obtain a target brightness-corrected image, and perform pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image whose pixel dynamic range is smaller than that of the RGB image. By first denoising the Raw image and then mapping the pixel dynamic range from high dynamic range to low dynamic range, this process keeps the noise of the output image low while preserving more image detail on display devices of limited dynamic range, improving the image quality of the output image.
Brief Description of the Drawings
FIG. 1 is an application environment diagram of an image processing method in one embodiment;
FIG. 2 is a schematic flowchart of an image processing method in one embodiment;
FIG. 3 is a schematic flowchart of a method for generating an image noise reduction model in another embodiment;
FIG. 4 is a schematic flowchart of the step of determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic in another embodiment;
FIG. 5 is a schematic flowchart of a method for fusing the first, second and third mapped dynamic images to obtain the target dynamic image in another embodiment;
FIG. 6 is a schematic flowchart of obtaining the ambient light intensity statistic in another embodiment;
FIG. 7 is a structural block diagram of an image processing apparatus in one embodiment;
FIG. 8 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely explain the present application and do not limit it.
The image processing method provided by the present application can be applied in the application environment shown in FIG. 1. The application environment includes an image capture device 102 and a terminal 104, which are communicatively connected. After the image capture device 102 captures a dynamic image, it transmits it to the terminal 104. The terminal 104 acquires the Raw image to be processed and may first perform noise reduction and conversion processing on it to obtain an RGB image; from the acquired Raw image it then obtains the image brightness statistic corresponding to the RGB image and the ambient light intensity statistic, and from the ambient light intensity statistic obtains the target brightness statistic corresponding to the shooting environment, the ambient light intensity statistic being the light intensity statistic of the shooting environment of the RGB image. The terminal 104 determines the target brightness correction coefficient from the target brightness statistic and the image brightness statistic, performs brightness correction on the RGB image according to it to obtain the target brightness-corrected image, and performs pixel dynamic range mapping on the target brightness-corrected image to obtain the target dynamic image, whose pixel dynamic range is smaller than that of the RGB image. The image capture device 102 may be, but is not limited to, any device with an image capture function, located either outside or inside the terminal 104, for example various cameras, scanners and image capture cards external to the terminal 104. The terminal 104 may be, but is not limited to, various cameras, personal computers, notebook computers, smartphones, tablet computers and portable wearable devices.
It can be understood that the method provided by the embodiments of the present application may also be executed by a server.
In one embodiment, as shown in FIG. 2, an image processing method is provided. Taking its application to the terminal of FIG. 1 as an example, the method includes the following steps:
S201: acquire a Raw image to be processed.
A Raw image is the raw data obtained when a CMOS or CCD image sensor converts the captured light signal into a digital signal; it is lossless and contains the object's original color information. The data format of a Raw image generally uses the Bayer array arrangement: a color filter array (CFA) is produced by filter optics, and since the human eye is more sensitive to the green band, the Bayer format contains 50% green information and 25% each of red and blue information. A Bayer array is a 4x4 array composed of 8 green, 4 blue and 4 red pixels; when the grayscale pattern is converted into a color picture, nine operations are performed with a 2x2 matrix to finally generate one color image. Bayer arrays generally come in four formats: RGGB (a), BGGR (b), GBRG (c), GRBG (d).
Specifically, when the terminal receives an image processing instruction, the instruction carries the image identifier of the image to be processed; through this identifier the terminal can retrieve the Raw image to be processed from the stored images, so that subsequent processing can be carried out on it.
The Raw image to be processed may be captured by any device with a shooting function, such as a digital camera, panoramic camera, mobile phone, tablet computer or action camera. It may also be a Raw image obtained by any image processing means, such as image transformation, image stitching, image segmentation, image synthesis, image compression, image enhancement or image restoration. Furthermore, it may be a panoramic image or an ordinary planar image.
S202: perform noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image.
Various noise reduction methods may be used on the Raw image to be processed. In one embodiment, the Raw image to be processed is input into a preset image noise reduction model for noise reduction processing to obtain the noise-reduced Raw image.
In one embodiment, as shown in FIG. 3, a method for generating a preset image noise reduction model is provided, including the following steps:
S301: acquire N first Raw images of different Bayer arrays in the same scene.
The first Raw images may be captured by any device with a shooting function, such as a digital camera, panoramic camera, mobile phone, tablet computer or action camera. They may also be Raw images obtained by any image processing means, such as image transformation, image stitching, image segmentation, image synthesis, image compression, image enhancement or image restoration; and they may be panoramic images or ordinary planar images.
To acquire N first Raw images of different Bayer arrays in the same scene, several approaches can be used. In a first approach, one shooting device continuously shoots multiple Raw images of the same scene, or multiple shooting devices shoot multiple Raw images of the same scene, and N first Raw images with consistent Bayer arrays are then selected from them. In a second approach, one shooting device continuously shoots N Raw images of the same scene, or multiple devices shoot multiple Raw images of the same scene, and N first Raw images with consistent Bayer arrays are then obtained through a Bayer array conversion method. In a third approach, one or more shooting devices continuously shoot N first Raw images of different Bayer arrays in the same scene. It should be understood that different combinations of the above approaches, or any other approach, can also yield N first Raw images of different Bayer arrays in the same scene; they are not enumerated one by one here.
S302: fuse the N first Raw images of different Bayer arrays to obtain N corresponding second Raw images.
In one embodiment, S302 specifically includes the following steps:
S3021: select each of the N first Raw images in turn as a reference image;
S3022: detect the difference points between the reference image and the other N-1 first Raw images;
S3023: process the reference image according to the difference points to obtain images to be fused;
S3024: perform pixel-wise weighted fusion on the images to be fused to obtain the corresponding second Raw image.
In one embodiment, in step S3021, one way to select each of the N first Raw images in turn as the reference image is to label the N first Raw images in sequence as the 1st image, 2nd image, 3rd image, ..., Nth image, and then select the 1st, 2nd, 3rd, ..., Nth image in turn as the reference image.
In one embodiment, in step S3022, detecting the difference points between the reference image and the other N-1 first Raw images specifically includes the following steps:
dividing each of the N first Raw images into G grids, where G is an integer greater than or equal to 2;
using an alignment algorithm to compute the difference points between the content of each grid of the reference image and the content of the corresponding grid of the other N-1 first Raw images.
In one embodiment, when dividing the N first Raw images into G grids (G being an integer greater than or equal to 2), various methods can be used. Common grids fall into two classes, structured and unstructured. In a structured grid there are only a limited number of ways in which cell nodes connect: every interior node in the generated grid region has the same number of adjacent cells and the same number of adjacent nodes. For a 2D plane or 3D surface the cells of a structured grid are generally quadrilaterals, and for 3D solid space mainly hexahedra. In an unstructured grid, cell nodes may connect in arbitrary ways, the same number of adjacent cells is not required, and different nodes in the grid region may have different numbers of connections; for a 2D plane or 3D surface the cells are generally triangles, and for 3D solid space mainly tetrahedra. Common structured grid generation methods include the mapping method and geometric decomposition; common unstructured methods include the octree method, the quadtree method and Delaunay triangulation.
It should be understood that the above lists only some common grid generation methods and the embodiments of the present invention are not limited to them: any one or more of the above methods, or other grid generation methods in the prior art, may be used to divide the N first Raw images into G grids; details are not repeated here.
Further, it should be understood that the G grids may take any shape, for example an AxA grid or an AxB grid, where A and B are unequal integers each greater than or equal to 2.
In one embodiment, an alignment algorithm is used to compute the difference points between each grid of the reference image and the corresponding grids of the other N-1 first Raw images. The alignment algorithm can be any image alignment algorithm, for example a method based on pixel-value cross-correlation, the sequential similarity detection algorithm, a phase correlation algorithm based on the Fourier transform, an alignment algorithm based on mutual information, or an optimization-based image alignment method such as the Lucas-Kanade algorithm and its refinements.
In one embodiment, in step S3023, processing the reference image according to the difference points to obtain the images to be fused specifically includes the following steps:
if a difference point is detected to be caused by movement in disorderly directions, deleting the corresponding grid from the N first Raw images to obtain N images to be fused;
if a difference point is detected to be caused by movement in a single direction, translating the corresponding grids of the N-1 first Raw images relative to the reference image in the direction opposite to that single direction, and taking the N-1 translated images together with the reference image as the N images to be fused.
In one embodiment, a difference caused by movement in disorderly directions is defined relative to a difference caused by movement in a single direction: it involves movement differences in at least two directions. Further, a difference caused by movement means an offset between two grids in a given direction. For example, take grid 1 on the corresponding first Raw image and grid 2 on the reference image, with differences detected in both the X and Y directions. If grid 1 and grid 2 show offsets in both X and Y, the difference between them is caused by movement in disorderly directions, and grid 1 and grid 2 are deleted. If, as above, grid 1 and grid 2 show an offset only in X or only in Y, the difference between them is caused by movement in a single direction; grid 2 is then translated relative to grid 1 in the direction opposite to that single offset direction, ensuring that no difference point remains between the translated grids in any direction.
In one embodiment, in step S3024, pixel-wise weighted fusion is performed on the images to be fused to obtain the corresponding second Raw image. Pixel-wise weighted fusion generally computes weighted sums directly on the image pixels; the algorithm used to choose the weights directly affects the quality of the fused image. According to how the fusion weight coefficients are chosen, methods can be divided into average weighted fusion (directly averaging the corresponding pixels of two images), multi-scale weighted-gradient fusion, principal component analysis (PCA), and so on.
It should be understood that the pixel-wise weighted fusion in step S3024 is not the only fusion option in step S302, merely one of them; it can be replaced by other pixel-level image fusion methods, for example simple pixel-wise additive fusion or fusion based on multi-scale transforms. Pixel-wise additive fusion generally means adding the pixels at corresponding positions of two or more images of the same size to produce a new image containing the information of all of them. Clearly, adding the pixel values at corresponding coordinates of two or more 255-level grayscale images can exceed the maximum representable gray level of 255, so the result of the addition must be processed. There are three basic methods: take the average of the summed gray values as the result; scale the results proportionally, according to the minimum and maximum of the summed gray values over all pixels, so that they fall within the 0 to 255 range; or, when the summed value exceeds 255, simply take 255.
Further, in this embodiment of the present invention, the images to be fused can be fused by pixel-wise average weighting, taking the average of the summed multi-pixel gray values as the result, i.e. as the pixel gray value of the corresponding second Raw image. Because the average of the summed gray values is used as the pixel gray value of the second Raw image, the random noise between the images to be fused can be effectively reduced.
S303: subtract the corresponding second Raw image from each first Raw image pixel by pixel to obtain the corresponding residual noise image.
In one embodiment, subtracting the corresponding second Raw image from the first Raw image pixel by pixel yields the corresponding residual noise image. Pixel-wise subtraction generally means subtracting the pixels at corresponding positions of two images of the same size to produce a new image containing the information of both. When the subtraction of corresponding pixel values of two 255-level grayscale images is greater than or equal to zero, that value is taken as the gray value of the corresponding pixel in the result image; when the result is less than zero, the negative value is generally kept as the result. Of course, for certain special purposes the absolute value may be taken instead. Further, in this embodiment of the present invention, subtracting the corresponding second Raw image from the first Raw image pixel by pixel yields the corresponding residual noise image, which contains the difference information between the first Raw image and the second Raw image.
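For illustration, a minimal numpy sketch of the pixel-wise average-weighted fusion and the residual computation follows; the function name is an assumption, and equal weights are just one of the weighting options named above:

```python
import numpy as np

def fuse_and_residual(first_raws):
    """first_raws: list of N aligned first Raw images (H x W arrays).

    Returns the second Raw image (pixel-wise average-weighted fusion,
    which suppresses random noise between the frames) and the residual
    noise image of the first frame, which may hold negative values as
    described above.
    """
    stack = np.stack([f.astype(np.float32) for f in first_raws])
    second_raw = stack.mean(axis=0)   # pixel-wise average fusion
    residual = stack[0] - second_raw  # residual noise image
    return second_raw, residual
```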
S304: build a sample image database based on the first Raw images and the residual noise images.
In one embodiment, building the sample image database based on the first Raw images and the residual noise images specifically includes:
if the Bayer arrays of a first Raw image and its residual noise image are inconsistent, performing conversion processing on the first Raw image and the residual noise image to obtain a first Raw image and a residual noise image with consistent Bayer arrays.
Raw image Bayer arrays generally come in four formats: RGGB (a), BGGR (b), GBRG (c), GRBG (d). In one embodiment, the conversion processing converts the Bayer array format between the first Raw image and the corresponding residual noise image, so that in the sample image database their Bayer array formats stay consistent. To illustrate the conversion: suppose the first Raw image's Bayer format is RGGB while its residual noise image's format is not. When the residual noise image's format is GBRG, one pixel is padded at each of its top and bottom boundaries; when it is GRBG, one pixel is padded at each of its left and right boundaries; and when it is BGGR, one pixel is padded at each of its top, bottom, left and right boundaries.
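A minimal sketch of this boundary-padding conversion follows; edge replication is an assumption, since the text only says that one pixel is expanded at the boundaries:

```python
import numpy as np

def convert_bayer_phase(img, pad_vertical=False, pad_horizontal=False):
    """Shift a Raw image's Bayer phase by padding one pixel at the
    top/bottom (e.g. GBRG -> RGGB), left/right (e.g. GRBG -> RGGB),
    or all four boundaries (e.g. BGGR -> RGGB)."""
    v = 1 if pad_vertical else 0
    h = 1 if pad_horizontal else 0
    return np.pad(img, ((v, v), (h, h)), mode="edge")
```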
Further, to build the sample image database, the above method of acquiring N images of the same scene can be used to acquire image data sets for M scenes, where M is an integer greater than or equal to 2; details are not repeated here.
S305: perform model training based on the sample image database to obtain the image noise reduction model.
In one embodiment, performing model training based on the sample image database to obtain the image noise reduction model specifically includes:
taking the Bayer-consistent first Raw images as input image data and the corresponding Bayer-consistent residual noise images as output image data, and performing model training to obtain the image noise reduction model.
In one embodiment, once the sample image data set has been constructed, the set of first Raw images is used as the training input and the corresponding set of residual noise images as the target output, and model training is carried out according to a preset training algorithm to obtain the image noise reduction model used for image denoising.
The training algorithm is a machine learning algorithm, which processes data through continual feature learning. Machine learning algorithms may include decision tree algorithms, logistic regression, Bayesian algorithms, neural network algorithms (including deep neural networks, convolutional neural networks, recurrent neural networks, etc.), clustering algorithms, and so on.
It should be noted that the choice of training algorithm for the image noise reduction model may be made by those skilled in the art according to actual needs; for example, embodiments of the present application may use a convolutional neural network algorithm for model training to obtain the image noise reduction model.
It should be understood that the sample image data may contain image data sets of M scenes, each scene containing N first Raw images with inconsistent Bayer arrays; the sample image data then contains the first Raw images of the M scenes and their corresponding residual noise images. Further, as M and N grow, the sample image database contains massive amounts of training data, and the resulting image noise reduction model becomes increasingly accurate.
In one embodiment, it is first determined whether the Bayer array of the Raw image to be processed is consistent with that of the preset image noise reduction model obtained by steps S301-S305; if not, the conversion processing of step S304 (not repeated here) is applied so that its Bayer array matches the model's. The Raw image to be processed is then input into the preset image noise reduction model obtained by steps S301-S305 to obtain its corresponding residual noise image, the residual noise image's Bayer array is converted to match the Raw image to be processed, and the final noise-reduced Raw image is then obtained through pixel-wise weighted fusion.
In one embodiment, after steps S301-S302, the obtained second Raw images are used as output images and the corresponding first Raw images as input images, and model training is performed according to a preset training algorithm to obtain an image noise reduction model for image denoising; this model training is the same as in step S305 and is not repeated here. As before, it is first determined whether the Bayer array of the Raw image to be processed is consistent with that of the image noise reduction model obtained in this embodiment; if not, the conversion processing of step S304 (not repeated here) is applied so that its Bayer array matches the model's. The Raw image to be processed is then input into the preset image noise reduction model obtained in this embodiment, and the final noise-reduced Raw image is obtained directly.
It should be understood that there are many ways to generate an image noise reduction model; the above embodiments list only two of them together with the corresponding Raw image denoising methods. The preset image noise reduction model in S202 is not limited to the above embodiments, i.e. the Raw image denoising method in S202 is not limited to the above methods, and those skilled in the art may use other Raw image denoising methods to achieve the purpose of Raw image denoising.
S203: perform conversion processing on the noise-reduced Raw image to obtain an RGB image.
In one embodiment, an interpolation algorithm is used to process the noise-reduced Raw image to obtain the RGB image.
In one embodiment, the interpolation algorithm used here may be nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, volumetric interpolation, a demosaicing algorithm, and so on. Because each pixel of the original Raw image contains only one of the R/G/B components, the two missing components of each pixel must be filled in by interpolation to obtain the RGB image.
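For illustration, a minimal demosaicing sketch using OpenCV's built-in interpolation follows; the RGGB phase, file path and image size are assumptions of the sketch, not values taken from the original disclosure:

```python
import cv2
import numpy as np

# Load a 16-bit noise-reduced Raw frame (hypothetical size and path).
raw = np.fromfile("denoised.raw", dtype=np.uint16).reshape(3000, 4000)

# Interpolate the two missing color components at every pixel; the flag
# must match the actual Bayer phase of the Raw data.
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)
```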
S204: perform brightness correction processing on the RGB image to obtain a target brightness-corrected image.
In one embodiment, performing brightness correction processing on the RGB image to obtain the target brightness-corrected image may proceed through the following steps:
S2041: acquire the image brightness statistic of the RGB image;
S2042: acquire the ambient light intensity statistic and, from it, the standard brightness statistic corresponding to the shooting environment, the ambient light intensity statistic being the light intensity statistic of the shooting environment of the RGB image;
S2043: determine the target brightness correction coefficient from the standard brightness statistic and the image brightness statistic;
S2044: perform brightness correction on the RGB image according to the target brightness correction coefficient to obtain the target brightness-corrected image.
In one embodiment, in step S2041, the image brightness statistic is an aggregate numerical expression of the image brightness, from which the overall brightness of the image can be read; it is obtained by statistics, for example as a mean or a median.
Specifically, after acquiring the RGB image, the terminal converts it to a grayscale image and computes a luminance histogram, which gives the pixel count at each brightness level; the average brightness of the RGB image is obtained from the histogram.
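A minimal sketch of the brightness statistic follows; the Rec.601 luma weights and the 0-to-1 normalization are assumptions chosen to stay consistent with the examples below:

```python
import numpy as np

def image_brightness_stat(rgb, white_level=65535.0, use_median=False):
    """Mean (or median) brightness of an RGB image, normalized to [0, 1].

    rgb: H x W x 3 array; white_level assumes a 16-bit high-dynamic image.
    """
    gray = (rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587
            + rgb[..., 2] * 0.114) / white_level
    return float(np.median(gray)) if use_median else float(gray.mean())
```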
In one embodiment, in step S2042, the ambient light intensity statistic is an aggregate numerical expression of the ambient light intensity, from which the overall ambient light level can be read; it is obtained by statistics, for example as a mean or a median.
Specifically, after acquiring the image brightness statistic and the ambient light intensity statistic corresponding to the RGB image, the terminal can obtain the target brightness statistic corresponding to the ambient light intensity statistic from the correspondence between the two.
In one embodiment, an exposure parameter can be obtained from the sensitivity, shutter speed and aperture value; the image brightness statistic divided by the exposure parameter is used as the argument of a logarithm to obtain the ambient light intensity statistic. Denoting the ambient light intensity statistic EE, the image brightness statistic V0, the target brightness statistic V, the sensitivity I, the shutter speed s in seconds and the aperture value a, the relationship between the ambient light intensity statistic and the image brightness statistic can be expressed as:
EE = log2(V0 / (I*s/a^2))
Different ambient light intensity statistics have different target brightness statistics: with the target brightness statistic denoted V, there is a one-to-one correspondence between the ambient light intensity statistic EE and the target brightness statistic V. Understandably, users prefer different image brightness for images shot under different ambient light intensities. For a photo taken in well-lit daylight on a sunny day, people prefer a brighter picture, so the target brightness statistic V for that EE is larger; for an image shot outdoors at night, the target brightness statistic V corresponding to the EE of a suitably bright image is relatively small.
In one embodiment, the one-to-one correspondence between the ambient light intensity statistic EE and the target brightness statistic V can be written as f. For example, with image pixel brightness values ranging from 0 to 1, f can be a mapping table; Table 1 shows part of the data in the mapping table f:

EE (cd/m^2):  -12     -10     -8      -4      -2      3       5
V  (cd/m^2):  0.025   0.05    0.2     0.25    0.375   0.5     0.5

Table 1: Mapping between the ambient light intensity statistic and the target brightness statistic
Table 1 gives the numerical relation between different ambient light intensity statistics EE and the corresponding target brightness statistics V, and the relation is unique. Based on this one-to-one correspondence, the target brightness statistic V can be written f(EE); the values in the correspondence can be adjusted to tune the target brightness statistic f(EE) for a given light intensity, so that the image achieves a brighter or darker effect.
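A sketch of the mapping f as a table lookup follows; the linear interpolation between the Table 1 entries is an assumption of the sketch, since the text specifies only the one-to-one table:

```python
import numpy as np

# Entries from Table 1 (partial data of the mapping table f).
EE_POINTS = np.array([-12.0, -10.0, -8.0, -4.0, -2.0, 3.0, 5.0])
V_POINTS = np.array([0.025, 0.05, 0.2, 0.25, 0.375, 0.5, 0.5])

def target_brightness(ee):
    """f(EE): target brightness statistic for an ambient light statistic."""
    return float(np.interp(ee, EE_POINTS, V_POINTS))
```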
In one embodiment, in step S2043, the brightness correction coefficient is the parameter by which the image's brightness is corrected; it can bring the image to a suitable brightness, for example brightening or darkening it.
Specifically, once the terminal has obtained the target brightness statistic and the image brightness statistic, the target brightness correction coefficient can be determined through the functional relationship between them, for example a ratio-type functional relationship between the target brightness statistic and the image brightness statistic.
In one embodiment, with the target brightness statistic written f(EE) and the image brightness statistic written V0, the target brightness correction coefficient is obtained from the functional relationship among the coefficient, the target brightness statistic and the image brightness statistic. For example, the target brightness correction coefficient β can be written β = log2(f(EE)/V0).
In one embodiment, in step S2044, the terminal performs brightness correction on the RGB image according to the determined target brightness correction coefficient and takes the corrected RGB image as the target brightness-corrected image. For example, each pixel value of the RGB image can be multiplied by the target brightness correction coefficient to obtain the brightness-corrected pixel values, and the image composed of the corrected pixel values is the target brightness-corrected image.
In one embodiment, the target brightness correction coefficient can be used directly to correct the RGB image so that the corrected image's brightness statistic approaches the target brightness statistic. For example, if each pixel value of the RGB image is X and the coefficient e1 is applied directly, the pixel value at the corresponding position of the target brightness-corrected image is X*e1. Alternatively, a final correction coefficient can be computed through a function related to the target brightness correction coefficient and used for the correction. For example, with an exponential function, the coefficient is used as the exponent and a preset value greater than 1 as the base; with a preset value of 2, the final coefficient is 2^e1 and the pixel value at the corresponding position of the target brightness-corrected image is X*2^e1.
S205: perform pixel dynamic range mapping on the target brightness-corrected image to obtain the target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
Specifically, dynamic range mapping maps an image from one dynamic range to another. The pixel dynamic range of the RGB image is larger than that of the target dynamic image; to adapt the image's pixel dynamic range to a display device of limited or low dynamic range, the target brightness-corrected image obtained by the terminal must undergo pixel dynamic range mapping that converts the high-dynamic image into a low-dynamic one, so that the mapped image suits a low-dynamic display device.
In one embodiment, after the RGB image is corrected with the brightness correction coefficient to obtain the target brightness-corrected image, that image is still a high-dynamic image; it must be converted into a low-dynamic image through dynamic range mapping, so that the low-dynamic image, as the target dynamic image, is suitable for low-dynamic devices.
In one embodiment, a gamma transform can realize the conversion from a high-dynamic image to a low-dynamic image, for example from a high-dynamic image with pixel values of 0 to 65535 to a low-dynamic image with pixel values of 0 to 255.
In the above image processing method, the image brightness statistic of the RGB image to be processed is acquired; the ambient light intensity statistic is acquired and, from it, the target brightness statistic of the shooting environment; the target brightness correction coefficient is determined from the target brightness statistic and the image brightness statistic, and brightness correction is applied to the RGB image with this coefficient to obtain the target brightness-corrected image. The image's correction parameters can thus be determined from the ambient light intensity, and correcting the RGB image with them yields an image of suitable brightness that retains more detail. Pixel dynamic range mapping of the target brightness-corrected image then yields the target dynamic image, whose pixel dynamic range is smaller than that of the RGB image, realizing the conversion from a high-dynamic to a low-dynamic image. This process determines the image correction parameters from the ambient light intensity at shooting time and the image brightness statistic, corrects the image with them, and carries the image from high dynamic range to low dynamic range while retaining more detail at a suitable brightness, improving the image processing effect.
In one embodiment, determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic includes at least one of the following: when the target brightness statistic is greater than the image brightness statistic, acquiring a brightness enhancement coefficient as the target brightness correction coefficient; when the target brightness statistic is smaller than the image brightness statistic, acquiring a brightness attenuation coefficient as the target brightness correction coefficient.
Specifically, a brightness enhancement coefficient increases brightness: when the target brightness statistic exceeds the image brightness statistic, the target brightness correction coefficient brightens the image and can then be called a brightness enhancement coefficient. It can enhance the image brightness linearly or non-linearly. The enhancement and attenuation coefficients may be preset or computed by a preset algorithm: for example, the brightness ratio of the target brightness statistic to the image brightness statistic can be computed and used as the argument of a logarithm to obtain the first brightness correction coefficient, the base of the logarithmic function being greater than 1.
In one embodiment, with the target brightness statistic written f(EE) and the image brightness statistic written V0, the target brightness correction coefficient α can be written α = 2^e,
where e = log2(f(EE)/V0).
When the target brightness statistic is greater than the image brightness statistic, f(EE)/V0 is a positive number greater than 1, e is positive and α is a positive number greater than 1; multiplying the image's pixel values by α increases brightness, and e can then be called a brightness enhancement coefficient that brightens the image. When the target brightness statistic is smaller than the image brightness statistic, f(EE)/V0 is a positive number smaller than 1, e is negative and α is a positive number smaller than 1; e can then be called a brightness attenuation coefficient that darkens the image.
In this embodiment, the target brightness statistic and the image brightness statistic suffice to determine the target brightness parameter, with which the image's brightness can be enhanced or attenuated.
In one embodiment, the target brightness correction coefficient includes a first brightness correction coefficient, and determining it from the target brightness statistic and the image brightness statistic includes: computing the brightness ratio of the target brightness statistic to the image brightness statistic, and taking the ratio as the argument of a logarithm to obtain the first brightness correction coefficient.
Specifically, the terminal can obtain the first brightness correction coefficient from the target brightness statistic and the image brightness statistic, first computing their brightness ratio. For example, with the target brightness statistic f(EE) and the image brightness statistic V0, the ratio a can be written a = f(EE)/V0.
The first brightness correction coefficient can be written e1, with the base of the logarithmic function greater than 1; for example, e1 is a logarithm to some integer base and varies with its argument, e.g. a monotonically increasing base-2 logarithm, e1 = log2(a).
In this embodiment, because the base of the logarithm is greater than 1, the brightness ratio and the first brightness correction coefficient are positively correlated. Computing e1 from the ratio of the target brightness statistic to the image brightness statistic makes the ratio reflect their relative magnitude: when the target brightness statistic is greater than the image brightness statistic, the first coefficient acts as a brightness enhancement coefficient; when it is smaller, it acts as a brightness attenuation coefficient. The adjusted image therefore matches the ambient brightness of the shooting environment.
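A minimal sketch of the first brightness correction coefficient and its application follows; it reuses target_brightness() from the sketch above, and the 2^e1 multiplier follows the base-2 example in the text:

```python
import numpy as np

def first_brightness_correction(rgb, v0, ee):
    """e1 = log2(f(EE)/V0); multiplying by 2**e1 scales the image mean
    toward the target brightness (e1 > 0 brightens, e1 < 0 darkens)."""
    e1 = float(np.log2(target_brightness(ee) / v0))
    corrected = np.clip(rgb.astype(np.float32) * 2.0 ** e1, 0.0, 65535.0)
    return corrected, e1
```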
In one embodiment, as shown in FIG. 4, the target brightness correction coefficient further includes a second and a third brightness correction coefficient, and determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic includes:
S401: decrease the first brightness correction coefficient to obtain the second brightness correction coefficient.
Specifically, once the first brightness correction coefficient has been obtained, the second can be obtained by decreasing it, taking the first as the base value.
In one embodiment, the decrease can take the form of a percentage reduction. For example, with original correction parameter b, reducing by 0.1b gives the second brightness correction coefficient b1 = b - 0.1b = 0.9b.
In one embodiment, the decrease can take the form of subtracting a coefficient value. For example, with original correction parameter b, subtracting the value m gives the second brightness correction coefficient b1 = b - m, where m can be any positive number, e.g. 3.
S402: increase the first brightness correction coefficient to obtain the third brightness correction coefficient.
Specifically, once the first brightness correction coefficient has been obtained, the third can be obtained by increasing it, taking the first as the base value.
In one embodiment, the increase can take the form of a percentage addition. For example, with original correction parameter b, adding 0.1b gives the third brightness correction coefficient b1 = b + 0.1b = 1.1b.
In one embodiment, the increase can take the form of adding a coefficient value. For example, with original correction parameter b, adding the value n gives the third brightness correction coefficient b1 = b + n, where n is any positive number, e.g. 3.
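Continuing the sketch above, a self-contained helper for the derived coefficients; the 10% offsets mirror the percentage example, and the absolute offsets m and n are equally valid:

```python
def bracket_coefficients(e1, ratio=0.1):
    """Second and third brightness correction coefficients derived from
    the first, using the percentage scheme described above."""
    e2 = e1 - ratio * e1  # decreased -> second coefficient (darker image)
    e3 = e1 + ratio * e1  # increased -> third coefficient (brighter image)
    return e2, e3
```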
S403: perform brightness correction on the RGB image according to the first, second and third brightness correction coefficients respectively, obtaining the first brightness-corrected image from the first coefficient, the second brightness-corrected image from the second coefficient and the third brightness-corrected image from the third coefficient.
Specifically, after acquiring the first, second and third brightness correction coefficients, the terminal processes the RGB image with each of them to obtain the first, second and third brightness-corrected images. For example, with the first coefficient as the target correction coefficient and the corresponding first brightness-corrected image as the target corrected image, the second brightness-corrected image, obtained with the decreased coefficient, is the darker image; likewise, the third brightness-corrected image is the brighter image.
In one embodiment, when the first brightness correction coefficient is large, the third coefficient can be configured closer to the first; when the first coefficient is small, the second can be configured closer to the first. This better balances the detail at each brightness level of the picture, so that the images corrected by the correction coefficients reveal more detail of the target image.
S404: perform pixel dynamic range mapping on the first, second and third brightness-corrected images respectively, obtaining the first mapped dynamic image corresponding to the first brightness-corrected image, the second mapped dynamic image corresponding to the second and the third mapped dynamic image corresponding to the third.
In one embodiment, a gamma transform can be used to map the pixel dynamic range of the first, second and third brightness-corrected images respectively, obtaining the corresponding first, second and third mapped dynamic images. The high-dynamic brightness-corrected images are thereby transformed into low-dynamic-range mapped dynamic images, yielding low-dynamic-range mapped images of different brightness.
In one embodiment, the low-dynamic-range images may be eight-bit low-dynamic-range images and the high-dynamic corrected images sixteen-bit high-dynamic-range images: a high-dynamic image has red, green and blue channels with pixel values between 0 and 65535, while a low-dynamic-range image has red, green and blue channels with pixel values between 0 and 255.
S405: fuse the first, second and third mapped dynamic images to obtain the target dynamic image.
Fusion processing means fusing images of different brightness according to some image fusion method, so that the processed image has richer image detail.
Specifically, the first, second and third mapped dynamic images are first downsampled. From the downsampled images, the first weight map corresponding to the first mapped dynamic image, the second weight map corresponding to the second and the third weight map corresponding to the third are obtained. The downsampled first, second and third mapped dynamic images are converted to grayscale images, and multi-resolution fusion is applied to the three grayscale images together with the first, second and third weight maps to obtain a multi-resolution fused grayscale image. From this grayscale image, the three grayscale images converted from the downsampled mapped dynamic images, and the first, second and third weight maps, new weight maps, namely the fourth, fifth and sixth weight maps, are obtained through the following formulas. Writing the new weight maps as w_i' and the first, second and third weight maps as w_i, with i ∈ (1, 2, 3), the multi-resolution fused grayscale image as I_f, and the grayscale images converted from the first, second and third mapped dynamic images as I_1, I_2 and I_3, the new weight maps w_i' are obtained as:
w_i' = k * w_i
(the expression for the factor k is given by an equation image in the original document and is not reproduced here)
I_f' = w_1*I_1 + w_2*I_2 + w_3*I_3, i ∈ (1, 2, 3)
The new fourth, fifth and sixth weight maps are then each upsampled to form images of the same size as the first, second and third mapped dynamic images, and the weighted fusion of the fourth, fifth and sixth weight maps with the first, second and third mapped dynamic images gives the final fused target dynamic image. Understandably, other fusion methods achieving the same effect may be used in place of the above image fusion method.
In one embodiment, the multi-resolution fusion may also use a biorthogonal-wavelet-transform multi-resolution fusion method, which exploits the redundant and complementary information of multiple images so that the fused image contains richer, more complete information.
In one embodiment, the first, second and third mapped dynamic images can be fused by a Laplacian-pyramid weighted fusion method to obtain the target dynamic image.
In this embodiment, the second and third brightness correction coefficients are derived from the first, the three coefficients yield the first, second and third brightness-corrected images, the corresponding mapped dynamic images are obtained from them, and the three mapped dynamic images are fused to obtain the target dynamic image. Detail complementarity between images of different brightness is thus achieved: brighter images draw more on the values of darker images and darker images draw more on the values of brighter images, so the target dynamic image retains more image detail and the image processing effect is improved.
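For illustration, OpenCV's Mertens exposure fusion can stand in for the weighted multi-resolution fusion described above; note that the patent's own weight-map construction differs in detail, and the random stand-in images exist only to make the sketch runnable:

```python
import cv2
import numpy as np

# Three 8-bit mapped dynamic images of the same scene (random stand-ins).
img_dark = np.random.randint(0, 96, (480, 640, 3), np.uint8)
img_normal = np.random.randint(0, 192, (480, 640, 3), np.uint8)
img_bright = np.random.randint(64, 256, (480, 640, 3), np.uint8)

merge = cv2.createMergeMertens()  # multi-scale (pyramid) exposure fusion
fused = merge.process([img_dark, img_normal, img_bright])  # float32 output
fused_8bit = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```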
In one embodiment, as shown in FIG. 5, fusing the first, second and third mapped dynamic images to obtain the target dynamic image includes:
S501: fuse the first, second and third mapped dynamic images to obtain a fused image.
Specifically, to retain more image detail, the first, second and third mapped dynamic images are fused to obtain the fused image.
In one embodiment, the first, second and third mapped dynamic images are all low-dynamic images with pixel values between 0 and 255; after the three mapped dynamic images are fused, the fused image is likewise a low-dynamic image with pixel values between 0 and 255.
S502: acquire an image region of the fused image.
Specifically, although the fused image retains the highlight and shadow detail of the high dynamic range image, in some highlight regions that originally carried color information, such as colored signboard light boxes, the corrected images of different brightness show different colors because of highlight clipping; a brighter image, for instance, is overexposed by the brightness correction and turns white there. The image obtained by the fusion method therefore often shows a color cast in the highlights. The image region may be part of the fused image or the whole of it.
S503: acquire a reference image region.
Specifically, the fused image has the same size as the second mapped dynamic image, and the second mapped dynamic image contains the image region corresponding to the fused image, from which the reference image region is acquired. The reference image region may be part of the second mapped dynamic image or the whole of it.
S504: compute the local mapping gain value of the image region of the fused image relative to the reference image region.
The local mapping gain value means that this mapping value is the mapping gain value corresponding to the image region.
In one embodiment, an inverse gamma transform is applied to the image region of the fused image and to the reference image region of the second mapped dynamic image, and the ratio of the two inverse-gamma-transformed luminance values gives, for each pixel of the fused image, the linear gain of its luminance relative to the luminance of the corresponding pixel of the second mapped dynamic image.
In one embodiment, the difference of the two inverse-gamma-transformed luminance values, between the to-be-processed image region of the fused image and the reference image region of the second mapped dynamic image, can instead be computed to obtain the linear gain of each fused-image pixel's luminance relative to the corresponding pixel of the second mapped dynamic image.
S505: perform tone mapping processing on the RGB image according to the local mapping gain value to obtain the target dynamic image.
Specifically, after the local mapping gain value is acquired, tone mapping is applied to the RGB image; after a gamma transform, the RGB image is converted to an eight-bit low-dynamic-range image with pixel values between 0 and 255.
In one embodiment, the pixel values of the to-be-processed region of the RGB image are multiplied by the second brightness correction coefficient corresponding to the second mapped dynamic image, then multiplied by the local mapping gain value at the corresponding pixel position; the gained pixel values are then gamma transformed and converted to an eight-bit low-dynamic target dynamic image with pixel values between 0 and 255. This processing retains more image detail, improves the image processing effect and makes highlight colors more accurate.
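A minimal sketch of this local tone mapping step follows; the 2.2 gamma and the Rec.601 luma weights are assumptions, and the division implements the ratio option of S504:

```python
import numpy as np

def _luminance(img8, gamma=2.2):
    """Inverse gamma, then luma of an 8-bit image, as float in [0, 1]."""
    lin = (img8.astype(np.float32) / 255.0) ** gamma
    return lin @ np.array([0.299, 0.587, 0.114], np.float32)

def local_tone_map(rgb16, fused8, dark8, e2, preset=65535.0, eps=1e-6):
    """Apply the per-pixel local mapping gain to the RGB image scaled by
    the second brightness correction coefficient 2**e2, clamp to the
    preset pixel value, then gamma-map to an 8-bit target dynamic image."""
    gain = _luminance(fused8) / (_luminance(dark8) + eps)  # linear gain map
    out = rgb16.astype(np.float32) * (2.0 ** e2) * gain[..., None]
    out = np.clip(out, 0.0, preset) / preset
    return (out ** (1.0 / 2.2) * 255.0 + 0.5).astype(np.uint8)
```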
In one embodiment, when a gained pixel value exceeds the preset pixel value, it is clamped to the preset pixel value. For example, with the preset pixel value set to 65535, any part of a gained pixel value above 65535 is clamped to 65535.
In this embodiment, the fused image is obtained from the first, second and third mapped dynamic images; the to-be-processed image region of the fused image can be processed with the local mapping gain value, and the processed fused image then undergoes a gamma change to obtain the target dynamic image, so that the target image obtained after image processing retains more image detail.
In one embodiment, as shown in FIG. 6, acquiring the ambient light intensity statistic includes:
S601: acquire the sensitivity, shutter speed and aperture value corresponding to the RGB image.
Sensitivity is the camera's sensitivity to light when acquiring the RGB image; a higher sensitivity yields a brighter image, but an excessively high sensitivity degrades image quality and increases image noise. Shutter speed is how long the shutter stays open when the camera acquires an image: the faster the shutter, the shorter the opening time, the less light enters the camera and the darker the image; conversely, the slower the shutter, the longer the opening time, the more light enters and the brighter the image. The aperture value is the relative measure of light passing through the camera lens: the smaller the aperture value, the more light enters per unit time; conversely, the larger the aperture value, the less light enters per unit time.
Specifically, the ambient light intensity statistic has a functional relationship with the sensitivity, shutter speed and aperture value; to obtain the ambient light intensity statistic, these parameters must be acquired first.
S602: obtain the first parameter value from the sensitivity, shutter speed and aperture value.
Specifically, after acquiring the sensitivity, shutter speed and aperture value, the first parameter value is obtained from them. Each halving of the shutter speed, e.g. along the sequence 1 s, 1/2 s, 1/4 s, 1/8 s, halves the light passing through the lens; each one-stop increase of the aperture value, e.g. 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, also halves it. Shutter speed changes by factors of two, while the aperture value changes by factors of the square root of a fixed value; doubling the sensitivity halves the light required. Underexposure can be corrected by setting a larger aperture, a slower shutter speed and a higher sensitivity; overexposure can be avoided by setting a smaller aperture, a faster shutter speed and a lower sensitivity.
In one embodiment, the first parameter value can be expressed by a formula involving the sensitivity, shutter speed and aperture value: with the sensitivity written I, the shutter speed written s and the aperture value written a, the first parameter value c can be written c = I*s/a^2.
S603: compute the parameter ratio of the image brightness statistic to the first parameter.
Specifically, with the image brightness statistic written V0, the parameter ratio of the image brightness statistic to the first parameter can be computed; this ratio gives a preliminary view of the functional relationship between the image brightness statistic and the acquired image parameter values.
In one embodiment, the parameter ratio of the image brightness statistic to the first parameter can be written b = V0/(I*s/a^2).
Step S604: take the parameter ratio as the argument of a logarithmic function to obtain the ambient light intensity statistic.
Specifically, taking the above parameter ratio as the argument of a logarithmic function yields the ambient light intensity statistic, which can be written EE, with EE = log2(b).
The ambient light intensity EE is a value greater than 0, so the logarithm's argument b is greater than 1, i.e. the preprocessed image brightness statistic exceeds the first parameter value.
In this embodiment, the ambient light intensity statistic can be obtained from the sensitivity, shutter speed and aperture value corresponding to the RGB image, together with the functional relationship between the three and the image brightness statistic.
In one embodiment, the terminal first acquires the RGB image to be processed. From the image brightness statistic corresponding to the RGB image and the ambient light intensity statistic it obtains the target brightness statistic of the shooting environment, determines the target brightness correction coefficient from the target brightness statistic and the image brightness statistic, and takes this coefficient as the first brightness correction coefficient. Two further correction coefficients, the second and the third, are then formed by increasing or decreasing brightness on the basis of the first. The high dynamic range image is processed with the three coefficients to obtain three brightness-corrected images: the normal image corrected by the first coefficient, the darker image corrected by the second and the brighter image corrected by the third. After gamma transformation of the three images, the pixel values are mapped to red, green and blue three-channel low-dynamic images with pixel values between 0 and 255. The three mapped low-dynamic images are then fused into a single image. Although the fused image retains the highlight and shadow detail of the original high dynamic range image, in some highlight regions that originally carried color information, such as colored signboard light boxes, images corrected with different coefficients show different colors because of highlight clipping; a brighter image, for example, is overexposed by the brightness correction and turns white there. The fused image therefore often shows a color cast in the highlights. Taking the local brightness of the fused picture as a reference and combining it with the corresponding local brightness of the darker image yields a local tone-mapping gain map. The pixel luminances of the fused image and of the darker of the three low-dynamic-range images are extracted; an inverse gamma transform is applied to the two luminance values, and their ratio (e.g. a division) gives, for each pixel of the fused image, the linear gain of its luminance relative to the corresponding pixel of the darker image. The RGB image's pixel values are multiplied by the brightness correction coefficient corresponding to the darker image and then by the local tone-mapping gain at the corresponding pixel position to obtain the gain image. When a pixel value in the gain image exceeds the preset pixel value, it is clamped to the preset pixel value; a gamma transform is then applied to the gain image's pixel values, converting it to a red, green and blue three-channel, eight-bit low-dynamic-range image with pixel values between 0 and 255.
It should be understood that, although the steps in the flowcharts of FIGS. 1-6 are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution is not strictly ordered and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-6 may comprise multiple sub-steps or stages, which need not be executed at the same moment but may run at different times; their execution order likewise need not be sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages within other steps.
In one embodiment, as shown in FIG. 7, an image processing apparatus 700 is provided, including an image acquisition module 701, a noise reduction module 702, a conversion processing module 703, a brightness correction module 704 and a dynamic mapping module 705, wherein:
the image acquisition module 701 is configured to acquire a Raw image to be processed;
the noise reduction module 702 is configured to perform noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
the conversion processing module 703 is configured to perform conversion processing on the noise-reduced Raw image to obtain an RGB image;
the brightness correction module 704 is configured to perform brightness correction processing on the RGB image to obtain a target brightness-corrected image;
the dynamic mapping module 705 is configured to perform pixel dynamic range mapping on the target brightness-corrected image to obtain a target dynamic image.
For the specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the above image processing apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in or independent of the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure may be as shown in FIG. 8. The computer device includes a processor, a memory and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores image processing data. The network interface of the computer device communicates with external terminals through a network connection. The computer program, when executed by the processor, implements an image processing method.
Those skilled in the art can understand that the structure shown in FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the following steps:
acquiring a Raw image to be processed;
performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
performing conversion processing on the noise-reduced Raw image to obtain an RGB image;
performing brightness correction processing on the RGB image to obtain a target brightness-corrected image;
performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
In a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the following steps:
acquiring a Raw image to be processed;
performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
performing conversion processing on the noise-reduced Raw image to obtain an RGB image;
performing brightness correction processing on the RGB image to obtain a target brightness-corrected image;
performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
Those of ordinary skill in the art can understand that all or part of the processes of the methods in the above embodiments can be accomplished by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided by the present application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory or optical memory, etc. Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (11)

  1. An image processing method, wherein the method comprises:
    acquiring a Raw image to be processed;
    performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image;
    performing conversion processing on the noise-reduced Raw image to obtain an RGB image;
    performing brightness correction processing on the RGB image to obtain a target brightness-corrected image;
    performing pixel dynamic range mapping processing on the target brightness-corrected image to obtain a target dynamic image, the pixel dynamic range of the target dynamic image being smaller than that of the RGB image.
  2. The method of claim 1, wherein performing noise reduction processing on the Raw image to be processed to obtain a noise-reduced Raw image specifically comprises:
    inputting the Raw image to be processed into a preset image noise reduction model for noise reduction processing to obtain a noise-reduced Raw image.
  3. The method of claim 1, wherein performing conversion processing on the noise-reduced Raw image to obtain an RGB image specifically comprises:
    processing the noise-reduced Raw image with an interpolation algorithm to obtain an RGB image.
  4. The method of claim 1, wherein performing brightness correction processing on the RGB image to obtain a brightness-corrected image specifically comprises:
    acquiring an image brightness statistic corresponding to the RGB image;
    acquiring an ambient light intensity statistic and obtaining from it the target brightness statistic corresponding to the shooting environment, the ambient light intensity statistic being the light intensity statistic of the shooting environment of the RGB image;
    determining a target brightness correction coefficient from the target brightness statistic and the image brightness statistic;
    performing brightness correction on the RGB image according to the target brightness correction coefficient to obtain a target brightness-corrected image.
  5. The method of claim 4, wherein determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic comprises at least one of the following steps:
    when the target brightness statistic is greater than the image brightness statistic, acquiring a brightness enhancement coefficient as the target brightness correction coefficient;
    when the target brightness statistic is smaller than the image brightness statistic, acquiring a brightness attenuation coefficient as the target brightness correction coefficient.
  6. The method of claim 4 or 5, wherein the target brightness correction coefficient comprises a first brightness correction coefficient, and determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic comprises:
    computing the brightness ratio of the target brightness statistic to the image brightness statistic;
    taking the brightness ratio as the argument of a logarithmic function and performing a logarithmic calculation to obtain the first brightness correction coefficient, the base of the logarithmic function being greater than 1.
  7. The method of claim 6, wherein the target brightness correction coefficient further comprises a second brightness correction coefficient and a third brightness correction coefficient; determining the target brightness correction coefficient from the target brightness statistic and the image brightness statistic comprises:
    decreasing the first brightness correction coefficient to obtain the second brightness correction coefficient;
    increasing the second brightness correction coefficient to obtain the third brightness correction coefficient;
    performing brightness correction on the RGB image according to the target brightness correction coefficient to obtain the target brightness-corrected image comprises:
    performing brightness correction on the RGB image according to the first, second and third brightness correction coefficients respectively, to obtain a first brightness-corrected image corrected by the first coefficient, a second brightness-corrected image corrected by the second coefficient and a third brightness-corrected image corrected by the third coefficient;
    performing pixel dynamic range mapping on the target brightness-corrected image to obtain the target dynamic image comprises:
    performing pixel dynamic range mapping on the first, second and third brightness-corrected images respectively, to obtain a first mapped dynamic image corresponding to the first brightness-corrected image, a second mapped dynamic image corresponding to the second brightness-corrected image and a third mapped dynamic image corresponding to the third brightness-corrected image;
    fusing the first, second and third mapped dynamic images to obtain the target dynamic image.
  8. The method of claim 7, wherein fusing the first, second and third mapped dynamic images to obtain the target dynamic image comprises:
    fusing the first, second and third mapped dynamic images to obtain a fused image;
    acquiring an image region of the fused image;
    acquiring a reference image region;
    computing the local mapping gain value of the image region of the fused image relative to the reference image region;
    performing tone mapping processing on the RGB image according to the local mapping gain value to obtain the target dynamic image.
  9. An image processing apparatus, wherein the apparatus comprises:
    an image acquisition module, configured to acquire a Raw image to be processed;
    a noise reduction module, configured to perform noise reduction processing on the Raw image to obtain a noise-reduced Raw image;
    a conversion processing module, configured to perform conversion processing on the noise-reduced Raw image to obtain an RGB image;
    a brightness correction module, configured to perform brightness correction processing on the RGB image to obtain a target brightness-corrected image;
    a dynamic mapping module, configured to perform pixel dynamic range mapping on the target brightness-corrected image to obtain a target dynamic image.
  10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 8.
  11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
PCT/CN2021/134712 2020-12-01 2021-12-01 Image processing method, apparatus, device and storage medium WO2022116988A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011378491.5 2020-12-01
CN202011378491.5A CN112381743A (zh) 2020-12-01 2020-12-01 Image processing method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2022116988A1 true WO2022116988A1 (zh) 2022-06-09

Family

ID=74589333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/134712 WO2022116988A1 (zh) 2020-12-01 2021-12-01 图像处理方法、装置、设备和存储介质

Country Status (2)

Country Link
CN (1) CN112381743A (zh)
WO (1) WO2022116988A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565636B (zh) * 2020-12-01 2023-11-21 影石创新科技股份有限公司 Image processing method, apparatus, device and storage medium
CN112381743A (zh) * 2020-12-01 2021-02-19 影石创新科技股份有限公司 Image processing method, apparatus, device and storage medium
CN112837254A (zh) * 2021-02-25 2021-05-25 普联技术有限公司 Image fusion method and apparatus, terminal device and storage medium
CN113674231B (zh) * 2021-08-11 2022-06-07 宿迁林讯新材料有限公司 Method and system for detecting oxide scale in the rolling process based on image enhancement
CN113852759B (zh) * 2021-09-24 2023-04-18 豪威科技(武汉)有限公司 Image enhancement method and shooting apparatus
CN113838070A (zh) * 2021-09-28 2021-12-24 北京地平线信息技术有限公司 Data desensitization method and apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050117799A1 (en) * 2003-12-01 2005-06-02 Chiou-Shann Fuh Method and apparatus for transforming a high dynamic range image into a low dynamic range image
CN102547301A (zh) * 2010-09-30 2012-07-04 苹果公司 System and method for processing image data using an image signal processor
CN105335933A (zh) * 2014-05-27 2016-02-17 上海贝卓智能科技有限公司 Image contrast enhancement method and apparatus
CN105469375A (zh) * 2014-08-28 2016-04-06 北京三星通信技术研究有限公司 Method and apparatus for processing high dynamic range panoramas
CN110892408A (zh) * 2017-02-07 2020-03-17 迈恩德玛泽控股股份有限公司 Systems, methods and apparatus for stereo vision and tracking
CN111885312A (zh) * 2020-07-27 2020-11-03 展讯通信(上海)有限公司 HDR image imaging method, system, electronic device and storage medium
CN112381743A (zh) * 2020-12-01 2021-02-19 影石创新科技股份有限公司 Image processing method, apparatus, device and storage medium
CN112565636A (zh) * 2020-12-01 2021-03-26 影石创新科技股份有限公司 Image processing method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN112381743A (zh) 2021-02-19

Similar Documents

Publication Publication Date Title
WO2022116988A1 (zh) Image processing method, apparatus, device and storage medium
US11882357B2 (en) Image display method and device
CN110428366B (zh) Image processing method and apparatus, electronic device, computer-readable storage medium
CN109636754B (zh) Extremely-low-illumination image enhancement method based on generative adversarial networks
WO2022116989A1 (zh) Image processing method, apparatus, device and storage medium
CN108898567B (zh) Image noise reduction method, apparatus and system
US11037278B2 (en) Systems and methods for transforming raw sensor data captured in low-light conditions to well-exposed images using neural network architectures
CN108694705B (zh) Method for multi-frame image registration, fusion and denoising
JP4234195B2 (ja) Image segmentation method and image segmentation system
CN110033418B (zh) Image processing method and apparatus, storage medium and electronic device
WO2022000397A1 (zh) Low-illumination image enhancement method and apparatus, and computer device
CN110349163B (zh) Image processing method and apparatus, electronic device, computer-readable storage medium
WO2021143300A1 (zh) Image processing method and apparatus, electronic device and storage medium
WO2020143257A1 (zh) Motion-artifact-resistant HDR method and portable terminal
US11184570B2 (en) Method controlling image sensor parameters
CN115550570A (zh) Image processing method and electronic device
TWI694722B (zh) Exposure level control for high dynamic range imaging, system and method
US11017510B1 (en) Digital image dynamic range processing apparatus and method
CN114429476A (zh) Image processing method and apparatus, computer device and storage medium
CN114240767A (zh) Image wide dynamic range processing method and apparatus based on exposure fusion
CN111080543A (zh) Image processing method and apparatus, electronic device and computer-readable storage medium
EP3718049A1 (en) Temporal de-noising
CN115278090B (zh) Single-frame four-exposure WDR processing method based on line exposure
CN114554106B (zh) Automatic exposure method and apparatus, image acquisition method, medium and device
US20230289930A1 (en) Systems and Methods for Lightweight Machine Learning for Image Illumination Control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21900019

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21900019

Country of ref document: EP

Kind code of ref document: A1