WO2024045821A1 - Image processing method, apparatus, computer device and storage medium - Google Patents

Image processing method, apparatus, computer device and storage medium

Info

Publication number
WO2024045821A1
Authority
WO
WIPO (PCT)
Prior art keywords
attribute
image
pixel position
pixel
enhanced
Prior art date
Application number
PCT/CN2023/102697
Other languages
English (en)
French (fr)
Inventor
李浩
张欢荣
孙磊
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to US18/581,818 (published as US20240193739A1)
Publication of WO2024045821A1


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 5/70: Denoising; Smoothing
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/90: Determination of colour characteristics
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Context analysis; Selection of dictionaries
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/20028: Bilateral filtering
    • G06T 2207/20036: Morphological image processing
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20192: Edge enhancement; Edge preservation
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/20224: Image subtraction

Definitions

  • the present application relates to the field of image processing technology, and in particular, to an image processing method, device, computer equipment, storage medium and computer program product.
  • image enhancement refers to improving the source image from one or more image attribute dimensions such as brightness, hue, contrast, and sharpness, so that the image quality of the processed output image is enhanced.
  • the source image and the enhanced image are globally fused based on the designed fusion coefficient to obtain a fused image, that is, an enhanced post-processed image.
  • an image processing method, apparatus, computer equipment, computer-readable storage medium and computer program product are provided.
  • this application provides an image processing method, executed by a computer device, including:
  • the enhanced image is obtained by enhancing the source image
  • this application also provides an image processing device.
  • the device includes:
  • the attribute representation acquisition module is used to obtain the respective attribute representations of the source image and the enhanced image; the enhanced image is obtained by enhancing the source image;
  • the attribute representation comparison module is used to compare the attribute representations of the source image and the enhanced image, and obtain the attribute differences between the source image and the enhanced image at at least some pixel positions;
  • a local fusion weight generation module configured to generate local fusion weights at at least some pixel positions in the source image based on the attribute differences and the attribute representation of the enhanced image;
  • An enhanced fusion weight generation module configured to determine the enhanced fusion weight at at least a portion of the pixel positions in the enhanced image, and the determined enhanced fusion weight at at least one pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image;
  • the fused image generation module is used to generate a fused image of the source image and the enhanced image.
  • this application also provides a computer device.
  • the computer device includes a memory and a processor.
  • the memory stores computer readable instructions.
  • when the processor executes the computer-readable instructions, the above image processing method is implemented.
  • this application also provides a computer-readable storage medium.
  • the computer-readable storage medium has computer-readable instructions stored thereon, and when the computer-readable instructions are executed by a processor, the above image processing method is implemented.
  • this application also provides a computer program product.
  • the computer program product includes computer-readable instructions, which when executed by the processor implement the above image processing method.
  • Figure 1 is an application environment diagram of the image processing method in one embodiment
  • Figure 2 is an application environment diagram of the image processing method in another embodiment
  • FIG. 3 is a schematic flowchart of an image processing method in one embodiment
  • Figure 4 is a schematic diagram of comparing attribute values at pixel positions in attribute representation in one embodiment
  • Figure 5 is a schematic diagram of comparing attribute values at pixel positions in attribute representation in another embodiment
  • Figure 6 is a schematic diagram of generating pixel values at pixel positions in the fused image in one embodiment
  • Figure 7 is a schematic diagram of local fusion weight generation in an embodiment
  • Figure 8 is a schematic flow chart of an image processing method in another embodiment
  • Figure 9 is a schematic flow chart of an image processing method in yet another embodiment
  • Figure 10 is an application scenario diagram of image processing in one embodiment
  • Figure 11 is a schematic flow chart of an image processing method in yet another embodiment
  • Figure 12 is a comparison chart of the effects of image processing in one embodiment
  • Figure 13 is a comparison chart of the effects of image processing in another embodiment
  • Figure 14 is a structural block diagram of an image processing device in one embodiment
  • Figure 15 is an internal structure diagram of a computer device in one embodiment
  • Figure 16 is an internal structure diagram of a computer device in one embodiment.
  • the image processing method provided by the embodiment of the present application can be applied in the application environment as shown in Figure 1.
  • the terminal 102 obtains the respective attribute representations of the source image and the enhanced image, where the enhanced image is obtained by performing enhancement processing on the source image, compares the attribute representations of the source image and the enhanced image, and obtains the attribute differences between the source image and the enhanced image at at least part of the pixel positions.
  • the terminal 102 can be, but is not limited to, various desktop computers, laptops, smart phones, tablets, Internet of Things devices and portable wearable devices.
  • the Internet of Things devices can be smart speakers, smart TVs, smart air conditioners, smart vehicle-mounted devices, etc.
  • Portable wearable devices can be smart watches, smart bracelets, head-mounted devices, etc.
  • the image processing method provided by the embodiment of the present application can be applied in the application environment as shown in Figure 2.
  • the terminal 202 communicates with the server 204 through the network.
  • the data storage system may store data that server 204 needs to process.
  • the data storage system can be integrated on the server 204, or placed on the cloud or other servers.
  • the terminal 202 stores the source image and the enhanced image.
  • the server 204 obtains the source image and the enhanced image from the terminal 202, and obtains the respective attribute representations of the source image and the enhanced image.
  • the enhanced image is obtained by enhancing the source image.
  • the terminal 202 can be, but is not limited to, various desktop computers, laptops, smartphones, tablets, Internet of Things devices and portable wearable devices.
  • the Internet of Things devices can be smart speakers, smart TVs, smart air conditioners, smart vehicle-mounted devices, etc.
  • Portable wearable devices can be smart watches, smart bracelets, head-mounted devices, etc.
  • the server 204 can be implemented as an independent server or a server cluster or cloud server composed of multiple servers, or it can be a node on the blockchain.
  • an image processing method is provided, which can be executed by a terminal or a server alone, or can be executed by a terminal and a server in collaboration.
  • the method is applied to a terminal as an example for description, including the following steps:
  • Step 302 Obtain respective attribute representations of the source image and the enhanced image.
  • the enhanced image is obtained by enhancing the source image.
  • the source image refers to the original image that has not been enhanced.
  • the source image may specifically refer to an image collected by an electronic device.
  • the source image may specifically refer to an image collected by a camera, scanner, etc.
  • the source image may specifically refer to the original video frame in the video data that has not been enhanced.
  • Enhanced images refer to images obtained by enhancing source images to enhance useful information in the source images. Enhancement can be a distortion process whose purpose is to improve the visual effect of the source image for a given image application.
  • an enhanced image may specifically refer to an image obtained by improving the source image from one or more image attribute dimensions such as brightness, hue, contrast, and sharpness. Enhancement processing can be divided into two major categories.
  • image attributes refer to the inherent characteristics of the image.
  • the image attribute may specifically refer to brightness.
  • Brightness refers to the lightness and darkness of the image color, and is the human eye's perception of the intensity of light and darkness of an object.
  • the image attribute may specifically refer to hue.
  • Hue refers to the relative lightness and darkness of an image, which appears as color on color images.
  • the image attribute may specifically refer to contrast. Contrast measures the difference in brightness between the brightest white and the darkest black in the light and dark areas of an image: the larger the difference range, the greater the contrast; the smaller the difference range, the smaller the contrast.
  • attribute representation refers to information that can characterize image attributes.
  • attribute representation may specifically refer to an image that can represent image attributes, that is, an attribute representation map.
  • attribute representation refers to an image that can represent brightness.
  • the attribute representation may specifically be a grayscale image representing brightness.
  • the attribute representation may specifically be a V channel image in the HSV (Hue, Saturation, Value) color model that represents brightness.
  • the attribute representation may specifically be the L channel image in the LAB color model.
  • L represents brightness; A and B are two color channels. A runs from dark green (low value) through gray (medium value) to bright pink (high value), and B runs from bright blue (low value) through gray (medium value) to yellow (high value).
  • the attribute representation refers to the image that can represent the hue.
  • the attribute representation may specifically be an H-channel image in the HSV color model.
  • the attribute representation may be an image obtained by combining the A channel and the B channel in the LAB color model.
  • the attribute representation refers to the image that can represent the contrast.
  • the attribute value at each pixel position in the image that represents contrast can be obtained by calculating the difference between the pixel value at each pixel position and the average pixel value.
  • the attribute value at each pixel position can also be the difference between the maximum pixel value and the minimum pixel value in the local neighborhood of each pixel position.
  • the size of the local neighborhood can be configured according to the actual application scenario.
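To make the two contrast variants concrete, here is a minimal Python sketch; the neighborhood size is an arbitrary choice, not taken from the source:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def contrast_representation(attr, size=7):
    """Local variant: max minus min pixel value within each pixel's
    local neighborhood (`size` is a configurable assumption)."""
    attr = attr.astype(np.float32)
    return maximum_filter(attr, size=size) - minimum_filter(attr, size=size)

def global_contrast_representation(attr):
    """Alternative variant: difference between each pixel value and
    the average pixel value of the image."""
    attr = attr.astype(np.float32)
    return attr - attr.mean()
```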
  • the terminal will obtain the respective attribute representations of the source image and the enhanced image based on the image attributes of interest.
  • what the terminal obtains may be attribute representations of the source image and the enhanced image under the same image attributes.
  • the attribute representation may be an attribute representation map.
  • the image attributes of interest may be configured according to the actual application scenario.
  • the image attribute of concern may specifically be at least one of brightness, hue, and contrast.
  • the terminal when the image attribute of concern is brightness, the terminal obtains the attribute representation of the source image and the enhanced image under brightness.
  • the attribute representation can be a grayscale image, the V-channel image in the HSV color model, or the L-channel image in the LAB color model. This embodiment does not limit the attribute representation representing brightness here.
  • when the attributes are represented as grayscale images, the terminal can obtain the respective grayscale images of the source image and the enhanced image by performing grayscale transformation on the source image and the enhanced image respectively.
  • grayscale transformation refers to a method of changing the grayscale value of each pixel in the source image point by point according to certain target conditions and a certain transformation relationship. The purpose is to improve the image quality and make the display effect of the image clearer.
  • the method of grayscale transformation is not limited here; as long as grayscale transformation can be achieved, it can be a linear or nonlinear transformation.
  • the attribute is represented as a V-channel image in the HSV color model, and the terminal can obtain the respective V-channel images of the source image and the enhanced image by converting the source image and the enhanced image to HSV format respectively.
  • the attribute is characterized as an L-channel image in the LAB color model, and the terminal can obtain the respective L-channel images of the source image and the enhanced image by converting the source image and the enhanced image to LAB format respectively.
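The brightness representations above correspond to standard color-space conversions. A hedged sketch using OpenCV; the file paths and the BGR input assumption are illustrative:

```python
import cv2

def brightness_representation(img, mode="gray"):
    """Return a single-channel brightness attribute map for a BGR image."""
    if mode == "gray":                                    # grayscale transform
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if mode == "hsv_v":                                   # V channel of HSV
        return cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]
    if mode == "lab_l":                                   # L channel of LAB
        return cv2.cvtColor(img, cv2.COLOR_BGR2LAB)[:, :, 0]
    raise ValueError(mode)

src = cv2.imread("source.png")      # source image (illustrative path)
enh = cv2.imread("enhanced.png")    # enhanced image (illustrative path)
gray_src = brightness_representation(src)
gray_enh = brightness_representation(enh)
```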
  • Step 304 Compare the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least some pixel positions.
  • the attribute difference is used to describe the degree of attribute difference between the source image and the enhanced image at the same pixel position.
  • the attribute difference describes the degree of difference in grayscale values between the source image and the enhanced image at the same pixel position.
  • the attribute difference describes the degree of contrast difference between the source image and the enhanced image at the same pixel position.
  • the terminal will compare the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image. By comparing these attribute values, the difference between the attribute values at each same pixel position can be extracted, yielding the differential attribute values of the different attribute representations at each pixel position; the attribute differences between the source image and the enhanced image at at least a part of the pixel positions are then obtained based on these differential attribute values.
  • when comparing the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image, the terminal can perform the comparison by subtracting the attribute values at each same pixel position from each other, obtaining the differential attribute values of the different attribute representations at each pixel position. In one embodiment, as shown in Figure 4, the comparison can be performed by subtracting the attribute value B at the first pixel position in the attribute representation of the enhanced image from the attribute value A at the first pixel position in the attribute representation of the source image, obtaining the differential attribute value at the first pixel position.
  • the terminal can also perform the comparison by dividing the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image, obtaining the differential attribute values of the different attribute representations at each pixel position. For example, the attribute value B at the first pixel position in the attribute representation of the enhanced image can be divided by the attribute value A at the first pixel position in the attribute representation of the source image, obtaining the differential attribute value at the first pixel position.
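Both comparison modes reduce to an elementwise operation on the two attribute maps. A minimal sketch; the epsilon guard against division by zero is an addition, not from the source:

```python
import numpy as np

def differential_attributes(src_attr, enh_attr, eps=1e-6):
    """Return both comparison modes over two attribute maps."""
    a = src_attr.astype(np.float32)   # attribute value A (source image)
    b = enh_attr.astype(np.float32)   # attribute value B (enhanced image)
    diff_sub = a - b                  # comparison by subtraction (A minus B)
    diff_div = b / (a + eps)          # comparison by division (B divided by A)
    return diff_sub, diff_div
```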
  • Step 306 Generate local fusion weights at at least some pixel positions in the source image based on the attribute difference and the attribute representation of the enhanced image.
  • weight refers to the importance of a certain factor or indicator relative to a certain thing.
  • the local fusion weight at the targeted pixel position refers to the weight of the pixel value at the targeted pixel position in the source image when performing weighted fusion of the pixel values at the targeted pixel position in the source image and the enhanced image; it is used to represent the importance of the pixel value at the targeted pixel position in the source image relative to the weighted fusion.
  • the terminal generates local fusion weights at at least a portion of pixel positions in the source image based on the attribute differences at at least a portion of the pixel positions and the attribute values at at least a portion of the pixel positions in the attribute representation of the enhanced image.
  • for each pixel position, the terminal uses the attribute difference at the targeted pixel position and the attribute value at the targeted pixel position in the attribute representation of the enhanced image to generate the local fusion weight at the targeted pixel position; that is, what is generated is the respective local fusion weight at each of multiple pixel positions in the source image.
  • the terminal fuses the attribute difference at the targeted pixel position with the attribute value at the targeted pixel position in the attribute representation of the enhanced image to generate a local fusion weight at the targeted pixel position.
  • a specific way of performing the fusion may be to multiply the attribute difference at the targeted pixel position and the attribute value at the targeted pixel position in the attribute representation of the enhanced image.
  • when fusing the attribute difference at the targeted pixel position and the attribute value at the targeted pixel position, the terminal first separately weights the attribute difference at the targeted pixel position and the attribute value at the targeted pixel position to increase or decrease the importance of the attribute difference and the attribute value, and then fuses the weight-adjusted attribute difference and attribute value to generate the local fusion weight at the targeted pixel position.
  • the terminal can adjust the weights of the attribute difference at the pixel position and the attribute value at the pixel position through preconfigured stretch coefficients. The preconfigured stretch coefficients can be configured according to the actual application scenario and are not specifically limited in this embodiment.
  • the preconfigured stretch coefficient can be any value between 0 and 1.
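Read literally, this fusion is a per-pixel product of the attribute difference and the enhanced image's attribute value. A minimal sketch assuming both inputs are already normalized to [0, 1]; the stretch-coefficient weighting is sketched separately after the stretch-coefficient discussion near the end of this section:

```python
import numpy as np

def local_fusion_weight(attr_diff, enh_attr):
    """attr_diff and enh_attr are assumed pre-normalized to [0, 1]."""
    return np.clip(attr_diff * enh_attr, 0.0, 1.0)
```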
  • Step 308 Determine the enhanced fusion weight at at least a part of the pixel positions in the enhanced image, and the determined enhanced fusion weight at at least one pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image.
  • the enhanced fusion weight at the targeted pixel position refers to the weight of the pixel value at the targeted pixel position in the enhanced image when performing weighted fusion of the pixel values at the targeted pixel position in the source image and the enhanced image; it is used to represent the importance of the pixel value at the targeted pixel position in the enhanced image relative to the weighted fusion.
  • the terminal determines the enhanced fusion weight at at least a part of the pixel positions in the enhanced image based on the local fusion weight at at least a part of the pixel positions in the source image, and the determined enhanced fusion weight at at least one pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image.
  • for each pixel position, the terminal determines the enhanced fusion weight at the targeted pixel position in the enhanced image based on the local fusion weight at the targeted pixel position in the source image, and the determined enhanced fusion weight at the targeted pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image (i.e., at the targeted pixel position); that is, what is determined is the respective enhanced fusion weight at each of the multiple pixel positions.
  • the sum of the local fusion weight and the enhanced fusion weight at each pixel position is preconfigured, so the enhanced fusion weight at the targeted pixel position in the enhanced image (that is, at the same pixel position) can be obtained by subtracting the local fusion weight at the targeted pixel position in the source image from the preconfigured sum.
  • the preconfigured sum can be 1; subtracting the local fusion weight at the targeted pixel position in the source image from 1 yields the enhanced fusion weight at the targeted pixel position in the enhanced image.
  • Step 310 Generate a fused image of the source image and the enhanced image.
  • the fused image refers to the image obtained by fusing the source image and the enhanced image.
  • the fused image may specifically refer to an image obtained by weighted fusion of pixel values at at least part of the pixel positions in the source image and the enhanced image.
  • the terminal will generate a fused image of the source image and the enhanced image. For each pixel position among at least a part of the pixel positions, the pixel value at the targeted pixel position in the fused image is obtained by weighted fusion of the pixel values of the source image and the enhanced image at the targeted pixel position, based on the local fusion weight and the enhanced fusion weight at the targeted pixel position; that is, each of these pixel positions has its own fusion value, so more flexible image fusion can be achieved.
  • the pixel value at the targeted pixel position in the fused image is the sum of the product of the local fusion weight and the pixel value of the source image at the targeted pixel position, and the product of the enhanced fusion weight and the pixel value of the enhanced image at the targeted pixel position.
  • the pixel value C at the first pixel position in the fused image = local fusion weight A2 × pixel value A1 of the source image at the first pixel position + enhanced fusion weight B2 × pixel value B1 of the enhanced image at the first pixel position.
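Putting steps 308 and 310 together: the enhanced weight is derived as 1 minus the local weight (the preconfigured sum of 1 from above), and fusion is per-pixel. A minimal sketch assuming float images and a 2-D weight map:

```python
import numpy as np

def fuse(src, enh, w_local):
    """Per-pixel weighted fusion: C = A2 * A1 + B2 * B1, with B2 = 1 - A2."""
    w_enh = 1.0 - w_local              # enhanced fusion weight, negatively correlated
    if src.ndim == 3:                  # broadcast 2-D weights over color channels
        w_local = w_local[..., None]
        w_enh = w_enh[..., None]
    return w_local * src + w_enh * enh
```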
  • the above image processing method, by obtaining the respective attribute representations of the source image and the enhanced image and comparing them, can attend to the attribute change trend from the source image to the enhanced image and obtain the attribute differences between the source image and the enhanced image at at least a part of the pixel positions. Adaptive fusion weight calculation can then be performed based on the attribute differences at at least a part of the pixel positions and the attribute representation of the enhanced image, to generate local fusion weights at at least a part of the pixel positions in the source image, determine the enhanced fusion weights at at least some pixel positions in the enhanced image, and generate a fused image of the source image and the enhanced image.
  • the entire process generates local fusion weights by attending to the attribute change trend from the source image to the enhanced image, and achieves image fusion using pixel-by-pixel local fusion weights, which can improve the enhancement effect.
  • comparing the attribute representations of the source image and the enhanced image, and obtaining the attribute difference between the source image and the enhanced image at at least a part of the pixel positions includes:
  • a differential attribute representation is generated based on the differential attribute values of the different attribute representations at each pixel position, where the attribute values at at least a portion of the pixel positions in the differential attribute representation represent the attribute differences between the source image and the enhanced image at those pixel positions.
  • the differential attribute value refers to the difference in the attribute value of different attribute representations at each pixel position, that is, the difference in the attribute value of the source image and the enhanced image at each pixel position.
  • the differential attribute value may specifically refer to the difference in attribute values represented by different attributes at each pixel position.
  • the differential attribute value may specifically refer to the ratio of attribute values represented by different attributes at each pixel position.
  • the differential attribute value may specifically refer to the absolute value of the difference between the attribute values represented by different attributes at each pixel position.
  • when the attribute is represented as a grayscale image, the differential attribute value can specifically refer to the difference of different grayscale images at each pixel position; the difference can be the difference of grayscale values, or the ratio of grayscale values.
  • the terminal will compare the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image. By comparing these attribute values, the difference between the attribute values at each same pixel position can be extracted, yielding the differential attribute values of the different attribute representations at each pixel position. The degree of attribute difference between the source image and the enhanced image at each pixel position is then analyzed based on these differential attribute values, and a differential attribute representation is generated.
  • the differential attribute representation includes a differential attribute value at each pixel position; the differential attribute values at at least a part of the pixel positions in the differential attribute representation represent the attribute differences between the source image and the enhanced image at at least a part of the pixel positions.
  • the terminal can perform the comparison by subtracting the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image, obtaining the differential attribute values of the different attribute representations at each pixel position. In one embodiment, for each pixel position, the terminal subtracts the attribute value at the pixel position in the attribute representation of the enhanced image from the attribute value at the pixel position in the attribute representation of the source image, obtaining the differential attribute value of the different attribute representations at the pixel position.
  • the obtained differential attribute value is the difference between the attribute values.
  • the obtained differential attribute value is the absolute value of the difference in attribute values.
  • the absolute value of the difference in attribute values can describe the degree of deviation between hues at pixel positions.
  • the terminal can perform the comparison by dividing the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image, obtaining the differential attribute values of the different attribute representations at each pixel position. In one embodiment, for each pixel position, the terminal divides the attribute value at the pixel position in the attribute representation of the enhanced image by the attribute value at the pixel position in the attribute representation of the source image, obtaining the differential attribute value of the different attribute representations at the pixel position.
  • in this way, the difference between the attribute values at each pixel position can be extracted and the differential attribute values of the different attribute representations at each pixel position can be obtained; the degree of attribute difference between the source image and the enhanced image at each pixel position can then be analyzed based on the differential attribute values at each pixel position, generating a differential attribute representation.
  • generating the differential attribute representation based on the differential attribute values of the different attribute representations at each pixel position includes: mapping the differential attribute values of the different attribute representations at each pixel position into a preset attribute value range, to obtain the attribute differences of the different attribute representations at each pixel position; and generating the differential attribute representation based on the attribute differences of the different attribute representations at each pixel position, where the attribute value at each pixel position in the differential attribute representation is the attribute difference at the corresponding pixel position.
  • the preset attribute value range refers to a preconfigured range used to standardize differential attribute values.
  • the preset attribute value range can be configured according to actual application scenarios. For example, the default attribute value range may be 0 to 1.
  • since the differential attribute values of the different attribute representations at each pixel position may be at different orders of magnitude, the terminal maps the differential attribute values at each pixel position into the preset attribute value range so that they are of the same order of magnitude, obtaining the attribute differences of the different attribute representations at each pixel position. A differential attribute representation is then generated based on these attribute differences, and the attribute value at each pixel position in the differential attribute representation is the attribute difference at the corresponding pixel position.
  • in some embodiments, the terminal first subtracts a noise threshold from the differential attribute values of the different attribute representations at each pixel position to eliminate useless fluctuation noise, and then maps the differential attribute values after subtracting the noise threshold into the preset attribute value range, obtaining the attribute differences of the different attribute representations at each pixel position.
  • mapping the differential attribute values of the different attribute representations at each pixel position into a preset attribute value range to obtain the attribute differences at each pixel position includes: for each differential attribute value of the different attribute representations at each pixel position, when the targeted differential attribute value is lower than the preset differential attribute threshold, mapping the targeted differential attribute value to the lower limit of the preset attribute value range, obtaining the attribute difference at the pixel position corresponding to the targeted differential attribute value; otherwise, mapping the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner, obtaining the attribute difference at the pixel position corresponding to the targeted differential attribute value.
  • the preset differential attribute thresholds can be configured according to actual application scenarios.
  • the preset differential attribute threshold can be configured according to the image attribute and attribute value comparison method, and different preset differential attribute thresholds can be configured for different combinations of image attributes and attribute value comparison methods.
  • the preset difference attribute threshold may be 0.
  • the preset differential attribute threshold when the image attribute is contrast and the attribute value comparison method is attribute value subtraction, the preset differential attribute threshold may be 0.
  • the preset differential attribute threshold when the image attribute is hue and the attribute value comparison method is attribute value subtraction, the preset differential attribute threshold can be a minimum value greater than 0, and the minimum value can be configured according to the actual application scenario.
  • the preset differential attribute threshold may be 1.
  • the lower limit of the preset attribute value range can be configured according to the actual application scenario.
  • the lower limit of the default attribute value range can be 0.
  • the terminal will compare the targeted differential attribute value with the preset differential attribute threshold. When the targeted differential attribute value is lower than the preset differential attribute threshold, the terminal maps the targeted differential attribute value to the lower limit of the preset attribute value range, obtaining the attribute difference at the pixel position corresponding to the targeted differential attribute value. When the targeted differential attribute value is not lower than the preset differential attribute threshold, the terminal maps the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner, obtaining the attribute difference at the pixel position corresponding to the targeted differential attribute value.
  • when mapping the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner: if the targeted differential attribute value is the maximum differential attribute value, the terminal maps it to the upper limit of the preset attribute value range; otherwise, the terminal uses the ratio of the targeted differential attribute value to the maximum differential attribute value as the attribute difference at the corresponding pixel position. Here, the maximum differential attribute value refers to the maximum value among the differential attribute values at each pixel position, and the upper limit of the preset attribute value range can be configured according to the actual application scenario; for example, it may be 1.
  • when mapping the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner, the terminal can also use a normalization function such as the Sigmoid function or the Softmax function to perform the positive-correlation mapping.
  • the Sigmoid function is an S-shaped function common in biology, also known as the S-shaped growth curve; it is often used as a neural network activation function, mapping variables to between 0 and 1.
  • the Softmax function, also called the normalized exponential function, is the generalization of the binary classification function Sigmoid to multi-classification; its purpose is to present multi-classification results in the form of probabilities.
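A hedged sketch of the positive-correlation mapping described above: values below the threshold go to the lower limit 0, and the rest are scaled by the maximum differential attribute value so the maximum maps to the upper limit 1; a Sigmoid could be substituted as noted:

```python
import numpy as np

def map_to_attribute_difference(diff, threshold=0.0):
    """Map differential attribute values into the preset range [0, 1].
    A Sigmoid, e.g. 1 / (1 + exp(-x)), could serve as the positive-
    correlation mapping instead (an alternative named in the source)."""
    out = np.where(diff < threshold, 0.0, diff)  # clamp to lower limit 0
    max_val = out.max()                          # maximum differential value
    return out / max_val if max_val > 0 else out # ratio maps max to 1
```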
  • at least a part of the pixel positions includes the pixel positions forming a calibration area; the attribute value at each pixel position in the calibration area in the attribute representation of the enhanced image meets the calibration area identification conditions, and the pixel value at each pixel position in the non-calibration area in the fused image is equal to the pixel value at the same pixel position in the enhanced image.
  • the calibration area refers to a specific area with obvious image attributes calibrated based on the image attribute of interest.
  • the calibration area may specifically refer to a brighter area calibrated based on brightness.
  • the calibration area may specifically refer to an area containing a specific color that is calibrated based on a specific color.
  • the specific color may be yellow, and the calibration area may specifically refer to an area containing yellow.
  • Calibration area recognition conditions refer to the conditions for identifying the calibration area, which can be configured according to actual application scenarios. For different image attributes, the corresponding calibration area recognition conditions can be different.
  • the pixel positions include part of the pixel positions that form the calibration area.
  • the terminal will perform calibration area recognition on the attribute representation of the enhanced image based on the calibration area recognition conditions, and obtain the calibration area in the attribute representation of the enhanced image that meets the calibration area recognition conditions.
  • the calibration area recognition conditions and attribute representations correspond to the image attributes.
  • the terminal will perform calibration area recognition on the attribute representation of the enhanced image based on the calibration area recognition conditions under the same image attribute, and obtain the calibration area in the attribute representation of the enhanced image.
  • the calibration area generally refers to the area where the enhancement effect becomes worse, and the corresponding non-calibration area generally refers to the area where the enhancement effect is better.
  • the terminal can directly determine the pixel value at each pixel position in the non-calibrated area in the enhanced image as the pixel value at the same pixel position in the fused image; that is, the pixel value at each pixel position in the non-calibrated area in the fused image is equal to the pixel value at the same pixel position in the enhanced image.
  • in this way, the calibration area in the attribute representation of the enhanced image can be determined through the calibration area recognition conditions, so that, on the basis of determining the calibration area, the pixel values at each pixel position in the non-calibration area in the fused image can be determined.
  • the calibration area identification conditions include: the pixel positions in the calibration area in the attribute representation of the enhanced image constitute a connected domain, and the attribute value at each pixel position in the connected domain in the attribute representation of the enhanced image belongs to the preset calibration attribute value range.
  • the preset calibration attribute value range can be configured according to actual application scenarios, and different image attributes correspond to different preset calibration attribute value ranges.
  • the corresponding preset calibration attribute value range can be greater than 0, indicating that the calibration area to be identified is a brighter area.
  • the corresponding preset calibration attribute value range may be less than 0, indicating that the calibration area to be identified is a darker area.
  • the corresponding preset calibration attribute value range can be the attribute value range corresponding to a specific color.
  • the corresponding preset calibration attribute value range can be the attribute value range corresponding to yellow.
  • the terminal will perform calibration area recognition on the attribute representation of the enhanced image based on the calibration area recognition conditions, and obtain the calibration area that meets the calibration area recognition conditions in the attribute representation of the enhanced image.
  • the calibration area recognition conditions include: the pixel positions in the calibration area in the attribute representation of the enhanced image constitute a connected domain, and the attribute value at each pixel position in the connected domain belongs to the preset calibration attribute value range.
  • when performing calibration area recognition, the terminal performs edge-preserving filtering on the attribute representation of the enhanced image to retain large contours while removing texture details, and then compares the attribute value at each pixel position in the filtered attribute representation with the preset calibration attribute value range, obtaining the calibration area in the filtered attribute representation that meets the calibration area identification conditions.
  • since the terminal performs edge-preserving filtering when identifying the calibration area, retaining large contours while removing texture details, the pixel positions in the calibration area in the attribute representation of the enhanced image can form a connected domain.
  • the calibration area identification conditions can be used to realize the calibration area identification of the attribute representation of the enhanced image, and obtain the calibration area in the attribute representation of the enhanced image.
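One way to realize this identification pipeline (edge-preserving filtering, value-range test, connected domains), sketched with OpenCV; the filter parameters and the calibration value range are assumptions:

```python
import cv2
import numpy as np

def calibration_mask(enh_attr, lo, hi):
    """enh_attr: uint8 single-channel attribute map of the enhanced image.
    Returns a boolean calibration mask plus connected-domain labels."""
    # Edge-preserving filtering: keep large contours, drop texture details.
    filtered = cv2.bilateralFilter(enh_attr, 9, 75, 75)
    # Keep pixels whose attribute value lies in the preset calibration range.
    in_range = ((filtered >= lo) & (filtered <= hi)).astype(np.uint8) * 255
    # Contiguous regions satisfying the condition form connected domains.
    n_labels, labels = cv2.connectedComponents(in_range)
    return in_range > 0, labels
```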
  • in some embodiments, the local fusion weight at each pixel position of the non-calibrated area in the source image is zero, and the enhanced fusion weight at each pixel position of the non-calibrated area in the enhanced image is determined based on the local fusion weight at the same pixel position in the source image. Generating a fused image of the source image and the enhanced image then includes: performing weighted fusion of the pixel values of the source image and the enhanced image at the same pixel positions according to the local fusion weights and enhanced fusion weights, to obtain the fused image.
  • the respective local fusion weights at the multiple pixel positions in the non-calibrated area in the source image are zero. The terminal determines the respective enhanced fusion weights at the multiple pixel positions in the non-calibrated area in the enhanced image based on these local fusion weights, where each enhanced fusion weight is negatively correlated with the local fusion weight at the same pixel position in the source image. According to the local fusion weight and enhanced fusion weight at each same pixel position in the non-calibrated area, the terminal performs weighted fusion of the pixel values of the source image and the enhanced image at that pixel position, obtaining the fused image.
  • in some embodiments, the sum of the local fusion weight and the enhanced fusion weight at each pixel position in the non-calibrated area is preconfigured; subtracting the local fusion weight at the pixel position in the source image from the preconfigured sum yields the enhanced fusion weight at the same pixel position in the enhanced image.
  • the preconfigured sum can be 1: subtracting the local fusion weight at the pixel position in the source image (specifically 0) from 1 yields the enhanced fusion weight at the same pixel position in the enhanced image (specifically 1).
  • the pixel value at each same pixel position in the fused image is the sum of the product of the local fusion weight and the pixel value of the source image at that pixel position and the product of the enhanced fusion weight and the pixel value of the enhanced image at that pixel position; that is, pixel value in the fused image = local fusion weight × pixel value of the source image + enhanced fusion weight × pixel value of the enhanced image, at the targeted pixel position. Since the local fusion weight in the non-calibrated area is 0, the pixel value there equals the product of the enhanced fusion weight and the pixel value of the enhanced image at that position.
  • in this way, with the local fusion weight at each pixel position of the non-calibrated area in the source image being zero, the enhanced fusion weight at each pixel position of the non-calibrated area in the enhanced image can be determined. Performing image fusion using the local fusion weights and enhanced fusion weights at the pixel positions of the non-calibrated area makes the pixel values at those pixel positions in the fused image closer to the pixel values at the same pixel positions in the enhanced image, achieving a good enhancement effect.
  • in some embodiments, generating a fused image of the source image and the enhanced image includes: performing weighted fusion of the pixel values of the source image and the enhanced image at each pixel position in the calibration area, to form the pixel value at the targeted pixel position in the calibration area in the fused image; and using the pixel value at each pixel position in the non-calibrated area of the enhanced image as the pixel value at the same pixel position in the non-calibrated area in the fused image.
  • when generating a fused image of the source image and the enhanced image, for each pixel position in the calibration area, the terminal performs weighted fusion of the pixel values of the source image and the enhanced image at the targeted pixel position using the corresponding local fusion weight and enhanced fusion weight, to form the fused image. For the non-calibration area, the terminal uses the pixel value at each pixel position in the non-calibration area of the enhanced image as the pixel value at the same pixel position in the non-calibration area in the fused image.
  • in this way, the source image can be used to correct the calibration area in the enhanced image, so that the pixel values at the corresponding pixel positions in the fused image are closer to the pixel values at the corresponding pixel positions in the source image, improving the enhancement effect. Meanwhile, the pixel values at the pixel positions of the non-calibrated area in the fused image stay close to the pixel values at the corresponding pixel positions in the enhanced image, which can achieve a good enhancement effect.
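Combining the two branches: weighted fusion inside the calibration area and enhanced-image pass-through elsewhere. A minimal sketch assuming float color images, a 2-D weight map, and a boolean calibration mask such as the one from the identification sketch above:

```python
import numpy as np

def fuse_with_calibration(src, enh, w_local, calib_mask):
    """Weighted fusion at calibration-area pixels; enhanced-image pixel
    values everywhere else (the non-calibration area)."""
    w = w_local[..., None]                        # broadcast over channels
    fused = w * src + (1.0 - w) * enh             # per-pixel weighted fusion
    return np.where(calib_mask[..., None], fused, enh)
```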
  • generating local fusion weights at at least a portion of pixel locations in the source image based on the attribute difference and the attribute representation of the enhanced image includes:
  • edge-preserving filtering processing refers to retaining edge contours through filtering while removing texture details within attribute representations.
  • the terminal performs edge-preserving filtering processing on the attribute representation of the enhanced image to obtain a smooth attribute representation of the enhanced image, and then based on the attribute difference at at least a part of the pixel positions and the attribute values at at least a part of the pixel positions in the smooth attribute representation, Generate local fusion weights at at least a portion of pixel locations in the source image.
  • the local fusion weights at at least a part of the pixel positions may specifically refer to respective local fusion weights at the multiple pixel positions.
  • edge-preserving filtering can be performed through guided filtering, bilateral filtering, morphological opening and closing operations, etc. The filtering method for edge preservation is not limited here, as long as edge-preserving filtering can be achieved.
  • the terminal performs edge-preserving filtering on the attribute representation of the enhanced image through guided filtering.
  • Guided filtering explicitly uses a guide image to calculate the output image, where the guide image can be the input image itself or another image. Guided filtering performs better near boundaries than bilateral filtering, and it also has the speed advantage of O(N) linear time.
  • the guidance map uses the input image itself, that is, the attribute representation of the enhanced image is used as the guidance map in guidance filtering.
  • the terminal performs edge-preserving filtering on the attribute representation of the enhanced image through bilateral filtering.
  • Bilateral filtering is a nonlinear filtering method; it is a compromise that combines the spatial proximity and pixel value similarity of the image, taking both spatial information and grayscale similarity into account to achieve edge-preserving denoising. It is simple, non-iterative and local.
  • the terminal performs edge-preserving filtering on the attribute representation of the enhanced image through morphological opening and closing operations.
  • the morphological opening operation is defined as first performing an erosion operation on the image and then a dilation operation. It first erodes the image to eliminate noise and smaller connected domains, and then uses the dilation operation to compensate for the area reduction caused by erosion in the larger connected domains.
  • the morphological closing operation is just the opposite: the image is first dilated and then eroded. It first dilates the image to fill small holes in connected domains, expand connected-domain boundaries and connect adjacent connected domains, and then uses the erosion operation to reduce the boundary expansion and area increase caused by the dilation operation.
  • when performing edge-preserving filtering on the attribute representation of the enhanced image through morphological opening and closing operations, the terminal will first perform an opening operation on the attribute representation of the enhanced image, and then perform a closing operation, obtaining the smooth attribute representation of the enhanced image.
  • in this way, texture details can be removed while edges are retained, achieving smoothing of the attribute representation and obtaining a smooth attribute representation of the enhanced image; local fusion weights at at least a portion of pixel positions in the source image can then be generated based on the attribute differences and the attribute values at at least a portion of pixel positions in the smooth attribute representation.
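The three edge-preserving options, sketched with OpenCV; the kernel size and filter parameters are assumptions, and guided filtering requires the opencv-contrib ximgproc module:

```python
import cv2

def smooth_attribute(attr_map, method="morph"):
    """Edge-preserving smoothing of an attribute map (uint8, single channel)."""
    if method == "morph":
        # Opening (erode then dilate) followed by closing (dilate then erode).
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        opened = cv2.morphologyEx(attr_map, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    if method == "bilateral":
        return cv2.bilateralFilter(attr_map, 9, 75, 75)
    if method == "guided":
        # Guide image is the input itself, per the embodiment above.
        return cv2.ximgproc.guidedFilter(attr_map, attr_map, 8, 100)
    raise ValueError(method)
```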
  • generating local fusion weights at at least a portion of pixel positions in the source image based on attribute differences and attribute values at at least a portion of the pixel positions in the smooth attribute representation includes:
  • For each pixel position in at least a part of the pixel positions, the terminal fuses the attribute difference at the targeted pixel position with the attribute value at the targeted pixel position in the smooth attribute representation to generate the local fusion weight at the targeted pixel position.
  • the specific method of fusion may be to multiply the attribute difference at the targeted pixel position by the attribute value at the targeted pixel position in the smooth attribute representation.
  • In one embodiment, when performing fusion, the terminal first adjusts the weights of the attribute difference at the targeted pixel position and of the attribute value at the targeted pixel position in the smooth attribute representation, so as to increase or decrease the importance of the attribute difference and the attribute value, and then fuses the weight-adjusted attribute difference and attribute value to generate the local fusion weight at the targeted pixel position.
  • the method of weight adjustment is not specifically limited, as long as the weight adjustment can be realized to increase or decrease the importance of attribute differences and attribute values.
  • the way to fuse the weight-adjusted attribute difference and attribute value is to multiply them, and use the product of the weight-adjusted attribute difference and attribute value as the local fusion weight at the targeted pixel position.
  • In one embodiment, fusing the attribute difference at the targeted pixel position with the attribute value at the targeted pixel position in the smooth attribute representation to generate the local fusion weight at the targeted pixel position includes: obtaining the variation stretching coefficient and the attribute stretching coefficient; adjusting the weight of the attribute difference at the targeted pixel position based on the variation stretching coefficient to obtain the attribute difference weight; adjusting the weight of the attribute value at the targeted pixel position in the smooth attribute representation based on the attribute stretching coefficient to obtain the attribute value weight; and fusing the attribute difference weight and the attribute value weight to generate the local fusion weight at the targeted pixel position.
  • the variation stretching coefficient refers to a coefficient used to stretch the attribute difference so as to increase or decrease the importance of the attribute difference (i.e., the degree of variation).
  • the attribute stretching coefficient refers to the coefficient used to stretch the attribute value to increase or decrease the importance of the attribute value.
  • the attribute stretching coefficient can specifically be the brightness stretching coefficient.
  • the attribute stretching coefficient may specifically be the contrast stretching coefficient.
  • Specifically, the terminal obtains the variation stretching coefficient and the attribute stretching coefficient, adjusts the weight of the attribute difference at the targeted pixel position based on the variation stretching coefficient to obtain the attribute difference weight, adjusts the weight of the attribute value at the targeted pixel position in the smooth attribute representation based on the attribute stretching coefficient to obtain the attribute value weight, and fuses the attribute difference weight and the attribute value weight to generate the local fusion weight at the targeted pixel position.
  • the variation stretching coefficient and the attribute stretching coefficient correspond to image attributes, and the variation stretching coefficients and attribute stretching coefficients corresponding to different image attributes can be different.
  • the terminal can obtain the variation stretching coefficient and the attribute stretching coefficient based on the image attribute of interest.
  • the image attribute of concern refers to the image attribute to which the attribute differences and the smooth attribute representation correspond.
  • In one embodiment, the way to fuse the attribute difference weight and the attribute value weight is to multiply them and use the product of the attribute difference weight and the attribute value weight as the local fusion weight at the targeted pixel position.
  • In another embodiment, the way to fuse the attribute difference weight and the attribute value weight is to add them and use the sum of the attribute difference weight and the attribute value weight as the local fusion weight at the targeted pixel position.
  • the variation stretching coefficient and the attribute stretching coefficient can be configured according to the actual application scenario, and are not specifically limited here in this embodiment.
  • the variation stretch coefficient can be any value between 0 and 1
  • the attribute stretch coefficient can be any value between 0 and 1.
  • the terminal can use a power function, an exponential function, a logarithmic function, or the like to adjust the weights of the attribute difference and the attribute value. This embodiment does not limit the method of weight adjustment here, as long as weight adjustment can be achieved.
  • In one embodiment, the terminal can use a power function to adjust the weights of the attribute difference and the attribute value: using the attribute difference as the base and the variation stretching coefficient as the power, the weight of the attribute difference is adjusted to obtain the attribute difference weight; using the attribute value as the base and the attribute stretching coefficient as the power, the weight of the attribute value is adjusted to obtain the attribute value weight.
  • The attribute difference weight can be expressed as diff^(factor_diff), where diff is the attribute difference and factor_diff is the variation stretching coefficient.
  • The attribute value weight can be expressed as A^(factor_A), where A is the attribute value and factor_A is the attribute stretching coefficient.
  • For example, the terminal can use the attribute difference diff at the first pixel position as the base and the variation stretching coefficient factor_diff as the power to obtain the attribute difference weight diff^(factor_diff) at the first pixel position, use the attribute value A at the first pixel position as the base and the attribute stretching coefficient factor_A as the power to obtain the attribute value weight A^(factor_A) at the first pixel position, and then multiply the two to obtain the local fusion weight at the first pixel position: weight = diff^(factor_diff) × A^(factor_A).
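  • A hedged NumPy sketch of the power-function weighting reconstructed above; diff and A are assumed to be per-pixel arrays normalized to [0, 1], and the coefficient values are illustrative examples only.

```python
import numpy as np

def local_fusion_weight(diff, A, factor_diff=0.5, factor_A=0.5):
    diff_weight = np.power(diff, factor_diff)  # attribute difference weight diff^(factor_diff)
    attr_weight = np.power(A, factor_A)        # attribute value weight A^(factor_A)
    return diff_weight * attr_weight           # local fusion weight at each pixel
```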
  • In this embodiment, by adjusting the weight of the attribute difference at the targeted pixel position based on the variation stretching coefficient, the importance of the attribute difference can be adjusted and the attribute difference weight obtained; by adjusting the weight of the attribute value at the targeted pixel position in the smooth attribute representation based on the attribute stretching coefficient, the importance of the attribute value can be adjusted and the attribute value weight obtained. Then, by fusing the attribute difference weight and the attribute value weight and comprehensively considering both, the local fusion weight at the targeted pixel position can be generated.
  • In one embodiment, the attribute differences include image attribute differences of at least two image attributes at at least a part of the pixel positions, and the attribute representation of the enhanced image includes image attribute representations of at least two image attributes of the enhanced image;
  • generating local fusion weights at at least a part of the pixel positions in the source image includes: for each of the at least two image attributes, generating attribute fusion weights of the targeted image attribute at at least a part of the pixel positions based on the image attribute differences of the targeted image attribute and the image attribute representation of the targeted image attribute of the enhanced image; and, for each pixel position in at least a part of the pixel positions, fusing the attribute fusion weights of the at least two image attributes corresponding to the targeted pixel position to generate the local fusion weight at the targeted pixel position.
  • Image attribute representation refers to information that can characterize the targeted image attributes.
  • image attribute representation may specifically refer to an image that can represent the targeted image attribute, that is, an image attribute representation map.
  • Specifically, the attribute differences include image attribute differences of at least two image attributes at at least a part of the pixel positions, and the attribute representation of the enhanced image includes image attribute representations of at least two image attributes of the enhanced image. For each of the at least two image attributes, the terminal generates attribute fusion weights of the targeted image attribute at at least a part of the pixel positions based on the image attribute differences of the targeted image attribute at at least a part of the pixel positions and the image attribute representation of the targeted image attribute of the enhanced image.
  • In one embodiment, the terminal performs edge-preserving filtering on the image attribute representation of the targeted image attribute of the enhanced image to obtain a smooth image attribute representation of the targeted image attribute of the enhanced image. For each pixel position in at least a part of the pixel positions, the image attribute difference of the targeted image attribute at the targeted pixel position and the attribute value at the targeted pixel position in the smooth image attribute representation of the targeted image attribute are fused to generate the attribute fusion weight of the targeted image attribute at the targeted pixel position.
  • a specific way of performing fusion may be to multiply the image attribute difference of the target image attribute at the target pixel position and the attribute value at the target pixel position in the smooth image attribute representation of the target image attribute.
  • Another fusion method may be that the terminal obtains the variation stretching coefficient and the attribute stretching coefficient corresponding to the targeted image attribute, adjusts the weight of the image attribute difference of the targeted image attribute at the targeted pixel position based on the variation stretching coefficient, and adjusts the weight of the attribute value at the targeted pixel position in the smooth image attribute representation of the targeted image attribute based on the attribute stretching coefficient; the adjusted attribute difference and the adjusted attribute value are then fused.
  • the method of fusing the adjusted attribute difference and the adjusted attribute value may be to multiply the adjusted attribute difference by the adjusted attribute value.
  • the terminal fuses the attribute fusion weights of at least two image attributes corresponding to the targeted pixel position to generate a local fusion weight at the targeted pixel position.
  • In one embodiment, the terminal performs the fusion by superimposing the attribute fusion weights of the at least two image attributes corresponding to the targeted pixel position to generate the local fusion weight at the targeted pixel position; that is, the local fusion weight at the targeted pixel position equals the sum of the attribute fusion weights of the at least two image attributes.
  • In this embodiment, the image attribute difference and the image attribute representation of each of the at least two image attributes can be considered separately, enabling an optimized analysis of each image attribute and generating the attribute fusion weights of the at least two image attributes at at least a part of the pixel positions. Then, for each pixel position in at least a part of the pixel positions, the attribute fusion weights of the at least two image attributes corresponding to the targeted pixel position are fused, so that the attribute fusion weights of the various image attributes can be comprehensively considered and a more appropriate local fusion weight at the targeted pixel position can be generated; image fusion can then be achieved with more appropriate local fusion weights, which can improve the enhancement effect.
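  • An illustrative sketch of the per-attribute combination described above, assuming the attribute fusion weight maps (e.g., one for brightness and one for contrast) have already been computed; the clipping to [0, 1] is an added assumption to keep the result usable as a fusion weight.

```python
import numpy as np

def combine_attribute_weights(attribute_weight_maps):
    # attribute_weight_maps: list of HxW arrays, one per image attribute.
    total = np.sum(attribute_weight_maps, axis=0)  # superimpose (sum) the weights
    return np.clip(total, 0.0, 1.0)                # assumed clamp to a valid weight range
```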
  • a schematic flow chart is used to illustrate the image processing method of the present application.
  • the method can be executed by the terminal or the server alone, or can be executed by the terminal and the server collaboratively.
  • the method is applied to a terminal as an example for description, including the following steps:
  • Step 802 Obtain the respective attribute representations of the source image and the enhanced image; the enhanced image is obtained by enhancing the source image;
  • Step 804 Compare the attribute values at each same pixel position in the attribute representations of the source image and the enhanced image, respectively, to obtain differential attribute values at each pixel position of different attribute representations;
  • Step 806: For the differential attribute value of the different attribute representations at each pixel position: when the targeted differential attribute value is lower than a preset differential attribute threshold, map the targeted differential attribute value to the lower limit of the preset attribute value range to obtain the attribute difference at the pixel position corresponding to the targeted differential attribute value; when the targeted differential attribute value is not lower than the preset differential attribute threshold, map the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner to obtain the attribute difference at the pixel position corresponding to the targeted differential attribute value;
  • Step 808 Generate a differential attribute representation based on the attribute difference of different attribute representations at each pixel position.
  • the attribute value at each pixel position in the differential attribute representation is the attribute difference at the corresponding pixel position;
  • Step 810 Perform edge-preserving filtering processing on the attribute representation of the enhanced image to obtain a smooth attribute representation of the enhanced image
  • Step 812: For each pixel position in at least a part of the pixel positions, obtain the variation stretching coefficient and the attribute stretching coefficient, adjust the weight of the attribute difference at the targeted pixel position based on the variation stretching coefficient to obtain the attribute difference weight, adjust the weight of the attribute value at the targeted pixel position in the smooth attribute representation based on the attribute stretching coefficient to obtain the attribute value weight, and fuse the attribute difference weight and the attribute value weight to generate the local fusion weight at the targeted pixel position;
  • Step 814 Determine the enhanced fusion weight at at least a part of the pixel positions in the enhanced image, and the determined enhanced fusion weight at at least one pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image;
  • Step 816 Generate a fused image of the source image and the enhanced image.
  • a schematic flow chart is used to illustrate the image processing method of the present application.
  • the method can be executed by the terminal or the server alone, or can be executed by the terminal and the server collaboratively.
  • the method is applied to a terminal as an example for description, including the following steps:
  • Step 902 Obtain the respective attribute representations of the source image and the enhanced image; the enhanced image is obtained by enhancing the source image;
  • Step 904 Compare the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least a part of the pixel positions.
  • the attribute differences at at least a part of the pixel positions include image attribute differences of at least two image attributes at at least a part of the pixel positions;
  • Step 906: For each of the at least two image attributes, generate attribute fusion weights of the targeted image attribute at at least a part of the pixel positions based on the image attribute differences of the targeted image attribute at at least a part of the pixel positions and the image attribute representation of the targeted image attribute of the enhanced image;
  • Step 908 For each pixel position at at least a part of the pixel positions, fuse the attribute fusion weights of at least two image attributes corresponding to the targeted pixel position to generate a local fusion weight at the targeted pixel position;
  • Step 910 Determine the enhanced fusion weight at at least a part of the pixel positions in the enhanced image, and the determined enhanced fusion weight at at least one pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image;
  • Step 912 Generate a fused image of the source image and the enhanced image.
  • The inventor believes that the traditional enhancement post-processing solution adopts a global fusion scheme in which all areas and all pixels of the image content use the same fusion coefficient, so that the final enhanced output image is, as a whole, either proportionally close to the source image or proportionally close to the enhanced image. Since it is difficult for conventional enhancement algorithms to ensure equally satisfactory enhancement in all areas of all scenes, fusing them in equal proportions with a global fusion scheme leaves the enhancement effect poor in some areas and satisfactory in others; adaptive fusion of enhancement post-processing across different areas and different pixels cannot be achieved. In addition, the setting of global fusion coefficients relies on the experience of algorithm developers, and its adaptability is limited.
  • this application proposes an image processing method.
  • By using pixel-by-pixel local fusion coefficients, this application solves the problem of all pixels sharing one coefficient in the global fusion scheme, achieving more flexible fusion.
  • the image processing method provided by this application can be mainly applied in the post-processing steps of the image enhancement module.
  • This application can use the proposed image processing method to perform enhancement post-processing (adaptive pixel-by-pixel local fusion) on the source image and the enhanced image, thereby outputting the final enhanced image, that is, the fused image.
  • the image processing method proposed in this application can be applied to various video data that require image quality enhancement, including but not limited to various long videos, short videos, etc.
  • the image processing method proposed in this application when the image processing method proposed in this application is applied to video data, it can be applied to processing single video frames in the video data.
  • a single frame of video frame can be used as the source image, and the single frame of video frame can be enhanced to obtain the corresponding enhanced image of the single frame of video frame, and then the image processing method proposed in this application can be used for processing.
  • adjacent frames in the time domain in the video data can also be used as source images and enhanced images respectively, and processed using the image processing method proposed in this application to achieve image enhancement of a single video frame.
  • The image processing method provided by this application can focus on different image attribute dimensions, such as brightness, hue, and contrast, and analyze the change trend of the enhanced image relative to the source image, so that in areas where the enhanced image is worse than the source image in the image attribute dimension of concern, the pixel values stay closer to the source image (that is, the local fusion weight at the pixel position in the source image is closer to 1).
  • the image processing method specifically includes the following steps:
  • Step 1 Extract attribute representation, that is, obtain the attribute representation of the source image and the enhanced image under the brightness attribute.
  • the attribute representation under the brightness attribute can be a grayscale image, a V-channel image in the HSV color model, or an L-channel image in the LAB color model.
  • the terminal will obtain the attribute representations of the source image and the enhanced image respectively under the brightness attribute.
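  • For illustration, a short OpenCV sketch of step one under the brightness attribute, showing the three interchangeable representations named above (grayscale image, V channel of HSV, L channel of LAB); the file names are placeholders.

```python
import cv2

src = cv2.imread("source.png")         # placeholder path
enhanced = cv2.imread("enhanced.png")  # placeholder path

# Any one of these brightness representations can serve as the attribute map.
gray_src = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
v_src = cv2.cvtColor(src, cv2.COLOR_BGR2HSV)[:, :, 2]   # V channel of HSV
l_src = cv2.cvtColor(src, cv2.COLOR_BGR2LAB)[:, :, 0]   # L channel of LAB
gray_enhanced = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
```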
  • Step 2 Extract the bright area, that is, calibrate the bright area in the enhanced image.
  • the terminal performs edge-preserving filtering on the attribute representation of the enhanced image, retaining large contours while removing texture details, thereby extracting brighter areas in the enhanced image and obtaining a smooth attribute representation of the enhanced image.
  • edge-preserving filtering can be performed through guided filtering, bilateral filtering, morphological opening and closing operations, etc.
  • the filtering method for edge preservation is not limited here, as long as edge-preserving filtering can be achieved.
  • edge-preserving filtering can be performed on the attribute representation of the enhanced image through guided filtering.
  • Step 3 Extract the brightened area, that is, calibrate the area where the brightness of the enhanced image is brightened compared to the source image.
  • Specifically, the terminal compares the attribute values of the source image and the enhanced image at each same pixel position to obtain the differential attribute values of the different attribute representations at each pixel position, and subtracts a preset brightness noise threshold from the differential attribute value of the different attribute representations at each pixel position to eliminate useless brightness fluctuation noise.
  • The differential attribute values of the different attribute representations at each pixel position, after the noise threshold has been subtracted, are then mapped into the preset attribute value range to obtain the attribute differences of the different attribute representations at each pixel position, and a differential attribute representation is generated based on these attribute differences; the attribute value at each pixel position in the differential attribute representation is the attribute difference at the corresponding pixel position.
  • the preset brightness noise threshold can be configured according to actual application scenarios.
  • the preset brightness noise threshold may be 0.1 cd/m² (candelas per square meter).
  • For the differential attribute value of the different attribute representations at each pixel position: when the targeted differential attribute value is lower than the preset differential attribute threshold, the terminal maps the targeted differential attribute value to the lower limit of the preset attribute value range to obtain the attribute difference at the pixel position corresponding to the targeted differential attribute value; when the targeted differential attribute value is not lower than the preset differential attribute threshold, the terminal maps the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner to obtain the attribute difference at the pixel position corresponding to the targeted differential attribute value.
  • the preset differential attribute threshold and the lower limit of the preset attribute value range can be configured according to actual application scenarios.
  • the preset differential attribute threshold may be 0, and the lower limit of the preset attribute value range may also be 0.
  • When both are 0, the terminal maps a targeted differential attribute value that is lower than the threshold to 0. In this way, the terminal can uniformly map differential attribute values lower than the preset differential attribute threshold to the lower limit of the preset attribute value range, so that the resulting differential attribute representation focuses only on the areas where the enhanced image is brighter than the source image, where dark detail is lost.
  • In one embodiment, when mapping the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner: when the targeted differential attribute value is the maximum differential attribute value, the terminal maps it to the upper limit of the preset attribute value range; when the targeted differential attribute value is not the maximum differential attribute value, the terminal uses the ratio of the targeted differential attribute value to the maximum differential attribute value as the attribute difference at the corresponding pixel position.
  • the upper limit of the preset attribute value range can be configured according to the actual application scenario.
  • the upper limit of the preset attribute value range may be 1 and the lower limit may be 0; by mapping the differential attribute value at each pixel position into the preset attribute value range, the differential attribute values can each be mapped into [0, 1], that is, normalization is performed.
  • In one embodiment, the data processing involved in step three can be realized through the following two formulas.
  • The first is diff = ReLU(gray_enhanced − gray_src − thresh_noise), where diff refers to the value before normalization, gray_enhanced refers to the attribute representation of the enhanced image, gray_src refers to the attribute representation of the source image, and thresh_noise refers to the preset brightness noise threshold. ReLU refers to the linear rectification function, also known as the rectified linear unit, an activation function commonly used in artificial neural networks, usually referring to nonlinear functions represented by the ramp function and its variants.
  • The second is the formula diff(x, y) = diff(x, y) / diff_max, which can be used to normalize the differential attribute values. The diff(x, y) on the right side of the equation refers to the differential attribute value at the pixel position in the differential attribute representation before normalization, the diff(x, y) on the left side refers to the attribute difference at the pixel position, and diff_max refers to the maximum differential attribute value in the differential attribute representation before normalization.
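  • A minimal NumPy sketch of the two formulas reconstructed above, assuming gray_enhanced and gray_src are float arrays on the same scale; the threshold value is only an example.

```python
import numpy as np

def differential_attribute(gray_enhanced, gray_src, noise_thresh=0.1):
    # ReLU: brightness drops and sub-threshold fluctuations map to 0.
    diff = np.maximum(gray_enhanced - gray_src - noise_thresh, 0.0)
    max_diff = diff.max()
    # Positive-correlation mapping: divide by the maximum to normalize to [0, 1].
    return diff / max_diff if max_diff > 0 else diff
```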
  • Step 4 Generate a fusion mask, that is, combining steps 2 and 3, extract the pixels in the bright area that have lost dark details, and generate a local fusion mask.
  • Specifically, the terminal generates the respective local fusion weights at at least a part of the pixel positions in the source image, that is, the local fusion mask, based on the attribute differences at at least a part of the pixel positions obtained in step three and the attribute values at at least a part of the pixel positions in the smooth attribute representation of the enhanced image obtained in step two.
  • In one embodiment, the terminal obtains the variation stretching coefficient and the brightness stretching coefficient corresponding to the brightness attribute, adjusts the weight of the attribute difference at the targeted pixel position based on the variation stretching coefficient to obtain the attribute difference weight, adjusts the weight of the attribute value at the targeted pixel position in the smooth attribute representation based on the brightness stretching coefficient to obtain the attribute value weight, and fuses the attribute difference weight and the attribute value weight to generate the local fusion weight at the targeted pixel position.
  • The variation stretching coefficient and the brightness stretching coefficient can be configured according to actual application scenarios and are not specifically limited here in this embodiment. It should be noted that the larger the variation stretching coefficient, the more the importance of the attribute difference can be increased; the larger the brightness stretching coefficient, the more the importance of the attribute value can be increased.
  • the terminal can use a power function, an exponential function, a logarithmic function, or the like to adjust the weights of the attribute difference and the attribute value. This embodiment does not limit the method of weight adjustment here, as long as weight adjustment can be achieved.
  • In one embodiment, the terminal can use a power function to adjust the weights of the attribute difference and the attribute value: using the attribute difference as the base and the variation stretching coefficient as the power, the weight of the attribute difference is adjusted to obtain the attribute difference weight; using the attribute value as the base and the brightness stretching coefficient as the power, the weight of the attribute value is adjusted to obtain the attribute value weight.
  • The attribute difference weight can be expressed as diff^(factor_diff), where diff is the attribute difference and factor_diff is the variation stretching coefficient.
  • The attribute value weight can be expressed as lightness^(factor_lightness), where lightness is the attribute value and factor_lightness is the brightness stretching coefficient.
  • The method of fusing the attribute difference weight and the attribute value weight is to multiply them and use the product as the local fusion weight at the pixel position; that is, the local fusion weight can be alpha_mask = lightness^(factor_lightness) × diff^(factor_diff), where lightness^(factor_lightness) is the attribute value weight and diff^(factor_diff) is the attribute difference weight.
  • Step 5 Fusion, that is, using the local fusion mask generated in step 4 to weightedly fuse the source image and the enhanced image to obtain the final enhanced image, which is the fused image.
  • the terminal will generate a fused image of the source image and the enhanced image.
  • For each pixel position in at least a part of the pixel positions, the pixel value at the targeted pixel position in the fused image is obtained by weighted fusion of the pixel values of the source image and the enhanced image at the targeted pixel position according to the local fusion weight and the enhanced fusion weight at the targeted pixel position; that is, each pixel position in at least a part of the pixel positions adopts a unique fusion value, and in this way more flexible image fusion can be achieved.
  • the respective enhancement fusion weights at at least some pixel positions in the enhanced image are negatively correlated with the local fusion weights at the same pixel positions in the source image.
  • In one embodiment, the sum of the local fusion weight and the enhanced fusion weight at each pixel position is preconfigured, so the enhanced fusion weight at the same pixel position in the enhanced image can be obtained by subtracting the local fusion weight at that pixel position in the source image from the preconfigured sum.
  • For example, the preconfigured sum can be 1; the local fusion weight at the pixel position in the source image is then subtracted from 1 to obtain the enhanced fusion weight at the same pixel position in the enhanced image.
  • The fusion can be expressed as dst = alpha_mask × src + (1 − alpha_mask) × enhanced. This formula takes pixels as the execution unit: for each pixel position in at least a part of the pixel positions, the pixel value at the targeted pixel position in the fused image dst is obtained by weighted fusion of the pixel values of the source image src and the enhanced image enhanced at the targeted pixel position, according to the local fusion weight alpha_mask and the enhanced fusion weight 1 − alpha_mask at the targeted pixel position.
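  • A hedged sketch of the per-pixel weighted fusion of step five; src and enhanced are assumed to be float color images of equal shape, and alpha_mask an HxW weight map in [0, 1].

```python
import numpy as np

def fuse(src, enhanced, alpha_mask):
    a = alpha_mask[..., None]              # broadcast the HxW mask over color channels
    return a * src + (1.0 - a) * enhanced  # dst = alpha_mask*src + (1 - alpha_mask)*enhanced
```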
  • A comparison chart of the effects after applying the image processing proposed by this application is provided (an enlarged view of the bright area 1202 is shown in the lower left corner of the source image, the enhanced image, and the final enhanced image in Figure 12). It can be seen that for the bright area 1202, excessive enhancement causes dark details to be lost in the enhanced image, but in the final enhanced image the dark details present in the bright area of the source image are retained; that is, compared with the enhanced image, the dark details in the bright area 1202 of the final enhanced image are clearly richer and more prominent.
  • For the moderate-brightness area 1204, the area is not distinct in the source image (it is marked with a dotted line in the source image), while its visual effect in the enhanced image is better; the final enhanced image therefore retains the moderate-brightness area with the better visual effect after enhancement. That is, compared with the source image, the moderate-brightness area in the final enhanced image has a better visual effect, higher contrast, and better clarity.
  • Another comparison chart of the effects after applying the image processing proposed in this application is provided (an enlarged view of the bright area 1302 is given in the lower left corner of the source image, the conventional enhanced image, and the final enhanced image in Figure 13). It can be seen that, compared with the conventional enhanced image, the dark details in the bright area 1302 of the final enhanced image (present in the area 1306 marked with a black box in Figure 13) are significantly richer and more prominent; compared with the source image, the moderate-brightness area 1304 marked with a white box in the final enhanced image has a better visual effect, higher contrast, and better clarity. Overall, the final enhanced image combines the advantages of the source image and the conventional enhanced image and has a better overall and local look and feel.
  • The inventor believes that this application uses pixel-by-pixel local fusion coefficients so that each area, and even each pixel, can adopt a unique fusion value, achieving more flexible image fusion. Based on picture attributes (such as brightness, contrast, and their changes before and after enhancement) and enhancement change trends (such as loss of details and dimming of colors), adaptive fusion weight calculation is performed without relying on empirical design, achieving more intelligent fusion and ultimately improving the quality of the output image in the image attribute dimensions of concern.
  • embodiments of the present application also provide an image processing device for implementing the above-mentioned image processing method.
  • The solution to the problem provided by this device is similar to the solution described in the above method; therefore, for the specific limitations in one or more image processing device embodiments provided below, refer to the above limitations on the image processing method, and details are not repeated here.
  • an image processing device is provided, including: an attribute representation acquisition module 1402, an attribute representation comparison module 1404, a local fusion weight generation module 1406, an enhanced fusion weight generation module 1408, and a fused image generation module 1410, wherein:
  • the attribute representation acquisition module 1402 is used to obtain the respective attribute representations of the source image and the enhanced image.
  • the enhanced image is obtained by enhancing the source image;
  • the attribute representation comparison module 1404 is used to compare the attribute representations of the source image and the enhanced image, and obtain the attribute differences between the source image and the enhanced image at at least a part of the pixel positions;
  • the local fusion weight generation module 1406 is used to generate local fusion weights at at least some pixel positions in the source image based on attribute differences and attribute representation of the enhanced image;
  • the enhanced fusion weight generation module 1408 is used to determine the enhanced fusion weight at at least a part of the pixel positions in the enhanced image, and the determined enhanced fusion weight at at least one pixel position is negatively correlated with the local fusion weight at the same pixel position in the source image. ;
  • the fused image generation module 1410 is used to generate a fused image of the source image and the enhanced image.
  • With the above image processing device, by acquiring the respective attribute representations of the source image and the enhanced image and comparing them, the attribute change trend from the source image to the enhanced image can be attended to, and the attribute differences between the source image and the enhanced image at at least a part of the pixel positions can be obtained.
  • Adaptive fusion weight calculation can then be performed to generate local fusion weights at at least a part of the pixel positions in the source image, so that the enhanced fusion weights at at least a part of the pixel positions in the enhanced image can be determined and a fused image of the source image and the enhanced image can be generated.
  • The entire process generates local fusion weights by attending to the attribute change trend from the source image to the enhanced image and uses pixel-by-pixel local fusion weights to achieve image fusion, which can improve the enhancement effect.
  • In one embodiment, the attribute representation comparison module is also used to compare the attribute values at each same pixel position in the respective attribute representations of the source image and the enhanced image to obtain the differential attribute values of the different attribute representations at each pixel position, and to generate a differential attribute representation based on these differential attribute values; the attribute values at at least a part of the pixel positions in the differential attribute representation characterize the attribute differences between the source image and the enhanced image at at least a part of the pixel positions.
  • In one embodiment, the attribute representation comparison module is also used to map the differential attribute values of the different attribute representations at each pixel position into a preset attribute value range to obtain the attribute differences of the different attribute representations at each pixel position, and to generate a differential attribute representation based on these attribute differences; the attribute value at each pixel position in the differential attribute representation is the attribute difference at the corresponding pixel position.
  • In one embodiment, the attribute representation comparison module is also used, for the differential attribute value of the different attribute representations at each pixel position, to map the targeted differential attribute value to the lower limit of the preset attribute value range when the targeted differential attribute value is lower than the preset differential attribute threshold, obtaining the attribute difference at the corresponding pixel position, and to map the targeted differential attribute value into the preset attribute value range in a positive-correlation mapping manner when the targeted differential attribute value is not lower than the preset differential attribute threshold, obtaining the attribute difference at the pixel position corresponding to the targeted differential attribute value.
  • In one embodiment, at least a part of the pixel positions includes part of the pixel positions forming a calibration area; the attribute value at each pixel position in the calibration area in the attribute representation of the enhanced image meets the calibration area identification conditions, and the pixel value at each pixel position in the non-calibration area in the fused image is equal to the pixel value at the same pixel position in the enhanced image.
  • the calibration area identification conditions include: the pixel positions in the calibration area in the attribute representation of the enhanced image constitute a connected domain, and the attribute value at each pixel position in the connected domain in the attribute representation of the enhanced image belongs to the preset calibration Property value range.
  • In one embodiment, the local fusion weight at the pixel positions of the non-calibration area in the source image is zero, and the enhanced fusion weight at the pixel positions of the non-calibration area in the enhanced image is determined based on the local fusion weight at the same pixel position in the source image; the fused image generation module is also used, for each same pixel position of the source image and the enhanced image, to perform weighted fusion on the pixel values of the source image and the enhanced image at the same pixel position according to the local fusion weight and the enhanced fusion weight at the same pixel position, to obtain the fused image.
  • In one embodiment, the fused image generation module is also configured, for each pixel position in the calibration area, to perform weighted fusion on the pixel values of the source image and the enhanced image at the targeted pixel position in the calibration area according to the corresponding local fusion weight and enhanced fusion weight, forming the pixel value at the targeted pixel position in the calibration area in the fused image, and to use the pixel value at each pixel position in the non-calibration area of the enhanced image as the pixel value at the same pixel position in the non-calibration area in the fused image.
  • In one embodiment, the local fusion weight generation module is also used to perform edge-preserving filtering on the attribute representation of the enhanced image to obtain a smooth attribute representation of the enhanced image, and to generate local fusion weights at at least a part of the pixel positions in the source image based on the attribute differences and the attribute values at at least a part of the pixel positions in the smooth attribute representation.
  • In one embodiment, the local fusion weight generation module is further configured, for each pixel position in at least a part of the pixel positions, to fuse the attribute difference at the targeted pixel position with the attribute value at the targeted pixel position in the smooth attribute representation to generate the local fusion weight at the targeted pixel position.
  • In one embodiment, the local fusion weight generation module is also used to obtain the variation stretching coefficient and the attribute stretching coefficient, adjust the weight of the attribute difference at the targeted pixel position based on the variation stretching coefficient to obtain the attribute difference weight, adjust the weight of the attribute value at the targeted pixel position in the smooth attribute representation based on the attribute stretching coefficient to obtain the attribute value weight, and fuse the attribute difference weight and the attribute value weight to generate the local fusion weight at the targeted pixel position.
  • In one embodiment, the attribute differences at at least a part of the pixel positions include image attribute differences of at least two image attributes at at least a part of the pixel positions, and the attribute representation of the enhanced image includes image attribute representations of at least two image attributes of the enhanced image;
  • the local fusion weight generation module is further configured, for each of the at least two image attributes, to generate attribute fusion weights of the targeted image attribute at at least a part of the pixel positions based on the image attribute differences of the targeted image attribute at at least a part of the pixel positions and the image attribute representation of the targeted image attribute of the enhanced image, and, for each pixel position in at least a part of the pixel positions, to fuse the attribute fusion weights of the at least two image attributes corresponding to the targeted pixel position to generate the local fusion weight at the targeted pixel position.
  • Each module in the above image processing device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in Figure 15.
  • the computer device includes a processor, a memory, an input/output interface (Input/Output, referred to as I/O), and a communication interface.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface is connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions and a database.
  • This internal memory provides an environment for the execution of an operating system and computer-readable instructions in a non-volatile storage medium.
  • the computer device's database is used to store data such as source images and enhanced images.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instructions when executed by the processor implement an image processing method.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in Figure 16.
  • the computer device includes a processor, memory, input/output interface, communication interface, display unit and input device.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface, display unit and input device are connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • The memory of the computer device includes a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for the execution of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used for wired or wireless communication with external terminals.
  • the wireless mode can be implemented through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • the computer readable instructions when executed by the processor implement an image processing method.
  • the display unit of the computer device is used to form a visually visible picture and can be a display screen, a projection device or a virtual reality imaging device.
  • the display screen can be a liquid crystal display screen or an electronic ink display screen.
  • The input device of the computer device can be a touch layer covering the display screen, or can be buttons, a trackball, or a touchpad provided on the housing of the computer device, or can be an external keyboard, touchpad, mouse, or the like.
  • Figures 15 and 16 are only block diagrams of partial structures related to the solution of the present application, and do not constitute a limitation on the computer equipment to which the solution of the present application is applied.
  • Computer equipment may include more or fewer components than shown in the figures, or some combinations of components, or have different arrangements of components.
  • In one embodiment, a computer device is provided, including a memory and a processor. Computer-readable instructions are stored in the memory, and when the processor executes the computer-readable instructions, the steps in the above method embodiments are implemented.
  • In one embodiment, a computer-readable storage medium is provided, which stores computer-readable instructions; when the computer-readable instructions are executed by a processor, the steps in the above method embodiments are implemented.
  • a computer program product including computer readable instructions, which when executed by a processor implement the steps in each of the above method embodiments.
  • the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, the computer-readable instructions may include the processes of the above method embodiments.
  • Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory, etc.
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include blockchain-based distributed databases, etc., but are not limited thereto.
  • the processors involved in the various embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, etc., and are not limited to this.


Abstract

This application relates to an image processing method and apparatus, a computer device, and a storage medium. The method includes: obtaining respective attribute representations of a source image and an enhanced image (302); comparing the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least a part of the pixel positions (304); generating local fusion weights at at least a part of the pixel positions in the source image based on the attribute differences and the attribute representation of the enhanced image (306); determining enhanced fusion weights at at least a part of the pixel positions in the enhanced image (308); and generating a fused image of the source image and the enhanced image (310).

Description

Image processing method and apparatus, computer device, and storage medium
RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 2022110491786, filed on August 30, 2022 and entitled "Image processing method and apparatus based on enhanced image, and computer device", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer device, a storage medium, and a computer program product.
BACKGROUND
With the development of image processing technology, enhancement post-processing has emerged. Enhancement post-processing is used for further enhancement of an image and supplements image enhancement, so that the final enhancement effect is better. Image enhancement refers to improving a source image in one or more image attribute dimensions such as brightness, hue, contrast, and sharpness, so that the processed output image is enhanced in picture quality.
In conventional technology, during enhancement post-processing, the source image and the enhanced image are globally fused based on a designed fusion coefficient to obtain a fused image, that is, an enhancement post-processed image.
However, the conventional method has the problem of a poor enhancement effect.
SUMMARY
According to various embodiments provided in this application, an image processing method and apparatus, a computer device, a computer-readable storage medium, and a computer program product are provided.
In a first aspect, this application provides an image processing method, executed by a computer device, including:
obtaining respective attribute representations of a source image and an enhanced image, the enhanced image being obtained by performing enhancement processing on the source image;
comparing the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least a part of the pixel positions;
generating local fusion weights at at least a part of the pixel positions in the source image based on the attribute differences and the attribute representation of the enhanced image;
determining enhanced fusion weights at at least a part of the pixel positions in the enhanced image, the determined enhanced fusion weight at at least one pixel position being negatively correlated with the local fusion weight at the same pixel position in the source image; and
generating a fused image of the source image and the enhanced image.
In a second aspect, this application further provides an image processing apparatus. The apparatus includes:
an attribute representation acquisition module, configured to obtain respective attribute representations of a source image and an enhanced image, the enhanced image being obtained by performing enhancement processing on the source image;
an attribute representation comparison module, configured to compare the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least a part of the pixel positions;
a local fusion weight generation module, configured to generate local fusion weights at at least a part of the pixel positions in the source image based on the attribute differences and the attribute representation of the enhanced image;
an enhanced fusion weight generation module, configured to determine enhanced fusion weights at at least a part of the pixel positions in the enhanced image, the determined enhanced fusion weight at at least one pixel position being negatively correlated with the local fusion weight at the same pixel position in the source image; and
a fused image generation module, configured to generate a fused image of the source image and the enhanced image.
In a third aspect, this application further provides a computer device. The computer device includes a memory and a processor, the memory stores computer-readable instructions, and the processor implements the above image processing method when executing the computer-readable instructions.
In a fourth aspect, this application further provides a computer-readable storage medium. The computer-readable storage medium stores computer-readable instructions that, when executed by a processor, implement the above image processing method.
In a fifth aspect, this application further provides a computer program product. The computer program product includes computer-readable instructions that, when executed by a processor, implement the above image processing method.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objectives, and advantages of this application will become apparent from the specification, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of this application or in the conventional technology more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the conventional technology. Apparently, the accompanying drawings in the following description show merely embodiments of this application, and a person of ordinary skill in the art may still derive other drawings from the disclosed drawings without creative efforts.
FIG. 1 is an application environment diagram of an image processing method in an embodiment;
FIG. 2 is an application environment diagram of an image processing method in another embodiment;
FIG. 3 is a schematic flowchart of an image processing method in an embodiment;
FIG. 4 is a schematic diagram of comparing attribute values at pixel positions in attribute representations in an embodiment;
FIG. 5 is a schematic diagram of comparing attribute values at pixel positions in attribute representations in another embodiment;
FIG. 6 is a schematic diagram of generating pixel values at pixel positions in a fused image in an embodiment;
FIG. 7 is a schematic diagram of generating local fusion weights in an embodiment;
FIG. 8 is a schematic flowchart of an image processing method in another embodiment;
FIG. 9 is a schematic flowchart of an image processing method in yet another embodiment;
FIG. 10 is an application scenario diagram of image processing in an embodiment;
FIG. 11 is a schematic flowchart of an image processing method in still another embodiment;
FIG. 12 is an effect comparison diagram of image processing in an embodiment;
FIG. 13 is an effect comparison diagram of image processing in another embodiment;
FIG. 14 is a structural block diagram of an image processing apparatus in an embodiment;
FIG. 15 is an internal structure diagram of a computer device in an embodiment;
FIG. 16 is an internal structure diagram of a computer device in an embodiment.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.
The image processing method provided in the embodiments of this application can be applied in the application environment shown in FIG. 1. The terminal 102 obtains respective attribute representations of a source image and an enhanced image, the enhanced image being obtained by performing enhancement processing on the source image; compares the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least a part of the pixel positions; generates local fusion weights at at least a part of the pixel positions in the source image based on the attribute differences and the attribute representation of the enhanced image; determines enhanced fusion weights at at least a part of the pixel positions in the enhanced image, the determined enhanced fusion weight at at least one pixel position being negatively correlated with the local fusion weight at the same pixel position in the source image; and generates a fused image of the source image and the enhanced image. The terminal 102 may be, but is not limited to, any of various desktop computers, notebook computers, smartphones, tablet computers, Internet of Things devices, and portable wearable devices; the Internet of Things device may be a smart speaker, a smart television, a smart air conditioner, a smart in-vehicle device, or the like, and the portable wearable device may be a smart watch, a smart band, a head-mounted device, or the like.
The image processing method provided in the embodiments of this application can also be applied in the application environment shown in FIG. 2, in which the terminal 202 communicates with the server 204 over a network. A data storage system can store data that the server 204 needs to process; the data storage system can be integrated on the server 204, or placed on a cloud or another server. The terminal 202 stores a source image and an enhanced image. The server 204 obtains the source image and the enhanced image from the terminal 202 and obtains their respective attribute representations, the enhanced image being obtained by performing enhancement processing on the source image; compares the attribute representations of the source image and the enhanced image to obtain attribute differences between the source image and the enhanced image at at least a part of the pixel positions; generates local fusion weights at at least a part of the pixel positions in the source image based on the attribute differences and the attribute representation of the enhanced image; determines enhanced fusion weights at at least a part of the pixel positions in the enhanced image, the determined enhanced fusion weight at at least one pixel position being negatively correlated with the local fusion weight at the same pixel position in the source image; and generates a fused image of the source image and the enhanced image. The terminal 202 may be, but is not limited to, any of various desktop computers, notebook computers, smartphones, tablet computers, Internet of Things devices, and portable wearable devices; the Internet of Things device may be a smart speaker, a smart television, a smart air conditioner, a smart in-vehicle device, or the like, and the portable wearable device may be a smart watch, a smart band, a head-mounted device, or the like. The server 204 can be implemented as an independent server, a server cluster composed of multiple servers, or a cloud server, and can also be a node on a blockchain.
In an embodiment, as shown in FIG. 3, an image processing method is provided. The method may be executed by a terminal or a server alone, or may be executed by a terminal and a server collaboratively. In the embodiments of this application, the method being applied to a terminal is used as an example for description, and the method includes the following steps:
Step 302: Obtain respective attribute representations of a source image and an enhanced image, the enhanced image being obtained by performing enhancement processing on the source image.
The source image refers to an original image that has not yet been enhanced. For example, the source image may specifically be an image captured by an electronic device, such as an image captured by a camera or a scanner. As another example, the source image may specifically be an original, unenhanced video frame in video data. The enhanced image is an image obtained by performing enhancement processing on the source image so as to enhance useful information in the source image. Enhancement processing can be a distortion process whose purpose is to improve the visual effect of the source image for a given application scenario. For example, the enhanced image may specifically be an image obtained by improving the source image in one or more image attribute dimensions such as brightness, hue, contrast, and sharpness. Enhancement processing can be divided into two broad categories. The first is frequency-domain methods, which treat the image as a two-dimensional signal and perform signal enhancement based on the two-dimensional Fourier transform: a low-pass filtering method (passing only low-frequency signals) can remove noise from the image, while a high-pass filtering method can enhance high-frequency signals such as edges, making a blurred picture clear. The second is spatial-domain methods; representative algorithms include local averaging and median filtering (taking the middle pixel value in a local neighborhood), which can be used to remove or attenuate noise.
An image attribute refers to an inherent characteristic of an image. For example, an image attribute may specifically be brightness, which refers to how light or dark the colors of an image are, that is, the human eye's perception of the light-dark intensity of an object. As another example, an image attribute may specifically be hue, which refers to the relative lightness and darkness of an image and appears as color in a color image. As still another example, an image attribute may specifically be contrast, which is a measure of the different brightness levels between the brightest white and the darkest black in an image; a larger difference range means greater contrast, and a smaller difference range means smaller contrast.
An attribute representation refers to information that can characterize an image attribute. For example, an attribute representation may specifically be an image that can characterize the image attribute, that is, an attribute representation map. For example, when the image attribute is brightness, the attribute representation is an image that can characterize brightness, which may specifically be a grayscale image characterizing brightness, a V-channel image in the HSV (Hue, Saturation, Value) color model, or an L-channel image in the LAB color model. In the LAB color model, L represents lightness, and A and B are two color channels: A covers colors from dark green (low values) through gray (medium values) to bright pink (high values), and B covers colors from bright blue (low values) through gray (medium values) to yellow (high values).
For example, when the image attribute is hue, the attribute representation is an image that can characterize hue; it may specifically be the H-channel image in the HSV color model, or an image obtained by combining the A channel and the B channel of the LAB color model. When the image attribute is contrast, the attribute representation is an image that can characterize contrast; the attribute value at each pixel position in such an image can be obtained by computing the difference between the pixel value at each pixel position and the average pixel value, or it can be the difference between the maximum and minimum pixel values in a local neighborhood around each pixel position, where the size of the local neighborhood can be configured according to the actual application scenario.
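For illustration only, the following sketch shows one of the contrast representations described above: the attribute value at each pixel is taken as the local max-min range, computed here with flat-kernel dilation and erosion; the 7x7 neighborhood size and the file name are assumed settings.

```python
import cv2
import numpy as np

gray = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
kernel = np.ones((7, 7), np.uint8)
local_max = cv2.dilate(gray, kernel)               # maximum over the neighborhood
local_min = cv2.erode(gray, kernel)                # minimum over the neighborhood
contrast_map = cv2.subtract(local_max, local_min)  # per-pixel contrast attribute value
```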
Specifically, the terminal obtains the respective attribute representations of the source image and the enhanced image based on the image attribute of interest. In an embodiment, what the terminal obtains may be the respective attribute representations of the source image and the enhanced image under the same image attribute, and the attribute representation may specifically be an attribute representation map. In an embodiment, the image attribute of interest can be configured according to the actual application scenario; for example, the image attribute of interest may specifically be at least one of brightness, hue, and contrast.
In an embodiment, when the image attribute of interest is brightness, the terminal obtains the respective attribute representations of the source image and the enhanced image under brightness; the attribute representation may specifically be a grayscale image, a V-channel image in the HSV color model, or an L-channel image in the LAB color model. This embodiment does not limit the attribute representation characterizing brightness here.
In an embodiment, the attribute representation is a grayscale image, and the terminal can obtain the respective grayscale images of the source image and the enhanced image by performing grayscale transformation on the source image and the enhanced image respectively. Grayscale transformation is a method of changing the gray value of every pixel in the source image point by point according to a certain transformation relationship and a certain target condition, with the purpose of improving picture quality and making the display effect of the image clearer. This embodiment does not limit the manner of grayscale transformation here, as long as grayscale transformation can be implemented; it may be a linear transformation or a nonlinear transformation.
In an embodiment, the attribute representation is a V-channel image in the HSV color model, and the terminal can obtain the respective V-channel images of the source image and the enhanced image by converting the source image and the enhanced image into HSV format respectively. In an embodiment, the attribute representation is an L-channel image in the LAB color model, and the terminal can obtain the respective L-channel images of the source image and the enhanced image by converting the source image and the enhanced image into LAB format respectively.
步骤304,对源图像和增强图像的属性表征进行比对,获得源图像和增强图像在至少一部分像素位置处的属性差异。
其中,属性差异用于描述源图像和增强图像在相同的像素位置处的属性差别程度。比如,当属性表征为表征亮度的灰度图,属性差异描述的是源图像和增强图像在相同的像素位置处的灰度值差别程度。又比如,当属性表征为表征对比度的图像,属性差异描述的是源图像和增强图像在相同的像素位置处的对比度差别程度。
具体的,终端会将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对,通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对,可以提取每个相同的像素位置处的属性值之间的差异,获得不同的属性表征在每个像素位置处的差分属性值,进而可以基于每个像素位置处的差分属性值,获得源图像和增强图像在至少一部分像素位置处的属性差异。
在一个实施例中,在将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对时,终端可以通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别相减的方式进行比对,获得不同的属性表征在每个像素位置处的差分属性值。在一个实施例中,如图4所示,可以通过将增强图像的属性表征中第一个像素位置处属性值B减去源图像的属性表征中第一个像素位置处属性值A的方式进行比对,获得第一个像素位置处的差分属性值。
在一个实施例中,终端也可以通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别相除的方式进行比对,获得不同的属性表征在每个像素位置处的差分属性值。在一个实施例中,如图5所示,可以通过将增强图像的属性表征中第一个像素位置处属性值B除去源图像的属性表征中第一个像素位置处属性值A的方式进行比对,获得第一个像素位置处的差分属性值。
步骤306,基于属性差异和增强图像的属性表征,生成源图像中至少一部分像素位置处的局部融合权重。
其中,权重是指某一因素或指标相对于某一事物的重要程度。
针对至少一部分像素位置处中每个像素位置处,所针对的像素位置处的局部融合权重是指在对源图像和增强图像中的所针对的像素位置处的像素值进行加权融合时,源图像中的所针对的像素位置处的像素值在进行加权融合时的权重,用于表示源图像中的所针对的像素位置处的像素值相对于加权融合的重要程度。
具体的,终端会基于至少一部分像素位置处的属性差异和增强图像的属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处的局部融合权重。在一个实施例中,针对至少一部分像素位置处中每个像素位置处,终端会根据所针对的像素位置处的属性差异和增强图像的属性表征中所针对的像素位置处的属性值,生成所针对的像素位置处的局部融合权重,即所生成的是源图像中多个像素位置处各自的局部融合权重。
在一个实施例中,终端会将所针对的像素位置处的属性差异和增强图像的属性表征中所针对的像素位置处的属性值进行融合,生成所针对的像素位置处的局部融合权重。具体进行融合的方式可以为,将所针对的像素位置处的属性差异和增强图像的属性表征中所针对的像素位置处的属性值相乘。在一个实施例中,在对所针对的像素位置处的属性差异和所针对的像素位置处的属性值进行融合时,终端会先分别对所针对的像素位置处的属性差异和所针对的像素位置处的属性值进行权重调整,以提高或降低属性差值和属性值的重要程度,再对权重调整后的属性差异和属性值进行融合,生成所针对的像素位置处的局部融合权重。
在一个实施例中,终端可以通过预配置的拉伸系数来对像素位置处的属性差异和像素位置处的属性值进行权重调整,该预配置的拉伸系数可以按照实际应用场景进行配置,本实施例在此处不做具体限定。举例说明,该预配置的拉伸系数可以为0到1之间的任意值。
步骤308,确定增强图像中至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与源图像中相同的像素位置处的局部融合权重负相关。
其中,针对至少一部分像素位置处中每个像素位置处,所针对的像素位置处的增强融合权重是指在对源图像和增强图像中的所针对的像素位置处的像素值进行加权融合时,增强图像中的所针对的像素位置处的像素值在进行加权融合时的权重,用于表示增强图像中的所针对的像素位置处的像素值相对于加权融合的重要程度。
具体的,终端会根据源图像中至少一部分像素位置处的局部融合权重,确定增强图像中至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与源图像中相同的像素位置处的局部融合权重负相关。
在一个实施例中,针对至少一部分像素位置处中每个像素位置处,终端会根据源图像中所针对的像素位置处的局部融合权重,来确定增强图像中所针对的像素位置处的增强融合权重,且确定的所针对的像素位置处的增强融合权重与源图像中相同的像素位置处(即所针对的像素位置处)的局部融合权重负相关,即所确定的是增强图像中多个像素位置处各自的增强融合权重。
在一个实施例中,每个像素位置处的局部融合权重和增强融合权重的总和是预配置的,因此,通过用预配置的总和减去源图像中所针对的像素位置处的局部融合权重,就可以得到增强图像中所针对的像素位置处(即相同的像素位置处)的增强融合权重。举例说明,预配置的总和可以为1,则用1减去源图像中所针对的像素位置处的局部融合权重,就可以得到增强图像中所针对的像素位置处(即相同的像素位置处)的增强融合权重。
步骤310,生成源图像和增强图像的融合图像。
其中,融合图像是指融合源图像和增强图像所得到的图像。比如,融合图像具体可以是指对源图像和增强图像中至少一部分像素位置处的像素值进行加权融合所得到的图像。
具体的,终端会生成源图像和增强图像的融合图像,针对于至少一部分像素位置处中每个像素位置处,融合图像中所针对的像素位置处的像素值,是按所针对的像素位置处的局部融合权重和增强融合权重,对源图像和增强图像各自在所针对的像素位置处的像素值加权融合获得的,即针对至少一部分像素位置处中每个像素位置处,都采用了独有的融合权重,通过这种方式能够实现更灵活的图像融合。
在一个实施例中,针对至少一部分像素位置处中每个像素位置处,融合图像中所针对的像素位置处的像素值为局部融合权重和源图像在所针对的像素位置处的像素值的乘积以及增强融合权重和增强图像在所针对的像素位置处的像素值的乘积之和,即融合图像中所针对的像素位置处的像素值=局部融合权重*源图像在所针对的像素位置处的像素值+增强融合权重*增强图像在所针对的像素位置处的像素值。在一个实施例中,如图6所示,融合图像中第一个像素位置处的像素值C=局部融合权重A2*源图像在第一个像素位置处的像素值A1+增强融合权重B2*增强图像在第一个像素位置处的像素值B1。
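按上述公式,逐像素加权融合可用如下最小示意实现(假设src、enh为同尺寸浮点图像,w_local为源图像的逐像素局部融合权重图,且每个像素位置处两权重之和预配置为1;函数名blend为本文自拟):

import numpy as np

def blend(src: np.ndarray, enh: np.ndarray, w_local: np.ndarray) -> np.ndarray:
    if src.ndim == 3 and w_local.ndim == 2:
        w_local = w_local[..., None]               # 将权重图广播到各颜色通道
    return w_local * src + (1.0 - w_local) * enh   # 像素值C = A2*A1 + B2*B1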
上述图像处理方法,通过获取源图像和增强图像各自的属性表征,对源图像和增强图像的属性表征进行比对,能够关注源图像到增强图像的属性变化趋势,获得源图像和增强图像在至少一部分像素位置处的属性差异,进而可以基于至少一部分像素位置处的属性差异和增强图像的属性表征,进行自适应的融合权重计算,生成源图像中至少一部分像素位置处的局部融合权重,从而可以确定增强图像中至少一部分像素位置处的增强融合权重,生成源图像和增强图像的融合图像,整个过程,通过关注源图像到增强图像的属性变化趋势来生成局部融合权重,采用逐像素的局部融合权重来实现图像融合,能够提升增强效果。
在一个实施例中,对源图像和增强图像的属性表征进行比对,获得源图像和增强图像在至少一部分像素位置处的属性差异包括:
将源图像和增强图像的属性表征中每个相同的像素位置处的属性值分别进行比对,获得不同的属性表征在每个像素位置处的差分属性值;
基于不同的属性表征在每个像素位置处的差分属性值生成差分属性表征,差分属性表征中至少一部分像素位置处的属性值,表征源图像和增强图像在至少一部分像素位置处的属性差异。
其中,差分属性值是指不同的属性表征在每个像素位置处的属性值的差异,即源图像和增强图像的属性表征在每个像素位置处的属性值的差异。比如,差分属性值具体可以是指不同的属性表征在每个像素位置处的属性值的差值。又比如,差分属性值具体可以是指不同的属性表征在每个像素位置处的属性值的比值。再比如,差分属性值具体可以是指不同的属性表征在每个像素位置处的属性值的差值的绝对值。举例说明,当属性表征为灰度图,差分属性值具体可以是指不同的灰度图在每个像素位置处的灰度值的差异,该差异可以是灰度值的差值,也可以是灰度值的比值。
具体的,终端会将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对,通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对,可以提取每个相同的像素位置处的属性值之间的差异,获得不同的属性表征在每个像素位置处的差分属性值,进而可以基于每个像素位置处的差分属性值分析源图像和增强图像在每个像素位置处的属性差别程度,生成差分属性表征。其中,该差分属性表征包括每个像素位置处的差分属性值,差分属性表征中至少一部分像素位置处的差分属性值,表征源图像和增强图像在至少一部分像素位置处的属性差异。
在一个实施例中,终端可以通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别相减的方式进行比对,获得不同的属性表征在每个像素位置处的差分属性值。在一个实施例中,针对每个像素位置处,终端会将增强图像的属性表征中像素位置处的属性值减去源图像的属性表征中像素位置处的属性值,获得不同的属性表征在像素位置处的差分属性值。
在一个实施例中,在通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别相减的方式进行比对时,当图像属性为亮度或对比度,所获得的差分属性值为属性值的差值。当图像属性为色调,所获得的差分属性值为属性值的差值的绝对值,该属性值的差值的绝对值可以描述像素位置处的色调之间的偏离程度。
在一个实施例中,终端可以通过将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别相除的方式进行比对,获得不同的属性表征在每个像素位置处的差分属性值。在一个实施例中,针对每个像素位置处,终端会将增强图像的属性表征中像素位置处的属性值除以源图像的属性表征中像素位置处的属性值,获得不同的属性表征在像素位置处的差分属性值。
本实施例中,通过将源图像和增强图像的属性表征中每个像素位置处的属性值分别进行比对,可以提取每个像素位置处的属性值之间的差异,获得不同的属性表征在每个像素位置处的差分属性值,进而可以基于每个像素位置处的差分属性值分析源图像和增强图像在每个像素位置处的属性差别程度,生成差分属性表征。
在一个实施例中,基于不同的属性表征在每个像素位置处的差分属性值生成差分属性表征,包括:
将不同的属性表征在每个像素位置处的差分属性值分别映射到预设属性值范围内,获得不同的属性表征在每个像素位置处的属性差异;
基于不同的属性表征在每个像素位置处的属性差异生成差分属性表征,差分属性表征中每个像素位置处的属性值为相应像素位置处的属性差异。
其中,预设属性值范围是指预配置的用于对差分属性值进行标准化处理的范围。预设属性值范围可按照实际应用场景进行配置。举例说明,预设属性值范围具体可以为0到1。
具体的,不同的属性表征在每个像素位置处的差分属性值可能处于不同的数量级,为了消除不同的数量级对局部融合权重生成的影响,终端会将不同的属性表征在每个像素位置处的差分属性值分别映射到预设属性值范围内,以使得不同的属性表征在每个像素位置处的差分属性值处于同一数量级,获得不同的属性表征在每个像素位置处的属性差异,从而可以基于不同的属性表征在每个像素位置处的属性差异生成差分属性表征,差分属性表征中每个像素位置处的属性值为相应像素位置处的属性差异。
在一个实施例中,终端会先将不同的属性表征在每个像素位置处的差分属性值减去噪声阈值,以排除掉无用的波动噪声,再将减去噪声阈值后的差分属性值分别映射到预设属性值范围内,获得不同的属性表征在每个像素位置处的属性差异。
在一个实施例中,将不同的属性表征在每个像素位置处的差分属性值分别映射到预设属性值范围内,获得不同的属性表征在每个像素位置处的属性差异,包括:
针对于不同的属性表征在每个像素位置处的差分属性值,当所针对的差分属性值低于预设差分属性阈值,将所针对的差分属性值映射为预设属性值范围的下限值,获得所针对的差分属性值相应的像素位置处的属性差异;
当所针对的差分属性值不低于预设差分属性阈值,将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内,获得所针对的差分属性值相应的像素位置处的属性差异。
其中,预设差分属性阈值可按照实际应用场景进行配置。比如,预设差分属性阈值可以按照图像属性和属性值比对方式进行配置,针对不同的图像属性和属性值比对方式的组合,可以配置不同的预设差分属性阈值。举例说明,当图像属性为亮度且属性值比对方式为属性值相减时,预设差分属性阈值可以为0。又举例说明,当图像属性为对比度且属性值比对方式为属性值相减时,预设差分属性阈值可以为0。再举例说明,当图像属性为色调且属性值比对方式为属性值相减时,预设差分属性阈值可以为大于0的一个极小值,该极小值可按照实际应用场景进行配置。比如,该预设差分属性阈值具体可以为1。预设属性值范围的下限值可按照实际应用场景进行配置。比如,预设属性值范围的下限值可以为0。
具体的,针对于不同的属性表征在每个像素位置处的差分属性值,终端会比对所针对的差分属性值和预设差分属性阈值,当所针对的差分属性值低于预设差分属性阈值,将所针对的差分属性值映射为预设属性值范围的下限值,获得所针对的差分属性值相应的像素位置处的属性差异,当所针对的差分属性值不低于预设差分属性阈值,将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内,获得所针对的差分属性值相应的像素位置处的属性差异。
在一个实施例中,在将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内时,当所针对的差分属性值为最大差分属性值,终端会将所针对的差分属性值映射为预设属性值范围的上限值,当所针对的差分属性值不为最大差分属性值,终端会将所针对的差分属性值与最大差分属性值的比值,作为所针对的差分属性值相应的像素位置处的属性差异。其中,最大差分属性值是指每个像素位置处的差分属性值中的最大值,预设属性值范围的上限值可按照实际应用场景进行配置。比如,预设属性值范围的上限值具体可以为1。
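该映射过程可用如下最小示意实现(假设预设差分属性阈值取0,预设属性值范围为[0,1],正相关映射采用与最大差分属性值相除的方式;函数名map_diff为本文自拟):

import numpy as np

def map_diff(diff: np.ndarray, thresh: float = 0.0) -> np.ndarray:
    diff = np.where(diff < thresh, 0.0, diff)       # 低于阈值的差分属性值映射为下限值0
    max_val = float(diff.max())
    return diff / max_val if max_val > 0 else diff  # 其余按与最大值之比映射到[0,1]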
在一个实施例中,在将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内时,终端也可以利用Sigmoid函数、Softmax函数等归一化函数进行正相关映射。其中,Sigmoid函数是一个在生物学中常见的S型函数,也称为S型生长曲线,在信息科学中,由于其单增以及反函数单增等性质,Sigmoid函数常被用作神经网络的激活函数,将变量映射到0到1之间。Softmax函数又称归一化指数函数,它是二分类函数Sigmoid在多分类上的推广,目的是将多分类的结果以概率的形式展现出来。
本实施例中,能够实现对不同的属性表征在每个像素位置处的差分属性值的映射,消除不同的数量级对局部融合权重生成的影响,获得不同的属性表征在每个像素位置处的属性差异。
在一个实施例中,至少一部分像素位置处包括形成标定区域的部分像素位置,增强图像的属性表征中标定区域中的每个像素位置处的属性值符合标定区域识别条件,融合图像中在非标定区域中的每个像素位置处的像素值,分别等于增强图像中相同的像素位置处的像素值。
其中,标定区域是指基于所关注的图像属性标定的图像属性明显的特定区域。比如,当图像属性为亮度,标定区域具体可以是指基于亮度标定的偏亮的区域。又比如,当图像属性为色调,标定区域具体可以是指基于特定颜色标定的包含特定颜色的区域。举例说明,特定颜色可以为黄色,则标定区域具体可以是指包含黄色的区域。标定区域识别条件是指对标定区域进行识别的条件,可按照实际应用场景进行配置,针对不同的图像属性,其相应的标定区域识别条件可以不同。
具体的,至少一部分像素位置处包括形成标定区域的部分像素位置,终端会基于标定区域识别条件对增强图像的属性表征进行标定区域识别,获得增强图像的属性表征中符合标定区域识别条件的标定区域。在一个实施例中,标定区域识别条件和属性表征都是与图像属性相对应的,终端会基于同种图像属性下的标定区域识别条件对增强图像的属性表征进行标定区域识别,获得增强图像的属性表征中符合标定区域识别条件的标定区域。在一个实施例中,标定区域一般是指增强效果变差的区域,而对应的非标定区域一般是指增强效果较好的区域,因此针对增强效果较好的非标定区域,在对源图像和增强图像进行图像融合时,终端可以直接将增强图像中在非标定区域中的每个像素位置处的像素值,分别确定为融合图像中相同的像素位置处的像素值,即融合图像中在非标定区域中的每个像素位置处的像素值,分别等于增强图像中相同的像素位置处的像素值。
本实施例中,能够通过标定区域识别条件实现对增强图像的属性表征中标定区域的确定,从而可以在确定标定区域的基础上,实现对融合图像中在非标定区域中的每个像素位置处的像素值的确定。
在一个实施例中,标定区域识别条件包括:增强图像的属性表征中标定区域中的像素位置构成连通域,且增强图像的属性表征中连通域中每个像素位置处的属性值属于预设标定属性值范围。
其中,预设标定属性值范围可按照实际应用场景进行配置,不同图像属性所对应的预设标定属性值范围不同。比如,当图像属性为亮度,所对应的预设标定属性值范围可以为大于0,表示所要识别的标定区域为较亮的区域。又比如,当图像属性为亮度,所对应的预设标定属性值范围可以为小于0,表示所要识别的标定区域为较暗的区域。再比如,当图像属性为色调,所对应的预设标定属性值范围可以为某种特定颜色对应的属性值范围,举例说明,所对应的预设标定属性值范围可以为黄色对应的属性值范围。
具体的,终端会基于标定区域识别条件对增强图像的属性表征进行标定区域识别,获得增强图像的属性表征中符合标定区域识别条件的标定区域,其中,标定区域识别条件包括:增强图像的属性表征中标定区域中的像素位置构成连通域,且增强图像的属性表征中连通域中每个像素位置处的属性值属于预设标定属性值范围。
在一个实施例中,在进行标定区域识别时,终端会对增强图像的属性表征进行边缘保持的滤波处理,以保留大轮廓的同时去除纹理细节,比对滤波后的属性表征中每个像素位置处的属性值和预设标定属性值范围,以获得滤波后的属性表征中符合标定区域识别条件的标定区域。其中,由于在进行标定区域识别时,终端进行了边缘保持的滤波处理,以保留大轮廓的同时去除纹理细节,所以增强图像的属性表征中标定区域中的像素位置可以构成连通域。
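以下给出一个标定区域识别的最小示意(以亮度为关注属性,边缘保持滤波此处以双边滤波为例;假设预设标定属性值范围为"属性值大于bright_thresh",阈值取值与函数名均为本文自拟的示例):

import cv2
import numpy as np

def find_calibrated_regions(rep_enh: np.ndarray, bright_thresh: float = 0.7):
    # 边缘保持的滤波处理:保留大轮廓的同时去除纹理细节
    smooth = cv2.bilateralFilter(rep_enh.astype(np.float32), 9, 0.1, 9)
    mask = (smooth > bright_thresh).astype(np.uint8)  # 属性值属于预设标定属性值范围
    num, labels = cv2.connectedComponents(mask)       # 标定区域中的像素位置构成连通域
    return num, labels, mask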
本实施例中,通过限定标定区域识别条件,能够利用标定区域识别条件实现对增强图像的属性表征的标定区域识别,获得增强图像的属性表征中标定区域。
在一个实施例中,源图像中非标定区域的像素位置处的局部融合权重为零,增强图像中非标定区域的像素位置处的增强融合权重,是根据源图像中相同的像素位置处的局部融合权重确定的,生成源图像和增强图像的融合图像,包括:
针对源图像和增强图像的每个相同的像素位置,分别按照所针对的相同的像素位置处的局部融合权重和增强融合权重,对源图像和增强图像各自在所针对的相同的像素位置处的像素值加权融合,获得融合图像。
具体的,源图像中非标定区域的多个像素位置处各自的局部融合权重为零,终端会根据源图像中非标定区域的多个像素位置处各自的局部融合权重,确定增强图像中非标定区域的多个像素位置处各自的增强融合权重,且源图像和增强图像中非标定区域中相同的像素位置处的局部融合权重和增强融合权重负相关。针对源图像和增强图像的每个相同的像素位置,终端会分别按照所针对的相同的像素位置处的局部融合权重和增强融合权重,对源图像和增强图像各自在所针对的相同的像素位置处的像素值加权融合,获得融合图像。
在一个实施例中,非标定区域的每个像素位置处的局部融合权重和增强融合权重的总和是预配置的,通过用预配置的总和减去源图像中像素位置处的局部融合权重,就可以得到增强图像中相同的像素位置处的增强融合权重。举例说明,预配置的总和可以为1,则用1减去源图像中像素位置处的局部融合权重(具体为0),就可以得到增强图像中相同像素位置处的增强融合权重(具体为1)。
在一个实施例中,融合图像中所针对的相同的像素位置处的像素值为局部融合权重和源图像在所针对的相同的像素位置处的像素值的乘积以及增强融合权重和增强图像在所针对的相同的像素位置处的像素值的乘积之和,即融合图像中所针对的相同的像素位置处的像素值=局部融合权重*源图像在所针对的相同的像素位置处的像素值+增强融合权重*增强图像在所针对的相同的像素位置处的像素值。本实施例中,由于源图像中非标定区域的像素位置处各自的局部融合权重为零,则针对融合图像中所针对的相同的像素位置处,其像素值等于增强融合权重与增强图像在所针对的相同的像素位置处的像素值的乘积。
本实施例中,源图像中非标定区域的像素位置处的局部融合权重为零,基于此可确定增强图像中非标定区域的像素位置处的增强融合权重,利用非标定区域的像素位置处的局部融合权重和增强融合权重进行图像融合,能够使得融合图像中像素位置处的像素值更接近增强图像中相同的像素位置处的像素值,能够达到良好的增强效果。
在一个实施例中,生成源图像和增强图像的融合图像,包括:
针对于标定区域中每个像素位置处,按照所针对的标定区域中像素位置处相应的局部融合权重和增强融合权重,将源图像和增强图像在所针对的标定区域中像素位置处的像素值进行加权融合,形成融合图像中所针对的标定区域中像素位置处的像素值;
将增强图像在非标定区域的每个像素位置处的像素值,分别作为融合图像中非标定区域中相同的像素位置处的像素值。
具体的,在生成源图像和增强图像的融合图像时,针对于标定区域中每个像素位置处,终端会按照所针对的标定区域中像素位置处相应的局部融合权重和增强融合权重,将源图像和增强图像各自在所针对的标定区域中像素位置处的像素值进行加权融合,形成融合图像中所针对的标定区域中像素位置处的像素值,针对非标定区域,终端会将增强图像在非标定区域的每个像素位置处的像素值,分别作为融合图像中非标定区域中相同的像素位置处的像素值。
在一个实施例中,针对标定区域中每个像素位置处,融合图像中所针对的标定区域中像素位置处的像素值为所针对的标定区域中像素位置处相应的局部融合权重和源图像在所针对的标定区域中像素位置处的像素值的乘积以及所针对的标定区域中像素位置处相应的增强融合权重和增强图像在所针对的标定区域中像素位置处的像素值的乘积之和,即融合图像中所针对的标定区域中像素位置处的像素值=所针对的标定区域中像素位置处相应的局部融合权重*源图像在所针对的标定区域中像素位置处的像素值+所针对的标定区域中像素位置处相应的增强融合权重*增强图像在所针对的标定区域中像素位置处的像素值。
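结合标定区域掩膜,上述分区域融合可用如下最小示意实现(假设mask为标定区域的0/1掩膜,w_local为逐像素局部融合权重图;非标定区域的局部融合权重置零后,融合结果即等于增强图像的像素值;函数名为本文自拟):

import numpy as np

def blend_with_region(src, enh, w_local, mask):
    w = w_local * mask                # 非标定区域的像素位置处局部融合权重为零
    if src.ndim == 3:
        w = w[..., None]
    return w * src + (1.0 - w) * enh  # 非标定区域处直接取增强图像的像素值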
本实施例中,通过在标定区域的每个像素位置处进行图像融合,能够利用源图像对增强图像中标定区域进行补强,使得融合图像中相应像素位置处的像素值更接近源图像中相应像素位置处的像素值,提升增强效果,通过将增强图像在非标定区域的每个像素位置处的像素值,分别作为融合图像中非标定区域中相同的像素位置处的像素值,能够使得融合图像中非标定区域中相同的像素位置处的像素值更接近增强图像中相应像素位置处的像素值,能够达到良好的增强效果。
在一个实施例中,基于属性差异和增强图像的属性表征,生成源图像中至少一部分像素位置处的局部融合权重包括:
对增强图像的属性表征进行边缘保持的滤波处理,获得增强图像的平滑属性表征;
基于属性差异和平滑属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处的局部融合权重。
其中,边缘保持的滤波处理是指通过滤波保留边缘轮廓的同时去除属性表征内的纹理细节。
具体的,终端会对增强图像的属性表征进行边缘保持的滤波处理,获得增强图像的平滑属性表征,再基于至少一部分像素位置处的属性差异和平滑属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处的局部融合权重。在至少一部分像素位置处为多个像素位置处的情况下,至少一部分像素位置处的局部融合权重具体可以是指多个像素位置处各自的局部融合权重。其中,可以通过导向滤波、双边滤波、形态学的开闭操作等进行边缘保持的滤波处理,本实施例中在此处不对进行边缘保持的滤波方式进行限定,只要能够实现边缘保持的滤波即可。
在一个实施例中,终端会通过导向滤波对增强图像的属性表征进行边缘保持的滤波处理。导向滤波显式地利用导向图计算输出图像,其中导向图可以是输入图像本身或者其他图像,导向滤波比起双边滤波来说在边界附近效果较好,另外,它还具有O(N)的线性时间的速度优势。在一个实施例中,导向图采用输入图像本身,即将增强图像的属性表征作为导向滤波中的导向图。
在一个实施例中,终端会通过双边滤波对增强图像的属性表征进行边缘保持的滤波处理。双边滤波是一种非线性的滤波方法,是结合图像的空间邻近度和像素值相似度的一种折中处理,同时考虑空域信息和灰度相似性,达到保边去噪的目的。具有简单、非迭代、局部的特点。
在一个实施例中,终端会通过形态学的开闭操作对增强图像的属性表征进行边缘保持的滤波处理。其中,形态学的开运算操作的定义是先对图像进行腐蚀操作,然后再对图像进行膨胀操作。它先对图像进行腐蚀,消除图像中的噪声和较小的连通域,之后通过膨胀运算弥补较大的连通域中因腐蚀造成的面积减小。形态学的闭运算则刚好相反,先对图像进行膨胀操作,再对图像进行腐蚀操作。它先对图像进行膨胀以填充连通域内的小型空洞,扩大连通域的边界,连接邻近的两个连通域,之后通过腐蚀运算减少由膨胀运算引起的连通域边界的扩大及面积的增加。
在一个实施例中,在通过形态学的开闭操作对增强图像的属性表征进行边缘保持的滤波处理时,终端会先对增强图像的属性表征进行开运算操作,再对增强图像的属性表征进行闭运算操作,获得增强图像的平滑属性表征。
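形态学开闭操作的滤波过程可用如下最小示意实现(假设rep_enh为增强图像的属性表征,核形状与尺寸为本文自拟的示例取值):

import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
opened = cv2.morphologyEx(rep_enh, cv2.MORPH_OPEN, kernel)   # 先开运算:腐蚀后膨胀
smooth = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)   # 再闭运算:膨胀后腐蚀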
本实施例中,通过对增强图像的属性表征进行边缘保持的滤波处理,能够保留边缘的同时去除纹理细节,实现对属性表征的图像平滑处理,获得增强图像的平滑属性表征,进而可以基于属性差异和平滑属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处的局部融合权重。
在一个实施例中,基于属性差异和平滑属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处的局部融合权重包括:
针对至少一部分像素位置处中每个像素位置处,将所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值进行融合,生成所针对的像素位置处的局部融合权重。
具体的,针对至少一部分像素位置处中每个像素位置处,终端会将所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值进行融合,生成所针对的像素位置处的局部融合权重。在一个实施例中,具体进行融合的方式可以为,将所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值相乘。
在一个实施例中,在进行融合时,终端会先分别对所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值进行权重调整,以提高或降低属性差值和属性值的重要程度,再对权重调整后的属性差异和属性值进行融合,生成所针对的像素位置处的局部融合权重。本实施例中不对权重调整的方式进行具体限定,只要能够实现权重调整,以提高或降低属性差值和属性值的重要程度即可。
在一个实施例中,对权重调整后的属性差异和属性值进行融合的方式为将权重调整后的属性差异和属性值相乘,以权重调整后的属性差异和属性值的乘积作为所针对的像素位置处的局部融合权重。
本实施例中,通过针对至少一部分像素位置处中每个像素位置处,将所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值进行融合,能够在综合考虑属性差异和属性值的情况下,生成更合适的所针对的像素位置处的局部融合权重,从而可以利用更合适的局部融合权重来实现图像融合,能够提升增强效果。
在一个实施例中,将所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值进行融合,生成所针对的像素位置处的局部融合权重包括:
获取变化度拉伸系数和属性拉伸系数;
基于变化度拉伸系数对所针对的像素位置处的属性差异进行权重调整,获得属性差异权重;
基于属性拉伸系数对平滑属性表征中所针对的像素位置处的属性值进行权重调整,获得属性值权重;
对属性差异权重和属性值权重进行融合,生成所针对的像素位置处的局部融合权重。
其中,变化度拉伸系数是指用于对属性差异进行拉伸,以提高或降低属性差异(即变化度)的重要程度的系数。属性拉伸系数是指用于对属性值进行拉伸,以提高或降低属性值的重要程度的系数。比如,属性拉伸系数具体可以为亮度拉伸系数。又比如,属性拉伸系数具体可以为对比度拉伸系数。
具体的,终端会获取变化度拉伸系数和属性拉伸系数,基于变化度拉伸系数对所针对的像素位置处的属性差异进行权重调整,获得属性差异权重,基于属性拉伸系数对平滑属性表征中所针对的像素位置处的属性值进行权重调整,获得属性值权重,对属性差异权重和属性值权重进行融合,生成所针对的像素位置处的局部融合权重。在一个实施例中,变化度拉伸系数和属性拉伸系数是与图像属性相对应的,不同图像属性相应的变化度拉伸系数和属性拉伸系数可以不同。终端可以基于所关注的图像属性来获取变化度拉伸系数和属性拉伸系数。该所关注的图像属性是指属性差异以及平滑属性表征相应的图像属性。
在一个实施例中,对属性差异权重和属性值权重进行融合的方式为将属性差异权重和属性值权重相乘,以属性差异权重和属性值权重的乘积作为所针对的像素位置处的局部融合权重。在一个实施例中,对属性差异权重和属性值权重进行融合的方式为将属性差异权重和属性值权重相加,以属性差异权重和属性值权重的和作为所针对的像素位置处的局部融合权重。其中,变化度拉伸系数和属性拉伸系数可按照实际应用场景进行配置,本实施例在此处不做具体限定。举例说明,变化度拉伸系数可以为0到1之间的任意值,属性拉伸系数可以为0到1之间的任意值。
在一个实施例中,终端可以使用幂函数、指数函数、对数函数等对属性差异和属性值进行权重调整,本实施例在此处不限定进行权重调整的方式,只要能够实现权重调整即可。在一个实施例中,终端可以使用幂函数对属性差异和属性值进行权重调整,即以属性差异为底数,以变化度拉伸系数为幂,对属性差异进行权重调整,获得属性差异权重,以属性值为底数,以属性拉伸系数为幂,对属性值进行权重调整,获得属性值权重。举例说明,属性差异权重具体可以表示为diff^factor_diff,其中diff为属性差异,factor_diff为变化度拉伸系数。属性值权重具体可以表示为A^factor_A,其中,A为属性值,factor_A为属性拉伸系数。
在一个实施例中,如图7所示,终端可以以第一个像素位置处的属性差异diff为底数,以变化度拉伸系数factor_diff为幂,对属性差异进行权重调整,获得第一个像素位置处的属性差异权重diff^factor_diff;以第一个像素位置处的属性值A为底数,以属性拉伸系数factor_A为幂,对属性值进行权重调整,获得第一个像素位置处的属性值权重A^factor_A;再将属性差异权重和属性值权重相乘,得到第一个像素位置处的局部融合权重为diff^factor_diff*A^factor_A。
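上述权重调整与融合可用如下最小示意实现(假设diff为属性差异图、attr为平滑属性表征中的属性值图,均已归一化到[0,1];两个拉伸系数的取值仅作演示):

import numpy as np

factor_diff, factor_attr = 0.5, 0.8
w_diff = np.power(diff, factor_diff)   # 属性差异权重:diff^factor_diff
w_attr = np.power(attr, factor_attr)   # 属性值权重:A^factor_A
w_local = w_diff * w_attr              # 相乘融合,得到逐像素的局部融合权重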
本实施例中,通过基于变化度拉伸系数对所针对的像素位置处的属性差异进行权重调整,能够调整属性差异的重要程度,获得属性差异权重,基于属性拉伸系数对平滑属性表征中所针对的像素位置处的属性值进行权重调整,能够调整属性值的重要程度,获得属性值权重,进而可以通过对属性差异权重和属性值权重进行融合,综合考虑属性差异权重和属性值权重,生成所针对的像素位置处的局部融合权重。
在一个实施例中,属性差异包括至少一部分像素位置处的至少两种图像属性的图像属性差异,增强图像的属性表征包括增强图像的至少两种图像属性的图像属性表征;
基于属性差异和增强图像的属性表征,生成源图像中至少一部分像素位置处的局部融合权重包括:
针对于至少两种图像属性中每种图像属性,基于至少一部分像素位置处的所针对的图像属性的图像属性差异和增强图像的所针对的图像属性的图像属性表征,生成至少一部分像素位置处的所针对的图像属性的属性融合权重;
针对至少一部分像素位置处中每个像素位置处,对所针对的像素位置处相应的至少两种图像属性的属性融合权重进行融合,生成所针对的像素位置处的局部融合权重。
其中,针对于至少两种图像属性中每种图像属性,所针对的图像属性的图像属性差异用于描述源图像和增强图像在相同的像素位置处的所针对的图像属性的属性差别程度。图像属性表征是指能够表征所针对的图像属性的信息。比如,图像属性表征具体可以是指能够表征所针对的图像属性的图像,即图像属性表征图。具体的,属性差异包括至少一部分像素位置处的至少两种图像属性的图像属性差异,增强图像的属性表征包括增强图像的至少两种图像属性的图像属性表征,针对于至少两种图像属性中每种图像属性,终端会基于至少一部分像素位置处的所针对的图像属性的图像属性差异和增强图像的所针对的图像属性的图像属性表征,生成至少一部分像素位置处的所针对的图像属性的属性融合权重。
在一个实施例中,终端会对增强图像的所针对的图像属性的图像属性表征进行边缘保持的滤波处理,获得增强图像的所针对的图像属性的平滑图像属性表征,针对至少一部分像素位置处中每个像素位置处,将所针对的像素位置处的所针对的图像属性的图像属性差异和所针对的图像属性的平滑图像属性表征中所针对的像素位置处的属性值进行融合,生成所针对的像素位置处在所针对的图像属性下的属性融合权重。具体的进行融合的方式可以为,将所针对的像素位置处的所针对的图像属性的图像属性差异和所针对的图像属性的平滑图像属性表征中所针对的像素位置处的属性值相乘。
在一个实施例中,进行融合的方式可以为,终端获取所针对的图像属性相应的变化度拉伸系数和属性拉伸系数,基于变化度拉伸系数对所针对的像素位置处的所针对的图像属性的图像属性差异进行权重调整,并基于属性拉伸系数对所针对的图像属性的平滑图像属性表征中所针对的像素位置处的属性值进行调整,对调整后的属性差异和调整后的属性值进行融合。在一个实施例中,对调整后的属性差异和调整后的属性值进行融合的方式可以为将调整后的属性差异和调整后的属性值相乘。
具体的,针对至少一部分像素位置处中每个像素位置处,终端会对所针对的像素位置处相应的至少两种图像属性的属性融合权重进行融合,生成所针对的像素位置处的局部融合权重。在一个实施例中,终端会通过叠加所针对的像素位置处相应的至少两种图像属性的属性融合权重的方式进行融合,生成所针对的像素位置处的局部融合权重,即所针对的像素位置处的局部融合权重等于至少两种图像属性的属性融合权重的和。
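以亮度和对比度两种图像属性为例,叠加式融合可用如下最小示意实现(假设w_lightness、w_contrast为两种属性各自的属性融合权重图;为保证权重有效,此处额外裁剪到[0,1],该裁剪为本文自拟的假设,并非本申请限定的处理):

import numpy as np

# 假设 w_lightness、w_contrast 已按前述方式分别计算得到
w_local = np.clip(w_lightness + w_contrast, 0.0, 1.0)  # 两种属性的属性融合权重相加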
本实施例中,能够在分别考虑至少两种图像属性中每种图像属性的图像属性差异和图像属性表征的情况下,实现对每种图像属性的优化分析,生成至少一部分像素位置处在至少两种图像属性下相应的属性融合权重,通过针对至少一部分像素位置处中每个像素位置处,对所针对的像素位置处相应的至少两种图像属性的属性融合权重进行融合,能够在综合考虑每种图像属性的属性融合权重的情况下,生成更合适的所针对的像素位置处的局部融合权重,从而可以利用更合适的局部融合权重来实现图像融合,能够提升增强效果。
在一个实施例中,如图8所示,通过一个流程示意图来说明本申请的图像处理方法,该方法可以由终端或服务器单独执行,也可以由终端和服务器协同执行。在本申请实施例中,以该方法应用于终端为例进行说明,包括以下步骤:
步骤802,获取源图像和增强图像各自的属性表征;增强图像是对源图像进行增强处理获得的;
步骤804,将源图像和增强图像的属性表征中每个相同的像素位置处的属性值分别进行比对,获得不同的属性表征在每个像素位置处的差分属性值;
步骤806,针对于不同的属性表征在每个像素位置处的差分属性值,当所针对的差分属性值低于预设差分属性阈值,将所针对的差分属性值映射为预设属性值范围的下限值,获得所针对的差分属性值相应的像素位置处的属性差异;当所针对的差分属性值不低于预设差分属性阈值,将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内,获得所针对的差分属性值相应的像素位置处的属性差异;
步骤808,基于不同的属性表征在每个像素位置处的属性差异生成差分属性表征,差分属性表征中每个像素位置处的属性值为相应像素位置处的属性差异;
步骤810,对增强图像的属性表征进行边缘保持的滤波处理,获得增强图像的平滑属性表征;
步骤812,针对至少一部分像素位置处中每个像素位置处,获取变化度拉伸系数和属性拉伸系数,基于变化度拉伸系数对所针对的像素位置处的属性差异进行权重调整,获得属性差异权重,基于属性拉伸系数对平滑属性表征中所针对的像素位置处的属性值进行权重调整,获得属性值权重,对属性差异权重和属性值权重进行融合,生成所针对的像素位置处的局部融合权重;
步骤814,确定增强图像中至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与源图像中相同的像素位置处的局部融合权重负相关;
步骤816,生成源图像和增强图像的融合图像。
在一个实施例中,如图9所示,通过一个流程示意图来说明本申请的图像处理方法,该方法可以由终端或服务器单独执行,也可以由终端和服务器协同执行。在本申请实施例中,以该方法应用于终端为例进行说明,包括以下步骤:
步骤902,获取源图像和增强图像各自的属性表征;增强图像是对源图像进行增强处理获得的;
步骤904,对源图像和增强图像的属性表征进行比对,获得源图像和增强图像在至少一部分像素位置处的属性差异,至少一部分像素位置处的属性差异包括至少一部分像素位置处的至少两种图像属性的图像属性差异;
步骤906,针对于至少两种图像属性中每种图像属性,基于至少一部分像素位置处的所针对的图像属性的图像属性差异和增强图像的所针对的图像属性的图像属性表征,生成至少一部分像素位置处的所针对的图像属性的属性融合权重;
步骤908,针对至少一部分像素位置处中每个像素位置处,对所针对的像素位置处相应的至少两种图像属性的属性融合权重进行融合,生成所针对的像素位置处的局部融合权重;
步骤910,确定增强图像中至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与所述源图像中相同的像素位置处的局部融合权重负相关;
步骤912,生成源图像和增强图像的融合图像。
发明人认为,传统的增强后处理技术方案采用全局融合方案,会使得图像内容所有区域、所有像素使用相同的融合系数,使得最终的增强输出图,要么整体等比例接近源图像,要么整体等比例接近增强图像。由于常规增强算法很难保证所有场景所有区域的增强效果是同等让人满意的,所以采用全局融合方案等比例融合,会导致部分区域欠佳的增强效果与部分区域满意的增强效果得到了同等程度的保留,无法实现增强后处理在不同区域、不同像素的自适应融合。除此之外,全局融合系数的设置比较依赖算法开发人员的经验,自适应性有限。
基于此,本申请提出了一种图像处理方法,通过采用逐像素的局部融合系数,解决全局融合方案的所有像素共用一个系数的问题,实现更灵活的融合,通过基于画面属性和增强变化趋势,进行自适应的融合系数计算,解决全局增强方法依赖经验设计、自适应性不够的问题,实现更智能的融合。
本申请提供的图像处理方法,作为完整增强算法组成的一部分,主要可以应用于图像增强模块的后处理步骤中,如图10所示,对源图像进行增强处理获得增强图像后,可以利用本申请所提出的图像处理方法,将源图像和增强图像,进行增强后处理(自适应的逐像素局部融合),从而输出最终增强图像,即融合图像。进一步的,本申请所提出的图像处理方法,可以应用于各种有画质增强需求的视频数据中,包括但不限于各种长视频、短视频等。
举例说明,在将本申请所提出的图像处理方法应用于视频数据中时,可以应用于对视频数据中单帧视频帧进行处理。在一个实施例中,可以以单帧视频帧为源图像,通过对单帧视频帧进行增强处理,得到单帧视频帧相应的增强图像后,再采用本申请所提出的图像处理方法进行处理,以实现对单帧视频帧的图像增强。在一个实施例中,也可以以视频数据中时域上的相邻帧分别为源图像和增强图像,采用本申请所提出的图像处理方法进行处理,以实现对单帧视频帧的图像增强。
本申请所提供的图像处理方法,可以聚焦于不同的图像属性维度,如亮度、色调、对比度等,并分析增强图像相对于源图像的变化趋势,使得增强图像中关注的图像属性维度比源图像差的区域的像素值更接近于源图像(即源图像的像素位置处的局部融合权重更接近于1)。
下面以聚焦于亮度为例,来说明本申请的图像处理方法,假定场景为,常规增强步骤后,增强图像与源图像相比,偏亮区域的细节(一般为暗部细节)丢失了,利用本申请提出的图像处理方法,可以对常规增强进行补强,使得最终增强画质更佳,具体的流程示意图可以如图11所示,该图像处理方法具体包括以下步骤:
步骤1:提取属性表征,即获取源图像和增强图像各自在亮度属性下的属性表征。
其中,亮度属性下的属性表征可以为灰度图、也可以为HSV颜色模型中的V通道图像,还可以为LAB颜色模型中的L通道图像。
具体的,终端会分别获取源图像和增强图像各自在亮度属性下的属性表征。
步骤2:提取亮部区域,即标定增强图像中偏亮的区域。
具体的,终端会对增强图像的属性表征进行边缘保持的滤波处理,保留大轮廓的同时去除纹理细节,从而提取出增强图像中偏亮的区域,获得增强图像的平滑属性表征。
其中,可以通过导向滤波、双边滤波、形态学的开闭操作等进行边缘保持的滤波处理,本实施例中在此处不对进行边缘保持的滤波方式进行限定,只要能够实现边缘保持的滤波即可。
在一个实施例中,可以通过导向滤波对增强图像的属性表征进行边缘保持的滤波处理,该滤波处理可以用公式lightness=guide_filter(gray_enhanced)表示,其中lightness是指滤波后的属性表征,guide_filter是指导向滤波,gray_enhanced是指增强图像的属性表征。
步骤3:提取提亮区域,即标定增强图像对比源图像亮度变亮的区域。
具体的,终端会将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对,获得不同的属性表征在每个像素位置处的差分属性值,将不同的属性表征在每个像素位置处的差分属性值减去预设亮度噪声阈值,以排除掉无用的亮度波动噪声,将不同的属性表征在每个像素位置处的减去噪声阈值后的差分属性值分别映射到预设属性值范围内,获得不同的属性表征在每个像素位置处的属性差异,基于不同的属性表征在每个像素位置处的属性差异生成差分属性表征,差分属性表征中每个像素位置处的属性值为相应像素位置处的属性差异。其中,预设亮度噪声阈值可按照实际应用场景进行配置。举例说明,预设亮度噪声阈值具体可以为0.1cd/m²(坎德拉每平方米)。
在一个实施例中,针对于不同的属性表征在每个像素位置处的差分属性值,当所针对的差分属性值低于预设差分属性阈值,终端会将所针对的差分属性值映射为预设属性值范围的下限值,获得所针对的差分属性值相应的像素位置处的属性差异。当所针对的差分属性值不低于预设差分属性阈值,终端会将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内,获得所针对的差分属性值相应的像素位置处的属性差异。
其中,预设差分属性阈值和预设属性值范围的下限值均可按照实际应用场景进行配置。在一个实施例中,预设差分属性阈值具体可以为0,预设属性值范围的下限值也可以为0,当所针对的差分属性值低于0时,终端会将所针对的差分属性值映射为0。通过这种方式,终端可以将低于预设差分属性阈值的差分属性值,统一映射为预设属性值范围的下限值,能够使得所得到的差分属性表征只关注增强图像比源图像亮的区域,即暗细节丢失的区域。
在一个实施例中,在将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内时,当所针对的差分属性值为最大差分属性值,终端会将所针对的差分属性值映射为预设属性值范围的上限值,当所针对的差分属性值不为最大差分属性值,终端会将所针对的差分属性值与最大差分属性值的比值,作为所针对的差分属性值相应的像素位置处的属性差异。其中,预设属性值范围的上限值可按照实际应用场景进行配置。
在一个实施例中,预设属性值范围的上限值可以为1,预设属性值范围的下限值可以为0,则通过将每个像素位置处的差分属性值分别映射到预设属性值范围内,能够将差分属性值分别映射到[0,1]之间,即进行归一化处理。
在一个实施例中,步骤3中所涉及的数据处理过程可以通过以下两个公式来实现。一是公式diff=ReLU(gray_enhanced-gray_src-ε),其中,diff是指归一化之前的差分属性表征,gray_enhanced是指增强图像的属性表征,gray_src是指源图像的属性表征,ε是指预设亮度噪声阈值,ReLU是指线性整流函数(Linear Rectification Function),又称修正线性单元,是一种人工神经网络中常用的激活函数,通常指代以斜坡函数及其变种为代表的非线性函数。二是公式diff(x,y)=diff(x,y)/diff_max,通过该公式可以实现对差分属性值的归一化处理,其中,等式右边的diff(x,y)是指归一化之前的差分属性表征中像素位置处的差分属性值,等式左边的diff(x,y)是指像素位置处的属性差异,diff_max是指归一化之前的差分属性表征中最大差分属性值。
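上述两个公式可用如下最小示意实现(假设gray_src、gray_enh为归一化到[0,1]的属性表征,预设亮度噪声阈值eps的取值仅作演示):

import numpy as np

eps = 0.02
diff = np.maximum(gray_enh - gray_src - eps, 0.0)  # diff = ReLU(gray_enhanced - gray_src - ε)
max_val = float(diff.max())
if max_val > 0:
    diff = diff / max_val                          # diff(x,y) = diff(x,y) / diff_max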
步骤4:生成融合掩膜,即综合步骤2和步骤3,将亮部区域中丢失了暗部细节的像素点提取出来,生成局部融合掩膜。
具体的,终端会基于步骤3所得到的至少一部分像素位置处的属性差异和步骤2所得到的增强图像的平滑属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处各自的局部融合权重,即局部融合掩膜。
在一个实施例中,针对至少一部分像素位置处中每个像素位置处,终端会获取亮度属性相应的变化度拉伸系数和亮度拉伸系数,基于变化度拉伸系数对所针对的像素位置处的属性差异进行权重调整,获得属性差异权重,基于亮度拉伸系数对平滑属性表征中所针对的像素位置处的属性值进行权重调整,获得属性值权重,对属性差异权重和属性值权重进行融合,生成所针对的像素位置处的局部融合权重。其中,变化度拉伸系数和亮度拉伸系数可按照实际应用场景进行配置,本实施例在此处不做具体限定。需要说明的是,变化度拉伸系数越大,越能提高属性差异的重要程度,亮度拉伸系数越大,越能提高属性值的重要程度。
在一个实施例中,终端可以使用幂函数、指数函数、对数函数等对属性差异和属性值进行权重调整,本实施例在此处不限定进行权重调整的方式,只要能够实现权重调整即可。在一个实施例中,终端可以使用幂函数对属性差异和属性值进行权重调整,即以属性差异为底数,以变化度拉伸系数为幂,对属性差异进行权重调整,获得属性差异权重,以属性值为底数,以亮度拉伸系数为幂,对属性值进行权重调整,获得属性值权重。举例说明,属性差异权重具体可以表示为diff^factor_diff,其中diff为属性差异,factor_diff为变化度拉伸系数。属性值权重具体可以表示为lightness^factor_lightness,其中,lightness为属性值,factor_lightness为亮度拉伸系数。
在一个实施例中,对属性差异权重和属性值权重进行融合的方式为将属性差异权重和属性值权重相乘,以属性差异权重和属性值权重的乘积作为像素位置处的局部融合权重。举例说明,局部融合权重具体可以为:alpha_mask=lightness^factor_lightness*diff^factor_diff,其中lightness^factor_lightness为属性值权重,diff^factor_diff为属性差异权重。
步骤5:融合,即利用步骤4生成的局部融合掩膜,将源图像和增强图像加权融合,得到最终增强图像,即融合图像。
具体的,终端会生成源图像和增强图像的融合图像,针对于至少一部分像素位置处中每个像素位置处,融合图像中所针对的像素位置处的像素值,是按所针对的像素位置处的局部融合权重和增强融合权重,对源图像和增强图像各自在所针对的像素位置处的像素值加权融合获得的,即针对至少一部分像素位置处中每个像素位置处,都采用了独有的融合权重,通过这种方式能够实现更灵活的图像融合。其中,增强图像中至少一部分像素位置处各自的增强融合权重与源图像中相同的像素位置处的局部融合权重负相关。
在一个实施例中,每个像素位置处的局部融合权重和增强融合权重的总和是预配置的,通过用预配置的总和减去源图像中像素位置处的局部融合权重,就可以得到增强图像中相同像素位置处的增强融合权重。举例说明,预配置的总和可以为1,则用1减去源图像中像素位置处的局部融合权重,就可以得到增强图像中相同像素位置处的增强融合权重。
在一个实施例中,生成融合图像所使用的融合公式可以为:dst=alpha_mask*src+(1-alpha_mask)*enhanced,其中,src是指源图像,enhanced是指增强图像,alpha_mask是指局部融合权重,1-alpha_mask是指增强融合权重,该公式以像素点为执行单位,即针对于至少一部分像素位置处中每个像素位置处,融合图像dst中所针对的像素位置处的像素值,是按所针对的像素位置处的局部融合权重alpha_mask和增强融合权重1-alpha_mask,对源图像src和增强图像enhanced各自在所针对的像素位置处的像素值加权融合获得的。
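综合步骤1至步骤5,以下给出一个聚焦亮度属性的端到端最小示意(导向滤波以OpenCV contrib模块的cv2.ximgproc.guidedFilter为例,需安装opencv-contrib-python;各阈值与拉伸系数均为本文自拟的示例取值,并非本申请限定的实现方式):

import cv2
import numpy as np

def enhance_postprocess(src_bgr, enh_bgr, noise_eps=0.02, f_diff=0.5, f_light=0.8):
    src = src_bgr.astype(np.float32) / 255.0
    enh = enh_bgr.astype(np.float32) / 255.0
    gray_src = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)                     # 步骤1:提取属性表征
    gray_enh = cv2.cvtColor(enh, cv2.COLOR_BGR2GRAY)
    lightness = cv2.ximgproc.guidedFilter(gray_enh, gray_enh, 16, 1e-3)  # 步骤2:提取亮部区域
    diff = np.maximum(gray_enh - gray_src - noise_eps, 0.0)              # 步骤3:提取提亮区域
    if diff.max() > 0:
        diff = diff / diff.max()
    alpha_mask = np.power(diff, f_diff) * np.power(lightness, f_light)   # 步骤4:生成融合掩膜
    alpha = alpha_mask[..., None]
    return alpha * src + (1.0 - alpha) * enh                             # 步骤5:加权融合

# 用法示例(文件名为自拟):dst = enhance_postprocess(cv2.imread("src.png"), cv2.imread("enhanced.png"))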
在一个实施例中,如图12所示,提供了采用本申请提出的图像处理后的效果对比图,可以看到,针对亮部区域1202(图12中在源图像、增强图像和最终增强图像的左下角给出了亮部区域1202的放大图),源图像在亮部区域存在的暗细节(通过亮部区域内部的线条表示),在经过增强处理所得到的增强图像中并未存在,即过度增强处理使得暗细节丢失了,而在最终增强图像中,源图像在亮部区域存在的暗细节被保留下来了,即最终增强图像对比增强图像,在亮部区域1202,暗细节明显更为丰富和突出。针对适中亮度区域1204,在源图像中适中亮度区域的亮度并不明显(在源图像中用虚线表示并不明显),经过增强处理后在增强图像中该适中亮度区域的视觉效果更佳,则在最终增强图像中,经过增强处理后视觉效果更佳的适中亮度区域被保留下来了,即最终增强图像对比源图像,适中亮度区域视觉效果更佳,对比度更高,更清晰。
在一个实施例中,如图13所示,提供了采用本申请提出的图像处理后的效果对比图,可以看到,最终增强图像对比常规增强图像,在用白色框所标注亮部区域1302(图13中在源图像、常规增强图像和最终增强图像的左下角给出了亮部区域1302的放大图),暗细节(存在于图13中用黑色框所标注的区域1306内)明显更为丰富和突出,最终增强图像对比源图像,用白色框所标注的适中亮度区域1304视觉效果更佳,对比度更高,更清晰。总体而言,最终增强图像综合了源图像和常规增强图像的优点,整体和局部观感都更好。
发明人认为,本申请通过采用逐像素的局部融合系数,使得每个区域、甚至每个像素都可以采用独有的融合值,能够实现更灵活的图像融合,基于画面属性(如亮度、对比度、前后变化度等)和增强变化趋势(如细节丢失、色彩变暗淡等),进行自适应的融合权重计算,不依赖经验设计,实现更智能的融合,使得最终增强输出图画质在聚焦的图像属性上效果比增强图像更佳,在整体上比源图像有明显的画质提升,能同时保留源图像和增强图像的有益特性,且可以适配多个画质属性,实用性广。
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
基于同样的发明构思,本申请实施例还提供了一种用于实现上述所涉及的图像处理方法的图像处理装置。该装置所提供的解决问题的实现方案与上述方法中所记载的实现方案相似,故下面所提供的一个或多个图像处理装置实施例中的具体限定可以参见上文中对于图像处理方法的限定,在此不再赘述。
在一个实施例中,如图14所示,提供了一种图像处理装置,包括:属性表征获取模块1402、属性表征比对模块1404、局部融合权重生成模块1406、增强融合权重生成模块1408和融合图像生成模块1410,其中:
属性表征获取模块1402,用于获取源图像和增强图像各自的属性表征,增强图像是对源图像进行增强处理获得的;
属性表征比对模块1404,用于对源图像和增强图像的属性表征进行比对,获得源图像和增强图像在至少一部分像素位置处的属性差异;
局部融合权重生成模块1406,用于基于属性差异和增强图像的属性表征,生成源图像中至少一部分像素位置处的局部融合权重;
增强融合权重生成模块1408,用于确定增强图像中至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与源图像中相同的像素位置处的局部融合权重负相关;
融合图像生成模块1410,用于生成源图像和增强图像的融合图像。
上述图像处理装置,通过获取源图像和增强图像各自的属性表征,对源图像和增强图像各自的属性表征进行比对,能够关注源图像到增强图像的属性变化趋势,获得源图像和增强图像在至少一部分像素位置处的属性差异,进而可以基于至少一部分像素位置处的属性差异和增强图像的属性表征,进行自适应的融合权重计算,生成源图像中至少一部分像素位置处的局部融合权重,从而可以确定增强图像中至少一部分像素位置处的增强融合权重,生成源图像和增强图像的融合图像,整个过程,通过关注源图像到增强图像的属性变化趋势来生成局部融合权重,采用逐像素的局部融合权重来实现图像融合,能够提升增强效果。
在一个实施例中,属性表征比对模块还用于将源图像和增强图像各自的属性表征中每个相同的像素位置处的属性值分别进行比对,获得不同的属性表征在每个像素位置处的差分属性值,基于不同的属性表征在每个像素位置处的差分属性值生成差分属性表征,差分属性表征中至少一部分像素位置处的属性值,表征源图像和增强图像在至少一部分像素位置处的属性差异。
在一个实施例中,属性表征比对模块还用于将不同的属性表征在每个像素位置处的差分属性值分别映射到预设属性值范围内,获得不同的属性表征在每个像素位置处的属性差异,基于不同的属性表征在每个像素位置处的属性差异生成差分属性表征,差分属性表征中每个像素位置处的属性值为相应像素位置处的属性差异。
在一个实施例中,属性表征比对模块还用于针对于不同的属性表征在每个像素位置处的差分属性值,当所针对的差分属性值低于预设差分属性阈值,将所针对的差分属性值映射为预设属性值范围的下限值,获得所针对的差分属性值相应的像素位置处的属性差异,当所针对的差分属性值不低于预设差分属性阈值,将所针对的差分属性值,以正相关映射方式映射到预设属性值范围内,获得所针对的差分属性值相应的像素位置处的属性差异。
在一个实施例中,至少一部分像素位置处包括形成标定区域的部分像素位置,增强图像的属性表征中标定区域中的每个像素位置处的属性值符合标定区域识别条件,融合图像中在非标定区域中的每个像素位置处的像素值,分别等于增强图像中相同的像素位置处的像素值。
在一个实施例中,标定区域识别条件包括:增强图像的属性表征中标定区域中的像素位置构成连通域,且增强图像的属性表征中连通域中每个像素位置处的属性值属于预设标定属性值范围。
在一个实施例中,源图像中非标定区域的像素位置处的局部融合权重为零,增强图像中非标定区域的像素位置处的增强融合权重,是根据源图像中相同的像素位置处的局部融合权重确定的,融合图像生成模块还用于针对源图像和增强图像的每个相同的像素位置,分别按照所针对的相同的像素位置处的局部融合权重和增强融合权重,对源图像和增强图像在所针对的相同的像素位置处的像素值加权融合,获得融合图像。
在一个实施例中,融合图像生成模块还用于针对于标定区域中每个像素位置处,按照所针对的标定区域中像素位置处相应的局部融合权重和增强融合权重,将源图像和增强图像在所针对的标定区域中像素位置处的像素值进行加权融合,形成融合图像中所针对的标定区域中像素位置处的像素值,将增强图像在非标定区域的每个像素位置处的像素值,分别作为融合图像中非标定区域中相同的像素位置处的像素值。
在一个实施例中,局部融合权重生成模块还用于对增强图像的属性表征进行边缘保持的滤波处理,获得增强图像的平滑属性表征,基于属性差异和平滑属性表征中至少一部分像素位置处的属性值,生成源图像中至少一部分像素位置处的局部融合权重。
在一个实施例中,局部融合权重生成模块还用于针对至少一部分像素位置处中每个像素位置处,将所针对的像素位置处的属性差异和平滑属性表征中所针对的像素位置处的属性值进行融合,生成所针对的像素位置处的局部融合权重。
在一个实施例中,局部融合权重生成模块还用于获取变化度拉伸系数和属性拉伸系数,基于变化度拉伸系数对所针对的像素位置处的属性差异进行权重调整,获得属性差异权重,基于属性拉伸系数对平滑属性表征中所针对的像素位置处的属性值进行权重调整,获得属性值权重,对属性差异权重和属性值权重进行融合,生成所针对的像素位置处的局部融合权重。
在一个实施例中,至少一部分像素位置处的属性差异包括至少一部分像素位置处的至少两种图像属性的图像属性差异,增强图像的属性表征包括增强图像的至少两种图像属性的图像属性表征,局部融合权重生成模块还用于针对于至少两种图像属性中每种图像属性,基于至少一部分像素位置处的所针对的图像属性的图像属性差异和增强图像的所针对的图像属性的图像属性表征,生成至少一部分像素位置处的所针对的图像属性的属性融合权重,针对至少一部分像素位置处中每个像素位置处,对所针对的像素位置处相应的至少两种图像属性的属性融合权重进行融合,生成所针对的像素位置处的局部融合权重。
上述图像处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图15所示。该计算机设备包括处理器、存储器、输入/输出接口(Input/Output,简称I/O)和通信接口。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的数据库用于存储源图像和增强图像等数据。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种图像处理方法。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图16所示。该计算机设备包括处理器、存储器、输入/输出接口、通信接口、显示单元和输入装置。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口、显示单元和输入装置通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端进行有线或无线方式的通信,无线方式可通过WIFI、移动蜂窝网络、NFC(近场通信)或其他技术实现。该计算机可读指令被处理器执行时以实现一种图像处理方法。该计算机设备的显示单元用于形成视觉可见的画面,可以是显示屏、投影装置或虚拟现实成像装置,显示屏可以是液晶显示屏或电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图15和图16中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机可读指令,该处理器执行计算机可读指令时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,存储有计算机可读指令,该计算机可读指令被处理器执行时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机程序产品,包括计算机可读指令,该计算机可读指令被处理器执行时实现上述各方法实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(Magnetoresistive Random Access Memory,MRAM)、铁电存储器(Ferroelectric Random Access Memory,FRAM)、相变存储器(Phase Change Memory,PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器等。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等,不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等,不限于此。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (17)

  1. 一种图像处理方法,由计算机设备执行,所述方法包括:
    获取源图像和增强图像各自的属性表征,所述增强图像是对所述源图像进行增强处理获得的;
    对所述源图像和所述增强图像的属性表征进行比对,获得所述源图像和所述增强图像在至少一部分像素位置处的属性差异;
    基于所述属性差异和所述增强图像的属性表征,确定所述源图像中所述至少一部分像素位置处的局部融合权重;
    确定所述增强图像中所述至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与所述源图像中相同的像素位置处的局部融合权重负相关;及
    生成所述源图像和所述增强图像的融合图像。
  2. 根据权利要求1所述的方法,其特征在于,所述对所述源图像和所述增强图像的属性表征进行比对,获得所述源图像和所述增强图像在至少一部分像素位置处的属性差异包括:
    将所述源图像和所述增强图像的属性表征中每个相同的像素位置处的属性值分别进行比对,获得不同的所述属性表征在每个像素位置处的差分属性值;及
    基于不同的所述属性表征在每个像素位置处的差分属性值生成差分属性表征;所述差分属性表征中至少一部分像素位置处的属性值,表征所述源图像和所述增强图像在所述至少一部分像素位置处的属性差异。
  3. 根据权利要求2所述的方法,其特征在于,所述基于不同的所述属性表征在每个像素位置处的差分属性值生成差分属性表征,包括:
    将不同的所述属性表征在每个像素位置处的差分属性值分别映射到预设属性值范围内,获得不同的所述属性表征在每个像素位置处的属性差异;及
    基于不同的所述属性表征在每个像素位置处的属性差异生成差分属性表征;所述差分属性表征中每个像素位置处的属性值为相应像素位置处的属性差异。
  4. 根据权利要求3所述的方法,其特征在于,所述将不同的所述属性表征在每个像素位置处的差分属性值分别映射到预设属性值范围内,获得不同的所述属性表征在每个像素位置处的属性差异,包括:
    针对于不同的所述属性表征在每个像素位置处的差分属性值,当所针对的差分属性值低于预设差分属性阈值,将所述所针对的差分属性值映射为预设属性值范围的下限值,获得所述所针对的差分属性值相应的像素位置处的属性差异;
    当所述所针对的差分属性值不低于所述预设差分属性阈值,将所述所针对的差分属性值,以正相关映射方式映射到所述预设属性值范围内,获得所述所针对的差分属性值相应的像素位置处的属性差异。
  5. 根据权利要求1至4任一项所述的方法,其特征在于,所述至少一部分像素位置处包括形成标定区域的部分像素位置,所述增强图像的属性表征中所述标定区域中的每个像素位置处的属性值符合标定区域识别条件;所述融合图像中在非所述标定区域中的每个像素位置处的像素值,分别等于所述增强图像中相同的像素位置处的像素值。
  6. 根据权利要求5所述的方法,其特征在于,所述标定区域识别条件包括:所述增强图像的属性表征中所述标定区域中的像素位置构成连通域,且所述增强图像的属性表征中所述连通域中每个像素位置处的属性值属于预设标定属性值范围。
  7. 根据权利要求5所述的方法,其特征在于,所述源图像中非所述标定区域的像素位置处的局部融合权重为零;所述增强图像中非所述标定区域的像素位置处的增强融合权重,是根据所述源图像中相同的像素位置处的局部融合权重确定的;所述生成所述源图像和所述增强图像的融合图像,包括:
    针对所述源图像和所述增强图像的每个相同的像素位置,分别按照所针对的相同的像素位置处的局部融合权重和增强融合权重,对所述源图像和所述增强图像在所述所针对的相同的像素位置处的像素值加权融合,获得融合图像。
  8. 根据权利要求5所述的方法,其特征在于,所述生成所述源图像和所述增强图像的融合图像,包括:
    针对于标定区域中每个像素位置处,按照所针对的标定区域中像素位置处相应的局部融合权重和增强融合权重,将所述源图像和所述增强图像在所述所针对的标定区域中像素位置处的像素值进行加权融合,形成融合图像中所述所针对的标定区域中像素位置处的像素值;及
    将所述增强图像在非所述标定区域的每个像素位置处的像素值,分别作为所述融合图像中非所述标定区域中相同的像素位置处的像素值。
  9. 根据权利要求1至6任一项所述的方法,其特征在于,所述基于所述属性差异和所述增强图像的属性表征,生成所述源图像中所述至少一部分像素位置处的局部融合权重包括:
    对所述增强图像的属性表征进行边缘保持的滤波处理,获得所述增强图像的平滑属性表征;及
    基于所述属性差异和所述平滑属性表征中所述至少一部分像素位置处的属性值,生成所述源图像中所述至少一部分像素位置处的局部融合权重。
  10. 根据权利要求9所述的方法,其特征在于,所述基于所述属性差异和所述平滑属性表征中所述至少一部分像素位置处的属性值,生成所述源图像中所述至少一部分像素位置处的局部融合权重包括:
    针对所述至少一部分像素位置处中每个像素位置处,将所针对的像素位置处的属性差异和所述平滑属性表征中所述所针对的像素位置处的属性值进行融合,生成所述所针对的像素位置处的局部融合权重。
  11. 根据权利要求10所述的方法,其特征在于,所述将所述所针对的像素位置处的属性差异和所述平滑属性表征中所述所针对的像素位置处的属性值进行融合,生成所述所针对的像素位置处的局部融合权重包括:
    获取变化度拉伸系数和属性拉伸系数;
    基于所述变化度拉伸系数对所述所针对的像素位置处的属性差异进行权重调整,获得属性差异权重;
    基于所述属性拉伸系数对所述平滑属性表征中所述所针对的像素位置处的属性值进行权重调整,获得属性值权重;及
    对所述属性差异权重和所述属性值权重进行融合,生成所述所针对的像素位置处的局部融合权重。
  12. 根据权利要求1至6任一项所述的方法,其特征在于,所述属性差异包括所述至少一部分像素位置处的至少两种图像属性的图像属性差异,所述增强图像的属性表征包括所述增强图像的所述至少两种图像属性的图像属性表征;
    所述基于所述属性差异和所述增强图像的属性表征,生成所述源图像中所述至少一部分像素位置处的局部融合权重包括:
    针对于所述至少两种图像属性中每种图像属性,基于所述至少一部分像素位置处的所针对的图像属性的图像属性差异和所述增强图像的所述所针对的图像属性的图像属性表征,生成所述至少一部分像素位置处的所述所针对的图像属性的属性融合权重;及
    针对所述至少一部分像素位置处中每个像素位置处,对所针对的像素位置处相应的所述至少两种图像属性的属性融合权重进行融合,生成所述所针对的像素位置处的局部融合权重。
  13. 根据权利要求1至6任一项所述的方法,其特征在于,针对于所述至少一部分像素位置处中每个像素位置处,所述融合图像中所针对的像素位置处的像素值,是按所述所针对的像素位置处的局部融合权重和增强融合权重,对所述源图像和所述增强图像各自在所述所针对的像素位置处的像素值加权融合获得的。
  14. 一种图像处理装置,其特征在于,所述装置包括:
    属性表征获取模块,用于获取源图像和增强图像各自的属性表征;所述增强图像是对所述源图像进行增强处理获得的;
    属性表征比对模块,用于对所述源图像和所述增强图像的属性表征进行比对,获得所述源图像和所述增强图像在至少一部分像素位置处的属性差异;
    局部融合权重生成模块,用于基于所述属性差异和所述增强图像的属性表征,确定所述源图像中所述至少一部分像素位置处的局部融合权重;
    增强融合权重生成模块,用于确定所述增强图像中所述至少一部分像素位置处的增强融合权重,且确定的至少一个像素位置处的增强融合权重与所述源图像中相同的像素位置处的局部融合权重负相关;及
    融合图像生成模块,用于生成所述源图像和所述增强图像的融合图像。
  15. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机可读指令,其特征在于,所述处理器执行所述计算机可读指令时实现权利要求1至13中任一项所述的方法的步骤。
  16. 一种计算机可读存储介质,其上存储有计算机可读指令,其特征在于,所述计算机可读指令被处理器执行时实现权利要求1至13中任一项所述的方法的步骤。
  17. 一种计算机程序产品,包括计算机可读指令,其特征在于,该计算机可读指令被处理器执行时实现权利要求1至13中任一项所述的方法的步骤。