CN115170413A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN115170413A
CN115170413A
Authority
CN
China
Prior art keywords
image
sub
pixel
filter coefficient
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210744777.3A
Other languages
Chinese (zh)
Inventor
韩徐
胥立丰
刘欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd
Priority to CN202210744777.3A
Publication of CN115170413A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of tone mapping. The method comprises the following steps: determining a plurality of sub-images in an image to be processed, where the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed and the number of sub-images is consistent with the number of pixel points; for each sub-image, determining the average pixel value of the sub-image, and determining the image type of the sub-image according to the difference between the pixel value of the central pixel point of the sub-image and the average pixel value; performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed; and performing weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image. By performing layer decomposition according to image type, the method accurately identifies the base layer image and the detail layer image of the image to be processed and effectively improves the visual quality of image display.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of tone mapping technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In computer graphics and cinematography, High Dynamic Range (HDR) imaging is used to achieve a larger exposure dynamic range than ordinary digital imaging, and tone mapping is an image processing technique that approximately displays HDR images on a medium with a limited dynamic range. In practical rendering applications, although a display device may not be able to reproduce the entire luminance range of an HDR image, the real scene of the image can be matched to the displayed scene by tone mapping.
In the prior art, image edge-preserving filters (such as the bilateral filter and the guided filter) are generally adopted for tone mapping. The processing steps of the guided filter are as follows: determine a filter coefficient according to a preset regularization parameter, perform layer decomposition on the image to be processed according to the filter coefficient, and fuse the decomposed layers to obtain the tone-mapped image. The problem with this method is that the visual quality of the displayed image is poor, and a halo effect at image edges is easily caused.
Disclosure of Invention
Embodiments of the present application provide an image processing method and apparatus, an electronic device, and a computer-readable storage medium, which can avoid a problem in the prior art that a halo effect exists at an image edge when performing tone mapping. The technical scheme is as follows:
according to an aspect of an embodiment of the present application, there is provided an image processing method including:
determining a plurality of sub-images in an image to be processed; the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is consistent with the number of pixel points;
for each sub-image, determining the average pixel value of the sub-image, and determining the image type of the sub-image according to the difference value between the pixel value of the central pixel point of the sub-image and the average pixel value;
performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed;
and carrying out weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image.
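The four steps above resemble a guided-filter-style decomposition; a minimal single-scale sketch follows, in which the window size, the filter scale `eps`, the deviation weighting `(1 + dev)`, and the fusion weights are illustrative assumptions, not values taken from the application:

```python
import numpy as np

def tone_map(img, window=5, eps=0.01, detail_gain=1.5, base_atten=0.8):
    """Single-scale sketch of the claimed pipeline (all constants illustrative)."""
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")          # mirror filling at borders
    h, w = img.shape
    base = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            sub = padded[y:y + window, x:x + window]   # one sub-image per pixel
            mu, var = sub.mean(), sub.var()
            dev = abs(img[y, x] - mu)                  # pixel-change deviation
            a = var / (var + (1.0 + dev) * eps)        # first filter coefficient
            b = (1.0 - a) * mu                         # second filter coefficient
            base[y, x] = a * img[y, x] + b             # filtered centre pixel
    detail = img - base                                # detail layer
    return base_atten * base + detail_gain * detail    # weighted summation
```

For a perfectly flat image the window variance is zero, so the base layer equals the input, the detail layer vanishes, and only the base attenuation weight acts on the output.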
Optionally, the determining the image type of the sub-image includes:
for each sub-image, taking the difference value of the average pixel value and the pixel value of the central pixel point as the pixel change deviation value of the sub-image;
determining a first filter coefficient of the sub-image according to the pixel change deviation value;
the image type of the sub-image is determined based on the first filter coefficient.
Optionally, the determining a first filter coefficient of the sub-image according to the pixel variation deviation value includes:
acquiring a preset filtering scale;
weighting the filtering scale according to the pixel change deviation value of the sub-image to obtain a weighted filtering scale;
and determining a first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image.
Optionally, the average pixel value and the variance of the pixel values of the sub-images are calculated based on the following method:
determining the pixel points of each arrangement group in the sub-image, wherein an arrangement group is a row or a column;
for the pixel points of each arrangement group, if it is determined that the target parameters of the arrangement group have been stored in advance, retrieving the stored target parameters of the arrangement group; if it is determined that the target parameters of the arrangement group have not been stored in advance, calculating the target parameters of the arrangement group from the target parameters of the pixel points in the arrangement group and storing them;
obtaining the average pixel value or the pixel value variance of the sub-image according to the target parameters of all the arrangement groups; wherein the target parameters are the sum of the pixel values and the sum of the squares of the pixel values.
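The row-wise reuse of the stored target parameters (sum of values, sum of squares) can be sketched with cumulative per-row sums; the cumulative-sum formulation is an assumed implementation of the described caching, not the application's exact scheme:

```python
import numpy as np

def window_stats(img, window):
    """Mean and variance for the window around every pixel, reusing per-row
    partial sums (the 'target parameters' of each row arrangement group)."""
    pad = window // 2
    p = np.pad(img.astype(np.float64), pad, mode="reflect")
    # prefix sums along each row of the value and of the squared value
    row_sum = np.cumsum(np.insert(p, 0, 0.0, axis=1), axis=1)
    row_sq = np.cumsum(np.insert(p * p, 0, 0.0, axis=1), axis=1)
    h, w = img.shape
    n = window * window
    mean = np.empty((h, w))
    var = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            rows = slice(y, y + window)
            s = (row_sum[rows, x + window] - row_sum[rows, x]).sum()
            sq = (row_sq[rows, x + window] - row_sq[rows, x]).sum()
            mean[y, x] = s / n
            var[y, x] = sq / n - mean[y, x] ** 2
    return mean, var
```

Each row segment's sum is obtained as a difference of two prefix sums, so adjacent windows never re-scan their overlapping pixels.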
Optionally, the determining a first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image includes:
taking the sum of the weighted filtering scale and the variance of the pixel value as a self-adaptive variance value;
and taking the ratio of the pixel value variance to the self-adaptive variance value as a first filter coefficient of the sub-image.
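Combining the two optional steps above, the first filter coefficient takes the familiar variance-ratio form var / (var + scale), with the scale weighted by the pixel-change deviation; the specific weighting form `(1 + deviation) * scale` below is an assumption for illustration:

```python
def first_filter_coefficient(center, mean, var, scale=0.01):
    """Sketch of the first filter coefficient of a sub-image.
    center: pixel value of the central pixel point
    mean, var: average pixel value and pixel value variance of the sub-image
    scale: preset filtering scale (illustrative default)."""
    deviation = abs(mean - center)               # pixel-change deviation value
    weighted_scale = (1.0 + deviation) * scale   # assumed weighting of the scale
    adaptive_var = weighted_scale + var          # self-adaptive variance value
    return var / adaptive_var                    # first filter coefficient
```

The coefficient approaches 1 where the variance dominates the weighted scale (detail-rich regions) and approaches 0 in flat regions, which is what lets it discriminate image types.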
Optionally, the first filter coefficient is calculated based on the following manner:
shifting the self-adaptive variance value according to a preset shift coefficient to obtain a first shift result, and searching the reciprocal of the first shift result through a preset lookup table;
and determining the product of the reciprocal of the first shift result and the pixel variance value, shifting the product result according to a preset shift coefficient to obtain a second shift result, and taking the second shift result as a first filter coefficient.
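A fixed-point sketch of this shift-and-lookup evaluation follows; the shift coefficient of 8, the lookup-table size, and the Q8 output format are all assumptions, since the application does not fix these constants:

```python
SHIFT = 8  # assumed preset shift coefficient
# lookup table of fixed-point reciprocals for the shifted adaptive variance
RECIP_LUT = [0] + [(1 << (2 * SHIFT)) // i for i in range(1, (1 << SHIFT) + 1)]

def first_coeff_q8(var_q, adaptive_q):
    """var_q, adaptive_q: fixed-point integer variance and adaptive variance.
    Returns the first filter coefficient in Q8 (256 represents 1.0)."""
    idx = min(max(1, adaptive_q >> SHIFT), 1 << SHIFT)  # first shift result
    inv = RECIP_LUT[idx]                                # reciprocal via lookup
    return (var_q * inv) >> (2 * SHIFT)                 # second shift result
```

The shift narrows the reciprocal lookup to a small table, which is the usual motivation for this construction in hardware-oriented implementations.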
Optionally, the performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed includes:
for each sub-image, carrying out weighting processing on the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type, and determining a second filter coefficient of the sub-image;
and performing layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image to obtain a base layer image and a detail layer image of the image to be processed.
Optionally, the weighting the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type to determine the second filter coefficient of the sub-image includes:
weighting the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type to obtain the weighted average pixel value;
and taking the difference value of the average pixel value of the sub-image and the weighted average pixel value as a second filter coefficient of the sub-image.
Optionally, the performing layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image to obtain a base layer image and a detail layer image of the image to be processed includes:
filtering the pixel value of the central pixel point of the sub-image according to the first filter coefficient and the second filter coefficient to obtain a target pixel value of the central pixel point of the sub-image;
obtaining a base layer image of the image to be processed according to the target pixel values of the central pixel points of all the sub-images;
obtaining a detail layer image according to the base layer image and the image to be processed; and the pixel value of the pixel point in the detail layer image is the difference value of the pixel values of the pixel point in the image to be processed and the base layer image.
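Given per-pixel maps of the first filter coefficient and the window mean, the filtering of the central pixels and the subtraction that yields the detail layer can be sketched as:

```python
import numpy as np

def decompose(img, a, mean):
    """img: image to be processed; a: first filter coefficient per pixel;
    mean: average pixel value of each pixel's sub-image."""
    b = mean - a * mean      # second filter coefficient: (1 - a) * mean
    base = a * img + b       # target pixel values form the base layer image
    detail = img - base      # detail layer: input minus base layer
    return base, detail
```

Where the coefficient is 0 the base layer reduces to the window mean; where it is 1 the base layer passes the input through and the detail layer vanishes.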
Optionally, when the filter scale includes a first filter scale and a second filter scale, the first filter coefficient includes a first target filter coefficient determined according to the first filter scale and a second target filter coefficient determined according to the second filter scale;
the base layer image comprises a first base layer image and a second base layer image, and the detail layer image comprises a first detail layer image and a second detail layer image; wherein the first base layer image and the first detail layer image are determined according to a first target filter coefficient, and the second base layer image and the second detail layer image are determined according to a second target filter coefficient;
the above-mentioned weighted summation of the pixel values of the base layer image and the detail layer image to obtain the target image includes:
weighting the first detail layer image and the second detail layer image according to preset detail enhancement weights respectively;
weighting the second base layer image according to the preset contrast weakening weight;
superposing the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image to obtain a target image; and the pixel value of the pixel point of the target image is the sum of the pixel values of the pixel point in the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image.
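The two-scale fusion above can be sketched as a weighted superposition; the detail-enhancement weights and the contrast-weakening weight below are illustrative placeholders, not values from the application:

```python
import numpy as np

def fuse(base2, detail1, detail2, w_d1=1.2, w_d2=1.4, w_b=0.8):
    """base2: second (coarse-scale) base layer; detail1, detail2: the two
    detail layers. Weights: detail enhancement (w_d1, w_d2) and contrast
    weakening (w_b), all illustrative."""
    return w_b * base2 + w_d1 * detail1 + w_d2 * detail2
```

Only the coarse base layer is retained and attenuated, while both detail layers are amplified, which compresses overall contrast while preserving local detail.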
According to another aspect of embodiments of the present application, there is provided an image processing apparatus including:
the first determining module is used for determining a plurality of sub-images in the image to be processed; the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is consistent with the number of pixel points;
the second determining module is used for determining the average pixel value of the sub-images for each sub-image and determining the image type of the sub-images according to the difference value between the pixel value of the central pixel point of the sub-images and the average pixel value;
the decomposition module is used for carrying out layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed;
and the weighting module is used for carrying out weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image.
Optionally, when the second determining module determines the image type of the sub-image, the second determining module is configured to:
for each sub-image, taking the difference value of the average pixel value and the pixel value of the central pixel point as the pixel change deviation value of the sub-image;
determining a first filter coefficient of the sub-image according to the pixel change deviation value;
the image type of the sub-image is determined based on the first filter coefficient.
Optionally, when the second determining module determines the first filter coefficient of the sub-image according to the pixel variation deviation value, the second determining module is configured to:
acquiring a preset filtering scale;
weighting the filtering scale according to the pixel change deviation value of the sub-image to obtain a weighted filtering scale;
and determining a first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image.
Optionally, the average pixel value and the variance of the pixel values of the sub-images are calculated based on the following method:
determining the pixel points of each arrangement group in the sub-image, wherein an arrangement group is a row or a column;
for the pixel points of each arrangement group, if it is determined that the target parameters of the arrangement group have been stored in advance, retrieving the stored target parameters of the arrangement group; if it is determined that the target parameters of the arrangement group have not been stored in advance, calculating the target parameters of the arrangement group from the target parameters of the pixel points in the arrangement group and storing them;
obtaining the average pixel value or the pixel value variance of the sub-image according to the target parameters of all the arrangement groups; wherein the target parameters are the sum of the pixel values and the sum of the squares of the pixel values.
Optionally, when the second determining module determines the first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image, the second determining module is configured to:
taking the sum of the weighted filtering scale and the variance of the pixel value as a self-adaptive variance value;
and taking the ratio of the pixel value variance to the self-adaptive variance value as a first filter coefficient of the sub-image.
Optionally, the first filter coefficient is calculated based on the following manner:
shifting the self-adaptive variance value according to a preset shift coefficient to obtain a first shift result, and searching the reciprocal of the first shift result through a preset lookup table;
and determining the product of the reciprocal of the first shift result and the pixel variance value, shifting the product result according to a preset shift coefficient to obtain a second shift result, and taking the second shift result as a first filter coefficient.
Optionally, the decomposition module performs layer decomposition on the image to be processed according to the image type of each sub-image, and is configured to, when obtaining a base layer image and a detail layer image of the image to be processed:
for each sub-image, carrying out weighting processing on the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type, and determining a second filter coefficient of the sub-image;
and performing layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image to obtain a base layer image and a detail layer image of the image to be processed.
Optionally, the decomposition module performs weighting processing on the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type, and when determining the second filter coefficient of the sub-image, the decomposition module is configured to:
weighting the average pixel value of the sub-image according to a first filter coefficient corresponding to the image type to obtain a weighted average pixel value;
and taking the difference value of the average pixel value of the sub-image and the weighted average pixel value as a second filter coefficient of the sub-image.
Optionally, the decomposition module performs layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image, and is configured to, when obtaining a base layer image and a detail layer image of the image to be processed:
filtering the pixel value of the central pixel point of the sub-image according to the first filter coefficient and the second filter coefficient to obtain a target pixel value of the central pixel point of the sub-image;
obtaining a base layer image of the image to be processed according to the target pixel values of the central pixel points of all the sub-images;
obtaining a detail layer image according to the base layer image and the image to be processed; and the pixel value of the pixel point in the detail layer image is the difference value of the pixel values of the pixel point in the image to be processed and the base layer image.
Optionally, when the filter scale includes a first filter scale and a second filter scale, the first filter coefficient includes a first target filter coefficient determined according to the first filter scale and a second target filter coefficient determined according to the second filter scale;
the base layer pictures comprise a first base layer picture and a second base layer picture, and the detail layer pictures comprise a first detail layer picture and a second detail layer picture; wherein the first base layer image and the first detail layer image are determined according to a first target filter coefficient, and the second base layer image and the second detail layer image are determined according to a second target filter coefficient;
the weighting module performs weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image, and is configured to:
weighting the first detail layer image and the second detail layer image according to preset detail enhancement weights;
weighting the second base layer image according to a preset contrast weakening weight;
superposing the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image to obtain a target image; and the pixel value of the pixel point of the target image is the sum of the pixel values of the pixel point in the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image.
According to another aspect of the embodiments of the present application, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the steps of the method shown in the first aspect of the embodiments of the present application.
According to a further aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as set forth in the first aspect of embodiments of the present application.
According to an aspect of embodiments of the present application, there is provided a computer program product comprising a computer program that, when executed by a processor, performs the steps of the method illustrated in the first aspect of embodiments of the present application.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
the method comprises the steps of obtaining a plurality of sub-images in an image to be processed, determining the image type of each sub-image through the difference value of the average pixel value of each sub-image and the pixel value of the central pixel point of the sub-image, performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed, and performing weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image. Because the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of the sub-images is consistent with the number of the pixel points, the image type of the sub-image can be determined according to the difference value of the average pixel value of each sub-image and the pixel value of the central pixel point, and the type of the area where each pixel point in the image to be processed is located can be effectively distinguished. According to the method and the device, the base layer image and the detail layer image of the image to be processed are accurately identified according to the layer decomposition processing of the image type, and the filtering coefficient is determined according to the preset regularization parameter to carry out layer decomposition in the prior art.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of an overlapping area of sub-images of adjacent sliding windows in an image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a sub-image updated based on arrangement group data in an image processing method according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating an exemplary image processing method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processing electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms "comprises" and/or "comprising," when used in this specification in connection with embodiments of the present application, specify the presence of stated features, information, data, steps, operations, elements and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term; e.g., "A and/or B" may be implemented as "A", as "B", or as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Since the dynamic range of real scenes and of human-eye perception is very wide, when a 12-bit or even 20-bit HDR image or video is displayed on an 8-bit display, it cannot match the real scene and human-eye perception. Tone mapping is an image processing technique that approximates the display of a high-dynamic-range image on a medium with a limited dynamic range: it performs large contrast attenuation to transform the scene luminance into a displayable range while preserving image details, colors and other information, so that the tone-mapped scene matches the perception of the real scene.
In the prior art, tone mapping techniques are divided into global tone mapping and local tone mapping. The basic idea of local tone mapping is as follows: the image to be processed is decomposed into a plurality of windows of the same size, the central pixel of each window undergoes a series of processing of luminance, contrast, detail and so on according to the information of the surrounding pixels in the window, and finally the local tone mapping operation is completed.
The inventors have found that, because layer-decomposition-based local tone mapping generally needs a low-pass filter for the layer decomposition, design defects of the filter may, on the one hand, cause poor layer-decomposition quality; on the other hand, the filter's insufficient perception of image structure may introduce a halo effect (often appearing at image edges as a halo-like ring) after local tone mapping, which seriously degrades the visual effect of the tone-mapped image. It is therefore important to perform layer decomposition more effectively and to suppress the halo effect while ensuring the local tone mapping effect.
The application provides an image processing method, an image processing device, an electronic device and a computer-readable storage medium, which aim to solve the above technical problems in the prior art.
The embodiment of the application provides an image processing method, which can be implemented by a terminal or a server. The terminal or server determines a plurality of sub-images in an image to be processed; for each sub-image, it calculates the average pixel value of the sub-image and determines the image type of the sub-image according to the difference between the pixel value of the central pixel point and the average pixel value; it then performs layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image, and finally weights and sums the pixel values of the base layer image and the detail layer image to obtain a target image. In this way, layer decomposition based on image type is realized, which improves the layer-decomposition effect while effectively avoiding the halo effect at image edges.
The technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application will be described below through descriptions of several exemplary embodiments. It should be noted that the following embodiments may be referred to, referred to or combined with each other, and the description of the same terms, similar features, similar implementation steps and the like in different embodiments is not repeated.
As shown in fig. 1, the image processing method of the present application may be applied to the scene shown in fig. 1, specifically, the server 102 receives an image to be processed sent by the client 101, and the server 102 determines a plurality of sub-images by using each pixel point of the image to be processed as a central pixel point; then, for each sub-image, determining the image type of the sub-image according to the difference value between the pixel value of the central pixel point of the sub-image and the average pixel value of the sub-image, performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed, performing weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image after tone mapping is completed, and sending the target image to the client 101.
In the scenario shown in fig. 1, the image processing method may be performed in the server, or in another scenario, may be performed in the terminal.
Those skilled in the art will understand that the "terminal" used herein may be a Mobile phone, a tablet computer, a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), etc.; a "server" may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
An embodiment of the present application provides an image processing method, as shown in fig. 2, which may be applied to a server or a terminal that performs image processing, where the method includes:
s201, determining a plurality of sub-images in the image to be processed.
The central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is consistent with the number of pixel points.
Specifically, the server or terminal performing image processing may use a sliding window to scan each pixel point of the image to be processed in turn and, for each pixel point, extract a sub-image of a preset window size with that pixel point as the central pixel point. For the boundary area of the image to be processed, pixel points can be supplemented by mirror filling when extracting the sub-images, so that all sub-images have the same size and correspond one-to-one to the pixel points in the image to be processed.
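The sliding-window extraction described above can be sketched as follows (a non-normative illustration; the function name `sub_images` is our own, with mirror filling realized via numpy's reflect padding):

```python
import numpy as np

def sub_images(img, win=7):
    """Extract one win x win sub-image per pixel, centered on that pixel.

    Boundary pixels are supplemented by mirror (reflect) padding, so every
    sub-image has the same size and corresponds one-to-one to a pixel of
    the image to be processed."""
    r = win // 2
    padded = np.pad(img, r, mode='reflect')  # mirror filling at the borders
    # shape (H, W, win, win): one window per original pixel point
    return np.lib.stride_tricks.sliding_window_view(padded, (win, win))
```

For an H×W image this yields exactly H×W sub-images, one per central pixel point, matching step S201.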
In the embodiment of the application, when the image to be processed is a grayscale image, the gray value of each pixel point can be used as its pixel value; when the image to be processed is a multi-channel image, such as an RGB image, the maximum of the R, G and B channel values of each pixel point can be used as the pixel value of that pixel point.
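As a small illustration of this pixel-value convention (a sketch; the function name is ours):

```python
import numpy as np

def pixel_value_map(img):
    """Per-pixel value map: the gray value itself for a grayscale image,
    or the maximum of the R, G, B channel values for a multi-channel image."""
    img = np.asarray(img, dtype=np.float64)
    if img.ndim == 2:            # grayscale image
        return img
    return img.max(axis=2)       # multi-channel: max over R, G, B
```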
S202, for each sub-image, determining the average pixel value of the sub-image, and determining the image type of the sub-image according to the difference value between the pixel value of the central pixel point of the sub-image and the average pixel value.
Wherein the image types include a flat region, a detail-rich region, and a structural region.
Specifically, for each sub-image, the server or terminal performing image processing may take the difference between the pixel value of the central pixel point of the sub-image and the average pixel value of the sub-image as the pixel change deviation value. The pixel change deviation value may represent the smoothness of the sub-image, that is, the smoothness of the sub-image may be determined from it: for a sub-image of a flat area or a structural area, the smoothness is high; for a sub-image of a detail-rich area, the smoothness is low. Then, a first filter coefficient may be calculated based on the pixel change deviation value and the pixel variance value of the sub-image, and the type of the sub-image may be determined based on the first filter coefficient.
And S203, performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed.
Wherein, each image type corresponds to a first filter coefficient; the sizes of the base layer image and the detail layer image are consistent with the size of the image to be processed.
Specifically, the server or the terminal for image processing may filter the image to be processed according to the first filter coefficient of each sub-image, so as to complete layer decomposition of the image, and obtain a base layer image and a detail layer image.
In the embodiment of the present application, for a flat region, mean filtering may be performed on the region; for a region rich in details, the region may be smoothly filtered; for a structural region, no filtering or a filtering process with a small amplitude may be performed to retain the pixel information of the region as much as possible, i.e., to retain an edge structure in the image to be processed.
And S204, carrying out weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image.
Specifically, the server or the terminal for image processing may weight the detail layer image based on a preset detail enhancement weight, weight the base layer image based on a preset contrast weakening weight, and superimpose the weighted detail enhancement image and the base layer image to enhance the local contrast and detail change of the image to be processed, so as to obtain the target image.
In the embodiment of the application, after the target image is acquired, the target image can be linearly stretched to further increase the contrast, enhance the visual impact of image display and optimize the tone mapping effect.
In some embodiments, when the image to be processed is a grayscale image, the target image may be directly used as the image after the tone mapping process.
In other embodiments, when the image to be processed is a multi-channel image, taking an RGB image as an example, the color of the target image may be corrected to restore the color of the image, and the color-corrected target image is used as the image after the tone mapping process. The color correction can be performed by the following formula:

out_c = (src_c / I)^r × T  (1)

wherein c is a color channel of the image; r is a color correction coefficient, r ∈ (0,1), and the larger r is, the heavier the color correction degree is, otherwise the result is closer to the grayscale image; T is the pixel value of the target image; I is the pixel value of the image to be processed; src_c is the color channel value (R, G or B) of the image to be processed; and out_c is the color-corrected channel value (R, G or B) of the target image.
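A sketch of this color correction, assuming the reconstructed form out_c = (src_c / I)^r × T of formula (1) (the small epsilon guard is our own addition to avoid division by zero):

```python
import numpy as np

def restore_color(src, I, T, r=0.5):
    """Color correction after tone mapping: out_c = (src_c / I)**r * T.

    src: H x W x 3 image to be processed; I: H x W pixel-value map of src
    (e.g. the channel maximum); T: H x W tone-mapped target image;
    r in (0, 1): larger r keeps more color, smaller r tends to grayscale."""
    eps = 1e-12                               # guard against I == 0
    ratio = src / (I[..., None] + eps)        # per-channel color ratio
    return np.power(ratio, r) * T[..., None]
```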
According to the embodiment of the application, a plurality of sub-images in the image to be processed are obtained, the image type of each sub-image is determined from the difference between the average pixel value of the sub-image and the pixel value of its central pixel point, the image to be processed is subjected to layer decomposition according to the image type of each sub-image to obtain the base layer image and the detail layer image of the image to be processed, and the pixel values of the base layer image and the detail layer image are weighted and summed to obtain the target image. Because the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images equals the number of pixel points, determining the image type of each sub-image from this difference effectively distinguishes the type of area in which each pixel point of the image to be processed is located. Compared with the prior art, in which the filter coefficient is determined according to a preset regularization parameter to carry out layer decomposition, performing layer decomposition according to the image type allows the base layer image and the detail layer image of the image to be processed to be identified more accurately.
The embodiment of the present application provides a possible implementation manner, and determining an image type of a sub-image includes:
s301, regarding each sub-image, taking the difference value between the average pixel value and the pixel value of the central pixel point as the pixel change deviation value of the sub-image.
Specifically, for each sub-image, a server or a terminal for image processing may calculate an average pixel value of the sub-image according to pixel values of all pixel points in the sub-image; and then taking the difference value between the average pixel value and the pixel value of the central pixel point as a pixel change deviation value.
Wherein the pixel change deviation value of each sub-image may be calculated based on the following formula:

LVD_θ = |Ī_θ − I(x, y)|  (2)

wherein Ī_θ is the average pixel value of the θ-th sub-image, I(x, y) is the pixel value of the central pixel point of the sub-image, and LVD_θ is the pixel change deviation value of the sub-image.
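This step can be sketched as follows, assuming the reconstructed formula (2) uses the absolute difference:

```python
import numpy as np

def pixel_change_deviation(sub):
    """Pixel change deviation of one sub-image: absolute difference
    between the sub-image's average pixel value and its center pixel."""
    sub = np.asarray(sub, dtype=np.float64)
    cy, cx = sub.shape[0] // 2, sub.shape[1] // 2
    return abs(sub.mean() - sub[cy, cx])
```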
S302, determining a first filter coefficient of the sub-image according to the pixel change deviation value.
Specifically, the first filter coefficient of the sub-image may be determined according to the pixel variation deviation value of the sub-image, the pixel variance value of the sub-image, and a preset filter scale.
In the embodiment of the application, the number of the filtering scales can be two, so that two first filtering coefficients can be obtained, multi-scale layer decomposition can be performed on the image to be processed according to the two first filtering coefficients, and the layer decomposition effect of the image is further improved.
The calculation process of the specific first filter coefficient will be described in detail below.
And S303, determining the image type of the sub-image according to the first filter coefficient.
Specifically, the correspondence between the data range of the first filter coefficient and the image type may be established in advance, and the image type of the sub-image may be determined according to the correspondence.
In the embodiment of the application, for each sub-image, a first filter coefficient is determined based on the difference value between the average pixel value and the pixel value of the central pixel point; and then the image type of the sub-image can be determined according to the corresponding relation between the data range of the first filter coefficient and the image type, so that the flat area, the detail-rich area and the structural area of the sub-image can be distinguished, and the image to be processed can be filtered through the image type subsequently, so that the layer decomposition effect for the image to be processed is improved.
The embodiment of the present application provides a possible implementation manner, and determining a first filter coefficient of a sub-image according to a pixel variation deviation value includes:
s401, acquiring a preset filtering scale.
The number of the filter scales can be one or more, when two filter scales exist, two first filter coefficients can be obtained, and then multi-scale layer decomposition can be performed on the image to be processed according to the two first filter coefficients, so that the layer decomposition effect of the image is further improved.
The process of determining different first filter coefficients based on different filter scales is the same, and is not described herein again, and a specific description will be given below by taking one filter scale as an example.
S402, weighting the filtering scale according to the pixel change deviation value of the sub-image to obtain a weighted filtering scale; and determining a first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image.
In the embodiment of the present application, the first filter coefficient is determined based on the preset filter scale and the pixel variation deviation value of the sub-image, and then the image type of each sub-image can be determined according to the first filter coefficient:
the calculation can be based on the following formula:
a = σ_θ² / (σ_θ² + LVD_θ × ε)  (3)

wherein σ_θ² is the pixel value variance of the sub-image, LVD_θ is the pixel change deviation value of the sub-image, a is the first filter coefficient, and ε is the filter scale, a positive non-zero number.

When the sub-image belongs to a flat area, σ_θ² ≈ 0 and the term LVD_θ × ε dominates, so the first filter coefficient a ≈ 0;

when the sub-image belongs to a detail-rich area, LVD_θ > 0 and σ_θ² is comparable to LVD_θ × ε, so the first filter coefficient a ∈ (0,1);

when the sub-image belongs to a structural area, σ_θ² >> LVD_θ × ε, so the first filter coefficient a ≈ 1.
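The coefficient and the resulting classification can be sketched together as follows (the cut-off thresholds 0.1 and 0.9 are illustrative choices of ours; the source only fixes the limiting behaviour a ≈ 0, a ∈ (0,1), a ≈ 1):

```python
import numpy as np

def first_filter_coefficient(sub, eps_scale):
    """a = var / (var + LVD * eps), per the reconstructed formula (3)."""
    sub = np.asarray(sub, dtype=np.float64)
    cy, cx = sub.shape[0] // 2, sub.shape[1] // 2
    lvd = abs(sub.mean() - sub[cy, cx])   # pixel change deviation value
    var = sub.var()                       # pixel value variance
    return var / (var + lvd * eps_scale + 1e-12)

def image_type(a, lo=0.1, hi=0.9):
    """Map a first filter coefficient to an image type."""
    if a < lo:
        return 'flat'
    return 'structural' if a > hi else 'detail-rich'
```

On a step edge the central pixel deviates from the window mean far less than the window variance grows, so a approaches 1 and the edge is preserved.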
Therefore, the image type of each sub-image can be determined according to the numerical range of the first filter coefficient, and a good basis is laid for the decomposition of the subsequent image layer.
In the embodiment of the present application, a possible implementation manner is provided, and the average pixel value and the pixel value variance of the sub-image are calculated based on the following manner:
s501, determining pixel points of all arrangement groups in the subimages, wherein the arrangement groups are rows or columns.
Specifically, taking a sub-image of size 7×7 as an example, each sub-image has 7 rows and 7 columns of pixel points, and an arrangement group may be one row or one column, which is not limited in this embodiment.
In the embodiment of the present application, the average pixel value and the pixel value variance of the sub-image can be obtained by the following formulas:

Ī_θ = (1/N) Σ_{i,j} I_θ(i, j)  (4)

σ_θ² = (1/N) Σ_{i,j} I_θ(i, j)² − Ī_θ²  (5)

wherein I_θ(i, j) is the pixel value of each pixel point in the sub-image and N is the number of pixel points in the sub-image. Taking a 7×7 sub-image as an example, computing formulas (4) and (5) directly requires 49 multiplications and 98 additions per sub-image, which is costly in a hardware implementation.
As shown in fig. 3, for a 7×7 processing window, the sub-images of two adjacent central pixel points overlap in 42 pixel points, and the summation in formulas (4) and (5) is independent for each pixel point, so splitting the sub-image into independent arrangement groups yields the same result; for two adjacent central pixel points, the processing of the 42 overlapping pixel points would otherwise be repeated. Therefore, the embodiment of the application can simplify the calculation and save hardware implementation cost in the following manner:
and S502, for the pixel points of each arrangement group, if the target parameters of the arrangement groups stored in advance are determined, calling the target parameters of the arrangement groups stored in advance, and if the target parameters of the arrangement groups not stored in advance are determined, calculating and storing the target parameters of the arrangement groups according to the target parameters of each pixel point in the arrangement groups.
S503, obtaining an average pixel value or a variance of pixel values of the sub-images according to the target parameters of all the arrangement groups.
The sum of pixel values and the sum of pixel value products for each permutation group are calculated independently for each permutation group:
sum_mean[7] = {sum1, sum2, sum3, sum4, sum5, sum6, sum7}  (6)

sum_corr[7] = {sum11, sum22, sum33, sum44, sum55, sum66, sum77}  (7)

wherein sum1 is the sum of the pixel values of the pixel points in the first arrangement group, and sum11 is the sum of the squares of the pixel values of the pixel points in the first arrangement group. sum_mean[7] and sum_corr[7] are the target parameter arrays formed by the 7 arrangement groups in the sub-image; the target parameters are the sum of the pixel values and the sum of the squares of the pixel values.
As shown in fig. 4, the left image is an original sub-image containing 7×7 pixel points; taking each column of pixel points as an arrangement group yields the pixel set of 7 arrangement groups shown in the middle image of fig. 4. In a hardware implementation, each clock cycle processes only one arrangement group, which requires only 7 multiplications and 14 additions; after seven clock cycles the first sub-image is complete and sum_mean[7] and sum_corr[7] are obtained. The average pixel value of the current sub-image is then Ī_θ = (Σ_k sum_mean[k]) / N, the mean of the squared pixel values is m_θ = (Σ_k sum_corr[k]) / N, and the pixel value variance is σ_θ² = m_θ − Ī_θ².
When the next sub-image is calculated, because of the overlapping middle portion, as shown in the right diagram of fig. 4, only the intermediate result of the leftmost arrangement group of the first sub-image needs to be discarded, and the intermediate result of the newly entered rightmost arrangement group of the second sub-image needs to be computed:
sum_mean[7] = {sum2, sum3, sum4, sum5, sum6, sum7, sum8}  (8)

sum_corr[7] = {sum22, sum33, sum44, sum55, sum66, sum77, sum88}  (9)
Therefore, only the data of one arrangement group needs to be updated between every two adjacent sub-images, which avoids repeated operations as well as a large number of additions and multiplications within each sub-image, greatly saving hardware implementation cost.
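A sketch of this column-wise update for one row of windows (the variable names sum_mean/sum_corr follow formulas (6)-(9); the helper itself is our own framing):

```python
import numpy as np

def row_window_stats(rows, win=7):
    """Mean and variance of every win x win window sliding along a
    (win, W) block of pixel rows.

    Per-column sums are computed once; sliding the window one pixel drops
    one column's sums and adds the next column's, mirroring the
    sum_mean[7]/sum_corr[7] update of formulas (6)-(9)."""
    n = win * win
    sum_mean = rows.sum(axis=0)          # per-column pixel sums
    sum_corr = (rows ** 2).sum(axis=0)   # per-column squared-pixel sums
    s, sq = sum_mean[:win].sum(), sum_corr[:win].sum()
    means, variances = [s / n], [sq / n - (s / n) ** 2]
    for k in range(win, rows.shape[1]):  # slide right: one group in, one out
        s += sum_mean[k] - sum_mean[k - win]
        sq += sum_corr[k] - sum_corr[k - win]
        means.append(s / n)
        variances.append(sq / n - (s / n) ** 2)
    return np.array(means), np.array(variances)
```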
The embodiment of the present application provides a possible implementation manner, and determining a first filter coefficient of a sub-image according to a weighted filter scale and a pixel value variance of the sub-image includes:
s601, taking the sum of the weighted filtering scale and the variance of the pixel value as a self-adaptive variance value.
S602, the ratio of the pixel value variance to the adaptive variance value is used as a first filter coefficient of the sub-image.
Specifically, the calculation can be performed based on the following formula:
a = σ_θ² / x,  x = σ_θ² + LVD_θ × ε  (10)

wherein σ_θ² is the pixel value variance of the sub-image, LVD_θ is the pixel change deviation value of the sub-image, a is the first filter coefficient, ε is the filter scale, a positive non-zero number, and x is the adaptive variance value.
In the embodiment of the present application, in order to simplify the calculation cost of hardware and reduce the complexity of hardware implementation, when the first filter coefficient a is calculated, the calculation may be performed in a manner of converting a division operation into a lookup table, and the specific process is as follows:
in the embodiment of the present application, a possible implementation manner is provided, and the first filter coefficient is calculated based on the following manner:
shifting the self-adaptive variance value according to a preset shift coefficient to obtain a first shift result, and searching the reciprocal of the first shift result through a preset lookup table;
and determining the product of the reciprocal of the first shifting result and the pixel variance value, shifting the product result according to a preset shifting coefficient to obtain a second shifting result, and taking the second shifting result as a first filter coefficient.
In the embodiment of the present application, it follows from the rules of division that, when the denominator is not 0, dividing numerator and denominator by the same non-zero number leaves the result unchanged. For the hardware implementation:

a = (σ_θ² × (1 / (x >> rshift))) >> rshift  (11)

wherein rshift is the right shift coefficient.
A lookup table of the reciprocal 1/(x >> rshift) can be preset; its table length can be 32 or 64. Taking a table length of 64 as an example: when x is 100, in order to bring the index within the table length, x can be right-shifted by one bit (i.e., divided by 2), and then the reciprocal 1/(x >> rshift) is read from the table based on x >> rshift. This converts the division into a multiplication, thereby avoiding a divider in the hardware implementation and saving hardware implementation cost.
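A sketch of the table lookup (the Q16 fixed-point precision `frac_bits` is an illustrative choice of ours, not taken from the source):

```python
def filter_coeff_by_lut(var, x, rshift=1, table_len=64, frac_bits=16):
    """Division-free a = var / x per formulas (10)-(11): shift x into the
    table range, look up a fixed-point reciprocal, multiply, shift back."""
    # preset lookup table of 1/y in Q(frac_bits) fixed point
    recip = [0] + [(1 << frac_bits) // y for y in range(1, table_len)]
    xs = x >> rshift                     # first shift result, indexes the table
    prod = var * recip[xs]               # var * (1 / (x >> rshift))
    return (prod >> rshift) / float(1 << frac_bits)  # second shift, to float
```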
The embodiment of the present application provides a possible implementation manner, performing layer decomposition on an image to be processed according to an image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed, including:
s701, for each sub-image, weighting the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type, and determining a second filter coefficient of the sub-image.
Specifically, the server or the terminal for performing image processing may pre-construct a mapping relationship between the first filter coefficient, the average pixel value, and the second filter coefficient, and determine the second filter coefficient of the sub-image according to the mapping relationship.
S702, according to the first filter coefficient and the second filter coefficient of each sub-image, performing layer decomposition on the image to be processed to obtain a base layer image and a detail layer image of the image to be processed.
Specifically, the pixel values of the pixel points in the image to be processed may be filtered according to the first filter coefficient and the second filter coefficient to obtain a base layer image of the image to be processed, and then the detail layer image is determined based on the base layer image, where a specific layer decomposition process will be described in detail below.
The embodiment of the present application provides a possible implementation manner, performing weighting processing on an average pixel value of a sub-image according to a first filter coefficient corresponding to an image type, and determining a second filter coefficient of the sub-image, including:
weighting the average pixel value of the sub-image according to a first filter coefficient corresponding to the image type to obtain a weighted average pixel value; and taking the difference value of the average pixel value of the sub-image and the weighted average pixel value as a second filter coefficient of the sub-image.
Specifically, the second filter coefficient of each sub-image may be calculated according to the following formula:

b = Ī_θ − a × Ī_θ = (1 − a) × Ī_θ  (12)

wherein Ī_θ is the average pixel value of the sub-image, a is the first filter coefficient of the sub-image, and b is the second filter coefficient of the sub-image.
The embodiment of the present application provides a possible implementation manner, in which a layer decomposition is performed on an image to be processed according to a first filter coefficient and a second filter coefficient of each sub-image, so as to obtain a base layer image and a detail layer image of the image to be processed, including:
s801, filtering the pixel value of the central pixel point of the sub-image according to the first filter coefficient and the second filter coefficient to obtain a target pixel value of the central pixel point of the sub-image; and obtaining a base layer image of the image to be processed according to the target pixel values of the central pixel points of all the sub-images.
Specifically, the target pixel value of the center pixel point in each sub-image may be calculated based on the following formula:
B = a × I + b  (13)

wherein I is the pixel value of the central pixel point of the sub-image, a is the first filter coefficient corresponding to the sub-image, b is the second filter coefficient corresponding to the sub-image, and B is the target pixel value of the central pixel point of the sub-image.
And S802, obtaining a detail layer image according to the base layer image and the image to be processed.
And the pixel value of the pixel point in the detail layer image is the difference value of the pixel values of the pixel point in the image to be processed and the base layer image.
Specifically, the pixel value of each pixel point of the detail layer image can be obtained based on the following formula:
D = I − B  (14)

wherein I is the pixel value of the pixel point in the image to be processed, B is the pixel value of the corresponding pixel point in the base layer image, and D is the pixel value of the pixel point in the detail layer image.
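Putting S801-S802 together, a single-scale decomposition can be sketched as follows (a non-normative vectorized illustration; the 1e-12 guard is our own):

```python
import numpy as np

def layer_decompose(img, eps_scale, win=7):
    """Single-scale layer decomposition: per pixel, a per formula (3),
    b = (1 - a) * mean per formula (12), base layer B = a * I + b per
    formula (13), detail layer D = I - B per formula (14)."""
    img = np.asarray(img, dtype=np.float64)
    r = win // 2
    p = np.pad(img, r, mode='reflect')
    w = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    mean = w.mean(axis=(2, 3))                 # average pixel value per window
    var = w.var(axis=(2, 3))                   # pixel value variance per window
    lvd = np.abs(mean - img)                   # pixel change deviation value
    a = var / (var + lvd * eps_scale + 1e-12)  # first filter coefficient
    base = a * img + (1.0 - a) * mean          # B = a*I + b with b = (1-a)*mean
    return base, img - base                    # base layer, detail layer
```

By construction the two layers sum back to the input image, as formula (14) requires.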
While the above describes layer decomposition with a single filtering scale as an example, in another aspect the embodiment of the present application may perform layer decomposition of the image to be processed by combining multiple scales; two filtering scales are taken as an example below.
When the filtering scale comprises a first filtering scale and a second filtering scale, the first filtering coefficient comprises a first target filtering coefficient determined according to the first filtering scale and a second target filtering coefficient determined according to the second filtering scale;
the base layer image comprises a first base layer image and a second base layer image, and the detail layer image comprises a first detail layer image and a second detail layer image; wherein the first base layer image and the first detail layer image are determined according to the first target filter coefficient, and the second base layer image and the second detail layer image are determined according to the second target filter coefficient.
In the embodiment of the present application, a first filtering scale ε1 and a second filtering scale ε2 may be adopted to achieve multi-scale image layer decomposition. Specifically, the average pixel value Ī_θ and the pixel value variance σ_θ² of each sub-image are the same at the two scales, so the corresponding first target filter coefficient a1 and second target filter coefficient a2 can be obtained simply by substituting the two filter scales into formula (3), without repeated calculation; meanwhile, the two corresponding second filter coefficients b1 and b2 can be obtained according to formula (12).

The specific image layer decomposition process is as follows:

B1 = a1 × I + b1  (15)

B2 = a2 × I + b2  (16)

wherein B1 is the first base layer image and B2 is the second base layer image.
Specifically, a first detail layer image may be obtained according to the image to be processed and the first base layer image, and a second detail layer image may be obtained according to the first base layer image and the second base layer image. The pixel value of the pixel point of the first detail layer image is the difference value between the pixel values of the pixel point in the image to be processed and the first base layer image, and the pixel value of the pixel point of the second detail layer image is the difference value between the pixel values of the pixel point in the first base layer image and the second base layer image.
D1 = I − B1  (17)

D2 = B1 − B2  (18)

wherein D1 is the first detail layer image and D2 is the second detail layer image.
Carrying out weighted summation on pixel values of the base layer image and the detail layer image to obtain a target image, wherein the weighted summation comprises the following steps:
and S901, weighting the first detail layer image and the second detail layer image according to preset detail enhancement weights respectively.
S902, weighting the second base layer image according to the preset contrast weakening weight; global tone mapping is carried out on the second base layer image;
s903, overlapping the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image to obtain a target image; and the pixel value of the pixel point of the target image is the sum of the pixel values of the pixel point in the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image.
The specific implementation formula is as follows:
T = k1 × B2 + k2 × D2 + k3 × D1  (19)

wherein T is the target image, k1 is the contrast weakening weight, and k2 and k3 are detail enhancement weights; k1 ∈ (0,1), k2 ≥ 1, k3 ≥ 1.
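Under the reading that both base layers are filtered from the input I (formulas (15)-(16)), the full two-scale fusion can be sketched as follows (the weight and scale values below are illustrative choices of ours; the source only constrains k1 ∈ (0,1) and k2, k3 ≥ 1):

```python
import numpy as np

def tone_map(img, eps1=1.0, eps2=30.0, k1=0.6, k2=1.5, k3=2.0, win=7):
    """Two-scale decomposition and fusion: B1, B2 per formulas (15)-(16),
    D1 = I - B1 and D2 = B1 - B2 per formulas (17)-(18), and
    T = k1*B2 + k2*D2 + k3*D1 per formula (19)."""
    img = np.asarray(img, dtype=np.float64)
    r = win // 2
    p = np.pad(img, r, mode='reflect')
    w = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    mean, var = w.mean(axis=(2, 3)), w.var(axis=(2, 3))
    lvd = np.abs(mean - img)                    # stats reused for both scales
    bases = []
    for eps in (eps1, eps2):
        a = var / (var + lvd * eps + 1e-12)     # formula (3), one per scale
        bases.append(a * img + (1.0 - a) * mean)
    b1, b2 = bases
    d1, d2 = img - b1, b1 - b2                  # formulas (17)-(18)
    return k1 * b2 + k2 * d2 + k3 * d1          # formula (19)
```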
After the target image is obtained, linear stretching can be performed on the target image, and the contrast of a brightness channel is improved, so that the tone mapping effect of the image is optimized.
In order to better understand the above image processing method, an example of the image processing method of the present application is described in detail below with reference to fig. 5, and the method includes the following steps:
s1001, determining a plurality of sub-images in the image to be processed.
The central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of the sub-images is consistent with that of the pixel points.
Specifically, the average pixel value and the pixel value variance of the sub-image may be obtained as follows:
(1) Determining pixel points of all arrangement groups in the subimages, wherein the arrangement groups are rows or columns;
(2) For the pixel points of each arrangement group, if the target parameters of the arrangement groups stored in advance are determined, the target parameters of the arrangement groups stored in advance are called, and if the target parameters of the arrangement groups not stored in advance are determined, the target parameters of the arrangement groups are calculated and stored according to the target parameters of the pixel points in the arrangement groups; wherein, the target parameter is the sum of the pixel values and the sum of squares of the pixel values;
(3) And obtaining the average pixel value or the variance of the pixel values of the sub-images according to the target parameters of all the arrangement groups.
And S1002, regarding each sub-image, taking the difference value between the average pixel value and the pixel value of the central pixel point as the pixel change deviation value of the sub-image.
S1003, respectively weighting a preset first filtering scale and a preset second filtering scale according to the pixel change deviation value of the sub-image to obtain a first weighted filtering scale and a second weighted filtering scale.
S1004, determining a first target filter coefficient of the sub-image according to the first weighted filter scale and the pixel value variance of the sub-image; and determining a second target filter coefficient of the sub-image according to the second weighted filter scale and the pixel value variance of the sub-image.
Specifically, the process of determining the first target filter coefficient is as follows:
taking the sum of the first weighted filtering scale and the variance of the pixel value as a self-adaptive variance value;
and taking the ratio of the pixel value variance to the self-adaptive variance value as a first target filter coefficient of the sub-image.
When the ratio of the pixel value variance to the adaptive variance value is calculated, division operation can be replaced by a mode based on a lookup table, so that the operation efficiency is improved, and the specific process is as follows:
shifting the self-adaptive variance value according to a preset shift coefficient to obtain a first shift result, and searching the reciprocal of the first shift result through a preset lookup table; and determining the product of the reciprocal of the first shift result and the pixel variance value, shifting the product result according to a preset shift coefficient to obtain a second shift result, and taking the second shift result as a first target filter coefficient.
The process of calculating the second target filter coefficient is the same as the above process, and is not described herein again.
S1005, for each sub-image, carrying out weighting processing on the average pixel value of the sub-image according to the first target filter coefficient corresponding to the image type, and determining a third target filter coefficient of the sub-image; and performing weighting processing on the average pixel value of the sub-image according to the second target filter coefficient corresponding to the image type to determine a fourth target filter coefficient of the sub-image.
S1006, according to the first target filter coefficient and the third target filter coefficient of each sub-image, performing layer decomposition on the image to be processed to obtain a first base layer image and a first detail layer image of the image to be processed. And meanwhile, performing layer decomposition on the image to be processed according to the second target filter coefficient and the fourth target filter coefficient of each sub-image to obtain a second base layer image and a second detail layer image of the image to be processed.
S1007, weighting the first detail layer image and the second detail layer image according to the preset detail enhancement weight; the second base layer image is weighted according to a preset contrast weakening weight.
S1008, overlapping the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image to obtain a target image; and the pixel value of the pixel point of the target image is the sum of the pixel values of the pixel point in the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image.
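The recombination in S1007–S1008 can be sketched as follows; the detail enhancement weights and the contrast weakening weight are hypothetical tuning values, since the application only states that they are preset:

```python
import numpy as np

def compose_target(base2, detail1, detail2,
                   w_detail1=1.5, w_detail2=1.2, w_contrast=0.8):
    """Target image per S1007-S1008: the contrast-weakened second base
    layer plus the two enhancement-weighted detail layers, summed pixel-wise.
    The three weights are assumed example values."""
    return w_contrast * base2 + w_detail1 * detail1 + w_detail2 * detail2
```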
In the embodiments of the present application, a plurality of sub-images in the image to be processed are determined; the image type of each sub-image is determined from the difference between the average pixel value of the sub-image and the pixel value of its central pixel point; the image to be processed is decomposed into layers according to the image type of each sub-image, obtaining the base layer image and the detail layer image of the image to be processed; and the pixel values of the base layer image and the detail layer image are weighted and summed to obtain the target image. Because the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images equals the number of pixel points, determining the image type of each sub-image from this difference effectively distinguishes the type of region in which each pixel point of the image to be processed is located. Compared with the prior art, in which the filter coefficient used for layer decomposition is determined from a preset regularization parameter, performing the layer decomposition according to the image type allows the base layer image and the detail layer image of the image to be processed to be identified more accurately.
An embodiment of the present application provides an image processing apparatus, and as shown in fig. 6, the image processing apparatus 60 may include: a first determination module 601, a second determination module 602, a decomposition module 603, and a weighting module 604;
the first determining module 601 is configured to determine a plurality of sub-images in an image to be processed; wherein the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is equal to the number of pixel points;
a second determining module 602, configured to determine, for each sub-image, an average pixel value of the sub-image, and determine an image type of the sub-image according to a difference between a pixel value of a central pixel point of the sub-image and the average pixel value;
the decomposition module 603 is configured to perform layer decomposition on the image to be processed according to the image type of each sub-image, so as to obtain a base layer image and a detail layer image of the image to be processed;
and the weighting module 604 is configured to perform weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image.
In an embodiment of the present application, a possible implementation manner is provided, and when the second determining module 602 determines the image type of the sub-image, the second determining module is configured to:
for each sub-image, taking the difference value of the average pixel value and the pixel value of the central pixel point as the pixel change deviation value of the sub-image;
determining a first filter coefficient of the sub-image according to the pixel change deviation value;
the image type of the sub-image is determined based on the first filter coefficient.
In an embodiment of the present application, a possible implementation manner is provided, and when the second determining module 602 determines the first filter coefficient of the sub-image according to the pixel variation deviation value, the second determining module is configured to:
acquiring a preset filtering scale;
weighting the filtering scale according to the pixel change deviation value of the sub-image to obtain a weighted filtering scale;
and determining a first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image.
In the embodiment of the present application, a possible implementation manner is provided, and the average pixel value and the pixel value variance of the sub-image are calculated based on the following manner:
determining the pixel points of each arrangement group in the sub-image, wherein an arrangement group is a row or a column;
for the pixel points of each arrangement group, if it is determined that a target parameter of the arrangement group has been stored in advance, the pre-stored target parameter of the arrangement group is retrieved; if it is determined that no target parameter of the arrangement group has been stored in advance, the target parameter of the arrangement group is calculated from the target parameters of the pixel points in the arrangement group and then stored;
obtaining the average pixel value or the pixel value variance of the sub-image according to the target parameters of all the arrangement groups; wherein the target parameters are the sum of the pixel values and the sum of the squares of the pixel values.
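A sketch of this cached target-parameter scheme, taking columns as the arrangement groups; the 3×3 window size is an illustrative choice:

```python
import numpy as np

def window_mean_var(img, r=1):
    """Per-pixel window mean and variance, reusing per-column cached
    target parameters (sum of pixel values and sum of squared values)."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, r, mode="edge")     # replicate borders for edge windows
    h, w = img.shape
    n = (2 * r + 1) ** 2
    cache = {}                            # (top_row, col) -> (sum, sum_sq)

    def column_strip(top, col):
        # Calculate and store the target parameters only on first use;
        # later windows containing this strip retrieve the stored values.
        if (top, col) not in cache:
            seg = pad[top:top + 2 * r + 1, col]
            cache[(top, col)] = (seg.sum(), (seg * seg).sum())
        return cache[(top, col)]

    mean = np.empty((h, w))
    var = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            s = sq = 0.0
            for c in range(j, j + 2 * r + 1):
                cs, csq = column_strip(i, c)
                s += cs
                sq += csq
            mean[i, j] = s / n
            var[i, j] = sq / n - (s / n) ** 2   # E[x^2] - (E[x])^2
    return mean, var
```

Each column strip is summed once and shared by all horizontally adjacent windows, which is the saving the scheme describes.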
In an embodiment of the present application, a possible implementation manner is provided, and when the second determining module 602 determines the first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image, the second determining module is configured to:
taking the sum of the weighted filtering scale and the pixel value variance as an adaptive variance value;
and taking the ratio of the pixel value variance to the adaptive variance value as a first filter coefficient of the sub-image.
The embodiment of the present application provides a possible implementation manner, and the first filter coefficient is calculated based on the following manner:
shifting the adaptive variance value according to a preset shift coefficient to obtain a first shift result, and looking up the reciprocal of the first shift result in a preset lookup table;
and determining the product of the reciprocal of the first shift result and the pixel value variance, shifting the product according to the preset shift coefficient to obtain a second shift result, and taking the second shift result as the first filter coefficient.
In the embodiment of the present application, a possible implementation manner is provided, and when the decomposition module 603 performs layer decomposition on the image to be processed according to the image type of each sub-image, and obtains a base layer image and a detail layer image of the image to be processed, the decomposition module is configured to:
for each sub-image, carrying out weighting processing on the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type, and determining a second filter coefficient of the sub-image;
and performing layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image to obtain a base layer image and a detail layer image of the image to be processed.
In the embodiment of the present application, a possible implementation manner is provided, where the decomposition module 603 performs weighting processing on an average pixel value of the sub-image according to a first filter coefficient corresponding to an image type, and when determining a second filter coefficient of the sub-image, the decomposition module is configured to:
weighting the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type to obtain the weighted average pixel value;
and taking the difference value of the average pixel value of the sub-image and the weighted average pixel value as a second filter coefficient of the sub-image.
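Written out, the two steps above give b = mean − a·mean, i.e. b = (1 − a)·mean. A minimal sketch:

```python
def second_filter_coefficient(a, mean):
    """b = mean - a * mean: weight the average pixel value by the first
    filter coefficient, then subtract the weighted value from the mean."""
    weighted_mean = a * mean          # weighted average pixel value
    return mean - weighted_mean       # equivalently (1 - a) * mean
```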
In the embodiment of the present application, a possible implementation manner is provided, and when the decomposition module 603 performs layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image, and obtains a base layer image and a detail layer image of the image to be processed, the decomposition module is configured to:
filtering the pixel value of the central pixel point of the sub-image according to the first filter coefficient and the second filter coefficient to obtain a target pixel value of the central pixel point of the sub-image;
obtaining a base layer image of the image to be processed according to the target pixel values of the central pixel points of all the subimages;
obtaining a detail layer image according to the base layer image and the image to be processed; and the pixel value of the pixel point in the detail layer image is the difference value of the pixel values of the pixel point in the image to be processed and the base layer image.
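The three steps above amount to the following per-pixel mapping; the coefficient maps a and b are taken as given inputs here:

```python
import numpy as np

def layer_decompose(img, a, b):
    """Base layer: each center pixel p is filtered to a * p + b.
    Detail layer: the pixel-wise difference of the image over its base layer."""
    img = np.asarray(img, dtype=np.float64)
    base = a * img + b         # target pixel value of every center pixel
    detail = img - base        # residual, per the definition above
    return base, detail
```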
When the filtering scale comprises a first filtering scale and a second filtering scale, the first filtering coefficient comprises a first target filtering coefficient determined according to the first filtering scale and a second target filtering coefficient determined according to the second filtering scale;
the base layer pictures comprise a first base layer picture and a second base layer picture, and the detail layer pictures comprise a first detail layer picture and a second detail layer picture; wherein the first base layer image and the first detail layer image are determined according to a first target filter coefficient, and the second base layer image and the second detail layer image are determined according to a second target filter coefficient;
the weighting module 604 performs weighted summation on the pixel values of the base layer image and the detail layer image to obtain the target image, and is configured to:
weighting the first detail layer image and the second detail layer image according to preset detail enhancement weights;
weighting the second base layer image according to the preset contrast weakening weight;
superposing the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image to obtain a target image; and the pixel value of the pixel point of the target image is the sum of the pixel values of the pixel point in the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image.
The apparatus of the embodiments of the present application may execute the method provided by the embodiments of the present application, and its implementation principle is similar: the actions executed by the modules of the apparatus correspond to the steps of the method. For a detailed functional description of the modules of the apparatus, reference may be made to the description of the corresponding method above, which is not repeated here.
In the embodiments of the present application, a plurality of sub-images in the image to be processed are determined; the image type of each sub-image is determined from the difference between the average pixel value of the sub-image and the pixel value of its central pixel point; the image to be processed is decomposed into layers according to the image type of each sub-image, obtaining the base layer image and the detail layer image of the image to be processed; and the pixel values of the base layer image and the detail layer image are weighted and summed to obtain the target image. Because the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images equals the number of pixel points, determining the image type of each sub-image from this difference effectively distinguishes the type of region in which each pixel point of the image to be processed is located. Compared with the prior art, in which the filter coefficient used for layer decomposition is determined from a preset regularization parameter, performing the layer decomposition according to the image type allows the base layer image and the detail layer image of the image to be processed to be identified more accurately.
The embodiment of the application provides an electronic device comprising a memory, a processor, and a computer program stored on the memory; the processor executes the computer program to implement the steps of the image processing method. Compared with the related art, the following can be achieved: a plurality of sub-images in the image to be processed are determined; the image type of each sub-image is determined from the difference between the average pixel value of the sub-image and the pixel value of its central pixel point; the image to be processed is decomposed into layers according to the image type of each sub-image, obtaining the base layer image and the detail layer image of the image to be processed; and the pixel values of the base layer image and the detail layer image are weighted and summed to obtain the target image. Because the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images equals the number of pixel points, determining the image type of each sub-image from this difference effectively distinguishes the type of region in which each pixel point of the image to be processed is located. Compared with the prior art, in which the filter coefficient used for layer decomposition is determined from a preset regularization parameter, performing the layer decomposition according to the image type allows the base layer image and the detail layer image of the image to be processed to be identified more accurately.
In an alternative embodiment, an electronic device is provided. As shown in fig. 7, the electronic device 700 includes a processor 701 and a memory 703, wherein the processor 701 is coupled to the memory 703, for example via a bus 702. Optionally, the electronic device 700 may further include a transceiver 704, which may be used for data interaction between this electronic device and other electronic devices, such as transmitting and/or receiving data. Note that in practical applications the number of transceivers 704 is not limited to one, and the structure of the electronic device 700 does not limit the embodiments of the present application.
The processor 701 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 701 may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 702 may include a path that transfers information between the above components. The bus 702 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 702 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
The memory 703 may be a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store a computer program and can be read by a computer, without limitation.
The memory 703 is used for storing computer programs for executing the embodiments of the present application, and is controlled by the processor 701 to execute. The processor 701 is adapted to execute a computer program stored in the memory 703 to implement the steps shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, and tablet computers, and fixed terminals such as digital TVs and desktop computers.
Embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and when being executed by a processor, the computer program may implement the steps and corresponding contents of the foregoing method embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements the following:
determining a plurality of sub-images in an image to be processed; wherein the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is equal to the number of pixel points;
for each sub-image, determining the average pixel value of the sub-image, and determining the image type of the sub-image according to the difference value between the pixel value of the central pixel point of the sub-image and the average pixel value;
performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed;
and carrying out weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image.
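Under simplifying assumptions — a single fixed filter scale eps rather than the deviation-weighted, two-scale scheme, and hypothetical weights — the four steps above can be sketched end to end:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window via cumulative sums (edge-padded)."""
    pad = np.pad(np.asarray(x, dtype=np.float64), r, mode="edge")
    c = np.vstack([np.zeros((1, pad.shape[1])), pad.cumsum(axis=0)])
    v = c[2 * r + 1:] - c[:-(2 * r + 1)]           # vertical window sums
    c2 = np.hstack([np.zeros((v.shape[0], 1)), v.cumsum(axis=1)])
    s = c2[:, 2 * r + 1:] - c2[:, :-(2 * r + 1)]   # full window sums
    return s / (2 * r + 1) ** 2

def enhance(img, r=2, eps=0.01, w_detail=1.5, w_base=0.9):
    """Sketch of the pipeline: window statistics, filter coefficients,
    layer decomposition, weighted recombination. eps and the weights are
    assumed example values."""
    img = np.asarray(img, dtype=np.float64)
    mean = box_mean(img, r)                   # average pixel value per sub-image
    var = box_mean(img * img, r) - mean ** 2  # pixel value variance
    a = var / (var + eps)                     # first filter coefficient
    b = mean - a * mean                       # second filter coefficient
    base = a * img + b                        # base layer image
    detail = img - base                       # detail layer image
    return w_base * base + w_detail * detail  # weighted target image
```

On flat regions var is small, so a is near 0 and the pixel is smoothed toward the window mean; on edges var dominates eps, so a is near 1 and the edge is preserved in the base layer.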
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than illustrated or otherwise described herein.
It should be understood that, although each operation step is indicated by an arrow in the flowchart of the embodiment of the present application, the implementation order of the steps is not limited to the order indicated by the arrow. In some implementation scenarios of the embodiments of the present application, the implementation steps in the flowcharts may be performed in other sequences as desired, unless explicitly stated otherwise herein. In addition, some or all of the steps in each flowchart may include multiple sub-steps or multiple stages based on an actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each of these sub-steps or stages may be performed at different times, respectively. Under the scenario that the execution time is different, the execution sequence of the sub-steps or phases may be flexibly configured according to the requirement, which is not limited in the embodiment of the present application.
The foregoing describes only optional implementations of some of the application scenarios of this application. It should be noted that, for those skilled in the art, other similar implementations based on the technical idea of this application also fall within the protection scope of the embodiments of this application, provided they do not depart from that technical idea.

Claims (13)

1. An image processing method, comprising:
determining a plurality of sub-images in an image to be processed; wherein the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is equal to the number of pixel points;
for each sub-image, determining an average pixel value of the sub-image, and determining the image type of the sub-image according to the difference value between the pixel value of the central pixel point of the sub-image and the average pixel value;
performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed;
and carrying out weighted summation on the pixel values of the base layer image and the detail layer image to obtain a target image.
2. The method of claim 1, wherein the determining the image type of the sub-image comprises:
for each sub-image, taking the difference value between the average pixel value and the pixel value of the central pixel point as the pixel change deviation value of the sub-image;
determining a first filter coefficient of the sub-image according to the pixel change deviation value;
and determining the image type of the sub-image according to the first filter coefficient.
3. The method of claim 2, wherein determining the first filter coefficient for the sub-image based on the pixel variation deviation value comprises:
acquiring a preset filtering scale;
weighting the filtering scale according to the pixel variation deviation value of the sub-image to obtain a weighted filtering scale;
and determining a first filter coefficient of the sub-image according to the weighted filter scale and the pixel value variance of the sub-image.
4. The method of claim 3, wherein the average pixel value and the variance of the pixel values of the sub-images are calculated based on:
determining the pixel points of each arrangement group in the sub-image, wherein an arrangement group is a row or a column;
for the pixel points of each arrangement group, if it is determined that a target parameter of the arrangement group has been stored in advance, the pre-stored target parameter of the arrangement group is retrieved; if it is determined that no target parameter of the arrangement group has been stored in advance, the target parameter of the arrangement group is calculated from the target parameters of the pixel points in the arrangement group and then stored;
obtaining the average pixel value or the pixel value variance of the sub-image according to the target parameters of all the arrangement groups; wherein the target parameters are the sum of the pixel values and the sum of the squares of the pixel values.
5. The method of claim 3, wherein determining the first filter coefficient for the sub-image based on the weighted filter scale and the variance of the pixel values of the sub-image comprises:
taking the sum of the weighted filtering scale and the variance of the pixel value as an adaptive variance value;
and taking the ratio of the pixel value variance to the adaptive variance value as a first filter coefficient of the sub-image.
6. The method of claim 5, wherein the first filter coefficient is calculated based on:
shifting the adaptive variance value according to a preset shift coefficient to obtain a first shift result, and looking up the reciprocal of the first shift result in a preset lookup table;
and determining the product of the reciprocal of the first shift result and the pixel value variance, shifting the product according to the preset shift coefficient to obtain a second shift result, and taking the second shift result as the first filter coefficient.
7. The method according to claim 3, wherein the performing layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed comprises:
for each sub-image, carrying out weighting processing on the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type, and determining a second filter coefficient of the sub-image;
and performing layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image to obtain a base layer image and a detail layer image of the image to be processed.
8. The method according to claim 7, wherein the weighting the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type to determine the second filter coefficient of the sub-image comprises:
weighting the average pixel value of the sub-image according to the first filter coefficient corresponding to the image type to obtain the weighted average pixel value;
and taking the difference value of the average pixel value of the sub-image and the weighted average pixel value as a second filter coefficient of the sub-image.
9. The method according to claim 7, wherein the performing layer decomposition on the image to be processed according to the first filter coefficient and the second filter coefficient of each sub-image to obtain a base layer image and a detail layer image of the image to be processed comprises:
filtering the pixel value of the central pixel point of the sub-image according to the first filter coefficient and the second filter coefficient to obtain a target pixel value of the central pixel point of the sub-image;
obtaining a base layer image of the image to be processed according to the target pixel values of the central pixel points of all the sub-images;
obtaining the detail layer image according to the base layer image and the image to be processed; wherein the pixel value of a pixel point in the detail layer image is the difference between the pixel values of the pixel point in the image to be processed and in the base layer image.
10. The method of claim 7, wherein when the filter scale comprises a first filter scale and a second filter scale, the first filter coefficient comprises a first target filter coefficient determined according to the first filter scale and a second target filter coefficient determined according to the second filter scale;
the base layer pictures comprise a first base layer picture and a second base layer picture, and the detail layer pictures comprise a first detail layer picture and a second detail layer picture; wherein the first base layer image and the first detail layer image are determined from the first target filter coefficients, and the second base layer image and the second detail layer image are determined from the second target filter coefficients;
the weighted summation of the pixel values of the base layer image and the detail layer image to obtain the target image comprises:
weighting the first detail layer image and the second detail layer image according to preset detail enhancement weights;
weighting the second base layer image according to a preset contrast weakening weight;
superposing the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image to obtain a target image; and the pixel value of the pixel point of the target image is the sum of the pixel values of the pixel point in the weighted second base layer image, the weighted first detail layer image and the weighted second detail layer image.
11. An image processing apparatus characterized by comprising:
the first determining module is used for determining a plurality of sub-images in the image to be processed; wherein the central pixel point of each sub-image uniquely corresponds to one pixel point in the image to be processed, and the number of sub-images is equal to the number of pixel points;
the second determining module is used for determining the average pixel value of each sub-image and determining the image type of the sub-image according to the difference value between the pixel value of the central pixel point of the sub-image and the average pixel value;
the decomposition module is used for carrying out layer decomposition on the image to be processed according to the image type of each sub-image to obtain a base layer image and a detail layer image of the image to be processed;
and the weighting module is used for weighting and summing the pixel values of the base layer image and the detail layer image to obtain a target image.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to implement the steps of the method of any of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN202210744777.3A 2022-06-27 2022-06-27 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN115170413A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210744777.3A CN115170413A (en) 2022-06-27 2022-06-27 Image processing method, image processing device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN115170413A true CN115170413A (en) 2022-10-11

Family

ID=83489382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210744777.3A Pending CN115170413A (en) 2022-06-27 2022-06-27 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115170413A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830181A (en) * 2023-01-04 2023-03-21 深圳市先地图像科技有限公司 Image processing method and device for laser imaging and related equipment


Similar Documents

Publication Publication Date Title
CN112530347B (en) Method, device and equipment for determining compensation gray scale
US11803947B2 (en) Brightness and contrast enhancement for video
CN112419151B (en) Image degradation processing method and device, storage medium and electronic equipment
US20110170801A1 (en) Resizing of digital images
US20190356895A1 (en) Method and apparatus for processing an image property map
JP6360965B2 (en) Image display method and display system
KR20110065997A (en) Image processing apparatus and method of processing image
TWI413101B (en) Control method for improving the luminous uniformity and related luminosity calibrating controller and display device
CN114203087B (en) Configuration of compensation lookup table, compensation method, device, equipment and storage medium
WO2019090580A1 (en) System and method for image dynamic range adjusting
US20150187051A1 (en) Method and apparatus for estimating image noise
CN115170413A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114723044B (en) Error compensation method, device, chip and equipment for in-memory computing chip
CN111738950B (en) Image processing method and device
CN111275615B (en) Video image scaling method based on bilinear interpolation improvement
CN112534466B (en) Directional scaling system and method
US8305500B2 (en) Method of block-based motion estimation
US20150117757A1 (en) Method for processing at least one disparity map, corresponding electronic device and computer program product
CN114266803A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114093293B (en) Luminance compensation parameter determination method, device and equipment
CN115760658A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114420066B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN112801997A (en) Image enhancement quality evaluation method and device, electronic equipment and storage medium
CN113194267B (en) Image processing method and device and photographing method and device
CN113034552B (en) Optical flow correction method and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination