CN111192214A - Image processing method and device, electronic equipment and storage medium


Info

Publication number
CN111192214A
Authority
CN
China
Prior art keywords
image
pixel point
structural
item
term
Prior art date
Legal status
Granted
Application number
CN201911379688.8A
Other languages
Chinese (zh)
Other versions
CN111192214B (en)
Inventor
吴佳飞
张帅
张广程
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN201911379688.8A
Publication of CN111192214A
Application granted
Publication of CN111192214B
Legal status: Active

Classifications

    • G06T5/70
    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an original image; and extracting a structural component from the original image based on a guide filter containing a structure-aware term to obtain a target image, wherein the guide filter containing the structure-aware term is generated based on the original image and a guide image, and the structure-aware term represents the importance degree of each pixel point in the structural component. Since the structure-aware term represents the importance degree of each pixel point in the structural component, image features of high importance in the structural component can be extracted with emphasis while image features of low importance are weakened. This alleviates the halo artifacts that appear in some edge regions of the target image when the pixel values of some pixel points in the target image are inconsistent with those of the corresponding pixel points in the guide image, and thereby optimizes the filtering effect.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In recent years, edge-preserving filters have seen increasing use in image processing and computer vision, and the guide filter is one of them. A guide filter obtains an edge-preserving filtered target image by applying a linear transformation to the pixel values of the pixel points of a guide image.
This processing mode can cause halo artifacts to appear in the edge regions corresponding to some pixel points in the target image, because the pixel values of those pixel points are inconsistent with the pixel values of the corresponding pixel points in the guide image.
Disclosure of Invention
In view of this, the present disclosure provides at least an image processing method, to alleviate the inconsistency between the pixel values of corresponding pixel points in the target image and the guide image and to optimize the filtering effect.
In a first aspect, the present disclosure provides a method of image processing, comprising:
acquiring an original image;
extracting a structural component from the original image based on a guide filter containing a structure-aware term to obtain a target image;
wherein the guide filter containing the structure-aware term is generated based on the original image and a guide image, and the structure-aware term represents the importance degree of each pixel point in the structural component.
By adopting the above method, the structural component in the original image is extracted through the guide filter containing the structure-aware term to obtain the target image. Since the structure-aware term represents the importance degree of each pixel point in the structural component, image features of high importance in the structural component can be extracted with emphasis while image features of low importance are weakened, which alleviates the structural inconsistency between the target image and the guide image and optimizes the filtering effect.
In one possible embodiment, generating the guide filter containing the structure-aware term based on the original image and the guide image includes:
acquiring the structure-aware term;
obtaining, based on the structure-aware term, an objective function of the guide filter containing the structure-aware term and parameters thereof;
and obtaining the guide filter containing the structure-aware term based on the objective function and the parameters thereof.
In one possible embodiment, the structure-aware term includes a structure confidence term, which is used for determining the probability that each pixel point in the guide image belongs to a pixel point in the structural component;
the step of determining the structure confidence term includes:
for each target pixel point in the guide image, determining, under each preset window size of at least one window size, the standard deviation of the pixel values corresponding to the pixel points in a window centered on the target pixel point;
and generating the structure confidence term based on the standard deviations of each target pixel point in the guide image under the window sizes.
In the above embodiment, the structure confidence term is used to calculate the probability that each pixel point in the guide image belongs to a pixel point in the structural component. When the structural component is extracted from the original image based on the guide image, if the probability value corresponding to a pixel point in the guide image is small, the feature of the corresponding pixel point in the original image is correspondingly weakened during extraction. Extracting the structural component through the guide filter containing the structure confidence term to obtain the target image therefore alleviates the inconsistency between the extracted structural component and the structure of the guide image, and optimizes the filtering effect.
In one possible embodiment, generating the structure confidence term based on the standard deviations of each target pixel point in the guide image under the window sizes includes:
generating the structure confidence term based on the product of the standard deviations of each target pixel point in the guide image under each of multiple preset window sizes.
In one possible implementation, the structure-aware term further includes a deviation elimination term, which is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the step of determining the deviation elimination term includes:
determining the deviation elimination term based on the difference between the pixel value of a pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
In the above embodiment, the deviation elimination term is used to calculate the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image. When this deviation is large, the value of the deviation elimination term is small, so the feature of that pixel point is correspondingly weakened when the structural component is extracted. When the structural component is extracted through the guide filter containing the deviation elimination term, the filter can thus extract the structural component from the original image according to the degree of pixel-value deviation between each pixel point in the original image and the corresponding pixel point in the guide image, obtaining the target image and optimizing the filtering effect.
In one possible embodiment, the structure-aware term is obtained from the product of the structure confidence term and the deviation elimination term.
In one possible embodiment, obtaining the objective function containing the structure-aware term includes:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure-aware term and the least-squares function.
In a possible implementation, the step of obtaining the parameters of the objective function includes:
acquiring a weight image of the structure-aware term;
and obtaining the parameters of the objective function based on the weight image, the guide image, and the original image.
For the description of the effects of the apparatus, the electronic device, and the like below, reference is made to the description of the above method; details are not repeated here.
In a second aspect, the present disclosure provides an apparatus for image processing, comprising:
the original image acquisition module is used for acquiring an original image;
the structural component extraction module is used for extracting a structural component from the original image based on a guide filter containing a structure-aware term to obtain a target image;
and the guide filter generation module is used for generating the guide filter containing the structure-aware term based on the original image and the guide image, where the structure-aware term represents the importance degree of each pixel point in the structural component.
In a possible implementation, the guide filter generation module includes:
a structure-aware term acquisition unit configured to acquire the structure-aware term;
an objective function determining unit configured to obtain, based on the structure-aware term, an objective function of the guide filter containing the structure-aware term and parameters thereof;
and a guide filter determining unit configured to obtain the guide filter containing the structure-aware term based on the objective function and the parameters thereof.
In one possible embodiment, the structure-aware term includes a structure confidence term, which is used for determining the probability that each pixel point in the guide image belongs to a pixel point in the structural component;
the device further comprises: a structure confidence term determination module for determining the structure confidence term;
the structure confidence term determination module determines the structure confidence term by using the following steps:
determining, for each target pixel point in the guide image, under each preset window size of at least one window size, the standard deviation of the pixel values corresponding to the pixel points in a window centered on the target pixel point;
and generating the structure confidence term based on the standard deviations of each target pixel point in the guide image under the window sizes.
In one possible embodiment, the structure confidence term determination module determines the structure confidence term by:
generating the structure confidence term based on the product of the standard deviations of each target pixel point in the guide image under each of multiple preset window sizes.
In one possible implementation, the structure-aware term further includes a deviation elimination term, which is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the device further comprises: a deviation elimination term determination module for determining the deviation elimination term;
the deviation elimination term determination module determines the deviation elimination term by using the following steps:
determining the deviation elimination term based on the difference between the pixel value of a pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
In one possible embodiment, the structure-aware term is obtained from the product of the structure confidence term and the deviation elimination term.
In one possible embodiment, the objective function determining unit obtains the objective function containing the structure-aware term by:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure-aware term and the least-squares function.
In one possible implementation, the objective function determining unit obtains the parameters of the objective function by:
acquiring a weight image of the structure-aware term;
and obtaining the parameters of the objective function based on the weight image, the guide image, and the original image.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of a method of image processing as set forth in the first aspect or any one of the embodiments above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of a method of image processing as described in the first aspect or any one of the embodiments above.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, since those skilled in the art may derive other related drawings from them without inventive effort.
Fig. 1 shows a schematic flow chart of a method of image processing provided by an embodiment of the present disclosure;
FIG. 2 is a flow diagram of a method for generating a guide filter containing a structure-aware term based on the original image and the guide image, according to an embodiment of the disclosure;
FIG. 3 illustrates a flow chart of a method for determining a structural confidence term provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an architecture of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In order to alleviate the halo artifacts appearing in some edge regions of the target image and improve the filtering effect, an embodiment of the present disclosure provides an image processing method: an original image is acquired, and a structural component is extracted from the original image based on a guide filter containing a structure-aware term to obtain a target image, where the guide filter containing the structure-aware term is generated based on the original image and a guide image, and the structure-aware term represents the importance degree of each pixel point in the structural component. Since the structure-aware term represents the importance degree of each pixel point in the structural component, image features of high importance in the structural component can be extracted with emphasis while image features of low importance are weakened. This alleviates the halo artifacts that appear in some edge regions of the target image when the pixel values of some pixel points in the target image are inconsistent with those of the corresponding pixel points in the guide image, and thereby optimizes the filtering effect.
For the purpose of understanding the embodiments of the present disclosure, a method of image processing disclosed in the embodiments of the present disclosure will be described in detail first.
The image processing method provided by the embodiments of the present disclosure can be applied to a local server or a cloud server, and can also be applied to an intelligent device with an image processing function. The intelligent device may or may not include an image acquisition apparatus: if it does not, the intelligent device processes a received original image; if it does, the intelligent device can process a received original image and can also process, in real time, images captured by its own image acquisition apparatus. Illustratively, intelligent devices include, but are not limited to, mobile phones, tablets, computers, video cameras, robots, and the like.
Specifically, the method is applicable to scenarios in which an image needs to be optimized, including but not limited to face recognition, gait recognition, vehicle recognition, license plate recognition, and the like. Taking face recognition as an example: after a face image is obtained, it can be processed with the image processing method provided by the embodiments of the present disclosure to extract the structural component and obtain the target image, so that the structure of the processed image corresponding to the face image is clearer; recognizing the processed image can then improve the accuracy of face recognition.
Referring to fig. 1, a schematic flowchart of an image processing method according to an embodiment of the present disclosure is shown; the method is described taking its application to a server as an example.
A method of image processing as shown in fig. 1 comprises the following steps:
s101, acquiring an original image.
In the embodiment of the present disclosure, the original image may be acquired from an image acquisition apparatus, or selected from at least one locally stored image. The original image may be any image to be processed.
In the embodiment of the present disclosure, the guide image is a reference image used for extracting the structural component from the original image. The preset guide image may be an image whose structure is similar to that of the original image, or the original image itself. In a specific implementation, after one image acquisition apparatus captures the original image of a target object, another image acquisition apparatus may be selected to capture the guide image of the same target object; generally, the edge structure of the guide image is clearer than that of the original image. For example, a depth map of the target object may be obtained by a depth camera and used as the original image, while a Red-Green-Blue (RGB) image of the target object obtained by an ordinary camera serves as the guide image. There are various ways to acquire the guide image corresponding to the original image; the above is only an example, and the embodiment of the disclosure is not limited thereto.
S102, extracting a structural component from the original image based on a guide filter containing a structure-aware term to obtain a target image, where the guide filter containing the structure-aware term is generated based on the original image and a guide image, and the structure-aware term represents the importance degree of each pixel point in the structural component.
In the embodiment of the present disclosure, the structure-aware term represents the importance degree of each pixel point in the structural component; that is, it determines the weight applied to the pixel value of each pixel point. The guide filter is a filter that filters the original image with reference to the guide image. The guide filter containing the structure-aware term is generated based on the structure-aware term and the guide filter, and the structural component is extracted from the original image through this filter to obtain the target image.
Based on the above steps, an original image is acquired, and a structural component is extracted from it based on a guide filter containing a structure-aware term to obtain a target image, where the filter is generated based on the original image and a guide image, and the structure-aware term represents the importance degree of each pixel point in the structural component. Since the structure-aware term represents the importance degree of each pixel point in the structural component, image features of high importance in the structural component can be extracted with emphasis while image features of low importance are weakened. This alleviates the halo artifacts that appear in some edge regions of the target image when the pixel values of some pixel points in the target image are inconsistent with those of the corresponding pixel points in the guide image, and thereby optimizes the filtering effect.
In one possible embodiment, referring to fig. 2, generating the guide filter containing the structure-aware term based on the original image and the guide image includes:
s201, obtaining a structural perception item.
In embodiments of the present disclosure, the structural awareness term may include a structural confidence term, and/or a bias elimination term. And obtaining a structural sensing item by obtaining a structural confidence item and/or a deviation elimination item.
S202, based on the structural perception items, obtaining an objective function and parameters of the guide filter containing the structural perception items.
S203, based on the objective function and the parameters thereof, a guiding filter containing the structure perception item is obtained.
In the embodiment of the present disclosure, based on the structural perception item, an objective function of the guiding filter including the structural perception item may be determined, based on the objective function of the guiding filter including the structural perception item, a parameter corresponding to the objective function may be obtained, and further, based on the objective function and the parameter thereof, the guiding filter including the structural perception item may be obtained.
In one possible embodiment, the structure-aware term includes a structure confidence term, which is used for determining the probability that each pixel point in the guide image belongs to a pixel point in the structural component.
Referring to fig. 3, which is a flow chart of the method for determining the structure confidence term in the image processing method, the step of determining the structure confidence term includes:
S301, for each target pixel point in the guide image, determining, under each preset window size of at least one window size, the standard deviation of the pixel values corresponding to the pixel points in a window centered on the target pixel point;
S302, generating the structure confidence term based on the standard deviations of each target pixel point in the guide image under the window sizes.
In the embodiment of the present disclosure, the structure confidence term is a function for calculating the probability that each pixel point in the guide image belongs to a pixel point in the structural component.
In a specific implementation, the target pixel point is any one of the pixel points in the guide image. Specifically, the pixel points contained in the guide image may be selected in turn as the target pixel point, and for each of the at least one preset window size, the standard deviation of the pixel values of the pixel points in the window of that size centered on the target pixel point is determined, until the standard deviations centered on every pixel point in the guide image under every window size have been obtained.
For example, if the guide image contains n pixel points (the 1st pixel point, the 2nd pixel point, ..., the nth pixel point, where n is a positive integer), the 1st pixel point is selected from the guide image as the target pixel point, and the standard deviation of the pixel values of the pixel points in the window centered on the 1st pixel point is determined under each of the at least one preset window size; then the 2nd pixel point is selected as the target pixel point and the corresponding standard deviations are determined in the same way; and so on, until the standard deviations for the window centered on the nth pixel point are obtained. The order in which target pixel points are selected from the guide image may be determined according to the actual situation, and the embodiment of the present disclosure does not specifically limit this.
In the embodiment of the present disclosure, there may be one or more preset window sizes. For example, the preset windows may include a 3 × 3 window; or a 5 × 5 window; or both a 3 × 3 window and a 5 × 5 window. The number and the values of the window sizes may be set according to actual needs, and the embodiment of the present disclosure does not specifically limit this.
In the embodiment of the present disclosure, after the standard deviation corresponding to each target pixel point in the guide image is obtained, the structure confidence term is generated based on the product of the standard deviations of each target pixel point under each of the preset window sizes. Preferably, the structure confidence term may be given by formula (1):

ν(p) = f_γ(μ(p))    (1)

[the published image of formula (1) is not reproduced here; the surrounding description specifies only that it is a monotonically increasing mapping, parameterized by the preset constant γ, from μ(p) to a probability value]

wherein p is the coordinate of a pixel point in a preset coordinate system; ν(p) is the structure confidence term, representing the probability value of the pixel point with coordinate p; γ is a preset constant whose value may be determined according to actual needs; and μ(p) is obtained from the standard deviations of the pixel values of the pixel points in the windows centered on the pixel point with coordinate p under the preset window sizes — if there are multiple preset window sizes, the standard deviation corresponding to the pixel point with coordinate p is calculated separately under each window size. μ(p) is calculated by formula (2):

μ(p) = σ_ζ1(p) · σ_ζ2(p)    (2)

wherein ζ1 and ζ2 are two window size values, which may be the same or different: for example, a 3 × 3 and a 5 × 5 window, a 3 × 3 and a 7 × 7 window, or two 3 × 3 windows. In a specific implementation, the size values of ζ1 and ζ2 may be selected according to the actual situation. σ_ζ1(p) is the standard deviation of the pixel values of the pixel points contained in the window of size ζ1 centered on the pixel point with coordinate p in the guide image; for example, with p as the center, a 3 × 3 window contains 9 pixel points, and the standard deviation of those 9 pixel values is calculated. Similarly, σ_ζ2(p) is the standard deviation of the pixel values within the window of size ζ2 centered on the pixel point with coordinate p; for example, a 5 × 5 window contains 25 pixel points, and the standard deviation of those 25 pixel values is calculated.
In the embodiment of the present disclosure, as can be seen from formula (2), the larger the value of μ(p) for any target pixel point in the guide image, the larger the deviation of the pixel values among the pixel points in the two windows (of the same or different sizes) centered on that target pixel point; and the larger the value of μ(p), the larger the structure confidence term ν(p), that is, the larger the probability that the target pixel point is a pixel point in the structural component.
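To make this computation concrete, the following is a minimal NumPy sketch. The window sizes (3 × 3 and 5 × 5), the constant γ, and the tanh mapping standing in for formula (1) are illustrative assumptions, not the patent's exact definition:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size):
    """Standard deviation of the pixel values in a size x size window around each pixel."""
    m = uniform_filter(img, size)
    return np.sqrt(np.maximum(uniform_filter(img * img, size) - m * m, 0.0))

def structure_confidence(guide, sizes=(3, 5), gamma=1.0):
    """Formula (2): mu(p) is the product of the local standard deviations over the
    preset window sizes; tanh is an assumed monotone mapping of mu(p) to a
    probability-like value in [0, 1), playing the role of formula (1)."""
    g = guide.astype(np.float64)
    mu = np.ones_like(g)
    for s in sizes:
        mu *= local_std(g, s)
    return np.tanh(gamma * mu)
```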
In the embodiment of the disclosure, the structure confidence term is set to calculate the probability that each pixel point in the guide image is a pixel point in the structural component. When the structural component is extracted from the original image based on the guide image, if the probability value corresponding to a pixel point in the guide image is small, the feature of the corresponding pixel point in the original image is correspondingly weakened during extraction. Extracting the structural component from the original image with a guide filter provided with the structure confidence term therefore alleviates the halo artifacts appearing in some edge regions of the target image where the pixel values of some pixel points are inconsistent with those of the corresponding pixel points in the guide image, and optimizes the filtering effect.
In one possible embodiment, the structure-aware term includes a deviation elimination term, which is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the step of determining the deviation elimination term includes:
determining the deviation elimination term based on the difference between the pixel value of a pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
In the embodiment of the present disclosure, the deviation elimination term is a function for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image, where the two pixel points have the same coordinate in the same coordinate system. Preferably, the deviation elimination term may be given by formula (3):

τ_r(p) = g_r(|G(p) − X(p)|)    (3)

[the published image of formula (3) is not reproduced here; the surrounding description specifies only that it equals 1 when G(p) = X(p) and decreases as the pixel-value deviation grows, with r a preset parameter]

wherein τ_r(p) is the deviation elimination term, representing the value of the degree of deviation; G(p) is the pixel value of the pixel point with coordinate p in the guide image; and X(p) is the pixel value of the pixel point with coordinate p in the original image.
In the embodiment of the present disclosure, when the original image and the guide image are the same image, the value of the deviation elimination term is 1 by calculation. Therefore, when the original image and the guide image are the same image, the deviation elimination term has no influence in the image processing process.
In the image processing process, there are cases where the pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image is large; in such cases, directly extracting the structural component at that pixel point of the original image from the corresponding pixel point of the guide image may cause a large error. Therefore, in the embodiment of the present disclosure, a deviation elimination term is set to calculate the degree of deviation between the pixel value of each pixel point in the original image and that of the corresponding pixel point in the guide image. When this deviation is large, the value of the deviation elimination term is small, so the feature of that pixel point is correspondingly weakened when the structural component is extracted; a guide filter provided with the deviation elimination term can thus extract the structural component from the original image accurately and optimize the filtering effect.
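As a sketch of such a term, a Gaussian kernel of the pixel-value difference has exactly the described properties: it equals 1 where the two images coincide and shrinks as they deviate. This form and the range parameter r are assumptions for illustration; the patent's formula (3) image is not reproduced above:

```python
import numpy as np

def deviation_elimination(original, guide, r=0.1):
    """Assumed form of formula (3): tau_r(p) = exp(-(G(p) - X(p))**2 / (2 r^2)).
    Equals 1 where guide == original and decreases as the deviation grows;
    r = 0.1 assumes pixel values normalized to [0, 1]."""
    diff = guide.astype(np.float64) - original.astype(np.float64)
    return np.exp(-(diff * diff) / (2.0 * r * r))
```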
In one possible embodiment, the structure-aware term is derived from the product of the structure confidence term and the deviation elimination term.
In the embodiment of the present disclosure, when the structure-aware term includes both the structure confidence term and the deviation elimination term, the structure-aware term may be obtained based on the product of the two.
In the embodiment of the present disclosure, preferably, as can be seen from the above description, the structure-aware term may be given by formula (4):

ω(p) = ν(p) · τ_r(p)    (4)

wherein ω(p) is the structure-aware term, ν(p) is the structure confidence term, and τ_r(p) is the deviation elimination term. The structure-aware term equals the product of the structure confidence term and the deviation elimination term; the order of the two factors is immaterial.
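In code, formula (4) is a single elementwise product; the sketch below assumes the two hypothetical helpers defined above produce arrays of the image's shape:

```python
import numpy as np

def structure_aware_term(nu, tau):
    """Formula (4): omega(p) = nu(p) * tau_r(p), the elementwise product of the
    structure confidence term nu and the deviation elimination term tau."""
    return np.asarray(nu) * np.asarray(tau)
```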
In an alternative embodiment, obtaining the objective function containing the structure-aware term comprises:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure-aware term and the least-squares function.
In the embodiment of the present disclosure, the objective function is a function representing the minimum Euclidean distance between the structural components of the original image and the target image. Specifically, the square of the difference between the pixel value of a pixel point in the original image and the pixel value of the corresponding pixel point in the target image is multiplied by the structure-aware term to obtain a product expression, and the function representing the minimum of the sum of the product expression and a regularization term is taken as the objective function.
For example, the objective function containing the structure-aware term may be given by formula (5):

E(a_p, b_p) = Σ_{p′ ∈ N_ζ1(p)} ω(p′) · (a_p · G(p′) + b_p − X(p′))² + ε · a_p²    (5)

wherein N_ζ1(p) is the set of pixel points contained within the window of size ζ1 centered on the pixel point with coordinate p — for example, if ζ1 is 3 × 3, N_ζ1(p) contains 9 pixel points, including the pixel point with coordinate p; p′ is the coordinate of any pixel point in N_ζ1(p); a_p and b_p are the parameters of the objective function; X(p′) is the pixel value of the pixel point with coordinate p′ in the original image; and ε is a preset coefficient used to regularize a_p, whose value may be set according to the actual situation. In practice, the size of ζ1 may be selected according to the actual situation, and the embodiment of the present disclosure is not particularly limited thereto.
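The following sketch simply evaluates formula (5) for one window to make the notation concrete; G_win, X_win, and w_win are hypothetical arrays holding G, X, and ω over N_ζ1(p):

```python
import numpy as np

def objective_value(a, b, G_win, X_win, w_win, eps=1e-2):
    """Formula (5) for a single zeta_1-sized window: the structure-aware
    weighted squared residual of the linear model a*G + b against X,
    plus the eps * a**2 regularizer that bounds the slope a."""
    residual = a * G_win + b - X_win
    return float(np.sum(w_win * residual ** 2) + eps * a * a)
```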
In an alternative embodiment, the step of obtaining the parameters of the objective function comprises:
acquiring a weight image of the structure-aware term;
and obtaining the parameters of the objective function based on the weight image, the guide image, and the original image.
In the embodiment of the present disclosure, the weight value corresponding to each pixel point in the weight image is obtained based on the pixel value of the corresponding pixel point in the original image, the pixel value of the corresponding pixel point in the guide image, and the structure-aware term. For example, an unselected pixel point — say, the pixel point with coordinate A — is selected from the original image as the target pixel point; the pixel value of this target pixel point in the original image and the pixel value of the corresponding pixel point in the guide image are substituted into the calculation formula of the structure-aware term to obtain the weight value corresponding to the pixel point with coordinate A, i.e., the pixel value of the pixel point with coordinate A in the weight image. The step of selecting an unselected pixel point from the original image as the target pixel point is repeated until no unselected pixel point remains in the original image, thereby obtaining the weight image corresponding to the original image (or to the guide image).
In the embodiment of the present disclosure, after the weight image is obtained, the parameters of the objective function are obtained based on the weight image, the guide image, and the original image.
In the embodiment of the present disclosure, the expressions for the parameters of the objective function represent the relationship between the parameter values, the pixel values of the pixel points in the original image and the guide image (when the guide image is not the original image), and the pixel values of the pixel points in the weight image; or, when the guide image is the original image itself, the relationship between the parameter values, the pixel values of the pixel points in the guide image, and the pixel values of the pixel points in the weight image.
Specifically, the parameters of the objective function include a slope parameter and an intercept parameter for performing a linear transformation on the guide image.
In the embodiment of the present disclosure, the specific process of obtaining the slope parameter for performing a linear transformation on the guide image includes:
firstly, determining, in the window of the preset window size centered on the coordinate of the target pixel point, the original pixel value matrix corresponding to the pixel points in the original image, the guide pixel value matrix corresponding to the pixel points in the guide image, and the importance degree matrix corresponding to the original pixel value matrix, obtained based on the structure-aware term;
secondly, multiplying the mean expression for the average of the first intermediate matrix, obtained by the dot (elementwise) multiplication of the original pixel value matrix and the importance degree matrix, by the mean expression for the average of the second intermediate matrix, obtained by the dot multiplication of the guide pixel value matrix and the importance degree matrix, to obtain a mean product expression;
thirdly, subtracting the mean product expression from the mean expression for the average of the third intermediate matrix, obtained by the dot multiplication of the first intermediate matrix and the second intermediate matrix, to obtain a difference expression;
fourthly, adding the variance expression for the variance of the second intermediate matrix and the regularization coefficient to obtain a summation expression;
and fifthly, dividing the difference expression by the summation expression to obtain the expression of the slope parameter.
In the embodiment of the present disclosure, the specific process of obtaining the intercept parameter for performing a linear transformation on the guide image includes:
firstly, multiplying the mean expression for the average of the second intermediate matrix by the expression of the slope parameter to obtain a target expression;
and secondly, subtracting the target expression from the mean expression for the average of the first intermediate matrix to obtain the expression of the intercept parameter.
The order of the steps in the specific processes for obtaining the slope parameter and the intercept parameter of the objective function may be adjusted accordingly; the above is only an exemplary description.
In the embodiment of the present disclosure, the calculation formulas of the parameters of the objective function may be obtained by taking the derivatives of formula (5). The calculation formula of the slope parameter a_p of the objective function is formula (6), and the calculation formula of the intercept parameter b_p is formula (7):

a_p = ( mean((H ⊙ X) ⊙ (H ⊙ G)) − mean(H ⊙ X) · mean(H ⊙ G) ) / ( var(H ⊙ G) + ε )    (6)

b_p = mean(H ⊙ X) − a_p · mean(H ⊙ G)    (7)

wherein all the means and the variance are taken over the window of size ζ1 centered on the pixel point with coordinate p. The value of the importance degree corresponding to each pixel point in the original image can be obtained through the structure-aware term ω(p), giving the weight value corresponding to each pixel point in the original image; the importance degree values of all pixel points form the weight image H, that is, the pixel value of each pixel point in H is the weight value of the corresponding pixel point in the original image, where a pixel point in H and the corresponding pixel point in the original image have the same coordinate in the same coordinate system. In formulas (6) and (7), ⊙ denotes the dot (elementwise) multiplication operation. The calculation process of mean(H ⊙ G), the average of the second intermediate matrix, is as follows: in the window of size ζ1 centered on the pixel point with coordinate p, determine the importance degree matrix formed by the corresponding pixel points in the weight image (i.e., the importance degree matrix corresponding to the original pixel value matrix, obtained based on the structure-aware term) and the guide pixel value matrix formed by the corresponding pixel points in the guide image; dot-multiply the guide pixel value matrix and the importance degree matrix to obtain the second intermediate matrix; and take the average of the second intermediate matrix to obtain the value of mean(H ⊙ G).
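A vectorized sketch of formulas (6) and (7), as reconstructed above, over the whole image; the box-filter window means, the reflect-padding border handling, and the ε value are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter_params(original, guide, weight, size=3, eps=1e-2):
    """Formulas (6) and (7): per-pixel slope a and intercept b computed from
    window means of the dot-multiplied images H*X and H*G; uniform_filter
    takes the mean over the zeta_1-sized window centered at each pixel."""
    X = original.astype(np.float64)
    G = guide.astype(np.float64)
    H = weight.astype(np.float64)
    HX, HG = H * X, H * G                    # first and second intermediate matrices
    m_hx = uniform_filter(HX, size)          # mean of H . X over each window
    m_hg = uniform_filter(HG, size)          # mean of H . G over each window
    m_hxhg = uniform_filter(HX * HG, size)   # mean of the third intermediate matrix
    var_hg = uniform_filter(HG * HG, size) - m_hg ** 2
    a = (m_hxhg - m_hx * m_hg) / (var_hg + eps)   # formula (6)
    b = m_hx - a * m_hg                           # formula (7)
    return a, b
```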
Illustratively, the solving process of mean(H ⊙ G) is shown in the following tables. Table 1 shows, in a window of size ζ1 (taking ζ1 = 3 × 3 as an example) centered on the pixel point with coordinate p, the pixel value of each corresponding pixel point in the guide image; that is, Table 1 is the corresponding guide pixel value matrix, in which the pixel value corresponding to the pixel point p is 4. Table 2 shows, in the same window, the weight value of each corresponding pixel point in the weight image; that is, Table 2 is the corresponding importance degree matrix, in which the weight value corresponding to the pixel point p is 0.4. Table 3 shows the second intermediate matrix obtained by the dot multiplication of the importance degree matrix from the weight image and the guide pixel value matrix from the guide image; the value of mean(H ⊙ G) is then the average of the entries of Table 3, i.e., (3.5 + 0.5 − 0.3 + 0.4 + 1.6 − 3.6 + 0.9 + 2.5 + 4.9) / 9 ≈ 1.16.
Further, the determination process of mean(H ⊙ X), the average of the first intermediate matrix, is the same as that of mean(H ⊙ G) and can be carried out with reference to it; it is not described again in the embodiments of the present application.
TABLE 1 schematic representation of windows in guide images
5 1 -1
2 4 -6
3 5 7
TABLE 2 schematic of windows in weighted images
0.7 0.5 0.3
0.2 0.4 0.6
0.3 0.5 0.7
TABLE 3 schematic of windows obtained after dot product operation
3.5 0.5 -0.3
0.4 1.6 -3.6
0.9 2.5 4.9
Illustratively, the determination process of mean((H ⊙ X) ⊙ (H ⊙ G)), the average of the third intermediate matrix, is as follows: in the window of size ζ1 centered on the pixel point with coordinate p, determine the importance degree matrix corresponding to the weight image, the original pixel value matrix corresponding to the original image, and the guide pixel value matrix corresponding to the guide image; dot-multiply the importance degree matrix and the guide pixel value matrix to obtain the second intermediate matrix; dot-multiply the importance degree matrix and the original pixel value matrix to obtain the first intermediate matrix; dot-multiply the first intermediate matrix and the second intermediate matrix to obtain the third intermediate matrix; and take the average of the third intermediate matrix to obtain the value of mean((H ⊙ X) ⊙ (H ⊙ G)).
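The numeric example of Tables 1-3 can be checked directly; the snippet below reproduces the dot multiplication and the window mean:

```python
import numpy as np

# Table 1: guide pixel value matrix; Table 2: importance degree matrix.
guide_win  = np.array([[5, 1, -1], [2, 4, -6], [3, 5, 7]], dtype=float)
weight_win = np.array([[0.7, 0.5, 0.3], [0.2, 0.4, 0.6], [0.3, 0.5, 0.7]])

second_intermediate = guide_win * weight_win   # Table 3: the dot (elementwise) product
print(second_intermediate)          # [[ 3.5  0.5 -0.3] [ 0.4  1.6 -3.6] [ 0.9  2.5  4.9]]
print(second_intermediate.mean())   # mean(H . G) over the window, ~1.156
```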
In the embodiment of the disclosure, after the slope parameter value a_p is obtained, the value of a_p is substituted into formula (7), and the intercept parameter value of the objective function can be obtained.
In the embodiment of the disclosure, the guide filter based on the structure-aware term further includes a linear model formula. For each pixel point in the original image, the slope parameter value and the intercept parameter value corresponding to that pixel point are solved through formulas (6) and (7); the adjusted pixel value corresponding to the pixel point is obtained based on the solved slope and intercept parameter values and the pixel value of the corresponding pixel point in the guide image; and the structural component of the original image is obtained based on the adjusted pixel values of all its pixel points.
Illustratively, the formula of the linear model is formula (8):

X̂(p) = a_p · G(p) + b_p    (8)

wherein X̂(p) is the pixel value after the linear transformation of the pixel point with coordinate p, i.e., the adjusted pixel value corresponding to the pixel point with coordinate p, and G(p) is the pixel value of the pixel point with coordinate p in the guide image.
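Formula (8) is a per-pixel linear transform of the guide image; given the hypothetical parameter maps from the sketch above, it is a single line:

```python
import numpy as np

def apply_linear_model(guide, a, b):
    """Formula (8): the adjusted pixel value at p is a_p * G(p) + b_p, where a
    and b are the per-pixel parameter maps from formulas (6) and (7)."""
    return a * guide.astype(np.float64) + b
```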
In the embodiment of the disclosure, the guide filter containing the structure-aware term can be obtained based on the objective function, the slope parameter, and the intercept parameter.
In the embodiment of the present disclosure, extracting the structural component from the original image based on the guide filter containing the structure-aware term to obtain the target image includes:
firstly, obtaining the weight image corresponding to the original image based on the pixel value of each pixel point in the original image, the pixel value of the corresponding pixel point in the guide image, and the structure-aware term (the pixel value of each pixel point in the weight image is the weight value corresponding to that pixel point);
secondly, obtaining the slope parameter value and the intercept parameter value corresponding to each pixel point based on the pixel value of each pixel point in the original image, the pixel value of the corresponding pixel point in the guide image, and the pixel value of the corresponding pixel point in the weight image;
thirdly, obtaining the adjusted pixel value corresponding to each pixel point in the original image based on the pixel value of each pixel point in the guide image and the slope and intercept parameter values corresponding to each pixel point;
and fourthly, determining the structural component of the original image based on the adjusted pixel value corresponding to each pixel point in the original image, thereby obtaining the target image corresponding to the original image.
In a specific implementation, the above process may be as follows. The probability value corresponding to each pixel point in the guide image is determined based on formulas (1) and (2); the value of the deviation elimination term corresponding to each pixel point in the original image — i.e., the value characterizing the degree of deviation for each pixel point — is determined based on formula (3); the weight value corresponding to each pixel point is then determined through formula (4) from the probability value and the value characterizing the degree of deviation, and the weight values of all pixel points form the weight image corresponding to the original image. Further, based on the pixel value of each pixel point in the original image, the pixel value of the corresponding pixel point in the guide image, and the pixel value of the corresponding pixel point in the weight image, the slope and intercept parameter values corresponding to each pixel point in the original image are obtained according to formulas (6) and (7), i.e., the values of a_p and b_p corresponding to each pixel point. For each pixel point in the original image, the corresponding values of a_p and b_p and the pixel value of the corresponding pixel point in the guide image are substituted into formula (8) to obtain the adjusted pixel value corresponding to that pixel point, and thus the adjusted pixel value corresponding to every pixel point in the original image is obtained. The structural component of the original image is determined based on these adjusted pixel values, thereby obtaining the target image corresponding to the original image.
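Putting the above steps together, a self-contained sketch of one full filtering pass follows. The window sizes, γ, r, the tanh mapping standing in for formula (1), and the Gaussian form standing in for formula (3) are all illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _local_std(img, size):
    m = uniform_filter(img, size)
    return np.sqrt(np.maximum(uniform_filter(img * img, size) - m * m, 0.0))

def structure_aware_guided_filter(X, G, size=3, eps=1e-2, gamma=1.0, r=0.1):
    """Weight image via formulas (1)-(4) (assumed forms for (1) and (3)),
    parameters via formulas (6)-(7), output via formula (8)."""
    X = X.astype(np.float64)
    G = G.astype(np.float64)
    nu = np.tanh(gamma * _local_std(G, 3) * _local_std(G, 5))  # formulas (1)-(2)
    tau = np.exp(-(G - X) ** 2 / (2.0 * r * r))                # formula (3), assumed form
    H = nu * tau                                               # formula (4): weight image
    HX, HG = H * X, H * G
    m_hx, m_hg = uniform_filter(HX, size), uniform_filter(HG, size)
    var_hg = uniform_filter(HG * HG, size) - m_hg ** 2
    a = (uniform_filter(HX * HG, size) - m_hx * m_hg) / (var_hg + eps)  # formula (6)
    b = m_hx - a * m_hg                                                 # formula (7)
    return a * G + b                                                    # formula (8)

# Usage: when the guide image is the original image itself, pass it twice,
# e.g. target = structure_aware_guided_filter(img, img).
```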
In the embodiment of the present disclosure, if a pixel point in the original image is a pixel point in the structural component, the feature corresponding to its adjusted pixel value is retained; if a pixel point is not in the structural component, the feature corresponding to its adjusted pixel value is smoothed. The pixel points whose pixel-value features are retained then constitute the structural component corresponding to the original image, thereby yielding the target image corresponding to the original image.
In the embodiment of the disclosure, after the structural component corresponding to the original image is determined, the corresponding detail component may be extracted from the original image based on the structural component (the part of the original image other than the structural component is the detail component). After extraction, the structural component may be enhanced, and/or the detail component may be enhanced, so that the image after structural-component enhancement and/or detail-component enhancement is clearer and its display effect is better.
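For instance, with the structural component in hand, the detail component and a simple enhancement can be sketched as follows (the gain k_detail is an illustrative parameter; the patent does not fix an enhancement formula):

```python
import numpy as np

def enhance(original, structure, k_detail=1.5):
    """The detail component is the original image minus the structural
    component; boosting it and recombining sharpens the result."""
    detail = original.astype(np.float64) - structure
    return structure + k_detail * detail
```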
The image processing method provided by the present disclosure acquires an original image; extracts the structural component in the original image based on a guiding filter containing a structure perception item, so as to obtain a target image; and generates the guiding filter containing the structure perception item based on the original image and the guide image, the structure perception item representing the importance degree of each pixel point in the structural component. Because the structure perception item represents the importance degree of each pixel point in the structural component, image features of high importance in the structural component can be extracted with emphasis while image features of low importance are weakened. This reduces the problem that halo artifacts appear in some edge areas of the target image because the pixel values of some pixel points in the target image are inconsistent with those of the corresponding pixel points in the guide image, thereby optimizing the filtering effect.
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution should be determined by the function of each step and its possible inherent logic.
Based on the same concept, an embodiment of the present disclosure further provides an image processing apparatus. As shown in the schematic architecture diagram of fig. 4, the apparatus includes an original image acquisition module 401, a structural component extraction module 402, and a guided filter generation module 403. Specifically:
an original image acquisition module 401, configured to acquire an original image;
a structural component extraction module 402, configured to extract the structural component in the original image based on a guiding filter containing a structure perception item, so as to obtain a target image;
a guided filter generation module 403, configured to generate the guiding filter containing the structure perception item based on the original image and the guide image, where the structure perception item represents the importance degree of each pixel point in the structural component.
In a possible implementation, the guided filter generation module includes:
a structure perception item acquisition unit, configured to acquire the structure perception item;
an objective function determining unit, configured to obtain, based on the structure perception item, an objective function of the guiding filter containing the structure perception item and its parameters;
and a guiding filter determining unit, configured to obtain the guiding filter containing the structure perception item based on the objective function and its parameters.
In one possible embodiment, the structure perception item includes: a structure confidence term; the structure confidence term is used for determining the probability that each pixel point in the guide image is a pixel point of the structural component;
the device further comprises: a structure confidence term determination module for determining the structure confidence term;
the structure confidence term determination module determines the structure confidence term by using the following steps:
determining, for each target pixel point in the guide image and at each preset window size of at least one window size, the standard deviation of the pixel values within the window centered on that target pixel point;
generating the structural confidence term based on the standard deviation of each target pixel point in the guide image under each window size.
In one possible embodiment, the structure confidence term determination module determines the structure confidence term by:
and generating the structural confidence term based on the product of the standard deviations of each target pixel point in the guide image at each of multiple preset window sizes.
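A minimal sketch of this multi-scale product; the window sizes and the final normalisation are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_confidence(I, window_sizes=(3, 7, 15)):
    """Per pixel of the guide image I: local standard deviation of pixel
    values in a window of each preset size, multiplied across sizes."""
    conf = np.ones_like(I, dtype=float)
    for s in window_sizes:
        mu = uniform_filter(I, s)
        var = uniform_filter(I * I, s) - mu ** 2
        conf *= np.sqrt(np.maximum(var, 0.0))  # local std at this window size
    return conf / (conf.max() + 1e-8)          # scale to a probability-like value
```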
In one possible implementation, the structure perception item further includes: a deviation elimination term; the deviation elimination term is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the device further comprises: a deviation elimination term determination module, configured to determine the deviation elimination term;
the deviation elimination term determination module determines the deviation elimination term by using the following step:
determining the deviation elimination term based on the difference between the pixel value of the pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
In one possible embodiment, the structure perception item is obtained as the product of the structure confidence term and the deviation elimination term, as in the sketch below.
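A minimal sketch of this product, continuing the code above; the Gaussian fall-off on the pixel-value difference and the constant sigma are assumptions, since the patent only states that the deviation elimination term is built from that difference:

```python
import numpy as np

def structure_perception_item(X, I, conf, sigma=0.1):
    """conf: structure confidence term from structure_confidence(I)."""
    deviation = np.exp(-((X - I) ** 2) / (2.0 * sigma ** 2))  # ~1 where X and I agree
    return conf * deviation  # weight image W for the guided-filter solve above
```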
In one possible embodiment, the objective function determining unit obtains the objective function containing the structure perception item by:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure perception item and the least-squares function.
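The objective function itself is not reproduced on this page; a plausible form, consistent with the least-squares guided-filter formulation and with the weighted guided image filtering paper cited among the non-patent citations below, scales the regularizer by the structure perception value Γ_p. The following is a sketch under that assumption, not the patent's formula (λ is a regularization constant, ω_p the window centered on pixel p):

```latex
E(a_p, b_p) = \sum_{q \in \omega_p} \left[ \left( a_p I_q + b_p - X_q \right)^2
            + \frac{\lambda}{\Gamma_p}\, a_p^2 \right]
```

Minimizing E over (a_p, b_p) yields the slope and intercept parameter values; a larger Γ_p permits a larger slope, so structurally important pixels are smoothed less.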
In one possible implementation, the objective function determining unit obtains the parameters of the objective function by:
acquiring a weight image of the structural perception item;
and obtaining the parameters of the objective function based on the weight image, the guide image and the original image.
In some embodiments, the functions of the apparatus provided in the embodiments of the present disclosure, or of the modules it includes, may be used to execute the method described in the above method embodiments. For their specific implementation, refer to the description of those embodiments; for brevity, no further description is provided here.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device provided in the embodiment of the present disclosure, the device includes a processor 501, a memory 502, and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external storage 5022; the internal memory 5021 temporarily stores operation data for the processor 501 and data exchanged with the external storage 5022, such as a hard disk, and the processor 501 exchanges data with the external storage 5022 through the internal memory 5021. When the electronic device 500 operates, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
acquiring an original image;
extracting a structural component in the original image based on a guiding filter containing a structural perception item to obtain a target image;
and generating the guiding filter containing the structure perception item based on the original image and the guide image, wherein the structure perception item represents the importance degree of each pixel point in the structural component.
In one possible design, the instructions executed by the processor 501 further include:
obtaining the structure perception item;
obtaining, based on the structure perception item, an objective function of the guiding filter containing the structure perception item and its parameters;
and obtaining the guiding filter containing the structure perception item based on the objective function and its parameters.
In one possible design, the instructions executed by the processor 501 further include:
the structure perception item includes: a structure confidence term; the structure confidence term is used for determining the probability that each pixel point in the guide image is an edge point;
the step of determining the structural confidence term includes:
determining, for each target pixel point in the guide image and at each preset window size of at least one window size, the standard deviation of the pixel values within the window centered on that target pixel point;
generating the structural confidence term based on the standard deviation of each target pixel point in the guide image under each window size.
In one possible design, the instructions executed by the processor 501 further include:
and generating the structural confidence term based on the product of the standard deviations of each target pixel point in the guide image at each of multiple preset window sizes.
In one possible design, the instructions executed by the processor 501 further include:
the structure perception item further includes: a deviation elimination term; the deviation elimination term is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the step of determining the deviation elimination term includes:
and determining the deviation elimination term based on the difference between the pixel value of the pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
In one possible design, the instructions executed by the processor 501 further include:
and obtaining the structure perception item as the product of the structure confidence term and the deviation elimination term.
In one possible design, the instructions executed by the processor 501 further include:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure perception item and the least-squares function.
In one possible design, the instructions executed by the processor 501 further include:
acquiring a weight image of the structural perception item;
and obtaining the parameters of the objective function based on the weight image, the guide image and the original image.
Furthermore, the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the image processing method described in the above method embodiments.
The computer program product of the image processing method provided by the embodiment of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the image processing method described in the foregoing method embodiments. For details, refer to the foregoing method embodiments; they are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only one kind of logical division, and other divisions are possible in actual implementation; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (18)

1. A method of image processing, comprising:
acquiring an original image;
extracting a structural component in the original image based on a guiding filter containing a structure perception item to obtain a target image;
and generating the guiding filter containing the structure perception item based on the original image and a guide image, wherein the structure perception item represents the importance degree of each pixel point in the structural component.
2. The method of claim 1, wherein generating the guiding filter containing the structure perception item based on the original image and the guide image comprises:
obtaining the structure perception item;
obtaining, based on the structure perception item, an objective function of the guiding filter containing the structure perception item and its parameters;
and obtaining the guiding filter containing the structure perception item based on the objective function and its parameters.
3. The method of claim 1 or 2, wherein the structure perception item comprises: a structure confidence term, wherein the structure confidence term is used for determining the probability that each pixel point in the guide image is a pixel point of the structural component;
the step of determining the structure confidence term includes:
determining, for each target pixel point in the guide image and at each preset window size of at least one window size, the standard deviation of the pixel values within the window centered on that target pixel point;
generating the structural confidence term based on the standard deviation of each target pixel point in the guide image under each window size.
4. The method of claim 3, wherein generating the structure confidence term based on the standard deviation of each target pixel point in the guide image at each window size comprises:
and generating the structure confidence term based on the product of the standard deviations of each target pixel point in the guide image at each of multiple preset window sizes.
5. The method of claim 1 or 2, wherein the structure perception item comprises: a deviation elimination term, wherein the deviation elimination term is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the step of determining the deviation elimination term includes:
and determining the deviation elimination term based on the difference between the pixel value of the pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
6. The method of any of claims 3-5, wherein the structure perception item is derived from a product of the structure confidence term and the deviation elimination term.
7. The method of claim 2, wherein obtaining the objective function containing the structure perception item comprises:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure perception item and the least-squares function.
8. The method of claim 2, wherein the step of obtaining parameters of the objective function comprises:
acquiring a weight image of the structural perception item;
and obtaining the parameters of the objective function based on the weight image, the guide image and the original image.
9. An apparatus for image processing, comprising:
the original image acquisition module is used for acquiring an original image;
the structural component extraction module is used for extracting the structural component in the original image based on a guiding filter containing a structure perception item, to obtain a target image;
and the guided filter generation module is used for generating the guiding filter containing the structure perception item based on the original image and the guide image, wherein the structure perception item represents the importance degree of each pixel point in the structural component.
10. The apparatus of claim 9, wherein the guided filter generation module comprises:
a structure perception item acquisition unit configured to acquire the structure perception item;
an objective function determining unit, configured to obtain, based on the structure perception item, an objective function of the guiding filter containing the structure perception item and its parameters;
and a guiding filter determining unit, configured to obtain the guiding filter containing the structure perception item based on the objective function and its parameters.
11. The apparatus of claim 9 or 10, wherein the structure perception item comprises: a structure confidence term; the structure confidence term is used for determining the probability that each pixel point in the guide image is a pixel point of the structural component;
the device further comprises: a structure confidence term determination module for determining the structure confidence term;
the structure confidence term determination module determines the structure confidence term by using the following steps:
determining, for each target pixel point in the guide image and at each preset window size of at least one window size, the standard deviation of the pixel values within the window centered on that target pixel point;
generating the structural confidence term based on the standard deviation of each target pixel point in the guide image under each window size.
12. The apparatus of claim 11, wherein the structural confidence term determination module determines the structural confidence term by:
and generating the structure confidence term based on the product of the standard deviations of each target pixel point in the guide image at each of multiple preset window sizes.
13. The apparatus of claim 9 or 10, wherein the structure perception item further comprises: a deviation elimination term; the deviation elimination term is used for determining the degree of pixel-value deviation between a pixel point in the original image and the corresponding pixel point in the guide image;
the device further comprises: a deviation elimination term determination module, configured to determine the deviation elimination term;
the deviation elimination term determination module determines the deviation elimination term by using the following step:
and determining the deviation elimination term based on the difference between the pixel value of the pixel point in the original image and the pixel value of the corresponding pixel point in the guide image.
14. The apparatus of any of claims 11-13, wherein the structure perception item is derived from a product of the structure confidence term and the deviation elimination term.
15. The apparatus according to claim 10, wherein the objective function determining unit obtains the objective function containing the structure perception item by:
acquiring a least-squares function of the structural component in the original image and the target image;
and obtaining the objective function according to the structure perception item and the least-squares function.
16. The apparatus of claim 10, wherein the objective function determining unit obtains the parameters of the objective function by:
acquiring a weight image of the structural perception item;
and obtaining the parameters of the objective function based on the weight image, the guide image and the original image.
17. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the method of image processing according to any one of claims 1 to 8.
18. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the method of image processing according to any one of claims 1 to 8.
CN201911379688.8A 2019-12-27 2019-12-27 Image processing method, device, electronic equipment and storage medium Active CN111192214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911379688.8A CN111192214B (en) 2019-12-27 2019-12-27 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111192214A true CN111192214A (en) 2020-05-22
CN111192214B CN111192214B (en) 2024-03-26

Family

ID=70710572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911379688.8A Active CN111192214B (en) 2019-12-27 2019-12-27 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111192214B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292824A1 (en) * 2013-04-12 2016-10-06 Agency For Science, Technology And Research Method and System for Processing an Input Image
CN107730479A (en) * 2017-08-30 2018-02-23 中山大学 High dynamic range images based on compressed sensing go artifact fusion method
US20190332883A1 (en) * 2018-04-27 2019-10-31 Ati Technologies Ulc Perceptual importance maps for image processing
CN109767408A (en) * 2018-12-29 2019-05-17 广州华多网络科技有限公司 Image processing method, device, storage medium and computer equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHENGGUO LI et al.: "Weighted Guided Image Filtering" *
LIU Junyi: "Color-image-guided depth image enhancement" *
ZHANG Xiaodong et al.: "Research on road traffic image processing and reconstruction algorithms based on compressed sensing" *
XING Yinan et al.: "Research on guided filtering methods based on the Shen Jun operator" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022120899A1 (en) * 2020-12-07 2022-06-16 中国科学院深圳先进技术研究院 Image reconstruction method and apparatus, electronic device and machine-readable storage medium
CN113327207A (en) * 2021-06-03 2021-08-31 广州光锥元信息科技有限公司 Method and device applied to image face optimization
CN113327207B (en) * 2021-06-03 2023-12-08 广州光锥元信息科技有限公司 Method and device applied to image face optimization

Also Published As

Publication number Publication date
CN111192214B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
US9697416B2 (en) Object detection using cascaded convolutional neural networks
EP3767523A1 (en) Image processing method and apparatus, and computer readable medium, and electronic device
CN109840477B (en) Method and device for recognizing shielded face based on feature transformation
CN107316326B (en) Edge-based disparity map calculation method and device applied to binocular stereo vision
CN109766925B (en) Feature fusion method and device, electronic equipment and storage medium
CN110147708B (en) Image data processing method and related device
CN109948439B (en) Living body detection method, living body detection system and terminal equipment
CN111275040B (en) Positioning method and device, electronic equipment and computer readable storage medium
CN110675940A (en) Pathological image labeling method and device, computer equipment and storage medium
CN112287867B (en) Multi-camera human body action recognition method and device
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
US20210004947A1 (en) Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium
CN106569946B (en) Mobile terminal performance test method and system
CN111192214A (en) Image processing method and device, electronic equipment and storage medium
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
WO2015010559A1 (en) Devices, terminals and methods for image processing
CN113642639A (en) Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN112380978A (en) Multi-face detection method, system and storage medium based on key point positioning
CN110660091A (en) Image registration processing method and device and photographing correction operation system
KR101592087B1 (en) Method for generating saliency map based background location and medium for recording the same
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN113628148A (en) Infrared image noise reduction method and device
CN112529928A (en) Part assembly detection method, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant