CN115587937A - Image filtering method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN115587937A
Authority
CN
China
Prior art keywords
image
filtering
pixel value
target
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110755665.3A
Other languages
Chinese (zh)
Inventor
肖云雷
刘阳兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd filed Critical Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202110755665.3A
Publication of CN115587937A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image filtering method, an image filtering apparatus, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring an image to be processed; down-sampling the image to be processed to obtain an intermediate image and determining a filtering parameter of the intermediate image; filtering the intermediate image according to the filtering parameter to obtain a smooth image; and respectively up-sampling the smooth image and the pixel value variance matrix in the filtering parameter, and combining the up-sampled results to obtain a filtered target image. Because the smooth image and the pixel value variance matrix are up-sampled separately and the results are combined into the target image, compared with directly up-sampling the image obtained after filtering, no large image needs to be processed during up-sampling, so the amount of data to be processed is small and the processing speed is higher.

Description

Image filtering method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image filtering technologies, and in particular, to an image filtering method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Image filtering is the most basic operation in image processing, and can filter out noise in an image to make the image smoother and clearer. In order to preserve the edge features in the image, the image is typically filtered using an edge-preserving filter to preserve the edge features.
However, when an existing filtering method is used to perform edge-preserving filtering on an image, although the edge features can be preserved, the whole image needs to be filtered, so the amount of data to be processed is large, the calculation speed is slow, and the requirement on the computing capability of the device is high.
Disclosure of Invention
The application provides an image filtering method, an image filtering apparatus, an electronic device and a computer-readable storage medium, and aims to solve the problems of the large data processing amount and low calculation speed of existing filtering methods.
In a first aspect, the present application provides an image filtering method, including:
acquiring an image to be processed;
down-sampling an image to be processed to obtain an intermediate image, and determining a filtering parameter of the intermediate image;
filtering the intermediate image according to the filtering parameters to obtain a smooth image;
and upsampling the pixel value variance matrix in the smooth image and the filtering parameter, and combining the upsampled results to obtain a filtered target image.
In a second aspect, the present application provides an image filtering apparatus, comprising:
the acquisition unit is used for acquiring an image to be processed;
the down-sampling unit is used for down-sampling the image to be processed to obtain an intermediate image and determining a filtering parameter of the intermediate image;
the filtering unit is used for filtering the intermediate image according to the filtering parameters to obtain a smooth image;
and the upsampling unit is used for upsampling the pixel value variance matrix in the smooth image and the filtering parameter and combining the upsampled results to obtain a filtered target image.
In a third aspect, the present application further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps in any one of the image filtering methods provided in the present application are implemented.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in any one of the image filtering methods provided in the present application.
In summary, the present application obtains the image to be processed; down-sampling an image to be processed to obtain an intermediate image, and determining a filtering parameter of the intermediate image; filtering the intermediate image according to the filtering parameters to obtain a smooth image; and respectively carrying out up-sampling on the pixel value variance matrixes in the smooth image and the filtering parameters, and combining the up-sampled results to obtain a filtered target image. Therefore, the smooth image and the pixel value variance matrix are respectively subjected to upsampling, the results are combined to obtain the target image, and compared with the method of directly performing upsampling on the image obtained after filtering, the method does not need to process a large image during upsampling, so that the processing data volume is small, and the processing speed is higher.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of an image filtering method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of an image filtering method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a target pixel provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of acquiring a target image provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of one post-upsampling combination provided in embodiments of the present application;
fig. 6 is a schematic flow chart of obtaining filter parameters provided in the embodiment of the present application;
FIG. 7 is a schematic flow chart of obtaining a mean value and a variance of a neighborhood pixel value according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an integral image provided in an embodiment of the present application;
FIG. 9 is a schematic flow chart of acquiring a to-be-processed image according to an embodiment of the present disclosure;
FIG. 10 is a schematic flow chart of acquiring a target image according to an embodiment of the present application;
FIG. 11 is a schematic flow chart of obtaining a target image according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an embodiment of an image filtering apparatus provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of an embodiment of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the embodiments of the present application, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known processes have not been described in detail in order not to obscure the description of the embodiments of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed in the embodiments herein.
The embodiment of the application provides an image filtering method and device, electronic equipment and a computer-readable storage medium. The image filtering apparatus may be integrated in an electronic device, and the electronic device may be a server or a terminal.
The main body of the image filtering method in the embodiment of the present application may be the image filtering apparatus provided in the embodiment of the present application, or different types of electronic devices such as a server device, a physical host, or a User Equipment (UE) integrated with the image filtering apparatus, where the image filtering apparatus may be implemented in a hardware or software manner, and the UE may specifically be a terminal device such as a smart phone, a tablet computer, a notebook computer, a palm computer, a desktop computer, or a Personal Digital Assistant (PDA).
The electronic device may adopt a working mode of independent operation, or may also adopt a working mode of a device cluster.
Referring to fig. 1, fig. 1 is a schematic view of a scene of an image filtering system provided in an embodiment of the present application. The image filtering system may include an electronic device 100, and an image filtering apparatus is integrated in the electronic device 100.
In addition, as shown in fig. 1, the image filtering system may further include a memory 200 for storing data, such as text data.
It should be noted that the scene schematic diagram of the image filtering system shown in fig. 1 is only an example, and the image filtering system and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
In the following, the image filtering method provided in the embodiments of the present application is described with an electronic device as the execution subject; for simplicity and convenience of description, the execution subject is omitted in the subsequent method embodiments. It should be noted that, wherever coordinates are referred to hereinafter, the following coordinate system is used: the upper-left corner of the image is the origin, the direction from the origin along the length of the image is the positive x direction, and the direction from the origin along the width of the image is the positive y direction.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image filtering method according to an embodiment of the present disclosure. It should be noted that, although a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein. The image filtering method may specifically include the following steps 201 to 204, where:
201. and acquiring an image to be processed.
The image to be processed may be an image captured by an image capturing device, the type of the image capturing device is not limited in this embodiment, and a common image capturing device such as a video camera and a still camera may be used to acquire the image to be processed.
In some embodiments, the image to be processed may also be a portion of an image captured by the image capture device. For example, an initial image may be first acquired by an image capturing device, and then segmented by an image segmentation algorithm such as semantic segmentation, instance segmentation, or the like, to obtain a sub-image that can be used as an image to be processed.
Further, the image to be processed may be an image in a plurality of formats, and the format of the image to be processed is not limited in the embodiment of the present application. For example, an image in RGB format may be used as the image to be processed, and an image in HSV format may also be used as the image to be processed.
202. The method comprises the steps of down-sampling an image to be processed to obtain an intermediate image and determining a filtering parameter of the intermediate image.
Among them, there are various methods for down-sampling the image to be processed. Illustratively, the image to be processed may be down-sampled by an averaging method, or by interpolation methods including nearest neighbor, bilinear, and trilinear methods. For example, the picture may be divided into a plurality of downsampling regions, for each downsampling region, a pixel value average of pixel points in the region is calculated, and the pixel value average is used as a pixel value of a newly formed pixel point at the corresponding downsampling region after downsampling, so as to complete downsampling. The features in the image to be processed can be extracted through down sampling, and the size of the image to be processed is reduced, so that the filtering speed is accelerated.
Further, the image to be processed may be downsampled using a preset convolution kernel. Illustratively, convolution kernels of different sizes can be selected according to the size of the image to be processed and the down-sampling magnification. For example, for an image with a size of 4 × 4, if the down-sampling magnification is 2 and the stride is 2, the image to be processed may be down-sampled using a convolution kernel with a size of 2 × 2.
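For illustration, a minimal sketch of such average-pooling down-sampling is given below; the function name, the NumPy dependency and the 4 × 4 example are assumptions made for this sketch rather than part of the original disclosure.

```python
import numpy as np

def downsample_average(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Down-sample a single-channel image by averaging non-overlapping
    factor x factor regions (equivalent to a uniform convolution kernel
    applied with a stride equal to the factor)."""
    h, w = image.shape
    h_crop, w_crop = h - h % factor, w - w % factor  # drop ragged border rows/cols
    blocks = image[:h_crop, :w_crop].reshape(
        h_crop // factor, factor, w_crop // factor, factor)
    return blocks.mean(axis=(1, 3))

# Example: a 4 x 4 image down-sampled with a 2 x 2 kernel and stride 2
img = np.arange(16, dtype=np.float64).reshape(4, 4)
intermediate = downsample_average(img, factor=2)   # 2 x 2 intermediate image
```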
The intermediate image is an image obtained by down-sampling the image to be processed. Specifically, the intermediate image is an image of which the size is smaller than that of the image to be processed after the image to be processed is downsampled. The intermediate image comprises the image information extracted from the image to be processed through downsampling, so that the purpose of denoising can be achieved by filtering the intermediate image.
The filtering parameters may include a pixel value variance matrix and a pixel value mean matrix. Illustratively, the filtering parameters may be the pixel value variance matrix and pixel value mean matrix of the pixel points in the entire intermediate image. Referring to fig. 3, fig. 3 includes a 5 × 5 intermediate image a; if the intermediate image a is convolved with the 3 × 3 convolution kernel shown in fig. 3 at a stride of 1, the pixel value mean matrix of the intermediate image a can be obtained. After the pixel value of each pixel point in the intermediate image a is squared, processing the result with the 3 × 3 convolution kernel shown in fig. 3 at a stride of 1 yields the pixel value variance matrix of the intermediate image a.
In some embodiments, the filtering parameter may also be a pixel value variance matrix and a pixel value mean matrix of a part of pixel points in the intermediate image, so as to highlight the characteristics of the part of pixel points in the filtering.
Further, the filtering parameters may also include parameters such as the number of downsamplings. At this time, the filtering parameter may be a filtering parameter preset in the image filtering apparatus, or may be a filtering parameter input by a user. For example, the image filtering device may be a personal computer, and the user may input the filtering parameters through a keyboard, and during the filtering process, the user may adjust the filtering parameters at any time to observe the filtering effect in real time, so as to determine the appropriate filtering parameters.
203. And filtering the intermediate image according to the filtering parameters to obtain a smooth image.
The smoothed image may be an image obtained by filtering the intermediate image with a filter kernel after the filter kernel is set according to the pixel value mean matrix and the pixel value variance matrix in the filter parameters.
Illustratively, the intermediate image may be filtered using the filter kernel in equation (1):
w_ij = (1/W)*[1 + (I_i - μ_i)(I_j - μ_i)/(α_i + ε_1)]    formula (1)

where I_j is the pixel value of any pixel point in the window of the filter kernel when the ith pixel point in the intermediate image is the center of the window; w_ij is the weight corresponding to the pixel point whose pixel value is I_j when the ith pixel point in the intermediate image is the center of the window; W is the window area size of the filter kernel; I_i is the pixel value of the ith pixel point in the intermediate image; μ_i is the mean of the pixel values of the pixel points contained in the window when the center of the window is the ith pixel point; α_i is the variance of the pixel values of the pixel points contained in the window when the center of the window is the ith pixel point; and ε_1 is the second adjustment parameter. At this time, the filtering parameters include μ_i, α_i, and ε_1.
Benefits of using this filter kernel include the following: the weights of the filter kernel sum to 1 within the window, which is specifically demonstrated as follows:

Σ_j w_ij = (1/W)*Σ_j [1 + (I_i - μ_i)(I_j - μ_i)/(α_i + ε_1)] = 1 + (I_i - μ_i)*[Σ_j (I_j - μ_i)/W]/(α_i + ε_1) = 1,

because the mean of the pixel values I_j within the window is μ_i, so the second term vanishes. Therefore, when this filter kernel is adopted, the kernel function of the filter kernel does not need to be additionally normalized, and the filtering calculation can be simplified.
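For illustration only, the following sketch evaluates the weights of formula (1) for a single window position and checks that they sum to 1, so no extra normalization is needed; the variable names and the sample window are assumptions of this sketch.

```python
import numpy as np

def window_weights(window: np.ndarray, center_value: float, eps1: float) -> np.ndarray:
    """Weights of formula (1) for one filter-kernel window.
    window: pixel values I_j inside the window; center_value: I_i."""
    W = window.size                      # window area size
    mu_i = window.mean()                 # pixel value mean within the window
    alpha_i = window.var()               # pixel value variance within the window
    return (1.0 / W) * (1.0 + (center_value - mu_i) * (window - mu_i) / (alpha_i + eps1))

win = np.array([[3., 5., 2.], [4., 7., 6.], [1., 5., 3.]])   # 3 x 3 sample window
w = window_weights(win, center_value=win[1, 1], eps1=0.1)
assert np.isclose(w.sum(), 1.0)          # weights sum to 1 within the window
```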
After the intermediate image is filtered through the filter kernel, a filtered image can be obtained. Each pixel value in the filtered image is:

Σ_j w_ij*I_j = [α_i/(α_i + ε_1)]*I_i + [ε_1/(α_i + ε_1)]*μ_i = a_i*I_i + b_i    formula (2)

where I_j is the pixel value of a pixel point before filtering, and b_i = ε_1*μ_i/(α_i + ε_1) in formula (2) is the pixel value of the corresponding pixel point in the smooth image. Because the coefficient a_i = α_i/(α_i + ε_1) in formula (2) contains the pixel value variance, the term a_i*I_i can be used to characterize the edge information in the intermediate image.

During the filtering, the smoothing effect may be changed by adjusting the second adjustment parameter. For example, increasing the second adjustment parameter enhances the smoothing effect: when ε_1 tends to positive infinity, b_i = μ_i, that is, the smooth image is the image obtained by mean-filtering the intermediate image; when ε_1 tends to 0, b_i = 0, that is, the intermediate image is not smoothed. For the intermediate image, too large an ε_1 may cause loss of edge information, while too small an ε_1 results in a poor smoothing effect, so ε_1 needs to be adjusted according to the actual filtering requirements.
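Based on the closed form of formula (2) above, a minimal sketch of computing the smooth image b from the pixel value mean matrix, the pixel value variance matrix and the second adjustment parameter; the function name and the sample matrices are assumptions, and the two calls illustrate the limiting behaviour of ε_1 described above.

```python
import numpy as np

def smooth_image(mu: np.ndarray, alpha: np.ndarray, eps1: float) -> np.ndarray:
    """Smooth image b of formula (2): b_i = eps1 * mu_i / (alpha_i + eps1)."""
    return eps1 * mu / (alpha + eps1)

mu = np.array([[10., 12.], [11., 13.]])      # pixel value mean matrix (placeholder)
alpha = np.array([[4., 0.5], [2., 8.]])      # pixel value variance matrix (placeholder)
print(smooth_image(mu, alpha, eps1=1e6))     # large eps1: approaches mu (mean filtering)
print(smooth_image(mu, alpha, eps1=1e-6))    # small eps1: approaches 0 (no smoothing)
```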
204. And upsampling the pixel value variance matrix in the smooth image and the filtering parameter, and combining the upsampled results to obtain a filtered target image.
The up-sampling may adopt a method such as nearest-neighbor interpolation or trilinear interpolation. It should be noted that, since bilinear interpolation blurs the edges in an image while the purpose of filtering in the embodiments of the present application is to preserve those edges, bilinear interpolation is generally not adopted here.
After the smooth image and the pixel value variance matrix are respectively subjected to up-sampling, a new pixel value matrix and a new pixel value variance matrix can be obtained, and the two obtained new matrices are combined to obtain a filtered target image.
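As a hedged sketch of this step, the smooth image and the pixel value variance matrix can each be up-sampled with nearest-neighbor interpolation (bilinear interpolation is avoided here because it blurs edges); the combination itself is detailed in steps 301 to 303 below. The helper name and the placeholder matrices are assumptions.

```python
import numpy as np

def upsample_nearest(matrix: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor up-sampling: each value is repeated factor times
    along both axes."""
    return np.repeat(np.repeat(matrix, factor, axis=0), factor, axis=1)

smooth_small = np.array([[10., 12.], [11., 13.]])    # smooth image (placeholder)
variance_small = np.array([[4., 0.5], [2., 8.]])     # pixel value variance matrix (placeholder)

target_smooth = upsample_nearest(smooth_small, factor=2)      # up-sampled smooth image
target_variance = upsample_nearest(variance_small, factor=2)  # up-sampled variance matrix
# target_smooth and target_variance are subsequently combined into the target image
```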
In summary, in the embodiment of the present application, an image to be processed is obtained; down-sampling an image to be processed to obtain an intermediate image, and determining a filtering parameter of the intermediate image; filtering the intermediate image according to the filtering parameters to obtain a smooth image; and respectively carrying out up-sampling on the pixel value variance matrixes in the smooth image and the filtering parameters, and combining the up-sampled results to obtain a filtered target image. Therefore, the smooth image and the pixel value variance matrix are respectively subjected to upsampling, the results are combined to obtain the target image, and compared with the method of directly performing upsampling on the image obtained after filtering, the method does not need to process a large image during upsampling, so that the processing data volume is small, and the processing speed is higher.
In some embodiments, the target variance matrix obtained after upsampling the pixel value variance matrix may be input into a preset edge feature extraction function to obtain an edge feature matrix containing edge information. Referring to fig. 4, at this time, upsampling the pixel value variance matrix in the smoothed image and the filtering parameters and combining the upsampled results to obtain a filtered target image may include:
301. and respectively up-sampling the pixel value variance matrixes in the smooth image and the filtering parameters to obtain a target smooth image corresponding to the smooth image and a target variance matrix corresponding to the pixel value variance matrix.
302. And inputting the target variance matrix into a preset edge feature extraction function, and calculating to obtain an edge feature matrix.
The preset edge feature extraction function may be the following function:
A = (α*I)/(α + ε_0)    formula (3)

where I is the pixel value matrix of the intermediate image, A is the edge feature matrix, α is the matrix obtained after up-sampling the pixel value variance matrix, and ε_0 is the first adjustment parameter. Because the edge feature extraction function contains the pixel value variance matrix, it can represent the deviation of the pixel values in the intermediate image, and can therefore be used to characterize the edge information in the image.

Further, the first adjustment parameter ε_0 may be changed according to the actual filtering requirements; the adjustment logic is the same as that of the second adjustment parameter ε_1 described above, and is not repeated here.
303. And adding the pixel value matrix of the target smooth image and the edge characteristic matrix, and calculating to obtain a filtered target image.
Adding the pixel value matrix and the edge feature matrix yields a new matrix, which contains the pixel value of each pixel point in the target image. Taking fig. 5 as an example, X in fig. 5 is the pixel value matrix and Y is the edge feature matrix; adding the X matrix and the Y matrix gives the Z matrix in fig. 5, where the Z matrix holds the pixel value of each pixel point in the target image, so the target image can be obtained from the Z matrix.
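A minimal sketch of steps 301 to 303, assuming formula (3) is applied to the up-sampled variance matrix and the result is added to the pixel value matrix of the target smooth image; the placeholder matrices and the ε_0 value are illustrative only.

```python
import numpy as np

def edge_feature_matrix(pixel_values: np.ndarray, alpha_up: np.ndarray, eps0: float) -> np.ndarray:
    """Formula (3): A = (alpha * I) / (alpha + eps0), where alpha is the
    up-sampled pixel value variance matrix."""
    return alpha_up * pixel_values / (alpha_up + eps0)

X = np.array([[10., 12.], [11., 13.]])      # pixel value matrix of the target smooth image (placeholder)
alpha_up = np.array([[4., 0.5], [2., 8.]])  # up-sampled pixel value variance matrix (placeholder)
I_vals = np.array([[9., 14.], [10., 15.]])  # pixel value matrix I of formula (3) (placeholder)
Y = edge_feature_matrix(I_vals, alpha_up, eps0=0.1)   # edge feature matrix
Z = X + Y                                   # pixel values of the filtered target image
```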
In order to preserve edges in the intermediate image, a pixel value mean matrix and a pixel value variance matrix may be used as filtering parameters. Referring to fig. 6, at this time, determining the filtering parameter of the intermediate image may specifically include:
401. and sequentially extracting each pixel point in the intermediate image as a target pixel point.
The target pixel point refers to the currently extracted pixel point. For example, for an intermediate image containing four pixel points A, B, C and D, when pixel point A is extracted, A is the target pixel point; after pixel point A has been processed, the next pixel point B is extracted, and B becomes the target pixel point. Illustratively, the target pixel point may be the pixel point corresponding to the center of the window of a preset convolution kernel when the intermediate image is processed by that convolution kernel. Taking fig. 3 as an example, when the 3 × 3 convolution kernel shown in fig. 3 is used to process the intermediate image and the convolution kernel covers the pixel points a, b, c, f, g, h, k, l and m, the pixel point corresponding to the center of the convolution-kernel window is g, so the pixel point g is the target pixel point.
402. And calculating the neighborhood pixel value mean value and the neighborhood pixel value variance of the target pixel point according to the first pixel value of the target pixel point and the second pixel value of the neighborhood pixel point corresponding to the target pixel point.
The distance between the neighborhood pixel point and the target pixel point is smaller than or equal to a preset radius, and the neighborhood pixel value mean value is a mean value calculated according to the pixel value of the neighborhood pixel point and the pixel value of the target pixel point; the neighborhood pixel value variance is a variance calculated according to the pixel values of the neighborhood pixels and the pixel value of the target pixel. For example, the neighborhood pixel point may be a pixel covered by a convolution kernel window when a preset convolution kernel is used to obtain a neighborhood pixel value mean and a neighborhood pixel value variance. Taking fig. 3 as an illustration, when the 3 × 3 convolution kernel shown in fig. 3 is used to obtain the neighborhood pixel value mean and the neighborhood pixel value variance of the intermediate image a, the pixel points covering the intermediate image a are the neighborhood pixel points. For example, when the convolution kernel covers the pixel points a, b, c, f, g, h, k, l, and m, the 9 pixel points are neighborhood pixel points, the neighborhood pixel value mean is the pixel value mean of the 9 pixels, and the neighborhood pixel value variance is the pixel value variance of the 9 pixels.
403. And arranging the neighborhood pixel value mean of each target pixel point according to the position of the target pixel point to form a pixel value mean matrix, arranging the neighborhood pixel value variance of each target pixel point according to the position of the target pixel point to form a pixel value variance matrix, wherein the pixel value mean matrix and the pixel value variance matrix are filtering parameters of the intermediate image.
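For illustration, a sketch of steps 401 to 403 that computes the pixel value mean matrix and pixel value variance matrix over the (2r+1) × (2r+1) neighborhood of every target pixel point; the use of SciPy's uniform_filter here is an assumption of this sketch, not part of the disclosure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def filtering_parameters(intermediate: np.ndarray, r: int):
    """Return the pixel value mean matrix and pixel value variance matrix
    computed over the (2r+1) x (2r+1) neighborhood of each target pixel point."""
    img = np.asarray(intermediate, dtype=np.float64)
    size = 2 * r + 1
    mean = uniform_filter(img, size=size)            # neighborhood pixel value mean
    mean_sq = uniform_filter(img ** 2, size=size)    # neighborhood mean of squared values
    variance = mean_sq - mean ** 2                   # neighborhood pixel value variance
    return mean, variance
```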
Besides calculating the variance and mean of the neighborhood pixel values by using convolution kernels, the variance and mean of the neighborhood pixel values can be calculated by using a preset calculation formula. Referring to fig. 7, at this time, calculating the neighborhood pixel value mean and the neighborhood pixel value variance of the target pixel point according to the first pixel value of the target pixel point and the second pixel value of the neighborhood pixel point corresponding to the target pixel point may specifically include:
501. and inputting the first pixel value of the target pixel point and the second pixel value of the neighborhood pixel point corresponding to the target pixel point into a preset average value calculation formula, and calculating to obtain the neighborhood pixel value average value of the target pixel point.
Wherein, the preset average value calculation formula is as follows:
μ_i = S(I_i)/(2r+1)^2    formula (4)

where μ_i represents the neighborhood pixel value mean of the ith target pixel point in the intermediate image, I_i is the ith target pixel point in the intermediate image, r is the preset radius, and S(I_i) is the sum of the first pixel value of the ith target pixel point and the second pixel values of the neighborhood pixel points corresponding to the ith target pixel point.

Further, S(I_i) may be obtained using an integral image. The integral image is described by taking fig. 8 as an example. Fig. 8 shows the pixel values of each pixel point in a 6 × 3 image, and the integral image obtained by converting that 6 × 3 image. The value 16 at position (4, 2) in the integral image is the sum of the pixel values of the pixel points contained in frame A1 of the 6 × 3 image, and the value 8 at position (4, 0) is the sum of the pixel values of the pixel points contained in frame A2; that is, the value at each coordinate point of the integral image represents the sum of the pixel values of all points in the rectangular region extending from the upper-left corner of the 6 × 3 image to the pixel point at the same coordinate.

As can be seen from the description of fig. 8, if I_i is the pixel point with coordinates (2, 2) in the 6 × 3 image and r is 1, then S(I_i) can be read from the integral image as the value at (3, 2) minus the value at (3, 0), minus the value at (0, 3), plus the value at (0, 0). By this calculation, S(I_i) is finally obtained as 6; substituting r and S(I_i) into the preset mean calculation formula gives μ_i.

Therefore, by using the integral image, S(I_i) can be obtained more simply and directly from only four points, regardless of the size of the preset radius r.
502. And inputting the first pixel value of the target pixel point and the second pixel value of the neighborhood pixel point corresponding to the target pixel point into a preset variance calculation formula, and calculating to obtain the neighborhood pixel value variance of the target pixel point.
The preset variance calculation formula is as follows:
α_i = S(I_i^2)/(2r+1)^2 - μ_i^2    formula (5)

where α_i is the neighborhood pixel value variance of the ith target pixel point in the intermediate image, and S(I_i^2) is the sum of the square of the first pixel value of the ith target pixel point and the squares of the second pixel values of the neighborhood pixel points corresponding to the ith target pixel point.

Further, an integral image may also be used to obtain S(I_i^2); for details, reference may be made to the process of obtaining S(I_i), which is not repeated here.
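A sketch of obtaining S(I_i) and S(I_i^2) from integral images with four look-ups each; the helper names and the row/column indexing are assumptions of this sketch.

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Integral image with an extra zero row/column so that box sums need
    no boundary special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii: np.ndarray, row: int, col: int, r: int) -> float:
    """Sum of pixel values in the (2r+1) x (2r+1) box centred at (row, col),
    clipped at the image border, read from only four points of the integral image."""
    h, w = ii.shape[0] - 1, ii.shape[1] - 1
    r0, r1 = max(row - r, 0), min(row + r + 1, h)
    c0, c1 = max(col - r, 0), min(col + r + 1, w)
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(18, dtype=np.float64).reshape(3, 6)   # example image with 3 rows and 6 columns
ii = integral_image(img)         # integral image for S(I_i)
ii_sq = integral_image(img ** 2) # integral image for S(I_i^2)
s_i = box_sum(ii, row=1, col=2, r=1)
s_i_sq = box_sum(ii_sq, row=1, col=2, r=1)
```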
For some images, a user only wants to filter an area containing specific information, so the image can be processed through a preset recognition model and then filtered to reduce the data processed during filtering and increase the filtering speed. Referring to fig. 9, at this time, before acquiring the image to be processed, the method may further specifically include:
010. and inputting the initial image into a preset interest region identification model for processing, and outputting the interest region contained in the initial image.
Wherein the initial image is an image before a preset interest region identification model is input. Likewise, the method of acquiring the initial image is not limited, and the initial image may be acquired by a commonly used image capturing apparatus such as a video camera, a still camera, or the like.
Wherein the region of interest is a filtered target region. Illustratively, the image information preset by the user may be included in the interest area. For example, a user may use a car as preset image information, and an area including the car in the image is an interest area, that is, a target area during filtering.
The interest region identification model may be a model obtained by training an image identification model by using a large number of images including the interest region. The region of interest identification model may be used to identify regions of interest contained in the image. For example, a large number of images containing cars may be used to train an image recognition model, resulting in a region of interest recognition model that can be used to recognize regions of the images containing cars.
020. And dividing the initial image according to the edge of the interest area to obtain an image to be processed containing the interest area.
The interest image refers to the sub-image that contains only the interest region. Using the interest image as the image to be processed reduces the amount of data processed during filtering, while ensuring that the filtered area contains the image information preset by the user.
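A sketch of the dividing step, assuming a hypothetical interest-region recognition model that returns a bounding box for the interest region; the model itself is outside the scope of this sketch and all helper names are illustrative.

```python
import numpy as np

def crop_to_interest_region(initial_image: np.ndarray, roi_box) -> np.ndarray:
    """Divide the initial image along the edges of the interest region,
    keeping only the sub-image that contains it."""
    r0, c0, r1, c1 = roi_box            # box assumed to come from a preset ROI model
    return initial_image[r0:r1, c0:c1]

initial_image = np.zeros((480, 640))
# roi_box = recognize_interest_region(initial_image)   # hypothetical recognition model
image_to_process = crop_to_interest_region(initial_image, (100, 200, 300, 400))
```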
In some embodiments, the image to be processed may also comprise a multi-channel image. Referring to fig. 10, in this case, the down-sampling the image to be processed to obtain the intermediate image and the filter parameter of the intermediate image may specifically include:
601. and acquiring three single-channel images corresponding to the image to be processed.
The type of the single-channel image is determined according to the color space type of the image to be processed. For example, when the color space type of the image to be processed is red, green and blue (hereinafter referred to as RGB) color space, the three single-channel images are an R-channel image, a G-channel image, and a B-channel image of the image to be processed, respectively. For example, the color space type of the image to be processed may also be a hue (hereinafter referred to as H), a saturation (hereinafter referred to as S), and a lightness (hereinafter referred to as V) color space, in which case the three single-channel images are an H-channel image, an S-channel image, and a V-channel image of the image to be processed, respectively.
602. Respectively carrying out downsampling on the three single-channel images to obtain three intermediate images corresponding to the three single-channel images respectively, and determining three filtering parameters corresponding to the three single-channel images respectively.
For the process of down-sampling three single-channel images and the explanation of the filtering parameters, reference may be made to the description in step 202 above, and details are not repeated here.
Referring to fig. 11, upsampling the pixel value variance matrix in the smoothed image and the filter parameter, and combining the upsampled results to obtain a filtered target image may specifically include:
603. and respectively carrying out upsampling on the three smooth images to obtain target smooth images corresponding to the three single-channel images.
The process of upsampling the smoothed image may refer to the description in step 40, and is not described herein again.
604. And respectively carrying out up-sampling on each pixel value variance matrix in the three filtering parameters to obtain target variance matrices corresponding to the three single-channel images.
The process of upsampling the variance matrix of the pixel values may refer to the description in step 40, and is not described herein again.
605. And respectively combining the target smooth images and the target variance matrixes corresponding to the three single-channel images to obtain the sub-target images corresponding to the three single-channel images.
The sub-target image is the image obtained by combining the target smooth image and the target variance matrix of each single-channel image. For example, after the target edge feature matrix of each single-channel image is obtained by using formula (3), the target smooth image and the target edge feature matrix of that single-channel image are added to obtain the sub-target image. For instance, if the target smooth image of the single-channel image A is A1 and its target variance matrix is A2, the target edge feature matrix A4 of A may be obtained by inputting A2 into formula (3), and the sub-target image of A may be obtained by adding A4 to the pixel value matrix of A1.
606. And combining the three sub-target images to obtain a filtered target image.
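Putting steps 601 to 606 together, a hedged structural sketch for a three-channel image; filter_single_channel stands for any single-channel pipeline (down-sampling, filtering, up-sampling and combining) such as the ones sketched above, and all names are assumptions.

```python
import numpy as np

def filter_three_channel(image: np.ndarray, filter_single_channel) -> np.ndarray:
    """Steps 601-606: split the image to be processed into its three
    single-channel images, filter each one independently, then merge the
    three sub-target images into the filtered target image."""
    channels = [image[..., c] for c in range(3)]          # e.g. R, G, B or H, S, V
    sub_targets = [filter_single_channel(ch) for ch in channels]
    return np.stack(sub_targets, axis=-1)

# Usage with any single-channel pipeline:
# target = filter_three_channel(rgb_image, filter_single_channel=my_pipeline)
```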
In order to better implement the image filtering method in the embodiment of the present application, on the basis of the image filtering method, an image filtering apparatus is further provided in the embodiment of the present application, as shown in fig. 12, which is a schematic structural diagram of an embodiment of the image filtering apparatus in the embodiment of the present application, and the image filtering apparatus 1200 includes:
an obtaining unit 1201, configured to obtain an image to be processed;
a down-sampling unit 1202, configured to down-sample an image to be processed to obtain an intermediate image, and determine a filtering parameter of the intermediate image;
a filtering unit 1203, configured to filter the intermediate image according to the filtering parameter to obtain a smooth image;
an upsampling unit 1204, configured to upsample the pixel value variance matrix in the smoothed image and the filtering parameter, and combine the upsampled results to obtain a filtered target image.
In a possible implementation manner, the upsampling unit 1204 may specifically be configured to:
respectively up-sampling pixel value variance matrixes in the smooth image and the filtering parameters to obtain a target smooth image corresponding to the smooth image and a target variance matrix corresponding to the pixel value variance matrixes;
inputting the target variance matrix into a preset edge feature extraction function, and calculating to obtain an edge feature matrix;
and adding the pixel value matrix of the target smooth image and the edge characteristic matrix, and calculating to obtain a filtered target image.
In a possible implementation, the downsampling unit 1202 may be specifically configured to:
sequentially extracting each pixel point in the intermediate image as a target pixel point;
calculating a neighborhood pixel value mean value and a neighborhood pixel value variance of the target pixel point according to a first pixel value of the target pixel point and a second pixel value of a neighborhood pixel point corresponding to the target pixel point; the distance between the neighborhood pixel point and the target pixel point is smaller than or equal to a preset radius;
and arranging the neighborhood pixel value mean of each target pixel point according to the position of the target pixel point to form a pixel value mean matrix, arranging the neighborhood pixel value variance of each target pixel point according to the position of the target pixel point to form a pixel value variance matrix, and taking the pixel value mean matrix and the pixel value variance matrix as the filtering parameters of the intermediate image.
In one possible implementation, the downsampling unit 1202 may be further configured to:
inputting a first pixel value of a target pixel point and a second pixel value of a neighborhood pixel point corresponding to the target pixel point into a preset average value calculation formula, and calculating to obtain a neighborhood pixel value average value of the target pixel point;
inputting a first pixel value of a target pixel point and a second pixel value of a neighborhood pixel point corresponding to the target pixel point into a preset variance calculation formula, and calculating to obtain a neighborhood pixel value variance of the target pixel point;
wherein the mean calculation formula is μ_i = S(I_i)/(2r+1)^2;

the variance calculation formula is α_i = S(I_i^2)/(2r+1)^2 - μ_i^2;

μ_i represents the neighborhood pixel value mean of the ith target pixel point in the intermediate image, I_i is the ith target pixel point in the intermediate image, r is the preset radius, S(I_i) is the sum of the first pixel value of the ith target pixel point and the second pixel values of the neighborhood pixel points corresponding to the ith target pixel point, α_i is the neighborhood pixel value variance of the ith target pixel point in the intermediate image, and S(I_i^2) is the sum of the square of the first pixel value of the ith target pixel point and the squares of the second pixel values of the neighborhood pixel points corresponding to the ith target pixel point.
In a possible implementation manner, the filtering unit 1203 is further configured to:
setting a filtering kernel according to a pixel value mean matrix and a pixel value variance matrix in the filtering parameters;
filtering the intermediate image by using the filtering kernel to obtain a smooth image;
wherein, the filtering kernel is:
w_ij = (1/W)*[1 + (I_i - μ_i)(I_j - μ_i)/(α_i + ε_1)],

where I_j is the pixel value of any pixel point in the window of the filtering kernel when the ith pixel point in the intermediate image is the center of the window; w_ij is the weight corresponding to the pixel point whose pixel value is I_j when the ith pixel point in the intermediate image is the center of the window; W is the window area size of the filtering kernel; I_i is the pixel value of the ith pixel point in the intermediate image; μ_i is the mean of the pixel values of the pixel points contained in the window when the center of the window of the filtering kernel is the ith pixel point; α_i is the variance of the pixel values of the pixel points contained in the window when the center of the window of the filtering kernel is the ith pixel point; and ε_1 is the second adjustment parameter.
In a possible implementation manner, the image filtering apparatus 1200 further includes a dividing module 1205, where the dividing module 1205 is configured to:
inputting the initial image into a preset interest area recognition model for processing, and outputting an interest area contained in the initial image;
and dividing the initial image according to the edge of the interest area to obtain an image to be processed containing the interest area.
In one possible implementation, the downsampling unit 1202 may be further configured to:
acquiring three single-channel images corresponding to an image to be processed;
respectively carrying out downsampling on the three single-channel images to obtain three intermediate images corresponding to the three single-channel images respectively, and determining three filtering parameters corresponding to the three single-channel images respectively;
the upsampling unit 1204 may further be configured to:
respectively carrying out upsampling on the three smooth images to obtain target smooth images corresponding to the three single-channel images;
respectively carrying out up-sampling on each pixel value variance matrix in the three filtering parameters to obtain target variance matrices corresponding to the three single-channel images;
respectively combining the target smooth images and the target variance matrixes corresponding to the three single-channel images to obtain sub-target images corresponding to the three single-channel images;
and combining the three sub-target images to obtain a filtered target image.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
Since the image filtering apparatus can execute the steps in the image filtering method in any embodiment, the beneficial effects that can be realized by the image filtering method in any embodiment of the present application can be realized, which are detailed in the foregoing description and are not repeated herein.
In addition, in order to better implement the image filtering method in the embodiments of the present application, on the basis of the image filtering method, an embodiment of the present application further provides an electronic device. Referring to fig. 13, fig. 13 shows a schematic structural diagram of the electronic device in the embodiment of the present application. Specifically, the electronic device provided in the embodiment of the present application includes a processor 1301, and the processor 1301 is configured to implement each step of the image filtering method in any embodiment when executing the computer program stored in the memory 1302; alternatively, the processor 1301 is configured to implement the functions of the units in the corresponding embodiment shown in fig. 12 when executing the computer program stored in the memory 1302.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in the memory 1302 and executed by the processor 1301 to implement the embodiments of the present application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments being used to describe the execution of a computer program in a computer device.
The electronic devices may include, but are not limited to, a processor 1301, a memory 1302. Those skilled in the art will appreciate that the illustrations are merely examples of electronic devices and do not constitute a limitation of electronic devices and may include more or fewer components than those illustrated, or some components may be combined, or different components.
The Processor 1301 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, the processor being the control center for the electronic device and various interfaces and lines connecting the various parts of the overall electronic device.
The memory 1302 may be used to store computer programs and/or modules, and the processor 1301 implements various functions of the computer apparatus by running or executing the computer programs and/or modules stored in the memory 1302 and calling data stored in the memory 1302. The memory 1302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, video data, etc.) created according to the use of the electronic device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other volatile solid state storage device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the image filtering apparatus, the electronic device and the corresponding units thereof described above may refer to the description of the image filtering method in any embodiment, and are not described herein in detail.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
For this reason, the embodiments of the present application provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the image filtering method in any embodiment of the present application are implemented, and specific operations may refer to descriptions of the image filtering method in any embodiment, which are not repeated herein.
Wherein the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the image filtering method in any embodiment of the present application, the beneficial effects that can be achieved by the image filtering method in any embodiment of the present application can be achieved, for details, see the foregoing description, and are not described again here.
The foregoing describes in detail an image filtering method, an image filtering device, a storage medium, and an electronic device provided in the embodiments of the present application, and a specific example is applied in the present application to explain the principles and embodiments of the present application, and the description of the foregoing embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image filtering method, comprising:
acquiring an image to be processed;
down-sampling the image to be processed to obtain an intermediate image, and determining a filtering parameter of the intermediate image;
filtering the intermediate image according to the filtering parameters to obtain a smooth image;
and upsampling the pixel value variance matrix in the smooth image and the filtering parameter, and combining the upsampled results to obtain a filtered target image.
2. The method of claim 1, wherein upsampling the pixel value variance matrix in the smoothed image and the filter parameters and combining the upsampled results to obtain a filtered target image comprises:
respectively carrying out up-sampling on the pixel value variance matrixes in the smooth image and the filtering parameters to obtain a target smooth image corresponding to the smooth image and a target variance matrix corresponding to the pixel value variance matrix;
inputting the target variance matrix into a preset edge feature extraction function, and calculating to obtain an edge feature matrix;
and adding the pixel value matrix of the target smooth image and the edge characteristic matrix, and calculating to obtain a filtered target image.
3. The method of claim 2, wherein determining the filtering parameters of the intermediate image comprises:
sequentially extracting each pixel point in the intermediate image as a target pixel point;
calculating a neighborhood pixel value mean value and a neighborhood pixel value variance of the target pixel point according to the first pixel value of the target pixel point and a second pixel value of a neighborhood pixel point corresponding to the target pixel point; the distance between the neighborhood pixel point and the target pixel point is smaller than or equal to a preset radius;
and arranging the neighborhood pixel value mean of each target pixel point according to the position of the target pixel point to form a pixel value mean matrix, and arranging the neighborhood pixel value variance of each target pixel point according to the position of the target pixel point to form a pixel value variance matrix, wherein the pixel value mean matrix and the pixel value variance matrix are filtering parameters of the intermediate image.
4. The method of claim 3, wherein calculating a neighborhood pixel value mean and a neighborhood pixel value variance of the target pixel point according to the first pixel value of the target pixel point and the second pixel value of a neighborhood pixel point corresponding to the target pixel point comprises:
inputting the first pixel value of the target pixel point and the second pixel value of the neighborhood pixel point corresponding to the target pixel point into a preset mean value calculation formula, and calculating to obtain a neighborhood pixel value mean value of the target pixel point;
inputting a first pixel value of the target pixel point and a second pixel value of a neighborhood pixel point corresponding to the target pixel point into a preset variance calculation formula, and calculating to obtain a neighborhood pixel value variance of the target pixel point;
wherein the mean calculation formula is μ_i = S(I_i)/(2r+1)^2;

the variance calculation formula is α_i = S(I_i^2)/(2r+1)^2 - μ_i^2;

the μ_i represents the neighborhood pixel value mean of the ith target pixel point in the intermediate image, the I_i is the ith target pixel point in the intermediate image, the r is the preset radius, the S(I_i) is the sum of the first pixel value of the ith target pixel point and the second pixel values of the neighborhood pixel points corresponding to the ith target pixel point, the α_i is the neighborhood pixel value variance of the ith target pixel point in the intermediate image, and the S(I_i^2) is the sum of the square of the first pixel value of the ith target pixel point and the squares of the second pixel values of the neighborhood pixel points corresponding to the ith target pixel point.
5. The method of claim 1, wherein the filtering the intermediate image according to the filtering parameters to obtain a smoothed image comprises:
setting a filtering kernel according to the pixel value mean matrix and the pixel value variance matrix in the filtering parameters;
filtering the intermediate image by the filtering kernel to obtain a smooth image;
wherein the filtering kernel is:
w_ij = (1/W) * [1 + (I_i - μ_i)(I_j - μ_i) / (α_i + ε_1)],
wherein I_j is the pixel value of any pixel point in the window of the filtering kernel when the i-th pixel point in the intermediate image is taken as the window center of the filtering kernel; w_ij is the weight corresponding to the pixel whose pixel value is I_j when the i-th pixel point in the intermediate image is taken as the window center of the filtering kernel; W is the window area size of the filtering kernel; I_i is the pixel value of the i-th pixel point in the intermediate image; μ_i is the mean of the pixel values of the pixel points contained in the window when the i-th pixel point is the window center; α_i is the variance of the pixel values of the pixel points contained in the window when the i-th pixel point is the window center; and ε_1 is the second adjustment parameter.
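A direct, unoptimized reading of this kernel is sketched below: for each window center i, every pixel j in the window receives the weight above, and the smoothed value is the resulting weighted sum. This is only an illustration under assumed names (including eps1 standing in for ε_1); it is not the patented implementation, and a practical version would rely on box-filter identities such as those sketched earlier rather than explicit loops.

```python
import numpy as np

def kernel_filter(img, mu, alpha, r, eps1=1e-2):
    """Apply w_ij = (1/W) * [1 + (I_i - mu_i)(I_j - mu_i) / (alpha_i + eps1)] per window."""
    img = img.astype(np.float32)
    h, w = img.shape
    W = (2 * r + 1) ** 2                      # window area size of the filtering kernel
    padded = np.pad(img, r, mode="edge")      # replicate borders so every window is full
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * r + 1, x:x + 2 * r + 1]   # pixel values I_j
            weights = (1.0 / W) * (
                1.0 + (img[y, x] - mu[y, x]) * (window - mu[y, x]) / (alpha[y, x] + eps1)
            )
            out[y, x] = np.sum(weights * window)                # weighted smoothing
    return out
```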
6. The method according to any one of claims 1-5, wherein prior to said acquiring the image to be processed, the method further comprises:
inputting an initial image into a preset interest region identification model for processing, and outputting an interest region contained in the initial image;
and dividing the initial image according to the edge of the interest region to obtain an image to be processed containing the interest region.
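One plausible way to carry out this pre-processing is to crop the initial image to the bounding box of the region of interest returned by the identification model. The mask-based sketch below is an assumption, since the claims name neither a specific model nor a division rule; roi_mask is taken to be the model's binary output.

```python
import numpy as np

def crop_to_roi(initial_image, roi_mask):
    """Cut the image to be processed out of the initial image along the ROI bounding box."""
    ys, xs = np.nonzero(roi_mask)             # roi_mask: assumed binary output of the ROI model
    if ys.size == 0:
        return initial_image                  # no region of interest found: keep the whole image
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1
    return initial_image[top:bottom, left:right]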
7. The method according to claim 1, wherein the down-sampling the image to be processed to obtain an intermediate image, and determining the filtering parameters of the intermediate image comprises:
acquiring three single-channel images corresponding to the image to be processed;
respectively carrying out downsampling on the three single-channel images to obtain three intermediate images corresponding to the three single-channel images respectively, and determining three filtering parameters corresponding to the three single-channel images respectively;
the upsampling the smooth image and the pixel value variance matrix in the filtering parameters and combining the upsampled results to obtain a filtered target image includes:
respectively carrying out up-sampling on the three smooth images to obtain target smooth images corresponding to the three single-channel images;
respectively carrying out up-sampling on each pixel value variance matrix in the three filtering parameters to obtain target variance matrices corresponding to the three single-channel images;
respectively combining the target smooth image and the target variance matrix corresponding to the three single-channel images to obtain sub-target images corresponding to the three single-channel images;
and combining the three sub-target images to obtain a filtered target image.
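Tying the pieces together, a per-channel pipeline of the kind this claim describes might look as follows. It reuses the illustrative helpers sketched above (neighborhood_stats, kernel_filter, combine_upsampled); the 0.5 down-sampling factor and the parameter values are assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def filter_color_image(img_bgr, scale=0.5, r=4):
    """Split into three single-channel images, filter each at low resolution, then merge."""
    h, w = img_bgr.shape[:2]
    sub_targets = []
    for channel in cv2.split(img_bgr):                        # three single-channel images
        small = cv2.resize(channel, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)      # down-sampling -> intermediate image
        mu, alpha = neighborhood_stats(small, r)               # filtering parameters
        smooth_small = kernel_filter(small, mu, alpha, r)      # smooth image
        sub_target = combine_upsampled(smooth_small, alpha, (h, w))  # per-channel sub-target image
        sub_targets.append(sub_target)
    return cv2.merge(sub_targets)                              # filtered target image
```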
8. An image filtering apparatus, comprising:
an acquisition unit for acquiring an image to be processed;
the down-sampling unit is used for down-sampling the image to be processed to obtain an intermediate image and determining a filtering parameter of the intermediate image;
the filtering unit is used for filtering the intermediate image according to the filtering parameters to obtain a smooth image;
and the upsampling unit is used for upsampling the smooth image and the pixel value variance matrix in the filtering parameters and combining the upsampled results to obtain a filtered target image.
9. An electronic device, characterized in that the electronic device comprises a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing the steps in the image filtering method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps in the image filtering method according to any one of claims 1 to 7.
CN202110755665.3A 2021-07-05 2021-07-05 Image filtering method and device, electronic equipment and computer readable storage medium Pending CN115587937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110755665.3A CN115587937A (en) 2021-07-05 2021-07-05 Image filtering method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110755665.3A CN115587937A (en) 2021-07-05 2021-07-05 Image filtering method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115587937A true CN115587937A (en) 2023-01-10

Family

ID=84771603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110755665.3A Pending CN115587937A (en) 2021-07-05 2021-07-05 Image filtering method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115587937A (en)

Similar Documents

Publication Publication Date Title
CN108205804B (en) Image processing method and device and electronic equipment
US8594451B2 (en) Edge mapping incorporating panchromatic pixels
US20140368891A1 (en) Method for detecting a document boundary
EP3644599B1 (en) Video processing method and apparatus, electronic device, and storage medium
US10515438B2 (en) System and method for supporting image denoising based on neighborhood block dimensionality reduction
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN109214996B (en) Image processing method and device
CN110503704B (en) Method and device for constructing three-dimensional graph and electronic equipment
CN112889069A (en) Method, system, and computer readable medium for improving low-light image quality
CN111861938B (en) Image denoising method and device, electronic equipment and readable storage medium
US11151693B2 (en) Image processing apparatus and image processing method for noise reduction
CN112150371A (en) Image noise reduction method, device, equipment and storage medium
CN113744294B (en) Image processing method and related device
WO2022016326A1 (en) Image processing method, electronic device, and computer-readable medium
JP2024037722A (en) Content-based image processing
CN112907468A (en) Image noise reduction method, device and computer storage medium
WO2021102704A1 (en) Image processing method and apparatus
CN115587937A (en) Image filtering method and device, electronic equipment and computer readable storage medium
CN111583111B (en) Dynamic range image compression method, computer equipment and storage device
CN115761827A (en) Cosmetic progress detection method, device, equipment and storage medium
WO2020124355A1 (en) Image processing method, image processing device, and unmanned aerial vehicle
CN111178289B (en) Method and system for shortening iris recognition time consumption
CN111986095A (en) Image processing method and image processing device based on edge extraction
US20170289404A1 (en) Joint edge enhance dynamic
EP4246428A1 (en) Perspective method for physical whiteboard and generation method for virtual whiteboard

Legal Events

Date Code Title Description
PB01 Publication