CN113643198A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113643198A
Authority
CN
China
Prior art keywords
image block
neighborhood
pixel
central
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110828277.3A
Other languages
Chinese (zh)
Inventor
胥立丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Haining Eswin IC Design Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Haining Eswin IC Design Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd and Haining Eswin IC Design Co Ltd
Priority to CN202110828277.3A
Publication of CN113643198A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows


Abstract

The application provides an image processing method, an image processing device, an electronic device and a storage medium, wherein the image processing method comprises the following steps: acquiring a central image block and a neighborhood image block in at least one region of an image, wherein the neighborhood image block is adjacent to the central image block; calculating the distance between the neighborhood image block and the central image block; finding out the weight corresponding to the distance from a preset table, and using the weight as the weight of the neighborhood image block, wherein the preset table is used for representing the corresponding relation between different distances and weights; and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block and the pixel value of the central image block, so that the complexity of image noise reduction calculation can be simplified, and the cost of a hardware circuit is further reduced.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of digital image processing technology and semiconductor chips, people can take pictures with various shooting devices (such as digital cameras and mobile phones) and obtain high-resolution pictures or videos. Among the various shooting devices, a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor is mainly used to acquire high-resolution pictures or videos.
Due to inherent hardware limitations of the CMOS image sensor, pictures shot by such devices in many situations contain severe luminance and chrominance noise. To improve picture quality, the noise of the original picture inside the shooting device (i.e., the Bayer image) is generally suppressed directly, so that a higher-quality image can be obtained. For example, the Bayer image may be denoised by Non-Local Means (NLM) filtering to obtain a higher-quality image.
However, conventional image noise reduction algorithms have high computational complexity. Accordingly, when such an algorithm is implemented as an Application-Specific Integrated Circuit (ASIC), the hardware requirements for the ASIC are high, which undoubtedly increases its cost.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, so as to reduce the computational complexity of image noise reduction and thereby reduce the cost of an ASIC.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
A first aspect of the present application provides an image processing method, including: acquiring a central image block and a neighborhood image block in at least one region of an image, wherein the neighborhood image block is adjacent to the central image block; calculating the distance between the neighborhood image block and the central image block; finding out the weight corresponding to the distance from a preset table and using it as the weight of the neighborhood image block, wherein the preset table is used for representing the corresponding relation between different distances and weights; and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block and the pixel value of the central image block.
A second aspect of the present application provides an image processing apparatus, the apparatus comprising: a receiving module, configured to acquire a central image block and a neighborhood image block in at least one region of an image, wherein the neighborhood image block is adjacent to the central image block; a calculation module, configured to calculate the distance between the neighborhood image block and the central image block; a searching module, configured to find out the weight corresponding to the distance from a preset table and use it as the weight of the neighborhood image block, wherein the preset table is used for representing the corresponding relation between different distances and weights; and a filtering module, configured to filter the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block and the pixel value of the central image block.
A third aspect of the present application provides an electronic device comprising: a processor, a memory, a bus; the processor and the memory complete mutual communication through the bus; the processor is for invoking program instructions in the memory for performing the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium comprising: a stored program; wherein the program, when executed, controls an apparatus in which the storage medium is located to perform the method of the first aspect.
Compared with the prior art, according to the image processing method provided by the first aspect of the present application, after the central image block and the neighborhood image blocks in at least one region of the image are obtained, the distance between each neighborhood image block and the central image block is calculated, the weight of each neighborhood image block is then found from a preset table based on that distance, and finally the pixels in the central image block are filtered based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block and the pixel value of the central image block. When determining the weight corresponding to each image block based on the distance between image blocks, instead of calculating the weight of each image block one by one with a Gaussian function, the weight corresponding to each distance is directly found from the preset table by table lookup. Because a table lookup is simpler than evaluating a function, the complexity of the image noise reduction calculation can be reduced, and the cost of a hardware circuit can be further reduced.
The image processing apparatus provided by the second aspect, the electronic device provided by the third aspect, and the computer-readable storage medium provided by the fourth aspect of the present application have the same or similar advantageous effects as the image processing method provided by the first aspect.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a first flowchart illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a first schematic diagram of each image block in the embodiment of the present application;
FIG. 3 is a second schematic diagram of each image block in the embodiment of the present application;
FIG. 4 is a first diagram of a default table in an embodiment of the present application;
FIG. 5 is a second flowchart illustrating an image processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a sliding window not completely located within an image according to an embodiment of the present application;
fig. 7 is a schematic diagram of a certain neighborhood image block and a center image block in the embodiment of the present application;
FIG. 8 is a first diagram illustrating several pixel location templates in an embodiment of the present application;
FIG. 9 is a second diagram of a default table in the embodiment of the present application;
FIG. 10 is a third schematic diagram of each image block in the embodiment of the present application;
FIG. 11 is a fourth schematic diagram of each image block in the embodiment of the present application;
FIG. 12 is a second drawing illustrating several pixel location templates in an embodiment of the present application;
FIG. 13 is a first schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 14 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
In the prior art, when the image needs to be subjected to noise reduction processing, generally, an image noise reduction algorithm is adopted to process the image so as to obtain a high-quality image. However, the operation process of the image noise reduction algorithm is complicated. Moreover, the image noise reduction algorithm is finally realized by depending on a hardware circuit. This results in a need for higher performance hardware circuitry to support image noise reduction processing by the image noise reduction algorithm, thereby increasing the cost of the hardware circuitry.
Through extensive research, the applicant has found that the operation process of existing image noise reduction algorithms, especially the NLM algorithm, is complex for the following reasons. First, the Euclidean distance is used when calculating the distance between image blocks in an image; the calculation of the Euclidean distance is cumbersome and requires multiple multiplications, which undoubtedly requires enhancing the performance of the hardware circuit. Second, a Gaussian function is adopted when calculating the weight corresponding to each image block based on the distance between image blocks; the Gaussian function involves exponential calculation, which likewise requires enhancing the performance of the hardware circuit and thus increases its cost.
In view of this, embodiments of the present application provide an image processing method in which, when determining the weights corresponding to image blocks based on the distances between the image blocks, the weights are obtained by looking up a table instead of being calculated one by one using a Gaussian function. In this table, the weight corresponding to each distance is calculated in advance. In the actual process of denoising an image, when the weight corresponding to a certain image block is needed, the weight corresponding to that image block's distance can be looked up in the table. Because a table lookup is simpler than a function calculation, the image processing method provided by the embodiments of the present application can reduce the complexity of the image noise reduction calculation, and further reduce the cost of a hardware circuit.
It should be noted here that, in practical applications, the image processing method provided by the embodiment of the present application is mainly applied to processing of a Bayer image. Of course, the image processing method provided by the embodiment of the application can also be applied to processing other images. The type of other images is not limited herein.
Next, the image processing method provided in the embodiments of the present application will be described in detail.
Fig. 1 is a first schematic flowchart of an image processing method in an embodiment of the present application, and referring to fig. 1, the method may include:
s101: a central image block and a neighborhood image block in at least one region of an image are acquired.
Wherein the neighborhood image block is adjacent to the central image block.
When the image needs to be subjected to noise reduction processing, first, an image to be processed is acquired. Then, at least one region in the image is determined. When performing noise reduction processing on an image, processing is not performed on all regions of the image together, but processing is performed on each of a plurality of regions, and therefore, at least one region in the image needs to be determined so as to perform processing on each region. And finally, acquiring a central image block and a neighborhood image block in at least one region. Since each area is processed subsequently, each area needs to be divided, and each divided area includes a plurality of image blocks, i.e., a central image block and a neighborhood image block.
Fig. 2 is a first schematic diagram of each image block in the embodiment of the present application. Referring to fig. 2, in an image X, a region Y1 is determined using a sliding window. The size of the sliding window is 7 × 7, i.e., the size of the region Y1 is also 7 × 7. The region Y1 is divided into 9 image blocks of size 3 × 3, i.e., image blocks A0, A1, A2, A3, A4, A5, A6, A7, A8, where image blocks adjacent in the horizontal and vertical directions overlap by 1 pixel. Here, the image block A4 is the central image block, and the image blocks A0, A1, A2, A3, A5, A6, A7, A8 are the neighborhood image blocks. Obtaining each image block is equivalent to obtaining the pixel value of each pixel point in the block; the pixel value may be a gray value, or a value corresponding to the Red (R), Green (G), or Blue (B) channel. The specific category of pixel values is not limited herein.
In practical applications, for a region of an image, the number of central image blocks is 1, and the number of neighborhood image blocks may be 1 or more. The number of neighborhood image blocks and their positions relative to the central image block are not specifically limited and may be set according to actual requirements. In fig. 2, the number of neighborhood image blocks A0, A1, A2, A3, A5, A6, A7, A8 is 8, and they are uniformly distributed around the central image block A4.
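As a concrete sketch of this block layout, the following snippet extracts the nine overlapping 3 × 3 blocks from a 7 × 7 region (assuming NumPy; `extract_blocks` and all other names are illustrative, not from the patent):

```python
import numpy as np

def extract_blocks(region):
    """Split a 7x7 region into nine 3x3 blocks laid out on a stride-2 grid,
    so horizontally and vertically adjacent blocks overlap by 1 pixel."""
    assert region.shape == (7, 7)
    blocks = []
    for r in (0, 2, 4):        # top-left row offset of each block
        for c in (0, 2, 4):    # top-left column offset of each block
            blocks.append(region[r:r + 3, c:c + 3])
    return blocks              # blocks[4] is the central block A4

region = np.arange(49).reshape(7, 7)
blocks = extract_blocks(region)
print(blocks[4])  # the central 3x3 block (rows 2-4, cols 2-4 of the region)
```

With stride 2 and block size 3, column 2 of block A0 coincides with column 0 of block A1, which is exactly the 1-pixel overlap described above.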
S102: and calculating the distance between the neighborhood image block and the central image block.
That is, it is equivalent to calculating the similarity between the pixel values in the neighborhood image block and the pixel values in the center image block.
Taking the calculation of the similarity between a certain neighborhood image block and the central image block as an example, since there is not only one pixel but a plurality of pixels in the neighborhood image block and the central image block, the similarity between the neighborhood image block and the central image block needs to be determined based on the similarity between each pixel in the neighborhood image block and the pixel in the corresponding position in the central image block.
Fig. 3 is a second schematic diagram of each image block in the embodiment of the present application. Referring to fig. 3, when calculating the similarity between the neighborhood image block A0 and the central image block A4, the neighborhood image block A0 contains 9 pixel points, i.e., q0, q1, q2, q3, q4, q5, q6, q7, q8, and the central image block A4 also contains 9 pixel points, i.e., p0, p1, p2, p3, p4, p5, p6, p7, p8. Thus, the difference between the pixel values of q0 and p0 is calculated, then the difference between the pixel values of q1 and p1, and so on. The squares of these differences are summed and averaged to obtain the similarity between the neighborhood image block A0 and the central image block A4, i.e., the distance between them. The distances between the neighborhood image blocks A1, A2, A3, A5, A6, A7, A8 and the central image block A4 are calculated in the same way as for A0, and the description is omitted here.
The above calculation of the distance between the neighborhood image block and the central image block adopts a calculation manner of the square of the euclidean distance, and a specific calculation formula is shown as the following formula (1):
d²(p, q) = (1/K) Σ_k (u(p_k) - u(q_k))²    (1)
wherein d represents the Euclidean distance, p represents the central image block, q represents the neighborhood image block, k indexes the pixel points in the image blocks, K is the number of pixel points in each block, and u(·) denotes the pixel value of a pixel point.
Of course, other distance calculation methods may also be adopted to calculate the distance between the neighborhood image block and the central image block; the specific method is not limited herein.
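A minimal sketch of the block distance of formula (1), i.e. the mean of the squared pixel differences at corresponding positions (assuming NumPy; names and toy values are illustrative):

```python
import numpy as np

def block_distance(p, q):
    """Squared-Euclidean block distance of formula (1): the mean of the
    squared pixel-value differences at corresponding positions."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return float(np.mean((p - q) ** 2))

a0 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # neighbourhood block (toy values)
a4 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]])  # central block; one pixel differs
print(block_distance(a0, a4))  # (10 - 9)^2 / 9, i.e. about 0.111
```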
S103: and finding out the weight corresponding to the distance from the preset table, and using the weight as the weight of the neighborhood image block.
The preset table is used for representing the corresponding relation between different distances and weights.
The primary purpose of calculating the distance between the neighborhood image block and the central image block is to determine the weight of the neighborhood image block relative to the central image block, and therefore, after the distance between the neighborhood image block and the central image block is calculated, the weight of the neighborhood image block needs to be determined according to the distance between the neighborhood image block and the central image block.
In the prior art, based on the distance between a neighborhood image block and a central image block, a gaussian function is adopted to calculate the weight of the neighborhood image block. The specific calculation formula is shown in the following formula (2):
w_i = exp( -max(d_i² - 2σ², 0) / h² )    (2)
wherein w_i represents the weight of neighborhood image block i, and d_i represents the distance between neighborhood image block i and the central image block. σ represents the standard deviation of the noise; σ needs to be calibrated with a standard image under different illumination for each image sensor, i.e., the σ used corresponds to the image sensor that produced the image to be processed. h denotes a filter coefficient that is positively correlated with σ, i.e., h = kσ, where k is generally a coefficient in (0.3, 1).
According to this formula, the smaller the distance between the neighborhood image block and the central image block, the larger the weight of the neighborhood image block relative to the central image block. When the square of the distance between the neighborhood image block and the central image block is less than or equal to 2σ², the weight of the neighborhood image block relative to the central image block is 1. The weight corresponding to the central image block itself is also 1.
In the prior art, the weights of the neighborhood image blocks relative to the central image block are calculated one by one from the distances between the neighborhood image blocks and the central image block, and this calculation process is complex because it involves exponential calculation. Therefore, in the embodiment of the present application, the weights of the neighborhood image blocks are not calculated one by one from the distances; instead, the weight corresponding to the distance between each neighborhood image block and the central image block is looked up directly in a preset table, in which the weights corresponding to various distances are stored in advance, and is used as the weight of that neighborhood image block. A table lookup is simpler than an exponential calculation.
Fig. 4 is a first schematic diagram of a preset table in the embodiment of the present application. Referring to fig. 4, the preset table stores the correspondence between a plurality of distances d1, d2, ..., dn and the corresponding weights w1, w2, ..., wn. After obtaining the distance d2 between a neighborhood image block and the central image block, the weight w2 of the neighborhood image block can be obtained by looking up the preset table. One exponential calculation is thereby avoided, and the calculation of the weight is simplified.
It should be noted that the weights corresponding to the various distances in the preset table are calculated in advance and stored. The calculation of the weight by the distance may adopt various existing manners of calculating the weight of the image block, such as: gaussian function, etc. The specific calculation method of the weight is not limited here.
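As an illustration of how such a preset table might be built and queried, here is a minimal sketch in which the table entries are precomputed with the Gaussian of formula (2). The table size, quantization step, and the σ and h values are assumptions for illustration, not taken from the patent:

```python
import numpy as np

SIGMA = 10.0        # noise standard deviation (sensor-dependent; value assumed)
H = 0.6 * SIGMA     # filter coefficient h = k*sigma with k = 0.6, k in (0.3, 1)
STEP = 4.0          # quantization step for the squared distance (assumption)
N_ENTRIES = 256     # table size (assumption)

# Precompute the table once, offline; at run time the exponential of
# formula (2) is replaced by a simple indexed lookup.
_d2 = np.arange(N_ENTRIES) * STEP
WEIGHT_TABLE = np.exp(-np.maximum(_d2 - 2.0 * SIGMA ** 2, 0.0) / H ** 2)

def lookup_weight(d_squared):
    """Map a squared block distance to its precomputed weight."""
    idx = min(int(d_squared / STEP), N_ENTRIES - 1)
    return float(WEIGHT_TABLE[idx])

print(lookup_weight(0.0))    # squared distances <= 2*sigma^2 give weight 1
```

In a hardware realization, such a table could be a small ROM, and with STEP chosen as a power of two the index computation reduces to a shift, which matches the cost argument made above.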
S104: and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
After the weight of the neighborhood image block relative to the central image block is obtained, the pixel value of the neighborhood image block, the pixel value of the central image block and the weight (generally 1) are combined, and the pixel value of the target pixel point in the central image block can be obtained in a weighted average mode, namely, the filtering processing of the target pixel point is realized. The specific calculation formula is shown in the following formula (3):
u'(A_n) = (1/C) Σ_i w_i u(A_i)    (3)
wherein u'(A_n) represents the pixel value of the target pixel point in the central image block after filtering, u(A_i) represents the pixel values at the corresponding positions in the central image block and all the neighborhood image blocks before filtering, w_i represents the weights corresponding to the central image block and all the neighborhood image blocks, and C = Σ_i w_i is the weight normalization value.
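A minimal sketch of the weighted average of formula (3), with the centre pixel given weight 1 as described above (assuming NumPy; names and values are illustrative):

```python
import numpy as np

def filter_center_value(center_val, neighbor_vals, neighbor_weights):
    """Weighted average of formula (3): the centre value (weight 1) combined
    with the neighbourhood values, normalized by C = sum of all weights."""
    values = np.concatenate(([center_val], neighbor_vals)).astype(np.float64)
    weights = np.concatenate(([1.0], neighbor_weights)).astype(np.float64)
    return float(np.sum(weights * values) / np.sum(weights))

# A dissimilar neighbour (weight 0) is excluded; similar ones pull the average:
print(filter_center_value(100, [104, 96, 250], [1.0, 1.0, 0.0]))  # prints 100.0
```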
As can be seen from the above, in the image processing method provided in this embodiment of the present application, after a central image block and the neighborhood image blocks in at least one region of an image are obtained, the distance between each neighborhood image block and the central image block is calculated, the weight of each neighborhood image block is then found from a preset table based on that distance, and finally the pixels in the central image block are filtered based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block, and the pixel value of the central image block. When determining the weight corresponding to each image block based on the distance between image blocks, instead of calculating the weight of each image block one by one with a Gaussian function, the weight corresponding to each distance is directly found from the preset table by table lookup. Because a table lookup is simpler than evaluating a function, the complexity of the image noise reduction calculation can be reduced, and the cost of a hardware circuit can be further reduced.
Further, as a refinement and an extension of the method shown in fig. 1, an embodiment of the present application further provides an image processing method. Fig. 5 is a second flowchart of an image processing method in an embodiment of the present application, and referring to fig. 5, the method may include:
s501: a target region of the image is determined.
When performing noise reduction processing on an image, noise reduction processing is not directly performed on the entire image, but noise reduction processing is performed on each region in the image, and therefore, it is first necessary to identify one region in the image and perform noise reduction processing on the region. When the noise reduction processing is completed for all the regions in the image, the noise reduction processing of the image is completed.
The target area may specifically be determined in the image by means of a sliding window. Still referring to fig. 2, in the image X, the region Y1 is a target area determined using the sliding window. Of course, by continuing to move the sliding window, regions Y2, Y3, and so on can also be determined. The specific number of target areas can be determined according to the size of the image and the size of the sliding window, as long as the determined target areas together cover the whole image. In order to process the image quickly and comprehensively, the target areas determined by the sliding window may be non-overlapping.
When the pixel to be filtered is located at the image boundary (that is, when the pixel to be filtered is located at the center of the sliding window but the sliding window does not lie entirely within the image), in order to still perform noise reduction on the content of the sliding window, the missing part of the window may be filled in from the whole image, after which the target region to be processed is obtained.
Fig. 6 is a schematic diagram of a sliding window not completely located in an image in the embodiment of the present application, and referring to fig. 6, a region 6011 in the sliding window 601 is located in the image X, and a region 6012 in the sliding window 601 is not located in the image X. In order to perform noise reduction processing on the target region in the sliding window 601, it is necessary to fill in the image missing in the region 6012.
In practical applications, the part of the sliding window that is not on the image can be filled in a mirrored manner, i.e., the image is flipped with the image edge inside the sliding window as the folding axis, thereby filling the missing part of the sliding window. Of course, other ways of filling the part of the sliding window that is not on the image are also possible. The specific filling manner is not limited herein.
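A minimal sketch of such mirror filling, assuming NumPy: `np.pad` with `mode='reflect'` flips the image about its border pixels, so a 7 × 7 window centred on any pixel of the padded image is fully populated (the names and the toy image are illustrative):

```python
import numpy as np

HALF = 3  # half-width of the 7x7 sliding window

def padded(image):
    """Mirror the image about its borders so that a 7x7 window centred on
    any original pixel, including boundary pixels, is fully populated."""
    return np.pad(image, HALF, mode='reflect')

img = np.arange(16).reshape(4, 4)
pim = padded(img)
print(pim.shape)  # (10, 10): 3 mirrored rows/columns added on every side
```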
S502: and acquiring a central image block and a neighborhood image block in the target area.
Step S502 is the same as step S101, and is not described herein again.
S503: judging whether the target area is a texture detail area; if yes, executing S504; if not, the target area can be considered as a flat area, then S505 is executed.
Of course, in other embodiments, it may also be determined whether the target area is a flat area; if yes, go to S505; if not, the target area can be considered as the texture detail area, then S504 is executed.
Different types of regions in the image have different emphasis points when denoising is performed, and for texture detail regions, detail information in the image needs to be reserved while denoising is performed, so that the denoising strength is not suitable to be too large. For the flat area, the detail information is not much, so the de-noising can be emphasized.
When determining whether the target region belongs to the flat region or the texture detail region, still referring to fig. 2, the specific steps are as follows:
the method comprises the following steps: determining a central image block A in a target area4Domain image block A0、A1、A2、A3、A5、A6、A7、A8A pixel value of (a);
step two: selecting a maximum pixel value max _ val and a minimum pixel value min _ val from the 9 pixel values;
step three: calculating the difference diff between the maximum pixel value max _ val and the minimum pixel value min _ val as max _ val-min _ val;
step four: comparing the difference diff with a preset value diff _ threshold; if the difference is smaller than the preset value, namely diff is smaller than diff _ threshold, determining that the target area is a flat area; and if the difference is greater than or equal to a preset value, namely diff is greater than or equal to diff _ threshold, determining that the target area is a texture detail area.
The method for judging the type of the target area is simple and convenient. Of course, other ways may also be used to determine whether the target region belongs to the flat region or the texture detail region, and for the specific determination way, this is not limited here.
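Steps one to four above can be sketched as a small predicate. The nine representative pixel values (one per image block) and the way they are sampled from each block are assumptions for illustration; the original only requires a max/min difference compared against a threshold.

```python
def is_texture_region(block_values, diff_threshold):
    """Steps one to four of S503: compare the spread of the representative
    pixel values of the central and neighborhood image blocks against a
    threshold. Returns True for a texture detail region, False for flat."""
    max_val = max(block_values)
    min_val = min(block_values)
    diff = max_val - min_val
    return diff >= diff_threshold
```

A region whose nine values span only a few grey levels is classified as flat; a region containing an edge or texture produces a large spread and is classified as texture detail.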
The central image block does not contain only a single pixel point; it generally contains a plurality of pixel points, and therefore a plurality of central pixel points. Similarly, the neighborhood image block contains a plurality of pixel points rather than a single one, and the number and distribution of the pixel points in the neighborhood image block are generally the same as those in the central image block. A neighborhood image block therefore likewise contains a plurality of neighborhood pixel points.
S504: and calculating pixel differences between the central pixel points and the neighborhood pixel points corresponding to the pixel positions.
That is, when the target area is a texture detail area, detail preservation of the image needs to be considered. Therefore, when the distance between the central image block and the neighborhood image block is calculated, the pixel difference between each central pixel point in the central image block and the neighborhood pixel point at the corresponding pixel position in the neighborhood image block needs to be calculated. The reason why this retains the texture detail information in the central image block during denoising is explained later, in connection with the Chebyshev formula.
Fig. 7 is a schematic diagram of a certain neighborhood image block and a center image block in the embodiment of the present application. As shown in fig. 7, the neighborhood image block includes 9 pixels, namely the neighborhood pixels q0, q1, q2, q3, q4, q5, q6, q7, q8. The central image block likewise includes 9 pixels, namely the central pixels p0, p1, p2, p3, p4, p5, p6, p7, p8. When the distance between the neighborhood image block and the central image block is calculated, since the target region where the neighborhood image block is located is a texture detail region, the pixel difference between each central pixel point and the neighborhood pixel point at the corresponding pixel position needs to be calculated, i.e. p0-q0, p1-q1, p2-q2, p3-q3, p4-q4, p5-q5, p6-q6, p7-q7, p8-q8.
S505: and calculating the pixel difference between the N central pixel points and the neighborhood pixel points corresponding to the pixel positions.
And the numerical value of N is less than the total number of the central pixel points in the central image block.
That is, when the target region is a flat region, detail preservation of the image need not be weighed heavily, and the emphasis can be placed on denoising. Therefore, when the distance between the central image block and the neighborhood image block is calculated, only the pixel differences between a limited number of central pixel points in the central image block and the neighborhood pixel points at the corresponding pixel positions are calculated. The reason why this still denoises the central image block well is explained later, in connection with the Chebyshev formula.
Still referring to fig. 7, when calculating the distance between the neighborhood image block and the central image block, since the target region where they are located is a flat region, only the pixel differences between some of the central pixel points in the central image block and the neighborhood pixel points at the corresponding pixel positions need to be calculated, for example: p1-q1, p3-q3, p4-q4, p5-q5, p7-q7.
Specifically, the selection of which central pixel points are selected from the central image block may have a variety of different selections, which is not limited herein.
Fig. 8 is a first schematic diagram of several pixel position templates in the embodiment of the present application. As shown in fig. 8, an image block including 9 pixels, i.e. 3 × 3, is taken as an example. In 8a, all the pixels in the image block (the shaded pixels) are selected to participate in the distance operation; 8a applies to step S504. In 8b and 8c, only some pixels in the image block (the shaded pixels) participate in the distance calculation; 8b and 8c apply to step S505.
Of course, the positions of the partial pixel points participating in the operation are not limited to those shown in 8b and 8c, and may also be other combinations of positions, for example: the pixel points at the four corners, or the 3 pixel points in the first column, and the like.
S506: and determining a target pixel difference with the maximum pixel difference from the pixel differences between at least one central pixel point and the neighborhood pixel points corresponding to the pixel positions, and taking the target pixel difference as the distance between the neighborhood image block and the central image block.
In the prior art, the Euclidean distance is used to calculate the distance between the neighborhood image block and the central image block, and the Euclidean distance involves many multiplications. For example, still referring to fig. 2 and 3, to calculate the distance between the neighborhood image block A0 and the central image block A4, the squared difference between the neighborhood pixel point q0 and the central pixel point p0, between q1 and p1, … , and between q8 and p8 must each be calculated, which involves 9 multiplications. And in the target area Y1 there are 8 neighborhood image blocks in total, i.e. A0, A1, A2, A3, A5, A6, A7, A8, so 9 × 8 = 72 multiplications are involved in total. This is only for calculating the distances between the neighborhood image blocks and the central image block in one area of the image X; there are many such areas in the image X, so the number of multiplications is enormous.
In order to reduce the computation of the distance, in the embodiment of the present application the Euclidean distance is abandoned and the Chebyshev Distance is used to approximate it. That is to say, the pixel difference with the largest absolute value is selected from the calculated pixel differences between the central pixel points and the neighborhood pixel points at the corresponding pixel positions, and the selected pixel difference is used as the distance between the neighborhood image block and the central image block. The specific calculation formula is shown in the following formula (4):
dChebyshev = maxk |pk - qk|    (4)
wherein dChebyshev represents the Chebyshev distance, i.e. the distance between the neighborhood image block and the central image block; p represents the central image block, q represents the neighborhood image block, and k represents the serial number of a pixel point in the image block.
Here, based on the calculation formula of the euclidean distance and the chebyshev distance, the relationship between the two can be derived as shown in the following formula (5):
dChebyshev(i) = maxk |pk - qk| ≤ dEuclidean(i) = sqrt( Σk |pk - qk|² )    (5)

where i denotes a neighborhood image block.
Assuming that the noise follows a Gaussian distribution, in a flat area of the image most of the 9 pixel differences |pk - qk| obtained above are much smaller than dChebyshev, so the Euclidean distance can be approximated as shown in the following formula (6):

dEuclidean(i) ≈ dChebyshev(i)    (6)
That is to say, using the Chebyshev distance as an approximate replacement for the Euclidean distance when determining the distance between the neighborhood image block and the central image block introduces only a small error and does not reduce the denoising accuracy.
As can be seen from the Chebyshev distance calculation formula, the distance between image blocks depends on the pixel position with the largest pixel difference. The fewer the pixels participating in the calculation, the greater the chance that the image block distance takes a small value, so the obtained weight is larger and the noise reduction strength is greater. Therefore, in a flat area, part of the pixel points in the image block can be selected for the distance operation; the obtained image block distance is relatively small, the corresponding weight is relatively large, and a good noise reduction effect is achieved. In a detail texture region, all pixel points in the image block are selected for the distance operation; the obtained image block distance is relatively large, the corresponding weight is relatively small, excessive noise reduction is avoided, and the texture detail in the image block is not lost. Therefore, the image processing method provided by the embodiment of the application can reduce the computational complexity, thereby reducing the cost of the hardware circuit, and can balance denoising against retaining texture information, thereby obtaining better image quality.
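Formula (4) combined with the pixel position templates of fig. 8 can be sketched as follows. The specific sparse template layout (`CROSS_3x3`) is an illustrative assumption; the original only requires that a subset of positions be used in flat regions and all positions in texture detail regions.

```python
# Pixel position templates over a 3x3 image block:
# the full template corresponds to 8a (texture detail regions),
# the sparse cross is one assumed layout in the spirit of 8b/8c (flat regions).
FULL_3x3 = [(r, c) for r in range(3) for c in range(3)]
CROSS_3x3 = [(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)]

def chebyshev_distance(center_blk, neigh_blk, template):
    """Formula (4): the maximum absolute pixel difference over the
    template positions, used as the block-to-block distance."""
    return max(abs(center_blk[r][c] - neigh_blk[r][c]) for r, c in template)
```

Note that only absolute differences and a running maximum are needed, with no multiplications, which is the hardware saving over the squared Euclidean distance; and a sparse template can only decrease (never increase) the resulting distance, which is why it yields larger weights and stronger noise reduction in flat regions.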
S507: and determining the index number of the neighborhood image block based on the distance between the neighborhood image block and the central image block.
Compared with the square of the Euclidean distance, the value range of the Chebyshev distance is much smaller, that is, the range of the Chebyshev distance is limited. Therefore, all foreseeable Chebyshev distances can be calculated in advance, and the weight corresponding to each Chebyshev distance can be calculated respectively, so that after the distance between a neighborhood image block and the central image block is obtained, the corresponding weight is obtained directly by table lookup based on the obtained distance, replacing calculation with lookup.
In order to further reduce the length of the preset table, the preset table may store the corresponding relationship between the index number and the weight instead of the corresponding relationship between the distance and the weight. After the distance between the neighborhood image block and the central image block is obtained, the index number corresponding to the distance is calculated through an index formula, and then the weight corresponding to the distance is found out in a preset table according to the index number.
Specifically, after the distance between the neighborhood image block and the center image block is obtained, first, a preset distance is subtracted from the distance between the neighborhood image block and the center image block, so as to obtain a distance difference. The preset distance is determined based on the noise standard deviation of the image sensor acquiring the image data and the noise reduction intensity input by the user. Then, the distance difference is shifted right by a preset number of bits to obtain the index number of the neighborhood image block, where the preset number of bits is determined based on the noise standard deviation and the maximum length of the preset table. The index formula may be specifically represented by the following formula (7):
idx = (dChebyshev - dthr) >> rshift    (7)
wherein idx denotes the index number, dChebyshev denotes the Chebyshev distance, dthr denotes the preset distance, and >> rshift denotes a right shift by a preset number of bits. dthr is determined based on the noise standard deviation σ of the image sensor and the noise reduction intensity β input by the user, so the user can adjust the noise reduction intensity of the image by inputting β; rshift is used to reduce the value range of the index and is determined based on the noise standard deviation and the maximum length of the preset table.
For example, assume dthr = 110010 and rshift = 4 (values in binary). After determining dChebyshev = 1100100, according to the index formula, (1100100 - 110010) >> 4 = 110010 >> 4 = 11. And 11 is the index number corresponding to 1100100.
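Formula (7) and the worked example above can be checked directly; the binary literals match the example (110010 = 50, 1100100 = 100, 11 = 3 in decimal).

```python
def index_number(d_chebyshev, d_thr, rshift):
    """Formula (7): idx = (dChebyshev - dthr) >> rshift."""
    return (d_chebyshev - d_thr) >> rshift

# Worked example from the text: dthr = 0b110010, rshift = 4,
# dChebyshev = 0b1100100 gives index number 0b11.
idx = index_number(0b1100100, 0b110010, 4)
```

The right shift coarsens the distance into buckets, so several nearby distances share one table entry, which is what keeps the preset table short.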
S508: and finding out the weight corresponding to the index number of the neighborhood image block from the preset table.
Fig. 9 is a second schematic diagram of a preset table in the embodiment of the present application. Referring to fig. 9, the preset table stores the index numbers idx1, idx2, … , idxn and their corresponding weights w1, w2, … , wn. After obtaining the index number idx2 of a neighborhood image block, the weight w2 of the neighborhood image block can be found through the preset table.
Since the index numbers and the weights corresponding to the index numbers stored in the preset table are calculated in advance, each weight in the preset table needs to be prepared before the image is subjected to the noise reduction processing. The specific calculation steps of the weight are as follows:
the method comprises the following steps: and calculating initial weights corresponding to different distances by adopting a Gaussian function.
Step two: and adjusting the initial weight based on the noise reduction intensity input by the user to obtain weights corresponding to different distances, and storing the weights in a preset table.
In fact, when calculating the weight corresponding to the distance, the above steps one and two may be performed simultaneously. That is, the noise reduction strength is introduced into the gaussian function, and the weight corresponding to each distance is calculated by the gaussian function introduced with the noise reduction strength. The specific calculation formula is shown in the following formula (8):
wi = exp( -(dChebyshev(i))² / (β · σ · h)² )    (8)
wherein w represents the weight, i represents the neighborhood image block, dChebyshev(i) represents the Chebyshev distance between the neighborhood image block and the central image block, β represents the denoising strength input by the user, σ represents the standard deviation of the noise, and h represents a filter coefficient.
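The table preparation (steps one and two combined) can be sketched as follows. Both the exact Gaussian form and the way a representative distance is recovered from each index bucket (by inverting formula (7)) are assumed readings of the text, not the original's exact formulas.

```python
import math

def build_weight_table(num_entries, d_thr, rshift, beta, sigma, h):
    """Precompute one weight per index number before denoising begins.
    The Gaussian with the noise reduction strength folded in,
    w = exp(-(d / (beta * sigma * h))**2), is an assumed reading of (8)."""
    table = []
    for idx in range(num_entries):
        # representative Chebyshev distance of this bucket (inverse of (7))
        d = d_thr + (idx << rshift)
        table.append(math.exp(-(d / (beta * sigma * h)) ** 2))
    return table
```

The table is monotonically decreasing: a larger block distance maps to a larger index and hence a smaller weight, consistent with closer neighborhood blocks contributing more to the filtered pixel.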
S509: and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
After the weights of all neighborhood image blocks are obtained, combined with the pixel values of all neighborhood image blocks, the weight of the central image block (1 by default) and the pixel value of the central image block, the central pixel point of the central image block can be filtered by weighted averaging. The specific calculation is described in detail in step S104 and is not repeated here.
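The weighted average of step S509 can be sketched as follows; the single-value-per-block representation is a simplification for illustration, and the default central weight of 1 follows the text.

```python
def filter_center_pixel(center_val, neigh_vals, neigh_weights, center_weight=1.0):
    """S509: weighted average of the central pixel value and the neighborhood
    representatives; the central image block's weight defaults to 1."""
    num = center_weight * center_val + sum(
        w * v for w, v in zip(neigh_weights, neigh_vals))
    den = center_weight + sum(neigh_weights)
    return num / den
```

When all neighborhood blocks agree with the center (a noiseless flat patch), the filtered value equals the original; when they differ, the output is pulled toward the similar (highly weighted) neighbors, which is the denoising effect.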
The above describes the filtering process for the central pixel of the central image block, i.e. the pixel p4 in fig. 3. However, the image processing method provided in the embodiment of the present application is not limited to filtering the central pixel point of the central image block; it can also filter a pixel point at any position in the central image block, and can even filter a plurality of pixel points in the central image block at the same time. The whole processing procedure follows the above steps S501 to S509; only the differences are described below.
Fig. 10 is a third schematic diagram of each image block in the embodiment of the present application. Referring to fig. 10, the size of the sliding window is 7 × 6. The sliding window includes the pixels A0, B0, A1, B1, A2, B2, A3, B3, A4, B4, A5, B5, A6, B6, A7, B7, A8, B8 and the unmarked pixels around them.
Fig. 11 is a fourth schematic diagram of each image block in the embodiment of the present application. Referring to fig. 11, when the image in the sliding window of fig. 10 needs to be divided into 3 × 3 image blocks, the size of each image block is 2 × 3. The image blocks do not overlap in the horizontal direction and overlap by 1 pixel in the vertical direction. Thus, A0, B0 and the 4 pixels above and below them constitute one image block, and so on, for a total of 9 image blocks. Among the 9 image blocks, A4, B4 and the 4 pixels above and below them constitute the central image block, which for convenience of reference may be called the central image block A4B4. Accordingly, the aforementioned central image block A4 does not mean that the central image block contains only the single pixel A4; it also includes the 8 pixels around it. And A0, B0 and the 4 pixels above and below them, A1, B1 and the 4 pixels above and below them, … , A8, B8 and the 4 pixels above and below them constitute the 8 corresponding neighborhood image blocks.
Within the sliding window, A4 and B4 can be filtered simultaneously. The specific calculation formulas are shown in the following formulas (9) and (10):
u(A4) = (1/C) Σi wi · Ai    (9)

u(B4) = (1/C) Σi wi · Bi    (10)
wherein u(A4) and u(B4) respectively represent the pixel values of A4 and B4 after filtering, Ai and Bi respectively represent the pixel values of all neighborhood image blocks of A4 and B4, wi represents the weight of each neighborhood image block relative to A4 and B4 (Ai and Bi use the same weight, which saves computation), and C = Σ wi is the weight normalization value.
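Formulas (9) and (10) can be sketched together, showing the computational saving of reusing one weight set for both pixels; the single-value-per-block representation of Ai and Bi is a simplification for illustration.

```python
def filter_pair(a_vals, b_vals, weights):
    """Formulas (9) and (10): filter A4 and B4 simultaneously, reusing one
    weight per neighborhood block; C = sum(weights) normalizes both sums."""
    c = sum(weights)                       # weight normalization value C
    u_a = sum(w * a for w, a in zip(weights, a_vals)) / c
    u_b = sum(w * b for w, b in zip(weights, b_vals)) / c
    return u_a, u_b
```

Since the weights (the expensive part, requiring block distances and table lookups) are computed once, filtering the second pixel costs only the extra multiply-accumulate pass.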
Fig. 12 is a second schematic diagram of several pixel position templates in the embodiment of the present application. Referring to fig. 12, since the pixel blocks divided in fig. 11 are 2 × 3 rather than 3 × 3 image blocks, after the target area is determined to belong to a flat area, the pixel position template used when calculating the distance between a neighborhood image block and the central image block changes slightly, but the calculation idea is unchanged: part of the pixel points in the image block are still used for the distance calculation. In 12a, all the pixels in the image block (the shaded pixels) participate in the distance operation; 12a is suitable for distance calculation of image blocks in the detail texture region. In 12b and 12c, only some pixels in the image block (the shaded pixels) participate in the distance operation; 12b and 12c are suitable for distance calculation of image blocks in flat areas.
Based on the same inventive concept, as an implementation of the method, the embodiment of the application further provides an image processing device. Fig. 13 is a schematic structural diagram of an image processing apparatus in an embodiment of the present application, and referring to fig. 13, the apparatus may include:
the receiving module 1301 is configured to obtain a central image block and a neighborhood image block in at least one region of an image, where the neighborhood image block is adjacent to the central image block.
A calculating module 1302, configured to calculate distances between the neighborhood image block and the center image block.
The searching module 1303 is configured to search out a weight corresponding to the distance from a preset table, where the preset table is used to represent corresponding relationships between different distances and weights, and the weight is used as the weight of the neighborhood image block.
A filtering module 1304, configured to perform filtering processing on the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weight of the central image block, and the pixel value of the central image block.
Further, as a refinement and an extension of the apparatus shown in fig. 13, an embodiment of the present application also provides an image processing apparatus. Fig. 14 is a schematic structural diagram of an image processing apparatus in an embodiment of the present application, and referring to fig. 14, the apparatus may include:
the storage module 1401, comprising:
a first calculating unit 1401a, configured to calculate initial weights corresponding to different distances by using a gaussian function.
And a storage unit 1401b, configured to adjust the initial weight based on the noise reduction strength input by the user, obtain weights corresponding to different distances, and store the weights in the preset table.
A first determining module 1402 comprising:
a sliding window unit 1402a for determining a target area in the image using a sliding window.
A filling unit 1402b, configured to, when the target region is not completely located in the image, fill a region located outside the image in the target region based on the image, so as to obtain the at least one filled region.
A receiving module 1403, configured to obtain a central image block and a neighborhood image block in at least one region of an image, where the neighborhood image block is adjacent to the central image block.
The central image block comprises a plurality of central pixel points, and the neighborhood image block comprises a plurality of neighborhood pixel points.
The second determining module 1404 includes:
a first determining unit 1404a configured to determine pixel values of the central image block and pixel values of the neighborhood image blocks.
A selecting unit 1404b configured to select a maximum pixel value and a minimum pixel value from the pixel values of the central image block and the pixel values of the neighborhood image blocks.
A second calculating unit 1404c for calculating a difference value between the maximum pixel value and the minimum pixel value.
A second determining unit 1404d for determining the at least one region as a flat region when the difference is smaller than a preset value; and when the difference value is greater than or equal to a preset value, determining that the at least one area is a detail texture area.
A calculation module 1405, comprising:
the third calculating unit 1405a is configured to calculate a pixel difference between at least one central pixel point and a neighboring pixel point of the corresponding pixel position.
The third calculating unit 1405a is specifically configured to calculate, when the at least one region is a flat region, pixel differences between N central pixel points and neighborhood pixel points at corresponding pixel positions, where a value of the N is smaller than a total number of the central pixel points in the central image block; and when the at least one region is a texture detail region, calculating pixel differences between the central pixel points and neighborhood pixel points corresponding to the pixel positions.
A third determining unit 1405b, configured to determine, from pixel differences between the at least one central pixel point and a neighboring pixel point at a corresponding pixel position, a target pixel difference with a largest pixel difference, and use the target pixel difference as a distance between the neighboring image block and the central image block.
The preset table comprises a corresponding relation between a plurality of index numbers and weights, and one index number in the preset table corresponds to at least one distance.
The lookup module 1406, comprising:
a fourth determining unit 1406a, configured to determine the index number of the neighborhood image block based on the distance between the neighborhood image block and the center image block.
The fourth determining unit 1406a is specifically configured to subtract a preset distance from a distance between the neighboring image block and the center image block to obtain a distance difference, where the preset distance is determined based on a noise standard deviation of an image sensor that acquires the image data and a noise reduction strength input by a user; and shifting the distance difference to the right according to a preset digit to obtain the index number of the neighborhood image block, wherein the preset digit is determined based on the noise standard deviation and the maximum length of the preset table.
The searching unit 1406b is configured to search the weight corresponding to the index number of the neighborhood image block from the preset table.
A filtering module 1407, configured to perform filtering processing on the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block, and the pixel values of the central image block.
Fig. 14 shows the respective modules and the signal flow direction between the respective units in the modules.
It is to be noted here that the above description of the embodiments of the apparatus, similar to the description of the embodiments of the method described above, has similar advantageous effects as the embodiments of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
Based on the same inventive concept, the embodiment of the application also provides the electronic equipment. Fig. 15 is a schematic structural diagram of an electronic device in an embodiment of the present application, and referring to fig. 15, the electronic device may include: a processor 1501, memory 1502, bus 1503; the processor 1501 and the memory 1502 communicate with each other via a bus 1503; the processor 1501 is used to call program instructions in the memory 1502 to perform the methods in one or more of the embodiments described above.
It is to be noted here that the above description of the embodiments of the electronic device, similar to the description of the embodiments of the method described above, has similar advantageous effects as the embodiments of the method. For technical details not disclosed in the embodiments of the electronic device of the present application, refer to the description of the embodiments of the method of the present application for understanding.
Based on the same inventive concept, the embodiment of the present application further provides a computer-readable storage medium, where the storage medium may include: a stored program; wherein the program controls the device on which the storage medium is located to execute the method in one or more of the above embodiments when the program runs.
It is to be noted here that the above description of the storage medium embodiments, like the description of the above method embodiments, has similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a central image block and a neighborhood image block in at least one region of an image, wherein the neighborhood image block is adjacent to the central image block;
calculating the distance between the neighborhood image block and the center image block;
finding out the weight corresponding to the distance from a preset table, wherein the preset table is used for representing the corresponding relation between different distances and weights, and is used as the weight of the neighborhood image block;
and filtering the pixels in the central image block based on the weights of the neighborhood image blocks, the pixel values of the neighborhood image blocks, the weights of the central image block and the pixel values of the central image block.
2. The method according to claim 1, wherein the preset table comprises a corresponding relationship between a plurality of index numbers and weights, and one index number in the preset table corresponds to at least one distance; the finding out the weight corresponding to the distance from a preset table includes:
determining the index number of the neighborhood image block based on the distance between the neighborhood image block and the center image block;
and finding out the weight corresponding to the index number of the neighborhood image block from the preset table.
3. The method of claim 2, wherein determining the index number of the neighborhood image block based on the distance between the neighborhood image block and the center image block comprises:
subtracting a preset distance from the distance between the neighborhood image block and the center image block to obtain a distance difference, wherein the preset distance is determined based on the noise standard deviation of an image sensor for acquiring the image data and the noise reduction intensity input by a user;
and shifting the distance difference to the right according to a preset digit to obtain the index number of the neighborhood image block, wherein the preset digit is determined based on the noise standard deviation and the maximum length of the preset table.
4. The method of claim 1, wherein before the finding the weight corresponding to the distance from the preset table, the method further comprises:
calculating initial weights corresponding to different distances by adopting a Gaussian function;
and adjusting the initial weight based on the noise reduction intensity input by the user to obtain weights corresponding to different distances, and storing the weights in the preset table.
5. The method of any of claims 1 to 4, wherein the central image patch comprises a plurality of central pixels and the neighborhood image patch comprises a plurality of neighborhood pixels; the calculating the distance between the neighborhood image block and the center image block includes:
calculating the pixel difference between at least one central pixel point and the neighborhood pixel point of the corresponding pixel position;
and determining a target pixel difference with the maximum pixel difference from the pixel differences between the at least one central pixel point and the neighborhood pixel points of the corresponding pixel positions, and taking the target pixel difference as the distance between the neighborhood image block and the central image block.
6. The method of claim 5, wherein calculating the pixel difference between at least one central pixel and the neighborhood pixel at the corresponding pixel position comprises:
when the at least one region is a flat region, calculating the pixel differences between N central pixels and the neighborhood pixels at the corresponding pixel positions, wherein N is less than the total number of central pixels in the central image block;
and when the at least one region is a texture detail region, calculating the pixel differences between all central pixels and the neighborhood pixels at the corresponding pixel positions.
7. The method of claim 6, wherein before calculating the pixel difference between the at least one central pixel and the neighborhood pixel at the corresponding pixel position, the method further comprises:
determining the pixel values of the central image block and the pixel values of the neighborhood image block;
selecting a maximum pixel value and a minimum pixel value from the pixel values of the central image block and the pixel values of the neighborhood image block;
calculating the difference between the maximum pixel value and the minimum pixel value;
when the difference is smaller than a preset value, determining that the at least one region is a flat region;
and when the difference is greater than or equal to the preset value, determining that the at least one region is a texture detail region.
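Claims 6 and 7 together can be sketched as a classify-then-compare step: the region type is decided from the pixel-value spread (claim 7), and flat regions compare only N sampled positions while detail regions compare every position (claim 6). The choice of which N positions to sample is an assumption; the patent does not specify it here.

```python
def classify_region(center_block, neighbor_block, threshold):
    """Flat vs. texture-detail decision (sketch of claim 7): compare the
    spread (max - min) over all pixel values of both blocks to a preset value."""
    pixels = list(center_block) + list(neighbor_block)
    return "flat" if max(pixels) - min(pixels) < threshold else "detail"

def block_distance_by_region(center_block, neighbor_block, region, sample_idx):
    """Sketch of claim 6: in a flat region only N sampled positions are
    compared (sample_idx, an illustrative choice); in a texture-detail
    region every position is compared."""
    if region == "flat":
        pairs = [(center_block[i], neighbor_block[i]) for i in sample_idx]
    else:
        pairs = list(zip(center_block, neighbor_block))
    return max(abs(c - n) for c, n in pairs)
```

Skipping pixels in flat regions saves arithmetic exactly where the result is least sensitive to it, which fits the hardware-cost framing of the rest of the claims.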
8. The method of any of claims 1 to 4, wherein before acquiring the central image block and the neighborhood image block in the at least one region of the image, the method further comprises:
determining a target region in the image using a sliding window;
and when the target region is not completely located within the image, filling the part of the target region that lies outside the image based on the image, to obtain the at least one region after filling.
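The border handling of claim 8 can be sketched as follows. The claim says only that out-of-image positions are filled "based on the image"; clamping coordinates to the nearest valid pixel (replicate padding) is one plausible reading, assumed here.

```python
def pad_window(image, h, w, top, left, win):
    """Extract a win-by-win window anchored at (top, left) (sketch of
    claim 8). Positions outside the h-by-w image are filled from the
    image itself by clamping row/column indices to the nearest valid
    pixel (replicate padding - an assumption)."""
    window = []
    for r in range(top, top + win):
        rr = min(max(r, 0), h - 1)  # clamp row into [0, h-1]
        row = []
        for c in range(left, left + win):
            cc = min(max(c, 0), w - 1)  # clamp column into [0, w-1]
            row.append(image[rr][cc])
        window.append(row)
    return window
```

Reflect padding would satisfy the same claim language; the clamp variant is shown only because it is the cheapest to realize in hardware.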
9. An image processing apparatus, characterized in that the apparatus comprises:
a receiving module, configured to acquire a central image block and a neighborhood image block in at least one region of an image, wherein the neighborhood image block is adjacent to the central image block;
a calculation module, configured to calculate the distance between the neighborhood image block and the central image block;
a searching module, configured to look up, in a preset table, the weight corresponding to the distance as the weight of the neighborhood image block, wherein the preset table represents the correspondence between different distances and weights;
and a filtering module, configured to filter the pixels in the central image block based on the weight of the neighborhood image block, the pixel values of the neighborhood image block, the weight of the central image block, and the pixel values of the central image block.
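The filtering module's weighted combination can be sketched as a normalized weighted sum over the center block and all neighborhood blocks, in the spirit of non-local means. The per-position aggregation and the normalization by the weight sum are assumptions consistent with that family of filters, not details quoted from the patent.

```python
def filter_center(center_pixels, center_weight, neighbor_blocks, neighbor_weights):
    """Sketch of the filtering module: each output pixel of the center
    block is the weighted average of the center pixel and the pixels at
    the same position in every neighborhood block, normalized by the
    total weight."""
    out = []
    for pos, c in enumerate(center_pixels):
        acc = center_weight * c
        wsum = center_weight
        for block, w in zip(neighbor_blocks, neighbor_weights):
            acc += w * block[pos]
            wsum += w
        out.append(acc / wsum)  # normalize so weights need not sum to 1
    return out
```

Because similar blocks receive large table weights and dissimilar blocks small ones, the average suppresses noise while leaving dissimilar (edge) content mostly untouched.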
10. An electronic device, comprising: a processor, a memory, a bus;
the processor and the memory communicate with each other via the bus; and the processor is configured to invoke program instructions stored in the memory to perform the method of any of claims 1 to 8.
11. A computer-readable storage medium comprising a stored program, wherein the program, when executed, controls a device on which the storage medium is located to perform the method of any one of claims 1 to 8.
CN202110828277.3A 2021-07-22 2021-07-22 Image processing method, image processing device, electronic equipment and storage medium Pending CN113643198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110828277.3A CN113643198A (en) 2021-07-22 2021-07-22 Image processing method, image processing device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113643198A true CN113643198A (en) 2021-11-12

Family

ID=78417959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110828277.3A Pending CN113643198A (en) 2021-07-22 2021-07-22 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113643198A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262810B1 (en) * 2014-09-03 2016-02-16 Mitsubishi Electric Research Laboratories, Inc. Image denoising using a library of functions
US20160086317A1 (en) * 2014-09-23 2016-03-24 Intel Corporation Non-local means image denoising with detail preservation using self-similarity driven blending
CN107004255A (en) * 2014-09-23 2017-08-01 英特尔公司 The non-local mean image denoising retained with details of mixing is driven using self-similarity
WO2018134128A1 (en) * 2017-01-19 2018-07-26 Telefonaktiebolaget Lm Ericsson (Publ) Filtering of video data using a shared look-up table
CN111784605A (en) * 2020-06-30 2020-10-16 珠海全志科技股份有限公司 Image denoising method based on region guidance, computer device and computer readable storage medium
CN111861938A (en) * 2020-07-30 2020-10-30 展讯通信(上海)有限公司 Image denoising method and device, electronic equipment and readable storage medium
CN111882504A (en) * 2020-08-05 2020-11-03 展讯通信(上海)有限公司 Method and system for processing color noise in image, electronic device and storage medium
CN112508810A (en) * 2020-11-30 2021-03-16 上海云从汇临人工智能科技有限公司 Non-local mean blind image denoising method, system and device
CN112435156A (en) * 2020-12-08 2021-03-02 烟台艾睿光电科技有限公司 Image processing method, device, equipment and medium based on FPGA
CN112884667A (en) * 2021-02-04 2021-06-01 湖南兴芯微电子科技有限公司 Bayer domain noise reduction method and noise reduction system

Similar Documents

Publication Publication Date Title
KR102675217B1 (en) Image signal processor for processing images
US9558543B2 (en) Image fusion method and image processing apparatus
JP6469678B2 (en) System and method for correcting image artifacts
CN110827200A (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal
US20160350900A1 (en) Convolutional Color Correction
CN111563552B (en) Image fusion method, related device and apparatus
WO2018082185A1 (en) Image processing method and device
US20200068151A1 (en) Systems and methods for processing low light images
KR101526031B1 (en) Techniques for reducing noise while preserving contrast in an image
US8284271B2 (en) Chroma noise reduction for cameras
US9619862B2 (en) Raw camera noise reduction using alignment mapping
CN102202162A (en) Image processing apparatus, image processing method and program
CN111784605A (en) Image denoising method based on region guidance, computer device and computer readable storage medium
TWI703872B (en) Circuitry for image demosaicing and enhancement
CN113168669A (en) Image processing method and device, electronic equipment and readable storage medium
CN111429371B (en) Image processing method and device and terminal equipment
CN112384945B (en) Super resolution using natural hand-held motion applied to user equipment
CN108701353B (en) Method and device for inhibiting false color of image
CN104853063B (en) A kind of image sharpening method based on SSE2 instruction set
CN115835034A (en) White balance processing method and electronic equipment
CN113935934A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111080683B (en) Image processing method, device, storage medium and electronic equipment
CN113643198A (en) Image processing method, image processing device, electronic equipment and storage medium
Yamaguchi et al. Image demosaicking via chrominance images with parallel convolutional neural networks
CN111354058B (en) Image coloring method and device, image acquisition equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 263, block B, science and technology innovation center, 128 Shuanglian Road, Haining Economic Development Zone, Haining City, Jiaxing City, Zhejiang Province, 314400

Applicant after: Haining yisiwei IC Design Co.,Ltd.

Applicant after: Beijing ESWIN Computing Technology Co.,Ltd.

Address before: Room 263, block B, science and technology innovation center, 128 Shuanglian Road, Haining Economic Development Zone, Haining City, Jiaxing City, Zhejiang Province, 314400

Applicant before: Haining yisiwei IC Design Co.,Ltd.

Applicant before: Beijing yisiwei Computing Technology Co.,Ltd.
