US20170185863A1 - System and method for adaptive pixel filtering - Google Patents


Info

Publication number
US20170185863A1
Authority
US
United States
Prior art keywords
pixel
pixels
patch
selecting
weight values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/983,150
Other versions
US9710722B1
Inventor
Mahesh Chandra
Antoine Drouot
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
STMICROELECTRONICS INTERNATIONAL NV
STMicroelectronics France SAS
STMicroelectronics International NV
Original Assignee
STMicroelectronics SA
STMicroelectronics International NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US14/983,150 (granted as US9710722B1)
Application filed by STMicroelectronics SA, STMicroelectronics International NV filed Critical STMicroelectronics SA
Assigned to STMICROELECTRONICS INTERNATIONAL N.V. reassignment STMICROELECTRONICS INTERNATIONAL N.V. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHANDRA, MAHESH
Assigned to STMICROELECTRONICS SA reassignment STMICROELECTRONICS SA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DROUOT, ANTOINE
Priority to CN201610465099.1A (CN106937020B)
Priority to CN201911124050.XA (CN110852334B)
Priority to US15/636,294 (US10186022B2)
Publication of US20170185863A1
Publication of US9710722B1
Application granted
Assigned to STMICROELECTRONICS FRANCE reassignment STMICROELECTRONICS FRANCE CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: STMICROELECTRONICS SA
Legal status: Active
Expiration: Adjusted

Classifications

    • G06K 9/4671
    • G06K 9/4604
    • G06T 11/00 2D [two-dimensional] image generation
    • G06T 11/60 Editing figures and text; combining figures or text
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G06T 5/70
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; non-linear local filtering operations, e.g. median filtering
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; image sequence

Definitions

  • Acceleration techniques are often used to lower the complexity of image filters, and improve processing time and power consumption.
  • a common acceleration technique is to lower the size of a search area of an image, such as the search area Ω, or to lower the size of pixel subsets used for difference calculations, such as a target patch 38.
  • FIG. 3 illustrates a search area 36 , which is a portion of the overall image that is used for processing, and the target patch 38 .
  • Lowering the size of the search area or the target patch also reduces the efficiency of the technique and may impact image sharpness and resolution.
  • lowering the size of the search area prevents the removal of lower frequency noise and reduces the ability to find acceptable matching areas, and lowering the size of target patch increases the number of false matching areas.
  • Another common acceleration technique is to use a relatively simple difference calculation for non-local filters. Using a simpler difference calculation has been proven to provide good results; however, the complexity is still too high, and further complexity reduction is advantageous.
  • the present disclosure is directed to an image filter that reduces complexity by reducing a total amount of calculations used for a weighting function of the image filter.
  • the image filter determines weight values for a selected target pixel a in FIG. 3 , and then reuses the determined weight values for other target pixels a+1, b, b+1.
  • the complexity level of the image filter is reduced, and processing time and power consumption are improved. The processing of the image filter will be discussed in further detail with respect to FIGS. 1-6.
  • FIG. 1 is a flow diagram illustrating an example of data flow for an image filter according to one embodiment disclosed herein.
  • an input image is obtained for digital image processing.
  • the input image may be a single image or may be a single frame of a stream of input images, such as a video.
  • the input image may be obtained from a variety of sources, such as an image sensor, a multimedia content provider, memory, or the World Wide Web.
  • the input image is provided to the image filter for processing.
  • the image filter may modify the input image to digitally reduce noise present in the input image and produce a final image for a user. Processing for the image filter will be discussed in further detail with respect to FIGS. 2-6 .
  • In step 14, the image filter has completed processing and a filtered image is obtained.
  • the data flow of FIG. 1 may be repeated for multiple images.
  • the data flow of FIG. 1 may be repeated for real-time processing of multiple images or a stream of input images, such as a video.
  • FIG. 2 is a flow diagram illustrating an example of processing for an image filter according to an embodiment disclosed herein. It is beneficial to review the steps of FIG. 2 simultaneously with FIGS. 3-6 , which are diagrams illustrating examples of modifying pixels of an apply patch according to one embodiment disclosed herein.
  • the image filter obtains an input image.
  • the input image may be a single image or may be a single frame of a stream of input images, such as a video.
  • a target pixel is selected for modification.
  • a target pixel a is selected.
  • Target pixels may be selected at random, by row, by column, or in any predetermined order.
  • target pixels may be selected by starting at an upper left corner of the input image, selecting pixels of a first row from left to right, moving to the next row, and continuing this pattern until reaching a lower right corner of the input image.
  • every other pixel of the input image is selected as a target pixel.
  • the selected target pixel has a corresponding search area, target patch, reference patch, and apply patch.
  • the target pixel a has a corresponding search area 36 , a target patch 38 , a reference patch 40 , and an apply patch 42 .
  • the patches are subsets of pixels within the search area 36 .
  • the search area includes reference pixels that surround the selected target pixel.
  • the reference pixels are used for modification of the target pixel.
  • the target pixel a may be modified by replacing its value with a weighted average of the reference pixels of the search area 36 for noise reduction.
  • the weights of the reference pixels are based on a distance or difference calculation, such as a sum of absolute differences (SAD) or sum of squared differences (SSD).
  • the target patch 38 and the reference patch 40 have the same dimensions and are used to determine a similarity between the selected target pixel and a reference pixel.
  • the similarity between the target pixel and the reference pixel may be computed as a difference value, as each pixel may have a numerical representation and the similarity is a comparison of the numerical representation of each pixel. This can also be referred to as a distance between the target pixel's value and the reference pixel's value, where the distance is not necessarily representative of the physical space between the pixels in the array.
  • the target patch and the reference patch are centered on the target pixel and the reference pixel, respectively.
  • the target patch 38 and the reference patch 40 are used to determine a difference value between the target pixel a and a reference pixel c.
  • Each of the reference pixels within the search area 36 will be used to create a difference value with respect to the target pixel. Accordingly, as each reference pixel in the search area is processed, a reference patch 40 will be associated with the reference pixel being processed. The determination of difference values will be discussed in further detail with respect to step 20 .
  • the apply patch 42 includes the target pixel a and is a subset of the target patch 38 .
  • the apply patch includes additional target pixels a+1, b, b+1 that are modified using previously determined weight values from the target pixel a.
  • the apply patch will be discussed in further detail with respect to steps 26 - 30 .
  • difference values are determined between the selected target pixel and each of the reference pixels of the search area.
  • a difference value between a first pixel (the target pixel a) and a second pixel (reference pixel c) is determined by comparing pixels in a first patch centered on the first pixel (the target patch 38 ) and respective corresponding pixels in a second patch centered on the second pixel (reference patch 40 ).
  • the difference values may be determined using a calculation, such as SAD or SSD. SAD and SSD calculations are well known in the art and will not be discussed in detail in this description.
  • a difference or distance between the target pixel a value and the reference pixel c value is determined by calculating a difference value between the target patch 38 centered on the target pixel a and the reference patch 40 centered on the reference pixel c. Difference values are determined between the target pixel a and each reference pixel in the search area 36 .
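SAD and SSD are only named above, so a minimal sketch may help (illustrative only; patches are flattened to 1-D sequences of pixel values for brevity, not the 5×5 arrays of the figures):

```python
def sad(patch_a, patch_b):
    """Sum of absolute differences between corresponding pixels
    of two equal-sized patches."""
    return sum(abs(x - y) for x, y in zip(patch_a, patch_b))

def ssd(patch_a, patch_b):
    """Sum of squared differences between corresponding pixels."""
    return sum((x - y) ** 2 for x, y in zip(patch_a, patch_b))

# e.g., comparing a flattened target patch with a reference patch
diff = sad([1, 2, 3], [2, 2, 5])
```

Either calculation collapses the per-pixel comparisons into the single difference value used by the weighting function.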
  • the dimensions of the search area 36 , the target patch 38 , and the reference patch 40 shown in FIGS. 3-6 are for illustrative purposes.
  • the search area 36, the target patch 38, and the reference patch 40 may have any size. In a preferred embodiment, the search area is larger than the target patch.
  • a reference pixel is selected, such as reference pixel c in FIG. 3 .
  • a reference patch is identified that corresponds to reference pixel c, for example, reference patch 40 .
  • the target patch 38 and the reference patch 40 have the same dimension, i.e. include the same number of pixels in the same shape.
  • the target patch and the reference patch are both 5×5 arrays of pixels. Each pixel of the target patch is then compared to the corresponding pixel of the reference patch, and the per-pixel comparisons are combined into a single difference value.
  • a weighting function is used to determine weight values for each of the reference pixels of the search area based on its respective difference value from the selected target pixel, as determined in step 20.
  • a weight value is determined for the reference pixel c based on the difference value between it and the target pixel a determined in step 20.
  • Weight values are determined for each reference pixel of the search area 36 .
  • a weight value for a reference pixel is inversely related to its determined difference value. That is, reference pixels that are similar to the selected target pixel (i.e., smaller difference values) are given larger weight values, and vice versa.
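The disclosure does not fix a particular weighting function; one common choice satisfying this inverse relationship is an exponential decay (the bandwidth parameter h below is an illustrative assumption, not taken from the patent):

```python
import math

def weight_from_difference(diff, h=25.0):
    """Map a difference value to a weight: reference pixels similar to
    the target (small diff) get weights near 1, dissimilar reference
    pixels (large diff) get weights near 0.  h controls the falloff."""
    return math.exp(-diff / h)
```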
  • the comparison and determination of distance values as done for the reference pixel c above is performed for every pixel in the search area.
  • the selected target pixel is modified by the image filter using the weight values computed in step 22 .
  • the target pixel a may be modified by replacing its value with a weighted average of the reference pixels of the search area 36 for noise reduction.
  • the image filter may be any type of filter that utilizes a weighting function based on the difference between a target pixel value and reference pixel values.
  • In step 26, it is determined whether there are additional pixels in the apply patch 42 associated with the selected target pixel a. For example, it is determined whether the apply patch 42 includes additional pixels besides the target pixel a. If there are no additional pixels in the apply patch, the processing moves to step 32. If there are additional pixels in the apply patch, the processing moves to step 28.
  • An apply patch may include any number of pixels that are part of a target patch.
  • the apply patch 42 may include any number of pixels of the target patch 38 .
  • the apply patch has a plus pattern consisting of the selected target pixel and pixels immediately to the right, left, above, and below the selected target pixel.
  • the apply patch is a 3 ⁇ 3 patch centered on the selected target pixel.
  • the apply patch consists of the same pixels as the target patch.
  • the apply patch consists of two consecutive pixels, such as pixels a and a+1.
  • a second target pixel in the apply patch is selected for modification. For example, referring to FIG. 4 , the second target pixel a+1 in the apply patch 42 is selected.
  • the second target pixel in the apply patch may be selected at random, by row, by column, or in any predetermined order.
  • the second target pixel is modified using the previously determined weight values for the original target pixel.
  • the second target pixel is associated with a second set of reference pixels.
  • the difference value between the original target patch and the original reference patch is considered to be a valid difference value between all pixels of the target patch and corresponding pixels of the reference patch.
  • a difference value between a target patch centered on pixel a and a reference patch centered on pixel c is also considered to be a valid difference value between pixel a+1 and pixel c+1. Accordingly, when modifying the second target pixel in the apply patch 42 , weight values that were determined in step 22 may be reused for the second set of reference pixels.
  • the previously determined weight values in step 22 are reused for the second set of reference pixels based on the second target pixel's position relative to the original target pixel from step 18 .
  • each of the previously determined weight values from the original reference pixels is assigned to an adjacent reference pixel (one of the second set of reference pixels).
  • a position of the second reference pixel relative to the original reference pixel is the same as a position of the second target pixel relative to the original target pixel.
  • the previously determined weight values are shifted to the second set of reference pixels by the same direction and distance as the second target pixel is shifted from the original target pixel. For example, referring to FIG. 4, the weight value corresponding to the reference pixel c is shifted and assigned to reference pixel c+1.
  • the weight value corresponding to the reference pixel c is assigned to reference pixel d when the selected target pixel is pixel a and the new target pixel is pixel b, as shown in FIG. 5 ; and the weight value corresponding to the reference pixel c is assigned to reference pixel d+1 when the selected target pixel is pixel a and the new target pixel is pixel b+1, as shown in FIG. 6 .
  • the second target pixel is then modified with the previously determined weight values assigned to the second set of reference pixels.
  • the second target pixel may be modified by replacing its value with a weighted average of the second set of reference pixels for noise reduction. Therefore, in contrast to the modification of the target pixel in steps 20 - 24 , difference values and weight values do not need to be determined for the modification of the second target pixel.
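The reuse in steps 28-30 amounts to a pure shift of the weight map. A hypothetical helper, assuming the weights are stored in a dictionary keyed by absolute reference-pixel coordinates:

```python
def shift_weights(weights, offset):
    """Reassign each previously determined weight to the reference
    pixel displaced by `offset`, i.e. by the same direction and
    distance as the second target pixel is displaced from the first
    (e.g., offset (0, 1) when moving from pixel a to pixel a+1)."""
    dy, dx = offset
    return {(y + dy, x + dx): w for (y, x), w in weights.items()}
```

No difference or weight computation occurs here, which is the source of the complexity reduction.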
  • Following step 30, the processing returns to step 26 to determine whether there are additional pixels in the apply patch of the selected target pixel.
  • steps 28 - 30 are repeated until each pixel in the apply patch has been modified.
  • steps 28 - 30 are repeated until pixels a+1, b, and b+1 have been modified by the optimized image filter.
  • the previously determined difference values are reused, instead of the weight values in step 30 .
  • a difference value between a target patch and a reference patch is considered to be a valid difference value for all pixels of the target patch and respective corresponding pixels of the reference patch.
  • each of the previously determined difference values determined in step 20 is used for a new reference pixel such that a position of the new reference pixel relative to the reference pixel corresponding to the previously determined difference value is the same as a position of the new target pixel relative to the selected target pixel.
  • the previously determined difference values are shifted to a new set of reference pixels by the same direction and distance as the new target pixel is shifted from the selected target pixel.
  • the difference value that was determined between pixel a and reference pixel c is reused as a difference value between pixel a+1 and pixel c+1.
  • a new weighting function may be used in step 30 to determine new weight values for the new set of reference pixels, similar to step 22.
  • the new target pixel may then be modified using the new weight values, similar to step 24 .
  • In step 32, it is determined whether there are additional pixels in the input image that have not been filtered by the image filter. If there are additional unfiltered pixels in the input image, the processing returns to step 18. If there are no additional unfiltered pixels in the input image, the processing moves to step 34.
  • In step 34, the image filter has completed processing and the filtered image is provided.
  • each block shown in FIGS. 1-2 may represent one or more blocks as appropriate to a specific embodiment or may be combined with other blocks.
  • FIG. 7 is a schematic illustrating an example of an electronic device 44 for implementing an optimized image filter according to one embodiment disclosed herein.
  • Non-limiting examples of the electronic device 44 include a digital camera, a mobile telephone, a gaming device, a computer, a tablet, a television, and a set-top box.
  • the electronic device 44 includes a processing unit 46, a memory 48, an input device 50, an output device 52, and an I/O interface 54. It should be noted that the electronic device 44 may include components and functionalities in addition to those illustrated in FIG. 7.
  • the processing unit 46 is configured to perform the processing for the optimized image filter.
  • the processing unit 46 is a digital signal processor.
  • the memory 48 may be a non-volatile memory, such as ROM, a volatile memory, such as RAM, or a combination thereof.
  • the optimized image filter is implemented in software and is stored in the memory 48 .
  • the input device 50 and the output device 52 may include devices used by a user to interact with the electronic device 44 .
  • Non-limiting examples of the input device 50 include a sensor, such as a CMOS or CCD sensor, of a digital camera; a keyboard; a mouse; buttons; and a touch screen.
  • Non-limiting examples of the output device 52 include a display, a television, a computer monitor, and speakers.
  • the I/O interface 54 is configured to send and receive data.
  • the I/O interface 54 may be coupled to a satellite antenna, the World Wide Web, or an external electronic device to send and receive multimedia content.
  • signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory.

Abstract

Various embodiments provide an optimized image filter. The optimized image filter obtains an input image and selects a target pixel for modification. Difference values are then determined between the selected target pixel and each reference pixel of a search area. Subsequently, a weighting function is used to determine weight values for each of the reference pixels of the search area based on their respective difference values. The selected target pixel is then modified by the optimized image filter using the determined weight values. A new target pixel in an apply patch is then selected for modification. The new target pixel is modified using the previously determined weight values, reassigned to a new set of reference pixels. The weight values are reassigned based on the position of each reference pixel in the new set relative to the new target pixel.

Description

    BACKGROUND
  • Technical Field
  • The present disclosure generally relates to filter optimization. In particular, the present disclosure is directed to an optimized image filter having a weighting function that depends on neighboring pixel values.
  • Description of the Related Art
  • With the improvement of display devices, such as televisions, computers, tablets, and smartphones, there is a large demand for high quality images and video. Digital image processing is often used to improve the quality of images and video. For example, image filters are used to digitally reduce noise present in images and video. See U.S. Pat. No. 6,108,455 filed May 29, 1998 and entitled “Non-linear Image Filter for Filtering Noise.”
  • A common noise reduction filter is a finite impulse response (FIR) filter. An adaptive FIR filter's convolution kernel (matrix of pixels) may be defined by equation (1):
  • pix_out(i) = (1/N(i)) Σ_{j∈Ω} w(i,j) × pix_in(j)  (1)
  • where i and j are 2D coordinate vectors; i represents coordinates of a target pixel that is to be processed; j represents coordinates of a reference pixel; pix_in(j) are input pixel values in the kernel; pix_out(i) is a filtered value of pix_in(i); w(i,j) is a weighting function; N(i) is the normalization factor: N(i) = Σ_{j∈Ω} w(i,j); and Ω is a search area of an image, which is typically a square kernel of pixels centered on the target pixel.
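Equation (1) can be sketched in a few lines of Python (a hypothetical illustration, not the patented implementation; the weighting function is passed in as a callable so any of the filters below can be plugged in):

```python
import numpy as np

def adaptive_fir(pix_in, i, omega_radius, w):
    """Equation (1): weighted average over a square search area
    (kernel) Omega of the given radius, centered on target pixel i."""
    y, x = i
    acc, norm = 0.0, 0.0
    for dy in range(-omega_radius, omega_radius + 1):
        for dx in range(-omega_radius, omega_radius + 1):
            j = (y + dy, x + dx)
            wij = w(i, j)            # weighting function w(i, j)
            acc += wij * pix_in[j]   # w(i, j) * pix_in(j)
            norm += wij              # N(i) = sum over Omega of w(i, j)
    return acc / norm                # pix_out(i)

# With uniform weights the filter reduces to a simple box average:
img = np.arange(25, dtype=float).reshape(5, 5)
out = adaptive_fir(img, (2, 2), 1, lambda i, j: 1.0)
```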
  • In general, the filter of equation (1) is a weighted average of surrounding pixels. The weighting function w(i,j) for the filter of equation (1) can be computed in a plurality of ways. For example, the weighting function w(i,j) for a bilateral or sigma filter is a product of spatial weights and a photonic (or range) weights. The weighting function w(i,j) for a bilateral filter may be defined by equation (2):

  • w_bilateral(i,j) = f(∥i,j∥) × g(|pix_in(i) − pix_in(j)|)  (2)
  • where f( ) and g( ) are, ideally, continuous and monotonous decreasing functions, such as a Gaussian curve; and ∥i,j∥ designates a Euclidean distance between the spatial positions of pixels i and j.
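For illustration, Gaussian choices for f() and g() give the following sketch (sigma_s and sigma_r are assumed, illustrative parameters; pix_in is taken to be a mapping from coordinates to pixel values):

```python
import math

def w_bilateral(i, j, pix_in, sigma_s=2.0, sigma_r=10.0):
    """Equation (2): product of a spatial weight f(||i,j||) and a
    photometric (range) weight g(|pix_in(i) - pix_in(j)|), with f
    and g both chosen here as Gaussian curves."""
    spatial = math.hypot(i[0] - j[0], i[1] - j[1])   # Euclidean ||i,j||
    photometric = abs(pix_in[i] - pix_in[j])
    f = math.exp(-(spatial ** 2) / (2 * sigma_s ** 2))       # spatial term
    g = math.exp(-(photometric ** 2) / (2 * sigma_r ** 2))   # range term
    return f * g
```

Both factors are continuous, monotonically decreasing functions, as the text above prescribes.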
  • Another common filter for noise reduction is a non-local filter. For a non-local filter, the weighting function w(i,j) is dependent upon a difference between patches p of pixels centered on target and reference pixels. A patch, as used herein, refers to a subset of pixels. The weighting function w(i,j) for a non-local filter may be defined by equation (3):

  • w_non-local(i,j) = g(√(Σ_{k∈p(i), l∈p(j)} (pix_in(k) − pix_in(l))²))  (3)
  • The image filters described above are well known in the art and will not be discussed in detail in this description.
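As a concrete (hypothetical) example of equation (3), a non-local weight over square patches, comparing corresponding pixels of the two patches and using a g() of assumed bandwidth h:

```python
import math

def w_non_local(pix_in, i, j, patch_radius=1, h=10.0):
    """Equation (3): weight from the root of the summed squared
    differences between the patch p(i) centered on the target pixel
    and the patch p(j) centered on the reference pixel.  g() is taken
    here to be an exponential decay with illustrative bandwidth h."""
    ssd = 0.0
    for dy in range(-patch_radius, patch_radius + 1):
        for dx in range(-patch_radius, patch_radius + 1):
            k = pix_in[i[0] + dy][i[1] + dx]   # pixel of patch p(i)
            l = pix_in[j[0] + dy][j[1] + dx]   # corresponding pixel of p(j)
            ssd += (k - l) ** 2
    return math.exp(-math.sqrt(ssd) / h)       # g(sqrt(SSD))
```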
  • BRIEF SUMMARY
  • In accordance with an embodiment of the present disclosure, an optimized image filter is provided. The optimized image filter obtains an input image and selects a first target pixel for modification within a search area (a first subset of pixels of the image). In one filtering method, sum of absolute differences (SAD) values are then determined between the selected first target pixel and each reference pixel of the search area. Each SAD value is computed by comparing a second subset of pixels within the search area with a third subset of pixels within the search area, the second subset of pixels being associated with the first target pixel and each third subset of pixels being associated with the respective reference pixel.
  • Subsequently, a weighting function is used to determine weight values for each of the reference pixels based on their respective SAD value. The first target pixel is then modified by the image filter using the determined weight values.
  • Following the modification of the first target pixel, a second target pixel within an apply patch is selected for modification. The apply patch is a fourth subset of pixels that includes the first target pixel. The second target pixel is modified using the previously determined weight values from the first target pixel, i.e., weight values are not computed for the second target pixel within the apply patch. Instead, each of the reference pixels of the search area for the second target pixel will be assigned the previously determined weight values computed for the first target pixel. In particular, the weight values are reassigned to the set of reference pixels associated with the second target pixel based on a relative position of the first target pixel to the second target pixel. For example, if the second target pixel is one pixel to the right of the first target pixel, then each of the new set of reference pixels will be reassigned the weight value from the pixel one to its left. Thus, in contrast to the modification of the first target pixel, SAD values and weight values do not need to be determined for the modification of the second target pixel. As a result, the image filter has a low level of complexity, processing time can be reduced, especially in software implementations of the image filter, and power consumption is improved.
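The weight-reuse scheme above can be sketched end to end (a hypothetical illustration: the search radius, patch radius, SAD-based exponential weighting, and parameter h are illustrative assumptions, not taken from the claims):

```python
import numpy as np

def sad(img, i, j, r):
    """SAD between the (2r+1)x(2r+1) patches centered on i and j."""
    pi = img[i[0]-r:i[0]+r+1, i[1]-r:i[1]+r+1]
    pj = img[j[0]-r:j[0]+r+1, j[1]-r:j[1]+r+1]
    return np.abs(pi - pj).sum()

def filter_with_reuse(img, a, search_r=3, patch_r=1, h=25.0):
    """Compute weights once for the first target pixel a, then reuse
    them, shifted one pixel right, for the apply-patch neighbor a+1."""
    ys = range(a[0] - search_r, a[0] + search_r + 1)
    xs = range(a[1] - search_r, a[1] + search_r + 1)
    # one weight per reference pixel of the search area
    weights = {(y, x): np.exp(-sad(img, a, (y, x), patch_r) / h)
               for y in ys for x in xs}
    norm = sum(weights.values())
    # first target pixel: weighted average of the reference pixels
    out_a = sum(w * img[p] for p, w in weights.items()) / norm
    # second target pixel a+1: no new SAD or weight computation; each
    # weight is reassigned to the reference pixel shifted by the same
    # offset (here +1 in x) as a+1 is shifted from a
    out_a1 = sum(w * img[p[0], p[1] + 1] for p, w in weights.items()) / norm
    return out_a, out_a1
```

Only the first target pixel of each apply patch pays for difference and weight computation; its neighbors reuse the results.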
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The foregoing and other features and advantages of the present disclosure will be more readily appreciated as the same become better understood from the following detailed description when taken in conjunction with the accompanying drawings.
  • FIG. 1 is a flow diagram illustrating an example of data flow for an optimized image filter according to one embodiment disclosed herein;
  • FIG. 2 is a flow diagram illustrating an example of processing for an optimized image filter according to an embodiment disclosed herein;
  • FIG. 3 is a diagram illustrating an example of modifying a first pixel of an apply patch according to one embodiment disclosed herein;
  • FIG. 4 is a diagram illustrating an example of modifying a second pixel of the apply patch of FIG. 3 according to one embodiment disclosed herein;
  • FIG. 5 is a diagram illustrating an example of modifying a third pixel of the apply patch of FIG. 3 according to one embodiment disclosed herein;
  • FIG. 6 is a diagram illustrating an example of modifying a fourth pixel of the apply patch of FIG. 3 according to one embodiment disclosed herein; and
  • FIG. 7 is a schematic illustrating an example of an electronic device for implementing an optimized image filter according to one embodiment disclosed herein.
  • DETAILED DESCRIPTION
  • In the following description, certain specific details are set forth in order to provide a thorough understanding of various embodiments of the disclosure. However, one skilled in the art will understand that the disclosure may be practiced without these specific details. In some instances, well-known processes associated with digital image processing have not been described in detail to avoid obscuring the descriptions of the embodiments of the present disclosure.
  • Unless the context requires otherwise, throughout the specification and claims that follow, the word “comprise” and variations thereof, such as “comprises” and “comprising,” are to be construed in an open, inclusive sense, that is, as “including, but not limited to.”
  • Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • In the drawings, identical reference numbers identify similar features or elements. The size and relative positions of features in the drawings are not necessarily drawn to scale.
  • Most image filters perform processing pixel-by-pixel and require a significant number of calculations. Consequently, image filters often consume large amounts of processing time or power. High processing times and power consumption are problematic for real-time applications and portable electronic devices. For example, digital cameras obtain and display images to users in real time. Substantial delays or excessive power consumption result in a poor user experience. Low processing time is especially important for displaying videos in real time. Many videos produce 720p images at 30 frames per second or even 4K images at 60 frames per second. Any delay in the video will be noticeable to users.
  • Acceleration techniques are often used to lower the complexity of image filters and improve processing time and power consumption. A common acceleration technique is to reduce the size of a search area of an image, such as the search area Ω, or to reduce the size of the pixel subsets, such as a target patch 38, used for difference calculations. FIG. 3 illustrates a search area 36, which is a portion of the overall image that is used for processing, and the target patch 38. Reducing the size of the search area or the target patch, however, also reduces the efficiency of the technique and may impact image sharpness and resolution. In particular, reducing the size of the search area prevents the removal of lower-frequency noise and reduces the ability to find acceptable matching areas, and reducing the size of the target patch increases the number of false matching areas. Another common acceleration technique is to use a relatively simple difference calculation for non-local filters. Using a simpler difference calculation has been proven to provide good results; however, the complexity is still too high, and further complexity reduction is advantageous.
  • The present disclosure is directed to an image filter that reduces complexity by reducing a total amount of calculations used for a weighting function of the image filter. In particular, the image filter determines weight values for a selected target pixel a in FIG. 3, and then reuses the determined weight values for other target pixels a+1, b, b+1. By reusing previously determined weight values for multiple target pixels, the complexity level of the image filter is reduced and processing time and power consumption is improved. The processing of the image filter will be discussed in further detail with respect to FIGS. 1-6.
  • FIG. 1 is a flow diagram illustrating an example of data flow for an image filter according to one embodiment disclosed herein.
  • At a first part of the sequence 10, an input image is obtained for digital image processing. The input image may be a single image or may be a single frame of a stream of input images, such as a video. The input image may be obtained from a variety of sources, such as an image sensor, a multimedia content provider, a memory, or the World Wide Web.
  • In a subsequent step 12, the input image is provided to the image filter for processing. For example, the image filter may modify the input image to digitally reduce noise present in the input image and produce a final image for a user. Processing for the image filter will be discussed in further detail with respect to FIGS. 2-6.
  • In step 14, the image filter has completed processing and a filtered image is obtained. Although not shown, the data flow of FIG. 1 may be repeated for multiple images. For example, the data flow of FIG. 1 may be repeated for real-time processing of multiple images or a stream of input images, such as a video.
  • FIG. 2 is a flow diagram illustrating an example of processing for an image filter according to an embodiment disclosed herein. It is beneficial to review the steps of FIG. 2 simultaneously with FIGS. 3-6, which are diagrams illustrating examples of modifying pixels of an apply patch according to one embodiment disclosed herein.
  • At a first part of the sequence 16, the image filter obtains an input image. As previously discussed, the input image may be a single image or may be a single frame of a stream of input images, such as a video.
  • In a subsequent step 18, a target pixel is selected for modification. For example, referring to FIG. 3, a target pixel a is selected. To process the entire image, multiple target pixels are processed sequentially. Target pixels may be selected at random, by row, by column, or in any predetermined order. For example, target pixels may be selected by starting at an upper left corner of the input image, selecting pixels of the first row from left to right, moving to the next row, and continuing this pattern until reaching a lower right corner of the input image. In an alternative embodiment, every other pixel of the input image is selected as a target pixel.
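  • The selection orders described above can be sketched as a simple coordinate generator. This Python sketch is illustrative only: the function name and the step parameter are not part of the disclosure, and step=2 is one reading of the every-other-pixel variant.

```python
def raster_targets(height, width, step=1):
    """Yield target-pixel coordinates left to right, top to bottom,
    starting at the upper left corner of the image; step=2 selects
    every other pixel along each axis (one reading of the alternative
    embodiment described above)."""
    for row in range(0, height, step):
        for col in range(0, width, step):
            yield (row, col)
```

Other predetermined orders (by column, random) would simply swap or shuffle the two loops.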
  • The selected target pixel has a corresponding search area, target patch, reference patch, and apply patch. For example, referring to FIG. 3, the target pixel a has a corresponding search area 36, a target patch 38, a reference patch 40, and an apply patch 42. The patches are subsets of pixels within the search area 36.
  • The search area includes reference pixels that surround the selected target pixel. The reference pixels are used for modification of the target pixel. For example, referring to FIG. 3, the target pixel a may be modified by replacing its value with a weighted average of the reference pixels of the search area 36 for noise reduction. As will be discussed with respect to steps 20 and 22, the weights of the reference pixels are based on a distance or difference calculation, such as a sum of absolute differences (SAD) or sum of squared differences (SSD).
  • The target patch 38 and the reference patch 40 have the same dimensions and are used to determine a similarity between the selected target pixel and a reference pixel. The similarity between the target pixel and the reference pixel may be computed as a difference value, as each pixel may have a numerical representation and the similarity is a comparison of the numerical representation of each pixel. This can also be referred to as a distance between the target pixel's value and the reference pixel's value, where the distance is not necessarily representative of the physical space between the pixels in the array.
  • In a preferred embodiment, the target patch and the reference patch are centered on the target pixel and the reference pixel, respectively. For example, referring to FIG. 3, the target patch 38 and the reference patch 40 are used to determine a difference value between the target pixel a and a reference pixel c. Each of the reference pixels within the search area 36 will be used to create a difference value with respect to the target pixel. Accordingly, as each reference pixel in the search area is processed, a reference patch 40 will be associated with the reference pixel being processed. The determination of difference values will be discussed in further detail with respect to step 20.
  • The apply patch 42 includes the target pixel a and is a subset of the target patch 38. The apply patch includes additional target pixels a+1, b, b+1 that are modified using previously determined weight values from the target pixel a. The apply patch will be discussed in further detail with respect to steps 26-30.
  • In step 20, difference values are determined between the selected target pixel and each of the reference pixels of the search area. A difference value between a first pixel (the target pixel a) and a second pixel (reference pixel c) is determined by comparing pixels in a first patch centered on the first pixel (the target patch 38) and respective corresponding pixels in a second patch centered on the second pixel (reference patch 40). The difference values may be determined using a calculation, such as SAD or SSD. SAD and SSD calculations are well known in the art and will not be discussed in detail in this description.
  • For example, referring to FIG. 3, a difference or distance between the target pixel a value and the reference pixel c value is determined by calculating a difference value between the target patch 38 centered on the target pixel a and the reference patch 40 centered on the reference pixel c. Difference values are determined between the target pixel a and each reference pixel in the search area 36.
  • It should be noted that the dimensions of the search area 36, the target patch 38, and the reference patch 40 shown in FIGS. 3-6 are for illustrative purposes. The search area 36, the target patch 38, and the reference patch 40 may have any size. In a preferred embodiment, the search area is larger than the target patch.
  • In other words, once the target pixel a and the target patch 38 are identified, a reference pixel is selected, such as reference pixel c in FIG. 3. A reference patch is identified that corresponds to reference pixel c, for example, reference patch 40. The target patch 38 and the reference patch 40 have the same dimensions, i.e., they include the same number of pixels in the same shape. In FIG. 3, the target patch and the reference patch are both 5×5 arrays of pixels. Then, each pixel of the target patch is compared to the corresponding pixel of the reference patch, and a single value is calculated from these comparisons to generate the difference value.
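  • As a non-authoritative illustration, the patch comparison of step 20 might be sketched as follows. The function name, the NumPy representation, and the absence of image-border handling are assumptions of the sketch; radius=2 corresponds to the 5×5 patches of FIG. 3.

```python
import numpy as np

def patch_sad(image, center_a, center_c, radius=2):
    """Sum of absolute differences (SAD) between two square patches of
    the same dimensions, one centered on the target pixel and one on a
    reference pixel; radius=2 gives 5x5 patches as in FIG. 3. Both
    patches are assumed to lie fully inside the image."""
    (ra, ca), (rc, cc) = center_a, center_c
    patch_a = image[ra - radius:ra + radius + 1, ca - radius:ca + radius + 1]
    patch_c = image[rc - radius:rc + radius + 1, cc - radius:cc + radius + 1]
    # Compare each pixel of the target patch with the corresponding pixel
    # of the reference patch, then collapse to a single difference value.
    return float(np.abs(patch_a.astype(np.int64) - patch_c.astype(np.int64)).sum())
```

An SSD variant would square the per-pixel differences before summing.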
  • In step 22, a weighting function is used to determine weight values for each of the reference pixels of the search area based on the respective difference value, determined in step 20, between the reference pixel and the selected target pixel. For example, referring to FIG. 3, a weight value is determined for the reference pixel c based on the difference value determined between it and the target pixel a in step 20. Weight values are determined for each reference pixel of the search area 36. In a preferred embodiment, a weight value for a reference pixel is inversely related to its determined difference value. That is, reference pixels that are similar to the selected target pixel (i.e., have smaller difference values) are given larger weight values, and vice versa.
  • In order to determine the weight value for every pixel in the search area 36 when processing target pixel a, the comparison and determination of distance values, as done for the reference pixel c above, is performed for every pixel in the search area.
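  • Steps 20 and 22 together can be sketched as a single pass over the search area. The exponential kernel exp(−SAD/h) is only one common choice of a weighting function that is inversely related to the difference value; the disclosure does not mandate a particular kernel, and the parameter h, the patch/search radii, and the in-bounds assumption are all illustrative.

```python
import numpy as np

def weight_map(image, target, search_radius=4, patch_radius=2, h=10.0):
    """Compute a weight for every reference pixel in the search area
    around `target` = (row, col): take the SAD between the target patch
    and each reference patch, then map it through exp(-SAD/h) so that
    smaller differences yield larger weights. All patches are assumed
    to lie inside the image (border handling omitted)."""
    r0, c0 = target
    tp = image[r0 - patch_radius:r0 + patch_radius + 1,
               c0 - patch_radius:c0 + patch_radius + 1].astype(np.int64)
    weights = {}
    for r in range(r0 - search_radius, r0 + search_radius + 1):
        for c in range(c0 - search_radius, c0 + search_radius + 1):
            rp = image[r - patch_radius:r + patch_radius + 1,
                       c - patch_radius:c + patch_radius + 1].astype(np.int64)
            weights[(r, c)] = float(np.exp(-np.abs(tp - rp).sum() / h))
    return weights
```

On a uniform image every SAD is zero, so every reference pixel receives the maximum weight of 1.0.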
  • In step 24, the selected target pixel is modified by the image filter using the weight values computed in step 22. For example, referring to FIG. 3, the target pixel a may be modified by replacing its value with a weighted average of the reference pixels of the search area 36 for noise reduction. The image filter may be any type of filter that utilizes a weighting function based on the difference between a target pixel value and reference pixel values.
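  • A minimal sketch of the step 24 update, assuming a weighted-average (non-local-means style) filter; the disclosure allows any filter whose weighting is based on the difference between target and reference pixel values, so this is one example rather than the method itself.

```python
def filter_pixel(image, weights):
    """Return the filtered value for a target pixel: the weighted
    average of the reference pixels of its search area, where `weights`
    maps (row, col) -> weight as computed in step 22."""
    numerator = sum(w * float(image[pos]) for pos, w in weights.items())
    denominator = sum(weights.values())
    return numerator / denominator
```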
  • In step 26, it is determined whether there are additional pixels in the apply patch 42 associated with the selected target pixel a. For example, it is determined whether the apply patch 42 includes additional pixels besides the target pixel a. If there are no additional pixels in the apply patch, the processing moves to step 32. If there are additional pixels in the apply patch, the processing moves to step 28.
  • It should be noted that the dimensions and pattern of the apply patch 42 shown in FIGS. 3-6 are for illustrative purposes. An apply patch may include any number of pixels that are part of a target patch. For example, the apply patch 42 may include any number of pixels of the target patch 38. In one embodiment, the apply patch has a plus pattern consisting of the selected target pixel and the pixels immediately to the right, left, above, and below the selected target pixel. In another embodiment, the apply patch is a 3×3 patch centered on the selected target pixel. In a further embodiment, the apply patch consists of the same pixels as the target patch. In yet another embodiment, the apply patch consists of two consecutive pixels, such as pixels a and a+1.
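  • The apply-patch shapes mentioned above might be enumerated as follows. The shape names and the convention that the 2×2 pattern extends right and down from the selected target pixel (matching pixels a, a+1, b, b+1 of FIG. 3) are assumptions of this sketch.

```python
def apply_patch_pixels(center, shape="2x2"):
    """Coordinates of an apply patch around `center` = (row, col).
    '2x2' is the a, a+1, b, b+1 pattern of FIGS. 3-6; 'plus' and '3x3'
    are the other example embodiments described above."""
    r, c = center
    if shape == "2x2":
        return [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
    if shape == "plus":
        return [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    if shape == "3x3":
        return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    raise ValueError("unknown apply-patch shape: " + shape)
```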
  • In step 28, a second target pixel in the apply patch is selected for modification. For example, referring to FIG. 4, the second target pixel a+1 in the apply patch 42 is selected. The second target pixel in the apply patch may be selected at random, by row, by column, or in any predetermined order.
  • In step 30, the second target pixel is modified using the previously determined weight values for the original target pixel. The second target pixel is associated with a second set of reference pixels. The difference value between the original target patch and the original reference patch is considered to be a valid difference value between all pixels of the target patch and corresponding pixels of the reference patch. For example, referring to FIG. 4, a difference value between a target patch centered on pixel a and a reference patch centered on pixel c is also considered to be a valid difference value between pixel a+1 and pixel c+1. Accordingly, when modifying the second target pixel in the apply patch 42, weight values that were determined in step 22 may be reused for the second set of reference pixels.
  • The weight values previously determined in step 22 are reused for the second set of reference pixels based on the second target pixel's position relative to the original target pixel from step 18. In particular, each of the previously determined weight values from the original reference pixels is assigned to an adjacent reference pixel in the second set of reference pixels, such that the position of the second reference pixel relative to the original reference pixel is the same as the position of the second target pixel relative to the original target pixel. In other words, the previously determined weight values are shifted to the second set of reference pixels in the same direction and by the same distance as the second target pixel is shifted from the original target pixel. For example, referring to FIG. 4, when the selected target pixel is pixel a and the new target pixel is pixel a+1, the weight value corresponding to the reference pixel c is shifted and assigned to reference pixel c+1. Similarly, the weight value corresponding to the reference pixel c is assigned to reference pixel d when the selected target pixel is pixel a and the new target pixel is pixel b, as shown in FIG. 5; and the weight value corresponding to the reference pixel c is assigned to reference pixel d+1 when the selected target pixel is pixel a and the new target pixel is pixel b+1, as shown in FIG. 6.
  • The second target pixel is then modified with the previously determined weight values assigned to the second set of reference pixels. For example, similar to step 24, the second target pixel may be modified by replacing its value with a weighted average of the second set of reference pixels for noise reduction. Therefore, in contrast to the modification of the target pixel in steps 20-24, difference values and weight values do not need to be determined for the modification of the second target pixel.
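  • The reassignment of weights in step 30 amounts to translating the weight map by the offset between the two target pixels; a minimal sketch (the function name and dictionary representation are illustrative):

```python
def shift_weights(weights, offset):
    """Reassign previously determined weights to the reference pixels of
    a new target pixel: each weight moves in the same direction and by
    the same distance, `offset` = (d_row, d_col), as the new target
    pixel is shifted from the original one, e.g., (0, 1) when moving
    from pixel a to pixel a+1 as in FIG. 4."""
    dr, dc = offset
    return {(r + dr, c + dc): w for (r, c), w in weights.items()}
```

No SAD values or weighting-function evaluations are needed here, which is the source of the complexity reduction.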
  • By reusing previously determined weight values, it is possible to divide the input image into a plurality of apply patches and use the same weight values for all pixels belonging to the same apply patch. Using the same weight values for each pixel of an apply patch reduces the total number of calculations for the weighting function of the image filter. For example, referring to FIGS. 4-6, reusing the weight values determined for target pixel a for pixels a+1, b, and b+1 leads to a reduction factor of four. Similarly, a 3×3 apply patch leads to a reduction factor of nine, and a 2×1 apply patch leads to a reduction factor of two. Accordingly, the level of optimization may be adjusted by controlling the size of the apply patch.
  • Subsequent to step 30, the processing returns to step 26 to determine whether there are additional pixels in the apply patch of the selected target pixel. As such, steps 28-30 are repeated until each pixel in the apply patch has been modified. For example, steps 28-30 are repeated until pixels a+1, b, and b+1 have been modified by the optimized image filter.
  • In an alternative embodiment, the previously determined difference values are reused in step 30, instead of the weight values. As previously discussed, a difference value between a target patch and a reference patch is considered to be a valid difference value for all pixels of the target patch and the respective corresponding pixels of the reference patch. Accordingly, similar to the reuse of the previously determined weight values, each of the difference values determined in step 20 is used for a new reference pixel such that the position of the new reference pixel relative to the reference pixel corresponding to the previously determined difference value is the same as the position of the new target pixel relative to the selected target pixel. In other words, the previously determined difference values are shifted to a new set of reference pixels in the same direction and by the same distance as the new target pixel is shifted from the selected target pixel. For example, referring to FIG. 4, when the selected target pixel is pixel a and the new target pixel is pixel a+1, the difference value that was determined between pixel a and reference pixel c is reused as a difference value between pixel a+1 and pixel c+1. By reusing the previously determined difference values, instead of the previously determined weight values, a new weighting function may be used in step 30 to determine new weight values for the new set of reference pixels, similar to step 22. The new target pixel may then be modified using the new weight values, similar to step 24.
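  • In this alternative embodiment the shift applies to the difference values and the weighting function is then re-run; a sketch, again assuming an exponential kernel that the disclosure does not mandate:

```python
import math

def shift_and_reweight(sad_values, offset, h=10.0):
    """Reuse previously determined SAD values for a new target pixel by
    shifting them by `offset` = (d_row, d_col), then apply a (possibly
    new) weighting function to obtain weights for the new set of
    reference pixels; smaller SAD still yields a larger weight."""
    dr, dc = offset
    return {(r + dr, c + dc): math.exp(-sad / h)
            for (r, c), sad in sad_values.items()}
```

This costs one weighting-function evaluation per reference pixel, but still avoids recomputing the patch comparisons themselves.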
  • Returning to step 26, if there are no additional pixels in the apply patch of the selected target pixel, the processing moves to step 32. In step 32, it is determined whether there are additional pixels in the input image that have not been filtered by the image filter. If there are additional unfiltered image pixels in the input image, the processing returns to step 18. If there are no additional unfiltered pixels in the input image, the processing moves to step 34.
  • In step 34, the image filter has completed processing and the filtered image is provided.
  • It should be noted that each block shown in FIGS. 1-2 may represent one or more blocks as appropriate to a specific embodiment or may be combined with other blocks.
  • FIG. 7 is a schematic illustrating an example of an electronic device 44 for implementing an optimized image filter according to one embodiment disclosed herein. Non-limiting examples of the electronic device 44 include a digital camera, a mobile telephone, a gaming device, a computer, a tablet, a television, or a set-top box. In one embodiment, the electronic device 44 includes a processing unit 46, a memory 48, an input device 50, an output device 52, and an I/O interface 54. It should be noted that the electronic device 44 may include additional functionalities and components than those illustrated in FIG. 7.
  • The processing unit 46 is configured to perform the processing for the optimized image filter. In one embodiment, the processing unit 46 is a digital signal processor. The memory 48 may be a non-volatile memory, such as ROM, a volatile memory, such as RAM, or a combination thereof. In one embodiment, the optimized image filter is implemented in software and is stored in the memory 48. The input device 50 and the output device 52 may include devices used by a user to interact with the electronic device 44. Non-limiting examples of the input device 50 include a sensor, such as a CMOS or CCD sensor, of a digital camera; a keyboard; a mouse; buttons; and a touch screen. Non-limiting examples of the output device 52 include a display, a television, a computer monitor, and speakers. The I/O interface 54 is configured to send and receive data. For example, the I/O interface 54 may be coupled to a satellite antenna, the World Wide Web, or an external electronic device to send and receive multimedia content.
  • The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers), as one or more programs running on one or more processors (e.g., microprocessors, digital signal processors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure.
  • Those of skill in the art will recognize that many of the methods or algorithms set out herein may employ additional acts, may omit some acts, and/or may execute acts in a different order than specified.
  • In addition, those skilled in the art will appreciate that the mechanisms taught herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment applies equally regardless of the particular type of physical signal bearing media used to actually carry out the distribution. Examples of signal bearing media include, but are not limited to, the following: recordable type media such as floppy disks, hard disk drives, CD ROMs, digital tape, and computer memory.
  • The various embodiments described above can be combined to provide further embodiments. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.
  • It will be appreciated that, although specific embodiments of the present disclosure are described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the present disclosure. Accordingly, the present disclosure is not limited except as by the appended claims.
  • These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.

Claims (18)

1. A method, comprising:
receiving an image having a plurality of pixels;
selecting an apply patch of the image, the apply patch including first and second pixels of the plurality of pixels, the second pixel being adjacent to the first pixel;
selecting a search area of the image, the search area including at least a third pixel of the plurality of pixels;
determining a difference value between the first pixel and the third pixel based on the pixels surrounding the first pixel and the third pixel;
determining a weight value based on the difference value;
assigning the weight value to the third pixel;
modifying the first pixel with the weight value assigned to the third pixel;
selecting a fourth pixel, the fourth pixel being adjacent to the third pixel;
assigning the weight value to the fourth pixel; and
modifying the second pixel with the weight value assigned to the fourth pixel.
2. The method of claim 1 wherein the selecting of the apply patch includes selecting the second pixel to be immediately adjacent to the first pixel.
3. The method of claim 1 wherein the selecting of the fourth pixel includes selecting the fourth pixel such that a position of the fourth pixel in the image relative to the third pixel is the same as a position of the second pixel in the image relative to the first pixel.
4. The method of claim 1 wherein the determining of the weight value includes setting the weight value to a value that is inversely related to the difference value.
5. The method of claim 1 wherein the determining of the weight value includes determining the weight value for a finite impulse response filter.
6. The method of claim 1, further comprising selecting a target patch that includes the apply patch, selecting a reference patch that includes the third pixel, and determining the difference value from the pixels in the target patch and the reference patch.
7. The method of claim 1 wherein the selecting of the apply patch includes setting a size of the apply patch to be smaller than a size of the search area.
8. The method of claim 1 wherein the receiving of the image includes obtaining a single frame of a plurality of video frames.
9. The method of claim 1 wherein the selecting of the apply patch includes selecting a fifth pixel adjacent to the first and second pixels, the method further including:
selecting a sixth pixel of the plurality of pixels, the sixth pixel being adjacent to the third and fourth pixels;
assigning the weight value to the sixth pixel; and
modifying the fifth pixel with the weight value assigned to the sixth pixel.
10. The method of claim 9 wherein the selecting of the sixth pixel includes selecting the sixth pixel such that a position of the sixth pixel in the image relative to the third and fourth pixels is the same as a position of the fifth pixel in the image relative to the first and second pixels.
11. The method of claim 1 wherein the difference value is determined by using a sum of absolute differences or a sum of squared differences.
12. A method, comprising:
selecting a first pixel of a plurality of pixels of an image;
determining a plurality of difference values, the plurality of difference values being determined between a target patch and each of a plurality of reference patches, the target patch including the first pixel, the plurality of reference patches each including a respective reference pixel;
determining a plurality of weight values for each pixel in a first search area, the first search area including the target patch and each of the plurality of reference patches, the determining of the plurality of weight values being based on the plurality of difference values;
modifying the first pixel using the plurality of weight values;
selecting a second pixel of the plurality of pixels, the second pixel being adjacent to the first pixel; and
modifying the second pixel using the plurality of weight values.
13. The method of claim 12 wherein the modifying of the first pixel includes assigning the plurality of weight values to respective pixels of the plurality of pixels, and modifying the first pixel using the assigned plurality of weight values.
14. The method of claim 13 wherein the modifying of the second pixel includes reassigning the plurality of weight values to a second search area, and modifying the second pixel using the reassigned plurality of weight values.
15. The method of claim 14 wherein the reassigning of the plurality of weight values to the second search area includes shifting the plurality of weight values to the pixels of the second search area by the same direction and distance as the second pixel is from the first pixel.
16. The method of claim 12 wherein the determining of the plurality of weight values includes setting the plurality of weight values to values that are inversely related to the plurality of difference values.
17. The method of claim 12 wherein the determining of the plurality of weight values includes determining the plurality of weight values for a finite impulse response filter.
18. The method of claim 12 wherein each of the plurality of difference values is determined by using a sum of absolute differences or a sum of squared differences.
US14/983,150 2015-12-29 2015-12-29 System and method for adaptive pixel filtering Active 2036-01-22 US9710722B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/983,150 US9710722B1 (en) 2015-12-29 2015-12-29 System and method for adaptive pixel filtering
CN201610465099.1A CN106937020B (en) 2015-12-29 2016-06-23 System and method for adaptive pixel filter
CN201911124050.XA CN110852334B (en) 2015-12-29 2016-06-23 System and method for adaptive pixel filtering
US15/636,294 US10186022B2 (en) 2015-12-29 2017-06-28 System and method for adaptive pixel filtering

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/636,294 Continuation US10186022B2 (en) 2015-12-29 2017-06-28 System and method for adaptive pixel filtering

Publications (2)

Publication Number Publication Date
US20170185863A1 true US20170185863A1 (en) 2017-06-29
US9710722B1 US9710722B1 (en) 2017-07-18

Family

ID=59088078

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/983,150 Active 2036-01-22 US9710722B1 (en) 2015-12-29 2015-12-29 System and method for adaptive pixel filtering
US15/636,294 Active US10186022B2 (en) 2015-12-29 2017-06-28 System and method for adaptive pixel filtering

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/636,294 Active US10186022B2 (en) 2015-12-29 2017-06-28 System and method for adaptive pixel filtering

Country Status (2)

Country Link
US (2) US9710722B1 (en)
CN (2) CN106937020B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017212214A1 (en) * 2017-07-17 2019-01-17 Carl Zeiss Microscopy Gmbh A method of recording an image using a particle microscope and particle microscope
CN113434715A (en) * 2020-03-23 2021-09-24 瑞昱半导体股份有限公司 Method for searching for image and image processing circuit

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080309955A1 (en) * 2007-06-14 2008-12-18 Samsung Electronics Co., Ltd. Image forming apparatus and method to improve image quality thereof
US20100091846A1 (en) * 2007-04-09 2010-04-15 Ntt Docomo, Inc Image prediction/encoding device, image prediction/encoding method, image prediction/encoding program, image prediction/decoding device, image prediction/decoding method, and image prediction decoding program
US20110063517A1 (en) * 2009-09-15 2011-03-17 Wei Luo Method And System For Utilizing Non-Local Means (NLM) For Separation Of Luma (Y) And Chroma (CBCR) Components
US20120027319A1 (en) * 2010-07-28 2012-02-02 Vatics Inc. Method and electronic device for reducing digital image noises
US9432596B2 (en) * 2012-10-25 2016-08-30 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20170039444A1 (en) * 2014-04-11 2017-02-09 Jianguo Li Object detection using directional filtering

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3092024B2 (en) * 1991-12-09 2000-09-25 Matsushita Electric Industrial Co., Ltd. Image processing method
US6108455A (en) 1998-05-29 2000-08-22 Stmicroelectronics, Inc. Non-linear image filter for filtering noise
KR20050053135A (en) * 2003-12-02 2005-06-08 삼성전자주식회사 Apparatus for calculating absolute difference value, and motion prediction apparatus and motion picture encoding apparatus utilizing the calculated absolute difference value
US7822285B2 (en) * 2004-05-20 2010-10-26 Omnivision Technologies, Inc. Methods and systems for locally adaptive image processing filters
US20080212888A1 (en) * 2007-03-01 2008-09-04 Barinder Singh Rai Frame Region Filters
JP4640508B2 (en) * 2009-01-09 2011-03-02 ソニー株式会社 Image processing apparatus, image processing method, program, and imaging apparatus
US8471865B2 (en) * 2010-04-02 2013-06-25 Intel Corporation System, method and apparatus for an edge-preserving smooth filter for low power architecture
CN102236885A (en) * 2010-04-21 2011-11-09 联咏科技股份有限公司 Filter for reducing image noise and filtering method
US8907973B2 (en) 2012-10-22 2014-12-09 Stmicroelectronics International N.V. Content adaptive image restoration, scaling and enhancement for high definition display
CN103002285B (en) * 2012-12-06 2015-07-08 深圳广晟信源技术有限公司 Complexity generation method and device of video unit
US9860529B2 (en) * 2013-07-16 2018-01-02 Qualcomm Incorporated Processing illumination compensation for video coding
JP6173199B2 (en) * 2013-12-09 2017-08-02 オリンパス株式会社 Image processing apparatus, image processing method, and imaging apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10803567B2 (en) 2018-01-16 2020-10-13 Realtek Semiconductor Corporation Image processing method and image processing device
WO2020011756A1 (en) * 2018-07-12 2020-01-16 Telefonaktiebolaget Lm Ericsson (Publ) Bilateral alpha omega: calculating LUT argument with few arithmetic operations
US11233994B2 2018-07-12 2022-01-25 Telefonaktiebolaget Lm Ericsson (Publ) Bilateral alpha omega: calculating LUT argument with few arithmetic operations
CN113553460A (en) * 2021-08-13 2021-10-26 北京安德医智科技有限公司 Image retrieval method and device, electronic device and storage medium

Also Published As

Publication number Publication date
CN110852334A (en) 2020-02-28
US10186022B2 (en) 2019-01-22
US20170301072A1 (en) 2017-10-19
CN106937020A (en) 2017-07-07
CN110852334B (en) 2023-08-22
US9710722B1 (en) 2017-07-18
CN106937020B (en) 2019-12-03

Similar Documents

Publication Publication Date Title
US10186022B2 (en) System and method for adaptive pixel filtering
US9615039B2 (en) Systems and methods for reducing noise in video streams
US20210004962A1 (en) Generating effects on images using disparity guided salient object detection
JP2020518191A (en) Quantization parameter prediction maintaining visual quality using deep neural network
WO2015184208A1 (en) Constant bracketing for high dynamic range operations (chdr)
CN108986197B (en) 3D skeleton line construction method and device
US20150302551A1 (en) Content aware video resizing
US10230957B2 (en) Systems and methods for encoding 360 video
US9031350B2 (en) Method for processing edges in an image and image processing apparatus
JP2017098957A (en) Method for generating user interface presenting videos
CN116744056A (en) Electronic device and control method thereof
CN113132695A (en) Lens shadow correction method and device and electronic equipment
US20150363666A1 (en) Image processing method and image processing device
WO2015198368A1 (en) Image processing device and image processing method
CN106157257A (en) The method and apparatus of image filtering
CN113744159A (en) Remote sensing image defogging method and device and electronic equipment
CN113628259A (en) Image registration processing method and device
CN110717913B (en) Image segmentation method and device
US20150130907A1 (en) Plenoptic camera device and shading correction method for the camera device
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
US20130300890A1 (en) Image processing apparatus, image processing method, and program
CN114419322A (en) Image instance segmentation method and device, electronic equipment and storage medium
CN103974043B (en) Image processor and image treatment method
US10062149B2 (en) Methods for blending resembling blocks and apparatuses using the same
WO2015128302A1 (en) Method and apparatus for filtering and analyzing a noise in an image

Legal Events

Date Code Title Description
AS Assignment

Owner name: STMICROELECTRONICS SA, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DROUOT, ANTOINE;REEL/FRAME:037719/0182

Effective date: 20160104

Owner name: STMICROELECTRONICS INTERNATIONAL N.V., NETHERLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHANDRA, MAHESH;REEL/FRAME:037800/0244

Effective date: 20151221

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4