WO2014064968A1 - Image processing apparatus, image processing method, and computer program - Google Patents
Image processing apparatus, image processing method, and computer program
- Publication number
- WO2014064968A1 (PCT/JP2013/068226)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- smoothing
- pixel
- image processing
- value
- determination
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/409—Edge or detail enhancement; Noise or error suppression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/142—Edging; Contouring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20012—Locally adaptive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- The present invention relates to an image processing apparatus, an image processing method, and a computer program capable of determining noise contained in an input image and removing or reducing that noise.
- Noise contained in an image includes, for example, noise in which the pixel value changes stepwise between adjacent pixels, such as block noise.
- Block noise is generated when image compression processing such as MPEG (Moving Picture Experts Group) or JPEG (Joint Photographic Experts Group) coding is performed.
- In image compression processing such as MPEG or JPEG, the original image is divided into blocks of a specific size and compressed block by block. As a result, pixel values become discontinuous at the boundaries between adjacent blocks, and the block boundaries may be visually perceived by the user as noise.
- Such block noise is not limited to image compression processing; it may occur whenever an image is divided into blocks of a specific size and processed block by block.
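The mechanism above can be sketched with a toy 1-D model (the block size, quantisation step, and ramp input are illustrative assumptions, not values from the patent):

```python
def quantize_blocks(row, block=8, step=8):
    """Toy model of block-based compression: replace each block with its
    quantised mean, producing step changes at block boundaries."""
    out = []
    for i in range(0, len(row), block):
        mean = sum(row[i:i + block]) / block
        out += [round(mean / step) * step] * block
    return out

# A smooth ramp turns into a staircase: pixels inside a block become
# equal, while the value jumps at every block boundary (block noise).
ramp = list(range(32))
noisy = quantize_blocks(ramp)
jumps_at_boundaries = [abs(noisy[i] - noisy[i - 1]) for i in (8, 16, 24)]
jump_inside_block = abs(noisy[4] - noisy[3])
```

The adjacent differences inside a block vanish while the differences at block boundaries grow, which is exactly the stepwise pattern the detection schemes below look for.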
- Patent Document 1 proposes a block noise detection device that calculates spatial differences between adjacent pixels of an input video signal, outputs a spatial difference comparison determination signal from the comparison of a plurality of adjacent spatial differences, counts the determination signal for each phase to output cumulative value signals, compares the values of the plurality of cumulative value signals, and outputs the phase corresponding to the cumulative value signal having the maximum value as the maximum cumulative phase signal.
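A minimal sketch of that cumulative-phase idea, assuming a fixed block size of 8 pixels and a hypothetical difference threshold (the cited device's actual signals are more elaborate):

```python
def estimate_block_phase(row, block=8, thresh=4):
    """Sketch of the cited detector: adjacent differences exceeding a
    threshold are counted per phase (index modulo block size); the phase
    with the largest cumulative count is output as the boundary phase."""
    counts = [0] * block
    for i in range(1, len(row)):
        if abs(row[i] - row[i - 1]) > thresh:
            counts[i % block] += 1  # the jump sits at pixel index i
    best = max(range(block), key=lambda p: counts[p])
    return best, counts

# Step edges at indices 8, 16 and 24: every large difference falls on
# phase 0, so the detector recovers the block grid.
row = [0] * 8 + [8] * 8 + [16] * 8 + [24] * 8
phase, counts = estimate_block_phase(row)
```

Note that the modulo arithmetic bakes in a fixed block size, which illustrates the limitation discussed next: a variable block size defeats this kind of detector.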
- However, the block noise detection device described in Patent Document 1 can only detect block noise of a fixed size, and therefore cannot handle images generated by image processing with a variable block size.
- Furthermore, in Patent Document 1 the detected noise boundary is one pixel wide, so when the pixels surrounding a pixel on the noise boundary are smoothed, there is a problem that one of the adjacent regions (left/right, or above/below) is smoothed by one pixel too many.
- The block noise detection apparatus described in Patent Document 1 also always performs smoothing on detected portions. Pixels that do not actually need smoothing are therefore smoothed as well, and the image quality may deteriorate.
- In addition, the same smoothing filter has conventionally been used regardless of the type of detected noise (for example, noise containing only low-frequency components versus noise containing both low- and high-frequency components). When an image contains multiple types of noise, the result may therefore be high removal performance for one type of noise but low performance for another.
- The present invention has been made in view of such circumstances. An object of the present invention is to provide an image processing apparatus, an image processing method, and a computer program capable of accurately detecting noise contained in an input image and removing or reducing it. Another object is to provide an image processing apparatus, an image processing method, and a computer program capable of performing appropriate smoothing to remove or reduce noise even when the input image contains a plurality of types of noise.
- An image processing apparatus according to the present invention performs image processing for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and comprises: specific area extracting means for extracting a specific area including a pixel of interest; smoothing determining means for determining, based on the specific area extracted by the specific area extracting means, whether or not the pixel of interest is a target of a first smoothing process; first smoothing means for performing, when the smoothing determining means determines that the first smoothing process is to be performed, the first smoothing process using a smoothing filter to smooth the pixel value of the pixel of interest; and second smoothing means for performing, when the smoothing determining means determines that the first smoothing process is not to be performed, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the pixel of interest. An output image is generated in which the pixel value of each pixel of the input image is either the pixel value smoothed by the first smoothing means or the pixel value smoothed by the second smoothing means.
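The branch between the two smoothing processes can be sketched in 1-D as follows (the region size, thresholds, and the particular filters are assumptions; the patent's actual determination and filters are more elaborate):

```python
def smooth_pixel(row, i, radius=2, edge_thresh=6):
    """Minimal 1-D sketch of the claimed pipeline: extract a specific
    region around the pixel of interest, decide whether it contains a
    step-like noise boundary, then apply either a plain mean filter
    (first smoothing) or an edge-preserving filter (second smoothing)."""
    lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
    region = row[lo:hi]
    # crude smoothing determination: a large adjacent difference marks
    # the region as a first-smoothing target (a block-noise-like step)
    step_like = any(abs(region[k] - region[k - 1]) > edge_thresh
                    for k in range(1, len(region)))
    if step_like:
        return sum(region) / len(region)          # first smoothing
    # edge-preserving smoothing: average only neighbours whose value is
    # close to the pixel of interest, so genuine edges stay sharp
    near = [v for v in region if abs(v - row[i]) <= edge_thresh]
    return sum(near) / len(near)                  # second smoothing

step = [10, 10, 10, 20, 20, 20]            # a block-noise-like step
smoothed_on_step = smooth_pixel(step, 2)   # mean of [10,10,10,20,20]
flat = [10] * 6
smoothed_on_flat = smooth_pixel(flat, 2)
```

The point of the two-branch design is that step-like block noise gets blended away, while regions without a detected step are only smoothed in a way that preserves real edges.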
- The image processing apparatus preferably further comprises noise boundary direction determining means for determining the direction in which a noise boundary portion extends in the specific area when the smoothing determining means determines that the first smoothing process is to be performed, and noise boundary position determining means for determining the position of the noise boundary portion in the specific area in the same case.
- In the image processing apparatus, the noise boundary position determining means determines the position of the noise boundary portion in the specific area when the smoothing determining means determines that the pixel is a target of the smoothing process and the direction determined by the noise boundary direction determining means is a predetermined direction.
- The image processing apparatus preferably further comprises specific area enlarging means for enlarging the specific area when the direction determined by the noise boundary direction determining means is not a predetermined direction; the noise boundary direction determining means then determines the direction again for the specific area enlarged by the specific area enlarging means.
- Until the direction determined by the noise boundary direction determining means is a predetermined direction, or until the specific area has been enlarged to a predetermined size by the specific area enlarging means, the enlargement of the specific area by the specific area enlarging means and the direction determination by the noise boundary direction determining means are performed repeatedly.
- The image processing apparatus preferably comprises smoothing filter storing means for storing a plurality of smoothing filters used by the first smoothing means, and filter selecting means for selecting a smoothing filter from the smoothing filter storing means in accordance with the position determined by the noise boundary position determining means and the direction determined by the noise boundary direction determining means; the first smoothing means smooths the pixel value of the pixel of interest using the smoothing filter selected by the filter selecting means.
- The image processing apparatus preferably further comprises differential value calculating means for calculating differential values of pixel values between pixels in the image of the specific area extracted by the specific area extracting means; the smoothing determining means determines, based on the differential values calculated by the differential value calculating means, whether or not the pixel of interest is a target of the smoothing process.
- In the image processing apparatus, the differential value calculating means calculates first-order and second-order differential values of pixel values between adjacent pixels, and the smoothing determining means performs the determination based on the first-order and second-order differential values calculated by the differential value calculating means.
- The image processing apparatus preferably comprises: first-order differential value binarizing means for binarizing each first-order differential value according to whether or not it exceeds a threshold; first logical sum calculating means for calculating the logical sum (OR) of the binarized first-order differential values; second-order differential value binarizing means for binarizing each second-order differential value according to whether or not it exceeds a threshold; second logical sum calculating means for calculating the logical sum of the binarized second-order differential values; and third logical sum calculating means for calculating the logical sum of the results of the first and second logical sum calculating means. The smoothing determining means performs the determination based on the result of the third logical sum calculating means.
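The three logical sums above can be sketched as follows (thresholds are hypothetical):

```python
def first_smoothing_flag(region, t1=4, t2=4):
    """Sketch of the claimed determination: binarise the first- and
    second-order differentials against thresholds, OR each set (first
    and second logical sums), then OR those two results (third logical
    sum) to decide whether the pixel is a first-smoothing target."""
    d1 = [region[k + 1] - region[k] for k in range(len(region) - 1)]
    d2 = [d1[k + 1] - d1[k] for k in range(len(d1) - 1)]
    b1 = [abs(v) > t1 for v in d1]   # binarised first differentials
    b2 = [abs(v) > t2 for v in d2]   # binarised second differentials
    return any(b1) or any(b2)        # first, second and third OR

flag_on_step = first_smoothing_flag([0, 0, 8, 8, 8])   # contains a step
flag_on_flat = first_smoothing_flag([5, 5, 5, 5, 5])   # uniform region
```

Using both derivative orders lets a single large jump (first derivative) or a change in slope (second derivative) trigger the flag, while a uniform or gently sloping region does not.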
- In the image processing apparatus, the differential value calculating means, the first-order differential value binarizing means, the first logical sum calculating means, the second-order differential value binarizing means, the second logical sum calculating means, and the third logical sum calculating means each process the specific area in the vertical direction and in the horizontal direction, and the smoothing determining means performs the determination based on the vertical and horizontal results of the third logical sum calculating means.
- The image processing apparatus preferably further comprises noise boundary direction determining means for determining the direction in which a noise boundary portion extends in the specific area when the smoothing determining means determines that the first smoothing process is to be performed; the noise boundary direction determining means determines, based on the vertical and horizontal results of the third logical sum calculating means, whether the noise boundary portion in the specific area extends in the vertical direction and/or the horizontal direction.
- The image processing apparatus preferably further comprises noise boundary position determining means for determining the position of a noise boundary portion in the specific area when the smoothing determining means determines that the first smoothing process is to be performed; the noise boundary position determining means determines the position based on the pattern, within the specific area, of the second-order differential values binarized by the second-order differential value binarizing means.
- The image processing apparatus preferably comprises: Sobel filter storing means for storing a plurality of Sobel filters, each detecting the strength of the edge component in a different specific direction; edge strength calculating means for filtering the specific area extracted by the specific area extracting means with the plurality of stored Sobel filters and calculating the strengths of the edge components contained in the specific area in a plurality of directions; and edge strength difference determining means for determining whether or not the difference between the maximum and minimum of the calculated edge strengths exceeds a threshold. When the edge strength difference determining means determines that the difference does not exceed the threshold, the second smoothing means smooths the pixel value of the pixel of interest in the specific area using a smoothing filter that is not edge-preserving.
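A sketch of this directional test using the two standard 3x3 Sobel kernels (the patent may store more directions; the threshold is hypothetical):

```python
# Two directional Sobel kernels (horizontal- and vertical-gradient)
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def response(kernel, region):
    """Absolute filter response of a 3x3 kernel over a 3x3 region."""
    return abs(sum(kernel[r][c] * region[r][c]
                   for r in range(3) for c in range(3)))

def keep_edge_preserving(region, thresh=8):
    """If the spread between the strongest and weakest directional edge
    strength exceeds a threshold, the region has a dominant edge
    direction and the edge-preserving filter is kept; otherwise a plain
    smoothing filter is used for the second smoothing."""
    strengths = [response(k, region) for k in (SOBEL_X, SOBEL_Y)]
    return max(strengths) - min(strengths) > thresh

vertical_edge = [[0, 0, 8], [0, 0, 8], [0, 0, 8]]   # strong in X only
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]            # no direction
```

The max-minus-min test distinguishes an oriented edge (one kernel responds much more strongly than the others) from isotropic or flat content, where the responses are similar and aggressive smoothing is safe.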
- The image processing apparatus preferably further comprises application determining means for determining whether or not to apply the smoothing result of the second smoothing means, and generates an output image in which the pixel value of each pixel of the input image is one of the pixel value smoothed by the first smoothing means, the pixel value smoothed by the second smoothing means, and the original, unsmoothed pixel value.
- The image processing apparatus preferably comprises second edge strength calculating means for filtering the specific area extracted by the specific area extracting means with a Laplacian filter and calculating the strength of the edge component contained in the specific area, and edge strength determining means for determining whether or not the calculated strength exceeds a threshold. The application determining means determines that the smoothing result of the second smoothing means is applied when the edge strength determining means determines that the strength exceeds the threshold, and that it is not applied when the strength does not exceed the threshold.
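A sketch of this test using the common 4-neighbour Laplacian kernel (the kernel choice and threshold are assumptions, not taken from the patent):

```python
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # 4-neighbour Laplacian

def apply_by_laplacian(region, thresh=4):
    """Apply the second smoothing result only when the Laplacian
    response of the 3x3 region exceeds a threshold, i.e. the region
    actually contains high-frequency content worth smoothing."""
    resp = abs(sum(LAPLACIAN[r][c] * region[r][c]
                   for r in range(3) for c in range(3)))
    return resp > thresh

spike = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]   # isolated bright pixel
flat = [[3, 3, 3], [3, 3, 3], [3, 3, 3]]    # uniform region
```

The Laplacian responds to second-derivative (high-frequency) structure and is zero on uniform or linearly varying regions, so it gates the smoothing to areas that actually contain noise-like detail.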
- The image processing apparatus preferably comprises increase/decrease count calculating means for calculating the number of increases and decreases of pixel values between pixels adjacent in a specific direction within the specific area extracted by the specific area extracting means, and increase/decrease count determining means for determining whether or not the calculated count exceeds a threshold. The application determining means determines that the smoothing result of the second smoothing means is applied when the count does not exceed the threshold, and that it is not applied when the count exceeds the threshold.
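One plausible reading of the increase/decrease count, sketched with a hypothetical threshold:

```python
def increase_decrease_count(region):
    """Count how often adjacent pixel values switch between rising and
    falling along one direction. Many switches indicate fine texture
    rather than a block-noise step."""
    signs = []
    for k in range(1, len(region)):
        d = region[k] - region[k - 1]
        if d != 0:
            signs.append(1 if d > 0 else -1)
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def second_smoothing_applies(region, thresh=2):
    """The smoothing result is applied only when the count stays at or
    below a (hypothetical) threshold."""
    return increase_decrease_count(region) <= thresh

count_texture = increase_decrease_count([0, 5, 0, 5, 0, 5])  # zig-zag
count_ramp = increase_decrease_count([0, 1, 2, 3, 4, 5])     # monotone
```

A zig-zag region flips direction at almost every pixel and is left alone (the oscillation is likely intentional texture), while a monotone ramp or step passes the test and can be smoothed.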
- The image processing apparatus preferably comprises smoothing difference calculating means for calculating the difference between the pixel value of the pixel of interest contained in the specific area extracted by the specific area extracting means and the pixel value smoothed by the second smoothing means, and smoothing difference determining means for determining whether or not the calculated difference exceeds a threshold. The application determining means determines that the smoothing result of the second smoothing means is applied when the difference does not exceed the threshold, and that it is not applied when the difference exceeds the threshold.
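A minimal sketch of this difference test (the threshold is hypothetical):

```python
def keep_smoothed_value(original, smoothed, thresh=10):
    """The second smoothing result is applied only when it does not
    move the pixel of interest too far from its original value; a
    large difference suggests a genuine edge was blurred away."""
    return abs(original - smoothed) <= thresh

small_change = keep_smoothed_value(100, 104)   # gentle correction: keep
large_change = keep_smoothed_value(100, 130)   # drastic change: reject
```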
- An image processing apparatus according to the present invention performs image processing for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and comprises: specific area extracting means for extracting a specific area including a pixel of interest; smoothing determining means for determining, based on the specific area extracted by the specific area extracting means, whether or not the pixel of interest is a target of a first smoothing process; first smoothing means for performing, when the smoothing determining means determines that the first smoothing process is to be performed, the first smoothing process using a smoothing filter to smooth the pixel value of the pixel of interest; second smoothing means for smoothing the pixel value of the pixel of interest; and application determining means for determining whether or not to apply the smoothing result of the second smoothing means. An output image is generated in which the pixel value of each pixel of the input image is one of the pixel value smoothed by the first smoothing means, the pixel value smoothed by the second smoothing means, and the original, unsmoothed pixel value.
- An image processing apparatus according to the present invention performs image processing for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and comprises: specific area extracting means for extracting a specific area including a pixel of interest; differential value calculating means for calculating differential values of pixel values between pixels in the image of the extracted specific area; smoothing determining means for determining, based on the calculated differential values, whether or not the pixel of interest is a target of a smoothing process; noise boundary direction determining means for determining, when the smoothing determining means determines that the pixel is a smoothing target, the direction in which a noise boundary portion extends in the specific area; noise boundary position determining means for determining, in the same case, the position of the noise boundary portion in the specific area; and smoothing means for performing the smoothing process on the pixel of interest based on the direction determined by the noise boundary direction determining means and the position determined by the noise boundary position determining means.
- An image processing apparatus according to the present invention comprises: specific area extracting means for extracting a specific area including a pixel of interest from an input image composed of a plurality of pixels arranged in a matrix; differential value calculating means for calculating differential values of pixel values between pixels in the image of the extracted specific area; noise boundary direction determining means for determining, based on the calculated differential values, the direction in which a noise boundary portion extends in the specific area; and noise boundary position determining means for determining, based on the calculated differential values, the position of the noise boundary portion in the specific area.
- An image processing method according to the present invention generates an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and comprises: a specific area extracting step of extracting a specific area including a pixel of interest; a smoothing determining step of determining, based on the specific area extracted in the specific area extracting step, whether or not the pixel of interest is a target of a first smoothing process; a first smoothing step of performing, when the first smoothing process is determined to be performed, the first smoothing process using a smoothing filter to smooth the pixel value of the pixel of interest; a second smoothing step of performing, otherwise, a second smoothing process using an edge-preserving smoothing filter; and a generating step of generating an output image in which the pixel value of each pixel is one of the smoothed pixel values.
- The image processing method preferably further comprises a noise boundary direction determining step of determining the direction in which a noise boundary portion extends in the specific area when the smoothing determining step determines that the first smoothing process is to be performed, and a noise boundary position determining step of determining the position of the noise boundary portion in the specific area when the smoothing determining step determines that the pixel is a target of the smoothing process and the direction determined in the noise boundary direction determining step is a predetermined direction.
- The image processing method preferably further comprises a specific area enlarging step of enlarging the specific area when the direction determined in the noise boundary direction determining step is not a predetermined direction; in the noise boundary direction determining step, the direction is then determined again for the specific area enlarged in the specific area enlarging step.
- Until the direction determined in the noise boundary direction determining step is a predetermined direction, or until the specific area has been enlarged to a predetermined size in the specific area enlarging step, the enlargement of the specific area and the direction determination are performed repeatedly.
- In the image processing method, a plurality of smoothing filters used in the first smoothing step are preferably stored, and a smoothing filter is selected from among them in accordance with the position determined in the noise boundary position determining step and the direction determined in the noise boundary direction determining step; in the first smoothing step, the pixel value of the pixel of interest is smoothed using the selected smoothing filter.
- The image processing method preferably comprises a differential value calculating step of calculating differential values of pixel values between pixels in the image of the specific area extracted in the specific area extracting step; in the smoothing determining step, whether or not the pixel of interest is a target of the smoothing process is determined based on the differential values calculated in the differential value calculating step.
- In the image processing method, the differential value calculating step calculates first-order and second-order differential values of pixel values between adjacent pixels, and the smoothing determining step performs the determination based on the calculated first-order and second-order differential values.
- The image processing method preferably comprises: a first-order differential value binarizing step of binarizing each first-order differential value calculated in the differential value calculating step according to whether or not it exceeds a threshold; a first logical sum calculating step of calculating the logical sum of the binarized first-order differential values; a second-order differential value binarizing step of binarizing each second-order differential value according to whether or not it exceeds a threshold; a second logical sum calculating step of calculating the logical sum of the binarized second-order differential values; and a third logical sum calculating step of calculating the logical sum of the results of the first and second logical sum calculating steps. In the smoothing determining step, the determination is performed based on the result of the third logical sum calculating step.
- In the image processing method, the differential value calculating step, the first-order differential value binarizing step, the first logical sum calculating step, the second-order differential value binarizing step, the second logical sum calculating step, and the third logical sum calculating step each process the specific area in the vertical direction and in the horizontal direction, and the smoothing determining step performs the determination based on the vertical and horizontal results of the third logical sum calculating step.
- The image processing method preferably further comprises a noise boundary direction determining step of determining the direction in which a noise boundary portion extends in the specific area when the smoothing determining step determines that the first smoothing process is to be performed; in the noise boundary direction determining step, whether the noise boundary portion in the specific area extends in the vertical direction and/or the horizontal direction is determined based on the vertical and horizontal results of the third logical sum calculating step.
- The image processing method preferably further comprises a noise boundary position determining step of determining the position of a noise boundary portion in the specific area when the smoothing determining step determines that the first smoothing process is to be performed; in the noise boundary position determining step, the position is determined based on the pattern, within the specific area, of the second-order differential values binarized in the second-order differential value binarizing step.
- In the image processing method, a plurality of Sobel filters, each detecting the strength of the edge component in a different specific direction, are preferably stored; the specific area extracted in the specific area extracting step is filtered with the stored Sobel filters to calculate the strengths of the edge components contained in the specific area in a plurality of directions, and an edge strength difference determining step determines whether or not the difference between the maximum and minimum of the calculated edge strengths exceeds a threshold. When the difference is determined not to exceed the threshold, the pixel value of the pixel of interest in the specific area is smoothed in the second smoothing step using a smoothing filter that is not edge-preserving.
- the image processing method includes an application determination step of determining whether or not to apply the smoothing result of the second smoothing step, and in the generation step, an output image is generated in which the pixel value of each pixel of the input image is one of the pixel value smoothed in the first smoothing step, the pixel value smoothed in the second smoothing step, and the original pixel value that has not been smoothed.
- the image processing method includes a second edge intensity calculation step of calculating the strength of the edge component included in the specific region by performing filter processing using a Laplacian filter on the specific region extracted in the specific region extraction step, and an edge intensity determination step of determining whether the intensity calculated in the second edge intensity calculation step exceeds a threshold value.
- in the application determination step, when the edge intensity determination step determines that the intensity exceeds the threshold value, it is determined that the smoothing result of the second smoothing step is applied, and when the edge intensity determination step determines that the intensity does not exceed the threshold value, it is determined that the smoothing result of the second smoothing step is not applied.
- the image processing method includes an increase/decrease count calculation step of calculating the number of increases and decreases of pixel values between pixels adjacent in a specific direction in the specific region extracted in the specific region extraction step, and an increase/decrease count determination step of determining whether or not the count calculated in the increase/decrease count calculation step exceeds a threshold value.
- in the application determination step, when the increase/decrease count determination step determines that the count does not exceed the threshold value, it is determined that the smoothing result of the second smoothing step is applied, and when the increase/decrease count determination step determines that the count exceeds the threshold value, it is determined that the smoothing result of the second smoothing step is not applied.
- the image processing method calculates the difference between the pixel value of the target pixel included in the specific area extracted in the specific area extraction step and the pixel value smoothed in the second smoothing step, and in the application determination step it is determined that the smoothing result of the second smoothing step is applied when the difference does not exceed a threshold value.
- an image processing method according to the present invention is an image processing method for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and includes: a specific area extraction step of extracting a specific area including a target pixel; a smoothing determination step of determining, based on the specific area extracted in the specific area extraction step, whether or not the target pixel is to be subjected to a first smoothing process; a first smoothing step of performing, when the smoothing determination step determines that the target pixel is to be subjected to the first smoothing process, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; a second smoothing step of performing, when the smoothing determination step determines that the target pixel is not to be subjected to the first smoothing process, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel; an application determination step of determining whether or not to apply the smoothing result obtained in the second smoothing step; and a generation step of generating an output image in which the pixel value of each pixel of the input image is one of the pixel value smoothed in the first smoothing step, the pixel value smoothed in the second smoothing step, and the original pixel value that has not been smoothed.
- an image processing method according to the present invention is an image processing method for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and includes: a specific area extraction step of extracting a specific area including a target pixel; a differential value calculation step of calculating differential values of pixel values between pixels for the image of the specific area extracted in the specific area extraction step; a smoothing determination step of determining, based on the differential values calculated in the differential value calculation step, whether or not the target pixel is to be subjected to a smoothing process; a noise boundary direction determination step of determining, when the smoothing determination step determines that the target pixel is to be subjected to the smoothing process, the direction in which the noise boundary portion extends in the specific region; a noise boundary position determination step of determining the position of the noise boundary portion in the specific region; and a smoothing step of performing a smoothing process on the target pixel determined by the smoothing determination step to be a target of the smoothing process, based on the direction determined in the noise boundary direction determination step and the position determined in the noise boundary position determination step.
- an image processing method according to the present invention includes: a specific area extraction step of extracting a specific area including a target pixel from an input image composed of a plurality of pixels arranged in a matrix; a differential value calculation step of calculating differential values of pixel values between pixels for the image of the specific area extracted in the specific area extraction step; a noise boundary direction determination step of determining, based on the differential values calculated in the differential value calculation step, the direction in which a noise boundary portion extends in the specific region; and a noise boundary position determination step of determining, based on the differential values calculated in the differential value calculation step, the position of the noise boundary portion in the specific region.
- the computer program according to the present invention is a computer program for causing a computer to perform image processing for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and causes the computer to operate as: specific region extraction means for extracting a specific region including the pixel of interest from the input image; smoothing determination means for determining, based on the specific region extracted by the specific region extraction means, whether or not the pixel of interest is a target of a first smoothing process; first smoothing means for performing, when the smoothing determination means determines that the first smoothing process is to be performed, the first smoothing process using a smoothing filter to smooth the pixel value of the pixel of interest; second smoothing means for performing, when the smoothing determination means determines that the pixel of interest is not a target of the first smoothing process, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the pixel of interest; and generation means for generating an output image in which the pixel value of each pixel of the input image is either the pixel value smoothed by the first smoothing means or the pixel value smoothed by the second smoothing means.
- the computer program according to the present invention is a computer program for causing a computer to perform image processing for generating an output image in which noise is removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, and causes the computer to operate as: specific region extraction means for extracting a specific region including the target pixel from the input image; differential value calculation means for calculating differential values of pixel values between pixels for the image of the specific region extracted by the specific region extraction means; smoothing determination means for determining, based on the calculated differential values, whether or not the target pixel is a target of a smoothing process; noise boundary direction determination means for determining the direction in which a noise boundary portion extends in the specific area; noise boundary position determination means for determining the position of the noise boundary portion in the specific region; and smoothing means for performing a smoothing process on the target pixel when the smoothing determination means determines that the target pixel is a target of the smoothing process.
- the computer program according to the present invention causes a computer to operate as: specific area extraction means for extracting a specific area including a target pixel from an input image composed of a plurality of pixels arranged in a matrix; differential value calculation means for calculating differential values of pixel values between pixels for the image of the specific area extracted by the specific area extraction means; noise boundary direction determination means for determining, based on the differential values calculated by the differential value calculation means, the direction in which a noise boundary portion extends in the specific area; and noise boundary position determination means for determining, based on the differential values calculated by the differential value calculation means, the position of the noise boundary portion in the specific region.
- in the present invention, a specific area including the target pixel is extracted from the input image, differential values of pixel values between adjacent pixels are calculated, and it is determined based on the calculated differential values whether or not the target pixel in the specific area is to be smoothed. When the pixel is a smoothing target, the extending direction and the position of the noise boundary are determined. Accordingly, appropriate filtering can be performed based on the direction and position of the noise boundary portion, and noise can be accurately removed or reduced from the input image.
- in the present invention, when the extending direction of the noise boundary portion in the specific area is not a predetermined direction, the specific area is enlarged and the direction of the noise boundary portion is determined again. This may be repeated until the direction of the noise boundary portion becomes the predetermined direction or until the specific area has been enlarged to a predetermined size. Block noise of various sizes can thereby be detected with high accuracy.
- in the present invention, the primary differential values and the secondary differential values of the pixel values between adjacent pixels are calculated to determine whether or not to perform the smoothing process on the target pixel in the specific region. The primary differential values are binarized by comparison with a threshold value and their logical sum is calculated; the secondary differential values are likewise binarized by comparison with a threshold value and their logical sum is calculated. The logical sum of these two logical sums can then be further calculated, and the determination can be made based on the calculation result.
- in the present invention, whether the noise boundary portion extends in the vertical direction and/or the horizontal direction is decided based on the logical sums for the vertical direction and the horizontal direction obtained by the above calculation. It can thus be easily determined whether the extending direction of the noise boundary portion is vertical and/or horizontal. Further, the position of the noise boundary portion in the specific area is determined based on the 0/1 arrangement pattern of the binarized secondary differential values within the specific area, so the position of the noise boundary portion can be easily determined.
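As a rough illustration (not the patent's exact procedure), the direction and position decisions above can be sketched as follows. The helper names and the simple "all-ones OR sequence means a boundary crosses every row/column" interpretation are assumptions made for this sketch:

```python
def boundary_direction(horizontal_or, vertical_or):
    """Decide the extending direction of a noise boundary from two 0/1
    OR sequences produced by the horizontal and vertical determinations."""
    # horizontal_or: per-row OR of binarized horizontal differences;
    # all ones suggests a boundary running vertically through the region.
    h_active = all(v == 1 for v in horizontal_or)
    # vertical_or: per-column OR of binarized vertical differences;
    # all ones suggests a boundary running horizontally.
    v_active = all(v == 1 for v in vertical_or)
    if h_active and v_active:
        return "both"        # e.g. L-shape, T-shape, or cross
    if h_active:
        return "vertical"
    if v_active:
        return "horizontal"
    return "none"

def boundary_position(binarized_second_diff_row):
    """Report where the binarized secondary differential values are 1;
    this 0/1 pattern localizes the boundary inside the region."""
    return [i for i, v in enumerate(binarized_second_diff_row) if v == 1]
```

The same two OR sequences feed both the smoothing determination and the direction determination, which is why the patent computes them once per region.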
- in the present invention, a plurality of smoothing filters for removing or reducing noise are stored, a smoothing filter is selected according to the position and direction of the noise boundary portion existing in the specific region, and the pixel value of the pixel of interest in the specific area is smoothed by the selected smoothing filter.
- the image processing apparatus selects, as the pixel value of the output image for the target pixel of the input image, one of the pixel value smoothed by the first smoothing process, the pixel value smoothed by the second smoothing process, and the pixel value of the original input image that has not been smoothed. An output image can be generated by performing this selection for all the pixels of the input image. Smoothing suited to each pixel can thereby be performed, and pixels that do not need smoothing are left untouched.
- in the present invention, the strength of the edge component included in the specific region is calculated in a plurality of directions using a plurality of Sobel filters, the maximum value and the minimum value are selected from the calculated edge strengths, and it is determined whether or not the difference between them exceeds a threshold value. When the difference does not exceed the threshold value, the specific region can be regarded as a substantially flat image containing no dominant edge component, and smoothing can therefore be performed using a smoothing filter that is not edge-preserving.
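A minimal sketch of this spread test, assuming common 3×3 Sobel-style kernels for 0°, 45°, 90°, and 135° (the patent does not fix the kernel coefficients or the number of directions):

```python
import numpy as np

# Hypothetical directional kernels; only their relative responses matter here.
SOBEL = {
    0:   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    90:  np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
    45:  np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
    135: np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),
}

def edge_strength_spread(region):
    """Apply each kernel to the 3x3 patch around the centre of `region`
    and return the difference between the strongest and weakest response."""
    c = region.shape[0] // 2
    patch = region[c - 1:c + 2, c - 1:c + 2].astype(float)
    strengths = [abs(float(np.sum(patch * k))) for k in SOBEL.values()]
    return max(strengths) - min(strengths)
```

If the spread stays below the threshold, no single direction dominates, so the region is treated as flat and a plain (non-edge-preserving) smoothing filter may be used.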
- in the present invention, whether to apply the result of the smoothing process using the edge-preserving smoothing filter is determined based on one or more conditions. For example, if the input image has a clear edge or pattern, the smoothing process may degrade image quality, so the original pixel value is used without applying the smoothing result.
- in the present invention, filter processing using a Laplacian filter is performed on the extracted specific region to calculate the strength of the edge component included in the specific region, and it is determined whether or not the calculated strength exceeds a threshold value. When the strength exceeds the threshold value, the smoothing result is applied; when it does not, the smoothing result is not applied.
- for a low-contrast edge, the edge strength may be determined to be small depending on the pixel arrangement and the like, and such an edge may be smoothed away rather than preserved by the edge-preserving smoothing filter; in that case the smoothing result is not applied.
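A sketch of the Laplacian-based application test, using the common 4-neighbour Laplacian kernel (the patent does not specify the kernel or where in the region it is evaluated; here it is applied only around the centre pixel):

```python
import numpy as np

# Common 3x3 Laplacian kernel; an assumption for this sketch.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]])

def apply_second_smoothing(region, threshold):
    """Return True when the edge-preserving smoothing result should be
    applied: the Laplacian response around the centre pixel must exceed
    the threshold, i.e. the edge is strong enough to be preserved."""
    c = region.shape[0] // 2
    patch = region[c - 1:c + 2, c - 1:c + 2].astype(float)
    strength = abs(float(np.sum(patch * LAPLACIAN)))
    return strength > threshold
```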
- in the present invention, the frequency of increase and decrease of the pixel values in the specific region is calculated, and if the calculated frequency is larger than the threshold value, the specific region is regarded as a texture region and the result of the smoothing process is not applied. The texture can thereby be prevented from being degraded by the smoothing process.
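One way to read "frequency of increase and decrease" is the number of times the pixel values switch between rising and falling along a line; the sketch below takes that interpretation (an assumption, since the patent leaves the exact counting rule open):

```python
def direction_change_count(values):
    """Count how many times the pixel values switch between increasing
    and decreasing along a line of pixels; a high count suggests texture."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    signs = [d for d in diffs if d != 0]      # ignore flat steps
    return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))

def is_texture(region_rows, threshold):
    # Hypothetical helper: accumulate the direction changes over every row.
    total = sum(direction_change_count(row) for row in region_rows)
    return total > threshold
```

A region flagged as texture keeps its original pixel values, so fine detail is not washed out by the smoothing filter.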
- in the present invention, the difference between the pixel values of the target pixel before and after the smoothing process is calculated, and when the difference exceeds a threshold value, the result of the smoothing process is not applied. The influence of smoothing can thereby be kept within the threshold value.
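This last check is simple enough to state directly; a sketch (function name assumed):

```python
def limited_smoothing(original, smoothed, max_delta):
    """Apply the edge-preserving smoothing result only when it changes
    the target pixel by at most max_delta; otherwise keep the original
    value, so the influence of smoothing stays within the threshold."""
    return smoothed if abs(smoothed - original) <= max_delta else original
```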
- according to the present invention, noise included in the input image can be accurately detected and smoothing suited to the position and direction of the noise boundary portion can be performed, so that the noise can be effectively removed or reduced.
- when the direction of the noise boundary in the specific area is not a predetermined direction, the specific area is enlarged and the direction of the noise boundary is determined again, so that block noise of various sizes can be detected with high accuracy.
- for each pixel, either the pixel value smoothed using the smoothing filter, the pixel value smoothed using the edge-preserving smoothing filter, or the original unsmoothed pixel value is selected, so that each pixel of the output image receives the treatment suited to it.
- FIG. 1 is a block diagram illustrating a configuration of a display device according to the present embodiment.
- reference numeral 1 denotes a display device, which is a device that performs various image processing on a still image or a moving image input from an external device such as the PC 5 and displays it on the liquid crystal panel 13.
- the display device 1 includes an image input unit 16, an image expansion unit 17, an image processing unit 20, a panel drive unit 18, and the like for driving the liquid crystal panel 13 based on an input image from the PC 5.
- the display device 1 also includes a backlight 14 that irradiates light for display onto the back surface of the liquid crystal panel 13 and a light driving unit 15 that drives the backlight 14.
- the display device 1 includes an operation unit 12 that receives a user operation, and a control unit 11 that controls the operation of each unit in the device according to the received operation.
- the control unit 11 is configured using an arithmetic processing device such as a CPU (Central Processing Unit) or an MPU (Micro Processing Unit).
- the operation unit 12 includes one or a plurality of switches arranged on the front peripheral edge or the side surface of the casing of the display device 1.
- the operation unit 12 receives a user operation using these switches and notifies the control unit 11 of the received operation content.
- the user can use the operation unit 12 to change the brightness setting or the color balance setting related to image display, and the control unit 11 controls the operation of each unit in the apparatus according to the settings received by the operation unit 12.
- the image input unit 16 has a connection terminal for connecting an external device, and an external device such as the PC 5 is connected via a video signal cable.
- image data compressed by a compression method such as MPEG or JPEG is input from the PC 5 to the display device 1 as an input image.
- the image input unit 16 gives an input image from the PC 5 to the image expansion unit 17.
- the image expansion unit 17 expands the input image from the image input unit 16 by a method corresponding to each compression method, and gives the image to the image processing unit 20.
- the image processing unit 20 can perform various image processes on the input image given from the image expansion unit 17. In the present embodiment, the image processing unit 20 can perform image processing to remove (or reduce) stepped noise such as block noise included in the input image. Details of image processing for noise removal performed by the image processing unit 20 will be described later.
- the image processing unit 20 gives the image subjected to the image processing to the panel driving unit 18.
- the panel driving unit 18 generates and outputs a driving signal for driving each pixel constituting the liquid crystal panel 13 according to the input image given from the image processing unit 20.
- the liquid crystal panel 13 is a display device that displays an image by arranging a plurality of pixels in a matrix and changing the transmittance of each pixel according to a drive signal from the panel drive unit 18.
- the backlight 14 is configured using a light source such as an LED (Light Emitting Diode) or a CCFL (Cold Cathode Fluorescent Lamp), and irradiates the back surface of the liquid crystal panel 13 with light.
- the backlight 14 emits light by a driving voltage or a driving current given from the light driving unit 15.
- the light driving unit 15 generates a driving voltage or a driving current according to a control signal from the control unit 11 and outputs the driving voltage or driving current to the backlight 14.
- the control unit 11 determines the drive amount of the backlight 14 according to the brightness setting received by the operation unit 12, and outputs a control signal corresponding to the determined drive amount to the light drive unit 15.
- FIG. 2 is a block diagram illustrating a configuration example of the image processing unit 20, and illustrates a block relating to noise removal processing from an input image.
- the image processing unit 20 includes a specific region extracting unit 21 that extracts a region of a specific size from the input image.
- the specific area extraction unit 21 performs a process of extracting, for one pixel in the input image (hereinafter referred to as the target pixel), a specific area of, for example, 5 × 5 pixels centered on that pixel.
- the specific region extracted by the specific region extraction unit 21 is given to the smoothing determination unit 22, the first smoothing unit 23, the second smoothing unit 24, the application determination unit 25, and the pixel value selection unit 26.
- the smoothing determination unit 22 examines the pixel values of the plurality of pixels included in the specific region extracted by the specific region extraction unit 21, and their changes, to determine whether to smooth the pixel of interest included in the specific region. The smoothing determination unit 22 notifies the pixel value selection unit 26 of whether or not the smoothing process is to be performed. When a noise boundary exists in the specific area, the smoothing determination unit 22 also determines the direction and position of the noise boundary in the specific area, and gives the determined direction and position to the first smoothing unit 23.
- the first smoothing unit 23 stores a plurality of smoothing filters, and smoothes the image by selecting one of the smoothing filters and performing a filtering process on a specific region.
- the first smoothing unit 23 selects one smoothing filter according to the direction and position of the noise boundary given from the smoothing determination unit 22.
- the first smoothing unit 23 supplies the pixel value selection unit 26 with the result of smoothing the specific area by the smoothing filter, that is, the pixel value of the pixel of interest in the smoothed specific area.
- the second smoothing unit 24 performs a filtering process using an edge-preserving smoothing filter on the specific region.
- an edge-preserving smoothing filter smooths pixel values while preserving high-frequency components (such as edges) included in the specific region, and can therefore perform filtering that does not cause significant deterioration in image quality.
- the second smoothing unit 24 stores a plurality of edge-preserving smoothing filters corresponding to different edge directions, determines the direction of the edge included in the specific region, and performs smoothing using the filter corresponding to that edge direction.
- the second smoothing unit 24 gives the pixel value selection unit 26 the result of smoothing the specific region by the edge-preserving smoothing filter, that is, the pixel value of the target pixel in the smoothed specific region.
- the smoothing result by the second smoothing unit 24 is also given to the application determining unit 25.
- the application determination unit 25 determines whether to apply the smoothing result of the second smoothing unit 24 based on features of the pixel values of the specific region and/or the smoothing result of the second smoothing unit 24.
- the application determination unit 25 makes this determination based on, for example, the amount of edge component in the specific region, the change pattern of the pixel values in the specific region, and/or the difference between the pixel values before and after smoothing by the second smoothing unit 24.
- the application determination unit 25 gives the determination result to the pixel value selection unit 26.
- for the target pixel of the input image, three pixel values are input to the pixel value selection unit 26: the pixel value smoothed by the first smoothing unit 23, the pixel value smoothed by the second smoothing unit 24, and the original pixel value that has not been smoothed.
- the pixel value selection unit 26 selects and outputs one of the three input pixel values according to the smoothing determination of the smoothing determination unit 22 and the determination result of the application determination unit 25.
- when the smoothing determination unit 22 determines that the smoothing process is to be performed, the pixel value selection unit 26 selects and outputs the pixel value smoothed by the first smoothing unit 23. When the smoothing determination unit 22 determines that the smoothing process is not to be performed and the application determination unit 25 determines to apply the smoothing result, the pixel value selection unit 26 selects and outputs the pixel value smoothed by the second smoothing unit 24. Otherwise, the pixel value selection unit 26 selects and outputs the original pixel value that has not been smoothed.
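The selection logic of the pixel value selection unit 26 reduces to a three-way choice; a sketch (function and flag names assumed):

```python
def select_output_pixel(first_smoothed, second_smoothed, original,
                        do_first_smoothing, apply_second_result):
    """Choose one of the three candidate values for the output pixel,
    mirroring the pixel value selection unit 26."""
    if do_first_smoothing:          # smoothing determination said "smooth"
        return first_smoothed
    if apply_second_result:         # application determination said "apply"
        return second_smoothed
    return original                 # keep the unsmoothed input value
```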
- the image processing unit 20 generates an output image by performing the processing of the specific area extraction unit 21 through the pixel value selection unit 26 for all the pixels of the input image, and outputs the generated image to the panel drive unit 18.
- the image processing unit 20 shown in the block diagram of FIG. 2 is configured so that the first smoothing unit 23, the second smoothing unit 24, and the like perform their processing in parallel and one of the processing results is finally selected, but the configuration is not limited to this.
- the image processing unit 20 may sequentially determine whether or not to perform the smoothing process as described below, and may perform one of the smoothing processes when the condition is satisfied.
- FIG. 3 is a flowchart showing an example of the procedure of noise removal processing by the image processing unit 20.
- the image processing unit 20 selects one target pixel from the input image from the image expansion unit 17 (step S1), and extracts a specific area having a predetermined size including the selected target pixel (step S2).
- the image processing unit 20 performs a smoothing determination process on the extracted specific area (step S3), and checks whether the smoothing determination process has determined that the smoothing process is to be performed on the target pixel in the specific area (step S4).
- when it is determined that the smoothing process is to be performed (S4: YES), the image processing unit 20 performs filtering on the specific region using a smoothing filter (step S5). At this time, the image processing unit 20 selects one smoothing filter from a plurality of prestored smoothing filters based on the direction and position of the noise boundary determined in the smoothing determination process. The image processing unit 20 outputs, as the processing result, the pixel value of the target pixel in the specific region after the filtering with the smoothing filter (step S6).
- when it is determined that the smoothing process is not to be performed (S4: NO), the image processing unit 20 performs filtering on the specific region using an edge-preserving smoothing filter (step S7) and performs an application determination process on the filtering result (step S8). At this time, the image processing unit 20 selects one of a plurality of prestored edge-preserving smoothing filters based on the direction of the edge included in the specific region. The image processing unit 20 performs the application determination process based on one or more conditions such as the amount of edge components in the specific region, the change pattern of the pixel values in the specific region, and the smoothing result of step S7.
- the image processing unit 20 then determines whether or not to apply the result of the filtering by the edge-preserving smoothing filter (step S9). When it determines that the result is to be applied (S9: YES), the image processing unit 20 outputs the filtered pixel value of the target pixel in the specific region as the processing result; otherwise it outputs the original, unsmoothed pixel value of the target pixel.
- the image processing unit 20 can generate an output image by repeatedly performing the processes in steps S1 to S11 for all the pixels of the input image.
- each pixel of the generated output image is either a pixel obtained by filtering the corresponding input pixel with a smoothing filter, a pixel obtained by filtering with an edge-preserving smoothing filter, or a pixel that has not been smoothed (identical to the input image); the output image is thus an image in which staircase noise such as block noise has been removed or reduced from the input image.
- the smoothing determination process is a process performed by the smoothing determination unit 22 in FIG. 2, and is a process performed in step S3 in FIG.
- FIG. 4 is a flowchart illustrating an outline of the procedure of the smoothing determination process performed by the image processing unit 20.
- the image processing unit 20 performs a horizontal (lateral) direction determination process (step S21) and a vertical (longitudinal) direction determination process (step S22) on the specific area extracted from the input image, and determines whether the pixel of interest included in the specific area is to be smoothed (step S23). If the target pixel is determined not to be a target of the smoothing process (S23: NO), the image processing unit 20 ends the smoothing determination process.
- when it is determined that the target pixel is a target of the smoothing process (S23: YES), the image processing unit 20 performs a direction determination process for the noise boundary existing in the specific area (step S24). From the result of the direction determination process, the image processing unit 20 determines whether a noise boundary extending in either the horizontal direction or the vertical direction exists in the specific region (step S25).
- the image processing unit 20 also determines whether a noise boundary extending in another direction exists in the specific region (step S26). A noise boundary extending in another direction is, for example, one extending in a specific direction such as 45° or 135°, or one extending in both the horizontal and vertical directions within the specific region, such as an L-shape, T-shape, or cross shape. If it is determined that a noise boundary in another direction exists (S26: YES), the image processing unit 20 ends the smoothing determination process.
- otherwise, the image processing unit 20 determines whether or not the specific area has a predetermined size (step S27). If the specific area is not the predetermined size (S27: NO), the image processing unit 20 enlarges the specific area (step S28), returns to step S24, and performs the noise boundary direction determination process again. If the specific area already has the predetermined size (S27: YES), the image processing unit 20 ends the smoothing determination process.
- when a noise boundary extending in the horizontal or vertical direction exists, the image processing unit 20 performs the noise boundary position determination process in the specific area (step S29) and ends the smoothing determination process.
- FIG. 5 is a schematic diagram for explaining the horizontal direction determination processing performed in step S21.
- the image processing unit 20 selects one target pixel from the input image, and extracts a specific area 100 having a predetermined size (5 ⁇ 5 pixels in the illustrated example) including the target pixel. In the drawing, the pixel of interest in the specific area 100 is shown with hatching.
- the image processing unit 20 first calculates a primary differential value of a pixel value between pixels adjacent in the horizontal direction in the specific region 100 (that is, a difference in pixel value between adjacent pixels). As a result, the image processing unit 20 obtains a horizontal primary differential value matrix 101 composed of 4 ⁇ 5 primary differential values.
- the image processing unit 20 binarizes the horizontal primary differential value matrix 101 by comparing the absolute value of each primary differential value of the horizontal primary differential value matrix 101 with a predetermined threshold value (for example, 1 when the absolute value of the primary differential value ≥ the threshold value, and 0 when the absolute value of the primary differential value < the threshold value).
- the image processing unit 20 performs an OR (logical sum) operation on the four binarized primary differential values arranged in the horizontal direction, and obtains a horizontal primary differential value OR sequence 102 composed of five calculation results.
- the image processing unit 20 calculates a differential value (that is, a secondary differential value) of primary differential values adjacent in the horizontal direction in the horizontal primary differential value matrix 101. As a result, the image processing unit 20 obtains a horizontal secondary differential value matrix 103 composed of 3 × 5 secondary differential values. The image processing unit 20 binarizes the horizontal secondary differential value matrix 103 by comparing the absolute value of each secondary differential value of the horizontal secondary differential value matrix 103 with a predetermined threshold (for example, 1 when the absolute value of the secondary differential value ≥ the threshold, and 0 when the absolute value of the secondary differential value < the threshold).
- the image processing unit 20 performs an OR operation on the three binarized secondary differential values arranged in the horizontal direction, and obtains a horizontal secondary differential value OR sequence 104 composed of five calculation results.
- the threshold for binarizing the primary differential value and the threshold for binarizing the secondary differential value may be the same value or different values, and can be determined appropriately at the design stage of the display device 1 or the like.
- the image processing unit 20 performs an OR operation of the two values at corresponding positions of the horizontal primary differential value OR sequence 102 and the horizontal secondary differential value OR sequence 104, and obtains a horizontal OR column 105 constituted by five calculation results.
- the image processing unit 20 performs an OR operation on the upper three values (that is, the first to third values) of the horizontal OR column 105 to obtain the horizontal upper OR value 106, and performs an OR operation on the lower three values (that is, the third to fifth values) to obtain the horizontal lower OR value 107. Further, the image processing unit 20 performs an AND operation on the horizontal upper OR value 106 and the horizontal lower OR value 107 to obtain a horizontal direction determination result 108.
- the horizontal direction determination result 108 obtained in this way is 1-bit information whose value is “0” or “1”.
- the horizontal direction determination result 108 indicates whether or not the target pixel in the specific area 100 is included in a block having only low frequency components (that is, a block in which the change in the pixel value is constant and small in the horizontal direction). When the value of the horizontal direction determination result 108 is “0”, it indicates that the target pixel may be included in a low frequency component block. Conversely, when the value of the horizontal direction determination result 108 is “1”, it indicates that the pixel of interest is not included in a low frequency component block (that is, it cannot be block noise).
- FIG. 6 is a flowchart illustrating the procedure of the horizontal direction determination process performed by the image processing unit 20.
- the image processing unit 20 calculates a primary differential value between pixels adjacent in the horizontal direction of the specific region 100 (step S31).
- the image processing unit 20 binarizes the primary differential values by comparing the absolute value of each calculated primary differential value with a threshold (step S32), and performs a horizontal OR operation on the binarized primary differential values (step S33).
- the image processing unit 20 calculates a secondary differential value in the horizontal direction of the specific region 100 by further calculating a horizontal differential value with respect to the calculation result in step S31 (step S34).
- the image processing unit 20 binarizes the secondary differential values by comparing the absolute value of each calculated secondary differential value with a threshold (step S35), and performs a horizontal OR operation on the binarized secondary differential values (step S36).
- the image processing unit 20 further ORs the calculation result of step S33 and the calculation result of step S36 at the corresponding position (step S37).
- as a result of step S37, a plurality of OR operation values are obtained, and the image processing unit 20 performs an OR operation on the upper half of these values (step S38) and an OR operation on the lower half of these values (step S39). Further, the image processing unit 20 performs an AND operation on the calculation result of step S38 and the calculation result of step S39 (step S40), and ends the horizontal direction determination process.
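- as a minimal sketch of steps S31 to S40, the horizontal direction determination can be written as follows in Python; the region size, the threshold value, and all names here are illustrative assumptions, not the patent's implementation.

```python
def horizontal_direction_determination(region, threshold=8):
    """Sketch of steps S31-S40 for a square pixel region (list of rows)."""
    n = len(region)
    # S31: primary differential values between horizontally adjacent pixels
    d1 = [[row[x + 1] - row[x] for x in range(n - 1)] for row in region]
    # S32-S33: binarize by threshold, then OR the values of each row
    or1 = [1 if any(abs(v) >= threshold for v in row) else 0 for row in d1]
    # S34: secondary differential values (differences of the primary values)
    d2 = [[row[x + 1] - row[x] for x in range(n - 2)] for row in d1]
    # S35-S36: binarize and OR the values of each row
    or2 = [1 if any(abs(v) >= threshold for v in row) else 0 for row in d2]
    # S37: OR at corresponding positions -> the horizontal OR column (105)
    or_col = [a | b for a, b in zip(or1, or2)]
    # S38-S40: OR of the upper half and of the lower half (sharing the
    # middle value), then AND of the two results
    upper = max(or_col[: n // 2 + 1])
    lower = max(or_col[n // 2:])
    return upper & lower  # 1: cannot be block noise, 0: may be block noise
```

- for a flat 5 × 5 region this returns 0 (the pixel may belong to a low frequency block), while a region with a strong horizontal change in every row returns 1.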
- FIG. 7 is a schematic diagram for explaining the vertical direction determination processing performed in step S22.
- the vertical direction determination process is substantially the same as the horizontal direction determination process except that the calculation direction is different.
- the image processing unit 20 first calculates a primary differential value of pixel values between pixels adjacent in the vertical direction in the specific region 100. As a result, the image processing unit 20 obtains a vertical primary differential value matrix 111 composed of 5 × 4 primary differential values.
- the image processing unit 20 binarizes the vertical primary differential value matrix 111 by comparing the absolute value of each primary differential value of the vertical primary differential value matrix 111 with a predetermined threshold value.
- the image processing unit 20 performs an OR operation on the four binarized primary differential values arranged in the vertical direction, and obtains a vertical primary differential value OR row 112 including five calculation results.
- the image processing unit 20 calculates a differential value (that is, a secondary differential value) of the primary differential values adjacent in the vertical direction in the vertical primary differential value matrix 111. As a result, the image processing unit 20 obtains a vertical secondary differential value matrix 113 composed of 5 × 3 secondary differential values. The image processing unit 20 binarizes the vertical secondary differential value matrix 113 by comparing the absolute value of each secondary differential value of the vertical secondary differential value matrix 113 with a predetermined threshold value. Next, the image processing unit 20 performs an OR operation on the three binarized secondary differential values arranged in the vertical direction, and obtains a vertical secondary differential value OR row 114 composed of five calculation results.
- the image processing unit 20 performs an OR operation on the two values at the corresponding positions of the vertical primary differential value OR row 112 and the vertical secondary differential value OR row 114, and obtains a vertical OR row 115 constituted by five calculation results.
- the image processing unit 20 performs an OR operation on the upper three values of the vertical OR row 115 to obtain the vertical upper OR value 116 and obtains a vertical lower OR value 117 by performing an OR operation on the lower three values. Further, the image processing unit 20 performs an AND operation on the upper vertical OR value 116 and the lower vertical OR value 117 to obtain a vertical direction determination result 118.
- the vertical direction determination result 118 obtained in this way is 1-bit information whose value is “0” or “1”.
- the vertical direction determination result 118 indicates whether or not the target pixel in the specific area 100 is included in a block having only low frequency components (that is, a block in which the change in the pixel value is constant and small in the vertical direction). When the value of the vertical direction determination result 118 is “0”, it indicates that the target pixel may be included in a low frequency component block. If the value of the horizontal direction determination result 108 is “0” and the value of the vertical direction determination result 118 is “0”, it can be determined that the pixel of interest is included in a low frequency component block. When the value of the vertical direction determination result 118 is “1”, it indicates that the pixel of interest is not included in a low frequency component block.
- based on the horizontal direction determination result 108 obtained by the horizontal direction determination process and the vertical direction determination result 118 obtained by the vertical direction determination process, the image processing unit 20 determines whether or not the target pixel in the specific region 100 is to be subjected to the smoothing process. Specifically, when the value of the horizontal direction determination result 108 is “0” and the value of the vertical direction determination result 118 is “0”, the image processing unit 20 sets the target pixel as a target of the smoothing process and performs the subsequent processing. When the value of either the horizontal direction determination result 108 or the vertical direction determination result 118 is “1”, the image processing unit 20 excludes the target pixel from the targets of the smoothing process and ends the smoothing determination process.
- FIG. 8 is a schematic diagram for explaining the noise boundary direction determination processing.
- the image processing unit 20 determines whether the direction of the noise boundary included in the specific region 100 corresponds to the horizontal pattern in FIG. 8A, the vertical pattern in FIG. 8B, the internal pattern in FIG. 8C, or the other pattern in FIG. 8D. The horizontal pattern in FIG. 8A is a pattern in which a noise boundary extends in the horizontal direction (lateral direction) of the specific region 100.
- the vertical pattern in FIG. 8B is a pattern in which a noise boundary extends in the vertical direction (longitudinal direction) of the specific region 100.
- the internal pattern in FIG. 8C is a pattern when the noise boundary is not included in the specific area 100 and the specific area 100 is an internal area of block noise.
- the other patterns in FIG. 8D are all patterns other than those in FIGS. 8A to 8C, and the illustrated pattern is an example.
- the image processing unit 20 performs noise boundary direction determination processing using data generated in the process of the above-described horizontal direction determination processing and vertical direction determination processing. Specifically, the image processing unit 20 uses the horizontal OR column 105 generated by the horizontal direction determination process and the vertical OR row 115 generated by the vertical direction determination process. The image processing unit 20 determines whether or not all of the five values included in the horizontal OR row 105 (however, when the specific area 100 is enlarged, more than five values are included) are “0”. Similarly, the image processing unit 20 determines whether or not all five values included in the vertical OR row 115 are “0”. When all the values of the horizontal OR row 105 are “0”, the pixel value of the specific area 100 changes smoothly in the horizontal direction. When all the values of the vertical OR row 115 are “0”, the pixel value of the specific area 100 smoothly changes in the vertical direction.
- when all the values of the horizontal OR column 105 are “0” and the values of the vertical OR row 115 are not all “0”, the image processing unit 20 determines that the noise boundary of the specific area 100 is the horizontal pattern in FIG. 8A.
- when all the values of the vertical OR row 115 are “0” and the values of the horizontal OR column 105 are not all “0”, the image processing unit 20 determines that the noise boundary of the specific region 100 is the vertical pattern of FIG. 8B.
- when all the values of both the horizontal OR column 105 and the vertical OR row 115 are “0”, the image processing unit 20 determines that the specific region 100 is the internal pattern of FIG. 8C (that is, a region surrounded by, sandwiched between, or adjacent to noise boundaries in the input image). Further, when the values of the horizontal OR column 105 are not all “0” and the values of the vertical OR row 115 are not all “0”, the image processing unit 20 determines that the noise boundary of the specific region 100 is the other pattern in FIG. 8D.
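- this classification can be sketched as follows, under the assumption that smoothness in one direction only indicates a boundary running along that direction; the function and label names are illustrative, not taken from the patent.

```python
def classify_boundary_pattern(horizontal_or_column, vertical_or_row):
    """Classify the noise-boundary pattern of the region (FIG. 8A-8D).

    All-zero horizontal OR column: pixel values change smoothly horizontally.
    All-zero vertical OR row: pixel values change smoothly vertically.
    """
    h_smooth = not any(horizontal_or_column)
    v_smooth = not any(vertical_or_row)
    if h_smooth and v_smooth:
        return "internal"    # FIG. 8C: no boundary inside the region
    if h_smooth:
        return "horizontal"  # FIG. 8A: a boundary extends horizontally
    if v_smooth:
        return "vertical"    # FIG. 8B: a boundary extends vertically
    return "other"           # FIG. 8D
```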
- when the image processing unit 20 determines that the specific area 100 is the internal pattern of FIG. 8C in the noise boundary direction determination process, the image processing unit 20 enlarges the specific area 100 (for example, 5 × 5 pixels → 7 × 7 pixels → 9 × 9 pixels → …).
- the image processing unit 20 calculates the horizontal OR column 105 and the vertical OR row 115 for the enlarged specific region 100 by the same method as that shown in FIGS. 5 and 7. Based on the calculated horizontal OR column 105 and vertical OR row 115, the image processing unit 20 determines which of the patterns shown in FIG. 8 the noise boundary corresponds to.
- when the image processing unit 20 determines that the noise boundary of the enlarged specific area 100 is the horizontal pattern in FIG. 8A, the vertical pattern in FIG. 8B, or the other pattern in FIG. 8D, it proceeds to the subsequent processing. When the internal pattern of FIG. 8C is determined again, the process is repeated by enlarging the specific area 100 until the size of the specific area 100 reaches a predetermined size (for example, 9 × 9 pixels). Once the specific area 100 has reached the predetermined size, the image processing unit 20 ends the smoothing determination process without further enlarging the specific area 100.
- FIG. 9 is a schematic diagram for explaining the noise boundary position determination process, and shows an example in which the noise boundary of the specific region 100 is a vertical pattern.
- the position of the noise boundary of the vertical pattern existing in the specific area 100 of 5 ⁇ 5 pixels is one of the four positions shown in FIGS. 9A to 9D. Since the position determination process when the noise boundary is a horizontal pattern is substantially the same as the case of the vertical pattern, the description is omitted.
- the image processing unit 20 performs the noise boundary position determination process using data generated in the course of the horizontal direction determination process described above. Specifically, the image processing unit 20 uses a binarized horizontal secondary differential value matrix 103a obtained by binarizing the horizontal secondary differential value matrix 103 generated in the horizontal direction determination process by comparison with a threshold value. For example, as shown in FIG. 9A, when the noise boundary is located at the left end of the specific region 100, the 3 × 5 binarized horizontal secondary differential value matrix 103a has a pattern in which the first column contains “1” and the second and third columns are “0”. When the noise boundary is located at the center left of the specific region 100 as shown in FIG. 9B, the matrix 103a has a pattern in which the first and second columns contain “1” and the third column is “0”. When the noise boundary is located at the center right as shown in FIG. 9C, the matrix 103a has a pattern in which the second and third columns contain “1” and the first column is “0”. When the noise boundary is located at the right end as shown in FIG. 9D, the matrix 103a has a pattern in which the third column contains “1” and the first and second columns are “0”.
- the image processing unit 20 can determine the position of the vertical-pattern noise boundary included in the specific region 100 by examining the arrangement pattern of “0” and “1” included in the binarized horizontal secondary differential value matrix 103a generated by the horizontal direction determination process.
- the position of the noise boundary of a horizontal pattern is determined in the same manner: the image processing unit 20 binarizes the vertical secondary differential value matrix 113 generated by the vertical direction determination process by comparison with a threshold value, and can determine the position of the horizontal-pattern noise boundary included in the specific area 100 by examining the arrangement pattern of “0” and “1” included in the binarized vertical secondary differential value matrix.
- the image processing unit 20 performs a process of smoothing the pixel value of the specific region 100.
- the image processing unit 20 stores a plurality of smoothing filters for performing the smoothing process, selects one smoothing filter based on the direction determined by the noise boundary direction determination process, the position determined by the noise boundary position determination process, and the like, and smoothes the pixel value of the pixel of interest in the specific area 100 using the selected smoothing filter.
- FIGS. 10 and 11 are schematic diagrams illustrating an example of a smoothing filter.
- the size of the smoothing filter for the specific region 100 of 5 × 5 pixels is 5 × 5. In the filtering process, each pixel value of the specific region 100 is multiplied by the value at the corresponding position of the filter and the products are summed (for a smoothing filter, the sum is further divided by the sum of the filter values to obtain a weighted average). This calculation method is similarly used in filtering processes using the other filters described below (edge-preserving smoothing filter, Sobel filter, Laplacian filter, etc.).
- the example shown in FIG. 10A is a smoothing filter used when the noise boundary of the specific region 100 is a vertical pattern.
- in the smoothing filter of FIG. 10A, the five values in the third row are set to 1 and the other values are set to 0. With the smoothing filter of FIG. 10A, the average value of the target pixel of the specific region 100 and the four pixels located on the left and right sides of the target pixel can be calculated, and this average value is output as the smoothing result of the target pixel.
- the smoothing filter in FIG. 10A performs smoothing by calculating an average value of pixel values in a direction that intersects the noise boundary of the vertical pattern.
- the example shown in FIG. 10B is a smoothing filter used when the noise boundary of the specific region 100 is a vertical pattern as in FIG. 10A.
- the smoothing filter in FIG. 10B further takes the position of the noise boundary into consideration, and is used when the position of the noise boundary is the center left shown in FIG. 9B. In this filter, the center value is set to 3, the value on the left side is set to 2, and the other values are set to 0.
- the example shown in FIG. 10C is a smoothing filter used when the noise boundary of the specific region 100 is a horizontal pattern.
- in the smoothing filter of FIG. 10C, the five values in the third column are set to 1 and the other values are set to 0. With the smoothing filter of FIG. 10C, the average value of the target pixel of the specific region 100 and the four pixels located on the upper and lower sides of the target pixel can be calculated, and this average value is output as the smoothing result of the target pixel.
- the smoothing filter in FIG. 10C performs smoothing by calculating an average value of pixel values in a direction intersecting the noise boundary of the horizontal pattern.
- the example shown in FIG. 11D is a smoothing filter used when the noise boundary of the specific region 100 is another pattern.
- with this filter, the average value of the pixel values of all the pixels in the specific region 100 can be calculated, and the image processing unit 20 outputs this average value as the smoothing result of the target pixel.
- the example shown in FIG. 11E is a smoothing filter that is used when no noise boundary exists in the specific region 100 of 5 × 5 pixels but a noise boundary exists when the region is expanded to 7 × 7 pixels.
- the smoothing filter of FIG. 11E is an enlarged version of the smoothing filter of FIG. 11D, and is used when the noise boundary is the other pattern.
- similarly, the smoothing filters of FIGS. 10A to 10C can be expanded to a size of 7 × 7.
- the image processing unit 20 selects a smoothing filter according to the direction and position of the noise boundary, and smoothes and outputs the pixel value of the target pixel in the specific region 100.
- the smoothing filter shown in FIGS. 10 and 11 is an example, and the present invention is not limited to this.
- the image processing unit 20 may also be configured to determine only whether or not a noise boundary exists, without determining its direction and position, and to perform the filtering process using the smoothing filter illustrated in FIG. 11 on a specific region 100 determined to have a noise boundary.
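- the weighted-average filtering described above can be sketched as follows; the filter constant shown corresponds to the FIG. 10A description (ones in the third row), and the function name is an illustrative assumption.

```python
def apply_smoothing_filter(region, filt):
    """Weighted average of the region's pixels using the filter values."""
    rows, cols = len(filt), len(filt[0])
    # Multiply each pixel by the value at the corresponding filter position,
    # sum the products, and normalize by the sum of the filter values.
    total = sum(region[y][x] * filt[y][x] for y in range(rows) for x in range(cols))
    weight = sum(sum(row) for row in filt)
    return total / weight

# FIG. 10A-style filter: averages along the center row,
# i.e. across a vertical-pattern noise boundary.
FILTER_10A = [[0] * 5, [0] * 5, [1] * 5, [0] * 5, [0] * 5]
```

- applying FILTER_10A to a 5 × 5 region yields the average of the target pixel and the four pixels on its left and right.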
- <3. Edge-preserving smoothing process>
- <3-1. Edge-preserving smoothing filter> When it is determined in the above-described smoothing determination process that the smoothing process is not to be performed, the image processing unit 20 performs a process of smoothing the pixel value of the target pixel in the specific region 100 using an edge-preserving smoothing filter.
- the image processing unit 20 stores a plurality of edge-preserving smoothing filters, determines the direction of the edge component included in the specific region 100, selects one filter according to the determined direction of the edge component, The pixel value of the pixel of interest in the specific area 100 is smoothed using the selected filter.
- FIGS. 12 to 14 are schematic diagrams showing an example of an edge-preserving smoothing filter.
- the direction of the edge component of the specific region 100 is determined to be one of eight directions, namely horizontal (0°), diagonal 45°, vertical (90°), diagonal 135°, diagonal 22.5°, diagonal 67.5°, diagonal 112.5°, and diagonal 157.5°, or to have no direction. For this reason, the image processing unit 20 stores eight edge-preserving smoothing filters corresponding to the eight directions and an isotropic smoothing filter, which is not edge-preserving, corresponding to the no-direction determination.
- the filter shown in FIG. 12A is a smoothing filter that preserves edge components in the horizontal direction.
- when the image processing unit 20 determines that the direction of the edge component of the specific region 100 is horizontal, it performs the smoothing process using the filter of FIG. 12A.
- when the image processing unit 20 performs the filtering process using the filter of FIG. 12A, a weighted average of the pixel value of the target pixel in the specific region 100 and the pixel values of the four pixels located on the left and right sides along the edge component direction with respect to the target pixel is calculated.
- the pixel values are weighted so that pixels closer to the target pixel are weighted more heavily.
- the image processing unit 20 outputs the calculated average value as a smoothing result of the target pixel.
- the filter shown in FIG. 12B is a smoothing filter that preserves an edge component in the 45° oblique direction; with this filter, a weighted average of the pixel value of the target pixel and the pixel values of the four pixels located in the 45° oblique direction with respect to the target pixel can be calculated.
- the filter illustrated in FIG. 12C is a smoothing filter that preserves edge components in the vertical (90°) direction.
- the filter shown in FIG. 12D is a smoothing filter that preserves edge components in the oblique 135° direction.
- the filter shown in FIG. 13E is a smoothing filter that preserves edge components in the diagonal 22.5° direction; with this filter, a weighted average of the pixel value of the target pixel and the pixel values of the six pixels positioned in the diagonal 22.5° direction can be calculated.
- the filter illustrated in FIG. 13F is a smoothing filter that preserves edge components in the diagonal 67.5° direction.
- the filter illustrated in FIG. 13G is a smoothing filter that preserves edge components in the oblique 112.5° direction.
- the filter shown in FIG. 13H is a smoothing filter that preserves edge components in the oblique 157.5° direction.
- the filter shown in FIG. 14 is a smoothing filter used when it is determined that the direction of the edge component of the specific region 100 does not correspond to the above eight directions.
- the smoothing filter can calculate a weighted average value of the pixel values of all the pixels in the specific region 100 by performing weighting according to the distance from the target pixel.
- a smoothing filter X for preserving edges in the direction of an arbitrary angle x (0° ≤ x < 180°) can be generated by linearly blending the two stored filters whose angles are adjacent to x, using an expression of the form X = (1 − α)P + αQ (where P and Q are the adjacent filters; for example, X = (1 − α)A + αE when 0° ≤ x ≤ 22.5°).
- here, the edge-preserving smoothing filter in the 0° direction shown in FIG. 12 is denoted by A, the edge-preserving smoothing filter in the 45° direction by B, the edge-preserving smoothing filter in the 90° direction by C, and the edge-preserving smoothing filter in the 135° direction by D.
- the edge-preserving smoothing filter in the 22.5° direction shown in FIG. 13 is denoted by E, the edge-preserving smoothing filter in the 67.5° direction by F, the edge-preserving smoothing filter in the 112.5° direction by G, and the edge-preserving smoothing filter in the 157.5° direction by H.
- α is a coefficient determined according to the angle x, and 0 ≤ α ≤ 1.
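- a minimal sketch of such element-wise blending follows, assuming simple linear interpolation between two adjacent filters with coefficient α (the patent's exact blending equations are not reproduced here); the function name is illustrative.

```python
def blend_filters(filter_p, filter_q, alpha):
    """X = (1 - alpha) * P + alpha * Q, element by element, 0 <= alpha <= 1."""
    return [[(1 - alpha) * p + alpha * q for p, q in zip(row_p, row_q)]
            for row_p, row_q in zip(filter_p, filter_q)]
```

- with alpha = 0 the result is P itself; with alpha = 0.5 each value is the average of the corresponding values of P and Q.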
- in order to select one filter from the plurality of edge-preserving smoothing filters, the image processing unit 20 needs to determine the direction of the edge component in the specific region 100.
- the image processing unit 20 can determine the direction of the edge component by performing a filtering process using a Sobel filter.
- FIG. 15 is a schematic diagram illustrating an example of a Sobel filter for direction determination.
- in order to determine whether the direction of the edge component is one of the four directions horizontal (0°), diagonal 45°, vertical (90°), and diagonal 135°, the image processing unit 20 uses the four Sobel filters shown in FIGS. 15A to 15D, each corresponding to one of these directions.
- the illustrated Sobel filter is a 3 × 3 matrix for determining the strength of the edge component in the direction orthogonal to the arrangement direction of the elements whose values are set to zero.
- the image processing unit 20 calculates the strength of the edge component by performing a filtering process using the illustrated Sobel filters on the 3 × 3 region centered on the pixel of interest in the specific region 100 (the closer the absolute value of the calculated value is to 0, the stronger the edge component in that direction).
- as the edge-preserving smoothing filters, four filters corresponding to 0°, 45°, 90°, and 135° and four filters corresponding to 22.5°, 67.5°, 112.5°, and 157.5° are used.
- accordingly, the image processing unit 20 needs to acquire the strengths of the edge components for 22.5°, 67.5°, 112.5°, and 157.5°. Therefore, the image processing unit 20 calculates or estimates the strength of the edge components related to 22.5°, 67.5°, 112.5°, and 157.5° using one of the two methods described below (or a combination of both).
- (a) Edge strength calculation process: in addition to the four basic Sobel filters corresponding to 0°, 45°, 90°, and 135°, the image processing unit 20 stores four extended Sobel filters corresponding to 22.5°, 67.5°, 112.5°, and 157.5°, and calculates the strength of the edge component using these eight Sobel filters.
- the extended Sobel filter X corresponding to an arbitrary angle x [rad] can be calculated by the following equations (1) to (4). Note that 0 ≤ x < π.
- X = (1 − α)A + αB (0 ≤ x ≤ π/4) … (1)
- X = (1 − α)B + αC (π/4 ≤ x ≤ π/2) … (2)
- X = (1 − α)C + αD (π/2 ≤ x ≤ 3π/4) … (3)
- X = (1 − α)D + αA (3π/4 ≤ x < π) … (4)
- here, the basic Sobel filter with angle 0 (0°) is denoted by A, the basic Sobel filter with angle π/4 (45°) by B, the basic Sobel filter with angle π/2 (90°) by C, and the basic Sobel filter with angle 3π/4 (135°) by D.
- α is a coefficient determined according to the angle x, and 0 ≤ α ≤ 1. For example, when x = π/8, α = 1/2, and when x = π/6, α = 2/3. That is, when 0 ≤ x ≤ π/4, the coefficient α can be determined according to the ratio of the difference between the angle x and the angle 0 of the Sobel filter A to the difference between the angle x and π/4.
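- this interpolation can be sketched as a single routine; the wrap-around from 135° back to 0° at x = π, the function name, and the conventional Sobel matrices used in the example are illustrative assumptions (the patent's FIG. 15 matrices are not reproduced here).

```python
import math

def extended_sobel(basic, x):
    """Interpolate a Sobel filter for angle x in [0, pi) from the four
    basic filters at 0, pi/4, pi/2 and 3*pi/4 (wrapping back to 0 at pi)."""
    step = math.pi / 4
    i = int(x // step)               # index of the lower adjacent basic filter
    alpha = (x - i * step) / step    # ratio within the interval, 0 <= alpha < 1
    p, q = basic[i % 4], basic[(i + 1) % 4]
    # Element-wise linear blend: X = (1 - alpha) * P + alpha * Q
    return [[(1 - alpha) * a + alpha * b for a, b in zip(row_p, row_q)]
            for row_p, row_q in zip(p, q)]
```

- at x = π/8 (22.5°) the coefficient is α = 1/2, so each generated value is the average of the corresponding 0° and 45° values.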
- alternatively, a 3 × 3 Sobel filter for calculating the intensity of the edge component for an arbitrary angle x [rad] can be represented by the following equations (5) to (8). Note that 0 ≤ x < π.
- comparing the filters, it can be seen that each value included in the 45° Sobel filter corresponds to the values of the 0° Sobel filter arranged rotated by 45°.
- the value of the 22.5 ° Sobel filter calculated by the above equation corresponds to the value obtained by rotating each value of the 0 ° Sobel filter by 22.5 °.
- Each value of the 22.5 ° Sobel filter corresponds to an average value of corresponding values of the 0 ° and 45 ° Sobel filters.
- as described above, the image processing unit 20 stores the four basic Sobel filters and the four extended Sobel filters generated in advance, and calculates the edge strength using these eight Sobel filters.
- the image processing unit 20 can thereby calculate the strength of the edge component in eight directions, and can select one of the edge-preserving smoothing filters shown in FIGS. 12 and 13 based on the calculated strengths.
- the image processing unit 20 may store the same number of Sobel filters in one-to-one correspondence with the angles of the stored edge-preserving smoothing filters. (In this case, since the image processing unit 20 can determine which smoothing filter to use based on the strength of the edge component calculated using each Sobel filter, the following (b) edge strength estimation process is not needed.)
- alternatively, the image processing unit 20 may store only the four Sobel filters shown in FIGS. 15A to 15D; in this case, the 22.5°, 67.5°, 112.5°, and 157.5° Sobel filters need not be stored.
- the intensity of the edge component for an angle whose Sobel filter is not stored is estimated by the image processing unit 20 in the following (b) edge intensity estimation process.
- (b) Edge intensity estimation process: the image processing unit 20 performs a process of estimating the strength of the edge component in a direction that cannot be calculated using the stored Sobel filters. The image processing unit 20 estimates the strength of edge components in other directions based on the strengths of the plurality of edge components calculated in the (a) edge strength calculation process.
- FIG. 16 is a graph showing an example of the correspondence between the value (absolute value) calculated by the Sobel filter and the angle.
- the horizontal axis of FIG. 16 represents the angle [°], and the vertical axis represents the absolute value of the value indicating the intensity of the edge component calculated by the Sobel filter.
- as illustrated, the absolute value of the value calculated using the Sobel filter changes with the angle, exhibiting a maximum value and a minimum value.
- the image processing unit 20 of the display device 1 according to the present embodiment estimates the strength of the edge component in the other directions by performing an interpolation process based on the edge component strengths in the four directions of 0°, 45°, 90°, and 135° calculated by the four Sobel filters shown in FIG. 15.
- FIG. 17 is a schematic diagram for explaining an interpolation process for intensity estimation.
- the points indicated by black circles in the figure are points indicating intensities calculated by the filtering process using the Sobel filters, and are denoted by A1 (X1, Y1), A2 (X2, Y2), A3 (X3, Y3), and A4 (X4, Y4).
- the point indicated by a hatched circle in the figure is the point C (Xi, Yi) indicating the intensity estimated based on A1 to A4.
- Point C is a point between points A2 and A3, and the ratio of the distance from coordinate X2 to Xi to the distance from coordinate Xi to X3 is assumed to be r : 1 − r (where 0 ≤ r ≤ 1).
- a point B13 indicated by a white circle in the figure is the point at coordinate X2 obtained by linear interpolation from points A1 and A3.
- the point B24 is the point at coordinate X3 obtained by linear interpolation from points A2 and A4.
- Point B23 is the point at coordinate Xi obtained by linear interpolation from points A2 and A3.
- let α be the distance between points A2 and B13 (the difference between the Y coordinates of the two points), β the distance between points A3 and B24, and γ the distance between points B23 and C.
- the point C to be calculated is obtained by adding a correction to point B23, which is linearly interpolated from the two neighboring points A2 and A3. The correction γ is the weighted average of the errors α and β of the points B13 and B24, each of which is interpolated using one of the more distant points A1 and A4; this weighted average γ is added to B23.
- the above equations (11) to (14) are applied to the strength of the edge component calculated by the Sobel filter.
- let the intensities (absolute values) of the edge components calculated by the 0 °, 45 °, 90 ° and 135 ° Sobel filters shown in FIG. 15 be a, b, c and d, respectively, and let e be the estimated intensity (absolute value) of the 22.5 ° edge component.
- the intensity of the edge component at 135 ° can be regarded as the intensity at ⁇ 45 °.
- based on the edge component intensities a to d calculated with the four Sobel filters shown in FIG. 15 and equations (21) to (24), the image processing unit 20 can estimate the intensities e to h of the edge components in directions that cannot be calculated with the Sobel filters.
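Equations (21) to (24) are not reproduced in this text. Assuming equally spaced angles and the weighted-error interpolation described above (C = B23 + γ with r = 1/2), the four estimates reduce to the closed form sketched below; the exact coefficients are therefore a reconstruction, not a quotation of the patent's equations.

```python
def estimate_intermediate(a, b, c, d):
    """Estimate the 22.5/67.5/112.5/157.5 degree edge strengths e..h
    from the measured strengths a (0 deg), b (45), c (90), d (135).
    Each estimate is the linear interpolation of its two neighbours
    plus the averaged interpolation errors alpha and beta; the angles
    wrap around, with 135 deg treated as -45 deg."""
    e = (3 * (a + b) - (c + d)) / 4   # 22.5 deg, between a and b
    f = (3 * (b + c) - (d + a)) / 4   # 67.5 deg, between b and c
    g = (3 * (c + d) - (a + b)) / 4   # 112.5 deg, between c and d
    h = (3 * (d + a) - (b + c)) / 4   # 157.5 deg, between d and a
    return e, f, g, h
```

For a region with equal responses in all four directions the four estimates are equal as well, as expected of an interpolation.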
- the image processing unit 20 calculates the intensities of edge components in eight directions using the four basic Sobel filters and the four extended Sobel filters (first method). Alternatively, the image processing unit 20 calculates the intensity of the edge component in each direction using the four Sobel filters of FIGS. 15A to 15D and estimates the intensities of the edge components in the other directions (second method). The image processing unit 20 compares the intensities of the plurality of edge components obtained by calculation or estimation, and determines the direction with the strongest edge (the direction whose calculated absolute value is the smallest) as the direction of the edge component of the specific region 100. The image processing unit 20 then reads the edge-preserving smoothing filter corresponding to the determined direction (see FIGS. 12 and 13), performs the filtering process on the specific region 100, and smooths the pixel value of the target pixel.
- FIG. 18 is a graph showing an example of a correspondence between a value (absolute value) calculated by a Sobel filter and an angle.
- the horizontal axis represents the angle [°]
- the vertical axis represents the absolute value of the value indicating the intensity of the edge component calculated by the Sobel filter.
- the example shown in FIG. 18 is the result of applying the Sobel filters to an image containing a 1-dot texture or a thin line 1 dot wide.
- the image processing unit 20 selects the maximum value and the minimum value from among the absolute values of the calculated edge component intensities and the absolute values of the edge component intensities estimated from them, and calculates the difference between the two.
- when the difference does not exceed a threshold value, the image processing unit 20 regards the specific region 100 as an image having no directionality, and performs a smoothing process using the isotropic smoothing filter illustrated in FIG. 14.
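The selection logic of the second method can then be sketched as follows; the dictionary-based interface and the names are illustrative assumptions, not the patent's own API.

```python
def select_smoothing_direction(strengths, flat_threshold):
    """strengths maps a direction in degrees to its |Sobel response|,
    calculated or estimated.  Returns the direction whose edge-preserving
    smoothing filter should be used, or None when the region is treated
    as having no directionality (isotropic filter)."""
    values = strengths.values()
    if max(values) - min(values) <= flat_threshold:
        return None
    # the strongest edge direction is the one whose |response| is
    # smallest: a Sobel kernel aligned along an edge responds weakly
    return min(strengths, key=strengths.get)
```

When all responses are nearly equal the region is judged flat and the isotropic filter is used; otherwise the direction with the smallest absolute response wins.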
- FIG. 19 is a flowchart showing a procedure of edge preserving type smoothing processing, which is processing performed in step S7 of FIG.
- the image processing unit 20 first reads out a plurality of Sobel filters stored in advance in a memory or the like (step S51).
- the image processing unit 20 performs a filtering process on the specific region 100 using each Sobel filter, and calculates the strength of the edge component in each direction (Step S52).
- the image processing unit 20 estimates the strengths of the edge components in the other directions based on the calculated strengths of the plurality of edge components (step S53).
- the image processing unit 20 calculates the difference between the maximum value and the minimum value among the absolute values of the intensities calculated in step S52 and the absolute values of the intensities estimated in step S53 (step S54), and determines whether or not the calculated difference exceeds a threshold value (step S55).
- when the difference exceeds the threshold value (S55: YES), the image processing unit 20 determines the direction of the edge component included in the specific region 100 by finding, among the intensities calculated in step S52 and the intensities estimated in step S53, the direction with the strongest edge (the one whose absolute value is minimum) (step S56). The image processing unit 20 then reads the edge-preserving smoothing filter corresponding to the determined direction of the edge component (step S57).
- when the difference does not exceed the threshold value (S55: NO), the image processing unit 20 determines that the edge component of the specific region 100 has no directionality, and reads the isotropic smoothing filter instead of an edge-preserving smoothing filter (step S58).
- the image processing unit 20 performs a filtering process on the specific region 100 using the smoothing filter read in step S57 or S58, smoothes the target pixel (step S59), and ends the process.
- the image processing unit 20 performs a smoothing process using an edge-preserving smoothing filter.
- when an edge-preserving smoothing filter is applied to a region containing a certain amount of high-frequency components, for example an image having a clear edge or pattern, there is a risk that the filtering process will degrade the image quality. Therefore, the image processing unit 20 determines whether or not to apply the smoothing result of the edge-preserving smoothing filter based on one or a plurality of conditions.
- three examples of determination conditions will be described.
- (1) Edge amount determination: the image processing unit 20 calculates the amount (intensity) of the edge components in the specific region 100, and when the calculated edge amount is smaller than a threshold value, determines that the result of the smoothing process by the edge-preserving smoothing filter is applied. Since smoothing is thus not performed when the edge amount is large, an image having a clear edge or pattern is prevented from being smoothed and suffering a loss of image quality.
- the edge amount of the specific region 100 can be calculated using, for example, a Laplacian filter.
- FIG. 20 is a schematic diagram for explaining edge amount determination using a Laplacian filter.
- the image processing unit 20 performs processing using a Laplacian filter 121 stored in advance.
- the Laplacian filter 121 is a 3 × 3 matrix; the center value is set to 8, and all the surrounding values are set to -1.
- the image processing unit 20 extracts 3 × 3 partial areas 100a to 100i from the specific area 100, which is a 5 × 5 matrix. From the 5 × 5 matrix, nine 3 × 3 partial areas 100a to 100i can be extracted.
- the image processing unit 20 performs a filtering process using the Laplacian filter 121 on the extracted partial regions 100a to 100i, and calculates nine values A to I as processing results.
- the image processing unit 20 calculates the sum of the absolute values of the nine values A to I as an evaluation value S. The evaluation value S represents the amount of edge components in the specific region 100.
- the image processing unit 20 determines whether or not the calculated evaluation value S exceeds a threshold value. When the evaluation value S does not exceed the threshold value, the image processing unit 20 determines to apply the result of the smoothing process using the edge preserving type smoothing filter. Further, when the evaluation value S exceeds the threshold value, the image processing unit 20 determines that the result of the smoothing process using the edge preserving type smoothing filter is not applied.
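The edge amount determination can be sketched in pure Python with the 3 × 3 Laplacian of FIG. 20; the function names and the strict-inequality threshold test (taken from the flowchart of steps S71 to S75) are illustrative.

```python
LAPLACIAN = [[-1, -1, -1],
             [-1,  8, -1],
             [-1, -1, -1]]

def edge_amount(region):
    """Evaluation value S: the sum of |Laplacian responses| over the
    nine 3x3 partial areas of a 5x5 region."""
    total = 0
    for top in range(3):          # nine 3x3 windows inside the 5x5 region
        for left in range(3):
            v = sum(LAPLACIAN[i][j] * region[top + i][left + j]
                    for i in range(3) for j in range(3))
            total += abs(v)
    return total

def edge_amount_passes(region, threshold):
    # the smoothing result is applied only when S is below the threshold
    return edge_amount(region) < threshold
```

A flat region yields S = 0 and passes, while a single bright pixel produces a large S and is rejected.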
- (2) Texture determination: the image processing unit 20 determines whether or not the image is a texture image based on the change characteristics of the pixel values in the specific region 100, and can be configured to determine that the smoothing processing result is applied when the image is not a texture image and that it is not applied when the image is a texture image.
- specifically, the image processing unit 20 calculates the frequency of the pixel values in the specific region 100 (the number of increases and decreases of the pixel value), and determines based on this frequency whether or not to apply the smoothing processing result using the edge-preserving smoothing filter.
- FIGS. 21 and 22 are schematic diagrams for explaining the application determination based on the frequency. As shown in FIGS. 21A to 22D, the image processing unit 20 calculates the frequency of the specific region 100 in four directions: the horizontal direction, the vertical direction, the oblique 45 ° direction, and the oblique 135 ° direction.
- first, the image processing unit 20 calculates the differences between the pixel values of pixels adjacent in the horizontal direction in the 5 × 5 specific area 100, and generates a 4 × 5 matrix from the calculated difference values. At this time, when a calculated difference value is equal to or less than the threshold value, the image processing unit 20 sets the corresponding matrix value to 0.
- next, the image processing unit 20 checks the signs (plus, minus, or 0) of values adjacent in the horizontal direction of the 4 × 5 matrix, and assigns points to each pair of adjacent values: when the signs of the adjacent values differ, or when one of the adjacent values is 0 and the other is not, the image processing unit 20 gives the pair one point.
- the image processing unit 20 calculates the sum of the points for these 15 pairs, and sets three times the sum as the frequency A in the horizontal direction.
- the sum is multiplied by three because the number of pairs evaluated in the oblique 45 ° and 135 ° directions described later is only 9, fewer than the 15 evaluated here; the frequencies are normalized with weights corresponding to the number of pairs so that they can be compared.
- similarly, the image processing unit 20 generates a 5 × 4 matrix based on the differences between the pixel values of pixels adjacent in the vertical direction in the specific region 100 (when a difference value is equal to or less than the threshold value, the corresponding matrix value is set to 0).
- the image processing unit 20 assigns points in the same manner, calculates the sum of the points for the 15 vertical pairs, and sets three times the sum as the frequency B in the vertical direction.
- the image processing unit 20 likewise generates a 4 × 4 matrix based on the differences between the pixel values of pixels adjacent in the oblique 45 ° direction in the specific region 100 (when a difference value is equal to or less than the threshold value, the corresponding matrix value is set to 0).
- the image processing unit 20 calculates the sum of these nine points, and sets a value five times the sum as the frequency C in the oblique 45 ° direction.
- the image processing unit 20 also generates a 4 × 4 matrix based on the differences between the pixel values of pixels adjacent in the oblique 135 ° direction in the specific region 100 (when a difference value is equal to or less than the threshold value, the corresponding matrix value is set to 0).
- the image processing unit 20 calculates the sum of these nine points, and sets five times the sum as the frequency D in the oblique 135 ° direction.
- the image processing unit 20 selects the minimum value from the four calculated frequencies A to D, and determines whether or not this minimum value exceeds a threshold value. If the minimum value of the frequencies A to D does not exceed the threshold value, the image processing unit 20 determines to apply the result of the smoothing process using the edge-preserving smoothing filter. If the minimum value of the frequencies A to D exceeds the threshold value, the image processing unit 20 determines not to apply the result of the smoothing process using the edge-preserving smoothing filter.
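The horizontal-direction count described above can be sketched as follows; the vertical and diagonal counts follow the same pattern with their own difference matrices and normalization weights (3 for the 15-pair directions, 5 for the 9-pair diagonals). The names are illustrative.

```python
def _sign(v):
    return (v > 0) - (v < 0)

def horizontal_frequency(region, diff_threshold):
    """Frequency A: sign reversals among horizontal neighbour differences
    of a 5x5 region.  The 4x5 difference matrix yields 15 adjacent pairs
    (3 per row, 5 rows); the point total is normalized by the weight 3."""
    # difference matrix; differences at or below the threshold are zeroed
    diffs = [[0 if abs(region[r][c + 1] - region[r][c]) <= diff_threshold
              else region[r][c + 1] - region[r][c]
              for c in range(4)]
             for r in range(5)]
    points = 0
    for row in diffs:
        for p, q in zip(row, row[1:]):
            # a point when the signs differ, including when exactly one
            # of the pair is zero (sign 0 versus sign +-1)
            if _sign(p) != _sign(q):
                points += 1
    return 3 * points
```

A flat region scores 0, while a 1-dot alternating pattern scores the maximum of 45 in this direction.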
- (3) Pixel value change amount determination: for example, the image processing unit 20 can be configured to determine whether or not to apply the smoothing process result according to the amount by which the smoothing process using the edge-preserving smoothing filter changes the pixel value of the target pixel.
- the image processing unit 20 calculates the difference between the pixel value of the target pixel before the smoothing process (that is, the pixel value of the input image) and the pixel value of the target pixel after the smoothing process, and determines whether or not the difference exceeds a threshold value. If the difference does not exceed the threshold value, the image processing unit 20 determines to apply the smoothing process result; if the difference exceeds the threshold value, it determines not to apply the result. This prevents the smoothing process from changing the pixel value of the target pixel excessively.
- the image processing unit 20 determines whether to apply the result of the smoothing process using the edge-preserving smoothing filter based on determinations (1) to (3) above. Specifically, the image processing unit 20 applies the result of the smoothing process when all of determinations (1) to (3) conclude that the result of the edge-preserving smoothing process should be applied. If any one of determinations (1) to (3) concludes that the result should not be applied, the image processing unit 20 does not apply the result of the smoothing process.
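Combined, determinations (1) to (3) can be sketched as a single predicate; the threshold parameters and helper names are assumptions for illustration.

```python
def apply_smoothing_result(original, smoothed,
                           edge_amount, min_frequency,
                           edge_thr, freq_thr, change_thr):
    """Return the pixel value to output: the edge-preserving smoothing
    result only when determinations (1)-(3) all pass, else the original."""
    if edge_amount >= edge_thr:                  # (1) edge amount
        return original
    if min_frequency >= freq_thr:                # (2) texture frequency
        return original
    if abs(smoothed - original) >= change_thr:   # (3) change amount
        return original
    return smoothed
```

The smoothed value is adopted only when the region has few edges, is not a texture, and the pixel does not change too much.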
- the image processing unit 20 may be configured to perform only one or two of the three determinations (1) to (3) instead of all three.
- the image processing unit 20 may be configured to perform application determination based on conditions different from the above (1) to (3).
- the image processing unit 20 performs smoothing processing using an edge preserving smoothing filter and then determines whether or not to apply the smoothing result.
- the present invention is not limited to this.
- the image processing unit 20 may be configured to perform the application determination first and to perform the smoothing process by the edge-preserving smoothing filter according to the determination result.
- the image processing unit 20 performs edge amount determination processing.
- the image processing unit 20 extracts one of the 3 × 3 partial areas 100a to 100i from the 5 × 5 specific area 100 (step S71), and performs the filtering process by the Laplacian filter 121 on the extracted partial area (step S72).
- the image processing unit 20 determines whether or not the extraction of all the partial areas 100a to 100i and the filtering process by the Laplacian filter 121 have been completed (step S73). When these processes have not been performed on all the partial areas 100a to 100i (S73: NO), the image processing unit 20 returns the process to step S71, extracts another partial area, and performs the filtering process by the Laplacian filter 121.
- when the processes have been completed for all the partial areas (S73: YES), the image processing unit 20 calculates the sum of the absolute values of the plurality of values calculated in the filtering process (step S74), and takes this sum as the evaluation value S.
- the image processing unit 20 determines whether or not the evaluation value S is less than the threshold value (step S75). When the evaluation value S is equal to or greater than the threshold value (S75: NO), the image processing unit 20 determines not to apply the result of the edge preserving smoothing process (step S85), and ends the application determination process.
- when the evaluation value S is less than the threshold value (S75: YES), the image processing unit 20 performs the texture determination process.
- the image processing unit 20 calculates a pixel value difference between adjacent pixels in one direction of the specific region 100 (step S76).
- the image processing unit 20 assigns points to the matrix formed by the calculated differences, based on the signs of values adjacent in that direction (step S77).
- the image processing unit 20 calculates the sum of the points, normalizes it by multiplying the sum by the predetermined weight (step S78), and sets the normalized value as the frequency in this direction. Thereafter, the image processing unit 20 determines whether or not the processing of steps S76 to S78 has been completed for all directions to be processed (step S79). If the processing has not been completed for all directions (S79: NO), the image processing unit 20 returns the process to step S76 and performs the same processing for another direction.
- when the processing has been completed for all directions (S79: YES), the image processing unit 20 selects the minimum value from the frequencies calculated for the respective directions (step S80), and determines whether or not this minimum value is less than a threshold value (step S81). When the minimum frequency is equal to or greater than the threshold value (S81: NO), the image processing unit 20 determines not to apply the result of the edge-preserving smoothing process (step S85), and ends the application determination process.
- when the minimum value of the frequencies is less than the threshold value (S81: YES), the image processing unit 20 performs the pixel value change amount determination process.
- the image processing unit 20 calculates the change amount (difference) of the target pixel value before and after the smoothing process using the edge-preserving smoothing filter (step S82), and determines whether or not the change amount is less than a threshold value (step S83).
- when the change amount is less than the threshold value (S83: YES), the image processing unit 20 determines to apply the result of the edge-preserving smoothing process (step S84), and ends the application determination process. If the change amount is equal to or greater than the threshold value (S83: NO), the image processing unit 20 determines not to apply the result of the edge-preserving smoothing process (step S85), and ends the application determination process.
- the display device 1 extracts the specific area 100 including the target pixel from the input image, and determines whether or not to perform the smoothing process on the target pixel in the specific area 100.
- when the display device 1 determines that the smoothing process is to be performed, the display device 1 performs the filtering process using a smoothing filter and smooths the target pixel.
- when it is determined that the smoothing process is not to be performed, the display device 1 performs the filtering process using an edge-preserving smoothing filter to smooth the target pixel.
- further, the display device 1 determines whether or not to apply the smoothing result of the edge-preserving smoothing filter according to the pixel values in the specific region 100 or according to the smoothing result of the edge-preserving smoothing filter itself.
- as a result, the display device 1 can output, for the target pixel of the input image, the pixel value smoothed by the smoothing filter, the pixel value smoothed by the edge-preserving smoothing filter, or the pixel value that has not been smoothed.
- the display device 1 can generate and display an output image subjected to smoothing suitable for each pixel by performing these processes for all the pixels of the input image.
- the display device 1 calculates a differential value of the pixel value between the pixels in the specific area 100, and determines whether or not to perform the smoothing process on the target pixel in the specific area 100 based on the calculated differential value. Further, when it is determined that the smoothing process is to be performed, the display device 1 determines the direction and position of the noise boundary in the specific region 100. As a result, smoothing suitable for the direction and position of the noise boundary existing in the specific region 100 can be performed, so that noise can be accurately removed or reduced from the input image.
- the display device 1 enlarges the size of the specific region 100 when the noise boundary in the specific region 100 matches neither the horizontal pattern, the vertical pattern, nor the other patterns (that is, when no noise boundary exists in the specific region 100), and repeats the determination until a noise boundary matching one of the patterns is detected or the specific region 100 has been enlarged to a predetermined size. Thereby, the display device 1 can detect and remove or reduce block noise of various sizes with high accuracy.
- the display device 1 calculates the primary differential values and secondary differential values of pixel values between adjacent pixels in the specific region 100, and determines whether or not to perform the smoothing process on the target pixel in the specific region 100.
- the display device 1 generates a matrix in which the primary differential values are binarized by comparison with a threshold value and calculates the logical sum of each row or each column of the matrix; similarly, it generates a matrix in which the secondary differential values are binarized by comparison with a threshold value and calculates the logical sum of each row or each column of that matrix.
- the display device 1 further calculates the logical sum of the logical sum related to the calculated primary differential value and the logical sum related to the secondary differential value, and determines whether smoothing processing can be performed based on the calculation result. Accordingly, the display device 1 can accurately determine whether smoothing processing can be performed on the target pixel in the specific region 100.
- the display device 1 calculates the differential value and the logical sum in the vertical direction and the horizontal direction of the specific area 100, and the noise boundary extends in the vertical direction or the horizontal direction based on the logical sum calculated in each direction. Determine whether or not.
- the display device 1 determines the position of the noise boundary in the specific region 100 based on the 0/1 arrangement pattern in the matrix of secondary differential values. Accordingly, the display device 1 can easily determine whether the noise boundary included in the specific region 100 extends in the vertical direction or the horizontal direction, and where the noise boundary is located in the specific region 100.
- the display device 1 stores a plurality of smoothing filters for removing or reducing noise, selects a smoothing filter according to the determined direction and/or position of the noise boundary, and smooths the pixel value of the target pixel in the specific region 100 using the selected smoothing filter. Thereby, the display device 1 can perform smoothing suited to the direction and position of the noise boundary in the specific region 100.
- the display device 1 calculates the strength of the edge component included in the specific region 100 in a plurality of directions using a plurality of Sobel filters, and estimates the strength of the edge component in other directions based on the calculated strength.
- the display device 1 determines the extending direction of the edge component in the specific region 100 based on the calculated and estimated intensity of the edge component, and performs smoothing using an edge preserving type smoothing filter corresponding to the determined direction. Do.
- the display device 1 can estimate the strength of the edge component in a direction that cannot be calculated by the stored Sobel filter, and can determine the direction of the edge component in more directions. Therefore, the display device 1 can select an edge-preserving smoothing filter that is more suitable for the edge component of the specific region 100, and can perform smoothing with high accuracy.
- the display device 1 calculates the edge component intensity with the Sobel filters in at least four directions (0 °, 45 °, 90 ° and 135 °), and estimates, based on the four calculated intensities, the intensities in the other directions (22.5 °, 67.5 °, 112.5 ° and 157.5 °).
- the display device 1 stores equations (21) to (24) for estimating the strength of edge components by linear interpolation, and can calculate estimated values of the intensities in the other directions by substituting the four intensities calculated using the Sobel filters into these equations.
- alternatively, the display device 1 may store four extended Sobel filters corresponding to 22.5 °, 67.5 °, 112.5 ° and 157.5 °, and calculate the intensities in the eight directions using the eight Sobel filters.
- the four extended Sobel filters can be calculated in advance using the equations (1) to (4) or the equations (5) to (8). Accordingly, the display device 1 can select an edge-preserving smoothing filter that is more suitable for the edge component of the specific region 100, and can perform smoothing with high accuracy.
- the display device 1 selects the maximum value and the minimum value from the calculated and estimated plurality of edge strengths, and determines whether or not the difference between them exceeds a threshold value. When the difference does not exceed the threshold value, the display device 1 regards the specific region 100 as a substantially flat image that does not include an edge component, and performs smoothing using an isotropic smoothing filter that is not an edge-preserving type. This makes it possible to appropriately smooth an image that does not include an edge component.
- the display device 1 calculates the intensity of the edge components in the specific region 100 using a Laplacian filter, and determines whether or not the calculated intensity exceeds a threshold value. When the calculated intensity exceeds the threshold value, the display device 1 does not apply the result of the smoothing process using the edge-preserving smoothing filter. Thereby, for example, when the input image has a clear edge or pattern, the smoothing process is prevented from degrading the image quality.
- the display device 1 calculates the increase/decrease frequency of the pixel values of adjacent pixels in the specific region 100, that is, the vibration frequency, and determines whether or not the calculated vibration frequency exceeds a threshold value.
- when the calculated vibration frequency exceeds the threshold value, the display device 1 does not apply the result of the smoothing process using the edge-preserving smoothing filter.
- the display device 1 calculates a change amount of the pixel value of the target pixel before and after the smoothing process using the edge preserving type smoothing filter, and determines whether or not the calculated change amount exceeds the threshold value.
- when the calculated change amount exceeds the threshold value, the display device 1 does not apply the result of the smoothing process using the edge-preserving smoothing filter.
- the display device 1 is configured to perform smoothing using an edge-preserving smoothing filter when it is determined that smoothing processing is not performed on the target pixel in the specific region 100.
- the present invention is not limited to this, and the display device 1 may be configured not to perform smoothing using an edge-preserving smoothing filter.
- the display device 1 is configured to extract a 5 × 5 pixel area as the specific area 100, but is not limited to this.
- the display device 1 may extract an area of another size, such as 3 × 3 pixels or 7 × 7 pixels, as the specific area 100.
- the smoothing filters shown in FIG. 10 and FIG. 11 are examples, and the present invention is not limited to these.
- the isotropic smoothing filter shown in FIG. 14 is an example, and the present invention is not limited to this.
- the Sobel filter shown in FIG. 15 and the Laplacian filter shown in FIG. 20 are examples, and the present invention is not limited to this.
- the display device 1 calculates the intensities of the edge components at 0 °, 45 °, 90 ° and 135 ° using the Sobel filters, and calculates or estimates the intensities of the edge components at 22.5 °, 67.5 °, 112.5 ° and 157.5 °; however, these directions (angles) are merely examples, and the present invention is not limited to them.
- the display device 1 may use either the first method, which stores eight Sobel filters and calculates the intensities of edge components in eight directions, or the second method, which stores four Sobel filters and estimates the intensities of the edge components in the other four directions. Furthermore, the first and second methods may be combined.
- for example, a configuration is possible in which the intensities of edge components in eight directions are calculated by eight Sobel filters, the intensities of edge components in another eight directions are estimated from them, and the intensities of edge components in sixteen directions are thereby obtained.
- the display device 1 is configured to determine whether or not to apply the result of the smoothing process using the edge-preserving smoothing filter under the three conditions of edge amount determination, texture determination, and pixel value change amount determination. However, it is not limited to this.
- the display device 1 may perform application determination based on, for example, one or two of the above three conditions, or may perform application determination based on conditions other than the above three conditions, for example.
- the display device 1 is configured to perform two types of smoothing processing on the input image, the first smoothing processing using the smoothing filter or the second smoothing processing using the edge preserving type smoothing filter. However, it is not limited to this.
- the display device 1 may have a configuration in which the first smoothing process is performed and the second smoothing process is not performed. In this case, for example, in the flowchart of FIG. 3, when it is determined in step S4 that smoothing is not performed, the display device 1 may be configured to perform the process of outputting the pixel value of the target pixel in step S11.
- the display device 1 also performs noise boundary direction determination processing (step S24) and position determination processing (step S29) on the specific region 100 extracted from the input image, and smoothes based on the determined direction and position. It may be configured to perform image processing other than processing.
- the display device 1 may be configured to perform the second smoothing process without performing the first smoothing process.
- the process proceeds to step S7 to perform the smoothing process using the edge-preserving smoothing filter.
- the display device 1 may be configured not to perform application determination as to whether or not to apply the result of the edge preserving smoothing filter process. In this case, the configuration may be such that the result of the edge preserving smoothing filter process is always applied.
- the display device 1 may also perform the edge component direction determination processing (steps S52 and S56), the edge strength estimation processing (step S53), and the like on the specific region 100 extracted from the input image, and perform image processing other than the smoothing processing based on the determined direction.
- the display device 1 has been described as an example of a device that removes or reduces noise included in an input image. However, the present invention is not limited to this, and the same configuration can be applied to various other image processing apparatuses.
- the same configuration can also be applied to a display device that displays, instead of an input image from the PC 5, an image of a television broadcast received by a tuner or the like.
- a similar configuration can be applied to an image processing apparatus that prints an input image, such as a printer or a facsimile.
- the PC may perform processing such as image expansion and noise removal described above, and an image from which noise is removed or reduced may be input to the display device.
- the above-described image processing function may be provided as a computer program, and the above-described image processing may be performed by a PC CPU executing the computer program.
- the computer program may be provided by being recorded on a recording medium such as a disk or a memory card, or may be provided via a network such as the Internet.
- FIG. 25 is a block diagram showing a configuration of the PC 155 according to the modification.
- the PC 155 according to the modified example decompresses an input image compressed by a method such as JPEG or MPEG, performs image processing for removing or reducing noise, and outputs the processed image to the display device 151.
- the display device 151 displays an input image from the PC 155 on the liquid crystal panel.
- the PC 155 includes a CPU 156, an operation unit 157, a primary storage unit 158, a secondary storage unit 159, a recording medium mounting unit 160, a communication unit 161, an image output unit 162, and the like.
- the CPU 156 reads and executes the image processing program 181 stored in the secondary storage unit 159, whereby the above-described image expansion unit 17, the image processing unit 20, and the like are realized as software functional blocks.
- the operation unit 157 is an input device such as a keyboard and a mouse. The operation unit 157 receives a user operation and notifies the CPU 156 of the operation.
- the primary storage unit 158 includes a memory element such as SRAM (Static Random Access Memory), and temporarily stores various data used for the processing of the CPU 156.
- the secondary storage unit 159 is configured by a storage device such as a hard disk, and stores various computer programs such as the image processing program 181 and various data required for executing the computer program.
- the recording medium mounting unit 160 is a device such as a disk drive or a memory card slot; a recording medium 180 such as a DVD (Digital Versatile Disk) or a memory card is loaded into it, and it reads the computer program and data recorded on the recording medium.
- the communication unit 161 communicates with other devices via a network such as the Internet by wireless or wired.
- the image output unit 162 outputs data subjected to image processing by the CPU 156 to the display device 151.
- the image processing program 181 is provided by being recorded on the recording medium 180.
- the CPU 156 of the PC 155 reads the image processing program 181 and the like recorded on the recording medium 180 and stores them in the secondary storage unit 159.
- the CPU 156 operates as the image expansion unit 17 and the image processing unit 20 by reading out and executing the image processing program 181 from the secondary storage unit 159.
- the operation unit 157 receives an instruction to display an image compressed by a method such as JPEG or MPEG
- the CPU 156 obtains an image to be displayed by, for example, reading it from the secondary storage unit 159, and the image expansion unit 17 decompresses the image.
- the CPU 156 performs image processing for removing or reducing noise included in the decompressed image in the image processing unit 20.
- the image processing unit 20 extracts the specific area 100 including the target pixel from the expanded image, determines whether to perform the smoothing process on the target pixel in the extracted specific area 100, and determines to perform the smoothing process. In this case, a smoothing process using a smoothing filter is performed. Further, when it is determined that the smoothing process is not performed, the image processing unit 20 performs a smoothing process using an edge-preserving smoothing filter and determines whether to apply this process.
- the image processing unit 20 repeats these processes for all pixels of the image and generates and outputs an image in which the value of each pixel is one of the pixel value smoothed by the smoothing filter, the pixel value smoothed by the edge-preserving smoothing filter, and the original pixel value. The image processing unit 20 can thereby generate an output image from which stepped noise such as block noise has been removed or reduced.
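The per-pixel selection just described can be sketched in a few lines. This is a minimal, hypothetical sketch: the function names and the stand-in predicates are illustrative, not identifiers from the patent, and the real determinations (steps S3, S4, and the application determination) are far more elaborate.

```python
def process_pixel(region, should_smooth, smooth1, smooth2, accept):
    """Pick one of: smooth1 result, smooth2 result, or the original value."""
    cy = len(region) // 2
    cx = len(region[0]) // 2
    original = region[cy][cx]           # target pixel at the region's center
    if should_smooth(region):           # smoothing determination
        return smooth1(region)          # ordinary smoothing filter
    candidate = smooth2(region)         # edge-preserving smoothing filter
    # application determination: keep the original value if rejected
    return candidate if accept(original, candidate) else original

# Example wiring with trivial stand-ins (a plain mean as both filters):
mean = lambda r: sum(map(sum, r)) / (len(r) * len(r[0]))
out = process_pixel([[10, 10, 10], [10, 40, 10], [10, 10, 10]],
                    should_smooth=lambda r: False,
                    smooth1=mean,
                    smooth2=mean,
                    accept=lambda o, c: abs(o - c) <= 20)
# The mean (about 13.3) moves the center pixel by more than 20,
# so the original value 40 is kept.
```

The three-way choice mirrors the output-image generation: each output pixel is exactly one of the two smoothed values or the untouched input value.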
- 20 Image processing unit (specific region extraction means, smoothing determination means, noise boundary position determination means, noise boundary direction determination means, specific region enlargement means, first-derivative binarization means, first logical-OR calculation means, second-derivative binarization means, second logical-OR calculation means, third logical-OR calculation means, smoothing filter storage means, filter selection means, smoothing means, first smoothing means, second smoothing means, application determination means, Sobel filter storage means, edge strength calculation means, edge strength estimation means, edge direction determination means, edge-preserving smoothing filter storage means, edge strength difference determination means, second edge strength calculation means, edge strength determination means, increase/decrease count calculation means, increase/decrease count determination means, smoothing difference calculation means, smoothing difference determination means) 21 Specific region extraction unit (specific region extraction means) 22 Smoothing determination unit (smoothing determination means)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Picture Signal Circuits (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Analysis (AREA)
Abstract
Description
This enables appropriate filtering based on the direction and position of the noise boundary portion, making it possible to accurately remove or reduce noise from the input image.
These make it possible to accurately detect noise within the specific region.
The position of the noise boundary portion within the specific region is determined based on the 0/1 arrangement pattern of the binarized second derivatives in the specific region. This makes it easy to determine the position of the noise boundary portion.
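One simple reading of this position determination can be sketched as follows. This is a hedged illustration only: how the patent actually maps 0/1 patterns (FIGS. 9A to 9D) to boundary positions is not reproduced here; the sketch merely ORs each column of a binarized second-derivative matrix to locate candidate vertical boundary columns.

```python
def boundary_columns(bin_d2):
    """Return indices of columns whose binarized second derivatives
    contain at least one 1 (candidate vertical noise boundary columns)."""
    cols = range(len(bin_d2[0]))
    return [c for c in cols if any(row[c] for row in bin_d2)]

# A binarized second-derivative matrix with a single vertical boundary:
m = [[0, 1, 0],
     [0, 1, 0],
     [0, 1, 0],
     [0, 1, 0],
     [0, 1, 0]]
# boundary_columns(m) -> [1]
```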
The image processing apparatus selects, as the output-image pixel value for the target pixel of the input image, one of the pixel value smoothed by the first smoothing process, the pixel value smoothed by the second smoothing process, and the original, unsmoothed pixel value of the input image. Making this selection for every pixel of the input image yields the output image.
This allows smoothing suited to each pixel of the input image, and allows the output image to be generated without smoothing pixels that do not require it.
For example, when the input image has clear edges or patterns, smoothing may degrade image quality, so the original pixel value is adopted without applying the result of the smoothing process.
Next, the smoothing determination process performed by the image processing unit 20 will be described. The smoothing determination process is performed by the smoothing determination unit 22 in FIG. 2 and corresponds to step S3 in FIG. 3. FIG. 4 is a flowchart outlining the procedure of the smoothing determination process performed by the image processing unit 20. First, for the specific region extracted from the input image, the image processing unit 20 performs a horizontal (lateral) determination process (step S21) and a vertical (longitudinal) determination process (step S22), and determines whether the target pixel contained in the specific region is to be subjected to the smoothing process (step S23). When it determines that the target pixel is not to be smoothed (S23: NO), the image processing unit 20 ends the smoothing determination process.
FIG. 5 is a schematic diagram for explaining the horizontal determination process performed in step S21. The image processing unit 20 selects one target pixel from the input image and extracts a specific region 100 of a predetermined size (5×5 pixels in the illustrated example) containing the target pixel. In the figure, the target pixel within the specific region 100 is shown hatched. In the horizontal determination process, the image processing unit 20 first calculates the first derivative of pixel values between horizontally adjacent pixels in the specific region 100 (that is, the difference between adjacent pixel values), thereby obtaining a horizontal first-derivative matrix 101 consisting of 4×5 first derivatives. The image processing unit 20 binarizes the horizontal first-derivative matrix 101 by comparing the absolute value of each first derivative with a predetermined threshold (for example, 1 when the absolute value of the first derivative is greater than or equal to the threshold, 0 when it is less than the threshold). Next, the image processing unit 20 performs an OR (logical sum) operation on each set of four binarized first derivatives aligned in the horizontal direction, obtaining a horizontal first-derivative OR column 102 consisting of five results.
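The derivative-binarize-OR chain just described can be sketched directly. This is a minimal sketch under stated assumptions: the function name and the example threshold are illustrative, and only the horizontal pass for a 5×5 region is shown.

```python
def horizontal_or_column(region, thresh):
    """5x5 region -> 4x5 first-derivative matrix -> binarize -> per-row OR.

    Returns the five-element horizontal first-derivative OR column: each
    entry is 1 when any adjacent-pixel difference in that row reaches the
    threshold, 0 otherwise.
    """
    out = []
    for row in region:
        diffs = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
        bits = [1 if d >= thresh else 0 for d in diffs]
        out.append(max(bits))  # OR of the four binarized values
    return out

flat = [0, 0, 0, 0, 0]        # no horizontal variation
step = [0, 0, 50, 50, 50]     # a step edge inside the row
region = [flat, flat, step, step, step]
# horizontal_or_column(region, 10) -> [0, 0, 1, 1, 1]
```

The vertical determination of step S22 is the same computation applied along columns instead of rows.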
When it determines that the target pixel in the specific region 100 is to be subjected to the smoothing process, the image processing unit 20 performs a process of determining the direction in which the noise boundary contained in the specific region 100 extends. FIG. 8 is a schematic diagram for explaining the noise boundary direction determination process. In this process, the image processing unit 20 determines whether the direction of the noise boundary contained in the specific region 100 is the horizontal pattern of FIG. 8A, the vertical pattern of FIG. 8B, the interior pattern of FIG. 8C, or the other pattern of FIG. 8D. The horizontal pattern of FIG. 8A is a pattern in which the noise boundary extends in the horizontal (lateral) direction of the specific region 100. The vertical pattern of FIG. 8B is a pattern in which the noise boundary extends in the vertical (longitudinal) direction of the specific region 100. The interior pattern of FIG. 8C is the pattern in which no noise boundary is contained in the specific region 100 and the specific region 100 lies in the interior of a block-noise block. The other pattern of FIG. 8D covers all patterns other than those of FIGS. 8A to 8C, the illustrated pattern being one example.
When the noise boundary of the specific region 100 is the horizontal pattern of FIG. 8A or the vertical pattern of FIG. 8B, the image processing unit 20 performs a process of determining the position of the noise boundary in the specific region 100. FIG. 9 is a schematic diagram for explaining the noise boundary position determination process, showing an example in which the noise boundary of the specific region 100 is a vertical pattern. The position of a vertical-pattern noise boundary in the 5×5-pixel specific region 100 is one of the four positions shown in FIGS. 9A to 9D. The position determination process when the noise boundary is a horizontal pattern is substantially the same as for the vertical pattern, so its description is omitted.
When the above smoothing determination process determines that the smoothing process is to be performed, the image processing unit 20 smooths the pixel values of the specific region 100. The image processing unit 20 stores a plurality of smoothing filters for performing the smoothing process, selects one smoothing filter based on the direction determined in the noise boundary direction determination process and the position determined in the noise boundary position determination process, and smooths the pixel value of the target pixel in the specific region 100 using the selected filter.
When the above smoothing determination process determines that the smoothing process is not to be performed, the image processing unit 20 smooths the pixel value of the target pixel in the specific region 100 using an edge-preserving smoothing filter. The image processing unit 20 stores a plurality of edge-preserving smoothing filters, determines the direction of the edge component contained in the specific region 100, selects one filter according to the determined edge direction, and smooths the pixel value of the target pixel in the specific region 100 using the selected filter.
For 22.5° < x < 45°: X = αE + (1-α)B
For 45° < x < 67.5°: X = αB + (1-α)F
For 67.5° < x < 90°: X = αF + (1-α)C
For 90° < x < 112.5°: X = αC + (1-α)G
For 112.5° < x < 135°: X = αG + (1-α)D
For 135° < x < 157.5°: X = αD + (1-α)H
For 157.5° < x < 180°: X = αH + (1-α)A
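The angle-dependent blend above can be sketched as a lookup plus interpolation. This is a hedged illustration: A through H stand for the eight directional smoothing results the patent labels with those letters, and the linear form of the weight α within each 22.5° interval is an assumption here, since the text does not define α at this point.

```python
# (lower bound, upper bound) in degrees -> (first term, second term)
PAIRS = {
    (22.5, 45.0): ("E", "B"), (45.0, 67.5): ("B", "F"),
    (67.5, 90.0): ("F", "C"), (90.0, 112.5): ("C", "G"),
    (112.5, 135.0): ("G", "D"), (135.0, 157.5): ("D", "H"),
    (157.5, 180.0): ("H", "A"),
}

def blend(x, values):
    """Blend two directional results for an edge angle x (degrees)."""
    for (lo, hi), (p, q) in PAIRS.items():
        if lo < x < hi:
            alpha = (hi - x) / (hi - lo)   # assumed linear weight
            return alpha * values[p] + (1 - alpha) * values[q]
    raise ValueError("angle on an interval boundary or out of range")

vals = dict(zip("ABCDEFGH", [0, 1, 2, 3, 4, 5, 6, 7]))
# At the midpoint of (22.5, 45) both terms get weight 0.5:
# blend(33.75, vals) -> 0.5 * E + 0.5 * B = 2.5
```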
To select one filter from among these edge-preserving smoothing filters, the image processing unit 20 must determine the direction of the edge component in the specific region 100. For example, the image processing unit 20 can determine the edge component's direction by filtering with Sobel filters. FIG. 15 is a schematic diagram showing an example of Sobel filters for direction determination. To determine whether the edge component's direction is horizontal (0°), diagonal 45°, vertical (90°), or diagonal 135°, the image processing unit 20 uses the four Sobel filters shown in FIGS. 15A to 15D, one for each direction.
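Sobel-based direction probing works by comparing filter responses. The sketch below uses the standard 3×3 Sobel kernels for the horizontal and vertical cases; the exact coefficients of the patent's FIG. 15 filters (including the two diagonal ones) are not reproduced here and are an assumption.

```python
def convolve3(region, kernel):
    """Apply a 3x3 kernel at the center of a 3x3 region (single response)."""
    return sum(region[i][j] * kernel[i][j]
               for i in range(3) for j in range(3))

SOBEL_H = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges
SOBEL_V = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges

# A region with a vertical step edge: the vertical-edge kernel responds
# strongly, the horizontal-edge kernel not at all.
vert_edge = [[0, 0, 100], [0, 0, 100], [0, 0, 100]]
# convolve3(vert_edge, SOBEL_V) -> 400
# convolve3(vert_edge, SOBEL_H) -> 0
```

Comparing the magnitudes of the four responses then selects the direction, as the text describes.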
For π/4 < x < π/2: X = αB + (1-α)C …(2)
For π/2 < x < 3π/4: X = αC + (1-α)D …(3)
For 3π/4 < x < π: X = αD + (1-α)E …(4)
When the number of stored Sobel filters is smaller than the number of stored edge-preserving smoothing filters, the image processing unit 20 performs a process of estimating the strengths of edge components for the directions that cannot be calculated with the Sobel filters. Based on the plurality of edge component strengths calculated in (a) the edge strength calculation process, the image processing unit 20 estimates the edge component strengths for the remaining directions.
Yi = (1-r)×Y2 + r×Y3 + Δ …(11)
In equation (11), Δ can be expressed by the following equation (12):
Δ = (1-r)×Δα + r×Δβ …(12)
Further, Δα and Δβ can be expressed by the following equations (13) and (14):
Δα=Y2-(Y1+Y3)/2 …(13)
Δβ=Y3-(Y2+Y4)/2 …(14)
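Equations (11) to (14) can be written out directly; the sketch below transcribes them as given, with the function name chosen for illustration.

```python
def interpolate_strength(y1, y2, y3, y4, r):
    """Estimate an in-between edge strength from four neighbouring
    direction strengths, per equations (11)-(14): a linear blend of the
    two nearest strengths plus a second-order correction term."""
    d_alpha = y2 - (y1 + y3) / 2            # eq (13)
    d_beta = y3 - (y2 + y4) / 2             # eq (14)
    delta = (1 - r) * d_alpha + r * d_beta  # eq (12)
    return (1 - r) * y2 + r * y3 + delta    # eq (11)

# Midway (r = 1/2) between two equal strengths of 8 flanked by zeros:
# interpolate_strength(0, 8, 8, 0, 0.5) -> 12.0
```

With r = 1/2 this reduces term by term to the closed forms (21) to (24) given below.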
A1(135°, d)
A2(0°, a)
A3(45°, b)
A4(90°, c)
C(22.5°, e)
In this case, r = 1/2.
e={3×(a+b)-(c+d)}/4 …(21)
Similarly, letting f be the strength of the 67.5° edge component, g the strength of the 112.5° edge component, and h the strength of the 157.5° edge component, the following equations (22) to (24) are obtained.
f={3×(b+c)-(d+a)}/4 …(22)
g={3×(c+d)-(a+b)}/4 …(23)
h={3×(d+a)-(b+c)}/4 …(24)
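Equations (21) to (24) estimate the four intermediate-direction strengths from the four measured ones; transcribed directly (function name illustrative):

```python
def intermediate_strengths(a, b, c, d):
    """Estimate the 22.5/67.5/112.5/157.5-degree strengths e, f, g, h from
    the measured strengths a (0 deg), b (45 deg), c (90 deg), d (135 deg),
    per equations (21)-(24)."""
    e = (3 * (a + b) - (c + d)) / 4   # eq (21)
    f = (3 * (b + c) - (d + a)) / 4   # eq (22)
    g = (3 * (c + d) - (a + b)) / 4   # eq (23)
    h = (3 * (d + a) - (b + c)) / 4   # eq (24)
    return e, f, g, h

# intermediate_strengths(8, 8, 0, 0) -> (12.0, 4.0, -4.0, 4.0)
```

Note that e here equals interpolate_strength(d, a, b, c, 0.5) from equations (11) to (14), confirming the two formulations agree.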
The image processing unit 20 calculates the edge component strengths for eight directions using the four basic Sobel filters and the four extended Sobel filters (first method). Alternatively, the image processing unit 20 calculates the edge component strengths for the four directions using the four Sobel filters of FIGS. 15A to 15D and estimates the strengths for the other directions (second method). The image processing unit 20 compares the plurality of edge component strengths obtained by calculation or estimation, and determines the direction whose edge is strongest (whose calculated value has the smallest absolute value) as the direction of the edge component of the specific region 100. The image processing unit 20 reads out the edge-preserving smoothing filter corresponding to the determined direction (see FIGS. 12 and 13), filters the specific region 100, and smooths the pixel value of the target pixel.
As described above, when the smoothing determination process determines that the smoothing process is not to be performed, the image processing unit 20 performs smoothing using an edge-preserving smoothing filter. The edge-preserving smoothing filter is intended for regions containing a certain amount of high-frequency components; for an image with clear edges or patterns, for example, the filtering may degrade image quality. The image processing unit 20 therefore determines whether to apply the result of the edge-preserving smoothing filter based on one or more conditions. Three examples of determination conditions are described below.
For example, the image processing unit 20 calculates the amount (strength) of the edge component in the specific region 100 and determines that the result of the edge-preserving smoothing is to be applied when the calculated edge amount is smaller than a threshold. Since no smoothing is applied when the edge amount is large, this prevents images with clear edges or patterns from being smoothed and degraded. The edge amount of the specific region 100 can be calculated using, for example, a Laplacian filter.
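A Laplacian-based edge amount can be sketched as below. This is an assumption-laden illustration: the common 4-neighbour 3×3 Laplacian kernel and the sum-of-absolute-responses aggregation are stand-ins, as the patent does not fix either choice here.

```python
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # a common 3x3 Laplacian

def edge_amount(region):
    """Sum of absolute Laplacian responses over the interior of a region."""
    total = 0
    for y in range(1, len(region) - 1):
        for x in range(1, len(region[0]) - 1):
            v = sum(LAPLACIAN[i][j] * region[y - 1 + i][x - 1 + j]
                    for i in range(3) for j in range(3))
            total += abs(v)
    return total

flat = [[7] * 5 for _ in range(5)]              # no edges at all
stepr = [[0, 0, 100, 100, 100] for _ in range(5)]  # a strong vertical edge
# edge_amount(flat) -> 0, edge_amount(stepr) -> 600
```

Comparing the result against a threshold then decides whether the edge-preserving result is applied, as the paragraph above describes.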
In the case of a fine, one-pixel-scale texture image, the smoothing process may crush the texture and degrade image quality. The image processing unit 20 may therefore be configured to judge, based on, for example, the variation characteristics of the pixel values in the specific region 100, whether the region is a texture image, and to determine that the smoothing result is to be applied when it is not a texture image and that the smoothing result is not to be applied when it is a texture image.
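One hedged reading of such a "variation characteristic" is the increase/decrease count also referenced in claim 16: counting how often adjacent pixel differences change sign along a row. The function name and the exact counting rule below are illustrative, not the patent's definition.

```python
def sign_changes(row):
    """Count sign changes of adjacent-pixel differences along a row.
    A high count suggests fine one-pixel texture that smoothing would
    destroy; a low count suggests a flat area or a smooth gradient."""
    diffs = [b - a for a, b in zip(row, row[1:]) if b != a]
    return sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)

# sign_changes([0, 9, 0, 9, 0]) -> 3  (alternating texture)
# sign_changes([0, 2, 4, 6, 8]) -> 0  (monotone gradient)
```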
The image processing unit 20 may also be configured to determine whether to apply the smoothing result according to the amount of change in the target pixel's value caused by the smoothing process using the edge-preserving smoothing filter. The image processing unit 20 calculates the difference between the target pixel's value before smoothing (that is, its value in the input image) and its value after smoothing; it determines that the smoothing result is to be applied when this difference does not exceed a threshold, and that the result is not to be applied when the difference exceeds the threshold. This prevents the smoothing process from changing the target pixel's value greatly.
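This third condition is a one-line comparison; a minimal sketch (function name illustrative):

```python
def accept_smoothing(original, smoothed, limit):
    """Apply the edge-preserving result only when it does not move the
    target pixel's value by more than `limit`."""
    return abs(smoothed - original) <= limit

# accept_smoothing(100, 104, 8) -> True   (small change: keep the result)
# accept_smoothing(100, 130, 8) -> False  (large change: keep the original)
```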
Alternatively, for example, a PC may perform the image expansion and the above-described noise removal, and the image with noise removed or reduced may be input to the display device. In this case, the above image processing functions may be provided as a computer program, and the image processing may be performed by the PC's CPU executing the computer program. The computer program may be provided recorded on a recording medium such as a disk or a memory card, or may be provided via a network such as the Internet.
5 PC
11 Control unit
12 Operation unit
13 Liquid crystal panel
14 Backlight
15 Light drive unit
16 Image input unit
17 Image expansion unit
18 Panel drive unit
20 Image processing unit (specific region extraction means, smoothing determination means, noise boundary position determination means, noise boundary direction determination means, specific region enlargement means, first-derivative binarization means, first logical-OR calculation means, second-derivative binarization means, second logical-OR calculation means, third logical-OR calculation means, smoothing filter storage means, filter selection means, smoothing means, first smoothing means, second smoothing means, application determination means, Sobel filter storage means, edge strength calculation means, edge strength estimation means, edge direction determination means, edge-preserving smoothing filter storage means, edge strength difference determination means, second edge strength calculation means, edge strength determination means, increase/decrease count calculation means, increase/decrease count determination means, smoothing difference calculation means, smoothing difference determination means)
21 Specific region extraction unit (specific region extraction means)
22 Smoothing determination unit (smoothing determination means)
23 First smoothing unit (smoothing means, first smoothing means)
24 Second smoothing unit (second smoothing means)
25 Application determination unit (application determination means)
26 Pixel value selection unit
100 Specific region
101 Horizontal first-derivative matrix
102 Horizontal first-derivative OR column
103 Horizontal second-derivative matrix
103a Binarized horizontal second-derivative matrix
104 Horizontal second-derivative OR column
105 Horizontal OR column
106 Horizontal upper OR value
107 Horizontal lower OR value
108 Horizontal direction determination result
111 Vertical first-derivative matrix
112 Vertical first-derivative OR row
113 Vertical second-derivative matrix
114 Vertical second-derivative OR row
115 Vertical OR row
116 Vertical upper OR value
117 Vertical lower OR value
118 Vertical direction determination result
151 Display device
155 PC (image processing apparatus)
156 CPU
157 Operation unit
158 Primary storage unit
159 Secondary storage unit
160 Recording medium mounting unit
161 Communication unit
162 Image output unit
180 Recording medium
181 Image processing program (computer program)
Claims (44)
- An image processing apparatus that performs image processing to generate an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, comprising: specific region extraction means for extracting a specific region containing a target pixel from the input image; smoothing determination means for determining, based on the specific region extracted by the specific region extraction means, whether the target pixel is to be subjected to a first smoothing process; first smoothing means for performing, when the smoothing determination means determines that the first smoothing process is to be applied, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; and second smoothing means for performing, when the smoothing determination means determines that the first smoothing process is not to be applied, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel, wherein an output image is generated in which the pixel value of each pixel of the input image is either the pixel value smoothed by the first smoothing means or the pixel value smoothed by the second smoothing means.
- The image processing apparatus according to claim 1, comprising: noise boundary direction determination means for determining, when the smoothing determination means determines that the first smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region; and noise boundary position determination means for determining, when the smoothing determination means determines that the first smoothing process is to be applied, the position of the noise boundary portion in the specific region.
- The image processing apparatus according to claim 2, wherein the noise boundary position determination means determines the position of the noise boundary portion in the specific region when the smoothing determination means determines that the smoothing process is to be applied and the direction determined by the noise boundary direction determination means is a predetermined direction.
- The image processing apparatus according to claim 2 or claim 3, further comprising specific region enlargement means for enlarging the specific region when the direction determined by the noise boundary direction determination means is not a predetermined direction, wherein the noise boundary direction determination means performs the direction determination again on the specific region enlarged by the specific region enlargement means.
- The image processing apparatus according to claim 4, wherein the enlargement of the specific region by the specific region enlargement means and the direction determination by the noise boundary direction determination means are repeated until the direction determined by the noise boundary direction determination means becomes a predetermined direction, or until the specific region is enlarged to a predetermined size by the specific region enlargement means.
- The image processing apparatus according to any one of claims 2 to 5, comprising: smoothing filter storage means storing a plurality of smoothing filters for use by the first smoothing means; and filter selection means for selecting a smoothing filter from the smoothing filter storage means in accordance with the position determined by the noise boundary position determination means and the direction determined by the noise boundary direction determination means, wherein the first smoothing means smooths the pixel value of the target pixel using the smoothing filter selected by the filter selection means.
- The image processing apparatus according to any one of claims 1 to 6, comprising differential value calculation means for calculating differential values of pixel values between pixels in the image of the specific region extracted by the specific region extraction means, wherein the smoothing determination means determines, based on the differential values calculated by the differential value calculation means, whether the target pixel is to be subjected to the smoothing process.
- The image processing apparatus according to claim 7, wherein the differential value calculation means calculates first derivatives and second derivatives of pixel values between adjacent pixels, and the smoothing determination means performs the determination based on the first and second derivatives calculated by the differential value calculation means.
- The image processing apparatus according to claim 8, comprising: first-derivative binarization means for binarizing the first derivatives calculated by the differential value calculation means according to whether each first derivative exceeds a threshold; first logical-OR calculation means for calculating the logical OR of the first derivatives binarized by the first-derivative binarization means; second-derivative binarization means for binarizing the second derivatives calculated by the differential value calculation means according to whether each second derivative exceeds a threshold; second logical-OR calculation means for calculating the logical OR of the second derivatives binarized by the second-derivative binarization means; and third logical-OR calculation means for calculating the logical OR of the calculation result of the first logical-OR calculation means and the calculation result of the second logical-OR calculation means, wherein the smoothing determination means performs the determination based on the calculation result of the third logical-OR calculation means.
- The image processing apparatus according to claim 9, wherein the differential value calculation means, the first-derivative binarization means, the first logical-OR calculation means, the second-derivative binarization means, the second logical-OR calculation means, and the third logical-OR calculation means each perform their processing for both the vertical direction and the horizontal direction of the specific region, and the smoothing determination means performs the determination based on the vertical and horizontal calculation results of the third logical-OR calculation means.
- The image processing apparatus according to claim 10, comprising noise boundary direction determination means for determining, when the smoothing determination means determines that the first smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region, wherein the noise boundary direction determination means determines, based on the vertical and horizontal calculation results of the third logical-OR calculation means, whether the noise boundary portion extends in the vertical direction and/or the horizontal direction in the specific region.
- The image processing apparatus according to any one of claims 9 to 11, comprising noise boundary position determination means for determining, when the smoothing determination means determines that the first smoothing process is to be applied, the position of the noise boundary portion in the specific region, wherein the noise boundary position determination means determines the position of the noise boundary portion in the specific region based on the pattern, within the specific region, of the second derivatives binarized by the second-derivative binarization means.
- The image processing apparatus according to any one of claims 1 to 12, further comprising: Sobel filter storage means storing, for different directions, a plurality of Sobel filters each detecting the strength of an edge component for a specific direction; edge strength calculation means for filtering the specific region extracted by the specific region extraction means with the plurality of Sobel filters stored in the Sobel filter storage means to calculate the strengths of edge components contained in the specific region for a plurality of directions; and edge strength difference determination means for determining whether the difference between the maximum and minimum of the plurality of edge strengths calculated by the edge strength calculation means exceeds a threshold, wherein the second smoothing means smooths the pixel value of the target pixel of the specific region using a non-edge-preserving smoothing filter when the edge strength difference determination means determines that the difference does not exceed the threshold.
- The image processing apparatus according to any one of claims 1 to 13, comprising application determination means for determining whether to apply the smoothing result of the second smoothing means, wherein an output image is generated in which the pixel value of each pixel of the input image is one of the pixel value smoothed by the first smoothing means, the pixel value smoothed by the second smoothing means, and the original, unsmoothed pixel value.
- The image processing apparatus according to claim 14, comprising: second edge strength calculation means for filtering the specific region extracted by the specific region extraction means with a Laplacian filter to calculate the strength of the edge component contained in the specific region; and edge strength determination means for determining whether the strength calculated by the second edge strength calculation means exceeds a threshold, wherein the application determination means determines that the smoothing result of the second smoothing means is to be applied when the edge strength determination means determines that the strength exceeds the threshold, and that the smoothing result of the second smoothing means is not to be applied when the edge strength determination means determines that the strength does not exceed the threshold.
- The image processing apparatus according to claim 14 or claim 15, comprising: increase/decrease count calculation means for calculating the number of increases and decreases of pixel values between pixels adjacent in a specific direction within the specific region extracted by the specific region extraction means; and increase/decrease count determination means for determining whether the count calculated by the increase/decrease count calculation means exceeds a threshold, wherein the application determination means determines that the smoothing result of the second smoothing means is to be applied when the increase/decrease count determination means determines that the count does not exceed the threshold, and that the smoothing result is not to be applied when the increase/decrease count determination means determines that the count exceeds the threshold.
- The image processing apparatus according to any one of claims 14 to 16, comprising: smoothing difference calculation means for calculating the difference between the pixel value of the target pixel contained in the specific region extracted by the specific region extraction means and the pixel value smoothed by the second smoothing means; and smoothing difference determination means for determining whether the difference calculated by the smoothing difference calculation means exceeds a threshold, wherein the application determination means determines that the smoothing result of the second smoothing means is to be applied when the smoothing difference determination means determines that the difference does not exceed the threshold, and that the smoothing result is not to be applied when the smoothing difference determination means determines that the difference exceeds the threshold.
- An image processing apparatus that performs image processing to generate an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, comprising: specific region extraction means for extracting a specific region containing a target pixel from the input image; smoothing determination means for determining, based on the specific region extracted by the specific region extraction means, whether the target pixel is to be subjected to a first smoothing process; first smoothing means for performing, when the smoothing determination means determines that the first smoothing process is to be applied, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; second smoothing means for performing, when the smoothing determination means determines that the first smoothing process is not to be applied, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel; and application determination means for determining whether to apply the smoothing result of the second smoothing means, wherein an output image is generated in which the pixel value of each pixel of the input image is one of the pixel value smoothed by the first smoothing means, the pixel value smoothed by the second smoothing means, and the original, unsmoothed pixel value.
- An image processing apparatus that performs image processing to generate an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, comprising: specific region extraction means for extracting a specific region containing a target pixel from the input image; differential value calculation means for calculating differential values of pixel values between pixels in the image of the specific region extracted by the specific region extraction means; smoothing determination means for determining, based on the differential values calculated by the differential value calculation means, whether the target pixel is to be subjected to a smoothing process; noise boundary direction determination means for determining, when the smoothing determination means determines that the smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region; noise boundary position determination means for determining, when the smoothing determination means determines that the smoothing process is to be applied, the position of the noise boundary portion in the specific region; and smoothing means for performing, when the smoothing determination means determines that the smoothing process is to be applied, the smoothing process on the target pixel based on the direction determined by the noise boundary direction determination means and the position determined by the noise boundary position determination means.
- An image processing apparatus comprising: specific region extraction means for extracting a specific region containing a target pixel from an input image composed of a plurality of pixels arranged in a matrix; differential value calculation means for calculating differential values of pixel values between pixels in the image of the specific region extracted by the specific region extraction means; noise boundary direction determination means for determining, based on the differential values calculated by the differential value calculation means, the direction in which a noise boundary portion extends in the specific region; and noise boundary position determination means for determining, based on the differential values calculated by the differential value calculation means, the position of the noise boundary portion in the specific region.
- An image processing method for generating an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, comprising: a specific region extraction step of extracting a specific region containing a target pixel from the input image; a smoothing determination step of determining, based on the specific region extracted in the specific region extraction step, whether the target pixel is to be subjected to a first smoothing process; a first smoothing step of performing, when it is determined in the smoothing determination step that the first smoothing process is to be applied, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; a second smoothing step of performing, when it is determined in the smoothing determination step that the first smoothing process is not to be applied, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel; and a generation step of generating an output image in which the pixel value of each pixel of the input image is either the pixel value smoothed in the first smoothing step or the pixel value smoothed in the second smoothing step.
- The image processing method according to claim 21, comprising: a noise boundary direction determination step of determining, when it is determined in the smoothing determination step that the first smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region; and a noise boundary position determination step of determining, when it is determined in the smoothing determination step that the first smoothing process is to be applied, the position of the noise boundary portion in the specific region.
- The image processing method according to claim 22, wherein, in the noise boundary position determination step, the position of the noise boundary portion in the specific region is determined when it is determined in the smoothing determination step that the smoothing process is to be applied and the direction determined in the noise boundary direction determination step is a predetermined direction.
- The image processing method according to claim 22 or claim 23, further comprising a specific region enlargement step of enlarging the specific region when the direction determined in the noise boundary direction determination step is not a predetermined direction, wherein, in the noise boundary direction determination step, the direction determination is performed again on the specific region enlarged in the specific region enlargement step.
- The image processing method according to claim 24, wherein the enlargement of the specific region in the specific region enlargement step and the direction determination by the noise boundary direction determination means are repeated until the direction determined in the noise boundary direction determination step becomes a predetermined direction, or until the specific region is enlarged to a predetermined size in the specific region enlargement step.
- The image processing method according to any one of claims 22 to 25, wherein a plurality of smoothing filters for use by the first smoothing means are stored in advance, the method comprising a filter selection step of selecting a smoothing filter from the stored plurality of smoothing filters in accordance with the position determined in the noise boundary position determination step and the direction determined in the noise boundary direction determination step, wherein, in the first smoothing step, the pixel value of the target pixel is smoothed using the smoothing filter selected in the filter selection step.
- The image processing method according to any one of claims 21 to 26, comprising a differential value calculation step of calculating differential values of pixel values between pixels in the image of the specific region extracted in the specific region extraction step, wherein, in the smoothing determination step, whether the target pixel is to be subjected to the smoothing process is determined based on the differential values calculated in the differential value calculation step.
- The image processing method according to claim 27, wherein, in the differential value calculation step, first derivatives and second derivatives of pixel values between adjacent pixels are calculated, and, in the smoothing determination step, the determination is performed based on the first and second derivatives calculated in the differential value calculation step.
- The image processing method according to claim 28, comprising: a first-derivative binarization step of binarizing the first derivatives calculated in the differential value calculation step according to whether each first derivative exceeds a threshold; a first logical-OR calculation step of calculating the logical OR of the first derivatives binarized in the first-derivative binarization step; a second-derivative binarization step of binarizing the second derivatives calculated in the differential value calculation step according to whether each second derivative exceeds a threshold; a second logical-OR calculation step of calculating the logical OR of the second derivatives binarized in the second-derivative binarization step; and a third logical-OR calculation step of calculating the logical OR of the calculation result of the first logical-OR calculation step and the calculation result of the second logical-OR calculation step, wherein, in the smoothing determination step, the determination is performed based on the calculation result of the third logical-OR calculation step.
- The image processing method according to claim 29, wherein the differential value calculation step, the first-derivative binarization step, the first logical-OR calculation step, the second-derivative binarization step, the second logical-OR calculation step, and the third logical-OR calculation step are each performed for both the vertical direction and the horizontal direction of the specific region, and, in the smoothing determination step, the determination is performed based on the vertical and horizontal calculation results of the third logical-OR calculation step.
- The image processing method according to claim 30, comprising a noise boundary direction determination step of determining, when it is determined in the smoothing determination step that the first smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region, wherein, in the noise boundary direction determination step, whether the noise boundary portion extends in the vertical direction and/or the horizontal direction in the specific region is determined based on the vertical and horizontal calculation results of the third logical-OR calculation step.
- The image processing method according to any one of claims 29 to 31, comprising a noise boundary position determination step of determining, when it is determined in the smoothing determination step that the first smoothing process is to be applied, the position of the noise boundary portion in the specific region, wherein, in the noise boundary position determination step, the position of the noise boundary portion in the specific region is determined based on the pattern, within the specific region, of the second derivatives binarized in the second-derivative binarization step.
- The image processing method according to any one of claims 21 to 32, wherein a plurality of Sobel filters, each detecting the strength of an edge component for a specific direction, are stored in advance for different directions, the method further comprising: an edge strength calculation step of filtering the specific region extracted in the specific region extraction step with the stored plurality of Sobel filters to calculate the strengths of edge components contained in the specific region for a plurality of directions; and an edge strength difference determination step of determining whether the difference between the maximum and minimum of the plurality of edge strengths calculated in the edge strength calculation step exceeds a threshold, wherein, in the second smoothing step, when it is determined in the edge strength difference determination step that the difference does not exceed the threshold, the pixel value of the target pixel of the specific region is smoothed using a non-edge-preserving smoothing filter.
- The image processing method according to any one of claims 21 to 33, comprising an application determination step of determining whether to apply the smoothing result of the second smoothing step, wherein, in the generation step, an output image is generated in which the pixel value of each pixel of the input image is one of the pixel value smoothed in the first smoothing step, the pixel value smoothed in the second smoothing step, and the original, unsmoothed pixel value.
- The image processing method according to claim 34, comprising: a second edge strength calculation step of filtering the specific region extracted in the specific region extraction step with a Laplacian filter to calculate the strength of the edge component contained in the specific region; and an edge strength determination step of determining whether the strength calculated in the second edge strength calculation step exceeds a threshold, wherein, in the application determination step, it is determined that the smoothing result of the second smoothing step is to be applied when it is determined in the edge strength determination step that the strength exceeds the threshold, and that the smoothing result of the second smoothing step is not to be applied when it is determined in the edge strength determination step that the strength does not exceed the threshold.
- The image processing method according to claim 34 or claim 35, comprising: an increase/decrease count calculation step of calculating the number of increases and decreases of pixel values between pixels adjacent in a specific direction within the specific region extracted in the specific region extraction step; and an increase/decrease count determination step of determining whether the count calculated in the increase/decrease count calculation step exceeds a threshold, wherein, in the application determination step, it is determined that the smoothing result of the second smoothing step is to be applied when it is determined in the increase/decrease count determination step that the count does not exceed the threshold, and that the smoothing result is not to be applied when it is determined in the increase/decrease count determination step that the count exceeds the threshold.
- The image processing method according to any one of claims 34 to 36, comprising: a smoothing difference calculation step of calculating the difference between the pixel value of the target pixel contained in the specific region extracted in the specific region extraction step and the pixel value smoothed in the second smoothing step; and a smoothing difference determination step of determining whether the difference calculated in the smoothing difference calculation step exceeds a threshold, wherein, in the application determination step, it is determined that the smoothing result of the second smoothing step is to be applied when it is determined in the smoothing difference determination step that the difference does not exceed the threshold, and that the smoothing result is not to be applied when it is determined in the smoothing difference determination step that the difference exceeds the threshold.
- An image processing method for generating an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, comprising: a specific region extraction step of extracting a specific region containing a target pixel from the input image; a smoothing determination step of determining, based on the specific region extracted in the specific region extraction step, whether the target pixel is to be subjected to a first smoothing process; a first smoothing step of performing, when it is determined in the smoothing determination step that the first smoothing process is to be applied, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; a second smoothing step of performing, when it is determined in the smoothing determination step that the first smoothing process is not to be applied, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel; an application determination step of determining whether to apply the smoothing result of the second smoothing step; and a generation step of generating an output image in which the pixel value of each pixel of the input image is one of the pixel value smoothed in the first smoothing step, the pixel value smoothed in the second smoothing step, and the original, unsmoothed pixel value.
- An image processing method for generating an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, comprising: a specific region extraction step of extracting a specific region containing a target pixel from the input image; a differential value calculation step of calculating differential values of pixel values between pixels in the image of the specific region extracted in the specific region extraction step; a smoothing determination step of determining, based on the differential values calculated in the differential value calculation step, whether the target pixel is to be subjected to a smoothing process; a noise boundary direction determination step of determining, when it is determined in the smoothing determination step that the smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region; a noise boundary position determination step of determining, when it is determined in the smoothing determination step that the smoothing process is to be applied, the position of the noise boundary portion in the specific region; and a smoothing step of performing, when it is determined in the smoothing determination step that the smoothing process is to be applied, the smoothing process on the target pixel based on the direction determined in the noise boundary direction determination step and the position determined in the noise boundary position determination step.
- An image processing method comprising: a specific region extraction step of extracting a specific region containing a target pixel from an input image composed of a plurality of pixels arranged in a matrix; a differential value calculation step of calculating differential values of pixel values between pixels in the image of the specific region extracted in the specific region extraction step; a noise boundary direction determination step of determining, based on the differential values calculated in the differential value calculation step, the direction in which a noise boundary portion extends in the specific region; and a noise boundary position determination step of determining, based on the differential values calculated in the differential value calculation step, the position of the noise boundary portion in the specific region.
- A computer program for causing a computer to perform image processing that generates an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, the computer program causing the computer to operate as: specific region extraction means for extracting a specific region containing a target pixel from the input image; smoothing determination means for determining, based on the specific region extracted by the specific region extraction means, whether the target pixel is to be subjected to a first smoothing process; first smoothing means for performing, when the smoothing determination means determines that the first smoothing process is to be applied, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; second smoothing means for performing, when the smoothing determination means determines that the first smoothing process is not to be applied, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel; and generation means for generating an output image in which the pixel value of each pixel of the input image is either the pixel value smoothed by the first smoothing means or the pixel value smoothed by the second smoothing means.
- A computer program for causing a computer to perform image processing that generates an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, the computer program causing the computer to operate as: specific region extraction means for extracting a specific region containing a target pixel from the input image; smoothing determination means for determining, based on the specific region extracted by the specific region extraction means, whether the target pixel is to be subjected to a first smoothing process; first smoothing means for performing, when the smoothing determination means determines that the first smoothing process is to be applied, the first smoothing process using a smoothing filter to smooth the pixel value of the target pixel; second smoothing means for performing, when the smoothing determination means determines that the first smoothing process is not to be applied, a second smoothing process using an edge-preserving smoothing filter to smooth the pixel value of the target pixel; application determination means for determining whether to apply the smoothing result of the second smoothing means; and generation means for generating an output image in which the pixel value of each pixel of the input image is one of the pixel value smoothed by the first smoothing means, the pixel value smoothed by the second smoothing means, and the original, unsmoothed pixel value.
- A computer program for causing a computer to perform image processing that generates an output image in which noise has been removed or reduced from an input image composed of a plurality of pixels arranged in a matrix, the computer program causing the computer to operate as: specific region extraction means for extracting a specific region containing a target pixel from the input image; differential value calculation means for calculating differential values of pixel values between pixels in the image of the specific region extracted by the specific region extraction means; smoothing determination means for determining, based on the differential values calculated by the differential value calculation means, whether the target pixel is to be subjected to a smoothing process; noise boundary direction determination means for determining, when the smoothing determination means determines that the smoothing process is to be applied, the direction in which a noise boundary portion extends in the specific region; noise boundary position determination means for determining, when the smoothing determination means determines that the smoothing process is to be applied, the position of the noise boundary portion in the specific region; and smoothing means for performing, when the smoothing determination means determines that the smoothing process is to be applied, the smoothing process on the target pixel based on the direction determined by the noise boundary direction determination means and the position determined by the noise boundary position determination means.
- A computer program causing a computer to operate as: specific region extraction means for extracting a specific region containing a target pixel from an input image composed of a plurality of pixels arranged in a matrix; differential value calculation means for calculating differential values of pixel values between pixels in the image of the specific region extracted by the specific region extraction means; noise boundary direction determination means for determining, based on the differential values calculated by the differential value calculation means, the direction in which a noise boundary portion extends in the specific region; and noise boundary position determination means for determining, based on the differential values calculated by the differential value calculation means, the position of the noise boundary portion in the specific region.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13848489.4A EP2911382B1 (en) | 2012-10-22 | 2013-07-03 | Image processing device and image processing method |
US14/437,344 US9241091B2 (en) | 2012-10-22 | 2013-07-03 | Image processing device, image processing method, and computer program |
RU2015118457/07A RU2598899C1 (ru) | 2012-10-22 | 2013-07-03 | Устройство и способ обработки изображения |
CN201380065546.9A CN104854856B (zh) | 2012-10-22 | 2013-07-03 | 图像处理装置、图像处理方法 |
ES13848489T ES2738306T3 (es) | 2012-10-22 | 2013-07-03 | Dispositivo de procesamiento de imágenes y método de procesamiento de imágenes |
AU2013336028A AU2013336028B2 (en) | 2012-10-22 | 2013-07-03 | Image processing device, image processing method, and computer program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012232995 | 2012-10-22 | ||
JP2012-232995 | 2012-10-22 | ||
JP2012277302A JP5564553B2 (ja) | 2012-10-22 | 2012-12-19 | 画像処理装置、画像処理方法及びコンピュータプログラム |
JP2012-277302 | 2012-12-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014064968A1 true WO2014064968A1 (ja) | 2014-05-01 |
Family
ID=50544349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/068226 WO2014064968A1 (ja) | 2012-10-22 | 2013-07-03 | 画像処理装置、画像処理方法及びコンピュータプログラム |
Country Status (8)
Country | Link |
---|---|
US (1) | US9241091B2 (ja) |
EP (1) | EP2911382B1 (ja) |
JP (1) | JP5564553B2 (ja) |
CN (1) | CN104854856B (ja) |
AU (1) | AU2013336028B2 (ja) |
ES (1) | ES2738306T3 (ja) |
RU (1) | RU2598899C1 (ja) |
WO (1) | WO2014064968A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111062878A (zh) * | 2019-10-31 | 2020-04-24 | 深圳先进技术研究院 | 图像的去噪方法、装置及计算机可读存储介质 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6164926B2 (ja) * | 2013-05-16 | 2017-07-19 | オリンパス株式会社 | ノイズ低減処理装置 |
JP6444233B2 (ja) * | 2015-03-24 | 2018-12-26 | キヤノン株式会社 | 距離計測装置、距離計測方法、およびプログラム |
WO2016203282A1 (en) | 2015-06-18 | 2016-12-22 | The Nielsen Company (Us), Llc | Methods and apparatus to capture photographs using mobile devices |
CN106407919B (zh) * | 2016-09-05 | 2019-09-10 | 珠海赛纳打印科技股份有限公司 | 基于图像处理的文本分离方法及装置和图像形成设备 |
US10957047B2 (en) * | 2017-02-15 | 2021-03-23 | Panasonic Intellectual Property Management Co., Ltd. | Image processing device and image processing method |
JP2019036821A (ja) * | 2017-08-14 | 2019-03-07 | キヤノン株式会社 | 画像処理装置、画像処理方法、及びプログラム |
CN113379636B (zh) * | 2021-06-21 | 2024-05-03 | 苏州睿新微系统技术有限公司 | 一种红外图像非均匀性校正方法、装置、设备及存储介质 |
CN115829838B (zh) * | 2022-11-23 | 2023-08-11 | 爱芯元智半导体(上海)有限公司 | 一种图像扩边电路、芯片和方法 |
CN116071242B (zh) * | 2023-03-17 | 2023-07-14 | 山东云海国创云计算装备产业创新中心有限公司 | 一种图像处理方法、系统、设备以及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000331174A (ja) * | 1999-05-20 | 2000-11-30 | Matsushita Electric Ind Co Ltd | エッジ処理方法 |
JP2003008898A (ja) * | 2001-06-20 | 2003-01-10 | Sony Corp | 画像処理方法および装置 |
JP2005117449A (ja) * | 2003-10-09 | 2005-04-28 | Victor Co Of Japan Ltd | モスキートノイズ低減装置、モスキートノイズ低減方法、及びモスキートノイズ低減用プログラム |
JP2008153812A (ja) * | 2006-12-15 | 2008-07-03 | Sharp Corp | ノイズ除去装置 |
JP4145665B2 (ja) * | 2001-05-10 | 2008-09-03 | 松下電器産業株式会社 | 画像処理装置及び画像処理方法 |
JP2009076973A (ja) * | 2007-09-18 | 2009-04-09 | Toshiba Corp | ノイズ除去装置及びノイズ除去方法 |
JP2009105990A (ja) | 2007-02-16 | 2009-05-14 | Mitsubishi Electric Corp | ブロックノイズ検出装置及び方法、並びにブロックノイズ除去装置及び方法 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7437013B2 (en) * | 2003-12-23 | 2008-10-14 | General Instrument Corporation | Directional spatial video noise reduction |
JP4635779B2 (ja) * | 2005-08-18 | 2011-02-23 | ソニー株式会社 | データ処理装置およびデータ処理方法、並びにプログラム |
RU2383055C2 (ru) * | 2007-03-14 | 2010-02-27 | Самсунг Электроникс Ко., Лтд. | Способ определения и сглаживания ступенчатых краев на изображении |
JP4375464B2 (ja) * | 2007-09-11 | 2009-12-02 | 三菱電機株式会社 | ノイズ低減装置 |
JP2009238009A (ja) * | 2008-03-27 | 2009-10-15 | Toshiba Corp | 画像処理装置、及び方法 |
JP2010004111A (ja) * | 2008-06-18 | 2010-01-07 | Nec Electronics Corp | 画像処理装置および画像処理方法並びにプログラム |
JP4763027B2 (ja) * | 2008-08-27 | 2011-08-31 | シャープ株式会社 | 画像処理装置、画像形成装置、画像処理方法、画像処理プログラム及びコンピュータ読み取り可能な記録媒体 |
JP5326920B2 (ja) * | 2009-08-07 | 2013-10-30 | 株式会社リコー | 画像処理装置、画像処理方法、及び、コンピュータプログラム |
- 2012
- 2012-12-19 JP JP2012277302A patent/JP5564553B2/ja active Active
- 2013
- 2013-07-03 ES ES13848489T patent/ES2738306T3/es active Active
- 2013-07-03 US US14/437,344 patent/US9241091B2/en active Active
- 2013-07-03 WO PCT/JP2013/068226 patent/WO2014064968A1/ja active Application Filing
- 2013-07-03 EP EP13848489.4A patent/EP2911382B1/en active Active
- 2013-07-03 CN CN201380065546.9A patent/CN104854856B/zh active Active
- 2013-07-03 RU RU2015118457/07A patent/RU2598899C1/ru not_active IP Right Cessation
- 2013-07-03 AU AU2013336028A patent/AU2013336028B2/en not_active Ceased
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000331174A (ja) * | 1999-05-20 | 2000-11-30 | Matsushita Electric Ind Co Ltd | エッジ処理方法 |
JP4145665B2 (ja) * | 2001-05-10 | 2008-09-03 | 松下電器産業株式会社 | 画像処理装置及び画像処理方法 |
JP2003008898A (ja) * | 2001-06-20 | 2003-01-10 | Sony Corp | 画像処理方法および装置 |
JP2005117449A (ja) * | 2003-10-09 | 2005-04-28 | Victor Co Of Japan Ltd | モスキートノイズ低減装置、モスキートノイズ低減方法、及びモスキートノイズ低減用プログラム |
JP2008153812A (ja) * | 2006-12-15 | 2008-07-03 | Sharp Corp | ノイズ除去装置 |
JP2009105990A (ja) | 2007-02-16 | 2009-05-14 | Mitsubishi Electric Corp | ブロックノイズ検出装置及び方法、並びにブロックノイズ除去装置及び方法 |
JP2009076973A (ja) * | 2007-09-18 | 2009-04-09 | Toshiba Corp | ノイズ除去装置及びノイズ除去方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2911382A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111062878A (zh) * | 2019-10-31 | 2020-04-24 | 深圳先进技术研究院 | 图像的去噪方法、装置及计算机可读存储介质 |
CN111062878B (zh) * | 2019-10-31 | 2023-04-18 | 深圳先进技术研究院 | 图像的去噪方法、装置及计算机可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
AU2013336028A1 (en) | 2015-06-11 |
CN104854856B (zh) | 2017-12-08 |
US9241091B2 (en) | 2016-01-19 |
JP2014103648A (ja) | 2014-06-05 |
US20150312442A1 (en) | 2015-10-29 |
EP2911382A1 (en) | 2015-08-26 |
ES2738306T3 (es) | 2020-01-21 |
EP2911382B1 (en) | 2019-05-08 |
CN104854856A (zh) | 2015-08-19 |
RU2598899C1 (ru) | 2016-10-10 |
JP5564553B2 (ja) | 2014-07-30 |
AU2013336028B2 (en) | 2017-02-02 |
EP2911382A4 (en) | 2015-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5564553B2 (ja) | 画像処理装置、画像処理方法及びコンピュータプログラム | |
CN110189349B (zh) | 图像处理方法及装置 | |
JP5308062B2 (ja) | 偽輪郭を探知及び除去する方法並びに装置 | |
US20080292182A1 (en) | Noise reduced color image using panchromatic image | |
US10818018B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
JP2006221403A (ja) | 画像処理装置、画像処理方法、画像処理方法のプログラム及び画像処理方法のプログラムを記録した記録媒体 | |
JP2005332130A (ja) | 画像処理装置、画像処理方法、画像処理方法のプログラム及び画像処理方法のプログラムを記録した記録媒体 | |
JP5314271B2 (ja) | 映像の鮮明度向上のための装置および方法 | |
US20200219229A1 (en) | Edge-Aware Upscaling for Improved Screen Content Quality | |
JP2009212969A (ja) | 画像処理装置、画像処理方法、及び画像処理プログラム | |
TWI546777B (zh) | 影像處理裝置與方法 | |
JP4764903B2 (ja) | テキストマップの中からライン構造を検出する方法および画像処理装置 | |
JP4180043B2 (ja) | 3次元図形描画処理装置、画像表示装置、3次元図形描画処理方法、これをコンピュータに実行させるための制御プログラムおよび、これを記録したコンピュータ読み取り可能な可読記録媒体 | |
JP4196274B2 (ja) | 画像信号処理装置および方法、プログラム、並びに記録媒体 | |
US6968074B1 (en) | Image processing device, image processing method, and storage medium | |
EP2034442A1 (en) | Method for non-photorealistic rendering of an image frame sequence | |
JP5617841B2 (ja) | 画像処理装置、画像処理方法および画像処理用プログラム | |
JP5634494B2 (ja) | 画像処理装置、画像処理方法及びコンピュータプログラム | |
US20100149181A1 (en) | Vector graphics system and vector graphics rendering method | |
US20030118106A1 (en) | Image processing device, image processing method, and storage medium | |
JP2004199622A (ja) | 画像処理装置、画像処理方法、記録媒体およびプログラム | |
JP2008042545A (ja) | 画像処理装置及び画像処理プログラム | |
JP2011070595A (ja) | 画像処理装置、画像処理方法、および画像処理プログラム | |
JP2015106318A (ja) | 画像処理装置および画像処理方法 | |
US6810134B2 (en) | Method, system and apparatus for choosing an optimal candidate value for block matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13848489 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013848489 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2015118457 Country of ref document: RU Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2013336028 Country of ref document: AU Date of ref document: 20130703 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14437344 Country of ref document: US |