EP3167429B1 - System and method for supporting image denoising based on neighborhood block dimensionality reduction - Google Patents

System and method for supporting image denoising based on neighborhood block dimensionality reduction

Info

Publication number
EP3167429B1
EP3167429B1 (application EP15874390.6A)
Authority
EP
European Patent Office
Prior art keywords
pixels
color
pixel
denoising
grad
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15874390.6A
Other languages
German (de)
English (en)
Other versions
EP3167429A1 (fr)
EP3167429A4 (fr)
Inventor
Zisheng Cao
Junping MA
Xing Chen
Mingyu Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of EP3167429A1 publication Critical patent/EP3167429A1/fr
Publication of EP3167429A4 publication Critical patent/EP3167429A4/fr
Application granted granted Critical
Publication of EP3167429B1 publication Critical patent/EP3167429B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G06T5/20 - Image enhancement or restoration using local operators
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/28 - Indexing scheme for image data processing or generation, in general involving image processing hardware
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20004 - Adaptive image processing
    • G06T2207/20012 - Locally adaptive

Definitions

  • the disclosed embodiments relate generally to digital image signal processing and more particularly, but not exclusively, to image denoising.
  • CCD: charge-coupled device
  • CMOS: complementary metal-oxide-semiconductor
  • US 2014/118581 A1 considers noise in RAW image data. Parameters including pixels used as a target area and reference pixels are determined in the RAW image data, based on color filter information for the RAW image data. The RAW image data is corrected based on the parameters thus determined.
  • US 2008/075394 A1 describes a demosaic system and method that supports multiple CFA pattern inputs.
  • the demosaic system is capable of handling both RGB Bayer input and CMYG input and performing demosaic operations on both inputs to recover full-color images from the raw input images.
  • the system uses a variable number of gradients demosaic process.
  • Li-Li Xing et al., "The algorithms about fast non-local means based image denoising", Acta Mathematicae Applicatae Sinica, Springer, Berlin, volume 28, number 2, April 29, 2012, pages 247 to 254, considers neighborhood filters in which the neighborhood filters estimate the value at one point by taking into account the grey-level values of neighborhood pixels.
  • Losson O. et al., "Chapter 5 - Comparison of the color demosaicing methods", describes a comparison of the performance of demosaicing methods based on presented measurements in which the methods are applied to 12 images of a benchmark Kodak database.
  • the invention relates to a computer-implemented image processing method according to independent claim 1, to an image processing system according to independent claim 7 and to a computer-readable medium according to independent claim 8. Preferred embodiments of the claimed invention are described in the dependent claims.
  • the image processing device operates to obtain a first set of characterization values, which represents a first group of pixels that are associated with a denoising pixel in an image. Also, the image processing device can obtain a second set of characterization values, which represents a second group of pixels that are associated with a denoising reference pixel.
  • the image processing device operates to use the first set of characterization values and the second set of characterization values to determine a similarity between the denoising pixel and the denoising reference pixel. Then, the image processing device can calculate a denoised value for the denoising pixel based on the determined similarity between the denoising pixel and the denoising reference pixel.
  • the RGB image format is used herein as an example of a digital image format. It will be apparent to those skilled in the art that other types of digital image formats can be used without limitation.
  • the image collecting process can use image sensors for collecting various image information.
  • a color filter array (CFA), or a color filter mosaic (CFM) may be placed over the CCD and CMOS image sensors.
  • CFA or CFM involves a mosaic of tiny color filters, which are prone to introducing noise into the captured image.
  • an image process can perform various denoising operations on the captured images.
  • the denoising operations can be either pixel-based or patch-based.
  • the pixel-based denoising method, such as a bilateral filter method, is easy to implement, e.g. using an application-specific integrated circuit (ASIC).
  • the patch-based denoising method, such as a non-local means algorithm, can be used for obtaining a digital image with better quality.
  • the value for the denoised image pixel, NL[ v ]( i ) can be determined based on the weighted contribution by all other pixels in the image ( I ), using the following equation.
  • $NL[v](i) = \sum_{j \in I} w(i,j)\, v(j)$
  • the weight function w ( i,j ) for a denoising reference pixel ( j ) can be determined based on the similarity between the denoising pixel ( i ) and the denoising reference pixel ( j ).
  • the similarity can be defined as the distance between two vectors, $v(N_i)$ and $v(N_j)$, where $v(N_i)$ is a multi-dimensional vector that represents a neighborhood block that is associated with (such as surrounding) the denoising pixel ($i$) and $v(N_j)$ is a multi-dimensional vector that represents a neighborhood block that is associated with the denoising reference pixel ($j$).
  • the weight function w ( i,j ) for the denoising reference pixel ( j ) can be determined using the following equation.
  • $w(i,j) = \frac{1}{Z(i)}\, e^{-\frac{\left\| v(N_i) - v(N_j) \right\|_{2,a}^{2}}{h^{2}}}$
  • Z(i) is the normalization constant, which can be defined using the following equation.
  • $Z(i) = \sum_{j \in I} e^{-\frac{\left\| v(N_i) - v(N_j) \right\|_{2,a}^{2}}{h^{2}}}$
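  • As an illustration of the non-local means equations above, the following Python sketch (an assumption for illustration only, not the claimed implementation) computes the weights $w(i,j)$ for one pixel; it uses a plain L2 distance between 3x3 neighborhood blocks instead of the Gaussian-weighted norm, and the names `nlm_weights`, `image`, and `h` are hypothetical.

```python
import numpy as np

def nlm_weights(image, i, block_radius=1, h=10.0):
    """Return the normalized weights w(i, j) of every pixel j against pixel i."""
    rows, cols = image.shape
    r = block_radius
    padded = np.pad(image.astype(np.float64), r, mode="reflect")

    def block(p):
        y, x = p
        return padded[y:y + 2 * r + 1, x:x + 2 * r + 1]  # (2r+1) x (2r+1) patch around p

    v_ni = block(i)
    weights = np.zeros((rows, cols))
    for y in range(rows):
        for x in range(cols):
            d2 = np.sum((v_ni - block((y, x))) ** 2)   # ||v(N_i) - v(N_j)||^2 (plain L2)
            weights[y, x] = np.exp(-d2 / (h * h))
    return weights / weights.sum()                     # division by Z(i)

# usage: the denoised value NL[v](i) is the weighted average over all pixels j
# img = np.random.rand(32, 32)
# w = nlm_weights(img, (16, 16))
# nl_v_i = float(np.sum(w * img))
```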
  • FIG. 1 is an exemplary illustration of supporting image denoising based on neighborhood block dimensionality reduction, in accordance with various embodiments of the present invention.
  • an imaging process 100 can use a filter window 110 for denoising a pixel 101 in an image, e.g. a Bayer (mosaic) image.
  • the filter window 110 includes a plurality of denoising reference pixels (e.g. the pixel 102), which can be used for denoising the pixel 101.
  • the value of the denoised image pixel, (NL[ v ]( i )), for the denoising pixel ( i ) 101 can be determined using the following equation.
  • $NL[v](i) = \sum_{j \in \Omega_i} w(i,j)\, v(j)$
  • the weight function $w(i,j)$ can be determined based on the similarity 120 between the denoising pixel ($i$) 101 and a denoising reference pixel ($j$) 102 in the filter window ($\Omega_i$) 110.
  • the similarity 120 can be defined as the distance between two vectors, $v(N_i)$ and $v(N_j)$, where $v(N_i)$ is a multi-dimensional vector that represents a neighborhood block 111 surrounding the denoising pixel ($i$) 101 and $v(N_j)$ is a multi-dimensional vector that represents a neighborhood block 112 surrounding the denoising reference pixel ($j$) 102.
  • the weight function $w(i,j)$ can be determined based on the rectilinear distance (i.e. the $L_1$ distance), using the following equation.
  • $w(i,j) = \frac{1}{Z(i)}\, e^{-\frac{\left\| v(N_i) - v(N_j) \right\|_{1}}{h^{2}}}$
  • Z( i ) is the normalization constant, which can be defined using the following equation.
  • $Z(i) = \sum_{j \in \Omega_i} e^{-\frac{\left\| v(N_i) - v(N_j) \right\|_{1}}{h^{2}}}$
  • the vectors, $v(N_i)$ and $v(N_j)$, are multi-dimensional vectors, since the neighborhood block 111 and the neighborhood block 112 are both three-by-three (3x3) blocks.
  • the system can further reduce the computation cost for denoising a pixel 101 in an image by taking advantage of a neighborhood block dimensionality reduction feature.
  • the system can use a characterization vector ( P i ) 121, which includes a set of characterization values, for representing a group of pixels such as the pixels in the neighborhood block 111 associated with the denoising pixel ( i ) 101.
  • the system can use a characterization vector ($P_j$) 122, which includes a set of characterization values, for representing a group of pixels such as the pixels in the neighborhood block 112 associated with the denoising reference pixel ($j$) 102.
  • the characterization vector ($P_i$) 121 and the characterization vector ($P_j$) 122 can also be used for representing pixels that are not restricted to regular neighborhood blocks, such as pixels in irregular neighborhood blocks, neighborhood blocks with different sizes, or even discrete forms.
  • the weight function $w(i,j)$ can then be determined based on the rectilinear distance between the characterization vectors, i.e. $w(i,j) = \frac{1}{Z(i)}\, e^{-\frac{\left\| P_i - P_j \right\|_{1}}{h^{2}}}$, where $Z(i)$ is the normalization constant, which can be defined using the following equation.
  • $Z(i) = \sum_{j \in \Omega_i} e^{-\frac{\left\| P_i - P_j \right\|_{1}}{h^{2}}}$
  • both the characterization vectors ($P_i$ and $P_j$) may include four (4) color components. It may take seven (4 + 3) operations for the imaging process 100 to determine the similarity between the denoising pixel ($i$) 101 and a denoising reference pixel ($j$) 102. Thus, it may take three hundred forty-three operations (7 x 7 x 7 = 343, i.e. seven operations for each of the forty-nine reference pixels) to calculate the denoised value for the denoising pixel ($i$) 101 using the filter window ($\Omega_i$) 110, which is a seven-by-seven (7x7) block.
  • the system can significantly reduce the computation cost for determining the similarity between the denoising pixel ($i$) 101 and a denoising reference pixel ($j$) 102, thereby reducing the computation cost for calculating the denoised value for the denoising pixel ($i$) 101 using the filter window ($\Omega_i$) 110.
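  • As a hedged sketch of the dimensionality-reduced similarity just described, the following Python snippet compares two four-component characterization vectors $P_i$ and $P_j$ with the $L_1$ (rectilinear) distance; the function name, the example values, and the parameter `h` are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def weight_from_characterizations(p_i, p_j, h=10.0):
    """w(i, j) proportional to exp(-||P_i - P_j||_1 / h^2): four absolute
    differences plus three additions per comparison for 4-component vectors."""
    d1 = float(np.sum(np.abs(np.asarray(p_i, dtype=float) - np.asarray(p_j, dtype=float))))
    return float(np.exp(-d1 / (h * h)))

# [R, G, B, X] vectors for the denoising pixel and one reference pixel; for a
# 7x7 filter window this comparison is repeated 49 times, giving roughly
# 7 * 49 = 343 elementary operations in total, as noted above.
p_i = [120.0, 80.0, 60.0, 85.0]
p_j = [118.0, 82.0, 61.0, 86.0]
w_ij = weight_from_characterizations(p_i, p_j)
```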
  • FIG. 2 is an exemplary illustration of a filter window for denoising a pixel in an RGB image, in accordance with various embodiments of the present invention.
  • an imaging process can use a filter window 210 for performing a denoising operation on a denoising pixel 201 in an RGB image 200.
  • the denoising pixel 201 in the RGB image 200 may have different colors.
  • the denoising pixel 201, which is located at the center of the filter window 210, is a red color pixel (R).
  • the denoising pixel 201 can be a red pixel (R), a blue pixel (B), or a green pixel (Gr or Gb) in the RGB image 200 without limitation.
  • the determination of the similarity between the denoising pixel 201 and the denoising reference pixels can be based on the different neighborhood blocks (211-212 and 221-222).
  • the neighborhood blocks can be of different sizes.
  • the neighborhood blocks 211 and 212 are both three-by-three (3x3) blocks, while the neighborhood blocks 221 and 222 are both five-by-five (5x5) blocks.
  • the neighborhood block 222 may include pixels that are outside of the filter window 210.
  • neighborhood blocks may be in different geometry shapes, such as a polygon, a circle, an ellipse, or other regular shapes such as a cube, or a sphere. Also, the neighborhood blocks may be in various irregular shapes.
  • FIG. 3 is an exemplary illustration of supporting dimensionality reduction for a neighborhood block, in accordance with various embodiments of the present invention.
  • an imaging process 300 can determine a characterization vector 302 based on a neighborhood block 301 with a center pixel 310, which can be either a denoising pixel or a denoising reference pixel.
  • the characterization vector 302 can include various color components that correspond to the different colors in a color model used by the image.
  • the characterization vector 302 can include a component ( R ) for the red color, a component ( G ) for the green color, and a component ( B ) for the blue color.
  • the characterization vector can include a grayscale component ( X ), in which case the characterization vector 302 can be represented using the following equation.
  • $P = \left[\, \bar{R} \;\; \bar{G} \;\; \bar{B} \;\; \bar{X} \,\right]^{T}$
  • a color component in the characterization vector 302 can be determined based on the value for the color, which is associated with the center pixel 310 in the neighborhood block 301.
  • a non-selective averaging method can be used for determining a color component in the characterization vector 302.
  • the non-selective averaging method can be used for determining a color component in the characterization vector 302, when a set of pixels in the neighborhood block 301, having a color associated with the color component, constitute only one direction (e.g. 313 or 314) through the center pixel 310 of the neighborhood block 301.
  • the non-selective averaging method can be used for determining a color component in the characterization vector 302, when a set of pixels in the neighborhood block 301, having a color associated with the color component, are substantially isotropic in the neighborhood block 301.
  • a selective averaging method can be used for determining a color component in the characterization vector 302, when a set of pixels in the neighborhood pixel block 301, having a color associated with the color component, constitute multiple directions (e.g. 311-312) in the neighborhood block.
  • the imaging process 300 can support the selective averaging method based on the directional judgment.
  • the selective averaging method which is gradient-based, can apply the averaging calculation on a subset of the pixels, having a color associated with the color component.
  • the subset of the pixels can be associated with the direction (311 or 312) with the minimum gradient in the neighborhood pixel block 301.
  • FIG. 4 is an exemplary illustration of using a selective averaging method for supporting dimensionality reduction, in accordance with various embodiments of the present invention.
  • an imaging process 400 can apply a selective averaging method on pixels with the same color in a neighborhood block, based on the directional judgment.
  • the imaging process 400 can calculate the gradients for the pixels with the same color along different directions in a neighborhood block.
  • the imaging process 400 can compare the gradients for different directions to obtain the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min).
  • the imaging process 400 can compare the difference between the maximum gradient and the minimum gradient (i.e., grad_max - grad_min) with a threshold (e.g., TH).
  • if the difference is less than the threshold, the imaging process 400 can use a non-selective averaging method to calculate the average value for the pixels along the multiple directions.
  • otherwise, the imaging process 400 can select the pixels along the direction with the minimum gradient. Then, at step 406, the imaging process 400 can calculate the average value for the pixels along the selected direction (i.e., the pixels along the direction with the minimum gradient).
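  • The following Python sketch illustrates steps 401-406 under stated assumptions: the gradient of a direction is taken as the absolute difference of its two same-color pixels, and `threshold` plays the role of TH; the helper name `directional_average` is hypothetical.

```python
import numpy as np

def directional_average(pixels_by_direction, threshold):
    """pixels_by_direction maps a direction label to the values of its two
    same-color pixels, e.g. {"B1-B9": (b1, b9), "B3-B7": (b3, b7)}."""
    # steps 401-402: gradient per direction, then the maximum and minimum gradients
    gradients = {d: abs(float(v[0]) - float(v[1]))
                 for d, v in pixels_by_direction.items()}
    grad_max, grad_min = max(gradients.values()), min(gradients.values())

    # step 403: compare the gradient spread against the threshold TH
    if grad_max - grad_min < threshold:
        # step 404: non-selective averaging over the pixels of all directions
        all_values = [v for pair in pixels_by_direction.values() for v in pair]
        return float(np.mean(all_values))

    # steps 405-406: keep only the direction with the minimum gradient and average it
    best = min(gradients, key=gradients.get)
    return float(np.mean(pixels_by_direction[best]))

# e.g. the blue pixels of a red-centered 3x3 block, with a strong edge along B3-B7:
# directional_average({"B1-B9": (100, 104), "B3-B7": (60, 140)}, threshold=8.0)
```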
  • Figures 5(a) -(d) illustrate different types of exemplary neighborhood blocks in an RGB image, in accordance with various embodiments of the present invention.
  • Figure 5(a) shows a three-by-three (3x3) neighborhood block with a red color pixel (R) located at the center of the neighborhood block.
  • Figure 5(b) shows a three-by-three (3x3) neighborhood block with a green color pixel (Gr) located at the center of the neighborhood block.
  • Figure 5(c) shows a three-by-three (3x3) neighborhood block with a blue color pixel (B) located at the center of the neighborhood block.
  • Figure 5(d) shows a three-by-three (3x3) neighborhood block with a green color pixel (Gb) located at the center of the neighborhood block.
  • FIG. 6 is an exemplary illustration of supporting dimensionality reduction for a neighborhood block in Figure 5(a) , in accordance with various embodiments of the present invention.
  • an imaging process 600 can determine a characterization vector 602 for a neighborhood block 601 in an RGB image.
  • the characterization vector 602 can include various color components, such as a component associated with the red color ( R ), a component associated with the green color ( G ), and a component associated with the blue color ( B ).
  • the center pixel 610 in the neighborhood block 601 is a red color pixel (R5).
  • the component associated with the red color ( R ) in the characterization vector 602 can be determined based on the value of the red color pixel, R5.
  • the blue color pixels (B1, B3, B7 and B9) in the neighborhood block 601 constitute multiple directions through the center pixel (R5) 610. As shown in Figure 6 , the direction 611 involves the blue color pixels (B1 and B9) and the direction 612 involves the blue color pixels (B3 and B7).
  • the imaging process 600 can calculate a gradient (e.g. grad_B1) for the direction 611 and a gradient (e.g. grad_B2) for the direction 612. Then, the imaging process 600 can determine the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min) based on the gradients, grad_B1 and grad_B2.
  • the imaging process 600 can use the selective averaging method for determining the component associated with the blue color ( B ). For example, the imaging process 600 can select the blue pixels (B1 and B9), if the grad_B1 along the direction 611 is less than the grad_B2 along the direction 612. Also, the imaging process 600 can select the blue pixels (B3 and B7), if the grad_B1 along the direction 611 is larger than the grad_B2 along the direction 612. The imaging process 600 can select either the blue pixels (B1 and B9) or the blue pixels (B3 and B7), if the grad_B1 along the direction 611 is equal to the grad_B2 along the direction 612. Then, the imaging process 600 can use the average value for the selected blue pixels for determining the component associated with the blue color ( B ).
  • the imaging process 600 can use the non-selective averaging method for determining the component associated with the blue color ( B ) based on the average value for the blue pixels (B1, B3, B7, and B9) in the neighborhood block 601.
  • the green color pixels (Gb2, Gr4, Gr6 and Gb8) in the neighborhood block 601 constitute multiple directions through the center pixel (R5) 610.
  • the direction 613 involves the green color pixels (Gb2 and Gb8) and the direction 614 involves the green color pixels (Gr4 and Gr6).
  • the imaging process 600 can calculate a gradient (e.g. grad_G1) for the direction 613 and a gradient (e.g. grad_G2) for the direction 614. Then, the imaging process 600 can determine the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min) based on the gradients, grad_G1 and grad_G2.
  • the imaging process 600 can use the selective averaging method for determining the component associated with the green color ( G ). For example, the imaging process 600 can select the green pixels (Gb2 and Gb8), if the grad_G1 along the direction 613 is less than the grad_G2 along the direction 614. Also, the imaging process 600 can select the green pixels (Gr4 and Gr6), if the grad_G1 along the direction 613 is larger than the grad_G2 along the direction 614.
  • the imaging process 600 can select either the green pixels (Gb2 and Gb8) or the green pixels (Gr4 and Gr6), if the grad_G1 along the direction 613 is equal to the grad_G2 along the direction 614. Then, the imaging process 600 can use the average value for the selected green pixels for determining the component associated with the green color ( G ).
  • the imaging process 600 can use the non-selective averaging method for determining the component associated with the green color ( G ) based on the average value for the green pixels (Gb2, Gr4, Gr6 and Gb8) in the neighborhood block 601.
  • the characterization vector 602 may include a component associated with the grayscale ( X ), which can be defined using the following equation.
  • $\bar{X} = \left( \bar{R} + 2\,\bar{G} + \bar{B} \right) \gg 2$
  • the grayscale component ( X ) accounts for the contribution from the different color components, i.e. the contribution by the red color component ( R ), the contribution by the green color component ( G ), and the contribution by the blue color component ( B ).
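  • A minimal sketch, under the same assumptions as above, of assembling the characterization vector for the red-centered 3x3 block of Figure 6 (layout B1 Gb2 B3 / Gr4 R5 Gr6 / B7 Gb8 B9); the selective/non-selective averaging rule is folded into a compact local helper, and the grayscale component is approximated as (R + 2G + B) / 4. All names and threshold values are illustrative.

```python
import numpy as np

def _directional_average(dirs, threshold):
    # same selective/non-selective rule as the earlier sketch, in compact form
    grads = {d: abs(float(a) - float(b)) for d, (a, b) in dirs.items()}
    if max(grads.values()) - min(grads.values()) < threshold:
        return float(np.mean([v for pair in dirs.values() for v in pair]))
    return float(np.mean(dirs[min(grads, key=grads.get)]))

def characterize_red_block(block, threshold=8.0):
    """block: 3x3 nested sequence of raw Bayer values with a red pixel at the
    center (layout: B1 Gb2 B3 / Gr4 R5 Gr6 / B7 Gb8 B9)."""
    (b1, gb2, b3), (gr4, r5, gr6), (b7, gb8, b9) = block
    r_bar = float(r5)                                     # red: value of the center pixel R5
    b_bar = _directional_average({"B1-B9": (b1, b9),      # diagonal direction 611
                                  "B3-B7": (b3, b7)},     # diagonal direction 612
                                 threshold)
    g_bar = _directional_average({"Gb2-Gb8": (gb2, gb8),  # vertical direction 613
                                  "Gr4-Gr6": (gr4, gr6)}, # horizontal direction 614
                                 threshold)
    x_bar = (r_bar + 2.0 * g_bar + b_bar) / 4.0           # grayscale, (R + 2G + B) >> 2
    return [r_bar, g_bar, b_bar, x_bar]

# characterize_red_block([[100, 90, 98], [88, 120, 92], [102, 94, 99]])
```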
  • FIG. 7 is an exemplary illustration of supporting dimensionality reduction for a neighborhood block in Figure 5(b) , in accordance with various embodiments of the present invention.
  • an imaging process 700 can determine a characterization vector 702 for a neighborhood block 701 in an RGB image.
  • the characterization vector 702 can include different components, such as a component associated with the red color ( R ), a component associated with the green color ( G ), and a component associated with the blue color ( B ).
  • the center pixel 710 of the neighborhood block 701 is a green color pixel (Gr5).
  • the component associated with the green color ( G ) can be determined based on the value of the green color pixel, Gr5.
  • the blue color pixels (B2 and B8) constitute a single direction 713 in the neighborhood block 701.
  • the component associated with the blue color ( B ) can be determined based on the average value for the blue color pixels (B2 and B8) along the direction 713 in the neighborhood block 701.
  • the red color pixels (R4 and R6) constitute a single direction 714 in the neighborhood block 701.
  • the component associated with the red color ( R ) can be determined based on the average value for the red color pixels (R4 and R6) along the direction 714 in the neighborhood block 701.
  • the characterization vector 702 may include a component associated with the grayscale ( X ), as defined in the following equation.
  • $\bar{X} = \left( \bar{R} + 2\,\bar{G}_{edge} + \bar{B} \right) \gg 2$
  • the grayscale component ( X ) accounts for the contribution from the different color components, i.e. the contribution by the red color component ( R ), the contribution by the green color component ( G edge ), and the contribution by the blue color component ( B ).
  • the green color pixels (Gb1, Gb3, Gb7, and Gb9) in the neighborhood block 701 constitute multiple directions through the center pixel (Gr5) 710.
  • the imaging process 700 can calculate a gradient (e.g. grad_G1) for the direction 711 and a gradient (e.g. grad_G2) for the direction 712. Then, the imaging process 700 can determine the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min) based on the gradients, grad_G1 and grad_G2.
  • the imaging process 700 can use the selective averaging method for determining the contribution by the green color component ( G edge ). For example, the imaging process 700 can select the green pixels (Gb1 and Gb9), if the grad_G1 along the direction 711 is less than the grad_G2 along the direction 712. Also, the imaging process 700 can select the green pixels (Gb3 and Gb7), if the grad_G1 along the direction 711 is larger than the grad_G2 along the direction 712.
  • the imaging process 700 can select either the green pixels (Gb1 and Gb9) or the green pixels (Gb3 and Gb7), if the grad_G1 along the direction 711 is equal to the grad_G2 along the direction 712. Then, the imaging process 700 can use the average value for the selected green pixels for determining the contribution by the green color component ( G edge ).
  • the imaging process 700 can use the non-selective averaging method for determining the contribution by the green color component ( G edge ) based on the average value for all green pixels (Gb1, Gb3, Gb7, and Gb9) in the neighborhood block 701.
  • FIG. 8 is an exemplary illustration of preserving an edge line in a neighborhood block, in accordance with various embodiments of the present invention.
  • an edge line 810 in an RGB image may cross through a neighborhood block 801 with a green center pixel (Gr5).
  • the pixels (Gb1, B2, Gb3, R6, and Gb9) may be located on the light color side, while the pixels (R4, Gr5, Gb7, and B8) are located on the dark color side.
  • the gradient along the direction 811, which involves the green pixels (Gb1 and Gb9) locating on the same side of the edge line 810, should be less than the gradient along the direction 812, which involves the green pixels (Gb3 and Gb7) locating on the opposite sides of the edge line 810.
  • the green color contribution to the grayscale ( G edge ) can be determined based on the average value of the green pixels (Gb1 and Gb9).
  • the component associated with the green color ( G ) can be determined based on the value of the green color center pixel (Gr5), which is likely different from the average value of Gb1 and Gb9 (i.e. G edge ).
  • the imaging process 800 can avoid smoothing out the green color center pixel (Gr5), which ensures that the edge line 810 is preserved, during the denoising operation.
  • FIG. 9 is an exemplary illustration of supporting dimensionality reduction for a neighborhood block in Figure 5(c), in accordance with various embodiments of the present invention.
  • an imaging process 900 can determine a characterization vector 902 for a neighborhood block 901 in an RGB image.
  • the characterization vector 902 can include various color components, such as a component associated with the red color ( R ), a component associated with the green color ( G ), and a component associated with the blue color ( B ).
  • the center pixel 910 in the neighborhood block 901 is a blue color pixel (B5).
  • the component associated with the blue color ( B ) in the characterization vector 902 can be determined based on the value of the blue color pixel, B5.
  • the red color pixels (R1, R3, R7 and R9) in the neighborhood block 901 constitute multiple directions through the center pixel (B5) 910. As shown in Figure 9 , the direction 911 involves the red color pixels (R1 and R9) and the direction 912 involves the red color pixels (R3 and R7).
  • the imaging process 900 can calculate a gradient (e.g. grad_R1) for the direction 911 and a gradient (e.g. grad_R2) for the direction 912. Then, the imaging process 900 can determine the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min) based on the gradients, grad_R1 and grad_R2.
  • the imaging process 900 can use the selective averaging method for determining the component associated with the red color ( R ). For example, the imaging process 900 can select the red pixels (R1 and R9), if the grad_R1 along the direction 911 is less than the grad_R2 along the direction 912. Also, the imaging process 900 can select the red pixels (R3 and R7), if the grad_R1 along the direction 911 is larger than the grad_R2 along the direction 912. The imaging process 900 can select either the red pixels (R1 and R9) or the red pixels (R3 and R7), if the grad_R1 along the direction 911 is equal to the grad_R2 along the direction 912. Then, the imaging process 900 can use the average value for the selected red pixels to determine the component associated with the red color ( R ).
  • the imaging process 900 can use the non-selective averaging method for determining the component associated with the red color ( R ) based on the average value for the red pixels (R1, R3, R7, and R9) in the neighborhood block 901.
  • the green color pixels (Gr2, Gb4, Gb6 and Gr8) in the neighborhood block 901 constitute multiple directions through the center pixel (B5) 910.
  • the direction 913 involves the green color pixels (Gr2 and Gr8) and the direction 914 involves the green color pixels (Gb4 and Gb6).
  • the imaging process 900 can calculate a gradient (e.g. grad_G1) for the direction 913 and a gradient (e.g. grad_G2) for the direction 914. Then, the imaging process 900 can determine the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min) based on the gradients, grad_G1 and grad_G2.
  • the imaging process 900 can use the selective averaging method for determining the component associated with the green color ( G ). For example, the imaging process 900 can select the green pixels (Gr2 and Gr8), if the grad_G1 along the direction 913 is less than the grad_G2 along the direction 914. Also, the imaging process 900 can select the green pixels (Gb4 and Gb6), if the grad_G1 along the direction 913 is larger than the grad_G2 along the direction 914.
  • the imaging process 900 can select either the green pixels (Gr2 and Gr8) or the green pixels (Gb4 and Gb6), if the grad_G1 along the direction 913 is equal to the grad_G2 along the direction 914. Then, the imaging process 900 can use the average value for the selected green pixels for determining the component associated with the green color ( G ).
  • the imaging process 900 can use the non-selective averaging method for determining the component associated with the green color ( G ) based on the average value for the green pixels (Gr2, Gb4, Gb6 and Gr8) in the neighborhood block 901.
  • the characterization vector 902 may include a component associated with the grayscale ( X ), which can be defined using the following equation.
  • $\bar{X} = \left( \bar{R} + 2\,\bar{G} + \bar{B} \right) \gg 2$
  • the grayscale component ( X ) accounts for the contribution from the different color components, i.e. the contribution by the red color component ( R ), the contribution by the green color component ( G ), and the contribution by the blue color component ( B ).
  • FIG. 10 is an exemplary illustration of supporting dimensionality reduction for a neighborhood block in Figure 5(d), in accordance with various embodiments of the present invention.
  • an imaging process 1000 can determine a characterization vector 1002 for a neighborhood block 1001 in an RGB image.
  • the characterization vector 1002 can include different components, such as a component associated with the red color ( R ), a component associated with the green color ( G ), and a component associated with the blue color ( B ).
  • the center pixel 1010 of the neighborhood block 1001 is a green color pixel (Gb5).
  • the component associated with the green color ( G ) can be determined based on the value of the green color pixel, Gb5.
  • the red color pixels (R2 and R8) constitute a single direction 1013 in the neighborhood block 1001.
  • the component associated with the red color ( R ) can be determined based on the average value for the red color pixels (R2 and R8) along the direction 1013 in the neighborhood block 1001.
  • the blue color pixels (B4 and B6) constitute a single direction 1014 in the neighborhood block 1001.
  • the component associated with the blue color ( B ) can be determined based on the average value for the blue color pixels (B4 and B6) along the direction 1014 in the neighborhood block 1001.
  • the characterization vector 1002 may include a component associated with the grayscale ( X ), as defined in the following equation.
  • $\bar{X} = \left( \bar{R} + 2\,\bar{G}_{edge} + \bar{B} \right) \gg 2$
  • the grayscale component ( X ) accounts for the contribution from the different color components, i.e. the contribution by the red color component ( R ), the contribution by the green color component ( G edge ), and the contribution by the blue color component ( B ).
  • the green color pixels (Gr1, Gr3, Gr7, and Gr9) in the neighborhood block 1001 constitute multiple directions through the center pixel (Gb5) 1010.
  • the imaging process 1000 can calculate a gradient (e.g. grad_G1) for the direction 1011 and a gradient (e.g. grad_G2) for the direction 1012. Then, the imaging process 1000 can determine the maximum gradient (e.g. grad_max) and the minimum gradient (e.g. grad_min) based on the gradients, grad_G1 and grad_G2.
  • the imaging process 1000 can use the selective averaging method for determining the contribution by the green color component ( G edge ). For example, the imaging process 1000 can select the green pixels (Gr1 and Gr9), if the grad_G1 along the direction 1011 is less than the grad_G2 along the direction 1012. Also, the imaging process 1000 can select the green pixels (Gr3 and Gr7), if the grad_G1 along the direction 1011 is larger than the grad_G2 along the direction 1012. The imaging process 1000 can select either the green pixels (Gr1 and Gr9) or the green pixels (Gr3 and Gr7), if the grad_G1 along the direction 1011 is equal to the grad_G2 along the direction 1012. Then, the imaging process 1000 can use the average value for the selected green pixels for determining the contribution by the green color component ( G edge ).
  • the imaging process 1000 can use the non-selective averaging method for determining the contribution by the green color component ( G edge ) based on the average value for all green pixels (Gr1, Gr3, Gr7, and Gr9) in the neighborhood block 1001.
  • the imaging process 1000 can avoid smoothing out the green color center pixel (Gb5) and can preserve an edge line during the denoising operation.
  • FIG. 11 shows a flowchart of supporting image denoising based on neighborhood block dimensionality reduction, in accordance with various embodiments of the present invention.
  • an imaging process can obtain a first set of characterization values, which represents a first group of pixels that are associated with a denoising pixel in an image.
  • the imaging process can obtain a second set of characterization values, which represents a second group of pixels that are associated with a denoising reference pixel.
  • the imaging process can use the first set of characterization values and the second set of characterization values to determine a similarity between the denoising pixel and the denoising reference pixel. Then, the imaging process can calculate a denoised value for the denoising pixel based on the determined similarity between the denoising pixel and the denoising reference pixel.
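  • Putting the steps of FIG. 11 together, the following hedged Python sketch computes a denoised value from pre-computed characterization vectors; the function name, the window handling, and the parameter `h` are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def denoise_pixel(char_i, reference_chars, reference_values, h=10.0):
    """char_i: [R, G, B, X] characterization vector of the denoising pixel.
    reference_chars: one [R, G, B, X] vector per denoising reference pixel in
    the filter window.  reference_values: the raw values of those pixels."""
    char_i = np.asarray(char_i, dtype=float)
    # similarity -> weight for every reference pixel (L1 distance, as above)
    weights = np.array([np.exp(-float(np.abs(char_i - np.asarray(c, dtype=float)).sum()) / (h * h))
                        for c in reference_chars])
    weights /= weights.sum()                              # normalization constant Z(i)
    return float(np.dot(weights, np.asarray(reference_values, dtype=float)))

# usage with three hypothetical reference pixels from a filter window:
# denoise_pixel([120, 80, 60, 85],
#               [[118, 82, 61, 86], [90, 70, 50, 70], [121, 79, 59, 84]],
#               [119.0, 92.0, 122.0])
```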
  • processors can include, without limitation, one or more general purpose microprocessors (for example, single or multi-core processors), application-specific integrated circuits, application-specific instruction-set processors, graphics processing units, physics processing units, digital signal processing units, coprocessors, network processing units, audio processing units, encryption processing units, and the like.
  • the storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
  • features of the present invention can be incorporated in software and/or firmware for controlling the hardware of a processing system, and for enabling a processing system to interact with other mechanisms utilizing the results of the present invention.
  • software or firmware may include, but is not limited to, application code, device drivers, operating systems and execution environments/containers.
  • ASICs: application-specific integrated circuits
  • FPGA: field-programmable gate array
  • the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
  • Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Claims (8)

  1. Computer-implemented image processing method, the method comprising the steps of:
    obtaining a first characterization vector (121) that includes a first set of characterization values, which represents a first group of pixels surrounding a denoising pixel (101, 201), the denoising pixel (101, 201) being included in a filter window (110, 210) of an image, wherein the image is a Bayer image;
    using the filter window (110, 210) to denoise the denoising pixel (101, 201), wherein the filter window (110, 210) includes a plurality of denoising reference pixels (102, 202, 203);
    obtaining a second characterization vector (122) that includes a second set of characterization values, which represents a second group of pixels surrounding a denoising reference pixel (102, 202, 203) of the plurality of denoising reference pixels (102, 202, 203); and
    wherein the image is based on a color model, wherein the color model is a red, green, blue, RGB, color model, and wherein each of said first and second sets of characterization values includes a color component for the red color, a color component for the green color, a color component for the blue color, and, in addition to the color components, a grayscale component accounting for the contributions of the color components;
    wherein, in each of the first and second sets of characterization values, the color components are determined based on a value for a color associated with a center pixel in the corresponding group of pixels;
    using the first set of characterization values and the second set of characterization values to determine a similarity between the denoising pixel (101, 201) and the denoising reference pixel (102, 202, 203); and
    calculating a denoised value for the denoising pixel (101, 201) based on the determined similarity between the denoising pixel (101, 201) and the denoising reference pixel (102, 202, 203).
  2. Image processing method according to claim 1, further comprising the steps of:
    calculating a plurality of weights for the plurality of denoising reference pixels (102, 202, 203) in the filter window (110, 210), wherein each said weight is associated with a different denoising reference pixel (102, 202, 203) in the filter window (110, 210), and wherein each said weight is determined based on a similarity between the denoising pixel (101, 201) and each said denoising reference pixel (102, 202, 203); and
    using the plurality of weights that are associated with the plurality of denoising reference pixels (102, 202, 203) to calculate the denoised value for the denoising pixel (101, 201).
  3. Image processing method according to claim 1, further comprising the step of:
    using a non-selective averaging method to determine a color component in the set of characterization values when a set of pixels in the corresponding group of pixels, having a color associated with the color component, constitutes only one direction through the center pixel in the corresponding group of pixels.
  4. Image processing method according to claim 1, further comprising the step of:
    using a selective averaging method, which is gradient-based, to determine a color component in the set of characterization values when a set of pixels in the corresponding group of pixels, having a color associated with the color component, constitutes multiple directions in the corresponding group of pixels.
  5. Image processing method according to claim 4, wherein:
    the selective averaging method operates to calculate an averaged value for one or more pixels having a color associated with the color component in the corresponding group of pixels, wherein said one or more pixels are selected from the set of pixels along a direction, with a minimum gradient, through a center pixel of the corresponding group of pixels.
  6. Image processing method according to claim 4, further comprising the steps of:
    configuring a threshold parameter based on a noise level in the image; and
    obtaining an average value for the set of pixels in the corresponding group of pixels, in order to determine the color component in the set of characterization values, when a gradient difference among said multiple directions in the corresponding group of pixels is less than the value of the threshold parameter.
  7. Image processing system, comprising:
    one or more microprocessors;
    an imaging process running on said one or more microprocessors, wherein the imaging process operates to perform the operations of the method according to any one of claims 1 to 6.
  8. Non-transitory computer-readable medium storing instructions that, when executed by one or more computers, cause said one or more computers to perform the operations according to the method of any one of claims 1 to 6.
EP15874390.6A 2015-05-15 2015-05-15 Système et procédé de prise en charge d'un débruitage d'image basé sur le degré de différentiation de blocs voisins Active EP3167429B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/079093 WO2016183743A1 (fr) 2015-05-15 2015-05-15 Système et procédé de prise en charge d'un débruitage d'image basé sur le degré de différentiation de blocs voisins

Publications (3)

Publication Number Publication Date
EP3167429A1 EP3167429A1 (fr) 2017-05-17
EP3167429A4 EP3167429A4 (fr) 2017-08-02
EP3167429B1 true EP3167429B1 (fr) 2019-12-25

Family

ID=57319135

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15874390.6A Active EP3167429B1 (fr) 2015-05-15 2015-05-15 Système et procédé de prise en charge d'un débruitage d'image basé sur le degré de différentiation de blocs voisins

Country Status (5)

Country Link
US (3) US9773297B2 (fr)
EP (1) EP3167429B1 (fr)
JP (1) JP6349614B2 (fr)
CN (1) CN107615331B (fr)
WO (1) WO2016183743A1 (fr)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10902558B2 (en) * 2018-05-18 2021-01-26 Gopro, Inc. Multiscale denoising of raw images with noise estimation
GB2570528B (en) 2018-06-25 2020-06-10 Imagination Tech Ltd Bilateral filter with data model
CN110211082B (zh) * 2019-05-31 2021-09-21 浙江大华技术股份有限公司 一种图像融合方法、装置、电子设备及存储介质
CN110796615B (zh) * 2019-10-18 2023-06-02 浙江大华技术股份有限公司 一种图像去噪方法、装置以及存储介质
CN111062904B (zh) * 2019-12-09 2023-08-11 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、电子设备和可读存储介质
CN117834863A (zh) * 2020-04-14 2024-04-05 Lg电子株式会社 点云数据发送设备和方法、点云数据接收设备和方法
CN116097297A (zh) * 2020-09-02 2023-05-09 Oppo广东移动通信有限公司 去除图像中的噪声的方法和电子设备
CN113610871A (zh) * 2021-08-11 2021-11-05 河南牧原智能科技有限公司 一种基于红外成像的个体分割方法及系统
WO2024147826A1 (fr) * 2023-01-05 2024-07-11 Transformative Optics Corporation Capteurs d'image en couleur, procédés et systèmes
CN117057995B (zh) * 2023-08-23 2024-07-26 上海玄戒技术有限公司 图像处理方法、装置、芯片、电子设备及存储介质

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847737B1 (en) * 1998-03-13 2005-01-25 University Of Houston System Methods for performing DAF data filtering and padding
EP1289309B1 (fr) * 2001-08-31 2010-04-21 STMicroelectronics Srl Filtre anti-parasite pour des données d'image de motif Bayer
FR2870071B1 (fr) * 2004-05-05 2006-12-08 Centre Nat Rech Scient Cnrse Procede de traitement de donnees d'images, par reduction de bruit d'image, et camera integrant des moyens de mise en oeuvre de ce procede
US7376288B2 (en) * 2004-05-20 2008-05-20 Micronas Usa, Inc. Edge adaptive demosaic system and method
JP2006023959A (ja) * 2004-07-07 2006-01-26 Olympus Corp 信号処理システム及び信号処理プログラム
US7929798B2 (en) 2005-12-07 2011-04-19 Micron Technology, Inc. Method and apparatus providing noise reduction while preserving edges for imagers
JP4967921B2 (ja) * 2007-08-10 2012-07-04 セイコーエプソン株式会社 画像処理のための装置、方法、および、プログラム
JP5012315B2 (ja) * 2007-08-20 2012-08-29 セイコーエプソン株式会社 画像処理装置
KR101389562B1 (ko) 2007-11-15 2014-04-25 삼성전자주식회사 이미지 처리 장치 및 방법
US8135237B2 (en) 2008-02-25 2012-03-13 Aptina Imaging Corporation Apparatuses and methods for noise reduction
US8280185B2 (en) 2008-06-27 2012-10-02 Microsoft Corporation Image denoising techniques
JP5220677B2 (ja) * 2009-04-08 2013-06-26 オリンパス株式会社 画像処理装置、画像処理方法および画像処理プログラム
US8638395B2 (en) * 2009-06-05 2014-01-28 Cisco Technology, Inc. Consolidating prior temporally-matched frames in 3D-based video denoising
TWI399079B (zh) * 2009-09-18 2013-06-11 Altek Corp Noise Suppression Method for Digital Image
TWI393073B (zh) 2009-09-21 2013-04-11 Pixart Imaging Inc 影像雜訊濾除方法
US20110075935A1 (en) 2009-09-25 2011-03-31 Sony Corporation Method to measure local image similarity based on the l1 distance measure
CN102045513B (zh) * 2009-10-13 2013-01-02 原相科技股份有限公司 图像噪声滤除方法
KR101674078B1 (ko) 2009-12-16 2016-11-08 삼성전자 주식회사 블록 기반의 영상 잡음 제거 방법 및 장치
US8345130B2 (en) 2010-01-29 2013-01-01 Eastman Kodak Company Denoising CFA images using weighted pixel differences
US20120200754A1 (en) 2011-02-09 2012-08-09 Samsung Electronics Co., Ltd. Image Noise Reducing Systems And Methods Thereof
US8879841B2 (en) 2011-03-01 2014-11-04 Fotonation Limited Anisotropic denoising method
US8417047B2 (en) 2011-03-01 2013-04-09 Microsoft Corporation Noise suppression in low light images
TWI500482B (zh) 2011-03-24 2015-09-21 Nat Univ Tsing Hua 利用離心資源之真空裝置
CN102663719B (zh) * 2012-03-19 2014-06-04 西安电子科技大学 基于非局部均值的Bayer型CFA图像去马赛克方法
KR101910870B1 (ko) * 2012-06-29 2018-10-24 삼성전자 주식회사 잡음 제거 장치, 시스템 및 방법
CN103679639B (zh) * 2012-09-05 2017-05-24 北京大学 基于非局部均值的图像去噪方法和装置
CN102930519B (zh) * 2012-09-18 2015-09-02 西安电子科技大学 基于非局部均值的sar图像变化检测差异图生成方法
JP6071419B2 (ja) * 2012-10-25 2017-02-01 キヤノン株式会社 画像処理装置及び画像処理方法
KR101990540B1 (ko) 2012-10-31 2019-06-18 삼성전자주식회사 이미지 프로세싱 방법, 그 이미지 신호 프로세서 및 이를 포함하는 이미지 센싱 시스템
CN103020908B (zh) 2012-12-05 2015-09-09 华为技术有限公司 图像降噪的方法和设备
CN103927729A (zh) * 2013-01-10 2014-07-16 清华大学 图像处理方法及图像处理装置
CN103116879A (zh) * 2013-03-15 2013-05-22 重庆大学 一种基于邻域加窗的非局部均值ct成像去噪方法
CN103491280B (zh) 2013-09-30 2016-01-20 上海交通大学 一种拜耳图像联合去噪插值方法
CN103841388A (zh) * 2014-03-04 2014-06-04 华为技术有限公司 一种去马赛克的方法及装置
CN103871035B (zh) * 2014-03-24 2017-04-12 华为技术有限公司 图像去噪方法及装置
EP3136348A4 (fr) 2014-04-20 2017-05-03 Shoichi Murase Livre d'images électronique changeant en continu avec une exécution de défilement
CN104010114B (zh) * 2014-05-29 2017-08-29 广东威创视讯科技股份有限公司 视频去噪方法和装置
CN104202583B (zh) 2014-08-07 2017-01-11 华为技术有限公司 图像处理装置和方法
US9852353B2 (en) * 2014-11-12 2017-12-26 Adobe Systems Incorporated Structure aware image denoising and noise variance estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
M. MAHMOUDI ET AL: "Fast image and video denoising via nonlocal means of similar neighborhoods", IEEE SIGNAL PROCESSING LETTERS., vol. 12, no. 12, 1 December 2005 (2005-12-01), US, pages 839 - 842, XP055537140, ISSN: 1070-9908, DOI: 10.1109/LSP.2005.859509 *

Also Published As

Publication number Publication date
US20180322619A1 (en) 2018-11-08
US9773297B2 (en) 2017-09-26
US10026154B2 (en) 2018-07-17
JP6349614B2 (ja) 2018-07-04
JP2017518546A (ja) 2017-07-06
EP3167429A1 (fr) 2017-05-17
US10515438B2 (en) 2019-12-24
WO2016183743A1 (fr) 2016-11-24
CN107615331A (zh) 2018-01-19
US20170337666A1 (en) 2017-11-23
EP3167429A4 (fr) 2017-08-02
US20170061585A1 (en) 2017-03-02
CN107615331B (zh) 2021-03-02

Similar Documents

Publication Publication Date Title
EP3167429B1 (fr) Système et procédé de prise en charge d'un débruitage d'image basé sur le degré de différentiation de blocs voisins
EP2130176B1 (fr) Mappage de contours utilisant des pixels panchromatiques
EP2130175B1 (fr) Mappage de bord incorporant des pixels panchromatiques
WO2009130820A1 (fr) Dispositif et procédé de traitement d'image, affichage, programme, et support d'enregistrement
JP5767064B2 (ja) イメージのエッジ向上方法
US20130243346A1 (en) Method and apparatus for deblurring non-uniform motion blur using multi-frame including blurred image and noise image
US9286653B2 (en) System and method for increasing the bit depth of images
US11854157B2 (en) Edge-aware upscaling for improved screen content quality
JP2015062270A (ja) 画像処理装置
CN110503611A (zh) 图像处理的方法和装置
JP4934839B2 (ja) 画像処理装置及びその方法並びにプログラム
CN105809677B (zh) 一种基于双边滤波器的图像边缘检测方法及系统
CN114679542B (zh) 图像处理方法和电子装置
CN111986144B (zh) 一种图像模糊判断方法、装置、终端设备及介质
US10999541B2 (en) Image processing apparatus, image processing method and storage medium
JP3959547B2 (ja) 画像処理装置、画像処理方法、及び情報端末装置
KR101582800B1 (ko) 적응적으로 컬러 영상 내의 에지를 검출하는 방법, 장치 및 컴퓨터 판독 가능한 기록 매체
McCrackin et al. Strategic image denoising using a support vector machine with seam energy and saliency features
JP2011155365A (ja) 画像処理装置および画像処理方法
EP4386656A1 (fr) Procédé de démosaïquage et dispositif de démosaïquage
Džaja et al. Solving a two-colour problem by applying probabilistic approach to a full-colour multi-frame image super-resolution
JP4529033B2 (ja) 低域濾過の方法
US20230093967A1 (en) Purple-fringe correction method and purple-fringe correction device
WO2009144768A1 (fr) Dispositif et procédé de correction d’image
JP6155609B2 (ja) 視差算出装置及びプログラム

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20160704

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20170703

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 5/00 20060101AFI20170627BHEP

Ipc: G06T 5/20 20060101ALI20170627BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20180810

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20190730

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SZ DJI TECHNOLOGY CO., LTD.

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015044453

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1217955

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200115

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200325

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200326

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200325

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200520

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200425

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015044453

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1217955

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

26N No opposition filed

Effective date: 20200928

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20210520

Year of fee payment: 7

Ref country code: FR

Payment date: 20210520

Year of fee payment: 7

Ref country code: IT

Payment date: 20210527

Year of fee payment: 7

Ref country code: NL

Payment date: 20210519

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20210525

Year of fee payment: 7

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602015044453

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20220601

GBPC GB: European patent ceased through non-payment of renewal fee

Effective date: 20220515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220515

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20221201

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220515