WO2017001096A1 - Static soiling detection and correction - Google Patents
Static soiling detection and correction
- Publication number
- WO2017001096A1 (PCT/EP2016/060379; EP2016060379W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- value
- values
- image
- quotient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- One factor that affects the image quality and which is difficult to control is the degree of contamination of the optical system of the camera.
- the cameras may be positioned at places with less risk of contamination, or the cameras may be cleaned by an electric wiper. Despite these provisions, it is impossible to avoid contamination of the optical system completely. Therefore, it has been proposed to detect dirt particles on a camera lens automatically in order to trigger an appropriate action.
- An example of such an automatic detection of lens contaminations is disclosed in the European patent application EP 2351351.
- the present specification discloses a computer implemented method for detecting image artifacts.
- Image data with image frames is received from a vehicle camera, for example over an automotive data bus, and an intensity difference between neighbouring pixels in a first direction of an image frame is compared with a pre-determined upper threshold and with a pre-determined lower threshold.
- the first direction may correspond to the rows of an image frame.
- the pixel transition values can also be computed in a second direction, or y-direction, with respect to the pixel locations of the image frame. Thereby, the overall detection quality can be improved and stripe shaped artifacts can be avoided.
- the second direction may correspond to the columns of an image frame.
- a pixel transition value is set to a first value when the previously computed intensity difference of neighbouring pixels is greater than the pre-determined upper threshold.
- the pixel transition value is set to a second value when the intensity difference is less than the pre-determined lower threshold and the pixel transition value is set to zero when the intensity difference lies between the pre-determined upper threshold and the pre-determined lower threshold.
- the upper threshold can be set to a positive value and the lower threshold to a negative value, and the positive and the negative value can have equal magnitude.
- the upper threshold and the lower threshold may also be equal and, in particular, they may both be equal to zero.
- the first value can be chosen as a positive value, such as 1 or a positive constant a, and the second value can be chosen as a negative value, such as -1 or as the negative of the first value.
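- For illustration, a minimal sketch of this thresholding step is given below (Python with NumPy); the function name and the concrete threshold values are assumptions of this sketch, not values taken from the specification.

```python
import numpy as np

def pixel_transition_values(frame, upper=5, lower=-5):
    """Map intensity differences between horizontally neighbouring pixels
    to transition values: +1 above the upper threshold, -1 below the lower
    threshold, 0 in between (illustrative thresholds)."""
    diff = np.diff(frame.astype(np.int32), axis=1)  # differences along the rows (x-direction)
    t = np.zeros_like(diff, dtype=np.int8)
    t[diff > upper] = 1    # first value: positive transition
    t[diff < lower] = -1   # second value: negative transition
    return t
```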
- the pixel transition value is also referred to as "transition type". Accumulated pixel transition values are computed from the pixel transition values of corresponding pixel locations of the image frames by applying a low pass filter with respect to time, wherein time is represented by the frame index.
- the low pass filter is computed as an accumulated value at a frame index f for the respective first and second direction.
- the accumulated value is computed as a weighted sum of the accumulated value at the earlier frame index f - 1 and the pixel transition value at the current frame index f.
- the weight factor of the accumulated value at the earlier frame index f - 1 may be set to at least 0.95.
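- One possible reading of this accumulation step is a per-frame infinite impulse response (IIR) update, sketched below; the function name is assumed and the weight 0.95 is the lower bound mentioned above.

```python
def accumulate_transitions(prev_accumulated, transitions, weight=0.95):
    """Temporal low pass (IIR): weighted sum of the accumulated value at
    frame f - 1 and the pixel transition value at frame f."""
    return weight * prev_accumulated + (1.0 - weight) * transitions
```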
- the accumulated pixel transition values are smoothed out with respect to the pixel locations by applying a spatial filter, in particular by computing a convolution with the spatial filter.
- the spatial filter can be provided as a filter with filter coefficients between 0 and 1 that fall off to zero as a function of the distance from a central point, for example as a circular filter.
- the low pass filtering with respect to time is performed before the spatial filtering.
- spatial filtering is performed before the low pass filtering with respect to time.
- the low pass filter is applied to the pixel transition values to obtain accumulated pixel transition values and the spatial filter is applied to the accumulated pixel transition values.
- the spatial filter is applied to the pixel transition values to obtain smoothed pixel transition values and the low pass filter with respect to time is applied to the smoothed pixel transition values.
- the spatial filter is realized as an averaging filter, for which the filter coefficients add up to 1. This is equivalent to a total volume of 1 under the filter function if the filter is defined step-wise and the coordinates (x, y) have a distance of 1.
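- A sketch of such a circular averaging filter is shown below, assuming a simple disc-shaped kernel and SciPy for the convolution; the radius and the function names are illustrative choices, not taken from the specification.

```python
import numpy as np
from scipy.ndimage import convolve

def circular_averaging_kernel(radius=5):
    """Disc-shaped kernel whose coefficients add up to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (x**2 + y**2 <= radius**2).astype(np.float64)
    return disc / disc.sum()

def smooth(accumulated, radius=5):
    """Spatial smoothing of the accumulated pixel transition values."""
    return convolve(accumulated, circular_averaging_kernel(radius), mode="nearest")
```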
- Magnitude values of the pixel locations are computed for the smoothed pixel transition values of the pixel locations. If the smoothed pixel transition values are computed with respect to one direction only, the magnitude values can be computed by taking the modulus.
- a magnitude value can be computed by adding the squared values for the respective first and second directions, and in particular, it can be computed as an L2-norm, which is also referred to as Euclidean norm. Then, the pixels of potential artifact regions are identified by comparing the magnitude value for given pixel locations (x, y) with a pre-determined detection threshold.
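- Putting the two directions together, the detection step could look as sketched below; the threshold 0.2 is the value mentioned later in the specification, everything else (names, shape handling) is assumed.

```python
import numpy as np

def detect_artifact_pixels(s_x, s_y, threshold=0.2):
    """Euclidean norm of the smoothed transition values per pixel location,
    compared against the detection threshold T_2.
    s_x and s_y are assumed to have been padded or cropped to the same shape."""
    magnitude = np.sqrt(s_x**2 + s_y**2)
    return magnitude > threshold  # boolean mask of potential artifact pixels
```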
- the present specification discloses a computer implemented method for correcting image artifacts. According to this method, image data with image frames is received from a vehicle camera, for example via an automotive data bus.
- Pixel quotient values for the respective pixel locations are computed in a first direction, or x-direction.
- the first direction can be provided by the rows of an image frame.
- pixel quotient values for the respective pixel locations can also be computed in a second direction, or y-direction.
- the second direction can be provided by the columns of an image frame.
- a numerator of the pixel quotient value comprises an image intensity at a given pixel location and a denominator of the pixel quotient value comprises an image intensity at a neighbouring pixel in the respective first or second direction.
- the method is "localized" and does not combine pixels from pixel locations which are far apart. This feature contributes to a sparse matrix for a system of linear equations.
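- A minimal sketch of the quotient computation in the x-direction follows; which neighbour ends up in the denominator and the small epsilon guarding against division by zero are assumptions of this sketch.

```python
import numpy as np

def pixel_quotients_x(frame, eps=1e-6):
    """Quotient of the intensity at a pixel and the intensity of its right
    neighbour along the rows; only adjacent pixels are combined."""
    f = frame.astype(np.float64)
    return f[:, :-1] / np.maximum(f[:, 1:], eps)
```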
- Median values of the pixel quotient values are computed for the respective pixel locations with respect to time, wherein time is represented by frame index.
- the median value can be computed as a streaming median value, which approximates a true median.
- the attenuation factors of the pixel locations of the image frames are computed as an approximate solution to a system of linear equations in the attenuation factors of the respective pixel locations (x, y), wherein the attenuation factors of the pixel locations are represented as a vector.
- the system of linear equations comprises a first set of linear equations, in which the previously determined median values appear as pre-factors of the respective attenuation factors.
- the system of linear equations comprises a second set of linear equations, which determine values of the attenuation factors at corresponding pixel locations.
- the second set of linear equations may be determined by the abovementioned method for identifying image artifacts.
- a corrected pixel intensity for a pixel of the image frame at a given time t is derived by dividing the observed pixel intensity by the previously determined attenuation factor B(x, y), where the attenuation factor lies between 0 and 1.
- the median values of the pixel quotient values are obtained as streaming median values of the pixel quotient values up to a frame index f.
- the streaming median value is derived from a median value estimate for the previous frame index f - 1 and the pixel quotient value at frame index f.
- the streaming median value approximates the true value of a median.
- the streaming median value of the current frame index and pixel is computed by adding a pre-determined value to the median value estimate of the previous frame index when the pixel quotient value of the current frame is greater than that estimate, and by subtracting the pre-determined value when the pixel quotient value is smaller.
- the abovementioned system of linear equations can be solved approximately using an iterative method.
- a number of iteration steps may be determined in advance or depending on a convergence rate.
- the pre-factors of the attenuation factor in the linear equations can be defined as elements of a constraint matrix.
- the method comprises multiplying the system of linear equations with the transposed constraint matrix.
- the resulting system of linear equations is solved using an iterative method.
- the iterative method can be provided by a conjugate gradient method, which is used for finding the minimum of a quadratic form that is defined by the resulting equation.
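- The combination of the transposed constraint matrix and an iterative solver might be sketched as follows, using SciPy's conjugate gradient routine; the names S, d and the iteration count are placeholders rather than notation from the specification.

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_attenuation(S, d, max_iter=100):
    """Multiply the constraint system S b = d by S^T and solve the resulting
    normal equations S^T S b = S^T d with the conjugate gradient method."""
    normal_matrix = S.T @ S
    rhs = S.T @ d
    b, info = cg(normal_matrix, rhs, maxiter=max_iter)  # info != 0 signals non-convergence
    return b
```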
- the present specification discloses a computation unit for carrying out the abovementioned method of detecting image artifacts, for example by providing integrated circuits, ASICs, microprocessors, computer readable memory with data and computer readable instructions, and the like.
- the computation unit comprises an input connection for receiving image data and an output connection for outputting locations of detected pixels.
- the output and input connections may also coincide.
- the locations of detected pixels can also be marked in a memory area, for example by providing pointers to data structures etc.
- the computation unit is operative to execute the abovementioned artifact detection method, in particular, the computation unit is operative to compare intensity differences between neighbouring pixels in a first direction with a pre-determined upper threshold and with a pre-determined lower threshold and to set a pixel transition value according to the intensity difference.
- the computation unit sets the pixel transition value to a first value when the intensity difference is greater than the pre-determined upper threshold, to a second value when the intensity difference is less than the pre-determined lower threshold and sets the pixel transition value to zero when the intensity difference lies between the pre-determined upper threshold and the pre-determined lower threshold. Furthermore, the computation unit computes accumulated pixel transition values of the respective pixel transition values by applying a low pass filter with respect to a frame index or with respect to time. The computation unit computes smoothed pixel transition values by applying a spatial filter to the accumulated pixel transition values and computes a magnitude value of the smoothed pixel transition values for the pixel locations of the image frame.
- the computation unit outputs the detected pixels via the output connection, for example by storing a reference to pixel locations or the coordinates of the pixel locations of the detected artifacts in a computer readable memory of the computation unit.
- the computation unit identifies pixels of potential artifact regions by comparing the magnitude value with a pre-determined detection threshold.
- the present specification discloses a vehicle camera with the aforementioned computation unit, wherein the vehicle camera is connected to the input connection of the computation unit.
- the present specification discloses a computation unit for correcting image artifacts.
- the computation unit comprises an input connection for receiving image data and an output connection for outputting corrected image frames, which may also coincide for a bidirectional data connection.
- the computation unit is operative to execute the abovementioned method for correcting image artifacts.
- the computation unit is operative to compute pixel quotient values in a first direction, wherein the pixel quotient values are derived from a quotient, the numerator of the quotient comprising an image intensity at a given pixel location and the denominator comprising an image intensity at a neighbouring pixel in the first direction.
- the computation unit computes median values of the pixel quotient values with respect to time and computes attenuation factors of the respective pixel locations of the image frame.
- the attenuation factors are computed as an approximate solution to a system of linear equations in the attenuation factor, the system of linear equations comprising a first set of linear equations and a second set of linear equations.
- the equations of the first set of equations relate the value of an attenuation factor at a first pixel location to the value of an attenuation factor at an adjacent or neighbouring pixel location in the respective first or second direction.
- the median values appear as pre-factors of the attenuation factors.
- the second set of linear equations determines values of the attenuation factors at respective pixel locations, which are known by other means, for example by using the abovementioned artifact detection method.
- the computation unit derives corrected pixel intensities by dividing the observed pixel intensities, or, in other words, the pixel intensities in the received current image frame, by the corresponding attenuation factors B(x, y) of the respective pixel locations.
- the present specification discloses a vehicle camera with the computation unit for correcting the image artifacts, wherein the vehicle camera is connected to the input connection of the computation unit.
- Figure 2 shows a pixel variation measure of the image of Fig. 1 in the x-direction,
- Figure 3 shows a pixel variation measure of the image of Fig. 1 in the y-direction,
- Figure 4 shows the result of smoothing out the image of Fig. 2,
- Figure 5 shows the result of smoothing out the image of Fig. 3,
- Figure 6 shows an overall pixel variation measure that is computed from the arrays of Figs. 4 and 5
- Figure 7 shows the result of thresholding the overall pixel variation measure of Fig. 6,
- Figure 8 shows an image with an overlaid synthetic blur mask,
- Figure 9 shows a corrected image, which is derived from the image of Fig. 8,
- Figure 10 shows a pixel variation measure in the x-direction of Fig. 8,
- Figure 11 shows a pixel variation measure in the y-direction of Fig. 8,
- Figure 12 shows the synthetic blur mask
- Figure 13 shows the estimated blur mask
- Figure 14 shows an original image with artifacts
- Figure 15 shows a corrected image
- Figure 16 shows a pixel variation measure in the x-direction of Fig. 14,
- Figure 17 shows a pixel variation measure in the y-direction of Fig. 14,
- Figure 18 shows an estimated image attenuation or blur mask
- Figure 19 shows an image defect correction system according to the present specification.
- a common assumption in imaging systems is that the radiance emitted from a scene is observed directly at the sensor. However, there are often physical layers or media lying between the scene and the imaging system. For example, the lenses of vehicle cameras, consumer digital cameras, or the front windows of security cameras often accumulate various types of contaminants over time such as fingerprints, dust and dirt. Also, the exposure of cameras to aggressive environments can cause defects in the optical path, like stone chips, rifts or scratches at the camera lens. Artifacts from a dirty camera lens are shown in Fig. 1.
- a computational algorithm according to the present specification may provide advantages by artificially removing the artifacts caused by dirt or by a lightly damaged lens, so that the methods analyzing the images can operate properly.
- an algorithm makes use of a computational model of the process of image formation to detect that the lens is dirty or to directly recover the image information, in particular those image points which are still partially visible in the captured images.
- the current specification discloses two types of image correction methods, which make use of these observations.
- According to a first type of method, a location where the lens contains attenuation or occluding-type artifacts is detected.
- the amount by which the images are attenuated at each pixel is detected and an estimate of the artifact-free image is obtained.
- the methods use only the information measured from a sequence of images, which is obtained in an automated way. They make use of temporal information but require only a small number of frames to achieve a solution.
- the methods according to the present specification do not require that the images are totally uncorrelated, but only that there is some movement, such as that expected in, for example, a moving vehicle.
- the methods work best when the statistics of the images being captured obey natural image statistics.
- Image inpainting and hole-filling techniques assume that the locations of the artifacts are known and then replace the affected areas with a synthesized estimate obtained from the neighboring regions.
- a correction method according to the present specification makes use of information of the original scene that is still partially accessible to recover the original scene. In many cases, the result is more faithful to the actual structure of the original unobserved image. In areas where the image is totally obstructed, inpainting methods can be used.
- Figs. 1 to 7 illustrate a method for detecting image attenuations according to a first embodiment of the present specification.
- Figs. 8 to 18 illustrate a method for correcting image contaminations according to a second embodiment of the present specification.
- the pixel numbers in the x-direction are indicated on the x-axis and the pixel numbers in y- direction are indicated on the y-axis.
- the image format of the image in Figs. 1 - 18 is about 1280 x 800 pixels.
- a detection method is disclosed, which is suitable for detecting if there is a disturbance in the optical path caused by attenuating or obstructing elements.
- the model for describing attenuating or obstructing elements is:
- I_f = I_of · B, (1)
- the index "f", which is also referred to as time index "t", is the frame index,
- I_of is the original unobserved image,
- B ∈ [0,1] is the attenuation mask, where 0 indicates total obstruction and 1 no obstruction.
- the intensity "I" refers to luminance values, but similar processing can be done in RGB or in other color spaces. Computing the horizontal derivative of the previous equation leads to
- T is a threshold.
- the corrected Figures 9 and 15, the time averaged transition magnitudes of Figs. 6 and 7 and the estimated blur masks of Figs. 13 and 18 have been obtained with a moving camera and after applying the method for a few frames.
- IIR Infinite Impulse Response
- a black colour signifies a negative transition
- a white colour signifies a positive transition
- a grey colour signifies no transition.
- the intensity values of the resulting smoothed out arrays S_x(x,y) and S_y(x,y) are illustrated in Figs. 4 and 5, respectively, if the original image is given by Fig. 1.
- Isolated black and white pixels and stripe shaped artifacts, which are still present in Figs. 2 and 3, are suppressed or eliminated in Figs. 4 and 5, and the light and dark regions are more contiguous and have smoother boundaries.
- a "circular filter" refers to a filter that is circularly symmetric with respect to the spatial dimensions x and y.
- a symmetric multivariate Gaussian filter or a Mexican-hat shaped filter are examples for circular filters.
- any filter shape and type can be used, depending on image resolution and camera and filter properties.
- the overall magnitude S_f(x,y) of a transition at the pixel location (x,y) is computed as the Euclidean norm of the individual magnitudes for the x- and y-directions, S_f(x,y) = sqrt(S_x(x,y)² + S_y(x,y)²), and a transition exists if S_f(x,y) > T_2.
- a threshold T_2 = 0.2 is used.
- the computation of the sign, the addition for many pixels (in this case) and a threshold is denoted in the robust statistics literature as the sign test.
- Fig. 6 shows the intensities of the array Sf(x,y)
- Fig. 7 shows the thresholded array Sf(x,y), when the recorded image is provided by Fig. 1.
- Fig. 7 shows that the algorithm detects dirt regions but also other time independent features with strongly varying intensities such as the lens border and the border of the car from which the image is taken. Features like the car border and the lens border are always present in the image and can be identified and masked out easily.
- Conversely, Fig. 7 can also be used to identify image portions which are not affected by dirt, scratches and the like.
Second embodiment: correcting the attenuation
- a method is disclosed for determining an amount of attenuation and for obtaining an estimate of the artifact-free image based on the determined amount of attenuation.
- This embodiment is illustrated in the Figs. 8 - 18.
- Fig. 8 shows an image with an overlaid artificial contamination with a blur mask that comprises the letters "t e s t".
- Fig. 9 shows a recovered image, which is derived from the image of Fig. 8 according to the below mentioned image recovery algorithm.
- Fig. 10 shows a pixel variation measure in the x-direction of Fig. 8 and Fig. 11 shows a pixel variation measure in the y-direction of Fig. 8.
- the computation of the pixel variation measure is explained further below.
- Fig. 12 shows the actual blur mask and Fig. 13 shows the estimated blur mask, which is obtained by solving the below mentioned equation (19).
- the final result of Fig. 9 is obtained by solving the below mentioned equation (21).
- Figs. 14 - 18 show the analogous results to Figs. 8 to 13 using the original image and a real contamination instead of an artificial blur mask.
- Fig. 14 shows the original image
- Fig. 15 shows the corrected image using the below mentioned image correction method.
- Fig. 16 shows a pixel variation measure in the x-direction of Fig. 14 and Fig. 17 shows a pixel variation measure in the y-direction of Fig. 14.
- Figure 18 shows an estimated blur mask or attenuation.
- an approximation of the median can be calculated according to the following method.
- Δ is a suitably chosen value and t is a time index, such as the frame index f.
- This method does not require that all previous values of m are stored and performs only a comparison and an addition per point and observation, which is very efficient from a computational and storage point of view.
- as t → ∞, m(t) → median({p(0), ..., p(t)}), or, in other words, the median estimate approaches the real value of the median for sufficiently large values of t.
- Concerning the value of Δ: if Δ is too small, m(t) will tend towards the real value of the median too slowly. If Δ is too large, it will tend towards the value of the real median quickly but will then oscillate too much.
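- A sketch of this streaming median update follows; the step size delta stands for the value Δ discussed above, and the function name and default value are illustrative.

```python
def update_streaming_median(prev_estimate, observation, delta=0.05):
    """Move the median estimate up or down by a fixed step delta,
    depending on whether the new observation lies above or below it."""
    if observation > prev_estimate:
        return prev_estimate + delta
    if observation < prev_estimate:
        return prev_estimate - delta
    return prev_estimate
```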
- the first and third quartiles can be computed in an analogous way; for example, the first quartile estimate is updated as m(t) = m(t - 1) + Δ/2 if p(t) > m(t - 1), and m(t) = m(t - 1) - 3Δ/2 if p(t) < m(t - 1). (12)
- the attenuation factor B is estimated using the previously described streaming median method to estimate an approximation of the median value of the pixel quotient values f_x(x,y) over time. Using the relationship
- the pixel locations (x, y) may be obtained, for example, by using the detection method according to the first embodiment.
- the equations (15), (16) and (18) can be represented in matrix form through the equation
- the number of constraints is equal to the number of constraint equations (15), (16) and (18).
- the number of constraints is approximately (#X-1)*#Y horizontal constraints plus (#Y-1)*#X vertical constraints plus N constraints for N points in which B is known.
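- By way of illustration, the horizontal constraints of equation (15) and the known-value constraints of equation (18) could be assembled into a sparse constraint matrix roughly as sketched below; the sign convention mu_x(x, y) * B(x, y) - B(x + 1, y) = 0 and the known value B = 1 at artifact-free pixels are assumptions of this sketch, and vertical constraints would be added analogously.

```python
import numpy as np
from scipy.sparse import coo_matrix

def build_constraints(mu_x, known_pixels):
    """Sparse constraint matrix S and right-hand side d for S b = d.
    mu_x holds the temporal medians of the horizontal pixel quotients,
    known_pixels is a list of (row, col) locations with known attenuation."""
    height, width = mu_x.shape[0], mu_x.shape[1] + 1   # mu_x has one column less than the image
    index = np.arange(height * width).reshape(height, width)  # position of B(x, y) in the vector b
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for y in range(height):
        for x in range(width - 1):                     # horizontal constraints, eq. (15)
            rows += [r, r]
            cols += [index[y, x], index[y, x + 1]]
            vals += [mu_x[y, x], -1.0]
            rhs.append(0.0)
            r += 1
    for (y, x) in known_pixels:                        # known attenuation values, eq. (18)
        rows.append(r); cols.append(index[y, x]); vals.append(1.0)
        rhs.append(1.0)                                # assumed: B = 1 at artifact-free pixels
        r += 1
    S = coo_matrix((vals, (rows, cols)), shape=(r, height * width)).tocsr()
    return S, np.asarray(rhs)
```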
- This equation is also known as a normal equation in the context of a least squares approximation.
- the normal equation is solved approximately with an iterative method, thereby obtaining the vector b.
- the iterative method may be provided by a least squares solver, such as the conjugate gradient method, which approximates the vector b that minimizes the quadratic form
- the array B is obtained from the column vector b by reshaping the vector b back into array form.
- the unobserved image is estimated simply by dividing each pixel of the observed image by the estimated B for that pixel,
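- A sketch of this final correction step is given below, under the added assumption that very small attenuation values are clipped away from zero to avoid amplifying noise at fully obstructed pixels.

```python
import numpy as np

def correct_frame(frame, B, b_min=0.05):
    """Estimate of the unobserved image: divide each observed pixel by the
    estimated attenuation B for that pixel (B clipped away from zero)."""
    return frame.astype(np.float64) / np.clip(B, b_min, 1.0)
```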
- constraint equations of equation (18) that are not needed are identified and are not included into the matrix S.
- an algorithm may identify boundary regions of the artifacts and exclude points (x, y) outside the boundary regions from the equations (18) and from the vector b.
- At least one constraint equation (18) is provided for each row of the image frames and, if present, for each column of the image frames.
- the one or more known attenuation values B(x, y) can be used to find the attenuation using equations (15) and (16) in the pixel locations in which the attenuation is not known beforehand.
- Fig. 19 shows, by way of example, an image defect correction system 10 according to the present application.
- a sensor surface 12 of a video camera is connected to an image capture unit 13 which is connected to a video buffer 14.
- An artifact detection unit 15 and an artifact correction unit 16 are connected to the video buffer 14.
- a display 17 is connected to the artifact correction unit 16.
- the dashed arrow indicates an optional use of an output of the artifact detection unit 15 as input for the artifact correction unit 16.
- an image evaluation unit 19 is connected to the artifact correction unit 16.
- Various driver assistance units such as a brake assistant unit 20, a parking assistant unit 21 and a traffic sign detection unit 22 are connected to the image evaluation unit 19.
- the display 18 is connected to the units 20, 21, 22 for displaying output data of the units 20, 21 and 22.
- the artifact detection unit 15 is operative to execute an artifact detection according to the first embodiment of the present specification and the artifact correction unit 16 is operative to execute an artifact correction method according to the second embodiment of the present specification, for example by providing a computing means such as a microprocessor, an integrated circuit, an ASIC, a computer readable memory for storing data and computer executable code etc.
- the pixel matrix may be traversed column-wise instead of row by row and the direction of traversing the matrix may be reversed.
- the constraint equation for the attenuation may be expressed in terms of the preceding pixel (x, y-1) or (x-1, y) instead of being expressed in terms of the next pixel (x, y+1) or (x+1, y). In this case, there is no constraint equation for the first column or for the first row, respectively.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Image Analysis (AREA)
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017567405A JP6788619B2 (ja) | 2015-07-02 | 2016-05-10 | 静的汚れ検出及び補正 |
| KR1020177037464A KR102606931B1 (ko) | 2015-07-02 | 2016-05-10 | 정적 오염 감지 및 보정 |
| CN201680038530.2A CN107710279B (zh) | 2015-07-02 | 2016-05-10 | 静态脏污检测与校正 |
| US15/845,645 US10521890B2 (en) | 2015-07-02 | 2017-12-18 | Static soiling detection and correction |
| US16/562,140 US11138698B2 (en) | 2015-07-02 | 2019-09-05 | Static soiling detection and correction |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP15174934.8 | 2015-07-02 | ||
| EP15174934.8A EP3113107B1 (en) | 2015-07-02 | 2015-07-02 | Static soiling detection and correction |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/845,645 Continuation US10521890B2 (en) | 2015-07-02 | 2017-12-18 | Static soiling detection and correction |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017001096A1 true WO2017001096A1 (en) | 2017-01-05 |
Family
ID=53836360
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2016/060379 Ceased WO2017001096A1 (en) | 2015-07-02 | 2016-05-10 | Static soiling detection and correction |
Country Status (6)
| Country | Link |
|---|---|
| US (2) | US10521890B2 (en) |
| EP (1) | EP3113107B1 (en) |
| JP (1) | JP6788619B2 (en) |
| KR (1) | KR102606931B1 (en) |
| CN (1) | CN107710279B (en) |
| WO (1) | WO2017001096A1 (en) |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109101889B (zh) * | 2018-07-12 | 2019-08-02 | 新昌县哈坎机械配件厂 | 基于灰尘分析的指纹扫描机构 |
| CN111147761A (zh) * | 2019-12-28 | 2020-05-12 | 中国第一汽车股份有限公司 | 一种车用摄像设备清洗方法、系统、车辆及存储介质 |
| US11276156B2 (en) * | 2020-01-07 | 2022-03-15 | GM Global Technology Operations LLC | Gaussian image quality analysis tool and method for operation |
| DE102020003199A1 (de) * | 2020-05-28 | 2020-08-06 | Daimler Ag | Verfahren zur Erkennung von Bildartefakten, Steuereinrichtung zur Durchführung eines solchen Verfahrens, Erkennungsvorrichtung mit einer solchen Steuereinrichtung und Kraftfahrzeug mit einer solchen Erkennungsvorrichtung |
| CN111967345B (zh) * | 2020-07-28 | 2023-10-31 | 国网上海市电力公司 | 一种实时判定摄像头遮挡状态的方法 |
| DE102020209796A1 (de) | 2020-08-04 | 2022-02-10 | Robert Bosch Gesellschaft mit beschränkter Haftung | Verfahren und Anordnung zum Betreiben einer kamerabasierten Sensoreinrichtung, Computerprogrammprodukt und landwirtschaftliche Vorrichtung |
| WO2022092620A1 (en) | 2020-10-30 | 2022-05-05 | Samsung Electronics Co., Ltd. | Method and system operating an imaging system in an image capturing device based on artificial intelligence techniques |
| JP7617395B2 (ja) | 2021-02-19 | 2025-01-20 | 日本製鉄株式会社 | 画像処理装置、画像処理方法及びプログラム |
| CN115393250B (zh) * | 2021-05-25 | 2025-09-16 | 上海西门子医疗器械有限公司 | 线伪影的评估方法、系统、x射线机及存储介质 |
| CN114271791B (zh) * | 2022-01-12 | 2023-03-24 | 广州永士达医疗科技有限责任公司 | 一种oct成像系统的伪影检测方法及装置 |
| CN114596280B (zh) * | 2022-03-08 | 2022-09-09 | 常州市宏发纵横新材料科技股份有限公司 | 一种碳纤维布面生产过程中碎屑纸的检测方法及装置 |
| US12470835B2 (en) * | 2023-02-06 | 2025-11-11 | Motorola Solutions, Inc. | Method and apparatus for analyzing a dirty camera lens to determine if the dirty camera lens causes a failure to detect various events |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1258403A2 (en) * | 2001-05-16 | 2002-11-20 | Nippon Sheet Glass Co., Ltd. | Object sensing device and wiper controlling apparatus using the same |
| WO2003060826A1 (de) * | 2002-01-17 | 2003-07-24 | Robert Bosch Gmbh | Verfahren und vorrichtung zur erkennung von sichtbehinderungen bei bildsensorsystemen |
| EP2351351A1 (en) | 2008-10-01 | 2011-08-03 | HI-KEY Limited | A method and a system for detecting the presence of an impediment on a lens of an image capture device to light passing through the lens of an image capture device |
| DE102011013527A1 (de) * | 2011-03-10 | 2012-01-05 | Daimler Ag | Verfahren und Vorrichtung zur Ermittlung von Wasser auf einer Fahrzeugscheibe |
Family Cites Families (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| NL263833A (enExample) | 1960-04-23 | |||
| US6643410B1 (en) * | 2000-06-29 | 2003-11-04 | Eastman Kodak Company | Method of determining the extent of blocking artifacts in a digital image |
| US6965395B1 (en) * | 2000-09-12 | 2005-11-15 | Dialog Semiconductor Gmbh | Methods and systems for detecting defective imaging pixels and pixel values |
| TWI373734B (en) * | 2003-03-17 | 2012-10-01 | Qualcomm Inc | Method and apparatus for improving video quality of low bit-rate video |
| JP2004310282A (ja) * | 2003-04-03 | 2004-11-04 | Nissan Motor Co Ltd | 車両検出装置 |
| JP2005044196A (ja) * | 2003-07-23 | 2005-02-17 | Sharp Corp | 移動体周辺監視装置、自動車、移動体周辺監視方法、制御プログラムおよび可読記録媒体 |
| KR100547140B1 (ko) * | 2003-09-13 | 2006-01-26 | 삼성전자주식회사 | 디지털 영상 확대방법 및 장치 |
| US7295233B2 (en) * | 2003-09-30 | 2007-11-13 | Fotonation Vision Limited | Detection and removal of blemishes in digital images utilizing original images of defocused scenes |
| US7046902B2 (en) * | 2003-09-30 | 2006-05-16 | Coractive High-Tech Inc. | Large mode field diameter optical fiber |
| US20070047834A1 (en) * | 2005-08-31 | 2007-03-01 | International Business Machines Corporation | Method and apparatus for visual background subtraction with one or more preprocessing modules |
| FR2901218B1 (fr) * | 2006-05-22 | 2009-02-13 | Valeo Vision Sa | Procede de detection de pluie sur un parebrise |
| WO2008022005A2 (en) * | 2006-08-09 | 2008-02-21 | Fotonation Vision Limited | Detection and correction of flash artifacts from airborne particulates |
| JP4654208B2 (ja) * | 2007-02-13 | 2011-03-16 | 日立オートモティブシステムズ株式会社 | 車載用走行環境認識装置 |
| WO2008119480A2 (en) * | 2007-03-31 | 2008-10-09 | Sony Deutschland Gmbh | Noise reduction method and unit for an image frame |
| US7813528B2 (en) * | 2007-04-05 | 2010-10-12 | Mitsubishi Electric Research Laboratories, Inc. | Method for detecting objects left-behind in a scene |
| US8244057B2 (en) * | 2007-06-06 | 2012-08-14 | Microsoft Corporation | Removal of image artifacts from sensor dust |
| CN100524358C (zh) * | 2007-11-15 | 2009-08-05 | 南方医科大学 | 一种改进的锥形束ct环形伪影的消除方法 |
| US8351736B2 (en) * | 2009-06-02 | 2013-01-08 | Microsoft Corporation | Automatic dust removal in digital images |
| JP5241782B2 (ja) * | 2010-07-30 | 2013-07-17 | 株式会社日立製作所 | カメラ異常検出装置を有する監視カメラシステム |
| JP2012038048A (ja) * | 2010-08-06 | 2012-02-23 | Alpine Electronics Inc | 車両用障害物検出装置 |
| EA016450B1 (ru) * | 2011-09-30 | 2012-05-30 | Закрытое Акционерное Общество "Импульс" | Способ коррекции яркости дефектных пикселей цифрового монохромного изображения |
| JP6122269B2 (ja) * | 2011-12-16 | 2017-04-26 | キヤノン株式会社 | 画像処理装置、画像処理方法、及びプログラム |
| US9042645B2 (en) * | 2012-05-16 | 2015-05-26 | Imec | Feature detection in numeric data |
| CN103871039B (zh) * | 2014-03-07 | 2017-02-22 | 西安电子科技大学 | 一种sar图像变化检测差异图生成方法 |
| CN104504669B (zh) * | 2014-12-12 | 2018-03-23 | 天津大学 | 一种基于局部二值模式的中值滤波检测方法 |
| US10262397B2 (en) * | 2014-12-19 | 2019-04-16 | Intel Corporation | Image de-noising using an equalized gradient space |
| CN104680532B (zh) * | 2015-03-02 | 2017-12-08 | 北京格灵深瞳信息技术有限公司 | 一种对象标注方法及装置 |
-
2015
- 2015-07-02 EP EP15174934.8A patent/EP3113107B1/en active Active
-
2016
- 2016-05-10 KR KR1020177037464A patent/KR102606931B1/ko active Active
- 2016-05-10 WO PCT/EP2016/060379 patent/WO2017001096A1/en not_active Ceased
- 2016-05-10 CN CN201680038530.2A patent/CN107710279B/zh active Active
- 2016-05-10 JP JP2017567405A patent/JP6788619B2/ja active Active
-
2017
- 2017-12-18 US US15/845,645 patent/US10521890B2/en active Active
-
2019
- 2019-09-05 US US16/562,140 patent/US11138698B2/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1258403A2 (en) * | 2001-05-16 | 2002-11-20 | Nippon Sheet Glass Co., Ltd. | Object sensing device and wiper controlling apparatus using the same |
| WO2003060826A1 (de) * | 2002-01-17 | 2003-07-24 | Robert Bosch Gmbh | Verfahren und vorrichtung zur erkennung von sichtbehinderungen bei bildsensorsystemen |
| EP2351351A1 (en) | 2008-10-01 | 2011-08-03 | HI-KEY Limited | A method and a system for detecting the presence of an impediment on a lens of an image capture device to light passing through the lens of an image capture device |
| DE102011013527A1 (de) * | 2011-03-10 | 2012-01-05 | Daimler Ag | Verfahren und Vorrichtung zur Ermittlung von Wasser auf einer Fahrzeugscheibe |
Non-Patent Citations (3)
| Title |
|---|
| ANONYMOUS: "algorithm - How to calculate or approximate the median of a list without storing the list - Stack Overflow", STACK OVERFLOW, 27 January 2010 (2010-01-27), XP055269426, Retrieved from the Internet <URL:http://stackoverflow.com/questions/638030/how-to-calculate-or-approximate-the-median-of-a-list-without-storing-the-list> [retrieved on 20160428] * |
| JINWEI GU ET AL: "Removing image artifacts due to dirty camera lenses and thin occluders", ACM TRANSACTIONS ON GRAPHICS (TOG), vol. 28, no. 5, 1 December 2009 (2009-12-01), US, pages 1, XP055269428, ISSN: 0730-0301, DOI: 10.1145/1618452.1618490 * |
| LIN S ET AL: "Removal of Image Artifacts Due to Sensor Dust", INTERNET CITATION, 1 January 2007 (2007-01-01), pages 1 - 8, XP002563625, Retrieved from the Internet <URL:http://research.microsoft.com/apps/pubs/default.aspx?id=6943> [retrieved on 20100211] * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107710279A (zh) | 2018-02-16 |
| US11138698B2 (en) | 2021-10-05 |
| EP3113107A1 (en) | 2017-01-04 |
| KR20180026399A (ko) | 2018-03-12 |
| JP2018523222A (ja) | 2018-08-16 |
| US20180174277A1 (en) | 2018-06-21 |
| JP6788619B2 (ja) | 2020-11-25 |
| CN107710279B (zh) | 2021-07-27 |
| US10521890B2 (en) | 2019-12-31 |
| EP3113107B1 (en) | 2018-12-05 |
| KR102606931B1 (ko) | 2023-11-27 |
| US20200034954A1 (en) | 2020-01-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11138698B2 (en) | Static soiling detection and correction | |
| You et al. | Adherent raindrop modeling, detection and removal in video | |
| Eigen et al. | Restoring an image taken through a window covered with dirt or rain | |
| CN109584204B (zh) | 一种图像噪声强度估计方法、存储介质、处理及识别装置 | |
| JP5908174B2 (ja) | 画像処理装置及び画像処理方法 | |
| KR101582479B1 (ko) | 동영상에 포함된 헤이즈 제거를 위한 영상 처리 장치 및 그 방법 | |
| JP6744336B2 (ja) | 予想されるエッジ軌跡を用いたレンズ汚染の検出 | |
| KR101097673B1 (ko) | 화질 개선을 위한 노이즈 검출 및 평가 기술 | |
| JP2004522372A (ja) | 時空間適応的雑音除去/高画質復元方法及びこれを応用した高画質映像入力装置 | |
| Mathias et al. | Underwater image restoration based on diffraction bounded optimization algorithm with dark channel prior | |
| KR101784350B1 (ko) | 개선된 메디안 다크 채널 프라이어에 기반한 안개 제거 방법 및 장치 | |
| CN119360385A (zh) | 一种隧道衬砌细微裂缝快速定位识别方法及系统 | |
| WO2012063533A1 (ja) | 画像処理装置 | |
| CN110084761B (zh) | 一种基于灰色关联度引导滤波的图像去雾方法 | |
| CN114359183B (zh) | 图像质量评估方法及设备、镜头遮挡的确定方法 | |
| CN112825189B (zh) | 一种图像去雾方法及相关设备 | |
| Wang et al. | ClearSight: Deep Learning-Based Image Dehazing for Enhanced UAV Road Patrol | |
| KR20200099834A (ko) | 움직임 검출을 수행하는 방법 및 움직임 검출을 수행하는 영상처리장치 | |
| CN114693542A (zh) | 图像去雾方法和使用图像去雾方法的图像去雾设备 | |
| Kim et al. | Automatic film line scratch removal system based on spatial information | |
| JP3959547B2 (ja) | 画像処理装置、画像処理方法、及び情報端末装置 | |
| Chen et al. | Contrast Restoration of Hazy Image in HSV Space | |
| Goto et al. | Image restoration method for non-uniform blurred images | |
| Vougioukas et al. | Adaptive deblurring of surveillance video sequences that deteriorate over time | |
| WO2018028974A1 (en) | Method and apparatus for soiling detection, image processing system and advanced driver assistance system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16724326 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2017567405 Country of ref document: JP Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 20177037464 Country of ref document: KR Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 16724326 Country of ref document: EP Kind code of ref document: A1 |