WO2010115228A1 - Enhancing image data - Google Patents
Enhancing image data (original French title: Amélioration de données d'images)
- Publication number
- WO2010115228A1 (PCT/AU2009/000439)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- point
- image data
- optical depth
- optical
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- the invention concerns image processing that includes the estimation of the optical depths of points in image data in order to enhance the image data.
- image processing includes the estimation of the optical depths of points in image data in order to enhance the image data.
- Applications of the invention include, but are not limited to, surveillance systems so that objects in the scene of the image can be better identified once the image data has been enhanced to reduce the estimated noise.
- aspects of the invention include a method, software, computer apparatus and a field programmable gate array.
- edge detection looks at the difference in intensities between a pair of neighbouring pixels. If the difference is large, there is most likely an edge between them.
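As an illustration only (not code from the patent), this neighbouring-pixel difference test can be sketched in a few lines of numpy; the threshold value of 30 is an arbitrary choice for the example:

```python
import numpy as np

def edge_map(img, threshold=30):
    # Difference between each pixel and its right-hand neighbour;
    # a large difference most likely indicates an edge between them.
    img = np.asarray(img, dtype=float)
    diff = np.abs(np.diff(img, axis=1))
    return diff > threshold

img = np.array([[10, 12, 200, 202],
                [11, 13, 198, 205]])
edges = edge_map(img)  # only the 12->200 and 13->198 jumps are flagged
```

The same test applied along the other axis would detect horizontal edges.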
- Fog occurs when water droplets are suspended in the air.
- the presence of fog or any other aerosol leads to two types of light reaching the camera or observer: "transmitted" and "path" components (referred to as the "direct" and "airlight" components in [1]).
- light 11 from the illumination source 10 reflects off objects in the scene and travels 12 towards the observer or camera 15.
- the amount of diffuse light increases, as the direct light is scattered multiple times before reaching the surfaces of the scene.
- light 12 may scatter 16 while travelling towards the observer.
- this "transmitted" or "direct" component is a diminished version of the radiance L0 at the surface of the object, and the attenuation is determined by the optical depth βd(x) [1], where x represents a particular point in the image, d(x) the geometric distance of that point, and β is the extinction coefficient:
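Putting the two components together gives the standard fog formation model: the observed radiance is the surface radiance attenuated by e^(-βd(x)), plus the horizon radiance L∞ weighted by 1 - e^(-βd(x)). The sketch below is our reading of that model rather than the patent's code; the function name and default values are illustrative:

```python
import numpy as np

def foggy_observation(L0, depth, beta, L_inf=1.0):
    # Transmission of the aerosol: e^{-beta * d(x)}.
    t = np.exp(-beta * np.asarray(depth, dtype=float))
    # Transmitted ("direct") component + path ("airlight") component.
    return np.asarray(L0, dtype=float) * t + L_inf * (1.0 - t)
```

At zero depth the observation equals the surface radiance; at large depth it tends towards the horizon radiance L_inf, which is why distant objects vanish into the fog.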
- the invention provides a method of enhancing image data representing an image of a scene including aerosols, the method comprising:
- optical depth can be estimated without any geometric information about the image or any user interactions.
- the costs may be based on a quality metric, such as any one or more of: intensity contrast; colour, hue or saturation; geometric distance, such as scene geometry or image plane geometry; or entropy.
- Step (a) may further comprise determining the first cost of each point by: enhancing that point based on a candidate optical depth; and determining the quality metric of the point compared to the enhanced version of that point.
- the candidate optical depth may be a set of optical depths, and the quality metric may be determined for that point for each optical depth in the set of optical depths.
- the first cost function may be a tabulation of values.
- the candidate optical depth may be a predetermined algebraic expression and may be different for different points.
- step (a) may further comprise determining the first cost of each point by determining a dissimilarity between a metric of that point enhanced based on the candidate optical depth and a metric of an equivalent point in a reference image.
- Step (b) is performed automatically, that is, it does not require input from the user.
- Each point may be considered in four or eight sets of two points, each set including that point and one of an immediate neighbouring point of that point.
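A hedged sketch of how such neighbour sets might be enumerated for the four-neighbour case on a pixel grid (each pair listed exactly once, pairing every pixel with its right and lower neighbours); the function name is ours, not the patent's:

```python
def four_connected_pairs(height, width):
    # Pair every pixel with its immediate right and lower neighbours,
    # so each 4-connected pair appears exactly once.
    pairs = []
    for r in range(height):
        for c in range(width):
            if c + 1 < width:
                pairs.append(((r, c), (r, c + 1)))
            if r + 1 < height:
                pairs.append(((r, c), (r + 1, c)))
    return pairs
```

The eight-neighbour case would additionally pair each pixel with its two lower diagonal neighbours.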
- the predefined correlation may be that points of a set have similar optical depths.
- the predefined correlation may be that optical depths of vertically and/or horizontally aligned points of a set have increasing optical depths in a predetermined direction.
- the predefined correlation may be that there is no correlation between points in a set.
- the second costs may also depend on other factors, such as image plane distance or intensity.
- the predetermined correlation may be based on image intensity or colour values of points of a set.
- the predetermined correlation may be based on edge strengths in the image between points of the set.
- the first and second costs may be defined in such a way that the formulation is consistent with that of a Markov Random Field or Conditional Random Field.
- the estimate of optical depths may be confined to a predetermined range of optical depths comprised of a set of discrete values. This may be a labelling problem wherein each point is assigned a label associated with one of the discrete values.
- the optimising of step (c) may comprise using graph cuts, such as α-expansion, α-β swap or multi-label graph cuts.
- step (c) may comprise using simulated annealing, (loopy) belief propagation, iterated conditional modes or message passing algorithms, such as tree-reweighted message passing.
- the method may further comprise using the enhanced image in further image processing, such as object detection in the images.
- the image data may represent a time ordered sequence of images. Estimating the optical depth for that point in other images in the sequence may be based on a previously determined optical depth for the equivalent point in the one or more previous images in the sequence. The method may be performed on a subset of the time ordered sequence of images, such as every fourth image.
- the aerosol in the image may be fog or haze.
- the point may be one or more pixels.
- the invention comprises software that when installed in a computer causes it to operate in accordance with the method described above.
- the software may be written in OpenGL.
- the invention is a computer apparatus to enhance image data representing an image of a scene including aerosols, the computer having: storage means to store the image data; and software installed to cause a processor to operate in accordance with the method described above.
- the computer may be a computer vision system including a camera to capture the image data.
- the processor may be a central processing unit of a general-purpose computer or a graphics processing unit of a graphics card.
- the invention is a field programmable gate array designed to implement the method described above.
- the invention may be implemented in the hardware of a graphics card of a computer system.
- a first or second cost may be determined with reference to a table of predetermined costs.
- the invention is a method of enhancing image data representing an image of a scene including aerosols, the method comprising:
- the function may be dependent on any one or more of: the image data itself, predetermined geometric information about the scene, or previous image data of the scene.
- step (b) requires no extra information from either a user or additional sensors.
- the function may be an energy function associated with a Markov Random Field.
- the invention does not require the use of specialised hardware, such as polarizers or infrared (IR) cameras.
- the invention can be used with cameras that capture images only in the visible spectrum. It is an advantage of at least one embodiment of the present invention that the invention can retrospectively be applied for use with existing camera installations of video surveillance systems.
- the algorithm aims to enhance the noisy images, so that low-level image processing techniques can still provide suitable information to the higher-level image understanding algorithms. Although an approximation of the scene on a clear day would be useful to a human operator, this is not necessarily the optimal correction for a computer vision system.
- Figs. 1(a) and 1(b) schematically show the "transmitted" and "path" components of light respectively (prior art).
- Fig. 2 is a schematic diagram of a video surveillance system of an example of the invention.
- Fig. 3(a) is a sample image;
- Fig. 3(b) is an enhanced version of the sample image of Fig. 3(a) according to Example 1;
- Fig. 3(c) is an enhanced version of the sample image of Fig. 3(a) according to Example 2;
- Fig. 3(d) is pseudo code for estimating L∞;
- Fig. 4(a) is a flowchart of an example of the invention.
- Fig. 4(b) is a tabulation of example first, second and combination costs;
- Fig. 5(a) is a representation of a cost versus depth chart for a quality metric of a point for Example 1 ;
- Fig. 5(b) is a representation of the cost versus depth chart for applying a predetermined correlation to the estimated optical depths of Example 1 ;
- Fig. 6(a) is a natural image and Fig. 6(b) is the image of Fig. 6(a) synthesized with fog;
- Fig. 7 is the measured contrast within the box 82 of Fig. 6(b) with respect to every possible optical depth value, where the inversion of this function is Fig. 5(a);
- Fig. 8 is the pseudo code for visibility enhancement of Example 1 ;
- Fig. 9 is the pseudo code for data cost of Example 1 ;
- Fig. 10(a) is an image showing the estimated "path" or "airlight" component and Fig. 10(b) is the same image as Fig. 10(a) but showing the estimated source radiance for Example 1; Fig. 11(a) is an image showing the values of Y and Fig. 11(b) is the same image with Y blurred;
- Fig. 12(a) is a representation of a cost versus appearance difference chart for a quality metric of a point for Example 2;
- Fig. 12(b) schematically shows the geometry of an image taken by a camera; and
- Fig. 12(c) is a representation of a cost versus depth difference chart for applying a predetermined correlation to the estimated optical depths of Example 2.
- a video surveillance system 10 in which a video camera 12 captures a visible scene 12 as image data.
- the image data represents a sequence of images that form a video.
- the video camera 10 operates in the visible spectrum only.
- the video camera 10 includes a processor that automatically causes the captured image data, possibly compressed, to be transmitted in real-time over the Internet 14 to a server 16 that is also connected to the Internet 14.
- the server 16 receives and stores the image data in a local datastore 18.
- the server 16 also stores processor readable instructions and a processor of the server 16 operates to read and execute the instructions to cause the server 16 to operate as described in further detail below.
- a display device, such as a monitor 20 may be connected to the server to display the enhanced images.
- the transmission of image data may be performed wirelessly.
- the server 16 may be directly connected (i.e. not over the Internet) to the camera 10.
- the server 16 may in fact be a personal computer.
- a background subtraction algorithm may be applied early in the processing of the sequence of images.
- a sample image as captured by the camera 10 is shown in Fig. 3(a), and includes noise introduced by the presence of aerosols (e.g. fog) in the scene.
- the image is comprised of a set of points, and in this example a point is a pixel.
- the optical depth is the geometric distance, d, scaled by the extinction coefficient of the fog, β.
- estimating the noise in the image caused by fog is equivalent to the task of estimating an optical depth map for a set of points in a foggy input image.
- the appearance of the horizon, L∞, should be white [2], but since cameras are usually governed by automatic exposure circuitry and white balance, this is not always the case. Further, haze may also be coloured.
- the horizon is the brightest part of the image, and we can formulate an estimate of L∞ using the high intensity pixels from the foggy image (steps 2 and 3 in Fig. 3(d)). We can then apply simple colour correction (step 4 in Fig. 3(d)) to ensure that L∞ is white [1]. Once this correction is performed, in greyscale (monochrome) images L∞ = 1; in RGB colour images:
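A minimal numpy sketch of this estimation and correction (steps 2-4 of Fig. 3(d)); the 0.1% top fraction is an assumed parameter for illustration, not a value specified here:

```python
import numpy as np

def estimate_horizon_brightness(img, top_fraction=0.001):
    # Average the brightest pixels of the foggy image as the estimate
    # of the horizon radiance L_inf (cf. steps 2 and 3 of Fig. 3(d)).
    flat = img.reshape(-1, img.shape[-1]).astype(float)
    intensity = flat.mean(axis=1)
    k = max(1, int(len(intensity) * top_fraction))
    brightest = np.argsort(intensity)[-k:]
    return flat[brightest].mean(axis=0)

def white_balance(img, L_inf):
    # Scale each channel so the estimated horizon colour becomes white
    # (cf. step 4 of Fig. 3(d)).
    return np.clip(img.astype(float) / L_inf, 0.0, 1.0)
```

After this correction the horizon colour is (1, 1, 1), matching the assumption that L∞ is white.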
- the first cost values for points x, y and z are shown at 64.
- the values of the cumulative cost function are shown at 68.
- Optimisation of the energy function comprises identifying the values of the points x, y and z that produce a minimum value of E(x,y,z). It can be seen that 2 is the minimum value of E(x,y,z), and this corresponds to the candidate optical depth of 1 for points x, y and z. In general there may be more than one assignment that achieves the minimum value of E(x,y,z); only one of them needs to be identified.
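For a toy problem of this size the optimisation can be done by exhaustive search, which makes the energy definition concrete. The costs below are hypothetical examples, not the values of Fig. 4(b), and real images need graph cuts or similar algorithms rather than enumeration:

```python
from itertools import product

def minimise_energy(data_cost, pair_cost, neighbours, labels):
    # Exhaustively evaluate E over all label assignments and keep one
    # assignment achieving the minimum (feasible only for a few points).
    points = sorted(data_cost)
    best, best_e = None, float("inf")
    for assignment in product(labels, repeat=len(points)):
        lab = dict(zip(points, assignment))
        e = sum(data_cost[p][lab[p]] for p in points)               # first costs
        e += sum(pair_cost(lab[p], lab[q]) for p, q in neighbours)  # second costs
        if e < best_e:
            best, best_e = lab, e
    return best, best_e

# Hypothetical first costs for three points over two candidate depths,
# with a linear smoothness penalty between neighbouring points.
data_cost = {'x': {0: 2, 1: 0}, 'y': {0: 1, 1: 0}, 'z': {0: 0, 1: 1}}
lab, e = minimise_energy(data_cost, lambda a, b: abs(a - b),
                         [('x', 'y'), ('y', 'z')], [0, 1])
```

Here the smoothness penalty pulls z towards the label preferred by its neighbour y, exactly the trade-off the combined energy is meant to express.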
- Example 1 Where no reference image is available
- no additional information about the image is available (such as a less noisy image of the scene captured when there was no fog).
- optical depth values are estimated by trying to match the statistics of the enhanced image to baseline statistics of natural images. Since an image of the scene with less noise should be of higher quality than the captured foggy image, we can estimate the depth of a pixel by determining the distance at which the corrected local image region achieves maximum 'information'. This strategy will only produce an enhanced image - not an estimate of the scene on a clear day.
- contrast is calculated using the sum of image gradient magnitudes.
- each pixel (the centre of the 5 x 5 patch) should be assigned the depth which gives it maximal contrast, and which corresponds to the minimum value in Fig. 5 (a).
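A sketch of this per-patch search, using the fog model and a gradient-magnitude contrast measure as our assumed readings of Eqs. (4) and (5); the patch contents and candidate depths are illustrative only:

```python
import numpy as np

def gradient_contrast(patch):
    # Contrast as the sum of image gradient magnitudes.
    gy, gx = np.gradient(patch.astype(float))
    return float(np.sum(np.hypot(gx, gy)))

def best_depth_for_patch(foggy_patch, candidate_depths, beta=1.0, L_inf=1.0):
    # Enhance the patch under each candidate depth and keep the depth
    # that maximises contrast; over-correcting clips pixel values to
    # [0, 1] and lowers the contrast again.
    best_d, best_c = None, -1.0
    for d in candidate_depths:
        t = np.exp(-beta * d)
        enhanced = np.clip((foggy_patch - L_inf * (1.0 - t)) / t, 0.0, 1.0)
        c = gradient_contrast(enhanced)
        if c > best_c:
            best_d, best_c = d, c
    return best_d

# Fog a 2 x 2 patch at depth 1 (beta = 1) and search over three depths.
true_patch = np.array([[0.2, 0.8], [0.2, 0.8]])
foggy = true_patch * np.exp(-1.0) + (1.0 - np.exp(-1.0))
```

In this constructed case the search recovers the fogging depth, because correcting at depth 2 pushes the dark pixel below zero and the clipping destroys contrast.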
- neighbouring values should be similar; from this we determine one or more reference sets of points having optical depths that conform to this behaviour 70, and incorporate an additional cost of deviating from this pattern (i.e. correlation) as a function of the distance between the labels (i.e. optical depths). This is shown in Fig. 5(b).
- Eq. (4) produces an estimate L0(x) of the scene not affected by scattering or absorption, but requires values of Lobs(x) and d(x) for every point x (where x is a 2D spatial location) as well as global estimates of L∞ and β.
- the number of known variables in Eq.(4) is less than the number of unknown variables.
- the output image, L0, must have better contrast compared to the input image, Lobs.
- the variation of the values of e^(-βd(x)) is dependent solely on the depth of the objects, d(x), implying that objects at the same depth will have the same value of e^(-βd(x)) regardless of their appearance.
- the values of e^(-βd(x)) for neighbouring pixels tend to be the same.
- e^(-βd(x)) changes smoothly across small local areas. The exception is pixels at depth discontinuities, whose number is relatively small.
- C(p(x)) is our cost function 60. While the maximum of C(p(x)) does not always correspond to the actual value of Lpath, it represents the enhanced visibility of the input image. This is suitable since we do not intend to recover the original colour or reflectance of the scene on clear days. Our main purpose is to enhance the visibility of scenes in bad weather with some degree of accuracy in the scene's colours.
- E1(z) is based on the contrast measure discussed previously.
- Each cost function is normalized to occupy the range [0.0, 1.0] by dividing by its maximum value.
- The scaled function is then inverted, so that the minimum of the function corresponds to the best candidate optical depth.
- the second term (the smoothness term) is defined as:
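The equation itself did not survive extraction here. As an illustration only, a common form for such a pairwise smoothness term (not necessarily the patent's exact definition) is a truncated linear penalty on the difference between neighbouring depth labels, where the truncation tolerates genuine depth discontinuities:

```python
def smoothness_cost(label_p, label_q, weight=1.0, truncation=2):
    # Penalise neighbouring points whose depth labels differ, truncated
    # so genuine depth discontinuities are not over-penalised.
    return weight * min(abs(label_p - label_q), truncation)
```

The weight could further be reduced across strong image edges, as suggested by the edge-strength correlation mentioned earlier.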
- the pseudocode of Fig. 9 starts with an iteration over every pixel in the input image; x represents the 2D location of a pixel.
- in step 4.1 we crop an n × n patch which is centred at a location x. n must be small enough to keep valid the assumption that the airlight in the patch is uniform, yet not so small that we could lose the texture or edge information.
- the patch size could be 5 x 5 or 7 x 7, depending on the size and scale of the input image.
- in steps 4.2 and 4.2.1, for every possible value of z, we compute the enhanced version of the patch using Eq. (4).
- in step 4.2.2 we compute the contrast using Eq. (5). After all iterations finish, the contrast measure is normalized and inverted (Eq. (9)), where for each pixel E1(z) is a vector of m values.
- in step 5 of the pseudocode of Fig. 8 we compute the smoothness cost using Eq. (10).
- in step 6, to do the inference in MRFs with a number of labels equal to m, we use the multi-label alpha-expansion graph-cut algorithm (see [4]).
- in step 7 we finally compute the direct attenuation for the whole image from the estimated airlight using Eq. (3a).
- Figs. 10(a) and 10(b) show the results for the airlight, Lpath, and the corresponding enhancement L0 respectively.
- the algorithm is exactly the same, except that each image only contains 2D data, and the horizon brightness L∞ is a scalar and not a three-vector.
- Fig. 11 shows the values of Y and the blurred Y (the initial airlight).
- An enhanced version of the sample image of Fig. 3(a) is shown in Fig. 3(b). This image has been enhanced to remove the noise effect corresponding to the estimated optical depth of each pixel according to Example 1.
- This example utilizes an image L re f of the scene captured in good visibility conditions from which a reference depth can be determined.
- pan-tilt-zoom cameras are popular for surveillance applications; they often have one or more default orientations, which can be treated as fixed vantage points.
- each pixel, when enhanced, should be very similar to its equivalent in the reference image.
- an enhanced pixel may nevertheless differ significantly from the reference image. If the pixel corresponds to a foreground object, it will most likely not occur in the other image, as there is usually a large time gap between the two images. Similarly, in fog, many surfaces become wet, and their appearance may change drastically. Pavement, for instance, becomes darker and possibly reflective if puddles form.
- a probability cost function is based on the differences.
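One plausible form of such a cost (our assumption, echoing the capped curve of Fig. 12(a)): a quadratic penalty on the appearance difference between the enhanced pixel and the reference pixel, saturating at a fixed outlier cost so that foreground objects and wet surfaces are not forced to match. The parameter values are illustrative:

```python
import numpy as np

def reference_data_cost(enhanced_pixel, reference_pixel,
                        sigma=0.1, outlier_cost=1.0):
    # Quadratic penalty on the appearance difference, capped so that
    # foreground objects or wet surfaces that legitimately differ from
    # the reference are not forced to match.
    diff = np.linalg.norm(np.asarray(enhanced_pixel, dtype=float)
                          - np.asarray(reference_pixel, dtype=float))
    return min(diff ** 2 / (2.0 * sigma ** 2), outlier_cost)
```

Because the cost saturates, a badly mismatched pixel pays a bounded penalty and its depth label is then decided mainly by the smoothness term and its neighbours.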
- This example employs a more complex pattern 70 than Example 1, and is based on camera geometry. We assume the camera is placed upright and in a typical surveillance pose (high in the air and tilted down towards the ground). In this situation, pixels towards the bottom of the image should appear close to the camera, while pixels at the top of the image should be far away; see Fig. 12(b).
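This geometric pattern can be sketched as a per-row expected depth profile; the linear ramp below is an assumption for illustration, as any monotone profile consistent with Fig. 12(b) could serve:

```python
import numpy as np

def row_depth_prior(height, max_depth):
    # Expected optical depth per image row: largest at the top row
    # (far away), zero at the bottom row (close to the camera).
    rows = np.arange(height)
    return max_depth * (height - 1 - rows) / (height - 1)
```

Deviations from this profile would then be penalised by the predetermined correlation cost of Fig. 12(c).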
- the alpha-expansion algorithm was used in the previous example. Although it is also able to solve this MRF formulation, we illustrate how different optimisation algorithms can be employed by using the multi-label graph construction of [5]. The algorithm is guaranteed to produce the optimal solution, and initialisation is not necessary. Therefore, unlike the previous example, we do not need to produce a blurred intensity image to generate an initial labelling.
- In Example 2, multiple reference images could be used to reduce the effect of outliers.
- the geometric model of the pattern of Example 2 could be expanded to incorporate considerations related to horizontal position.
- An enhanced version of the sample image of Fig. 3(a) is shown in Fig. 3(c). This image has been enhanced to remove the noise effect corresponding to the estimated optical depth of each pixel according to Example 2.
- the cameras used with this invention can be those limited to the visible spectrum.
- there are limits to the enhancement process. For example, if a large part of the image is completely obscured by fog it is not possible to accurately infer what the scene in the image would look like in good conditions.
- While the two examples above are mutually independent, we note that the pattern aspects of Example 2 can be augmented with the depth estimation technique of Example 1.
- The depth estimation technique of Example 1 can be used in conjunction with the pattern aspects of the reference image solution of Example 2.
- the enhancement operation of Eq. (3) is relatively straightforward if Lpath is fixed.
- the source radiance L0 is computed using one subtraction and one multiplication operation, followed by a clamping function to ensure the result is within the range [0.0, 1.0]. If a computer and software are used to implement the described enhancement method, better performance can be achieved by conducting the enhancement stage with a graphics processing unit (GPU), if one is available within the computer.
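The per-pixel enhancement operation can be sketched directly from this description: subtract the estimated path component, divide by the transmission, and clamp the result to [0.0, 1.0]. The symbols follow the assumptions used earlier in this summary:

```python
import numpy as np

def enhance(L_obs, depth, beta, L_inf=1.0):
    # Subtract the estimated path ("airlight") component, divide by the
    # transmission e^{-beta*d}, then clamp the result to [0.0, 1.0].
    t = np.exp(-beta * np.asarray(depth, dtype=float))
    L0 = (np.asarray(L_obs, dtype=float) - L_inf * (1.0 - t)) / t
    return np.clip(L0, 0.0, 1.0)
```

Applied to an observation synthesised with the same fog model and the correct depth, this inverts the fogging exactly; the same arithmetic maps directly onto a GPU fragment shader.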
- the enhancement operation is written in OpenGL (or any equivalent graphics card language) and downloaded onto the GPU. When each frame is received from the camera, it is also downloaded onto the GPU. When the optical depth map of the scene has been estimated, it too is downloaded onto the GPU. When all the information is available, the program is able to enhance the downloaded camera frame in real-time.
- the central processing unit (CPU) may be used to estimate new depth maps while the enhancement operation is conducted in parallel on the GPU. Each new depth map estimate can be downloaded to the GPU as soon as it is available.
- Tracking can also be used to assist in estimating the noise. For example, by observing objects moving towards or away from the camera at a constant speed (reasonable for vehicles or pedestrians), the estimate of the fog density can be amended by watching how the appearance of the objects degrades or improves as the distance increases or decreases. Moreover, we can also obtain distances along the ground plane by observing the path of the object in the image plane. Cameras mounted close to the ground for license plate reading applications are ideal for this.
- a filter may be used to clean up the raw results.
- a bilateral filter, for instance, considers similarity in the intensity, spatial and temporal domains.
- a per-pixel Gaussian mixture model is able to produce a time-averaged background without corrupting the appearance of foreground objects.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/AU2009/000439 WO2010115228A1 (fr) | 2009-04-09 | 2009-04-09 | Amélioration de données d'images |
US13/263,300 US8837857B2 (en) | 2009-04-09 | 2009-04-09 | Enhancing image data |
AU2009344148A AU2009344148B2 (en) | 2009-04-09 | 2009-04-09 | Enhancing image data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/AU2009/000439 WO2010115228A1 (fr) | 2009-04-09 | 2009-04-09 | Amélioration de données d'images |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010115228A1 true WO2010115228A1 (fr) | 2010-10-14 |
Family
ID=42935574
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2009/000439 WO2010115228A1 (fr) | 2009-04-09 | 2009-04-09 | Amélioration de données d'images |
Country Status (3)
Country | Link |
---|---|
US (1) | US8837857B2 (fr) |
AU (1) | AU2009344148B2 (fr) |
WO (1) | WO2010115228A1 (fr) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254306A (zh) * | 2011-07-14 | 2011-11-23 | 北京邮电大学 | 一种基于图像简化分层模型的实时图像去雾方法 |
CN104574387A (zh) * | 2014-12-29 | 2015-04-29 | 张家港江苏科技大学产业技术研究院 | 水下视觉slam系统中的图像处理方法 |
US20200026960A1 (en) * | 2018-07-17 | 2020-01-23 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
US11430210B2 (en) * | 2020-06-18 | 2022-08-30 | Raytheon Company | Methods and system for estimation of lambertian equivalent reflectance for reflective band imagery |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101736468B1 (ko) * | 2012-12-24 | 2017-05-29 | 한화테크윈 주식회사 | 영상 처리 장치 및 방법 |
CN106462947B (zh) | 2014-06-12 | 2019-10-18 | Eizo株式会社 | 除雾装置及图像生成方法 |
US9508126B2 (en) * | 2015-02-17 | 2016-11-29 | Adobe Systems Incorporated | Image haze removal using fast constrained transmission estimation |
CN104941698B (zh) * | 2015-06-02 | 2017-04-26 | 云南省交通规划设计研究院 | 一种雾霾模拟装置 |
KR102300531B1 (ko) | 2015-09-18 | 2021-09-09 | 서강대학교산학협력단 | 영상 연무 제거 장치 및 영상 연무 제거 방법 |
US9870511B2 (en) * | 2015-10-14 | 2018-01-16 | Here Global B.V. | Method and apparatus for providing image classification based on opacity |
JP6361631B2 (ja) * | 2015-10-29 | 2018-07-25 | Smk株式会社 | 車載センサ、車両用灯具及び車両 |
US10269098B2 (en) * | 2016-11-01 | 2019-04-23 | Chun Ming Tsang | Systems and methods for removing haze in digital photos |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080069445A1 (en) * | 2003-03-07 | 2008-03-20 | Martin Weber | Image processing apparatus and methods |
US20080193034A1 (en) * | 2007-02-08 | 2008-08-14 | Yu Wang | Deconvolution method using neighboring-pixel-optical-transfer-function in fourier domain |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7013439B2 (en) * | 2002-01-31 | 2006-03-14 | Juan Andres Torres Robles | Contrast based resolution enhancing technology |
US8396324B2 (en) * | 2008-08-18 | 2013-03-12 | Samsung Techwin Co., Ltd. | Image processing method and apparatus for correcting distortion caused by air particles as in fog |
- 2009-04-09 AU AU2009344148A patent/AU2009344148B2/en not_active Ceased
- 2009-04-09 WO PCT/AU2009/000439 patent/WO2010115228A1/fr active Application Filing
- 2009-04-09 US US13/263,300 patent/US8837857B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080069445A1 (en) * | 2003-03-07 | 2008-03-20 | Martin Weber | Image processing apparatus and methods |
US20080193034A1 (en) * | 2007-02-08 | 2008-08-14 | Yu Wang | Deconvolution method using neighboring-pixel-optical-transfer-function in fourier domain |
Non-Patent Citations (2)
Title |
---|
NAYAR, S.K. ET AL.: "Vision in bad weather", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 1999) * |
TAN, R.T.: "Visibility in Bad Weather from A Single Image", IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2008) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102254306A (zh) * | 2011-07-14 | 2011-11-23 | 北京邮电大学 | 一种基于图像简化分层模型的实时图像去雾方法 |
CN104574387A (zh) * | 2014-12-29 | 2015-04-29 | 张家港江苏科技大学产业技术研究院 | 水下视觉slam系统中的图像处理方法 |
US20200026960A1 (en) * | 2018-07-17 | 2020-01-23 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
US11604944B2 (en) * | 2018-07-17 | 2023-03-14 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
US11921502B2 (en) | 2018-07-17 | 2024-03-05 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
US11430210B2 (en) * | 2020-06-18 | 2022-08-30 | Raytheon Company | Methods and system for estimation of lambertian equivalent reflectance for reflective band imagery |
Also Published As
Publication number | Publication date |
---|---|
AU2009344148B2 (en) | 2015-10-22 |
US8837857B2 (en) | 2014-09-16 |
AU2009344148A1 (en) | 2011-11-03 |
US20130141594A1 (en) | 2013-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2009344148B2 (en) | Enhancing image data | |
Zhang et al. | Fast haze removal for nighttime image using maximum reflectance prior | |
Wang et al. | Single image defogging by multiscale depth fusion | |
Fattal | Dehazing using color-lines | |
Guo et al. | An efficient fusion-based defogging | |
Hautiere et al. | Blind contrast enhancement assessment by gradient ratioing at visible edges | |
Tan | Visibility in bad weather from a single image | |
Tripathi et al. | Single image fog removal using anisotropic diffusion | |
US9426444B2 (en) | Depth measurement quality enhancement | |
Tripathi et al. | Removal of fog from images: A review | |
Yuan et al. | Image haze removal via reference retrieval and scene prior | |
Carr et al. | Improved single image dehazing using geometry | |
Babu et al. | A survey on analysis and implementation of state-of-the-art haze removal techniques | |
US20180211446A1 (en) | Method and apparatus for processing a 3d scene | |
Kerl et al. | Towards illumination-invariant 3D reconstruction using ToF RGB-D cameras | |
Yuan et al. | Image dehazing based on a transmission fusion strategy by automatic image matting | |
Yuan et al. | A confidence prior for image dehazing | |
Sahu et al. | Image dehazing based on luminance stretching | |
Raikwar et al. | An improved linear depth model for single image fog removal | |
Khan et al. | Recent advancement in haze removal approaches | |
Hong et al. | Single image dehazing based on pixel-wise transmission estimation with estimated radiance patches | |
CN109118546A (zh) | 一种基于单帧图像的景深等级估计方法 | |
Khmag | Image dehazing and defogging based on second-generation wavelets and estimation of transmission map | |
Wang et al. | An airlight estimation method for image dehazing based on gray projection | |
Kim | Edge-preserving and adaptive transmission estimation for effective single image haze removal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09842848 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2009344148 Country of ref document: AU Date of ref document: 20090409 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13263300 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09842848 Country of ref document: EP Kind code of ref document: A1 |