TWI451342B - Shadow Removal Method in Mobile Light Source Environment - Google Patents

Shadow Removal Method in Mobile Light Source Environment

Info

Publication number
TWI451342B
TWI451342B
Authority
TW
Taiwan
Prior art keywords
image
illumination
shadow
foreground
background model
Prior art date
Application number
TW099137170A
Other languages
Chinese (zh)
Other versions
TW201218090A (en)
Original Assignee
Univ Nat Chiao Tung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Chiao Tung filed Critical Univ Nat Chiao Tung
Priority to TW099137170A priority Critical patent/TWI451342B/en
Publication of TW201218090A publication Critical patent/TW201218090A/en
Application granted granted Critical
Publication of TWI451342B publication Critical patent/TWI451342B/en

Landscapes

  • Image Analysis (AREA)

Description

Shadow removal method in a moving light source environment

The present invention relates to a shadow removal method in a moving light source environment, and in particular to removing, in a nighttime environment, the shadows that moving light sources cast from foreground objects, so as to improve the accuracy of object detection.

A "smart living environment" has long been a human aspiration, and constructing intelligent environments that emulate the various human perceptions has become a prominent research direction in recent years. The goal of smart technology is to give systems and devices a better awareness of the real world and, drawing on the expertise of specialists in various fields, to sense environmental conditions and then actively respond to or modify the environment in order to serve, inform, or alert people, thereby realizing a smart environment. Computer vision plays a key role in building such environments: capturing images of the surroundings with image acquisition devices and developing software systems that emulate the perception and response of human vision is essential to making an environment intelligent. The technical requirements of vision-based security surveillance fall into three broad categories: moving object detection and tracking, motion analysis, and behavior understanding. If the foreground objects in a frame can be reliably separated from the background, subsequent tracking, motion analysis, and behavior understanding all benefit greatly. Although considerable research already exists on foreground object detection, most studies focus on proposing detection methods for various environments and only occasionally propose methods for removing the object shadows caused by ambient light.

Prior work on object shadow detection and removal can be roughly divided into the following categories:

1. Pixel projection: all foreground pixels are projected vertically and horizontally to locate the junction between the bottom of an object and its shadow. The drawbacks are that the light source direction must be known, only upright objects can be handled, and the shadow must be cast on the ground.

2. Regional brightness analysis: the image is divided into blocks, the average brightness of each block is computed and sorted, and the median is taken as the threshold for identifying and removing shadow blocks. The drawback is that both the light source and the background must each have stable brightness.

3. Histogram statistics: brightness distribution statistics are used to probabilistically refine candidate foreground pixels into shadow pixels, foreground object pixels, or pixels reclassified as background. The drawbacks are that the shadows must be assumed to fall on a flat road surface and the relative position of the light source and the camera must be known.

4. Color invariance: the luminance and chrominance of candidate foreground pixels, together with their degree of distortion in the HSV or RGB color space, are compared against several thresholds to subdivide candidate foreground pixels into foreground, background, highlight, and shadow. This method requires a white light source and assumes that shadowed and non-shadowed regions have similar chromaticity; shadows in dim nighttime environments are therefore easily misclassified.

5. Physics-based methods: based on physical models of illumination and reflection, a spatio-temporal albedo test distinguishes whether a detected sub-region is a shadow or part of the object itself. This approach suits stable ambient light levels and requires additional handling of highlights and specular reflections from mirror-like surfaces.

The main differences between nighttime and daytime environments are that the brightness and color differences between foreground objects and the background are less pronounced, shadows have a stronger influence, and the scene is more susceptible to drastic changes in illumination. Outdoor scenes at night are frequently lit by the headlights of passing vehicles; the rapidly changing light produces rapidly changing shadows, which makes shadow removal considerably harder and highlights the technical advantage of the present invention.

For prior art related to shadow removal under moving light sources, please refer to patents TW I298857, TW I323434, TW I250466, TW I323434, TW I298857, TW I220969, TW201002073, TW201025189, TW201025193, TW201025198, TW201019268, TW201021574, TW201026081, TW201001338, TW200912772, US5402118, US5548659, US6259802, US6950123, US7199821, US2003/0123703, US2006/0285723, and US2007/0127774.

The inventors also published "Illumination Measurement and Object Shadow Removal in Multiple Light Source Environments" in July 2009, and "Shadow Model Construction and Foreground Object Shadow Removal in Multiple Light Source Environments" at the 2009 National Computer Symposium in November 2009; these works serve as the basis for constructing the "original background model" in the training phase of the present invention. Starting from this original background model, the invention dynamically simulates a current background model according to the illumination characteristics of the current frame, unaffected by foreground objects, thereby improving both foreground pixel detection and shadow pixel classification.

The primary object of the present invention is to provide a method for identifying and removing foreground object shadows in nighttime surveillance scenes illuminated by moving light sources, where the video frames contain changing and irregular illumination regions, so as to reduce the influence of shadows on object detection and improve the applicability and effectiveness of video surveillance systems at night.

A second object of the present invention is to provide a method that, when detecting foreground objects with a camera, analyzes the current frame and dynamically adjusts the original background model into a current background model in order to improve the accuracy of object localization.

Another object of the present invention is to provide a method for constructing a shadow model in nighttime environments by analyzing the color characteristics of the shadows that light sources of different colors cast on objects of different translucency.

A further object of the present invention is to provide a method for automatically detecting and analyzing the number, positions, and color distributions of the illumination regions in surveillance frames of a nighttime environment lit by moving light sources.

Yet another object of the present invention is to provide a method for simulating the current background: starting from a pre-trained original background model, the illumination regions of the current frame are automatically reproduced in the original background model by gradient rendering, yielding a current background model.

Still another object of the present invention is to provide a pixel-based shadow removal method that first detects the foreground pixels in the video frame using the simulated current background model and then, using the illumination region characteristics together with the corresponding shadow model, identifies and removes the shadow pixels.

The shadow removal method in a moving light source environment that achieves the above objects comprises two parts: a conventional training phase and the testing phase of the present invention. In the training phase, a sufficient number of frames are first extracted from a pre-recorded video, the color distribution of the pixels at each position is computed across the frames, and the color mean and variance at each position are recorded to construct the original background model. In addition, light sources of common colors (such as white and yellow) are used to illuminate objects of different translucency, the shadow regions are extracted, and the distribution of the shadow pixel colors is recorded as the shadow model.
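
As a rough illustration only (not taken from the patent), the per-pixel color mean and variance of the original background model could be accumulated from the training frames as in the following Python/NumPy sketch; the function name and the use of NumPy are assumptions made for the example.

```python
import numpy as np

def build_background_model(frames):
    """Per-pixel color mean and variance over a stack of training frames.

    frames: iterable of H x W x 3 uint8 arrays taken from a fixed camera
    returns: (mean, variance) arrays of shape H x W x 3
    """
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    mean = stack.mean(axis=0)   # per-pixel color average
    var = stack.var(axis=0)     # per-pixel color variance
    return mean, var
```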

In the testing phase, the pixel brightness of the video frame is first analyzed to find several bright extreme points; a diffusion search then determines the number and positions of the illumination regions, and the color distribution of each region is recorded. Using these illumination region characteristics (number, position, and color distribution) and starting from the original background model, gradient rendering automatically simulates illumination regions of the same number, at the same positions, and with similar color distributions, producing the current background model, which is then used to detect foreground pixels. The detected foreground pixels are passed through the shadow model, and those that satisfy the shadow conditions are treated as shadows and removed. The remaining foreground pixels are grouped into foreground objects, yielding more accurate foreground object detection results.

Referring to Figure 2, the shadow removal method in a moving light source environment provided by the present invention comprises the following steps: frame extreme point detection, illumination region range determination, illumination region characteristic analysis, illumination region simulation, background model revision, foreground pixel detection, shadow pixel determination and removal, and foreground object grouping and refinement.

The main purpose of the invention is to remove, in nighttime environments, the shadows that moving light sources cast from foreground objects, thereby improving the accuracy of object detection. The camera is assumed to be fixed, and the influence of other complex light sources such as neon signs is not considered. Background subtraction is therefore adopted for object detection; it is fast, simple to use, and yields good foreground object results.

Background subtraction requires that the original background model and the shadow model be constructed in advance. The construction method was presented at the 2009 National Computer Symposium in "Shadow Model Construction and Foreground Object Shadow Removal in Multiple Light Source Environments". As shown in Figure 1 and Appendices 1(A)-(B), an unselected, fixed number of frames (roughly several tens to several hundreds) are extracted from a pre-recorded video, the color distribution of the pixels at each position is computed, and the color mean and variance at each position are recorded to build an initial original background model. In addition, light sources of common colors (such as white and yellow) illuminate objects of different translucency, the shadow regions are extracted, and the distribution of the shadow pixel colors is recorded as the shadow model.

Because nighttime environments are dim, each light source casts pronounced, multiple shadows from foreground objects, seriously degrading detection accuracy. The number, orientation, and color of the light sources produce shadow characteristics of varying degree; the present invention targets the white and yellow light sources commonly found indoors and outdoors at night, proposes building a shadow model for multiple light source environments, and applies it to moving object detection at night. The shadow model is built by analyzing, in a multiple light source environment, the shadow characteristics each light source produces on objects of different translucency, and is applied to moving object detection in nighttime scenes. Because different light source colors (the number, orientation, and color values of the light sources) yield shadow characteristics of differing degree, objects of different translucency (an opaque object, a semi-translucent object, and a fully translucent object) are placed at the center of a sheet of pure white calibration paper in the middle of the illumination region, and frames are captured under a white light environment (Appendices 2(A)-(C)) and a yellow light environment (Appendices 3(A)-(C)); the shadow characteristics are then analyzed to build the shadow model. Taking an opaque object under white light (Appendix 2(A)) as an example, the object pixels (Appendices 4(A)-(C)) and shadow pixels (Appendices 5(A)-(C)) are clearly separated in the RGB color model, so suitable thresholds can be defined to distinguish the two classes of pixels. Under yellow light, however, the distributions of object pixels (Appendices 6(A)-(C)) and shadow pixels (Appendices 7(A)-(C)) of an opaque object (Appendix 3(A)) are difficult to separate in the RGB color model; the pixels must be converted to the YCbCr color model, and multiple thresholds are defined to separate object pixels (Appendices 8(A)-(C)) from shadow pixels (Appendices 9(A)-(C)). By analyzing the shadow characteristics each light source produces on objects of different translucency, shadow model entries of the form (Type_light, Type_perv, ClrInfo_shadow) are built, where Type_light is the light source color type, Type_perv is the object translucency type, and ClrInfo_shadow is the color characteristics of the shadow pixels.
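
One possible way to organize the resulting shadow model is as a small lookup keyed by light source color type and object translucency; the following sketch is an assumption made for illustration only — the field names and the numeric color bounds are placeholders, not values taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class ShadowEntry:
    light_type: str      # e.g. "white" or "yellow"
    translucency: str    # "opaque", "semi", or "transparent"
    color_space: str     # "RGB" for white light, "YCbCr" for yellow light
    lower: tuple         # lower color bounds for shadow pixels
    upper: tuple         # upper color bounds for shadow pixels

# Illustrative entries only; real bounds would come from the training
# statistics described above.
shadow_model = {
    ("white", "opaque"): ShadowEntry("white", "opaque", "RGB",
                                     (20, 20, 20), (90, 90, 90)),
    ("yellow", "opaque"): ShadowEntry("yellow", "opaque", "YCbCr",
                                      (30, 110, 130), (100, 135, 160)),
}
```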

As shown in Figure 2, the invention first captures the video frame using the previously built original background model and shadow model, then analyzes the pixel brightness in the frame, finds several bright extreme points, and uses a diffusion search to determine the number and positions of the illumination regions, recording the color distribution of each region. Then, using these illumination region characteristics (number, position, and color distribution) and starting from the original background model, gradient rendering automatically simulates illumination regions of the same number, at the same positions, and with similar color distributions, producing the current background model, which is used to detect foreground pixels. The detected foreground pixels are then passed through the shadow model; those that satisfy the shadow conditions are treated as shadows and removed. The remaining foreground pixels are grouped into foreground objects, yielding more accurate foreground object detection results.

As shown in Figure 2 and Appendices 10(A)-(B), the frame extreme point detection step computes the brightness distribution of all pixels in the frame and takes the brightest valley of the distribution as a threshold to identify the relatively bright pixels as extreme points. The step automatically detects the video frame and analyzes its extreme points: because the illumination regions in a frame exhibit brightness clustering and gradual fall-off, the brightness distribution differences of all pixels are computed first, the brightest valley of the distribution is taken as the threshold, the relatively bright pixels are identified as extreme points, and each extreme point is set as the center of an illumination region.
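
A minimal sketch of this step, assuming a grayscale frame and omitting the histogram smoothing a practical implementation would need, might look as follows; the fallback threshold of 200 is an arbitrary placeholder, not a value from the patent.

```python
import numpy as np

def find_illumination_centers(gray):
    """Threshold a grayscale frame at the brightest valley of its histogram
    and return the coordinates of the bright 'extreme point' pixels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # a bin is a valley if it is lower than both of its neighbours
    valleys = [i for i in range(1, 255)
               if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]
    threshold = max(valleys) if valleys else 200   # brightest valley
    ys, xs = np.nonzero(gray > threshold)
    return threshold, list(zip(xs, ys))
```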

As shown in Appendices 10(C)-(D), the illumination region range determination step uses the extreme points of the frame as diffusion seeds and applies a diffusion search to automatically identify each illumination region and compute its extent in the frame. The extreme point at the center of each illumination region is first set as the diffusion seed, and the search proceeds outward layer by layer to neighboring pixels, comparing each neighbor's brightness with that of the extreme point center; if the difference is too large, the pixel is assigned as an edge point of the illumination region. Finally, the extreme center point, the edge point positions in each direction, and the color values are recorded as the characteristic information of the illumination region.
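
The diffusion search could be sketched as a breadth-first region growing from each seed, as below; the brightness-difference tolerance max_diff is an assumed parameter, not a value given in the patent.

```python
from collections import deque
import numpy as np

def grow_illumination_region(gray, seed, max_diff=40):
    """Breadth-first diffusion search from a bright seed pixel.
    Pixels whose brightness differs from the seed by more than max_diff
    are treated as edge points of the illumination region."""
    h, w = gray.shape
    seed_x, seed_y = seed
    seed_val = float(gray[seed_y, seed_x])
    visited = np.zeros((h, w), dtype=bool)
    region, edges = [], []
    queue = deque([(seed_x, seed_y)])
    visited[seed_y, seed_x] = True
    while queue:
        x, y = queue.popleft()
        if abs(float(gray[y, x]) - seed_val) > max_diff:
            edges.append((x, y))        # brightness falls off: region edge
            continue
        region.append((x, y))
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not visited[ny, nx]:
                visited[ny, nx] = True
                queue.append((nx, ny))
    return region, edges
```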

The illumination region simulation step automatically reproduces the illumination regions by gradient rendering to obtain the current background model used to revise the background model: starting from the original background model, illumination regions of the same number, at the same positions, and with similar color distributions are simulated automatically, and the current background model is produced. To reproduce each illumination region in the background model so that it matches the current state of the monitored scene, the invention adds the illumination regions to the background model by gradient rendering. First, from the distance and color difference between the center of an illumination region and its edge points, the respective change ratios of the RGB primaries (DiffRatR, DiffRatG, DiffRatB) are computed. Then, in the background model image, starting at the center of the illumination region and proceeding outward layer by layer according to the distance from the center, the original pixel values are gradually increased in proportion for each RGB channel, brightening the pixels at those positions (as shown in Appendices 11(A)-(C)); the revised background model with simulated illumination is then used to detect the foreground objects of the current frame. As shown in Appendices 12(A)-(C), the foreground object detection step uses the current background model with simulated illumination and performs foreground pixel detection by background subtraction. For each video frame, the invention first determines the characteristics of the illumination regions and, through layer-by-layer rendering, adds illumination regions at the corresponding positions to the original background model to produce the current background model used for foreground pixel detection, greatly reducing the interference caused by the illumination regions.
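
The gradient rendering and the subsequent background subtraction might be sketched as follows; the linear fall-off and the way the per-channel change ratios (diff_rat) are derived from the region center color are one plausible reading of the description, not the patent's exact formulation, and the threshold in detect_foreground is an assumed value.

```python
import numpy as np

def render_illumination(background, center, edges, center_color):
    """Brighten the background model around an illumination region center,
    fading linearly with distance, and return the simulated background."""
    bg = background.astype(np.float64).copy()
    cx, cy = center
    edge_pts = np.array(edges, dtype=np.float64)
    radius = np.mean(np.hypot(edge_pts[:, 0] - cx, edge_pts[:, 1] - cy))
    # per-channel change ratio between the region center and the background
    diff_rat = (np.array(center_color, dtype=np.float64) - bg[cy, cx]) / radius
    ys, xs = np.mgrid[0:bg.shape[0], 0:bg.shape[1]]
    dist = np.hypot(xs - cx, ys - cy)
    falloff = np.clip(radius - dist, 0, None)   # strongest at the center
    bg += falloff[..., None] * diff_rat         # proportional brightening
    return np.clip(bg, 0, 255).astype(np.uint8)

def detect_foreground(frame, simulated_bg, thresh=30):
    """Plain background subtraction against the simulated background."""
    diff = np.abs(frame.astype(np.int16) - simulated_bg.astype(np.int16))
    return diff.max(axis=2) > thresh            # boolean foreground mask
```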

As shown in Appendices 13(A)-(D), the shadow pixel determination and removal step takes the foreground pixels obtained by background subtraction against the current background model with simulated illumination and, using the shadow models built in advance for the shadows that light sources of different colors cast on objects, decides for every foreground pixel whether it is a shadow pixel and removes it. First, each foreground pixel is compared with the color of the background pixel at the same position; if it simultaneously satisfies the two conditions of being darker than the background and having darkening of similar magnitude across the color components, the foreground pixel at that position is judged to be a shadow pixel and is removed. The foreground component grouping and refinement step then takes the foreground pixels remaining after shadow removal, groups connected foreground pixels into foreground components by connected component detection, and, according to the size and spacing of the components, removes noise components and merges nearby components, completing the final foreground object detection result.
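
A hedged sketch of the two shadow conditions (darker than the background, with similar per-channel darkening) is given below; min_darken and ratio_tol are assumed tolerances, and a real implementation would follow this with connected component labeling (for example scipy.ndimage.label) to group the remaining foreground pixels.

```python
import numpy as np

def remove_shadow_pixels(frame, background, fg_mask,
                         min_darken=10, ratio_tol=0.15):
    """Drop foreground pixels that look like shadows: darker than the
    background in every channel, with similar darkening ratios per channel."""
    frm = frame.astype(np.float64)
    bg = background.astype(np.float64)
    darken = bg - frm                                    # positive when darker
    darker = np.all(darken > min_darken, axis=2)         # condition 1
    ratio = darken / np.maximum(bg, 1.0)                 # per-channel darkening
    similar = (ratio.max(axis=2) - ratio.min(axis=2)) < ratio_tol  # condition 2
    shadow = fg_mask & darker & similar
    return fg_mask & ~shadow                             # keep non-shadow pixels
```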

Compared with the cited prior art and other conventional techniques, the shadow removal method in a moving light source environment provided by the present invention has the following advantages:

1. Starting from the background model, the invention dynamically adds the illumination regions to the background model by automatic simulation according to the illumination region characteristics of the current frame, producing a current background model that is unaffected by the foreground and improves both foreground detection and shadow pixel classification. For nighttime video surveillance systems, the invention provides more accurate foreground detection, improving the effectiveness of subsequent moving object tracking and behavior analysis.

2. The invention captures nighttime scenes with an ordinary camera; even under moving light sources, it can dynamically simulate a background model of the current scene by analyzing the illumination region characteristics in the video frames and, combined with shadow models for light sources of different colors, remove a large proportion of shadow pixels and improve object detection accuracy. Applying this invention can increase market share and product competitiveness in the security video surveillance industry.

In summary, the present application is innovative both in its research method and in its practical application, and improves on existing methods in the ways described above; it should therefore fully satisfy the statutory requirements of novelty and inventive step for an invention patent, and the application is respectfully submitted for approval.

Figure 1 shows the shadow model training and construction flow;

Figure 2 shows the foreground object detection flow in a changing light source environment;

Appendices 1(A)-(B) show the original background model conventionally built in advance from a fixed number of frames: (A) the frames, (B) the completed original background model;

Appendices 2(A)-(C) show the shadow model construction under simulated light sources of different colors: objects of different translucency ((A) an opaque object, (B) a semi-translucent object, (C) a fully translucent object) are placed at the center of pure white paper and frames are captured under a white light environment;

Appendices 3(A)-(C) show the shadow model construction under simulated light sources of different colors: objects of different translucency ((A) an opaque object, (B) a semi-translucent object, (C) a fully translucent object) are placed at the center of pure white paper and frames are captured under a yellow light environment;

Appendices 4(A)-(C) show the distribution of object pixels in the RGB color model for an opaque object under white light;

Appendices 5(A)-(C) show the distribution of shadow pixels in the RGB color model for an opaque object under white light;

Appendices 6(A)-(C) show the distribution of object pixels in the RGB color model for an opaque object under yellow light;

Appendices 7(A)-(C) show the distribution of shadow pixels in the RGB color model for an opaque object under yellow light;

Appendices 8(A)-(C) show the distribution of object pixels in the YCbCr color model for an opaque object under yellow light;

Appendices 9(A)-(C) show the distribution of shadow pixels in the YCbCr color model for an opaque object under yellow light;

Appendices 10(A)-(D) illustrate the illumination region determination method: (A) the video frame, (B) the brightness distribution of all pixels in the frame, whose brightest valley serves as the threshold for finding brightness extreme points, (C) the brightness extreme points of the frame used as diffusion seeds, (D) the illumination region extent computed in the frame by the diffusion search;

Appendices 11(A)-(C) show the original background model with simulated illumination revised into the current background model: (A) the illumination region detected in the current frame, (B) the original background model, (C) the current background model after simulating the illumination;

Appendices 12(A)-(C) show the foreground pixel detection results: (A) the current frame, (B) using the original background model, (C) using the current background model with simulated illumination;

Appendices 13(A)-(D) show the shadow pixel determination and removal step: (A) the current frame, (B) the current background model with simulated illumination, (C) the foreground pixels before shadow removal, (D) the foreground pixels after shadow removal, which can be grouped into more accurate foreground objects.

Claims (10)

1. A shadow removal method in a moving light source environment, comprising: when recording the characteristic information of each illumination region in a video frame, recording not only the bright position of the extreme point center and the positions of the surrounding edge points but also the color values of the illumination region center and of each edge; in order to reproduce each illumination region in the background model so that it matches the current state of the monitored scene, adding the illumination regions to the background model by simulation using gradient rendering; first computing, from the distance and color difference between the illumination region center and the edge point positions, the respective change ratios of the RGB primaries (DiffRatR, DiffRatG, DiffRatB); then, in the background model image, starting from the illumination region center and proceeding outward layer by layer according to the distance from the center, gradually increasing the original pixel values in proportion for each RGB channel so that the pixels at those positions are brightened; and using the revised background model with simulated illumination to detect the foreground objects of the current frame.

2. The shadow removal method in a moving light source environment of claim 1, wherein the extreme points are obtained by automatically detecting and analyzing the illumination regions and brightness characteristics in the frame, computing the brightness distribution differences of all pixels in the frame, taking the brightest valley in the frame as a threshold, and identifying the relatively bright pixels as extreme points.

3. The shadow removal method in a moving light source environment of claim 1, wherein the background model with simulated illumination further comprises shadow pixel determination and removal using the light source color determination and the shadow model: compared with the background pixel color at the same position, the darkening of the three color components should be of similar magnitude; if a pixel simultaneously satisfies the two conditions of being darker than the background and having darkening of similar magnitude across the color components, the foreground pixel at that position is judged to be a shadow pixel and is removed.
4. A shadow removal method in a moving light source environment, comprising: detecting a video frame; frame extreme point detection, computing the brightness distribution of all pixels in the frame and taking the brightest valley as a threshold to identify the relatively bright pixels as extreme points; illumination region range determination, using the extreme points of the frame as diffusion seeds and applying a diffusion search to automatically identify and characterize each illumination region and compute its extent in the frame; illumination region characteristic analysis, building a shadow model by analyzing, in a multiple light source environment, the shadow characteristics each light source produces on objects of different translucency; illumination region simulation, automatically simulating the illumination regions by gradient rendering to obtain a current background model for revising the background model; background model revision, computing the respective changes of the RGB values from the distance and color difference between the illumination region center and the edge point positions as the scaling reference for rendering and brightening, thereby constituting a background model with simulated illumination; foreground object detection, performing foreground pixel detection by background subtraction using the current background model with simulated illumination; shadow pixel determination and removal, taking the foreground pixels obtained by background subtraction against the current background model with simulated illumination and, using shadow models built in advance for the shadows that light sources of different colors cast on objects, determining and removing shadow pixels among all foreground pixels; foreground component grouping and refinement, detecting the foreground objects of the current frame with the revised background model with simulated illumination; and completing the foreground objects, so that the foreground objects are detected accurately.

5. The shadow removal method in a moving light source environment of claim 4, wherein the frame extreme point detection automatically detects and analyzes the illumination regions and brightness characteristics in the frame; since the illumination regions exhibit brightness clustering and gradual fall-off, the brightness distribution differences of all pixels in the frame are computed, the brightest valley in the frame is taken as a threshold, and the relatively bright pixels are identified as extreme points.
6. The shadow removal method in a moving light source environment of claim 4, wherein the illumination region range determination sets the extreme point at the center of an illumination region as a diffusion seed, searches outward layer by layer for neighboring pixels, and compares each with the brightness of the extreme point center; if the difference is too large, the pixel is assigned as an edge point of the illumination region; the extreme center point, the edge point positions in each direction, and the color values are recorded as the characteristic information of the illumination region.

7. The shadow removal method in a moving light source environment of claim 4, wherein the illumination region simulation, starting from the background model, automatically simulates the illumination regions into the background model by gradient rendering, thereby automatically producing a current background model, which is then used for foreground detection.

8. The shadow removal method in a moving light source environment of claim 4, wherein the background model revision revises the background model into the current background model: in the background model, starting from the illumination region center and proceeding outward layer by layer according to the distance from the center, the original pixel values are gradually increased in proportion for each RGB channel, brightening the pixels at those positions and constituting a background model with simulated illumination.

9. The shadow removal method in a moving light source environment of claim 4, wherein the foreground object detection, for each video frame, first determines the characteristics of the illumination regions and, through layer-by-layer rendering, adds illumination regions at the corresponding positions to the original background model to produce the current background model used for detecting foreground pixels.

10. The shadow removal method in a moving light source environment of claim 4, wherein the shadow pixel determination and removal compares each foreground pixel with the color of the background pixel at the same position; if the pixel simultaneously satisfies the two conditions of being darker than the background and having darkening of similar magnitude across the color components, the foreground pixel at that position is judged to be a shadow pixel and is removed.
TW099137170A 2010-10-29 2010-10-29 Shadow Removal Method in Mobile Light Source Environment TWI451342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW099137170A TWI451342B (en) 2010-10-29 2010-10-29 Shadow Removal Method in Mobile Light Source Environment

Publications (2)

Publication Number Publication Date
TW201218090A TW201218090A (en) 2012-05-01
TWI451342B (en) 2014-09-01

Family

ID=46552411

Family Applications (1)

Application Number Title Priority Date Filing Date
TW099137170A TWI451342B (en) 2010-10-29 2010-10-29 Shadow Removal Method in Mobile Light Source Environment

Country Status (1)

Country Link
TW (1) TWI451342B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11876945B2 (en) 2020-01-21 2024-01-16 Mobile Drive Netherlands B.V. Device and method for acquiring shadow-free images of documents for scanning purposes

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601245B (en) * 2021-07-07 2023-12-12 同方威视技术股份有限公司 Shadow eliminating device and method, empty disc identifying device and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030123703A1 (en) * 2001-06-29 2003-07-03 Honeywell International Inc. Method for monitoring a moving object and system regarding same
TWI250466B (en) * 2004-04-29 2006-03-01 Ind Tech Res Inst Object shadow detection method
US7720257B2 (en) * 2005-06-16 2010-05-18 Honeywell International Inc. Object tracking system
US7801330B2 (en) * 2005-06-24 2010-09-21 Objectvideo, Inc. Target detection and tracking from video streams
TW201001338A (en) * 2008-06-16 2010-01-01 Huper Lab Co Ltd Method of detecting moving objects
TW201019268A (en) * 2008-11-06 2010-05-16 Ind Tech Res Inst Method for detecting shadow of object

Also Published As

Publication number Publication date
TW201218090A (en) 2012-05-01

Similar Documents

Publication Publication Date Title
Huang et al. A real-time object detecting and tracking system for outdoor night surveillance
JP4477221B2 (en) How to determine the orientation of an image containing a blue sky
Xiong et al. Color sensors and their applications based on real-time color image segmentation for cyber physical systems
JP2001195591A (en) Method for detecting void in image
Wang et al. A new fire detection method using a multi-expert system based on color dispersion, similarity and centroid motion in indoor environment
CN103208126A (en) Method for monitoring moving object in natural environment
CN105469427B (en) One kind is for method for tracking target in video
Huerta et al. Exploiting multiple cues in motion segmentation based on background subtraction
CN108921215A (en) A kind of Smoke Detection based on local extremum Symbiotic Model and energy spectrometer
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
Zhang et al. Application research of YOLO v2 combined with color identification
Kim et al. Color segmentation robust to brightness variations by using B-spline curve modeling
TWI451342B (en) Shadow Removal Method in Mobile Light Source Environment
Aghaei et al. A flying gray ball multi-illuminant image dataset for color research
CN114067172A (en) Simulation image generation method, simulation image generation device and electronic equipment
Wei et al. Simulating shadow interactions for outdoor augmented reality with RGBD data
Hanji et al. Hdr4cv: High dynamic range dataset with adversarial illumination for testing computer vision methods
Zhang et al. Real-time fire detection using video sequence data
CN114296556A (en) Interactive display method, device and system based on human body posture
Huang et al. A physical approach to moving cast shadow detection
Sethu et al. A Comprehensive Review of Deep Learning based Illumination Estimation
Shengze et al. Research based on the HSV humanoid robot soccer image processing
Wiesemann et al. Fog augmentation of road images for performance analysis of traffic sign detection algorithms
Muthukumar et al. Real time insignificant shadow extraction from natural sceneries
Wang et al. Task-driven image preprocessing algorithm evaluation strategy