TWI463416B - A foreground separation method - Google Patents
A foreground separation method
- Publication number
- TWI463416B (application TW100148779A)
- Authority
- TW
- Taiwan
- Prior art keywords
- image
- foreground
- background
- present
- path
- Prior art date
Landscapes
- Image Analysis (AREA)
Description
The present invention relates to a foreground separation method, and more particularly to a foreground separation method for identifying the foreground image corresponding to a designated target.
Technology today advances rapidly: both hardware computing speed and software techniques have achieved impressive results and continue to improve. Research in image processing and computer vision has grown especially quickly in recent years and is therefore widely applied, and dynamic image analysis and tracking have likewise matured. In security surveillance, beyond basic monitoring and video archiving, computer vision and image processing techniques are needed to process and analyze the consecutive images captured by a camera, so as to provide detection, recognition, tracking, and anomaly warning for a specific area. Applying these techniques requires effective data-processing methods to achieve good results. In particular, how to effectively separate the foreground image and the background image (or background) from multiple captured frames is a crucial step in the operation of a surveillance system.
To effectively distinguish and extract the background and foreground of an image, most surveillance systems today employ background subtraction, temporal differencing, or a Gaussian mixture background model. However, both background subtraction and temporal differencing require a clean background; in other words, if the background of the image is too complex, the foreground they build or extract will be fragmented and unusable.
A Gaussian mixture model can overcome the above problem of poor foreground quality to some extent. However, the Gaussian mixture model has its own drawback: when the target to be detected stays in the scene too long, the model gradually treats the target's image as part of the background, and the longer the target remains, the faster the model's ability to recognize it degrades. This phenomenon directly harms the accuracy of foreground detection and, in turn, the correctness of subsequent tracking.
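The absorption effect described above can be reproduced with a toy model. The sketch below is an assumption-laden simplification, not the patent's method: it uses one Gaussian per pixel instead of a full mixture, and the learning rate `alpha` and threshold `k` are invented for illustration. A bright object that stays put is gradually absorbed into the background model:

```python
import numpy as np

def update_gaussian_background(mean, var, frame, alpha=0.05, k=2.5):
    """Single-Gaussian-per-pixel background model (a simplified stand-in
    for a full Gaussian mixture). Pixels more than k standard deviations
    from the mean are flagged as foreground."""
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    # Blend every pixel toward the current frame; a stationary foreground
    # is therefore slowly absorbed into the background model.
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return mean, var, foreground

# A flat grey scene with a bright stationary "object" in the middle.
scene = np.full((8, 8), 50.0)
scene[3:5, 3:5] = 200.0

mean = np.full((8, 8), 50.0)   # model initialised on the empty scene
var = np.full((8, 8), 4.0)

first_mask = None
for t in range(200):           # the object stays put for 200 frames
    mean, var, fg = update_gaussian_background(mean, var, scene)
    if first_mask is None:
        first_mask = fg

print(first_mask[3, 3], fg[3, 3])  # detected at first, absorbed later
```

Running the loop shows the object pixel flagged as foreground on the first frame but no longer flagged after many frames, which is exactly the failure mode the invention targets.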
To overcome this problem, some practitioners use a time-control variable to extend the absorption time, reducing the influence of the target's stillness on recognition. But if the target remains stationary long enough, the foreground still merges into and becomes part of the background. Furthermore, Li et al. disclosed a method entitled "Improvement on adaptive mixture Gaussian background model" (Journal of Computer Applications, Vol. 27, No. 8, pp. 2014-2017, August 2007) to address this problem. Referring to FIG. 1, which depicts the foreground-separation results of Li's method, the figure is divided into four intervals T1 through T4, showing frames 139, 209, 274, and 308 together with their corresponding foreground-detection results. The figure clearly shows that foreground recognition degrades substantially over time; by T4 the method can no longer extract the target's foreground from the image.
Accordingly, developing a method that separates foreground from background while effectively overcoming the problem, in conventional Gaussian mixture models, of a long-staying foreground object being absorbed into the background is a problem the art urgently seeks to solve.
In view of the above, one aspect of the present invention is to provide a foreground separation method; more specifically, a foreground separation method for identifying the foreground image corresponding to a designated target. The invention establishes a reference background image as a basis for image processing, replaces the corresponding portion of the image to be separated according to a foreground path, and then performs differencing to obtain the foreground image corresponding to the designated target, thereby achieving foreground recognition.
Specifically, the foreground separation method of the present invention comprises steps S1 to S5. Step S1: sequentially prepare a first image, a second image, a third image, and a fourth image. Step S2: establish a reference background image using the first, second, and third images. Step S3: establish a foreground path using the first, second, and third images. Step S4: generate a standard background image using the fourth image, the reference background image, and the foreground path. Step S5: compare the fourth image against the standard background image to obtain the foreground image corresponding to the designated target.
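The five steps can be sketched as a top-level flow. Everything below is illustrative: the helper names, the pixel-difference thresholds, and the boolean-mask representation are assumptions made for the sketch, not the patent's implementation:

```python
import numpy as np

def static_region_mask(img, prev, thresh=10):
    """S2 helper: pixels that barely change between frames are 'static'."""
    return np.abs(img.astype(float) - prev) < thresh

def build_reference_background(frames):
    """S2: merge the static regions of the early frames into one image."""
    ref = frames[0].astype(float).copy()
    for prev, cur in zip(frames, frames[1:]):
        mask = static_region_mask(cur, prev)
        ref[mask] = cur[mask]
    return ref

def separate_foreground(frames, foreground_path, thresh=10):
    """S4 + S5: swap the reference background into the newest frame along
    the foreground path, then difference against that frame."""
    g4 = frames[-1].astype(float)
    ref = build_reference_background(frames[:-1])
    standard_bg = g4.copy()
    standard_bg[foreground_path] = ref[foreground_path]
    return np.abs(g4 - standard_bg) > thresh

# Synthetic demo: three clean frames, then a frame with a lingering target.
bg = np.full((6, 6), 50.0)
g4 = bg.copy(); g4[2:4, 2:4] = 200.0
path = np.zeros((6, 6), bool); path[1:5, 1:5] = True   # assumed path region

fg = separate_foreground([bg, bg, bg, g4], path)
print(fg[2, 2], fg[0, 0])   # target pixels recovered, rest stays background
```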
In practice, step S2 further comprises sub-steps: extracting a corresponding static region from each of the first, second, and third images, and merging these static regions to produce the reference background image.
Step S3 may likewise comprise sub-steps: extracting a corresponding foreground image from each of the first, second, and third images with a Gaussian mixture model; estimating, by a second predetermined method and from the positions of the foreground images in the first, second, and third images, a predicted position of the foreground image in the fourth image; and taking the union of the foreground images of the first, second, and third images together with the predicted position to form the foreground path.
The second predetermined method refers to substituting the positions from the first, second, and third images into a Kalman filter for computation.
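A constant-velocity Kalman filter over the foreground centroid is one plausible reading of this step; the state layout and the noise matrices `Q` and `R` below are illustrative guesses, not values from the patent:

```python
import numpy as np

F = np.array([[1, 0, 1, 0],     # state: [x, y, vx, vy], unit time step
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],     # we observe position only
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-4            # process noise (assumed small)
R = np.eye(2) * 1e-2            # measurement noise (assumed small)

def kalman_predict_next(measurements):
    """Filter the observed centroids, then predict one step ahead."""
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    for z in measurements[1:]:
        x, P = F @ x, F @ P @ F.T + Q               # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)  # update
        P = (np.eye(4) - K @ H) @ P
    return (F @ x)[:2]                              # one step ahead

# Centroids of the foreground in G1, G2, G3, moving 5 px right per frame.
pred = kalman_predict_next([(10.0, 20.0), (15.0, 20.0), (20.0, 20.0)])
print(pred)   # close to (25, 20)
```

With only two updates the estimate has not fully locked onto the velocity, so the prediction lands near, not exactly at, (25, 20); in a real tracker many more frames would feed the filter.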
Furthermore, step S4 may comprise sub-steps: extracting a corresponding background image from the fourth image, and replacing the region of the fourth image corresponding to the foreground path with the corresponding region of the reference background image, thereby producing the standard background image.
In summary, the present invention discloses a method that separates foreground from background while effectively overcoming the problem, in conventional Gaussian mixture models, of a foreground object that stays too long being absorbed into the background. The invention thus improves on the conventional Gaussian mixture model, prevents a foreground that lingers in place from merging into the background, ensures that a target entering a restricted zone does not become undetectable merely by stopping, and thereby strengthens the accuracy of conventional surveillance systems.
Unless otherwise defined, all technical and scientific terms used in this specification have the meanings commonly understood by those skilled in the art to which the invention pertains. Moreover, what is described here is only one of many example methods of the invention; in actual use, any method or means similar or equivalent to those described herein may be employed. A number stated as "above" or "below" in this specification includes the number itself. Where this specification states that element A is electrically connected or coupled to element B, it means that A and B can transfer energy, data, or signals, without being limited to a physical connection; wireless networks and optical signal transmission therefore fall within this scope. Finally, the word "this" in the specification is synonymous with "of the present invention".
It should also be understood that the methods and processes disclosed herein are not limited to the order described; unless explicitly excluded, the order of steps may be freely adjusted according to the user's requirements. The proportions among elements in the drawings have been adjusted to keep the figures simple, so unless explicitly stated otherwise, the relative sizes, positions, and shapes of the elements in the drawings are for reference only and may be freely varied without departing from the inventive concept. In addition, since the elements of the invention are similar in nature, their descriptions and reference numerals apply mutually among them.
To make the invention clearer, reference is made to the following detailed description and the examples included therein. The present invention discloses a foreground separation method: briefly, a reference background image is established as a basis for image processing, the corresponding portion of the image to be separated is then replaced according to a foreground path, and differencing is performed to obtain the foreground image corresponding to the designated target, thereby achieving foreground recognition.
The invention can be roughly divided into four parts: "establishing the reference background image", "establishing the foreground path", "establishing the standard background image", and "identifying the target foreground image". Each part is described in turn below.
Referring to FIG. 2, a flowchart of a preferred embodiment of the foreground separation method of the present invention, the method 1 comprises steps S1 to S5. The first part, "establishing the reference background image", mainly comprises steps S1 and S2. Step S1 sequentially prepares the first image G1, the second image G2, the third image G3, and the fourth image G4. "Prepare" here means capturing with an image capture device, importing from a database, or other similar means.
The first image G1, the second image G2, the third image G3, and the fourth image G4 each represent one of many frames captured at successive times; the second image G2 is captured later than the first image G1, and so on. Note that the capture interval between images need not be fixed. For brevity, this embodiment uses only four images to convey the essence of the invention; in practice the number of images may range into the thousands or tens of thousands.
For a further description of the images, refer to FIGS. 3A to 3E, which respectively depict, for the preferred embodiment, the first image G1, the background image B of G1, the foreground image F of G1, the dynamic region A of G1, and the static region S of G1. As shown in FIGS. 3A to 3E, the foreground and background of the first image G1 are represented by the foreground image F and the background image B, respectively. The definitions of foreground and background are well known and omitted here. In this embodiment, the foreground image F consists of a white shape on a black ground; the white shape corresponds to the shape of the designated target 10 and represents its position.
In addition, the first image G1 defines a static region S and a dynamic region A. The static region S is the relatively stable region of the image. More specifically, referring to FIGS. 3D and 3E, if the system detects a moving designated target 10 in the image, the location occupied by the moving target is defined as the dynamic region A, while the portion of the image outside the dynamic region A is defined as the static region. In this embodiment, the correspondingly shaped block mentioned above is slightly larger than the designated target 10. Since the second image G2 and the third image G3 are similar in nature to the first image G1, they are not described further here.
After the images are prepared, step S2 is performed. Referring again to FIG. 2, step S2 establishes a reference background image GR1 using the first image G1, the second image G2, and the third image G3. More specifically, referring to FIG. 4, which depicts how the reference background image GR1 is built, GR1 is formed by mutually compensating and uniting the static regions of G1, G2, and G3. Note that in practice there may be thousands of frames; the reference background image GR1 is then continuously updated over time by the static regions of successive frames. More specifically, the system may, at an interval of a few seconds, update the regions of each frame that contain no moving object into GR1, and skip the update when the scene undergoes sustained, drastic change. In this way the invention avoids the follow-on errors that would result if the designated target 10 were accidentally stored into GR1. Moreover, when the scene is completely still, the update of GR1 may also be suspended as appropriate to preserve its correctness.
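One way to realize this update policy is sketched below; the `static_thresh` and `busy_ratio` values are invented for illustration, and the "sustained, drastic change" test is reduced to the fraction of moving pixels in a single frame:

```python
import numpy as np

def update_reference_background(ref, frame, prev_frame,
                                static_thresh=10, busy_ratio=0.5):
    """Blend static pixels of the new frame into the reference background
    GR1; freeze the update when too much of the scene is changing."""
    moving = np.abs(frame.astype(float) - prev_frame) > static_thresh
    if moving.mean() > busy_ratio:        # scene too turbulent: freeze GR1
        return ref
    ref = ref.copy()
    ref[~moving] = frame[~moving]         # refresh only the static region
    return ref

ref = np.full((4, 4), 50.0)
prev = np.full((4, 4), 50.0)

calm = prev.copy(); calm[0, 0] = 80.0     # one moving pixel
ref2 = update_reference_background(ref, calm, prev)
print(ref2[0, 0])                         # mover excluded, GR1 stays 50.0

storm = prev + 100.0                      # everything changed at once
ref3 = update_reference_background(ref2, storm, prev)
print(np.array_equal(ref3, ref2))         # update frozen
```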
While the first part is being performed, the Gaussian mixture model should be used continuously to monitor the foreground images F of G1, G2, and G3 and to keep foreground matching and related foreground information up to date. If the foreground image F corresponding to the designated target 10 is matched many consecutive times during object tracking, the foreground has been present in the scene for some time and risks being dissolved into the background. Accordingly, the second part of the invention is activated.
The second part of the invention is "establishing a foreground path", which mainly comprises step S3: establishing a foreground path 30 using the first image G1, the second image G2, and the third image G3.
Specifically, the foreground path 30 is computed as follows. First, G1, G2, and G3 are each processed with the Gaussian mixture model to obtain the corresponding foreground images F. Next, the start and end positions of each foreground image F are collected. Then a Kalman filter, operating on the positions of the foreground images together with their velocity, coordinate-change pattern, or other parameters, is used with known image-processing techniques to estimate where the foreground image F of the fourth image G4 should be. The position of the foreground of G4 thus estimated is defined as the predicted position 20 depicted in FIG. 5, which illustrates the predicted position 20 in the preferred embodiment.
After the predicted position 20 is obtained, taking the union of the foreground images F of G1, G2, and G3 with the predicted position 20 yields the foreground path 30 shown in FIG. 6, which illustrates the foreground path 30 in the preferred embodiment.
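Representing each foreground as a boolean mask, the foreground path is then a plain union; encoding the predicted position as a bounding box `(r0, r1, c0, c1)` below is an assumption made for brevity:

```python
import numpy as np

def foreground_path(masks, predicted_box):
    """Union of the per-frame foreground masks with the predicted
    position of the foreground in the next frame."""
    path = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        path |= m
    r0, r1, c0, c1 = predicted_box
    path[r0:r1, c0:c1] = True
    return path

shape = (5, 8)
m1 = np.zeros(shape, bool); m1[1:3, 0:2] = True    # foreground in G1
m2 = np.zeros(shape, bool); m2[1:3, 2:4] = True    # foreground in G2
m3 = np.zeros(shape, bool); m3[1:3, 4:6] = True    # foreground in G3

path = foreground_path([m1, m2, m3], (1, 3, 6, 8)) # predicted spot in G4
print(path[1].astype(int))   # the whole travelled row is covered
```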
With the foreground path 30 obtained, the third part, "establishing the standard background image", is performed. Referring to FIG. 7, which depicts the formation of the standard background image in the preferred embodiment, this image-replacement part mainly comprises step S4: generating a standard background image GR2 using the fourth image G4, the reference background image GR1, and the foreground path 30.
More specifically, step S4 operates as follows. First, a corresponding background image B is extracted from the fourth image G4 with the Gaussian mixture model. Note that G4 is defined as a frame in which the foreground of the designated target 10 can no longer be extracted by the system, possibly because the target has remained stationary so long that it has been partially absorbed into the background image B. At this point the target's foreground has to some degree dissolved into G4 and impairs recognition.
Next, the region of the fourth image G4 corresponding to the foreground path 30 is replaced with the corresponding region of the reference background image GR1, producing the standard background image GR2 and eliminating any foreground image F of the designated target 10 that G4 might contain. Because every potential location of the target 10 lies within the replaced region, the standard background image GR2 contains neither the image of the designated target 10 nor its corresponding foreground.
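As a sketch, with synthetic arrays standing in for real frames, the replacement step amounts to a masked copy from GR1 into G4:

```python
import numpy as np

def standard_background(g4, gr1, path):
    """Step S4 sketch: within the foreground path, overwrite the fourth
    frame with the reference background, scrubbing any dissolved target."""
    gr2 = g4.astype(float).copy()
    gr2[path] = gr1[path]
    return gr2

gr1 = np.full((6, 6), 50.0)                  # clean reference background
g4 = gr1.copy(); g4[2:4, 2:4] = 200.0        # target lingering in G4
path = np.zeros((6, 6), bool); path[1:5, 1:5] = True

gr2 = standard_background(g4, gr1, path)
print(gr2.max())   # no trace of the target remains in GR2
```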
After the standard background image GR2 is obtained, the fourth part, "identifying the target foreground image", is performed. This part comprises step S5: comparing the fourth image G4 against the standard background image GR2 to obtain the foreground image GF corresponding to the designated target 10. "Comparing" here means a differencing operation or another image-processing method for comparing images. Since GR2 contains no foreground of the designated target, differencing the two images clearly yields the target foreground image GF.
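The comparison can be sketched as absolute differencing plus a threshold; the threshold of 30 grey levels is an arbitrary illustrative choice:

```python
import numpy as np

def target_foreground(g4, gr2, thresh=30):
    """Step S5 sketch: recover the target foreground GF by absolute
    differencing against the standard background and thresholding."""
    return np.abs(g4.astype(float) - gr2) > thresh

g4 = np.full((6, 6), 50.0); g4[2:4, 2:4] = 200.0   # frame with target
gr2 = np.full((6, 6), 50.0)                        # target-free background

gf = target_foreground(g4, gr2)
print(int(gf.sum()))   # only the 2x2 target region survives
```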
Finally, to illustrate the effect of the invention, refer to FIG. 8, which compares the preferred embodiment with the prior art. As shown, with the prior art the foreground has completely dissolved into the background by frame 308, and the designated target 10 can no longer be distinguished. By contrast, with the present invention the target remains clearly visible at frame 308, and even at frame 438 its foreground remains clear and unaffected. In summary, the invention discloses a method that separates foreground from background while effectively overcoming the absorption problem of conventional Gaussian mixture models, ensures that a foreground entering a restricted zone does not become undetectable merely by stopping, and thereby strengthens the accuracy of conventional surveillance systems.
The foregoing detailed description of the preferred embodiments is intended to better illustrate the features and spirit of the present invention, not to limit its scope to the embodiments disclosed. On the contrary, the intent is to cover various modifications and equivalent arrangements within the scope of the claims. The scope of the claims should therefore be given the broadest interpretation so as to encompass all possible modifications and equivalent arrangements.
1 ... foreground separation method
10 ... designated target
20 ... predicted position
30 ... foreground path
A ... dynamic region
B ... background image
F ... foreground image
G1 ... first image
G2 ... second image
G3 ... third image
G4 ... fourth image
GR1 ... reference background image
GR2 ... standard background image
GF ... target foreground image
S ... static region
S1~S5 ... steps
T1~T4 ... intervals
FIG. 1 depicts the result of foreground separation using the prior art.
FIG. 2 is a flowchart of a preferred embodiment of the foreground separation method of the present invention.
FIG. 3A depicts the first image in the preferred embodiment.
FIG. 3B depicts the background image of the first image.
FIG. 3C depicts the foreground image of the first image.
FIG. 3D depicts the dynamic region of the first image.
FIG. 3E depicts the static region of the first image.
FIG. 4 depicts how the reference background image is built in the preferred embodiment.
FIG. 5 depicts the predicted position in the preferred embodiment.
FIG. 6 depicts the foreground path in the preferred embodiment.
FIG. 7 depicts the formation of the standard background image in the preferred embodiment.
FIG. 8 is a chart comparing the preferred embodiment with the prior art.
[Description of main reference numerals]
1 ... foreground separation method
S1~S5 ... steps
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW100148779A TWI463416B (en) | 2011-12-27 | 2011-12-27 | A foreground separation method |
Publications (2)
Publication Number | Publication Date |
---|---|
TW201327417A TW201327417A (en) | 2013-07-01 |
TWI463416B (en) | 2014-12-01 |
Family
ID=49225102
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751485B (en) * | 2015-03-20 | 2017-05-24 | Anhui University | GPU adaptive foreground extracting method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201032178A (en) * | 2009-02-16 | 2010-09-01 | Inst Information Industry | Method and system for foreground detection using multi-modality fusion graph cut |
Non-Patent Citations (1)
Title |
---|
SULIMAN, Cristina CRUCERU, and Florin MOLDOVEANU, "Kalman Filter Based Tracking in a Video Surveillance System", Advances in Electrical and Computer Engineering, Vol. 10, Issue 2, pp. 30-34, 2010. |
Quming ZHOU and J. K. AGGARWAL, "Object tracking in an outdoor environment using fusion of features and cameras", Image and Vision Computing, 24(11): 1244-1255, 2006. * |
Legal Events
Code | Title |
---|---|
MM4A | Annulment or lapse of patent due to non-payment of fees |