TWI517055B - Image foreground object screening method - Google Patents

Info

Publication number
TWI517055B
Authority
TW
Taiwan
Prior art keywords
data
image data
foreground
value
dimensional array
Prior art date
Application number
TW103101139A
Other languages
Chinese (zh)
Other versions
TW201528160A (en)
Inventor
Vincent Cheng
Original Assignee
Volx Business Software Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volx Business Software Inc
Priority to TW103101139A
Publication of TW201528160A
Application granted
Publication of TWI517055B

Landscapes

  • Image Analysis (AREA)

Description

Image foreground object screening method

The present invention relates to an image processing method, and more particularly to an object feature generation method, an object feature comparison method, and an object screening method.

When police investigators handle a case, they often need to retrieve video footage from surveillance cameras and compare the images one by one to look for suspicious people, events, or objects.

Take a bank robbery as an example. After a robber robs a bank and flees, surveillance cameras inside the bank or in nearby streets may record images of the robber, and investigators can compare, frame by frame, the footage recorded around the time of the robbery to look for suspicious clues. After long viewing sessions, however, an investigator's concentration drops with fatigue, which degrades judgment of the footage and leads to comparison errors. Moreover, if the robber changes his appearance in order to escape, the investigators' comparison is even more easily misled.

This manual comparison method has a further drawback: after finishing a segment of footage, an investigator who wants to pursue a new lead must watch the footage all over again, wasting considerable time. An improved way of analyzing existing video data is therefore needed.

Accordingly, an object of the present invention is to provide an object feature generation method that assists in comparing images.

Another object of the present invention is to provide an image processing method that assists in comparing images.

Still another object of the present invention is to provide an object screening method that assists in comparing images.

The object feature generation method of the present invention therefore comprises:

(A) Displaying a photo or image data.

(B) Receiving a plurality of coordinate values representing a plurality of positions input by a user.

(C) Generating a coordinate set that includes the coordinate values.

(D) According to the coordinate set, extracting the region of the photo or image data enclosed by the positions represented by the coordinates, forming target object data.

(E) Processing the target object data to obtain a second feature value in the form of an array.

Preferably, the coordinate set stores the coordinate values representing each clicked position in the order in which the user clicked on the photo or image data, and the region is the area enclosed by connecting the positions in that order.

Preferably, step (E) includes at least one of the following sub-steps: (E1) scanning each target object data with a moving window, obtaining RGB color information values for each window and recording them in RGB chromaticity histograms; after the target object data has been scanned, three 1×256 one-dimensional color arrays, one per RGB channel, are obtained; and (E2) processing each target object data with local binary patterns (LBP), computing the LBP value of every 3×3 block and recording it in an LBP histogram; after iterating over the whole data, a 1×256 one-dimensional texture array is obtained. The second feature value is the collection of the one-dimensional color arrays and/or the one-dimensional texture array.

The object feature comparison method of the present invention therefore comprises the following steps: computing the foreground data of a plurality of image frames from a video recording, and processing each foreground data to obtain a first feature value in the form of an array.

Generating target object data by the object feature generation method described above, and processing the target object data to obtain a second feature value in the form of an array.

Comparing the second feature value with each first feature value of each foreground data, and computing a correlation score for each comparison.

Sorting by the correlation scores and outputting the sorted result.
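The compare-and-rank flow above can be sketched in a few lines. The patent does not specify how the correlation score is computed, so normalised histogram intersection is used here purely as an illustrative stand-in; `target`, `cand1`, and `cand2` are made-up 1×256 feature arrays.

```python
import numpy as np

def correlation_score(feat_a, feat_b):
    """One possible correlation score between two 1x256 feature arrays:
    normalised histogram intersection (an illustrative stand-in, since
    the patent does not fix a formula)."""
    a = feat_a / max(feat_a.sum(), 1e-12)
    b = feat_b / max(feat_b.sum(), 1e-12)
    return float(np.minimum(a, b).sum())   # 1.0 = identical distribution shape

target = np.zeros(256); target[50] = 8; target[200] = 2
cand1 = np.zeros(256); cand1[50] = 4; cand1[200] = 1   # same shape as target
cand2 = np.zeros(256); cand2[10] = 5                   # no overlap with target

# Score every candidate against the target, sort descending, output ranking.
scores = sorted([("cand1", correlation_score(target, cand1)),
                 ("cand2", correlation_score(target, cand2))],
                key=lambda kv: kv[1], reverse=True)
```

The scale-invariant normalisation means `cand1`, which has the same colour distribution as the target at half the pixel count, still ranks first.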

The object screening method of the present invention therefore comprises the following steps:

(a) Computing the foreground data of a plurality of image frames from a video recording.

(b) Analyzing the objects in the foreground data of the image frames, and assigning objects that share the same moving-object characteristics to the same object.

(c) Receiving a selection condition defined by three or more positions input by a user, and determining whether each object satisfies the selection condition during its movement, so as to exclude or retain the object.

Preferably, the lines connecting the input positions enclose a region with those positions as vertices, and the selection condition is: if an object of the foreground data in the image frames enters the region during its movement, the object is retained; otherwise it is excluded.

Preferably, the lines connecting the input positions enclose a region with those positions as vertices, and the selection condition is: if an object of the foreground data in the image frames enters the region during its movement, the object is excluded; otherwise it is retained.

Preferably, the input positions are a first position, a second position, and a third position. The first and second positions define a line segment, and the third position defines a vector perpendicular to the line segment and pointing toward the side of the third position. The selection condition is: if an object of the foreground data crosses the line segment during its movement and, when crossing, advances along the direction of the vector or at an angle to the vector smaller than a preset value, the object is retained; otherwise it is excluded.

Preferably, step (a) also processes each foreground data to obtain a first feature value in the form of an array, and the method further includes a step (d) performed after step (a), and steps (e) and (f) performed after both steps (c) and (d) have completed:

(d) Generating target object data by the object feature generation method described above, and processing the target object data to obtain a second feature value in the form of an array.

(e) Comparing the second feature value with the first feature values corresponding to the portions of the foreground data that contain the objects selected in step (c), and computing a correlation score for each comparison.

(f) Sorting by the correlation scores and outputting the sorted result.

The effect of the present invention is that the target object data is formed according to the user's selection, and the second feature value is then generated from that target object data, so that the subsequent comparison can be more accurate.

S11-S16: steps
S171-S176: steps
S181-S186: steps
S19-S21: steps
S200-S206: steps
S211-S214: steps
S221-S226: steps
S231-S236: steps
S24-S25: steps
1: electronic device
11: memory unit
12: display unit
13: control unit

Other features and effects of the present invention will be clearly presented in the detailed description of the preferred embodiments with reference to the drawings, in which: Fig. 1 is a flow chart illustrating a preferred embodiment of the image processing method of the present invention; Fig. 2 is a block diagram illustrating the architecture of an electronic device that executes the present invention; Fig. 3 is a schematic diagram of a Gaussian mixture model; Fig. 4 is a flow chart further illustrating the preferred embodiment; Fig. 5 is a schematic diagram of an operation screen illustrating "mask filtering"; Fig. 6 is a schematic diagram of an operation screen illustrating "retention filtering"; Fig. 7 is a schematic diagram of an operation screen illustrating "directional filtering"; Fig. 8 is a flow chart illustrating a preferred embodiment of the object feature comparison method of the present invention; Fig. 9 is a flow chart illustrating step S21; and Fig. 10 is a schematic diagram of an operation screen illustrating the selection of an object.

Referring to Figs. 1 and 2, a preferred embodiment of the image processing method of the present invention is executed by an electronic device 1. The electronic device 1 includes a memory unit 11, a display unit 12, and a control unit 13 connected to the memory unit 11 and the display unit 12. The memory unit 11 stores an application program related to the image processing method and the object feature comparison method of the present invention, and at least one video recording. The control unit 13 reads the application program and executes the following steps of the image processing method.

Step S11: Read the video recording. The recording is, for example, a clip recorded by a surveillance camera over a certain period.

Step S12: Extract a plurality of image frames from the recording, for example three frames per second; a 30-minute recording then yields 5400 frames in total.

Step S13: Use the image frames to build a Gaussian mixture model as illustrated in Fig. 3. The purpose of building the Gaussian mixture model is to establish the basic background of the recording. When the model is depicted as in Fig. 3, the horizontal axis is time and the vertical axis is the probability strength of a pixel; the model thus comprises several Gaussian distributions, each representing how the probability strength of a certain set of pixels changes over time, and can be expressed by the equations of Eq. 1 and Eq. 2.

$P(x_t) = \sum_{i=1}^{K} w_{i,t}\,\eta(x_t;\mu_{i,t},\Sigma_{i,t})$  (Eq. 1)

$\eta(x_t;\mu_{i,t},\Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,\lvert\Sigma_{i,t}\rvert^{1/2}} \exp\!\left(-\tfrac{1}{2}(x_t-\mu_{i,t})^{T}\Sigma_{i,t}^{-1}(x_t-\mu_{i,t})\right)$  (Eq. 2)

Here, at time t, P(x_t) is the probability strength of a pixel whose value is x_t, w_{i,t} is the weight of the i-th Gaussian distribution, η is the Gaussian probability density function, and μ_{i,t} and Σ_{i,t} are the mean and covariance of the i-th Gaussian distribution, respectively.
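As a minimal sketch of Eqs. 1 and 2, the mixture probability of a pixel value can be evaluated as the weighted sum of per-component Gaussian densities. The sketch below uses scalar pixel values with per-component variances, and the weights, means, and variances are made-up numbers:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Gaussian probability density (Eq. 2), scalar case."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_probability(x, weights, means, variances):
    """P(x_t) as the weighted sum of K Gaussian densities (Eq. 1)."""
    return float(np.sum(weights * gaussian_pdf(x, means, variances)))

# A pixel value of 100 evaluated against a made-up 3-component mixture.
w = np.array([0.6, 0.3, 0.1])
mu = np.array([100.0, 180.0, 30.0])
var = np.array([25.0, 25.0, 25.0])
p = mixture_probability(100.0, w, mu, var)
```

A value near the dominant component's mean (100) scores a higher probability strength than one near a lighter component's mean (180), which is exactly what later lets background pixels be told apart from transient objects.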

After the Gaussian mixture model has been built, the recording can keep streaming in so that image frames keep accumulating; the time axis is extended and the model parameters are continuously maintained and updated.

Step S14: Obtain background image data from the Gaussian mixture model.

In a Gaussian mixture model, put simply, pixel sets that persist for a long time with high probability strength appear in most of the image frames and usually represent the background, while objects that appear in only a few frames usually correspond to Gaussian distributions with low probability strength and short duration. Therefore, the few Gaussian distributions that best represent the background can be identified in the model, and the pixel sets corresponding to them are used to build the background image data.

In this embodiment, the Gaussian distributions representing the background are found by computing, for each distribution, the ratio of its weight to its standard deviation, sorting the distributions by this ratio, and taking a preset number of top-ranked distributions to represent the background.
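A sketch of this background-selection rule; the component parameters below are made-up values for a single pixel's K = 4 mixture:

```python
import numpy as np

# Hypothetical parameters for the K = 4 Gaussians of one pixel's mixture.
weights = np.array([0.50, 0.30, 0.15, 0.05])
stddevs = np.array([4.0, 5.0, 20.0, 25.0])
means = np.array([102.0, 98.0, 60.0, 200.0])

# Rank components by weight / standard deviation: heavy, tightly
# clustered components (stable over time) are taken as background.
ratio = weights / stddevs
order = np.argsort(ratio)[::-1]     # best ratio first
B = 2                               # preset number of top-ranked components
background_means = means[order[:B]]
```

Here the two stable components around gray level 100 win, while the wide, rarely seen component at 200 (a transient object) is rejected.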

Step S15: Using pixel-wise classification, subtract the R/G/B chromaticity of the corresponding pixel of the background image data from the R/G/B chromaticity of each pixel of each image frame to obtain a difference. A large difference means that the pixel differs greatly between the background image data and that frame, i.e., an object has moved in. This step therefore separates out the pixels whose difference exceeds a preset function, forming preliminary foreground data corresponding to the image frame.
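A per-pixel sketch of this classification; the constant threshold stands in for the preset function mentioned in the text:

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Pixel-wise classification: flag pixels whose largest RGB channel
    difference from the background exceeds a threshold (a constant
    stand-in for the preset function in the text)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > threshold   # True where an object moved in

bg = np.full((2, 2, 3), 100, dtype=np.uint8)   # flat gray background
fr = bg.copy()
fr[0, 0] = (200, 90, 100)                      # one pixel changed a lot
mask = foreground_mask(fr, bg)
```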

Step S16: Apply region-based classification to the result of step S15 to filter out shadows; only after the parts with small luminance change, which represent shadows, have been further removed is the true foreground data obtained.

In this step, a gain value is computed for each pixel of the preliminary foreground data obtained in step S15. The gain equals the difference between the gray-level value of a pixel of the image frame and the gray-level value of the corresponding pixel of the background image data, divided by the gray-level value of that background pixel. Adjacent pixels with similar gain values are merged into a region; when the average gain of a region is below a preset threshold, meaning the optical intensity changes little relative to the background, the region is treated as shadow and removed, yielding the foreground data.
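A simplified sketch of the gain-based shadow test. The patent applies the threshold to a region's average gain; for brevity this version applies it pixel by pixel, and the threshold value is made up:

```python
import numpy as np

def remove_shadows(gray_frame, gray_bg, mask, gain_threshold=0.4):
    """Per-pixel simplification of the region-based shadow filter:
    a foreground pixel whose relative gray-level change (gain) is small
    is treated as shadow and dropped from the mask."""
    gain = (gray_frame.astype(np.float64) - gray_bg) / np.maximum(gray_bg, 1)
    return mask & (np.abs(gain) >= gain_threshold)

bg = np.array([[100.0, 100.0]])
fr = np.array([[80.0, 220.0]])     # 80: slightly dimmed (shadow); 220: object
mask = np.array([[True, True]])    # both flagged as preliminary foreground
cleaned = remove_shadows(fr, bg, mask)
```

The dimmed pixel (gain -0.2) is rejected as shadow, while the strongly changed pixel (gain 1.2) survives as true foreground.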

The following steps analyze the foreground data in two ways, color analysis (S171-S176) and texture analysis (S181-S186), and combine the results into a feature value. In this embodiment, the foreground data obtained in the previous steps is assumed to be a person, so the foreground data is divided into n parts with n = 3, namely head, body, and feet; later, a weighted computation with preset weights is performed (S176, S186), or the weighting is applied when the correlation score is computed in the recognition flow (S25), which helps improve recognition. The invention is not limited to n = 3; a direct whole-body comparison corresponds to n = 1.

For the color analysis, set i = 1 to n and j = 1 to the number of pixels of the i-th part; initially i = 1 and j = 1.

Step S171: Scan the j-th pixel of the i-th part (for example the head) of the foreground data with a sliding window to obtain the RGB color information values of this pixel, which in this embodiment are the R, G, and B chromaticity values. In this embodiment each chromaticity value is a number from 0 to 255.

Step S172: Record the R, G, and B chromaticity values in the R, G, and B chromaticity histograms, respectively. The horizontal axis of each histogram runs from 0 to 255, and the vertical axis is the pixel count.

Step S173: Check whether j equals the number of pixels of the i-th part of the foreground data, i.e., whether the i-th part has been fully scanned. If so, proceed to step S174; if not, set j = j + 1 and scan the next pixel.

Suppose the i-th part has three pixels whose R chromaticity values are, say, 2, 250, and 250. Then in the R chromaticity histogram the value at horizontal-axis position 2 is 1, the value at position 250 is 2, and all other values are 0.

Step S174: Read out the values in the R, G, and B chromaticity histograms corresponding to the i-th part and convert them into a one-dimensional R array, a one-dimensional G array, and a one-dimensional B array. For the example above, the result is a 1×256 array in which only the 3rd and 251st elements are 1 and 2 respectively, and all other elements are 0.
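Steps S171-S174 amount to counting, for one channel of one part, how many pixels take each chromaticity value. A sketch using the three-pixel example from the text:

```python
import numpy as np

def channel_histogram(values):
    """Build one 1x256 color array by counting how many pixels take each
    chromaticity value (0-255), as in steps S171-S174."""
    return np.bincount(np.asarray(values, dtype=np.uint8), minlength=256)

# The example from the text: three pixels with R chromaticity 2, 250, 250.
r_array = channel_histogram([2, 250, 250])
```

As the text states, the array's 3rd element (index 2) is 1 and its 251st element (index 250) is 2.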

Step S175: Check whether i equals n, i.e., whether all parts of the foreground data have been scanned. If so, execute step S176; if not, set i = i + 1 and process the next part.

Step S176: Multiply the arrays of the 1st to n-th parts by preset weights and sum them, obtaining a one-dimensional R color array, a one-dimensional G color array, and a one-dimensional B color array. For the n = 3 example, when body features are more important the preset weights may be 0.2, 0.6, and 0.2.
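A sketch of the weighted combination in step S176, using the example weights 0.2/0.6/0.2 from the text and made-up per-part histograms:

```python
import numpy as np

# Hypothetical per-part R-channel histograms (head, body, feet), each 1x256.
parts = np.zeros((3, 256))
parts[0, 10] = 5.0    # head: 5 pixels with R value 10
parts[1, 10] = 10.0   # body: 10 pixels with R value 10
parts[2, 10] = 5.0    # feet: 5 pixels with R value 10

# Example weights from the text, emphasising the body.
weights = np.array([0.2, 0.6, 0.2])

# Weighted sum over the parts axis; the result is still a 1x256 array.
combined = np.tensordot(weights, parts, axes=1)
```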

It should be noted that the one-dimensional RGB color arrays of the individual parts may also be stored directly without the weighted computation, leaving the weighting to be applied when the score is computed in the recognition flow.

The following describes the flow in which this embodiment uses local binary patterns (LBP) to perform texture analysis on the foreground data. For the texture analysis, set the same i = 1 to n and k = 1 to the number of pixels of the i-th part; initially i = 1 and k = 1.

Step S181: Use the LBP technique to compute the LBP value of each 3×3 block of the i-th part. In detail, for the first 3×3 block of the i-th part of the foreground data (for example, with the top-left pixel as the center of the 3×3 grid, positions without pixels being padded with hypothetical pixels of the same gray level), the LBP value is computed roughly as follows: subtract the gray level of the center cell from the gray level of each surrounding cell and binarize the result to 0 or 1, then multiply each cell's bit by a weight (2^0 through 2^7) and sum, giving the LBP value. In this embodiment the LBP value is a number from 0 to 255.
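A sketch of the LBP computation in step S181. The neighbour ordering and the `>=` binarisation below are common LBP conventions, assumed here since the text does not fix them:

```python
import numpy as np

def lbp_value(block):
    """LBP of one 3x3 gray-level block (step S181): binarise the eight
    neighbours against the centre, then weight the bits by 2^0..2^7."""
    center = block[1, 1]
    # clockwise from the top-left neighbour (an assumed ordering)
    neighbours = [block[0, 0], block[0, 1], block[0, 2], block[1, 2],
                  block[2, 2], block[2, 1], block[2, 0], block[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

block = np.array([[90, 110, 90],
                  [90, 100, 110],
                  [90, 110, 90]])
code = lbp_value(block)   # a value in 0..255, binned into the LBP histogram
```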

Step S182: Record the LBP value in an LBP histogram whose horizontal axis runs from 0 to 255 and whose vertical axis is the pixel count.

Step S183: Check whether k equals the number of pixels of the i-th part of the foreground data, i.e., whether the i-th part has been fully scanned. If so, proceed to step S184; if not, set k = k + 1 and move the 3×3 grid one cell.

Step S184: Read out the values in the LBP histogram to obtain a 1×256 one-dimensional texture array.

Step S185: Check whether i equals n, i.e., whether all parts of the foreground data have been scanned. If so, execute step S186; if not, set i = i + 1 and process the next part.

Step S186: Multiply the arrays of the 1st to n-th parts by preset weights and sum them, obtaining the overall one-dimensional texture array. As before, the one-dimensional texture arrays of the individual parts may also be stored directly without the weighted computation, leaving the weighting to be applied when the score is computed in the recognition flow.

Step S19: After the color analysis and texture analysis, store the one-dimensional arrays corresponding to a given image frame in the memory unit 11, obtaining a first feature value, which is the collection of those one-dimensional arrays.

It should be noted that the invention is not limited to running color analysis and texture analysis together; the result of the color analysis alone, or of the texture analysis alone, may also serve as the feature value of each image frame. When color and texture are combined, there is no required order between the two. Referring now to Fig. 4, the subsequent steps are described.

Step S20: Perform object-tracking analysis, assigning objects in the foreground data of the image frames that share particular moving-object characteristics to the same object. For example, if five image frames of a car driving on a road have been captured, after the analysis of this step the foreground data corresponding to the car in those five frames can be identified as the same object. An algorithm for computing moving-object characteristics may, for example, use the centroid of the moving part of the foreground data to decide whether blobs belong to the same object, or use Kalman-filter tracking to compute and track the moving-object characteristics.
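A minimal stand-in for the centroid-based association the text mentions: greedy nearest-centroid matching between consecutive frames (the distance gate is a made-up parameter; a Kalman filter would predict each centroid before matching):

```python
import numpy as np

def match_by_centroid(prev_centroids, new_centroids, max_dist=30.0):
    """Greedy nearest-centroid association across frames: each new blob
    takes the identity of the closest unused previous blob, if it is
    within max_dist. Returns {new_index: prev_index}."""
    assignments = {}
    used = set()
    for j, c in enumerate(new_centroids):
        dists = [np.hypot(c[0] - p[0], c[1] - p[1]) for p in prev_centroids]
        for i in np.argsort(dists):
            if dists[i] <= max_dist and int(i) not in used:
                assignments[j] = int(i)
                used.add(int(i))
                break
    return assignments

prev = [(10.0, 10.0), (100.0, 100.0)]   # blob centroids in frame t-1
new = [(12.0, 11.0), (98.0, 103.0)]     # blob centroids in frame t
ids = match_by_centroid(prev, new)
```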

The purpose of this step is to support the subsequent object screening step (step S200). For example, to screen for moving objects passing through a certain region of the frame: if the aforementioned car lies in that region in two of the image frames, then all five frames containing the car can be retrieved, because they have been analyzed as belonging to the same moving object.

Step S200: Perform object screening. In this embodiment, this step adds further constraints to narrow the final screening result; step S20 and this step may also be omitted, and the later step S25 executed directly with the first feature value of step S19.

This step includes the following sub-steps S201-S206. In this embodiment the sub-steps fall into three types, "mask filtering", "retention filtering", and "directional filtering"; one or several of these types may be selected for execution, as described below.

"Mask filtering" means receiving in advance a region selected by the user and enclosed by the connecting lines of three or more vertices (Fig. 5) (step S201), and determining whether an object of the foreground data in the image frames enters the region during its movement (step S202); if so, the object is excluded and not shown in the result, otherwise it is retained.

"Retention filtering" likewise receives in advance a user-selected region enclosed by the connecting lines of three or more vertices (Fig. 6) (step S203); the difference is that if an object of the foreground data enters the region during its movement, the object is retained, and otherwise it is excluded.
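Both mask filtering and retention filtering reduce to testing whether any point of an object's track lies inside the polygon of clicked vertices; a standard ray-casting sketch:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: does the point lie inside the polygon whose
    vertices are the user's clicked positions, in click order?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):               # edge spans the ray's row
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

region = [(0, 0), (10, 0), (10, 10), (0, 10)]   # user-clicked square
track = [(-5, 5), (5, 5), (15, 5)]              # an object's centroid track
# Retention filtering keeps the object if any track point is inside;
# mask filtering would instead exclude it on the same condition.
keep = any(point_in_polygon(p, region) for p in track)
```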

"Directional filtering" means receiving in advance a line segment and a direction defined by the user, the direction being that of a vector perpendicular to the line segment. For example, the user clicks a first position P1 and a second position P2 on the screen to specify the two endpoints of a line segment running from upper-left to lower-right, and then clicks any third position P3 on the upper-right side of the segment, which amounts to selecting the lower-left-to-upper-right direction (Fig. 7). It is then determined whether an object of the foreground data crosses the line segment during its movement and, when crossing, advances along the selected direction or at an angle to the vector smaller than a preset value; if so, the object is retained, otherwise it is excluded.
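A sketch of the directional test: one perpendicular of the segment P1-P2 serves as the direction vector (in the patent it is the perpendicular pointing toward the third click P3), and a movement step must cross the line and stay within a preset angle of that vector. For brevity the crossing test uses the infinite line through P1 and P2 rather than the bounded segment:

```python
import math

def crosses_with_direction(p_prev, p_next, p1, p2, max_angle_deg=45.0):
    """Does the step from p_prev to p_next cross the line P1-P2 roughly
    along its chosen perpendicular? (Simplified: infinite line, fixed
    choice of perpendicular, made-up default angle threshold.)"""
    sx, sy = p2[0] - p1[0], p2[1] - p1[1]     # segment direction
    nx, ny = -sy, sx                          # one perpendicular to it
    def side(p):
        return (p[0] - p1[0]) * nx + (p[1] - p1[1]) * ny
    if side(p_prev) * side(p_next) >= 0:
        return False                          # the line was not crossed
    mx, my = p_next[0] - p_prev[0], p_next[1] - p_prev[1]
    cos_a = (mx * nx + my * ny) / (math.hypot(mx, my) * math.hypot(nx, ny))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle <= max_angle_deg

p1, p2 = (0, 0), (10, 0)                      # horizontal line segment
up = crosses_with_direction((5, -1), (5, 1), p1, p2)    # along the normal
down = crosses_with_direction((5, 1), (5, -1), p1, p2)  # against it
```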

It should be noted that one, two, or all three of the filters may be selected. For example, if the intersection of the three filter results is taken, an object must not enter the region selected in "mask filtering", must enter the region selected in "retention filtering", and must cross the selected line segment in the selected direction in "directional filtering". The screening methods are not limited to the above; any condition that constrains an object's position or movement behavior can serve as a screening condition.

After the objects have been screened, the comparison of step S25 is performed with the first feature values of the portions of the foreground data that correspond to the objects that passed the screening. For example, foreground data containing an object representing a car may also contain an object representing a pedestrian; if the car object passes the screening, the first feature value corresponding to the pixels of the car object within the foreground data is used in the comparison of step S25.

If this step is omitted, the subsequent steps are performed on the first feature value of every foreground data.

When all collected video recordings are processed in advance with the image processing method shown in Fig. 1, so that each image frame is stored as a corresponding first feature value in the form of several one-dimensional arrays, efficiency improves substantially whenever the recordings later need to be reviewed for recognition, which is of considerable help. When review of the recordings begins, the image processing method shown in Fig. 4 is applied to all the recordings to perform object tracking, and the desired object screening is selected to further narrow the number of results.

Referring to Figs. 2 and 8, the control unit 13 of the electronic device 1 reads the application program and executes the following steps of the object feature comparison method.

Step S21: Receive target object data. The target object data is, for example, obtained by box-selecting from a photo of a suspect acquired by the investigators, or by box-selecting from an image frame of a video recording that has already been singled out. The box selection is not limited to rectangular or irregular shapes. Referring to Fig. 9, in this embodiment this step further includes the following sub-steps S211 to S214:

步驟S211-顯示該照片或影像資料。 Step S211 - Display the photo or image data.

步驟S212-接收代表使用者於該照片或影像資料上所點擊之多個位置的多個座標值。舉例而言，使用者欲選定一畫面中的一個行人（如圖10），則點擊該行人輪廓的多個點，以選取該行人做為接下來進行比對的目標物件資料。 Step S212 - Receiving a plurality of coordinate values representing a plurality of positions clicked by the user on the photo or image data. For example, to select a pedestrian in a frame (see FIG. 10), the user clicks a plurality of points along the pedestrian's contour, selecting the pedestrian as the target object data for the subsequent comparison.

步驟S213-產生一包括該等座標值的座標集。需注意該座標集保存有該等座標值於步驟S212中點擊之順序。 Step S213 - Generating a coordinate set including the coordinate values. Note that the coordinate set preserves the order in which the coordinate values were clicked in step S212.

步驟S214-根據該座標集擷取該照片或影像資料中被該等座標所代表之位置所圍繞之範圍，形成該目標物件資料。 Step S214 - According to the coordinate set, extracting the region of the photo or image data enclosed by the positions represented by the coordinates, thereby forming the target object data.
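The polygon selection of steps S212 to S214 can be sketched as follows. The patent does not prescribe any particular algorithm; this minimal Python sketch uses a standard ray-casting point-in-polygon test, and all function names are hypothetical.

```python
# Sketch of steps S212-S214: build target object data from ordered clicks.
# Hypothetical helpers; the patent does not prescribe this algorithm.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (px, py))?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]      # wrap around to close the contour
        if (y1 > y) != (y2 > y):           # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def extract_target(image, clicks):
    """Keep pixels inside the clicked contour; blank the rest (None)."""
    return [[image[r][c] if point_in_polygon(c, r, clicks) else None
             for c in range(len(image[0]))]
            for r in range(len(image))]
```

Because the coordinate set of step S213 preserves click order, the clicked points can be consumed directly as polygon vertices without re-sorting.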

另外，若已經利用本發明影像處理方法處理而得到影像資料的第一特徵值，則可將框選部分所對應的第一特徵值定義為第二特徵值，而直接執行步驟S25及其後步驟。 In addition, if the first feature value of the image data has already been obtained by the image processing method of the present invention, the first feature value corresponding to the frame-selected portion may be defined as the second feature value, and step S25 and the subsequent steps may be executed directly.

接下來執行的步驟與圖1影像處理方法中對於前景資料的色彩分析步驟S171~S176以及紋理分析步驟S181~S186相同，只是分析對象為目標物件資料，因此以下僅簡略說明。 The steps performed next are the same as the color analysis steps S171 to S176 and the texture analysis steps S181 to S186 applied to the foreground data in the image processing method of FIG. 1, except that the analysis target is the target object data; they are therefore only briefly described below.

在色彩分析方面，設定i=1~n,j=1~第i部位的像素數，一開始i=1,j=1。 For the color analysis, set i = 1 to n and j = 1 to the number of pixels of the i-th part; initially i = 1 and j = 1.

步驟S221-掃描目標物件資料的第i部位的第j像素，得到這個像素的R色度值、G色度值，及B色度值。 Step S221 - Scanning the j-th pixel of the i-th part of the target object data to obtain the pixel's R chromaticity value, G chromaticity value, and B chromaticity value.

步驟S222-將R色度值、G色度值與B色度值分別記錄於R色度直方圖、G色度直方圖與B色度直方圖中。 Step S222 - Recording the R, G, and B chromaticity values in the R, G, and B chromaticity histograms, respectively.

步驟S223-檢視j是否等於該目標物件資料的第i部位的像素數。若是，則進行步驟S224，若否，則令j=j+1，掃描下一個像素。 Step S223 - Checking whether j equals the number of pixels of the i-th part of the target object data. If yes, proceed to step S224; if not, let j = j+1 and scan the next pixel.

步驟S224-將對應於第i部位的R色度直方圖、G色度直方圖與B色度直方圖中的數值讀出來，轉換為R一維陣列、G一維陣列，及B一維陣列。 Step S224 - Reading out the values in the R, G, and B chromaticity histograms corresponding to the i-th part and converting them into an R one-dimensional array, a G one-dimensional array, and a B one-dimensional array.

步驟S225-檢視i是否等於n。若是，則執行步驟S226，若否，則令i=i+1，處理下一個部位。 Step S225 - Checking whether i equals n. If yes, proceed to step S226; if not, let i = i+1 and process the next part.

步驟S226-將第1到第n部位分別乘上一預設的權重後相加，得到R色彩一維陣列、G色彩一維陣列及B色彩一維陣列。 Step S226 - Multiplying the arrays of the first to n-th parts by their respective preset weights and summing them to obtain an R color one-dimensional array, a G color one-dimensional array, and a B color one-dimensional array.

需說明的是，有關各部位的RGB色彩一維陣列也可以直接儲存而未經加權計算，留待步驟S25計算相關度分數時再一併計算。 It should be noted that the per-part RGB color one-dimensional arrays may also be stored directly without the weighting, which is then applied together when the relevance score is calculated in step S25.
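As a minimal sketch of steps S221 to S226 (not the patent's own implementation), the per-part RGB histograms and the weighted merge can be written as follows; the part division and weight values are illustrative assumptions.

```python
# Sketch of steps S221-S226: per-part RGB histograms flattened into
# one-dimensional arrays, then merged with preset per-part weights.
# The part split and the weights are illustrative assumptions.

def color_histograms(part_pixels):
    """part_pixels: list of (r, g, b) tuples for one part of the object.
    Returns three 256-bin histograms (the R/G/B one-dimensional arrays)."""
    r_hist, g_hist, b_hist = [0] * 256, [0] * 256, [0] * 256
    for r, g, b in part_pixels:          # steps S221-S223: scan every pixel
        r_hist[r] += 1
        g_hist[g] += 1
        b_hist[b] += 1
    return r_hist, g_hist, b_hist        # step S224: histogram -> 1-D array

def weighted_merge(per_part_arrays, weights):
    """Step S226: sum the parts' arrays, each scaled by its preset weight."""
    merged = [0.0] * len(per_part_arrays[0])
    for arr, w in zip(per_part_arrays, weights):
        for idx, v in enumerate(arr):
            merged[idx] += w * v
    return merged
```

As the note above observes, `weighted_merge` may instead be deferred and applied only when the relevance score of step S25 is computed.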

另一方面，利用LBP針對目標物件資料進行紋理分析。 On the other hand, texture analysis is performed on the target object data using LBP (local binary patterns).

步驟S231-利用LBP技術處理求目標物件資料第i部位每一個3X3區塊的LBP值。 Step S231 - Using the LBP technique to compute the LBP value of each 3×3 block of the i-th part of the target object data.

步驟S232-將LBP值記錄在一LBP直方圖中。 Step S232 - Record the LBP value in an LBP histogram.

步驟S233-檢視k是否等於該目標物件資料的第i部位的像素數。若是，則進行步驟S234，若否，則令k=k+1，使九宮格移動一格。 Step S233 - Checking whether k equals the number of pixels of the i-th part of the target object data. If yes, proceed to step S234; if not, let k = k+1 and move the 3×3 window one step.

步驟S234-將LBP直方圖中的數值讀出來,得到一個1×256的紋理一維陣列。 Step S234 - Read out the values in the LBP histogram to obtain a 1 x 256 texture one-dimensional array.

步驟S235-檢視i是否等於n，也就是檢視是否已經掃描完目標物件資料所有部位。若是，則執行步驟S236，若否，則令i=i+1，處理下一個部位。 Step S235 - Checking whether i equals n, i.e., whether all parts of the target object data have been scanned. If yes, proceed to step S236; if not, let i = i+1 and process the next part.

步驟S236-將第1到第n部位分別乘上一預設的權重後相加，得到整體的紋理一維陣列。 Step S236 - Multiplying the arrays of the first to n-th parts by their respective preset weights and summing them to obtain the overall texture one-dimensional array.
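Steps S231 to S234 can be sketched with the classic 8-neighbour LBP operator, which yields the 1×256 texture one-dimensional array described above. The clockwise neighbour ordering is a common convention, assumed here rather than taken from the patent.

```python
# Sketch of steps S231-S234: classic 8-neighbour LBP on 3x3 blocks,
# accumulated into a 1x256 texture one-dimensional array.
# The clockwise neighbour order is a common convention, assumed here.

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_value(image, r, c):
    """LBP code of the 3x3 block centred at (r, c): each neighbour whose
    value is >= the centre contributes one bit, in the fixed order above."""
    centre = image[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(NEIGHBOURS):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def texture_array(image):
    """Slide the 3x3 window over all interior pixels (steps S232-S233)
    and read the histogram out as a 1x256 array (step S234)."""
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            hist[lbp_value(image, r, c)] += 1
    return hist
```

A uniform region maps every block to code 255 (all neighbours equal the centre), so a texture-free part concentrates its histogram in a single bin.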

在進行色彩分析以及紋理分析之後，進行步驟S24，儲存目標物件資料的各個一維陣列於記憶單元11，得到一第二特徵值，該第二特徵值是該等一維陣列的集合。 After the color analysis and texture analysis, step S24 is performed: the one-dimensional arrays of the target object data are stored in the memory unit 11, yielding a second feature value, which is the set of these one-dimensional arrays.

步驟S25-使該第二特徵值與一第一特徵值進行比對，計算一相關度分數，其中該第一特徵值是由步驟S19或步驟S200所得出。由於該第一特徵值與第二特徵值皆為一維陣列的集合，各該一維陣列相當於特徵向量，所以本實施例利用餘弦距離（cosine distance）公式計算兩特徵向量之間的距離。 Step S25 - Comparing the second feature value with a first feature value and calculating a relevance score, where the first feature value is obtained from step S19 or step S200. Since the first and second feature values are both sets of one-dimensional arrays and each one-dimensional array is equivalent to a feature vector, this embodiment uses the cosine distance formula to calculate the distance between two feature vectors.
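The cosine comparison named in step S25 can be sketched minimally: each one-dimensional array is treated as a feature vector, and the distance is one minus the cosine of the angle between the two vectors.

```python
# Minimal sketch of the cosine comparison in step S25: each one-dimensional
# array is a feature vector; similarity is the cosine of the angle between
# the two vectors, and distance is its complement.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:               # empty histogram: no similarity
        return 0.0
    return dot / (na * nb)

def cosine_distance(a, b):
    """Distance form used for ranking: 0 for identical directions."""
    return 1.0 - cosine_similarity(a, b)
```

Because cosine distance depends only on direction, two histograms that differ only in overall pixel count (e.g. the same object at two scales) still score as highly related.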

以前述例子詳細來說，各部位的色彩一維陣列以及紋理一維陣列都已分別經過加權處理而合併為整體的色彩一維陣列以及紋理一維陣列。因此，本步驟是使目標物件資料的色彩一維陣列與前景資料的色彩一維陣列進行比對，計算一色彩相關度分數，並使目標物件資料的紋理一維陣列與前景資料的紋理一維陣列進行比對，計算一紋理相關度分數，再使兩種相關度分數各自乘上一預定權重後相加而得到該目標物件資料與該前景資料的相關度分數。 In terms of the foregoing example, the per-part color one-dimensional arrays and texture one-dimensional arrays have each already been weighted and merged into an overall color one-dimensional array and an overall texture one-dimensional array. This step therefore compares the color one-dimensional array of the target object data with that of the foreground data to calculate a color relevance score, and compares the texture one-dimensional array of the target object data with that of the foreground data to calculate a texture relevance score; the two scores are then each multiplied by a predetermined weight and summed to obtain the relevance score between the target object data and the foreground data.

另一方面，本發明也可以在影像處理階段不合併各部位的一維陣列，也就是沒有步驟S176、S186、S226及S236，那麼在本步驟，就須使目標物件資料的各該部位的一維陣列與前景資料對應部位的一維陣列進行比對，計算相關度分數，再使各部位的相關度分數各自乘上預定權重後相加而得到該目標物件資料與該前景資料的相關度分數。色彩一維陣列以及紋理一維陣列皆以相同方式計算出相關度分數之後，再使兩種相關度分數各自乘上一預定權重後相加而得到該目標物件資料與該前景資料的相關度分數。 On the other hand, the present invention may also skip merging the per-part one-dimensional arrays in the image processing stage, i.e., omit steps S176, S186, S226, and S236. In that case, this step compares each part's one-dimensional array of the target object data with the one-dimensional array of the corresponding part of the foreground data to calculate per-part relevance scores, which are each multiplied by a predetermined weight and summed. After relevance scores are computed in the same way for both the color one-dimensional arrays and the texture one-dimensional arrays, the two scores are each multiplied by a predetermined weight and summed to obtain the relevance score between the target object data and the foreground data.

步驟S26-依據該相關度分數進行排序並透過顯示單元12輸出排序結果。例如，顯示單元12顯示相關度分數最高的五張影像資料中的前景資料。 Step S26 - Sorting by the relevance scores and outputting the sorted result through the display unit 12. For example, the display unit 12 displays the foreground data of the five image data with the highest relevance scores.
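The weighted score fusion of step S25 and the ranking of step S26 can be sketched together; the weight values and the top-5 cutoff below are illustrative assumptions matching the example in the text.

```python
# Sketch of steps S25-S26: colour and texture relevance scores are each
# multiplied by a preset weight and summed, then candidates are ranked.
# The weight values are assumptions for illustration only.

def fuse_scores(colour_score, texture_score,
                w_colour=0.6, w_texture=0.4):   # illustrative weights
    return w_colour * colour_score + w_texture * texture_score

def rank_foregrounds(scored, top_k=5):
    """scored: list of (foreground_id, relevance_score) pairs.
    Returns the top_k ids, highest relevance first (step S26)."""
    return [fid for fid, s in
            sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]]
```

The same fusion pattern applies one level down in the unmerged-array variant: per-part scores are weighted and summed first, then the colour and texture totals are fused.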

歸納上述，利用本發明影像處理方法將錄影資料轉換成多數張影像資料並以陣列形式儲存為特徵值，因此在物件特徵比對方法中可以方便地進行相關度運算，高效率地找出高度相關的物件，獲得快速且辨識效果佳的自動化辨識效果，可提供偵辦人員在影像比對上相當大的助益。 In summary, the image processing method of the present invention converts video recordings into a plurality of image data stored as feature values in array form, so that relevance computations can be performed conveniently in the object feature comparison method, highly relevant objects can be found efficiently, and a fast, accurate automated identification effect is obtained, providing investigators with considerable help in image comparison.

此外，根據使用者的選擇而形成的該目標物件資料再針對該目標物件資料來產生該第二特徵值，並配合各項篩選方式縮少比對的數量，再進行相關度運算，使得最終比對結果能夠更為精確，故確實能達成本發明之目的。 In addition, the second feature value is generated from the target object data formed according to the user's selection, and the various screening methods reduce the number of comparisons before the relevance computation is performed, so that the final comparison result is more precise; the objectives of the present invention are thus indeed achieved.

惟以上所述者，僅為本發明之較佳實施例而已，當不能以此限定本發明實施之範圍，即大凡依本發明申請專利範圍及專利說明書內容所作之簡單的等效變化與修飾，皆仍屬本發明專利涵蓋之範圍內。 The foregoing is merely a preferred embodiment of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the specification of the present invention remain within the scope covered by this patent.

S211~S214‧‧‧步驟 S211~S214‧‧‧Steps

Claims (4)

一種物件篩選方法，包含以下步驟：(a)對一錄影資料計算出複數張影像資料的前景資料；(b)分析該等影像資料中的前景資料中的物件，將具有相同之移動物件特徵的複數物件歸屬於相同的物件；及(c)接收一由使用者輸入之三個以上的位置所界定的選定條件，並判斷各該物件在移動過程中是否符合該選定條件來排除或保留各該物件。 An object screening method, comprising the steps of: (a) computing foreground data of a plurality of image data from a video recording; (b) analyzing the objects in the foreground data of the image data and assigning a plurality of objects having the same moving-object characteristics to the same object; and (c) receiving a selected condition defined by three or more positions input by a user, and determining whether each object satisfies the selected condition during its movement, so as to exclude or retain each object. 如請求項1所述物件篩選方法，其中，該等輸入之位置的連線圍繞出一以該等位置為頂點的區域，該選定條件為：若影像資料中前景資料的物件在移動過程中有落入該區域時，則保留該物件，否則排除該物件。 The object screening method of claim 1, wherein the connecting lines of the input positions enclose a region having those positions as vertices, and the selected condition is: if an object of the foreground data in the image data enters the region during its movement, the object is retained; otherwise the object is excluded. 如請求項1所述物件篩選方法，其中，該等輸入之位置的連線圍繞出一以該等位置為頂點的區域，該選定條件為：若影像資料中前景資料的物件在移動過程中有落入該區域時，則排除該物件，否則保留該物件。 The object screening method of claim 1, wherein the connecting lines of the input positions enclose a region having those positions as vertices, and the selected condition is: if an object of the foreground data in the image data enters the region during its movement, the object is excluded; otherwise the object is retained. 如請求項1所述物件篩選方法，其中，該等輸入之位置分別為第一位置、第二位置及第三位置，第一位置及第二位置定義出一線段，第三位置定義出一垂直該線段且朝向該第三位置之一側的向量，該選定條件為：若影像資料中前景資料的物件在移動過程中有通過該線段且於通過時是沿該向量之方向前進或是前進方向與該向量的夾角小於一預設值，則保留該物件，否則排除該物件。 The object screening method of claim 1, wherein the input positions are respectively a first position, a second position, and a third position, the first and second positions defining a line segment, and the third position defining a vector perpendicular to the line segment and pointing toward the side on which the third position lies; the selected condition is: if an object of the foreground data in the image data crosses the line segment during its movement and, when crossing, advances in the direction of the vector or in a direction whose angle with the vector is smaller than a preset value, the object is retained; otherwise the object is excluded.
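Claim 4's line-crossing condition can be illustrated with a small geometric sketch: two positions define a segment, a third chooses the side of the normal vector, and a track is kept only if it crosses the segment while moving within a preset angle of that vector. This is not taken from the patent text; the intersection test and the 45° default threshold are assumptions for demonstration.

```python
# Illustrative sketch of claim 4: segment crossing plus a direction-angle
# test against the normal vector chosen by the third clicked position.
# The 45-degree default threshold is an assumption, not the patent's value.
import math

def _cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def crosses_segment(p, q, a, b):
    """True if the move p->q properly crosses segment a-b."""
    return (_cross(a, b, p) * _cross(a, b, q) < 0 and
            _cross(p, q, a) * _cross(p, q, b) < 0)

def passes_claim4(p, q, a, b, third, max_angle_deg=45.0):
    if not crosses_segment(p, q, a, b):
        return False
    # Normal to a-b, flipped so it points toward the third position.
    nx, ny = b[1]-a[1], a[0]-b[0]
    tx, ty = third[0]-a[0], third[1]-a[1]
    if nx*tx + ny*ty < 0:
        nx, ny = -nx, -ny
    # Angle between the movement direction and that normal vector.
    dx, dy = q[0]-p[0], q[1]-p[1]
    cos_angle = ((nx*dx + ny*dy) /
                 (math.hypot(nx, ny) * math.hypot(dx, dy)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle < max_angle_deg
```

Crossing the segment from left to right with the third point on the right side retains the object, while the reverse crossing is excluded, which is the asymmetry the claim encodes.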
TW103101139A 2014-01-13 2014-01-13 Image foreground object screening method TWI517055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103101139A TWI517055B (en) 2014-01-13 2014-01-13 Image foreground object screening method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103101139A TWI517055B (en) 2014-01-13 2014-01-13 Image foreground object screening method

Publications (2)

Publication Number Publication Date
TW201528160A TW201528160A (en) 2015-07-16
TWI517055B true TWI517055B (en) 2016-01-11

Family

ID=54198319

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103101139A TWI517055B (en) 2014-01-13 2014-01-13 Image foreground object screening method

Country Status (1)

Country Link
TW (1) TWI517055B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI612482B (en) * 2016-06-28 2018-01-21 圓展科技股份有限公司 Target tracking method and target tracking device
CN110569690B (en) * 2018-06-06 2022-05-13 浙江宇视科技有限公司 Target information acquisition method and device
TWI716111B (en) * 2019-09-23 2021-01-11 大陸商北京集創北方科技股份有限公司 Image acquisition quality evaluation method and system

Also Published As

Publication number Publication date
TW201528160A (en) 2015-07-16

Similar Documents

Publication Publication Date Title
Zuffi et al. Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images "In the Wild"
Yang et al. Real-time face detection based on YOLO
US11830246B2 (en) Systems and methods for extracting and vectorizing features of satellite imagery
US9898686B2 (en) Object re-identification using self-dissimilarity
Choi et al. Thermal image enhancement using convolutional neural network
JP5726125B2 (en) Method and system for detecting an object in a depth image
JP6351240B2 (en) Image processing apparatus, image processing method, and program
TW200841276A (en) Image processing methods
US20230099984A1 (en) System and Method for Multimedia Analytic Processing and Display
US10489640B2 (en) Determination device and determination method of persons included in imaging data
US7747079B2 (en) Method and system for learning spatio-spectral features in an image
JP6095817B1 (en) Object detection device
JP5936561B2 (en) Object classification based on appearance and context in images
Karaimer et al. Combining shape-based and gradient-based classifiers for vehicle classification
TWI517055B (en) Image foreground object screening method
Fang et al. Background subtraction based on random superpixels under multiple scales for video analytics
Diaz et al. Detecting dynamic objects with multi-view background subtraction
CN109064444B (en) Track slab disease detection method based on significance analysis
Chang et al. Single-shot person re-identification based on improved random-walk pedestrian segmentation
Chen et al. Illumination-invariant video cut-out using octagon sensitive optimization
JP6276504B2 (en) Image detection apparatus, control program, and image detection method
JP6884546B2 (en) Attribute judgment device, attribute judgment method, computer program, and recording medium
JP2007139421A (en) Morphological classification device and method, and variation region extraction device and method
JP6814374B2 (en) Detection method, detection program and detection device
Vasconcelos et al. Towards deep learning invariant pedestrian detection by data enrichment

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees