TWI442326B - Image classification method and image classification system

Image classification method and image classification system

Info

Publication number
TWI442326B
Authority
TW
Taiwan
Prior art keywords
state
probability
image recognition
image
current
Prior art date
Application number
TW98141160A
Other languages
Chinese (zh)
Other versions
TW201120764A (en)
Inventor
Shih Shinh Huang
Shao Chung Hu
Min Fang Lo
Original Assignee
Chung Shan Inst Of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chung Shan Inst Of Science filed Critical Chung Shan Inst Of Science
Priority to TW98141160A priority Critical patent/TWI442326B/en
Publication of TW201120764A publication Critical patent/TW201120764A/en
Application granted granted Critical
Publication of TWI442326B publication Critical patent/TWI442326B/en


Description

Image recognition method and image recognition system

The present invention relates to an image recognition method and an image recognition system, and more particularly to an image recognition method and image recognition system that dynamically adjust feature extraction and take the temporal dependence of recognition into account.

The design and invention of the airbag has effectively reduced injuries in traffic accidents, but airbag deployment itself can also injure occupants (for example, the driver or passengers). Early airbags did not tailor the deployment force to different classes of occupants, which often resulted in deployment injuries or insufficient protection. In view of this, many companies and academic institutions are developing sensors that detect the occupant state of a seat so that airbag deployment can be controlled correctly and deployment injuries avoided.

The sensors most commonly used today are weight or pressure sensors, ultrasonic sensors, and camera sensors. Weight or pressure sensors sense the weight or pressure on a seat to identify the occupant state and inform the inflation speed and force when the airbag deploys; however, the main function of such sensors is to detect whether an object is present, and they cannot measure the distance between the object and the airbag. Ultrasonic sensors measure the distance to a target object by transmitting and receiving ultrasonic waves and can accurately detect the distance between a passenger and the airbag, but, like weight sensors, they cannot further classify the object that is present.

To overcome these difficulties and achieve effective recognition and detection of the occupant state, many computer-vision recognition techniques have been proposed. They can be roughly divided into two categories: monocular-vision occupant classification and stereo-vision occupant classification.

Monocular-vision techniques use a single camera to recognize the state of the seat and, depending on the approach, can be divided into template matching methods and classifiers based on machine learning. Template matching methods collect an image database for each occupant class, construct a corresponding template model for every image, and then use a template-comparison algorithm to find the template most similar to the current image, thereby classifying and recognizing the image.

However, template matching assumes that the templates are mutually independent, so a high recognition rate requires collecting enough templates for every class, which greatly reduces recognition efficiency. Machine-learning-based classification instead uses the theoretical framework of machine learning to obtain a classifier for each class and uses these classifiers to categorize the different classes.

Stereo-vision techniques imitate human vision to achieve stereoscopic perception. The basic principle is to use two cameras placed a distance apart to sense objects in three-dimensional space; the images of an object can be acquired and its distance computed at the same time. This provides more precise recognition features than monocular-vision techniques.

Whether monocular or stereo vision is applied to occupant state recognition, the process can be roughly divided into three steps: foreground segmentation, feature extraction, and occupant classification. However, because of lighting fluctuations in the scene, the brightness and color of the images sensed by the camera are not constant; moreover, the background may change over time, and in an actual vehicle the external scene varies in complex ways. As a result, the prior art cannot effectively cope with the effect that scene changes over time have on occupant state recognition.

In view of this, one aspect of the present invention is to provide an image recognition method that dynamically adjusts feature extraction and takes the temporal dependence of recognition into account, thereby improving the recognition rate and solving the problems encountered in the prior art.

The image recognition method comprises the following steps: (a) capturing a current image of a target area; (b) computing a first feature vector in the current image according to a first effective recognition region corresponding to a first classification criterion; (c) computing a first state probability of the target area with respect to the first classification criterion according to a first weight distribution corresponding to the first classification criterion, the first feature vector, a state transition rule, and a previous state of the target area; and (d) determining a current state of the target area according to the first state probability.
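
As a rough illustration of steps (a) through (d), the following Python sketch outlines one recognition cycle with several classifiers. All function and variable names are hypothetical, and the measurement model shown is a generic placeholder rather than the Tchebichef-moment classifier described later in this document.

import numpy as np

# Hypothetical sketch of steps (a)-(d); names and models are placeholders,
# not the patented implementation.

def extract_feature(frame, region):
    # step (b): crop the effective recognition region and flatten it
    y0, y1, x0, x1 = region
    return frame[y0:y1, x0:x1].astype(float).ravel()

def measurement_model(feature, weights):
    # placeholder classifier score in (0, 1); the patent instead uses
    # weighted Tchebichef-moment features selected by Adaboost
    return 1.0 / (1.0 + np.exp(-np.dot(weights, feature)))

def recognize(frame, classifiers, transition, prev_belief):
    # classifiers: one (region, weights) pair per classification criterion;
    # transition[i, j] = P(state j at time t | state i at time t-1);
    # prev_belief: state probabilities from the previous time step
    scores = np.array([measurement_model(extract_feature(frame, r), w)
                       for r, w in classifiers])
    # step (c): combine classifier scores with the state transition rule
    # and the previous state distribution
    belief = scores * (transition.T @ prev_belief)
    belief /= belief.sum()
    return int(np.argmax(belief)), belief          # step (d)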

In practice more than one classifier is used. Step (b) therefore further comprises computing a second feature vector in the current image according to a second effective recognition region corresponding to a second classification criterion; step (c) further comprises computing a second state probability of the target area with respect to the second classification criterion according to a second weight distribution corresponding to the second classification criterion, the second feature vector, a state transition rule, and a previous state of the target area; and step (d) determines the current state of the target area according to both the first state probability and the second state probability.

Because the image recognition method of the present invention uses the state transition rule, the previous state of the target area is taken into account when determining its current state, which avoids unreasonable state transitions.

In addition, the image recognition method of the present invention uses a Rao-Blackwellised particle filter to dynamically adjust the first effective recognition region and the first weight distribution, which overcomes the effect of scene changes over time on recognition and also tolerates translation, rotation, and scale changes in the captured images.

Another aspect of the present invention is to provide an image recognition system for carrying out the image recognition method of the present invention. The image recognition system comprises an image capture unit, a storage unit, and a data processing unit. The image capture unit captures the current image of the target area. The storage unit stores the current image, the first effective recognition region and first weight distribution corresponding to the first classification criterion, the state transition rule, the previous state of the target area, and other data buffered during recognition. The data processing unit is electrically connected to the image capture unit and the storage unit and performs the computations and decisions required by the image recognition method.

Therefore, the image recognition method and image recognition system of the present invention can overcome the effect that scene changes over time have on recognition and avoid unreasonable state transitions, thereby improving the recognition rate and solving the problems of the prior art.

The advantages and spirit of the present invention will be further understood from the following detailed description and the accompanying drawings.

Please refer to FIG. 1 and FIG. 2. FIG. 1 is a flow chart of an image recognition method according to an embodiment of the present invention, and FIG. 2 is a functional block diagram of the image recognition system 1 according to that embodiment. In this embodiment the image recognition method is applied to occupant state recognition, so the image recognition system 1 can be integrated directly with the vehicle computer; in other words, the data processing unit 122 and the storage unit 124 of the image recognition system 1 can be regarded as part of the vehicle computer 12, while the image capture unit 14 (for example, a camera) of the image recognition system 1 is installed inside the vehicle and connected to the vehicle computer 12.

In this embodiment, the image recognition method first captures a current image of a target area with the image capture unit 14, as shown in step S102, where the target area is the area occupied by a seat in the vehicle (see FIG. 3). Next, the data processing unit 122 computes the corresponding feature vectors in the current image according to the effective recognition regions corresponding to the different classification criteria, as shown in step S104. The captured image and the data required for the computations can be stored in the storage unit 124, as in the subsequent steps.

Because the image features differ between occupant states, the image region that matters for recognition is usually not the entire captured image. Recognizing different occupant states therefore uses different regions of interest, that is, different effective recognition regions for different classification criteria, from which the probability of each occupant state can be evaluated. In this embodiment the occupant state (that is, the state of the target area) can be an empty-seat state, an object-placed state, a rear-facing infant seat state, a forward-facing child seat state, a child-occupant state, or an adult-occupant state.

The different effective recognition regions described above are established by maximizing the difference between the images obtained for each occupant state; the effective recognition regions and the feature points are determined mainly by a machine learning algorithm. Please also refer to FIG. 3, a schematic diagram of the effective recognition regions for the occupant states. Before recognition is performed, a training procedure can be run to determine the effective recognition regions: a large number of sample images of each occupant state are collected, the Hausdorff distance is used to compute the set distance corresponding to each effective recognition region, the regions are adjusted, and the set distance is computed iteratively to determine the data needed for each effective recognition region in the subsequent recognition procedure. In this embodiment, FIG. 3 shows the effective recognition regions determined in the training procedure: the object-placed state R1, the rear-facing infant seat state R2, the forward-facing child seat state R3, the child-occupant state R4, and the adult-occupant state R5; the invention, however, is not limited to these.
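
For reference, the symmetric Hausdorff distance between two point sets can be computed as in the Python sketch below. This shows only the basic set-distance computation; how the training procedure aggregates these distances across sample images to adjust the regions is not specified here, and the example point sets are purely illustrative.

import numpy as np

def directed_hausdorff(A, B):
    # max over points a in A of the distance to the nearest point b in B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff_distance(A, B):
    # symmetric Hausdorff distance between two (N, 2) arrays of points
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# hypothetical usage: point sets extracted from two sample images
A = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
B = np.array([[0, 0], [1, 1]], dtype=float)
print(hausdorff_distance(A, B))   # 1.0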

In addition, each effective recognition region is represented by Tchebichef moments to form a feature vector; to meet the needs of real-time recognition, the Adaboost algorithm is used to reduce the dimensionality of the feature vector, and the weight distribution of the corresponding feature vector is computed at the same time. Note that the feature vector computed in step S104 is likewise represented by Tchebichef moments, and only the important elements are retained as components of the feature vector.
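
As a point of reference, the discrete Tchebichef moments of an image patch can be computed from the discrete Tchebichef (Chebyshev) polynomials via their standard three-term recurrence, as in the minimal Python sketch below. The moment order (6) and the unit-norm normalization are assumptions for illustration; the patent does not specify them.

import numpy as np

def tchebichef_basis(N, order):
    # rows are the discrete Tchebichef polynomials t_0..t_{order-1}
    # evaluated at x = 0..N-1, then scaled to unit norm
    x = np.arange(N, dtype=float)
    T = np.zeros((order, N))
    T[0] = 1.0
    if order > 1:
        T[1] = 2.0 * x + 1.0 - N
    for n in range(2, order):
        # n*t_n = (2n-1)(2x-N+1)*t_{n-1} - (n-1)(N^2-(n-1)^2)*t_{n-2}
        T[n] = ((2 * n - 1) * (2 * x - N + 1) * T[n - 1]
                - (n - 1) * (N ** 2 - (n - 1) ** 2) * T[n - 2]) / n
    return T / np.linalg.norm(T, axis=1, keepdims=True)

def tchebichef_moments(patch, order=6):
    # T_pq = sum_y sum_x t_p(y) t_q(x) f(y, x)
    H, W = patch.shape
    Ty = tchebichef_basis(H, order)
    Tx = tchebichef_basis(W, order)
    return Ty @ patch @ Tx.T

# hypothetical usage: moments of one effective recognition region,
# flattened into a feature vector before Adaboost feature selection
patch = np.random.rand(32, 32)
feature = tchebichef_moments(patch).ravel()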

In this embodiment, the image recognition method then uses the data processing unit 122 to compute the state probability of the target area for each classification criterion according to the weight distribution corresponding to that criterion, the feature vector computed above, a state transition rule, and the previous state of the target area, as shown in step S106. The state transition rule constrains the transitions between occupant states with probabilities, that is, the transition probabilities between occupant states, so that the dependence between adjacent time points is taken into account; it can be described by a finite state machine.

As shown in FIG. 4, based on actual behavior, any transition between the states other than the empty-seat state (the rear-facing infant seat, forward-facing child seat, child-occupant, and adult-occupant states) must pass through the empty-seat state. Moreover, because of temporal continuity and state dependence, each state has a high probability of remaining in its current state, and the probability of a direct transition between any two states other than the empty-seat state is zero. Taking this state transition rule into account therefore makes recognition better match how the occupant state actually changes, improving recognition accuracy.
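
The constraints of FIG. 4 can be encoded as a transition probability matrix in which the non-empty states communicate only through the empty-seat state. The Python sketch below uses illustrative probability values (the patent only fixes which entries are zero) and shows the corresponding state-probability update feeding the decision of step S108.

import numpy as np

states = ["empty", "rear_infant_seat", "forward_child_seat", "child", "adult"]

# P[i, j] = P(next state = j | current state = i); values are illustrative
stay, leave = 0.9, 0.1
P = np.zeros((5, 5))
P[0, 0] = 0.6                      # empty seat may stay empty
P[0, 1:] = 0.1                     # or move to any occupied state
for i in range(1, 5):
    P[i, i] = stay                 # occupied states tend to persist
    P[i, 0] = leave                # and may only change via the empty state
# all other entries stay zero: no direct occupied-to-occupied transition

def update_belief(prev_belief, likelihood):
    # p(s_t | z_1:t) is proportional to
    # p(z_t | s_t) * sum_s p(s_t | s_{t-1}) p(s_{t-1} | z_1:t-1)
    belief = likelihood * (P.T @ prev_belief)
    return belief / belief.sum()

belief = np.full(5, 0.2)                                  # uniform prior
likelihood = np.array([0.05, 0.1, 0.1, 0.2, 0.8])         # classifier scores
belief = update_belief(belief, likelihood)
print(states[int(np.argmax(belief))])                     # decision of step S108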

Finally, the image recognition method uses the data processing unit 122 to take the state probabilities computed above for the classification criteria and, in principle, selects the state with the highest probability as the current state of the target area, as shown in step S108.

It should be added that, while the vehicle is moving, the background of the target area changes in complex ways and the lighting varies sharply, the appearance of passengers is highly varied, and the installed image capture unit 14 may, because of vibration or other causes, end up at a shooting angle different from the preset one, so that the effective-recognition-region parameters set during the training procedure may no longer apply.

Therefore, to improve the stability and accuracy of occupant state recognition, and because the occupant's position in the image changes over time, the image recognition method of the present invention integrates all of this information in a Bayesian network architecture and uses Rao-Blackwellised particle filtering (RBPF) to dynamically adjust the effective recognition regions, the feature vectors, and their corresponding weight distributions. In this way, even if the shooting angle of the image capture unit 14 changes during use, producing translation, rotation, and scale changes, the image of the required effective recognition region can still be extracted from the captured image and the preceding steps carried out.
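
To give a feel for the sampled part of such a filter, the Python sketch below propagates, weights, and resamples pose hypotheses (translation, rotation, scale) for an effective recognition region. This is a plain sampling-importance-resampling particle filter, not the full Rao-Blackwellised formulation, which additionally marginalizes part of the state analytically; score_region, the noise scales, and the particle count are all hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)
M = 200                                   # number of particles (assumed)

# each particle holds a pose hypothesis for the effective recognition
# region: (dx, dy) translation, rotation angle, scale
particles = np.zeros((M, 4))
particles[:, 3] = 1.0                     # initial scale = 1
weights = np.full(M, 1.0 / M)

def step(frame, particles, weights, score_region):
    # 1. propagate pose hypotheses with small random motion
    noise = rng.normal(0.0, [2.0, 2.0, 0.02, 0.01], size=particles.shape)
    particles = particles + noise
    # 2. weight each hypothesis by how well the region warped to that pose
    #    explains the frame (score_region: caller-supplied, positive-valued
    #    measurement model based on the classifiers)
    weights = np.array([score_region(frame, p) for p in particles])
    weights = weights / weights.sum()
    # 3. resample to concentrate particles on likely poses
    idx = rng.choice(M, size=M, p=weights)
    return particles[idx], np.full(M, 1.0 / M)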

In addition, based on the above, integrating the training procedure that precedes recognition with the recognition procedure itself yields the flow chart of the technical architecture of this case, shown in FIG. 5. Here the classifier is the collective term for the classification criterion, the effective recognition region, and the weight distribution; the measurement model provides decision information based directly on the classifiers; and the RBPF inference framework integrates the measurement model and the state transition rule to determine the occupant state.

As described in the above embodiments, the image recognition method and image recognition system of the present invention can overcome the effect that scene changes over time have on recognition and avoid unreasonable state transitions, thereby improving the recognition rate, providing correct information for other operations (for example, airbag deployment), and solving the problems of the prior art.

The detailed description of the preferred embodiments above is intended to describe the features and spirit of the present invention more clearly, not to limit the scope of the invention to the preferred embodiments disclosed. On the contrary, the intention is to cover various modifications and equivalent arrangements within the scope of the claims of the present invention.

1 . . . image recognition system

12 . . . vehicle computer

14 . . . image capture unit

122 . . . data processing unit

124 . . . storage unit

R1~R5 . . . occupant states

S102~S108 . . . steps

FIG. 1 is a flow chart of an image recognition method according to an embodiment of the present invention.

FIG. 2 is a functional block diagram of the image recognition system according to that embodiment.

FIG. 3 is a schematic diagram of the effective recognition regions for the occupant states.

FIG. 4 is a schematic diagram of the finite state machine of the occupant states.

FIG. 5 is a flow chart of the technical architecture of this case.

S102~S108 . . . steps

Claims (13)

1. An image recognition method, comprising the following steps: (a) capturing a current image of a target area; (b) computing a first feature vector in the current image according to a first effective recognition region corresponding to a first classification criterion; (c) computing a first state probability of the target area with respect to the first classification criterion according to a first weight distribution corresponding to the first classification criterion, the first feature vector, a state transition rule, and a previous state of the target area, wherein the state transition rule comprises a first state, a second state, a first state transition probability of transitioning from the first state to the second state, a second state transition probability of transitioning from the second state to the first state, a third state, a third state transition probability of transitioning from the third state to the first state, and a fourth state transition probability of transitioning from the first state to the third state, and the state transition probability of transitioning from the third state to the second state or from the second state to the third state is zero; and (d) determining a current state of the target area according to the first state probability.

2. The image recognition method of claim 1, further comprising the step of: reducing the dimensionality of the first feature vector with an Adaboost algorithm.

3. The image recognition method of claim 1, wherein step (b) further comprises computing a second feature vector in the current image according to a second effective recognition region corresponding to a second classification criterion, step (c) further comprises computing a second state probability of the target area with respect to the second classification criterion according to a second weight distribution corresponding to the second classification criterion, the second feature vector, a state transition rule, and a previous state of the target area, and step (d) determines the current state of the target area according to the first state probability and the second state probability.

4. The image recognition method of claim 1, wherein the target area is a vehicle seat and the current state is selected from the group consisting of an empty-seat state, an object-placed state, a rear-facing infant seat state, a forward-facing child seat state, a child-occupant state, and an adult-occupant state.

5. The image recognition method of claim 1, wherein the first state is an empty-seat state.
6. The image recognition method of claim 1, further comprising the step of: in step (b) and step (c), dynamically adjusting the first effective recognition region and the first weight distribution with a Rao-Blackwellised particle filter.

7. The image recognition method of claim 1, further comprising, before step (a), the following steps: providing a plurality of sample images of the target area; and computing, from the plurality of sample images, the first effective recognition region and the first weight distribution for the first classification criterion.

8. An image recognition system, comprising: an image capture unit for capturing a current image of a target area; a storage unit for storing the current image, a first effective recognition region and a first weight distribution corresponding to a first classification criterion, a state transition rule, and a previous state of the target area, wherein the state transition rule comprises a first state, a second state, a first state transition probability of transitioning from the first state to the second state, a second state transition probability of transitioning from the second state to the first state, a third state, a third state transition probability of transitioning from the third state to the first state, and a fourth state transition probability of transitioning from the first state to the third state, and the state transition probability of transitioning from the third state to the second state or from the second state to the third state is zero; and a data processing unit electrically connected to the image capture unit and the storage unit, wherein the data processing unit computes a first feature vector in the current image according to the first effective recognition region corresponding to the first classification criterion, then computes a first state probability of the target area with respect to the first classification criterion according to the first weight distribution corresponding to the first classification criterion, the first feature vector, a state transition rule, and a previous state of the target area, and finally determines a current state of the target area according to the first state probability.

9. The image recognition system of claim 8, wherein the data processing unit reduces the dimensionality of the first feature vector with an Adaboost algorithm.
10. The image recognition system of claim 8, wherein the data processing unit also computes a second feature vector in the current image according to a second effective recognition region corresponding to a second classification criterion, then computes a second state probability of the target area with respect to the second classification criterion according to a second weight distribution corresponding to the second classification criterion, the second feature vector, a state transition rule, and a previous state of the target area, and finally determines the current state of the target area according to both the first state probability and the second state probability.

11. The image recognition system of claim 8, wherein the target area is a vehicle seat and the current state is selected from the group consisting of an empty-seat state, an object-placed state, a rear-facing infant seat state, a forward-facing child seat state, a child-occupant state, and an adult-occupant state.

12. The image recognition system of claim 8, wherein the first state is an empty-seat state.

13. The image recognition system of claim 8, wherein the data processing unit dynamically adjusts the first effective recognition region and the first weight distribution with a Rao-Blackwellised particle filter and stores them in the storage unit.
TW98141160A 2009-12-02 2009-12-02 Image classification method and image classification system TWI442326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98141160A TWI442326B (en) 2009-12-02 2009-12-02 Image classification method and image classification system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98141160A TWI442326B (en) 2009-12-02 2009-12-02 Image classification method and image classification system

Publications (2)

Publication Number Publication Date
TW201120764A TW201120764A (en) 2011-06-16
TWI442326B true TWI442326B (en) 2014-06-21

Family

ID=45045294

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98141160A TWI442326B (en) 2009-12-02 2009-12-02 Image classification method and image classification system

Country Status (1)

Country Link
TW (1) TWI442326B (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI759286B (en) * 2016-03-17 2022-04-01 加拿大商艾維吉隆股份有限公司 System and method for training object classifier by machine learning
TWI734449B (en) * 2020-04-21 2021-07-21 財團法人工業技術研究院 Method of labelling features for image recognition and apparatus thereof
US11409999B2 (en) 2020-04-21 2022-08-09 Industrial Technology Research Institute Method of labelling features for image recognition and apparatus thereof

Also Published As

Publication number Publication date
TW201120764A (en) 2011-06-16

Similar Documents

Publication Publication Date Title
US10953850B1 (en) Seatbelt detection using computer vision
EP1687754B1 (en) System and method for detecting an occupant and head pose using stereo detectors
US20040220705A1 (en) Visual classification and posture estimation of multiple vehicle occupants
CN105716567B (en) The method for obtaining equipment sensing object and motor vehicles distance by single eye images
JP5493108B2 (en) Human body identification method and human body identification device using range image camera
WO2003091941A1 (en) High-performance object detection with image data fusion
WO2013008303A1 (en) Red-eye detection device
JP6739672B2 (en) Physical constitution estimation device and physical constitution estimation method
US20060280336A1 (en) System and method for discriminating passenger attitude in vehicle using stereo image junction
US8560179B2 (en) Adaptive visual occupant detection and classification system
US7295123B2 (en) Method for detecting a person in a space
CN110909561A (en) Eye state detection system and operation method thereof
TWI442326B (en) Image classification method and image classification system
CN104823218A (en) System and method for detecting pedestrians using a single normal camera
Haselhoff et al. Radar-vision fusion for vehicle detection by means of improved haar-like feature and adaboost approach
Baltaxe et al. Marker-less vision-based detection of improper seat belt routing
CN113361452B (en) Driver fatigue driving real-time detection method and system based on deep learning
Lee et al. Stereovision-based real-time occupant classification system for advanced airbag systems
TWI447655B (en) An image recognition method
Faber Seat occupation detection inside vehicles
KR102440041B1 (en) Object recognition apparatus with customized object detection model
Hu Robust seatbelt detection and usage recognition for driver monitoring systems
US20090304263A1 (en) Method for classifying an object using a stereo camera
Klomark Occupant detection using computer vision
TWI465961B (en) Intelligent seat passenger image sensing device