TWI718442B - Convolutional neural networks identification efficiency increasing method and related convolutional neural networks identification efficiency increasing device - Google Patents
- Publication number: TWI718442B
- Application number: TW107141377A
- Authority: TW (Taiwan)
- Prior art keywords: group, neural network, foreground, pixels, input image
Description
The present invention provides an image recognition method and device, and more specifically a method and a device for improving neural-network recognition performance in image recognition.
Traditional image recognition techniques based on neural-network computation feed the raw surveillance image directly into the network. The raw surveillance image carries a large amount of information, which severely limits recognition performance. Even when a small region of interest is cropped from the raw image to reduce the amount of data and speed up computation, the object to be detected inside that region is still affected by the surrounding complex background, so an accurate recognition result cannot be obtained quickly. How to design a method that improves neural-network recognition performance is therefore one of the key development topics of the surveillance industry.
The present invention provides a method and a device for improving neural-network recognition performance in image recognition, so as to solve the above problems.
The claims of the present invention disclose a method for improving neural-network recognition performance, which includes analyzing an input image to obtain foreground information, generating a foreground mask from the foreground information, and converting the input image into an output image through the foreground mask. The output image serves as the input data for neural-network recognition, improving object recognition performance.
The claims further disclose a device for improving neural-network recognition performance, which includes an image generator and a processor. The image generator acquires an input image. The processor is electrically connected to the image generator; it analyzes the input image to obtain foreground information, generates a foreground mask from the foreground information, and converts the input image into an output image through the foreground mask, which effectively improves the object recognition performance of neural-network algorithms in complex environments. The output image serves as the input data for neural-network recognition.
The method and device of the present invention first separate foreground information from the input image and define a foreground mask according to the pixel-value distribution of the foreground information. Converting the input image through the foreground mask effectively filters out unnecessary information, and the resulting output image, used as the input data for neural-network recognition, improves recognition accuracy. The input image is not limited to a particular color model such as RGB, YUV, HSL, or HSV. Because the foreground information, the foreground mask, and the output image are all derived from per-pixel operations on the input image, they all have substantially the same image size. In addition, the gray-level values of the output image can optionally be limited to a specific range, which reduces the storage capacity required by the device and allows large volumes of image data to be processed more efficiently.
10: neural-network recognition performance improvement device
12: image generator
14: processor
I: surveillance frame
I1: input image
I2: foreground information
I3: foreground mask
I4: output image
H: histogram of the foreground information
H1: first histogram model
H2: second histogram model
S1: first group
S2: second group
S200, S202, S204, S206, S208, S210: steps
S700, S702, S704, S706, S708, S710, S712, S714, S716: steps
Fig. 1 is a functional block diagram of a neural-network recognition performance improvement device according to an embodiment of the present invention.
Fig. 2 is a flowchart of a neural-network recognition performance improvement method according to an embodiment of the present invention.
Figs. 3 to 6 are schematic diagrams of the input image at different conversion stages according to an embodiment of the present invention.
Fig. 7 is a flowchart of generating the foreground mask according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of the histogram of the foreground information according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of the pixel-distribution types used to resolve the foreground mask according to an embodiment of the present invention.
Please refer to Fig. 1, a functional block diagram of a neural-network recognition performance improvement device 10 according to an embodiment of the present invention. The device 10 includes an image generator 12 and a processor 14 electrically connected together. The image generator 12 acquires the input image I1. The image generator 12 can be an image capture unit that directly acquires image information of the monitored area as the input image I1, or an image receiver that receives, over a wired or wireless link, image information produced by an external capture unit as the input image I1. The input image I1 is mainly used for object recognition based on convolutional neural networks (CNN); the processor 14 therefore executes a recognition performance improvement method that effectively improves the object recognition performance of neural-network algorithms in complex environments.
Please refer to Figs. 2 to 6. Fig. 2 is a flowchart of the neural-network recognition performance improvement method according to an embodiment of the present invention, and Figs. 3 to 6 are schematic diagrams of the input image I1 at different conversion stages. The method of Fig. 2 is applicable to the device 10 of Fig. 1. First, steps S200 and S202 acquire the surveillance frame I of the monitored area and use object detection to select the region of the input image I1 within the frame I. In the embodiment of Fig. 3, a small region inside the surveillance frame I is selected as the input image I1, but practical applications are not limited to this; for example, the entire surveillance frame I can be used as the input image I1. Next, steps S204 and S206 generate background information for the input image I1 and compute the difference between the input image I1 and the background information to obtain the foreground information I2. The background information can be built with a mixture-of-Gaussians (MOG) model, with background subtraction based on a neural-network algorithm, or with any other algorithm.
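As an illustrative sketch only, not the patented implementation, the background-difference step of S204 and S206 might look like the following. The temporal-mean background model is an assumption standing in for MOG or a CNN-based background model, and numpy is assumed available:

```python
import numpy as np

def estimate_background(frames):
    """Crude background model: per-pixel temporal mean over recent frames.
    Stand-in for MOG or a learned background model."""
    return np.mean(np.stack(frames), axis=0)

def foreground_information(input_image, background):
    """Foreground information I2 as the absolute per-pixel difference
    between the input image I1 and the background estimate."""
    diff = input_image.astype(np.int16) - background.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```

A static pixel yields a value near zero, while a pixel covered by a moving object yields a large difference, which is exactly the property the histogram analysis below relies on.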
Steps S204 and S206 analyze the input image I1 to obtain the foreground information I2. Obtaining background information first and then computing the difference between the input image I1 and that background is only one of several ways to obtain the foreground information I2; practical applications are not limited to it. Next, steps S208 and S210 generate the foreground mask I3 from the foreground information I2 and convert the input image I1 into the output image I4 through the foreground mask I3. When the surveillance frame I is captured in a complex environment, such as a busy road or an intersection crowded with pedestrians and vehicles, the input image I1 still covers many background patterns that degrade detection accuracy even if only a small region is cropped from the frame. The present invention filters out the background objects of the input image I1 through the foreground information I2; as shown in Fig. 6, the background objects of the output image I4 have been erased. Using the output image I4 as the input data for neural-network recognition therefore reduces interference from background objects in complex environments and effectively improves object recognition performance and detection accuracy.
Please refer to Figs. 3 to 8. Fig. 7 is a flowchart of generating the foreground mask I3, and Fig. 8 is a schematic diagram of the histogram H computed from the foreground information I2. First, steps S700 and S702 compute the histogram H of the foreground information I2 and divide the histogram H into several groups by pixel-value range, at least a first group S1 and a second group S2, where the pixel-value range of the first group S1 is lower than that of the second group S2. Step S704 then compares the pixel count of the second group S2 with a predetermined parameter. The predetermined parameter can be derived from statistics, for example from the environment in which the surveillance frame I is captured, or defined as a ratio between the pixel counts of the second group S2 and the first group S1. A pixel count of the second group S2 above the predetermined parameter indicates a moving object in the input image I1; a count below it leaves it undetermined whether the object in the input image I1 is stationary or the image is corrupted by noise.
If the pixel count of the second group S2 is greater than the predetermined parameter, there is a significant change between the input image I1 and the background information, and step S706 sets a foreground threshold; for example, the foreground threshold can be forty percent of the mean of all pixels in the histogram H. The percentage is not limited to this value and depends on design requirements. Step S708 then classifies the pixels of the foreground information I2 whose values are above the foreground threshold as a first set of pixels, and those below it as a second set of pixels. Step S710 sets the pixels of the foreground mask I3 whose positions correspond to the first and second sets of pixels to a first value and a second value respectively, generating the foreground mask I3. For example, the first value can be 1, corresponding to the ungridded area of the foreground mask I3 in Fig. 5, and the second value can be 0, corresponding to the gridded area.
If the pixel count of the second group S2 is less than the predetermined parameter, the input image I1 differs little from the background information, and step S712 determines whether the first group S1 meets a specific condition, namely that the first group S1 contains a sufficiently large number of pixels; the actual count depends on the environment and on statistics. If the first group S1 meets the condition, the pixel distribution of the histogram H is concentrated in the low range and the object in the input image I1 is considered stationary; step S714 sets all pixels of the foreground mask I3 to the first value. When the first value is 1, the input image I1 can be used directly as the output image I4 and fed into neural-network recognition. If the first group S1 does not meet the condition, the pixel distribution of the histogram H is scattered, which is interpreted as noise in the input image I1; step S716 sets all pixels of the foreground mask I3 to the second value. When the second value is 0, the input image I1 is simply discarded.
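A minimal sketch of the mask-generation decision logic of steps S700 to S716 follows. The group boundary of 64 between S1 and S2 is an assumption not fixed by the patent, numpy is assumed available, and the forty-percent foreground threshold follows the example above:

```python
import numpy as np

# Assumed split point between group S1 (low values) and S2 (high values);
# the patent does not fix this boundary, so 64 here is illustrative.
GROUP_BOUNDARY = 64

def build_foreground_mask(fg, s2_min_count, s1_min_count):
    """Generate the foreground mask I3 from foreground information I2 (fg),
    following steps S700-S716. Returns a binary mask; an all-zero mask
    corresponds to discarding the frame (step S716)."""
    hist, _ = np.histogram(fg, bins=256, range=(0, 256))
    s1_count = hist[:GROUP_BOUNDARY].sum()   # group S1: low pixel values
    s2_count = hist[GROUP_BOUNDARY:].sum()   # group S2: high pixel values

    if s2_count > s2_min_count:              # moving object (S706-S710)
        threshold = 0.4 * fg.mean()          # foreground threshold: 40% of mean
        return (fg > threshold).astype(np.uint8)  # first/second value = 1/0
    if s1_count > s1_min_count:              # stationary object (S714)
        return np.ones_like(fg, dtype=np.uint8)
    return np.zeros_like(fg, dtype=np.uint8) # noise (S716): frame discarded
```

The two count parameters play the role of the "predetermined parameter" and "specific condition" of the description; in practice they would be tuned to the monitored environment.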
In step S210, the input image I1 is converted into the output image I4 through the foreground mask I3. One option is to directly multiply every pixel value of the input image I1 by the corresponding pixel value of the foreground mask I3 and use the products as the pixel values of the output image I4. Alternatively, after computing these products, they can be split into a first set, whose positions correspond to mask pixels not equal to the second value, and a second set, whose positions correspond to mask pixels equal to the second value. The second set of products can be classified as background. If it were set to the second value, the background pixels of the output image I4 would be black, which could affect the color rendering of objects in the output image I4; the second set of products can therefore be replaced by a reference value (the single-hatched area of the output image I4 in Fig. 6), and the first set of products combined with these reference values forms the pixel values of the output image I4. For example, the object to be detected in the output image I4 (such as a pedestrian) often carries mostly black-and-white information; if the second set of products were set to the second value (black), it could be confused with the object pattern, so the second set of products can optionally be set to another color, such as gray, to clearly separate the object from the background.
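The mask application and gray-reference substitution described above can be sketched as follows; the reference value 128 is an illustrative choice, since the description only calls for a color such as gray, and numpy is assumed available:

```python
import numpy as np

GRAY_REFERENCE = 128  # assumed reference value for masked-out background pixels

def apply_foreground_mask(input_image, mask):
    """Convert input image I1 into output image I4 (step S210):
    keep pixels where the mask is 1 (first set of products) and replace
    masked-out background pixels (second set) with a gray reference value
    instead of black, so the background is not confused with dark objects."""
    output = input_image * mask            # first set of products
    output[mask == 0] = GRAY_REFERENCE     # second set -> reference value
    return output
```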
Please refer to Figs. 7 to 9. Fig. 9 is a schematic diagram of the pixel-distribution types used to resolve the foreground mask. Step S704 compares the pixel count of the second group S2 with the predetermined parameter; the present invention can further predefine a first histogram model H1 and a second histogram model H2 as shown in Fig. 9. If the histogram H of the foreground information I2 resembles the first histogram model H1, the second group S2 contains many pixels (more than the predetermined parameter), and step S708 follows. If the pixel count of the second group S2 is low (less than the predetermined parameter), step S712 determines whether the histogram H resembles the second histogram model H2. If it does, the specific condition that the first group S1 contains many pixels is met, and step S714 generates the corresponding foreground mask I3; if it does not, the first group S1 contains too few pixels, and step S716 discards the input image I1. The first histogram model H1 visualizes the predetermined parameter as a graphical pattern, and the second histogram model H2 visualizes the specific condition; the actual patterns are not limited to the embodiments disclosed above.
In summary, the neural-network recognition performance improvement method and device of the present invention first separate foreground information from the input image and classify it by its pixel-value distribution to define a foreground mask for each situation. Converting the input image through the foreground mask effectively filters out unnecessary information, and the resulting output image, used as the input data for neural-network recognition, improves recognition accuracy. Notably, the input image is not limited to a particular color model such as RGB, YUV, HSL, or HSV. Because the foreground information, the foreground mask, and the output image are all derived from per-pixel operations on the input image, they all have substantially the same image size. In addition, the gray-level values of the output image can optionally be limited to the range 0 to 128, which reduces the storage capacity required by the device and allows large volumes of image data to be processed more efficiently; the foreground mask is thus a binary image, and the output image a 256-level or 128-level grayscale image. Compared with the prior art, the present invention filters out the background noise of the input image and thereby improves neural-network recognition performance.
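A possible sketch of the optional gray-level limiting mentioned above; integer halving of the 256-level values is an assumed mapping, since the description only states that the output gray levels may be limited to the 0 to 128 range:

```python
import numpy as np

def limit_gray_levels(output_image):
    """Limit the output image's gray levels to the 0-128 range by halving
    the 256-level values, reducing the storage required downstream.
    The halving itself is an illustrative choice of mapping."""
    return (output_image.astype(np.uint16) // 2).astype(np.uint8)
```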
The above are only preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the claims of the present invention shall fall within the coverage of the present invention.
Claims (12)
Priority Applications (1)

- TW107141377A (TWI718442B): priority date 2018-11-21, filing date 2018-11-21
Publications (2)

- TW202020750A, published 2020-06-01
- TWI718442B, granted 2021-02-11
Citations (5)

- US20040181747A1 (2001-11-19), Hull, Jonathan J.: "Multimedia print driver dialog interfaces"
- TW200915225A (2007-09-28), Du-Ming Tsai: "Activity recognition method and system"
- TW201040847A (2009-05-15), National Taiwan University: "Method of identifying objects in image"
- CN102508288A (2011-10-18), Zhejiang University of Technology: "Earthquake prediction auxiliary system based on technology of Internet of things"
- CN103473539A (2013-09-23), Smart City System Services (China) Co., Ltd.: "Gait recognition method and device"