TWI765250B - Method for selecting deep learning algorithm and device for selecting deep learning algorithm - Google Patents
Method for selecting deep learning algorithm and device for selecting deep learning algorithm
- Publication number: TWI765250B
- Authority: TW (Taiwan)
Abstract
Description
The present invention relates to the field of deep learning algorithms, and in particular to a method for selecting a deep learning algorithm and a device for selecting a deep learning algorithm.
Convolutional neural networks (CNNs) are currently in very wide use as a tool for computer vision. A suitable CNN algorithm model can be designed for any given database; the model is trained on the samples in the database so as to learn the relationship between the samples and their labels. However, picking a suitable algorithm from among multiple CNN algorithms usually requires specialists with the relevant background to run a large number of analysis experiments, which is difficult for non-specialists.
In view of the above, it is necessary to provide a method and a device for selecting a deep learning algorithm, so as to reduce the difficulty of choosing a deep learning algorithm for a specific problem.
A first aspect of the present invention provides a method for selecting a deep learning algorithm, the method including: acquiring the type of problem to be processed; selecting a corresponding data set according to the problem type and dividing the selected data set into training data and test data; calculating the similarity of the training data; adjusting the batch size of the training data according to the similarity; selecting multiple deep learning algorithms according to the problem type and training each selected algorithm on the training data to obtain corresponding algorithm models; and validating the algorithm models on the test data and selecting the deep learning algorithm with the best validation result.
Further, the similarity of the training data is calculated as follows: randomly select a plurality of sample groups from the training data, each sample group containing two samples; calculate the similarity of each sample group; then compute the average of the similarities of the sample groups and take that average as the similarity of the training data.
Further, the similarity of the training data and the batch size satisfy an inverse relationship: the batch size is inversely proportional to the similarity.
Further, the problem types include image classification, object segmentation, and object recognition, and the deep learning algorithms are convolutional neural network (CNN) algorithms, including ResNet, AlexNet, VGG, and Inception.
Further, the similarity of the training data for the CNN algorithm is calculated using the structural similarity index (SSIM).
A second aspect of the present invention provides a device for selecting a deep learning algorithm, the device including a processor and a memory storing a plurality of program modules that are run by the processor to perform the following steps: acquiring the type of problem to be processed; selecting a corresponding data set according to the problem type and dividing the selected data set into training data and test data; calculating the similarity of the training data; adjusting the batch size of the training data according to the similarity; selecting multiple deep learning algorithms according to the problem type and training each selected algorithm on the training data to obtain corresponding algorithm models; and validating the algorithm models on the test data and selecting the deep learning algorithm with the best validation result.
Further, the similarity of the training data is calculated as follows: randomly select a plurality of sample groups from the training data, each sample group containing two samples; calculate the similarity of each sample group; then compute the average of the similarities of the sample groups and take that average as the similarity of the training data.
Further, the similarity of the training data and the batch size satisfy an inverse relationship: the batch size is inversely proportional to the similarity.
Further, the problem types include image classification, object segmentation, and object recognition, and the deep learning algorithms are convolutional neural network (CNN) algorithms, including ResNet, AlexNet, VGG, and Inception.
Further, the similarity of the training data for the CNN algorithm is calculated using the structural similarity index (SSIM).
The present invention uses the similarity of the training data to flexibly adjust the batch size of the training data, improving the quality of the training data and making the algorithm model trained on it more accurate; the algorithm models are then validated on the test data to select the optimal algorithm for the specific problem. The invention reduces the difficulty of selecting a deep learning algorithm and is broadly applicable.
100: Device for selecting a deep learning algorithm
10: Memory
20: Processor
30: Display unit
40: Input unit
6: System for selecting a deep learning algorithm
61: Information acquisition module
62: Selection module
63: Similarity calculation module
64: Adjustment module
65: Training module
66: Verification module
FIG. 1 is a schematic diagram of the hardware architecture of a device for selecting a deep learning algorithm according to an embodiment of the present invention.
FIG. 2 is a block diagram of the deep-learning-algorithm selection system of the device shown in FIG. 1.
FIG. 3 is a flowchart of a method for selecting a deep learning algorithm according to an embodiment of the present invention.
In order that the above objects, features, and advantages of the present invention may be understood more clearly, the invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features of those embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by a person skilled in the art to which this invention belongs. The terms used in this description are for the purpose of describing specific embodiments only and are not intended to limit the invention.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to FIG. 1, the present invention provides a device 100 for selecting a deep learning algorithm, which reduces the difficulty of choosing a deep learning algorithm for a specific problem.
Deep learning (DL) is one of the techniques and research fields of machine learning: it realizes artificial intelligence in computing systems by building artificial neural networks (ANNs) with a hierarchical structure. The convolutional neural network (CNN) is one of the representative deep learning algorithms.
The following description uses convolutional neural networks as an example, but the method is not limited to convolutional neural networks.
The device 100 for selecting a deep learning algorithm includes a memory 10, a processor 20, a display unit 30, and an input unit 40; the display unit 30 and the memory 10 are each electrically connected to the processor 20.
The memory 10 stores various data of the device 100, such as the candidate deep learning algorithms. In this embodiment, the memory 10 may include, but is not limited to, read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The processor 20 may be a central processing unit (CPU), a microprocessor, a digital processing chip, or any other processor chip capable of performing data-processing functions.
The display unit 30 displays output such as results on the training data or test data, for example the test results of a deep learning algorithm model. In this embodiment, the display unit 30 may be, but is not limited to, a display device such as a touch screen or a liquid crystal display.
The input unit 40 allows the user to enter information and control instructions, for example to confirm the type of problem the algorithm is to solve. In this embodiment, the input unit 40 may be, but is not limited to, a remote control, a mouse, a voice input device, a touch screen, or the like.
Referring to FIG. 2, the device 100 also runs a system 6 for selecting a deep learning algorithm. FIG. 2 is a functional block diagram of the selection system 6 in an embodiment of the present invention. In this embodiment, the system 6 consists of computer instructions in the form of one or more programs, which are stored in the memory 10 and executed by the processor 20. The system 6 includes an information acquisition module 61, a selection module 62, a similarity calculation module 63, an adjustment module 64, a training module 65, and a verification module 66.
The information acquisition module 61 acquires the type of problem that the deep learning algorithm needs to solve; the problem types include, but are not limited to, image classification, object segmentation, and object recognition.
The selection module 62 selects the corresponding data set according to the problem type and divides the selected data set into training data and test data; it is also used to select all the deep learning algorithms corresponding to the problem type.
The similarity calculation module 63 calculates the similarity of the training data.
In one embodiment, the similarity of the training data is calculated using structural similarity.
The structural similarity index (SSIM) is a metric that measures the similarity of two images in terms of luminance, contrast, and structure. The similarity ranges from 0 to 1: the larger the value, the more similar the two images, and when two images are identical the SSIM value equals 1.
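For reference, the standard SSIM formula (as commonly defined in the image-quality literature; the patent text does not reproduce it) combines the luminance, contrast, and structure terms of two images $x$ and $y$ as:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + C_1)\,(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}
```

Here $\mu_x, \mu_y$ are the mean intensities, $\sigma_x^2, \sigma_y^2$ the variances, and $\sigma_{xy}$ the covariance; $C_1 = (0.01L)^2$ and $C_2 = (0.03L)^2$ are small stabilizing constants, with $L$ the dynamic range of the pixel values (e.g. 255 for 8-bit images).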
In one embodiment, the training data contains 100 images. Twenty sample groups of two images each (40 images in total) are drawn at random; the similarity of each of the 20 groups is computed using structural similarity, the average of the 20 group similarities is calculated, and that average is set as the similarity of the training data.
It will be understood that in practice the number of samples is chosen flexibly according to the amount of training data in the actual scenario: the similarity of each sample group is computed separately, and the average of the similarities of the selected sample groups is then calculated.
It will be understood that other embodiments may use other methods of calculating the similarity of the training data.
The adjustment module 64 adjusts the batch size of the corresponding training data according to the similarity.
The training module 65 trains the deep learning algorithms on the training data to obtain algorithm models.
The verification module 66 validates the algorithm models on the test data to identify the deep learning algorithm with the best validation result.
Referring to FIG. 3, the present invention also provides a method for selecting a deep learning algorithm. FIG. 3 is a flowchart of the method, which includes the following steps:
Step 301: acquire the type of problem to be processed.
Specifically, the information acquisition module 61 acquires the type of problem to be processed; the problem types include, but are not limited to, image classification, object segmentation, and object recognition.
Step 302: select the corresponding data set according to the problem type, and divide the selected data set into training data and test data.
Specifically, the selection module 62 selects the corresponding data set according to the problem type and divides the selected data set into training data and test data.
For example, Common Objects in Context (COCO) is an image data set maintained by Microsoft that can be used for image classification; it includes more than 300,000 images, more than 2 million instances, and more than 80 object categories.
The training data is used to train the deep learning algorithms and obtain the algorithm models; the test data is used to verify the accuracy of the algorithm models.
The ratio of training data to test data is generally 4:1. It will be understood that this ratio can be adjusted for the actual scenario, for example, but not limited to, 3:1 or 5:1.
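The split described above can be sketched as follows. This is a minimal illustration, not code from the patent; the function name and the fixed seed are assumptions for reproducibility.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle a dataset and split it into training and test portions.

    train_ratio=0.8 gives the 4:1 split described above; pass 0.75 for a
    3:1 split or roughly 0.833 for a 5:1 split.
    """
    rng = random.Random(seed)       # fixed seed keeps the split reproducible
    shuffled = samples[:]           # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(100)))  # 80 training, 20 test samples
```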
Step 303: calculate the similarity of the training data.
Specifically, the similarity calculation module 63 calculates the similarity of the training data.
In one embodiment, the similarity calculation module 63 calculates the similarity of the training data using structural similarity.
The similarity ranges from 0 to 1: the larger the value, the more similar the two images, and when two images are identical the similarity equals 1.
Specifically, a plurality of sample groups are randomly selected from the training data, each group containing two images; the similarity of each group is computed, and the average of the group similarities is then calculated.
Step 304: adjust the batch size of the training data according to the similarity.
Specifically, the adjustment module 64 adjusts the batch size of the corresponding training data according to the similarity, so that the adjusted training data can be used to train the weights of the deep learning algorithm accurately.
In one embodiment, the similarity is the SSIM value, and the correspondence between similarity and batch size is shown in Table 1. The batch size is the number of training samples fed into the CNN algorithm in each iteration.
That is, the batch size is inversely proportional to the similarity: the larger the similarity, the smaller the batch size, and the smaller the similarity, the larger the batch size.
It will be understood that in other embodiments the batch size is a range, as shown in Table 2.
The batch size can be adjusted flexibly according to the actual scenario and the magnitude of the similarity.
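One simple way to realize the inverse similarity-to-batch-size mapping is shown below. The endpoint values 16 and 256 and the linear interpolation are illustrative assumptions only; the patent's actual Table 1 and Table 2 values are not reproduced in the text.

```python
def batch_size_from_similarity(similarity, min_batch=16, max_batch=256):
    """Map a dataset similarity in [0, 1] to a training batch size.

    Inverse relationship, as in step 304: high similarity -> small batch,
    low similarity -> large batch.  The 16/256 endpoints are placeholders,
    not the values of the patent's Table 1 / Table 2.
    """
    if not 0.0 <= similarity <= 1.0:
        raise ValueError("similarity must lie in [0, 1]")
    # Linear interpolation from max_batch at similarity 0
    # down to min_batch at similarity 1.
    return round(max_batch - similarity * (max_batch - min_batch))
```

In practice a lookup table over similarity intervals (as the patent's tables suggest) would work equally well; the monotone decreasing shape is the essential property.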
Step 305: select multiple deep learning algorithms according to the problem type, and train each selected algorithm on the training data to obtain the corresponding algorithm models.
Specifically, the selection module 62 selects a plurality of deep learning algorithms according to the problem type, and the training module 65 trains each selected algorithm on the training data to obtain the corresponding algorithm models.
In one embodiment, the CNN algorithms used for image classification problems include ResNet, AlexNet, VGG, Inception, and the like.
Step 306: validate all the algorithm models on the test data, and select the deep learning algorithm with the best validation result.
Specifically, the verification module 66 validates all the algorithm models on the test data and selects the deep learning algorithm with the best validation result.
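Steps 305-306 amount to a train-and-compare loop over the candidate algorithms. A minimal sketch follows; the scikit-learn-style `fit`/`score` interface is an assumption for illustration, not an API described in the patent.

```python
def select_best_algorithm(algorithms, train_data, test_data):
    """Train every candidate algorithm on the training data and keep the
    one whose model scores best on the held-out test data.

    `algorithms` maps a name to an object with fit(X, y) -> model and
    model.score(X, y) -> float (assumed, scikit-learn style).
    """
    best_name, best_score = None, float("-inf")
    for name, algo in algorithms.items():
        model = algo.fit(*train_data)        # train on the training split
        score = model.score(*test_data)      # validate on the test split
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```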
It will be understood that the method may further include, after step 304, calculating the similarity of the test data.
Specifically, the similarity calculation module 63 calculates the similarity of the test data, and the batch size of the test data is adjusted according to that similarity.
Specifically, the adjustment module 64 adjusts the batch size of the corresponding test data according to the similarity.
In the method for selecting a deep learning algorithm provided by the present invention, the similarity of the training data is used to flexibly adjust the batch size of the training data; the selected deep learning algorithms are trained on the training data to accurately obtain the corresponding algorithm models, and the algorithm models are validated on the test data to identify the optimal deep learning algorithm.
The present invention can flexibly adjust the batch size of the training data based on its similarity, thereby improving the quality of the training data and making the algorithm model obtained from it more accurate; the algorithm models are then validated on the test data, and the optimal algorithm for the specific problem is selected. This approach reduces the difficulty of choosing a deep learning algorithm and is broadly applicable.
It will be apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as illustrative and not restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalents of the claims are intended to be embraced by the invention. No reference sign in a claim should be construed as limiting that claim. Furthermore, the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural.
The above are only preferred embodiments of the present invention and do not limit the invention in any form. Those skilled in the art may also make other changes within the spirit of the present invention; such changes should, of course, all fall within the scope of protection claimed by the invention.
Claims (8)
Priority Applications (1)
- TW109113110A — priority date 2020-04-17, filed 2020-04-17: Method for selecting deep learning algorithm and device for selecting deep learning algorithm

Publications (2)
- TW202141326A — published 2021-11-01
- TWI765250B — published 2022-05-21

Family ID: 80783480
Country status: TW — TWI765250B (active)
Patent Citations (3)
- TW201909112A — priority date 2017-07-20, published 2019-03-01, Beijing Sankuai Online Technology Co., Ltd.: Image feature acquisition
- EP3432267A1 — priority date 2017-07-21, published 2019-01-23, Dental Monitoring: Method for analysing an image of a dental arch
- WO2019048506A1 — priority date 2017-09-08, published 2019-03-14, ASML Netherlands B.V.: Training methods for machine learning assisted optical proximity error correction