TWI676939B - Electronic component packaging classification system using neural network for classification - Google Patents


Info

Publication number
TWI676939B
Authority
TW
Taiwan
Prior art keywords
electronic component
data
training
classification
electronic part
Prior art date
Application number
TW107121401A
Other languages
Chinese (zh)
Other versions
TW202001698A (en)
Inventor
王彥智
洪盟峰
何俊輝
陳怡婷
魏君強
Original Assignee
富比庫股份有限公司
Priority date
Filing date
Publication date
Application filed by 富比庫股份有限公司 filed Critical 富比庫股份有限公司
Priority to TW107121401A priority Critical patent/TWI676939B/en
Application granted granted Critical
Publication of TWI676939B publication Critical patent/TWI676939B/en
Publication of TW202001698A publication Critical patent/TW202001698A/en


Landscapes

  • Image Analysis (AREA)

Abstract

一種運用類神經網路進行分類之電子零件封裝分類系統,該電子零件封裝系統包含一服務資料庫、一外部資料庫、一特徵選取模組、一資料整合模組,及一分類處理模組,該服務資料庫供外部輸入電子零件圖樣,該外部資料庫儲存電子零件之封裝類型資料,該特徵選取模組紀錄有電子零件的封裝類型特徵,該資料整合模組對選取之特徵值進行資料預處理與正規化以得到待分類資料,該分類處理模組接收該待分類資料並將分類結果顯示於該服務資料庫。 An electronic component package classification system that uses a neural network for classification. The system includes a service database, an external database, a feature selection module, a data integration module, and a classification processing module. The service database accepts externally input electronic component footprints; the external database stores package-type data of electronic components; the feature selection module records package-type features of electronic components; the data integration module preprocesses and normalizes the selected feature values to obtain the data to be classified; and the classification processing module receives the data to be classified and displays the classification result in the service database.

Description

運用類神經網路進行分類之電子零件封裝分類系統 Electronic component package classification system using a neural network for classification

本發明是有關一種分類系統,特別是指一種運用類神經網路進行分類之電子零件封裝分類系統。 The present invention relates to a classification system, and in particular to an electronic component package classification system that uses a neural network for classification.

現今科技發展下,電子電路設計與組裝的工作流程已逐漸趨於自動化,在設計印刷電路板的過程中,需先匯入零件圖樣資料庫(Footprint Library)、設定印刷電路板參數(PCB Parameters Setup)、佈局(Placement)、走線(Routing)最後進入可製造性設計檢查(Design for Manufacture Check,DFM Check)等階段。 With today's technological development, the workflow of electronic circuit design and assembly has gradually become automated. Designing a printed circuit board proceeds through stages of importing the footprint library (Footprint Library), setting printed circuit board parameters (PCB Parameters Setup), placement (Placement), routing (Routing), and finally the design-for-manufacturability check (Design for Manufacture Check, DFM Check).

在執行可製造性設計檢查前,傳統的做法是由佈局工程師採用人工的方式逐一分類印刷電路板上所使用之電子零件屬於何種封裝類型,而佈局工程師判斷電子零件封裝類型的依據大多為電子零件圖樣名稱,外觀則以接腳數量與接腳擺放方式進行判斷,此過程不僅仰賴工程師本身的工作經驗,更無法確保電子零件封裝類型分類後的正確性。 Before the manufacturability check is performed, the traditional practice is for a layout engineer to classify manually, one by one, the package type of each electronic component used on the printed circuit board. Layout engineers judge a component's package type mostly from the name of its footprint, and judge its appearance from the number of pins and their arrangement. This process not only relies on the engineer's own experience but also cannot ensure the correctness of the resulting package-type classification.

隨著封裝技術演進促使電子零件的封裝類型越來越多變,有部分封裝類型的電子零件圖樣更是極為相似。對於佈局工程師而言,透過電子零件圖樣辨別封裝類型更加深其困難性,且電子零件封裝類型判斷若有錯誤,將影響佈局工程師的工作流程乃至於組裝廠的生產良率與產品品質。 As packaging technology evolves, the package types of electronic components have become increasingly varied, and the footprints of some package types are extremely similar. For layout engineers this deepens the difficulty of identifying a package type from a footprint, and a misjudged package type will disrupt the layout engineer's workflow and even the assembly plant's production yield and product quality.

上述缺點都顯現習知電子零件的封裝分類在操作過程中所衍生的種種問題,因此,發展封裝分類工具以輔助佈局工程師降低電子零件封裝類型分類錯誤的機率已成必然。 The shortcomings above reveal the various problems arising in the conventional, manual package classification of electronic components. It has therefore become necessary to develop package classification tools to help layout engineers reduce the probability of misclassifying package types.

有鑑於此,本發明之目的,是提供一種運用類神經網路進行分類之電子零件封裝分類系統,該電子零件封裝分類系統,包含一服務資料庫、一外部資料庫、一特徵選取模組、一資料整合模組,及一分類處理模組。 In view of this, an object of the present invention is to provide an electronic component package classification system that uses a neural network for classification. The system includes a service database, an external database, a feature selection module, a data integration module, and a classification processing module.

該服務資料庫用以供外部輸入電子零件圖樣,以及接收並儲存相關聯之輸入與輸出數據的訓練數據,該外部資料庫儲存有複數筆電子零件之封裝類型資料,該特徵選取模組與該外部資料庫連接,紀錄有電子零件的封裝類型特徵,依據該服務資料庫輸入之欲進行分類的電子零件圖樣,該特徵選取模組依照該封裝類型特徵自該外部資料庫進行特徵選取。 The service database accepts externally input electronic component footprints and receives and stores training data of associated inputs and outputs. The external database stores package-type data of a plurality of electronic components. The feature selection module, connected to the external database, records package-type features of electronic components; given a footprint input through the service database for classification, the feature selection module selects features from the external database according to those package-type features.

該資料整合模組對該特徵選取模組所選取之特徵值進行資料預處理與正規化,以清除資料中的錯誤雜訊與填補資料遺缺,並限縮該特徵之特徵值分佈於一特定區間,以得到待分類資料,該分類處理模組接收該待分類資料並將分類結果顯示於該服務資料庫。 The data integration module preprocesses and normalizes the feature values selected by the feature selection module, removing erroneous noise from the data, filling in missing entries, and confining each feature's values to a specific interval, to obtain the data to be classified. The classification processing module receives the data to be classified and displays the classification result in the service database.

本發明的另一技術手段,是在於上述之分類處理模組包括一儲存有執行一動作指令的處理器,所述動作包括:使用者端輸入欲進行分類之電子零件圖樣至該服務資料庫;該特徵選取模組依照該電子零件圖樣之封裝類型特徵自該外部資料庫進行特徵選取;該資料整合模組對所選取之特徵值進行資料預處理與正規化,以得到待分類資料;及該服務資料庫得到電子零件之封裝類型的分類結果。 In another technical aspect of the present invention, the classification processing module includes a processor storing instructions for performing actions including: a user inputs an electronic component footprint to be classified into the service database; the feature selection module selects features from the external database according to the footprint's package-type features; the data integration module preprocesses and normalizes the selected feature values to obtain the data to be classified; and the service database obtains the classification result for the component's package type.
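The four-step action above can be sketched as a simple pipeline. This is an illustrative sketch, not the patent's implementation: the names (`classify_footprint`, `select_features`, `preprocess`, `classify`) and the toy stand-ins are assumptions made for demonstration only.

```python
# Pipeline skeleton for the claimed action: footprint in, package type out.
# Each stage stands in for one module of the system (feature selection,
# data integration, classification processing).

def classify_footprint(footprint, select_features, preprocess, classify):
    features = select_features(footprint)   # feature-selection module
    data = preprocess(features)             # data-integration module
    return classify(data)                   # classification-processing module

# Toy stand-ins to show the data flow end to end; "SOIC" is merely an
# illustrative SMT package name, not a mapping taken from the patent.
result = classify_footprint(
    footprint={"pins": 8, "pitch_x": 1.27},
    select_features=lambda fp: [fp["pins"], fp["pitch_x"]],
    preprocess=lambda feats: [f / max(feats) for f in feats],
    classify=lambda data: "SOIC" if data[0] == 1.0 else "unknown",
)
print(result)  # SOIC
```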

本發明的又一技術手段,是在於上述之電子零件封裝分類系統更包含一訓練模組及一參數儲存模組,該訓練模組與該資料整合模組及該服務資料庫連接,並決定該訓練資料集進行訓練之訓練規模及神經網路參數,以作為後續分類之依據,其中,訓練之收斂條件為當前訓練結束後累計誤差小於給定的門檻值,即停止訓練,而該參數儲存模組與該訓練模組及該服務資料庫連接,用以紀錄該訓練模組所使用之訓練參數數據。 In a further technical aspect, the electronic component package classification system further includes a training module and a parameter storage module. The training module is connected to the data integration module and the service database and determines the training scale and neural network parameters with which the training data set is trained, as the basis for subsequent classification; the convergence condition for training is that the cumulative error at the end of the current training round is smaller than a given threshold, at which point training stops. The parameter storage module is connected to the training module and the service database and records the training parameter data used by the training module.

本發明的再一技術手段,是在於上述之資料整合模組將該特徵值正規化至v_a、v_b區間中,滿足 v' = (v − v_min)/(v_max − v_min) × (v_b − v_a) + v_a,v_a < v_b 關係式,其中,v'為正規化至v_a、v_b後的特徵值,v為需作正規化之特徵值,v_max為一項特徵中的最大特徵值,而v_min為一項特徵中的最小特徵值。 In still another technical aspect, the data integration module normalizes each feature value into the interval [v_a, v_b], satisfying v' = (v − v_min)/(v_max − v_min) × (v_b − v_a) + v_a with v_a < v_b, where v' is the feature value after normalization into [v_a, v_b], v is the feature value to be normalized, v_max is the largest value of the feature, and v_min is the smallest value of the feature.

本發明的又一技術手段,是在於上述之神經網路參數為收斂條件、隱藏層神經元個數、隱藏層個數、初始學習率、初始動量、門檻值、權重值、偏權值等上述任一或其組合。 In a further technical aspect, the neural network parameters are any one or a combination of: the convergence condition, the number of hidden-layer neurons, the number of hidden layers, the initial learning rate, the initial momentum, the threshold value, the weight values, and the bias values.

本發明的再一技術手段,是在於上述之隱藏層神經元個數j滿足(x×(input+output)),1.5<x<2,其中,input為輸入封裝類型特徵19個,output為分類輸出的封裝類型10個。 In still another technical aspect, the number of hidden-layer neurons j satisfies j = x × (input + output), 1.5 < x < 2, where input is the 19 package-type features fed to the network and output is the 10 package types output by the classification.
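The sizing rule above can be checked numerically. A minimal sketch, assuming only the stated relation j = x × (input + output) with 1.5 < x < 2; the helper name `hidden_neuron_bounds` is hypothetical.

```python
# Hidden-layer sizing rule from the text: j = x * (input + output), 1.5 < x < 2.
# With 19 input features and 10 output classes, j is bounded between 43.5 and
# 58, which is consistent with the 19-50-10 and 19-53-10 architectures that
# appear in the experiment tables later in the description.

def hidden_neuron_bounds(n_input: int, n_output: int,
                         x_low: float = 1.5, x_high: float = 2.0):
    """Return the (lower, upper) bound on the hidden-layer neuron count."""
    total = n_input + n_output
    return x_low * total, x_high * total

low, high = hidden_neuron_bounds(19, 10)
print(low, high)  # 43.5 58.0
assert low < 50 < high and low < 53 < high  # both experimental networks fit
```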

本發明的另一技術手段,是在於上述之封裝類型資料記錄有電子零件外觀資訊、印刷電路板限制區域資訊、銲點資訊、幾何形狀參數、適用場域參數、電性參數、接點參數等上述任一或其組合。 In another technical aspect, the package-type data records any one or a combination of: electronic component appearance information, printed circuit board restricted-area information, solder-joint information, geometric parameters, applicable-field parameters, electrical parameters, and contact parameters.

本發明的又一技術手段,是在於上述之封裝類型特徵包括電子零件實體外觀、電子零件實體接腳,及電子零件圖樣。 In a further technical aspect, the package-type features include the physical appearance of the electronic component, the physical pins of the electronic component, and the electronic component footprint.

本發明的再一技術手段,是在於上述之封裝類型特徵的權重比例為電子零件圖樣大於電子零件實體外觀,電子零件實體外觀大於電子零件實體接腳。 In still another technical aspect, the package-type features are weighted such that the footprint outweighs the component's physical appearance, and the physical appearance outweighs the physical pins.

本發明的另一技術手段,是在於上述之電子零件實體外觀、電子零件實體接腳,及電子零件圖樣選自於電子零件的接腳數量、原始的電子零件實體長度、最大的電子零件實體長度、最小的電子零件實體長度、原始的電子零件實體寬度、最大的電子零件實體寬度、最小的電子零件實體寬度、電子零件實體高度、電子零件實體與電路板的間距、大的電子零件接腳長度、小的電子零件接腳長度、大的電子零件接腳寬度、小的電子零件接腳寬度、大的電子零件圖樣接腳長度、小的電子零件圖樣接腳長度、大的電子零件圖樣接腳寬度、小的電子零件圖樣接腳寬度、電子零件圖樣接腳間距的X軸方向,及電子零件圖樣接腳間距的Y軸方向等19種。 In another technical aspect, the physical-appearance, physical-pin, and footprint features are selected from the following 19: the component's pin count, the nominal body length, the maximum body length, the minimum body length, the nominal body width, the maximum body width, the minimum body width, the body height, the standoff between the component body and the circuit board, the maximum pin length, the minimum pin length, the maximum pin width, the minimum pin width, the maximum footprint pin length, the minimum footprint pin length, the maximum footprint pin width, the minimum footprint pin width, the footprint pin pitch in the X-axis direction, and the footprint pin pitch in the Y-axis direction.

本發明之有益功效在於,透過電子零件實體的特徵訓練類神經網路,找出最適合該分類系統的訓練規模以及神經網路參數,且經過正規化後的訓練結果正確率較未正規化的訓練結果高,解決以人工方式分類電子零件封裝類型容易造成之判斷錯誤、耗時且分類過程過於依賴佈局工程師本身工作經驗的問題,亦能獲得更高品質的訓練以及分類結果。 A beneficial effect of the present invention is that, by training the neural network on features of physical electronic components, the training scale and neural network parameters best suited to the classification system can be found, and normalized training results achieve higher accuracy than unnormalized ones. This solves the misjudgments, the time cost, and the over-reliance on the layout engineer's personal experience inherent in manually classifying package types, and yields higher-quality training and classification results.

1‧‧‧服務資料庫 1‧‧‧Service Database

3‧‧‧外部資料庫 3‧‧‧ External Database

4‧‧‧特徵選取模組 4‧‧‧Feature Selection Module

5‧‧‧資料整合模組 5‧‧‧Data Integration Module

6‧‧‧訓練模組 6‧‧‧ Training Module

7‧‧‧參數儲存模組 7‧‧‧parameter storage module

8‧‧‧分類處理模組 8‧‧‧Classification processing module

91~94‧‧‧步驟 91 ~ 94‧‧‧step

圖1是一方塊示意圖,說明本發明運用類神經網路進行分類之電子零件封裝分類系統的較佳實施例;圖2是一示意圖,說明本較佳實施例於一節點輸出計算階段的架構示意;圖3是一示意圖,說明本較佳實施例進行訓練的流程;圖4是一示意圖,說明本較佳實施例於一權重修正階段的架構示意;及圖5是一示意圖,說明本較佳實施例中一分類處理模組執行一動作指令的流程。 FIG. 1 is a block diagram illustrating a preferred embodiment of the electronic component package classification system of the present invention that uses a neural network for classification; FIG. 2 is a schematic diagram of the architecture of the preferred embodiment at a node-output computation stage; FIG. 3 is a schematic diagram of the training flow of the preferred embodiment; FIG. 4 is a schematic diagram of the architecture of the preferred embodiment at a weight-correction stage; and FIG. 5 is a schematic diagram of the flow in which a classification processing module of the preferred embodiment executes an action instruction.

有關本發明之相關申請專利特色與技術內容,在以下配合參考圖式之較佳實施例的詳細說明中,將可清楚的呈現。 The features and technical contents of the related patent application of the present invention will be clearly presented in the following detailed description of the preferred embodiments with reference to the drawings.

參閱圖1,為本發明運用類神經網路進行分類之電子零件封裝分類系統的較佳實施例,該電子零件封裝分類系統包含一服務資料庫1、一外部資料庫3、一特徵選取模組4、一資料整合模組5、一訓練模組6、一參數儲存模組7,及一分類處理模組8。 Referring to FIG. 1, in a preferred embodiment of the electronic component package classification system of the present invention that uses a neural network for classification, the system includes a service database 1, an external database 3, a feature selection module 4, a data integration module 5, a training module 6, a parameter storage module 7, and a classification processing module 8.

該服務資料庫1用以供外部輸入欲進行訓練或分類之電子零件圖樣,以及接收並儲存相關聯之輸入與輸出數據的訓練數據,其中該電子零件圖樣係由電子設計自動化(Electronic Design Automation,EDA)工具轉換之檔案格式。 The service database 1 accepts externally input electronic component footprints to be trained on or classified, and receives and stores training data of associated inputs and outputs; the footprints are in a file format converted by an electronic design automation (EDA) tool.

該外部資料庫3儲存有複數筆電子零件之封裝類型資料,其中,該封裝類型資料記錄有電子零件外觀資訊、印刷電路板限制區域資訊、銲點資訊、幾何形狀參數、適用場域參數、電性參數、接點參數等上述任一或其組合。 The external database 3 stores package-type data of a plurality of electronic components, the package-type data recording any one or a combination of: electronic component appearance information, printed circuit board restricted-area information, solder-joint information, geometric parameters, applicable-field parameters, electrical parameters, and contact parameters.

該特徵選取模組4與該外部資料庫3連接,紀錄有電子零件的封裝類型特徵,依據該服務資料庫1輸入之欲進行訓練或分類的電子零件圖樣,該特徵選取模組4依照該封裝類型特徵自該外部資料庫3進行特徵選取。 The feature selection module 4 is connected to the external database 3 and records package-type features of electronic components. Given a footprint input through the service database 1 for training or classification, the feature selection module 4 selects features from the external database 3 according to those package-type features.

電子零件與電路板接合大致可將封裝技術分成通孔式封裝技術(Through Hole Technology,THT)與表面黏著技術(Surface Mount Technology,SMT),於此將基本SMT類型的電子零件封裝依照接腳形式、接腳型態、尺寸大小以及功能性分成44類,而本實施例取常見之25類,並將其分類成10種封裝類型,以符合佈局工程師判斷封裝類型的需求。 By how electronic components are joined to circuit boards, packaging technology divides broadly into through-hole technology (THT) and surface-mount technology (SMT). Basic SMT component packages are divided here into 44 classes according to pin form, pin type, size, and functionality; this embodiment takes the 25 common classes and groups them into 10 package types, matching layout engineers' needs when judging package types.

此階段該特徵選取模組4從25類SMT封裝類型取得19項特徵,及取得特徵值,準備將這些特徵值由該資料整合模組5進行資料預處理。 At this stage the feature selection module 4 obtains 19 features, and their feature values, from the 25 SMT package classes, ready for data preprocessing of these feature values by the data integration module 5.

進一步地,該封裝類型特徵包括電子零件實體外觀、電子零件實體接腳,及電子零件圖樣等共19種。其中,電子零件實體外觀特徵為電子零件的接腳數量、原始的電子零件實體長度、最大的電子零件實體長度、最小的電子零件實體長度、原始的電子零件實體寬度、最大的電子零件實體寬度、最小的電子零件實體寬度、電子零件實體高度,及電子零件實體與電路板的間距。 Further, the package-type features comprise 19 in total, covering the component's physical appearance, its physical pins, and its footprint. The physical-appearance features are the component's pin count, nominal body length, maximum body length, minimum body length, nominal body width, maximum body width, minimum body width, body height, and the standoff between the component body and the circuit board.

電子零件實體接腳特徵為大的電子零件接腳長度、小的電子零件接腳長度、大的電子零件接腳寬度,及小的電子零件接腳寬度。電子零件圖樣特徵為大的電子零件圖樣接腳長度、小的電子零件圖樣接腳長度、大的電子零件圖樣接腳寬度、小的電子零件圖樣接腳寬度、電子零件圖樣接腳間距的X軸方向,及電子零件圖樣接腳間距的Y軸方向。 The physical-pin features are the maximum pin length, minimum pin length, maximum pin width, and minimum pin width. The footprint features are the maximum footprint pin length, minimum footprint pin length, maximum footprint pin width, minimum footprint pin width, footprint pin pitch in the X-axis direction, and footprint pin pitch in the Y-axis direction.
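The 19 features enumerated above can be collected into a checklist. The English identifier names below are illustrative renderings chosen for this sketch, not identifiers from the patent; only the grouping and the count of 19 come from the text.

```python
# The 19 package-type features, grouped as in the description:
# 9 physical-appearance features, 4 physical-pin features, 6 footprint features.
BODY_FEATURES = [
    "pin_count",
    "body_length_nominal", "body_length_max", "body_length_min",
    "body_width_nominal", "body_width_max", "body_width_min",
    "body_height", "body_to_board_standoff",
]
PHYSICAL_PIN_FEATURES = [
    "pin_length_max", "pin_length_min",
    "pin_width_max", "pin_width_min",
]
FOOTPRINT_FEATURES = [
    "pad_length_max", "pad_length_min",
    "pad_width_max", "pad_width_min",
    "pad_pitch_x", "pad_pitch_y",
]
ALL_FEATURES = BODY_FEATURES + PHYSICAL_PIN_FEATURES + FOOTPRINT_FEATURES
assert len(ALL_FEATURES) == 19  # matches the 19 input neurons used later
```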

於此,該封裝類型特徵的權重比例為電子零件圖樣大於電子零件實體外觀,電子零件實體外觀大於電子零件實體接腳。 Here the package-type features are weighted such that the footprint outweighs the component's physical appearance, and the physical appearance outweighs the physical pins.

該資料整合模組5對該特徵選取模組4所選取之特徵值進行資料預處理與正規化,以清除資料中的錯誤雜訊與填補資料遺缺,並限縮該特徵之特徵值分佈於一特定區間,以得到訓練資料集。於此,當該資料整合模組5所處理之資料為欲進行訓練之電子零件圖樣,稱為訓練資料集,供該訓練模組6進行訓練用,若該資料整合模組5所處理之資料為欲進行分類之電子零件圖樣,則稱為待分類資料,以作為封裝類型的分類結果。 The data integration module 5 preprocesses and normalizes the feature values selected by the feature selection module 4, removing erroneous noise, filling in missing data, and confining each feature's values to a specific interval, to obtain the training data set. When the data processed by the data integration module 5 come from footprints to be trained on, they are called the training data set and are used by the training module 6 for training; when they come from footprints to be classified, they are called the data to be classified and yield the package-type classification result.

進行預處理是對資料進行資料整合、資料清理與資料遺缺填補,及資料轉換,其中,資料整合目的在於解決訓練資料是由多個不同資料庫組成所造成資料不一致、單位不同與資料重複的問題,若資料不一致,可能會因為欄位內容表示方式不同而導致訓練過程中不易收斂或影響訓練結果,使資料形成一個不利於訓練的資料集,因此,資料整合為資料預處理的首要步驟。 Preprocessing performs data integration, data cleaning and missing-data filling, and data transformation. Data integration addresses the inconsistencies, differing units, and duplication that arise when training data are assembled from multiple databases: inconsistent data may, because field contents are represented differently, hinder convergence during training or distort the results, leaving a data set unfavorable to training. Data integration is therefore the first step of data preprocessing.

再者,資料清理與資料遺缺填補目的在於確認資料的完整性、正確性及合理性,由於資料來源多元,因此在此階段必須檢查特徵值是否合理,於此所挑選的特徵為電子零件的參數,故使用整體平均值填補遺缺的資料。 Data cleaning and missing-data filling aim to confirm the completeness, correctness, and reasonableness of the data. Because the data come from multiple sources, feature values must be checked for reasonableness at this stage; since the features chosen here are parameters of electronic components, missing entries are filled with the overall mean.
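The mean imputation just described, where a missing feature value is filled with the feature's overall average, might look like this minimal sketch (plain Python, with `None` marking a missing measurement; the helper name is hypothetical):

```python
def fill_missing_with_mean(values):
    """Replace None entries with the mean of the observed entries."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

# Example: a pin-length column with one missing entry.
pin_lengths = [1.0, None, 2.0, 3.0]
print(fill_missing_with_mean(pin_lengths))  # [1.0, 2.0, 2.0, 3.0]
```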

資料轉換目的在於將資料內容轉換成易於訓練或是使訓練結果可信度提升,其中,此階段的工作包含資料一般化、建立新屬性與資料正規化,資料一般化指提升資料所代表的概念與意義,以將特徵中所包含的特徵值類型減少,而建立新屬性指利用舊有的屬性找出訓練所需要的新屬性,資料正規化指將不同標準或單位之下所記錄的資料轉換成相同標準,正規化後資料將重新分布於一個特定且較小的區間中,以便提高訓練結果的準確度。常見的正規化方法有極值正規化、Z-分數正規化,及十進制正規化。 Data transformation converts the data into a form that is easier to train on or that raises the credibility of training results. This stage comprises data generalization, creation of new attributes, and data normalization. Data generalization raises the level of the concepts the data represent so as to reduce the number of distinct values a feature contains; creating new attributes means deriving, from existing attributes, new ones needed for training; data normalization converts data recorded under different standards or units to a common standard, redistributing the values into a specific, smaller interval to improve the accuracy of training results. Common normalization methods include extreme-value (min-max) normalization, Z-score normalization, and decimal normalization.

於此,該資料整合模組5將該特徵值正規化至v_a、v_b區間中,滿足 v' = (v − v_min)/(v_max − v_min) × (v_b − v_a) + v_a,v_a < v_b 關係式,其中,v'為正規化至v_a、v_b後的特徵值,v為需作正規化之特徵值,v_max為一項特徵中的最大特徵值,而v_min為一項特徵中的最小特徵值。 Here, the data integration module 5 normalizes each feature value into the interval [v_a, v_b], satisfying v' = (v − v_min)/(v_max − v_min) × (v_b − v_a) + v_a with v_a < v_b, where v' is the feature value after normalization into [v_a, v_b], v is the feature value to be normalized, v_max is the largest value of the feature, and v_min is the smallest value of the feature.
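The extreme-value (min-max) normalization above, v' = (v − v_min)/(v_max − v_min) × (v_b − v_a) + v_a, can be sketched as follows; the default target interval [0, 1] is an assumption for illustration, since the text only requires v_a < v_b.

```python
def normalize(values, v_a=0.0, v_b=1.0):
    """Rescale a feature column into the target interval [v_a, v_b]."""
    v_min, v_max = min(values), max(values)
    scale = (v_b - v_a) / (v_max - v_min)
    return [(v - v_min) * scale + v_a for v in values]

print(normalize([2.0, 4.0, 6.0, 10.0]))                      # [0.0, 0.25, 0.5, 1.0]
print(normalize([2.0, 4.0, 6.0, 10.0], v_a=-1.0, v_b=1.0))   # [-1.0, -0.5, 0.0, 1.0]
```

Rescaling every feature into the same small interval is what lets features with very different units (pin counts versus millimeter pitches) contribute comparably to the weight computation described later.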

本實施例使用了正規化與未正規化的訓練資料集作為實驗比較,此資料集所挑選之特徵數、資料量以及輸出節點數量與類神經網路(亦稱為神經網路)的訓練條件如下表1所示。 This embodiment uses normalized and unnormalized training data sets for experimental comparison. The number of features selected, the amount of data, the number of output nodes, and the training conditions of the artificial neural network (also simply called the neural network) are shown in Table 1 below.

參閱下表2為正規化該訓練資料集之訓練結果,下表3為未正規化該訓練資料集之訓練結果,於此使用i-j-k描述神經網路架構,其中i代表輸入層神經元數量;j代表隱藏層神經元數量;k則代表輸出層神經元數量。 Table 2 below shows the training results on the normalized training data set, and Table 3 those on the unnormalized one. The notation i-j-k describes the network architecture, where i is the number of input-layer neurons, j the number of hidden-layer neurons, and k the number of output-layer neurons.

由上表2、3之結果可知,正規化後之資料集No(19-50-10)的平均正確率為99.2%,未正規化之資料集No(19-53-10)的平均正確率為51.8%,因此,經過正規化後之資料集的分類結果表現為佳,且相較於未正規化資料集之分類結果高出55.9%。 From the results in Tables 2 and 3 above, the normalized data set (architecture 19-50-10) achieves an average accuracy of 99.2%, while the unnormalized data set (architecture 19-53-10) achieves 51.8%. The classification results on the normalized data set are therefore better, 55.9% higher than those on the unnormalized data set.

再者,資料集正規化後每一項特徵中的特徵值距離縮短,使得訓練過程中類神經網路能更容易地透過正規化後的特徵值計算出神經元之間連結的權重值,若資料未正規化,可能會因權重值超過活化函數的區間而導致無法正確地調整權重值,致使類神經網路過早收斂而未達到訓練與學習的效果。 Moreover, after the data set is normalized the distances between values within each feature shrink, so during training the neural network can more easily compute the weights connecting neurons from the normalized feature values. If the data are not normalized, weight values may exceed the activation function's effective interval and fail to be adjusted correctly, causing the network to converge prematurely without achieving the intended training and learning effect.

本發明利用極值正規化的方式使特徵值重新分布於特定的區間中,以提高訓練類神經網路效率,且經過正規化後的訓練結果正確率較未正規化的訓練結果高。 The present invention uses extreme-value (min-max) normalization to redistribute feature values into a specific interval, improving the efficiency of training the neural network; normalized training results achieve higher accuracy than unnormalized ones.

該訓練模組6使用前饋式神經網路(Feed-Forward Neural Network,FNN)架構搭配倒傳遞演算法,而倒傳遞演算法屬於多層前饋式網路並將神經網路分成輸入層(Input Layer)、隱藏層(Hidden Layer)與輸出層(Output Layer)。輸入層在網路架構中作為接受資料並輸入訊息的一端,輸入層有多少神經元即代表有多少種不同的訓練特徵,用以表示網路輸入的變數。 The training module 6 uses a feed-forward neural network (FNN) architecture together with the backpropagation algorithm. Backpropagation applies to multilayer feed-forward networks and divides the neural network into an input layer, hidden layers, and an output layer. The input layer is the end of the architecture that receives data and feeds in information; the number of input-layer neurons equals the number of distinct training features and represents the network's input variables.

隱藏層介於輸入層與輸出層間,用以呈現各單元之間互相影響的情況。隱藏層神經元數量的多寡是以試誤法找到最適合,當隱藏層神經元數量越多收斂速度越慢、誤差值越小。輸出層在網路架構中作為處理訓練結果並輸出訊息的一端,用以表示網路輸出的變數。 The hidden layer lies between the input and output layers and captures the mutual influence among units. The most suitable number of hidden-layer neurons is found by trial and error: the more hidden-layer neurons, the slower the convergence and the smaller the error. The output layer is the end of the architecture that processes the training results and emits output, representing the network's output variables.

倒傳遞演算法是將誤差值最小化並找出輸入層、隱藏層以及輸出層之間連結權重的關係,如圖2所示,倒傳遞類神經架構可分為輸入(Input)、權重值(Weight)和活化函數(Activation Function)三個部份,權重值又可分成權重值與偏權值。其中,x_1, x_2, x_3, ..., x_i 表示為輸入訊號;w_ij 表示輸入層神經元與隱藏層神經元相連的權重值;b_j 則是隱藏層神經元之偏權值;h_1, h_2, h_3, ..., h_j 則是輸入項 x_i 與權重值 w_ij 之乘積總和,如公式 h_j = Σ_i w_ij x_i + b_j。 The backpropagation algorithm minimizes the error value and finds the relationships among the connection weights between the input, hidden, and output layers. As shown in FIG. 2, the backpropagation neural architecture divides into three parts: inputs, weights, and activation functions, where the weights further divide into weights and biases. Here x_1, x_2, x_3, ..., x_i denote the input signals; w_ij denotes the weight connecting input-layer neuron i to hidden-layer neuron j; b_j is the bias of hidden-layer neuron j; and h_1, h_2, h_3, ..., h_j are the sums of products of the inputs x_i and the weights w_ij, as in the formula h_j = Σ_i w_ij x_i + b_j.

而後再將 h_j 代入活化函數 f_tanh 產生隱藏層的輸出並同時做為下一層的輸入。為了模擬生物神經網路的運作模式,活化函數通常是一種非線性的轉換,傳統的活化函數為 Hyper tangent 函數與 Sigmoid 函數[28],如下公式:f_tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)),f_sig(x) = 1 / (1 + e^(−x))。 Then h_j is substituted into the activation function f_tanh to produce the hidden layer's output, which also serves as the next layer's input. To emulate the operation of biological neural networks, the activation function is usually a nonlinear transformation; the traditional activation functions are the hyperbolic tangent and the sigmoid [28]: f_tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)), f_sig(x) = 1 / (1 + e^(−x)).

隱藏層使用的活化函數為 Hyper tangent 函數,輸出則是採用 Sigmoid 函數。w_jk 表示隱藏層神經元與輸出層神經元相連的權重值;b_k 則是輸出層神經元之偏權值;O_1, O_2, ..., O_k 則是輸入項 f_tanh(h_j) 與權重值 w_jk 之乘積總和,即 O_k = Σ_j w_jk f_tanh(h_j) + b_k,最後將 O_k 代入活化函數 f_sig,產生神經元輸出 y_k,如公式 y_k = f_sig(O_k)。 The hidden layer uses the hyperbolic tangent as its activation function, while the output layer uses the sigmoid. w_jk denotes the weight connecting hidden-layer neuron j to output-layer neuron k; b_k is the bias of output-layer neuron k; O_1, O_2, ..., O_k are the sums of products of the inputs f_tanh(h_j) and the weights w_jk, i.e. O_k = Σ_j w_jk f_tanh(h_j) + b_k; finally O_k is substituted into the activation function, producing the neuron output y_k = f_sig(O_k).
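A minimal sketch of the forward pass just described, for the 19-50-10 architecture used in the experiments: hidden neurons apply the hyperbolic tangent to h_j = Σ_i w_ij x_i + b_j, and output neurons apply the sigmoid to O_k = Σ_j w_jk f_tanh(h_j) + b_k. The random weights and the uniform input vector are placeholders; in the patent the weights come from backpropagation training.

```python
import math
import random

def sigmoid(x):
    """Output-layer activation f_sig(x) = 1 / (1 + e^{-x})."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, b_hidden, w_output, b_output):
    """Forward pass: tanh hidden layer, sigmoid output layer."""
    # h_j = sum_i w_ij * x_i + b_j, then hidden output f_tanh(h_j)
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # O_k = sum_j w_jk * f_tanh(h_j) + b_k, then y_k = f_sig(O_k)
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)) + b)
            for row, b in zip(w_output, b_output)]

random.seed(0)
n_in, n_hid, n_out = 19, 50, 10  # the i-j-k = 19-50-10 architecture
w_h = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b_h = [0.0] * n_hid
w_o = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
b_o = [0.0] * n_out

y = forward([0.5] * n_in, w_h, b_h, w_o, b_o)
assert len(y) == 10 and all(0.0 < v < 1.0 for v in y)  # one score per class
```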

倒傳遞神經網路未達到收斂條件時,會計算輸出結果與目標結果的誤差並調整權重重新訓練,直至達到收斂條件為止,如公式 w_t = w_(t−1) + Δw。 When the backpropagation network has not yet met the convergence condition, it computes the error between the output and the target and adjusts the weights for retraining, until the convergence condition is reached, as in the formula w_t = w_(t−1) + Δw.

該訓練模組6與該資料整合模組5及該服務資料庫1連接,並決定該訓練資料集進行訓練之訓練規模及神經網路參數,以作為後續分類之依據,並傳送訓練結果至該服務資料庫1,其中,訓練之收斂條件為當前訓練結束後累計誤差小於給定的門檻值,即停止訓練。 The training module 6 is connected to the data integration module 5 and the service database 1, determines the training scale and neural network parameters with which the training data set is trained, as the basis for subsequent classification, and sends the training results to the service database 1. The convergence condition for training is that the cumulative error at the end of the current training round is smaller than a given threshold, at which point training stops.

參閱圖3,在本較佳實施例中將訓練流程分成網路初始化階段、節點輸出計算階段,及權重修正階段,於此節點亦稱為神經元。首先,透過網路初始化階段將訓練資料(於此亦稱訓練資料集)、設定網路輸入參數、隨機產生權重值(Weights)與偏權值(Bias)、分配權重值與偏權值,然後,準備進入節點輸出計算階段,此階段包含計算隱藏層的節點輸出值、隱藏層的活化函數(Hyper tangent)套用、計算輸出層的節點輸出值,以及輸出層節點的活化函數(Sigmoid)套用,最後,透過權重修正階段計算誤差修正梯度,並調整權重值與偏權值、學習率至達到收斂標準的輸出結果後結束訓練流程,若無達到收斂標準則判斷是否符合終止循環次數,並再次進行權重修正階段與節點輸出計算階段至達到收斂標準的輸出結果後結束訓練流程。 Referring to FIG. 3, in the preferred embodiment the training flow is divided into a network initialization phase, a node-output computation phase, and a weight-correction phase; here a node is also called a neuron. First, the network initialization phase loads the training data (also called the training data set here), sets the network input parameters, randomly generates weights and biases, and assigns them. The flow then enters the node-output computation phase, which computes the hidden-layer node outputs, applies the hidden layer's activation function (hyperbolic tangent), computes the output-layer node outputs, and applies the output-layer nodes' activation function (sigmoid). Finally, the weight-correction phase computes the error-correction gradient and adjusts the weights, biases, and learning rate; once the output meets the convergence criterion the training flow ends. If the criterion is not met, the flow checks whether the termination loop count has been reached, and repeats the weight-correction and node-output computation phases until an output meeting the convergence criterion is obtained, whereupon training ends.
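The three-phase flow of FIG. 3 reduces to a loop skeleton like the following. `train_step` is a hypothetical stand-in for one pass of the node-output computation and weight-correction phases; the stopping rule is the one stated in the text (cumulative error below a given threshold, with a cap on the loop count).

```python
def train(train_step, threshold=0.01, max_epochs=1000):
    """Run epochs until cumulative error < threshold; return (epochs, error)."""
    error = float("inf")
    for epoch in range(1, max_epochs + 1):
        error = train_step()      # node-output + weight-correction phases
        if error < threshold:     # convergence condition from the text
            return epoch, error
    return max_epochs, error      # termination loop count reached

# Toy stand-in: the cumulative error halves each epoch, so it converges.
state = {"error": 1.0}
def toy_step():
    state["error"] *= 0.5
    return state["error"]

epochs, final_error = train(toy_step)
print(epochs, final_error)  # 7 0.0078125
```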

Further, during the network initialization phase, the system first requests the neural network parameters and initializes the weights and biases. Three neural network parameters are set at this stage: the initial learning rate, the initial momentum, and the number of hidden layer nodes.

Initial learning rate: at initialization the learning rate is set within the interval [0, 1]. An adaptive learning rate adjustment method is used, which examines the error accumulated over each training epoch to judge whether the training direction is correct. If the error shows a downward trend, the training direction is correct, so the learning rate is increased to speed up learning; otherwise a penalty factor is applied to lower the learning rate and slow the learning steps, correcting the training direction.

Initial momentum: besides the learning rate, the magnitude of the momentum also affects the learning efficiency of the neural network. The main function of momentum is to damp the oscillation caused by recomputing the weights after the learning rate has been adjusted. During initialization this parameter, like the learning rate, can be set within the interval [0, 1]; the system automatically applies it each time the learning rate and weights are adjusted.

Number of hidden layer nodes: the number of hidden layer nodes affects the convergence rate, the learning efficiency, and the training results. Here the number of hidden layer nodes is chosen by trial and error.

The convergence condition can be set to either reaching the maximum number of training epochs or the cumulative error falling below a given threshold. Stopping at the maximum number of epochs means training stops once the configured limit is reached, which indicates that the training could not make the neural network truly converge; the neural network parameters should then be re-tuned, or the training data set checked for anomalies. As soon as either of the two conditions is met, the training process terminates.
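The two termination conditions just described can be sketched as a small helper; the function and return-value names are illustrative, not from the patent.

```python
def training_finished(rmse, threshold, epoch, max_epochs):
    """Return the reason training should stop, or None to continue."""
    if rmse < threshold:
        return "converged"        # cumulative error fell below the threshold
    if epoch >= max_epochs:
        # Reached the epoch cap without converging: re-tune the network
        # parameters or check the training data set for anomalies.
        return "max_epochs"
    return None
```

Either returned reason terminates the training loop; `"max_epochs"` signals that the parameters or the data set need review.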

Here, the convergence condition for training is that the cumulative error at the end of the current epoch is smaller than that of the previous epoch; training then stops. Let $v_{rmse}$ be the root mean square error (RMSE) accumulated over the current epoch and $v'_{rmse}$ the RMSE accumulated over the previous epoch. Both satisfy the relation

$$v_{rmse}=\sqrt{\frac{1}{c_d\,c_o}\sum_{d=1}^{c_d}\sum_{o=1}^{c_o}\bigl(t_o^{(d)}-y_o^{(d)}\bigr)^2}$$

where $v_{rmse}$ is the RMSE accumulated after each training epoch, $c_d$ is the number of records in the training data set, $c_o$ is the number of output bits of the neural network, $t_o^{(d)}$ is the target value of the classification result, and $y_o^{(d)}$ is the approximation given by the current classification result.
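The RMSE above can be computed directly from the targets and outputs; a minimal sketch, assuming both are lists of $c_d$ rows with $c_o$ output bits each (names are illustrative):

```python
def rmse(targets, outputs):
    """v_rmse over c_d data records and c_o output bits, per the formula above."""
    c_d, c_o = len(targets), len(targets[0])
    total = sum((t - y) ** 2
                for t_row, y_row in zip(targets, outputs)
                for t, y in zip(t_row, y_row))
    return (total / (c_d * c_o)) ** 0.5
```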

Further, the neural network parameters are any one or a combination of the convergence condition, the number of hidden layer neurons, the number of hidden layers, the initial learning rate, the initial momentum, the threshold value, the weight values, and the bias values.

Referring again to FIG. 2, during the node output calculation phase the output value of each node is computed layer by layer; after the computation the bias is added and the activation function applied, and the result serves as the input value of the next layer.

Here the notation i-j-k describes the neural network architecture, where i is the number of input layer neurons, j the number of hidden layer neurons, and k the number of output layer neurons. $x_1,\ldots,x_i$ are the input feature values. Using the weights $w_{ij}$ connecting the input layer to the hidden layer, $h_j$ is computed with the formula

$$h_j(X)=\sum_i w_{ij}\,x_i+b_j$$

where $b_j$ is the hidden layer bias. The value obtained by passing $h_j$ through the activation function $f_{tanh}$ then serves as the input to the output layer; multiplying by the weights $w_{jk}$ connecting the hidden layer to the output layer gives $O_k$, with the formula

$$O_k(H)=\sum_j w_{jk}\,f_{tanh}(h_j)+b_k.$$

Finally, passing through the output activation function yields the classification result of each record, with the formula $y_k(O)=f_{sig}(O_k)$.
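One forward pass of the i-j-k network can be sketched as below, with a tanh hidden layer and a sigmoid output layer, and the bias added before each activation as described; the parameter layout (`w_ih[i][j]`, `w_ho[j][k]`) and all names are illustrative.

```python
import math

def forward(x, w_ih, b_h, w_ho, b_o):
    """One forward pass: returns the hidden activations h and the outputs y.

    w_ih[i][j] are input-to-hidden weights, w_ho[j][k] hidden-to-output weights.
    """
    # Hidden layer: weighted sum plus bias, then the tanh activation.
    h = [math.tanh(sum(w_ih[i][j] * x[i] for i in range(len(x))) + b_h[j])
         for j in range(len(b_h))]
    # Output layer: weighted sum plus bias, then the sigmoid activation.
    y = [1.0 / (1.0 + math.exp(-(sum(w_ho[j][k] * h[j]
                                     for j in range(len(h))) + b_o[k])))
         for k in range(len(b_o))]
    return h, y
```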

Referring also to FIG. 4, upon entering the weight correction phase, the weights, biases, and learning rate are adjusted with reference to the cumulative error of the previous training epoch. Adjusting these three variables gives the training module 6 better learning ability; in addition, the training conditions can be slightly corrected based on each epoch's result, ensuring that the learning direction is correct and the best learning effect is obtained.

The weights are adjusted by working backward from the output layer toward the input layer, computing four gradients in turn: the output layer bias gradient, the hidden-to-output weight gradient, the hidden layer bias gradient, and the input-to-hidden weight gradient. The amounts of change are then computed from these gradients, and finally the weights are corrected using the changes together with the momentum.

When adjusting the weights, four gradients must first be computed: the output layer bias gradient $\delta_k$, the gradient from the output layer to each hidden layer node $\delta_{jk}$, the hidden layer bias gradient $\delta_j$, and the gradient from the hidden layer to each input layer node $\delta_{ij}$, where $t_k$ is the target value of the k-th output and $y_k$ is the approximation of the k-th output. With $H_j=f_{tanh}(h_j)$, the formulas are

$$\delta_k=(t_k-y_k)\,y_k\,(1-y_k),$$
$$\delta_{jk}=\delta_k\,H_j,$$
$$\delta_j=\Bigl(\sum_k \delta_k\,w_{jk}\Bigr)\bigl(1-H_j^2\bigr),$$
$$\delta_{ij}=\delta_j\,x_i.$$
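The four gradients can be sketched with the standard sigmoid ($y(1-y)$) and hyperbolic tangent ($1-h^2$) derivatives; this is a reconstruction, not the patent's code, and the names (including the `w_ho[j][k]` hidden-to-output layout) are illustrative.

```python
def gradients(x, h, y, t, w_ho):
    """The four back-propagation gradients, from output layer back to input."""
    # Output-layer bias gradient: delta_k = (t_k - y_k) * y_k * (1 - y_k)
    g_b_out = [(tk - yk) * yk * (1 - yk) for tk, yk in zip(t, y)]
    # Hidden-to-output weight gradient: delta_k * h_j
    g_w_ho = [[g_b_out[k] * h[j] for k in range(len(y))] for j in range(len(h))]
    # Hidden-layer bias gradient: (sum_k delta_k * w_jk) * (1 - h_j**2)
    g_b_hid = [sum(g_b_out[k] * w_ho[j][k] for k in range(len(y)))
               * (1 - h[j] ** 2) for j in range(len(h))]
    # Input-to-hidden weight gradient: delta_j * x_i
    g_w_ih = [[g_b_hid[j] * x[i] for j in range(len(h))] for i in range(len(x))]
    return g_b_out, g_w_ho, g_b_hid, g_w_ih
```

When the output already matches the target, every gradient is zero, so no correction is applied.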

Next, the change in the output layer bias $\Delta b_k$, the change in the hidden-to-output weights $\Delta w_{jk}$, the change in the hidden layer bias $\Delta b_j$, and the change in the input-to-hidden weights $\Delta w_{ij}$ are computed. In this calculation each gradient is multiplied by the learning rate $\eta$, making the magnitude of the adjustment more pronounced:

$$\Delta b_k=\eta\,\delta_k,\qquad \Delta w_{jk}=\eta\,\delta_{jk},\qquad \Delta b_j=\eta\,\delta_j,\qquad \Delta w_{ij}=\eta\,\delta_{ij}.$$

Finally, the gradients and the weight changes are used to update the weights $w_{ij}$ connecting the input layer to the hidden layer, the hidden layer bias $b_j$, the weights $w_{jk}$ connecting the hidden layer to the output layer, and the output layer bias $b_k$. The change from the previous epoch is multiplied by the momentum $M_{mom}$ to damp the oscillation of the training process caused by adjusting the weights, and the result serves as the parameters for the next epoch:

$$w_{ij}\leftarrow w_{ij}+\Delta w_{ij}^{(n)}+M_{mom}\,\Delta w_{ij}^{(n-1)},\qquad b_j\leftarrow b_j+\Delta b_j^{(n)}+M_{mom}\,\Delta b_j^{(n-1)},$$
$$w_{jk}\leftarrow w_{jk}+\Delta w_{jk}^{(n)}+M_{mom}\,\Delta w_{jk}^{(n-1)},\qquad b_k\leftarrow b_k+\Delta b_k^{(n)}+M_{mom}\,\Delta b_k^{(n-1)},$$

where the superscripts $(n)$ and $(n-1)$ denote the current and previous epochs.
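The update rule (current change plus momentum times the previous change) can be sketched for a flat list of parameters; the function and variable names are illustrative.

```python
def apply_update(params, deltas, deltas_prev, m_mom):
    """New parameter = old parameter + current change + momentum * previous change.

    Returns the updated parameters and the combined changes, which become
    the "previous changes" of the next epoch.
    """
    combined = [d + m_mom * dp for d, dp in zip(deltas, deltas_prev)]
    return [p + c for p, c in zip(params, combined)], combined
```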

This phase uses the adaptive learning rate as the factor in computing the weight changes. To adjust the learning rate, the $v'_{rmse}$ of the previous training epoch is compared with the $v_{rmse}$ of the current epoch to judge whether the current learning direction is correct. If the direction is correct, a reward factor is applied to the learning rate so that the next epoch learns faster and the convergence condition is reached earlier; otherwise a penalty factor is applied to slow the learning speed and preserve the learning effect:

$$\eta^{(n+1)}=\begin{cases}\alpha\,\eta^{(n)}, & v_{rmse}<v'_{rmse}\quad(\text{reward factor},\ \alpha>1)\\ \beta\,\eta^{(n)}, & v_{rmse}\ge v'_{rmse}\quad(\text{penalty factor},\ 0<\beta<1)\end{cases}$$
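The reward-or-penalty rule can be sketched as follows; the concrete factor values (1.05 and 0.7) are illustrative, not taken from the patent.

```python
def adjust_learning_rate(lr, v_rmse, v_rmse_prev, reward=1.05, penalty=0.7):
    """Reward a correct learning direction, penalize a wrong one."""
    if v_rmse < v_rmse_prev:          # error decreased: direction is correct
        return min(lr * reward, 1.0)  # keep the rate inside [0, 1]
    return lr * penalty               # error increased: slow the learning down
```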

The RMSE value obtained from each epoch's training result is used to adjust the weights and the learning rate, keeping the training process moving in the correct direction so that it does not fail to converge.

In the preferred embodiment, the training scale includes an input layer, a hidden layer, and an output layer, where the size of the input layer is the number of input package type features, the number of hidden layers is 1, and the size of the output layer is the number of package types output by the classification. Here there are 19 input layer features and 10 output package types.

The number of hidden layer neurons j satisfies j = x × (input + output), 1.5 < x < 2, where input is the 19 input package type features and output is the 10 classification output package types. Preferably, when the number of hidden layer neurons is close to this formula, better training and better classification results are obtained.
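The rule j = x × (input + output) with 1.5 < x < 2 can be evaluated directly; for the 19 input features and 10 output types it yields roughly 44 to 58 hidden neurons. The helper name is illustrative.

```python
def hidden_nodes(n_input=19, n_output=10, x=1.75):
    """Number of hidden-layer neurons: j = x * (input + output), 1.5 < x < 2."""
    if not 1.5 < x < 2:
        raise ValueError("x must lie in the open interval (1.5, 2)")
    return round(x * (n_input + n_output))
```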

The output package types are Ball Grid Array (BGA), Quad Flat Package (QFP), Quad Flat No-Lead (QFN), Small Outline Transistor (SOT), Small Outline Integrated Circuit (SOIC), Small Outline No-Lead (SON), Dual Flat No-Lead (DFN), Small Outline Diode (SOD), small chip components (Chip), and surface-mount Metal Electrode Leadless Face (MELF) color-coded resistors.

The parameter storage module 7 is connected to the training module 6 and the service database 1 and records the training parameter data used by the training module 6.

Referring to FIG. 5, the classification processing module 8 receives the data to be classified and displays the classification result in the service database 1. In practice, the classification processing module 8 may be deployed separately on the user side or in the same electronic device that performs the training and classification of electronic component package types; the invention is not limited in this respect. The classification result may be the data to be classified or the result obtained after that data has been processed.

The classification processing module 8 includes a processor storing instructions for performing an operation that includes the following steps.

First, in step 91, the user inputs the electronic component drawing to be classified into the service database 1. Next, in step 92, the feature selection module 4 performs feature selection from the external database 3 according to the package type features of the electronic component drawing.

Then, in step 93, the data integration module 5 performs data preprocessing and normalization on the selected feature values to obtain the data to be classified.
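The normalization of step 93 can be sketched as standard min-max scaling of each feature into an interval [v_a, v_b], consistent with the symbols defined in claim 4; the function name is illustrative.

```python
def normalize(values, v_a=0.0, v_b=1.0):
    """Scale one feature's values into [v_a, v_b] (requires v_a < v_b)."""
    if v_a >= v_b:
        raise ValueError("v_a must be smaller than v_b")
    v_min, v_max = min(values), max(values)
    # v' = (v - v_min) / (v_max - v_min) * (v_b - v_a) + v_a
    return [(v - v_min) / (v_max - v_min) * (v_b - v_a) + v_a for v in values]
```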

Finally, in step 94, the service database 1 obtains the classification result of the package type of the electronic component.

The invention trains the neural network with 19 features of physical electronic components to find the training scale and neural network parameters best suited to the classification system. The accuracy of the normalized training results is higher than that of the unnormalized ones; moreover, when the number of hidden layer neurons satisfies x × (input + output), 1.5 < x < 2, even better training and classification results are obtained.

In summary, the electronic component package classification system of the invention, which uses a neural network for classification, arranges the service database 1, the external database 3, the feature selection module 4, the data integration module 5, the training module 6, the parameter storage module 7, and the classification processing module to work together with a back-propagation neural network. This solves the problems of manually classifying electronic component package types, namely error-prone judgment, long processing time, and a classification process that depends too heavily on the layout engineer's own working experience, and also yields higher-quality training and classification results, so the objects of the invention are indeed achieved.

The foregoing is merely a preferred embodiment of the invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the description of the invention remain within the scope covered by this patent.

Claims (10)

1. An electronic component package classification system using a neural network for classification, the system comprising: a service database for external input of electronic component drawings and for receiving and storing training data of associated input and output data; an external database storing package type data of a plurality of electronic components; a feature selection module, connected to the external database and recording package type features of electronic components, which, according to the electronic component drawing to be classified input through the service database, performs feature selection from the external database based on the package type features; a data integration module that performs data preprocessing and normalization on the feature values selected by the feature selection module, removing erroneous noise from the data, filling in missing data, and confining the distribution of the feature values to a specific interval to obtain data to be classified; and a classification processing module that receives the data to be classified and displays the classification result in the service database.

2. The electronic component package classification system of claim 1, wherein the classification processing module includes a processor storing instructions for performing an operation that includes: inputting, by the user, the electronic component drawing to be classified into the service database; performing, by the feature selection module, feature selection from the external database according to the package type features of the electronic component drawing; performing, by the data integration module, data preprocessing and normalization on the selected feature values to obtain the data to be classified; and obtaining, in the service database, the classification result of the package type of the electronic component.

3. The electronic component package classification system of claim 2, further comprising a training module and a parameter storage module, the training module being connected to the data integration module and the service database and determining the training scale and neural network parameters for training the training data set as a basis for subsequent classification, wherein the convergence condition of training is that the cumulative error after the current training epoch is smaller than a given threshold, whereupon training stops, and the parameter storage module is connected to the training module and the service database to record the training parameter data used by the training module.

4. The electronic component package classification system of claim 3, wherein the data integration module normalizes the feature values into the interval [v_a, v_b], satisfying v' = ((v - v_min)/(v_max - v_min)) × (v_b - v_a) + v_a with v_a < v_b, where v' is the feature value normalized into [v_a, v_b], v is the feature value to be normalized, v_max is the maximum value of a feature, and v_min is the minimum value of a feature.

5. The electronic component package classification system of claim 4, wherein the neural network parameters are any one or a combination of the convergence condition, the number of hidden layer neurons, the number of hidden layers, the initial learning rate, the initial momentum, the threshold value, the weight values, and the bias values.

6. The electronic component package classification system of claim 5, wherein the number of hidden layer neurons j satisfies j = x × (input + output), 1.5 < x < 2, where input is the 19 input package type features and output is the 10 classification output package types.

7. The electronic component package classification system of claim 6, wherein the package type data records any one or a combination of electronic component appearance information, printed circuit board restricted area information, solder joint information, geometric shape parameters, applicable field parameters, electrical parameters, and contact parameters.

8. The electronic component package classification system of claim 7, wherein the package type features include the physical appearance of the electronic component, the physical pins of the electronic component, and the electronic component drawing.

9. The electronic component package classification system of claim 8, wherein the weighting of the package type features ranks the electronic component drawing above the physical appearance of the electronic component, and the physical appearance of the electronic component above the physical pins of the electronic component.

10. The electronic component package classification system of claim 9, wherein the physical appearance, physical pins, and drawing of the electronic component are selected from the following 19 features: the number of pins of the electronic component; the original physical length; the maximum physical length; the minimum physical length; the original physical width; the maximum physical width; the minimum physical width; the physical height; the spacing between the component body and the circuit board; the large pin length; the small pin length; the large pin width; the small pin width; the large drawing pin length; the small drawing pin length; the large drawing pin width; the small drawing pin width; the X-axis direction of the drawing pin pitch; and the Y-axis direction of the drawing pin pitch.
TW107121401A 2018-06-22 2018-06-22 Electronic component packaging classification system using neural network for classification TWI676939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107121401A TWI676939B (en) 2018-06-22 2018-06-22 Electronic component packaging classification system using neural network for classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW107121401A TWI676939B (en) 2018-06-22 2018-06-22 Electronic component packaging classification system using neural network for classification

Publications (2)

Publication Number Publication Date
TWI676939B true TWI676939B (en) 2019-11-11
TW202001698A TW202001698A (en) 2020-01-01

Family

ID=69189197

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107121401A TWI676939B (en) 2018-06-22 2018-06-22 Electronic component packaging classification system using neural network for classification

Country Status (1)

Country Link
TW (1) TWI676939B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080155483A1 (en) * 2006-12-22 2008-06-26 Inventec Corporation Database-aided circuit design system and method therefor
CN100562878C (en) * 2006-06-23 2009-11-25 株式会社日立高新技术 Defect inspecting system and defect detecting method
CN102063554A (en) * 2011-01-17 2011-05-18 浪新微电子系统(上海)有限公司 PCB (Printed circuit board) design platform
CN105372581A (en) * 2015-11-18 2016-03-02 华南理工大学 Flexible circuit board manufacturing process automatic monitoring and intelligent analysis system and method
TW201633192A (en) * 2014-12-18 2016-09-16 Asml荷蘭公司 Feature search by machine learning
CN106777612A (en) * 2016-12-02 2017-05-31 全球能源互联网研究院 A kind of method and device of the forecast model and PCB design for setting up PCB types
TW201816670A (en) * 2016-10-14 2018-05-01 美商克萊譚克公司 Diagnostic systems and methods for deep learning models configured for semiconductor applications
TW201822038A (en) * 2016-12-12 2018-06-16 達盟系統有限公司 Auto defect screening using adaptive machine learning in semiconductor device manufacturing flow


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI715289B (en) * 2019-11-14 2021-01-01 新加坡商鴻運科股份有限公司 Device and method for setting product printing parameters and storage medium
CN112801328A (en) * 2019-11-14 2021-05-14 鸿富锦精密电子(天津)有限公司 Product printing parameter setting device, method and computer readable storage medium
US11635743B2 (en) 2019-11-14 2023-04-25 Fulian Precision Electronics (Tianjin) Co., Ltd. Parameters suggestion system of solder paste screen printer including method, device employing method, and non-transitory storage
CN112801328B (en) * 2019-11-14 2023-10-31 富联精密电子(天津)有限公司 Product printing parameter setting device, method and computer readable storage medium
CN113095540A (en) * 2019-12-23 2021-07-09 财团法人工业技术研究院 Data integration method and data integration system

Also Published As

Publication number Publication date
TW202001698A (en) 2020-01-01

Similar Documents

Publication Publication Date Title
TWI676939B (en) Electronic component packaging classification system using neural network for classification
US20190392322A1 (en) Electronic component packaging type classification system using artificial neural network
Mirhoseini et al. Chip placement with deep reinforcement learning
Yang et al. Transfer-learning-based online Mura defect classification
CN110543616B (en) SMT solder paste printing volume prediction method based on industrial big data
TWI674823B (en) System and method for automatic layout
CN111539178A (en) Chip layout design method and system based on neural network and manufacturing method
CN109359355B (en) Design implementation method of standard structure module
CN108182316B (en) Electromagnetic simulation method based on artificial intelligence and electromagnetic brain thereof
US20210174200A1 (en) Training device and training method for neural network model
CN115293075A (en) OPC modeling method, OPC modeling device and electronic equipment
US8185864B2 (en) Circuit board analyzer and analysis method
Wang et al. An artificial neural network to support package classification for SMT components
Li et al. Fine pitch stencil printing process modeling and optimization
CN106777612B (en) Method and device for establishing PCB type prediction model and PCB design
US10896283B1 (en) Noise-based optimization for integrated circuit design
CN110633721A (en) Electronic part packaging and classifying system for classifying by using neural network
JPH11175577A (en) Virtual manufacture support design method and virtual manufacture support design system
CN115879412A (en) Layout level circuit diagram size parameter optimization method based on transfer learning
CN112347704B (en) Efficient artificial neural network microwave device modeling method based on Bayesian theory
Governi et al. A genetic algorithms-based procedure for automatic tolerance allocation integrated in a commercial variation analysis software
Rajabi et al. Analog circuit complementary optimization based on evolutionary algorithms and artificial neural network
CN111414724A (en) Method for optimizing circuit simulation
CN112508250B (en) Incremental analysis method, system, medium and terminal for command information system generation scheme
JP4377568B2 (en) Failure prediction method and failure prediction system

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees