TWI819578B - Multi-objective parameters optimization system, method and computer program product thereof


Info

Publication number
TWI819578B
Authority
TW
Taiwan
Prior art keywords
neural network
combination
trained neural
value
parameter optimization
Prior art date
Application number
TW111115048A
Other languages
Chinese (zh)
Other versions
TW202343172A (en)
Inventor
江振瑞
陳思翰
Original Assignee
國立中央大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中央大學
Priority to TW111115048A
Application granted
Publication of TWI819578B
Publication of TW202343172A


Landscapes

  • Hardware Redundancy (AREA)
  • Feedback Control In General (AREA)
  • Electrical Discharge Machining, Electrochemical Machining, And Combined Machining (AREA)

Abstract

A multi-objective parameters optimization system for providing a plurality of optimal machining parameters of a wire electrical discharge machine is provided. The system includes a plurality of post-training neural networks and a multi-objective parameters optimization module. Each post-training neural network outputs an objective prediction value according to a machining parameter combination. The multi-objective parameters optimization module executes a genetic algorithm that uses the objective prediction values output by the post-training neural networks for a plurality of value combinations of the machining parameter combination to find at least one optimal value combination of the machining parameter combination.

Description

Multi-objective parameter optimization system, method, and computer program product

The invention belongs to the field of artificial intelligence, and in particular to the field of artificial intelligence related to neural networks.

Wire-cut electrical discharge machines (wire EDMs) are widely used in manufacturing, and different combinations of machining parameters produce different machining qualities. In general, to achieve the machining quality required by a user, a senior operating technician must manually adjust the machining parameter combination based on accumulated experience. This process consumes considerable time and manpower, and still may not yield an optimal combination of machining parameters.

It follows that there is still room for improvement in the current technology. In view of this, the present invention provides a multi-objective parameter optimization system, method, and computer program product that effectively solve the above problems.

According to one aspect of the present invention, a multi-objective parameter optimization system for providing multi-objective optimized machining parameters of a wire-cut electrical discharge machine is provided. The system includes a plurality of trained neural networks and a parameter optimization module. Each trained neural network outputs an objective prediction value according to a machining parameter combination. The parameter optimization module executes a genetic algorithm that uses the objective prediction values output by the trained neural networks for multiple value combinations of the machining parameter combination to find an optimized value combination of the machining parameter combination.

According to another aspect of the present invention, a multi-objective parameter optimization method for providing multi-objective optimized machining parameters of a wire-cut electrical discharge machine is provided; the method is executed by a multi-objective parameter optimization system. The multi-objective parameter optimization system includes a plurality of trained neural networks and a parameter optimization module, and each trained neural network outputs an objective prediction value according to a machining parameter combination. The method includes the step of: executing, by the parameter optimization module, a genetic algorithm that uses the objective prediction values output by the trained neural networks for multiple value combinations of the machining parameter combination to find an optimized value combination of the machining parameter combination.

According to yet another aspect of the present invention, a computer program product stored in a non-transitory computer-readable medium is provided for operating a multi-objective parameter optimization system to provide multi-objective optimized machining parameters of a wire-cut electrical discharge machine. The multi-objective parameter optimization system includes a plurality of trained neural networks and a parameter optimization module, and each trained neural network outputs an objective prediction value according to a machining parameter combination. The computer program product includes instructions that cause the parameter optimization module to execute a genetic algorithm that uses the objective prediction values output by the trained neural networks for multiple value combinations of the machining parameter combination to find an optimized value combination of the machining parameter combination.

1: multi-objective parameter optimization system

2: trained neural network group

3: parameter optimization module

4: transfer learning module

10: wire-cut electrical discharge machine

21: first neural network

22: second neural network

23: third neural network

24: fourth neural network

25: fifth neural network

26: sixth neural network

211, 221, 231: input layers

212, 222, 232: hidden layers

213, 223, 233: loss function layers

214, 224, 234: neurons

S61~S69: steps

OF1, OF2: objective prediction values

FIG. 1 is a schematic diagram of a multi-objective parameter optimization system according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of the operation of the trained neural network group according to an embodiment of the present invention.

FIG. 3A is a schematic diagram of the operation of the parameter optimization module according to an embodiment of the present invention.

FIG. 3B is a schematic diagram of the results of the genetic algorithm according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of the operation of the transfer learning module according to an embodiment of the present invention.

FIG. 5A is a schematic diagram of the training model of the first neural network according to an embodiment of the present invention.

FIG. 5B is a schematic diagram of the training model of the second neural network according to an embodiment of the present invention.

FIG. 5C is a schematic diagram of the training model of the third neural network according to an embodiment of the present invention.

FIG. 6 is a flow chart of the steps of a multi-objective parameter optimization method according to an embodiment of the present invention.

When read in conjunction with the accompanying drawings, the following embodiments clearly demonstrate the above and other technical contents, features, and/or effects of the present invention. Through the description of specific embodiments, the technical means adopted by the present invention to achieve the above objectives, and their effects, will be further understood. Moreover, since the contents disclosed herein should be readily understood and practiced by those skilled in the art, all equivalent substitutions or modifications that do not depart from the concept of the present invention shall be covered by the claims.

It should be noted that, in this document, unless otherwise specified, "a" or "an" element is not limited to a single such element and may refer to one or more such elements.

In addition, ordinal numbers such as "first" or "second" in the specification and claims merely label the claimed elements; they do not imply that the claimed elements follow any ordinal sequence, nor any order between one claimed element and another, or between steps of a manufacturing method. These ordinal numbers are used only to distinguish one claimed element with a certain name from another claimed element with the same name.

In addition, in the specification and claims, a term such as "adjacent" describes mutual proximity and does not necessarily mean mutual contact.

In addition, in the present invention, descriptions such as "when..." cover the aspects "at, before, or after" the stated moment and are not limited to simultaneous occurrence; this is noted here in advance. In the present invention, descriptions such as "disposed on" indicate the relative positions of two elements and do not limit whether the two elements are in contact, unless specifically stated otherwise. Furthermore, when the present invention describes multiple effects, the use of the word "or" between effects means that the effects can exist independently, but does not exclude that multiple effects can exist simultaneously.

In addition, in the specification and claims, a term such as "connected" or "coupled" refers not only to a direct connection with another element, but also to an indirect connection or an electrical connection with another element. Electrical connection includes direct connection, indirect connection, and communication between two elements via radio signals.

In addition, in the specification and claims, the terms "about", "approximately", "substantially", and "generally" usually mean that a value differs from a given value by no more than 10%, or 5%, 3%, 2%, 1%, or 0.5% of the given value. Quantities given herein are approximate; that is, even without an explicit "about", "approximately", "substantially", or "generally", such a meaning may still be implied. In addition, the expressions "in the range of a first value to a second value" and "in the range between a first value and a second value" mean that the range includes the first value, the second value, and the values between them.

In addition, in this document, terms such as "system", "apparatus", "device", "module", and "unit" refer to a digital circuit containing one or more electronic components, an analog circuit, or another circuit in a broader sense, and unless otherwise specified, they do not necessarily stand in a rank or hierarchical relationship. Each element may be implemented in a suitable manner as a single circuit or an integrated circuit, and may include one or more active components, such as transistors or logic gates, or one or more passive components, such as resistors, capacitors, or inductors, but is not limited thereto. The elements may be connected to each other in a suitable manner, for example, by using one or more lines, matched to the input and output signals, to form serial or parallel connections. In addition, each element may allow input and output signals to enter and exit sequentially or in parallel. All of the above configurations depend on the actual application.

In addition, where reasonable, the technical features of the different embodiments disclosed herein may be combined to form further embodiments.

In addition, in this document, "at least one element" covers one or more elements; for example, the description "at least one element in a range" covers a single element in the range, multiple elements in the range, and all elements in the range. In this document, "and/or" covers the single, any-multiple, and all cases; for example, the description "element a, element b, and/or element c" covers any one of the three, any two of the three, and all three.

FIG. 1 is a schematic diagram of a multi-objective parameter optimization system 1 (hereinafter, system 1) according to an embodiment of the present invention. System 1 can be used to provide multiple optimized machining parameters of a wire-cut electrical discharge machine 10. As shown in FIG. 1, system 1 may include a trained neural network group 2 and a parameter optimization module 3. In addition, system 1 may also include a transfer learning module 4.

The trained neural network group 2 may include a plurality of trained neural networks (also called surrogate models). Each trained neural network outputs an objective prediction value according to a machining parameter combination; that is, a user can input a machining parameter combination into a trained neural network, and the trained neural network outputs the objective prediction value corresponding to that combination. The parameter optimization module 3 can execute a genetic algorithm that uses the objective prediction values of the trained neural networks to find an optimized machining parameter combination. In addition, the transfer learning module 4 can perform transfer learning on each of the trained neural networks, converting them into a plurality of transfer-learned neural networks.

In one embodiment, the trained neural network group 2, the parameter optimization module 3, and the transfer learning module 4 may be disposed on the same or different electronic devices, each having a processor. The types of electronic devices include desktop computers, notebook computers, tablet computers, industrial computers, servers, cloud servers, smartphones, and other electronic products with processors, but are not limited thereto. In one embodiment, the trained neural network group 2, the parameter optimization module 3, and the transfer learning module 4 can be implemented by a computer program product; for example, the computer program product may contain a plurality of instructions that cause the processor of the electronic device to perform particular operations, thereby implementing the various functions of the trained neural network group 2, the parameter optimization module 3, and the transfer learning module 4, but the invention is not limited thereto. In addition, in one embodiment, the computer program product may be stored in a non-transitory computer-readable medium, such as the memory or storage of an electronic device, but is not limited thereto. The computer program product may also be stored on a network server (for example, in the memory or another storage device of the network server) for other users to download; for example, it may be packaged as application software (an APP) and sold online, but is not limited thereto.

Each component is described next.

FIG. 2 is a schematic diagram of the operation of the trained neural network group 2 according to an embodiment of the present invention; please also refer to FIG. 1.

As shown in FIG. 2, in one embodiment, the trained neural network group 2 may include a first neural network 21, a second neural network 22, and a third neural network 23. The first neural network 21, the second neural network 22, and the third neural network 23 receive the same machining parameter combination and each output a different objective prediction value.

In one embodiment, the machining parameters in the machining parameter combination may include the discharge pulse off time (OFF), the arc discharge pulse off time (AFF), the short-circuit discharge pulse off time (SFF), the discharge pulse on time (ON), the arc discharge pulse on time (AN), or the short-circuit discharge pulse on time (SN), or any combination of the above, but are not limited thereto. In other embodiments, the machining parameters may also include the standard voltage (SV), the output voltage (OV), the feedrate override (FR), and/or the water flow (WL), but are not limited thereto.

In one embodiment, the first neural network 21 can output a machining feed rate prediction value (machining speed or feed rate); for example, after a machining parameter combination is input into the first neural network 21, the first neural network 21 performs feature analysis on the combination and predicts the machining feed rate corresponding to it. Here, "machining feed rate" refers to the feed rate of the wire-cut electrical discharge machine 10 at each moment while cutting a workpiece (that is, the object being cut).

In one embodiment, the second neural network 22 can output a surface roughness prediction value; for example, after a machining parameter combination is input into the second neural network 22, the second neural network 22 performs feature analysis on the combination and predicts the surface roughness corresponding to it. Here, "surface roughness" refers to the surface roughness of the workpiece after cutting.

In one embodiment, the third neural network 23 can output a machining accuracy prediction value; for example, after a machining parameter combination is input into the third neural network 23, the third neural network 23 performs feature analysis on the combination and predicts the machining accuracy corresponding to it. Here, "machining accuracy" refers to the degree to which three geometric parameters of the cut surface of the workpiece, namely its actual size, shape, and position, match the expected ideal geometric parameters.

In one embodiment, the first neural network 21, the second neural network 22, and the third neural network 23 are each trained by deep learning, thereby gaining the ability to produce objective prediction values from machining parameter combinations. Their training processes are described in later paragraphs (see FIGS. 5A to 5C).

In one embodiment, the first neural network 21, the second neural network 22, and the third neural network 23 can be developed using the Keras open-source neural network library, version 2.6.0, but are not limited thereto.

Next, the parameter optimization module 3 is described. FIG. 3A is a schematic diagram of the operation of the parameter optimization module 3 according to an embodiment of the present invention; please also refer to FIGS. 1 and 2.

As shown in FIG. 3A, after the first neural network 21, the second neural network 22, and the third neural network 23 have been trained, the parameter optimization module 3 can input multiple value combinations of the machining parameter combination (for example, OFF, AFF, SFF, ON, AN, and SN) into the three networks and execute the genetic algorithm to find the best objective prediction values among the objective prediction values output by the three networks, trace each best objective prediction value back to the one or more machining parameter combinations that produced it, and set the machining parameter combination corresponding to the best objective prediction value as an optimized value combination.

In one embodiment, each machining parameter in the machining parameter combination can be given a preset preferred value range, and the parameter optimization module 3 can generate the multiple value combinations of the machining parameter combination from the preferred value range of each machining parameter. In one embodiment, the preferred value range of each machining parameter can be preset by the user and adjusted at any time. In one embodiment, the preferred value range may, for example, be provided by an experienced technician in the field, but is not limited thereto.

In one embodiment, the genetic algorithm may be, for example, the non-dominated sorting genetic algorithm II (NSGA-II), but is not limited thereto. With NSGA-II, the parameter optimization module 3 can find one or more Pareto fronts among the objective prediction values output by at least two of the first neural network 21, the second neural network 22, and the third neural network 23, and set the value combinations of the machining parameter combination corresponding to the objective prediction values on a Pareto front as optimized value combinations. In one embodiment, an optimized value combination may be, for example, a value combination whose objective prediction values indicate a faster machining feed rate, lower surface roughness, more precise machining accuracy, or any combination of the above, but is not limited thereto.
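As a concrete illustration, the sketch below runs NSGA-II over two trained surrogate networks using the open-source pymoo library. The use of pymoo, the population size and generation count, and the model objects feed_rate_model and roughness_model (trained Keras surrogates, see the training sketch after the FIG. 5A description) are assumptions for illustration; the patent does not name a particular implementation.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

class WEDMProblem(Problem):
    """Wraps two trained surrogate networks as a 2-objective problem."""
    def __init__(self, feed_rate_model, roughness_model, xl, xu):
        # 6 variables: OFF, AFF, SFF, ON, AN, SN
        super().__init__(n_var=6, n_obj=2, xl=xl, xu=xu)
        self.feed_rate_model = feed_rate_model
        self.roughness_model = roughness_model

    def _evaluate(self, X, out, *args, **kwargs):
        # NSGA-II minimizes, so negate the feed rate prediction to maximize it
        f1 = -self.feed_rate_model.predict(X).ravel()
        f2 = self.roughness_model.predict(X).ravel()  # minimize roughness
        out["F"] = np.column_stack([f1, f2])

# bounds follow the parameter ranges given later in the text (microseconds)
xl = np.array([5.0, 5.0, 5.0, 0.5, 0.5, 0.1])
xu = np.array([15.0, 15.0, 15.0, 1.0, 1.0, 0.5])
problem = WEDMProblem(feed_rate_model, roughness_model, xl, xu)
result = minimize(problem, NSGA2(pop_size=100), ("n_gen", 50), seed=1)
pareto_params, pareto_objs = result.X, result.F  # Pareto-optimal combinations
```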

Next, the Pareto front is described in more detail. FIG. 3B is a schematic diagram of the results of the genetic algorithm according to an embodiment of the present invention; please also refer to FIG. 3A. The example of FIG. 3B illustrates how Pareto fronts are obtained for at least two objective prediction values.

As shown in FIG. 3B, after NSGA-II is executed, the parameter optimization module 3 can, in effect, produce a coordinate plot whose horizontal axis is the first objective prediction value OF1 (for example, the surface roughness prediction) and whose vertical axis is the second objective prediction value OF2 (for example, the machining feed rate prediction), with each value combination of the machining parameter combination plotted at its corresponding pair of objective prediction values. The parameter optimization module 3 then finds, among these value combinations, the one or more value combinations for which at least one of the first or second objective prediction values is better than in the other value combinations, and sets them as the first Pareto front combination F1. Once F1 has been found, the parameter optimization module 3 finds, among the remaining value combinations, the one or more value combinations for which at least one of the two objective prediction values is better than in the other remaining combinations, and sets them as the second Pareto front combination F2. This continues until a preset number of Pareto fronts has been found. The parameter optimization module 3 then sets these Pareto fronts as the optimized value combinations of the machining parameter combination for the first and second objective prediction values. For example, the values of the machining parameters (OFF, AFF, SFF, ON, AN, and SN) corresponding to the first Pareto front combination F1 (one value each for OFF, AFF, SFF, ON, AN, and SN) form one optimized value combination, and the values corresponding to the second Pareto front combination F2 (another value each for OFF, AFF, SFF, ON, AN, and SN) form another optimized value combination. Those skilled in the art can extend this by analogy to Pareto fronts obtained for more objective prediction values.
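A minimal sketch of the layered non-dominated sorting described above, assuming for simplicity that both objectives are to be minimized (the function name and sign convention are illustrative, not taken from the patent):

```python
import numpy as np

def pareto_fronts(F, n_fronts=2):
    """Peel successive Pareto fronts off an (n, 2) array of objective
    values, where lower is better in both columns. Returns index lists."""
    remaining = list(range(len(F)))
    fronts = []
    for _ in range(n_fronts):
        if not remaining:
            break
        front = []
        for i in remaining:
            # i is non-dominated if no other remaining point is at least as
            # good in both objectives and strictly better in at least one
            dominated = any(
                np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                for j in remaining if j != i
            )
            if not dominated:
                front.append(i)
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts  # e.g. [indices of F1, indices of F2]
```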

In this way, optimized machining parameter combinations can be found.

Next, the transfer learning module 4 is described. FIG. 4 is a schematic diagram of the operation of the transfer learning module 4 according to an embodiment of the present invention; please also refer to FIGS. 1 and 2.

As shown in FIG. 4, the transfer learning module 4 can perform transfer learning on the trained first neural network 21, second neural network 22, and third neural network 23, converting them into a fourth neural network 24, a fifth neural network 25, and a sixth neural network 26 that are suited to wire-cut electrical discharge machines of other specifications or to predicting other objective values.

In one embodiment, the transfer learning module 4 performs transfer learning on the first neural network 21, the second neural network 22, and the third neural network 23 by weight freezing, but is not limited thereto. The details of transfer learning are described in later paragraphs.

The preceding paragraphs have described the trained neural network group 2, the parameter optimization module 3, and the transfer learning module 4. The training processes of the first neural network 21, the second neural network 22, and the third neural network 23 are described next.

FIG. 5A is a schematic diagram of the training model 210 of the first neural network 21 (that is, the first neural network 21 before training) according to an embodiment of the present invention; please also refer to FIGS. 1 and 2. As shown in FIG. 5A, the training model 210 of the first neural network 21 is a deep neural network model, which may include an input layer 211, a plurality of hidden layers 212, and a loss function layer 213, where each hidden layer 212 may include a plurality of neurons 214.

In one embodiment, the input layer 211 receives 6 static features, for example OFF, AFF, SFF, ON, AN, and SN, but is not limited thereto. In one embodiment, the training model 210 may include 18 hidden layers 212, each containing 13 neurons 214, but is not limited thereto. In one embodiment, the loss function of the loss function layer 213 may be the mean-square error (MSE), but is not limited thereto. In one embodiment, the activation function of the training model 210 may be the rectified linear unit (ReLU), but is not limited thereto. In one embodiment, the training model 210 may include an optimizer, which may be the Nadam optimizer, but is not limited thereto. In one embodiment, the learning rate of the training model 210 may be set to 0.00032, but is not limited thereto. In one embodiment, the batch size of the training model 210 may be set to 8, but is not limited thereto. In one embodiment, the number of training epochs of the training model 210 may be set to 1000, and early stopping may be used with the patience set to 20, but is not limited thereto. Note that the above settings of the training model 210 are examples, not limitations.
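Under these stated hyperparameters, a minimal tf.keras sketch of such a training model might look as follows (consistent with the Keras 2.6.0 mentioned above); the function name, the data variables X_train, y_train, X_test, y_test (see the data-assembly sketch below), and the use of the test set for validation are illustrative assumptions:

```python
import tensorflow as tf

def build_surrogate(n_hidden=18, n_neurons=13, learning_rate=0.00032):
    """Fully connected surrogate: 6 machining parameters in, 1 objective out."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=(6,)))  # OFF, AFF, SFF, ON, AN, SN
    for _ in range(n_hidden):
        model.add(tf.keras.layers.Dense(n_neurons, activation="relu"))
    model.add(tf.keras.layers.Dense(1))  # objective prediction value
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=learning_rate),
                  loss="mse")
    return model

model_210 = build_surrogate()
early_stop = tf.keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True)
# X_train, y_train, X_test, y_test: measured parameter combinations and feed rates
model_210.fit(X_train, y_train, batch_size=8, epochs=1000,
              validation_data=(X_test, y_test), callbacks=[early_stop])
```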

In one embodiment, when training begins, multiple value combinations of the machining parameter combination (for example, multiple value combinations of OFF, AFF, SFF, ON, AN, and SN) and the target value information corresponding to each value combination (for example, the actual machining feed rate for each value combination of OFF, AFF, SFF, ON, AN, and SN) are input into the training model 210, with part of the value combinations used as training data and part as test data. In one embodiment, the number of value combinations is greater than or equal to 50, but is not limited thereto. In one embodiment, the number of value combinations is greater than or equal to 75, but is not limited thereto. In one embodiment, the number of value combinations may be between 80 and 100 (80≦number≦100), but is not limited thereto. In one embodiment, the value of OFF in the machining parameter combination may be between 5 microseconds (μs) and 15 μs (that is, 5μs≦OFF≦15μs), but is not limited thereto. In one embodiment, the value of AFF may be between 5 μs and 15 μs (5μs≦AFF≦15μs), but is not limited thereto. In one embodiment, the value of SFF may be between 5 μs and 15 μs (5μs≦SFF≦15μs), but is not limited thereto. In one embodiment, the value of ON may be between 0.5 μs and 1 μs (0.5μs≦ON≦1μs), but is not limited thereto. In one embodiment, the value of AN may be between 0.5 μs and 1 μs (0.5μs≦AN≦1μs), but is not limited thereto. In one embodiment, the value of SN may be between 0.1 μs and 0.5 μs (0.1μs≦SN≦0.5μs), but is not limited thereto.
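For illustration, a dataset within these ranges could be assembled as follows. The sample count of 90, the uniform random sampling, and the 80/20 split are assumptions, and measure_feed_rate is a hypothetical stand-in for the actual machine measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
lo = np.array([5.0, 5.0, 5.0, 0.5, 0.5, 0.1])    # OFF, AFF, SFF, ON, AN, SN lower bounds (μs)
hi = np.array([15.0, 15.0, 15.0, 1.0, 1.0, 0.5])  # upper bounds (μs)

X = lo + (hi - lo) * rng.random((90, 6))  # 90 value combinations, within the 80-100 range
y = np.array([measure_feed_rate(x) for x in X])  # hypothetical measurement per combination

split = int(0.8 * len(X))  # part as training data, part as test data
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
```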

After the training model 210 has obtained the training data and the target value information corresponding to each training sample, the feature analysis path can be constructed through the hidden layers 212 and the loss function layer 213 (that is, the weights of the neurons 214 of each hidden layer 212 are set), completing the training. The trained training model 210 forms the trained first neural network 21. In one embodiment, the MSE of the trained first neural network 21 is about 0.0115 (mm/min)², and the mean absolute percentage error (MAPE) between its predictions and the actual measurements can be about 1.87%, giving good accuracy.
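The two reported error measures are computed in the usual way; a small sketch, with y_true as the measured values and y_pred as the network outputs:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```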

With this, the training process of the first neural network 21 can be understood.

FIG. 5B is a schematic diagram of the training model 220 of the second neural network 22 (that is, the second neural network 22 before training) according to an embodiment of the present invention; please also refer to FIGS. 1 and 2. As shown in FIG. 5B, the training model 220 of the second neural network 22 is a deep neural network model, which may include an input layer 221, a plurality of hidden layers 222, and a loss function layer 223, where each hidden layer 222 may include a plurality of neurons 224.

In one embodiment, the input layer 221 receives 6 static features, for example OFF, AFF, SFF, ON, AN, and SN, but is not limited thereto. In one embodiment, the training model 220 may include 20 hidden layers 222, each containing 20 neurons 224, but is not limited thereto. In one embodiment, the loss function of the loss function layer 223 may be MSE, but is not limited thereto. In one embodiment, the activation function of the training model 220 may be ReLU, but is not limited thereto. In one embodiment, the optimizer of the training model 220 may be the Nadam optimizer, but is not limited thereto. In one embodiment, the learning rate of the training model 220 may be set to 0.003, but is not limited thereto. In one embodiment, the batch size of the training model 220 may be set to 5, but is not limited thereto. In one embodiment, the number of training epochs of the training model 220 may be set to 1000, and early stopping may be used with the patience set to 20, but is not limited thereto. Note that the above settings of the training model 220 are examples, not limitations.

The training model 220 of the second neural network 22 can use the same value combinations of the machining parameter combination as the training model 210 of the first neural network 21, so this part is not described again.

After the training model 220 has obtained the training data and the target value information corresponding to each training sample, the feature analysis path can be constructed through the hidden layers 222 and the loss function layer 223 (that is, the weights of the neurons 224 of each hidden layer 222 are set), completing the training. The trained training model 220 forms the trained second neural network 22. In one embodiment, the MSE of the trained second neural network 22 is about 0.0063 (mm/min)², and the MAPE between its predictions and the actual measurements can be about 1.72%, giving good accuracy.

With this, the training process of the second neural network 22 can be understood.

FIG. 5C is a schematic diagram of the training model 230 of the third neural network 23 (that is, the third neural network 23 before training) according to an embodiment of the present invention; please also refer to FIGS. 1 and 2. As shown in FIG. 5C, the training model 230 of the third neural network 23 is a deep neural network model, which may include an input layer 231, a plurality of hidden layers 232, and a loss function layer 233, where each hidden layer 232 may include a plurality of neurons 234.

In one embodiment, the input layer 231 receives 6 static features, for example OFF, AFF, SFF, ON, AN, and SN, but is not limited thereto. In one embodiment, the training model 230 may include 26 hidden layers 232, each containing 13 neurons, but is not limited thereto. In one embodiment, the loss function of the loss function layer 233 may be MSE, but is not limited thereto. In one embodiment, the activation function of the training model 230 may be ReLU, but is not limited thereto. In one embodiment, the optimizer of the training model 230 may be the Nadam optimizer, but is not limited thereto. In one embodiment, the learning rate of the training model 230 may be set to 0.00035, but is not limited thereto. In one embodiment, the batch size of the training model 230 may be set to 4, but is not limited thereto. In one embodiment, the number of training epochs of the training model 230 may be set to 1000, and early stopping may be used with the patience set to 20, but is not limited thereto. Note that the above settings of the training model 230 are examples, not limitations.
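Assuming the parameterized builder sketched earlier, the three training models' stated hyperparameters would correspond to the following hypothetical instantiations; at fit time they would use batch sizes 8, 5, and 4, respectively, per the stated settings:

```python
# hypothetical instantiations of build_surrogate() with the stated hyperparameters
model_210 = build_surrogate(n_hidden=18, n_neurons=13, learning_rate=0.00032)  # feed rate
model_220 = build_surrogate(n_hidden=20, n_neurons=20, learning_rate=0.003)    # surface roughness
model_230 = build_surrogate(n_hidden=26, n_neurons=13, learning_rate=0.00035)  # machining accuracy
```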

The training model 230 of the third neural network 23 can use the same value combinations of the machining parameter combination as the training model 210 of the first neural network 21, so this part is not described again.

After the training model 230 has obtained the training data and the target value information corresponding to each training sample, the feature analysis path can be constructed through the hidden layers 232 and the loss function layer 233 (that is, the weights of the neurons 234 of each hidden layer 232 are set), completing the training. The trained training model 230 forms the trained third neural network 23. In one embodiment, the mean absolute error (MAE) between the predictions of the trained third neural network 23 and the actual measurements can be about 1.95%, giving good accuracy.

With this, the training process of the third neural network 23 can be understood.

Once the first neural network 21, the second neural network 22, and the third neural network 23 have been trained, they can respectively output a machining feed rate prediction, a surface roughness prediction, and a machining accuracy prediction from an input machining parameter combination, and the parameter optimization module 3 can use the three networks and execute the genetic algorithm to find optimized machining parameter combinations.

In one embodiment, for the machining parameter combinations found by the parameter optimization module 3 using the genetic algorithm, the average MAPE between the predicted and measured machining feed rates is about 3.44, the average MAPE between the predicted and measured surface roughness is about 4.46, and the average MAE between the predicted and measured machining accuracy is 1.99, all giving good accuracy.

In addition, once the first neural network 21, the second neural network 22, and the third neural network 23 have been trained, the transfer learning module 4 can apply weight freezing to each of them to perform transfer learning. The transfer learning process is described next; please also refer to FIGS. 4 to 5C.

In one embodiment, the transfer learning module 4 freezes the weights of the neurons 214 other than those of the last hidden layer 212 of the first neural network 21 (for example, freezing the weights of the neurons 214 of the first 17 hidden layers 212) and retrains the network with training data, so that only the weights of the neurons 214 of the last hidden layer 212 are adjusted. In one embodiment, when about 40 sets of training data are used for training, the MAPE between the objective predictions of the fourth neural network 24 formed by transfer learning from the first neural network 21 and the actual measurements can be about 1.97%, giving good accuracy.

In one embodiment, the transfer learning module 4 freezes the weights of the neurons 224 other than those of the last hidden layer 222 of the second neural network 22 (for example, freezing the weights of the neurons 224 of the first 19 hidden layers 222) and retrains the network with training data, so that only the weights of the neurons 224 of the last hidden layer 222 are adjusted. In one embodiment, when about 40 sets of training data are used for training, the MAPE between the objective predictions of the fifth neural network 25 formed by transfer learning from the second neural network 22 and the actual measurements can be about 2.58%, giving good accuracy.

In one embodiment, the transfer learning module 4 freezes the weights of the neurons 234 other than those of the last hidden layer 232 of the third neural network 23 (for example, freezing the weights of the neurons 234 of the first 25 hidden layers 232) and retrains the network with training data, so that only the weights of the neurons 234 of the last hidden layer 232 are adjusted. In one embodiment, when about 40 sets of training data are used for training, the MAE between the objective predictions of the sixth neural network 26 formed by transfer learning from the third neural network 23 and the actual measurements can be about 2.17%, giving good accuracy.
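A minimal tf.keras sketch of this weight-freezing step, assuming the surrogate built earlier. Whether the output layer also remains trainable is an assumption, since the text only says the last hidden layer is adjusted; the retraining hyperparameters and the variables X_new, y_new (about 40 samples from the target machine) are likewise illustrative:

```python
import tensorflow as tf

def transfer(model, learning_rate=0.00032):
    """Freeze every hidden layer except the last one, then recompile for retraining."""
    for layer in model.layers[:-2]:  # e.g. the first 17 of 18 hidden layers
        layer.trainable = False      # weight freezing
    # the last hidden layer (and, as an assumption, the output layer) stay trainable
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=learning_rate),
                  loss="mse")
    return model

model_240 = transfer(model_210)  # fourth neural network, for another machine
# X_new, y_new: about 40 measured samples from the target machine
model_240.fit(X_new, y_new, batch_size=8, epochs=1000,
              callbacks=[tf.keras.callbacks.EarlyStopping(monitor="loss", patience=20)])
```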

In this way, with a small amount of training data, the trained neural network group 2 can be transferred between different machines, expanding the applicable scope of system 1.

Next, the overall operation of the system 1 of the present invention is summarized. FIG. 6 is a flow chart of the steps of a multi-objective parameter optimization method according to an embodiment of the present invention, where the method is executed by system 1.

First, step S61 is executed: the architectures of the training model 210 of the first neural network 21, the training model 220 of the second neural network 22, and the training model 230 of the third neural network 23 are set up, forming the architectures of FIGS. 5A to 5C. Then step S62 is executed: the training models 210 to 230 each obtain training data. Then step S63 is executed: the training models 210 to 230 are trained with the training data, each forming a trained neural network. With this, the training of the first neural network 21, the second neural network 22, and the third neural network 23 is completed.

After step S63 is completed, step S64 can be executed: the parameter optimization module 3 inputs multiple preferred value combinations of the machining parameter combination into the trained first neural network 21, second neural network 22, and third neural network 23. Then step S65 is executed: the parameter optimization module 3 executes NSGA-II to find one or more Pareto fronts among the objective prediction values of the first neural network 21, the second neural network 22, and the third neural network 23. Then step S66 is executed: the parameter optimization module 3 sets the value combinations corresponding to these Pareto fronts as the optimized value combinations of the machining parameter combination. With this, the system 1 of the present invention can automatically find multi-objective optimized parameter combinations.

In addition, after step S63 is completed, step S67 can also be executed: the transfer learning module 4 applies weight freezing to the first neural network 21, the second neural network 22, and the third neural network 23. Then step S68 is executed: the transfer learning module 4 retrains the weight-frozen first neural network 21, second neural network 22, and third neural network 23. Then step S69 is executed: the first neural network 21 forms a fourth neural network 24 suited to a different machine or having a different prediction capability, the second neural network 22 forms a fifth neural network 25 suited to a different machine or having a different prediction capability, and the third neural network 23 forms a sixth neural network 26 suited to a different machine or having a different prediction capability. With this, the transfer learning is completed.

Accordingly, the trained neural network group 2 established by the multi-objective parameter optimization system 1 of the present invention can accurately predict multiple objective values of the wire-cut electrical discharge machine 10. In addition, the parameter optimization module 3 of the multi-objective parameter optimization system 1 can execute the genetic algorithm to obtain optimized machining parameter combinations using the trained neural network group 2, saving considerable time and manpower. Moreover, the trained neural network group 2 of the multi-objective parameter optimization system 1 of the present invention can be adapted to other machines through transfer learning, improving its applicability and reducing redevelopment expense and cost.

Although the present invention has been described through the above embodiments, it should be understood that many modifications and variations are possible within the spirit of the invention and the scope of the appended claims.

1: multi-objective parameter optimization system

2: group of trained neural networks

21: first neural network

22: second neural network

23: third neural network

3: parameter optimization module

Claims (9)

1. A multi-objective parameter optimization system for providing multi-objective optimized machining parameters of a wire-cut electrical discharge machine, comprising: a plurality of trained neural networks, wherein each trained neural network outputs a target prediction value according to a machining parameter combination; and a parameter optimization module that executes a genetic algorithm to find at least one optimized value combination of the machining parameter combination by using the target prediction values output by the trained neural networks according to a plurality of value combinations of the machining parameter combination; wherein the genetic algorithm finds at least one Pareto front among the target prediction values output by the group of trained neural networks according to the value combinations, and sets the value combination corresponding to the at least one Pareto front as the at least one optimized value combination of the machining parameter combination.

2. The multi-objective parameter optimization system of claim 1, wherein the machining parameters comprise a discharge pulse stop time, an arc-discharge pulse stop time, a short-circuit discharge pulse stop time, a discharge pulse duration, an arc-discharge pulse duration, or a short-circuit discharge pulse duration, or any combination thereof.

3. The multi-objective parameter optimization system of claim 1, wherein the group of trained neural networks comprises a first trained neural network, a second trained neural network, and/or a third trained neural network, wherein the target prediction value of the first trained neural network is a machining feed rate prediction value, the target prediction value of the second trained neural network is a surface roughness prediction value, and the target prediction value of the third trained neural network is a workpiece accuracy prediction value.

4. The multi-objective parameter optimization system of claim 1, further comprising a transfer learning module that performs transfer learning on each trained neural network in a weight-freezing manner.
5. A multi-objective parameter optimization method, executed by a multi-objective parameter optimization system, for providing multi-objective optimized machining parameters of a wire-cut electrical discharge machine, wherein the multi-objective parameter optimization system comprises a plurality of trained neural networks and a parameter optimization module, and each trained neural network outputs a target prediction value according to a machining parameter combination, the method comprising: executing, by the parameter optimization module, a genetic algorithm to find at least one optimized value combination of the machining parameter combination by using the target prediction values output by the trained neural networks according to a plurality of value combinations of the machining parameter combination; wherein the genetic algorithm finds at least one Pareto front among the target prediction values output by the group of trained neural networks according to the value combinations, and sets the value combination corresponding to the at least one Pareto front as the at least one optimized value combination of the machining parameter combination.

6. The multi-objective parameter optimization method of claim 5, wherein the machining parameters comprise a discharge pulse stop time, an arc-discharge pulse stop time, a short-circuit discharge pulse stop time, a discharge pulse duration, an arc-discharge pulse duration, or a short-circuit discharge pulse duration, or any combination thereof.

7. The multi-objective parameter optimization method of claim 5, wherein the group of trained neural networks comprises a first trained neural network, a second trained neural network, or a third trained neural network, or any combination thereof, wherein the target prediction value of the first trained neural network is a machining feed rate prediction value, the target prediction value of the second trained neural network is a surface roughness prediction value, and the target prediction value of the third trained neural network is a workpiece accuracy prediction value.

8. The multi-objective parameter optimization method of claim 5, further comprising: performing, by a transfer learning module of the multi-objective parameter optimization system, transfer learning on each trained neural network in a weight-freezing manner.
9. A computer program product, stored in a non-transitory computer-readable medium, for operating a multi-objective parameter optimization system to provide multi-objective optimized machining parameters of a wire-cut electrical discharge machine, wherein the multi-objective parameter optimization system comprises a plurality of trained neural networks and a parameter optimization module, and each trained neural network outputs a target prediction value according to a machining parameter combination, the computer program product comprising: an instruction that causes the parameter optimization module to execute a genetic algorithm to find at least one optimized value combination of the machining parameter combination by using the target prediction values output by the trained neural networks according to a plurality of value combinations of the machining parameter combination; wherein the genetic algorithm finds at least one Pareto front among the target prediction values output by the group of trained neural networks according to the value combinations, and sets the value combination corresponding to the at least one Pareto front as the at least one optimized value combination of the machining parameter combination.
TW111115048A 2022-04-20 2022-04-20 Multi-objective parameters optimization system, method and computer program product thereof TWI819578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111115048A TWI819578B (en) 2022-04-20 2022-04-20 Multi-objective parameters optimization system, method and computer program product thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111115048A TWI819578B (en) 2022-04-20 2022-04-20 Multi-objective parameters optimization system, method and computer program product thereof

Publications (2)

Publication Number Publication Date
TWI819578B true TWI819578B (en) 2023-10-21
TW202343172A TW202343172A (en) 2023-11-01

Family

ID=89720574

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111115048A TWI819578B (en) 2022-04-20 2022-04-20 Multi-objective parameters optimization system, method and computer program product thereof

Country Status (1)

Country Link
TW (1) TWI819578B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200617800A (en) * 2006-02-15 2006-06-01 Univ Ling Tung The hierarchy-genetic-algorithm trained artificial neural network thermal error compensation system of CNC machine tools
TW201202876A (en) * 2010-01-29 2012-01-16 Tokyo Electron Ltd Method and system for self-learning and self-improving a semiconductor manufacturing tool
TW201605602A (en) * 2014-08-12 2016-02-16 Nat Univ Chin Yi Technology Inverse model processing system established by using artificial neural network with genetic algorithm
WO2018062398A1 (en) * 2016-09-30 2018-04-05 株式会社Uacj Device for predicting aluminum product properties, method for predicting aluminum product properties, control program, and storage medium
TW201941328A (en) * 2018-03-20 2019-10-16 日商東京威力科創股份有限公司 Self-aware and correcting heterogenous platform incorporating integrated semiconductor processing modules and method for using same
TW202009804A (en) * 2018-08-29 2020-03-01 國立交通大學 Machine learning based systems and methods for creating an optimal prediction model and obtaining optimal prediction results
TW202105098A (en) * 2019-02-15 2021-02-01 德商巴斯夫歐洲公司 Determining operating conditions in chemical production plants
TW202101138A (en) * 2019-03-29 2021-01-01 美商蘭姆研究公司 Model-based scheduling for substrate processing systems
TW202203093A (en) * 2020-07-02 2022-01-16 阿證科技股份有限公司 Neural artificial intelligence decision-making network core system including an asymmetric hidden layer constructed by a neural network with a dynamic node adjustment mechanism
TW202212020A (en) * 2020-09-18 2022-04-01 中國鋼鐵股份有限公司 Quality designing method and electrical device

Also Published As

Publication number Publication date
TW202343172A (en) 2023-11-01

Similar Documents

Publication Publication Date Title
CN111587440B (en) Neuromorphic chip for updating accurate synaptic weight values
Hung et al. A novel virtual metrology scheme for predicting CVD thickness in semiconductor manufacturing
WO2020117991A1 (en) Generating integrated circuit floorplans using neural networks
Peng et al. A new Jacobian matrix for optimal learning of single-layer neural networks
Bas et al. Robust learning algorithm for multiplicative neuron model artificial neural networks
JP2017515205A (en) Cold neuron spike timing back propagation
US20120303562A1 (en) Artificial neural network application for magnetic core width prediction and modeling for magnetic disk drive manufacture
TWI736404B (en) Programmable circuits for performing machine learning operations on edge devices
TW201740318A (en) Multi-layer artificial neural network and controlling method thereof
JP2016536664A (en) An automated method for correcting neural dynamics
Danilin et al. Determining the fault tolerance of memristorsbased neural network using simulation and design of experiments
US11880769B2 (en) Using multiple functional blocks for training neural networks
CN115878907A (en) Social network forwarding behavior prediction method and device based on user dependency relationship
CN110750852A (en) Method and device for predicting remaining service life of super capacitor and electronic equipment
Farooq et al. Efficient FPGA routing using reinforcement learning
Li et al. Improved LSTM-based prediction method for highly variable workload and resources in clouds
TWI819578B (en) Multi-objective parameters optimization system, method and computer program product thereof
Zhang et al. Intelligent STEP-NC-compliant setup planning method
TW202405590A (en) Multi-objective parameters optimization system, method and computer program product thereof
CN106776442B (en) FPGA transistor size adjusting method
Yilmaz et al. A robust training of dendritic neuron model neural network for time series prediction
KR20220051903A (en) Method of generating circuit model and manufacturing integrated circuit using the same
US20230267997A1 (en) Degradation-aware training scheme for reliable memristor deep learning accelerator design
Su et al. Incremental learning with balanced update on receptive fields for multi-sensor data fusion
Shen et al. A fast learning algorithm of neural network with tunable activation function