TWI306207B - Google Patents


Info

Publication number
TWI306207B
Authority
TW
Taiwan
Prior art keywords
neural network
temperature monitoring
item
monitoring point
casing
Application number
TW94105375A
Other languages
Chinese (zh)
Other versions
TW200630833A (en)
Inventor
Hsin Chung Lien
Shinn Jyh Lin
Yean Der Kuan
Hsin Shen Hung
Yao Chang Tseng
Chih Hao Shen
Original Assignee
Northern Taiwan Inst Of Science And Technology
Shinn Jyh Lin
Hsin Shen Hung
Application filed by Northern Taiwan Inst Of Science And Technology, Shinn Jyh Lin and Hsin Shen Hung
Priority to TW094105375A
Publication of TW200630833A
Application granted
Publication of TWI306207B


Description

IX. Description of the Invention:

【Technical Field of the Invention】

The present invention, "Method and Device for Intelligent-Theory Design of Heat-Dissipation Openings of a Computer Case", is based on a neural-network system. It estimates the influence of different heat-dissipation-opening designs on the overall cooling performance of the computer, so that the heat-dissipation problem of the desktop computer cases in wide use today can be analysed accurately and effectively and the opening layout of the case can be optimised quickly, allowing the desktop computer as a whole to reach a more efficient cooling performance.

【Prior Art】

In recent years electronic products have developed rapidly toward high performance, high frequency, high speed and ever thinner and lighter form factors, while chip sizes keep shrinking, so the heat density of electronic components grows higher and higher. Since Intel and AMD introduced microprocessors of the Pentium 4 class and above, microprocessor power has risen sharply with ever higher operating clocks and ever more functional circuitry added on chip; combined with continually shrinking process geometries, a microprocessor now emits not only more heat but also a rapidly increasing heat flux, so its temperature climbs quickly. Because a microprocessor is a semiconductor device, it must work below a specified temperature, and its circuit connections and the operation of the die deteriorate when exposed to a high-temperature environment for a long time. The operating environment must therefore be kept within an allowable temperature range; above that range the microprocessor may fail to work or even burn out. Heat dissipation is thus a key factor by which the electronics industry judges the stability of its products, and heat-dissipation products occupy a position that cannot be neglected in the upstream, midstream and downstream of the electronics industry.

In general, most heat-dissipation openings of desktop computer cases are currently designed as follows: after complex and detailed calculation by the designer, the openings are cut directly in the desktop case; sensors are then installed at the openings to measure how the opening positions affect the overall cooling system; further experiments and tests determine whether the overall cooling performance has improved; and this tedious procedure is repeated until the opening positions that best improve the overall cooling performance are found. This is the design method and apparatus currently used for the heat-dissipation openings of desktop computer cases.

Shortcomings of the technique currently in use:
1. Making openings for experimental verification requires a large development and manufacturing cost, so it is not economically cost-effective.
2. Experiments and tests require a large investment of development time and manpower, so maximum efficiency cannot be obtained within the available time.
3. Manual measurement of the opening positions easily produces errors caused by human negligence.
4. The experimental equipment wears easily, which causes experimental error.
5. Making the openings and maintaining the experimental equipment are expensive.
6. The technique currently in use needs many workers to carry out the measurements and experiments, wasting and consuming human resources.
7. The technique currently in use has little continuity or automation over the whole workflow, so errors and disorder easily occur in the working procedure.

【Summary of the Invention】

The present invention establishes a method and device for intelligent-theory design of the heat-dissipation openings of a computer case. Based on a neural network, the results obtained from simulation are used to train and teach the network system, after which the influence of different heat-dissipation-opening designs on the overall computer cooling system is estimated.

The advantages of using the technique of the present invention are as follows:
1. The development and equipment cost of the method and device is low, meeting the goal of high economic benefit at low cost.
2. The method and device are carried out entirely by computer simulation and computation, which is fast and comparatively accurate, so a large amount of development time is not needed, working time is shortened, and the highest efficiency is obtained in the shortest time; the work is timely and errors caused by changes over time are reduced.
3. Because computer simulation is used, errors caused by human negligence do not occur.
4. The software used by the invention does not wear out easily.
5. The equipment used is mostly a computer and simulation software, so the operating cost is low.
6. The invention lets the computer replace most of the work that people would otherwise have to do, greatly reducing the consumption of human resources.
7. From the simulation to the neural-network estimation, everything is carried out by computer, so the whole workflow has good continuity and automation and the work is of a more intelligent character.

【Embodiments】

The architecture of the intelligent-theory design of computer-case heat-dissipation openings of the present invention is divided into two main parts: the first part mainly estimates the heat-dissipation-opening design conditions with the back-propagation neural network shown in Fig. 5; the second part applies the K-L expansion method to the input layer of the neural-network architecture to perform principal-axis transformation and order reduction, as shown in Fig. 1, which greatly reduces the computation time of the neural network.

First Embodiment:

A method and device for intelligent design of computer-case heat-dissipation openings, whose method flow comprises a learning process, a detection process and a re-learning process. In the learning process, training samples are taken from a learning-sample database; the attribute vector of the heat-dissipation-opening design is used as the input vector x_i (i = 1, 2, ..., n) of a back-propagation neural network, and the m output nodes y_k (k = 1, 2, ..., m) correspond to the s kinds of output results. The network output node Y1 is the CPU temperature, and the output nodes Y2 to Y16 are temperature monitoring points at 15 different positions: Y2 is the temperature monitoring point at the lower left of the upper layer of the case, Y3 at the upper left of the upper layer, Y4 at the lower right of the upper layer, Y5 at the upper right of the upper layer, Y6 at the centre of the upper layer, Y7 at the lower left of the middle layer, Y8 at the upper left of the middle layer, Y9 at the lower right of the middle layer, Y10 at the upper right of the middle layer, Y11 at the centre of the middle layer, Y12 at the lower left of the lower layer, Y13 at the upper left of the lower layer, Y14 at the lower right of the lower layer, Y15 at the upper right of the lower layer, and Y16 at the centre of the lower layer. Back-propagation is then used, with the minimisation of the error between the network outputs and the actual sample outputs as the objective function, to adjust every node weight of the network to the optimum, so that the most accurate evaluation of the opening design is obtained together with the best weight of every node of the back-propagation network, as shown in Fig. 6. The steps of the neural-network learning process are:

Step 1: Set the initial weights u_ij, w_jk and the biases θ_j, θ_k of the saturation function along the horizontal axis to small random numbers.
Step 2: While the stopping condition has not been reached (the difference between the network outputs and the actual sample outputs is not yet smaller than the allowable value), perform Steps 3 to 10; otherwise end the learning process.
Step 3: For each training sample, perform Steps 4 to 9.
Step 4: Send the input vector x_i (the attribute vector of the opening design parameters) to every node of the hidden layer in a fully connected manner.
Step 5: Compute the outputs of the hidden layer, h_j = f(Σ_i u_ij · x_i − θ_j), where the activation function is defined as f(x) = 1 / (1 + e^{−x}).
Step 6: Compute each output node, y_k = f(Σ_j w_jk · h_j − θ_k).
Step 7: Compute the back-propagated error of each output node, δ_k = (t_k − y_k) · y_k · (1 − y_k), k = 1, 2, ..., m, and the weight corrections Δw_jk = α · δ_k · h_j and Δθ_k = α · δ_k, where the learning rate satisfies 0 < α < 1 and t_k is the actual output value.
Step 8: Compute the weight corrections of every hidden-layer node, Δu_ij = α · δ_j · x_i and Δθ_j = α · δ_j, with the back-propagated error δ_j = (Σ_k δ_k · w_jk) · h_j · (1 − h_j), j = 1, 2, ..., p.
Step 9: Update every weight: w_jk(new) = w_jk(old) + Δw_jk, u_ij(new) = u_ij(old) + Δu_ij, θ_k(new) = θ_k(old) + Δθ_k, θ_j(new) = θ_j(old) + Δθ_j.
Step 10: Test whether the stopping condition is true.
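The back-propagation learning procedure above can be illustrated with a small sketch in Python with NumPy. The layer size, learning rate, tolerance and the use of additive biases in place of the thresholds θ are illustrative assumptions rather than values taken from this specification, and the targets are assumed to be normalised into the (0, 1) range of the sigmoid output:

```python
import numpy as np

def sigmoid(x):
    # Saturation (activation) function f(x) = 1 / (1 + e^(-x)) used in Steps 5 and 6.
    return 1.0 / (1.0 + np.exp(-x))

def train_bpn(X, T, n_hidden=8, alpha=0.5, tol=1e-3, max_epochs=10000, seed=0):
    """Back-propagation learning process (Steps 1-10) with one hidden layer.

    X: (samples, n) attribute vectors of the opening design (e.g. X1..X10).
    T: (samples, m) target outputs (e.g. Y1..Y16 as normalised temperatures).
    The thresholds theta of the specification are represented here as additive biases.
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    # Step 1: initialise weights and biases with small random numbers.
    u = rng.uniform(-0.1, 0.1, (n_in, n_hidden))
    w = rng.uniform(-0.1, 0.1, (n_hidden, n_out))
    b_h = rng.uniform(-0.1, 0.1, n_hidden)
    b_o = rng.uniform(-0.1, 0.1, n_out)

    for _ in range(max_epochs):                      # Step 2: until the stop condition holds
        for x, t in zip(X, T):                       # Step 3: for every training sample
            h = sigmoid(x @ u + b_h)                 # Steps 4-5: hidden-layer outputs
            y = sigmoid(h @ w + b_o)                 # Step 6: output-layer outputs
            d_o = (t - y) * y * (1.0 - y)            # Step 7: output-node error
            d_h = (d_o @ w.T) * h * (1.0 - h)        # Step 8: hidden-node error
            w += alpha * np.outer(h, d_o)            # Step 9: weight and bias updates
            b_o += alpha * d_o
            u += alpha * np.outer(x, d_h)
            b_h += alpha * d_h
        y_all = sigmoid(sigmoid(X @ u + b_h) @ w + b_o)
        if np.max(np.abs(T - y_all)) < tol:          # Step 10: stop condition
            break
    return u, b_h, w, b_o
```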

The detection process, as shown in Fig. 7, comprises the following steps:

Step 1: Input the attribute vector of the sample of heat-dissipation-opening design parameters to be tested.
Step 2: The neural network evaluates the heat-dissipation-opening design, taking the attribute vector of the sample as the network input vector and computing the output value of every output node of the network.
Step 3: Check whether there is a misjudged sample (a misjudged sample is a sample for which the actual estimation error of the network is larger than the allowable value); if there is, store the misjudged sample data in the learning-sample database so that they can be obtained as re-learning data.
Step 4: Check whether detection is finished; if not, return to Step 1; if it is, stop the detection process.

The re-learning process, as shown in Fig. 8, comprises the following steps:

Step 1: Add all misjudged samples to the learning-sample database.
Step 2: Carry out the above learning process again, so that the weight of every node of the neural network can be re-adjusted to the optimum; similar or nearly identical misjudged samples will then no longer be misjudged, which raises the estimation accuracy of the present invention.

Through the combination of the above processes, the n-item attribute vector of the heat-dissipation-opening design is used as the n input vectors x_i (i = 1, 2, ..., n) of the back-propagation neural network, and the m output nodes y_k (k = 1, 2, ..., m) correspond to the s kinds of output results. In the learning process the network uses the known inputs and outputs of the training samples (that is, the attribute vectors of the training samples in the learning-sample database and the output results corresponding to them) to adjust the node weights, taking the minimisation of the error between the network outputs and the actual sample outputs as the objective function and adjusting every weight to the optimum, which raises the estimation accuracy of the network; after learning the node weights are fixed for use in the estimation of the detection process. In the detection process the attribute vector of the sample awaiting inspection is used as the input vector and the heat-dissipation-opening design is evaluated through the network; if a sample is misjudged during the evaluation, its data are stored in the learning-sample database so that they can be obtained as re-learning data. In the re-learning process the misjudged samples are added to the learning-sample database and the network re-adjusts the node weights, so that in the subsequent detection process similar or nearly identical samples no longer cause misjudgment, improving the estimation accuracy of the method and device for intelligent design of computer-case heat-dissipation openings.
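A corresponding sketch of the detection and re-learning processes, reusing the sigmoid and train_bpn helpers from the sketch above; the allowable error and the availability of reference outputs from simulation (or a known standard method) for the misjudgment check are assumptions made for illustration:

```python
import numpy as np

def predict(x, params):
    # Forward pass of the trained network (the evaluation used in the detection process).
    u, b_h, w, b_o = params
    return sigmoid(sigmoid(x @ u + b_h) @ w + b_o)

def detect_and_relearn(candidates, references, train_X, train_T, params, allow=0.05):
    """Detection process (Fig. 7) followed by the re-learning process (Fig. 8).

    candidates: attribute vectors of opening designs awaiting inspection.
    references: their reference outputs (assumed available from simulation or a
    known standard method), used to decide whether a sample was misjudged.
    """
    misjudged = [(x, t) for x, t in zip(candidates, references)
                 if np.max(np.abs(t - predict(x, params))) > allow]   # Steps 2-3
    if misjudged:                                                     # re-learning
        xs, ts = map(np.array, zip(*misjudged))
        train_X = np.vstack([train_X, xs])        # Step 1: add misjudged samples
        train_T = np.vstack([train_T, ts])        #         to the learning-sample database
        params = train_bpn(train_X, train_T)      # Step 2: run the learning process again
    return params, train_X, train_T
```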
In the method and device for intelligent design of computer-case heat-dissipation openings of the present invention, the device comprises a personal computer provided with a learning-sample database and with the computer-case heat-dissipation-opening design method. The learning-sample database stores the attribute vectors of the heat-dissipation-opening designs of the training samples; these attribute vectors are obtained from computer simulation analysis. Each training record contains 26 items of data. The first 10 items (or the first n dimensions) are the attribute vector of the opening design, X1 to X10, in order: X1 is the X-axis coordinate of the inlet-vent position of the computer case, X2 the Y-axis coordinate of the inlet-vent position, X3 the Z-axis coordinate of the inlet-vent position, X4 the diameter of the inlet vent, X5 the X-axis coordinate of the outlet-vent position, X6 the Y-axis coordinate of the outlet-vent position, X7 the Z-axis coordinate of the outlet-vent position, X8 the diameter of the outlet vent, X9 the number of side heat-dissipation holes of the case, and X10 the diameter of the side heat-dissipation holes. The 11th item (or item n+1) is the network output node Y1, the CPU temperature; the 12th item (n+2) is the output node Y2, the temperature monitoring point at the lower left of the upper layer of the case; the 13th item (n+3) is Y3, the upper left of the upper layer; the 14th item (n+4) is Y4, the lower right of the upper layer; the 15th item (n+5) is Y5, the upper right of the upper layer; the 16th item (n+6) is Y6, the centre of the upper layer; the 17th item (n+7) is Y7, the lower left of the middle layer; the 18th item (n+8) is Y8, the upper left of the middle layer; the 19th item (n+9) is Y9, the lower right of the middle layer; the 20th item (n+10) is Y10, the upper right of the middle layer; the 21st item (n+11) is Y11, the centre of the middle layer; the 22nd item (n+12) is Y12, the lower left of the lower layer; the 23rd item (n+13) is Y13, the upper left of the lower layer; the 24th item (n+14) is Y14, the lower right of the lower layer; the 25th item (n+15) is Y15, the upper right of the lower layer; and the 26th item (n+16) is Y16, the centre of the lower layer; together with the evaluation standard values obtained by existing known standard methods.
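One 26-item training record of the learning-sample database might be laid out as follows; the numeric values below are made-up placeholders rather than data from the specification, and serve only to show how a record splits into the input attribute vector and the output targets:

```python
import numpy as np

# One 26-item training record as stored in the learning-sample database
# (placeholder values; units and scaling are not specified here).
record = np.array([
    120.0,  40.0,  15.0, 80.0,          # X1-X4: inlet-vent x, y, z coordinates and diameter
    300.0, 350.0,  15.0, 92.0,          # X5-X8: outlet-vent x, y, z coordinates and diameter
      6.0,  10.0,                       # X9-X10: number and diameter of side holes
     62.5,                              # Y1: CPU temperature
     41.2, 40.8, 43.5, 42.9, 44.1,      # Y2-Y6: upper-layer monitoring points
     45.3, 44.7, 46.2, 45.9, 46.8,      # Y7-Y11: middle-layer monitoring points
     43.0, 42.6, 44.4, 43.8, 45.0,      # Y12-Y16: lower-layer monitoring points
])

x = record[:10]    # attribute vector of the opening design (network input)
t = record[10:]    # CPU temperature and the 15 monitoring points (network targets)
assert record.size == 26
```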

In the method and device described above, the neural-network evaluation of the computer-case heat-dissipation-opening design performed during the detection process is explained as follows:

Step 1: Fix the network weights u_ij, w_jk, θ_j, θ_k that were trained in the learning process.
Step 2: For each test sample, perform Steps 3 to 5.
Step 3: Send the input vector (the attribute vector of the heat-dissipation-opening design) to every node of the hidden layer in a fully connected manner.
Step 4: Compute the outputs of the hidden layer, h_j = f(Σ_i u_ij · x_i − θ_j), where the activation function is defined as f(x) = 1 / (1 + e^{−x}).
Step 5: Compute the output of every output-layer node, y_k = f(Σ_j w_jk · h_j − θ_k).

Second Embodiment:

A method and device for intelligent-theory design of computer-case heat-dissipation openings whose method flow, as shown in Fig. 1, comprises a learning-and-reduction process, a detection process and a re-learning process. The steps of the learning-and-reduction process are:

Step 1: Read the learning-sample database and, from all the training samples (the first n items of the training records, i.e. the attribute vectors of the heat-dissipation openings), compute the autocorrelation matrix R = Σ_i P(ω_i) · E{x_i x_i^T}, where x_i is the attribute vector of the heat-dissipation opening, P(ω_i) is the probability density of occurrence of the i-th class, E{x_i x_i^T} is the expected value over the attribute vectors of the i-th class, and n is the number of components of the attribute vector of the heat-dissipation opening.
Step 2: Find the corresponding eigenvalues and eigenvectors of R, normalise the eigenvectors, and set the initial state (the initial number of retained principal axes).
Step 3: Obtain the reduction matrix Φ: the reduction matrix Φ is composed of the m eigenvectors corresponding to the m largest eigenvalues of the correlation matrix R, Φ = [φ_1, ..., φ_m].
Step 4: Principal-axis transformation and reduction, as shown in Fig. 2: by z_i = Φ^T x_i, the input vector x_i is reduced from n dimensions to the m-dimensional principal-axis vector z_i.
Step 5: The neural network evaluates the heat-dissipation-opening design conditions, as shown in Fig. 3: with z_i as the input vector, the back-propagation network learns from the training samples and evaluates their opening design conditions. If the estimation error is smaller than the allowable value, return to Step 3, reduce the input vector by one more order and repeat the above steps, until the lowest (minimum) principal-axis dimension with which the required estimation can still be made is found; if the estimation error is larger than the allowable value, the minimum principal-axis dimension required is m = m + 1 and the learning process is stopped.
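The learning-and-reduction process can be sketched as follows in Python with NumPy, reusing the hypothetical train_bpn and predict helpers from the earlier sketches. For simplicity the class-weighted autocorrelation matrix of Step 1 is approximated here by pooling all training attribute vectors, and the allowable error is an assumed value:

```python
import numpy as np

def kl_axes(X):
    # Steps 1-2: autocorrelation matrix of the attribute vectors and its
    # eigen-decomposition, eigenvectors sorted by decreasing eigenvalue.
    R = (X.T @ X) / len(X)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]                 # columns are the normalised principal axes

def learn_and_reduce(X, T, tol=0.05):
    """Learning-and-reduction process: find the smallest number of principal
    axes m that still keeps the network's estimation error within tol."""
    phi_full = kl_axes(X)
    n = X.shape[1]
    best = None
    for m in range(n, 0, -1):                # start from all axes and keep reducing
        phi = phi_full[:, :m]                # Step 3: reduction matrix Φ
        Z = X @ phi                          # Step 4: z = Φᵀ x for every sample
        params = train_bpn(Z, T)             # Step 5: train on the reduced input
        err = np.max(np.abs(T - predict(Z, params)))
        if err < tol:
            best = (m, phi, params)          # this dimension still works, try a smaller one
        else:
            break                            # the minimum usable dimension was m + 1
    return best
```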
The steps of the detection process are as follows:

Step 1: Input the attribute vector of the heat-dissipation-opening design conditions to be tested.
Step 2: Perform principal-axis transformation and reduction on the attribute vector of the opening design conditions by the K-L expansion method; the dimension of the reduced principal axes is the minimum number of principal-axis vectors, determined in the learning-and-reduction process, that maintains the estimation accuracy for the opening design conditions.
Step 3: The neural network evaluates the heat-dissipation-opening design, taking the reduced principal-axis vector as the network input vector and computing the output value of every output node of the network; the output node with the largest output value corresponds to the heat-dissipation-opening design condition represented by the detection sample.
Step 4: Check whether there is a misjudged sample; if there is, store the misjudged sample data in the learning-sample database so that they can be obtained as re-learning data.
Step 5: Check whether detection is finished; if not, return to Step 1; if it is, stop the detection process.

The re-learning process, as shown in Fig. 4, comprises the following steps:

Step 1: Add all misjudged samples to the learning-sample database.
Step 2: Carry out the above learning-and-reduction process again, so that the weight of every node of the neural network can be re-adjusted to the optimum; similar or nearly identical misjudged samples will then no longer be misjudged, which raises the estimation accuracy of the present invention.
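A short sketch of how a candidate design would then be evaluated in the detection process of this embodiment, reusing the hypothetical predict and learn_and_reduce helpers from the sketches above:

```python
def evaluate_design(x_new, phi, params):
    """Detection process of the second embodiment: project the attribute vector of a
    candidate opening design onto the m retained principal axes and let the trained
    network estimate the CPU temperature and the 15 monitoring-point temperatures."""
    z = x_new @ phi            # Step 2: principal-axis transformation and reduction
    return predict(z, params)  # Step 3: network evaluation of the reduced vector

# Hypothetical usage with the result of learn_and_reduce():
# m, phi, params = learn_and_reduce(train_X, train_T)
# estimates = evaluate_design(candidate_vector, phi, params)
```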

Through the combination of the above processes, the n-item attribute vector of the heat-dissipation opening is converted by the K-L expansion into the principal-axis vector, which is used as the input vector z_i (i = 1, 2, ..., m) of the back-propagation neural network, and the output nodes y_k correspond to the s kinds of output results: the network output Y1 is the CPU temperature and the outputs Y2 to Y16 are the temperature monitoring points at 15 different positions, Y2 being the temperature monitoring point at the lower left of the upper layer of the case, Y3 at the upper left of the upper layer, Y4 at the lower right of the upper layer, Y5 at the upper right of the upper layer, Y6 at the centre of the upper layer, Y7 at the lower left of the middle layer, Y8 at the upper left of the middle layer, Y9 at the lower right of the middle layer, Y10 at the upper right of the middle layer, Y11 at the centre of the middle layer, Y12 at the lower left of the lower layer, Y13 at the upper left of the lower layer, Y14 at the lower right of the lower layer, Y15 at the upper right of the lower layer and Y16 at the centre of the lower layer. The learning-and-reduction process, the detection process and the re-learning process are then carried out in order. In the learning-and-reduction process the K-L expansion method converts the attribute vectors of the opening design parameters onto orthogonal principal axes, so that the vector components do not interfere with one another, and finds the minimum number of principal-axis vectors needed to maintain the estimation accuracy, thereby lowering the complexity of the network estimation; at the same time the network uses the known inputs and outputs of the training samples (that is, the attribute vectors of the training samples in the learning-sample database and the opening design conditions corresponding to them) to adjust the node weights, taking the minimisation of the error between the network outputs and the actual sample outputs as the objective function and adjusting every weight to the optimum so as to raise the estimation accuracy of the network; after the learning-and-reduction process the node weights are fixed for the estimation of the detection process. In the detection process the attribute vector of the sample awaiting inspection is transformed and reduced by the K-L expansion method, and the reduced principal-axis vector is used as the input vector with which the network evaluates the heat-dissipation-opening design; if a sample is misjudged during the evaluation, its data are stored in the learning-sample database so that they can be obtained as re-learning data. In the re-learning process the misjudged samples are added to the learning-sample database, the K-L expansion re-adjusts the principal-axis orientation and the network re-adjusts the node weights, so that in the subsequent detection process similar or nearly identical samples no longer cause misjudgment, improving the estimation accuracy of the method and device for intelligent design of computer-case heat-dissipation openings.

In the intelligent method and device described above, the device of this embodiment likewise comprises a personal computer provided with a learning-sample database and with the intelligent computer-case heat-dissipation-opening design method. The learning-sample database stores the attribute vectors of the heat-dissipation-opening design parameters of the training samples, obtained from simulation analysis; each training record again contains 26 items of data, the first 10 items (or the first n dimensions) being the attribute vector of the opening design parameters, X1 to X10, and the 11th to 26th items being the network output nodes Y1 to Y16, with the same definitions as in the first embodiment, together with the evaluation standard values obtained by existing known standard methods.

In the method and device described above, the neural-network evaluation of the different heat-dissipation-opening design conditions in Step 5 of the learning-and-reduction process uses the reduced principal-axis vector as the input vector of the back-propagation network, with the m output nodes corresponding to the s kinds of output vector; the minimisation of the error between the network outputs and the actual sample outputs is taken as the objective function, every node weight is adjusted to the optimum to obtain the most accurate results, and the best weight of every node of the back-propagation network is obtained. The learning steps of the neural network are:

Step 1: Set the initial weights u_ij, w_jk and the biases θ_j, θ_k of the saturation function along the horizontal axis to small random numbers.
Step 2: While the stopping condition has not been reached (the difference between the network outputs and the actual training-sample outputs is not yet smaller than the allowable value), perform Steps 3 to 10; otherwise end the learning process.
Step 3: For each training sample, perform Steps 4 to 9.
Step 4: Send the input vector (the principal-axis vector) z_i to every node of the hidden layer in a fully connected manner.
Step 5: Compute the outputs of the hidden layer, h_j = f(Σ_i u_ij · z_i − θ_j), where the activation function is defined as f(x) = 1 / (1 + e^{−x}).
Step 6: Compute each output node, y_k = f(Σ_j w_jk · h_j − θ_k).
Step 7: Compute the back-propagated error of each output node, δ_k = (t_k − y_k) · y_k · (1 − y_k), k = 1, 2, ..., m, and the corrections Δw_jk = α · δ_k · h_j and Δθ_k = α · δ_k, where the learning rate satisfies 0 < α < 1 and t_k is the actual output value.
Step 8: Compute the corrections of every hidden-layer node, Δu_ij = α · δ_j · z_i and Δθ_j = α · δ_j, with the back-propagated error δ_j = (Σ_k δ_k · w_jk) · h_j · (1 − h_j), j = 1, 2, ..., p.
Step 9: Update every weight: w_jk(new) = w_jk(old) + Δw_jk, u_ij(new) = u_ij(old) + Δu_ij, θ_k(new) = θ_k(old) + Δθ_k, θ_j(new) = θ_j(old) + Δθ_j.
Step 10: Test whether the stopping condition is true.

In the method and device described above, the neural-network evaluation of the heat-dissipation-opening design in Step 3 of the detection process is carried out as follows:

Step 1: Fix the network weights trained in the learning-and-reduction process.
Step 2: For each test sample, perform Steps 3 to 5.
Step 3: Send the input vector (the principal-axis vector) z_i to every node of the hidden layer in a fully connected manner.
Step 4: Compute the outputs of the hidden layer, h_j = f(Σ_i u_ij · z_i − θ_j), where the activation function is defined as f(x) = 1 / (1 + e^{−x}).
Step 5: Compute the output value of every output-layer node, y_k = f(Σ_j w_jk · h_j − θ_k).

【Brief Description of the Drawings】

Fig. 1  Flow of the intelligent design of computer-case heat-dissipation openings combining the K-L Expansion with the neural network
Fig. 2  Principal-axis transformation and order reduction
Fig. 3  Neural-network learning and evaluation with the reduced principal-axis vector as input
Fig. 4  Re-learning flow of the intelligent design combining the K-L Expansion with the neural network
Fig. 5  Architecture of the back-propagation neural network
Fig. 6  Neural-network training flow of the intelligent design of computer-case heat-dissipation openings
Fig. 7  Neural-network testing-process flow chart of the intelligent design of computer-case heat-dissipation openings
Fig. 8  Neural-network re-learning flow of the intelligent design of computer-case heat-dissipation openings
Fig. 9  Temperature measurement points inside the computer case

【Description of Main Reference Symbols】

X1: X-axis coordinate of the inlet-vent position of the computer case
X2: Y-axis coordinate of the inlet-vent position of the computer case
X3: Z-axis coordinate of the inlet-vent position of the computer case
X4: Diameter of the inlet vent of the computer case
X5: X-axis coordinate of the outlet-vent position of the computer case
X6: Y-axis coordinate of the outlet-vent position of the computer case
X7: Z-axis coordinate of the outlet-vent position of the computer case
X8: Diameter of the outlet vent of the computer case
X9: Number of side heat-dissipation holes of the computer case
X10: Diameter of the side heat-dissipation holes of the computer case
Y1: CPU temperature
Y2: Temperature monitoring point at the lower left of the upper layer of the case
Y3: Temperature monitoring point at the upper left of the upper layer of the case
Y4: Temperature monitoring point at the lower right of the upper layer of the case
Y5: Temperature monitoring point at the upper right of the upper layer of the case
Y6: Temperature monitoring point at the centre of the upper layer of the case
Y7: Temperature monitoring point at the lower left of the middle layer of the case
Y8: Temperature monitoring point at the upper left of the middle layer of the case
Y9: Temperature monitoring point at the lower right of the middle layer of the case
Y10: Temperature monitoring point at the upper right of the middle layer of the case
Y11: Temperature monitoring point at the centre of the middle layer of the case
Y12: Temperature monitoring point at the lower left of the lower layer of the case
Y13: Temperature monitoring point at the upper left of the lower layer of the case
Y14: Temperature monitoring point at the lower right of the lower layer of the case
Y15: Temperature monitoring point at the upper right of the lower layer of the case
Y16: Temperature monitoring point at the centre of the lower layer of the case


Claims (1)

ί306207 十、申請專利範園·· =種 =㈣辑職臟叹麵,其概含柳與降階過 程、檢測過程與再學習過程, 其中學習與降階過程之步驟含括有, 步驟—:则縣軸,齡輪縣‘儀㈣屬性向量 (訓練樣本之前4數據),計算相關矩叫麵,elation ^ ) R制㈣},其中力是散熱開口之屬性向量’ ♦,)是 出見之機率密度函數,E{XjX;}是在第,個類別的屬性向量期望 值,”為散熱開口之屬性向量數目; 特徵向量正規化(_alize),設起始狀態…; 步驟三:求取輸卩車φ,降輯輸嫩前痛寺徵值所 對應之W個特徵向量所組成H_1; 步驟四:主轉軸㈣,-〜爾-之猶階至心 維,其中%是主軸向量; 步驟五.神經網路評估其散熱開口設計條件,以^為輸入向量,由倒傳遞 神經網路對訓練樣本進行學習評估訓練樣本散熱開口設計條件,如 果估算誤差小於容許值,則回步驟三,輸入向量再降一階重複上述 步t直到找出可以估算所需之最低(或最少)主轴維度為止,如果 、差大於令許值’則估算縮需之最少主軸維度為所=叫止 學習過程; ,、中π亥予白與降階過程步驟五之神經網路評估散熱開口設計條件,是以降階 22 1306207 =作為倒傳遞神_路的輸人向量,而”個輪出節點分別對應於增輸 、/里藉由倒傳遞神經網路輪出值與樣本實際輸出值之誤差最小化作為目样 函數,將神經網路各節 、'、示 卩機結值調整至最佳化,以獲取最精確之輪出結果,並 焱侍倒傳遞神經網路每一 1 如下: 路母個幌之最錢、雑,町__路學習步驟說明 步驟一:設定初始錘έ士估 、、、Q值、乂及沿水平軸之飽和函數之偏差(bias) &片均 為很小亂數; 步驟―:當停止條件尚未到達時,神經網路輸出值與訓練樣本實際輸出差異 I ;谷4值,作步驟三到步驟十,否則結束學習過程; 竭__‘ 作步翻到步驟九; 步驟四:將輸人向量(主轴向量)1傳送到隱藏層的每—個節點作全連结運 算; '口 步驟五:計算隱藏層之輸出值v , 其中作用函數定義為/(广)=1 ; 步驟六:言十算每一個輸出節點八,; 步驟七:計算每一個輸出節點之回傳誤差 < 鳴痛丨會 (hbu.‘·,叫及鍵結修正量,Δκ,其中學習率 0<«<1,實際輸出值^ ; 步驟八:計算隱藏層每-節點之鍵結修正值△〜=吟,及回傳誤 乂驟九·更新每-個鍵結值^(_),〜—风州卿),其中 23 1306207 Wjk<JleM,^wjk(〇ld) + hM>jk > uij(new) = uij(〇ld) + Auij . ek(new)=,〇k(〇ld) + h9k > 0j(new) = θ.(〇ι^ +Αθ^ . 步驟十:測試停止條件是否為真; 檢測過程之步驟如下, 步驟一:輪人待測之散熱開口設計條件之屬性向量X/ ; 步驟二:將散熱開口設計條件之屬性向量進行主轴轉換與降階,夢以 σ鳩料㈣_料轉讎 〃中主刪 計條件估算精度所需最少之主軸向量數; 錢開^ 步驟- :^_綱—响她網路輪入向 鄉:==娜^獅 庫二再Γ樣本)’如果有’將誤判樣本資料储存於學習樣本資料 庫以利再學習資料之取得; 步驟五:檢測是否結束,如果否回步驟―,如果是則停止檢測過程; 再學習過程’其步驟含括有, 步驟-:將所有誤判樣本加入於學習樣本資料庫·, 步鄉二··==述學習與降叫使神經網路各節點權重可重新進行 判,而提Γ化’叫致對近贱師之誤贿杉會再發生誤 士】而緹升本發明之估算精確度; 24 1306207 藉由上述諸流程之組合,以散埶 *、、、汗”項屬性向量經由K-L expansion轉換為 主軸向量作為倒傳遞神經網路的姻入向量㈣,·,…個輸出節點 Λ,Α=1,2,···,β分別對應於種輪出 掏出向里,因此Yl之神經網路輸出之cpu溫 度’ I到I6之神、爾_點Μ點不同位置之溫度監測點,1中γ2 為機殼上層左下方之溫度監_,I為機殼上層左上方之溫度監測點,I 為機,又上層右下方之溫度監測點,I為機殼上層右上方之溫度監測點,I 為機殼上層正中央之溫度監_,I為機殼帽左下方之溫度監測點,% 為機殼帽左上方之溫度監_,%為機財層右下方之溫度監測點,% 12 為機殼中層右上方之溫度監測點,%為機殼中層正中央之溫度監測點,γ 14 為麟下層左下方之溫度監_,Yu為機殼下層左上方之溫度監測點,γ 16 y機士下層右下方之’皿度[測點,%為機殼下層右上方之温度監測點,I。 為機;vX下層正巾央之溫度監難,然後再依柄行學習與降階過程、檢測過 程與再雜’其巾學f與降階過程巾,藉由κ_[哪咖―方法將散熱 開.又3十參數之屬性向量轉換至正交主轴上,避免屬性向量彼此干擾,並求 維持估算精度所需最少之主軸向量數,以降低神經網路估算之複雜度,同 時神經網路彻爾樣本已知之輸人健輸_即學胃樣本資料庫中,訓練 樣本屬性向量與其相對應之散熱開口設計條件)調整各節點權重,使神經網 路輸出值與樣本實際輸出值之誤差最小作為目標函數,將各節點鍵結值調整 至最佳化,以提升神經網路估算精度,學習與降階過程結束後並固定各節點 權重’以利檢測過程之估算;檢測過程中將等待檢測樣本屬性向量經由k_l expansion方法進行主軸轉換與降階,並將降階後之主軸作為輸入向量,經由 25 1306207 路蛛餘開口設計·,評估雜如果有誤繼本,縣誤判樣本 貝料儲存於學習樣本資料庫以利再學習資料之取得;再學習過程是經由誤判 ;本加入於學習樣本資料庫,使K L ex卿i〇n重新調整主轴方位與神經網 路調整各節點權重。 2·如申晴專利範圍第〗項所述之智慧型電腦機殼散熱開口之方法,該學習樣本 資料庫儲存有訓練樣本的散熱開口設計參數之屬性向量,該散熱開口設計之 屬性向量是經由模擬分析後所獲取,每一筆訓練資料包含26項數據,前 •項(或前,數據為散熱開口設計參數之屬性向量依序為Χι到Χι。,其中乂 為電顺殼餘人風时置在X 標,&為_驗絲人風口位置 在Y軸的座標,x3為制機殼散狀風時置在z軸的座標,&為電腦機 殼散熱入即餘,X5A電麟殼賴出風时置在χ _座標,&為電 腦機殼散熱出風口位置在γ軸的座標,χ7為電腦機殼散熱出風口位置在Ζ 軸的座標,Χ8為制機殼散熱出風σ餘,&為電腦機殼散油孔數目, Χ1〇為電腦機殼散熱側孔直徑,第U項(或W項)是神經網路輸出節點I • ^CPU溫度,第12項(或W項)是神經網路輸出節點γ2為機殼上層左下方 之溫度監測點,第13項(或㈣項)是神經網路輸出節點^為機殼上層左上 方之溫度監測點’第Μ項(或㈣項)是神經網路輸_點'為機殼上層右 下方之溫度監測點,第I5項(或㈣項)是神經網路輪出節點Κ為機殼上層 右上方之溫度監泰,第16項(或㈣极神_路輪⑽點%為機殼上 廣正中央之溫度監測點’第π項(或„+7項)是神經網路輸出節點γ7為機殼 中層左下方之溫度監測點’第18項(或㈣項)是神經網路輸出節點%為機 26 I3062Q7 殼中層左上方之溫度監測點,第19項(或《+9項)是神經網路輸出節點丫9為 機殼中層右下方之溫度監測點,第20項(或《+川項)是神經網路輪出節點γ1〇 為機殼中層右上方之溫度監測點,第21項(或項)是神經網路輸出節點 Υιι為機殼中層正中央之溫度監測點,第22項(或《+/2項)是神經網路輸出節 點Ylz為機殼下層左下方之溫度監測點,第23項(或„+73項)是神經網路輸 出節點Yl3為機殼下層左上方之溫度監測點,第24項(或《+74項)是神經網 路輸出節點Υ14為機殼下層右下方之溫度制點,第25項(或州5項)是神 鲁 經網路輸出節點γΐ5為機殼下層右上方之溫度監測點,第26項(或州6項) 疋神紅網路輸出節點Υΐ6為機殼下層正中央之溫度監測點,上述輸出與輪入 值之對應關係可由實驗過程獲得(或由已知標準方法獲取之輸入與輪出^ 估值)。 3·如申請專利範圍第1項所述之智 岛機办又放熱開口設計之方法,其中兮 "〜队阶,汗j u吕又盯< 檢測過程步驟三之神經網路開口料步驟魏如下: 辟f細猶_丨咖議_物細 步驟二:對每—個戦樣本,作步驟三至步驟五; μ ^ ’ 步驟三:將輪人崎(錄向私傳送_藏 步驟四:計算_層之輸紐^ = 偏卩.树全連結運算; m—丄. ^ 4),其讀用函數定義為 步驟五:計算每—個輸出層之輸出―,㈣Σν^)。 4. 
一種智難電腦顧散_口之方法,私法軸 程與再學習讽@ # _u 5子~過程、檢測過 '學f過程是從學f樣本f料庫,藉域設計之 27 1306207 屬!生向里作為倒傳遞神經網路的輸入向量',而”個輸入節點分別對應於5種 崩出Z果’因此Yl之神經網路輸出節點為CPU溫度,I到Yl6之神經網路 輸2即點為15點不同位置之溫度監測點,其中;1為機殼上層左下方之溫 又丨^ Υ3為機殼上層左上方之溫度監測點,丫4為機殼上層右下方之溫 Υ5為機殼上層右上方之溫度監測點,Υ6為機殼上層正中央之溫 度監測點,Υ7為機殼中層左τ方之溫度監測點,Υ8為機財層左上方之溫 度L則點’ Υ9為機殼中層右下方之溫度刻點,Υι。為機財層右上方之溫 紅測點,Yll為機殼巾層財央之溫度監泰,%為下層左下方之溫 狀測點,Υ13為機殼下層左上方之溫度監測點,L為機殼下層右下方之溫 度I測點,γ15為機殼下層右上方之溫度監測點,Υΐ6為機殼下層正中央之溫 紅測點,錢細倒傳遞神經網路輸出值與樣本實際触值之誤差最小化 2為目標祕’觀_路各節_值輕至最触,赠取最精確之散 .、、、PU箱結果,並獲得倒傳遞神經網路每—個節點之最佳鍵結值,以 下就神賴路學之步驟包含有: 步驟一:設定初她一,㈣水伟彻爾編(bias)"均 為復小亂數; J 步驟-’爾件尚未到達時,神經網路輸出值與訓練樣本實際輸出差異 小於容許值,作步L針,否聽束學習過程; V驟—.對每個訓練樣本,作步驟四到步驟九; 步驟四屬人向量_開口設計參數之屬性♦傳送到隱藏層的每_ 個郎點作全連結運算; 28 I3062Q7 步驟五:計算隱藏層之輸出值v〜=/(石, 其中作用函數定義為/(妁=_11^ ; 、 1 + θ 步驟六··計算每—個輸出節點π/(ΣΆ,; J 步驟七· s十算每—個輪出節點之回傳誤差__凡)心(11), (八,/4: = 1,2,‘..,讲)及鍵結修正量/^=吨、.,么4=£^,其中學習率 ο<α<ι ’實際輸出值G 步驟八:計算隱藏層每一節點之鍵結修正值及回傳誤 差,(V:1,2,...,/7); 步驟九:更新每-個鍵結值~(_),%㈣為㈣办歷),其中 '“卿)= '“。W) + —,,〜(卿)=«〆。“) + △%, W = 〇k(old) + A0k , eMew) = ej{〇ld)^9j ; 步驟十:測試停止條件是否為真; 檢測過程,其步驟含括有, 步驟-:輸入待測散熱開口設計參數樣本之屬性向量; 網路輸入向 步驟二:細_嶋邮㈣,输㈣為她 量,並計算神經網路各輸出節點輸出值; 資料儲存於學習樣本資 步驟三:檢測是否有誤判樣本,如果有,將誤判樣本 料庫以利再學習資料之取得; 如果是則停止檢測過程; 步驟四:檢測是否結束,如果否回步驟一, 再學習過程,其步驟包含有, 29 1306207 步驟—··將所有誤娜本加人於學習樣本資料庫; 步驟二:重新進行上述學習過程,使神經網路各節點權 至最佳化,以有效對近似或雷同之誤判樣本不^新進行調整達 升本發明之估算精確度; 胃發生誤判,而提 藉由上述諸流程之組合,以其散熱開口設計之”項 網路的《個旦 、向里作為匈傳遞神經 幹出,果里’趟出節點分別對應制 . 進仃學f過程、檢測過程與再學習過程,其巾學m由 >藉由神_路细辑樣本已 g中’ 翰入值與輸出值(即學習樣本資料庫中,訓 "u目對應之輸出結果)調整各節點權重,使 值與樣本軸嶽_蝴目缝,將她聽值調整至= 化,以提升神經網路估算精度,學習過程結束後並固定各節點權重,以利檢 Z過程之估算;_過財料待檢_本雜向量作為輸人向量,經由神 =_仃_ ,撕,聰獅綱娜I ,誤_ 貝料儲存於予習樣本資料庫以利再學習資料之取得;再學習過程是經由誤判 樣本加入於學習樣本資料庫,使神經網路調整各節點權重,以達至後續檢測 $ t程對於她或雷同之前述誤舰林再產生誤判情形。 .如申1專利範®第4項所述之智慧型機殼散觸口之方法,其中該學習 樣^資料庫儲存有訓練樣本之散熱開口設計的屬性向量,該屬性向量是經由 電腦換擬分析後所獲取’每_筆訓練資料包含%項數據,前W項(或前” 1據為散熱開口设計之屬性向量,依序為&到&,其中:Χι為電腦機殼 '、、、入風口位置在X軸的座標,&為電職殼散熱入風口位置在γ轴的座 30 1306207 標,X3為電腦機殼散熱入風口位置在z細的# _座標’X4t職殼散熱入風 口直徑,X5為電腦機殼散熱出風口位置在Υ &ΛΑ 直在X—鱗,χ6為賴機殼散熱 出風口位置在Υ軸的座標,X7為電腦機殼散 政熟㈣σ位置在Ζ軸的座標, Χ8為電腦機錄熱出•直徑,χ9為電腦機殼散熱側孔數目,知為電腦機 殼散熱慨錢,第U娜㈣舰神經鳴輪出祕Υι為cpu溫度’ 第12項(或㈤項)是神賴路輸出節點Y2為機殼上層左下方之溫度監測 點,第η項(或…項)是神經網路輪出節點Y3為機殼上層左上方之溫度監306 306207 X. Applying for a patent Fan Park·· = kind=(4) Collecting dirty sighs, which include Liu and the reduction process, the detection process and the re-learning process. The steps of the learning and reduction process are included, steps: Then the county axis, the age round county 'instrument (four) attribute vector (4 data before the training sample), calculate the relevant moment called face, elation ^) R system (four)}, where the force is the attribute vector of the heat dissipation opening ' ♦,) is seen The probability density function, E{XjX;} is the expected value of the attribute vector in the first category, "the number of attribute vectors for the heat dissipation opening; the eigenvector normalization (_alize), set the initial state...; Step 3: Find the input The car φ, the lower part of the W eigenvalue corresponding to the sacred sacred value is composed of H_1; Step 4: the main axis (four), -~ er - the utmost to the heart dimension, where % is the principal axis vector; The neural network evaluates the design conditions of the heat dissipation opening, and takes the input vector as the input vector. The training sample is evaluated by the inverse neural network to evaluate the design conditions of the heat dissipation opening of the training sample. If the estimation error is less than the allowable value, then go back to step 3 and input the vector. Lower one Repeat the above step t until it is found that the minimum (or minimum) spindle dimension can be estimated. 
If the difference is greater than the allowable value, then the minimum spindle dimension of the estimated shrinkage is determined as the learning process; The neural network for the white and the step-down process step 5 evaluates the design conditions of the heat dissipation opening, which is to reduce the order 22 1306207 = as the input vector of the reverse transmission god _ road, and the "round-out nodes correspond to the increase and decrease, respectively The error of the inverted neural network rotation value and the actual output value of the sample is minimized as a function of the objective, and the neural network sections, ', and the threshold value of the indicator are adjusted to be optimized to obtain the most accurate round-out. As a result, the 焱 焱 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递 传递The deviation of the saturation function of the horizontal axis (bias) & slices are small random numbers; Step -: When the stop condition has not arrived, the neural network output value and the actual output of the training sample are different I; Valley 4 value, step three Go to step ten, otherwise end Learning process; exhaust __' step to step IX; Step 4: Transfer the input vector (spindle vector) 1 to each node of the hidden layer for full-join operation; 'Step 5: Calculate the hidden layer The output value v, where the action function is defined as / (wide) = 1; Step 6: say ten counts each output node eight,; Step seven: calculate the return error of each output node < 鸣痛丨会(hbu. '·, call and key correction amount, Δκ, where the learning rate 0 <«<1, the actual output value ^; Step 8: Calculate the hidden layer per-node bond correction value △ ~ = 吟, and back error Step IX. Update each key value ^(_), ~—风州卿), where 23 1306207 Wjk<JleM,^wjk(〇ld) + hM>jk > uij(new) = uij(〇 Ld) + Auij . ek(new)=,〇k(〇ld) + h9k > 0j(new) = θ.(〇ι^ +Αθ^ . Step 10: Test if the stop condition is true; As follows, Step 1: The property vector X/ of the design conditions of the heat sink opening to be tested by the wheel: Step 2: Perform the spindle transformation and reduction of the attribute vector of the heat dissipation opening design condition. The minimum number of spindle vectors required to estimate the accuracy of the σ 鸠 (4) _ material conversion ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; (2) Samples again) 'If there is', the sample data will be stored in the learning sample database for the re-learning data acquisition; Step 5: Whether the detection is over, if no, go back to the step--, if yes, stop the detection process; 'The steps are included, Step--Add all the misjudgment samples to the learning sample database. · Step Township··==Speaking and lowering the call to make the weight of each node of the neural network re-judgment 'Calling to the sorcerer of the sorcerer's sorrows will occur again.' and the estimation accuracy of the invention is soaring; 24 1306207 By the combination of the above processes, the attribute vector of divergence *, , and Khan is passed through KL. 
The expansion is converted into the principal vector as the inherited vector of the inverse neural network (4), ·,...the output nodes Λ,Α=1,2,···, β correspond to the seeding out, respectively, so the nerve of Yl Network output cpu temperature I to the god of I6, the temperature monitoring point of different positions of the point _ point, 1 γ2 is the temperature monitoring _ at the lower left of the upper layer of the casing, I is the temperature monitoring point at the upper left of the upper layer of the casing, I is the machine, and the upper layer At the lower right temperature monitoring point, I is the temperature monitoring point on the upper right side of the upper layer of the casing, I is the temperature monitoring in the center of the upper layer of the casing, I is the temperature monitoring point at the lower left of the casing cap, and the % is the upper left of the casing cap. The temperature monitoring _,% is the temperature monitoring point at the lower right of the machine layer, % 12 is the temperature monitoring point at the upper right of the middle layer of the casing, % is the temperature monitoring point in the center of the middle layer of the casing, and γ 14 is the lower left of the lower layer of the lining Temperature monitoring _, Yu is the temperature monitoring point on the upper left side of the lower layer of the casing, and the 'degree of the right side of the lower layer of the γ 16 y yoke's lower layer [measuring point, % is the temperature monitoring point on the upper right side of the lower layer of the casing, I. For the machine; vX lower layer of the temperature of the towel is difficult to monitor, and then according to the handle line learning and reduction process, the detection process and re-mixing 'the towel f and the reduced order process towel, by κ_[which coffee method will heat Open. The attribute vector of the 30 parameters is converted to the orthogonal main axis to avoid the attribute vectors from interfering with each other, and to maintain the minimum number of spindle vectors required for estimation accuracy, so as to reduce the complexity of neural network estimation, and the neural network is complete. The sample is known as the input and loser _ _ _ learning stomach sample database, training sample attribute vector and its corresponding thermal opening design conditions) adjust the weight of each node, so that the error between the neural network output value and the sample actual output value is the smallest The objective function adjusts the node key values to optimize the neural network estimation accuracy, and fixes the weights of each node after the end of the learning and reduction process to facilitate the estimation of the detection process; the detection process will wait for the detection sample The attribute vector is converted and reduced by the k_l expansion method, and the reduced-order spindle is used as the input vector, and the design is based on the 25 1306207 If there is a mistake, the county misjudges the samples and stores the sample materials in the study sample database to facilitate the acquisition of the learning materials; the learning process is based on misjudgment; this is added to the learning sample database to re-adjust KL exqingi〇n The spindle orientation and neural network adjust the weight of each node. 2. 
2. The method of designing heat-dissipation openings of an intelligent computer casing as described in claim 1, wherein the learning sample database stores the attribute vectors of the heat-dissipation-opening design parameters of the training samples, the attribute vectors being obtained from simulation analysis, and each training record contains 26 items of data. The first ten items (or the first n items) are the attribute vector X1 to X10 of the heat-dissipation-opening design parameters, where X1 is the X-axis coordinate of the cooling air inlet of the computer casing, X2 is the Y-axis coordinate of the inlet, X3 is the Z-axis coordinate of the inlet, X4 is the diameter of the inlet, X5 is the X-axis coordinate of the cooling air outlet, X6 is the Y-axis coordinate of the outlet, X7 is the Z-axis coordinate of the outlet, X8 is the diameter of the outlet, X9 is the number of side heat-dissipation holes of the casing, and X10 is the diameter of the side heat-dissipation holes. Item 11 (or item n+1) is neural-network output node Y1, the CPU temperature; item 12 (or n+2) is output node Y2, the temperature monitoring point at the lower left of the upper layer of the casing; item 13 (or n+3) is Y3, at the upper left of the upper layer; item 14 (or n+4) is Y4, at the lower right of the upper layer; item 15 (or n+5) is Y5, at the upper right of the upper layer; item 16 (or n+6) is Y6, at the center of the upper layer; item 17 (or n+7) is Y7, at the lower left of the middle layer; item 18 (or n+8) is Y8, at the upper left of the middle layer; item 19 (or n+9) is Y9, at the lower right of the middle layer; item 20 (or n+10) is Y10, at the upper right of the middle layer; item 21 (or n+11) is Y11, at the center of the middle layer; item 22 (or n+12) is Y12, at the lower left of the lower layer; item 23 (or n+13) is Y13, at the upper left of the lower layer; item 24 (or n+14) is Y14, at the lower right of the lower layer; item 25 (or n+15) is Y15, at the upper right of the lower layer; and item 26 (or n+16) is Y16, at the center of the lower layer. The correspondence between the above outputs and the input values is obtained from the experimental process (or from input and output evaluation values obtained by a known standard method).
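As a concrete illustration of the 26-item training record described in this claim, the sketch below splits one record into the ten design parameters X1–X10 and the sixteen outputs Y1–Y16; the field names and the Python representation are illustrative assumptions, not part of the patent.

```python
# Each training record in the learning-sample database holds 26 values:
# 10 design parameters (X1..X10) followed by 16 outputs (Y1..Y16).
INPUT_FIELDS = [
    "inlet_x", "inlet_y", "inlet_z", "inlet_diameter",       # X1..X4
    "outlet_x", "outlet_y", "outlet_z", "outlet_diameter",   # X5..X8
    "side_hole_count", "side_hole_diameter",                  # X9..X10
]
OUTPUT_FIELDS = ["cpu_temp"] + [                               # Y1
    f"{layer}_{pos}_temp"                                      # Y2..Y16
    for layer in ("upper", "middle", "lower")
    for pos in ("lower_left", "upper_left", "lower_right", "upper_right", "center")
]

def split_record(record):
    """Split one 26-item record into its input and output vectors."""
    assert len(record) == 26
    return record[:10], record[10:]
```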
3. The method of designing heat-dissipation openings of an intelligent computer casing as described in claim 1, wherein the neural-network evaluation of the heat-dissipation-opening design in step 3 of the detection process comprises the following steps:
Step 1: Fix the neural-network weights trained in the learning process.
Step 2: For each test sample, perform Steps 3 to 5.
Step 3: Transmit the input vector (the attribute vector of the heat-dissipation-opening design) x_i to every node of the hidden layer for a fully connected operation.
Step 4: Compute the hidden-layer output v_j = f(Σ_i u_ij·x_i + θ_j), where the activation function is defined as f(x) = 1/(1 + e^(-x)).
Step 5: Compute the output value of every output-layer node Y_k = f(Σ_j w_jk·v_j + θ_k).
4. A method of designing heat-dissipation openings of an intelligent computer casing, comprising a learning process, a detection process and a re-learning process, wherein the learning process takes, from the learning sample database, the attribute vector of the heat-dissipation-opening design as the input vector of a backpropagation neural network whose sixteen output nodes correspond to the sixteen output results: output node Y1 is the CPU temperature and output nodes Y2 to Y16 are temperature monitoring points at fifteen different positions, where Y2 is the monitoring point at the lower left of the upper layer of the casing, Y3 at the upper left of the upper layer, Y4 at the lower right of the upper layer, Y5 at the upper right of the upper layer, Y6 at the center of the upper layer, Y7 at the lower left of the middle layer, Y8 at the upper left of the middle layer, Y9 at the lower right of the middle layer, Y10 at the upper right of the middle layer, Y11 at the center of the middle layer, Y12 at the lower left of the lower layer, Y13 at the upper left of the lower layer, Y14 at the lower right of the lower layer, Y15 at the upper right of the lower layer, and Y16 at the center of the lower layer; with the minimization of the error between the network output values and the actual sample output values as the objective function, the weight of every node is adjusted to the optimum, so as to obtain the most accurate evaluation of the heat-dissipation-opening design and the optimal weight of every node of the backpropagation neural network.
The steps of the learning process are as follows:
Step 1: Set the initial connection weights w_jk, u_ij and the biases θ_k, θ_j of the saturation function along the horizontal axis to small random numbers.
Step 2: While the stopping condition has not been reached (the difference between the network output values and the actual output values of the training samples is not yet smaller than the allowable value), perform Steps 3 to 10; otherwise end the learning process.
Step 3: For each training sample, perform Steps 4 to 9.
Step 4: Transmit the input vector (the attribute vector of the heat-dissipation-opening design parameters) to every node of the hidden layer for a fully connected operation.
Step 5: Compute the hidden-layer output v_j = f(Σ_i u_ij·x_i + θ_j), where the activation function is defined as f(x) = 1/(1 + e^(-x)).
Step 6: Compute each output node Y_k = f(Σ_j w_jk·v_j + θ_k).
Step 7: Compute the back-propagated error δ_k = (t_k - Y_k)·f'(net_k), k = 1, 2, ..., m, of each output node and the corrections Δw_jk = α·δ_k·v_j and Δθ_k = α·δ_k, where the learning rate satisfies 0 < α < 1 and t_k is the actual output value.
Step 8: Compute the corrections Δu_ij = α·δ_j·x_i and Δθ_j = α·δ_j and the back-propagated error δ_j, j = 1, 2, ..., p, of every node of the hidden layer.
Step 9: Update every weight: w_jk(new) = w_jk(old) + Δw_jk, u_ij(new) = u_ij(old) + Δu_ij, θ_k(new) = θ_k(old) + Δθ_k, θ_j(new) = θ_j(old) + Δθ_j.
Step 10: Test whether the stopping condition holds.
The detection process comprises the following steps:
Step 1: Input the attribute vector of the design parameters of the heat-dissipation-opening sample to be tested to the network.
Step 2: Take the input attribute vector and compute the output value of every output node of the neural network.
Step 3: Detect whether misjudged samples occur; if so, store the sample data in the learning sample database for acquisition of the re-learning data.
Step 4: If detection is not finished, return to Step 1; otherwise stop the detection process.
The re-learning process comprises the following steps:
Step 1: Add all misjudged samples to the learning sample database.
Step 2: Re-execute the above learning process so that the weights of the neural-network nodes are re-adjusted and identical or similar samples will no longer be misjudged, thereby raising the estimation accuracy of the present invention.
By combining the above processes, the attribute vector of the heat-dissipation-opening design serves as the input vector of the neural network, whose output nodes correspond to the respective output results, and the method proceeds according to the learning process, the detection process and the re-learning process: the learning process takes the attribute vectors of the training samples and their corresponding outputs from the learning sample database and adjusts the weight of every node with the minimization of the error between the network output values and the actual sample output values as the objective function, so that the estimation accuracy of the neural network is improved; after the learning process ends, the weight of every node is fixed to facilitate the estimation of the detection process; the detection process takes the attribute vector of the sample to be detected as the input vector, estimates it through the neural network, and stores any misjudged sample data in the learning sample database for acquisition of the re-learning data; the re-learning process adds the misjudged samples in the learning sample database to the training data and re-adjusts the weights of the neural-network nodes, so that the subsequent detection process will no longer misjudge identical or similar samples.
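The per-sample weight-update loop recited in Steps 1 to 10 above corresponds to standard backpropagation with a sigmoid activation. The following is a minimal sketch of that loop; the hidden-layer size, learning rate, stopping tolerance and NumPy implementation details are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_backprop(X, T, n_hidden=8, alpha=0.5, tol=1e-3, max_epochs=5000):
    """Per-sample backpropagation: small random initial weights/biases,
    sigmoid activation, and updates w(new) = w(old) + alpha * delta * activation.

    X : (n_samples, n_inputs)  input attribute vectors
    T : (n_samples, n_outputs) target outputs Y1..Y16, scaled to 0..1
    """
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], T.shape[1]
    U = rng.uniform(-0.1, 0.1, (n_in, n_hidden))   # input -> hidden weights u_ij
    W = rng.uniform(-0.1, 0.1, (n_hidden, n_out))  # hidden -> output weights w_jk
    th_j = rng.uniform(-0.1, 0.1, n_hidden)        # hidden biases
    th_k = rng.uniform(-0.1, 0.1, n_out)           # output biases

    for _ in range(max_epochs):
        sq_err = 0.0
        for x, t in zip(X, T):                     # one training sample at a time
            v = sigmoid(x @ U + th_j)              # Step 5: hidden-layer output v_j
            y = sigmoid(v @ W + th_k)              # Step 6: output nodes Y_k
            delta_k = (t - y) * y * (1 - y)        # Step 7: output error
            delta_j = v * (1 - v) * (W @ delta_k)  # Step 8: hidden-layer error
            W += alpha * np.outer(v, delta_k)      # Step 9: weight updates
            th_k += alpha * delta_k
            U += alpha * np.outer(x, delta_j)
            th_j += alpha * delta_j
            sq_err += float(np.sum((t - y) ** 2))
        if sq_err / len(X) < tol:                  # Step 10: stopping condition
            break
    return U, W, th_j, th_k
```

Under this sketch, the re-learning process amounts to appending the misjudged records to X and T and calling the same routine again, so that the re-adjusted weights no longer misjudge identical or similar samples.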
5. The method of designing heat-dissipation openings of an intelligent computer casing as described in claim 4, wherein the learning sample database stores the attribute vectors of the heat-dissipation-opening designs of the training samples, the attribute vectors being obtained from computer simulation analysis, and each training record contains 26 items of data. The first ten items (or the first n items) are the attribute vector X1 to X10 of the heat-dissipation-opening design, where X1 is the X-axis coordinate of the cooling air inlet of the computer casing, X2 is the Y-axis coordinate of the inlet, X3 is the Z-axis coordinate of the inlet, X4 is the diameter of the inlet, X5 is the X-axis coordinate of the cooling air outlet, X6 is the Y-axis coordinate of the outlet, X7 is the Z-axis coordinate of the outlet, X8 is the diameter of the outlet, X9 is the number of side heat-dissipation holes of the casing, and X10 is the diameter of the side heat-dissipation holes. Item 11 (or item n+1) is neural-network output node Y1, the CPU temperature; item 12 (or n+2) is output node Y2, the temperature monitoring point at the lower left of the upper layer of the casing; item 13 (or n+3) is Y3, at the upper left of the upper layer; item 14 (or n+4) is Y4, at the lower right of the upper layer; item 15 (or n+5) is Y5, at the upper right of the upper layer; item 16 (or n+6) is Y6, at the center of the upper layer; item 17 (or n+7) is Y7, at the lower left of the middle layer; item 18 (or n+8) is Y8, at the upper left of the middle layer; item 19 (or n+9) is Y9, at the lower right of the middle layer; item 20 (or n+10) is Y10, at the upper right of the middle layer; item 21 (or n+11) is Y11, at the center of the middle layer; item 22 (or n+12) is Y12, at the lower left of the lower layer; item 23 (or n+13) is Y13, at the upper left of the lower layer; item 24 (or n+14) is Y14, at the lower right of the lower layer; item 25 (or n+15) is Y15, at the upper right of the lower layer; and item 26 (or n+16) is Y16, at the center of the lower layer. The correspondence between the above outputs and the input values is obtained from the experimental process (or from input and output evaluation values obtained by a known standard method).
6. The method of designing heat-dissipation openings of an intelligent computer casing as described in claim 4, wherein the neural-network evaluation of the computer-casing heat-dissipation-opening design in step 2 of the detection process comprises the following steps:
Step 1: Fix the neural-network weights trained in the learning process.
Step 2: For each test sample, perform Steps 3 to 5.
Step 3: Transmit the input vector (the attribute vector of the heat-dissipation-opening design) x_i to every node of the hidden layer for a fully connected operation.
Step 4: Compute the hidden-layer output v_j = f(Σ_i u_ij·x_i + θ_j), where the activation function is defined as f(x) = 1/(1 + e^(-x)).
Step 5: Compute the output value of every output-layer node Y_k = f(Σ_j w_jk·v_j + θ_k).
TW094105375A 2005-02-23 2005-02-23 Method and device using intelligent theory to design heat dissipation opening of computer housing TW200630833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW094105375A TW200630833A (en) 2005-02-23 2005-02-23 Method and device using intelligent theory to design heat dissipation opening of computer housing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW094105375A TW200630833A (en) 2005-02-23 2005-02-23 Method and device using intelligent theory to design heat dissipation opening of computer housing

Publications (2)

Publication Number Publication Date
TW200630833A TW200630833A (en) 2006-09-01
TWI306207B true TWI306207B (en) 2009-02-11

Family

ID=45071326

Family Applications (1)

Application Number Title Priority Date Filing Date
TW094105375A TW200630833A (en) 2005-02-23 2005-02-23 Method and device using intelligent theory to design heat dissipation opening of computer housing

Country Status (1)

Country Link
TW (1) TW200630833A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data

Also Published As

Publication number Publication date
TW200630833A (en) 2006-09-01

Similar Documents

Publication Publication Date Title
TWI306207B (en)
Baldock et al. The index is the best measure of a scientist's research productivity
Presanis et al. Conflict diagnostics in directed acyclic graphs, with applications in Bayesian evidence synthesis
CN110110754B (en) Method for classifying imbalance problems based on cost local generalization errors
Grobelna Formal verification of embedded logic controller specification with computer deduction in temporal logic
Barani et al. Implementation of Artificial Fish Swarm Optimization for Cardiovascular Heart Disease
Habiba et al. Classifying adenomyosis: Progress and challenges
CN106203530A (en) Method is determined for the feature weight of uneven distributed data towards k nearest neighbor algorithm
CN110472067A (en) Knowledge mapping indicates learning method, device, computer equipment and storage medium
CN111477337B (en) Infectious disease early warning method, system and medium based on individual self-adaptive transmission network
Hu et al. Aiding airway obstruction diagnosis with computational fluid dynamics and convolutional neural network: A new perspective and numerical case study
CN105894493A (en) FMRI data feature selection method based on stability selection
Tuesta et al. Does a country/region’s economic status affect its universities’ presence in international rankings?
Laenen et al. Reliability of a longitudinal sequence of scale ratings
Kim et al. Evaluation of deep learning for COVID‐19 diagnosis: impact of image dataset organization
KR101997992B1 (en) System for generating survey for social emotion survey based on emotion vocabulary and method thereof
Cillo et al. Bifurcated monocyte states are predictive of mortality in severe COVID-19
WO2006091983A3 (en) Variable diffusion-time magnetic resonance-based system and method
Dhar et al. An innovation diffusion model for the survival of a product in a competitive market: basic influence numbers
Altman et al. Brackets (parentheses) in formulas
Tang et al. Least squares regression methods for clustered ROC data with discrete covariates
Drechsler et al. PASSAT 2.0: A multi-functional SAT-based testing framework
DAVIES et al. Detection and significance of subclinical mitral regurgitation by colour Doppler techniques
Cheema et al. Power and Performance Analysis of Deep Neural Networks for Energy-aware Heterogeneous Systems
ALADENIYI et al. Structural Equation Modeling of the Relationship between Quality of Life and Satisfaction with Life Scale among HIV Positive Population

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees