TWI352805B - Google Patents
Info
- Publication number
- TWI352805B (application TW97100006A)
- Authority
- TW
- Taiwan
- Prior art keywords
- value
- luminance
- neural network
- light source
- grayscale
- Prior art date
Landscapes
- Image Processing (AREA)
- Studio Devices (AREA)
Description
1352805

IX. Description of the Invention

[Technical Field]

The present invention relates to a luminance measurement method, and in particular to a two-dimensional luminance measurement method that uses a neural network as its core and works together with a digital camera to rapidly build an image of the surface luminance distribution of an object under test.

[Prior Art]

Liquid crystal displays (LCDs) are thin, lightweight, low in power consumption, free of radiation pollution, and compatible with semiconductor process technology, and are therefore one of the industries currently targeted for development in Taiwan. The backlight module (BLM) is one of the key components providing the LCD backlight. Because an LCD depends on its backlight module for illumination, the quality of the backlight module directly affects the display's performance, and luminance is one of the indicators used to evaluate backlight-module quality.

Radiant Imaging introduced the first CCD-based luminance measurement equipment some twenty years ago; the Japanese firms I System and Minolta also offer full-field luminance/chromaticity measurement equipment, and in Taiwan companies such as Arma Optoelectronics are developing related products. All of the equipment mentioned above works on the same principle: a CCD first captures an image of the surface under test, and software developed by each company then analyzes that image to produce the required two-dimensional luminance output.
Radiant Imaging states that its luminance measurement equipment can quickly measure a display's color coordinates and luminance, chiefly by pairing its CCD photometric/colorimetric measurement system with its ProMetric analysis software. Such measurement equipment is expensive, however, and some manufacturers, out of considerations of production economy and cost, choose their own ways of measuring instead, although the reliability of those results is often questioned.

[Summary of the Invention]

The main object of the present invention is therefore to provide a low-cost, fast-processing method that can be applied directly on a production line to measure surface light sources, increasing the measurement speed on the line while reducing its cost.

The technical means adopted to achieve this object is a method comprising the following steps:

a. Apply different operating voltages to a surface light source and, at the same designated position on the surface light source, take its luminance value under each operating voltage.
b. Apply the different operating voltages to the surface light source, use a CCD to capture an image of the same designated position under each operating voltage, and convert each image into a grayscale image.
c. Take an original grayscale value from the designated position of each grayscale image; this original grayscale value is proportional to the luminance value.
d. From the corresponding groups of original grayscale values and luminance values, derive their functional relationship by curve fitting.
e. Define a plurality of feature-point positions on the surface light source and measure the true luminance at each of them; then invert the functional relationship fitted in step d to obtain a corrected grayscale value at each feature-point position.
f. Capture images of the surface light source with the CCD and read the uncorrected grayscale value at each feature-point position.
g. From the corresponding groups of uncorrected and corrected grayscale values, obtain the "feature-point grayscale correction coefficients" (grayscale correction coefficient = corrected grayscale value / uncorrected grayscale value).
h. From these "feature-point grayscale correction coefficients", use a neural network to form a grayscale correction matrix.
i. Correct the images captured by the CCD with the grayscale correction matrix.
j. Using the "grayscale correction matrix" together with the "fitted curve of corrected grayscale value versus luminance", obtain a simulated two-dimensional luminance distribution.
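As an illustration of steps f through i, the fragment below sketches how the feature-point grayscale correction coefficients of step g are formed and then applied to a later CCD reading. The numbers are hypothetical, not the patent's measured data, and a plain per-point scaling stands in for the neural-network correction matrix:

```python
# Hypothetical grayscale values at three feature points (not the patent's data):
corrected = [180.0, 200.0, 175.0]    # inverted from luminance-meter readings (step e)
uncorrected = [150.0, 200.0, 140.0]  # read from the CCD image (step f)

# Step g: grayscale correction coefficient = corrected / uncorrected
coeffs = [c / u for c, u in zip(corrected, uncorrected)]
print(coeffs)  # [1.2, 1.0, 1.25]

# Step i: correcting a later CCD grayscale reading at the first feature point
raw = 150.0
print(coeffs[0] * raw)  # recovers the corrected value, 180.0
```

In the patent's method, the scalar coefficients at the feature points are generalized by the neural network into a full correction matrix covering every pixel.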
The method above exploits the speed with which a digital camera captures images, so that the luminance distribution of a two-dimensional plane, or even of a curved surface, is obtained quickly; this not only ensures the reliability of the luminance measurement but also greatly reduces its cost.

[Embodiments]

The present invention uses a digital camera to obtain an image of the luminance distribution of the surface under test; the charge-coupled device (CCD) is the image-capture element used in most digital cameras. In conventional single-point measurement with a luminance meter, the probe must be orthogonal to the measured point in order to read the correct luminance value. When a digital camera takes the image instead, the working principle is as shown in the first figure: a light source of uniform luminance lies in the object plane of the imaging lens, and the CCD lies in the image plane. For the uniform luminance source shown in the first figure, the luminance L can be computed from the illuminance E, as follows.

Luminance L is defined as the luminous flux measured per unit solid angle per unit projected emitting area:

  L = d²Φ / (cosθ·dA·dΩ)  (1)

where θ is the angle between the viewing direction and the normal of the emitting surface; the unit of luminance is the nit (cd/m²).

For the on-axis image point, the photometric formulas give the illuminance and intensity

  E₁ = dΦ/dA₁  (2)
  I₀ = dΦ/dΩ₀  (3)

Substituting (3) into (1) gives the luminance of the source element dA₀:

  L₀ = d²Φ / (cosα·dA₀·dΩ₀) = dI₀ / (cosα·dA₀)  (4)

Because a point source radiates with the same intensity in all directions, the flux it emits into a finite solid angle is

  Φ = I₀·Ω₀  (5)

so, combining (5) with (2) and using (4) with α = 0 on the axis, the on-axis relation between illuminance and luminance is

  E₁ = Ω₀·dI₀/dA₁ = L₀·(dA₀/dA₁)·Ω₀  (6)

For an off-axis point viewed at the angle α, the same photometric formulas give

  E₃ = dΦ/dA₃  (7)
  I₁ = dΦ/dΩ₁  (8)

and substituting (8) into (1),

  L₂ = d²Φ / (cosα·dA₂·dΩ₁)  (9)

Again, the flux radiated into the finite solid angle is

  Φ = I₁·Ω₁  (10)

so combining (9) and (10) with (7) gives

  E₃ = Ω₁·dI₁/dA₃ = L₂·(dA₂/dA₃)·Ω₁·cosα  (11)

From the second figure, the distance to the off-axis point is larger by the factor 1/cosα, so by the definition of the solid angle

  Ω₁ = Ω₀·cos²α  (12)

Substituting (12) into (11),

  E₃ = L₂·(dA₂/dA₃)·Ω₀·cos³α  (13)

Because the normal of dA₃ makes the angle α with the ray incident from dA₂, the cosine law of illuminance contributes one further factor of cosα:

  E₃ = L₂·(dA₂/dA₃)·Ω₀·cos⁴α  (14)

Since the source shown in the first figure is a uniform luminance source, L₀ = L₂ (and the corresponding area ratios are equal), so finally

  E₃ = E₁·cos⁴α  (15)
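Equation (15) can be checked numerically; a small sketch, with an assumed on-axis illuminance of 100:

```python
import math

def off_axis_illuminance(e1, alpha_deg):
    """Equation (15): E3 = E1 * cos^4(alpha)."""
    return e1 * math.cos(math.radians(alpha_deg)) ** 4

print(off_axis_illuminance(100.0, 0.0))             # 100.0 (no falloff on axis)
print(round(off_axis_illuminance(100.0, 30.0), 2))  # 56.25
print(round(off_axis_illuminance(100.0, 45.0), 2))  # 25.0
```

At 30° off axis the image already reads about 44% too dark, which is why the camera's output image must be corrected.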
Equation (15) shows that, apart from the point on the optical axis, the illuminance at every position carries a cos⁴α error; consequently, when a digital camera is used for planar luminance measurement, its output image must be corrected.

How two-dimensional luminance analysis with this correction is performed with a digital camera is described in detail below. As shown in the third figure, the hardware consists mainly of a copy stand (10) with a test platform (11) at one end and a fixture (12) at the other; the fixture carries either a digital camera or a luminance meter. The fixture (12) and the test platform (11) are parallel to each other, which ensures that the digital camera or luminance meter is orthogonal to the light source under test on the test platform (11). The data captured by the digital camera and the luminance meter are sent to the mathematical software Matlab, which handles the numerical, image, neural-network, and user-interface processing. The steps of the two-dimensional luminance analysis are as follows.

In the first step, the luminance meter is mounted on the copy stand (10) and used to measure a surface light source (for example, the backlight module of a 2.7-inch display). In this embodiment, as shown in the fourth figure, nine feature-point positions are defined on the surface light source, and the luminance measured is that of the fifth feature point, at the center of the light source; by supplying different operating voltages to the backlight module, a different luminance value is obtained at the fifth feature-point position for each voltage.

In the second step, the illuminance distribution of the surface light source is captured by CCD imaging, with the fifth feature-point position as the center of the output image. The image captured by the CCD is sent to a computer, where Matlab reads out the image's RGB three-dimensional matrix and converts it, with a built-in image-processing function, into a two-dimensional matrix of grayscale values; this forms an original illuminance distribution map, from which the original grayscale value at the fifth feature-point position is obtained. The correction principle derived above for values taken from digital-camera images concludes that, except on the optical axis (that is, at the fifth feature-point position), the illuminance values at all other positions carry at least a cos⁴α error. It is therefore assumed that the original grayscale value at the fifth feature-point position can be curve-fitted to the actual luminance without correction, so that the luminance value and the original grayscale value at the fifth feature point are, in theory, proportional. By supplying different operating voltages to the backlight module, a set of different corrected grayscale values is then obtained at the fifth feature-point position. Repeating the first and second steps yields a data set of corrected grayscale values against luminance values, and the curve-fitting method derives their functional relationship.

In the third step, the luminance meter is used to measure the luminance of the surface light source; in this step the luminance is measured at nine or more feature-point positions on the source, and the functional relationship formed by the curve fitting of the second step is inverted to obtain the corrected grayscale values at those nine or more feature-point positions.
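The RGB-to-grayscale conversion of the second step is done in the patent with a Matlab built-in function; a minimal sketch of the same conversion, assuming the standard luma weights, is:

```python
# Standard luma weights, as used by common RGB-to-grayscale conversions
WEIGHTS = (0.2989, 0.5870, 0.1140)

def rgb_to_gray(img):
    """Convert an RGB image (H x W x 3 nested lists) into a 2-D grayscale matrix."""
    return [[sum(w * c for w, c in zip(WEIGHTS, px)) for px in row] for row in img]

# A 1 x 2 test image: one white pixel and one pure-green pixel
img = [[(255, 255, 255), (0, 255, 0)]]
gray = rgb_to_gray(img)
print(gray)  # white maps to ~255, pure green to ~149.7
```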
In the fourth step, the operating voltage of the previous step is first fixed, and the illuminance distribution of the light source is again captured by CCD imaging, with the fifth feature-point position as the center of the output image, to obtain the uncorrected grayscale values at the nine or more feature-point positions.

From the third and fourth steps, nine or more groups of "feature-point grayscale correction coefficients" are obtained (grayscale correction coefficient = corrected grayscale value / uncorrected grayscale value). A neural network then uses these correction coefficients to form a grayscale correction matrix, and this matrix carries out the correction procedure required for values taken from digital-camera images. Using this "grayscale correction matrix" together with the "fitted curve of corrected grayscale value versus luminance", a simulated two-dimensional luminance distribution is obtained.

The "curve fitting" mentioned in the steps above is explained further here. Curve fitting means representing a set of discrete data by an approximating curve equation. The mathematical model established by curve fitting is usually single-input/single-output (SISO), so its behavior can be represented by a single curve.

Suppose the observed data are as shown in the fifth figure. The table there was produced by supplying the backlight module with voltages from 10.0 V to 12.0 V in steps of 0.2 V; at each voltage the luminance of the backlight module's center feature point was measured with the luminance meter, together with the corresponding value (grayscale value) of the center feature point obtained from the CCD photograph. Both are vectors of length 10. To predict the relative luminance value for any grayscale value from 0 to 255, a mathematical model must be established from these data and the prediction made from that model.

Using the data listed in the fifth figure, Matlab's Curve Fitting Toolbox is used to fit the curve of luminance value against corrected grayscale value, with the corrected grayscale value as input and the luminance value as output; in the curve-fitting settings, Smoothing Spline is selected as the fit type.
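The fitted curve serves as a lookup from corrected grayscale value to luminance. Matlab's Smoothing Spline has no one-line standard-library equivalent, so the sketch below uses piecewise-linear interpolation over an illustrative calibration table (hypothetical numbers, not the fifth figure's data) to show the same lookup idea:

```python
from bisect import bisect_left

# Hypothetical calibration table: corrected grayscale value -> luminance in nit
gray_pts = [60, 80, 100, 120, 140, 160, 180, 200, 220, 240]
lum_pts = [310, 520, 750, 990, 1250, 1530, 1820, 2130, 2450, 2780]

def luminance_of(g):
    """Piecewise-linear stand-in for the smoothing-spline fit."""
    i = bisect_left(gray_pts, g)
    if i == 0:
        return lum_pts[0]      # clamp below the table
    if i == len(gray_pts):
        return lum_pts[-1]     # clamp above the table
    g0, g1 = gray_pts[i - 1], gray_pts[i]
    l0, l1 = lum_pts[i - 1], lum_pts[i]
    return l0 + (l1 - l0) * (g - g0) / (g1 - g0)

print(luminance_of(150))  # 1390.0, halfway between the 140 and 160 entries
```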
Next, how the grayscale correction coefficients are used by a neural network to form the grayscale correction matrix is explained. According to the literature, in the illuminance distribution image of an ideal, uniform surface light source photographed through a CCD (as shown in the sixth figure), the grayscale value is highest near the center and falls off with distance from the center, and the derivation above predicts a cos⁴α error. In practice, however, the uniformity of the surface light source itself is usually not so ideal, and the errors a digital camera introduces during imaging are not limited to cos⁴α. The present invention therefore uses a backpropagation neural network to perform this more complicated but more precise correction: several input vectors and their corresponding target vectors are set, some parameters are chosen, and after a convergence test the correct network mapping is obtained and applied as a grayscale correction matrix, which converts the uncorrected grayscale matrix of the original image (as shown in the seventh figure) into a corrected grayscale matrix, forming a correct two-dimensional luminance distribution image (as shown in the eighth figure).

A so-called neural network is a computing system composed of many highly interconnected artificial neurons (or processing units). As shown in the ninth figure, a neural network generally has three layers: an input layer, a hidden layer, and an output layer. The hidden layer may contain one or more artificial neurons; each artificial neuron can be regarded as an independent processor, and when there are two or more of them they operate in parallel.

The computation of a backpropagation neural network divides into a learning process and a recall process. The learning process has two phases: a forward pass and a backward pass. The forward pass starts from the input layer and proceeds through the hidden layer to the output layer, propagating layer by layer and computing the output of each layer's processing units with a nonlinear transfer function, up to the last layer of the network. The backward pass propagates backwards from the output layer; this phase computes the error and updates the link values, correcting them according to the difference between the output vector and the target vector: the difference signals are fed back into the network, the link values are recomputed and updated, and the network's output vector is brought closer and closer to the target vector, completing the network's training. This learning process is usually performed one training example at a time, until all training examples have been learned and the network has converged. The recall process then uses the link values fixed during learning: an input vector is presented and the output vector is estimated. The calculation by which the link values are modified in a backpropagation network is explained further below.

In a backpropagation network, the output value of the j-th neuron in layer n is a nonlinear function of the weighted sum of the outputs of the neurons in layer n−1:

  yⱼⁿ = f(netⱼⁿ)  (21)

where yⱼⁿ is the output value of the j-th neuron in layer n and f is the transfer function.
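The forward and backward passes just described can be illustrated in miniature. The sketch below (illustrative only, not the patent's Matlab network) trains a single sigmoid neuron by gradient descent on two hypothetical samples:

```python
import math
import random

def f(net):
    """Nonlinear transfer function (sigmoid)."""
    return 1.0 / (1.0 + math.exp(-net))

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # link values
b = random.uniform(-1, 1)                           # threshold
eta = 0.5                                           # learning rate

samples = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]    # (input vector, target)

for cycle in range(2000):                  # repeated learning cycles
    for x, t in samples:
        net = w[0] * x[0] + w[1] * x[1] - b
        y = f(net)                         # forward pass
        delta = (t - y) * y * (1.0 - y)    # error signal: -dE/dnet for a sigmoid
        w[0] += eta * delta * x[0]         # link-value update: dw = eta * delta * x
        w[1] += eta * delta * x[1]
        b -= eta * delta                   # threshold enters net with a minus sign

for x, t in samples:                       # recall: outputs approach the targets
    print(round(f(w[0] * x[0] + w[1] * x[1] - b), 2), "target", t)
```

After enough learning cycles the outputs sit close to the targets, which is the convergence criterion the text describes.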
The weighted sum netⱼⁿ can be written

  netⱼⁿ = Σᵢ wᵢⱼⁿ·yᵢⁿ⁻¹ − bⱼⁿ  (22)

where yᵢⁿ⁻¹ is the output value of the i-th neuron in layer n−1, wᵢⱼⁿ is the link value between the i-th neuron of layer n−1 and the j-th neuron of layer n, and bⱼⁿ is the threshold of the j-th neuron.

A backpropagation network follows a supervised learning model, and the purpose of training is to reduce the gap between the network output and the target output, so the learning quality is usually expressed by an error function E, defined as

  E = ½·Σⱼ (tⱼ − yⱼ)²  (23)

where tⱼ is the target output value of the j-th neuron and yⱼ is the network output value of the j-th neuron in the output layer. The difference between the two is used to adjust the link values so that the error function E is minimized; that is, the network's learning procedure uses the steepest-descent method to search for the optimal solution of E, the minimum sum of squared errors. Each time a training datum is presented, the link values are adjusted slightly, the direction and magnitude of the adjustment being proportional to the sensitivity of the error function to the link value:

  Δwᵢⱼ = −η·∂E/∂wᵢⱼ  (24)

where η is the learning rate, which determines the step size of the steepest-descent correction. Applying the chain rule of calculus gives

  Δwᵢⱼ = η·δⱼ·yᵢⁿ⁻¹  (25)

where δⱼ = −∂E/∂netⱼ; this is the key formula of the backpropagation algorithm. The learning process is usually performed by presenting one training example at a time; presenting all the training examples once is called a learning cycle, and learning cycles are repeated until the network converges.

In the luminance-distribution measurement steps described earlier, the third and fourth steps yield nine or more groups of feature-point grayscale correction coefficients. In the backpropagation application, the training part of the neural network takes three input vectors, namely "(X-axis value of each luminance feature point − X-axis value of the maximum-luminance feature point)", "(Y-axis value of each luminance feature point − Y-axis value of the maximum-luminance feature point)", and "(original grayscale value of each luminance feature point − original grayscale value of the maximum-luminance feature point)", and one target vector, "(grayscale correction coefficient of each luminance feature point / grayscale correction coefficient of the maximum-luminance feature point)", divided into training examples, test examples, and validation examples.

As for the parameter settings: one hidden layer is normally used. The more processing units in the hidden layer, the slower the convergence, though a smaller error value can be reached; too many units do not necessarily reduce the error on the test examples and merely increase the execution burden and time, while too few cannot capture the interactions among the input variables and so give a larger error; and the more numerous they are, the more complex the network and the slower the convergence. For the learning rate, the maximum stable learning rate is normally used. A larger learning rate gives a larger correction to the network weights and approaches the minimum of the function faster, but too large a rate overcorrects the weights, causing numerical oscillation that prevents convergence. In practice, a value between 0.1 and 1.0 is chosen by experience, or the learning rate is adjusted automatically.

From the above description, the further technical content of the "curve fitting" and "neural network" used in the measurement method of the invention can be understood, and the measurement embodiment described above can be summarized in the following steps:

a. Apply different operating voltages to a surface light source and, at the same designated position on the surface light source, take its luminance value under each operating voltage.
b. Apply the different operating voltages to the surface light source, use a CCD to capture an image of the same designated position under each operating voltage, and convert each image into a grayscale image.
c. Take an original grayscale value from the designated position of each grayscale image; the original grayscale value is proportional to the luminance value.
d. From the corresponding groups of original grayscale values and luminance values, derive their functional relationship by curve fitting.
e. Define a plurality of feature-point positions on the surface light source, apply a fixed operating voltage to the source, capture an image of the source at that voltage with the CCD, obtain the uncorrected grayscale value of the surface-light-source image at each feature-point position, and then, from the functional relationship formed by curve fitting, inversely obtain the corrected grayscale value at each feature-point position.
f. From the corresponding groups of uncorrected and corrected grayscale values, obtain the "feature-point grayscale correction coefficients" (grayscale correction coefficient = corrected grayscale value / uncorrected grayscale value).
g. From the "feature-point grayscale correction coefficients", use the neural network to form a grayscale correction matrix.
h. Correct the images captured by the CCD with the grayscale correction matrix.
i. Convert the grayscale-corrected image, through the "fitted curve of corrected grayscale value versus luminance", into a two-dimensional luminance distribution image.

With the above method, the images of the surface light source under test captured by a digital camera can be corrected to obtain a two-dimensional luminance analysis of significantly improved reliability. Because the analysis result may be disturbed by noise, the result can be filtered to remove that interference; one feasible filtering technique is the median filter.

The two-dimensional luminance distribution measurement above is defined as a "self-measurement" mode: the grayscale correction matrix on which the simulated two-dimensional luminance analysis is based is built mainly from the grayscale values and luminance values measured on the light source under test itself, so when two-dimensional luminance analysis is performed on different surface light sources, a separate grayscale correction matrix must be built for each. Besides this mode, a fixed grayscale correction matrix can also be used for the simulated two-dimensional luminance analysis: the "self-measurement" method above is first applied to one backlight module to obtain its fitted curve of corrected grayscale value versus luminance and its grayscale correction matrix; that "grayscale correction matrix" and "fitted curve of corrected grayscale value versus luminance" are then used to perform corrected two-dimensional distribution measurements on other backlight modules of the same batch or the same type. This mode is defined as a "system measurement" mode.

[Brief Description of the Drawings]

First figure: a schematic of the image-capture principle of a digital camera.
Second figure: a schematic of the error present in the illuminance values captured by a digital camera.
Third figure: a schematic of the hardware architecture of the invention.
Fourth figure: a schematic of the feature-point positions defined on the surface light source by the invention.
Fifth figure: a table of the different luminance values and grayscale values produced by the operating voltages applied by the invention to the backlight module.
Sixth figure: the uncorrected-grayscale luminance distribution image of an ideally uniform surface light source captured by a CCD.
Seventh figure: an image of a backlight module captured by a CCD (before grayscale correction).
Eighth figure: the luminance distribution image of a backlight module captured by a CCD (after neural-network grayscale correction).
第九圖:係一類神經網路之示意圖。 【主要元件符號說明】 (1〇)翻拍架 (ή)測試平台 (12)固定座Figure IX: Schematic diagram of a class of neural networks. [Main component symbol description] (1〇) remake frame (ή) test platform (12) fixed seat
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW97100006A TW200931000A (en) | 2008-01-02 | 2008-01-02 | Neural network-based two-dimensional luminance measurement method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW97100006A TW200931000A (en) | 2008-01-02 | 2008-01-02 | Neural network-based two-dimensional luminance measurement method |
Publications (2)
Publication Number | Publication Date |
---|---|
TW200931000A TW200931000A (en) | 2009-07-16 |
TWI352805B true TWI352805B (en) | 2011-11-21 |
Family
ID=44865151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW97100006A TW200931000A (en) | 2008-01-02 | 2008-01-02 | Neural network-based two-dimensional luminance measurement method |
Country Status (1)
Country | Link |
---|---|
TW (1) | TW200931000A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104374551B (en) * | 2014-11-24 | 2017-05-10 | 深圳科瑞技术股份有限公司 | LED luminance uniformity detection method and system thereof |
CN111504608B (en) * | 2019-01-31 | 2022-10-04 | 中强光电股份有限公司 | Brightness uniformity detection system and brightness uniformity detection method |
2008
- 2008-01-02: TW application TW97100006A filed, published as TW200931000A — not active (IP Right Cessation)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI501167B (en) * | 2012-05-10 | 2015-09-21 | Qualcomm Inc | Method and apparatus for strategic synaptic failure and learning in spiking neural networks |
US9208431B2 (en) | 2012-05-10 | 2015-12-08 | Qualcomm Incorporated | Method and apparatus for strategic synaptic failure and learning in spiking neural networks |
US9015096B2 (en) | 2012-05-30 | 2015-04-21 | Qualcomm Incorporated | Continuous time spiking neural network event-based simulation that schedules co-pending events using an indexable list of nodes |
Also Published As
Publication number | Publication date |
---|---|
TW200931000A (en) | 2009-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200110271A1 (en) | Photosensor Oculography Eye Tracking For Virtual Reality Systems | |
US8319857B2 (en) | Apparatus and method for correcting digital color photographs | |
JP5293355B2 (en) | Glossiness evaluation method, glossiness evaluation apparatus, image evaluation apparatus having the apparatus, image evaluation method, and program for executing the method | |
TWI352805B (en) | ||
Kang et al. | Three-dimensional flame measurements using fiber-based endoscopes | |
CN110084260A (en) | A kind of semi-supervised method for training more pattern identifications and registration tools model | |
EP3314571B1 (en) | Method and system for detecting known measurable object features | |
WO2017195993A3 (en) | Method and electronic device for verifying light source of images | |
JP2017044596A (en) | Film thickness measurement device and film thickness measurement method | |
JP2018205037A (en) | Evaluation device, evaluation program, and method for evaluation | |
JP2020204880A (en) | Learning method, program, and image processing device | |
Rao et al. | Neural network based color decoupling technique for color fringe profilometry | |
Zemblys et al. | Making stand-alone PS-OG technology tolerant to the equipment shifts | |
JP2016095584A (en) | Pupil detection device, pupil detection method, and pupil detection program | |
JP2017147638A (en) | Video projection system, video processing apparatus, video processing program, and video processing method | |
TWI391639B (en) | Method and system compute backlight module luminance by using neural networks trained by averages and standard deviations of image intensity values of backlight modules | |
JP2004302581A (en) | Image processing method and device | |
Outlaw et al. | Smartphone colorimetry using ambient subtraction: application to neonatal jaundice screening in Ghana | |
JP6813749B1 (en) | How to quantify the color of an object, signal processor, and imaging system | |
TWI494549B (en) | A luminance inspecting method for backlight modules based on multiple kernel support vector regression and apparatus thereof | |
TW201247373A (en) | System and method for adjusting mechanical arm | |
JPH11237216A (en) | Evaluation method for optical distortion and evaluation device | |
CN106097255B (en) | Background noise characteristic estimation method for point source Hartmann wavefront detector | |
Del Campo et al. | Radial basis function neural network for the evaluation of image color quality shown on liquid crystal displays | |
CN108596888A (en) | A kind of Human Height Real-time Generation based on monocular image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MM4A | Annulment or lapse of patent due to non-payment of fees |