TW201219769A - Optical property measuring device and method thereof - Google Patents

Optical property measuring device and method thereof Download PDF

Info

Publication number
TW201219769A
TW201219769A TW100141596A TW100141596A TW201219769A TW 201219769 A TW201219769 A TW 201219769A TW 100141596 A TW100141596 A TW 100141596A TW 100141596 A TW100141596 A TW 100141596A TW 201219769 A TW201219769 A TW 201219769A
Authority
TW
Taiwan
Prior art keywords
measurement
imaging
pixel
unit
value
Prior art date
Application number
TW100141596A
Other languages
Chinese (zh)
Other versions
TWI532985B (en
Inventor
Akihiro Eguchi
Tomoyuki Shimoda
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Publication of TW201219769A publication Critical patent/TW201219769A/en
Application granted granted Critical
Publication of TWI532985B publication Critical patent/TWI532985B/en

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/21Polarisation-affecting properties

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Testing Of Optical Devices Or Fibers (AREA)
  • Liquid Crystal (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

This invention can measure optical property of large-area optical film rapidly and high precisely. An optical film 12 as an object to be measured is disposed on a planar irradiating portion 14. A circular polarized light is irradiated to the optical film 12 from the planar irradiating portion 14. Measuring pixels E are passed through first photo area to fourth photo area of a CCD camera in a photo portion 15 by moving the optical film 12 in X direction. The measuring pixels E are photoed multi-times by the CCD camera while the measuring pixels E are passing through each photo area. A value obtained by adding the output values obtained from the multi-time photoing as a measure value of the photo area. A Stokes parameter is calculated by the measure values of the four photo areas. And the Stokes parameters of all the measuring pixels E on the optical film 12 are obtained.

Description

201219769 六、發明說明: 【發明所屬之技術領域】 本發明是有關於一種對 光學特性測量裝置及方法。仏予獏的偏光特性進行測量的 【先前技術】 等的且中使用偏光板、視角修正膜、抗反射膜 學膜I)。液晶顯ΪΪ^ =7塑_旨膜(以下稱作「光 她特性而獲得對具有的雙折射(double 光學臈中亦必需具有規定的雙折射特:Γ。於 折射特性在整個面不具有均二於:亥t膜:雙 的圖像顯示中會產生不均。 况下’液日日顯示裝置 因此,在將光學·裝至液晶顯示裝置前 疋否具有所期望的雙折射特 而要于膜 腔昭心了 籌來進行:對成為測量對象的光學 ^射測1光的光源,接收自絲膜發出 2以對絲關偏祕性等進㈣量的她差=偏光 =如’專利文獻丨巾,藉由在光源與作為受光器的電 (charge coupled device,CCD)相機之間使相 立差板繞光軸旋轉而產生各種偏光狀態。而且,利用ccd ,機對不同的偏光狀態的圖像進行攝像,根據由攝像所獲 伸的圖像群的各像素的亮度值變化來對每個像素算出雙折 射特性。而且’專利文獻2中,揭示了 一種在規定方^上 201219769 ^送中的鮮_雙折則她進行在線 (on line)測量201219769 VI. Description of the Invention: [Technical Field] The present invention relates to a device and method for measuring optical characteristics. Measured by the polarization characteristics of 貘 [Prior Art] A polarizing plate, a viewing angle correction film, and an anti-reflection film I) are used. Liquid crystal display ^ = 7 plastic _ film (hereinafter referred to as "light her characteristics to obtain the birefringence of the pair (double optical yoke must also have a specified birefringence: Γ. The refractive properties do not have the entire surface Second: Hai Te film: double image display will produce unevenness. In the case of 'liquid day display device, therefore, before installing the optical device to the liquid crystal display device, whether it has the desired birefringence is special The membrane cavity is made up for the purpose of: the optical source that is the object of measurement, the light source that receives the measurement, and the light source that is sent from the silk film to send out the polarity of the wire. 4. The difference is as follows: The wiper generates various polarization states by rotating the phase difference plate around the optical axis between the light source and a charge coupled device (CCD) camera as a light receiver. Moreover, using ccd, the machine pairs different polarization states. The image is imaged, and the birefringence characteristic is calculated for each pixel based on the change in the luminance value of each pixel of the image group obtained by the imaging. Further, in Patent Document 2, a method is disclosed in 201219769 In the fresh _ double fold, she is online (o n line) measurement

Ltrt且,專利文獻3中揭示了如下裝置··於使膜一 邊進行測量時’考慮到⑽相機的視野尺寸與 膜的雙折進行2進仃攝像’精此對大面積的光學 [先前技術文獻] [專利文獻] [專利文獻1]日本專顺開2__229279號公報 利文獻2]日本專利特開平5_346397號公報 [專利文獻3]日本專利特開2GG7_263593號公報 近年來’因液晶顯示裳置大型化,故組襄在 f膜亦使用具有大面積的光學膜。隨此,尋求能夠對呈有 光學膜的雙折射特性進行測量的裝置或方法:: 如’在20 _左右的液晶顯示裝置中需 大小的絲朗檢查。 TA3&度的 先前而二上述專利文獻1〜專利文獻3中所示的 學膜的雙折射的理由,而存在無法對大面積的光 ί文3的^^下速且高精度地進行測量的問題。在專 和文馱1的If況下,理想的是使用遠心 來作為攝像透鏡,但透鏡的視野最多—邊為 右,因而無法以一個視野來檢查A3尺寸。* 因此,需要相應於CCD相機的 多個測量區,且針對該每侧量區來折=二分, 此__檢4。鱗,各職 201219769 偏光狀態進行測量而必需在使CCD相機靜止的狀態下一 邊使相位差板的角度旋轉一邊進行攝像。因此,在規定的 測量區中重複進行攝像(靜止)—朝向另一測量區移動 CCD相機—在另一測量區中進行攝像(靜止)—...,因而 存在測量怎麼也不進展而耗費時間的問題。 、-而^,專利文獻2中,將光學膜上的一點沿搬送方向 進行測量。ϋ無制的相位差板械轉,故無需使相機或 ,停止便可進行測量。對此,為了將—點測量擴展為面測 1,而考慮將該測量裝置在光學膜的寬度方向上排列。缺 而,在專利文獻2所示的測量裝置的測量空間解析度例: 為1 mm平方的情況下,若將該測量裝置在Α3尺寸的光學 膜的寬度方向上配置,則全部需要294台裝置,因而可以 ,無,實現。此處所使㈣「測量空間解析度」的用語是 :測量縣上的1侧量關尺寸,最終將測量結果的分 布圖像化時則成為該圖像的像素尺寸。 寻刊又默 —— ,丨…和機甲,在^ :TL件上設置著光學膜的偏光狀態下的測量所需的相 因而無需如專敎獻1般在每讀像時使相位差本 ^然而,該專利文獻3中CCD相機所具有的雜訊(_ 的問題無法解決,a而無法進行高精度的測量。 專利讀3巾,i視野尺寸_4是根據由CCD才 ,1次攝像所獲得的i張圖像來進行。CCD相機中,目 ^同-條件下進行連續攝像時,藉由因CCD的雜^ 起的免度的偏差,每次賴得的輸出值巾亦會產生㈣ 6 201219769 動。亦即’專利文獻3中測量的再現必定不充分。 【發明内容】 本發明中,在使用二維影像感測器來對具有比該二維 影像感測器的攝像視野更大面積的光學膜的偏光特性進行 測量的情況下’因進行多個偏光狀態的攝像而不再有使影 Ϊ 光=上靜止的必要性,並且對於為了提高測 感測器停止ϊ可s圖而言’亦無需使影像 仃因此,本發明的目的在於提供一種 迅速且具有向測量精度的偏光特性測量裝置及方法。 構,性測量裝置的特徵在於包括:投光機 特定的偏光照明;攝像機構,將波長 板在第1方向上對齊而排列,在第!方向上實現至少2 =====::像感測器被劃分為對 移於測量;象而:=上 綱移動且描機構而進行的攝像機構 行多次攝像所獲得的輪象區中進 而算出該攝像區的測量像素進仃相加’ 值來算出每個測量像夸土χ各攝像區收集的測量 Parameter )。較佳為上n的光的司托克士參數(Stokes 攝像元件來進行攝像,測器利用包含多個像素的 個像素加以結合而成的#時,針對每個將鄰接的多 為在測量之前以結而輸出一個輸出值。較佳 疋為早位而求出偏光傳遞矩陣 201219769 =ansfer matrix) ’該偏光傳遞矩陣表示自測量對象 各攝像區輸出的輸出值的關係,使Ϊ 述各攝像區輸出的輸出值來求: 如圖1所* ’ g)在作為測量對㈣絲膜丨 將測量空間解析度作為已知的條件而加以規定中 測量結果是每個將光學膜12以測量空晴析度的大^的 分化而成的微小區的測量結糾集合,亦即光 、知 分布資訊。在作為測量對象的光學膜12上視作存以兩 =間解析度而假想地細分化而成的區,對該 = 統稱,以後稱作「測量像素E」。 1進仃 較佳為上述影像❹彳^以結合單元為單位而輸出 攝像所獲得的輸出值。此處所謂的「結合單元」是將2 攝f單讀橫歸結規❹數而作為一個大 的早兀(、,OS早疋),該單元的輸出值設為將結合單元 二?所=:早兀的輸出值平均化而得的值。將該各個 、、,〇&早兀進仃統稱,以後稱作「結合單元CP」。 一如此以結合單元cp為單位來進行攝像的理由為如下 所不。圖2的曲線表示使用12位元輸出的咖相機來作 為影像感測器時的輸出值的偏差。該曲線是將偏差範圍作 圖而成’該偏差$_是對結合了規定數量的像素而成的結 合單元輸人相對明亮的光(輸出值為侧附近的光),單 純,行;256 -人測量後將所有輸出值中由最大值減去最小值 所得4曲線中’縱轴表示CCD相機的輸出值的偏差, 201219769 合單元數’魏合單元數表示像素的結合數。 疒:二本結合單元數為4的結合單元具有縱2個像素、 検2個像素。再者,圖2中,里^ 出值的偏差範圍(測量&要、’、 勺輪 #、〇、1 固C測里、、,。果),虛線表示根據黑圓點所獲得 &以線。根據該結果可知,附在ccd相機的輪出 ==Τ元㈣大致負1/2次方成比例,且具有隨 機雜戒(random noise )的性質。 …ϋ㈣所示’每當以1單元、4單元、9單元、b =、...而增大單元尺寸(結合單元數)時,CCD相機的 〜值的偏差減少。因此,若未將結合單元數一定程度地 增大來進彳了攝像’貞彳CCD相機的輸出值的偏差大,因而 只^增加攝像數及進行平均化等m高精度地進行 測里。另外,為了獲得該曲線而使用的CCD為1/1.8英吋, 2〇〇萬像素,且攝像單元尺寸為4 4 μιη見方。、 構成結合單元CP的像素的個數較佳為如卜4、9...般 自、1、開始的N2JN為自然數)的級數的數,最大值設“ 上述影像感測器上成像的測量對象的測量空間解析度(亦 即測量像素E的大小)為結合單元的大小的數。 圖3表示攝像部15。此處表示使用CCD來作為影像 感測器的示例’但亦可使用互補金氧半導體 (complementary metal oxide semiconductor,CMOS )來作 為影像感測器。且表示以每丨塊波長板實現i種偏光特性 而使用4塊波長板的示例。波長板的數量亦不僅限於如後 所述的4種。攝像部15的構造包括相機外殼4〇、cCD相 9 201219769 機41、遠心透鏡42、CCD相機旋轉機構43、第1波長板 〜第4波長板45〜48及偏光板49。相機外殼4〇具有大致 長方體形狀’且用以安裝第1波長板〜第4波長板45〜48 以及偏光板49的開口 40a (參照圖4)形成有1個。在該 相機外殼40内設置著CCD相機41、遠心透鏡42及CCD 相機旋轉機構43。再者,CCD相機旋轉機構43旨在使結 合單元的二維排列的1方向與掃描機構的掃描方向一致來 進行調整。再者,圖3中的箭頭X表示測量對象的相對 動方向(測量對象或攝像部的哪一個移動均可)。 遠心透鏡42使用兩側遠心透鏡或物體側遠心透鏡。、土 心透鏡42是將測量對象的像在CCD上以乘以該透鏡p 的透鏡倍率的大小而成像(關於倍率,使用的是1倍〜Μ 倍的倍率)。因遠心透鏡的深焦點深度及捕捉與光軸I行的3 光束的能力,透過各波長板的光到達影像感測器而不會沪 合在一起,並形成與各個波長板相對應的個別的區。以後^ 將可接收通過該各波長板的光束的、實質由波長板义 的CCD上的各區統稱為「攝像區」。 ~ 77 利用圖4來說明光學膜12上的某一個測量像素 過第1波長板〜第4波長板45〜48後由攝像部15以^ 方式進行測量。 赞 土再者,圖4中,為了使說明容易理解,圖中並不包人 遠心透鏡42 _小或放纽果、進而不包含倒置成二 果’而是將光學膜12上的1點以在CCD55Ji以等倍 成像的方式進行描繪。 ° 九 201219769 首先,當光學膜12上的某測量德去 — 而^像部15的視野内時,自測量像素^帝^ ::進:^板45的區塊中且經—段時間後橫= 免。第1波長板45在遠心透鏡42的作用下 乂區 上對應的攝像區50中成像,因而在測量像素£ ” 波長板45的區塊的過程中,測量傻 /、尹、弟1 -的區塊,相應於測量像素二+距=切攝像區 次攝像。同樣地,第2波長板〜第 多 區51〜攝像區53中忐儋、目,丨軎你*坂6 48在攝像 攝像。轉Q 53巾祕’缝像素£亦在此處進行多:欠 陣,的f合單元CR為單位而求出偏光傳遞矩 參數與自影像感測器輸出的輸出值的關係。 克士 獲得的關於測量像素E的咖的輸出值, 托克士參數與對該測量像素 具有的偏光傳遞矩陣的矩陣積。 :對同每:攝加,且作為該攝二 偏光傳二單元:的 值與光的司托克:參=二 種類的數量據攝像區的數量(或實現的偏先狀態的 故::===像區的數量即4種’ J托克 > 數。该步驟是針對每個測量像 201219769 =進行,藉此可算出測量對象的司把克士參數的面分 較佳為上述影像感測器跨及鄰接的 規定的測量像素進行攝像。較佳為上述 將一結合單元中對規定的=== 平均值相乘而得的 素所_比例與該像素的輸出平均後的像 JL中述移動機構使測量對象或攝像機構中的至少 進行的測』= = 藉由上述攝像機構 摄傻第 攝像結束時,使測量對象或 ==至少其中一個在與第1方向為直角且與測量 對象千仃的第2方向上移動。 為於上述移誠構制量縣或攝像機構中的至 =-個在與第1方向為直角且與測量對象平行的第2 2上移動的情況下,維持著攝像機構與投光機構的第2 介二j位置關係而使其雜,亦即以位置關不發生變 其移動,且將投光機構的偏光照射範圍縮小至 攝像機構的y照射第2方向的視野範_程度。 車乂佳為藉由上述波長板而實現的偏光的種類的數量為 40。較佳為上述波長板具有與遲相量為至I%。或 〇至290。中任一個波長板相同的遲相效果。 本發明的光學特性測量方法的特徵在於:使用攝像機 構,該攝像機構將波長板在第1方向上對齊而排列,在第 12 201219769Ltrt, and Patent Document 3 discloses the following apparatus: [When measuring the film side, it is considered that (10) the field of view of the camera and the birefringence of the film are subjected to two-in-one imaging.] This is a large-area optical [Previous Technical Literature] [Patent Document 1] Japanese Patent Laid-Open No. Hei. No. 5-346397 (Patent Document 3) Japanese Patent Laid-Open Publication No. 2 GG7-263593 Therefore, the group film also uses a large-area optical film in the f film. Accordingly, a device or method capable of measuring the birefringence characteristic of an optical film is sought:: A silky inspection of a size required in a liquid crystal display device of about 20 Å. The reason for the birefringence of the film to be described in the above-mentioned Patent Documents 1 to 3 of the above-mentioned patents 1 to 3, and the measurement of the large-area light source 3 cannot be performed with high speed. problem. In the case of the special and the If1, it is desirable to use the telecentric as the imaging lens, but the lens has the most field of view - the side is right, so the A3 size cannot be checked with one field of view. * Therefore, a plurality of measurement areas corresponding to the CCD camera are required, and for each of the side measurement areas, the value is divided by two points, and this __4 is detected. Scales, each job 201219769 When measuring the polarization state, it is necessary to rotate the angle of the phase difference plate while the CCD camera is stationary. Therefore, the imaging (stationary) is repeatedly performed in the prescribed measurement area - the CCD camera is moved toward the other measurement area - the imaging (still) is performed in the other measurement area - and thus the measurement does not progress and takes time. The problem. In addition, in Patent Document 2, a point on the optical film is measured in the transport direction. ϋ Unpredictable phase difference plate rotation, so you can take measurements without stopping the camera or . In this regard, in order to expand the point measurement to the surface measurement 1, it is considered to arrange the measuring devices in the width direction of the optical film. In the case of the measurement space resolution of the measuring device shown in Patent Document 2: When the measurement device is placed in the width direction of the Α3 size optical film, all 294 devices are required. And thus can, no, achieve. Here, the term "measurement spatial resolution" is used to measure the size of the 1 side of the county. When the distribution of the measurement results is finally imaged, the pixel size of the image is obtained. The search and the silence - , 丨 ... and the mech, set the phase required for the measurement of the optical film in the polarized state on the ^ : TL piece, thus eliminating the need to make the phase difference at each reading as in the special However, the problem of the CCD camera in the patent document 3 (the problem of _ cannot be solved, and a high-precision measurement cannot be performed. Patent read 3 towel, i view size _4 is based on CCD, 1 time camera The obtained i images are taken. In the CCD camera, when the continuous imaging is performed under the same conditions, the output value of each ray is also generated by the deviation of the CCD. 6 201219769. That is, the reproduction measured in Patent Document 3 is necessarily insufficient. [Invention] In the present invention, a two-dimensional image sensor is used to have a larger field of view than the two-dimensional image sensor. In the case where the polarization characteristic of the optical film of the area is measured, 'there is no need to cause the image to be reflected on the upper side due to the imaging of the plurality of polarization states, and it is necessary to increase the stop of the sensor. Words also do not need to make images, therefore, the purpose of the present invention is Provided is a rapid polarization measuring device and method for measuring polarization characteristics. The measuring device includes: a polarizer illumination specific to a light projector; and an imaging mechanism that aligns the wavelength plates in the first direction. At least 2 =====: in the direction of the !!: the image sensor is divided into a pair of images obtained by moving the camera; the image is obtained by multiple imaging of the camera mechanism In the area, the measurement pixel of the imaging area is further calculated to add a value to calculate a measurement parameter collected by each measurement area of each measurement area. Preferably, the Stokes parameter of the light of the upper n (the Stokes imaging element is used for imaging, and the detector uses a combination of pixels including a plurality of pixels to form a #, each of which will be adjacent to each other before the measurement. And outputting an output value. Preferably, the polarization transfer matrix is obtained in the early position. 201219769 = ansfer matrix) 'The polarization transfer matrix indicates the relationship between the output values output from the respective imaging areas of the measurement object, so that the output of each imaging area is described. The output value is obtained as follows: * g) as the measurement pair (4) silk film 丨 Measure the spatial resolution as a known condition. The measurement result is that each optical film 12 is measured by the measurement. The measurement of the micro-areas of the differentiation of the large degree is the set of knots, that is, the information of light and knowledge distribution. The optical film 12 to be measured is regarded as a region which is virtually subdivided by the resolution of two =, and is collectively referred to as "measurement pixel E" hereinafter. 1 input 仃 It is preferable that the above image ❹彳 is outputted by the imaging unit in units of combined units. Here, the so-called "combination unit" is a large early 兀 (, OS early 将) that takes 2 shots f single-reading horizontally, and the output value of the unit is set to combine unit two? ==: The value of the output value of the early 平均 is averaged. The individual, , 〇 & 兀 amp 兀 兀 兀 兀 兀 兀 兀 兀 兀 兀 兀 。 。 。 。 。 。 。 。 The reason why the imaging is performed in units of the combination unit cp is as follows. The graph of Fig. 2 shows the deviation of the output value when the 12-bit output coffee camera is used as the image sensor. The curve is a plot of the deviation range. The deviation $_ is a relatively bright light that is input to a combination unit that combines a predetermined number of pixels (the output value is light near the side), simple, line; 256 - In the 4 curves obtained by subtracting the minimum value from the maximum value among all the output values after the measurement, the vertical axis represents the deviation of the output value of the CCD camera, and the number of units in the 201219769 unit number represents the number of combinations of pixels.疒: Two combined units with 4 combined units have 2 pixels in length and 2 pixels in pixels. Furthermore, in Fig. 2, the deviation range of the value of the value (measurement & want, ', scoop wheel #, 〇, 1 solid C, 、, 、, fruit), the dotted line indicates the obtained according to the black dot & Take the line. According to the results, it is known that the rotation of the ccd camera ==Τ (4) is approximately proportional to the negative 1/2 power and has the property of random noise. ...ϋ(4)' Whenever the unit size (the number of combined units) is increased by 1 unit, 4 units, 9 units, b =, ..., the deviation of the value of the CCD camera is reduced. Therefore, if the number of coupling units is not increased to a certain extent, the deviation of the output value of the image pickup camera is large. Therefore, it is only necessary to increase the number of images and perform averaging, and the measurement is performed with high precision. In addition, the CCD used to obtain the curve is 1/1.8 inch, 2 million pixels, and the imaging unit size is 4 4 μηη square. The number of pixels constituting the combining unit CP is preferably a number of stages such as the number of the first and second N2JNs, and the maximum value is set to "the imaging on the image sensor". The measurement spatial resolution of the measurement object (that is, the size of the measurement pixel E) is the number of the size of the combining unit. Fig. 3 shows the imaging unit 15. Here, the CCD is used as an example of the image sensor, but it can also be used. Complementary metal oxide semiconductor (CMOS) is used as an image sensor, and shows an example in which four kinds of wavelength plates are used to realize i-type polarization characteristics per block wavelength plate. The number of wavelength plates is not limited to The four types of imaging units 15 include a camera housing 4, a cCD phase 9, a 201219769 machine 41, a telecentric lens 42, a CCD camera rotating mechanism 43, first to fourth wavelength plates 45 to 48, and a polarizing plate 49. One of the opening 40a (see FIG. 4) for mounting the first to fourth wavelength plates 45 to 48 and the polarizing plate 49 is formed in the camera housing 4A having a substantially rectangular parallelepiped shape, and is provided in the camera housing 40. CCD camera 41, far-reaching 42 and CCD camera rotation mechanism 43. Further, the CCD camera rotation mechanism 43 is intended to adjust the one direction of the two-dimensional arrangement of the coupling unit in accordance with the scanning direction of the scanning mechanism. Further, the arrow X in Fig. 3 indicates measurement. The relative movement direction of the object (which of the measurement object or the imaging unit can be moved). The telecentric lens 42 uses a bilateral telecentric lens or an object-side telecentric lens. The core lens 42 multiplies the image of the measurement object by the CCD. The lens p is imaged by the lens magnification (for magnification, 1 to Μ magnification is used). The depth of the telecentric lens and the ability to capture the 3 beams of the optical axis I pass through the wavelength plates. The light reaches the image sensor and does not come together, and forms individual regions corresponding to the respective wavelength plates. Later, the CCD can be received by the wavelength plate of the light beam passing through the wavelength plates. Each district is collectively referred to as a "camera area." As shown in Fig. 4, one of the measurement pixels on the optical film 12 is measured by the imaging unit 15 after passing through the first to fourth wavelength plates 45 to 48. Again, in Fig. 4, in order to make the description easy to understand, the figure does not enclose the telecentric lens 42_small or put the fruit, and thus does not contain the inverted fruit, but instead takes 1 point on the optical film 12 The CCD 55Ji is depicted in a manner of equal magnification imaging. ° Nine 201219769 First, when a certain measurement on the optical film 12 goes, and within the field of view of the image portion 15, the self-measurement pixel ^:::: plate 45 is in the block and after a period of time = exempt. The first wave plate 45 is imaged in the corresponding imaging region 50 on the crotch region under the action of the telecentric lens 42, and thus the region of the silo/, Yin, and Di-- is measured in the process of measuring the pixel of the pixel plate 45. Block, corresponding to the measurement pixel two + distance = cut imaging area sub-camera. Similarly, the second wavelength plate ~ the multi-region 51 ~ imaging area 53 in the 忐儋, mesh, 丨軎 you * 坂 6 48 in the camera. Q 53 towel secret 'slit pixel £ is also used here: the negative unit, the f-unit CR is the unit and the relationship between the polarization transmission moment parameter and the output value output from the image sensor is obtained. The output value of the coffee of the pixel E, the matrix product of the Tokas parameter and the polarization transfer matrix possessed by the measurement pixel. : The same as each: the addition, and the value of the two elements of the second polarization:克: = = the number of two types according to the number of imaging areas (or the implementation of the pre-emptive state:: === the number of image areas is 4 kinds of 'J Tok> number. This step is for each measurement image 201219769=Improve, it is possible to calculate the face of the measuring object to be the best The sensor is imaged across a predetermined measurement pixel adjacent to the image. Preferably, the ratio of the primed value obtained by multiplying the average value of the predetermined === in a combination unit to the image of the output of the pixel is averaged. The movement mechanism described above makes at least the measurement performed by the measurement target or the imaging mechanism. ??? = When the imaging mechanism ends by the imaging mechanism, the measurement object or == at least one of them is at right angles to the first direction and is measured. Moving in the second direction of the object Millennium, in the case where the number of the above-mentioned shifting counties or imaging means is shifted to the second 2 which is perpendicular to the first direction and parallel to the measurement target, The second positional relationship between the imaging unit and the light-emitting unit is maintained to be miscellaneous, that is, the position is not changed, and the polarization range of the light-emitting unit is reduced to the second stage of the imaging unit. The number of types of polarized light in the direction is 40. It is preferable that the wavelength plate has a phase of retardation of up to 1% or 290. The same phase effect of a wavelength plate. The optical characteristic measuring method of the present invention is characterized in that a camera mechanism is used which aligns the wavelength plates in the first direction and arranges them at the 12th 201219769.

V 〆〆 ^ f ^ Λ. X 1方向上實現至少4種偏光特性,且包括影像感測器今 影像感測器被劃分為對透過上述波長板的光個別地進行^ 像的攝像區,使攝像機構相對於測量對象而相對地在2 第1方向上移動,將測量對象的各測量像素在各攝像區中L ,行多次攝像所獲得的輸出值相加,並針對每個測量像素 • 算出該攝像區的測量值,並且根據自各攝像區同樣地收集 • 的測量值而針對每個測量像素算出自測量對象發出的朵2 司托克士參數。 X Μ [發明的效果] 根據本發明,因使用如下的攝像機構,即,將偏光特 性不同的至少4種波長板在第1方向上對齊而排列,且包 含對透過上述波長板的各個的光個別地進行攝像的波長板 的4個攝像區,故伴隨著測量對象相對於攝像機構的朝向 第1方向的移動,測量對象上的各測量像素在上述相對移 動過程中連續地進行4種以上的偏光測量攝像。進而,在 一個攝像區内,各測量像素為不同的結合單元而進行多次 • 攝像,且事先測量出各自的測量像素的偏光傳遞矩陣,因 • 此以同一測量的多次測量而實現實質平均處理且可提高信 號/雜訊(signal/noise,S/N)比。如此,一邊使攝像機構 相對於測量對象不停止地進行相對移動,一邊同時達成4 種以上的偏光狀態測量及多次攝像。在測量對象的朝向第 1方向的移動結束(即測量結束)後,使攝像對象以相當 於視野範圍的程度在第2方向上移動,藉此可擴大測量 面,可重複地進行朝向第1方向的移動攝像及朝向第2方 13 201219769 向的移動從而能夠進行測量對象的整個面的測量。因進行 4種以上的偏光狀態的測量,故可決定測量司托克士泉 數。藉由對光源的司托克士參數與測量司托克士參數進行 比較’而可算出測量對象的偏光特性。 、如此,根據本發明,可對大面積的光學膜的偏光特性 迅速且高精度地進行測量。例如,在空間解析度為丨mm 見方,軸方位測量精度為0.1。的條件下,先前方法中需要 約10分鐘,而本發明中進行約2分半鐘的測量即可。亦即, 根據本發明,触前方法相比,可達成約4倍的高速化。 【實施方式】 —如圖5所示’本發明的光學特性測量裝置1〇將具有規 疋的雙折射特性的光學膜丨2作為測量對象進行測量。光學 特性測量裝置10中,於安裝在試樣平台13的面照明部Μ 上載置著測量對象的光學膜12。而且,利用自面照明部Μ =的圓偏光(drcularly p〇larized响)的照明光來照明 =j 12,—邊使試樣平台13在χ方向上移動一邊利用 攝像# 15對自該光學膜12發出的光進行攝像。而且,電 取16根據由攝像部15所獲得的輸出值來進行各種解析, 光學膜Π的光學特性。另外,亦可使用橢圓偏光 作為偏光照明。 忒樣平台13藉由X方向移動機構2〇,能夠沿著基台 U上的2根軌道22a、軌道22b而在χ方向上移動。而且, ^向移動機構20包含根據自χ馬達驅動器以輸出的驅 動脈衝進行驅動的祠服馬達。 201219769 同樣地’攝像部1 #丄 上。臂31藉由¥方^ ^在汉置支持台30的臂31 玲由Y方向移動機構33而能夠 的Y方向上移動,並且藉由2方向移動:在構3=: 與X方向或Y方向正交的z方_:構34而此夠在 31在Y方向或Z方向上二方:,。如此,藉由臂 .7 L 门上移動,而攝像部15亦可在γ方向 攝像部二動:再者’朝向Z方向移動的目的是為了進 ^也。的',、、點調整°γ方向移動機構33根據自Y3 達驅,器(未圖示)所輸出的驅動脈衝來進行=自馬 幻、=、自=達驅動器24與γ馬達驅動器的驅動脈衝分 別被發运至X脈衝計數H 26與γ脈衝計數 中。各^衝計數輯所触制鷄脈衝進行計數。由脈 衝計數器計數的值被送至電腦16。電腦16中記憶著每一 脈衝的試樣平台13的移動量與攝像部15的移動量,因此 能夠根據兩脈衝計數器的計數值而f握攝像部15的視野 位於試樣平台13上的哪個位置。 然後,根據圖6的流程圖來對本發明的作用進行說 明。首先,最初進行的是測量準備,此處,以CCD相機 41的結合單το CP為單位來指定攝像部的偏光傳遞矩 陣。戎作業作為初始設定而僅進行一次。所求出的偏光傳 遞矩陣記憶在電腦16内’在初始設定以後使用上述所求出 的偏光傳遞矩陣。 測量準備之後所進行的是校正測量。此處遍及面照明 部14的整個面而以測量解像度的單位來測量自面照明部 14發出的光的司托克士參數(以後記為s參數)。 15 201219769 在作為投光機構的面照明部14中,視 所示的假想的關量解像度為單細細分化邮= 投光像素[」。由此,校厶量 疋技光像素早位L而求出s參數的步驟。 只要無光源變動則無需進行校正測量,但較理想的是 大致在一天的最初測量時來進行。 校正測量之後所進行的是實際測量,此處,在面照明 部二4上放置光學膜12’遍及光學膜12的整個面且以測量 對象測量解像度(即,測量對象Ε) 4 學膜12的光的S參數。 + J置远迅九 最後,將實際測量所得的s參數與照明部的s參數進 行比較而算出測量縣的雙折射特性。錢重要的是 由使實際測量中的各測量像素E的位置與校正測量中的; 投先像素L的位置-致’而在測量中使用測量對象平台所 ^有的計數ϋ。㈣計數!!的#前值㈣定相機的各結合 早7C捕捉到測量對象平台的哪個位置,因而能夠將測量時 所得的各結合單元的輸出值確實齡開為各投光像素L、 或各測量像素E。 以下自Aji準備步驟開始進行詳細敍述。測量準備 步驟疋如下步驟:在實際測量之前,指定本裝置的攝像部 15中所使㈣CCD相機的各結合單元的偏光傳遞矩陣。 可將遠指疋巾所使用的測量機構自身組裝在本裝置中,亦 可與本裝置分離而在外部崎絲傳遞祕測量且使用通 用串列匯流排(Universal Serial Bus,USB )記憶體等的機 201219769 構而僅將> 料存入電腦16中。 以CCD相機41的結合單元為單位來求出 的,光傳Ϊ矩陣的理由在於:將由位於同-波長板區塊内 的夕個測讀素Ε所獲得關量值平均化從而獲得該波長 板中的-個可靠性高的代表測量值。雖說是通過了同一皮 長板區塊的光’但由於是㈣板、偏級、遠心透鏡 的不同的心、(:CD的不同的測量像素而測得的測量值, 因而無法以相同方式進行處理。這是因為分別存在局部的 偏光f遞特,偏差(局部性(lenity))。然而,若事先 = 兀早位來求出偏光傳遞矩陣’則能夠根據 修正局雜的影_將可靠性高的代表測量 值歸結為一個。 偏光傳遞矩陣是表示入射至攝像部15光的s參數與 i二相:Γ1輸出的輸出值的關係的矩陣。該偏光傳遞 矩=根據形成攝像部15的光學構件等的缪勒矩陣 u:lermatnx)(以後記作M矩陣)的積而決與 :某結合單元建立關聯的M矩陣是將如下矩陣相乘而 合束騎㈣波紐45〜波長 =光束通過部分的M矩陣,偏光板49的V 〆〆^ f ^ Λ. At least four kinds of polarization characteristics are realized in the X 1 direction, and the image sensor is included in the image sensor, and the image sensor is divided into image areas for individually performing light transmission through the wavelength plate. The imaging mechanism relatively moves in the second direction with respect to the measurement target, and adds the measurement values obtained by the plurality of measurement pixels of the measurement target in each imaging region L, and performs imaging for a plurality of times, and for each measurement pixel. The measured value of the imaging area is calculated, and the 2 Stokes parameters emitted from the measurement target are calculated for each measurement pixel based on the measurement values collected from the respective imaging areas. X Μ [Effects of the Invention] According to the present invention, at least four types of wavelength plates having different polarization characteristics are arranged in alignment in the first direction, and the light is transmitted through the respective wavelength plates. Since the four imaging regions of the wavelength plate are imaged individually, the measurement target is moved in the first direction with respect to the imaging mechanism, and each of the measurement pixels on the measurement target is continuously subjected to four or more types during the relative movement. Polarized measurement camera. Further, in one imaging area, each measurement pixel performs multiple times of imaging for different combination units, and the polarization transfer matrix of each measurement pixel is measured in advance, because the actual measurement is realized by multiple measurements of the same measurement. Processing and can improve the signal/noise (S/N) ratio. In this manner, four or more types of polarization state measurement and multiple imaging are simultaneously achieved while the imaging mechanism is relatively moved without stopping the measurement target. After the movement of the measurement target in the first direction is completed (that is, the measurement is completed), the imaging target is moved in the second direction to the extent corresponding to the visual field range, whereby the measurement surface can be enlarged, and the first direction can be repeatedly performed. The moving image and the movement toward the second party 13 201219769 can measure the entire surface of the measurement object. Since the measurement of four or more types of polarization states is performed, it is possible to determine the number of toscan springs measured. The polarization characteristic of the measurement object can be calculated by comparing the Stokes parameter of the light source with the Toscan parameter of the measurement unit. As described above, according to the present invention, the polarization characteristics of a large-area optical film can be measured quickly and accurately. For example, in the spatial resolution of 丨mm square, the axis orientation measurement accuracy is 0.1. In the case of the prior art, about 10 minutes is required, and in the present invention, measurement of about 2 minutes and a half is carried out. That is, according to the present invention, about 4 times higher speed can be achieved than the pre-touch method. [Embodiment] - As shown in Fig. 5, the optical property measuring apparatus 1 of the present invention measures an optical film 丨 2 having a birefringence characteristic as a measurement object. In the optical characteristic measuring apparatus 10, the optical film 12 to be measured is placed on the surface illumination unit 安装 attached to the sample stage 13. Further, the illumination light of the drially p〇larized sound is used to illuminate =j 12, while moving the sample stage 13 in the x-direction while using the image #15 to the optical film. The light emitted by 12 is taken. Further, the electric discharge 16 performs various kinds of analysis and optical characteristics of the optical film 根据 based on the output value obtained by the imaging unit 15. Alternatively, elliptically polarized light can be used as the polarized illumination. The sample platform 13 is movable in the x direction along the two rails 22a and 22b on the base U by the X-direction moving mechanism 2'. Further, the moving mechanism 20 includes a servo motor that is driven in accordance with a drive pulse output from the motor driver. 201219769 Similarly, 'camera unit 1 #丄. The arm 31 is moved in the Y direction which can be moved by the Y direction moving mechanism 33 by the arm 31 of the Han support table 30, and is moved by the 2 direction: in the 3 =: and the X direction or the Y direction The orthogonal z-square _: 34 is sufficient for the two sides in the Y direction or the Z direction: 31. Thus, by moving the arm .7 L on the door, the imaging unit 15 can also move in the γ-direction imaging unit: the other purpose of moving in the Z direction is to advance. ',,, point adjustment ° γ direction moving mechanism 33 is based on the drive pulse output from the Y3 drive (not shown) = drive from the magic, =, self = drive 24 and γ motor drive The pulses are sent to the X pulse count H 26 and gamma pulse counts, respectively. Each of the punch counts is counted by the chicken pulse. The value counted by the pulse counter is sent to the computer 16. In the computer 16, the amount of movement of the sample stage 13 for each pulse and the amount of movement of the imaging unit 15 are memorized, so that it is possible to hold the position of the field of view of the imaging unit 15 on the sample stage 13 based on the count value of the two pulse counters. . Then, the action of the present invention will be explained based on the flowchart of Fig. 6. First, the measurement preparation is performed first. Here, the polarization transmission matrix of the imaging unit is specified in units of the combination το CP of the CCD camera 41. The job is performed only once as an initial setting. The obtained polarization transfer matrix is stored in the computer 16'. After the initial setting, the polarization transfer matrix obtained above is used. What is done after the measurement preparation is the calibration measurement. Here, the Stokes parameters (hereinafter referred to as s parameters) of the light emitted from the surface illumination unit 14 are measured in units of measurement resolution throughout the entire surface of the surface illumination unit 14. 15 201219769 In the surface illumination unit 14 as the light projecting unit, the virtual OFF degree resolution shown is a single subdivision email = projection pixel [". Thus, the step of determining the s-parameter is obtained by correcting the amount of time in which the pixel is in the early position L. It is not necessary to make a calibration measurement as long as there is no change in the light source, but it is desirable to perform it at approximately the first measurement of the day. What is performed after the calibration measurement is the actual measurement, where the optical film 12' is placed over the entire surface of the optical film 12 on the surface illumination portion 2 and the resolution is measured with the measurement object (ie, the measurement object Ε). The S parameter of the light. + J 远远迅九 Finally, the birefringence characteristics of the measurement county are calculated by comparing the actual measured s parameters with the s parameters of the illumination unit. It is important for the money to be used in the measurement of the position of each measurement pixel E in the actual measurement and the position of the pixel P to be used in the measurement; and the measurement 平台 of the measurement target platform is used in the measurement. (four) counting! ! #前值(四) Each combination of the camera is captured at the position of the measurement target platform as early as 7C, so that the output value of each combination unit obtained at the time of measurement can be surely set to each of the light-emitting pixels L or each measurement pixel E. The following is a detailed description from the Aji preparation steps. Measurement Preparation Step 疋 The following steps: Before the actual measurement, specify the polarization transfer matrix of each combination unit of the (4) CCD camera in the imaging unit 15 of the apparatus. The measuring mechanism used in the far-necked towel can be assembled in the device itself, or separated from the device, and the external measurement can be transmitted to the external surface, and a universal serial bus (USB) memory or the like can be used. Machine 201219769 constructs and stores only > into computer 16. The reason why the light transmission matrix is obtained by the unit of the combination unit of the CCD camera 41 is that the value obtained by averaging the readings in the same-wavelength plate block is averaged to obtain the wavelength plate. Among the high reliability representative measurements. Although it is the light passing through the same longboard block, but it is measured by the different cores of the (four) plate, the eccentricity, the telecentric lens, and the different measuring pixels of the CD, it cannot be performed in the same way. This is because there are local polarizations and deviations (locality). However, if the polarization transfer matrix is obtained in advance = 兀 early, the reliability can be corrected according to the correction The high representative measurement value is reduced to one. The polarization transfer matrix is a matrix indicating the relationship between the s parameter incident on the imaging unit 15 and the output value of the i-phase: Γ1 output. The polarization transmission moment = the optical according to the formation of the imaging unit 15. The product of the member's Muller matrix u: lermatnx) (hereinafter referred to as the M matrix) is determined by the fact that the M matrix associated with a certain combination unit is multiplied by the following matrix and combined to ride (four) Bun 45 to wavelength = beam passing Part of the M matrix, polarizer 49

拓陵、刀 矩陣’遠心透鏡的光束通過部分的M =TDr結合單元的Μ矩陣。[數1]中表示該Μ 矩陣的一般形式。X標記邀 指定的要素。 /、後的計鼻無關’因而是無需 [數1]Tuo Ling, Knife Matrix The beam of the telecentric lens passes through the partial M = TDr combined with the unitary matrix of the unit. The general form of the 矩阵 matrix is shown in [Number 1]. The X tag invites the specified feature. /, after the count is irrelevant, thus it is not necessary [number 1]

201219769 •J 〆〆 U I201219769 •J 〆〆 U I

M11 Ml2 Μ13 Mm X X X X X X X X X X X X 僅提取該M矩陣的第1行,若以Mu要素而標準化則 成為[數2]。將該矩陣定義為該結合單元中的偏光傳遞矩 陣。 [數2]M11 Ml2 Μ13 Mm X X X X X X X X X X X X Extracts only the first line of the M matrix, and if it is normalized by the Mu element, it becomes [number 2]. This matrix is defined as the polarization transfer matrix in the combining unit. [Number 2]

Μιι · | 1 M12/M11 M13/M11 Mm/Mh I 而且,再由與Μ矩陣關係密切的記號M12、M13、M14 來置換M12/Mu、Mn/Mu、M14/Mn,由係數K置換Μ”, 而獲得[數3]。 [數3]Μιι · | 1 M12/M11 M13/M11 Mm/Mh I Furthermore, M12/Mu, Mn/Mu, M14/Mn are replaced by the symbols M12, M13, M14 closely related to the unitary matrix, and replaced by the coefficient K" , and get [number 3]. [Number 3]

Κ · I 1 Μ12 Μ13 Ml4 I 將[數3]所示的形式的矩陣作為表現結合單元中的偏 光傳遞矩陣的一般的記號。此處,K為偏光傳遞矩陣的比 例係數,而該值中亦包含CCD相機的遮光(shading)效 果(CCD相機41的各結合單元的量子效率(quantum 18 201219769 efficiency)或增益(gain)係數的偏差)。 在測量準傷步驟中,如圖8所示,使用§參數為已知 的光70 #對攝像部15的所有結合單元中的每一個而個 別地求出偏光傳遞矩陣。此處,s參數為已知的光7〇是藉 由使基準技光$ 71内來自平行單色光源72的平行光透過 偏及1/4波長板QWP1而獲得。誠板PL1方位 固^且若將攝像部15中的方位的基準方向q 棘顏MU Q包括馬達驅動的連續旋 . p .不),在偏光傳遞矩陣的測量中使該1/4波 際丄量所使用的波長的實 中的基準方位為〇。而= 像部15 束。1已:中:投=71的光軸中心附近的光 美準Μ ϋ 個偏光㈣鱗進行測量。 ;量;象的束:中光轴中心以通過攝像部15的成為 沾ΥνΙΓ、.°疋的中心的方式,使用基準投光琴 、 機構71&而使基準投光器71與攝像部15^料 結合單元的偏光傳遞矩陣測量結束,則i用XY:; 動機構71a來進行相鄰測量彳n 此,對攝㈣Π μ 1 偏先傳遞矩陣測量。如 量。 _有結合單元的偏鱗遞矩陣進行測 芊一光70的已知s參數設為丨Pq Ρι ρ2 ρ3「,對 某,早凡的偏光傳遞矩陣的測量方法進 合單元的信號輸出值與測量中所使用的光70 19 201219769 π=:=合單元的偏光傳遞矩陣卿述 [數4] 輪出值=κ · { 1 * Ρ Ο +Μ|2 · Ρ + Μΐ3 β Ρ 2+Ml4 · Ρ , »3 若使用QWP1的常數對光 詳細敍述’則由[數5]而表示。 I;數 5] 70的S參數的各要素進行 1 Κ,. C2+S2_c〇se C*S· (1 —cose) S.sin ε 此處,將QWP1的方位設為γ,相位差設為ε,c = cos2y ’ s = sin2y。 ,K疋為了與貫際的CCD相機41的輪出值獲得匹配的 係數’且是在該測量中所決定的實數。 根據以上,p〇、Pi ' p2、p3成為以下的 改6]Κ · I 1 Μ12 Μ13 Ml4 I The matrix of the form shown in [3] is used as a general notation for expressing the polarization transfer matrix in the combining unit. Here, K is a proportional coefficient of the polarization transfer matrix, and the value also includes a shading effect of the CCD camera (quantum efficiency (quantum 18 201219769 efficiency) or gain coefficient of each combination unit of the CCD camera 41) deviation). In the measurement of the quasi-injury step, as shown in Fig. 8, the polarization transfer matrix is separately obtained for each of all the combining units of the imaging unit 15 using the known light 70#. Here, the known light s parameter is obtained by transmitting the parallel light from the parallel monochromatic light source 72 in the reference refractory light $71 to the quarter-wavelength plate QWP1. The plate PL1 orientation is fixed, and if the reference direction q of the orientation in the imaging unit 15 is included, the MU Q includes a motor-driven continuous rotation. p. No., the quarter-wave 丄 is made in the measurement of the polarization transfer matrix. The actual reference position of the wavelength used for the quantity is 〇. And = the image is 15 bundles. 1 Already: Medium: The light near the center of the optical axis of the projection = 71. The standard is measured by a polarized (four) scale. The amount of the image: the center of the intermediate optical axis is combined with the imaging unit 15 by the reference projector, the mechanism 71 & by the center of the imaging unit 15 which is the center of the ΥνΥ, .°疋. When the measurement of the polarization transfer matrix of the unit is completed, i uses XY:; moving mechanism 71a to perform adjacent measurement 彳n. For the camera (4) Π μ 1 , the matrix is measured first. Such as quantity. _The s-scaling matrix of the combined unit is used to measure the known s-parameter of the light 70 is set to 丨Pq Ρι ρ2 ρ3", for a certain measurement method of the polarization transfer matrix, the signal output value and measurement of the unit Light used in 70 19 201219769 π=:= 偏 的 偏 的 卿 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 数 2+ 2+ 2+ 2+ 2+ 2+ , »3 If the QWP1 constant is used to describe the light in detail, it is represented by [number 5]. I; the number 5] 70 elements of the S parameter are 1 Κ, . C2+S2_c〇se C*S· (1 —cose) S.sin ε Here, the orientation of QWP1 is set to γ, the phase difference is set to ε, and c = cos2y ' s = sin2y. K疋 is matched with the round-out value of the continuous CCD camera 41. The coefficient 'is the real number determined in the measurement. According to the above, p〇, Pi ' p2, p3 become the following changes 6]

Pl=K^(C^.c〇s£) = 1/2-r-(H-cose) + 1/2.^.(1-.c〇s,).cos4r P2—K -〇S· (1 —cos ε ) = 1/2·Κ' (1 — cos ε ) - sin4 γ P3-K^-S-sine=r-sin ε as\n2 γ 20 201219769 / / ,上/ Λ 將[數6]代入[數4]而獲得[數7]。 [數7] 輸出值=K_K、{1+Mi2_1/2_(1+c〇se) +Μΐ2· 1/2· (1 —cos ε ) ecos4 γ • +Μΐ3Β1/2· (1 —cose ) -sin4 y +Mi,i_sin2 γ _sin ε } 關於該[數7] ’若以QWPl的方位γ進行離散傅立葉 轉換(Discrete Fourier Transformation,DFT),則如[數 8] 所示’獲得表示直流成分以及下述頻率成分的輸出值的4 個關係式。此處’ Fdc表不直流成分的經測量的振幅,Fcos4 表示cos4y成分的經測量的振幅,Fsin4表示sin4Y成分的 經測量的振幅’ Fsin2表示sin2y成分的經測量的振幅。 [數8]Pl=K^(C^.c〇s£) = 1/2-r-(H-cose) + 1/2.^.(1-.c〇s,).cos4r P2—K -〇S· (1 —cos ε ) = 1/2·Κ' (1 — cos ε ) - sin4 γ P3-K^-S-sine=r-sin ε as\n2 γ 20 201219769 / / ,上 / Λ 6] Substituting [number 4] and obtaining [number 7]. [Number 7] Output value = K_K, {1+Mi2_1/2_(1+c〇se) +Μΐ2· 1/2· (1 —cos ε ) ecos4 γ • +Μΐ3Β1/2· (1 —cose ) -sin4 y +Mi,i_sin2 γ _sin ε } Regarding the [number 7] 'If Discrete Fourier Transformation (DFT) is performed with the orientation γ of QWP1, as shown in [8], the DC component is obtained and the following Four relations of the output values of the frequency components. Here, Fdc represents the measured amplitude of the DC component, Fcos4 represents the measured amplitude of the cos4y component, and Fsin4 represents the measured amplitude of the sin4Y component. Fsin2 represents the measured amplitude of the sin2y component. [Number 8]

Fdc=K-Kx- (1 +Μη12·1/2· (1 +cos£ ))Fdc=K-Kx- (1 +Μη12·1/2· (1 +cos£ ))

Fcos4r =K-r -1/2 - (1 _cos ε ) .Mni2 Fsin4T·—Κ·Κ "1/2· (1 —cos£ ) .^〇13Fcos4r =K-r -1/2 - (1 _cos ε ) .Mni2 Fsin4T·—Κ·Κ "1/2· (1 —cos£ ) .^〇13

Fsin2 f =Κ·Κ’ -sine _Mn14 此處,Fdc中需要注意的是ccd的暗電流(即便為光 量為零的CCD亦輸出某值)部分所造成的數值上升。當 將該數值設為BG時,[數8]中的自Fdc減去BG所得的值 為應在以後的計算中使用的直流成分,[數8]的4個式被修 21 201219769 ]。再者’ BG可藉由完全遮斷CCD相機的光而 才曰疋,可事先取得該值。 [數9] .....1¾ \ I ' Fc〇s4==k-K^1/2-(1-cos£).Mi η4~κ·κ ·1/2·(1-cose )-M1a Fsin2=K.K、ine·、______ (2) (3) (4) M ί [數9]所示的4個式中,未知數為K · κ,、m12、Ml3、 Μ 4個’故求值。例如,根據⑴+⑺而可指定 μ12,然後可指定κ·κ,,之後可指定Μΐ3、Μΐ4。 各命t此彳才曰定1個結合單元的偏光傳遞矩陣的所有要 = 的值。藉由在所有結合單元CP巾重複該計算, β曰二攝像部15的所有結合單SCp中的偏光傳遞矩陣 二 的值。Κ· Κ’的值中’κ是因每個結合單元而不同 日而為與本測量準備中所使用的摘強度有關的值 :"貝’里期間視作固定。由此,此處指定的κ· κ,的值 一=後的測1 (校正測量、實際測量)中可用作各結合單 U的相對的信魏纽。進而在以後的測量巾,即 的強度與此次測量準備的晰*同,亦可使_值 各結合單元間的相對的信號強度比。 為 此處、.·里才曰疋的各結合單元Cp中的偏光傳遞矩陣的 各要素預先記憶於電腦16内。此時,此處指定的κ·κ, 的值在以㈣朗巾將記號而參照。 22 201219769 如此,、偏光板、_、咖 併測量偏光傳遞矩陣,因而無需逐個缺。這⑹ϋ 少測量的負荷的顯著的優點。 ^樣便具有減 然後’對校正測量與實際測量的詳細 照圖6的流程圖)。動作雙方完全相、不订說明(> 對象在校正測量中為面照明部14,在實際測量 12。以下以實際測量為例進行說明。 ‘、、、子' 校正測量^際測量均在電腦16内設置暫時的輸出值 記憶區域並記憶CCD的輸出值。該輸出值 測量像素Η測量平台中的XY的二維位址來區分^ 列,且-讎列要素是以如_ 9麻㈣錢的數量與一 個波長板巾制f數來區分的二維構造,因而該輸出值記 憶區域整體上呈四維構造。 ° 在校正測量及實際測量中’首先於使攝像部15在Y 方向的規定位置處靜止的狀態下,使試樣平台13在X方 向上自其-端移動至另-端。若試樣平台13移動則攝像觸 發起動而自動地進行攝像。於試樣平台13上有面昭明部 14與光學膜12 (以下簡稱作「光學膜12等」),當攝像部 15攝像至光學膜12 #的另一端為止時,成為相機的視野 範圍的測量結束的狀態,因而為了變更相機視野而此次使 攝像部15在Y方向上以相當於視野範圍的程度進行移 動。然後再次自光學膜12等的另一端移動至一端,對光學 膜12等的未攝像部分進行攝像。重複上述程序,而進行光 學膜12等整體的攝像。 23 201219769. •J ^ ^ / 在攝像時庠θ =像至算出s參數為止的過程。再者,CCD 所有妗人簞疋ΐ像素同時進行攝像’因而1次攝像中自 著眼::個:Ϊ得資料’但以下的說明中’作為代表例而 在:CD相機’二像素Ε來進行說明。而且,為了簡化說明, ς 的各攝像區50〜攝像區53中,在χ方向 像所、結合單元CP,以作為對象的測量像素Ε的 — 4方向的某—個剖面進行說明。再者,此處設 j測里像|:Ε在CCD上無合單元cp的尺寸相同。該 =可根據通心透鏡的倍率設定或結合單元數來進行調 整。 如,10(A)所示’將設置在第1攝像區5〇的5個第 1結合單兀设為CPll〜cpi5,設置在第2攝像區51的5 個第2結合單元設為CP21〜CP25,設置在第3攝像區52 ,5個第3結合單^設為CP31〜CP35,設置在第4攝像 區53的5個第4結合單元設為CP41〜CP45。而且,將光 學膜12的測量像素設為E1〜En (η為2以上的自然數)。 而且’使光學臈12在X方向上移動時所產生的攝像觸發 的間隔被設定為如下’即,測量像素Ε在CCD上僅以結 合單70的Χ方向的長度L(= 一個測量像素中的X方向的 長度)前進的距離。在CCD的各攝像區,在X方向上排 列著5個結合單元’因而若一個測量像素E通過各攝像 區,則在各攝像區進行5次攝像。 首先’藉由光學膜在X方向上移動,而在某時序測量 像素E1的像到達第1攝像區的第1結合單元CP11上。而 24 201219769 且’如圖10 (B)所示’當測量像素E1的像位於第1結合 單元CP11上時’第1結合單元CP11對測量像素E1進行 攝像。藉由該攝像所獲得的輸出值記憶於電腦16内的測量 像素E1用的輸出值記憶區域EM1中。EM1按照圖9所示 的二維排列,行方向為攝像區的數量,列方向為測量值的 個數。該輸出值被儲存於輸出值記憶區域EM1的第1攝像 區用的行EM11中。 然後’若光學膜12在X方向上僅移動1攝像觸發的 移動量,則光學膜12的測量像素E1的像位於第1結合單 元區CP12上。然後’第1結合單元區CP12對測量像素 E1進行攝像。藉由該攝像所獲得的輸出值記憶於測量像素 El用的輸出值記憶區域EM1的第1攝像區用的行EM11 的另一區域中。而且’同樣地,在第1結合單元區CP13 〜第1結合單元區CP15中對測量像素E1進行攝像,並將 藉由該攝像所獲得的輸出值記憶於EM11中。因此,藉由 測量像素E1通過第1攝像區,而進行合計為5次的攝像。 然後’如圖11 (A)所示,當測量像素E1到達第2 攝像區51内的結合單元CP21上,結合單元CP21對測量 像素E1進行攝像。藉由該攝像所獲得的輸出值記憶於電 腦16内的輸出值記憶區域EM1的第2攝像區用行EM12 中。而且’同樣地,藉由測量像素E1通過第2結合單元 Cp22〜第2結合單元CP25而進行攝像,且由攝像所獲得 的輪出值被依序記憶於EM12中。 而且,如圖11 (B)所示,當測量像素E1通過第3 25 201219769 元cp31,結合單元⑽ :方同樣地進仃攝像。在該些第3結合 、、-。δ單元CP35中對測量像素E1 〜弟3 於輸出值記憶區域麵的第 ^^輸出值記憶 且,如圖u(c)所示,當測量中。而 幻内的第4結合單元⑽ 4=人=通過第4攝像區 樣地進行攝像,輸出值記憶於L:; = =P45時’亦同 攝像區用行而4中。輸出値錢區域麵的第4 結合:通過第4攝像區53内的第4 夠量像幸£1 測里像素幻的測量結束。位於相對於 量動作D 向的—後方處的測量像素ε2,其測 象素Ε1,目當於1次攝像觸發後便 的、^旦μ °,但進仃的是相同的内容。位於更後方 象素…^^均分別進而錯開相當於! l〇rΛ、發而進行攝像並結束。該些各輸出值記憶於與圖 〜81 11⑻中所示的輸出值記憶區域ΕΜ11〜輪 的^己憶區域刪4相當的測量像素Ε2〜測量像素Εη用 ^出值記憶區域ΕΜ21〜ΕΜη4中。其對應關係表示於圖 Η然後,表不如下方法:根據記憶於輸出值記憶區域 的站中,測量像素E1的輸出值與測量準備步驟中所指定 數了合單元的偏光傳遞矩陣,而求出測量像素E1的s參 當求出測量像素E1的S參數時,首先,如以下的[數 26 201219769 10]所示,求出由第1攝像區〜第4攝像區中的攝 的輸出值的合計,亦即EM11〜EM14的各自中所記&侍 個輸出值的合計Sll_s〜S14.s。本發明中,將此種^^ 5 的合計稱作測量值,並在S中附上_Σ的下产央矣— 值 =〜SM的S中所附的最初的數字u識;;測量:素而^ 且本來為與二維位址相對應的數字 =、、、Fsin2 f =Κ·Κ’ -sine _Mn14 Here, the Fdc needs to pay attention to the increase in the value caused by the dark current of ccd (even if the CCD is zero). When this value is set to BG, the value obtained by subtracting BG from Fdc in [8] is the DC component to be used in the subsequent calculation, and the four equations of [8] are repaired 21 201219769 ]. Furthermore, 'BG can be obtained by completely blocking the light of the CCD camera, and this value can be obtained in advance. [Number 9] .....13⁄4 \ I ' Fc〇s4==kK^1/2-(1-cos£).Mi η4~κ·κ ·1/2·(1-cose )-M1a Fsin2 =KK, ine, ______ (2) (3) (4) M ί [Number 9] In the four equations, the unknowns are K · κ, m12, Ml3, and Μ 4's. For example, μ12 can be specified according to (1) + (7), then κ·κ can be specified, and then Μΐ3, Μΐ4 can be specified. Each life t determines the value of all the = of the polarization transfer matrix of one combining unit. By repeating this calculation in all of the combining unit CPs, all of the β 曰 two camera units 15 combine the values of the polarization transfer matrix two in the single SCp. In the value of Κ· Κ', κ is a value related to the sizing strength used in the preparation of the measurement for each of the combined units: "Bei' period is regarded as fixed. Therefore, the value of κ·κ specified here, the value 1 = after the measurement 1 (correction measurement, actual measurement), can be used as the relative information of each combination U. Further, in the subsequent measurement, the intensity of the measurement towel is the same as the accuracy of the measurement preparation, and the relative signal intensity ratio between the _ values of the combination units can also be made. Here, each element of the polarization transfer matrix in each of the combining units Cp of the present invention is stored in the computer 16 in advance. At this time, the value of κ·κ specified here is referred to by the symbol in (4). 22 201219769 In this way, the polarizing plate, _, and coffee measure the polarization transfer matrix, so there is no need to be missing. This (6) has a significant advantage in measuring the load. The sample has a subtraction and then 'the flow chart of Fig. 6 for the correction measurement and the actual measurement. The two sides of the action are completely opposite, and the description is not made. (> The object is the surface illumination unit 14 in the calibration measurement, and the actual measurement is 12. The actual measurement is taken as an example below. ',,,,,,,,,,,,,,,,,,, Set a temporary output value memory area in 16 and memorize the output value of the CCD. The output value measures the two-dimensional address of the XY in the pixel measurement platform to distinguish the column, and the - column element is such as _ 9 hemp (four) money The number of the two-dimensional structure is different from the f-number of one wavelength plate, and thus the output value memory area has a four-dimensional structure as a whole. ° In the correction measurement and the actual measurement, 'the first position of the imaging unit 15 in the Y direction is first. In the stationary state, the sample platform 13 is moved from the end to the other end in the X direction. If the sample platform 13 is moved, the imaging trigger is activated and the image is automatically taken. The portion 14 and the optical film 12 (hereinafter simply referred to as "the optical film 12 or the like") are in a state in which the measurement of the field of view of the camera is completed when the imaging unit 15 is imaged to the other end of the optical film 12 #, so that the camera field is changed. and In this case, the imaging unit 15 is moved to the extent corresponding to the field of view in the Y direction, and then moved to the other end from the other end of the optical film 12 or the like to image the unimaged portion of the optical film 12 or the like. The entire film of the optical film 12 is imaged. 23 201219769. •J ^ ^ / During imaging, 庠θ = the process until the s parameter is calculated. Furthermore, all the CCD pixels are simultaneously imaged. In the sub-camera, it is self-seeking:: one: Chad data 'But the following description' is taken as a representative example: CD camera 'two-pixel Ε 进行 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 各 各 各 各 各 各 各 各 各 各 各 各In Fig. 53, in the χ direction image unit and the coupling unit CP, a certain cross section of the measurement pixel Ε in the direction of 4 is described. Further, here, the j image is displayed on the CCD. The size of the unit cp is the same. This = can be adjusted according to the magnification setting of the center lens or the number of combined units. For example, as shown in Fig. 10(A), the five first combining units will be set in the first imaging area 5〇. Set to CP11 to cpi5, set in the second imaging area 51 The five second combining units are set to CP21 to CP25, and are disposed in the third imaging area 52. The five third combining units are set to CP31 to CP35, and the five fourth combining units provided in the fourth imaging area 53 are set to CP41. ~CP45. Further, the measurement pixels of the optical film 12 are set to E1 to En (n is a natural number of 2 or more). Further, the interval of the imaging trigger generated when the optical cymbal 12 is moved in the X direction is set as follows That is, the distance at which the measurement pixel 前进 is advanced on the CCD only in the L direction of the splicing unit 70 (= the length in the X direction in one measurement pixel) is arranged in the X direction in each imaging area of the CCD. The combining unit' thus, if one measuring pixel E passes through each imaging area, five times of imaging is performed in each imaging area. First, by moving the optical film in the X direction, the image of the pixel E1 is measured at a certain timing to reach the first combining unit CP11 of the first imaging region. On the other hand, when the image of the measurement pixel E1 is located on the first combining unit CP11, the first combining unit CP11 images the measurement pixel E1. The output value obtained by the imaging is stored in the output value memory area EM1 for the measurement pixel E1 in the computer 16. The EM1 is arranged in two dimensions as shown in Fig. 9. The row direction is the number of imaging areas, and the column direction is the number of measured values. This output value is stored in the line EM11 for the first imaging area of the output value memory area EM1. Then, if the optical film 12 is moved by only 1 movement amount of the imaging trigger in the X direction, the image of the measurement pixel E1 of the optical film 12 is located on the first bonding unit area CP12. Then, the first combining unit area CP12 images the measuring pixel E1. The output value obtained by the imaging is stored in another area of the line EM11 for the first imaging area of the output value memory area EM1 for the measurement pixel El. Further, in the same manner, the measurement pixel E1 is imaged in the first combining unit area CP13 to the first combining unit area CP15, and the output value obtained by the imaging is memorized in the EM 11. Therefore, the measurement is performed five times by the measurement pixel E1 passing through the first imaging area. Then, as shown in Fig. 11(A), when the measurement pixel E1 reaches the combining unit CP21 in the second imaging area 51, the combining unit CP21 images the measurement pixel E1. The output value obtained by the imaging is stored in the second imaging area line EM12 of the output value memory area EM1 in the computer 16. Further, similarly, the measurement pixel E1 is imaged by the second combining unit Cp22 to the second combining unit CP25, and the round-out values obtained by the imaging are sequentially stored in the EM 12. Further, as shown in Fig. 11(B), when the measurement pixel E1 passes through the 3rd 25th 201219769 element cp31, the unit (10) is joined in the same manner as the image. In these third combinations, -. The δ unit CP35 memorizes the first ^^ output value of the measurement pixel E1 to the third of the output value memory area surface, and is measured as shown in Fig. u(c). The fourth combining unit (10) in the magical 4 = person = imaging by the fourth imaging area, and the output value is stored in L:; = = P45' is also the same as the imaging area. The fourth combination of the output money area is: the fourth sufficiency measurement in the fourth imaging area 53 is completed. The measurement pixel ε2 located at the rear with respect to the D direction of the amount of motion, the pixel Ε1 of the measurement, is intended to be the same as the one after the image capture, but the same content is entered. Located further behind the pixels...^^ are respectively staggered and equivalent! l〇rΛ, send and shoot and end. The respective output values are stored in the measurement pixel Ε2 to the measurement pixel 相当n corresponding to the output value memory area ΕΜ11 to the wheel shown in Fig. 81-(8), and are used in the value memory areas ΕΜ21 to ΕΜn4. The corresponding relationship is shown in FIG. Η, and then the method is as follows: according to the station stored in the output value memory area, the output value of the pixel E1 is measured and the polarization transfer matrix of the unit specified in the measurement preparation step is obtained. When the S parameter of the measurement pixel E1 is obtained, the output value of the image from the first to fourth imaging regions is obtained as shown in the following [No. 26 201219769 10]. In total, that is, the sum Sll_s to S14.s of the output values of the EM11 to EM14. In the present invention, the total of such ^^5 is referred to as a measured value, and the initial numerical value attached to the S of the lower production state of the _Σ is added to S; the measurement is: And ^ and originally the number corresponding to the two-dimensional address =,,,

當㈣的i。第2個數字丨及4對二=的= 4 ’為測量中所使用的波長板的編號。_Αι〜A 二=區中的測量的順序’即第1個測量結果4 5 [數 10] 測量值S11_z= S11 測量值S12_t 測量值S13_z:測量值S14_z= S14 .A1 S11 A2 S12.A1 + S12, 4- S13; το + S14, + 511 512 513 514 A5 A5 )5 .义5 另一方面,在結合單元CPU〜結合單元⑽ +所固有的偏光傳遞矩陣,該值在測 疋。若將CP11〜CP45的偏諸心_備々驟中被才曰 出則成為[數u]。. 陣依照定義而依序寫 [數 11] 27 201219769 ^ ^ ^ u I 1^*1When (iv) i. The second number 丨 and 4 pairs of two = = 4 ' are the numbers of the wavelength plates used in the measurement. _Αι~A The order of measurement in the two = area is the first measurement result 4 5 [number 10] Measured value S11_z = S11 Measured value S12_t Measured value S13_z: Measured value S14_z = S14 .A1 S11 A2 S12.A1 + S12 , 4 - S13; το + S14, + 511 512 513 514 A5 A5 ) 5 . On the other hand, in the combination of the unit CPU ~ combined unit (10) + inherent polarization transfer matrix, the value is measured. If the CP11~CP45 is out of the way, it will become [number u]. The array is written in order according to the definition [number 11] 27 201219769 ^ ^ ^ u I 1^*1

K_cpi i · I K—CP12 · I Kcp45 . IK_cpi i · I K—CP12 · I Kcp45 . I

M,2_cph M1;1CP11 M14cp11 I M12_c,,12 M13c1M2 M14Cp12 j ^CIM 5 1 M12CrM5 M13_C1M5 Mi ,處’下標_CP11等的編號為區分結合單㈣編號。 例中,輸出值S11_CP"〜SH_CP45為在結合單元cpu 〜L合單元CP45中個制量所㈣值,因而如以下的[數 12]所示應§由該結合單元的偏光傳遞矩陣與測量像素 Ε1的S參數| s〇_E1 S1_E1 S2_E1 S3_E1 | τ的矩陣積而定義。 此處’下標-E1表示測量像素El的S參數。 [數 12] 輸出值 S11-CP,丨=κ〇,„·( SOK1 輸出值 S1 Ι SOjn +Mi2.mi-S1jn+M113a,M.S2Kl + M1,4tl,u.S3J;l) +M12jm 2 .S1 a +M113加.S2jn + M1 ,_2. S3』)M, 2_cph M1; 1CP11 M14cp11 I M12_c,, 12 M13c1M2 M14Cp12 j ^CIM 5 1 M12CrM5 M13_C1M5 Mi , the number of the subscript _CP11, etc. is the number of the combined single (four) number. In the example, the output value S11_CP"~SH_CP45 is the value of the quadruple quantity in the combining unit cpu to the L unit CP45, and thus the polarization transfer matrix and the measuring pixel of the combining unit should be as shown in the following [12] S1 S parameter | s〇_E1 S1_E1 S2_E1 S3_E1 | τ is defined by the matrix product. Here, the 'subscript-E1' indicates the S parameter of the measurement pixel E1. [Number 12] Output value S11-CP, 丨 = κ 〇, „·( SOK1 output value S1 Ι SOjn +Mi2.mi-S1jn+M113a, M.S2Kl + M1, 4tl, u.S3J; l) +M12jm 2 .S1 a +M113 plus .S2jn + M1 ,_2. S3』)

I I I ·( SOKI +M12a,4,/S1K,+M113a>+1.S21;1H-M1 ·( SOj;, +Mi2.cp.|5'S1j;i + M1 + i4..rwS3j;|) 此處’若在[數10]的測量值Sll s中代入[數12]的輸 出值的定義並以S參數進行整理,則獲得[數13]。 [數 13] 測量值 S1 1-Σ —(K—CP11+K cp12+Kjyi3+K 〇>14 +1〇>15) 'SOj^ crn-M12_a,n2 tVi2+K_m3 ·Μ12_〇»ι ^+K_a>M ^i2„cvi4 +Κ_ιη5·Μ12 a,1Cj) -Sl^ ^_mi ·Μ13_αΊ1; +K_a*u*M13_a,ia +Κ_σΐ4 ·Μ13 α)14 4*κ_α,1Γ> ·μ13 m{.) -82^ + (^_CP11 ' Μ14_〇ρη 2 M14 CP12 + Kjpi 3 * Μ14_σι 3 + K_C?M e Ml4 Cpi 4 + K_im Mt 4 L?1 5 ) - S3^1 28 201219769 在該[數13]中,測量值311^:成為在各S參數要素中 分別乘以某些係數的形式。該些係數是將已在測量準備步 驟中指定的已知的值求積、求和所得。此處,如[數14]所 示,根據已知量的命名來定義該係數群。已知量可進行事 先計算,以後可作為一個數值而處理。 [數 14] SO& 的係數=0<_〇>11+1<_〇>12+»^13+1<_〇)14+1<_〇)15)三已知量01_^ S1 A的係數=(K-θη! · M! 2 CP1 i + K_CP1 2 · Μ Ί 2必 2 + K epi 3 Μ! 2 CP13 + K_CP14 Μ,2 CP14 + Ξ已知量11j S2_^的係數=(K-epn _ Μ,scpn+Kj^ 2 · M13CP12 +Κ_(ρ13· M13CP13 4 upi 4 + κ ot 5 _Μ13 m 5) Ξ 已知量 21 _Ν S3w 的係數=(Κ α,η 4 (Γ1!+Κ m 2 · Μ14 αη 2 + Κ CP1 ·Ί · ΜΊ +Κ_α>15 · Μ,4CP15)=已知量31 _Ν 已知量的下標的最初的數表示成為係數的S參數的要 素編號,第2個數字為區分波長板的編號。Ν為區分結合 單元的Υ方向上的位置的下標,而該例中對X方向的一個 剖面進行處理,因而成為表示某剖面位置的值。 藉由將[數14]代入[數13]中,而獲得以下的[數15]。 [數 15] 測量值S11_£=(已知量+(已知量11_Ν)·51Ε1 + (已知量 21_n).S2e1 +(已知量 31_ν)·533ι 可在[數10]的測量值S12j;〜測量值S14_s中應用相同 29 201219769 ^ ^ / 的處理,從而獲得[數16]。 [數 16] 測量值%= _〇2為:,+(已知量12為 測量值S13_z=(已知量〇 +(已知量22』).S2J;l +(已知量32』) 』% +(已知量13』).% 丄1 «,S14,= + (已知量24_Ν)% +(已知S34』)·% 將[數I5]與[數叫合併的4 個,式的數有4個’因 參數 素E1的S參數。 错此,未出測量像 以上,對如下情况進行了說明 素E1所成像的CCD上的χ 1稭由—個測量像 進行攝像,並最終算出s參數。°CCD β'=合單元而依次 ,’因而在進行上述處理的期間,: 亍 到光學膜η上的某處的測量像素Ε 能力。 有具有視野内的結合單元量的處理III ·( SOKI +M12a,4,/S1K,+M113a>+1.S21;1H-M1 ·( SOj;, +Mi2.cp.|5'S1j;i + M1 + i4..rwS3j;|) If 'the number of the output value of [12] is substituted in the measured value S11 s of [10] and sorted by the S parameter, [13] is obtained. [Number 13] Measured value S1 1-Σ —( K-CP11+K cp12+Kjyi3+K 〇>14 +1〇>15) 'SOj^ crn-M12_a,n2 tVi2+K_m3 ·Μ12_〇»ι ^+K_a>M ^i2„cvi4 +Κ_ιη5· Μ12 a,1Cj) -Sl^ ^_mi ·Μ13_αΊ1; +K_a*u*M13_a,ia +Κ_σΐ4 ·Μ13 α)14 4*κ_α,1Γ> ·μ13 m{.) -82^ + (^_CP11 ' Μ14_ 〇ρη 2 M14 CP12 + Kjpi 3 * Μ14_σι 3 + K_C?M e Ml4 Cpi 4 + K_im Mt 4 L?1 5 ) - S3^1 28 201219769 In [13], the measured value 311^: becomes The S parameter elements are respectively multiplied by some coefficients, which are obtained by summing and summing the known values that have been specified in the measurement preparation step. Here, as shown in [14], Knowing the naming to define the coefficient group. The known quantity can be calculated in advance and can be processed as a value later. [14] The coefficient of SO&=0<_〇>11+ 1<_〇>12+»^13+1<_〇)14+1<_〇) 15) Three known quantities 01_^ S1 A coefficient = (K-θη! · M! 2 CP1 i + K_CP1 2 · Μ Ί 2 must 2 + K epi 3 Μ! 2 CP13 + K_CP14 Μ, 2 CP14 + Ξ Known quantity 11j S2_^ coefficient = (K-epn _ Μ, scpn+Kj^ 2 · M13CP12 +Κ_(ρ13 · M13CP13 4 upi 4 + κ ot 5 _Μ13 m 5) 系数 Known quantity 21 _Ν S3w coefficient = (Κ α, η 4 (Γ1!+Κ m 2 · Μ14 αη 2 + Κ CP1 ·Ί · ΜΊ +Κ_α> 15 · Μ, 4CP15) = known quantity 31 _Ν The first number of the subscript of the known quantity indicates the element number of the S parameter which becomes the coefficient, and the second number is the number of the wavelength plate. Ν is a subscript that distinguishes the position in the Υ direction of the combining unit, and in this example, a section in the X direction is processed, and thus a value indicating a certain section position is obtained. By substituting [number 14] into [number 13], the following [number 15] is obtained. [Number 15] Measured value S11_£=(known quantity + (known quantity 11_Ν)·51Ε1 + (known quantity 21_n). S2e1 + (known quantity 31_ν) · 533ι The measured value S12j at [10] ; ~ Measured value S14_s applied the same 29 201219769 ^ ^ / processing, thus obtaining [number 16]. [Number 16] Measured value % = _ 〇 2 is:, + (known amount 12 is the measured value S13_z = ( Knowing 〇 + (known amount 22 』). S2J; l + (known amount 32 』) 』% + (known amount 13 』).% 丄1 «, S14, = + (known amount 24_Ν)% +(known S34』)·% The number of [number I5] and [number is merged into four, and the number of equations is four] because of the parameter S of the parameter E1. If this is not the case, the measurement is not as above, for the following cases The χ 1 straw on the CCD imaged by the characterization E1 was imaged by a measurement image, and finally the s parameter was calculated. °CCD β' = merging and sequentially, 'thus during the above processing: Measurement pixel 某 capability somewhere on the optical film η. Processing with a combined unit amount in the field of view

如此,與測量像素E1的§參數的求法 出所有測量像素E的S參數。所求出的:二:J 電腦16_實_量用s參數 在上述測量為校正測量的情況下,在測量像素Ε的變 化中測量面照明部14的所有投光像素L的s參數。該所 30 201219769 又光像素=s參數記憶於電 參數記憶區域中。 門町仅止剩里用的s 而且,測量像素p 動摄傻觫众的位址與投光像素^的XY位 址根據起動攝像觸發時攸樣平 = 果XY位址相同的剛量像素E與投光像 ==二’藉由遠心透鏡的深焦二== 像辛ΐ的光視作^订區分補充的功能’而可將照射測量 像素、先作自相同的ΧΥ位址的投光像素L照射。 由雷13的哪條置起觸侧發可事先 ^曰疋,因而被稱作已知量的數值可事先計算 出一這樣具有如下政果:在二維測量的必需處理大量的測 量貢料的㈣巾對於處理的高速化非常有利地發揮作用。 最後表示光干膜12的偏光特性的算出方法。電腦16 根據校正測量中所求出的面照明部14的整個面的s參數 及實際測量中所求出的光學膜12整個面的s參數,而算 出光予膜整個面的主軸方位與延遲(retardatj〇n)。當將自 光學膜12的某測量像素e透過的光的s參數設為| ιφψThus, the S parameter of all measurement pixels E is determined by the § parameter of the measurement pixel E1. What is found: 2: J computer 16_real_quantity s parameter In the case where the above measurement is the correction measurement, the s parameter of all the light projecting pixels L of the surface illumination unit 14 is measured in the change of the measurement pixel 。. The station 30 201219769 also the light pixel = s parameter is stored in the electrical parameter memory area. The door is only used for the remaining s. Moreover, the measurement pixel p is the location of the stupid person and the XY address of the projecting pixel ^ according to the start of the camera triggering level = the same amount of XY address of the rigid pixel E With the light-emitting image == two 'by the telecentric lens of the telecentric lens == like the function of the imaginary light as a complement to the supplementary function', the illumination measurement pixel can be made from the same ΧΥ address The pixel L is illuminated. Which one of the thunder 13 is placed on the side can be pre-twisted, so that the value called the known amount can be calculated in advance so that the following results are achieved: in the two-dimensional measurement, it is necessary to process a large amount of measurement metrics. (4) The towel plays a very advantageous role in speeding up the processing. Finally, a method of calculating the polarization characteristics of the light-drying film 12 is shown. The computer 16 calculates the spindle orientation and delay of the entire surface of the optical film based on the s-parameter of the entire surface of the surface illumination unit 14 obtained in the calibration measurement and the s-parameter of the entire surface of the optical film 12 obtained in the actual measurement ( Retartj〇n). When the s parameter of the light transmitted through a certain measurement pixel e of the optical film 12 is set to | ιφψ

IT ’位於E的正下方的投光像素L發出的光的S參數設 為丨1 XYZ I τ時’位於測量像素E的光學膜12的雙折射 的主軸方位a與遲相量δ可由[數17]來表示。 [數 17] tan2 α = (φ-χ) / (γ-ψ)When the S parameter of the light emitted from the projecting pixel L located directly below E is set to 丨1 XYZ I τ, the principal axis a and the retardation δ of the birefringence of the optical film 12 located at the measuring pixel E can be [number] 17] to express. [Number 17] tan2 α = (φ-χ) / (γ-ψ)

c〇s<5 = k_z+(s.cd-c.m/).(s_x-c.y)}/{ z2 +(s-x—c_y)2I 8Ϊηδ = {^.(5·χ-〇.γ)-(3.φ-〇·ψ)·Ζ J/{ Z2 +(S-X-C*Y)2} 31 201219769 此處,S二sin2d,C = cos2a。利用該關係式,。^ 出光學祺12的所有測量像素Ε的雙折射的主二,可計算 遲相量δ,藉此可根據試樣解像度來測量光風贈位a與 特性分布。 予犋12的偏光 再者,本實施形態的說明中,各攝像區中進_人 5次的攝像,而為了提高測量精度,亦可進并《仃合計為 .,t 丁 3 Γ欠以 f-、 ,如Η)次的攝像。該情況下’如以下所 (overlap)攝像(1次攝像視野與下—攝像視 疊)而進行。 。卩分重 重合攝像藉由將1攝像觸發間的試樣平台 為測量像素的X方向的長度LW而進行設量= 移動量的情況下,跨及鄰接的2個結合單“對 ,素進行攝像’因而必需進行輸出㈣分開。例如,圖= 是將1攝像觸發間的移動量設定為L的3/1〇時的 升 圖1〇 (C)的例中的第2次攝像的情 乂 μ攝像時序中,測量像素El位於結合單元CP11中 區域上’並且位於結合單scp12中3/1G的區域上。 人軍U ’定義次結合單元。次結合單方向為結 ^ =的1/1G的整數倍,γ方向與結合 元中所包含的⑽的攝像單元的輸 線矣-單元的輸出值。®13 +,7個區(斜 作^作為結合單元,形狀料值su_⑽ ’在CP12的3個區(斜線表示)中形 32 201219769 成CP12的次結合 出值,且蔣夂加μC〇s<5 = k_z+(s.cd-cm/).(s_x-cy)}/{ z2 +(sx-c_y)2I 8Ϊηδ = {^.(5·χ-〇.γ)-(3. Φ-〇·ψ)·Ζ J/{ Z2 +(SXC*Y)2} 31 201219769 Here, S is sin2d, C = cos2a. Use this relationship, . ^ For the main two of the birefringence of all the measured pixels of the optical 祺12, the late phase δ can be calculated, whereby the light wind gift a and the characteristic distribution can be measured according to the sample resolution. In the description of the present embodiment, in the description of the present embodiment, the imaging is performed five times in each imaging area, and in order to improve the measurement accuracy, it is also possible to add a total of . -, , such as Η) times the camera. In this case, it is performed as follows (overlap imaging) (one imaging field of view and lower-imaging image). . In the case where the sample platform between the imaging triggers is set to the length LW of the X-direction of the measurement pixel and the amount of movement is performed, the two adjacent sheets "opposite" are photographed. 'There is a need to separate the output (4). For example, the graph = is the case of the second shot in the example of the rising graph 1 〇 (C) when the amount of movement between the 1 imaging trigger is set to 3/1 of L. In the imaging timing, the measurement pixel E1 is located on the region in the combining unit CP11 and is located on the region combining 3/1G in the single scp12. The human U' defines the secondary combining unit. The secondary combined single direction is 1/1G of the node ^= Integer multiple, γ direction and the output value of the transmission line 单元-unit of the camera unit contained in the combination element (10).®13 +, 7 areas (inclined ^ as a combined unit, shape material value su_(10) '3 in CP12 The area (slanted line) is in the shape of 32 201219769 into the sub-combination value of CP12, and Jiang Wei added μ

[數 18] 且形成次輸出值Sii 扣-CPI2作為其輸 C憶於EMU t。而且,此時⑽ 示的内容的值而進行處理。 SUB輪出僅S11CP„ SUB^ 出僅S11_〇p,2[Equation 18] and form the secondary output value Sii - CPI2 as its output C recalls EMU t. Further, at this time, the value of the content shown in (10) is processed. SUB rounds out only S11CP „ SUB^ out only S11_〇p, 2

S1E1+M13 cp,2 -S2El+M14 ai)2 -S3E,) 圖14 (A)〜圖i4 (E)表示將i攝像觸發間的移動 量設定為L的5/10時的攝像區5〇中的情形,圖15表示第 3 -欠攝像時次結合單元的情形。測量像素在Cpii〜cpi5 中進行合計為11次的攝像。11次的輸出值的定義為[數 19] 〇 [數 19] 33 201219769 /ριί ♦第1次的攝像 SUB^i * flS11.CP1, =0.5 K_C1>11 (SOj:, + M,2 tJ)1 ,-81.^, + M,3 m, · S2JU + M,4 a>1, S3 ,:,) ♦第2次的攝像 輸出值S11_cp" =Ktvll_( SO.H •fMucpu.Sl.iii + Muc^-S^ + IVWvn-SU ♦第3次的攝像 SUB^iBfiS11.CP„^=0.5-Ka,u-(SOJ:] + M12_u,ll-S1J:l + M13.u>ll-S2_p,+Mu_a,,,-83^) SUB^ 出做 11 _cp,2=0.5 K_t?1;i· ( SO』,+Μ,2貫 S1 +Μ,3_咖 +M,4_㈣-53^,) ♦第4次的攝像 輸出值 S11 .CP,2 = K__m;! (SO;+Μ·| 它-咖 S1 刀+Μ,3_〇>12. S2J;!+Mi4_CP1;i _ S3JH) ♦第5次的攝像 SUBl^出 iiSI 1_CPi2 _O.S.K-Cpn· (SOj^+Muq^.SI j!l+M13jpl2_S2.El + M14_a>i2'S33i) SUB 輸出 /®S11-CP,3=0.5 _K_fp13. (SOjj + M·! 2_cpi3_S1j;i +Μ, 3^0^3.52^ +Μ,(咖-53^) ♦第6次的攝像 輸出值S11.Cpi3 =Km3_( SO_e! +%2〇>13.31_1!1+^3 + mjj.S3.Ej) ♦第7次的攝像 SUB|^ 出 *(iS11.CP13 =0.5_Κ_(;μιυ_ (SOj;】+ M14—u.1;j.S3j;|) SUB^i^ <(SS1 1_CPI4=0.5.Kj;f>h· (SOjji + MucPM.SIjn + Miajpi./SZj^ + MMjppi.SSjj) ♦第8次的攝像 輸出值S11_CP14 =K__CPM.( SO_ji +M12_Cpl,i-S1jl+M13J;pl4.52^+^4-(^-53^) ♦第9次的攝像 SUB輸出 ΊΙΒ11.cpi4 =0.5 Κ_ςρΐ41 ( SOji + Μ-, 2_cph" jii 3_cpm" + Mmjyi,)·S3j:i) SUBlii 出值S11.ορβ’Ο.δ’Κ^ρβ· ( SO』! +Mi2_ljil5.S1_Ei + M13jpl5.S2_gl + M14._(y|(;.S3j|) ♦第10次的攝像 輸出值S11.CPI5 =Kjpi5_( SOj;| +M12_a>i5.S1jii+M13_a,l5-S2-Ki+Ml4jj.l5'S3_EI) ♦第11次的攝像 記號與記號「〃」是為了明示測量的時序的不同 所標註的記號,並無其他含義。若對該結果中的已知量進 行計算則成為[數20]。此為[數14]的2倍,可解釋為攝像 增加至2倍的結果。 [數 20] 34 201219769 S0-E1 的係數=2 · (K-U^ + K-cpu+K^+k^+uS1E1+M13 cp,2 -S2El+M14 ai)2 -S3E,) Fig. 14 (A) to Fig. 4 (E) show the imaging area 5 when the amount of movement between the i imaging triggers is set to 5/10 of L. In the case of the middle, Fig. 15 shows the case of the third combining unit at the time of the third-under imaging. The measurement pixel is imaged in a total of 11 times in Cpii to cpi5. The output value of 11 times is defined as [number 19] 〇 [number 19] 33 201219769 /ριί ♦ The first shot SUB^i * flS11.CP1, =0.5 K_C1>11 (SOj:, + M,2 tJ) 1 , -81.^, + M,3 m, · S2JU + M,4 a>1, S3 ,:,) ♦ 2nd camera output value S11_cp" =Ktvll_( SO.H •fMucpu.Sl.iii + Muc^-S^ + IVWvn-SU ♦ 3rd camera SUB^iBfiS11.CP„^=0.5-Ka,u-(SOJ:] + M12_u,ll-S1J:l + M13.u>ll-S2_p ,+Mu_a,,,-83^) SUB^ Do 11 _cp, 2=0.5 K_t?1;i· ( SO 』,+Μ, 2 SS1 +Μ,3_咖+M,4_(四)-53^, ♦ 4th camera output value S11 .CP, 2 = K__m;! (SO;+Μ·| It-Curry S1 Knife+Μ, 3_〇>12. S2J;!+Mi4_CP1;i _S3JH) ♦The 5th camera SUBl^ iiSI 1_CPi2 _O.SK-Cpn· (SOj^+Muq^.SI j!l+M13jpl2_S2.El + M14_a>i2'S33i) SUB output/®S11-CP, 3=0.5 _K_fp13. (SOjj + M·! 2_cpi3_S1j;i +Μ, 3^0^3.52^ +Μ,(C-53^) ♦The sixth camera output value S11.Cpi3 =Km3_( SO_e! +%2〇&gt ;13.31_1!1+^3 + mjj.S3.Ej) ♦The seventh camera SUB|^ out*(iS11.CP13 =0.5_Κ_(;μιυ_ (SOj;]+ M14-u.1;j.S3j ;|) SUB^i^ <(S S1 1_CPI4=0.5.Kj;f>h· (SOjji + MucPM.SIjn + Miajpi./SZj^ + MMjppi.SSjj) ♦ 8th camera output value S11_CP14 =K__CPM.( SO_ji +M12_Cpl,i-S1jl+M13J ;pl4.52^+^4-(^-53^) ♦ The 9th camera SUB output ΊΙΒ11.cpi4 =0.5 Κ_ςρΐ41 ( SOji + Μ-, 2_cph" jii 3_cpm" + Mmjyi,)·S3j:i) SUBlii The value of S11.ορβ'Ο.δ'Κ^ρβ· (SO』! +Mi2_ljil5.S1_Ei + M13jpl5.S2_gl + M14._(y|(;.S3j|) ♦10th camera output value S11.CPI5 =Kjpi5_( SOj;| +M12_a>i5.S1jii+M13_a,l5-S2 -Ki+Ml4jj.l5'S3_EI) ♦ The 11th camera symbol and symbol "〃" are symbols marked to indicate the timing of the measurement, and have no other meaning. If the known amount in the result is calculated Then it becomes [number 20]. This is twice the [number 14], which can be interpreted as the result of increasing the imaging by 2 times. [20] 34 201219769 S0-E1 coefficient=2 · (KU^ + K-cpu+ K^+k^+u

SI』丨的係數=2 _ (Κ(3>η_Μ1ϊΐ;ρ丨丨+K 的係數=2 _ (Κ_ υρ)2*Μ12(?]2+κ(:ρϊ:ϊ.ΜιThe coefficient of SI 丨 = 2 _ (Κ(3>η_Μ1ϊΐ; the coefficient of ρ丨丨+K=2 _ (Κ_ υρ)2*Μ12(?]2+κ(:ρϊ:ϊ.Μι

'.CPU ,vi13_CPU 2..α>ι;1+Κ0Ρ14.Μ12^14+Κ(,ΐ15·Μ12^15) S3— —Mi-‘〜二::==:=:::::丨 根據[數18]、[數19]等的分配的關係式,對已知量進 行修正。攝像時序由試樣平台的位置而決定,在全部的攝 像時序中測量像素E由哪個結合單元cp或次結合單元而 事先進行設計。可事先計算出全部的攝像時序中的 結合單元的大小與已知量。 如此,存在各測量像素E中關於i次攝像而分配來自 2個次結合像素的資料的情況,但—個攝像區的測量值依 然可由在該攝像區中職得的所有輸出值的加法運算來進 ,:結:田,量值的數式被歸結為波長板的種類的數 =,以後,與上勒^目_妓 雙折射分丨。 Μ!耵豕的 本實施例中’第1波長板〜第4波長板4 量約請。的波長板,就主軸方位(進相轴方位目 第1波長板45以於圖3中將水平方向 又 20。的軸方位的方式進行配置,第2波長板奶 於第1波長板45而軸方位大致增加%。的方位 板47配置在相對於第2波長板46進而輛 波長 的方位,第4波長板48配置在相對於第^致说 軸方位大致增加36。的方位。另外,偏/皮長板47進而 轴為=純。祕㈣「纽 圖6的流程圖所示的測量準備步驟中,對攝像邙f M1 3_CP1 2 * 35 201219769 元CP為單位來指定偏光傳遞矩陣’以後使用偏光傳遞矩 陣,因而此處不需要很嚴格。大致處於±〇.5。的範圍即可。 本例中’波長板的種類設為4種,但波長板的種類亦 可為4種以上。例如,在使用N種(N為5以上的自然數) 的波長板的情況下’ N個波長板較佳為配置成將主轴方位 以180。均等分配的形式。而且,各個波長板的遲相量較佳 為大致135。。只要為此種構成,則第丨波長板的主軸^位 可為任意的方向。 以上述方式進行配置的理由在於,以聯立方程'.CPU, vi13_CPU 2..α>ι;1+Κ0Ρ14.Μ12^14+Κ(,ΐ15·Μ12^15) S3—Mi-'~2::==:=:::::丨The relational expression of the distribution of [Number 18] and [Number 19] is corrected for the known amount. The imaging timing is determined by the position of the sample stage, and it is determined in advance which combination unit cp or secondary combining unit the pixel E is designed in all imaging timings. The size and known amount of the combined unit in all the imaging timings can be calculated in advance. In this way, there is a case where the data from the two sub-bind pixels are allocated for each of the measurement pixels E, but the measurement values of one imaging area can still be added by the addition of all the output values that are used in the imaging area. In, the knot: the field, the number of the value is attributed to the number of the type of the wave plate =, and later, with the above-mentioned _ 妓 birefringence.本!耵豕 In the present embodiment, the amount of the first wave plate to the fourth wave plate 4 is about. The wavelength plate is arranged such that the first wave plate 45 of the first phase plate 45 is horizontally oriented in the axial direction of FIG. 3, and the second wave plate is milked on the first wave plate 45. The azimuth plate 47 having a substantially increased azimuth is disposed at an orientation of a further wavelength with respect to the second wave plate 46, and the fourth wave plate 48 is disposed at an orientation that is substantially increased by 36 with respect to the axis of the second axis. The long board 47 and the axis are = pure. Secret (4) In the measurement preparation step shown in the flowchart of New Zealand 6, the polarized light transfer matrix is specified for the camera 邙f M1 3_CP1 2 * 35 201219769 yuan CP. The transfer matrix is not required to be very strict here. It is approximately in the range of ±〇.5. In this example, the type of the wavelength plate is set to four, but the type of the wavelength plate may be four or more. For example, In the case of using N kinds of wavelength plates (N is a natural number of 5 or more), the 'N wavelength plates are preferably arranged in such a manner that the main axis orientation is 180. Equally distributed. Moreover, the retardation amount of each wavelength plate is compared. Good is roughly 135. As long as it is such a composition, then Shu ^ spindle wavelength plate bit. Reason for the above-described manner in that arranged in an arbitrary direction to simultaneous equations

(simultaneous equations)來對[數 15]、[數 16]求解時的誤 差為最小。當根據[數15]、[數16]而且求出光的s參數時i 各式的測量值(左邊)中包含附在CCD的輸出值中的雜 訊。這樣,該些雜訊雖藉由多重攝像的平均效果而得以削 減但並不為零,因而作為誤差而包含在最終所算出的s參 數中。附在[數15]、[數16]的左邊的雜訊在電腦16的計算 過程中作為誤差而反映於S參數中的量,由[數15]、[數 16]的變數(S參數)的係數而決定。亦即,若以成為適當 的係數的方式進行組合,則可使該CCD的計算誤差(CCD 的雜訊的影響)為最小。該係數的選擇惟取決於波長板的 規格(遲相量與主軸方位的選擇)。 y知筆者反覆進行模擬的結果為如下。首先,在使用 ^量相_波長板來改變設置齡軸方位而實現偏光狀 ^的變化的情況下’即便增多波長板的數量對於計算精度 亦無大的改善效果。這是因為,即便波長板的數量增多, 36 201219769 而1塊波長板的面積減小,故1塊波長板中的測量值的可 靠度下降。進而,若增加波長板的種類,則無法使用對著 波長板的邊界的結合單元,從而實質上CCD的受光面積 減少。受光面積的減少意味著S/N比的降低。另一方面,' 若波長板的數篁多則亦具有可提高信號中所含的雜訊的遮 斷頻率的效果。考慮到上述兩種情況,波長板的數量的最 小值為決定S參數所需的4種,最大值至多為4〇。 就波長板的種類而言,宜為使用遲相量相同的波長板 並使主軸方位的配置存在差異,當僅以將18〇。除以波長板 的數量,得的角度來使彼此的方位具有差異而進行配置 時,計算誤差為最小。以後將此種設置波長板的設置方法 稱作均4分割,將波長板的個數稱作分割數。 另一方面,在波長板的間隔角度差為45。的情況下, 無論波長板__數量為乡少,計算誤差職大。因波 長板為4種,均等分割角度為4S。,故無法使用計算誤差 應為最小的均等分割角度。因此選擇本實施例的配置。 一另一方面,波長板的遲相量如圖16 及圖16 (B 所示知’無論波長板的數量為乡 最小的區域。波長板的方位的角1分割設定為Ϊ 分割(其中僅4分宝丨丨A 割中均有該傾向 差)。確認4〜4_的所有< ,於波長板的位置,本實施形態中,是將第1波長板 波長板4 5〜4 8設置在遠心透鏡的物體側,但亦可 替其而如圖17所示,將第1波長板〜第4波長板45〜48 37 201219769 設置在CCD相機的正前方。此處,在第丨波長板〜第4 波長板45〜48與CCD相機41之間設置著偏光板49。 在此情況下’具有可減小波長板而降低成本的優點。 另一方面,遠心透鏡來自偏光板之外因而具有求出雙折射 傳遞函數時的誤差增大的缺點。 ^而且,本實施形態中,表示了如下方式,即,使用對 光學膜12整個©進行照明的大小的面照明部14,且使攝 像部15在X方向與γ方向上移動,但亦可如圖18所示, 代替面照明部14而使用照明部101,將寬度細化至足以照 明攝像部15的Y方向的視野範圍,當攝像部ls進行¥方 向上移動時照明部14亦在γ方向上移動。在以此方式構 成的情況下,獲得可降低面照明部14的成本並且可縮短司 托克士參數的取得時間的效果。另外’在使用照明部1〇1 的I♦况下亦可代替5式樣平台13,而使用在載置著光學膜 12的部分形成開口的試樣平台1〇2。 而且’本實施形態中,藉由使1台CCD相機41在X 方向與Υ方向上移動來進行光學膜12整體的光學特性的 測量,但為了進一步縮短測量時間,亦可由多台CCD相 機來進行測量。此時,如圖19所示,針對丨台相機而設置 1台專用的相機CPU,進而,將综合多台相機cpu的主 CPU設置在上位。相機的各結合單元的偏光傳遞矩陣置於 主CPU側,各已知量亦事先被算出。校正測量用及實際測 量用的S參數記憶區域亦置於主CPU内。儲存各相機中的 測量結果的輸出值記憶區域亦置於主CPU側,但在相機 38 201219769 CPU側預先準備輸出值記憶區域的複製σ ( 1 且,伴隨著測量平台的朝向χ方⑽移 重複進行攝像。相機CPU以結合單元為單位( ^ 合單元)而計算輸出值並將該輸出值儲存在輸出值記憶= 域的複製品(repHca”。若朝向χ方向的攝像到達 端則進行相當於各相機的視野範圍的γ方向的移動盥^ 平台的朝向X方向的掃描開始端的移動,但在_動期間 並不進行攝像因而相機CPU並無攝像負荷。利用該負荷^ 最低點而相機CPU進行自輸出值記憶區域的複製至主 CPU側的輸出值記憶區域的·複製。主CPU檢&對輪 出值記憶區域的資料的複製並進行S參數的計算與記憔/ 進而進行光學膜的偏光特性的計算。主CPU在CCD;機 的攝像中亦無攝像負荷’故CPU功率的大部分被分配用 該計算中。如以上所述,相機CPU及主CPU均可高效地 運作,從而可獲得藉由增加相機台數而實現的處理的高速 化效果。 、 本實施形態中,在校正測量及實際測量中,並未對攝 像開始及結束進行詳細敍述,但攝像開始與結束在CCD 相機的整個視野中為無效’因而由此時的攝像所獲得的輸 出值在事後的計算中亦並未得到使用。而且,未對第1波 長板〜第4波長板的邊界進行詳細敍述,而當測量像素e 的攝像在第1波長板〜第4波長板的邊界上進行時,由該 攝像所獲得的輸出值在事後的計算中亦未得到使用。這是 因為’由於位置檢測機構始終指定CCD的各結合單元捕 39 201219769 捉試樣平台13的何處,故可將試樣平台的不需要的位置上 的資訊排除在外。進而,位於波長板的接縫所成像的位置 處的結合單元已事先知曉,因此可不使用該部分。 再者,本發明中,以將鄰接的多個攝像單元結合而成 的結合單元為單位來進行攝像,但只要攝像單元尺寸充分 大且無雜訊的問題則亦可以攝像單元為單位來進行攝像。 【圖式簡單說明】 圖1是說明作為試樣的光學膜的測量像素的說明圖。 圖2是表示結合單元數與CDD相機的輸出值的偏差 (輸出值為12位元的相機中高亮度的374〇/4〇96 料)的關係的曲線。 圖3是攝像部的概略圖。 圖4是用以說明在CCD相機的第i攝像區〜第4 像區對測,像素E進行攝像的說明圖。 圖5疋本發明的光學特性測量裝置的概略圖。 圖6疋表示本發明的作用的流程圖。 圖7,說明面照明部的投光像素的說明圖。 部的SI;表不為了求出偏光傳遞矩陣而使用的光及攝像 的々:r!: ί不父正測量與具有測量中所使用的χ γ位址 己憶區域的1個要素的二維排列構造的概略圖。 CC二! 10 (Β)、圖10 (c) *用以說明藉由 進行攝像的^圖__各結合單絲侧量像素E1 201219769 CCT^ 1 (A)、圖11 (B)、圖11 是用以說明藉由 曰目機的第2攝像區〜第4攝像區内的各結合單元來對 /則1像素El進行攝像的說明圖。 圖U是用以說明將藉由測量像素E1〜測量像素En 的攝像所獲得的輸出值記憶於第1記憶部〜第4記憶部 E11〜En4的說明圖。 圖^是用以說明結合單元cpii及CP12以7: 3的比 例來對測量像素E1進行攝像的情況的說明圖。 圖14 (A)〜圖14 (E)是用以說明測量像素E1由結 合單元CP11〜CP15攝像11次的說明圖。 圖15是用以說明結合單元CP11及CP12以5:5的比 例來對測量像素E1進行攝像的情況的說明圖。 圖16 (A)是表示所使用的波長板的遲相量與計算誤 差量的關係的曲線。 、 圖16 (B)是表不所使用的波長板的遲相量與計算誤 差量的關係的表。 、^、 圖17是表示第1波長板〜第4波長板設置在CCD相 機的正前方的攝像部的概略圖。 圖18是將照明部細化而製作的測量裝置的概略圖。 圖I9是表示包括2台CCD相機的測量裝置的概略圖。 【主要元件符號說明】 10 :光學特性測量裝置 12 :光學膜 13、102 :試樣平台 201219769 ^u I ^il 14 :面照明部 15 :攝像部 16 :電腦 20 : X方向移動機構 22 :基台 22a、22b :軌道 24 : X馬達驅動器 26 : X脈衝計數器 30 :支持台 31 :臂 33 : Y方向移動機構 34 : Z方向移動機構 40 :相機外殼 40a :開口 41 : CCD相機 42 :遠心透鏡 43 : CCD相機旋轉機構 45〜48 :第1波長板〜第4波長板 49 :偏光板(simultaneous equations) to minimize the error when solving [number 15] and [number 16]. When the s parameter of the light is obtained from [number 15] and [number 16], the measured value (left side) of each equation contains the noise attached to the output value of the CCD. Thus, although the noise is reduced by the average effect of the multiple imaging, but is not zero, it is included as an error in the finally calculated s parameter. The noise attached to the left side of [number 15] and [number 16] is reflected in the S parameter as an error in the calculation of the computer 16, and the variable of [number 15] and [number 16] (S parameter) The coefficient is determined. In other words, if the combination is performed as an appropriate coefficient, the calculation error of the CCD (the influence of the noise of the CCD) can be minimized. The choice of this factor depends only on the specification of the wavelength plate (the selection of the late phase and spindle orientation). y know that the author repeated the simulation results as follows. First, in the case where the orientation phase_wavelength plate is used to change the orientation of the orientation axis to achieve a change in the polarization state, the number of the wavelength plates is increased, and the calculation accuracy is not greatly improved. This is because even if the number of wavelength plates is increased, 36 201219769 and the area of one wavelength plate is reduced, the reliability of the measured value in one wavelength plate is lowered. Further, when the type of the wavelength plate is increased, the bonding unit facing the boundary of the wavelength plate cannot be used, and the light receiving area of the CCD is substantially reduced. A decrease in the light receiving area means a decrease in the S/N ratio. On the other hand, 'if the number of wavelength plates is too large, it also has the effect of increasing the blocking frequency of the noise contained in the signal. Considering the above two cases, the minimum value of the number of wavelength plates is the four required to determine the S parameter, and the maximum value is at most 4 〇. As far as the type of the wavelength plate is concerned, it is preferable to use a wave plate having the same amount of hysteresis and to make a difference in the configuration of the spindle orientation when only 18 turns. The calculation error is minimized by dividing the number of the wavelength plates by the angles to make the orientations different from each other. Hereinafter, the method of setting such a wavelength plate will be referred to as a uniform division, and the number of wavelength plates will be referred to as a division number. On the other hand, the difference in the angular interval of the wavelength plate is 45. In the case, regardless of the number of wavelength plates __ for the township, the calculation error is large. There are 4 types of wave plates, and the equal division angle is 4S. Therefore, it is not possible to use the equal division angle that the calculation error should be the smallest. Therefore, the configuration of this embodiment is selected. On the other hand, the retardation amount of the wavelength plate is as shown in Fig. 16 and Fig. 16 (B), regardless of the number of wavelength plates, which is the smallest region of the township. The angle 1 of the orientation of the wavelength plate is set to Ϊ segmentation (only 4 of them) In the case of the division, all the positions of 4 to 4_ are confirmed at the position of the wavelength plate. In the present embodiment, the first wave plate wavelength plates 4 5 to 4 8 are set. The object side of the telecentric lens, but as shown in Fig. 17, the first to fourth wavelength plates 45 to 48 37 201219769 are placed in front of the CCD camera. Here, at the second wavelength plate ~ A polarizing plate 49 is disposed between the fourth wavelength plates 45 to 48 and the CCD camera 41. In this case, 'there is an advantage that the wavelength plate can be reduced to reduce the cost. On the other hand, the telecentric lens comes from the polarizing plate and thus has a request. A disadvantage of the error in the case of the birefringence transfer function is increased. Further, in the present embodiment, the surface illumination unit 14 of the size that illuminates the entire optical film 12 is used, and the imaging unit 15 is provided. Move in the X direction and the γ direction, but as shown in Figure 18, instead of the surface The illumination unit 101 uses the illumination unit 101 to refine the width to a range of the field of view in the Y direction sufficient to illuminate the imaging unit 15, and the illumination unit 14 also moves in the γ direction when the imaging unit ls moves in the ¥ direction. In the case of the configuration, it is possible to reduce the cost of the surface illumination unit 14 and to shorten the acquisition time of the Stokes parameter. In addition, the 'Model 5 can be replaced by the I type in the case of using the illumination unit 1〇1. The sample stage 1〇2 in which the opening is formed in the portion where the optical film 12 is placed is used. In the present embodiment, the entire optical film 12 is moved by moving one CCD camera 41 in the X direction and the x direction. Measurement of optical characteristics, but in order to further shorten the measurement time, it is also possible to measure by a plurality of CCD cameras. At this time, as shown in Fig. 19, one dedicated camera CPU is provided for the down camera, and further, a plurality of integrated cameras are integrated. The main CPU of the camera cpu is set in the upper position. The polarization transfer matrix of each combination unit of the camera is placed on the main CPU side, and each known amount is also calculated in advance. The S parameter memory area for calibration measurement and actual measurement is also placed in the main C. In the PU, the output value memory area for storing the measurement results in each camera is also placed on the main CPU side, but the copy σ of the output value memory area is prepared in advance on the CPU side of the camera 38 201219769 (1, and along with the orientation of the measurement platform) (10) Shifting is repeated. The camera CPU calculates the output value in units of combined units (^ unit) and stores the output value in the output value memory = domain replica (repHca). If the camera reaches the 到达 direction, the camera arrives at the end. The movement in the γ direction corresponding to the field of view of each camera is performed so as to move toward the scanning start end in the X direction. However, the imaging is not performed during the _ movement period, and thus the camera CPU does not have an imaging load. By the lowest point of the load ^, the camera CPU performs copying from the output value memory area to the copy of the output value memory area on the main CPU side. The main CPU checks & copies the data of the round-out memory area and calculates and records the S-parameters and further calculates the polarization characteristics of the optical film. The main CPU does not have an image pickup load in the CCD or the camera. Therefore, most of the CPU power is allocated for this calculation. As described above, the camera CPU and the main CPU can operate efficiently, and the speedup of processing realized by increasing the number of cameras can be obtained. In the present embodiment, in the correction measurement and the actual measurement, the start and end of the imaging are not described in detail, but the imaging is started and ended in the entire field of view of the CCD camera. Values have not been used in the calculations after the fact. Further, the boundary between the first to fourth wavelength plates is not described in detail, and when the imaging of the measurement pixel e is performed on the boundary between the first to fourth wavelength plates, the output value obtained by the imaging is obtained. It has not been used in the calculations after the event. This is because the position detection mechanism always specifies the combination of the CCD unit. Where the capture platform 13 is captured, the information on the undesired position of the sample platform can be excluded. Further, the bonding unit at the position where the seam of the wavelength plate is imaged is known in advance, and thus the portion may not be used. Further, in the present invention, imaging is performed in units of a combination unit in which a plurality of adjacent imaging units are combined, but imaging can be performed in units of imaging units as long as the size of the imaging unit is sufficiently large and there is no noise. . BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is an explanatory view for explaining measurement pixels of an optical film as a sample. Fig. 2 is a graph showing the relationship between the number of combined units and the output value of the CDD camera (the output is a high-brightness 374 〇 / 4 〇 96 material in a 12-bit camera). 3 is a schematic view of an imaging unit. 4 is an explanatory view for explaining imaging of the pixel E in the ith to fourth image regions of the CCD camera. Fig. 5 is a schematic view showing an optical characteristic measuring apparatus of the present invention. Figure 6A is a flow chart showing the action of the present invention. Fig. 7 is an explanatory view showing a light projecting pixel of the surface illumination unit. The SI of the part; the light and the image used for the determination of the polarization transfer matrix: r!: ί, the parent is measuring the two-dimensionality of one element with the χ γ address used in the measurement. A schematic diagram of the arrangement structure. CC two! 10 (Β), Fig. 10 (c) * Used to explain the image by means of imaging __ each combined with the monofilament side pixel E1 201219769 CCT^ 1 (A), Figure 11 (B), Figure 11 is used An explanatory diagram for imaging an image of one pixel E1 by each of the combining units in the second to fourth imaging regions of the eyepiece will be described. FIG. 9 is an explanatory diagram for explaining that the output values obtained by the imaging of the measurement pixels E1 to the measurement pixels En are stored in the first to fourth memory units E11 to En4. Fig. 2 is an explanatory diagram for explaining a case where the measurement unit cpii and the CP12 image the measurement pixel E1 at a ratio of 7:3. Figs. 14(A) to 14(E) are explanatory diagrams for explaining that the measurement pixel E1 is imaged 11 times by the combining units CP11 to CP15. Fig. 15 is an explanatory diagram for explaining a case where the combining unit CP11 and the CP12 image the measurement pixel E1 at a ratio of 5:5. Fig. 16 (A) is a graph showing the relationship between the amount of late phase of the wavelength plate used and the amount of calculation error. Fig. 16 (B) is a table showing the relationship between the retardation amount of the wavelength plate used and the calculation error amount. And Fig. 17 is a schematic view showing an imaging unit in which the first to fourth wavelength plates are provided directly in front of the CCD camera. FIG. 18 is a schematic view of a measuring device produced by refining an illuminating unit. Fig. I9 is a schematic view showing a measuring device including two CCD cameras. [Description of main component symbols] 10: Optical property measuring device 12: Optical film 13, 102: Sample platform 201219769 ^u I ^il 14 : Surface illumination unit 15 : Imaging unit 16 : Computer 20 : X-direction moving mechanism 22 : Base Tables 22a, 22b: Track 24: X Motor Driver 26: X Pulse Counter 30: Support Table 31: Arm 33: Y-direction moving mechanism 34: Z-direction moving mechanism 40: Camera housing 40a: Opening 41: CCD camera 42: Telecentric lens 43 : CCD camera rotating mechanism 45 to 48: 1st wave plate to 4th wave plate 49: polarizing plate

50〜53 :第1攝像區〜第4攝像區 55 : CCD 70 :已知的光 71 :基準投光器 71a : XY移動機構 42 201219769 72 :平行單色光源 101 :照明部 CP1〜CP4 :第1結合單元〜第4結合單元 CP11〜CP15 :第1結合單元 CP21〜CP25 :第2結合單元 CP31〜CP35 :第3結合單元 CP41〜CP45 :第4結合單元 E、E卜E2、En :測量像素 E11〜En4 :第1記憶部〜第4記憶部 EM1 :測量像素E1用的輸出值記憶區域 EM11 :第1攝像區用行/輸出值記憶區域 EM12 :第2攝像區用行/輸出值記憶區域 EM13 :第3攝像區用行/輸出值記憶區域 EM14:第4攝像區用行/輸出值記憶區域 EM21〜EMn4 :測量像素E2〜測量像素En用的輸出 值記憶區域 L:投光像素 PL1 :偏光板 QWP1 : 1/4波長板 X、Y、Z :方向 4350 to 53 : 1st to 4th imaging areas 55 : CCD 70 : Known light 71 : Reference light projector 71a : XY moving mechanism 42 201219769 72 : Parallel monochromatic light source 101 : Illumination parts CP1 to CP4 : 1st combination Unit to fourth combining unit CP11 to CP15: first combining unit CP21 to CP25: second combining unit CP31 to CP35: third combining unit CP41 to CP45: fourth combining unit E, E, E2, En: measuring pixel E11~ En4 : 1st memory unit to 4th memory unit EM1 : Output value memory area EM11 for measurement pixel E1 : Line/output value memory area EM12 for 1st imaging area : Line/output value memory area EM13 for 2nd imaging area : Line/output value memory area EM14 for the third imaging area: Line/output value memory area EM21 to EMn4 for the fourth imaging area: Output value for measurement pixel E2 to measurement pixel En Memory area L: Projection pixel PL1: Polarizer QWP1 : 1/4 wavelength plate X, Y, Z: direction 43

Claims (1)

201219769 »* 〆〆 一 · 七、申請專利範圍: 1. 一種光學特性測量裝置,其特徵在於包括: 投光機構,對測量對象照射特定的偏光照明; 、攝像機構,將波長板在第1方向上對齊而排列,在上 述第1方向上實現至少4種偏光特性,且包括影像感測器, 該影像感測H被為對透過上述波長板的光個別地進行 攝像的攝像區;以及 掃描機構,使上述攝像機構相對於上述測量對象而相 對地在上述第1方向上移動;且 伴Ik著藉由上述掃描機構而進行的上述攝像機構的相 :移動ίΐ字對上述測量對象的各測量像素在各攝像區中進 订f次攝像所獲得的輸出值針對每侧量像素進行相加, 該攝像區的測量值,並根據自各攝像區收集的測量 值來鼻出每烟量像素減士參數。 2.如中凊專利範圍第i項所述之光學特性測量裝置, 其中 上述影像感測器彻包含多個像素的攝像元件來進行 ’ 在,像時’針對每個將鄰接的多個像素加以結合 成的結合單元而輸出一個輸出值。 3·如_ %專利範圍第2項所述之光學特性測量裝置, 陳,坊:=則以上述結合單元為單位而求出偏光傳遞矩 #遞矩陣表示自上述測量㈣發出的光與自上 ,象感測11的各攝像區輸出的輸出值的關係,使用上述 201219769 偏光傳遞矩陣與自上述各攝像區輸出的輸出值來求出上述 測量對象的司托克士參數。 4. 如申請專利範圍第2項至第3項中任一項所述之光 學特性測量裝置,其中 上述影像感測器跨及鄰接的2個結合單元而對規定的 測量像素進行攝像。 5. 如申請專利範圍第4項所述之光學特性測量裝置, 其中 ^述規定的測量像素的輸出值包含:將一結合單元中 對規疋的測里像素進行攝像後的像素所佔的比例與該像素 的,出平均值相乘而得的值’及另—結合單元中^規定的 二:: 象素進仃攝像後的像素所佔的比例與該像素的輸出平 均值相乘所得的值。 學二ΐπίΓ1項至第2項中任-項所述之光 少4 = 上述測量對象或上述攝像機構中的至 上述攝像機構中的至少其中—個在與上i 為直主角且與上述測量對象平行的第2方向上移動。 其中.專利範圍第6項所述之光學特性測量襄置, 至少:ΐ述構使上述測量對象或上述攝像機構中的 至夕其卜個在與上述第i方向為直角且與上述測量= 45 201219769 平行的上述第2方向上移動的情況下,維持著上述攝像機 構與上述投光機構的上述第2方向上的位置關係而使其移 動,且將上述投光機構的偏光照射範圍縮小至上述攝像機 構的可照射上述第2方向的視野範圍的程度。 8.如申請專利範圍第1項至第2項中任一項所述之光 學特性測量裝置,其中 藉由上述波長板而實現的偏光的種類的數 40。 , 9·如申請專利範圍第1項至第2項中任-項所述之光 學特性測量裝置,其中 驟状九 上述波長板具有與遲相量為70。至170。或19〇。至290。 中任一個波長板相同的遲相效果。 ίο. -種光學特性測量方法,其特徵在於: 卞j用攝像贿,該攝賴構賴長板在帛1方向上對 包括影傻咸^ 向上實現至少4種偏光特性,且 ’棘像感測器被劃分為對透過上述波長 板的先個舰騎攝像的攝像區, 敗上这波長 =攝像機構相對於測量對象而相對地在 上移動,將上述測量對象的 =方向 該攝像區的測量值= ’並針對每個測量像素算出 量值而針對每個測量像^ =各攝像區同樣地收集的測 司托克士參數。像素Μ自上制量縣發出的光的 46201219769 »* 〆〆一· VII. Patent application scope: 1. An optical characteristic measuring device, comprising: a light projecting mechanism for illuminating a specific polarized illumination of a measuring object; and a camera mechanism for arranging the wavelength plate in the first direction Arranging up and aligned, achieving at least four kinds of polarization characteristics in the first direction, and including an image sensor, the image sensing H being an imaging area for individually capturing light transmitted through the wavelength plate; and a scanning mechanism And causing the imaging unit to move relative to the measurement target in the first direction; and the phase of the imaging mechanism by the scanning mechanism: moving each of the measurement pixels of the measurement target The output values obtained by ordering the f-times in each imaging area are added for each side pixel, the measured value of the imaging area, and the parameters of the smoke pixels are output according to the measured values collected from the respective imaging areas. . 2. The optical characteristic measuring apparatus according to the above-mentioned item, wherein the image sensor includes a plurality of pixel image pickup elements for performing "in, image time" for each of a plurality of adjacent pixels. An integrated output value is output by combining the combined units. 3. The optical characteristic measuring device according to item 2 of the _% patent range, Chen, Fang: = then obtains the polarization transmission moment in units of the above-mentioned combined unit. The meta-matrix represents the light emitted from the above measurement (4) and The relationship between the output values of the respective imaging regions outputted by the sensing 11 is obtained by using the 201219769 polarization transfer matrix and the output values output from the respective imaging regions to obtain the Stokes parameters of the measurement target. 4. The optical characteristic measuring apparatus according to any one of claims 2 to 3, wherein the image sensor images a predetermined measurement pixel across two adjacent bonding units. 5. The optical characteristic measuring device according to claim 4, wherein the output value of the specified measuring pixel comprises: a proportion of pixels in a combined unit that images the measured pixel in the combined unit The value obtained by multiplying the average value of the pixel and the second specified by the pixel in the combination unit: the ratio of the pixel occupied by the pixel to the pixel is multiplied by the average value of the output of the pixel. value. The light of the above-mentioned measurement object or at least one of the above-mentioned image pickup mechanisms to the above-mentioned image pickup mechanism is a straight principal angle with the above i and the above-mentioned measurement object Move in parallel in the second direction. Wherein the optical characteristic measuring device described in item 6 of the patent scope, at least: the configuration is such that the measurement object or the imaging device is at right angles to the ith direction and the measurement is 45 When the second direction is moved in parallel in the second direction, the positional relationship between the imaging unit and the light projecting unit in the second direction is maintained and moved, and the polarization range of the light projecting unit is reduced to the above. The extent to which the imaging unit can illuminate the field of view in the second direction. 8. The optical property measuring apparatus according to any one of claims 1 to 2, wherein the number of types of polarized light achieved by the wavelength plate is 40. 9. The optical characteristic measuring apparatus according to any one of the items 1 to 2, wherein the wavelength plate has a phase of 70 and a retardation. To 170. Or 19〇. To 290. The same retardation effect of any one of the wavelength plates. Ίο. - A method for measuring optical characteristics, characterized in that: 卞j uses a camera to bribe, and the photo-reconstruction longboard achieves at least four kinds of polarization characteristics in the direction of 帛1, including the shadow silly ^, and the 'thorny image The detector is divided into an imaging area for the first ship riding through the above-mentioned wavelength plate, and this wavelength is lost = the imaging mechanism relatively moves upward with respect to the measurement object, and the measurement of the imaging area of the measurement object Value = 'and the magnitude of each measured pixel is calculated for each measurement image ^ = each camera area is similarly collected for the Toscank parameter. Pixels from the light emitted by the Shangliang County 46
TW100141596A 2010-11-15 2011-11-15 Optical property measuring device and method thereof TWI532985B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010255041A JP5715381B2 (en) 2010-11-15 2010-11-15 Optical characteristic measuring apparatus and method

Publications (2)

Publication Number Publication Date
TW201219769A true TW201219769A (en) 2012-05-16
TWI532985B TWI532985B (en) 2016-05-11

Family

ID=46083894

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100141596A TWI532985B (en) 2010-11-15 2011-11-15 Optical property measuring device and method thereof

Country Status (5)

Country Link
JP (1) JP5715381B2 (en)
KR (1) KR101650226B1 (en)
CN (1) CN103210294A (en)
TW (1) TWI532985B (en)
WO (1) WO2012066959A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI512277B (en) * 2013-01-04 2015-12-11 Taiwan Power Testing Technology Co Ltd Monitor inspection equipment
CN103616167B (en) * 2013-12-05 2016-03-09 福州大学 A kind of Automatic detection system for luminance uniformity of backlight source
JP7365263B2 (en) * 2019-05-09 2023-10-19 ローム株式会社 Illuminance sensors, electronic devices and two-dimensional image sensors

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05346397A (en) 1992-06-15 1993-12-27 New Oji Paper Co Ltd Double refraction measuring apparatus
JP2000009659A (en) * 1998-06-19 2000-01-14 Fuji Photo Film Co Ltd Surface inspection method and device
JP2003242511A (en) * 2002-02-19 2003-08-29 Nec Corp Similar periodic pattern inspection device and inspection method
JP3962605B2 (en) * 2002-02-28 2007-08-22 キヤノン株式会社 Sheet processing apparatus and image forming apparatus
JP2006058055A (en) * 2004-08-18 2006-03-02 Fuji Photo Film Co Ltd Birefringence measurement device and stokes parameter calculation program
JP4781746B2 (en) * 2005-04-14 2011-09-28 株式会社フジクラ Optical fiber birefringence measuring method and measuring apparatus, and optical fiber polarization mode dispersion measuring method
JP5118311B2 (en) 2006-03-27 2013-01-16 株式会社フォトニックラティス Measuring device for phase difference and optical axis orientation
JP2009068922A (en) * 2007-09-11 2009-04-02 Canon Inc Measurement apparatus, exposure apparatus, and device fabrication method
JP2009139356A (en) * 2007-12-04 2009-06-25 Photonic Lattice Inc Polarized light measuring device
JP2009168795A (en) * 2007-12-21 2009-07-30 Sharp Corp Polarization detecting device, polarization detecting element, and polarization detecting method
JP2009229279A (en) 2008-03-24 2009-10-08 Fujifilm Corp Birefringence measuring apparatus and method
JP2009293957A (en) * 2008-06-02 2009-12-17 Toshiba Corp Pattern defect inspection apparatus and inspection method
CN101762891B (en) * 2008-12-23 2011-12-28 财团法人工业技术研究院 Optical property measurement system of liquid crystal unit and method thereof

Also Published As

Publication number Publication date
KR20130124324A (en) 2013-11-13
JP5715381B2 (en) 2015-05-07
TWI532985B (en) 2016-05-11
KR101650226B1 (en) 2016-08-22
JP2012107893A (en) 2012-06-07
WO2012066959A1 (en) 2012-05-24
CN103210294A (en) 2013-07-17

Similar Documents

Publication Publication Date Title
US9369700B2 (en) Systems and methods for lens characterization
JP6124184B2 (en) Get distances between different points on an imaged subject
JP5273408B2 (en) 4D polynomial model for depth estimation based on two-photo matching
WO2008073700A3 (en) Method for assessing image focus quality
TW201143396A (en) Creating an image using still and preview
JP2011182397A (en) Method and apparatus for calculating shift length
TW201219769A (en) Optical property measuring device and method thereof
JP2012251997A (en) Three-dimensional measurement device, method for controlling three-dimensional measurement device and program
JP2004286465A (en) Method for measuring object by image and imaging apparatus
JP5599849B2 (en) Lens inspection apparatus and method
CN106197366A (en) The treating method and apparatus of range information
TW201231914A (en) Surface shape evaluating method and surface shape evaluating device
Chapman et al. Predicting pixel defect rates based on image sensor parameters
Johnsen et al. Segmentation, retardation and mass approximation of birefringent particles on a standard light microscope
TW201007162A (en) Optical carriage structure of inspection apparatus and its inspection method
CN107392955A (en) A kind of depth of field estimation device and method based on brightness
JP2004134861A (en) Resolution evaluation method, resolution evaluation program, and optical apparatus
US10607370B1 (en) Coarse to fine calibration parameter validation and temperature mitigation
TWI238885B (en) Method for three-dimension measurement
TWI375787B (en) Electrical device and method for measuring size of object
TW200849975A (en) Equipment and method for examining image recorded apparatus
Yang et al. Error rate of automated calculation for wound surface area using a digital photography
WO2022061899A1 (en) Infrared image processing method and device, and infrared camera
Kim Study on Comparative Analysis of Camera Calibration.
JP2009236678A (en) Device and method for measuring phase difference