TW202028722A - Method, program and device for determining quality of sample using fluorescence image - Google Patents

Method, program and device for determining quality of sample using fluorescence image

Info

Publication number
TW202028722A
TW202028722A TW108142311A
Authority
TW
Taiwan
Prior art keywords
image data
aforementioned
fluorescent
sample
wavelength
Prior art date
Application number
TW108142311A
Other languages
Chinese (zh)
Inventor
内藤啓貴
蔦瑞樹
Original Assignee
日商日本煙草產業股份有限公司
國立研究開發法人農業 食品產業技術總合研究機構
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日商日本煙草產業股份有限公司, 國立研究開發法人農業 食品產業技術總合研究機構 filed Critical 日商日本煙草產業股份有限公司
Publication of TW202028722A

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64Fluorescence; Phosphorescence

Landscapes

  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

An object of the present invention is to determine the quality of samples with high accuracy and high efficiency while taking fluorescence photobleaching into account. Provided is a method for determining the quality of a sample using fluorescence images, which includes a fluorescence image acquisition step (S01), a machine learning step (S02), and a quality class determination step (S03), wherein in step S01 the acquisition time of the fluorescence image is set within a predetermined period P (P > 0) from a time T (T ≥ 0) that is set in consideration of fluorescence photobleaching, with the start of excitation-light irradiation as the origin.

Description

使用螢光畫像的試樣的品質判定方法、程式及裝置 (Method, program, and device for determining sample quality using fluorescence images)

The present invention relates to a method, a program, and a device for determining the quality of a sample using fluorescence images, and more particularly to a quality determination method, program, and device suited to identifying the components contained in a sample.

Conventionally, hyperspectral imaging techniques using continuous spectra, or multispectral imaging techniques using discrete spectra, have been used to evaluate the quality of samples and the like. In addition, a "fluorescence fingerprint imaging technique" has been proposed that uses a "fluorescence fingerprint" as the "spectrum" of such spectral images (see, for example, Patent Document 1 and Non-Patent Document 1).

A fluorescence fingerprint, also called an excitation-emission matrix (EEM), is obtained by irradiating a test sample containing fluorescent substances with excitation light while varying the excitation wavelength in steps, measuring the intensity of the light (fluorescence) emitted from the sample at predetermined fluorescence wavelengths, plotting each measurement as a point in a three-dimensional space whose three orthogonal axes are the excitation wavelength (λEx), the fluorescence wavelength (measurement wavelength) (λEm), and the fluorescence intensity (IEx,Em), and visualizing the resulting set of points (see Fig. 2).
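As a minimal illustration (not part of the patent text), an EEM can be stored as a 2-D intensity matrix indexed by the excitation and emission wavelength grids; the grids and the Gaussian peak below are hypothetical stand-ins for a real measurement:

```python
import numpy as np

# Hypothetical wavelength grids (nm); the excitation wavelength is varied in steps.
ex_wavelengths = np.arange(340, 430, 10)   # excitation axis, 9 values
em_wavelengths = np.arange(420, 710, 10)   # emission axis, 29 values

# The EEM: one fluorescence intensity I(lambda_Ex, lambda_Em) per wavelength pair.
eem = np.zeros((len(ex_wavelengths), len(em_wavelengths)))

# Fill with a synthetic Gaussian peak standing in for measured intensities.
for i, ex in enumerate(ex_wavelengths):
    for j, em in enumerate(em_wavelengths):
        eem[i, j] = np.exp(-((ex - 380) ** 2 + (em - 520) ** 2) / 5000.0)

# Each entry corresponds to one point (lambda_Ex, lambda_Em, I) in the
# three-dimensional space described in the text.
peak = np.unravel_index(np.argmax(eem), eem.shape)
print(ex_wavelengths[peak[0]], em_wavelengths[peak[1]])  # 380 520
```

Visualizing this matrix as contours yields exactly the two-dimensional representation referred to in the text.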

A fluorescence fingerprint can be rendered as a three-dimensional graph (see Fig. 3) or a two-dimensional graph (see Fig. 4) by representing the fluorescence intensity at each point with contour shapes, color distributions, and the like. Because a fluorescence fingerprint expresses a pattern inherent to the test sample, carrying a large amount of three-dimensional information, it is well suited to various identification and quantification tasks.

As shown in Fig. 5, a hyperspectral image of fluorescence fingerprints contains positional information in addition to the fluorescence fingerprint information described above, so fluorescence fingerprint imaging can not only identify the components in a sample but also reveal their distribution. Each "wavelength condition" in Fig. 5 corresponds to a combination of an excitation wavelength (λEx) and a fluorescence wavelength (λEm), and the number of wavelength conditions equals the number of such combinations.

Non-Patent Document 1 discloses, among other things, the use of methods such as quadratic programming to obtain an image visualizing a specific component in a sample from fluorescence fingerprint images; such methods can be regarded as one solution to the classification problem for hyperspectral images.

In general, for the classification problem of hyperspectral images, it is known that deep learning, a form of machine learning, is effective (see, for example, Non-Patent Document 2).

Various methods of deep learning, a form of machine learning, are known (see, for example, Non-Patent Document 3). Among them, for deep learning suited particularly to image recognition, it is known that convolutional neural networks (CNNs) are effective (see, for example, Non-Patent Documents 3 and 4).

[Prior Art Documents]

[Patent Documents]

[Patent Document 1] JP 2012-98244 A

[Non-Patent Documents]

[Non-Patent Document 1] 粉川美踏 et al., "Development of fluorescence fingerprint imaging technology" (in Japanese), Journal of the Japanese Society for Food Science and Technology, Vol. 62, No. 10, pp. 477-483 (2015)

[Non-Patent Document 2] Li, Y., et al., "Spectral-Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network", Remote Sens. 2017, 9(1), 67 (http://dx.doi.org/10.3390/rs9010067)

[Non-Patent Document 3] 岡谷貴之 (Takayuki Okatani), "深層学習 (Deep Learning)", Kodansha, April 7, 2015

[Non-Patent Document 4] 原田達也 (Tatsuya Harada), "画像認識 (Image Recognition)", Kodansha, May 24, 2017

As described above, fluorescence fingerprint imaging using hyperspectral images of fluorescence fingerprints can identify not only the components in a sample but also their distribution. When handling fluorescence fingerprints, however, a problem peculiar to fluorescence, namely fluorescence photobleaching, must be taken into account.

Fluorescence photobleaching is a phenomenon in which the structure of a fluorescent dye molecule changes because of its structural instability after excitation by light, so that the molecule can no longer sustain the excited state and stops emitting fluorescence. When such photobleaching occurs, observation may become difficult depending on the degree of fading.

Conventional fluorescence fingerprint imaging techniques, however, have not adequately considered how to cope with such photobleaching.

The present invention was made to solve this problem. Its aim is to determine the quality of a sample with high accuracy and high efficiency by applying, to the processing of fluorescence images, a machine-learning approach to the classification problem that takes the photobleaching phenomenon into account.

Embodiments of the present invention include, by way of example, the following aspects.

(Aspect 1)

A method comprising:

a fluorescence image data acquisition step of acquiring fluorescence images as fluorescence image data, one image per combination of excitation wavelength and fluorescence wavelength, each fluorescence image being obtained by irradiating a sample containing known quality classes with excitation light of a predetermined excitation wavelength, based on the intensity of the reflected light of a predetermined fluorescence wavelength obtained within a predetermined period P (P > 0) starting from a time T (T ≥ 0) that is set in consideration of fluorescence photobleaching, with the start of excitation-light irradiation as the origin;

a machine learning step of creating, from the fluorescence image data, input image data having a predetermined number of channels, and performing machine learning on a computer using the input image data as training data, thereby constructing a classifier that assigns the most appropriate quality class to the components contained in the sample; and

a quality class determination step of applying the classifier to input image data obtained from the fluorescence image data of a sample containing unknown quality classes, and determining the quality classes contained in that sample.

(Aspect 2)

The method according to aspect 1, wherein machine learning that also considers the shapes of objects present in the fluorescence image data is performed, so that the shapes are reflected in the determination of the quality classes contained in the sample.

(Aspect 3)

The method according to aspect 1 or 2, wherein a plurality of the times T (0 ≤ T1 < … < Tn) are set; input image data having a predetermined number of channels, created from the fluorescence image data obtained at each time Ti (0 ≤ i ≤ n), are used as training data for the respective times to perform machine learning on a computer, thereby constructing classifiers that assign, at each of the plural times, the most appropriate quality class to the components contained in the sample; and, when determining a quality class, the classifier suited to that quality class is selected to determine the quality classes contained in the sample.

(Aspect 4)

The method according to aspect 1 or 2, wherein a plurality of the times T (0 ≤ T1 < … < Tn) are set; the fluorescence image data obtained at each time Ti (0 ≤ i ≤ n) are accumulated in time series to create cumulative fluorescence image data; and input image data having a predetermined number of channels, created from the cumulative fluorescence image data, are used as training data to perform machine learning on a computer, thereby constructing a classifier that assigns the most appropriate quality class to the components contained in the sample.

(Aspect 5)

The method according to any one of aspects 1 to 4, wherein the time T is a time before fading of the fluorescence begins.

(Aspect 6)

The method according to any one of aspects 1 to 5, wherein the time T and the period P are set depending on the quality class, thereby improving the accuracy of determining that quality class.

(Aspect 7)

The method according to any one of aspects 1 to 6, further comprising a visible image data acquisition step of acquiring visible image data, wherein input image data having a predetermined number of channels are created from the fluorescence image data and the visible image data, and machine learning is performed on a computer using the input image data as training data.

(Aspect 8)

The method according to aspect 7, wherein the visible image data consist of R (wavelength: around 680 nm), G (wavelength: around 560 nm), and B (wavelength: around 450 nm).
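The combined input of aspects 7 and 8 amounts to stacking the fluorescence channels and the three visible (R, G, B) channels along the channel axis. A minimal sketch with hypothetical image size and channel count (not values from the patent):

```python
import numpy as np

H, W = 64, 64       # hypothetical image height and width in pixels
K_FLUOR = 6         # e.g. six excitation-emission wavelength pairs

# Random stand-ins for real measurements, channel-first layout.
fluor = np.random.rand(K_FLUOR, H, W)   # fluorescence image channels
rgb = np.random.rand(3, H, W)           # R (~680 nm), G (~560 nm), B (~450 nm)

# Concatenate along the channel axis to form one multi-channel input image.
x = np.concatenate([fluor, rgb], axis=0)
print(x.shape)  # (9, 64, 64)
```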

(Aspect 9)

The method according to any one of aspects 1 to 8, wherein the excitation wavelengths and fluorescence wavelengths used are excitation and fluorescence wavelengths suited to the detection of polyphenols and chlorophyll.

(Aspect 10)

The method according to any one of aspects 1 to 9, wherein the sample is a tobacco raw material, and the quality classes are lamina of the Burley type, lamina of the yellow type, lamina of the Oriental type, stem (vein), tobacco sheet, and puffed dried material (puff).

(Aspect 11)

A program that causes a computer to execute the method according to any one of aspects 1 to 10.

(Aspect 12)

A device comprising:

machine learning means for acquiring fluorescence image data, one image per combination of excitation wavelength and fluorescence wavelength, for a sample containing known quality classes, creating input image data having a predetermined number of channels from the fluorescence image data, and performing machine learning using the input image data as training data, thereby constructing a classifier that assigns the most appropriate quality class to the components contained in the sample, the fluorescence image data being obtained from the intensity of the reflected light of a predetermined fluorescence wavelength obtained within a predetermined period P (P > 0) starting from a time T (T ≥ 0) set in consideration of fluorescence photobleaching, with the start of excitation-light irradiation as the origin; and

quality class determination means for applying the classifier to input image data obtained from the fluorescence image data of a sample containing unknown quality classes, and determining the quality classes contained in that sample.

(Aspect 13)

The device according to aspect 12, wherein machine learning that also considers the shapes of objects present in the fluorescence image data is performed, so that the shapes are reflected in the determination of the quality classes contained in the sample.

(Aspect 14)

The device according to aspect 12 or 13, wherein input image data having a predetermined number of channels, created from fluorescence image data whose acquisition times T differ from one another, are used as training data for the respective acquisition times to perform machine learning on a computer, thereby constructing classifiers that assign the most appropriate quality class to the components contained in the sample; and, when determining a quality class, the classifier suited to that quality class is selected to determine the quality classes contained in the sample.

(Aspect 15)

The device according to aspect 12 or 13, wherein fluorescence image data whose acquisition times T differ from one another are accumulated in time series to create cumulative fluorescence image data, and input image data having a predetermined number of channels, created from the cumulative fluorescence image data, are used as training data to perform machine learning on a computer, thereby constructing a classifier that assigns the most appropriate quality class to the components contained in the sample.

(Aspect 16)

The device according to any one of aspects 12 to 15, wherein visible image data are also acquired, input image data having a predetermined number of channels are created from the fluorescence image data and the visible image data, and machine learning is performed using the input image data as training data.

(Aspect 17)

The device according to aspect 16, wherein the visible image data consist of R (wavelength: around 680 nm), G (wavelength: around 560 nm), and B (wavelength: around 450 nm).

In the above aspects, a value "around" a wavelength means a value within a predetermined range above and below the center wavelength. This range can be set to, for example, 5 to 10 nm.

The term "program" refers to a data processing method described in any language or notation, and may take the form of source code, binary code, or the like. A "program" may be a single stand-alone unit, may be distributed across a plurality of modules or libraries, or may achieve its function in cooperation with other existing programs.

A "device" may be implemented in hardware, but may also be implemented as a combination of function-realizing means that realize various functions through computer software. The function-realizing means may include, for example, program modules.

The quality classes of tobacco in aspect 10 will be described later.

According to the present invention, high-accuracy and efficient quality determination of a sample that takes the fluorescence photobleaching phenomenon into account can be achieved.

600: fluorescence imaging system

610: spectroscopic illumination device

612: xenon lamp light source

614, 622: band-pass filters

620: spectroscopic imaging device

624: imaging device

630: sample

710: image of the s-th layer

720: filter

730: image of the (s+1)-th layer

1110: machine learning means

1115: trained classifier

1120: quality class determination means

S01 to S03: steps

Fig. 1 is a flowchart outlining one embodiment of the method of the present invention.

Fig. 2 is an explanatory diagram outlining the spectrum of fluorescence emitted from a measurement object when the object is irradiated with excitation light.

Fig. 3 is a contour graph showing an example of a fluorescence fingerprint in three dimensions.

Fig. 4 is a contour graph showing an example of a fluorescence fingerprint in two dimensions.

Fig. 5 is an explanatory diagram schematically showing an example of a hyperspectral image and a multispectral image of fluorescence fingerprints.

Fig. 6 is a diagram showing an example of a fluorescence imaging system.

Fig. 7 is an explanatory diagram showing an example of processing in a convolutional layer.

Fig. 8 is an explanatory diagram showing the structure of AlexNet.

Fig. 9 is an explanatory diagram showing the structure of GoogLeNet.

Fig. 10 is an explanatory diagram showing the structure of SegNet.

Fig. 11 is a block diagram outlining one embodiment of the device of the present invention.

Fig. 12A is a visible image of a tobacco raw material sample taken under white light.

Fig. 12B is a fluorescence image of a tobacco raw material sample obtained by irradiating it with excitation light without using a filter.

Fig. 13 shows fluorescence images of a tobacco raw material sample obtained with excitation-fluorescence wavelength combinations I to VI.

Fig. 14 is a schematic diagram outlining the application of a semantic segmentation method to a tobacco raw material sample.

Fig. 15 is a table showing an example of the selection of input image data when machine learning is performed on a tobacco raw material sample using fluorescence image data and visible image data in combination.

Fig. 16 is a table showing determination results obtained by applying a trained convolutional neural network to test samples of tobacco raw materials with known quality classes.

Hereinafter, embodiment examples of the method of the present invention are described, followed by embodiment examples of the device of the present invention and an example of applying the method of the present invention to the determination of quality classes of tobacco raw materials.

Note that the present invention is not limited to the embodiment examples described below.

I. Embodiment examples of the method of the present invention

I-1. One embodiment

Fig. 1 is a flowchart outlining one embodiment (embodiment I-1) of the method of the present invention. As shown in Fig. 1, embodiment I-1 includes a fluorescence image data acquisition step (S01), a machine learning step (S02), and a quality class determination step (S03). Each of these steps is described below.

I-1-1. Fluorescence image data acquisition step (S01)

In this step, fluorescence images are acquired as fluorescence image data, one image per combination of excitation wavelength and fluorescence wavelength. Each fluorescence image is created by irradiating a sample containing known quality classes with excitation light of a predetermined excitation wavelength and recording the intensity of the reflected light of a predetermined fluorescence wavelength obtained within a predetermined period P (P > 0) starting from a time T (T ≥ 0) that is set in consideration of fluorescence photobleaching, with the start of excitation-light irradiation as the origin.

The method of acquiring fluorescence image data is outlined here.

Fluorescence image data are acquired with a fluorescence imaging system 600 such as that shown in Fig. 6. The fluorescence imaging system 600 consists of a spectroscopic illumination device 610 and a spectroscopic imaging device 620. The spectroscopic illumination device 610 irradiates a sample 630 with excitation light of a specific wavelength, for example light from a xenon lamp source 612 (MAX-303 (product name), Asahi Spectra (company name)) passed through a band-pass filter 614 adjustable in 10 nm steps over the range 340 to 420 nm. The spectroscopic imaging device 620 passes the light reflected from the sample 630 through a band-pass filter 622 adjustable in 10 nm steps over the range 420 to 700 nm, so that the imaging device 624 captures a fluorescence image at a specific wavelength only. By photographing the sample while switching the band-pass filter 614 on the light-source side and the band-pass filter 622 on the imaging side, fluorescence image data can be obtained, one image per combination of excitation wavelength and fluorescence wavelength, as shown in Fig. 5. In Fig. 6 the sample size is 4 × 4 cm, but this value is not limiting and the size can be set as appropriate. Acquired images may also be combined into composite fluorescence image data.
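With the filter ranges described above (excitation 340 to 420 nm and emission 420 to 700 nm, both in 10 nm steps), the set of possible excitation-emission pairs can be enumerated directly. Restricting to pairs where the emission wavelength exceeds the excitation wavelength is an added assumption here (fluorescence is red-shifted), not a condition stated in the text:

```python
ex = range(340, 421, 10)   # 9 excitation filter settings
em = range(420, 701, 10)   # 29 emission filter settings

# One fluorescence image per (excitation, emission) combination.
pairs = [(x, m) for x in ex for m in em]
print(len(pairs))          # 261 combinations in total

# Optional Stokes-shift restriction (emission strictly above excitation);
# with these grids only the 420/420 pair is excluded.
stokes = [(x, m) for x, m in pairs if m > x]
print(len(stokes))         # 260
```

In practice only a subset of these combinations would be selected, e.g. those suited to the target components (see aspect 9).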

The acquisition time of the fluorescence image data is set to the time T (T ≥ 0), with the start of excitation-light irradiation as the origin and in consideration of fluorescence photobleaching, which addresses the photobleaching problem described above. The time T may be a time before photobleaching begins; however, because the degree of photobleaching varies nonlinearly with time, this characteristic can be exploited to set, for each quality class, the time T and period P at which fluorescence image data suited to classification can be acquired.
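The nonlinear time dependence of photobleaching is often approximated by a first-order exponential decay. The model, rate constant, threshold, and period below are illustrative assumptions, not values from the patent; the sketch picks the latest window [T, T + P] in which the intensity stays above a chosen fraction of its initial value:

```python
import math

# Assumed first-order photobleaching model: I(t) = I0 * exp(-k * t)
k = 0.05          # hypothetical bleaching rate constant (1/s)
threshold = 0.9   # require >= 90 % of the initial intensity during exposure
P = 2.0           # hypothetical exposure period (s)

def intensity(t, i0=1.0):
    return i0 * math.exp(-k * t)

# The decay is monotone, so the whole window [T, T + P] stays above the
# threshold exactly when the intensity at its end does: I(T + P) >= threshold.
T_max = max(0.0, math.log(1.0 / threshold) / k - P)
print(round(T_max, 3))
```

A real setting would fit k per quality class from measured decay curves, which is the class-dependent choice of T and P described in the text.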

As described above, the time T and the period P can be set depending on the quality classes in the sample, improving the accuracy of determining those quality classes; this can also be reflected in the learning results described later to improve the determination accuracy further.

I-1-2. Machine learning step (S02)

In this step, input image data having a predetermined number of channels are created from the fluorescence image data acquired in the fluorescence image data acquisition step, and machine learning is performed on a computer using the input image data as training data, thereby constructing a classifier that assigns the most appropriate quality class to the components contained in the sample.

First, the fluorescence image data acquired in the fluorescence image data acquisition step (S01) are preprocessed as necessary to create the input image data. When the acquired fluorescence image data have K1 channels, the number of channels K of the input image data need not also be K1; it may be smaller than K1 (i.e., channel reduction may be applied). When such a reduction is applied, how the data are organized and consolidated can be decided appropriately in light of preliminary investigations carried out in advance, known results, and the like.

Preprocessing may include, besides, for example, removing data, handling outliers, resizing, and adjusting the amount of data: normalization, which transforms the data so that they fall within a certain range; standardization, which processes the data so that their mean is 0 and their standard deviation is 1; decorrelation, which removes correlations between data; and whitening, which both standardizes and decorrelates the data. An appropriate combination can be selected from these. When the training data have been preprocessed, the same processing must also be applied to the test data and to unknown data.
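The standardization and whitening operations mentioned above can be sketched with NumPy; the random matrix below is a stand-in for real per-pixel spectra (1000 pixels, 6 channels), and PCA whitening is used as one concrete whitening recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=3.0, size=(1000, 6))  # 1000 pixels x 6 channels

# Standardization: zero mean and unit standard deviation per channel.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA whitening: rotate into the eigenbasis of the channel covariance and
# rescale, so the covariance of the result is the identity matrix.
cov = np.cov(X_std, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
X_white = X_std @ eigvec @ np.diag(1.0 / np.sqrt(eigval))
```

The same mean, standard deviation, and eigenvector/eigenvalue parameters fitted on the training data would then be reused on the test and unknown data, as the text requires.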

接著,說明適合進行分類之機器學習之方法。 Next, the method of machine learning suitable for classification is explained.

適合進行分類之機器學習之方法，除了例如：進行樹形結構之訓練，以分枝之方式將資料分類之決策樹(decision tree)；組合複數個決策樹，藉由取各決策樹的輸出的多數決，來進行分類之隨機森林(Random Forest:RF)；進行多次元之超平面之訓練，來進行資料的分類之支援向量機(Support Vector Machine:SVM)；以利用最近的K個點之多數決來進行分類之K最近鄰居法(K Nearest Neighbor method:KNN method)；將複數個單純的分類器相組合來構築非線性的分類器之整體學習(ensemble learning)等之外，還可採用將腦神經細胞網路予以模型化而架構出的神經網路(neural network)。關於神經網路，已確知的是利用多層神經網路，特別是利用卷積神經網路(Convolutional Neural Network:CNN)之深層學習(deep learning)對於畫像之分類具有有效性。關於卷積神經網路之方法的概略將在後面說明。除此之外，也可使用例如視覺詞袋(Bag of Visual Words,BoVW)。BoVW係從屬於計算文章特徵的模型的詞袋(Bag of Words,BoW；也稱為特徵袋，Bag of Features)類推而產生之方法，係將文字分類的技術應用於畫像分類之方法。 Machine learning methods suited to classification include, for example: the decision tree, which learns a tree structure and classifies data by branching; the random forest (RF), which combines a plurality of decision trees and classifies by majority vote over their outputs; the support vector machine (SVM), which learns a multidimensional hyperplane to classify data; the K-nearest-neighbor method (KNN method), which classifies by majority vote among the nearest K points; and ensemble learning, which combines a plurality of simple classifiers to construct a nonlinear classifier. In addition, a neural network, constructed by modeling the network of brain nerve cells, may be employed. Regarding neural networks, it is known that multilayer neural networks, in particular deep learning using convolutional neural networks (CNN), are effective for image classification. An outline of the convolutional neural network method is given later. Besides these, Bag of Visual Words (BoVW), for example, may also be used. BoVW is a method derived by analogy from Bag of Words (BoW; also called Bag of Features), a model for computing text features, and applies text-classification techniques to image classification.
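Of the methods listed above, the K-nearest-neighbor majority vote is the simplest to illustrate. The following is a minimal NumPy sketch; the feature vectors and class labels are hypothetical, not taken from the patent:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among the k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# Hypothetical 2-D feature vectors for two quality classes (0 and 1)
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = np.array([0, 0, 0, 1, 1, 1])

pred = knn_predict(X_train, y_train, np.array([0.95, 1.0]), k=3)
```

A point near the second cluster is assigned class 1; the same function with a point near the origin returns class 0.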

除了以上所述的方法之外，還可採用例如對於高次元資料之分類有效之偏最小平方判別分析(Partial Least Squares-Discriminant Analysis:PLS-DA)、屬於遵循伯努利分佈的變數的統計的迴歸模型的一種之邏輯式迴歸(Logistic Regression:LR)等之統計的方法。 In addition to the above methods, statistical methods may also be employed, such as partial least squares discriminant analysis (PLS-DA), which is effective for classifying high-dimensional data, and logistic regression (LR), a statistical regression model for variables that follow a Bernoulli distribution.

以下,針對使用適合用來進行畫像的分類之卷積神經網路之深層學習,說明其概略等。 Below, the outline of deep learning using convolutional neural networks suitable for image classification will be explained.

<卷積神經網路的構造的概要> <Outline of Convolutional Neural Network Structure>

卷積神經網路係為由卷積層(convolution layer)、池化層(pooling layer)、全連結層(fully connected layer)之三種層所構成之主要應用於影像辨識之前饋神經網路(feedforward neural network)，係採用誤差反向傳播法(back propagation)及隨機梯度下降法(Stochastic Gradient Descent:SGD)來進行學習的最佳化之神經網路。又，因為目的在分類，所以也有設置用來將全連結層的輸出正規化之softmax層、及以softmax層的輸出做為輸入然後輸出分類的機率之分類層的情況。 A convolutional neural network is a feedforward neural network, composed of three kinds of layers — convolution layers, pooling layers, and fully connected layers — that is mainly applied to image recognition, and whose learning is optimized using the error back propagation method and stochastic gradient descent (SGD). Also, since the purpose here is classification, a softmax layer that normalizes the output of the fully connected layer, and a classification layer that takes the softmax layer's output as input and outputs class probabilities, may additionally be provided.

在卷積神經網路中,卷積層及池化層係多次重複然後連結到全連結層,全連結層也多次重複然後將最後的全連結層的輸出用於softmax層的輸入。 In a convolutional neural network, the convolutional layer and the pooling layer are repeated multiple times and then connected to the fully connected layer. The fully connected layer is repeated multiple times and the output of the final fully connected layer is used as the input of the softmax layer.

卷積層係進行將複數個濾波器施用於輸入的畫像之濾波處理,利用濾波處理將輸入畫像轉換為表示畫像的特徵之複數個畫像。然後,池化層以不會損及畫像的特徵之方式將畫像的尺寸縮小,全連結層則是將層間的所有的神經元(neuron)連結起來。 The convolutional layer performs filtering processing that applies a plurality of filters to the input image, and uses the filter processing to convert the input image into a plurality of images representing the characteristics of the image. Then, the pooling layer reduces the size of the image in a way that does not damage the characteristics of the image, and the fully connected layer connects all the neurons between the layers.

以下,簡單說明卷積層、池化層、全連結層所具有的特徵及動作態樣、以及在卷積神經網路中的學習等。 In the following, the features and behaviors of the convolutional layer, the pooling layer, and the fully connected layer, as well as the learning in the convolutional neural network, are briefly described.

<卷積層> <Convolutional layer>

利用卷積(convolution)演算可加強或減弱畫像的某一特徵,所以卷積層具有藉由如此的卷積處理將輸入畫像轉換為更強調特徵的畫像之機能。 Convolution (convolution) calculation can strengthen or weaken a certain feature of the image, so the convolution layer has the function of converting the input image into a more emphasized image through such a convolution process.

另外,畫像具有所謂的局部性之性質,卷積層也具有利用如此的畫像的局部性來檢出畫像的特徵之機能。因此,卷積層係利用分別檢測不同的特徵之複數個濾波器來進行特徵之檢出。 In addition, the image has the so-called locality, and the convolutional layer also has the function of detecting the characteristics of the image by using the locality of such an image. Therefore, the convolutional layer uses multiple filters that detect different features to detect features.

第7圖係顯示卷積層的處理例之圖，第7圖中顯示對於尺寸(寬×高×通道數)為M_s×N_s×K_s之第s層的畫像(輸入畫像)710施用尺寸為P_{s+1}×Q_{s+1}×K_s之K_{s+1}個濾波器720而進行處理的情況之計算結果。 Figure 7 shows a processing example of the convolution layer: the calculation result when an image (input image) 710 of layer s, of size (width × height × number of channels) M_s × N_s × K_s, is processed by applying K_{s+1} filters 720 of size P_{s+1} × Q_{s+1} × K_s.

一般而言,在神經網路中,將從輸入到輸出之資訊的傳播稱為順向傳播(forward propagation),從輸出往輸入之資訊的回溯稱為反向傳播(back propagation)。 Generally speaking, in a neural network, the propagation of information from input to output is called forward propagation, and the backtracking of information from output to input is called back propagation.

就第7圖所示的例子而言，關於順向傳播，若第k'個濾波器上的寬度及高度(p,q)的通道k的權重為w^{s+1}_{p,q,k,k'}，在s+1層的畫像(輸出畫像)的位置(m,n)之通道k'的輸入為a^{s+1}_{m,n,k'}，則可將a^{s+1}_{m,n,k'}表示成如下的式(1)。 For the example shown in Figure 7, regarding forward propagation, let w^{s+1}_{p,q,k,k'} be the weight for channel k at position (p,q) of the k'-th filter, and let a^{s+1}_{m,n,k'} be the input of channel k' at position (m,n) of the layer-(s+1) image (output image); then a^{s+1}_{m,n,k'} can be expressed as equation (1) below.

a^{s+1}_{m,n,k'} = Σ_{p,q,k} w^{s+1}_{p,q,k,k'} · z^s_{m+p,n+q,k} + b^{s+1}_{k'}   (1)

其中，z^s_{m+p,n+q,k}為在第s層的畫像(輸入畫像)710的位置(m+p,n+q)之通道k的畫素值，b^{s+1}_{k'}為偏差(bias)。 Here, z^s_{m+p,n+q,k} is the pixel value of channel k at position (m+p,n+q) of the layer-s image (input image) 710, and b^{s+1}_{k'} is the bias.

因此，在第s+1層的畫像(輸出畫像)730的位置(m,n)之通道k'的畫素值z^{s+1}_{m,n,k'}可使用活性化函數h(·)而表示成如下的式(2)。 Therefore, the pixel value z^{s+1}_{m,n,k'} of channel k' at position (m,n) of the layer-(s+1) image (output image) 730 can be expressed, using the activation function h(·), as equation (2) below.

z^{s+1}_{m,n,k'} = h(a^{s+1}_{m,n,k'})   (2)

按照神經元的模型，權重係相當於突觸(synapse)的傳遞效率，偏差係相當於神經元的感度(容易興奮的程度)。另外，活性化函數可說是用來使神經元興奮的函數。 In terms of the neuron model, the weights correspond to the transmission efficiency of synapses, and the bias corresponds to the sensitivity of the neuron (how easily it is excited). The activation function can be regarded as the function that excites the neuron.
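The forward computation of equations (1) and (2) can be sketched as follows — a minimal NumPy illustration assuming a single image, "valid" convolution without padding, ReLU as the activation h, and hypothetical array sizes:

```python
import numpy as np

def conv_forward(z, w, b, h=lambda a: np.maximum(0.0, a)):
    """Eq.(1): a[m,n,k'] = sum_{p,q,k} w[p,q,k,k'] * z[m+p,n+q,k] + b[k'];
    Eq.(2): the output pixel is h(a), here with ReLU as h."""
    M, N, K = z.shape
    P, Q, _, K2 = w.shape
    a = np.zeros((M - P + 1, N - Q + 1, K2))
    for m in range(M - P + 1):
        for n in range(N - Q + 1):
            for k2 in range(K2):
                # elementwise product of the filter with the local window, summed
                a[m, n, k2] = np.sum(w[:, :, :, k2] * z[m:m+P, n:n+Q, :]) + b[k2]
    return h(a)

# Hypothetical sizes: a 5x5 input with 2 channels, and three 3x3 filters
rng = np.random.default_rng(1)
z = rng.normal(size=(5, 5, 2))
w = rng.normal(size=(3, 3, 2, 3))
b = np.zeros(3)
out = conv_forward(z, w, b)
```

With these sizes the output image is (5−3+1) × (5−3+1) with 3 channels, and every value is non-negative because of the ReLU.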

接著，針對第7圖所示的例子中的反向傳播進行說明，誤差反向傳播係將順向傳播所得到的輸出與正解的誤差，一層層地回溯並更新權重及偏差來使誤差最小化而進行者。以下，將定義輸出與正解的誤差之函數稱為損失函數(loss function)。 Next, back propagation in the example shown in Figure 7 is explained. In error back propagation, the error between the output obtained by forward propagation and the correct answer is traced back layer by layer, and the weights and biases are updated so as to minimize that error. Hereinafter, the function that defines the error between the output and the correct answer is called the loss function.

為了求出在卷積層中的誤差的反向傳播,必須得到權重的梯度。 In order to find the back propagation of the error in the convolutional layer, the gradient of the weight must be obtained.

因此，將損失函數表示成J來求針對權重的偏微分的話，可表示成如下的式(3)。 Therefore, writing the loss function as J and taking its partial derivative with respect to a weight, we obtain equation (3) below.

∂J/∂w^{s+1}_{p,q,k,k'} = Σ_{m,n} (∂J/∂a^{s+1}_{m,n,k'}) · (∂a^{s+1}_{m,n,k'}/∂w^{s+1}_{p,q,k,k'})   (3)

另外,從式(1)導出如下的式(4), In addition, the following equation (4) is derived from equation (1),

∂a^{s+1}_{m,n,k'}/∂w^{s+1}_{p,q,k,k'} = z^s_{m+p,n+q,k}   (4)

進一步導出 δ^{s+1}_{m,n,k'} = ∂J/∂a^{s+1}_{m,n,k'} 的話，可表示成如下的式(5)。 Further, defining δ^{s+1}_{m,n,k'} = ∂J/∂a^{s+1}_{m,n,k'}, this can be expressed as equation (5) below.

∂J/∂w^{s+1}_{p,q,k,k'} = Σ_{m,n} δ^{s+1}_{m,n,k'} · z^s_{m+p,n+q,k}   (5)

針對偏差的偏微分也可利用同樣的計算而表示成如下的式(6)。 The partial derivative with respect to the bias can be expressed, by the same calculation, as equation (6) below.

∂J/∂b^{s+1}_{k'} = Σ_{m,n} δ^{s+1}_{m,n,k'}   (6)

因此，δ^s_{m,n,k}可表示成如下的式(7)， Therefore, δ^s_{m,n,k} can be expressed as equation (7) below,

δ^s_{m,n,k} = h'(a^s_{m,n,k}) · Σ_{k'} Σ_{p,q} δ^{s+1}_{m−p,n−q,k'} · w^{s+1}_{p,q,k,k'}   (7)

其中，h'(·)為活性化函數h(·)的微分值。 where h'(·) is the derivative of the activation function h(·).

式(7)表示可利用以權重對於輸出層(上位層)的誤差進行卷積處理所得到之值來計算輸入層(下位層)的誤差。 Equation (7) indicates that the error of the input layer (lower layer) can be calculated using the value obtained by convolution processing the error of the output layer (upper layer) with weight.
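Equations (5) and (6) can be illustrated for the simplest case of a single channel and a loss J equal to the sum of the layer output, so that δ is all ones and the activation is the identity; the finite-difference check at the end confirms the weight gradient numerically. All sizes are hypothetical:

```python
import numpy as np

def conv2d(z, w, b):
    """Single-channel 'valid' convolution, as in eq.(1) with one filter."""
    M, N = z.shape
    P, Q = w.shape
    a = np.zeros((M - P + 1, N - Q + 1))
    for m in range(M - P + 1):
        for n in range(N - Q + 1):
            a[m, n] = np.sum(w * z[m:m+P, n:n+Q]) + b
    return a

def grad_w_and_b(z, delta, P, Q):
    """Eq.(5): dJ/dw[p,q] = sum_{m,n} delta[m,n] * z[m+p,n+q];
    Eq.(6): dJ/db = sum_{m,n} delta[m,n]."""
    gw = np.zeros((P, Q))
    M1, N1 = delta.shape
    for p in range(P):
        for q in range(Q):
            gw[p, q] = np.sum(delta * z[p:p+M1, q:q+N1])
    return gw, np.sum(delta)

rng = np.random.default_rng(2)
z = rng.normal(size=(4, 4))
w = rng.normal(size=(2, 2))
b = 0.5
# Take J = sum of the layer output, so delta = dJ/da is all ones
delta = np.ones((3, 3))
gw, gb = grad_w_and_b(z, delta, 2, 2)

# Finite-difference check of eq.(5) for one weight
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[0, 1] += eps
w_minus[0, 1] -= eps
fd = (conv2d(z, w_plus, b).sum() - conv2d(z, w_minus, b).sum()) / (2 * eps)
```

Since J is linear in the weights, the finite-difference estimate agrees with the analytic gradient of equation (5) up to floating-point error.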

關於活性化函數，因為網路要三層以上才能有效地發揮機能，所以活性化函數必須具有非線性。而且，如上述，為了以反向傳播法使機器學習，所以活性化函數必須可以微分。 Regarding the activation function: since a network must have three or more layers to function effectively, the activation function must be nonlinear. Moreover, as described above, the activation function must be differentiable so that the machine can learn by the back propagation method.

代表性的活性化函數有以下的式(8)所示之邏輯式S型函數(logistic sigmoid function)。 A representative activation function is the logistic sigmoid function shown in equation (8) below.

h(x)=σ(x)=1/(1+exp(-x)) (8) h(x)=σ(x)=1/(1+exp(-x)) (8)

在反向傳播之際因為是利用誤差與微分係數之積來計算下位層的誤差，所以越往下位層，誤差δ^s的值會越呈指數函數般地減小，因此會發生輸入x的值若大(或小)則微分係數h'(x)的值會變小，變得幾乎不會進行參數的更新之問題。為了減輕此問題，有人提出以下的式(9)及(10)表示的修正線性單元(Rectified Linear Unit,ReLU)來作為活性化函數。 During back propagation, the error of a lower layer is calculated as the product of the error and the derivative, so the value of the error δ^s decreases exponentially toward the lower layers; in addition, when the input value x is large (or small), the derivative h'(x) becomes small and the parameters are hardly updated at all. To mitigate this problem, the Rectified Linear Unit (ReLU), expressed by equations (9) and (10) below, has been proposed as an activation function.

h(x)=max(0,x) (9) h(x)=max(0,x) (9)

h'(x) = 1 (x > 0), 0 (x ≦ 0)   (10)

此外，還有人提出屬於ReLU的改良版之LReLU(Leaky ReLU)、RReLU(Random ReLU)、PReLU(Parametric ReLU)等、以及貢獻於資料的標準化之ELU(指數型線性單元，Exponential Linear Unit)等，來作為活性化函數，在此將其詳細說明予以省略。 Besides these, improved versions of ReLU such as LReLU (Leaky ReLU), RReLU (Random ReLU), and PReLU (Parametric ReLU), as well as ELU (Exponential Linear Unit), which contributes to the standardization of data, have also been proposed as activation functions; their detailed description is omitted here.
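The contrast between the sigmoid of equation (8) and the ReLU of equation (9) can be seen by comparing their derivatives — a small sketch, with the input value chosen arbitrarily for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # eq.(8)

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)                   # nearly zero for large |x| (saturation)

def relu(x):
    return np.maximum(0.0, x)              # eq.(9)

def d_relu(x):
    return (x > 0).astype(float)           # stays 1 for all x > 0

# For a large input, the sigmoid gradient almost vanishes while the ReLU gradient is 1
g_sig = d_sigmoid(10.0)
g_relu = d_relu(np.array(10.0))
```

This is exactly the effect described above: the product of many small sigmoid derivatives shrinks the back-propagated error exponentially, whereas the ReLU derivative does not decay for positive inputs.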

<池化層> <Pooling layer>

接著，說明池化層。池化層係進行稱為池化(pooling)之將畫像劃分為各區域，然後抽出代表各區域之值加以排列來作成新的畫像之處理。池化也稱為使畫像模糊化之處理，可使對象的位置的感度降低。藉此，即使對象的位置有些許變化也可使輸出不變，因此池化層提供的是對於位置的變化之穩定(robust)性。而且，因為池化會使畫像尺寸變小，所以也具有削減計算量之效果。 Next, the pooling layer is described. The pooling layer performs a process called pooling: the image is divided into regions, and a value representing each region is extracted and arranged to create a new image. Pooling can also be described as a process of blurring the image, which lowers the sensitivity to the positions of objects. As a result, the output remains unchanged even if the position of an object changes slightly, so the pooling layer provides robustness against positional changes. Moreover, since pooling reduces the image size, it also has the effect of reducing the amount of computation.

在畫像處理中,大多採用以各區域的最大值作為代表各區域的值之最大池化(max pooling)法。 In image processing, the maximum pooling method in which the maximum value of each area is used as the value representing each area is mostly used.
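Max pooling as described above can be sketched in a few lines — a minimal version assuming non-overlapping 2×2 regions:

```python
import numpy as np

def max_pool(img, size=2):
    """Max pooling: each region is represented by its maximum value."""
    M, N = img.shape
    out = np.zeros((M // size, N // size))
    for i in range(M // size):
        for j in range(N // size):
            out[i, j] = img[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

# Hypothetical 4x4 image; pooling halves each dimension
img = np.array([[1., 2., 0., 1.],
                [3., 4., 1., 0.],
                [0., 0., 5., 6.],
                [0., 0., 7., 8.]])
pooled = max_pool(img)
```

Shifting a feature by one pixel within a region leaves the pooled value unchanged, which is the positional robustness the text refers to.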

<全連結層> <Fully connected layer>

全連結層係通常的神經網路中使用之層,通常配置於重複幾次的卷積層及池化層之後。 The fully connected layer is a layer used in normal neural networks, and is usually arranged after the convolutional layer and the pooling layer that are repeated several times.

<卷積神經網路中之學習> <Learning in Convolutional Neural Networks>

卷積神經網路係與通常的神經網路一樣進行反向傳播之學習。 A convolutional neural network learns by back propagation in the same way as an ordinary neural network.

在卷積層,根據從輸出與正解的誤差傳播來的值而計算出構成濾波器之各值的梯度,來將濾波器予以更新。而且,也同樣將偏差予以更新。誤差係通過卷積層而向更上方的層傳播。 In the convolutional layer, the gradient of each value constituting the filter is calculated based on the value propagated from the error between the output and the correct solution, and the filter is updated. Moreover, the deviation is also updated similarly. The error is propagated to the upper layer through the convolutional layer.

在池化層,並不進行學習,誤差係通過池化層而向更上方的層傳播。 In the pooling layer, no learning is performed, and errors are propagated to higher layers through the pooling layer.

在全連結層,以與通常的神經網路相同的方法進行誤差之傳播。 In the fully connected layer, errors are propagated in the same way as a normal neural network.

藉由如此的權重與偏差的調節,以讓誤差變為最小的方式使神經網路最佳化。如此的最佳化所用的演算法,已知有隨機梯度下降法等。 Through such adjustment of weights and deviations, the neural network is optimized in a way that minimizes the error. The algorithm used for such optimization is known as the stochastic gradient descent method.

隨機梯度下降法係每次更新都隨機地選出樣本之演算法，權重w及偏差b的更新式為以下的式子。 Stochastic gradient descent is an algorithm that randomly selects a sample at each update; the update rules for the weight w and the bias b are the following expressions.

w ← w − η · ∂J/∂w

b ← b − η · ∂J/∂b

其中，η為學習率。 Here, η is the learning rate.

除此之外的最佳化演算法，已知的還有：將慣性項附加到隨機梯度下降法中之Momentum、自動地調節更新量之AdaGrad、克服AdaGrad會因為更新量的降低導致學習停滯之弱點之RMSProp、具有將Momentum及AdaGrad予以統合的特徵之Adam(Adaptive moment estimation)等方法，在此將其詳細說明予以省略。 Other known optimization algorithms include Momentum, which adds an inertia term to stochastic gradient descent; AdaGrad, which adjusts the update amount automatically; RMSProp, which overcomes AdaGrad's weakness of learning stagnating as the update amount decreases; and Adam (Adaptive moment estimation), which combines the features of Momentum and AdaGrad. Their detailed description is omitted here.
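The plain update rule and the Momentum variant can be sketched on a toy problem — minimizing J(w) = w², whose gradient is 2w; the learning rate and inertia coefficient are chosen arbitrarily for illustration:

```python
def sgd_step(w, grad, lr=0.1):
    """Plain update: w <- w - lr * dJ/dw."""
    return w - lr * grad

def momentum_step(w, grad, v, lr=0.1, mu=0.9):
    """Momentum adds an inertia term v to the update."""
    v = mu * v - lr * grad
    return w + v, v

# Minimize J(w) = w^2 (gradient 2w) from w = 5 with plain SGD
w = 5.0
for _ in range(100):
    w = sgd_step(w, 2.0 * w)

# The same problem with Momentum
w2, v = 5.0, 0.0
for _ in range(300):
    w2, v = momentum_step(w2, 2.0 * w2, v)
```

Both runs drive w toward the minimizer 0; Momentum overshoots and oscillates before settling, which is the inertia effect the text mentions.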

I-1-3.品質區分判定程序(3) I-1-3. Quality classification judgment procedure (3)

最後,將學習完成的分類器應用於從包含有未知的品質區分之試樣的螢光畫像資料得到的輸入畫像資料,判定該試樣中包含的品質區分(品質區分判定程序:S03)。如前述,在對訓練資料進行過前處理之情況,也必須對未知資料進行同樣的處理。而且,最好使包含有未知的品質區分之試樣的螢光畫像資料的取得時刻及期間適應於學習時的螢光畫像資料的取得時刻及期間。 Finally, the learned classifier is applied to the input image data obtained from the fluorescent image data of the sample containing the unknown quality classification, and the quality classification contained in the sample is determined (quality classification judgment procedure: S03). As mentioned above, in the case of pre-processing the training data, the unknown data must also be processed in the same way. Furthermore, it is preferable to adapt the acquisition time and period of the fluorescent image data of the sample including the unknown quality classification to the acquisition time and period of the fluorescent image data during learning.

藉由如此的構成,能夠精密地反映出螢光褪色的時間的變化,可實現分類精度的進一步的提高。 With such a configuration, it is possible to accurately reflect the time change of the fluorescence fading, and further improve the classification accuracy.

<卷積神經網路的實際構成例> <Example of actual configuration of convolutional neural network>

作為卷積神經網路的實際構成例，以下舉例AlexNet及GoogLeNet，說明此兩者的概略。 As actual configuration examples of convolutional neural networks, AlexNet and GoogLeNet are taken up below and their outlines explained.

AlexNet係如第8圖所示，由五個卷積層及三個全連結層所構成，且在第一及第二個卷積層之後都使用了正規化層，在各正規化層之後及第五個卷積層之後都使用了最大值池化層。AlexNet係具有6000萬個參數及65萬個神經元，因為包含很龐大的參數，所以藉由失靈(dropout)、資料擴張等，為了避免得到的是局部解而在學習上下了很多工夫。 As shown in Figure 8, AlexNet is composed of five convolution layers and three fully connected layers; a normalization layer is used after each of the first and second convolution layers, and a max pooling layer is used after each normalization layer and after the fifth convolution layer. AlexNet has 60 million parameters and 650,000 neurons; because of this enormous number of parameters, considerable effort is devoted in training — through dropout, data augmentation, and the like — to avoid arriving at a local solution.

GoogLeNet則是如第9圖所示，採用稱為Inception之在層之中還具有網路之模塊，構築出將9個該模塊連接起來而構成之深層網路。Inception模塊之中，有1×1、3×3、5×5之三個卷積層與最大值池化層並列，藉由使複數個卷積層並列，可得到具有複數的廣度之局部的畫像的相關性。而且，設於3×3、5×5之卷積層的前段之1×1之卷積層具有次元削減之機能。GoogLeNet雖然具有比AlexNet深之構造，但因為該等都是卷積層所以參數數只有1/12程度。 GoogLeNet, as shown in Figure 9, uses a module called Inception, which itself contains a network within a layer, and builds a deep network by connecting nine such modules. Within an Inception module, three convolution layers (1×1, 3×3, 5×5) are arranged in parallel with a max pooling layer; by placing a plurality of convolution layers in parallel, local image correlations over multiple extents can be captured. Moreover, the 1×1 convolution layers placed before the 3×3 and 5×5 convolution layers have a dimensionality-reduction function. Although GoogLeNet has a deeper structure than AlexNet, since these layers are all convolutional, the number of parameters is only about one-twelfth.

I-2.另一實施的一態樣 I-2. Another aspect of implementation

接著,說明與本發明之方法有關之另一實施的一態樣(實施態樣I-2)的概要,但不重複與實施態樣I-1相同的部分之說明而將之省略。 Next, the outline of another implementation aspect (Implementation aspect I-2) related to the method of the present invention will be described, but the description of the same parts as the implementation aspect I-1 will not be repeated and will be omitted.

實施態樣I-2係進行也有考慮到螢光畫像資料中存在的物體(object)的形狀之機器學習,使之反映在前述試樣中包含的品質區分的判定中。 The implementation mode I-2 is to perform machine learning that also considers the shape of the object existing in the fluorescent image data, so that it is reflected in the judgment of the quality classification included in the aforementioned sample.

為了進行如此之也有考慮到物體的形狀之機器學習，可利用例如稱為語義分割(semantic segmentation)之可正確推測畫像內的物體的類別(class)及其輪廓之程序。語義分割係以畫素等級(pixel level)來辨識物體，可做到畫素等級之類別區分(分類)。有人曾提出編碼器-解碼器網路(encoder-decoder network)，來作為利用了深度學習之語義分割方法。 To perform such machine learning that also takes the shape of objects into account, a procedure called semantic segmentation, which can accurately infer the class of an object in an image together with its contour, can be used. Semantic segmentation recognizes objects at the pixel level, enabling class discrimination (classification) at the pixel level. Encoder-decoder networks have been proposed as semantic segmentation methods that use deep learning.

第10圖係顯示稱為SegNet之語義分割用的網路,其構造及動作的概略係如下所述。 Figure 10 shows a network called SegNet for semantic segmentation. The outline of its structure and operation is as follows.

SegNet之編碼器係將稱為VGG16之類別辨識用網路的全連結層都去除掉而成者。解碼器係使編碼器的輸出入反過來而成之網路，且在最終層追加了softmax層。輸入至解碼器之畫像，係在逆池化層(unpooling layer)接受上採樣(upsampling)。逆池化層係與編碼器的池化層一對一對應，記憶編碼時輸出最大值之單元，根據該記憶在解碼器進行上採樣。利用編碼時的資訊，不僅補充了空間資訊，而且不需要進行上採樣之方式的學習。 The encoder of SegNet is obtained by removing all the fully connected layers from the class-recognition network called VGG16. The decoder is a network obtained by reversing the input and output of the encoder, with a softmax layer added as the final layer. The image input to the decoder is upsampled in unpooling layers. Each unpooling layer corresponds one-to-one to a pooling layer of the encoder; the units that produced the maximum values at encoding time are memorized, and upsampling in the decoder is performed according to that memory. Using this encoding-time information not only restores spatial information but also eliminates the need to learn how to upsample.

經逆池化而擴大之後的畫像，會成為很多的值為0之鬆散畫像，所以利用接在逆池化層後的轉置卷積層(transposed convolution layer)將鬆散畫像轉換為緻密畫像。最終的解碼器的輸出畫像係輸入至softmax層，逐一就每個畫素將之轉換為物體類別之機率，然後以機率最大的類別來表現各畫素，以此方式得到最終的分割結果。SegNet係將小批次(mini-batch)內的所有的畫素總共的交叉熵(cross-entropy)損失利用作為損失函數，以隨機梯度下降法等進行端對端(end-to-end)之學習。所謂的小批次係指從所有資料(樣本)隨機選出的一部分資料(少數的樣本集合)。 The image enlarged by unpooling becomes a sparse image containing many zero values, so the transposed convolution layer that follows the unpooling layer converts the sparse image into a dense one. The output image of the final decoder is input to the softmax layer and converted, pixel by pixel, into probabilities over object classes; each pixel is then represented by the class with the highest probability, and the final segmentation result is obtained in this way. SegNet uses the total cross-entropy loss over all pixels in a mini-batch as its loss function and learns end-to-end by stochastic gradient descent or the like. A mini-batch is a subset of the data (a small set of samples) randomly selected from all the data (samples).
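The unpooling described above can be sketched by recording, during pooling, the position of each region's maximum and writing the pooled values back to those positions at decoding time — a minimal single-channel NumPy illustration, not SegNet's actual implementation:

```python
import numpy as np

def max_pool_with_indices(img, size=2):
    """Pool and remember, per region, where the maximum was (encoder side)."""
    M, N = img.shape
    out = np.zeros((M // size, N // size))
    idx = np.zeros((M // size, N // size, 2), dtype=int)
    for i in range(M // size):
        for j in range(N // size):
            block = img[i*size:(i+1)*size, j*size:(j+1)*size]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            out[i, j] = block[r, c]
            idx[i, j] = (i*size + r, j*size + c)
    return out, idx

def max_unpool(pooled, idx, shape):
    """Place each value back at its remembered position; the rest stays 0,
    giving the sparse image that a transposed convolution then densifies."""
    up = np.zeros(shape)
    M1, N1 = pooled.shape
    for i in range(M1):
        for j in range(N1):
            r, c = idx[i, j]
            up[r, c] = pooled[i, j]
    return up

img = np.array([[1., 2.], [3., 0.]])
pooled, idx = max_pool_with_indices(img)
up = max_unpool(pooled, idx, img.shape)
```

The unpooled image is mostly zeros, with each pooled value restored to the exact location of the encoder-side maximum.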

藉由如此的構成,即使在複數個品質區分共有一部分的材料之情況,也因為有考慮形狀而可使品質區分的判定精度更加提高。 With such a configuration, even in the case where a plurality of quality classifications share a part of the material, since the shape is taken into consideration, the accuracy of the quality classification judgment can be further improved.

I-3.又另一實施的一態樣 I-3. Another aspect of implementation

接著,說明與本發明之方法有關之又另一實施的一態樣(實施態樣I-3)的概要,但不重複與實施態樣I-1、I-2相同的部分之說明而將之省略。 Next, the outline of another implementation aspect (Implementation aspect I-3) related to the method of the present invention will be explained, but the description of the same parts as the implementation aspects I-1 and I-2 will not be repeated. It is omitted.

實施態樣I-3係將以從激發光開始照射為始點之考慮螢光褪色而設定的時刻T設置複數個(0≦T_1<…<T_n)，將從各時刻T_i(0≦i≦n)得到的螢光畫像資料作成的具有預定的通道數之輸入畫像資料，分別作為前述複數個時刻中的各個時刻的訓練資料而利用電腦進行機器學習，構築出對前述試樣而言在前述複數個時刻中的各個時刻賦予最合適的品質區分之分類器，然後在品質區分的判定時，選擇適合該品質區分的分類器，來判定該試樣中包含的品質區分。 In implementation aspect I-3, a plurality of times T (0≦T_1<…<T_n), measured from the start of excitation-light irradiation and set in consideration of fluorescence fading, are provided; the input image data with a predetermined number of channels created from the fluorescent image data obtained at each time T_i (0≦i≦n) are used as the training data for that time, and machine learning is performed by computer to construct, for each of the plurality of times, a classifier that assigns the most appropriate quality classification to the aforementioned sample. Then, when judging the quality classification, the classifier suited to that judgment is selected to determine the quality classification contained in the sample.

關於選擇對於包含有品質區分未知的成分之試樣而言適合的分類器之點，可例如選擇與以對於未知的試樣之激發光的照射開始為始點之螢光畫像資料的取得時刻相對應之分類器。 As for selecting a classifier suited to a sample containing a component whose quality classification is unknown, one may, for example, select the classifier corresponding to the acquisition time of the fluorescent image data, measured with the start of excitation-light irradiation of the unknown sample as the origin.
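One straightforward way to realize this selection is to keep the classifiers keyed by their training times T_i and pick the one whose T_i is closest to the acquisition time of the unknown sample — a hypothetical sketch, with placeholder times and classifier objects:

```python
def select_classifier(classifiers, t):
    """Pick the classifier whose training time T_i is closest to the
    acquisition time t of the unknown sample (hypothetical scheme)."""
    best_T = min(classifiers, key=lambda T: abs(T - t))
    return classifiers[best_T]

# Hypothetical classifiers trained at T1=0s, T2=10s, T3=30s after irradiation start
classifiers = {0: "clf_T0", 10: "clf_T10", 30: "clf_T30"}
chosen = select_classifier(classifiers, 12.0)
```

An acquisition at 12 s selects the classifier trained at T = 10 s; the nearest-time rule is one possible matching criterion, not the only one.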

藉由如此的構成,能夠精密地反映出螢光褪色的時間的變化,可實現分類精度的進一步的提高。 With such a configuration, it is possible to accurately reflect the time change of the fluorescence fading, and further improve the classification accuracy.

I-4.又另一實施的一態樣 I-4. Another aspect of implementation

接著,說明與本發明之方法有關之又另一實施的一態樣(實施態樣I-4)的概要,但不重複與實施態樣I-1、I-2、I-3相同的部分之說明而將之省略。 Next, the outline of yet another implementation aspect (Implementation aspect I-4) related to the method of the present invention will be described, but the same parts as the implementation aspects I-1, I-2, and I-3 will not be repeated. The description is omitted.

實施態樣I-4係將以從激發光開始照射為始點之考慮螢光褪色而設定的時刻T設置複數個(0≦T_1<…<T_n)，將各時刻T_i(0≦i≦n)得到的螢光畫像資料按時間序列予以累積起來而作成螢光畫像資料，以從該螢光畫像資料作成的輸入畫像資料作為訓練資料進行機器學習。 In implementation aspect I-4, a plurality of times T (0≦T_1<…<T_n), measured from the start of excitation-light irradiation and set in consideration of fluorescence fading, are provided; the fluorescent image data obtained at each time T_i (0≦i≦n) are accumulated in time series to create the fluorescent image data, and input image data created from these fluorescent image data are used as training data for machine learning.

在實施態樣I-4中,可使卷積層為也反映了時間的次元之三次元的卷積層,而不是二次元的卷積層。 In the implementation aspect I-4, the convolutional layer may be a three-dimensional convolutional layer that also reflects the dimension of time, instead of a two-dimensional convolutional layer.

藉由如此的構成,可利用按時間序列累積得到的螢光畫像資料更精密地掌握螢光褪色的時間的變化,可實現分類精度的進一步的提高。 With such a configuration, it is possible to use the fluorescent image data accumulated in the time series to more precisely grasp the time change of the fluorescent fading, and the classification accuracy can be further improved.

另外，未知的試樣的螢光畫像資料之取得時刻，最好適應於與進行機器學習之際之螢光畫像資料之取得時刻，但因為在實施態樣I-4中係按時間序列將機器學習時的螢光畫像資料累積起來，可想成其中會保有各取得時點的特徵，所以即使在任意時刻取得未知的試樣的螢光畫像資料，也可期待能夠確保必要的分類精度。 The acquisition time of the fluorescent image data of an unknown sample is preferably matched to the acquisition time of the fluorescent image data used for machine learning; however, since in implementation aspect I-4 the fluorescent image data used for machine learning are accumulated in time series, they can be considered to retain the characteristics of each acquisition time point, so even if the fluorescent image data of an unknown sample are acquired at an arbitrary time, the necessary classification accuracy can be expected to be secured.

I-5.又另一實施的一態樣 I-5. Another aspect of implementation

接著,說明與本發明之方法有關之又另一實施的一態樣(實施態樣I-5)的概要,但不重複與實施態樣I-1、I-2、I-3、I-4相同的部分之說明而將之省略。 Next, the outline of yet another implementation aspect (implementation aspect I-5) related to the method of the present invention will be described, but the implementation aspects I-1, I-2, I-3, and I- will not be repeated. 4 The description of the same parts will be omitted.

實施態樣I-5係還包含取得可視畫像資料之可視畫像取得程序,且從前述螢光畫像資料及該可視畫像資料作成具有預定的通道數之輸入畫像資料,以該輸入畫像資料作為訓練資料而進行機器學習。 The implementation mode I-5 also includes a visual image acquisition procedure for obtaining visual image data, and from the aforementioned fluorescent image data and the visual image data, an input image data with a predetermined number of channels is created, and the input image data is used as training data And for machine learning.

可視畫像資料係指波長在可視波長範圍內之高光譜影像或多光譜影像。 Visual image data refers to hyperspectral images or multispectral images with wavelengths within the visible wavelength range.

在此,針對可視畫像資料之取得進行概略的說明。 Here, the acquisition of visual image data will be briefly explained.

要取得可視畫像資料之情況,係例如在使用於螢光畫像資料的取得之如第6圖所示的螢光成像系統600中,並不使用分光照明裝置610的帶通濾波器614,而以從氙氣光源發出的白色光照射樣本530。分光攝影裝置620使從樣本630反射之反射光通過與特定的可視波長對應之帶通濾波器622,而用攝影裝置624只拍攝特定波長的可視畫像。藉由變換攝影裝置側的帶通濾波器622對樣本進行攝影,可取得對應於複數的可視波長之可視畫像資料。特定的可視波長,可採用R(波長:680nm附近值)、G(波長:560nm附近值)、B(波長:450nm附近值),但並非一定要限定於這些波長,也可採用其他的波長、或採用RGB中的一個或兩個。 To obtain visual image data, for example, in the fluorescent imaging system 600 shown in FIG. 6, which is used to obtain fluorescent image data, the band-pass filter 614 of the spectroscopic illumination device 610 is not used, but The sample 530 is illuminated with white light emitted from a xenon light source. The spectroscopic imaging device 620 allows the reflected light reflected from the sample 630 to pass through the band-pass filter 622 corresponding to the specific visible wavelength, and the imaging device 624 captures only the visible image of the specific wavelength. By changing the band-pass filter 622 on the side of the imaging device to photograph the sample, visual image data corresponding to multiple visible wavelengths can be obtained. The specific visible wavelength can be R (wavelength: near 680nm), G (wavelength: near 560nm), B (wavelength: near 450nm), but it is not necessarily limited to these wavelengths, and other wavelengths can also be used. Or use one or two of RGB.

實施態樣I-5的特徵在於係併用取得的螢光畫像資料及可視畫像資料兩者來作成輸入畫像資料。而且，在取得的螢光畫像資料及可視畫像資料的通道數分別為K_1、K_2之情況，並非一定要使輸入畫像資料的通道數K為K_1+K_2，也可使K為小於K_1+K_2之值(縮減之實施)。在實施如此的縮減之情況，關於要如何整理、統合資料，可考量事先實施的預備性的調查及已知的結果等來適當地決定。 A feature of implementation aspect I-5 is that both the acquired fluorescent image data and the visual image data are used together to create the input image data. When the numbers of channels of the acquired fluorescent image data and visual image data are K_1 and K_2 respectively, the number of channels K of the input image data need not be K_1+K_2; K may be a value smaller than K_1+K_2 (implementation of reduction). When such a reduction is implemented, how to organize and integrate the data can be decided appropriately in consideration of preliminary investigations conducted in advance, known results, and the like.
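The channel combination (and optional reduction) described here can be sketched with NumPy — the channel counts K_1 = 6 and K_2 = 3 and the retained channel indices are hypothetical:

```python
import numpy as np

# Hypothetical: K1 = 6 fluorescence channels and K2 = 3 visible (RGB) channels
rng = np.random.default_rng(3)
fluor = rng.random(size=(32, 32, 6))
visible = rng.random(size=(32, 32, 3))

# Combine along the channel axis: K = K1 + K2 channels
combined = np.concatenate([fluor, visible], axis=2)

# Optional reduction: keep only a subset of channels (K < K1 + K2),
# chosen e.g. from a preliminary investigation
selected = combined[:, :, [0, 2, 4, 6, 7, 8]]
```

How many and which channels to keep would, as the text says, be decided from preliminary investigations; the index list above is purely illustrative.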

藉由如此的螢光畫像資料及可視畫像資料兩者之併用,可使品質區分的判定精度更加提高。 By using both the fluorescent image data and the visual image data, the accuracy of the quality classification can be improved.

II.與本發明之裝置有關之實施的一態樣 II. An aspect of implementation related to the device of the present invention

第11圖係用來說明與本發明之裝置有關的實施的一態樣(實施態樣II)的概要之方塊圖。 FIG. 11 is a block diagram for explaining the outline of an implementation aspect (implementation aspect II) related to the device of the present invention.

實施態樣II係具備有:構築出分類器之機器學習手段1110以及品質區分判定手段1120,係與前述的實施態樣I-1、I-5大致對應之態樣。 The implementation aspect II is equipped with: a machine learning means 1110 for constructing a classifier and a quality discrimination determination means 1120, which roughly correspond to the aforementioned implementation aspects I-1 and I-5.

機器學習手段1110係將包含已知的品質區分之試樣的螢光畫像資料、或將螢光畫像資料及可視畫像資料予以輸入，從該螢光畫像資料作成具有預定的通道數之輸入畫像資料，以該輸入畫像資料作為訓練資料而進行機器學習，而構築出對前述試樣而言賦予最合適的品質區分之分類器。而且，螢光畫像資料係存在有對應於與試樣有關之激發波長與螢光波長的組合之張數(通道數)份，且係根據在以激發光的照射開始為始點之考慮螢光褪色而設定的時刻T(T≧0)開始的預定的期間內得到的預定的螢光波長的反射光的強度。 The machine learning means 1110 receives as input the fluorescent image data — or the fluorescent image data and the visual image data — of a sample containing known quality classifications, creates from the fluorescent image data input image data having a predetermined number of channels, and performs machine learning using the input image data as training data, thereby constructing a classifier that assigns the most appropriate quality classification to the aforementioned sample. The fluorescent image data exist in a number of images (number of channels) corresponding to the combinations of excitation wavelength and fluorescence wavelength relevant to the sample, and are based on the intensity of reflected light of a predetermined fluorescence wavelength obtained during a predetermined period beginning at a time T (T≧0) that is set, in consideration of fluorescence fading, with the start of excitation-light irradiation as the origin.

品質區分判定手段1120係將在機器學習手段1110構築出的學習完成的分類器應用於從包含有品質區分未知的成分之試樣的螢光畫像資料得到的輸入畫像資料,判定該試樣中包含的品質區分。 The quality classification judging means 1120 applies the learned classifier constructed by the machine learning means 1110 to the input image data obtained from the fluorescent image data of the sample whose quality classification is unknown, and judges that the sample contains The quality of distinction.

學習的方法等之詳細內容,係如同在實施態樣I-1中說明過的。 The details of the learning method, etc., are as explained in Implementation Mode I-1.

另外,實施態樣II的變化例可從與前述的實施態樣I-2、I-3、I-4相應的態樣導出,但其內容有實質重複的地方,故將其說明予以省略。 In addition, variations of Embodiment II can be derived from the aspects corresponding to Embodiments I-2, I-3, and I-4 described above; since their content would substantially repeat what has already been said, their description is omitted.

III.應用於菸草原料的品質區分的判定之態樣例 III. Example embodiment applied to the determination of quality classifications of tobacco raw materials

以下,說明將如前述的方法應用於菸草原料的品質區分的判定之實施的一態樣。 Hereinafter, an implementation mode of applying the aforementioned method to the judgment of the quality classification of tobacco raw materials will be described.

首先,說到菸草原料的品質區分的例子,係大致如以下所述。 First, an example of the quality classification of tobacco raw materials is roughly as follows.

<菸草原料的品質區分> <Quality Classification of Tobacco Raw Materials>

‧柏萊種的葉肉部(BLY):柏萊種的菸葉的葉肉部 ‧Mesophyll of Burley type (BLY): the mesophyll of Burley tobacco leaves

‧黃色種的葉肉部(FCV):黃色種的菸葉的葉肉部 ‧Mesophyll of flue-cured (yellow) type (FCV): the mesophyll of flue-cured tobacco leaves

‧東方種的葉肉部(ORI):東方種的菸葉的葉肉部 ‧The mesophyll of oriental species (ORI): the mesophyll of oriental tobacco leaves

‧中骨(stem):菸葉的葉脈部 ‧Stem: the midrib (vein) portion of the tobacco leaf

‧菸草片(sheet):在葉肉、中骨等之主原料加入纖維質、助劑等再形成為片狀而成者(參照例如日本特許3872341號公報) ‧Sheet (reconstituted tobacco): formed into sheet form by adding fibrous material, additives, and the like to main raw materials such as mesophyll and stem (see, for example, Japanese Patent No. 3872341)

‧膨化乾燥體(puff):使切絲的中骨等濕潤、膨脹後加以乾燥而成者(參照例如日本特許5948316號公報) ‧Puffed dried material (puff): made by moistening and expanding shredded stems and the like and then drying them (see, for example, Japanese Patent No. 5948316)

接著,說明與菸草原料的品質區分判定有關之實施的一態樣。請注意本發明並不限定於此一態樣。 Next, an aspect of the implementation related to the quality classification judgment of tobacco raw materials will be explained. Please note that the present invention is not limited to this aspect.

<使用樣本之準備> <Preparation of the Samples Used>

使用屬於上述的品質區分之菸草原料來作為使用樣本,但葉肉部係將產地國不同之菸草的葉肉分別裁切成寬度約2mm的大小然後加以混合而調製出,中骨係將產地國不同之菸草的中骨加以混合而調製出。 Tobacco raw materials belonging to the above quality classifications were used as samples. The mesophyll samples were prepared by cutting the mesophyll of tobaccos from different producing countries into pieces about 2 mm wide and mixing them; the stem samples were prepared by mixing stems of tobaccos from different producing countries.

<預備性的探討及驗證> <Preliminary Discussion and Verification>

為了確認可否將如前述的方法應用於菸草原料的品質區分的判定,進行如以下所述之預備性的探討及驗證。 In order to confirm whether the aforementioned method can be applied to the determination of the quality of tobacco raw materials, preliminary investigations and verifications as described below are carried out.

(1)激發波長與螢光波長的組合之探討及驗證 (1) Discussion and verification of the combination of excitation wavelength and fluorescence wavelength

為了取得螢光畫像資料,必須決定激發波長與螢光波長的組合。因此,考量與過去得到的菸草原料有關的知識見解等,選擇多酚及葉綠素的激發波長來作為激發、螢光波長的候補,並驗證選擇的妥當性。 In order to acquire fluorescence image data, the combinations of excitation wavelength and fluorescence wavelength must be decided. Therefore, drawing on previously obtained knowledge about tobacco raw materials, the excitation wavelengths of polyphenols and chlorophyll were selected as candidates for the excitation/fluorescence wavelengths, and the appropriateness of this selection was verified.

首先,在未透過濾波器的情況下對於在白色光下取得的可視畫像係如第12A圖所示之菸草原料樣本照射激發光,取得如第12B圖所示的螢光畫像。從第12B圖之螢光畫像,確認了可實現能夠在沒有濾波器的狀態下檢知一片片菸絲之空間解析度。此外,也確認了可概略判別上述的品質區分。 First, without using a filter, the tobacco raw material sample — whose visible image acquired under white light is shown in Fig. 12A — was irradiated with excitation light, and the fluorescence image shown in Fig. 12B was acquired. From the fluorescence image of Fig. 12B, it was confirmed that a spatial resolution capable of detecting individual shreds of cut tobacco can be achieved even without a filter. It was also confirmed that the above quality classifications can be roughly discriminated.

接著,針對使激發、螢光波長(nm)為I(340,460)、II(360,440)、III(380,680)、IV(400,660)、V(400,680)、VI(420,660)之組合,使用如第6圖所示的螢光成像系統,取得如第13圖所示之螢光畫像。關於第6圖所示的螢光成像系統的動作等,因為已在「I-1-1.」中詳細說明過,故將其說明予以省略。 Next, for the excitation/fluorescence wavelength (nm) combinations I (340,460), II (360,440), III (380,680), IV (400,660), V (400,680), and VI (420,660), fluorescence images as shown in Fig. 13 were acquired using the fluorescence imaging system shown in Fig. 6. The operation of the fluorescence imaging system of Fig. 6 has already been described in detail in section "I-1-1.", so its description is omitted here.

從第13圖判斷出大致可保證柏萊種的葉肉部(BLY)、黃色種的葉肉部(FCV)、東方種的葉肉部(ORI)、中骨(stem)、菸草片(sheet)、膨化乾燥體(puff)之各品質區分的區別的妥當性,所以確認在該等品質區分的判定上使用如上述的激發、螢光波長之組合I至VI為有效的。 From Fig. 13 it was judged that the distinctions among the quality classifications — Burley mesophyll (BLY), flue-cured mesophyll (FCV), Oriental mesophyll (ORI), stem, sheet, and puff — can be assured to a reasonable degree, confirming that the excitation/fluorescence wavelength combinations I to VI above are effective for determining these quality classifications.

(2)語義分割的有效性的驗證 (2) Verification of the effectiveness of semantic segmentation

如前述,語義分割係以畫素等級(pixel level)來辨識物體,可做到畫素等級之類別區分(分類)。 As mentioned above, semantic segmentation recognizes objects at the pixel level, so that class discrimination (classification) can be performed pixel by pixel.

例如關於菸草原料,有例如中骨(stem)、菸草片(sheet)、膨化乾燥體(puff)係共有一部分的材料之可能性,但就算是如此的情況,藉由也考慮到形狀,也可期待能夠使品質區分的判定精度進一步提高。 For tobacco raw materials, for example, the stem, sheet, and puff may share some of the same material; even in such cases, by also taking shape into account, a further improvement in the accuracy of quality-classification determination can be expected.

因此,為了進行此點之驗證,進行了如以下所述之用來確認應用語義分割的有效性之實驗。 Therefore, in order to verify this point, an experiment to confirm the effectiveness of applying semantic segmentation as described below was performed.

第14圖係用來說明應用語義分割於菸草原料樣本之情況的概要之模式圖。第14圖所示之程序的概要係如下所述。 Figure 14 is a schematic diagram used to illustrate the outline of the application of semantic segmentation to tobacco raw material samples. The outline of the program shown in Figure 14 is as follows.

對於利用如上述的激發、螢光波長的組合I至VI而得到的螢光畫像(A),應用屬於語義分割方法的一種之上述的SegNet而取得畫像(B)。 For the fluorescent image (A) obtained by using the combination of excitation and fluorescence wavelengths I to VI as described above, the image (B) is obtained by applying the above-mentioned SegNet, which is one of the semantic segmentation methods.

然後,對於畫像(B)進行對比強調處理後再加以二值化而取得畫像(C)。 Then, image (B) is contrast-enhanced and then binarized to obtain image (C).

然後,從畫像(C)將無用的物體去除掉而得到畫像(D)。 Then, spurious objects are removed from image (C) to obtain image (D).

最後,對於畫像(D)的各物體進行標籤化。 Finally, each object in image (D) is labeled.

結果,確認了採用如此的程序而考慮形狀之妥當性。 As a result, the appropriateness of taking shape into account through such a procedure was confirmed.
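The (B)→(D) post-processing steps described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the threshold and minimum object size are assumed values:

```python
import numpy as np
from scipy import ndimage

def postprocess(seg_map, thresh=0.5, min_size=20):
    """Steps (B)-(D): contrast-stretch, binarize, remove small spurious
    objects, then label the remaining objects."""
    lo, hi = float(seg_map.min()), float(seg_map.max())
    stretched = (seg_map - lo) / (hi - lo + 1e-9)   # contrast emphasis
    binary = stretched > thresh                     # binarization
    labels, n = ndimage.label(binary)               # connected components
    sizes = np.atleast_1d(ndimage.sum(binary, labels, range(1, n + 1)))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return ndimage.label(keep)                      # (labeled image, count)

# Dummy segmentation map: one 10x10 object (kept), one 2x2 speck (removed).
m = np.zeros((50, 50))
m[5:15, 5:15] = 1.0
m[30:32, 30:32] = 1.0
labeled, n_objects = postprocess(m)
print(n_objects)  # 1
```

Any equivalent morphological cleanup would serve; the point is that object shape survives into the labeled output and can inform the classification.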

(3)螢光畫像及可視畫像兩者併用的有效性之驗證 (3) Verification of the effectiveness of the combined use of both fluorescent images and visual images

只利用三通道的可視光的資料,亦即R(波長:680nm附近值)、G(波長:560nm附近值)、B(波長:450nm附近值)之多光譜資料(可視畫像資料),針對柏萊種的葉肉部(BLY)、黃色種的葉肉部(FCV)、東方種的葉肉部(ORI),應用卷積神經網路(AlexNet,GoogLeNet)進行深度學習。不過,此情況的識別率為約68%。 Deep learning with convolutional neural networks (AlexNet, GoogLeNet) was first applied to Burley mesophyll (BLY), flue-cured mesophyll (FCV), and Oriental mesophyll (ORI) using only three channels of visible-light data, i.e., multispectral data for R (wavelength: around 680 nm), G (wavelength: around 560 nm), and B (wavelength: around 450 nm) (visible image data). In this case, however, the recognition rate was only about 68%.

因此,預測若併用螢光畫像資料及可視畫像資料兩者來作成輸入畫像資料,以該輸入畫像資料進行機器學習,可期待能夠使菸草原料的品質區分的判定精度進一步提高。 It was therefore predicted that, if input image data were created from both the fluorescence image data and the visible image data and machine learning were performed on that input, a further improvement in the accuracy of quality-classification determination for tobacco raw materials could be expected.

為了進行此點之驗證,進行了如此之併用螢光畫像及可視畫像兩者的有效性之實驗。 To verify this point, an experiment on the effectiveness of combining both fluorescence images and visible images was carried out.

首先,針對柏萊種的葉肉部(BLY)、黃色種的葉肉部(FCV)、東方種的葉肉部(ORI)、中骨(stem)、菸草片(sheet)、膨化乾燥體(puff)的切絲,各準備由可視畫像資料為R(波長:680nm附近值)、G(波長:560nm附近值)、B(波長:450nm附近值)之多光譜資料,螢光畫像資料為激發、螢光波長(nm)為(300,450)、(340,450)、(380,650)之多光譜資料所構成之6通道的菸草原料樣本的畫像資料約500張左右。 First, for shreds of Burley mesophyll (BLY), flue-cured mesophyll (FCV), Oriental mesophyll (ORI), stem, sheet, and puff, about 500 six-channel images of tobacco raw material samples were prepared for each, consisting of visible image data — multispectral data for R (wavelength: around 680 nm), G (around 560 nm), and B (around 450 nm) — and fluorescence image data — multispectral data for the excitation/fluorescence wavelength (nm) combinations (300,450), (340,450), and (380,650).

接著,將如此取得的6通道的畫像縮減到3通道,作成學習用的輸入畫像。第15圖係顯示關於菸草原料樣本,併用螢光畫像資料及可視畫像資料而進行機器學習的3通道的輸入畫像資料的一例之表。此表中顯示了例如通道1之資料為R之光譜資料的貢獻度為100,激發、螢光波長(nm)為(340,450)、(380,650)之光譜資料的貢獻度各為50之資料。 Next, the 6-channel images thus obtained were reduced to 3 channels to create input images for learning. Fig. 15 is a table showing an example of the 3-channel input image data used for machine learning on tobacco raw material samples with both fluorescence image data and visible image data. The table shows, for example, that for channel 1 the contribution of the R spectral data is 100, and the contributions of the spectral data for the excitation/fluorescence wavelengths (nm) (340,450) and (380,650) are each 50.

然後,以如此得到的3通道的輸入畫像資料作為訓練資料,應用前述的AlexNet進行深層學習。 Then, using the 3-channel input image data thus obtained as training data, deep learning was performed with the aforementioned AlexNet.

結果,確認了如此之併用螢光畫像及可視畫像兩者之妥當性。 As a result, the appropriateness of combining both fluorescence images and visible images in this way was confirmed.

使用的卷積神經網路並不限定於AlexNet,亦可使用其他的已知的網路(例如GoogLeNet等)來進行深層學習。 The convolutional neural network used is not limited to AlexNet; other known networks (e.g., GoogLeNet) may also be used for deep learning.

<與菸草原料的品質區分的判定有關之態樣例> <Example Embodiment of Quality-Classification Determination of Tobacco Raw Materials>

根據如上述的預備性的探討及驗證,安裝用於菸草原料樣本的深層學習用的系統,進行如以下所述之深層學習。 Based on the preliminary investigation and verification described above, a system for deep learning on tobacco raw material samples was implemented, and deep learning was performed as described below.

安裝的系統,採用的是語義分割用的網路SegNet。 The installed system uses SegNet, a network for semantic segmentation.

另外,採用由可視畫像資料為R(波長:680nm附近值)、G(波長:560nm附近值)、B(波長:450nm附近值)之多光譜資料,螢光畫像資料為激發、螢光波長(nm)為I(340,460)、II(360,440)、III(380,680)、IV(400,660)、V(400,680)、VI(420,660)之多光譜資料所構成之9通道的菸草原料樣本的畫像資料。 In addition, 9-channel image data of tobacco raw material samples were used, consisting of visible image data — multispectral data for R (wavelength: around 680 nm), G (around 560 nm), and B (around 450 nm) — and fluorescence image data — multispectral data for the excitation/fluorescence wavelength (nm) combinations I (340,460), II (360,440), III (380,680), IV (400,660), V (400,680), and VI (420,660).

可視畫像資料及螢光畫像資料係使用如第6圖所示的螢光成像系統來取得,但以激發光照射開始為始點之取得時刻及期間(T,P)(單位:秒)係考慮螢光褪色而分別設定如下。 The visible image data and fluorescence image data were acquired using the fluorescence imaging system shown in Fig. 6; the acquisition time and period (T, P) (unit: seconds), measured from the start of excitation-light irradiation, were set as follows in consideration of fluorescence fading.

關於可視畫像,係設定為R(0,1)、G(1,1)、B(2,10),關於螢光畫像,則是設定為接在可視畫像的取得之後依序每10秒取得I,II,III,IV,V,VI。 For the visible images, the settings were R(0,1), G(1,1), and B(2,10); for the fluorescence images, I, II, III, IV, V, and VI were acquired sequentially, one every 10 seconds, following the acquisition of the visible images.
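The timing settings above can be sketched as a small schedule. The text only says the fluorescence images follow the visible acquisition at 10-second intervals; starting right after the last visible frame ends, and using 10 s as each fluorescence period P, are assumptions here:

```python
# Visible-image settings (start time T, period P), in seconds, from the text.
visible = {"R": (0, 1), "G": (1, 1), "B": (2, 10)}

# Fluorescence images I-VI follow the visible acquisition sequentially,
# one every 10 s. The start offset (end of the last visible frame) and the
# per-image period of 10 s are assumptions.
fluor_start = max(t + p for t, p in visible.values())   # 12 s
schedule = {name: (fluor_start + 10 * i, 10)
            for i, name in enumerate(["I", "II", "III", "IV", "V", "VI"])}
print(schedule["I"], schedule["VI"])  # (12, 10) (62, 10)
```

Because each combination is captured at a different elapsed time after irradiation begins, the channels implicitly sample different points on the fading curve.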

根據關於螢光畫像的取得之如此的設定,能夠進行有利用到螢光褪色的特性之深層學習,結果就可期待判定精度之進一步的提高。 With these settings for fluorescence-image acquisition, deep learning that exploits the fluorescence-fading characteristics becomes possible, and as a result a further improvement in determination accuracy can be expected.

然後,對於品質區分已知的菸草原料樣本,進行利用卷積神經網路之深層學習(權重:0.5,訓練週期(epoch)數:400)。訓練週期數1係指學習一次所有的訓練資料。 Then, for tobacco raw material samples with known quality classifications, deep learning using a convolutional neural network was performed (weight: 0.5, number of training epochs: 400). An epoch count of 1 means that all of the training data is learned once.

第16圖係顯示將如此而得到的學習完成的卷積神經網路應用於品質區分已知的菸草原料的測試樣本所得到的判定結果之表。 Fig. 16 is a table showing the determination results obtained by applying the trained convolutional neural network thus obtained to test samples of tobacco raw materials with known quality classifications.

表中的A,B,D,O,R,S分別為表示FCV,BLY,Puff,ORI,Sheet,Stem之符號。 A, B, D, O, R, and S in the table denote FCV, BLY, Puff, ORI, Sheet, and Stem, respectively.

針對表進行說明的話,例如,表的第一行表示:就菸草原料A(FCV)的實際的個數162而言,推測為A,B,D,O,R,S的個數分別為99,5,11,33,4,10。另外,表中的「精度」表示例如推測為「A」之個數與實際的「A」的個數之比。 To explain the table: for example, the first row indicates that, of the 162 actual instances of tobacco raw material A (FCV), the numbers predicted as A, B, D, O, R, and S were 99, 5, 11, 33, 4, and 10, respectively. The "precision" column indicates, for example, the ratio of the number predicted as "A" to the actual number of "A".

從表(B),以如下的方式計算出全體的判定正確率: From table (B), the overall determination accuracy was calculated as follows:

總判定正確數(99+115+137+115+139+129)÷總個數(162+140+164+159+142+159)=79% Total number of correct determinations (99+115+137+115+139+129) ÷ total number of samples (162+140+164+159+142+159) = 79%
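The calculation above follows directly from the per-class correct counts (the diagonal of the confusion matrix in Fig. 16) and the per-class actual totals:

```python
import numpy as np

# Per-class correct counts and actual totals taken from the calculation above
# (class order A, B, D, O, R, S).
correct = np.array([99, 115, 137, 115, 139, 129])
totals = np.array([162, 140, 164, 159, 142, 159])

overall = correct.sum() / totals.sum()   # 734 / 926
print(int(round(100 * overall)))  # 79
```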

如此,雖然判定正確率會依原料的種類而有若干變化,但整體來看實現了可確保實用上的有效性之高的判定正確率。 Thus, although the determination accuracy varies somewhat with the type of raw material, overall a determination accuracy high enough to ensure practical effectiveness was achieved.

請注意,本發明除了上述的實施的態樣以外,還可在申請專利範圍記載的技術思想的範圍內採用各種不同的實施態樣。 Note that, besides the embodiments described above, the present invention can take various other embodiments within the scope of the technical idea set forth in the claims.

S01~S03:程序 S01~S03: Steps (procedure)

Claims (17)

一種方法,包含: A method that includes: 螢光畫像資料取得程序,係取得螢光畫像作為對應於激發波長與螢光波長的組合之張數份的螢光畫像資料,該螢光畫像為對於包含已知的品質區分之試樣照射具有預定的激發波長之激發光,根據在以該激發光的照射開始為始點並考慮螢光褪色而設定的時刻T(T≧0)開始的預定的期間P(P>0)內得到的預定的螢光波長的反射光的強度而得的螢光畫像; The fluorescence image data acquisition process is to obtain the fluorescence image as the fluorescence image data corresponding to the combination of the excitation wavelength and the fluorescence wavelength. The fluorescence image is for sample irradiation with known quality distinction. The excitation light of the predetermined excitation wavelength is based on the predetermined period P(P>0) starting from the time T(T≧0) set with the start of the excitation light irradiation as the starting point and considering the fading of the fluorescence Fluorescent portrait derived from the intensity of the reflected light of the fluorescent wavelength; 機器學習程序,係從前述螢光畫像資料作成具有預定的通道數之輸入畫像資料,以該輸入畫像資料作為訓練資料而利用電腦進行機器學習,構築出對前述試樣中包含的成分而言賦予最合適的品質區分之分類器;以及 The machine learning program is to create input image data with a predetermined number of channels from the aforementioned fluorescent image data, use the input image data as training data and use a computer to perform machine learning, and construct an assignment to the components contained in the aforementioned sample The most suitable quality classification classifier; and 品質區分判定程序,係應用前述分類器於從包含有未知的品質區分之試樣的螢光畫像資料得到的輸入畫像資料,判定該試樣中包含的品質區分。 The quality classification judgment procedure is to apply the aforementioned classifier to the input image data obtained from the fluorescent image data of the sample containing the unknown quality classification to determine the quality classification contained in the sample. 
如申請專利範圍第1項所述之方法,其中, Such as the method described in item 1 of the scope of patent application, wherein: 前述方法係進行也考慮存在於螢光畫像資料中的物體的形狀之機器學習,使之反映在前述試樣中包含的品質區分的判定中。 The aforementioned method is to perform machine learning that also considers the shape of the object existing in the fluorescent image data, so that it is reflected in the judgment of the quality classification included in the aforementioned sample. 如申請專利範圍第1或2項所述之方法,其中, Such as the method described in item 1 or 2 of the scope of patent application, wherein: 設置複數個前述時刻T(0≦T1<…<Tn),將從依各時刻Ti(0≦i≦n)得到的螢光畫像資料所作成的具有預定的通道數之輸入畫像資料,分別作為前述複數個時刻中的各個時刻的訓練資料而利用電腦進行機器學習,構築出對前述試樣而言在前述複數個時刻中的各個時刻賦予最合適的品質區分之分類器,然後在品質區分的判定時,選擇適合該品質區分的分類器,來判定該試樣中包含的品質區分。 Set a plurality of the aforementioned times T (0≦T 1 <...<T n ), and create input image data with a predetermined number of channels from the fluorescent image data obtained at each time T i (0≦i≦n) , Using a computer to perform machine learning as training data at each of the aforementioned plural times, and construct a classifier that gives the most suitable quality distinction to the aforementioned sample at each of the aforementioned plural times, and then When determining the quality classification, a classifier suitable for the quality classification is selected to determine the quality classification contained in the sample. 
如申請專利範圍第1或2項所述之方法,其中, Such as the method described in item 1 or 2 of the scope of patent application, wherein: 設置複數個前述時刻T(0≦T1<…<Tn),將依各時刻Ti(0≦i≦n)得到的螢光畫像資料按時間序列予以累積起來而作成累積螢光畫像資料,以從前述累積螢光畫像資料作成的具有預定的通道數之輸入畫像資料作為訓練資料而利用電腦進行機器學習,構築出對前述試樣而言賦予最合適的品質區分之分類器。 Set a plurality of the aforementioned times T (0≦T 1 <...<T n ), and accumulate the fluorescent image data obtained at each time T i (0≦i≦n) in a time series to create cumulative fluorescent image data Using the input image data with a predetermined number of channels created from the above-mentioned accumulated fluorescent image data as training data, machine learning is performed using a computer, and a classifier that gives the most suitable quality classification to the above-mentioned sample is constructed. 如申請專利範圍第1至4項中任一項所述之方法,其中, Such as the method described in any one of items 1 to 4 in the scope of patent application, wherein: 前述時刻T係螢光之褪色開始之前的時刻。 The aforementioned time T is the time before the fading of the fluorescent light starts. 如申請專利範圍第1至5項中任一項所述之方法,其中, Such as the method described in any one of items 1 to 5 in the scope of patent application, wherein: 將前述時刻T及期間P設定為與品質區分相依來提高該品質區分的判定精度。 The aforementioned time T and period P are set to be dependent on the quality classification to improve the accuracy of the quality classification. 如申請專利範圍第1至6項中任一項所述之方法,還包含:取得可視畫像資料之可視畫像取得程序, For example, the method described in any one of items 1 to 6 in the scope of the patent application further includes: a visual image acquisition procedure for obtaining visual image data, 且從前述螢光畫像資料及該可視畫像資料作成具有預定的通道數之輸入畫像資料,以該輸入畫像資料作為訓練資料而利用電腦進行機器學習。 And from the aforementioned fluorescent image data and the visual image data, an input image data with a predetermined number of channels is created, and the input image data is used as training data to perform machine learning using a computer. 
如申請專利範圍第7項所述之方法,其中, As the method described in item 7 of the scope of patent application, in which, 前述可視畫像資料係由R(波長:680nm附近值)、G(波長:560nm附近值)、B(波長:450nm附近值)所構成。 The aforementioned visual image data is composed of R (wavelength: value near 680 nm), G (wavelength: value near 560 nm), and B (wavelength: value near 450 nm). 如申請專利範圍第1至8項中任一項所述之方法,其中, Such as the method described in any one of items 1 to 8 in the scope of patent application, wherein: 激發波長及螢光波長係使用適合於多酚、葉綠素的檢出之激發、螢光波長。 The excitation wavelength and fluorescence wavelength are suitable for the detection of polyphenols and chlorophyll. 如申請專利範圍第1至9項中任一項所述之方法,其中, Such as the method described in any one of items 1 to 9 in the scope of patent application, wherein: 試樣係菸草原料,品質區分係柏萊種的葉肉部、黃色種的葉肉部、東方(Orient)種的葉肉部、中骨、菸草片、膨化乾燥體。 The sample is a tobacco raw material, and the quality is divided into the mesophyll part of the Belle species, the mesophyll part of the yellow species, the mesophyll part of the Orient species, the middle bone, the tobacco slice, and the puffed dried body. 一種使電腦執行申請專利範圍第1至10項中任一項所述的方法之程式。 A program that enables a computer to execute the method described in any one of items 1 to 10 in the scope of the patent application. 一種裝置,具備有: A device with: 機器學習手段,係取得對應於與包含已知的品質區分之試樣有關之激發波長與螢光波長的組合之張數份的螢光畫像資料,從該螢光畫像資料作成具有預定的通道數之輸入畫像資料,以該輸入畫像資料作為訓練資料而進行機器學習,構築出對前述試樣而言賦予最合適的品質區分之分類器,該螢光畫像資料為根據在以激發光的照射開始為始點並考慮螢光褪色而設定的時刻T(T≧0)開始的預定的期間P(P>0)內得到的預定的螢光波長的反射光的強度而得之螢光畫像資料;以及 The machine learning method is to obtain the fluorescence image data corresponding to the combination of the excitation wavelength and the fluorescence wavelength related to the sample containing the known quality classification, and make a predetermined number of channels from the fluorescence image data The input image data is used as training data for machine learning to construct a classifier that gives the most suitable quality distinction to the aforementioned sample. 
The fluorescent image data is based on the start of excitation light irradiation Fluorescent image data obtained from the intensity of the reflected light of the predetermined fluorescent wavelength obtained during the predetermined period P (P>0) starting from the time T (T≧0) set for the starting point and considering the fluorescent fading; as well as 品質區分判定手段,係應用前述分類器於從包含有未知的品質區分之試樣的螢光畫像資料得到的輸入畫像資料,判定該試樣中包含的品質區分。 The quality classification judging means is to apply the aforementioned classifier to the input image data obtained from the fluorescent image data of the sample containing the unknown quality classification to determine the quality classification contained in the sample. 如申請專利範圍第12項所述之裝置,其中, As the device described in item 12 of the scope of patent application, in which, 前述裝置係進行也考慮存在於前述螢光畫像資料中的物體的形狀之機器學習,使之反映在前述試樣中包含的品質區分的判定中。 The aforementioned device performs machine learning that also considers the shape of the object existing in the aforementioned fluorescent image data, so that it is reflected in the judgment of the quality classification included in the aforementioned sample. 如申請專利範圍第12或13項所述之裝置,其中, Such as the device described in item 12 or 13 of the scope of patent application, wherein: 將從前述取得時刻T各自不同的螢光畫像資料作成的具有預定的通道數之輸入畫像資料,分別作為各個前述取得時刻的訓練資料而利用電腦進行機器學習,構築出對前述試樣而言在各個前述取得時刻賦予最合適的品質區分之分類器,然後針對包含有未知的品質區分的試樣選擇適合的分類 器,在品質區分的判定時,選擇適合該品質區分的分類器,來判定該試樣中包含的品質區分。 The input image data with a predetermined number of channels created from the fluorescent image data at the respective acquisition time T is used as the training data at each acquisition time. 
The computer is used for machine learning to construct Each of the aforementioned acquisition moments is assigned the most suitable classifier for quality classification, and then a suitable classification is selected for samples containing unknown quality classifications When the quality classification is determined, a classifier suitable for the quality classification is selected to determine the quality classification included in the sample. 如申請專利範圍第12或13項所述之裝置,其中, Such as the device described in item 12 or 13 of the scope of patent application, wherein: 將前述取得時刻T各自不同的螢光畫像資料按時間序列予以累積起來而作成累積螢光畫像資料,以從前述累積螢光畫像資料作成的具有預定的通道數之輸入畫像資料作為訓練資料而利用電腦進行機器學習,構築出對前述試樣而言賦予最合適的品質區分之分類器。 Accumulate the fluorescent image data that are different from each other at the acquisition time T in time series to create cumulative fluorescent image data, and use the input image data with a predetermined number of channels created from the cumulative fluorescent image data as training data The computer performs machine learning to construct a classifier that gives the most suitable quality distinction to the aforementioned samples. 如申請專利範圍第12至15項中任一項所述之裝置,其中, The device described in any one of items 12 to 15 in the scope of patent application, wherein: 前述裝置還取得可視畫像資料,且從前述螢光畫像資料及該可視畫像資料作成具有預定的通道數之輸入畫像資料,以該輸入畫像資料作為訓練資料而利用電腦進行機器學習。 The aforementioned device also obtains visual image data, and forms input image data with a predetermined number of channels from the aforementioned fluorescent image data and the visual image data, and uses the input image data as training data to perform machine learning using a computer. 如申請專利範圍第16項所述之裝置,其中, As the device described in item 16 of the scope of patent application, in which, 前述可視畫像資料係由R(波長:680nm附近值)、G(波長:560nm附近值)、B(波長:450nm附近值)所構成。 The aforementioned visual image data is composed of R (wavelength: value near 680 nm), G (wavelength: value near 560 nm), and B (wavelength: value near 450 nm).
TW108142311A 2019-01-28 2019-11-21 Method, program and device for determining quality of sample using fluorescence image TW202028722A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019012244 2019-01-28
JP2019-012244 2019-01-28

Publications (1)

Publication Number Publication Date
TW202028722A true TW202028722A (en) 2020-08-01

Family

ID=71840405

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108142311A TW202028722A (en) 2019-01-28 2019-11-21 Method, program and device for determining quality of sample using fluorescence image

Country Status (3)

Country Link
JP (1) JP7134421B2 (en)
TW (1) TW202028722A (en)
WO (1) WO2020158107A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2023013134A1 (en) * 2021-08-02 2023-02-09
JPWO2023032296A1 (en) * 2021-09-02 2023-03-09
CN114720436B (en) * 2022-01-24 2023-05-12 四川农业大学 Agricultural product quality parameter detection method and equipment based on fluorescence hyperspectral imaging
CN116067931B (en) * 2023-02-06 2023-09-12 大连工业大学 Frozen strip tilapia TVB-N nondestructive testing method based on fluorescence response image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914841B (en) * 2014-04-03 2018-03-09 深圳大学 Based on the segmentation of the vaginal bacteria of super-pixel and deep learning and categorizing system
CN106546569B (en) * 2016-10-31 2019-10-15 浙江大学 A kind of screening technique and device of the plant drought resistance mutant of high throughput
DK3602007T3 (en) * 2017-03-22 2024-01-15 Adiuvo Diagnostics Pvt Ltd DEVICE AND PROCEDURE FOR DETECTION AND CLASSIFICATION OF PATHOGENS

Also Published As

Publication number Publication date
JP7134421B2 (en) 2022-09-12
WO2020158107A1 (en) 2020-08-06
JPWO2020158107A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
TW202028722A (en) Method, program and device for determining quality of sample using fluorescence image
Zhan et al. Semisupervised hyperspectral image classification based on generative adversarial networks
US7953264B2 (en) Classifying image features
Lu et al. Detection of surface and subsurface defects of apples using structured-illumination reflectance imaging with machine learning algorithms
CN110033032B (en) Tissue slice classification method based on microscopic hyperspectral imaging technology
Jayalakshmi et al. Performance analysis of convolutional neural network (CNN) based cancerous skin lesion detection system
CN110261329B (en) Mineral identification method based on full-spectrum hyperspectral remote sensing data
WO2018054091A1 (en) Method for identifying components of exocarpium
NL2025810A (en) Method for classifying and evaluating nitrogen content level of brassica rapa subsp. oleifera (brsro) canopy
CN115841594B (en) Attention mechanism-based coal gangue hyperspectral variable image domain data identification method
CN112766223A (en) Hyperspectral image target detection method based on sample mining and background reconstruction
CN114418027B (en) Hyperspectral image characteristic wave band selection method based on wave band attention mechanism
Silva et al. Automatic detection of Flavescense Dorée grapevine disease in hyperspectral images using machine learning
Xu et al. Deep learning classifiers for near infrared spectral imaging: a tutorial
Jayasundara et al. Multispectral imaging for automated fish quality grading
Orillo et al. Rice plant nitrogen level assessment through image processing using artificial neural network
Liu et al. Using hyperspectral imaging automatic classification of gastric cancer grading with a shallow residual network
Fu et al. Identification of maize seed varieties based on stacked sparse autoencoder and near‐infrared hyperspectral imaging technology
WO2023096971A1 (en) Artificial intelligence-based hyperspectrally resolved detection of anomalous cells
CN111340098A (en) STA-Net age prediction method based on shoe print image
CN114720436B (en) Agricultural product quality parameter detection method and equipment based on fluorescence hyperspectral imaging
Winkelmaier Differentiating Benign and Malignant Phenotypes in Breast Histology Sections with Quantum Cascade Laser Microscopy
Moyes et al. Multi-Channel Auto-Encoders and a Novel Dataset for Learning Domain Invariant Representations of Histopathology Images
Bhuvaneswari et al. Robust Image Forgery Classification using SqueezeNet Network
Genutis Deep learning approach to nematode detection in hyperspectral images of cod fillets