WO2018054091A1 - Method for identifying components of Huajuhong (Exocarpium Citri Grandis) (化橘红成分鉴定方法) - Google Patents

化橘红成分鉴定方法 (Method for identifying components of Huajuhong, Exocarpium Citri Grandis)

Info

Publication number
WO2018054091A1
WO2018054091A1 (PCT/CN2017/086901)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
hyperspectral data
hyperspectral
model
data
Prior art date
Application number
PCT/CN2017/086901
Other languages
English (en)
French (fr)
Inventor
沈小钟
崔穗旭
Original Assignee
广州道地南药技术研究有限公司
广东食品药品职业学院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州道地南药技术研究有限公司, 广东食品药品职业学院 filed Critical 广州道地南药技术研究有限公司
Publication of WO2018054091A1 publication Critical patent/WO2018054091A1/zh

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31 - Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry

Definitions

  • the present disclosure relates to the field of medicinal material identification technology, for example, to a method for identifying the components of Huajuhong (Exocarpium Citri Grandis).
  • Huajuhong, also known as Huapi or Huazhou Juhong, is the outer peel of the immature fruit of Citrus grandis 'Tomentosa' (Huazhou pomelo), a plant of the family Rutaceae.
  • Huajuhong not only relieves cough, resolves phlegm, strengthens the stomach, regulates qi, and relieves the effects of alcohol, but is also an excellent raw material for cosmetic use, and it has broad market prospects.
  • volatile oils, flavonoids, polysaccharides, and coumarins are the main active ingredients of Huajuhong.
  • different varieties differ in active-ingredient content, efficacy, and price, with the genuine peel being the most effective. As a result, genuine fruit, counterfeit fruit, and counterfeit peel are often passed off on the market as genuine peel, which harms the interests of consumers and also hurts the interests of farmers who grow the superior varieties.
  • the identification methods commonly used for Huajuhong components in the related art are mainly trait (morphological) identification, microscopic identification, and high performance liquid chromatography. Although each of these methods has its own advantages, they suffer from strong subjectivity, the need for sample pretreatment, and complicated experimental procedures, and they cannot meet the market's need for rapid and reliable testing.
  • the embodiment provides a method for identifying Huajuhong components that involves simple operating steps and can accurately identify the components of Huajuhong.
  • a method for identifying Huajuhong includes:
  • the modeling set samples are scanned in at least one wavelength band, and hyperspectral images of the modeling set samples are acquired;
  • hyperspectral data of the modeling set samples are acquired from the hyperspectral images; and the hyperspectral data of the modeling set samples are used as input variables of a discriminant analysis model to obtain component identification results of the modeling set samples.
  • before the samples in the sample set are scanned in the at least one wavelength band to acquire the hyperspectral images of the modeling set samples, the method further includes:
  • the test set samples are scanned in the at least one wavelength band, and hyperspectral images of the test set samples are acquired; hyperspectral data of the test set samples are acquired from the hyperspectral images; and the hyperspectral data of the test set samples and the class assignments of the test set samples are pre-stored in the discriminant analysis model.
  • after the hyperspectral data of the modeling set samples are acquired, the method further includes: selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by a successive projections algorithm.
  • using the hyperspectral data of the modeling set samples as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples then includes:
  • the hyperspectral data corresponding to the characteristic wavelengths are used as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples.
  • the discriminant analysis model comprises a discriminant model constructed by the partial least squares (PLS) method or a discriminant model constructed by an extreme learning machine (ELM).
  • the at least one wavelength band comprises a wavelength band of 400 nm (nanometres) to 1000 nm or a wavelength band of 1000 nm to 2500 nm.
  • the samples in the sample set comprise genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit of Huajuhong.
  • the hyperspectral data includes spatial position data, wavelength data, and spectral absorption values.
  • processing the hyperspectral data of the samples in the sample set includes:
  • the hyperspectral data of the samples in the sample set are corrected based on the following formula: R ref = (DN raw - DN dark) / (DN white - DN dark), where
  • R ref is the corrected hyperspectral data,
  • DN raw is the hyperspectral data of the samples in the sample set before correction,
  • DN white is the whiteboard (white-reference) correction data, and
  • DN dark is the blackboard (dark-reference) correction data.
  • the corrected hyperspectral data are denoised by the moving-window least-squares polynomial smoothing (Savitzky-Golay, SG) algorithm.
  • selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by a successive projections algorithm includes: assigning class values to the components of the modeling set samples; and
  • the hyperspectral data of the modeling set samples and the class assignments are used as input variables of the successive projections algorithm, and characteristic wavelengths are selected in the hyperspectral data of the modeling set samples.
  • the method further includes: calculating recognition accuracy based on the recognition result.
  • This embodiment also provides another method for identifying Huajuhong components, including:
  • the samples in the sample set are scanned in at least one wavelength band, and hyperspectral images of the samples in the sample set are acquired; hyperspectral data of the samples are acquired from the images; the data are processed, the samples in the sample set are divided into modeling set samples and test set samples, and the hyperspectral data of the modeling set samples and of the test set samples are obtained; and characteristic wavelengths are selected in the hyperspectral data of the modeling set samples by a successive projections algorithm.
  • the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples, or the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths, are used as input variables of the discriminant analysis model to obtain component identification results of the modeling set samples.
  • in this method for identifying Huajuhong components, the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples or the hyperspectral data corresponding to the characteristic wavelengths are used as the input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples; the operating procedure is simple, and the components of Huajuhong can be accurately identified.
  • FIG. 1a is a flow chart of a method for identifying Huajuhong components provided in the first embodiment;
  • FIG. 1b is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 400 nm-1000 nm scanning band provided in the first embodiment;
  • FIG. 1c is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 1000 nm-2500 nm scanning band provided in the first embodiment;
  • FIG. 1d is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data of the modeling set samples are the input variables, in the 400 nm-1000 nm band;
  • FIG. 1e is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 400 nm-1000 nm band;
  • FIG. 1f is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data of the modeling set samples are the input variables, in the 1000 nm-2500 nm band;
  • FIG. 1g is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 1000 nm-2500 nm band;
  • FIG. 2a is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data of the modeling set samples are the input variables, in the 400 nm-1000 nm band, provided by the second embodiment;
  • FIG. 2b is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 400 nm-1000 nm band, provided by the second embodiment;
  • FIG. 2c is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data of the modeling set samples are the input variables, in the 1000 nm-2500 nm band;
  • FIG. 2d is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 1000 nm-2500 nm band.
  • FIG. 1a is a flow chart of the method for identifying Huajuhong components provided in the first embodiment; as shown in FIG. 1a, the method includes:
  • in S110, the samples in the sample set are scanned in at least one wavelength band, and hyperspectral images of the samples in the sample set are acquired.
  • the samples in the sample set include genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit of Huajuhong:
  • 32 genuine peel samples,
  • 10 genuine fruit samples,
  • 11 counterfeit fruit samples, and
  • 7 counterfeit peel samples.
  • each type was ground to a uniform powder, and 5 g (grams) of each was placed in a Petri dish for hyperspectral image acquisition.
  • the at least one wavelength band includes the 400 nm (nanometre)-1000 nm band or the 1000 nm-2500 nm band.
  • the hyperspectral images can be acquired with the GaiaSorter hyperspectral sorting system (V10E, N25E-SWIR) from Sichuan Shuangli Hepu Technology Co., Ltd.
  • the system consists mainly of a hyperspectral imager, a CCD (charge-coupled device) camera, a light source, a dark chamber, and a computer.
  • Table 1 lists the parameters of the experimental instruments in the hyperspectral sorting system.
  • when acquiring hyperspectral images, the camera exposure time, the moving speed of the platform carrying the samples, and the distance between the objective lens and the sample must be set; these three parameters influence one another and determine whether the acquired images are appropriately sized, sharp, and free of distortion. After repeated trials, the objective lens height was set to 31 cm (centimetres),
  • the exposure time was set to 10 ms (milliseconds), and
  • the platform moving speed was set to 46 mm/s (millimetres per second).
  • image acquisition is performed with the hyperspectral imaging system acquisition software provided by Sichuan Shuangli Hepu Technology Co., Ltd. When the sample-set samples are scanned in the 400 nm-1000 nm band, hyperspectral images of multiple samples are acquired; when the modeling set samples are scanned in the 1000 nm-2500 nm band, hyperspectral images of multiple samples are acquired.
  • in S120, hyperspectral data of the samples in the sample set are acquired from the hyperspectral images.
  • hyperspectral data of the modeling set samples can be obtained from the acquired hyperspectral images of the modeling set samples.
  • the hyperspectral data include spatial position data, wavelength data, and spectral absorption values.
  • in S130, the hyperspectral data of the samples in the sample set are processed, the samples in the sample set are divided into modeling set samples and test set samples according to the processed hyperspectral data, and the hyperspectral data of the modeling set samples and the hyperspectral data of the test set samples are obtained.
  • processing the hyperspectral data of the samples in the sample set includes correcting the hyperspectral data based on the formula R ref = (DN raw - DN dark) / (DN white - DN dark), where R ref is the corrected hyperspectral data, DN raw is the hyperspectral data of the samples in the sample set before correction, DN white is the whiteboard correction data, and DN dark is the blackboard correction data; the corrected hyperspectral data are then denoised with the SG moving-window least-squares polynomial smoothing (Savitzky-Golay smoothing) algorithm.
  • FIG. 1b is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 400 nm-1000 nm scanning band,
  • and FIG. 1c is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 1000 nm-2500 nm scanning band.
  • as shown in FIG. 1b and FIG. 1c, the spectral curve 11 of the genuine fruit, the spectral curve 12 of the counterfeit fruit, the spectral curve 14 of the genuine peel, and the spectral curve 13 of the counterfeit peel follow substantially the same trend.
  • the spectral reflectance of the genuine-peel curve 14 is lower than that of the curves of the other three components, and the four components show no very obvious differences in the shape of their curves.
  • the Kennard-Stone algorithm is used to divide the samples in the sample set into modeling set samples and test set samples, and the hyperspectral data of the modeling set samples and the hyperspectral data of the test set samples are extracted from the processed hyperspectral data of the sample-set samples.
  • in S140, characteristic wavelengths are selected in the hyperspectral data of the modeling set samples by a successive projections algorithm.
  • selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by the successive projections algorithm comprises: assigning class values to the samples in the modeling set; and
  • using the hyperspectral data of the modeling set samples and the class assignments as input variables of the successive projections algorithm to select characteristic wavelengths in the hyperspectral data of the modeling set samples.
  • the genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit are assigned the values 1, 2, 3, and 4 respectively.
  • Table 2 lists the division into modeling set and test set samples; as shown in Table 2, there are 38 modeling set samples and 32 test set samples in total, and the modeling set contains 22 genuine peel samples.
  • the test set contains 20 genuine peel samples, so 10 genuine peel samples serve both as modeling set samples and as test set samples.
  • Table 3 lists the selected characteristic wavelengths. As shown in Table 3, 15 characteristic wavelengths were selected in the 400 nm-1000 nm scanning range and 5 in the 1000 nm-2500 nm range.
  • in S150, the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples or the hyperspectral data corresponding to the characteristic wavelengths are used as input variables of the discriminant model, and the component identification results of the modeling set samples are obtained.
  • the discriminant analysis model is a discriminant model constructed by the partial least squares method.
  • using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples as input variables of the discriminant model yields the component identification results of the modeling set samples; alternatively,
  • using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths as input variables of the discriminant model likewise yields the component identification results of the modeling set samples.
  • the relationship between hyperspectral data and class assignment can be established from the input hyperspectral data of the test set samples and the class assignments of the test set samples; once this relationship has been established,
  • inputting the hyperspectral data of the modeling set samples or the hyperspectral data corresponding to the characteristic wavelengths yields the class assignments of the corresponding modeling set samples,
  • and the component identification results of the modeling set samples are obtained from those class assignments.
  • on the basis of the above embodiment, the hyperspectral data of multiple samples in the test set can be input into the discriminant model constructed by the partial least squares method so that
  • the class assignments output by the model match the class assignments of those samples in the test set;
  • in this way the discriminant model constructed by the partial least squares method is trained. Then, the hyperspectral data corresponding to the characteristic wavelengths of the samples and the hyperspectral data of the modeling set samples are input into the trained model, and the class assignments of the samples are output.
  • FIG. 1d is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data of the modeling set samples are the input variables, in the 400 nm-1000 nm band;
  • FIG. 1e is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 400 nm-1000 nm band;
  • FIG. 1f is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data of the modeling set samples are the input variables, in the 1000 nm-2500 nm band;
  • FIG. 1g is a schematic diagram of the class prediction values of the test set and modeling set samples obtained when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 1000 nm-2500 nm band.
  • as shown in FIGS. 1d-1g, the true values in the figures are the class assignments of the test set samples,
  • and the PLS-DA predicted values and the PLS-DA-SPA predicted values in the figures are the class prediction values of the modeling set samples.
  • PLS-DA denotes the discriminant model constructed by the partial least squares method from the full-band hyperspectral data of the modeling set samples,
  • and PLS-DA-SPA denotes the discriminant model constructed by the partial least squares method from the hyperspectral data corresponding to the characteristic wavelengths (characteristic bands).
  • when these two models are used to identify the components of the modeling set samples, some errors occur, but the error rate is low.
  • the method further includes: calculating recognition accuracy based on the recognition result.
  • the recognition accuracy is the ratio of the number of correctly predicted classes among the modeling set samples to the total number of samples.
  • Table 4 lists the recognition accuracies calculated for the discriminant models constructed by the partial least squares method. As shown in Table 4, in identifying the components of the modeling set samples, the highest overall recognition accuracy and the highest genuine-peel recognition accuracy are both obtained in the 1000 nm-2500 nm band by the partial least squares
  • discriminant model constructed from the hyperspectral data of the modeling set samples, at 78% and 90% respectively; the lowest genuine-peel misidentification rate is obtained in the 1000 nm-2500 nm band,
  • where the partial least squares models constructed from the hyperspectral data of the modeling set samples and from the hyperspectral data corresponding to the characteristic wavelengths
  • both give a misidentification rate of 5%.
  • whether in the 400 nm-1000 nm or the 1000 nm-2500 nm band, the partial least squares discriminant model constructed from the hyperspectral data of the modeling set samples gives a higher overall recognition rate and genuine-peel recognition rate than the discriminant model constructed from the hyperspectral data corresponding to the characteristic wavelengths.
  • the genuine-peel misidentification rate, whether in the 400 nm-1000 nm or the 1000 nm-2500 nm band, is the same for the discriminant model based on the modeling-set hyperspectral data and the discriminant model based on the characteristic-wavelength data:
  • in the 400 nm-1000 nm band the genuine-peel misidentification rate is 10%, and in the 1000 nm-2500 nm band it is 5%.
  • in the method for identifying Huajuhong components provided by this embodiment, the hyperspectral data of the test set samples together with the hyperspectral data of the modeling set samples, or the hyperspectral data of the test set samples together with the hyperspectral data corresponding to the characteristic wavelengths, are used as
  • the input variables of the model constructed by the partial least squares method to obtain the component identification results of the modeling set samples; the operating steps are simple, and the components of Huajuhong can be accurately identified.
  • the second embodiment differs from the first embodiment in that the discriminant model used is a discriminant model constructed by an extreme learning machine (ELM).
  • as shown in FIGS. 2a-2d, the true values in the figures are the class assignments of the test set samples,
  • and the ELM predicted values and the ELM-SPA predicted values in the figures are the class prediction values of the modeling set samples.
  • in FIGS. 2a-2d, the abscissa represents the test set sample number.
  • ELM denotes the discriminant model constructed by the extreme learning machine from the hyperspectral data of the modeling set samples.
  • ELM-SPA denotes the discriminant model constructed by the extreme learning machine from the hyperspectral data corresponding to the characteristic wavelengths. When the models built by the extreme learning machine are used to identify the components of the modeling set samples, some errors occur, but the error rate is low.
  • Table 5 lists the recognition accuracies calculated for the discriminant models constructed by the extreme learning machine.
  • the highest overall recognition accuracy and the highest genuine-peel recognition accuracy are obtained with the ELM and ELM-SPA models in the 1000 nm-2500 nm band, at 84% and 95% respectively.
  • the lowest genuine-peel misidentification rate is likewise obtained with the ELM and ELM-SPA models in the 1000 nm-2500 nm band, both at 5%.
  • in the 400 nm-1000 nm band, the overall recognition rate and the genuine-peel recognition rate calculated with the ELM-SPA model are higher than those of the ELM model.
  • the genuine-peel misidentification rates calculated with the ELM-SPA model and the ELM model are the same.
  • in the 1000 nm-2500 nm band, the calculated overall recognition rate, genuine-peel recognition rate, and genuine-peel misidentification rate are the same for both models, at 84%, 95%, and 5% respectively.
  • when the discriminant model constructed by the extreme learning machine and the discriminant model constructed by the partial least squares method are used to identify the modeling set samples, the overall recognition rate
  • and the genuine-peel recognition rate calculated in the 1000 nm-2500 nm band are both higher than those in the 400 nm-1000 nm band.
  • the discriminant model constructed by the extreme learning machine is more accurate than the discriminant model constructed by the partial least squares method.
  • the partial least squares discriminant model constructed from the hyperspectral data corresponding to the characteristic wavelengths is less accurate than the discriminant model constructed from the hyperspectral data of the modeling set samples.
  • in the 400 nm-1000 nm band, however, the extreme learning machine discriminant model constructed from the hyperspectral data corresponding to the characteristic wavelengths is more accurate than the one constructed from the hyperspectral data of the modeling set samples, whereas in the 1000 nm-2500 nm band,
  • the extreme learning machine discriminant model constructed from the characteristic-wavelength data and the one constructed from the modeling-set hyperspectral data have the same accuracy.
  • in this method for identifying Huajuhong components, the hyperspectral data of the test set samples and the hyperspectral data of the modeling set samples, or the hyperspectral data corresponding to the characteristic wavelengths, are used as the input variables of the model constructed by the extreme learning machine,
  • the component identification results of the modeling set samples are obtained, the operating steps are simple, and the components of Huajuhong can be accurately identified.
  • in the method for identifying Huajuhong components provided by this embodiment, the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples, or the hyperspectral data of the test set samples,
  • the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths, are used as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples; the operating steps are simple, and the components of Huajuhong can be accurately identified.

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

A method for identifying the components of Huajuhong (Exocarpium Citri Grandis), comprising: scanning modeling set samples in at least one wavelength band and acquiring hyperspectral images of the modeling set samples (S110); acquiring hyperspectral data of the modeling set samples from the hyperspectral images (S120); and using the hyperspectral data of the modeling set samples as input variables of a discriminant analysis model to obtain component identification results of the modeling set samples.

Description

Method for identifying components of Huajuhong (Exocarpium Citri Grandis)
TECHNICAL FIELD
The present disclosure relates to the technical field of medicinal material identification, for example, to a method for identifying the components of Huajuhong (Exocarpium Citri Grandis).
BACKGROUND
Huajuhong, also known as Huapi or Huazhou Juhong, is the outer peel of the immature fruit of Citrus grandis 'Tomentosa' (Huazhou pomelo), a plant of the family Rutaceae. Huajuhong not only relieves cough, resolves phlegm, strengthens the stomach, regulates qi, and relieves the effects of alcohol, but is also an excellent raw material for cosmetic use, and it has broad market prospects. Studies have shown that volatile oils, flavonoids, polysaccharides, and coumarins are its main active ingredients. Different varieties differ in active-ingredient content and efficacy, and their prices also differ considerably, with the genuine peel being the most effective. As a result, genuine fruit, counterfeit fruit, and counterfeit peel of Huajuhong are often passed off on the market as genuine peel, which harms the interests of consumers and also hurts the interests of farmers who grow the superior varieties.
The identification methods commonly used for Huajuhong components in the related art are mainly trait (morphological) identification, microscopic identification, and high performance liquid chromatography. Although each of these methods has its own advantages, they suffer to varying degrees from strong subjectivity, the need for sample pretreatment, and complicated experimental procedures, and they cannot meet the market's need for rapid and reliable testing.
SUMMARY
In view of this, the present embodiment provides a method for identifying the components of Huajuhong that involves simple operating steps and can accurately identify the components of Huajuhong.
This embodiment provides a method for identifying Huajuhong, comprising:
scanning modeling set samples in at least one wavelength band and acquiring hyperspectral images of the modeling set samples;
acquiring hyperspectral data of the modeling set samples from the hyperspectral images; and
using the hyperspectral data of the modeling set samples as input variables of a discriminant analysis model to obtain component identification results of the modeling set samples.
Optionally, before scanning the samples in the sample set in the at least one wavelength band and acquiring the hyperspectral images of the modeling set samples, the method further comprises:
scanning test set samples in the at least one wavelength band and acquiring hyperspectral images of the test set samples;
acquiring hyperspectral data of the test set samples from the hyperspectral images; and
pre-storing the hyperspectral data of the test set samples and the class assignments of the test set samples in the discriminant analysis model.
Optionally, after acquiring the hyperspectral data of the modeling set samples, the method further comprises:
selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by a successive projections algorithm;
wherein using the hyperspectral data of the modeling set samples as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples comprises:
using the hyperspectral data corresponding to the characteristic wavelengths as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples.
Optionally, the discriminant analysis model comprises a discriminant model constructed by a partial least squares method or a discriminant model constructed by an extreme learning machine.
Optionally, the at least one wavelength band comprises a band of 400 nm (nanometres) to 1000 nm or a band of 1000 nm to 2500 nm.
Optionally, the samples in the sample set comprise genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit of Huajuhong.
Optionally, the hyperspectral data comprise spatial position data, wavelength data, and spectral absorption values.
Optionally, processing the hyperspectral data of the samples in the sample set comprises:
correcting the hyperspectral data of the samples in the sample set according to the following formula:
Rref = (DNraw - DNdark) / (DNwhite - DNdark)
where Rref is the corrected hyperspectral data, DNraw is the hyperspectral data of the samples in the sample set before correction, DNwhite is the white-reference (whiteboard) correction data, and DNdark is the dark-reference (blackboard) correction data; and
denoising the corrected hyperspectral data with the moving-window least-squares polynomial smoothing (Savitzky-Golay, SG) algorithm.
Optionally, selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by the successive projections algorithm comprises:
assigning class values to the components of the modeling set samples; and
using the hyperspectral data of the modeling set samples and the class assignments as input variables of the successive projections algorithm, and selecting characteristic wavelengths in the hyperspectral data of the modeling set samples.
Optionally, the method further comprises: calculating a recognition accuracy based on the identification results.
This embodiment further provides another method for identifying the components of Huajuhong, comprising:
scanning the samples in a sample set in at least one wavelength band and acquiring hyperspectral images of the samples in the sample set;
acquiring hyperspectral data of the samples in the sample set from the hyperspectral images;
processing the hyperspectral data of the samples in the sample set, dividing the samples in the sample set into modeling set samples and test set samples, and obtaining the hyperspectral data of the modeling set samples and the hyperspectral data of the test set samples;
selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by a successive projections algorithm; and
using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples as input variables of a discriminant analysis model, or using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths as input variables of the discriminant analysis model, to obtain component identification results of the modeling set samples.
In the method for identifying Huajuhong components provided by this embodiment, the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples or the hyperspectral data corresponding to the characteristic wavelengths are used as input variables of a discriminant analysis model to obtain component identification results of the modeling set samples; the operating steps are simple, and the components of Huajuhong can be accurately identified.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a flow chart of a method for identifying Huajuhong components provided in Embodiment 1;
FIG. 1b is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 400 nm-1000 nm scanning band provided in Embodiment 1;
FIG. 1c is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 1000 nm-2500 nm scanning band provided in Embodiment 1;
FIG. 1d is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 1 when the hyperspectral data of the modeling set samples are the input variables, in the 400 nm-1000 nm band;
FIG. 1e is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 1 when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 400 nm-1000 nm band;
FIG. 1f is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 1 when the hyperspectral data of the modeling set samples are the input variables, in the 1000 nm-2500 nm band;
FIG. 1g is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 1 when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 1000 nm-2500 nm band;
FIG. 2a is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 2 when the hyperspectral data of the modeling set samples are the input variables, in the 400 nm-1000 nm band;
FIG. 2b is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 2 when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 400 nm-1000 nm band;
FIG. 2c is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 2 when the hyperspectral data of the modeling set samples are the input variables, in the 1000 nm-2500 nm band;
FIG. 2d is a schematic diagram of the class prediction values of the test set and modeling set samples obtained in Embodiment 2 when the hyperspectral data corresponding to the characteristic wavelengths are the input variables, in the 1000 nm-2500 nm band.
DETAILED DESCRIPTION
The present disclosure is described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present disclosure and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the present disclosure rather than the entire content. The following embodiments, and the features in the embodiments, may be combined with one another provided there is no conflict.
Embodiment 1
FIG. 1a is a flow chart of a method for identifying Huajuhong components provided in Embodiment 1. As shown in FIG. 1a, the method includes:
In S110, the samples in a sample set are scanned in at least one wavelength band, and hyperspectral images of the samples in the sample set are acquired.
In this embodiment, the samples in the sample set include genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit of Huajuhong: 32 genuine peel samples, 10 genuine fruit samples, 11 counterfeit fruit samples, and 7 counterfeit peel samples. The genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit in the sample set were each ground to a uniform powder, and 5 g (grams) of each was placed in a Petri dish for hyperspectral image acquisition.
In this embodiment, the at least one wavelength band includes the 400 nm (nanometre)-1000 nm band or the 1000 nm-2500 nm band. The hyperspectral images can be acquired with the GaiaSorter hyperspectral sorting system (V10E, N25E-SWIR) from Sichuan Shuangli Hepu Technology Co., Ltd. The system consists mainly of a hyperspectral imager, a CCD (charge-coupled device) camera, a light source, a dark chamber, and a computer. Table 1 lists the parameters of the experimental instruments in the hyperspectral sorting system.
Table 1
No.  Parameter  V10E  N25E-SWIR
1  Spectral range  400-1000 nm  1000-2500 nm
2  Spectral resolution  2.8 nm  12 nm
3  Image plane size  6.15 × 14.2  7.6 × 14.2
4  Reciprocal linear dispersion  97.5 nm/mm  208 nm/mm
5  Relative aperture  F/2.4  F/2.0
6  Stray light  <0.5%  <0.5%
7  Number of bands  520  288
When acquiring hyperspectral images, the camera exposure time, the moving speed of the platform carrying the samples, and the distance between the objective lens and the sample must be set. These three parameters influence one another and determine whether the acquired image is appropriately sized, sharp, and free of distortion. After repeated trials, the objective lens height was set to 31 cm (centimetres), the exposure time to 10 ms (milliseconds), and the platform moving speed to 46 mm/s (millimetres per second). Image acquisition was performed with the hyperspectral imaging system acquisition software provided by Sichuan Shuangli Hepu Technology Co., Ltd. When the sample-set samples are scanned in the 400 nm-1000 nm band, hyperspectral images of multiple samples are acquired; when the modeling set samples are scanned in the 1000 nm-2500 nm band, hyperspectral images of multiple samples are acquired.
In S120, hyperspectral data of the samples in the sample set are acquired from the hyperspectral images.
In this embodiment, the hyperspectral data of the modeling set samples can be obtained from the acquired hyperspectral images of the modeling set samples. The hyperspectral data include spatial position data, wavelength data, and spectral absorption values.
In S130, the hyperspectral data of the samples in the sample set are processed, the samples in the sample set are divided into modeling set samples and test set samples according to the processed hyperspectral data, and the hyperspectral data of the modeling set samples and the hyperspectral data of the test set samples are obtained.
In this embodiment, by way of example, processing the hyperspectral data of the samples in the sample set includes correcting the hyperspectral data of the samples in the sample set according to the following formula:
Rref = (DNraw - DNdark) / (DNwhite - DNdark)
where Rref is the corrected hyperspectral data, DNraw is the hyperspectral data of the samples in the sample set before correction, DNwhite is the white-reference (whiteboard) correction data, and DNdark is the dark-reference (blackboard) correction data; the corrected hyperspectral data are then denoised with the SG moving-window least-squares polynomial smoothing (Savitzky-Golay smoothing) algorithm.
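As an illustration of this preprocessing step, the sketch below applies the black/white reference correction and then Savitzky-Golay smoothing along the wavelength axis. It assumes the spectra are held in NumPy arrays and uses scipy.signal.savgol_filter; the window length and polynomial order are illustrative choices, since the patent does not state the SG parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

def correct_and_smooth(dn_raw, dn_white, dn_dark, window_length=11, polyorder=3):
    """Black/white reference correction followed by Savitzky-Golay (SG) smoothing.

    dn_raw   : raw spectra, shape (n_samples, n_bands)
    dn_white : white-reference (whiteboard) spectrum, shape (n_bands,)
    dn_dark  : dark-reference (blackboard) spectrum, shape (n_bands,)
    window_length, polyorder : SG parameters (illustrative values, not given in the patent)
    """
    # Rref = (DNraw - DNdark) / (DNwhite - DNdark)
    r_ref = (dn_raw - dn_dark) / (dn_white - dn_dark)
    # moving-window least-squares polynomial smoothing along the wavelength axis
    return savgol_filter(r_ref, window_length=window_length, polyorder=polyorder, axis=-1)
```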
In this embodiment, FIG. 1b is a spectral comparison diagram of the hyperspectral data of the sample-set samples in the 400 nm-1000 nm scanning band, and FIG. 1c is the corresponding diagram for the 1000 nm-2500 nm scanning band. As shown in FIGS. 1b and 1c, the spectral curve 11 of the genuine fruit, the spectral curve 12 of the counterfeit fruit, the spectral curve 14 of the genuine peel, and the spectral curve 13 of the counterfeit peel follow broadly the same trend; the spectral reflectance of the genuine-peel curve 14 is lower than that of the curves of the other three components, and the four components show no very obvious differences in the shape of their curves. In this embodiment, the Kennard-Stone algorithm is used to divide the samples in the sample set into modeling set samples and test set samples, and the hyperspectral data of the modeling set samples and the hyperspectral data of the test set samples are extracted from the processed hyperspectral data of the sample-set samples.
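The following is a minimal sketch of a standard Kennard-Stone partition on the preprocessed spectra. It is not the exact procedure used here: the patent's split lets 10 genuine-peel samples appear in both the modeling set and the test set, whereas a plain Kennard-Stone split is disjoint. The function name and the Euclidean distance metric are assumptions.

```python
import numpy as np

def kennard_stone(X, n_model):
    """Kennard-Stone split: return (modeling-set indices, test-set indices)."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise Euclidean distances
    i, j = np.unravel_index(np.argmax(dist), dist.shape)           # start from the two most distant samples
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_model:
        # choose the remaining sample farthest from its nearest already-selected sample
        d_min = dist[np.ix_(remaining, selected)].min(axis=1)
        k = remaining[int(np.argmax(d_min))]
        selected.append(k)
        remaining.remove(k)
    return selected, remaining

# e.g. model_idx, test_idx = kennard_stone(preprocessed_spectra, n_model=38)
```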
In S140, characteristic wavelengths are selected in the hyperspectral data of the modeling set samples by a successive projections algorithm.
In this embodiment, by way of example, selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by the successive projections algorithm includes: assigning class values to the samples in the modeling set; and using the hyperspectral data of the modeling set samples and the class assignments as input variables of the successive projections algorithm to select characteristic wavelengths in the hyperspectral data of the modeling set samples (a sketch of the algorithm is given after Table 3 below). The genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit are assigned the values 1, 2, 3, and 4 respectively. Table 2 lists the division into modeling set and test set samples. As shown in Table 2, there are 38 modeling set samples and 32 test set samples in total; the modeling set contains 22 genuine peel samples and the test set contains 20, so 10 genuine peel samples serve both as modeling set samples and as test set samples.
Table 2
  Genuine peel  Counterfeit peel  Genuine fruit  Counterfeit fruit
Class assignment  1  2  3  4
Modeling set samples  22  4  5  7
Test set samples  20  3  5  4
Table 3 lists the selected characteristic wavelengths. As shown in Table 3, 15 characteristic wavelengths were selected in the 400 nm-1000 nm scanning range and 5 in the 1000 nm-2500 nm range.
Table 3
(Table 3, listing the selected characteristic wavelengths, appears only as an image in the original publication.)
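As referenced above, the sketch below shows one common formulation of the successive projections algorithm for picking minimally collinear wavelengths. The number of wavelengths to keep (15 or 5, per Table 3) is passed in; the choice of starting column, and the omission of the class-assignment-based scoring that the patent also feeds to the algorithm, are simplifying assumptions.

```python
import numpy as np

def spa_select(X, n_wavelengths, start=None):
    """Successive projections algorithm: pick column (wavelength) indices with minimal collinearity.

    X : modeling-set spectra, shape (n_samples, n_bands)
    """
    Xp = np.asarray(X, dtype=float).copy()
    if start is None:
        start = int(np.argmax(np.linalg.norm(Xp, axis=0)))  # assumption: start from the largest-norm column
    selected = [start]
    for _ in range(n_wavelengths - 1):
        x_k = Xp[:, selected[-1]]
        # project every column onto the orthogonal complement of the last selected column
        Xp = Xp - np.outer(x_k, x_k @ Xp) / (x_k @ x_k)
        Xp[:, selected] = 0.0
        selected.append(int(np.argmax(np.linalg.norm(Xp, axis=0))))
    return selected

# e.g. chosen = spa_select(X_model, n_wavelengths=15)   # 15 wavelengths in the 400-1000 nm range
```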
In S150, the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples or the hyperspectral data corresponding to the characteristic wavelengths are used as input variables of the discriminant model to obtain component identification results of the modeling set samples.
In this embodiment, the discriminant analysis model is a discriminant model constructed by the partial least squares method. Using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples as input variables of the discriminant model yields the component identification results of the modeling set samples; alternatively, using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths as input variables of the discriminant model likewise yields the component identification results of the modeling set samples. Optionally, the relationship between hyperspectral data and class assignment can be established from the input hyperspectral data of the test set samples and the class assignments of the test set samples; once this relationship has been established, inputting the hyperspectral data of the modeling set samples, or the hyperspectral data corresponding to the characteristic wavelengths, yields the class assignments of the corresponding modeling set samples, and the component identification results of the modeling set samples are obtained from those class assignments.
On the basis of the above embodiment, the following approach is also possible: the hyperspectral data of multiple samples in the test set can be input into the discriminant model constructed by the partial least squares method so that the class assignments output by the model match the class assignments of those samples in the test set; in this way the partial least squares discriminant model is trained. The hyperspectral data corresponding to the characteristic wavelengths of the samples and the hyperspectral data of the modeling set samples are then input into the trained model, which outputs the class assignments of the samples.
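A minimal sketch of such a PLS discriminant model is given below, assuming scikit-learn's PLSRegression: the class assignments 1-4 are regressed on the labelled spectra and the continuous prediction is rounded to the nearest class, which matches the class prediction values plotted in FIGS. 1d-1g. The number of latent variables is an illustrative assumption; PLS-DA is also often set up with one-hot class encoding instead of a single 1-4 response.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_fit_predict(X_labelled, y_labelled, X_unknown, n_components=10):
    """Fit a PLS discriminant model on labelled spectra and predict class assignments (1-4).

    n_components is an illustrative choice; the patent does not state the number of latent variables.
    """
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_labelled, np.asarray(y_labelled, dtype=float).reshape(-1, 1))
    y_pred = pls.predict(X_unknown).ravel()
    # round the continuous PLS output to the nearest class assignment
    return np.clip(np.rint(y_pred), 1, 4).astype(int)

# e.g. predicted = plsda_fit_predict(X_test_set, y_test_set, X_model_set)
```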
In this embodiment, FIG. 1d shows the class prediction values of the test set and modeling set samples obtained with the hyperspectral data of the modeling set samples as input variables in the 400 nm-1000 nm band; FIG. 1e shows the corresponding values with the hyperspectral data corresponding to the characteristic wavelengths as input variables in the 400 nm-1000 nm band; FIG. 1f shows the values with the modeling-set hyperspectral data as input variables in the 1000 nm-2500 nm band; and FIG. 1g shows the values with the characteristic-wavelength data as input variables in the 1000 nm-2500 nm band. As shown in FIGS. 1d-1g, the true values in the figures are the class assignments of the test set samples, and the PLS-DA and PLS-DA-SPA predicted values are the class prediction values of the modeling set samples. PLS-DA denotes the discriminant model constructed by the partial least squares method from the full-band hyperspectral data of the modeling set samples, and PLS-DA-SPA denotes the discriminant model constructed by the partial least squares method from the hyperspectral data corresponding to the characteristic wavelengths (characteristic bands). When these two models are used to identify the components of the modeling set samples, some errors occur, but the error rate is low.
On the basis of the above embodiment, the method further includes calculating a recognition accuracy based on the identification results. The recognition accuracy is the ratio of the number of correctly predicted classes among the modeling set samples to the total number of samples (a sketch of this computation follows Table 4 below). Table 4 lists the recognition accuracies calculated for the discriminant models constructed by the partial least squares method. As shown in Table 4, in identifying the components of the modeling set samples, the highest overall recognition accuracy and the highest genuine-peel recognition accuracy are both obtained in the 1000 nm-2500 nm band with the partial least squares discriminant model constructed from the hyperspectral data of the modeling set samples, at 78% and 90% respectively; the lowest genuine-peel misidentification rate, 5%, is obtained in the 1000 nm-2500 nm band by the partial least squares models constructed from the modeling-set hyperspectral data and from the hyperspectral data corresponding to the characteristic wavelengths alike.
As shown in Table 4, in both the 400 nm-1000 nm and the 1000 nm-2500 nm bands, the partial least squares discriminant model constructed from the hyperspectral data of the modeling set samples gives a higher overall recognition rate and genuine-peel recognition rate than the model constructed from the hyperspectral data corresponding to the characteristic wavelengths. The genuine-peel misidentification rate, in either the 400 nm-1000 nm or the 1000 nm-2500 nm band, is the same for the model built from the modeling-set hyperspectral data and the model built from the characteristic-wavelength data: 10% in the 400 nm-1000 nm band and 5% in the 1000 nm-2500 nm band.
Table 4
(Table 4, giving the recognition accuracies of the discriminant models constructed by the partial least squares method, appears only as an image in the original publication.)
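The recognition accuracies reported in Tables 4 and 5 could be computed from the true and predicted class assignments roughly as sketched below. The exact definition of the genuine-peel misidentification rate is not spelled out in the patent; here it is taken as the fraction of non-genuine-peel samples predicted as genuine peel, which is an assumption.

```python
import numpy as np

def recognition_metrics(y_true, y_pred, genuine_peel=1):
    """Overall recognition accuracy, genuine-peel recognition accuracy, and
    genuine-peel misidentification rate (the last interpretation is an assumption)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    overall = float(np.mean(y_pred == y_true))
    is_peel = y_true == genuine_peel
    peel_accuracy = float(np.mean(y_pred[is_peel] == genuine_peel))       # genuine peel recognised as such
    peel_misidentified = float(np.mean(y_pred[~is_peel] == genuine_peel)) # other classes mistaken for genuine peel
    return overall, peel_accuracy, peel_misidentified
```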
In the method for identifying Huajuhong components provided by this embodiment, the hyperspectral data of the test set samples together with the hyperspectral data of the modeling set samples, or the hyperspectral data of the test set samples together with the hyperspectral data corresponding to the characteristic wavelengths, are used as input variables of the model constructed by the partial least squares method to obtain the component identification results of the modeling set samples; the operating steps are simple, and the components of Huajuhong can be accurately identified.
Embodiment 2
Embodiment 2 differs from Embodiment 1 in that the discriminant model used is a discriminant model constructed by an extreme learning machine.
In this embodiment, as shown in FIGS. 2a-2d, the true values in the figures are the class assignments of the test set samples, and the ELM and ELM-SPA predicted values are the class prediction values of the modeling set samples. In FIGS. 2a-2d, the abscissa is the test set sample number. ELM denotes the discriminant model constructed by the extreme learning machine from the hyperspectral data of the modeling set samples, and ELM-SPA denotes the discriminant model constructed by the extreme learning machine from the hyperspectral data corresponding to the characteristic wavelengths. When the models constructed by the extreme learning machine are used to identify the components of the modeling set samples, some errors occur, but the error rate is low.
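A minimal sketch of an extreme learning machine of the kind used here is given below: a single hidden layer with random, untrained input weights and output weights obtained by a Moore-Penrose pseudo-inverse, again regressing the class assignments 1-4. The hidden-layer size and the sigmoid activation are assumptions; the patent does not give the network settings.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine: random hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))         # sigmoid hidden-layer outputs

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights (never trained)
        self.b = self.rng.normal(size=self.n_hidden)                # random hidden-layer biases
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ np.asarray(y, dtype=float)  # output weights via Moore-Penrose pseudo-inverse
        return self

    def predict(self, X):
        y = self._hidden(np.asarray(X, dtype=float)) @ self.beta
        return np.clip(np.rint(y), 1, 4).astype(int)                # round to the class assignments 1-4

# e.g. elm = SimpleELM(n_hidden=100).fit(X_test_set, y_test_set); predicted = elm.predict(X_model_set)
```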
Table 5 lists the recognition accuracies calculated for the discriminant models constructed by the extreme learning machine. As shown in Table 5, the highest overall recognition accuracy and the highest genuine-peel recognition accuracy are obtained with the ELM and ELM-SPA models in the 1000 nm-2500 nm band, at 84% and 95% respectively. The lowest genuine-peel misidentification rate, 5%, is likewise obtained with the ELM and ELM-SPA models in the 1000 nm-2500 nm band. In the 400 nm-1000 nm band, the overall recognition rate and the genuine-peel recognition rate calculated with the ELM-SPA model are higher than those of the ELM model, while the genuine-peel misidentification rates calculated with the two models are the same. In the 1000 nm-2500 nm band, the overall recognition rate, the genuine-peel recognition rate, and the genuine-peel misidentification rate calculated with the ELM-SPA and ELM models are the same, at 84%, 95%, and 5% respectively.
Table 5
(Table 5, giving the recognition accuracies of the discriminant models constructed by the extreme learning machine, appears only as an image in the original publication.)
It can thus be seen that, when the discriminant model constructed by the extreme learning machine and the discriminant model constructed by the partial least squares method are used to identify the modeling set samples, the overall recognition rate and the genuine-peel recognition rate calculated in the 1000 nm-2500 nm band are higher than those in the 400 nm-1000 nm band. Moreover, the discriminant model constructed by the extreme learning machine is more accurate than the discriminant model constructed by the partial least squares method. The partial least squares discriminant model constructed from the hyperspectral data corresponding to the characteristic wavelengths is less accurate than the one constructed from the hyperspectral data of the modeling set samples. In the 400 nm-1000 nm band, however, the extreme learning machine discriminant model constructed from the characteristic-wavelength data is more accurate than the one constructed from the modeling-set hyperspectral data, while in the 1000 nm-2500 nm band the two have the same accuracy.
In the method for identifying Huajuhong components provided by this embodiment, the hyperspectral data of the test set samples, together with the hyperspectral data of the modeling set samples or the hyperspectral data corresponding to the characteristic wavelengths, are used as input variables of the model constructed by the extreme learning machine to obtain the component identification results of the modeling set samples; the operating steps are simple, and the components of Huajuhong can be accurately identified.
INDUSTRIAL APPLICABILITY
In the method for identifying Huajuhong components provided by this embodiment, the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples, or the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths, are used as input variables of a discriminant analysis model to obtain the component identification results of the modeling set samples; the operating steps are simple, and the components of Huajuhong can be accurately identified.

Claims (11)

  1. A method for identifying the components of Huajuhong (Exocarpium Citri Grandis), comprising:
    scanning modeling set samples in at least one wavelength band and acquiring hyperspectral images of the modeling set samples;
    acquiring hyperspectral data of the modeling set samples from the hyperspectral images; and
    using the hyperspectral data of the modeling set samples as input variables of a discriminant analysis model to obtain component identification results of the modeling set samples.
  2. The method according to claim 1, further comprising, before scanning the samples in the sample set in the at least one wavelength band and acquiring the hyperspectral images of the modeling set samples:
    scanning test set samples in the at least one wavelength band and acquiring hyperspectral images of the test set samples;
    acquiring hyperspectral data of the test set samples from the hyperspectral images; and
    pre-storing the hyperspectral data of the test set samples and the class assignments of the test set samples in the discriminant analysis model.
  3. The method according to claim 1 or 2, further comprising, after acquiring the hyperspectral data of the modeling set samples:
    selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by a successive projections algorithm;
    wherein using the hyperspectral data of the modeling set samples as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples comprises:
    using the hyperspectral data corresponding to the characteristic wavelengths as input variables of the discriminant analysis model to obtain the component identification results of the modeling set samples.
  4. The method according to claim 3, wherein the discriminant analysis model comprises a discriminant model constructed by a partial least squares method or a discriminant model constructed by an extreme learning machine.
  5. The method according to claim 3, wherein the at least one wavelength band comprises a band of 400 nm to 1000 nm or a band of 1000 nm to 2500 nm.
  6. The method according to claim 1, wherein the samples in the sample set comprise genuine peel, counterfeit peel, genuine fruit, and counterfeit fruit of Huajuhong.
  7. The method according to claim 3, wherein the hyperspectral data comprise spatial position data, wavelength data, and spectral absorption values.
  8. The method according to claim 3, wherein processing the hyperspectral data of the samples in the sample set comprises:
    correcting the hyperspectral data of the samples in the sample set according to the following formula:
    Rref = (DNraw - DNdark) / (DNwhite - DNdark)
    where Rref is the corrected hyperspectral data, DNraw is the hyperspectral data of the samples in the sample set before correction, DNwhite is the white-reference (whiteboard) correction data, and DNdark is the dark-reference (blackboard) correction data; and
    denoising the corrected hyperspectral data with the moving-window least-squares polynomial smoothing SG algorithm.
  9. The method according to claim 3, wherein selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by the successive projections algorithm comprises:
    assigning class values to the samples in the modeling set; and
    using the hyperspectral data of the modeling set samples and the class assignments as input variables of the successive projections algorithm, and selecting characteristic wavelengths in the hyperspectral data of the modeling set samples.
  10. The method according to claim 3, further comprising: calculating a recognition accuracy based on the identification results.
  11. A method for identifying the components of Huajuhong, comprising:
    scanning the samples in a sample set in at least one wavelength band and acquiring hyperspectral images of the samples in the sample set;
    acquiring hyperspectral data of the samples in the sample set from the hyperspectral images;
    processing the hyperspectral data of the samples in the sample set, dividing the samples in the sample set into modeling set samples and test set samples, and obtaining the hyperspectral data of the modeling set samples and the hyperspectral data of the test set samples;
    selecting characteristic wavelengths in the hyperspectral data of the modeling set samples by a successive projections algorithm; and
    using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data of the modeling set samples as input variables of a discriminant analysis model, or using the hyperspectral data of the test set samples, the class assignments of the test set samples, and the hyperspectral data corresponding to the characteristic wavelengths as input variables of the discriminant analysis model, to obtain component identification results of the modeling set samples.
PCT/CN2017/086901 2016-09-23 2017-06-02 化橘红成分鉴定方法 WO2018054091A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610846460.5 2016-09-23
CN201610846460.5A CN106404689A (zh) 2016-09-23 2016-09-23 一种化橘红成分鉴定方法

Publications (1)

Publication Number Publication Date
WO2018054091A1 true WO2018054091A1 (zh) 2018-03-29

Family

ID=57997942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/086901 WO2018054091A1 (zh) 2016-09-23 2017-06-02 化橘红成分鉴定方法

Country Status (2)

Country Link
CN (1) CN106404689A (zh)
WO (1) WO2018054091A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109975217A (zh) * 2019-03-26 2019-07-05 贵阳学院 基于高光谱成像系统的李子可溶性固形物含量值检测方法
CN112129709A (zh) * 2020-09-16 2020-12-25 西北农林科技大学 一种苹果树冠层尺度氮含量诊断方法
CN113008815A (zh) * 2021-02-24 2021-06-22 浙江工业大学 一种基于高光谱图像信息无损检测酸枣仁中总黄酮的方法
CN113902673A (zh) * 2021-09-02 2022-01-07 北京市农林科学院信息技术研究中心 番茄灰霉病程度识别方法及装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106404689A (zh) * 2016-09-23 2017-02-15 广东食品药品职业学院 一种化橘红成分鉴定方法
CN110763698B (zh) * 2019-10-12 2022-01-14 仲恺农业工程学院 一种基于特征波长的高光谱柑橘叶片病害识别方法
CN111122579A (zh) * 2020-01-17 2020-05-08 中国农业科学院都市农业研究所 一种生菜叶片黄酮总量测定方法
CN113887543B (zh) * 2021-12-07 2022-03-18 深圳市海谱纳米光学科技有限公司 一种基于高光谱特征的箱包鉴伪方法与光谱采集装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104198398A (zh) * 2014-08-07 2014-12-10 浙江大学 一种基于高光谱成像鉴别阿胶的方法
US9230170B2 (en) * 2011-06-29 2016-01-05 Fujitsu Limited Plant species identification apparatus and method
CN105548037A (zh) * 2015-01-14 2016-05-04 青海春天药用资源科技利用有限公司 无损检测中药原药材的方法
CN106404689A (zh) * 2016-09-23 2017-02-15 广东食品药品职业学院 一种化橘红成分鉴定方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831516A (zh) * 2006-04-03 2006-09-13 浙江大学 用可见光和近红外光谱技术无损鉴别卷烟品种及真假的方法
CN1995987B (zh) * 2007-02-08 2010-05-12 江苏大学 基于高光谱图像技术的农畜产品无损检测方法
CN102621077B (zh) * 2012-03-30 2015-03-25 江南大学 基于高光谱反射图像采集系统的玉米种子纯度无损检测方法
CN103942749B (zh) * 2014-02-24 2017-01-04 西安电子科技大学 一种基于修正聚类假设和半监督极速学习机的高光谱地物分类方法
CN105527241A (zh) * 2015-01-14 2016-04-27 青海春天药用资源科技利用有限公司 无损检测冬虫夏草原草真伪的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9230170B2 (en) * 2011-06-29 2016-01-05 Fujitsu Limited Plant species identification apparatus and method
CN104198398A (zh) * 2014-08-07 2014-12-10 浙江大学 一种基于高光谱成像鉴别阿胶的方法
CN105548037A (zh) * 2015-01-14 2016-05-04 青海春天药用资源科技利用有限公司 无损检测中药原药材的方法
CN106404689A (zh) * 2016-09-23 2017-02-15 广东食品药品职业学院 一种化橘红成分鉴定方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, XUNLAN ET AL.: "Identification of Pummelo Cultivars based on Hyperspectral Imaging Technology", SPECTROSCOPY AND SPECTRAL ANALYSIS, vol. 35, no. 9, 30 September 2015 (2015-09-30), pages 2639 - 43, XP55603209 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109975217A (zh) * 2019-03-26 2019-07-05 贵阳学院 基于高光谱成像系统的李子可溶性固形物含量值检测方法
CN112129709A (zh) * 2020-09-16 2020-12-25 西北农林科技大学 一种苹果树冠层尺度氮含量诊断方法
CN113008815A (zh) * 2021-02-24 2021-06-22 浙江工业大学 一种基于高光谱图像信息无损检测酸枣仁中总黄酮的方法
CN113902673A (zh) * 2021-09-02 2022-01-07 北京市农林科学院信息技术研究中心 番茄灰霉病程度识别方法及装置

Also Published As

Publication number Publication date
CN106404689A (zh) 2017-02-15

Similar Documents

Publication Publication Date Title
WO2018054091A1 (zh) 化橘红成分鉴定方法
Nie et al. Classification of hybrid seeds using near-infrared hyperspectral imaging technology combined with deep learning
Lu et al. Detection of surface and subsurface defects of apples using structured-illumination reflectance imaging with machine learning algorithms
US7949181B2 (en) Segmentation of tissue images using color and texture
US20170091528A1 (en) Methods and Systems for Disease Classification
JP7187557B2 (ja) 医療画像学習装置、方法及びプログラム
Guzmán et al. Infrared machine vision system for the automatic detection of olive fruit quality
CN110163101B (zh) 中药材种子区别及等级快速判别方法
Cirillo et al. Tensor decomposition for colour image segmentation of burn wounds
CN108734205A (zh) 一种针对不同品种小麦种子的单粒定点识别技术
CN109253975A (zh) 基于msc-cfs-ica的苹果轻微损伤高光谱检测方法
CN116849612B (zh) 一种多光谱舌象图像采集分析系统
CN110108644A (zh) 一种基于深度级联森林和高光谱图像的玉米品种鉴别方法
Zhang et al. Computerized facial diagnosis using both color and texture features
CN115841594B (zh) 基于注意力机制的煤矸高光谱变图像域数据识别方法
Wang et al. Facial image medical analysis system using quantitative chromatic feature
Sánchez et al. Classification of cocoa beans based on their level of fermentation using spectral information
Xie et al. Modeling for mung bean variety classification using visible and near-infrared hyperspectral imaging
Das et al. Detection of diseases on visible part of plant—A review
Jiang et al. Detection and recognition of veterinary drug residues in beef using hyperspectral discrete wavelet transform and deep learning
US20230162354A1 (en) Artificial intelligence-based hyperspectrally resolved detection of anomalous cells
CN113450305B (zh) 医疗图像的处理方法、系统、设备及可读存储介质
CN116824171A (zh) 中医高光谱舌象图像波段的选择方法及相关装置
CN117132843A (zh) 野山参、林下山参、园参原位鉴别方法、系统及相关设备
Liu et al. Using hyperspectral imaging automatic classification of gastric cancer grading with a shallow residual network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17852165

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/08/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17852165

Country of ref document: EP

Kind code of ref document: A1