CN110197178A - A kind of rice type of TuPu method fusion depth network quickly identifies detection device and its detection method - Google Patents
- Publication number
- CN110197178A (application CN201910568003.8A)
- Authority
- CN
- China
- Prior art keywords
- rice
- fusion
- image
- holding table
- articles holding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
- G01N21/27—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection ; circuits for computing concentration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/68—Food, e.g. fruit or vegetables
Abstract
The present invention relates to a rapid rice-variety identification and detection device based on an image-spectrum feature fusion deep network, and to its detection method. The detection device comprises a detection unit and a computing unit connected to the detection unit; an image-spectrum processing and analysis module is installed on the computing unit. The detection unit includes a spectral imager composed of an imaging spectrograph, a camera, and a lens. A sample stage is placed directly below the lens of the spectral imager; a blackboard is arranged on the stage, samples are laid out on the stage in a matrix, and a light source is placed between the stage and the spectral imager. Using the device and method of the present invention, rapid identification of premium rice varieties by an image-spectrum feature fusion deep network based on hyperspectral imaging is accomplished, achieving accurate identification of premium rice varieties.
Description
Technical field
The present invention relates to the field of agricultural product quality detection, and in particular to a rapid rice-variety identification and detection device based on an image-spectrum feature fusion deep network, and to its detection method.
Background art
Food is the first necessity of the people, and rice is an essential staple on the Chinese dining table. Differences in region, soil and environment produce characteristic rice of varying quality; rice of different quality differs in nutritional value, and prices vary widely. As living standards improve, consumers have begun to pay attention to rice quality when purchasing rice. At present, cases of passing off inferior rice as premium rice keep emerging; these practices infringe consumers' rights and also affect the export trade of China's high-quality rice. Because differences in appearance are small, rice of different quality is difficult to distinguish directly.
Summary of the invention
The purpose of the present invention is to provide a rapid rice-variety identification and detection device of an image-spectrum feature fusion deep network, and its detection method. Using the device and method of the present invention, rapid identification of premium rice varieties by an image-spectrum feature fusion deep network based on hyperspectral imaging is accomplished, achieving accurate identification of premium rice varieties.

To achieve the above object, the technical solution adopted by the present invention is as follows: a rapid rice-variety identification and detection device of an image-spectrum feature fusion deep network, comprising a detection unit and a computing unit connected to the detection unit, an image-spectrum processing and analysis module being installed on the computing unit.

The detection unit includes a spectral imager composed of an imaging spectrograph, a camera and a lens. A sample stage is placed directly below the lens of the spectral imager; a blackboard is arranged on the sample stage; samples are arranged on the sample stage in a matrix; and a light source is placed between the sample stage and the spectral imager.

Further, support frames are symmetrically arranged on both sides between the sample stage and the spectral imager, and the light source is arranged on the support frames.

Further, the lens of the spectral imager directly faces the middle of the sample stage, and the light sources are symmetrically arranged on both sides above the sample stage.
A rapid rice-variety identification and detection method of an image-spectrum feature fusion deep network, comprising the following steps:

(1) Image and spectrum acquisition: calibrate with the blackboard, set the sample quantity, arrange the rice in an m×n array, and acquire a hyperspectral image of the rice. Extract reflectance from the image, then extract the region of interest by binarization.
(2) Image-spectrum feature extraction:
(2-1) Spectral features: obtain spectral data from the region of interest extracted in step (1). Because the spectral information contains unwanted components such as noise, it must be preprocessed to remove them and to improve model stability and accuracy. The present invention preprocesses the averaged spectra of the raw data with multiplicative scatter correction (MSC), minimizing noise from various electronic sources and from variations in sample conditions. To eliminate irrelevant variables and improve model performance, characteristic wavelengths are extracted from the full-band raw data; the present invention uses the successive projections algorithm (SPA), a classical forward-selection method for characteristic wavelengths that removes most of the redundant information in the spectra while reducing collinearity.
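The SPA selection step can be sketched as follows. This is an illustrative numpy implementation of the generic successive-projections scheme (greedily pick the band whose column is most orthogonal to those already chosen); the toy spectra, the starting band, and the number of bands to keep are assumptions, not the patent's data or code.

```python
import numpy as np

def spa_select(X, n_select, start_col=0):
    """Successive Projections Algorithm: greedily select spectral bands
    whose columns are maximally orthogonal to the bands already chosen,
    which reduces collinearity among the selected wavelengths."""
    X = np.asarray(X, dtype=float)
    selected = [start_col]
    P = X.copy()
    for _ in range(n_select - 1):
        x_sel = P[:, selected[-1]].reshape(-1, 1)
        # project every column onto the orthogonal complement of x_sel
        denom = float(x_sel.T @ x_sel)
        P = P - x_sel @ (x_sel.T @ P) / denom
        P[:, selected] = 0.0          # never re-pick a chosen band
        norms = np.linalg.norm(P, axis=0)
        selected.append(int(np.argmax(norms)))
    return sorted(selected)

# toy spectra: 20 samples x 50 bands
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
bands = spa_select(X, n_select=5)
```

In practice `X` would hold the MSC-corrected mean spectra of the rice samples, one column per wavelength.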
(2-2) Morphological features: from the binary image at the optimal characteristic wavelength chosen in step (2-1), extract ten morphological feature parameters: area, major-axis length, minor-axis length, aspect ratio, length, width, minimum bounding-box area, compactness, perimeter, and eccentricity.
(2-3) Texture features: extract texture features from the grayscale images at the characteristic wavelengths chosen in step (2-1), using three methods: 1. the six main feature values of histogram statistics; 2. the eleven main feature values of gray-level run-length matrix statistics; 3. the four main feature values of gray-level difference statistics. All parameter values produced by the three methods are converted to the same order of magnitude with a normalization algorithm, to eliminate scale differences between the feature parameters.
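The patent does not name the exact normalization algorithm; per-column min-max scaling is one common choice and can be sketched as:

```python
import numpy as np

def minmax_normalize(F):
    """Scale each feature column to [0, 1] so that spectral, morphological
    and texture parameters end up on the same order of magnitude."""
    F = np.asarray(F, dtype=float)
    lo, hi = F.min(axis=0), F.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (F - lo) / span

# two features on very different scales
F = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 500.0]])
Fn = minmax_normalize(F)
```

After scaling, every column lies in [0, 1], so no single feature dominates the fused feature vector.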
(2-4) Fusion: combine the spectral, morphological and texture features into a fused feature set, yielding combined spectral, morphological and texture data.
(2-5) Variety identification: build a classification model from the fused features and an SAE-FNN deep neural network to identify the variety. As an unsupervised feature-extraction method, the stacked autoencoder (SAE) minimizes the error between its input and its output, improving feature-extraction efficiency and classification accuracy by removing redundant information. With the fused feature variables as both input and reconstruction target, the SAE yields deep feature variables, which are fed into a fully connected feedforward neural network (FNN) to identify the rice variety.
Technical effect of the invention: the invention proposes a method for identifying premium rice varieties. It uses hyperspectral imaging to extract the image and spectral features of rice and fuses them, combining them with a deep network to achieve rapid, intelligent detection of rice variety.
Brief description of the drawings
Fig. 1 is a schematic diagram of the rice image-and-spectrum measuring device, i.e., the rapid rice-variety identification and detection device of the image-spectrum feature fusion deep network of the present invention;
Fig. 2 is the structure diagram of the SAE-FNN deep network of the invention;

Fig. 3 is the flow chart of the rapid premium-rice-variety identification method of the image-spectrum feature fusion deep network of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to examples.
Fig. 1 is a schematic diagram of the detection device for the rapid premium-rice-variety identification method of the image-spectrum feature fusion deep network of the invention. Reference 01 is the spectral imager, composed of an imaging spectrograph, a camera and a lens, for acquiring hyperspectral images; 02 is the light source; 03 is the sample, arranged on the sample stage in a matrix; 04 is the blackboard, used for calibration; 05 is the computing unit, specifically a computer loaded with the image-spectrum processing and analysis module, for saving and processing hyperspectral images; the module includes the image-and-spectrum acquisition and image-spectrum feature extraction programs.
Specifically, the present invention provides a rapid premium-rice-variety identification method of an image-spectrum feature fusion deep network, comprising the following steps:

(1) Image and spectrum acquisition: calibrate with blackboard 04. For sample 03, ten premium rice varieties are chosen; 432 grains are chosen per variety, nine hyperspectral images are taken per variety, and each image holds rice arranged in a 6×8 array. Acquire the hyperspectral images of the rice, extract reflectance from the images, then extract the region of interest by binarization.
Here, acquiring sample 03 and calibrating with blackboard 04 are two processes; the blackboard calibration is performed while collecting sample 03, using black-and-white board calibration or, preferably, gray-and-black board calibration.

The reflectance is corrected against the whiteboard and blackboard references using the standard formula:

R = (I0 − B) / (W − B)

where I0 is the raw sample reflectance, B is the blackboard reflectance, and W is the whiteboard reflectance.
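Applied per pixel and per band, the black/white correction above can be sketched as follows (the array shapes and the epsilon guard against a zero denominator are assumptions):

```python
import numpy as np

def calibrate_reflectance(raw, black, white, eps=1e-9):
    """Standard black/white reference correction for hyperspectral frames:
    R = (raw - black) / (white - black)."""
    raw, black, white = (np.asarray(a, dtype=float) for a in (raw, black, white))
    return (raw - black) / np.maximum(white - black, eps)

# one pixel, two bands
raw   = np.array([[0.55, 0.30]])
black = np.array([[0.05, 0.10]])   # blackboard (dark) reference
white = np.array([[1.05, 0.90]])   # whiteboard reference
R = calibrate_reflectance(raw, black, white)
```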
(2) Image-spectrum feature extraction:

(2-1) Spectral features: obtain spectral data from the region of interest extracted in step (1). Because the spectral information contains unwanted components such as noise, it must be preprocessed to remove them and to improve model stability and accuracy. The present invention preprocesses the averaged spectra of the raw data with multiplicative scatter correction (MSC), minimizing noise from various electronic sources and from variations in sample conditions. To eliminate irrelevant variables and improve model performance, characteristic wavelengths are extracted from the full-band raw data; the present invention uses the successive projections algorithm (SPA) to extract five characteristic wavelengths. SPA is a classical forward-selection method for characteristic wavelengths that removes most of the redundant information in the spectra while reducing collinearity.
(2-2) Morphological features: take the binary image at the optimal characteristic wavelength among the five chosen in step (2-1); remove noise from the binary image by erosion and dilation; label the connected region of each rice grain in the segmented binary image; then use an image-processing function (the regionprops function) to extract ten morphological feature parameters: area, major-axis length, minor-axis length, aspect ratio, length, width, minimum bounding-box area, compactness, perimeter and eccentricity.
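A few of the listed shape parameters can be computed directly from a grain's binary mask. The numpy-only sketch below stands in for MATLAB's regionprops and covers only area, length, width, aspect ratio and a bounding-box compactness; the remaining parameters (perimeter, eccentricity, axis lengths) follow the same pattern from the labeled region's pixel coordinates.

```python
import numpy as np

def grain_shape_features(mask):
    """Compute basic shape parameters from one grain's binary mask."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    length = int(ys.max() - ys.min() + 1)   # bounding-box height
    width = int(xs.max() - xs.min() + 1)    # bounding-box width
    aspect = max(length, width) / min(length, width)
    extent = area / (length * width)        # compactness w.r.t. bounding box
    return {"area": area, "length": length, "width": width,
            "aspect_ratio": aspect, "extent": extent}

mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 3:6] = True                       # a 6x3 rectangular "grain"
feats = grain_shape_features(mask)
```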
(2-3) Texture features: extract texture features from the grayscale images at the five characteristic wavelengths chosen in step (2-1). Three methods are used here:
1. The six main feature values of histogram statistics: (1) mean; (2) standard deviation; (3) smoothness; (4) third-order moment; (5) uniformity; (6) entropy.
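The six histogram statistics can be written out in the standard first-order formulation (assumed here, since the patent's formulas are not reproduced in this record):

```python
import numpy as np

def histogram_texture(gray, levels=256):
    """First-order histogram statistics: mean, standard deviation,
    smoothness, third-order moment, uniformity, entropy."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    z = np.arange(levels, dtype=float)
    mean = float((z * p).sum())
    var = float(((z - mean) ** 2 * p).sum())
    std = float(np.sqrt(var))
    var_n = var / (levels - 1) ** 2                 # normalized variance
    smoothness = 1.0 - 1.0 / (1.0 + var_n)
    third_moment = float((((z - mean) / (levels - 1)) ** 3 * p).sum())
    uniformity = float((p ** 2).sum())
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return mean, std, smoothness, third_moment, uniformity, entropy

# toy image: half the pixels at 0, half at 255
gray = np.array([[0, 0, 255, 255],
                 [0, 0, 255, 255]], dtype=np.uint8)
stats = histogram_texture(gray)
```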
2. The eleven main feature values of gray-level run-length matrix statistics: (1) long-run emphasis; (2) short-run emphasis; (3) run-length non-uniformity; (4) run percentage; (5) gray-level non-uniformity; (6) high-gray-level run emphasis; (7) low-gray-level run emphasis; (8) long-run high-gray-level emphasis; (9) long-run low-gray-level emphasis; (10) short-run high-gray-level emphasis; (11) short-run low-gray-level emphasis.
3. The four main feature values of gray-level difference statistics: (1) contrast; (2) angular second moment; (3) entropy; (4) mean.
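The four gray-level difference statistics can be sketched from the histogram of absolute differences at a fixed displacement; the displacement (one pixel to the right) and the gray-level range are assumptions of this illustration:

```python
import numpy as np

def gray_level_difference_stats(gray, dx=1, dy=0, levels=256):
    """Contrast, angular second moment, entropy and mean of the
    absolute-difference histogram at displacement (dx, dy)."""
    g = np.asarray(gray, dtype=int)
    a = g[:g.shape[0] - dy, :g.shape[1] - dx]
    b = g[dy:, dx:]
    diff = np.abs(a - b)
    hist = np.bincount(diff.ravel(), minlength=levels)
    p = hist / hist.sum()
    k = np.arange(levels)
    contrast = float((k ** 2 * p).sum())
    asm = float((p ** 2).sum())
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    mean = float((k * p).sum())
    return contrast, asm, entropy, mean

# alternating columns 0, 4, 0, 4, ... -> every horizontal difference is 4
gray = np.tile([0, 4], (4, 4))
stats = gray_level_difference_stats(gray)
```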
All parameter values produced by the three methods are converted to the same order of magnitude with a normalization algorithm, to eliminate scale differences between the feature parameters.
(2-4) Fusion: combine the spectral, morphological and texture features into a fused feature set, yielding combined spectral, morphological and texture data.
(2-5) Variety identification: the present invention builds a classification model from the fused features and an SAE-FNN deep neural network to identify the variety. As an unsupervised feature-extraction method, the stacked autoencoder (SAE) minimizes the error between its input and its output, improving feature-extraction efficiency and classification accuracy by removing redundant information. Taking the fused feature variables as both input and reconstruction target, the SAE yields deep feature variables, which are fed into a fully connected feedforward neural network (FNN, structure shown in Fig. 2) for intelligent identification of the rice variety.

Specifically, the fused feature variables from step (2-4) are input to the SAE to obtain deep feature variables, which are then input to the FNN; together they constitute the SAE-FNN deep network that realizes intelligent identification of the rice variety.
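The SAE-FNN pipeline can be sketched end to end on toy data. The numpy example below trains one autoencoder layer to produce deep features and a small softmax classifier on top of them; it only illustrates the two-stage scheme (unsupervised encoding, then supervised classification) and is not the patent's trained network. The data, layer sizes, learning rates and iteration counts are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy fused-feature matrix: 60 samples x 20 features, 3 "rice varieties"
X = rng.normal(size=(60, 20))
y = np.repeat(np.arange(3), 20)
X[y == 1] += 2.0
X[y == 2] -= 2.0

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# --- stage 1: single-layer autoencoder (the "SAE" step) ---------------
d_in, d_hid = X.shape[1], 8
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))   # encoder weights
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))   # decoder weights
lr = 0.05
for _ in range(500):
    H = sigmoid(X @ W1)             # deep-feature variables
    err = H @ W2 - X                # reconstruction error to minimize
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * H * (1 - H)
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

H = sigmoid(X @ W1)                 # encoded (deep) features

# --- stage 2: softmax classifier on the deep features (the "FNN") -----
Wc = np.zeros((d_hid, 3))
for _ in range(800):
    logits = H @ Wc
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    P[np.arange(len(y)), y] -= 1.0  # softmax cross-entropy gradient
    Wc -= 0.1 * (H.T @ P / len(y))

pred = (H @ Wc).argmax(axis=1)
accuracy = float((pred == y).mean())
```

In the patent's setting, `X` would be the normalized fused spectral, morphological and texture features, and the FNN would have one or more hidden layers rather than a bare softmax.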
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the invention may be embodied in other specific forms without departing from its spirit or essential attributes.
Claims (5)
1. A rapid rice-variety identification and detection device of an image-spectrum feature fusion deep network, characterized by: comprising a detection unit and a computing unit (05) connected to the detection unit, an image-spectrum processing and analysis module being installed on the computing unit (05); the detection unit comprises a spectral imager (01) composed of an imaging spectrograph, a camera and a lens; a sample stage is placed directly below the lens of the spectral imager (01); a blackboard (04) is arranged on the sample stage; samples (03) are arranged on the sample stage in a matrix; and a light source (02) is arranged between the sample stage and the spectral imager (01).
2. The rapid rice-variety identification and detection device of an image-spectrum feature fusion deep network according to claim 1, characterized in that support frames are symmetrically arranged on both sides between the sample stage and the spectral imager (01), and the light source (02) is arranged on the support frames.
3. The rapid rice-variety identification and detection device of an image-spectrum feature fusion deep network according to claim 2, characterized in that the lens of the spectral imager (01) directly faces the middle of the sample stage, and the light sources (02) are symmetrically arranged on both sides above the sample stage.
4. A rapid rice-variety identification and detection method of an image-spectrum feature fusion deep network, comprising the following steps:

(1) image and spectrum acquisition: calibrate with the blackboard, set the sample quantity, arrange the rice in an m×n array, acquire a hyperspectral image of the rice, extract reflectance from the image, then extract the region of interest by binarization;

(2) image-spectrum feature extraction:

(2-1) spectral features: obtain spectral data from the region of interest extracted in step (1); preprocess the averaged spectra of the raw data with multiplicative scatter correction (MSC) to minimize noise from various electronic sources and from variations in sample conditions; extract characteristic wavelengths with the successive projections algorithm (SPA);

(2-2) morphological features: from the binary image at the optimal characteristic wavelength chosen in step (2-1), extract ten morphological feature parameters: area, major-axis length, minor-axis length, aspect ratio, length, width, minimum bounding-box area, compactness, perimeter, eccentricity;

(2-3) texture features: extract texture features from the grayscale images at the characteristic wavelengths chosen in step (2-1);

(2-4) fusion: combine the spectral, morphological and texture features into a fused feature set, yielding combined spectral, morphological and texture data;

(2-5) variety identification: build a classification model from the fused features and an SAE-FNN deep neural network to identify the variety; with the fused feature variables as both input and reconstruction target, the SAE yields deep feature variables, which are input to a fully connected feedforward neural network (FNN) to identify the rice variety.
5. The rapid rice-variety identification and detection method of an image-spectrum feature fusion deep network according to claim 4, specifically comprising the following steps:

(1) image and spectrum acquisition: calibrate with the blackboard; choose ten premium rice varieties as samples, 432 grains per variety; take nine hyperspectral images per variety, each image holding rice arranged in a 6×8 array; acquire the hyperspectral images of the rice, extract reflectance from the images, then extract the region of interest by binarization;

(2) image-spectrum feature extraction:

(2-1) spectral features: obtain spectral data from the region of interest extracted in step (1); preprocess the averaged spectra of the raw data with multiplicative scatter correction (MSC) to minimize noise from various electronic sources and from variations in sample conditions; extract five characteristic wavelengths with the successive projections algorithm (SPA), a classical forward-selection method for characteristic wavelengths that removes most of the redundant information in the spectra while reducing collinearity;

(2-2) morphological features: take the binary image at the optimal characteristic wavelength among the five chosen in step (2-1); remove noise from the binary image by erosion and dilation; label the connected region of each rice grain in the segmented binary image; then use an image-processing function (the regionprops function) to extract ten morphological feature parameters: area, major-axis length, minor-axis length, aspect ratio, length, width, minimum bounding-box area, compactness, perimeter and eccentricity;

(2-3) texture features: extract texture features from the grayscale images at the five characteristic wavelengths chosen in step (2-1), using three methods: 1. the six main feature values of histogram statistics; 2. the eleven main feature values of gray-level run-length matrix statistics; 3. the four main feature values of gray-level difference statistics; specifically:
1. The six main feature values of histogram statistics: (1) mean; (2) standard deviation; (3) smoothness; (4) third-order moment; (5) uniformity; (6) entropy.
2. The eleven main feature values of gray-level run-length matrix statistics: (1) long-run emphasis; (2) short-run emphasis; (3) run-length non-uniformity; (4) run percentage; (5) gray-level non-uniformity; (6) high-gray-level run emphasis; (7) low-gray-level run emphasis; (8) long-run high-gray-level emphasis; (9) long-run low-gray-level emphasis; (10) short-run high-gray-level emphasis; (11) short-run low-gray-level emphasis.
3. The four main feature values of gray-level difference statistics: (1) contrast; (2) angular second moment; (3) entropy; (4) mean.
All parameter values produced by the above three methods are converted to the same order of magnitude with a normalization algorithm;

(2-4) fusion: combine the spectral, morphological and texture features into a fused feature set, yielding combined spectral, morphological and texture data;

(2-5) variety identification: build a classification model from the fused features and an SAE-FNN deep neural network to identify the variety; with the fused feature variables as both input and reconstruction target, the SAE yields deep feature variables, which are input to a fully connected feedforward neural network (FNN) for intelligent identification of the rice variety.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910568003.8A CN110197178A (en) | 2019-06-27 | 2019-06-27 | A kind of rice type of TuPu method fusion depth network quickly identifies detection device and its detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110197178A true CN110197178A (en) | 2019-09-03 |
Family
ID=67755355
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910568003.8A Pending CN110197178A (en) | 2019-06-27 | 2019-06-27 | A kind of rice type of TuPu method fusion depth network quickly identifies detection device and its detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110197178A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110929944A (en) * | 2019-11-28 | 2020-03-27 | 安徽大学 | Wheat scab disease severity prediction method based on hyperspectral image and spectral feature fusion technology |
CN114219956A (en) * | 2021-10-08 | 2022-03-22 | 东北林业大学 | Database model construction method and device for polished rice seed detection and polished rice seed detection method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090316159A1 (en) * | 2008-06-20 | 2009-12-24 | Com Dev International Ltd. | Slab waveguide spatial heterodyne spectrometer assembly |
CN104215584A (en) * | 2014-08-29 | 2014-12-17 | 华南理工大学 | Hyper-spectral image technology-based detection method for distinguishing rice growing areas |
- 2019-06-27: application CN201910568003.8A filed; published as CN110197178A (status: pending)
Non-Patent Citations (3)

Title |
---|
许思 et al., "Non-destructive grading detection of rice seed vigor based on hyperspectral imaging" * |
邓小琴 et al., "Hyperspectral-image identification of single rice seed varieties fusing spectral, texture and morphological features" * |
陈宇 et al., "Research on a multi-missile classification algorithm based on sparse autoencoder visual feature fusion" * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190903 |