CN112444493A - Optical detection system and device based on artificial intelligence - Google Patents
- Publication number
- CN112444493A (application CN202011088696.XA)
- Authority
- CN
- China
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
- G01N21/31—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
- G01N21/35—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
- G01N21/3581—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using far infrared light; using Terahertz radiation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention provides an optical detection system and device based on artificial intelligence. The system comprises: an optical detection device for capturing a spectral image of a substance; a terminal control device, communicatively connected with the optical detection device, for controlling the working mode of the optical detection device and acquiring the spectral image it captures; and an artificial intelligence cloud processing platform, communicatively connected with the terminal control device, for receiving the spectral image forwarded by the terminal control device, processing it with a preset model, generating a detection report, and sending the report back to the terminal control device. Built on spectral analysis and combined with artificial intelligence, big data, cloud computing, and Internet of Things technologies, the system offers users a fast, convenient, easy-to-use intelligent detection product and a comprehensive solution that simplifies the substance detection process, reduces cost, and enables rapid on-site detection.
Description
Technical Field
The invention relates to the technical field of detection, in particular to an optical detection system and device based on artificial intelligence.
Background
A spectrum is the pattern formed when light is separated into its monochromatic components by a dispersive system and arranged by wavelength. The spectrum of an article encodes information about its material composition and content, and spectral analysis identifies substances and determines their chemical composition and relative content from their spectra. However, existing spectral analysis products have limited functionality, rely on large instruments, and are poorly suited to on-site detection.
Disclosure of Invention
One purpose of the invention is to provide an optical detection system based on artificial intelligence which, built on spectral analysis and combined with artificial intelligence, big data, cloud computing, and Internet of Things technologies, offers users a fast, convenient, easy-to-use intelligent detection product and a comprehensive solution that simplifies the substance detection process, reduces cost, and enables rapid on-site detection.
An embodiment of the invention provides an optical detection system based on artificial intelligence, comprising:
an optical detection device for capturing a spectral image of a substance;
a terminal control device, communicatively connected with the optical detection device, for controlling the working mode of the optical detection device and acquiring the spectral image captured by it;
an artificial intelligence cloud processing platform, communicatively connected with the terminal control device, for acquiring the spectral image transmitted by the terminal control device, processing it with a preset model, generating a detection report, and sending the report to the terminal control device; the terminal control device displays the detection report.
Preferably, the optical detection device comprises: a hyperspectral camera and/or a terahertz spectrometer.
Preferably, the terminal control device includes: one or more of a mobile phone, a tablet and a computer.
Preferably, the preset model includes: one or more of a three-dimensional convolution model, a two-branch convolution model and a small sample convolution model.
Preferably, the artificial intelligence cloud processing platform performs the following operations:
before processing a substance spectral image with a preset model, receiving model calling information sent by the terminal control device, and acquiring the preset model from a preset model call library based on the model calling information;
wherein the model call library comprises: permitted calling vectors, model numbers in one-to-one correspondence with the permitted calling vectors, and preset models in one-to-one correspondence with the model numbers; the permitted calling vector is as follows:
Xi = (xi1, xi2, …, xim);
where Xi is the permitted calling vector corresponding to the i-th model number, and xim is the value of the m-th calling parameter in the i-th permitted calling vector;
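A minimal sketch of the model call library described above, with each model number mapped to a permitted calling vector and a preset model. The model numbers, parameter values, and dictionary layout are illustrative assumptions, not specified by the patent:

```python
# Hypothetical model call library: model number -> (permitted calling vector,
# preset model). Each permitted calling vector Xi holds one value per calling
# parameter (voltage, current, wavelength, intensity, focal length, depth of
# field, per the patent's parameter list). All concrete values are invented.
MODEL_CALL_LIBRARY = {
    "M001": {
        "permit_vector": (5.0, 0.2, 850.0, 1200.0, 35.0, 4.0),
        "model": "three-dimensional convolution model",
    },
    "M002": {
        "permit_vector": (5.0, 0.2, 1550.0, 900.0, 50.0, 2.5),
        "model": "two-branch convolution model",
    },
}

def lookup_model(number: str) -> str:
    """Return the preset model in one-to-one correspondence with a model number."""
    return MODEL_CALL_LIBRARY[number]["model"]
```

This keeps the one-to-one correspondences (number to vector, number to model) explicit in a single table.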
obtaining a preset model from a preset model calling library based on model calling information, comprising:
analyzing the model calling information based on a preset analysis template to obtain a model calling vector, as follows:
Y = (y1, y2, …, ym);
where Y is the model calling vector and ym is the value of the m-th calling parameter in the model calling vector; during analysis, when the value of a calling parameter required by the analysis template cannot be parsed from the model calling information, a preset filling value is used for that calling parameter;
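The template-driven parsing step above can be sketched as follows. The template here is assumed to be an ordered list of parameter names and the filling value a constant; both are illustrative, since the patent does not fix either. The count of filled slots is returned because the fallback branch later depends on how many values had to be filled:

```python
# Assumed analysis template: ordered calling-parameter names (per the patent's
# parameter list). FILL_VALUE is the preset filling value (illustrative).
TEMPLATE = ["voltage", "current", "wavelength", "intensity",
            "focal_length", "depth_of_field"]
FILL_VALUE = 0.0

def parse_call_info(info: dict):
    """Build the model calling vector Y; fill missing parameters and
    report how many slots were filled."""
    vector, filled = [], 0
    for name in TEMPLATE:
        if name in info:
            vector.append(float(info[name]))
        else:
            vector.append(FILL_VALUE)   # preset filling value substituted
            filled += 1
    return vector, filled
```

For example, calling information carrying only a voltage and a wavelength yields a six-slot vector with four filled values.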
calculating a first similarity Sim(Y, Xp) between the model calling vector and each permitted calling vector;
where Sim(Y, Xp) denotes the first similarity between the model calling vector Y and the p-th permitted calling vector; yq denotes the value of the q-th calling parameter in the model calling vector; and xpq denotes the value of the q-th calling parameter of the p-th permitted calling vector;
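The patent's similarity formula is rendered as an image in the source and is not reproduced here; one form consistent with the symbol definitions above (yq, xpq) is cosine similarity, used below purely as an assumed stand-in:

```python
import math

def first_similarity(y, x_p):
    """Sim(Y, X_p): ASSUMED cosine similarity between the model calling
    vector Y and the p-th permitted calling vector X_p. The patent's exact
    formula is not available in this text; cosine similarity is a stand-in
    consistent with the variables y_q and x_pq defined above."""
    num = sum(yq * xpq for yq, xpq in zip(y, x_p))
    den = (math.sqrt(sum(yq * yq for yq in y))
           * math.sqrt(sum(xpq * xpq for xpq in x_p)))
    return num / den if den else 0.0
```

Identical vectors then score 1.0 and orthogonal vectors 0.0, which matches the thresholded use of the similarity in the branches that follow.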
when the maximum of all first similarities is greater than or equal to a preset first threshold and less than a second preset threshold, the model number corresponding to the permitted calling vector with the maximum first similarity is obtained, and the preset model in one-to-one correspondence with that model number is called. When the maximum of all first similarities is greater than or equal to the second preset threshold, the model numbers corresponding to every permitted calling vector whose first similarity exceeds the second preset threshold are extracted; the preset model description information corresponding to those model numbers is obtained; the description information and model numbers are made into a to-be-selected list, which is sent to the terminal control device; the selection operation of the terminal control device on the list is received and analyzed to obtain the model number(s) selected by the user; and the corresponding preset models are called. The selection operation supports multiple selection; the first threshold is smaller than the second preset threshold;
when the maximum of all first similarities is smaller than the preset first threshold, and/or the number of calling-parameter values filled in during analysis exceeds a preset number, a historical calling record of the terminal control device is acquired, a temporary call library is established from the historical record, and a second similarity between the model calling vector and each permitted calling vector in the temporary call library is calculated;
where Sim(Y, Lj) denotes the second similarity between the model calling vector Y and the j-th permitted calling vector in the temporary call library, and xjq denotes the value of the q-th calling parameter of the j-th permitted calling vector in the temporary call library;
calling a model corresponding to the maximum value of the second similarity;
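The three-way dispatch described above can be sketched as a small function. Threshold values and model numbers are illustrative placeholders; the history fallback is represented only by a sentinel, since building the temporary call library depends on device records not shown here:

```python
# Assumed thresholds: T1 is the preset first threshold, T2 the second preset
# threshold, with T1 < T2 as the patent requires. Values are illustrative.
T1, T2 = 0.6, 0.9

def dispatch(sims):
    """sims: {model_number: first_similarity}. Returns the branch taken and
    the model number(s) involved, mirroring the three cases above."""
    best = max(sims.values())
    if best < T1:
        # below T1: fall back to the historical-record temporary call library
        return ("fallback_to_history", [])
    if best < T2:
        # in [T1, T2): call the single model with the maximum similarity
        return ("call", [max(sims, key=sims.get)])
    # at or above T2: offer every model above T2 as a to-be-selected list
    return ("select_list", sorted(n for n, s in sims.items() if s >= T2))
```

A selection list is produced only when several permitted vectors match strongly, which is when user disambiguation is worth a round trip.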
the calling parameters include: the voltage and current of the optical detection device, the spectral wavelength of the captured substance spectral image, the light intensity, the lens focal length, and the lens depth of field.
Preferably, the optical detection system based on artificial intelligence further comprises a model calling two-dimensional code, which encodes model calling information and/or setting information for the optical detection device;
when a key of the optical detection device is held down for a preset duration, the device enters a model calling and setting mode, in which it photographs the model calling two-dimensional code;
the terminal control device acquires the model calling two-dimensional code through the optical detection device and extracts the model calling information and/or setting information from it;
the terminal control device sets the shooting parameters of the optical detection device based on the setting information;
the terminal control device sends the model calling information to the artificial intelligence cloud processing platform, which calls the model based on it; the model calling information includes the model number of the model.
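The patent does not fix an encoding for the model calling two-dimensional code; a JSON payload is assumed below purely for illustration of the terminal's decoding step (the field names `model_number` and `settings` are invented):

```python
import json

def decode_qr_payload(payload: str):
    """Split an assumed JSON QR payload into the model number (forwarded to
    the cloud platform) and device settings (applied to the optical
    detection device's shooting parameters)."""
    data = json.loads(payload)
    model_number = data.get("model_number")
    settings = data.get("settings", {})
    return model_number, settings
```

With such a payload, one scan both configures the device and selects the cloud-side model, which is the convenience the QR-code mode is aiming at.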
Preferably, the artificial intelligence cloud processing platform further performs the following operations:
receiving model calling information sent by the terminal control device, and, when the model calling information is identical to that of the previous N calls, sending the model corresponding to the model calling information to the terminal control device; the terminal control device saves the model locally.
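The caching rule above can be sketched with a bounded history: once the same calling information has arrived N times in a row, the platform pushes the model itself to the terminal so later detections avoid a cloud round trip. N and the class shape are illustrative:

```python
from collections import deque

N = 3  # illustrative; the patent leaves N unspecified

class CallHistory:
    """Track the last N model calls and flag when the model should be
    pushed to the terminal control device for local storage."""
    def __init__(self):
        self.recent = deque(maxlen=N)

    def should_push_model(self, call_info) -> bool:
        # push only when the previous N calls all match the incoming one
        hit = len(self.recent) == N and all(c == call_info for c in self.recent)
        self.recent.append(call_info)
        return hit
```

Under this sketch, the fourth identical call is the first one that triggers a push.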
The invention also provides an optical detection device based on artificial intelligence, comprising:
a housing;
a shooting window arranged at one end of the housing;
a display screen arranged at the other end of the housing;
a key arranged on one side of the housing, with grip grooves fitted to four fingers of a human hand arranged on the side of the housing facing away from the key;
a shooting module arranged in the housing, for capturing a spectral image of a substance;
a controller arranged in the housing and electrically connected with the shooting module, the display screen and the key, respectively;
and a wireless communication module electrically connected with the controller, for communicative connection with the terminal control device.
Preferably, the photographing module includes:
the first lens assembly is sleeved with a first gear at the periphery of the first lens assembly;
the periphery of the second lens assembly is sleeved with a second gear;
the first gear and the second gear are both arranged in the inner gear ring, the first gear is meshed with the second gear, and the second gear is meshed with the inner teeth of the inner gear ring;
one end of a stator of the rotating shaft is fixedly connected with the shell, and the other end of the stator of the rotating shaft is fixedly connected with the first lens component;
a plurality of first connecting rods, each arranged perpendicular to the central axis of the rotating shaft and fixedly connected with the rotor of the rotating shaft;
a plurality of second connecting rods, each arranged perpendicular to the first connecting rods and parallel to the central axis of the rotating shaft; the second connecting rods, the first connecting rods and the second lens assemblies are in one-to-one correspondence; one end of each second connecting rod is rotatably connected with the end of the corresponding first connecting rod away from the rotating shaft, and the other end is fixedly connected with the middle of a U-shaped fixing piece; the two ends of the U-shaped fixing piece are fixedly connected with one side of the second gear.
Preferably, the second lens assembly includes:
a body provided with a plurality of grooves;
the annular body is sleeved on the outer periphery of the body, and two first rotating bodies are symmetrically arranged between the outer periphery of the body and the inner periphery of the annular body; the rotating end of the first rotating body is fixedly connected with the body; the fixed end of the first rotating body is fixedly connected with the annular body;
the two second rotating bodies are symmetrically arranged and arranged between the annular body and the second gear; the fixed end of the second rotating body is fixedly connected with the periphery of the annular body; the rotating end of the second rotating body is fixedly connected with the inner periphery of the second gear;
the central axes of the two first rotating bodies are perpendicular to the central axes of the two second rotating bodies.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an artificial intelligence based optical detection system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an object-level hyperspectral modeling method based on a three-dimensional convolutional neural network;
FIG. 3 is a schematic diagram of a hyperspectral data modeling method based on a multi-branch convolutional network;
FIG. 4 is a schematic diagram of a small-sample fast-adaptation hyperspectral modeling method based on meta-learning;
FIG. 5 is a schematic diagram of an optical detection apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of positions of an inner ring gear, a second gear and a first gear of an optical detection apparatus according to an embodiment of the present invention.
In the figure:
1. an optical detection device; 2. a terminal control device; 3. an artificial intelligence cloud processing platform; 11. a housing; 12. a shooting window; 13. a display screen; 14. pressing a key; 15. a controller; 16. a wireless communication module; 17. a second gear; 18. a first gear; 19. an inner gear ring; 4. a shooting module; 41. a rotating shaft; 42. a first link; 43. a second link; 44. a first lens assembly; 45. a U-shaped fixing member; 46. a second lens assembly; 47. an annular body; 48. a first rotating body; 49. a second rotating body; 50. a body.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
An embodiment of the present invention provides an optical detection system based on artificial intelligence, as shown in fig. 1, including: an optical detection device 1 for capturing a spectral image of a substance;
a terminal control device 2, communicatively connected with the optical detection device 1, for controlling the working mode of the optical detection device 1 and acquiring the spectral image captured by the optical detection device 1;
an artificial intelligence cloud processing platform 3, communicatively connected with the terminal control device 2, for acquiring the spectral image transmitted by the terminal control device 2, processing it with a preset model, generating a detection report, and sending the report to the terminal control device 2; the terminal control device 2 displays the detection report.
The working principle and the beneficial effects of the technical scheme are as follows:
A user uses the optical detection device 1 to capture a spectral image of the object under test; the image is uploaded to the artificial intelligence cloud processing platform 3 through the terminal control device 2, and the platform processes it with a preset model. Because the optical detection device 1 is responsible only for collecting spectral images, its volume can be minimized; when it is applied to a production line for real-time detection, light sources can be erected in front of and behind the device to illuminate the area it photographs. The artificial intelligence cloud processing platform 3 stores models for many kinds of detection, so detection types and application scenarios are diversified. Application scenarios include: composition analysis of industrial products, such as formaldehyde content detection; industrial product category identification, such as plastic type identification; composition analysis of agricultural products, such as protein, sugar, and water content detection; variety identification of agricultural products, for example distinguishing arabica from robusta coffee beans; food composition analysis, such as apple sweetness analysis, alcohol strength analysis, and melamine detection; food category identification, such as liquor aroma identification and red wine quality identification; drug composition analysis, to check whether drug components are compliant; drug category identification for rapid classification; identification of adulterated Chinese medicinal materials, such as distinguishing Pinellia rhizome from Arisaema rhizome, or Angelica sinensis root from Angelica pubescens root; judicial and government-affairs identification, such as handwriting ink identification; and purity inspection of articles, such as detecting defects in ornaments, glass, and other materials. In advance, food is irradiated with light, the spectral data and corresponding detection results are fed back, and the machine performs deep learning to form the preset model, which is then used for detection: food components, pesticide residues, sweetness and acidity, authenticity of traditional Chinese medicinal materials, and pesticide residues on agricultural products.
Built on spectral analysis and combined with artificial intelligence, big data, cloud computing, and Internet of Things technologies, the optical detection system of the invention offers users a fast, convenient, easy-to-use intelligent detection product and a comprehensive solution that simplifies the substance detection process, reduces cost, and enables rapid on-site detection.
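The device-to-terminal-to-cloud flow described above can be sketched end to end with stubs standing in for the real hardware, network transport, and model inference (all function names and the tiny 2×2 image are invented for illustration):

```python
# Minimal sketch of the three-part flow: optical detection device captures,
# terminal control device uploads, AI cloud platform processes and reports.
def capture_spectral_image():
    """Optical detection device: stub returning a tiny spectral image."""
    return {"pixels": [[0.1, 0.2], [0.3, 0.4]], "wavelength_nm": 850}

def run_preset_model(image):
    """AI cloud processing platform: stub 'preset model' that just
    averages reflectance and wraps it in a detection report."""
    mean = sum(sum(row) for row in image["pixels"]) / 4
    return {"report": "ok", "mean_reflectance": mean}

def upload(image):
    """Terminal control device: forward the image to the cloud platform."""
    return run_preset_model(image)

report = upload(capture_spectral_image())  # terminal then displays the report
```

The division of labor is the point: capture stays on the device, everything compute-heavy stays in the cloud, which is why the device itself can be kept small.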
In one embodiment, the optical detection apparatus 1 includes: a hyperspectral camera and/or a terahertz spectrometer.
The working principle and the beneficial effects of the technical scheme are as follows:
A hyperspectral camera captures hyperspectral images: it obtains both the spatial morphology of the measured object and the spectrum of every pixel, forming a three-dimensional data block, thereby enabling online, non-destructive detection of the components of the object under test. Applications include detection of the active ingredients of western medicine tablets; variety detection of coffee beans and soybeans; identification of adulterated traditional Chinese medicine; and detection of strawberry bruising. Specific examples: using visible and near-infrared hyperspectral sensing combined with chemometric and machine learning algorithms such as partial least squares (PLS), multiple linear regression (MLR), support vector machine (SVM), and artificial neural network (ANN) models, the total viable count (TVC) and plate count of products such as meat, aquatic products, vegetables, and mushrooms can be detected rapidly and non-destructively. Combined with machine learning classification algorithms such as SVM, LS-SVM, and ANN, fungal infection of various foods and agricultural products can be classified and the infected regions visualized. Combined with classification algorithms such as SVM, KNN, and ANN, fungus varieties can be classified; with methods such as PCA, the hyphal growth process can be analyzed and monitored.
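The three-dimensional data block mentioned above (two spatial axes plus one spectral axis) can be sketched with plain nested lists; real pipelines would use an array library, and the sizes and values here are illustrative only:

```python
# A hyperspectral cube: cube[row][col] is the spectrum of one pixel across
# all bands. Dimensions and synthetic values are illustrative.
H, W, BANDS = 2, 2, 4
cube = [[[0.1 * b for b in range(BANDS)] for _ in range(W)] for _ in range(H)]

def pixel_spectrum(cube, row, col):
    """Spectrum of one pixel across all bands."""
    return cube[row][col]

def mean_spectrum(cube):
    """Per-band mean over all pixels, a common input to PLS/SVM/ANN models."""
    n = len(cube) * len(cube[0])
    return [sum(cube[r][c][b] for r in range(len(cube))
                for c in range(len(cube[0]))) / n
            for b in range(BANDS)]
```

Per-pixel spectra support visualizing infected regions, while pooled spectra such as the per-band mean feed object-level models like TVC regression.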
Terahertz waves are electromagnetic waves of 0.1-10 THz that share characteristics of both light waves and microwaves and offer the advantages of safety, perspectivity and spectral resolution. According to the imaging principle, terahertz imaging technologies are mainly divided into continuous-wave and pulsed-wave imaging. Pulsed-wave imaging provides richer and more accurate information, while continuous-wave imaging has the advantages of simpler equipment and higher imaging speed. Terahertz waves can excite the low-frequency molecular motion of biomolecules such as proteins and DNA, yielding structural and kinetic information about microorganisms that other vibrational spectra cannot provide. Research shows that different types of microorganisms exhibit different time-domain terahertz spectral characteristics, so the method can be used for microorganism classification and identification and has potential in the field of new-species discovery. In addition, microorganisms in different states (living, dead or powdered) differ markedly in terahertz absorption. Studies have also found a strong correlation between microorganism concentration and terahertz resonance frequency shift, so terahertz sensing can be used to quantitatively detect microorganism concentration. Current research in this field focuses on improving detection sensitivity, for example by designing sensors based on terahertz metamaterials to realize quantitative detection of trace microorganisms.
In one embodiment, the terminal control device 2 includes: one or more of a mobile phone, a tablet and a computer.
In one embodiment, the preset model includes: one or more of a three-dimensional convolution model, a two-branch convolution model and a small sample convolution model.
The working principle and the beneficial effects of the technical scheme are as follows:
three-dimensional convolution model: as shown in fig. 2, to realize automatic object-level hyperspectral data classification based on deep learning, the object-level hyperspectral modeling method based on a three-dimensional convolutional neural network uses a watershed algorithm to segment the image and standardizes the sample data; the standardized data are then fed into the three-dimensional convolutional neural network, which simultaneously extracts and fuses depth profile features and spectral features.
Two-branch convolution model: as shown in fig. 3, to simplify model parameters and improve model robustness, the hyperspectral data modeling method based on a multi-branch convolutional network uses a dual-branch convolutional neural network to extract morphological and spectral features separately and finally fuses them. Through the structural design of the neural network, spectral features and spatial features are extracted and deeply fused.
Small sample convolution model: as shown in FIG. 4, the small-sample fast-adaptation hyperspectral modeling method based on meta-learning uses a meta-learning approach and public hyperspectral data sets to pre-train the model in parallel. Tests on multiple data sets of products such as agricultural products and traditional Chinese medicines show that this markedly improves the precision and robustness of small-sample modeling.
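The spatial-spectral feature extraction shared by these three models can be illustrated with a single hand-rolled three-dimensional convolution over a synthetic hyperspectral cube; the cube size, kernel size and ReLU choice below are illustrative assumptions, not the patented architectures:

```python
# A sketch of one 3D-convolution layer over a hyperspectral data block,
# showing how a single kernel spans both spatial axes and the spectral axis
# so that one filter responds to joint spatial-spectral structure.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(1)
cube = rng.normal(size=(16, 16, 32))      # 16x16 pixels, 32 spectral bands
kernel = rng.normal(size=(3, 3, 7))       # 3x3 spatial, 7-band spectral window

# All valid 3x3x7 windows of the cube: shape (14, 14, 26, 3, 3, 7).
windows = sliding_window_view(cube, kernel.shape)

# Correlate every window with the kernel, then apply a ReLU nonlinearity.
feature_map = np.einsum('abcijk,ijk->abc', windows, kernel)
feature_map = np.maximum(feature_map, 0)

print(feature_map.shape)   # (14, 14, 26)
```

A real three-dimensional convolutional network stacks many such filters with learned weights, pooling and a classifier head; this sketch only makes the windowing arithmetic concrete.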
In one embodiment, the artificial intelligence cloud processing platform 3 performs the following operations:
before the spectral image of the substance is processed by adopting a preset model, receiving model calling information sent by the terminal control equipment 2, and acquiring the preset model from a preset model calling library based on the model calling information;
wherein, the model call library comprises: the method comprises the steps of allowing calling vectors, model numbers corresponding to the allowing calling vectors one by one, and preset models corresponding to the model numbers one by one; wherein the permission call vector is as follows:
Xi=(xi1,xi2,…,xim);
wherein Xi is the permission calling vector corresponding to the ith model number; xim is the value of the mth calling parameter in the ith permission calling vector;
obtaining a preset model from a preset model calling library based on model calling information, comprising:
analyzing model calling information based on a preset analysis template to obtain a model calling vector; the model call vector is as follows:
Y=(y1,y2,…,ym);
wherein Y represents the model call vector; ym represents the value of the mth calling parameter in the model call vector; in the analysis process, when the value of a calling parameter of the analysis template cannot be analyzed from the model calling information, the value of that calling parameter is filled with a preset filling value;
calculating a first similarity between the model call vector and each of the permission call vectors, the calculation formula being as follows:
wherein Sim(Y, Xp) represents the first similarity between the model call vector Y and the pth permission call vector; yq represents the value of the qth calling parameter in the model call vector; xpq represents the value of the qth calling parameter of the pth permission call vector;
when the maximum value of all the first similarity values is greater than or equal to a preset first threshold value and smaller than a second preset threshold value, obtaining a model number corresponding to a permitted calling vector of the maximum value of the first similarity, and calling preset models in one-to-one correspondence with the model numbers based on the model number; when the maximum value in all the first similarity values is larger than or equal to a second preset threshold value, extracting the model number corresponding to the allowable calling vector with the first similarity larger than the second preset threshold value, acquiring preset model description information corresponding to the model number, making the model description information and the model number into a list to be selected, sending the list to be selected to the terminal control equipment 2, receiving the selection operation of the terminal control equipment 2 on the list to be selected, analyzing the selection operation, acquiring the model number selected and called by the user, and calling the preset models corresponding to the model numbers one by one based on the model number; wherein the selecting operation comprises multiple selection; the first threshold value is smaller than a second preset threshold value;
when the maximum value of all the first similarity values is smaller than a preset first threshold value and/or the number of the values of the calling parameters filled by the filling means in the analysis process is larger than a preset number, obtaining a historical calling record of the terminal control device 2, establishing a temporary calling library based on the historical calling record, and calculating a second similarity between the model calling vector and the permitted calling vector in the temporary calling library, wherein the calculation formula is as follows:
wherein Sim(Y, Lj) represents the second similarity between the model call vector Y and the jth permission call vector in the temporary call library; xjq represents the value of the qth calling parameter of the jth permission call vector in the temporary call library;
calling a model corresponding to the maximum value of the second similarity;
the calling parameters include: the voltage and the current of the optical detection device 1, the spectral wavelength of the spectral image of the shot substance, the light intensity, the focal length of the lens and the depth of field of the lens.
The working principle and the beneficial effects of the technical scheme are as follows:
the artificial intelligence cloud processing platform 3 analyzes a model calling vector from the model calling information sent by the terminal control equipment 2, matches it against the permission calling vectors corresponding to the models in the model calling library, determines the model the terminal control equipment 2 intends to call, and analyzes the substance spectral image with the called model, so the accuracy of model calling directly affects the accuracy of the final analysis result. The model calling information sent by the terminal control equipment 2 to the artificial intelligence cloud processing platform 3 can be input manually by the user, or the current state of the optical detection device 1 can be used directly as the model calling information; the states of the optical detection device 1 include its voltage and current, and the spectral wavelength, light intensity, lens focal length and lens depth of field used to shoot the substance spectral image. Calling the corresponding model for each state improves the recognition rate of the models and the accuracy of the final detection report. When the model calling information is insufficient to select a model, the intended model is judged intelligently from the calling records of the terminal control equipment 2.
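A sketch of this matching logic, under the assumption that the unspecified similarity formula is a cosine similarity over scale-normalized calling parameters; the patent does not fix the formula, the thresholds or the parameter scales, so every numeric value below is hypothetical:

```python
import numpy as np

# Hypothetical model calling library: permission calling vectors keyed by
# model number.  Calling parameters, in fixed order: voltage, current,
# spectral wavelength, light intensity, lens focal length, lens depth of field.
SCALE = np.array([5.0, 0.2, 1000.0, 1.0, 50.0, 4.0])   # per-parameter scale
LIBRARY = {
    "M001": np.array([5.0, 0.2, 900.0, 1.0, 35.0, 2.0]),
    "M002": np.array([5.0, 0.2, 1550.0, 0.8, 50.0, 4.0]),
}
T1, T2 = 0.60, 0.98   # first threshold < second threshold, as the text requires

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_models(call_vector):
    """Return the model number(s) to call for a parsed model call vector."""
    q = call_vector / SCALE
    sims = {num: cosine(q, vec / SCALE) for num, vec in LIBRARY.items()}
    best = max(sims.values())
    if best >= T2:
        # Several models may exceed T2: all candidates go into the list
        # to be selected and are sent to the terminal for user choice.
        return sorted(n for n, s in sims.items() if s >= T2)
    if best >= T1:
        return [max(sims, key=sims.get)]   # single unambiguous match
    return []   # fall back to the history-based temporary call library

print(select_models(np.array([5.0, 0.2, 905.0, 1.0, 35.0, 2.0])))
```

Here a call vector close to M001 scores above the second threshold only for M001, so the model is called directly; a vector matching several permission vectors would instead return the whole candidate list.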
In one embodiment, the artificial intelligence based optical detection system further comprises: the model calls the two-dimensional code, and the model calls the two-dimensional code and includes: model call information and/or setting information of the optical detection apparatus 1;
the method comprises the following steps that when a key 14 of the optical detection device 1 is pressed for a long time to reach a preset time, the optical detection device 1 enters a model calling and setting mode, and when the model calling and setting mode is adopted, the optical detection device 1 shoots a model calling two-dimensional code;
the terminal control equipment 2 acquires the model calling two-dimensional code through the optical detection device 1, and acquires model calling information and/or setting information based on the model calling two-dimensional code;
the terminal control device 2 sets shooting parameters of the optical detection device 1 based on the setting information;
the terminal control equipment 2 sends the model calling information to the artificial intelligence cloud processing platform 3, and the artificial intelligence cloud processing platform 3 calls the model based on the model calling information; the model calling information includes: model number of the model.
The working principle and the beneficial effects of the technical scheme are as follows:
when the optical detection device is used on a production line and the substance to be detected changes, a professional would normally need to debug the optical detection device 1 and set its parameters. With the model-calling two-dimensional code, the model and setting parameters required by the production line are printed on an adjustment card; when the production line switches to a different product, the line operator only needs to scan the two-dimensional code of the required model with the optical detection device 1, and the terminal control equipment 2 reads the code to set the parameters of the optical detection device 1 automatically and switch the called model accordingly. Operators can thus adjust quickly to different detection substances.
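A sketch of the payload such a model-calling two-dimensional code might carry, and of how the terminal control equipment 2 could split it into model calling information and setting information; the JSON schema and field names are illustrative assumptions, since the patent only specifies the two categories of content:

```python
import json

# Hypothetical payload encoded in the model-calling two-dimensional code.
payload = json.dumps({
    "model_number": "M001",
    "settings": {"wavelength_nm": 900, "focal_length_mm": 35, "exposure_ms": 12},
})

def handle_scanned_code(text):
    """Split a scanned code's text into cloud-bound call info and
    device-bound shooting settings."""
    data = json.loads(text)
    call_info = {"model_number": data["model_number"]}  # sent to the cloud platform
    settings = data.get("settings", {})                 # applied to the detector
    return call_info, settings

call_info, settings = handle_scanned_code(payload)
print(call_info["model_number"], settings["wavelength_nm"])
```

Because the code carries an explicit model number, the cloud platform can skip the similarity matching step entirely, which is exactly the shortcut the adjustment card provides.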
In one embodiment, the artificial intelligence cloud processing platform 3 further performs the following operations:
receiving model calling information sent by the terminal control equipment 2, and sending a model corresponding to the model calling information to the terminal control equipment 2 when the model calling information is the same as the previous N times of model calling information; the terminal control device 2 saves the model.
The working principle and the beneficial effects of the technical scheme are as follows:
when the same terminal control device 2 repeatedly calls the same model, the model is placed directly at the front end (the terminal control device 2); in this way, the detection report can be produced directly on the terminal control device 2, increasing detection speed.
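This front-end caching behaviour can be sketched as follows; the threshold N, the consecutive-call criterion and the push/local/remote action names are illustrative assumptions:

```python
from collections import deque

N = 3  # hypothetical threshold: push the model after N identical consecutive calls

class CloudPlatform:
    """Minimal sketch of the cloud side deciding when to send a model
    to the terminal control equipment for local use."""

    def __init__(self):
        self.history = deque(maxlen=N)   # recent call info from one terminal
        self.pushed = set()              # models already cached at the terminal

    def handle_call(self, model_number):
        self.history.append(model_number)
        if (len(self.history) == N and len(set(self.history)) == 1
                and model_number not in self.pushed):
            self.pushed.add(model_number)
            return ("push", model_number)    # transfer model to the terminal
        if model_number in self.pushed:
            return ("local", model_number)   # terminal analyzes locally
        return ("remote", model_number)      # cloud analyzes as usual

cloud = CloudPlatform()
print([cloud.handle_call(m)[0] for m in ["M1", "M1", "M1", "M1"]])
```

After the third identical call the model is pushed once, and subsequent identical calls are served from the terminal's local copy.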
The present invention also provides an optical detection device 1 based on artificial intelligence, as shown in fig. 5, comprising:
a housing 11,
a photographing window 12 provided at one end of the housing 11,
a display 13 disposed at the other end of the housing 11;
the key 14 is arranged on one side of the shell 11, and hand-held lines matched with four fingers of a human hand are arranged on one side of the shell 11, which is far away from the key 14;
the shooting module 4 is arranged in the shell 11 and is used for shooting a substance spectral image;
the controller 15 is arranged in the shell 11 and is respectively and electrically connected with the shooting module 4, the display screen 13 and the keys 14;
and the wireless communication module 16 is electrically connected with the controller 15 and is used for being in communication connection with the terminal control equipment 2.
The working principle and the beneficial effects of the technical scheme are as follows:
in use, the thumb is placed on the key 14 and the remaining four fingers grip the hand-held lines; these prevent the hand muscles from aching during prolonged use. The controller 15 controls the shooting module 4 to shoot the substance spectral image, sends it to the terminal control equipment 2 through the wireless communication module 16, and the terminal control equipment 2 uploads it to the artificial intelligence cloud processing platform 3. The display screen 13 can display the state of the optical detection device 1 and, in the model calling and setting mode, uses a preset display interface. The shooting window 12 is used by the shooting module 4 for shooting; a lens may be provided at the shooting window 12 to protect the shooting module 4, and a filter lens can be used to remove spectral signals that would affect the analysis. This embodiment provides an optical detection device 1 convenient for handheld use, realizing miniaturization and portability of the detection equipment and making it convenient for inspection personnel to carry out line inspections and field work.
In one embodiment, as shown in fig. 5 and 6, the photographing module 4 includes:
a first lens assembly 44, wherein a first gear 18 is sleeved on the periphery of the first lens assembly 44;
at least one second lens assembly 46, wherein a second gear 17 is sleeved on the periphery of the second lens assembly 46;
the first gear 18 and the second gear 17 are both arranged in the internal gear ring 19, the first gear 18 is meshed with the second gear 17, and the second gear 17 is meshed with the internal teeth of the internal gear ring 19;
one end of a stator of the rotating shaft 41 is fixedly connected with the housing 11, and the other end of the stator of the rotating shaft 41 is fixedly connected with the first lens assembly 44;
a plurality of first links 42, which are respectively disposed perpendicular to the central axis of the rotating shaft 41 and respectively fixedly connected to the rotor of the rotating shaft 41;
a plurality of second links 43 disposed perpendicular to the first links 42 and parallel to the central axis of the rotating shaft 41; the second connecting rods 43, the first connecting rods 42 and the second lens assemblies 46 are in one-to-one correspondence; one end of the second connecting rod 43 is rotatably connected with one end of the first connecting rod 42 far away from the rotating shaft 41, and the other end is fixedly connected with the middle part of the U-shaped fixing piece 45; both ends of the U-shaped fixing member 45 are fixedly connected to one side of the second gear 17.
The working principle and the beneficial effects of the technical scheme are as follows:
the first lens assembly 44 is the main shooting component and the second lens assembly 46 is an auxiliary shooting component; three-dimensional modeling of the photographed object combines the pictures shot by the first lens assembly 44 and the second lens assembly 46, realizing collection of three-dimensional spectral images. Through the cooperation of the rotating shaft 41, the first connecting rod 42, the second connecting rod 43 and the U-shaped fixing piece 45, the second lens assembly 46 rotates within the internal gear ring 19 and changes its angle, providing images of the object from multiple angles so that the three-dimensional model is closer to the actual situation. In addition, the second lens assembly can be replaced by a spectrum generator that emits a light source onto the object, so that the first lens assembly can collect the light reflected from the object to form a spectral image.
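Assuming the described layout behaves as a planetary train with the internal gear ring 19 fixed to the housing and the connecting rods acting as a carrier driven by the shaft rotor, the spin of a second lens assembly relative to the carrier follows from standard gear kinematics; the tooth counts below are hypothetical:

```python
# Planetary-train sketch: carrier (links) driven by the rotor, ring gear fixed.
Zr, Zp = 60, 20          # hypothetical tooth counts: internal gear ring, second gear
carrier_turns = 0.25     # rotor advances the links by a quarter revolution

# Rolling inside a fixed internal ring gear, the planet (second gear) spins
# relative to the carrier by -carrier_turns * Zr / Zp (opposite sense).
planet_turns_rel = -carrier_turns * Zr / Zp
print(planet_turns_rel)   # -0.75
```

So each quarter turn of the carrier rotates each second lens assembly three quarters of a turn about its own axis, which is how the mechanism sweeps the auxiliary viewing angles.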
In one embodiment, as shown in fig. 6, second lens assembly 46 includes:
a body 50,
an annular body 47, which is sleeved on the outer periphery of the body 50, and two first rotating bodies 48 are symmetrically arranged between the outer periphery of the body 50 and the inner periphery of the annular body 47; the rotating end of the first rotating body 48 is fixedly connected with the body 50; the fixed end of the first rotating body 48 is fixedly connected with the annular body 47;
two second rotating bodies 49 symmetrically arranged and arranged between the annular body 47 and the second gear 17; the fixed end of the second rotating body 49 is fixedly connected with the periphery of the annular body 47; the rotating end of the second rotating body 49 is fixedly connected with the inner periphery of the second gear 17;
the central axes of the two first rotating bodies 48 are perpendicular to the central axes of the two second rotating bodies 49.
The working principle and the beneficial effects of the technical scheme are as follows:
the shooting angle of the second lens assembly 46 is adjusted through the first rotating body 48 and the second rotating body 49, so that images of objects shot from multiple angles are shot when the second lens assembly 46 is located at the same direction of the first lens assembly 44, the image basis in three-dimensional modeling is further enriched, and the accuracy of three-dimensional spectral images after three-dimensional modeling is improved; the body 50 is an optical component for shooting of the second lens assembly 46.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An artificial intelligence based optical detection system, comprising:
an optical detection device (1) for capturing a spectral image of a substance;
the terminal control equipment (2) is in communication connection with the optical detection device (1) and is used for controlling the working mode of the optical detection device (1) and acquiring the substance spectrum image shot by the optical detection device (1);
the artificial intelligence cloud processing platform (3) is in communication connection with the terminal control equipment (2) and is used for acquiring the substance spectral image transmitted by the terminal control equipment (2), processing the substance spectral image by adopting a preset model, acquiring a detection report and sending the detection report to the terminal control equipment (2); the terminal control device (2) displays the detection report.
2. The artificial intelligence based optical detection system according to claim 1, wherein the optical detection device (1) comprises: a hyperspectral camera and/or a terahertz spectrometer.
3. The artificial intelligence based optical detection system according to claim 1, wherein the terminal control device (2) includes: one or more of a mobile phone, a tablet and a computer.
4. The artificial intelligence based optical detection system of claim 1 wherein the predetermined model comprises: one or more of a three-dimensional convolution model, a two-branch convolution model and a small sample convolution model.
5. The artificial intelligence based optical detection system according to claim 1, wherein the artificial intelligence cloud processing platform (3) performs the following operations:
before the substance spectral image is processed by adopting a preset model, receiving model calling information sent by the terminal control equipment (2), and acquiring the preset model from a preset model calling library based on the model calling information;
wherein the model call library comprises: the method comprises the steps of allowing calling vectors, model numbers corresponding to the allowing calling vectors one by one, and preset models corresponding to the model numbers one by one; wherein the permission call vector is as follows:
Xi=(xi1,xi2,…,xim);
wherein Xi is the permission calling vector corresponding to the ith model number; xim is the value of the mth calling parameter in the ith permission calling vector;
the obtaining of the preset model from a preset model calling library based on the model calling information includes:
analyzing the model calling information based on a preset analysis template to obtain a model calling vector; the model call vector is as follows:
Y=(y1,y2,…,ym);
wherein Y represents the model call vector; ym represents the value of the mth calling parameter in the model call vector; in the analysis process, when the value of a calling parameter of the analysis template cannot be analyzed from the model calling information, the value of that calling parameter is filled with a preset filling value;
calculating a first similarity between the model call vector and each of the allowable call vectors, wherein the calculation formula is as follows:
wherein Sim(Y, Xp) represents the first similarity between the model call vector Y and the pth permission call vector; yq represents the value of the qth calling parameter in the model call vector; xpq represents the value of the qth calling parameter of the pth said permission call vector;
when the maximum value of all the first similarity values is greater than or equal to a preset first threshold value and smaller than a second preset threshold value, obtaining the model number corresponding to the allowable calling vector of the maximum value of the first similarity, and calling preset models corresponding to the model numbers one by one on the basis of the model number; when the maximum value in all the first similarity values is larger than or equal to a second preset threshold value, extracting all the model numbers corresponding to the allowable calling vectors with the first similarity larger than the second preset threshold value, obtaining preset model description information corresponding to the model numbers, making the model description information and the model numbers into a list to be selected, sending the list to be selected to the terminal control equipment (2), receiving the selection operation of the terminal control equipment (2) on the list to be selected, analyzing the selection operation, obtaining the model numbers selected and called by a user, and calling the preset models corresponding to the model numbers one by one based on the model numbers; wherein the selecting operation comprises a multiple selection; the first threshold is smaller than the second preset threshold;
when the maximum value of all the first similarity values is smaller than a preset first threshold value and/or the number of the calling parameter values filled by filling means in the analysis process is larger than a preset number, obtaining a historical calling record of the terminal control equipment (2), establishing a temporary calling library based on the historical calling record, and calculating a second similarity between the model calling vector and the allowable calling vector in the temporary calling library, wherein the calculation formula is as follows:
wherein Sim(Y, Lj) represents the second similarity between the model call vector Y and the jth said permitted call vector within the temporary call library; xjq represents the value of the qth calling parameter of the jth said permitted call vector within the temporary call library;
calling the model corresponding to the maximum value of the second similarity;
the calling parameters comprise: the voltage and the current of the optical detection device (1), the spectrum wavelength for shooting the substance spectrum image, the light intensity, the lens focal length and the lens depth of field.
6. The artificial intelligence based optical detection system of claim 1 further comprising: the model calls the two-dimensional code, the model calls the two-dimensional code to include: model call information and/or setting information of the optical detection device (1);
the method comprises the following steps that when a key (14) of the optical detection device (1) is pressed for a long time to reach a preset time, the optical detection device (1) enters a model calling and setting mode, and when the model calling and setting mode is adopted, the optical detection device (1) shoots a model calling two-dimensional code;
the terminal control equipment (2) acquires the model calling two-dimensional code through the optical detection device (1), and acquires the model calling information and/or the setting information based on the model calling two-dimensional code;
the terminal control equipment (2) sets shooting parameters of the optical detection device (1) based on the setting information;
the terminal control equipment (2) sends the model calling information to the artificial intelligence cloud processing platform (3), and the artificial intelligence cloud processing platform (3) calls the model based on the model calling information; the model calling information includes: model numbers of the models.
7. The artificial intelligence based optical detection system according to claim 1, wherein the artificial intelligence cloud processing platform (3) further performs the following operations:
receiving model calling information sent by the terminal control equipment (2), and sending the model corresponding to the model calling information to the terminal control equipment (2) when the model calling information is the same as the previous N times of model calling information; the terminal control device (2) saves the model.
8. An artificial intelligence based optical detection device (1), characterized in that it comprises:
a shell (11),
a photographing window (12) provided at one end of the housing (11),
the display screen (13) is arranged at the other end of the shell (11);
the key (14) is arranged on one side of the shell (11), and hand-held lines adaptive to four fingers of a human hand are arranged on one side, far away from the key (14), of the shell (11);
the shooting module (4) is arranged in the shell (11) and is used for shooting a substance spectral image;
the controller (15) is arranged in the shell (11) and is respectively and electrically connected with the shooting module (4), the display screen (13) and the key (14);
and the wireless communication module (16) is electrically connected with the controller (15) and is used for being in communication connection with the terminal control equipment (2).
9. The artificial intelligence based optical detection apparatus (1) according to claim 8, wherein the photographing module (4) comprises:
the lens assembly comprises a first lens assembly (44), wherein a first gear (18) is sleeved on the periphery of the first lens assembly (44);
at least one second lens assembly (46), wherein a second gear (17) is sleeved on the periphery of the second lens assembly (46);
an internal gear ring (19), the first gear (18) and the second gear (17) being both disposed within the internal gear ring (19), the first gear (18) meshing with the second gear (17), the second gear (17) meshing with internal teeth of the internal gear ring (19);
one end of a stator of the rotating shaft (41) is fixedly connected with the shell (11), and the other end of the stator of the rotating shaft (41) is fixedly connected with the first lens assembly (44);
the first connecting rods (42) are respectively and vertically arranged with the central axis of the rotating shaft (41) and are respectively and fixedly connected with the rotor of the rotating shaft (41);
a plurality of second links (43) arranged perpendicular to the first links (42) and parallel to the central axis of the rotating shaft (41); the second connecting rods (43) and the first connecting rods (42) correspond to the second lens assemblies (46) one by one; one end of the second connecting rod (43) is rotatably connected with one end of the first connecting rod (42) far away from the rotating shaft (41), and the other end of the second connecting rod is fixedly connected with the middle part of the U-shaped fixing piece (45); and both ends of the U-shaped fixing piece (45) are fixedly connected with one side of the second gear (17).
10. The artificial intelligence based optical detection device (1) according to claim 9, wherein the second lens arrangement (46) comprises:
a body (50),
the annular body (47) is sleeved on the outer periphery of the body (50), and two first rotating bodies (48) are symmetrically arranged between the outer periphery of the body (50) and the inner periphery of the annular body (47); the rotating end of the first rotating body (48) is fixedly connected with the body (50); the fixed end of the first rotating body (48) is fixedly connected with the annular body (47);
two second rotating bodies (49) symmetrically arranged and arranged between the annular body (47) and the second gear (17); the fixed end of the second rotating body (49) is fixedly connected with the periphery of the annular body (47); the rotating end of the second rotating body (49) is fixedly connected with the inner periphery of the second gear (17);
the central axes of the two first rotating bodies (48) are perpendicular to the central axes of the two second rotating bodies (49).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011088696.XA CN112444493B (en) | 2020-10-13 | 2020-10-13 | Optical detection system and device based on artificial intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011088696.XA CN112444493B (en) | 2020-10-13 | 2020-10-13 | Optical detection system and device based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112444493A true CN112444493A (en) | 2021-03-05 |
CN112444493B CN112444493B (en) | 2024-01-09 |
Family
ID=74735979
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011088696.XA Active CN112444493B (en) | 2020-10-13 | 2020-10-13 | Optical detection system and device based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112444493B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113739039A (en) * | 2021-09-14 | 2021-12-03 | 中科蓝山怡源(北京)人工智能技术有限公司 | Inspection robot based on edge computing
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1995987A (en) * | 2007-02-08 | 2007-07-11 | 江苏大学 | Non-destructive detection method and device for agricultural and animal products based on hyperspectral image technology
CN109341869A (en) * | 2018-12-04 | 2019-02-15 | 西安思科赛德电子科技有限公司 | Infrared detection sensor adjustment device
WO2019146582A1 (en) * | 2018-01-25 | 2019-08-01 | 国立研究開発法人産業技術総合研究所 | Image capture device, image capture system, and image capture method
CN110222310A (en) * | 2019-05-17 | 2019-09-10 | 科迈恩(北京)科技有限公司 | Shared AI scientific instrument data analysis and processing system and method
CN110490251A (en) * | 2019-03-08 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Artificial-intelligence-based prediction classification model acquisition method and device, and storage medium
CN110793923A (en) * | 2019-10-31 | 2020-02-14 | 北京绿土科技有限公司 | Mobile-phone-based hyperspectral soil data acquisition and analysis method
CN111609931A (en) * | 2020-05-17 | 2020-09-01 | 北京安洲科技有限公司 | Parameter-programmable real-time hyperspectral acquisition system and method
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11542461B2 (en) | Analysis device | |
CN104931470B (en) | Pesticide residue detection device and detection method based on fluorescence hyperspectral technology | |
EP2976605B1 (en) | Spectroscopic characterization of seafood | |
CN105973858B (en) | Automatic detection system for the quality of traditional Chinese medicine | |
US11054370B2 (en) | Scanning devices for ascertaining attributes of tangible objects | |
Eissa et al. | Understanding color image processing by machine vision for biological materials | |
Cao et al. | Identification of species and geographical strains of Sitophilus oryzae and Sitophilus zeamais using the visible/near‐infrared hyperspectral imaging technique | |
CN106679808B (en) | Correlation imaging system and method based on compressed spectrum | |
CN109557003A (en) | Pesticide deposition amount detection method and device and data acquisition combination device | |
CN112444493B (en) | Optical detection system and device based on artificial intelligence | |
Liu et al. | Rapid identification of chrysanthemum teas by computer vision and deep learning | |
CN108268902A (en) | Hyperspectral image transformation and substance detection and identification system and method based on recurrence plots | |
CN108898156A (en) | Green pepper recognition method based on hyperspectral images | |
CN209525221U (en) | Pesticide deposit amount characteristic wave data acquisition and pesticide deposit amount detection device | |
CN114419311B (en) | Multi-source information-based passion fruit maturity nondestructive testing method and device | |
US12007332B2 (en) | Portable scanning device for ascertaining attributes of sample materials | |
de Castro Pereira et al. | Detection and classification of whiteflies and development stages on soybean leaves images using an improved deep learning strategy | |
CN107576600A (en) | Rapid determination method for matcha particle size grade | |
CN114136920A (en) | Hyperspectrum-based single-grain hybrid rice seed variety identification method | |
CN111077088B (en) | Smart phone imaging spectrometer and spectrum identification method thereof | |
CN206331487U (en) | Rapid agricultural-product volume measurement device based on machine vision | |
CN116862456A (en) | Traditional Chinese medicine production monitoring control system and method based on image processing | |
Li | Classification of black tea leaf water content based on hyperspectral imaging | |
DE212017000151U1 (en) | Surround and additional circuitry for enhancing the functionality of a mobile device | |
CN114627120B (en) | Method for distinguishing decoloration of regenerated polyester material |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||