CN116858789A - Food safety detection system and method thereof - Google Patents


Info

Publication number
CN116858789A
CN116858789A (application CN202310881627.1A)
Authority
CN
China
Prior art keywords
feature
map
feature map
neural network
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310881627.1A
Other languages
Chinese (zh)
Inventor
陈海花
顾恩婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haining Sister Catering Management Co ltd
Original Assignee
Haining Sister Catering Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haining Sister Catering Management Co ltd filed Critical Haining Sister Catering Management Co ltd
Priority to CN202310881627.1A priority Critical patent/CN116858789A/en
Publication of CN116858789A publication Critical patent/CN116858789A/en
Pending legal-status Critical Current


Classifications

    • G01N 21/25: Colour; spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G06V 10/30: Noise filtering (image preprocessing)
    • G06V 10/40: Extraction of image or video features
    • G06V 10/58: Extraction of image or video features relating to hyperspectral data
    • G06V 10/764: Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/68: Food, e.g. fruit or vegetables
    • G01N 2021/1765: Method using an image detector and processing of image signal
    • G01N 2201/1296: Using chemometrical methods using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The application relates to the technical field of intelligent detection, and in particular discloses a food safety detection system and method that use hyperspectral imaging to detect food safety. A hyperspectral cube map of the food to be detected is first acquired, and noise reduction is applied to the cube map to remove interference from external factors. Multi-scale associated feature information among the spectral features of the denoised cube map at different wavelengths is then extracted, and whether the food meets safety standards is judged from this information. In this way, food safety can be reliably detected and evaluated, ensuring the quality and eating safety of the food.

Description

Food safety detection system and method thereof
Technical Field
The application relates to the technical field of intelligent detection, in particular to a food safety detection system and a food safety detection method.
Background
In recent years, with the rapid development of the food manufacturing and food processing industries in China, food safety has become a focus of public attention. National and local governments have successively issued policies and taken action to build food safety assurance projects, and the food safety testing industry is accordingly developing steadily.
Accordingly, a food safety detection system and method thereof are desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiments of the application provide a food safety detection system and method that use hyperspectral imaging to detect food safety. A hyperspectral cube map of the food to be detected is first acquired, and noise reduction is applied to the cube map to remove interference from external factors. Multi-scale associated feature information among the spectral features of the denoised cube map at different wavelengths is then extracted, and whether the food meets safety standards is judged. In this way, food safety can be reliably detected and evaluated, ensuring the quality and eating safety of the food.
Accordingly, according to one aspect of the present application, there is provided a food safety detection system comprising:
the hyperspectral data acquisition module is used for acquiring a hyperspectral cube map of food to be detected, wherein the hyperspectral cube map comprises spectral images under a plurality of wavelengths;
the noise reduction module is used for passing the hyperspectral cube map through an image noise reducer based on an automatic codec to obtain a noise reduction hyperspectral cube map;
the depth feature extraction module is used for passing the spectral image at each wavelength, among the spectral images at the plurality of wavelengths in the noise reduction hyperspectral cube map, through a first convolutional neural network model comprising a depth fusion module to obtain a plurality of image feature matrices;
the three-dimensional arrangement module is used for arranging the image feature matrixes into three-dimensional feature tensors along the channel dimension;
a multi-scale associated feature extraction module, configured to pass the three-dimensional feature tensor through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, where the second convolutional neural network uses a three-dimensional convolutional kernel with a first scale, and the third convolutional neural network uses a three-dimensional convolutional kernel with a second scale;
the feature fusion module is used for fusing the first feature map and the second feature map to obtain a multi-scale associated feature map; and
and the detection result generation module is used for passing the multi-scale associated feature map, as a classification feature map, through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the food meets the safety standard.
In the above food safety detection system, the noise reduction module includes: an encoding unit, configured to input the hyperspectral cube map into an encoder of the image noise reducer, wherein the encoder uses convolution layers to perform explicit spatial encoding on the hyperspectral cube map to obtain image features; and a decoding unit, configured to input the image features into a decoder of the image noise reducer, wherein the decoder uses deconvolution layers to deconvolve the image features to obtain the noise reduction hyperspectral cube map.
In the above food safety detection system, the depth feature extraction module includes: a shallow feature extraction unit, configured to obtain a shallow feature map from the M-th layer of the first convolutional neural network model, where 4 ≤ M ≤ 6; a deep feature extraction unit, configured to obtain a deep feature map from the N-th layer of the first convolutional neural network model, where 5 ≤ N/M ≤ 10; a fusion unit, configured to fuse the shallow feature map and the deep feature map using the depth fusion module of the first convolutional neural network model to obtain a fusion feature map; and a pooling unit, configured to perform mean pooling along the channel dimension on the fusion feature map to obtain the image feature matrix.
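A minimal NumPy sketch of this module's fuse-then-pool step follows. The weighted sum standing in for the depth fusion module, and all shapes, are assumptions: the patent does not fix a concrete fusion rule, only that shallow and deep feature maps are fused and then mean-pooled along the channel dimension.

```python
import numpy as np

def depth_fuse_and_pool(shallow, deep, alpha=0.5):
    """Fuse a shallow and a deep feature map (both C x H x W) by a weighted
    sum, then mean-pool along the channel dimension to obtain an H x W image
    feature matrix. The weighted sum is one simple stand-in for the depth
    fusion module, whose exact fusion rule is not specified in the text."""
    assert shallow.shape == deep.shape
    fused = alpha * shallow + (1.0 - alpha) * deep   # fusion feature map, C x H x W
    return fused.mean(axis=0)                        # image feature matrix, H x W

# Toy example: 8-channel 4x4 shallow and deep feature maps.
rng = np.random.default_rng(0)
shallow = rng.standard_normal((8, 4, 4))
deep = rng.standard_normal((8, 4, 4))
feat = depth_fuse_and_pool(shallow, deep)
print(feat.shape)  # (4, 4)
```

One such H x W matrix is produced per wavelength, and the matrices are then stacked along the channel dimension into the three-dimensional feature tensor.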
In the above food safety detection system, the multi-scale associated feature extraction module includes: a first scale feature extraction unit, configured to use the second convolutional neural network with the three-dimensional convolution kernel of the first scale to perform, on the input data in the forward pass of each layer, convolution, mean pooling and nonlinear activation based on the three-dimensional convolution kernel having the first scale, so as to obtain the first feature map; and a second scale feature extraction unit, configured to use the third convolutional neural network with the three-dimensional convolution kernel of the second scale to perform, on the input data in the forward pass of each layer, convolution, mean pooling and nonlinear activation based on the three-dimensional convolution kernel having the second scale, so as to obtain the second feature map.
In the above food safety detection system, the feature fusion module includes: a KL divergence calculation unit, configured to calculate the KL divergence between each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension to obtain a plurality of KL divergence values; a geometric similarity calculation unit, configured to calculate the sum of the plurality of KL divergence values as the geometric similarity of each feature matrix of the first feature map along the channel dimension relative to the global feature distribution of the second feature map; an arrangement unit, configured to arrange the geometric similarities of the feature matrices of the first feature map along the channel dimension relative to the global feature distribution of the second feature map into a geometric similarity global input vector; an activation unit, configured to input the geometric similarity global input vector into a Softmax function to obtain a probabilistic geometric similarity global feature vector; and a weighting unit, configured to fuse the first feature map and the second feature map, using the feature value at each position in the probabilistic geometric similarity global feature vector as a weight value, to obtain the multi-scale associated feature map.
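The KL-divergence-based fusion can be sketched numerically as follows. The text does not specify how each feature matrix is normalized into a distribution before computing KL divergence, nor exactly how the Softmax weights enter the final fusion, so the choices below (softmax normalization per matrix, convex per-channel weighting) are assumptions.

```python
import numpy as np

def kl_div(p, q, eps=1e-8):
    """KL divergence between two feature matrices, each flattened and
    softmax-normalized into a discrete distribution (the normalization
    scheme is an assumption, not stated in the text)."""
    p, q = p.ravel(), q.ravel()
    p = np.exp(p - p.max()); p /= p.sum()
    q = np.exp(q - q.max()); q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def fuse_multiscale(f1, f2):
    """f1, f2: C x H x W feature maps. Returns the multi-scale associated map."""
    C = f1.shape[0]
    # Geometric similarity of each channel matrix of f1 w.r.t. the global
    # feature distribution of f2: sum of KL divergences over f2's channels.
    sims = np.array([sum(kl_div(f1[i], f2[j]) for j in range(C)) for i in range(C)])
    w = np.exp(sims - sims.max()); w /= w.sum()   # Softmax -> probabilistic weights
    # Per-channel convex weighting as one plausible reading of the fusion rule.
    return w[:, None, None] * f1 + (1.0 - w)[:, None, None] * f2

rng = np.random.default_rng(1)
f1 = rng.standard_normal((4, 3, 3))
f2 = rng.standard_normal((4, 3, 3))
fused = fuse_multiscale(f1, f2)
print(fused.shape)  # (4, 3, 3)
```

Note that the KL divergence of a matrix against itself is zero, so identical feature maps receive uniform Softmax weights.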
In the above food safety detection system, the detection result generation module is configured to process the multi-scale associated feature map using the classifier to generate the classification result according to the following classification formula:

softmax{(M_c, B_c) | Project(F)}

where Project(F) denotes projecting the multi-scale associated feature map F into a vector, M_c denotes the weight matrix of the fully connected layer, B_c denotes the bias vector of the fully connected layer, and softmax denotes the normalized exponential function.
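The classification step (Project(F), then a fully connected layer with weights M_c and bias B_c, then the softmax function) reduces to a few lines of NumPy; all shapes and weight values here are hypothetical placeholders.

```python
import numpy as np

def classify(feature_map, W, b):
    """softmax{(M_c, B_c) | Project(F)}: flatten (project) the multi-scale
    associated feature map F into a vector, apply the fully connected layer
    with weight matrix W and bias b, then the normalized exponential
    function to obtain class probabilities."""
    v = feature_map.reshape(-1)          # Project(F)
    logits = W @ v + b                   # fully connected layer
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(2)
F = rng.standard_normal((4, 3, 3))       # hypothetical multi-scale associated feature map
W = rng.standard_normal((2, F.size))     # 2 classes: meets / fails the safety standard
b = np.zeros(2)
probs = classify(F, W, b)
print(probs.shape)  # (2,)
```

The larger of the two probabilities indicates whether the food is judged to meet the safety standard.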
According to another aspect of the present application, there is provided a food safety detection method comprising:
acquiring a hyperspectral cube map of food to be detected, wherein the hyperspectral cube map comprises spectral images under a plurality of wavelengths;
the hyperspectral cube map is passed through an image noise reducer based on an automatic coder and decoder to obtain a noise-reduced hyperspectral cube map;
the spectral image at each wavelength, among the spectral images at the plurality of wavelengths in the noise reduction hyperspectral cube map, is passed through a first convolutional neural network model comprising a depth fusion module to obtain a plurality of image feature matrices;
arranging the plurality of image feature matrices into a three-dimensional feature tensor along a channel dimension;
passing the three-dimensional feature tensor through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolutional kernel with a first scale and the third convolutional neural network uses a three-dimensional convolutional kernel with a second scale;
Fusing the first feature map and the second feature map to obtain a multi-scale associated feature map; and
and passing the multi-scale associated feature map, as a classification feature map, through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the food meets the safety standard.
In the above food safety detection method, passing the hyperspectral cube map through an image noise reducer based on an automatic codec to obtain a noise-reduced hyperspectral cube map includes: inputting the hyperspectral cube map into an encoder of the image noise reducer, wherein the encoder uses a convolution layer to carry out explicit spatial encoding on the hyperspectral cube map so as to obtain image characteristics; and inputting the image features into a decoder of the image noise reducer, wherein the decoder uses a deconvolution layer to deconvolute the image features to obtain the noise reduction hyperspectral cube map.
In the above food safety detection method, passing the spectral image at each wavelength, among the spectral images at the plurality of wavelengths in the noise reduction hyperspectral cube map, through a first convolutional neural network model comprising a depth fusion module to obtain a plurality of image feature matrices includes: obtaining a shallow feature map from the M-th layer of the first convolutional neural network model, where 4 ≤ M ≤ 6; obtaining a deep feature map from the N-th layer of the first convolutional neural network model, where 5 ≤ N/M ≤ 10; fusing the shallow feature map and the deep feature map using the depth fusion module of the first convolutional neural network model to obtain a fusion feature map; and performing mean pooling along the channel dimension on the fusion feature map to obtain the image feature matrix.
In the above food safety detection method, passing the three-dimensional feature tensor through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map includes: using the second convolutional neural network with the three-dimensional convolution kernel of the first scale to perform, on the input data in the forward pass of each layer, convolution, mean pooling and nonlinear activation based on the three-dimensional convolution kernel having the first scale, so as to obtain the first feature map; and using the third convolutional neural network with the three-dimensional convolution kernel of the second scale to perform, on the input data in the forward pass of each layer, convolution, mean pooling and nonlinear activation based on the three-dimensional convolution kernel having the second scale, so as to obtain the second feature map.
Compared with the prior art, the food safety detection system and method provided by the application use hyperspectral imaging to detect food safety. A hyperspectral cube map of the food to be detected is first acquired, and noise reduction is applied to the cube map to remove interference from external factors. Multi-scale associated feature information among the spectral features of the denoised cube map at different wavelengths is then extracted, and whether the food meets safety standards is judged from this information. In this way, food safety can be reliably detected and evaluated, ensuring the quality and eating safety of the food.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the embodiments, serve to explain it, without constituting a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a block diagram of a food safety detection system according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a food safety detection system according to an embodiment of the application.
Fig. 3 is a block diagram of a noise reduction module in a food safety detection system according to an embodiment of the application.
Fig. 4 is a block diagram of a depth feature extraction module in a food safety inspection system according to an embodiment of the application.
Fig. 5 is a flowchart of a food safety detection method according to an embodiment of the present application.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As noted in the background above, food safety issues have long been a focus of social concern. Accordingly, a food safety detection system and method are desired that can accurately detect whether food meets safety standards, so as to ensure the quality and eating safety of the food.
At present, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, speech signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks has provided new ideas and solutions for food safety detection.
Hyperspectral imaging is a technique for acquiring spectral information from the surface of an object. By collecting data in many narrow spectral bands, it enables high-resolution spectral measurement across the visible and near-infrared ranges. Hyperspectral imaging provides more spectral information than conventional imaging and thus supports better analysis and identification of an object's composition and characteristics. Its principle rests on the fact that object surfaces reflect, absorb and emit light of different wavelengths in different ways. Using a hyperspectral camera or spectrometer, spectral images of an object at different wavelengths can be acquired; data processing and analysis techniques can then extract the spectral features of each pixel and classify, identify or quantitatively analyze the object according to these features. The technique can inspect the external quality of an object, as conventional imaging does, and also the internal quality and safety, as spectroscopy does, so it is widely applied in food safety and related fields.
Based on the above, in the technical scheme of the application, the hyperspectral imaging technology can be adopted to detect the food safety, firstly, the hyperspectral cube map of the food to be detected is obtained, the hyperspectral cube map is subjected to noise reduction treatment to remove the interference of external factors, the multiscale association characteristic information among the spectral characteristics of the hyperspectral cube map under different wavelengths after noise reduction is extracted, and whether the food meets the safety standard is judged. Therefore, the food safety can be well detected and evaluated, so that the quality and eating safety of the food are ensured.
Specifically, in the technical scheme of the application, a hyperspectral cube map of the food to be detected is first acquired by a hyperspectral analyzer. Next, it should be considered that hyperspectral imaging tends to suffer from noise, such as sensor noise and illumination variations. This noise degrades the quality of the hyperspectral cube map, introducing errors or useless information into the data that hinder subsequent processing and analysis. Therefore, an image noise reducer based on an automatic codec is used to denoise the image, removing noise from the hyperspectral cube map and improving the clarity and accuracy of the image. In particular, the image noise reducer based on the automatic codec comprises an encoder and a decoder: the encoder uses convolution layers to perform explicit spatial encoding on the hyperspectral cube map to obtain image features, and the decoder uses deconvolution layers to deconvolve the image features to obtain the noise reduction hyperspectral cube map.
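The encode-then-decode denoising idea can be illustrated with a deliberately simplified stand-in: average pooling plays the role of the strided convolutional encoder and nearest-neighbour upsampling the role of the deconvolutional decoder. This is a conceptual sketch only, not the patent's trained auto-codec; the shapes, the synthetic signal and the noise model are all assumptions.

```python
import numpy as np

def encode(img, k=2):
    """Stand-in for the convolutional encoder: k x k average pooling acts as
    a strided spatial encoding that discards high-frequency noise."""
    H, W = img.shape
    return img[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).mean(axis=(1, 3))

def decode(feat, k=2):
    """Stand-in for the deconvolutional decoder: nearest-neighbour upsampling
    restores the spatial resolution of the smoothed image."""
    return np.repeat(np.repeat(feat, k, axis=0), k, axis=1)

# Denoise one spectral band of the cube: a smooth synthetic signal plus noise.
rng = np.random.default_rng(3)
clean = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
noisy = clean + 0.5 * rng.standard_normal((8, 8))
denoised = decode(encode(noisy))
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
print(denoised.shape)  # (8, 8)
```

In the patent's pipeline the encoder and decoder are learned convolution and deconvolution layers rather than fixed pooling and upsampling, but the flow (compress away noise, then reconstruct) is the same.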
Next, consider that the hyperspectral cube map has a three-dimensional data structure: in hyperspectral imaging, the spectral image at each wavelength contains information closely related to the composition of the object, so the cube map has a wavelength hierarchy in its data structure. It should be understood that, since the hyperspectral cube map includes spectral images at a plurality of wavelengths, and the spectral images at different wavelengths carry different amounts of information, the depth fusion module can fuse low-level and high-level features so that the extracted features are more comprehensive and accurate. Therefore, in the technical scheme of the application, each spectral image is treated as image data, and the first convolutional neural network model comprising the depth fusion module is used as a feature extractor to extract high-dimensional local implicit feature distribution information at different levels from the image data at each wavelength in the noise reduction hyperspectral cube map, thereby obtaining a plurality of image feature matrices.
Further, in order to capture the correlation between the spectral features at different wavelengths, the image feature matrices are further arranged into three-dimensional feature tensors according to the channel dimension and then processed through a convolutional neural network. In particular, in order to sufficiently extract the correlation of the spectral features at different wavelengths to more precisely extract the feature information of the food to be detected for detecting the food safety, feature mining of the three-dimensional feature tensor is further performed using a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, in consideration of the fact that the spectral features at different wavelengths have different degrees of correlation feature information. In particular, here, the second convolutional neural network uses a three-dimensional convolutional kernel having a first scale, and the third convolutional neural network uses a three-dimensional convolutional kernel having a second scale, the first scale being different from the second scale. It should be appreciated that by performing feature extraction of the three-dimensional feature tensor using a convolutional neural network of three-dimensional convolution kernels of different scales, correlation feature distribution information about different scales between spectral features at the different wavelengths in the three-dimensional feature tensor can be extracted.
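As a rough sketch of the two streams, the following example applies a naive valid-mode 3D convolution with cubic kernels of two different scales to a small wavelength-by-height-by-width tensor. The averaging kernel weights and the ReLU activation are illustrative assumptions (the patent does not specify them), and mean pooling is folded into the uniform kernel here rather than applied as a separate step.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3d_valid(x, k):
    """Naive valid-mode 3D convolution (cross-correlation) of a volume x with
    a cubic kernel k, implemented via NumPy sliding windows."""
    win = sliding_window_view(x, k.shape)          # (D', H', W', kd, kh, kw)
    return np.einsum('dhwijk,ijk->dhw', win, k)

def branch(x, scale):
    """One stream of the dual-flow model: a 3D convolution with a kernel of
    the given scale, followed by ReLU (a common nonlinear activation; the
    patent names no specific one)."""
    k = np.full((scale,) * 3, 1.0 / scale ** 3)    # hypothetical kernel weights
    return np.maximum(conv3d_valid(x, k), 0.0)

rng = np.random.default_rng(4)
tensor = rng.standard_normal((6, 8, 8))            # wavelength x H x W feature tensor
first_map = branch(tensor, scale=2)                # second CNN: first-scale kernel
second_map = branch(tensor, scale=3)               # third CNN: second-scale kernel
print(first_map.shape, second_map.shape)  # (5, 7, 7) (4, 6, 6)
```

Because the kernel scales differ, the two output maps have different shapes, which is exactly why the fusion step that follows cannot simply add them together.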
And then, in order to comprehensively express the relevance characteristic distribution information of different scales among the spectrum characteristics under different wavelengths so as to facilitate the subsequent classification processing, the first characteristic diagram and the second characteristic diagram are further subjected to characteristic fusion to obtain a multi-scale relevance characteristic diagram.
In particular, in the technical scheme of the application, the first feature map and the second feature map are obtained by encoding the same three-dimensional feature tensor at different scales, and thus have different dimensions, so direct fusion could cause dimension mismatch and loss of part of the information. Meanwhile, the first feature map and the second feature map have different distribution characteristics, and direct fusion might distort the feature distribution, causing the fused feature map to lose its original expressive capability. The two feature maps may also assign different weights to the importance of features, and direct fusion may fail to balance these weights reasonably, resulting in an imbalance of information.
Therefore, in the technical scheme of the application, fusing the first feature map and the second feature map to obtain the multi-scale associated feature map includes: calculating the KL divergence between each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension to obtain a plurality of KL divergence values, and calculating the sum of the plurality of KL divergence values as the geometric similarity of each feature matrix of the first feature map along the channel dimension relative to the global feature distribution of the second feature map; arranging these geometric similarities into a geometric similarity global input vector; inputting the geometric similarity global input vector into a Softmax function to obtain a probabilistic geometric similarity global feature vector; and fusing the first feature map and the second feature map, using the feature value at each position in the probabilistic geometric similarity global feature vector as a weight value, to obtain the multi-scale associated feature map.
In the technical solution of the present application, the geometric similarity between the feature manifold of each feature matrix of the first feature map along the channel dimension and the global feature manifold of the second feature map is measured by KL divergence; the geometric similarity measure is converted into a probability distribution using a Softmax function; feature manifold modulation is performed on the first feature map with the probabilistic geometric similarity global feature vector as a weight vector; and feature manifold integration is performed on the modulated first feature map and the second feature map to obtain the multi-scale associated feature map. In this way, the geometric similarity constraint on the high-dimensional feature distribution of the first feature map relative to the second feature map ensures that the distribution of the multi-scale associated feature map in the high-dimensional space remains similar to that of the original feature maps, thereby avoiding information loss or distortion. Moreover, the expressive power of the multi-scale associated feature map is also enhanced, as it can exploit the correlation and complementarity between the original feature maps to extract more useful information.
Then, the multi-scale associated feature map is further used as a classification feature map for classification processing in a classifier, so as to obtain a classification result indicating whether the food meets the safety standard. In this way, food safety can be effectively detected and evaluated, thereby ensuring the quality and consumption safety of the food.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
Fig. 1 is a block diagram of a food safety detection system according to an embodiment of the present application. As shown in fig. 1, a food safety detection system 100 according to an embodiment of the present application includes: a hyperspectral data acquisition module 110, configured to acquire a hyperspectral cube map of the food to be detected, where the hyperspectral cube map includes spectral images at a plurality of wavelengths; a noise reduction module 120, configured to pass the hyperspectral cube map through an image noise reducer based on an automatic codec to obtain a noise-reduced hyperspectral cube map; a depth feature extraction module 130, configured to pass the spectral image at each wavelength of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through a first convolutional neural network model including a depth fusion module to obtain a plurality of image feature matrices; a three-dimensional arrangement module 140, configured to arrange the plurality of image feature matrices into a three-dimensional feature tensor along the channel dimension; a multi-scale associated feature extraction module 150, configured to pass the three-dimensional feature tensor through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, where the second convolutional neural network uses a three-dimensional convolution kernel having a first scale and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale; a feature fusion module 160, configured to fuse the first feature map and the second feature map to obtain a multi-scale associated feature map; and a detection result generating module 170, configured to pass the multi-scale associated feature map through a classifier as a classification feature map to obtain a classification result, where
the classification result is used to indicate whether the food meets the safety standard.
Fig. 2 is a schematic diagram of a food safety detection system according to an embodiment of the application. As shown in fig. 2, first, a hyperspectral cube map of the food to be detected is acquired, the hyperspectral cube map including spectral images at a plurality of wavelengths. The hyperspectral cube map is then passed through an image noise reducer based on an automatic codec to obtain a noise-reduced hyperspectral cube map. Then, the spectral image at each wavelength of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map is passed through a first convolutional neural network model including a depth fusion module to obtain a plurality of image feature matrices. The plurality of image feature matrices are then arranged along the channel dimension into a three-dimensional feature tensor. Next, the three-dimensional feature tensor is passed through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel having a first scale and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale. Then, the first feature map and the second feature map are fused to obtain a multi-scale associated feature map. Finally, the multi-scale associated feature map is passed through a classifier as a classification feature map to obtain a classification result, where the classification result is used to indicate whether the food meets the safety standard.
In the above food safety detection system 100, the hyperspectral data acquisition module 110 is configured to obtain a hyperspectral cube map of the food to be detected, where the hyperspectral cube map includes spectral images at a plurality of wavelengths.
Hyperspectral imaging is a technique for acquiring spectral information from the surface of an object. By collecting data in many narrow spectral bands, high-resolution spectral measurement can be performed across the visible and near-infrared spectrum. The principle of hyperspectral imaging is based on the property that the surface of an object reflects, absorbs and emits light of different wavelengths. Spectral images of an object at different wavelengths can be acquired using a hyperspectral camera or spectrometer; then, data processing and analysis techniques can extract the spectral features of each pixel and classify, identify or quantitatively analyze the object according to those features. Hyperspectral imaging provides more spectral information than conventional imaging and allows better analysis and identification of the composition and characteristics of an object. It is therefore widely used in fields such as food safety.
Based on the above, in the technical solution of the present application, hyperspectral imaging technology can be adopted to detect food safety. First, a hyperspectral cube map of the food to be detected is obtained; the hyperspectral cube map is subjected to noise reduction to remove the interference of external factors; multi-scale associated feature information among the spectral features of the noise-reduced hyperspectral cube map at different wavelengths is extracted; and whether the food meets the safety standard is judged. In this way, food safety can be effectively detected and evaluated, thereby ensuring the quality and consumption safety of the food. Specifically, in the technical solution of the present application, a hyperspectral cube map of the food to be detected is first acquired by a hyperspectral analyzer.
In the food safety detection system 100, the noise reduction module 120 is configured to pass the hyperspectral cube map through an image noise reducer based on an automatic codec to obtain a noise-reduced hyperspectral cube map. It should be considered that hyperspectral imaging often suffers from noise, such as sensor noise and illumination variations. Such noise affects the quality of the hyperspectral cube map, introducing errors or useless information into the data and making subsequent processing and analysis difficult. Therefore, an image noise reducer based on an automatic codec is used to perform noise reduction on the image, so as to remove the noise in the hyperspectral cube map and improve the clarity and accuracy of the image.
Fig. 3 is a block diagram of a noise reduction module in a food safety detection system according to an embodiment of the application. As shown in fig. 3, the noise reduction module 120 includes: an encoding unit 121, configured to input the hyperspectral cube map into an encoder of the image noise reducer, where the encoder uses convolution layers to perform explicit spatial encoding on the hyperspectral cube map to obtain image features; and a decoding unit 122, configured to input the image features into a decoder of the image noise reducer, where the decoder uses deconvolution layers to deconvolve the image features to obtain the noise-reduced hyperspectral cube map.
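As an illustrative aside (not part of the patent disclosure), the encode–decode round trip described above can be sketched in NumPy. The 8×8 spectral band, the 3×3 averaging kernel, and the single convolution/deconvolution layer pair are all hypothetical stand-ins for the learned layers of the image noise reducer:

```python
import numpy as np

def conv2d(x, k, stride=1):
    # valid 2D convolution (no padding): the encoder's explicit spatial encoding
    H, W = x.shape
    kh, kw = k.shape
    oh, ow = (H - kh) // stride + 1, (W - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def deconv2d(y, k, stride=1):
    # transposed convolution: scatter each value through the kernel (decoder)
    oh = (y.shape[0] - 1) * stride + k.shape[0]
    ow = (y.shape[1] - 1) * stride + k.shape[1]
    out = np.zeros((oh, ow))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            out[i*stride:i*stride+k.shape[0], j*stride:j*stride+k.shape[1]] += y[i, j] * k
    return out

band = np.random.rand(8, 8)            # one spectral band of the hyperspectral cube
kernel = np.full((3, 3), 1 / 9.0)      # averaging kernel as a toy denoising filter
feat = conv2d(band, kernel)            # encoder: spatial encoding to image features
recon = deconv2d(feat, kernel)         # decoder: deconvolution back to image space
assert recon.shape == band.shape       # round trip restores the original size
```

Note that the deconvolution output size formula `(n - 1) * stride + k` is exactly the inverse of the valid-convolution formula, which is why the reconstruction recovers the input's spatial dimensions.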
In the above food safety detection system 100, the depth feature extraction module 130 is configured to pass the spectral image at each wavelength of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through a first convolutional neural network model including a depth fusion module to obtain a plurality of image feature matrices. Considering that the hyperspectral cube map has a three-dimensional data structure, the spectral image at each wavelength contains information closely related to the composition of the object. That is, the hyperspectral cube map has a wavelength hierarchy in its data structure. It should be understood that, since the hyperspectral cube map includes spectral images at a plurality of wavelengths and the amount of information contained at different wavelengths differs, the depth fusion module can fuse low-level and high-level features so that the extracted features are more comprehensive and accurate. Therefore, in the technical solution of the present application, each spectral image is treated as image data, and the first convolutional neural network model including the depth fusion module is used as a feature extractor to extract high-dimensional local implicit feature distribution information at different levels of the image data at each wavelength in the noise-reduced hyperspectral cube map, thereby obtaining the plurality of image feature matrices.
Fig. 4 is a block diagram of a depth feature extraction module in a food safety detection system according to an embodiment of the application. As shown in fig. 4, the depth feature extraction module 130 includes: a shallow feature extraction unit 131, configured to obtain a shallow feature map from an M-th layer of the first convolutional neural network model, where M is greater than or equal to 4 and less than or equal to 6; a deep feature extraction unit 132, configured to obtain a deep feature map from an N-th layer of the first convolutional neural network model, where N/M is greater than or equal to 5 and less than or equal to 10; a fusion unit 133, configured to fuse the shallow feature map and the deep feature map using the depth fusion module of the first convolutional neural network model to obtain a fused feature map; and a pooling unit 134, configured to perform mean pooling on the fused feature map along the channel dimension to obtain each image feature matrix.
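The fusion and pooling steps performed by units 133 and 134 can be sketched as follows. The feature-map shapes, the matching channel counts, and the fixed 0.5/0.5 fusion weights are illustrative assumptions; a real depth fusion module would learn its fusion weights (and typically align channel counts with 1×1 convolutions):

```python
import numpy as np

# Hypothetical activations: a shallow map from layer M and a deep map from
# layer N, assumed here to share the same (channels, H, W) shape.
shallow = np.random.rand(16, 32, 32)
deep = np.random.rand(16, 32, 32)

# depth fusion: element-wise weighted sum (fixed weights as a stand-in for
# the learned fusion performed by the depth fusion module)
fused = 0.5 * shallow + 0.5 * deep

# mean pooling along the channel dimension collapses the fused map to a
# single image feature matrix, as the pooling unit 134 does
feature_matrix = fused.mean(axis=0)
assert feature_matrix.shape == (32, 32)
```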
In the food safety detection system 100, the three-dimensional arrangement module 140 is configured to arrange the plurality of image feature matrices into a three-dimensional feature tensor along the channel dimension. Further, in order to capture the correlation between the spectral features at different wavelengths, the image feature matrices are arranged into a three-dimensional feature tensor along the channel dimension and then processed by a convolutional neural network.
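This arrangement amounts to stacking the per-wavelength feature matrices along a new channel axis; a minimal sketch with hypothetical shapes (5 wavelengths, 32×32 matrices):

```python
import numpy as np

# hypothetical per-wavelength image feature matrices from module 130
matrices = [np.random.rand(32, 32) for _ in range(5)]

# arranging along the channel dimension yields the three-dimensional
# feature tensor consumed by the dual-flow network
tensor = np.stack(matrices, axis=0)
assert tensor.shape == (5, 32, 32)
```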
In the food safety detection system 100, the multi-scale associated feature extraction module 150 is configured to pass the three-dimensional feature tensor through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, where the second convolutional neural network uses a three-dimensional convolution kernel having a first scale and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale. The spectral features at different wavelengths carry associated feature information at different scales. In order to sufficiently extract the correlation of the spectral features at different wavelengths, and thus more accurately extract the feature information of the food to be detected, feature mining of the three-dimensional feature tensor is performed using the dual-flow network model including the second convolutional neural network and the third convolutional neural network, so as to obtain the first feature map and the second feature map. It should be appreciated that, by performing feature extraction on the three-dimensional feature tensor using convolutional neural networks with three-dimensional convolution kernels of different scales, associated feature distribution information at different scales among the spectral features at the different wavelengths in the three-dimensional feature tensor can be extracted.
Accordingly, in one specific example, the multi-scale associated feature extraction module 150 includes: a first-scale feature extraction unit, configured to use the second convolutional neural network with the three-dimensional convolution kernel of the first scale to perform, on the input data in the forward pass of each layer, convolution processing, mean pooling processing and nonlinear activation processing based on the three-dimensional convolution kernel having the first scale to obtain the first feature map; and a second-scale feature extraction unit, configured to use the third convolutional neural network with the three-dimensional convolution kernel of the second scale to perform, on the input data in the forward pass of each layer, convolution processing, mean pooling processing and nonlinear activation processing based on the three-dimensional convolution kernel having the second scale to obtain the second feature map.
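A naive single-layer sketch of the two branches, under illustrative assumptions: toy kernel sizes (2×2×2 and 3×3×3 stand in for the first and second scales), ReLU stands in for the nonlinear activation, and the mean pooling step is omitted for brevity:

```python
import numpy as np

def conv3d(x, k):
    # naive valid 3D convolution over (depth, H, W), followed by ReLU
    D, H, W = x.shape
    kd, kh, kw = k.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[d, i, j] = np.sum(x[d:d+kd, i:i+kh, j:j+kw] * k)
    return np.maximum(out, 0.0)  # nonlinear activation

tensor = np.random.rand(5, 16, 16)       # the three-dimensional feature tensor
k_small = np.full((2, 2, 2), 1 / 8.0)    # first-scale 3D kernel (hypothetical)
k_large = np.full((3, 3, 3), 1 / 27.0)   # second-scale 3D kernel (hypothetical)
first_map = conv3d(tensor, k_small)      # second CNN branch output
second_map = conv3d(tensor, k_large)     # third CNN branch output
assert first_map.shape == (4, 15, 15)
assert second_map.shape == (3, 14, 14)
```

The two kernel sizes slide over the wavelength (depth) axis as well as the spatial axes, which is how inter-wavelength correlations at two different scales are captured.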
In the above food safety detection system 100, the feature fusion module 160 is configured to fuse the first feature map and the second feature map to obtain a multi-scale associated feature map. In order to comprehensively represent the associated feature distribution information at different scales among the spectral features at different wavelengths, so as to facilitate subsequent classification processing, the first feature map and the second feature map are further fused. In particular, in the technical solution of the present application, the first feature map and the second feature map are obtained by encoding the same three-dimensional feature tensor at different scales and therefore have different dimensions, so that direct fusion may cause dimension mismatch and a consequent loss of partial information. Meanwhile, the first feature map and the second feature map have different feature distributions, and direct fusion may distort those distributions, causing the fused feature map to lose its original feature expression capability. The first feature map and the second feature map may also assign different weights to the importance of features, and direct fusion may fail to combine the two sets of weights reasonably, resulting in an imbalance of information.
Therefore, in the solution of the present application, the feature fusion module 160 includes: a KL divergence calculation unit, configured to calculate the KL divergence between each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension to obtain a plurality of KL divergence values; a geometric similarity calculation unit, configured to calculate the sum of the plurality of KL divergence values as the geometric similarity of each feature matrix of the first feature map along the channel dimension relative to the global feature distribution of the second feature map; an arrangement unit, configured to arrange the geometric similarities of the feature matrices of the first feature map along the channel dimension relative to the global feature distribution of the second feature map into a geometric similarity global input vector; an activation unit, configured to input the geometric similarity global input vector into a Softmax function to obtain a probabilistic geometric similarity global feature vector; and a weighting unit, configured to fuse the first feature map and the second feature map, with the feature value at each position of the probabilistic geometric similarity global feature vector serving as a weight value, to obtain the multi-scale associated feature map.
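The procedure of the feature fusion module 160 can be sketched end to end in NumPy. Two illustrative assumptions are made: each channel matrix is turned into a discrete distribution via a softmax over its flattened entries (the patent does not specify the normalization used before the KL computation), and the two feature maps are assumed to share a common shape:

```python
import numpy as np

def to_dist(m):
    # treat a feature matrix as a discrete distribution (assumed normalization)
    e = np.exp(m.flatten() - m.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * np.log(p / q)))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def kl_fuse(F1, F2):
    # geometric similarity of each channel of F1 relative to the global
    # feature distribution of F2: sum of KL divergences against all channels
    sims = [sum(kl(to_dist(m1), to_dist(m2)) for m2 in F2) for m1 in F1]
    w = softmax(np.array(sims))          # probabilistic similarity feature vector
    modulated = F1 * w[:, None, None]    # channel-wise modulation of F1
    return modulated + F2                # integration with F2 (additive stand-in)

F1 = np.random.rand(4, 8, 8)
F2 = np.random.rand(4, 8, 8)
fused = kl_fuse(F1, F2)
assert fused.shape == (4, 8, 8)
```

When F1 and F2 are identical, every KL divergence is zero, the softmax weights become uniform (1/C per channel), and the fusion reduces to a predictable scaling — a convenient sanity check for the implementation.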
In the technical solution of the present application, the geometric similarity between the feature manifold of each feature matrix of the first feature map along the channel dimension and the global feature manifold of the second feature map is measured by KL divergence; the geometric similarity measure is converted into a probability distribution using a Softmax function; feature manifold modulation is performed on the first feature map with the probabilistic geometric similarity global feature vector as a weight vector; and feature manifold integration is performed on the modulated first feature map and the second feature map to obtain the multi-scale associated feature map. In this way, the geometric similarity constraint on the high-dimensional feature distribution of the first feature map relative to the second feature map ensures that the distribution of the multi-scale associated feature map in the high-dimensional space remains similar to that of the original feature maps, thereby avoiding information loss or distortion. Moreover, the expressive power of the multi-scale associated feature map is also enhanced, as it can exploit the correlation and complementarity between the original feature maps to extract more useful information.
In the above food safety detection system 100, the detection result generating module 170 is configured to pass the multi-scale associated feature map through a classifier as a classification feature map to obtain a classification result, where the classification result is used to indicate whether the food meets the safety standard. That is, the multi-scale associated feature map is used as the classification feature map for classification processing in the classifier, so as to obtain a classification result indicating whether the food meets the safety standard. In this way, food safety can be effectively detected and evaluated, thereby ensuring the quality and consumption safety of the food.
Accordingly, in a specific example, the detection result generating module 170 is configured to process the multi-scale associated feature map using the classifier with the following classification formula to generate the classification result:
softmax{(M_c, B_c) | Project(F)}

wherein Project(F) represents projecting the multi-scale associated feature map as a vector, M_c represents the weight matrix of the fully connected layer, B_c represents the bias matrix of the fully connected layer, and softmax represents the normalized exponential function.
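A minimal sketch of this classification formula, with hypothetical shapes and randomly initialized (M_c, B_c) standing in for the trained fully connected layer:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def classify(feature_map, Mc, Bc):
    # Project(F): flatten the multi-scale associated feature map into a vector,
    # then apply the fully connected layer (Mc, Bc) and the softmax function
    f = feature_map.flatten()
    logits = Mc @ f + Bc
    return softmax(logits)

F = np.random.rand(2, 4, 4)        # hypothetical classification feature map
Mc = np.random.randn(2, F.size)    # weight matrix: 2 classes (safe / unsafe)
Bc = np.zeros(2)                   # bias vector
probs = classify(F, Mc, Bc)
label = "meets safety standard" if probs[0] > probs[1] else "fails safety standard"
assert np.isclose(probs.sum(), 1.0)
```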
In summary, the food safety detection system according to the embodiment of the present application has been described. It adopts hyperspectral imaging technology to detect food safety: first, a hyperspectral cube map of the food to be detected is obtained; the hyperspectral cube map is subjected to noise reduction to remove the interference of external factors; multi-scale associated feature information among the spectral features of the noise-reduced hyperspectral cube map at different wavelengths is extracted; and whether the food meets the safety standard is judged. In this way, food safety can be effectively detected and evaluated, thereby ensuring the quality and consumption safety of the food.
Exemplary method
Fig. 5 is a flowchart of a food safety detection method according to an embodiment of the present application. As shown in fig. 5, the food safety detection method according to the embodiment of the present application includes the steps of: S110, acquiring a hyperspectral cube map of the food to be detected, wherein the hyperspectral cube map includes spectral images at a plurality of wavelengths; S120, passing the hyperspectral cube map through an image noise reducer based on an automatic codec to obtain a noise-reduced hyperspectral cube map; S130, passing the spectral image at each wavelength of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through a first convolutional neural network model including a depth fusion module to obtain a plurality of image feature matrices; S140, arranging the plurality of image feature matrices into a three-dimensional feature tensor along the channel dimension; S150, passing the three-dimensional feature tensor through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel having a first scale and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale; S160, fusing the first feature map and the second feature map to obtain a multi-scale associated feature map; and S170, passing the multi-scale associated feature map through a classifier as a classification feature map to obtain a classification result, wherein the classification result is used to indicate whether the food meets the safety standard.
In a specific example, in the above food safety detection method, the step S120 of passing the hyperspectral cube map through an image noise reducer based on an automatic codec to obtain a noise-reduced hyperspectral cube map includes: inputting the hyperspectral cube map into an encoder of the image noise reducer, wherein the encoder uses convolution layers to perform explicit spatial encoding on the hyperspectral cube map to obtain image features; and inputting the image features into a decoder of the image noise reducer, wherein the decoder uses deconvolution layers to deconvolve the image features to obtain the noise-reduced hyperspectral cube map.
In a specific example, in the above food safety detection method, the step S130 of passing the spectral image at each wavelength of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through a first convolutional neural network model including a depth fusion module to obtain a plurality of image feature matrices includes: obtaining a shallow feature map from an M-th layer of the first convolutional neural network model, where M is greater than or equal to 4 and less than or equal to 6; obtaining a deep feature map from an N-th layer of the first convolutional neural network model, where N/M is greater than or equal to 5 and less than or equal to 10; fusing the shallow feature map and the deep feature map using the depth fusion module of the first convolutional neural network model to obtain a fused feature map; and performing mean pooling on the fused feature map along the channel dimension to obtain each image feature matrix.
In a specific example, in the above food safety detection method, the step S150 of passing the three-dimensional feature tensor through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map includes: using the second convolutional neural network with the three-dimensional convolution kernel of the first scale to perform, on the input data in the forward pass of each layer, convolution processing, mean pooling processing and nonlinear activation processing based on the three-dimensional convolution kernel having the first scale to obtain the first feature map; and using the third convolutional neural network with the three-dimensional convolution kernel of the second scale to perform, on the input data in the forward pass of each layer, convolution processing, mean pooling processing and nonlinear activation processing based on the three-dimensional convolution kernel having the second scale to obtain the second feature map.
In a specific example, in the above food safety detection method, the step S160 of fusing the first feature map and the second feature map to obtain a multi-scale associated feature map includes: calculating the KL divergence between each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension to obtain a plurality of KL divergence values, and calculating the sum of the plurality of KL divergence values as the geometric similarity of each feature matrix of the first feature map along the channel dimension relative to the global feature distribution of the second feature map; arranging these geometric similarities into a geometric similarity global input vector; inputting the geometric similarity global input vector into a Softmax function to obtain a probabilistic geometric similarity global feature vector; and fusing the first feature map and the second feature map, with the feature value at each position of the probabilistic geometric similarity global feature vector serving as a weight value, to obtain the multi-scale associated feature map.
In a specific example, in the above food safety detection method, the step S170 of passing the multi-scale associated feature map through a classifier as a classification feature map to obtain a classification result, where the classification result is used to indicate whether the food meets the safety standard, includes: processing the multi-scale associated feature map using the classifier with the following classification formula to generate the classification result:
softmax{(M_c, B_c) | Project(F)}

wherein Project(F) represents projecting the multi-scale associated feature map as a vector, M_c represents the weight matrix of the fully connected layer, B_c represents the bias matrix of the fully connected layer, and softmax represents the normalized exponential function.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above food safety detection method have been described in detail in the above description of the food safety detection system with reference to fig. 1 to 4, and thus, repetitive descriptions thereof will be omitted.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 6. Fig. 6 is a block diagram of an electronic device according to an embodiment of the application.
As shown in fig. 6, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the food safety detection system of the various embodiments of the present application described above and/or other desired functions. Various contents such as the hyperspectral cube map of the food to be detected may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information including the classification result and the like to the outside. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 6; components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in a food safety detection method according to various embodiments of the application described in the "exemplary methods" section of this specification.
The computer program product may include program code for performing the operations of embodiments of the present application, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in a food safety detection method according to various embodiments of the present application described in the above "exemplary method" section of the present specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not intended to be limiting, and these advantages, benefits, effects, etc. are not to be considered as essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", and "having" are open-ended words that mean "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to".
It is also noted that in the apparatuses, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be considered equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A food safety detection system, comprising:
a hyperspectral data acquisition module configured to acquire a hyperspectral cube map of food to be detected, wherein the hyperspectral cube map comprises spectral images at a plurality of wavelengths;
a noise reduction module configured to pass the hyperspectral cube map through an automatic-codec-based image noise reducer to obtain a noise-reduced hyperspectral cube map;
a depth feature extraction module configured to pass each of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through a first convolutional neural network model comprising a depth fusion module to obtain a plurality of image feature matrices;
a three-dimensional arrangement module configured to arrange the plurality of image feature matrices into a three-dimensional feature tensor along a channel dimension;
a multi-scale associated feature extraction module configured to pass the three-dimensional feature tensor through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel having a first scale and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale;
a feature fusion module configured to fuse the first feature map and the second feature map to obtain a multi-scale associated feature map; and
a detection result generation module configured to pass the multi-scale associated feature map, as a classification feature map, through a classifier to obtain a classification result, wherein the classification result indicates whether the food meets a safety standard.
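The end-to-end data flow recited in claim 1 can be sketched at the shape level as follows. This is a minimal, dependency-light illustration only: every function body, array dimension, and kernel here is an assumed placeholder (e.g. a 12-band cube, 8×8 feature matrices, a two-class output), not the claimed network configuration.

```python
import numpy as np

def denoise(cube):
    # stand-in for the automatic-codec-based image noise reducer (identity here)
    return cube

def per_wavelength_features(cube, d=8):
    # stand-in for the first CNN with depth fusion: one d x d matrix per band
    return [band[:d, :d] for band in cube]

def dual_stream(tensor):
    # stand-in for the two 3D-conv streams at two kernel scales
    return tensor, 0.5 * tensor

def fuse(f1, f2):
    # stand-in for the multi-scale feature fusion step
    return 0.5 * (f1 + f2)

def classify(feature_map):
    # Project(F) to a vector, then softmax over two classes (safe / unsafe)
    v = feature_map.reshape(-1)
    logits = np.array([v.mean(), -v.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

cube = np.random.rand(12, 64, 64)               # hyperspectral cube: 12 wavelengths
mats = per_wavelength_features(denoise(cube))   # 12 image feature matrices
tensor = np.stack(mats, axis=0)                 # 3D feature tensor along channel dim
f1, f2 = dual_stream(tensor)                    # first / second feature maps
probs = classify(fuse(f1, f2))                  # classification result
print(tensor.shape, probs.shape)                # (12, 8, 8) (2,)
```

The stand-in bodies are trivial on purpose; the point is only the order of the claimed modules and the shapes each one passes along.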
2. The food safety detection system of claim 1, wherein the noise reduction module comprises:
an encoding unit configured to input the hyperspectral cube map into an encoder of the image noise reducer, wherein the encoder performs explicit spatial encoding on the hyperspectral cube map using convolutional layers to obtain image features; and
a decoding unit configured to input the image features into a decoder of the image noise reducer, wherein the decoder performs deconvolution processing on the image features using deconvolution layers to obtain the noise-reduced hyperspectral cube map.
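A minimal single-channel sketch of the convolution/deconvolution pair that claim 2 describes. The 3×3 averaging kernel, the 16×16 band size, and the single-layer depth are all illustrative assumptions; the actual noise reducer's learned weights and layer counts are not specified by the claim.

```python
import numpy as np

def conv2d(x, k):
    # 'valid' 2D convolution: the encoder's explicit spatial encoding step
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def deconv2d(y, k):
    # transposed convolution: scatter each value through the kernel,
    # recovering the spatial size that a 'valid' conv2d removed
    kh, kw = k.shape
    H, W = y.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + kh, j:j + kw] += y[i, j] * k
    return out

rng = np.random.default_rng(0)
band = rng.random((16, 16))        # one spectral image of the cube
k = np.full((3, 3), 1.0 / 9.0)     # smoothing kernel as a stand-in for learned weights
code = conv2d(band, k)             # encoder output: 14 x 14 image features
recon = deconv2d(code, k)          # decoder output: back to 16 x 16
print(code.shape, recon.shape)     # (14, 14) (16, 16)
```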
3. The food safety detection system of claim 2, wherein the depth feature extraction module comprises:
a shallow feature extraction unit configured to obtain a shallow feature map from an M-th layer of the first convolutional neural network model, wherein 4 ≤ M ≤ 6;
a deep feature extraction unit configured to obtain a deep feature map from an N-th layer of the first convolutional neural network model, wherein 5 ≤ N/M ≤ 10;
a fusion unit configured to fuse the shallow feature map and the deep feature map using the depth fusion module of the first convolutional neural network model to obtain a fused feature map; and
a pooling unit configured to perform mean pooling on the fused feature map along the channel dimension to obtain the image feature matrix.
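The depth-fusion step of claim 3 can be sketched as below. The claim fixes where the two maps come from (layer M and layer N with 5 ≤ N/M ≤ 10) but not the fusion rule, so element-wise averaging is used here as an assumed placeholder; the 32×8×8 map shape is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
shallow = rng.random((32, 8, 8))   # shallow map, e.g. from layer M = 4
deep = rng.random((32, 8, 8))      # deep map, e.g. from layer N = 20 (N/M = 5)

fused = 0.5 * (shallow + deep)        # depth fusion (illustrative averaging)
feature_matrix = fused.mean(axis=0)   # mean pooling along the channel dimension
print(feature_matrix.shape)           # (8, 8): one image feature matrix per band
```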
4. The food safety detection system of claim 3, wherein the multi-scale associated feature extraction module comprises:
a first-scale feature extraction unit configured to use each layer of the second convolutional neural network, in a forward pass, to perform on input data: convolution processing based on the three-dimensional convolution kernel having the first scale, mean pooling processing, and nonlinear activation processing, so as to obtain the first feature map; and
a second-scale feature extraction unit configured to use each layer of the third convolutional neural network, in a forward pass, to perform on input data: convolution processing based on the three-dimensional convolution kernel having the second scale, mean pooling processing, and nonlinear activation processing, so as to obtain the second feature map.
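One layer of each stream in claim 4 can be sketched as a 3D convolution followed by mean pooling and a ReLU activation. The kernel scales (3×3×3 and 5×5×5), the averaging kernel values, the 2×2 spatial pooling, and the 8×20×20 input tensor are all assumptions for illustration; the claim specifies only the sequence of operations.

```python
import numpy as np

def conv3d(x, k):
    # naive 'valid' 3D convolution over the feature tensor
    kd, kh, kw = k.shape
    D, H, W = x.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for d in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[d, i, j] = np.sum(x[d:d + kd, i:i + kh, j:j + kw] * k)
    return out

def mean_pool2(y):
    # 2x2 mean pooling over the two spatial axes (stride 2)
    D, H, W = y.shape
    H2, W2 = H // 2, W // 2
    return y[:, :H2 * 2, :W2 * 2].reshape(D, H2, 2, W2, 2).mean(axis=(2, 4))

def stream(x, scale):
    k = np.full((scale,) * 3, 1.0 / scale ** 3)       # illustrative kernel
    return np.maximum(mean_pool2(conv3d(x, k)), 0.0)  # conv -> pool -> ReLU

rng = np.random.default_rng(0)
tensor = rng.random((8, 20, 20))   # three-dimensional feature tensor
f1 = stream(tensor, 3)             # first scale: 3 x 3 x 3 kernel
f2 = stream(tensor, 5)             # second scale: 5 x 5 x 5 kernel
print(f1.shape, f2.shape)          # (6, 9, 9) (4, 8, 8)
```

Because the two kernels have different receptive fields, the two output maps capture spectral-spatial correlations at two scales, which is what the later fusion step combines.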
5. The food safety detection system of claim 4, wherein the feature fusion module comprises:
a KL divergence calculation unit configured to calculate KL divergences between each feature matrix of the first feature map along a channel dimension and each feature matrix of the second feature map along the channel dimension to obtain a plurality of KL divergence values;
a geometric similarity calculation unit configured to calculate the sum of the plurality of KL divergence values as a geometric similarity of each feature matrix of the first feature map along the channel dimension relative to a global feature distribution of the second feature map;
an arrangement unit configured to arrange the geometric similarities of the feature matrices of the first feature map along the channel dimension relative to the global feature distribution of the second feature map into a geometric similarity global input vector;
an activation unit configured to input the geometric similarity global input vector into a Softmax function to obtain a probabilistic geometric similarity global feature vector; and
a weighting unit configured to fuse the first feature map and the second feature map using the feature values at respective positions of the probabilistic geometric similarity global feature vector as weight values, so as to obtain the multi-scale associated feature map.
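The KL-divergence fusion of claim 5 can be sketched numerically as follows. Two details the claim leaves open are filled in as stated assumptions: each feature matrix is normalized into a probability distribution before computing KL divergence, and the per-channel softmax weight blends the two maps as a convex combination. The 6×4×4 map shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
f1 = rng.random((6, 4, 4)) + 1e-3   # first feature map (channels x H x W)
f2 = rng.random((6, 4, 4)) + 1e-3   # second feature map, same shape

def to_dist(m):
    # normalize a feature matrix into a distribution (assumed preprocessing)
    return m / m.sum()

def kl(p, q):
    # KL divergence between two distributions of equal shape
    return float(np.sum(p * np.log(p / q)))

# KL of each channel of f1 against every channel of f2; the sum over f2's
# channels is the geometric similarity score for that channel of f1
scores = np.array([
    sum(kl(to_dist(f1[i]), to_dist(f2[j])) for j in range(f2.shape[0]))
    for i in range(f1.shape[0])
])

w = np.exp(scores - scores.max())
w /= w.sum()                         # Softmax: probabilistic similarity vector

# per-channel weighted fusion (convex blend is an assumed weighting rule)
fused = w[:, None, None] * f1 + (1.0 - w)[:, None, None] * f2
print(fused.shape)                   # (6, 4, 4): multi-scale associated map
```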
6. The food safety detection system of claim 5, wherein the detection result generation module is configured to process the multi-scale associated feature map using the classifier to generate the classification result according to the following classification formula:
softmax{(M_c, B_c) | Project(F)}
wherein Project(F) denotes projecting the multi-scale associated feature map as a vector, M_c denotes the weight matrix of the fully connected layer, B_c denotes the bias matrix of the fully connected layer, and softmax denotes the normalized exponential function.
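A numeric reading of the formula in claim 6: flatten the map (Project(F)), apply a fully connected layer, then softmax. The random weights M_c and B_c and the two-class output (modeling "meets / fails the safety standard") are placeholders, since the trained parameters are not part of the claim.

```python
import numpy as np

rng = np.random.default_rng(3)
feature_map = rng.random((6, 4, 4))          # multi-scale associated feature map

v = feature_map.reshape(-1)                  # Project(F): flatten to a vector
M_c = rng.standard_normal((2, v.size))       # weight matrix of the FC layer (placeholder)
B_c = rng.standard_normal(2)                 # bias of the FC layer (placeholder)

logits = M_c @ v + B_c                       # fully connected layer
e = np.exp(logits - logits.max())
probs = e / e.sum()                          # softmax: normalized exponential
label = int(np.argmax(probs))                # 0 / 1, e.g. safe vs. unsafe (assumed)
print(probs.shape, label)
```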
7. A food safety detection method, comprising:
acquiring a hyperspectral cube map of food to be detected, wherein the hyperspectral cube map comprises spectral images at a plurality of wavelengths;
passing the hyperspectral cube map through an automatic-codec-based image noise reducer to obtain a noise-reduced hyperspectral cube map;
passing each of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through a first convolutional neural network model comprising a depth fusion module to obtain a plurality of image feature matrices;
arranging the plurality of image feature matrices into a three-dimensional feature tensor along a channel dimension;
passing the three-dimensional feature tensor through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel having a first scale and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale;
fusing the first feature map and the second feature map to obtain a multi-scale associated feature map; and
passing the multi-scale associated feature map, as a classification feature map, through a classifier to obtain a classification result, wherein the classification result indicates whether the food meets a safety standard.
8. The method of claim 7, wherein passing the hyperspectral cube map through the automatic-codec-based image noise reducer to obtain the noise-reduced hyperspectral cube map comprises:
inputting the hyperspectral cube map into an encoder of the image noise reducer, wherein the encoder performs explicit spatial encoding on the hyperspectral cube map using convolutional layers to obtain image features; and
inputting the image features into a decoder of the image noise reducer, wherein the decoder performs deconvolution processing on the image features using deconvolution layers to obtain the noise-reduced hyperspectral cube map.
9. The method of claim 8, wherein passing each of the spectral images at the plurality of wavelengths in the noise-reduced hyperspectral cube map through the first convolutional neural network model comprising the depth fusion module to obtain the plurality of image feature matrices comprises:
obtaining a shallow feature map from an M-th layer of the first convolutional neural network model, wherein 4 ≤ M ≤ 6;
obtaining a deep feature map from an N-th layer of the first convolutional neural network model, wherein 5 ≤ N/M ≤ 10;
fusing the shallow feature map and the deep feature map using the depth fusion module of the first convolutional neural network model to obtain a fused feature map; and
performing mean pooling on the fused feature map along the channel dimension to obtain the image feature matrix.
10. The method of claim 9, wherein passing the three-dimensional feature tensor through the dual-flow network model comprising the second convolutional neural network and the third convolutional neural network to obtain the first feature map and the second feature map comprises:
using each layer of the second convolutional neural network, in a forward pass, to perform on input data: convolution processing based on the three-dimensional convolution kernel having the first scale, mean pooling processing, and nonlinear activation processing, so as to obtain the first feature map; and
using each layer of the third convolutional neural network, in a forward pass, to perform on input data: convolution processing based on the three-dimensional convolution kernel having the second scale, mean pooling processing, and nonlinear activation processing, so as to obtain the second feature map.
CN202310881627.1A 2023-07-18 2023-07-18 Food safety detection system and method thereof Pending CN116858789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310881627.1A CN116858789A (en) 2023-07-18 2023-07-18 Food safety detection system and method thereof


Publications (1)

Publication Number Publication Date
CN116858789A true CN116858789A (en) 2023-10-10

Family

ID=88233886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310881627.1A Pending CN116858789A (en) 2023-07-18 2023-07-18 Food safety detection system and method thereof

Country Status (1)

Country Link
CN (1) CN116858789A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117392520A (en) * 2023-10-24 2024-01-12 江苏权正检验检测有限公司 Intelligent data sharing method and system for food inspection and detection



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination