CN115187870A - Marine plastic waste material identification method and system, electronic equipment and storage medium - Google Patents

Marine plastic waste material identification method and system, electronic equipment and storage medium

Info

Publication number
CN115187870A
Authority
CN
China
Prior art keywords
image
model
recognition
classification
hyperspectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211109135.2A
Other languages
Chinese (zh)
Other versions
CN115187870B (en)
Inventor
陈光辉
贺玮
方敏
陈亚红
周一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lanjing Technology Co ltd
Zhejiang Lanjing Technology Co ltd Hangzhou Branch
Original Assignee
Zhejiang Lanjing Technology Co ltd
Zhejiang Lanjing Technology Co ltd Hangzhou Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lanjing Technology Co ltd, Zhejiang Lanjing Technology Co ltd Hangzhou Branch filed Critical Zhejiang Lanjing Technology Co ltd
Priority to CN202211109135.2A priority Critical patent/CN115187870B/en
Publication of CN115187870A publication Critical patent/CN115187870A/en
Application granted granted Critical
Publication of CN115187870B publication Critical patent/CN115187870B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G06V10/20 Image preprocessing
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y02W30/62 Plastics recycling; Rubber recycling

Abstract

The invention discloses a method and a system for identifying the material of marine plastic waste, together with an electronic device and a storage medium, and belongs to the field of image recognition. Hyperspectral images of marine plastic waste are collected and marked with a first identification label; an integrated classification recognition model is trained to obtain a second identification label and a recognition probability for each image, and a third identification label is added when the recognition probability is below a preset standard. For such images, the first identification label and the second identification label are compared; if they disagree, the image is marked as a misjudgment and stored in a second training set, which is then used to optimize the model. The trained integrated classification recognition model is used to identify the marine plastic waste to be recognized. The integrated classification recognition model fuses three classification models, namely a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model, and is trained on multiple image sample sets obtained by random sampling, so that marine plastic waste materials can be identified with high accuracy and high efficiency.

Description

Marine plastic waste material identification method and system, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a method and a system for recognizing marine plastic waste materials, electronic equipment and a storage medium.
Background
The treatment of marine plastic waste has long been an urgent problem for the global marine environment; collecting marine plastic waste for use as a raw material makes the recycling of marine plastics possible.
Some plastics are very similar in appearance and color, so plastic waste sorting is slow, difficult and costly. Marine plastic waste, which has been soaked in seawater to some degree, is even harder to identify. Recycling marine plastics therefore requires both accurate sorting of the different types of marine plastic waste and attention to recycling efficiency and cost.
Because the sorting equipment in common use today is bulky and difficult to carry, it is usually deployed in plastic recycling plants or laboratories, and its high configuration requirements prevent it from being widely adopted at the plastic waste collection stage. In addition, such equipment usually relies on near-infrared sensors and sorts and identifies materials according to the reflectance spectra they exhibit; to reach adequate identification accuracy the plastics must first be washed, crushed and otherwise pretreated, which is inefficient and unsuitable for identifying the material of marine plastic waste.
Disclosure of Invention
To address the defects of the prior art, the invention discloses a method, a system, an electronic device and a storage medium for identifying the material of marine plastic waste.
The invention provides a method for identifying the material quality of marine plastic waste, which comprises the following steps:
collecting hyperspectral images of the marine plastic waste, marking a first identification label, preprocessing the hyperspectral images to obtain a first training set;
training an integrated type classification recognition model by using a first training set, acquiring a second recognition label and a corresponding recognition probability of an image by using the integrated type classification recognition model, judging whether the recognition probability is smaller than a preset standard or not by using a classification screening model, and marking a third recognition label if the recognition probability is smaller than the preset standard; the integrated classification recognition model respectively adopts a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as a base learner in the first layer;
in the training stage, aiming at the image with the third identification label, comparing the results of the first identification label and the second identification label of the image, if the results are inconsistent, marking the image as misjudgment, storing the misjudgment into a second training set, and optimizing the integrated type classification identification model by utilizing the second training set;
and classifying the marine plastic waste to be recognized by adopting the trained integrated classification recognition model to obtain a recognition result.
Further, when the hyperspectral image is preprocessed, the hyperspectral image is corrected by replacing the background color of the original hyperspectral image, and the formula is as follows:
R_c(x, y, λ) = [R_0(x, y, λ) - R_b(x, y, λ)] / [R_w(x, y, λ) - R_b(x, y, λ)]
wherein R_c(x, y, λ) represents the corrected hyperspectral image, (x, y) represents the coordinate position of a pixel point in the hyperspectral image, and λ represents the wavelength; R_0(x, y, λ) represents the original hyperspectral image; R_b(x, y, λ) represents the black background reference image; and R_w(x, y, λ) represents the white background reference image.
Further, the first training set includes a first image sample set, a second image sample set, and a third image sample set, and the obtaining method includes:
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to a combination mode of multiple target types by combining the types of the first identification labels of the hyperspectral images to form a first image sample set;
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to each target type to form a second image sample set;
and introducing heterogeneous label samples out of all target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to a multi-type combination mode at least comprising one heterogeneous label and one target type label to form a third image sample set.
Furthermore, when the hyperspectral image is identified by the one-dimensional DNN model, the color feature, the texture feature and the edge feature of the reconstructed hyperspectral image are extracted by reconstructing the hyperspectral image.
Further, the formula for reconstructing the hyperspectral image is as follows:
M=Cr+k
wherein M is the reconstructed hyperspectral image, represented as a 3 x 1 vector of RGB intensities; c is an original hyperspectral image, r is a normalized reflectivity intensity vector of the hyperspectral image, and k represents a system noise vector.
Further, the three classification models, namely the one-dimensional DNN model, the LS-SVM model and the three-dimensional CNN model, are each trained with K-fold cross validation, and for each classification model 1/K of the data is held out as a test set in every round of training; the prediction results of the test sets corresponding to the three trained classification models are used as the input of the second-layer meta-learner, and the training of the second-layer meta-learner is completed by combining the real labels of the prediction set.
Further, classifying the marine plastic waste to be recognized by adopting a trained integrated type classification recognition model, judging whether the recognition probability is smaller than a preset standard or not by using a classification screening model after obtaining a recognition result, marking a third recognition label if the recognition probability is smaller than the preset standard, manually checking, adding the first recognition label to the checked image, and storing the image into a second training set for optimizing the integrated type classification recognition model; and if not, feeding back the identification result.
The invention provides a marine plastic waste material identification system, which comprises:
the hyperspectral image acquisition module is used for performing hyperspectral image acquisition on the marine plastic waste;
the cloud platform is used for preprocessing the hyperspectral image obtained by sampling and outputting an identification result of the type of the marine plastic material; the cloud platform comprises:
the hyperspectral image preprocessing module is used for replacing the background color of the original hyperspectral image and correcting the hyperspectral image;
the first training set module is used for randomly sampling the preprocessed hyperspectral image samples, marking a first identification label and constructing a first training set;
the integrated type classification and recognition model module respectively adopts a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as base learners in a first layer of the integrated type classification and recognition model, and is used for performing material type recognition on the preprocessed hyperspectral image and outputting a second recognition label and corresponding recognition probability;
the classification screening model module is used for judging whether the recognition probability output by the integrated classification recognition model is smaller than a preset standard or not in the integrated classification recognition model training stage, and if so, marking a third recognition label;
the second training set module is used for acquiring the image with the third identification label, comparing the results of the first identification label and the second identification label of the image, marking the image as misjudgment if the results are inconsistent, and storing the image into a second training set;
and the training module is used for training and optimizing the integrated type classification recognition model module.
Further, in the first training set module, when the preprocessed hyperspectral image samples are randomly sampled, a first image sample set, a second image sample set and a third image sample set are obtained through sampling respectively; the method comprises the following specific steps:
randomly taking out different numbers of sample images from the preprocessed hyperspectral images in combination with the types of the first identification labels of the hyperspectral images and aiming at a combination mode of multiple target types to form a first image sample set;
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to each target type to form a second image sample set;
introducing heterogeneous label samples out of all target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to a multi-type combination mode at least comprising one heterogeneous label and one target type label to form a third image sample set.
According to a third aspect of the present invention, there is provided an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the method for identifying marine plastic waste material.
According to a fourth aspect of the present invention, there is provided a machine-readable storage medium storing machine-executable instructions which, when called and executed by a processor, implement the above-mentioned marine plastic waste material identification method.
Compared with the prior art, the invention has the following beneficial effects:
(1) According to the invention, the reflectivity of the collected hyperspectral image is corrected to reduce the interference of the background color, and the spectrum of the corrected image is reconstructed based on the DNN spectral reconstruction algorithm, which improves the accuracy with which the spectral center wavelengths are located and thus guarantees the identification accuracy.
(2) The method uses the one-dimensional DNN model, the LS-SVM model and the three-dimensional CNN model as the base learners in the first layer of the integrated classification recognition model and fuses the three models by averaging, so that the advantages of the three models are retained and different application scenarios can be accommodated. The fusion of the three models balances model complexity against recognition accuracy, improving recognition efficiency while preserving recognition accuracy.
(3) In the invention, cross validation and the fusion of homogeneous and heterogeneous samples are adopted in the model training stage, giving the model stronger anti-interference capability and recognition accuracy. For identification results whose recognition probability is below the threshold, difficult samples are screened out by checking whether the first and second identification labels agree, so that the model is optimized and its ability to identify complex marine plastic waste materials is improved.
Drawings
Fig. 1 is a first flowchart of a method for identifying the material quality of marine plastic waste according to an embodiment of the present invention;
FIG. 2 is a second flowchart of a method for identifying the material quality of marine plastic wastes according to an embodiment of the present invention;
fig. 3 is a flow chart three of the method for identifying the material quality of the marine plastic waste according to the embodiment of the invention;
FIG. 4 is a schematic structural diagram of a marine plastic garbage material identification system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device terminal for implementing a method for identifying a material of marine plastic waste according to an embodiment of the present invention;
FIG. 6 is a flow chart of the marine plastic refuse material identification implemented by a computer program when the computer program is called and executed according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications, equivalents and alternatives which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As shown in fig. 1, a method for identifying the material quality of marine plastic waste comprises the following steps:
s01, collecting hyperspectral images of the marine plastic waste, marking a first identification label, preprocessing the hyperspectral images to obtain a first training set;
s02, training the integrated type classification recognition model by using the first training set, acquiring a second recognition label and a corresponding recognition probability of the image by using the integrated type classification recognition model, judging whether the recognition probability is smaller than a preset standard or not by using a classification screening model, and marking a third recognition label if the recognition probability is smaller than the preset standard; the integrated classification recognition model adopts a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as a base learner in a first layer;
s03, in a training stage, comparing results of a first identification label and a second identification label of an image aiming at the image with a third identification label, if the results are inconsistent, marking the image as misjudgment, storing the misjudgment into a second training set, and optimizing the integrated type classification recognition model by utilizing the second training set;
and S04, classifying the marine plastic waste to be recognized by adopting the trained integrated classification recognition model to obtain a recognition result.
The step S01 mentioned above is intended to collect training samples of marine plastic waste. The source of the marine plastic waste is not specifically limited in this embodiment, and the hyperspectral image of the marine plastic waste needs to be acquired to obtain an original hyperspectral image.
The first identification label represents the real plastic material type corresponding to the original hyperspectral image. Common plastic material types include PVC, PE, PET, BOPP, HDPE, ABS, PP, PC, PS, HIPS and so on. A marine plastic waste recycling station can customize the types of plastic material it recycles; for example, with PVC, PE, PET and BOPP as the recycling types, each original hyperspectral image is labelled with its type, and images that do not belong to these four types are labelled as heterogeneous. The recycling types can be set by those skilled in the art according to their needs, and this embodiment is not limited in this respect.
In order to improve the identification accuracy, the hyperspectral image is preprocessed in this embodiment. In one embodiment, the preprocessing proceeds as follows: the collected hyperspectral image of the marine plastic waste is first corrected; the corrected three-dimensional hyperspectral image data are filtered and denoised with a HySIME filter, the band range is restricted to 1000-1600 nm (to eliminate color deviation), and the data are normalized; the normalized three-dimensional hyperspectral image data are then used in the subsequent identification process.
In this embodiment, the calculation formula for correcting the hyperspectral image is as follows:
R_c(x, y, λ) = [R_0(x, y, λ) - R_b(x, y, λ)] / [R_w(x, y, λ) - R_b(x, y, λ)]
wherein R_c(x, y, λ) represents the corrected hyperspectral image, (x, y) represents the coordinate position of a pixel point in the hyperspectral image, and λ represents the wavelength; R_0(x, y, λ) represents the original hyperspectral image; R_b(x, y, λ) represents the black background reference image; and R_w(x, y, λ) represents the white background reference image.
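A minimal sketch of this correction step, assuming the raw cube and the black and white reference cubes are numpy arrays of shape (H, W, num_bands) and that the wavelength of each band is given; the clipping to 1000-1600 nm and the min-max normalization follow the embodiment above, the HySIME filtering step is omitted, and the function and argument names are illustrative.

import numpy as np

def correct_hyperspectral(raw, black_ref, white_ref, wavelengths,
                          band_range=(1000.0, 1600.0), eps=1e-6):
    """Black/white reference correction, band selection and normalization (illustrative)."""
    # Reflectance correction: (raw - black) / (white - black), per pixel and band.
    corrected = (raw - black_ref) / (white_ref - black_ref + eps)

    # Keep only the bands in the 1000-1600 nm range mentioned in the embodiment.
    mask = (wavelengths >= band_range[0]) & (wavelengths <= band_range[1])
    corrected = corrected[:, :, mask]

    # Min-max normalization of the retained cube.
    cmin, cmax = corrected.min(), corrected.max()
    return (corrected - cmin) / (cmax - cmin + eps)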
In order to improve the anti-interference capability of the recognition model, the present embodiment designs the sample structure in the first training set. The first training set comprises a first image sample set, a second image sample set and a third image sample set.
The three image sample set acquisition methods are as follows:
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to a combination mode of multiple target types by combining the types of the first identification labels of the hyperspectral images to form a first image sample set;
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to each target type to form a second image sample set;
and introducing heterogeneous label samples out of all target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to a multi-type combination mode at least comprising one heterogeneous label and one target type label to form a third image sample set.
Taking PVC, PE, PET, BOPP types as recycling objects as examples, four label types are taken as target types, and the rest material types are heterogeneous.
Firstly, for combinations of multiple target types, for example combinations of two or three target types, there are 10 combinations in total; for each combination, different numbers of sample images are randomly taken from the preprocessed hyperspectral images to form the first image sample set. Alternatively, several of the 10 combinations may be selected, and for each selected combination a different number of sample images is randomly taken from the preprocessed hyperspectral images to form the first image sample set. Taking the combination of PVC and PE as an example, p1 PVC-type images and p1 PE-type images are collected from the preprocessed hyperspectral images; taking the combination of PVC and PET as an example, p2 PVC-type images and p2 PET-type images are collected, preferably with p1 ≠ p2. After sampling all 10 combinations, or the selected custom combinations, the first image sample set is obtained.
Then, randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to each target type to form a second image sample set; for example, q1 PVC-type images, q2 PE-type images, q3 PET-type images, and q4 BOPP-type images are respectively extracted from the preprocessed hyperspectral images to form a second image sample set; preferably, q1 ≠ q2 ≠ q3 ≠ q4.
And finally, introducing heterogeneous label samples out of the four target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to a multi-type combination mode at least comprising one heterogeneous label and one target type label to form a third image sample set. For example, for a combination mode of PVC, PE and heterogeneous, collecting o1 images of PVC type, o1 images of PE type and o1 images of heterogeneous type from the preprocessed hyperspectral images; acquiring o2 images of a PET type, o2 images of a BOPP type and o2 images of a heterogeneous type from the preprocessed hyperspectral images according to a combination mode of PET, BOPP and the heterogeneous, wherein preferably o1 is not equal to o2; and obtaining a third image sample set after completing sampling of all the combination modes or sampling of a plurality of self-defined combination modes.
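The sampling scheme above can be sketched as follows, assuming images is a list of preprocessed hyperspectral cubes and labels holds the matching first identification labels; the target classes, the restriction to two-type combinations and the sample-count ranges are illustrative assumptions rather than values taken from the embodiment.

import random
from itertools import combinations

TARGETS = ["PVC", "PE", "PET", "BOPP"]   # recycling target types (example)
HETERO = "hetero"                        # everything outside the target types

def sample_by_label(images, labels, label, n):
    pool = [img for img, lab in zip(images, labels) if lab == label]
    return random.sample(pool, min(n, len(pool)))

def build_first_training_set(images, labels):
    first_set, second_set, third_set = [], [], []

    # First image sample set: combinations of target types, a different count per combination.
    for combo in combinations(TARGETS, 2):
        n = random.randint(20, 60)
        for lab in combo:
            first_set += sample_by_label(images, labels, lab, n)

    # Second image sample set: each target type on its own, with its own count.
    for lab in TARGETS:
        second_set += sample_by_label(images, labels, lab, random.randint(20, 60))

    # Third image sample set: at least one heterogeneous label plus target-type labels.
    for combo in combinations(TARGETS, 2):
        n = random.randint(20, 60)
        for lab in list(combo) + [HETERO]:
            third_set += sample_by_label(images, labels, lab, n)

    return first_set, second_set, third_set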
The above step S02 is intended to design and train an integrated classification recognition model.
The method takes a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as base learners in a first layer of an integrated classification recognition model respectively, and trains the three models respectively by using a first training set in the step S01.
For the three models, the invention considers both the complexity and the recognition accuracy of the model, and in order to achieve the purpose of improving the recognition efficiency on the basis of ensuring the recognition accuracy, the embodiment specially designs the parameters of the three models.
S021, constructing a one-dimensional DNN model.
The training samples are divided into a training set, a test set and a validation set.
Inputting the training set into a one-dimensional DNN model for classification training to obtain a one-dimensional DNN classifier;
inputting the verification set into a one-dimensional DNN classifier to perform pixel classification to obtain a one-dimensional DNN pixel classification result, and adjusting the learning rate according to the classification result to optimize the accuracy of the DNN classification model to obtain an optimized one-dimensional DNN classification model;
inputting the test set into a one-dimensional DNN classification model, and evaluating the accuracy of the optimized DNN classification model;
specifically, the one-dimensional DNN model is composed of 2 hidden layers, a ReLU function is used as a nonlinear activation function for thinning data, and a cross entropy loss function is used as a loss function.
The accuracy of classification is improved by adopting a cross validation mode, the learning rate is adjusted according to the change rate of a cross entropy loss function, gradient descent training is carried out by adopting an Adam optimization algorithm, the problems that a saddle point network is difficult to update and the learning rate cannot be adjusted in a self-adaptive mode are solved, training is stopped when the accuracy of a validation set continuously descends, and finally a trained one-dimensional DNN model is obtained.
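A minimal sketch of such a one-dimensional DNN base learner in PyTorch, with two hidden layers, ReLU activations, a cross-entropy loss and the Adam optimizer; the layer widths, the number of bands and the five-class output are illustrative assumptions.

import torch
import torch.nn as nn

class OneDimDNN(nn.Module):
    def __init__(self, num_bands, num_classes=5, hidden=(256, 128)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_bands, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], num_classes),
        )

    def forward(self, x):   # x: (batch, num_bands) per-pixel spectrum
        return self.net(x)

model = OneDimDNN(num_bands=224)
criterion = nn.CrossEntropyLoss()                              # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)      # Adam gradient descent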
In a specific implementation of the present invention, when the one-dimensional DNN model identifies the hyperspectral image, the method further includes the steps of reconstructing the hyperspectral image, and extracting color features, texture features, and edge features of the reconstructed hyperspectral image.
Specifically, the hyperspectral image is reconstructed based on a spectrum reconstruction algorithm principle of DNN, and the formula is as follows:
M=Cr+k
wherein M is the reconstructed hyperspectral image, represented as a 3 x 1 vector of RGB intensities; c is an original hyperspectral image and represents a 3 x N matrix, and N is the number of wavelengths; r is the normalized N x 1 reflectance intensity vector of the hyperspectral image, k is the zero-mean 3 x 1 systematic noise vector.
Performing feature extraction on the reconstructed three-dimensional hyperspectral data, and then fusing multiple features to obtain a feature value after weighted summation of the three features; the feature extraction includes color features, texture features and edge features, in this embodiment, a VSH color histogram is used for color feature extraction, an LBP feature vector is used for texture feature extraction, and a Canny operator edge detection algorithm is used for edge feature extraction.
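A minimal sketch of the per-pixel reconstruction M = Cr + k followed by the three feature extractors; the sensitivity matrix C and noise vector k are assumed to be known, an HSV histogram stands in for the VSH color histogram, and the fusion weights are illustrative.

import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def reconstruct_rgb(cube, C, k):
    """cube: (H, W, N) normalized reflectance; C: (3, N) sensitivity matrix; k: (3,) noise."""
    H, W, N = cube.shape
    rgb = cube.reshape(-1, N) @ C.T + k          # M = C r + k for every pixel
    return np.clip(rgb.reshape(H, W, 3), 0.0, 1.0)

def fused_features(rgb, weights=(0.4, 0.3, 0.3)):
    img8 = (rgb * 255).astype(np.uint8)
    gray = cv2.cvtColor(img8, cv2.COLOR_RGB2GRAY)

    # Color feature: HSV histogram (stand-in for the VSH color histogram).
    hsv = cv2.cvtColor(img8, cv2.COLOR_RGB2HSV)
    color = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                         [0, 180, 0, 256, 0, 256]).flatten()
    color /= color.sum() + 1e-6

    # Texture feature: LBP histogram.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # Edge feature: density of Canny edges.
    edge = np.array([cv2.Canny(gray, 100, 200).mean() / 255.0])

    # Weighted fusion of the three feature groups.
    w1, w2, w3 = weights
    return np.concatenate([w1 * color, w2 * texture, w3 * edge])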
And S022, constructing an LS-SVM model.
The training samples are divided into a training set, a test set and a validation set.
Inputting the training set into an LS-SVM classifier for training to obtain a trained LS-SVM prediction classification model;
inputting the verification set into a trained LS-SVM prediction classification model for verification to obtain a classification result, and optimizing parameters according to the classification result to obtain an optimized LS-SVM prediction classification model;
and inputting the test set into the optimized LS-SVM prediction classification model, and evaluating the accuracy of the classification result.
In the embodiment, LIN-Kernel is selected as the Kernel function, and the accuracy of classification is improved in a cross validation mode.
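A minimal sketch of this base learner using scikit-learn's SVC with a linear kernel and cross-validated parameter selection as a stand-in for a true least-squares SVM, which would normally come from a dedicated LS-SVM implementation; the regularization grid is illustrative.

from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_lssvm_standin(X_train, y_train, cv_folds=5):
    """X_train: (n_samples, n_features) spectra or features; y_train: material labels."""
    grid = GridSearchCV(
        SVC(kernel="linear", probability=True),   # linear (LIN) kernel, probability output
        param_grid={"C": [0.1, 1.0, 10.0]},
        cv=cv_folds,                              # cross validation to improve accuracy
    )
    grid.fit(X_train, y_train)
    return grid.best_estimator_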
S023, constructing a three-dimensional CNN model.
The training samples are divided into a training set, a test set and a validation set.
Inputting the training set into a three-dimensional CNN model for training to obtain a three-dimensional CNN classifier;
inputting the verification set into a three-dimensional CNN classifier for verification to obtain a classification result, and adjusting the learning rate according to the classification result to optimize the accuracy of the CNN classification model to obtain an optimized CNN classification model;
and inputting the test set into the optimized CNN classification model for classification, and evaluating the accuracy of the classification result.
And improving the accuracy of classification by adopting a cross validation mode, and performing gradient descent training by adopting an Adam optimization algorithm.
In this embodiment, the parameters of the three-dimensional CNN model are as follows:
Table 1: parameters of the three-dimensional CNN model (reproduced only as an image in the original publication).
as shown in table 1, the three-dimensional CNN model includes four 3D convolutional layers and one fully-connected layer. The BN layer (Batech normaiza) is a deep neural network training technology, the input distribution of each layer of the three-dimensional CNN model in the training process can be guaranteed to be the same through batch normalization processing, the training speed of the classification model is improved, and the convergence process is accelerated. In this embodiment, the ReLU is used as an activation function of the three-dimensional CNN model to make data sparse.
And S024, realizing multi-model fusion by adopting a stacking algorithm.
In one specific implementation of the invention, the three classification models, namely the one-dimensional DNN model, the LS-SVM model and the three-dimensional CNN model, are each trained with K-fold cross validation, and for each classification model 1/K of the data is held out as a test set in every round of training. The prediction results of the test sets corresponding to the three trained classification models are used as the input of the second-layer meta-learner, and the training of the second-layer meta-learner is completed by combining the real labels of the prediction set.
Taking 5-fold training as an example, the sample set is randomly divided into 5 parts; four of the parts form a training subset and the remaining part serves as the testing subset. There are five such divisions, giving five combinations of training subset and testing subset.
At the first layer of the stacking algorithm, 5 base learners [model_1, model_2, model_3, model_4, model_5] are defined. For each of the one-dimensional DNN model, the LS-SVM model and the three-dimensional CNN model, the 5 training subsets are used to train 5 base learners, and the 5 testing subsets are fed into the 5 base learners corresponding to that classification model, giving the test result P_i = {P_i,1, P_i,2, P_i,3, P_i,4, P_i,5}, i = 1, 2, 3, where P_i is the test result of the i-th classification model and P_i,k is the test result of the k-th base learner of the i-th classification model. The test results of the three classification models are concatenated and used as the input of the meta-learner at the second layer of the stacking algorithm; when the meta-learner is trained, the first identification label of each sample serves as the training label, and the meta-learner outputs the second identification label and the corresponding identification probability. In this embodiment, the meta-learner may be implemented with a multi-layer perceptron or a fully connected layer, or with classifiers such as an SVM; this is a conventional design choice within the stacking algorithm and is not limited by this embodiment.
In the prediction stage, the hyperspectral image to be identified is used as the input of the three classification models; each classification model corresponds to 5 base learners, and the average value of the 5 base learners is taken as the output of that classification model. The average value of the outputs of the three classification models is then used as the input of the second-layer meta-learner, which outputs the prediction result.
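A minimal sketch of the two-layer stacking scheme, assuming every base model exposes a scikit-learn-style fit/predict_proba interface and every fold contains all classes; the logistic-regression meta-learner is one possible choice among the MLP, fully connected layer or SVM mentioned above.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def fit_stacking(base_models, X, y, k=5, num_classes=5):
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    fitted = {name: [] for name in base_models}
    meta_X = np.zeros((len(X), len(base_models) * num_classes))

    for i, (name, proto) in enumerate(base_models.items()):
        for train_idx, test_idx in kf.split(X):          # 1/K held back in each round
            model = clone(proto).fit(X[train_idx], y[train_idx])
            fitted[name].append(model)
            meta_X[test_idx, i*num_classes:(i+1)*num_classes] = \
                model.predict_proba(X[test_idx])         # out-of-fold predictions

    meta = LogisticRegression(max_iter=1000).fit(meta_X, y)   # second-layer meta-learner
    return fitted, meta

def predict_stacking(fitted, meta, X, num_classes=5):
    cols = []
    for name, models in fitted.items():
        # Average the K per-fold learners belonging to this classification model.
        cols.append(np.mean([m.predict_proba(X) for m in models], axis=0))
    proba = meta.predict_proba(np.hstack(cols))
    return meta.classes_[proba.argmax(1)], proba.max(1)   # label and its probability

Here base_models could, for instance, map names such as "dnn", "lssvm" and "cnn3d" to scikit-learn-compatible wrappers around the three classifiers sketched above.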
And S025, marking a difficult sample.
And judging whether the recognition probability output by the integrated type classification recognition model is smaller than a preset standard or not by using the classification screening model, and if so, marking a third recognition label.
Specifically, the prediction result of the integrated classification recognition model is input into the classification screening model for judgment, and the first identification label is used as an identifier during image quantization processing to facilitate searching and tracing. Taking PVC, PE, PET and BOPP as the recycling types, a determination range for the PVC, PE, PET and BOPP sorting results is set in the classification screening model. For example, with a PVC sorting-result threshold of 0.9, if the integrated classification recognition model predicts second identification label = PVC with a prediction probability of 0.8, the classification screening model judges that 0.8 is less than 0.9, the plastic waste in the image cannot be confirmed as PVC material, and the image is marked with a third identification label; if the prediction is second identification label = PVC with a prediction probability of 0.92, the classification screening model judges that 0.92 is greater than 0.9, the plastic waste in the image is determined to be PVC material, and no third identification label is added.
In this embodiment, the sorting result thresholds of different material types may be different, and may be set by a person skilled in the art according to actual conditions.
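A minimal sketch of the classification screening rule, with per-class thresholds as illustrative assumptions; it reproduces the PVC example above, where a probability of 0.8 triggers the third identification label and 0.92 does not.

THRESHOLDS = {"PVC": 0.9, "PE": 0.85, "PET": 0.85, "BOPP": 0.85, "hetero": 0.8}

def screen(second_label, probability, thresholds=THRESHOLDS):
    """Return True when the image must be flagged with a third identification label."""
    return probability < thresholds.get(second_label, 0.9)

# Example from the description above.
assert screen("PVC", 0.80) is True     # below the 0.9 threshold: flag for review
assert screen("PVC", 0.92) is False    # above the threshold: accept as PVC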
The step S03 is to optimize the integrated classification and recognition model by using the difficult samples.
In the training stage, for images carrying a third identification label, the results of the first identification label and the second identification label of the image are compared. If they are inconsistent, that is, the prediction of the integrated classification recognition model is wrong, the image is marked as a misjudgment and stored in the second training set, and the second training set is used to optimize the integrated classification recognition model.
The step S04 is to classify the marine plastic waste to be recognized by using the trained integrated classification recognition model to obtain a recognition result.
According to the content in the step S024, in a prediction stage, taking a hyperspectral image to be recognized as input of three classification models, wherein each classification model corresponds to 5 base learners, and the average value of the 5 base learners is taken as output of the classification model; and taking the average value of the outputs of the three classification models as the input of the meta-learner of the second layer, and outputting a prediction result by the meta-learner. The hyperspectral image to be recognized is used as the input of the integrated classification recognition model after being preprocessed in the same way as in the training stage.
In this embodiment, in the prediction stage, the classification screening model may also be used to determine whether the prediction result of the integrated classification recognition model is within the evaluation standard range, if so, the determination result is fed back to the mobile terminal, and if not, the third recognition tag is added to the image and is manually checked. For example, the administrator acquires an image with a third identification tag, performs manual classification and labeling on the image, may use the second identification tag of the image as a reference during manual classification and labeling, and synchronizes the manual classification and labeling result to the mobile terminal, with the manual classification and labeling as a standard for the final identification result of the image.
In the prediction stage, a manual review image and a review result can be further obtained, and the reviewed image is added with a first identification tag and then stored in a second training set for optimizing the integrated type classification recognition model. The triggering mode of the optimization training can be triggering after the second training set reaches a certain number, or triggering by a preset period, optimizing at intervals, or manually triggering.
As shown in fig. 2, the embodiment discloses a typical method for identifying the material quality of marine plastic waste, which includes:
s11: the operating personnel utilizes hyperspectral image acquisition equipment to acquire hyperspectral images of the marine plastic waste;
s12: uploading the collected original hyperspectral image to a cloud platform, preprocessing the original hyperspectral image, inputting the preprocessed hyperspectral image into a pre-trained integrated classification recognition model for judgment, and obtaining an evaluation result;
s13, presetting an evaluation standard of the type of the recyclable plastic material in a classification screening model, judging whether the evaluation result of the step S12 is within the range of the evaluation standard by using the classification screening model, and if so, directly entering the step S14; if not, adding a third identification tag to the image, and entering step S14;
s14: feeding back the judgment result to the mobile terminal, if the operator considers that the judgment result is wrong, identifying the image as abnormal, and auditing by a manager;
s15: and the manager performs manual classification labeling on the abnormal images and the images with the third identification labels, synchronizes the manual classification labeling results to the mobile terminal, and provides the results for the integrated classification identification model to perform learning optimization.
As shown in fig. 3, this embodiment discloses another typical marine plastic waste material identification method, which includes:
and S21, performing hyperspectral image acquisition on the marine plastic waste by an operator through portable hyperspectral image acquisition equipment or hyperspectral image acquisition equipment installed at a fixed position.
And S22, a cloud platform management system is built, a link is built between the cloud platform management system and the hyperspectral image acquisition equipment and the mobile terminal, the hyperspectral image acquisition equipment uploads the acquired images or videos to the cloud platform, and the cloud platform feeds back progress conditions and sorting results of the operation task process to the mobile terminal.
The sorting result acquisition process of step S22 is as follows:
s221: the cloud platform inputs the hyperspectral image information into a pre-trained integrated classification recognition model for judgment to obtain an evaluation result; taking four types of recyclable plastics including recycled PVC, PE, PET and BOPP as an example, the classification recognition model is trained by utilizing the image characteristic information of the PVC, PE, PET and BOPP in advance, the sorting results of the PVC, PE, PET and BOPP can be obtained, other types of plastic materials can also be recognized, and only the hyperspectral image of the type of the plastic material needs to be input into the classification recognition model for training; the integrated classification recognition model is a five-classification model and comprises four types of PVC, PE, PET and BOPP and a different type, wherein the different type is a type which does not belong to the four types of PVC, PE, PET and BOPP.
S222: an evaluation standard of the type of the recyclable plastic material is preset in the classification screening model, whether the evaluation result of the step S221 is within the range of the evaluation standard is judged by using the classification screening model, and if yes, the judged result is fed back to the mobile terminal; if not, adding a third identification tag to the image, and auditing by a manager;
and if the operator considers that the result fed back to the mobile terminal is wrong, the image is identified as abnormal and is checked by a manager.
S223: a manager checks the marked abnormity and the image with the third identification label through the cloud platform, and manually marks the image by combining the second identification label, the third identification label, the abnormal label and other characteristic information; and synchronizing the manual classification and labeling results to the mobile terminal.
In this embodiment, the administrator periodically performs manual labeling on the image with the abnormal mark and the third identification tag, and provides the labeled data to the integrated classification and identification model for learning and optimization.
The embodiment also provides a marine plastic waste material identification system, and the system is used for realizing the embodiment. The terms "module," "unit," and the like as used below may implement a combination of software and/or hardware of predetermined functions. Although the system described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible.
As shown in fig. 4, the system for identifying the material of the marine plastic waste provided by this embodiment includes:
the hyperspectral image acquisition module is used for performing hyperspectral image acquisition on the marine plastic waste;
the cloud platform is used for preprocessing the hyperspectral image obtained by sampling and outputting an identification result of the type of the marine plastic material; the cloud platform comprises:
the hyperspectral image preprocessing module is used for replacing the background color of the original hyperspectral image and correcting the hyperspectral image;
the first training set module is used for randomly sampling the preprocessed hyperspectral image samples, marking a first identification label and constructing a first training set;
the integrated type classification and recognition model module respectively adopts a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as base learners in a first layer of the integrated type classification and recognition model, and is used for performing material type recognition on the preprocessed hyperspectral image and outputting a second recognition label and corresponding recognition probability;
the classification screening model module is used for judging whether the recognition probability output by the integrated classification recognition model is smaller than a preset standard or not in the integrated classification recognition model training stage, and if so, marking a third recognition label;
the second training set module is used for acquiring the image with the third identification label, comparing the results of the first identification label and the second identification label of the image, marking the image as misjudgment if the results are inconsistent, and storing the image into a second training set;
and the training module is used for training and optimizing the integrated type classification recognition model module.
The implementation process of the functions and actions of each module in the system is detailed in the implementation process of the corresponding steps in the method, for example, in the first training set module, when the preprocessed hyperspectral image sample is randomly sampled, the first image sample set, the second image sample set and the third image sample set are obtained by sampling respectively; the method specifically comprises the following steps:
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to a combination mode of multiple target types by combining the types of the first identification labels of the hyperspectral images to form a first image sample set;
randomly taking out different numbers of sample images from the preprocessed hyperspectral images according to each target type to form a second image sample set;
introducing heterogeneous label samples out of all target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to a multi-type combination mode at least comprising one heterogeneous label and one target type label to form a third image sample set.
For the system embodiment, since it basically corresponds to the method embodiment, reference may be made to part of the description of the method embodiment for relevant points, and details for implementing methods of the remaining modules are not described herein again. The above-described system embodiments are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present invention. One of ordinary skill in the art can understand and implement without inventive effort.
Embodiments of the system of the present invention may be applied to any data processing capable device, which may be a device or apparatus such as a computer. The system embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. The software implementation is taken as an example, and as a logical device, the device is formed by reading corresponding computer program instructions in the nonvolatile memory into the memory for running through the processor of any device with data processing capability.
The embodiment of the invention also provides electronic equipment, which comprises a memory and a processor;
the memory for storing a computer program;
the processor is used for realizing the marine plastic waste material identification method when the computer program is executed.
In terms of hardware, Fig. 5 shows the hardware structure provided in this embodiment. In addition to the processor, memory, network interface and non-volatile memory shown in Fig. 5, the device with data processing capability on which the system runs may also include other hardware according to its actual function, which is not described again here.
Alternatively, in this embodiment, as shown in fig. 6, the processor may be configured to execute the following steps by a computer program:
s31, performing hyperspectral image acquisition and pretreatment on the marine plastic waste to obtain a first training set;
s32, training an integrated type classification recognition model by using a first training set, acquiring a second recognition label and a corresponding recognition probability of the image by using the integrated type classification recognition model, judging whether the recognition probability is smaller than a preset standard or not by using a classification screening model, and marking a third recognition label if the recognition probability is smaller than the preset standard;
s33, in a training stage, aiming at the image with the third identification label, comparing the results of the first identification label and the second identification label of the image, if the results are inconsistent, marking the image as misjudgment, and storing the misjudgment into a second training set;
s34, classifying the marine plastic waste to be recognized by adopting a trained integrated classification recognition model to obtain an evaluation result;
s35, an evaluation standard of the type of the recyclable plastic material is preset in the classification screening model, whether the evaluation result of the step S34 is within the range of the evaluation standard is judged by using the classification screening model, if yes, the judged result is fed back to the mobile terminal, if not, a third identification tag is added to the image, subsequent manual review and feedback are carried out, and the reviewed image is stored in a second training set after the first identification tag is added;
and S36, optimizing the integrated classification recognition model by periodically utilizing the samples in the second training set.
The embodiment of the invention also provides a computer-readable storage medium, wherein a program is stored on the computer-readable storage medium, and when the program is executed by a processor, the method for identifying the marine plastic refuse material quality is realized.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be any external storage device of a device with data processing capabilities, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing capable device, and may also be used for temporarily storing data that has been output or is to be output.
It is obvious that the above-mentioned embodiments and drawings are only examples of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that such a development effort might be complex and lengthy, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure, and is not intended to limit the present disclosure to the particular forms disclosed herein. Without departing from the concept of the present application, several variations and modifications may be made, which are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A marine plastic waste material identification method, characterized by comprising the following steps:
collecting hyperspectral images of the marine plastic waste, marking them with a first identification label, and preprocessing the hyperspectral images to obtain a first training set;
training an integrated classification recognition model with the first training set; using the integrated classification recognition model to obtain a second recognition label of an image and its corresponding recognition probability; using a classification screening model to judge whether the recognition probability is smaller than a preset standard, and if so, marking the image with a third recognition label; the first layer of the integrated classification recognition model adopts a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as base learners;
in the training stage, for an image carrying the third identification label, comparing the first identification label with the second identification label of the image; if they are inconsistent, marking the image as a misjudgment, storing it in a second training set, and optimizing the integrated classification recognition model with the second training set;
and classifying the marine plastic waste to be recognized with the trained integrated classification recognition model to obtain a recognition result.
2. The marine plastic waste material identification method according to claim 1, wherein, when the hyperspectral image is preprocessed, the hyperspectral image is corrected by replacing the background color of the original hyperspectral image according to the following formula:
I_c(x, y, λ) = (I_0(x, y, λ) − I_b(x, y, λ)) / (I_w(x, y, λ) − I_b(x, y, λ))
wherein I_c(x, y, λ) represents the corrected hyperspectral image, (x, y) represents the coordinate position of a pixel point in the hyperspectral image, and λ represents the wavelength; I_0(x, y, λ) represents the original hyperspectral image; I_b(x, y, λ) represents the black background reference image; and I_w(x, y, λ) represents the white background reference image.
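The following is a minimal NumPy sketch of this correction, assuming the standard black/white reference calibration that the symbol definitions above suggest; the function name, variable names, and the small epsilon guard are illustrative assumptions.

```python
import numpy as np

def correct_hyperspectral(raw, black_ref, white_ref, eps=1e-8):
    """Per-pixel, per-wavelength reflectance correction of a hyperspectral cube.

    raw, black_ref, white_ref: arrays of shape (height, width, n_bands).
    Returns the corrected cube I_c(x, y, lambda).
    """
    raw = raw.astype(np.float64)
    black = black_ref.astype(np.float64)
    white = white_ref.astype(np.float64)
    return (raw - black) / (white - black + eps)  # eps guards against zero denominators
```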
3. The marine plastic waste material identification method according to claim 1, wherein the first training set comprises a first image sample set, a second image sample set and a third image sample set, obtained as follows:
in combination with the types of the first identification labels of the hyperspectral images, randomly taking different numbers of sample images from the preprocessed hyperspectral images according to combinations of multiple target types to form the first image sample set;
randomly taking different numbers of sample images from the preprocessed hyperspectral images for each individual target type to form the second image sample set;
introducing heterogeneous label samples outside all target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to multi-type combinations each containing at least one heterogeneous label and one target type label to form the third image sample set.
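The following is a hedged sketch of how the three image sample sets of claim 3 might be drawn. The sample counts, the pairwise combination of target types, and all names are illustrative assumptions; the claim does not fix these details.

```python
import random
from itertools import combinations

def draw(pool, k, rng):
    """Randomly draw up to k images from a pool."""
    return rng.sample(pool, k=min(k, len(pool)))

def build_sample_sets(images, labels, target_types, hetero_types, seed=0):
    rng = random.Random(seed)
    by_label = {}
    for img, lab in zip(images, labels):
        by_label.setdefault(lab, []).append(img)

    # First set: draws over combinations of multiple target types (pairs assumed here).
    first = [img for combo in combinations(target_types, 2)
             for lab in combo for img in draw(by_label.get(lab, []), 3, rng)]
    # Second set: draws for each individual target type.
    second = [img for lab in target_types
              for img in draw(by_label.get(lab, []), 5, rng)]
    # Third set: at least one heterogeneous label mixed with one target-type label.
    third = [img for h in hetero_types for lab in target_types
             for img in draw(by_label.get(h, []), 2, rng) + draw(by_label.get(lab, []), 2, rng)]
    return first, second, third
```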
4. The marine plastic waste material identification method according to claim 1, wherein, when the one-dimensional DNN model identifies the hyperspectral image, the hyperspectral image is first reconstructed, and the color features, texture features and edge features of the reconstructed hyperspectral image are extracted.
5. The marine plastic waste material identification method according to claim 4, wherein the formula for reconstructing the hyperspectral image is:
M = Cr + k
wherein M is the reconstructed hyperspectral image, representing a 3 × 1 vector of RGB intensities; C is the original hyperspectral image; r is the normalized reflectance intensity vector of the hyperspectral image; and k represents the system noise vector.
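The following is a minimal sketch of the reconstruction M = Cr + k from claim 5, under the assumption that, per pixel, C acts as a 3 × n_bands matrix mapping the normalized reflectance vector r to RGB intensities; the patent does not specify how C or the system noise vector k are obtained, so the numbers below are placeholders.

```python
import numpy as np

def reconstruct_rgb(C, r, k):
    """Return the 3 x 1 RGB intensity vector M for one pixel."""
    return C @ r + k

n_bands = 8
C = np.random.rand(3, n_bands)   # placeholder spectral-to-RGB mapping (assumption)
r = np.random.rand(n_bands)      # normalized reflectance intensities
k = np.zeros(3)                  # system noise vector (assumed zero here)
M = reconstruct_rgb(C, r, k)
```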
6. The marine plastic waste material identification method according to claim 1, wherein the three classification models, namely the one-dimensional DNN model, the LS-SVM model and the three-dimensional CNN model, are each trained by K-fold cross-validation, with 1/K of the data reserved as a test set in each training round; the prediction results of the three trained classification models on their respective test sets are used as the input of the second-layer meta-learner, and the training of the second-layer meta-learner is completed in combination with the true labels of the prediction set.
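The following is a hedged sketch of this two-layer stacking scheme using scikit-learn's StackingClassifier, where cv=K gives each base learner out-of-fold predictions that feed the second-layer meta-learner. Stand-ins are used for the patent's base models (MLPClassifier for the one-dimensional DNN, SVC for the LS-SVM; no three-dimensional CNN stand-in is shown), and the logistic-regression meta-learner is an assumption.

```python
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

K = 5
stack = StackingClassifier(
    estimators=[
        ("dnn_1d", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
        ("ls_svm", SVC(kernel="rbf", probability=True)),
    ],
    final_estimator=LogisticRegression(),
    cv=K,                          # K-fold: 1/K of the data is held out per base-model fit
    stack_method="predict_proba",  # base-model probabilities feed the meta-learner
)
# stack.fit(X_train, y_train); stack.predict_proba(X_new) would then yield the
# second identification label and its recognition probability.
```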
7. A marine plastic waste material identification system, characterized by comprising:
a hyperspectral image acquisition module, configured to acquire hyperspectral images of the marine plastic waste;
a cloud platform, configured to preprocess the sampled hyperspectral images and to output the recognition result of the marine plastic material type; the cloud platform comprises:
a hyperspectral image preprocessing module, configured to replace the background color of the original hyperspectral image and correct the hyperspectral image;
a first training set module, configured to randomly sample the preprocessed hyperspectral image samples, mark them with first identification labels, and construct a first training set;
an integrated classification recognition model module, in which the first layer adopts a one-dimensional DNN model, an LS-SVM model and a three-dimensional CNN model as base learners, configured to recognize the material type of the preprocessed hyperspectral image and output a second recognition label and its corresponding recognition probability;
a classification screening model module, configured to judge, during the training stage of the integrated classification recognition model, whether the recognition probability output by the integrated classification recognition model is smaller than a preset standard, and if so, mark the image with a third recognition label;
a second training set module, configured to acquire images carrying the third identification label, compare the first identification label with the second identification label of each image, mark the image as a misjudgment if they are inconsistent, and store it in a second training set;
and a training module, configured to train and optimize the integrated classification recognition model module.
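The following is a minimal structural sketch of the claim-7 decomposition; the class name, method names and the wiring are illustrative assumptions, not the patent's implementation.

```python
class CloudPlatform:
    def __init__(self, preprocess, ensemble_model, screening_model, trainer):
        self.preprocess = preprocess            # background replacement / correction
        self.ensemble_model = ensemble_model    # 1-D DNN + LS-SVM + 3-D CNN base layer, meta-learner on top
        self.screening_model = screening_model  # recognition-probability threshold check
        self.trainer = trainer                  # periodic optimization with the second training set

    def identify(self, raw_cube):
        cube = self.preprocess(raw_cube)
        label, probability = self.ensemble_model(cube)
        return self.screening_model(cube, label, probability)
```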
8. The marine plastic waste material identification system according to claim 7, wherein, in the first training set module, when the preprocessed hyperspectral image samples are randomly sampled, a first image sample set, a second image sample set and a third image sample set are obtained by sampling respectively, specifically:
in combination with the types of the first identification labels of the hyperspectral images, randomly taking different numbers of sample images from the preprocessed hyperspectral images according to combinations of multiple target types to form the first image sample set;
randomly taking different numbers of sample images from the preprocessed hyperspectral images for each individual target type to form the second image sample set;
and introducing heterogeneous label samples outside all target types, and randomly taking different numbers of sample images from the preprocessed hyperspectral images according to multi-type combinations each containing at least one heterogeneous label and one target type label to form the third image sample set.
9. An electronic device, comprising a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the marine plastic waste material identification method according to any one of claims 1 to 6.
10. A machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, are configured to implement the marine plastic waste material identification method according to any one of claims 1 to 6.
CN202211109135.2A 2022-09-13 2022-09-13 Marine plastic waste material identification method and system, electronic equipment and storage medium Active CN115187870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211109135.2A CN115187870B (en) 2022-09-13 2022-09-13 Marine plastic waste material identification method and system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115187870A true CN115187870A (en) 2022-10-14
CN115187870B (en) 2023-01-03

Family

ID=83524847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211109135.2A Active CN115187870B (en) 2022-09-13 2022-09-13 Marine plastic waste material identification method and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115187870B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115639354A (en) * 2022-12-21 2023-01-24 国高材高分子材料产业创新中心有限公司 Marine plastic identification method and device

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105606537A (en) * 2015-08-31 2016-05-25 山东科技大学 Mineral type remote sensing recognition method based on multi-type spectral feature parameter collaboration
CN106650830A (en) * 2017-01-06 2017-05-10 西北工业大学 Deep model and shallow model decision fusion-based pulmonary nodule CT image automatic classification method
CN106896069A (en) * 2017-04-06 2017-06-27 武汉大学 A kind of spectrum reconstruction method based on color digital camera single width RGB image
CN108537728A (en) * 2018-03-05 2018-09-14 中国地质大学(武汉) High spectrum image super-resolution forming method and system based on spectrum fidelity
CN108596085A (en) * 2018-04-23 2018-09-28 浙江科技学院 The method for building up of soil heavy metal content detection model based on convolutional neural networks
CN108896499A (en) * 2018-05-09 2018-11-27 西安建筑科技大学 In conjunction with principal component analysis and the polynomial spectral reflectance recovery method of regularization
CN110796186A (en) * 2019-10-22 2020-02-14 华中科技大学无锡研究院 Dry and wet garbage identification and classification method based on improved YOLOv3 network
CN110852227A (en) * 2019-11-04 2020-02-28 中国科学院遥感与数字地球研究所 Hyperspectral image deep learning classification method, device, equipment and storage medium
CN110852369A (en) * 2019-11-06 2020-02-28 西北工业大学 Hyperspectral image classification method combining 3D/2D convolutional network and adaptive spectrum unmixing
CN110946553A (en) * 2019-11-18 2020-04-03 天津大学 Hyperspectral image-based in-vivo tissue optical parameter measurement device and method
CN111443043A (en) * 2020-01-03 2020-07-24 新疆农业科学院农业机械化研究所 Hyperspectral image-based walnut kernel quality detection method
CN111639587A (en) * 2020-05-27 2020-09-08 西安电子科技大学 Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN112613413A (en) * 2020-12-25 2021-04-06 平安国际智慧城市科技股份有限公司 Perishable garbage classification quality determination method and device and computer readable storage medium
CN112633401A (en) * 2020-12-29 2021-04-09 中国科学院长春光学精密机械与物理研究所 Hyperspectral remote sensing image classification method, device, equipment and storage medium
CN112733936A (en) * 2021-01-08 2021-04-30 北京工业大学 Recyclable garbage classification method based on image recognition
CN113239755A (en) * 2021-04-28 2021-08-10 湖南大学 Medical hyperspectral image classification method based on space-spectrum fusion deep learning
CN114049556A (en) * 2021-11-10 2022-02-15 中国天楹股份有限公司 Garbage classification method integrating SVM (support vector machine) and target detection algorithm
CN114719980A (en) * 2022-04-02 2022-07-08 杭州电子科技大学 End-to-end spectrum reconstruction method and system
CN114821321A (en) * 2022-04-27 2022-07-29 浙江工业大学 Blade hyperspectral image classification and regression method based on multi-scale cascade convolution neural network
CN114723999A (en) * 2022-05-24 2022-07-08 航天宏图信息技术股份有限公司 Garbage identification method and device based on unmanned aerial vehicle orthographic image
CN114937179A (en) * 2022-07-27 2022-08-23 深圳市海清视讯科技有限公司 Junk image classification method and device, electronic equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
OLUSEGUN PETER AWE等: "Cooperative spectrum sensing in cognitive radio networks using multi-class support vector machine algorithms", 《2015 9TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS)》 *
YUXIANG LU等: "Study on Stellar Spectra Classification Based on Multitask Residual Neural Network", 《2020 PROGNOSTICS AND HEALTH MANAGEMENT CONFERENCE (PHM-BESANÇON)》 *
LIU QIANG ET AL.: "Research on a Classification Method for Ship Domestic Garbage Based on Cepstrum and BP Network", 《Journal of Nantong Vocational & Technical Shipping College》 *
LIU HAO ET AL.: "Research on Material Classification and Recognition Based on Convolutional Neural Networks", 《Laser & Infrared》 *
WU YIPING: "Research on Key Technologies for Intelligent Fine Classification of Urban Plastic Domestic Waste", 《China Master's Theses Full-text Database, Engineering Science and Technology I》 *

Also Published As

Publication number Publication date
CN115187870B (en) 2023-01-03

Similar Documents

Publication Publication Date Title
US10885531B2 (en) Artificial intelligence counterfeit detection
CN108090406B (en) Face recognition method and system
Bhunia et al. Text recognition in scene image and video frame using color channel selection
CN109829467A (en) Image labeling method, electronic device and non-transient computer-readable storage medium
EP2579211A2 (en) Graph-based segmentation integrating visible and NIR information
CN106203539B (en) Method and device for identifying container number
CN109685065B (en) Layout analysis method and system for automatically classifying test paper contents
CN109993201A (en) A kind of image processing method, device and readable storage medium storing program for executing
CN112037222B (en) Automatic updating method and system of neural network model
CN113761259A (en) Image processing method and device and computer equipment
CN112633297A (en) Target object identification method and device, storage medium and electronic device
CN111507344A (en) Method and device for recognizing characters from image
CN115187870B (en) Marine plastic waste material identification method and system, electronic equipment and storage medium
US20230147685A1 (en) Generalized anomaly detection
CN111126367A (en) Image classification method and system
CN114926441A (en) Defect detection method and system for machining and molding injection molding part
CN114373185A (en) Bill image classification method and device, electronic device and storage medium
Vizcarra et al. The Peruvian Amazon forestry dataset: A leaf image classification corpus
CN110796210A (en) Method and device for identifying label information
CN108595568A (en) A kind of text sentiment classification method based on very big unrelated multivariate logistic regression
KR102158967B1 (en) Image analysis apparatus, image analysis method and recording medium
CN110443306B (en) Authenticity identification method for wine cork
KR102230559B1 (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN110610177A (en) Training method of character recognition model, character recognition method and device
US20230069960A1 (en) Generalized anomaly detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant