CN110276300B - Method and device for identifying quality of garbage - Google Patents

Method and device for identifying quality of garbage

Info

Publication number
CN110276300B
CN110276300B
Authority
CN
China
Prior art keywords
garbage
images
identified
image
recognized
Prior art date
Legal status
Active
Application number
CN201910547978.2A
Other languages
Chinese (zh)
Other versions
CN110276300A (en)
Inventor
黄特辉
刘明浩
郭江亮
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910547978.2A
Publication of CN110276300A
Application granted
Publication of CN110276300B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06F 18/2155: Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 20/00: Scenes; Scene-specific elements
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20032: Median filtering
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a method and a device for identifying garbage quality. One embodiment of the method comprises: acquiring an image sequence of garbage to be identified; recognizing images in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified, to obtain a recognition result of the garbage to be identified, wherein the recognition result comprises information on preset target objects present in the garbage to be identified; and analyzing and counting the recognition result of the garbage to be identified to generate a quality result of the garbage to be identified. The embodiment relates to the field of cloud computing; by automatically identifying the preset target objects present in the garbage with the deep learning model, the efficiency of identifying garbage quality is improved.

Description

Method and device for identifying quality of garbage
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for identifying garbage quality.
Background
The amount of domestic garbage discharged is increasing day by day, and its composition is complex and varied, with characteristics such as being polluting, having resource value and having social impact. Dry-wet garbage classification is a simple and practical classification mode proposed for the national conditions of China, where kitchen garbage and fruit-peel garbage account for a high proportion of domestic household garbage and have a high moisture content, which is unfavorable to garbage recovery and final disposal. 'Dry-wet classification' divides residents' household garbage into wet garbage and dry garbage. After wet garbage is collected, it can be composted, anaerobically digested or used for preparing biofuel. After dry garbage is collected, workers pick out usable materials and the remaining garbage is landfilled or incinerated. The percentage of the weight of water contained in dry garbage to the total weight of the garbage is called the water content, which is one of the important reference indexes for measuring the quality of dry garbage. The content of non-degradable or slowly degradable plastic bags, bottles, pop cans and the like in wet garbage is one of the important reference indexes for measuring the quality of wet garbage.
The purposes of garbage treatment are harmlessness, resource recovery and reduction. Therefore, it is very important to adopt different treatment methods for garbage of different quality.
The existing method for measuring the quality of dry and wet garbage mainly relies on manual sampling inspection: the garbage is randomly sampled and inspected, and its quality is judged from the sampling results.
Disclosure of Invention
The embodiment of the application provides a method and a device for identifying garbage quality.
In a first aspect, an embodiment of the present application provides a method for identifying garbage quality, including: acquiring an image sequence of the garbage to be identified; recognizing images in the image sequence of the garbage to be recognized by utilizing a pre-trained deep learning model corresponding to the category of the garbage to be recognized to obtain a recognition result of the garbage to be recognized, wherein the recognition result of the garbage to be recognized comprises information of a preset target object existing in the garbage to be recognized; and analyzing and counting the identification result of the garbage to be identified to generate a quality result of the garbage to be identified.
In some embodiments, the deep learning model includes a feature extraction network, a classification network, and a regression network.
In some embodiments, the sequence of images of the trash to be identified is a plurality of images acquired during the dumping or shipping of the trash to be identified.
In some embodiments, before identifying an image in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified, the method further includes: and preprocessing the images in the image sequence of the garbage to be identified by using an image processing method.
In some embodiments, pre-processing the images in the image sequence of the garbage to be identified using an image processing method includes: classifying empty background images from the image sequence of the garbage to be identified; and for an image other than an empty background image in the image sequence of the garbage to be identified, taking the empty background image as the background of the image, detecting the region of the garbage to be identified in the image using a moving object detection algorithm, and setting the pixel values outside the detected region in the image to a preset value.
In some embodiments, pre-processing the images in the image sequence of the garbage to be identified using an image processing method includes: for an image in the image sequence of the garbage to be identified, moving a preset image interception frame on the image along a preset moving direction according to a preset step length to intercept a plurality of sub-images; and for a sub-image in the plurality of sub-images, eliminating isolated noise points of the sub-image using a median filtering method.
In some embodiments, after recognizing the images in the image sequence of the garbage to be recognized by using the pre-trained deep learning model corresponding to the category of the garbage to be recognized to obtain the recognition result of the garbage to be recognized, the method further includes: for an image in the image sequence of the garbage to be identified, if a preset target object exists in the image, framing the preset target object in the image with a target frame as a marked target object; selecting, from the image sequence of the garbage to be identified, a first preset number of images before the image and a second preset number of images after the image; performing target tracking in the selected images based on the marked target object; and updating the identification result of the garbage to be identified based on the tracking result.
In some embodiments, updating the identification result of the garbage to be identified based on the tracking result comprises: counting, among the selected images, the number of images in which the marked target object is tracked; if the number of images in which the marked target object is tracked exceeds a third preset number, retaining the information of the marked target object in the identification result of the garbage to be identified; and if the number of images in which the marked target object is tracked is less than the third preset number, deleting the information of the marked target object from the identification result of the garbage to be identified.
In some embodiments, the deep learning model is trained by: acquiring a training sample set, wherein training samples in the training sample set comprise sample garbage images and sample garbage labeling images, and the sample garbage labeling images are images obtained by labeling preset target objects existing in the sample garbage images; and for the training samples in the training sample set, taking the sample garbage images in the training samples as input, taking the sample garbage labeled images in the training samples as output, and training to obtain the deep learning model.
In some embodiments, after obtaining the training sample set, the method further comprises: for the unlabeled training samples in the training sample set, generating low-entropy guessed labels for the unlabeled training samples through data augmentation and label guessing; and mixing (MixUp) the unlabeled training samples with the corresponding labeled training samples to augment the training sample set.
In a second aspect, an embodiment of the present application provides an apparatus for identifying garbage quality, including: an acquisition unit configured to acquire a sequence of images of the trash to be identified; the recognition unit is configured to recognize images in the image sequence of the garbage to be recognized by utilizing a pre-trained deep learning model corresponding to the category of the garbage to be recognized, so as to obtain a recognition result of the garbage to be recognized, wherein the recognition result of the garbage to be recognized comprises information of a preset target object existing in the garbage to be recognized; and the statistical unit is configured to analyze and count the recognition result of the garbage to be recognized and generate a quality result of the garbage to be recognized.
In some embodiments, the deep learning model includes a feature extraction network, a classification network, and a regression network.
In some embodiments, the sequence of images of the trash to be identified is a plurality of images acquired during the dumping or shipping of the trash to be identified.
In some embodiments, the apparatus further comprises: a processing unit configured to pre-process the images in the image sequence of the garbage to be identified using an image processing method.
In some embodiments, the processing unit is further configured to: classifying empty background images from an image sequence of the garbage to be identified; for an image except for an empty background image in an image sequence of the garbage to be identified, the empty background image is used as the background of the image, a moving object detection algorithm is used for detecting the area of the garbage to be identified in the image, and the pixel value except for the detected area in the image is set to be a preset value.
In some embodiments, the processing unit is further configured to: for an image in an image sequence of the garbage to be identified, moving a preset image interception frame on the image along a preset moving direction according to a preset step length, and intercepting a plurality of sub-images; and for the sub-image in the plurality of sub-images, eliminating the isolated noise point of the sub-image by using a median filtering method.
In some embodiments, the apparatus further comprises: a marking unit configured to, for an image in the image sequence of the garbage to be identified, frame a preset target object in the image with a target frame as a marked target object if the preset target object exists in the image; a selection unit configured to select, from the image sequence of the garbage to be identified, a first preset number of images before the image and a second preset number of images after the image; a tracking unit configured to perform target tracking in the selected images based on the marked target object; and an updating unit configured to update the identification result of the garbage to be identified based on the tracking result.
In some embodiments, the updating unit is further configured to: count, among the selected images, the number of images in which the marked target object is tracked; if the number of images in which the marked target object is tracked exceeds a third preset number, retain the information of the marked target object in the identification result of the garbage to be identified; and if the number of images in which the marked target object is tracked is less than the third preset number, delete the information of the marked target object from the identification result of the garbage to be identified.
In some embodiments, the deep learning model is trained by: acquiring a training sample set, wherein training samples in the training sample set comprise sample garbage images and sample garbage labeling images, and the sample garbage labeling images are images obtained by labeling preset target objects existing in the sample garbage images; and for the training samples in the training sample set, taking the sample garbage images in the training samples as input, taking the sample garbage labeled images in the training samples as output, and training to obtain the deep learning model.
In some embodiments, after obtaining the training sample set, the training further comprises: for the unlabeled training samples in the training sample set, generating low-entropy guessed labels for the unlabeled training samples through data augmentation and label guessing; and mixing (MixUp) the unlabeled training samples with the corresponding labeled training samples to augment the training sample set.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the method and the device for identifying the quality of garbage provided by the embodiments of the application, an image sequence of the garbage to be identified is first obtained; then, images in the image sequence of the garbage to be recognized are input into a pre-trained deep learning model to obtain a recognition result of the garbage to be recognized; and finally, the recognition result of the garbage to be recognized is analyzed and counted to generate a quality result of the garbage to be recognized. Preset target objects present in the garbage are automatically recognized by the deep learning model, so that the efficiency of identifying garbage quality is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for identifying garbage quality according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of a method for identifying garbage quality according to the present application;
FIG. 4 is a flow diagram of another embodiment of a method for identifying garbage quality according to the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method for identifying garbage quality according to the present application;
FIG. 6 is a schematic block diagram of one embodiment of an apparatus for identifying garbage quality according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for identifying garbage quality or apparatus for identifying garbage quality can be applied.
As shown in fig. 1, a system architecture 100 may include an image pickup apparatus 101, a network 102, and a server 103. The network 102 is a medium to provide a communication link between the image pickup apparatus 101 and the server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The image pickup apparatus 101 can interact with the server 103 via the network 102 to receive or transmit messages and the like. The image pickup apparatus 101 may be hardware or software. When the image pickup apparatus 101 is hardware, it may be any of various electronic apparatuses that support an image or video shooting function, including but not limited to cameras, smart phones and tablet computers. When the image pickup apparatus 101 is software, it can be installed in the above-described electronic apparatuses, and it may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not particularly limited herein.
The server 103 may provide various services. For example, the server 103 may perform processing such as analysis on acquired data such as the image sequence of the garbage to be identified, and generate a processing result (e.g., a quality result of the garbage to be identified).
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for identifying the quality of garbage provided by the embodiments of the present application is generally executed by the server 103, and accordingly, the apparatus for identifying the quality of garbage is generally disposed in the server 103.
It should be understood that the number of camera devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of image capture devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for identifying garbage quality in accordance with the present application is illustrated. The method for identifying the quality of garbage comprises the following steps:
step 201, acquiring an image sequence of the garbage to be identified.
In the present embodiment, an executing body (e.g., the server 103 shown in fig. 1) of the method for identifying garbage quality may acquire an image sequence of the garbage to be identified from a photographing device (e.g., the photographing device 101 shown in fig. 1) to which it is communicatively connected. Generally, the photographing device supports an image or video shooting function. Thus, the photographing device can shoot the garbage to be identified in one continuous burst to obtain the image sequence of the garbage to be identified. Alternatively, the photographing device can shoot a video of the garbage to be identified and select a plurality of video frames from the video to generate the image sequence of the garbage to be identified.
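As an illustrative sketch only (not part of the original disclosure), the frame-selection variant can be approximated with OpenCV as follows; the sampling interval is an assumed parameter.

```python
# Illustrative sketch: build an image sequence by sampling every n-th frame
# from a video of the garbage being dumped. The interval is an assumption.
import cv2

def sample_frames(video_path, every_n_frames=10):
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```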
In some optional implementations of the embodiment, the image sequence of the garbage to be identified may be a plurality of images acquired during the dumping or shipping process of the garbage to be identified. Thus, the image sequence of the garbage to be identified records the dynamic moving process of the garbage to be identified.
Step 202, identifying the images in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified to obtain the identification result of the garbage to be identified.
In this embodiment, the executing body may recognize the images in the image sequence of the garbage to be recognized by using a pre-trained deep learning model corresponding to the category of the garbage to be recognized, so as to obtain a recognition result of the garbage to be recognized.
In some optional implementation manners of this embodiment, the executing body may directly input the images in the image sequence of the garbage to be recognized into the pre-trained deep learning model corresponding to the category of the garbage to be recognized, and obtain the recognition result of the garbage to be recognized as output.
In some optional implementation manners of this embodiment, the executing body may also preprocess the images in the image sequence of the garbage to be recognized using an image processing method. Subsequently, the executing body may input the preprocessed images in the image sequence into the pre-trained deep learning model corresponding to the category of the garbage to be recognized, and obtain the recognition result of the garbage to be recognized as output.
Here, the deep learning model may be used to identify a preset target object existing in the garbage, and represent a correspondence between an image sequence of the garbage and an identification result of the garbage. In general, the recognition result of the spam to be recognized may include information of a preset target object present in the spam to be recognized. The information of the preset target object may include, but is not limited to, a preset target object existing in the trash to be recognized, a position of the preset target object, a category of the preset target object, a number of the preset target object, an area of the preset target object, a content of the preset target object, and the like.
Generally, domestic garbage of residents can be classified into wet garbage and dry garbage. After the wet garbage is collected, the wet garbage can be composted and anaerobically digested or used for preparing biofuel. The quality of the wet garbage can be seriously influenced by plastic bags, plastic bottles, pop cans and the like which are not degradable or have long degradation time in the wet garbage. Therefore, for wet garbage, plastic bags, plastic bottles, pop cans, etc., which are not degradable or have a long degradation time, may be preset targets. The deep learning model corresponding to the wet garbage can be used for identifying preset target objects such as non-degradable plastic bags, plastic bottles, pop cans and the like which exist in the wet garbage and have long degradation time. After the dry garbage is collected, the available substances are selected by workers, and the residual garbage is subjected to landfill and incineration disposal. The water flow in the dry waste can seriously affect the quality of the dry waste. Thus, for dry waste, a water stream or the like may be a preset target. The deep learning model corresponding to the dry garbage can be used for identifying preset target objects such as water flow and the like in the dry garbage.
In this embodiment, the deep learning model may be obtained by performing supervised training on an existing machine learning model using a machine learning method and training samples. In general, the executing body may train the deep learning model with a model training engine; optionally, the deep learning model includes a feature extraction network, a classification network and a regression network. The feature extraction network can be used to extract features of preset target objects present in the garbage. The classification network can be used to identify the categories of preset target objects present in the garbage. The regression network can be used to detect the positions of preset target objects present in the garbage.
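Purely as a non-limiting illustration of such a three-part model (this is not the network disclosed in the application), an off-the-shelf detector such as torchvision's Faster R-CNN combines a feature-extraction backbone with classification and box-regression heads; the class list below is an assumption for illustration only.

```python
# Illustrative sketch, not the patent's actual network: a Faster R-CNN detector
# combines a feature-extraction backbone with classification and box-regression
# heads. The class count (background + 3 assumed target classes) is illustrative.
import torch
import torchvision

NUM_CLASSES = 4  # background + plastic bag + plastic bottle + pop can (assumed)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=NUM_CLASSES)
model.eval()

# Inference on one image: a CHW float tensor with values in [0, 1].
image = torch.rand(3, 480, 640)
with torch.no_grad():
    prediction = model([image])[0]
# prediction['boxes'], prediction['labels'] and prediction['scores'] give the
# positions, categories and confidences of detected preset target objects.
```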
In some optional implementations of the present embodiment, the deep learning model may be trained by the following steps:
first, a set of training samples is obtained.
Here, the training samples in the training sample set may include sample garbage images and sample garbage annotation images. A sample garbage annotation image may be an image obtained by labeling the preset target objects present in the corresponding sample garbage image.
Generally, if a deep learning model corresponding to wet garbage is to be trained, the sample garbage image may be an image obtained by photographing wet garbage, and the sample garbage annotation image may be an image obtained by labeling preset target objects, such as non-degradable or slowly degradable plastic bags, plastic bottles and pop cans, present in the sample garbage image. If a deep learning model corresponding to dry garbage is to be trained, the sample garbage image may be an image obtained by photographing dry garbage, and the sample garbage annotation image may be an image obtained by labeling preset target objects, such as water flow, present in the sample garbage image.
Then, for the training samples in the training sample set, taking the sample garbage images in the training samples as input, taking the sample garbage labeling images in the training samples as output, and training to obtain the deep learning model.
Here, the execution subject may first initialize the deep learning model, setting parameters of the deep learning model to initial values; and then training the deep learning model by utilizing the training sample set. In the training process, the parameters of the deep learning model are continuously adjusted until the recognition effect of the deep learning model meets the preset constraint condition.
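A minimal training-loop sketch under the same assumptions (the detector sketched above and a data loader that yields images together with the boxes and labels derived from the sample garbage annotation images are assumed to exist) might look like this:

```python
# Illustrative sketch: supervised training with annotated sample garbage images.
# `model` is the detector sketched above; `train_loader` is an assumed DataLoader
# yielding (images, targets), where each target holds the 'boxes' and 'labels'
# derived from a sample garbage annotation image.
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
for epoch in range(10):                      # number of epochs is an assumption
    for images, targets in train_loader:
        loss_dict = model(images, targets)   # classification + regression losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```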
In practice, the form of the water flow appearing in dry garbage varies widely. Therefore, for the unlabeled training samples in the training sample set, the executing body may first use a semi-supervised learning method such as MixMatch: low-entropy guessed labels are generated for the unlabeled training samples through data augmentation and label guessing, and the unlabeled training samples are then mixed (MixUp) with the corresponding labeled training samples to augment the training sample set. In this way, the dependence of the algorithm on a large set of labeled training samples is greatly reduced.
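A heavily simplified sketch of the label-guessing and mixing idea (condensed from MixMatch; the temperature and Beta parameters are assumptions, and real implementations operate on batches of model predictions) is given below:

```python
# Illustrative, heavily simplified MixMatch-style sketch: guess a label for an
# unlabelled sample, sharpen it to low entropy, then MixUp it with a labelled one.
import numpy as np

def sharpen(p, temperature=0.5):
    """Lower the entropy of a guessed class distribution p (sums to 1)."""
    p = p ** (1.0 / temperature)
    return p / p.sum()

def mixup(x1, y1, x2, y2, alpha=0.75):
    """Convex combination of two samples and their labels."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)   # keep the result closer to the first sample
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# guessed = sharpen(mean of the model's predictions over several augmentations of
# the unlabelled image); the (image, guessed) pair is then mixed with a labelled pair.
```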
And 203, analyzing and counting the identification result of the garbage to be identified to generate a quality result of the garbage to be identified.
In this embodiment, the executing body may analyze and count the recognition result of the garbage to be recognized to generate a quality result of the garbage to be recognized. Generally, the executing body may count the content of the preset target objects present in the garbage to be identified and determine the quality of the garbage to be identified according to that content. Generally, the higher the content of the preset target objects, the lower the quality of the garbage to be identified; conversely, the lower the content, the higher the quality.
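Purely as an illustration with assumed thresholds, such a statistic could be the fraction of frames in which any preset target object was detected:

```python
# Illustrative sketch with assumed thresholds: use the share of frames in which
# any preset target object was detected as a proxy for its content.
def quality_result(recognition_results, low=0.05, high=0.20):
    frames_with_target = sum(1 for r in recognition_results if r['objects'])
    ratio = frames_with_target / max(len(recognition_results), 1)
    if ratio < low:
        return 'high quality'
    if ratio < high:
        return 'medium quality'
    return 'low quality'
```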
In addition, the execution main body can also send the quality result of the garbage to be identified to the garbage quality display platform to be subjected to manual review. After the manual review, the garbage to be identified is processed by adopting a corresponding processing method according to the review evaluation result. Meanwhile, the garbage treatment management platform can also record at least one of an image sequence, an identification result, a quality result, an audit evaluation result and a treatment method of the garbage to be identified.
In practice, the corresponding treatment methods of the garbage with different qualities are different. For example, if the waste to be identified is wet waste of high quality, the waste to be identified is generally composted directly using microorganisms. For another example, if the garbage to be identified is wet garbage with low quality, plastic bags, plastic bottles, pop cans, etc. which are not degradable or have long degradation time are generally first sorted out from the garbage to be identified, and then the remaining wet garbage is composted using microorganisms.
The method for identifying the quality of the garbage comprises the steps of firstly obtaining an image sequence of the garbage to be identified; then, inputting images in the image sequence of the garbage to be recognized into a pre-trained deep learning model to obtain a recognition result of the garbage to be recognized; and finally, analyzing and counting the identification result of the garbage to be identified so as to generate a quality result of the garbage to be identified. The preset target object existing in the garbage is automatically identified by utilizing the deep learning model, so that the garbage quality identification efficiency is improved.
With further reference to FIG. 3, a flow 300 of yet another embodiment of a method for identifying garbage quality in accordance with the present application is illustrated. The method for identifying the quality of garbage comprises the following steps:
step 301, acquiring an image sequence of the garbage to be identified.
In the present embodiment, an executing body (e.g., the server 103 shown in fig. 1) of the method for identifying garbage quality may acquire an image sequence of the garbage to be identified from a photographing device (e.g., the photographing device 101 shown in fig. 1) to which it is communicatively connected. The image sequence of the garbage to be identified can be a plurality of images acquired during the dumping or shipping of the garbage to be identified. Thus, the image sequence of the garbage to be identified records the dynamic moving process of the garbage to be identified.
Step 302, classifying empty background images from the image sequence of the garbage to be identified.
In this embodiment, the executing body may classify empty background images from the image sequence of the garbage to be identified.
Typically, the photographing device begins capturing images before the garbage to be identified is dumped or shipped. Therefore, the image sequence acquired by the photographing device includes images in which the garbage to be identified is not present; these images are empty background images. Here, the executing body may distinguish the empty background images in the image sequence of the garbage to be identified using an unsupervised one-class classification algorithm such as ALOCC (Adversarially Learned One-Class Classifier for Novelty Detection, CVPR 2018, the 2018 Computer Vision and Pattern Recognition conference).
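ALOCC itself is a trained adversarial model; purely for illustration, a far simpler stand-in that compares each frame with a reference empty frame is sketched below (the threshold is an assumption, and this is not the algorithm named above):

```python
# Illustrative stand-in for the one-class classification step (NOT ALOCC):
# flag a frame as empty background when its mean absolute difference from a
# reference empty frame is below an assumed threshold.
import cv2
import numpy as np

def is_empty_background(frame, empty_reference, threshold=8.0):
    a = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(empty_reference, cv2.COLOR_BGR2GRAY)
    return float(np.mean(cv2.absdiff(a, b))) < threshold
```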
Step 303, regarding the images except the empty background image in the image sequence of the garbage to be identified, taking the empty background image as the background of the image, and detecting the region of the garbage to be identified in the image by using a moving object detection algorithm.
In this embodiment, for an image other than the empty background image in the image sequence of the garbage to be identified, the executing body may detect the region of the garbage to be identified in the image by using a moving object detection algorithm, with the empty background image as the background of the image. Here, the moving object detection algorithm may include, but is not limited to, at least one of: an inter-frame difference method, a three-frame difference method, a background subtraction method and an optical flow method.
In step 304, the pixel values of the image except the detected region are set to preset values.
In the present embodiment, the execution subject described above may set pixel values in the image other than the detected region to a preset value. For example, the execution subject may set the pixel value of the image other than the detected region to 0 to eliminate the influence of the background in the image on the subsequent recognition.
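An illustrative sketch of steps 302 to 304 using frame differencing against the empty background image (one of the moving object detection options listed above) might look as follows; the threshold and morphology parameters are assumptions:

```python
# Illustrative sketch of steps 302-304: detect the garbage region by differencing
# the image against the empty background, then set pixels outside that region to 0.
import cv2
import numpy as np

def mask_garbage_region(image, empty_background, diff_threshold=25):
    gray_img = cv2.GaussianBlur(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    gray_bg = cv2.GaussianBlur(cv2.cvtColor(empty_background, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(gray_img, gray_bg)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    masked = image.copy()
    masked[mask == 0] = 0   # pixel values outside the detected region -> preset value 0
    return masked
```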
And 305, recognizing the images in the processed image sequence of the garbage to be recognized by using a pre-trained deep learning model corresponding to the category of the garbage to be recognized, and obtaining a recognition result of the garbage to be recognized.
And step 306, analyzing and counting the identification result of the garbage to be identified to generate a quality result of the garbage to be identified.
In the present embodiment, the specific operations of step 305-306 have been described in detail in step 202-203 in the embodiment shown in fig. 2, and are not described herein again.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 2, the flow 300 of the method for identifying garbage quality in the present embodiment adds an image preprocessing step. The scheme described in this embodiment therefore eliminates the influence of the image background on subsequent recognition and performs image recognition only on the region where the garbage appears, which alleviates false recognition caused by scene variability in garbage recognition scenes, makes the deep learning model insensitive to complicated and changeable garbage recognition scenes, and further improves the accuracy of the model recognition result.
With further reference to FIG. 4, a flow 400 of another embodiment of a method for identifying garbage quality in accordance with the present application is illustrated. The method for identifying the quality of garbage comprises the following steps:
step 401, obtaining an image sequence of the garbage to be identified.
In the present embodiment, an executing body (e.g., the server 103 shown in fig. 1) of the method for identifying garbage quality may acquire an image sequence of the garbage to be identified from a photographing device (e.g., the photographing device 101 shown in fig. 1) to which it is communicatively connected. The image sequence of the garbage to be identified can be a plurality of images acquired during the dumping or shipping of the garbage to be identified. Thus, the image sequence of the garbage to be identified records the dynamic moving process of the garbage to be identified.
Step 402, for an image in the image sequence of the garbage to be identified, moving a preset image interception frame on the image along a preset moving direction according to a preset step length, and intercepting a plurality of sub-images.
In this embodiment, for an image in the image sequence of the garbage to be recognized, the executing body may move the preset image interception frame on the image along a preset moving direction according to a preset step length to intercept a plurality of sub-images. For example, the executing body may move the frame on the image from left to right and from top to bottom by the preset step length to intercept the plurality of sub-images. The preset image interception frame may be a frame of fixed size.
In step 403, for the sub-image in the multiple sub-images, the median filtering method is used to eliminate the isolated noise point of the sub-image.
In this embodiment, for a sub-image in a plurality of sub-images, the executing entity may eliminate an isolated noise point of the sub-image by using a median filtering method to eliminate an influence of noise in the sub-image on subsequent recognition.
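An illustrative sketch of steps 402 and 403 with OpenCV (the window size, step length and filter kernel size are assumptions):

```python
# Illustrative sketch of steps 402-403: move a fixed-size crop window across the
# image left-to-right, top-to-bottom, then median-filter each sub-image to remove
# isolated noise points.
import cv2
import numpy as np

def crop_and_denoise(image, window=512, step=256, kernel_size=3):
    sub_images = []
    height, width = image.shape[:2]
    for top in range(0, max(height - window, 0) + 1, step):
        for left in range(0, max(width - window, 0) + 1, step):
            sub = np.ascontiguousarray(image[top:top + window, left:left + window])
            sub_images.append(cv2.medianBlur(sub, kernel_size))
    return sub_images
```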
And step 404, recognizing the images in the processed image sequence of the garbage to be recognized by using a pre-trained deep learning model corresponding to the category of the garbage to be recognized to obtain a recognition result of the garbage to be recognized.
And 405, analyzing and counting the identification result of the garbage to be identified to generate a quality result of the garbage to be identified.
In the present embodiment, the specific operations of step 404 and step 405 have been described in detail in step 202 and step 203 in the embodiment shown in fig. 2, and are not described herein again.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for identifying garbage quality in the present embodiment adds an image preprocessing step. In the scheme described in this embodiment, the image is divided into sub-images and the preset target object is identified using a semantic segmentation algorithm, which alleviates the problem of the preset target object being severely occluded in the image and further improves the accuracy of the model recognition result.
With further reference to FIG. 5, a flow 500 of yet another embodiment of a method for identifying garbage quality in accordance with the present application is illustrated. The method for identifying the quality of garbage comprises the following steps:
step 501, acquiring an image sequence of the garbage to be identified.
In the present embodiment, an executing body (e.g., the server 103 shown in fig. 1) of the method for identifying garbage quality may acquire an image sequence of the garbage to be identified from a photographing device (e.g., the photographing device 101 shown in fig. 1) to which it is communicatively connected. The image sequence of the garbage to be identified can be a plurality of images acquired during the dumping or shipping of the garbage to be identified. Thus, the image sequence of the garbage to be identified records the dynamic moving process of the garbage to be identified.
And 502, identifying the images in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified to obtain the identification result of the garbage to be identified.
In this embodiment, the specific operation of step 502 has been described in detail in step 202 in the embodiment shown in fig. 2, and is not described herein again.
Step 503, for an image in the image sequence of the garbage to be recognized, if a preset target object exists in the image, framing the preset target object in the image by using a target frame as a mark target object.
In this embodiment, for an image in the image sequence of the garbage to be recognized, if a preset target object exists in the image, the executing body may frame the preset target object in the image with a target frame as a marked target object. The target frame may be the minimum frame surrounding the preset target object.
Step 504, selecting a first preset number of images before and a second preset number of images after the image from the image sequence of the garbage to be identified.
In this embodiment, the executing body may select, from the image sequence of the garbage to be identified, a first preset number of images before the image and a second preset number of images after the image. For example, the executing body may select the 3 images before and the 3 images after the image from the image sequence of the garbage to be identified.
And 505, performing target tracking in the selected image based on the marked target object.
In this embodiment, the executing body may perform target tracking in the selected images based on the marked target object to obtain a tracking result. Here, the target tracking algorithm may include, but is not limited to, CACF (Context-Aware Correlation Filter Tracking), KCF (High-Speed Tracking with Kernelized Correlation Filters, PAMI 2015), SiameseFC (Fully-Convolutional Siamese Networks for Object Tracking), C-COT (Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking), HCF (Hierarchical Convolutional Features for Visual Tracking) and the like. The tracking result may include information on the images, among the selected images, in which the marked target object is tracked.
And step 506, updating the identification result of the garbage to be identified based on the tracking result.
In this embodiment, the executing body may update the recognition result of the garbage to be recognized based on the tracking result. Generally, the executing body may count, among the selected images, the number of images in which the marked target object is tracked; if the number of images in which the marked target object is tracked exceeds a third preset number (for example, 2 images), the information of the marked target object in the identification result of the garbage to be identified is retained; and if the number of images in which the marked target object is tracked is less than the third preset number, the information of the marked target object is deleted from the identification result of the garbage to be identified.
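An illustrative sketch of steps 503 to 506 built on OpenCV's KCF tracker (one of the trackers listed above; it requires the opencv-contrib-python package, and min_hits = 2 is an assumed value for the third preset number):

```python
# Illustrative sketch of steps 503-506: the detected box is tracked into the
# neighbouring frames, and the detection is kept only when the marked target
# object can be followed in more than `min_hits` of them.
import cv2

def confirm_detection(detection_frame, box, neighbour_frames, min_hits=2):
    """box is (x, y, w, h); neighbour_frames are e.g. the 3 frames before and after."""
    hits = 0
    for frame in neighbour_frames:
        tracker = cv2.TrackerKCF_create()      # re-initialised per neighbour frame
        tracker.init(detection_frame, tuple(int(v) for v in box))
        ok, _ = tracker.update(frame)
        if ok:
            hits += 1
    return hits > min_hits   # True: keep the detection; False: delete it
```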
And 507, analyzing and counting the identification result of the garbage to be identified to generate a quality result of the garbage to be identified.
In this embodiment, the specific operation of step 507 has been described in detail in step 203 in the embodiment shown in fig. 2, and is not described herein again.
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 2, the flow 500 of the method for identifying garbage quality in the present embodiment adds a target tracking step. The scheme described in this embodiment updates the recognition result of the garbage to be recognized based on the tracking result, which alleviates missed detections caused by image brightness changes during image capture, by brightness changes due to weather and illumination intensity, and by slight occlusion of the target object during dumping or shipping. Missed detections and false detections are thus reduced to a certain extent, and the recall rate and accuracy of the model recognition result are improved.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for identifying quality of garbage, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus can be applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for identifying quality of garbage of the present embodiment may include: an acquisition unit 601, a recognition unit 602, and a statistics unit 603. The acquiring unit 601 is configured to acquire an image sequence of the garbage to be identified; the identifying unit 602 is configured to identify images in an image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified, so as to obtain an identifying result of the garbage to be identified, wherein the identifying result of the garbage to be identified comprises information of a preset target object existing in the garbage to be identified; the statistical unit 603 is configured to analyze and count the recognition result of the garbage to be recognized, and generate a quality result of the garbage to be recognized.
In the present embodiment, in the apparatus 600 for identifying garbage quality: the specific processing of the obtaining unit 601, the identifying unit 602, and the counting unit 603 and the technical effects thereof can refer to the related descriptions of step 201, step 202, and step 203 in the corresponding embodiment of fig. 2, which are not repeated herein.
In some optional implementations of this embodiment, the deep learning model includes a feature extraction network, a classification network, and a regression network.
In some optional implementations of the embodiment, the image sequence of the garbage to be identified is a plurality of images acquired during the dumping or shipping process of the garbage to be identified.
In some optional implementations of this embodiment, the apparatus 600 for identifying quality of garbage further includes: a processing unit (not shown in the figures) configured to pre-process the images in the image sequence of the garbage to be identified using an image processing method.
In some optional implementations of this embodiment, the processing unit is further configured to: classifying empty background images from an image sequence of the garbage to be identified; for an image except for an empty background image in an image sequence of the garbage to be identified, the empty background image is used as the background of the image, a moving object detection algorithm is used for detecting the area of the garbage to be identified in the image, and the pixel value except for the detected area in the image is set to be a preset value.
In some optional implementations of this embodiment, the processing unit is further configured to: for an image in an image sequence of the garbage to be identified, moving a preset image interception frame on the image along a preset moving direction according to a preset step length, and intercepting a plurality of sub-images; and for the sub-image in the plurality of sub-images, eliminating the isolated noise point of the sub-image by using a median filtering method.
In some optional implementations of this embodiment, the apparatus 600 for identifying quality of garbage further includes: a marking unit (not shown in the figure) configured to frame, for an image in the image sequence of the to-be-identified trash, a preset target object in the image by using a target frame as a marking target object if the preset target object exists in the image; a selection unit (not shown in the figures) configured to select a first preset number of images before and a second preset number of images after the image from the sequence of images of the trash to be identified; a tracking unit (not shown in the figure) configured to perform target tracking in the selected image based on the marking target; an updating unit (not shown in the figure) configured to update the identification result of the spam to be identified based on the tracking result.
In some optional implementations of this embodiment, the update unit is further configured to: counting the number of images of the marked target object in the selected images; if the number of the tracked images of the marked target object exceeds a third preset number, information of the marked target object in the identification result of the garbage to be identified is reserved; and if the number of the images of the marked target object is tracked to be less than a third preset number, deleting the information of the marked target object in the identification result of the garbage to be identified.
In some optional implementations of the present embodiment, the deep learning model is trained by the following steps: acquiring a training sample set, wherein training samples in the training sample set comprise sample garbage images and sample garbage labeling images, and the sample garbage labeling images are images obtained by labeling preset target objects existing in the sample garbage images; and for the training samples in the training sample set, taking the sample garbage images in the training samples as input, taking the sample garbage labeled images in the training samples as output, and training to obtain the deep learning model.
In some optional implementations of this embodiment, after obtaining the training sample set, the training further includes: for the unlabeled training samples in the training sample set, generating low-entropy guessed labels for the unlabeled training samples through data augmentation and label guessing; and mixing (MixUp) the unlabeled training samples with the corresponding labeled training samples to augment the training sample set.
Referring now to FIG. 7, a block diagram of a computer system 700 suitable for use in implementing an electronic device (e.g., server 103 shown in FIG. 1) of an embodiment of the present application is shown. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or electronic device. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an acquisition unit, a recognition unit, and a statistics unit. The names of these units do not, in this case, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires an image sequence of the garbage to be identified".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an image sequence of the garbage to be identified; recognize the images in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified to obtain a recognition result of the garbage to be identified, wherein the recognition result comprises information on a preset target object present in the garbage to be identified; and analyze and perform statistics on the recognition result of the garbage to be identified to generate a quality result of the garbage to be identified.
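For illustration only, the following Python sketch shows the overall flow just described: every frame of the image sequence is passed to a detector, detections of preset target objects are aggregated, and a simple quality summary is produced. The `Detector` callable, the 0.5 confidence threshold, and the "clean"/"impurities detected" rule are hypothetical placeholders, not the claimed implementation.

```python
from typing import Callable, Dict, List, Sequence

import numpy as np

# A detector maps one frame of the image sequence to a list of detections,
# e.g. [{"label": "plastic_bag", "score": 0.93}, ...].  (Assumed interface.)
Detector = Callable[[np.ndarray], List[Dict]]


def assess_garbage_quality(frames: Sequence[np.ndarray], detector: Detector) -> Dict:
    """Run the detector on every frame, then aggregate detections of preset
    target objects into a simple quality summary (illustrative rule only)."""
    counts: Dict[str, int] = {}
    for frame in frames:
        for det in detector(frame):
            if det.get("score", 0.0) >= 0.5:  # illustrative confidence threshold
                counts[det["label"]] = counts.get(det["label"], 0) + 1

    # Toy rule: the fewer preset target objects (impurities) found, the better.
    quality = "clean" if not counts else "impurities detected"
    return {"object_counts": counts, "quality": quality}
```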
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A method for identifying garbage quality, comprising:
acquiring an image sequence of the garbage to be identified;
for each image in the image sequence of the garbage to be identified, moving a preset image capture frame over the image along a preset moving direction by a preset step length to capture a plurality of sub-images, and removing isolated noise points from each sub-image of the plurality of sub-images by using a median filtering method;
recognizing the images in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified to obtain a recognition result of the garbage to be identified, wherein the recognition result comprises information on a preset target object present in the garbage to be identified;
and analyzing and performing statistics on the recognition result of the garbage to be identified to generate a quality result of the garbage to be identified.
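A minimal sketch of the denoising step in claim 1, assuming OpenCV and NumPy are available: a fixed-size capture frame is slid over the image along a preset direction with a preset step, and each cropped sub-image is median-filtered to remove isolated noise points. The window size, step length, and kernel size below are illustrative values, not those specified by the patent.

```python
from typing import List

import cv2
import numpy as np


def denoise_sub_images(image: np.ndarray,
                       win: int = 256,    # illustrative capture-frame size
                       step: int = 128,   # illustrative step length
                       ksize: int = 5) -> List[np.ndarray]:
    """Slide a capture frame over the image and median-filter each crop."""
    h, w = image.shape[:2]
    sub_images = []
    for y in range(0, max(h - win, 0) + 1, step):      # preset direction: top to bottom,
        for x in range(0, max(w - win, 0) + 1, step):  # then left to right
            crop = image[y:y + win, x:x + win]
            # Median filtering suppresses isolated (salt-and-pepper) noise points.
            sub_images.append(cv2.medianBlur(crop, ksize))
    return sub_images
```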
2. The method of claim 1, wherein the deep learning model comprises a feature extraction network, a classification network, and a regression network.
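Claim 2 names three parts of the deep learning model; the toy PyTorch module below is one way such a model could be organized, with a small convolutional backbone as the feature extraction network, a linear classification head, and a linear box-regression head. The layer sizes and the single-box output are assumptions for illustration, not the patented network.

```python
import torch
from torch import nn


class GarbageDetector(nn.Module):
    """Toy model with the three parts named in claim 2."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Feature extraction network (a small convolutional backbone here).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classification network: which preset target object is present.
        self.classifier = nn.Linear(64, num_classes)
        # Regression network: where it is, as a single (x, y, w, h) box.
        self.regressor = nn.Linear(64, 4)

    def forward(self, x: torch.Tensor):
        feats = self.features(x)
        return self.classifier(feats), self.regressor(feats)
```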
3. The method of claim 1, wherein the image sequence of the garbage to be identified is a plurality of images acquired during a dumping or shipping process of the garbage to be identified.
4. The method of claim 3, wherein before recognizing the images in the image sequence of the garbage to be identified by using the pre-trained deep learning model corresponding to the category of the garbage to be identified, the method further comprises:
preprocessing the images in the image sequence of the garbage to be identified by using an image processing method.
5. The method of claim 4, wherein the preprocessing the images in the image sequence of the garbage to be identified by using an image processing method comprises:
classifying out empty background images from the image sequence of the garbage to be identified;
and for each image other than the empty background images in the image sequence of the garbage to be identified, taking an empty background image as the background of the image, detecting the region of the garbage to be identified in the image by using a moving object detection algorithm, and setting the pixel values outside the detected region in the image to a preset value.
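One way to realize the preprocessing in claim 5, sketched here under the assumption that an empty background frame has already been classified out of the sequence: the garbage region is found by differencing the frame against the empty background (a simple stand-in for a moving object detection algorithm), and pixels outside that region are set to a preset value. The threshold, the morphological cleanup, and the fill value are illustrative.

```python
import cv2
import numpy as np


def mask_to_garbage_region(frame: np.ndarray,
                           empty_background: np.ndarray,
                           diff_thresh: int = 30,   # illustrative threshold
                           fill_value: int = 0) -> np.ndarray:
    """Keep only the region that differs from the empty background frame."""
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(empty_background, cv2.COLOR_BGR2GRAY)

    # Frame differencing as a simple stand-in for moving object detection.
    diff = cv2.absdiff(gray_f, gray_b)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Light cleanup of the detected region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Set pixels outside the detected garbage region to the preset value.
    result = frame.copy()
    result[mask == 0] = fill_value
    return result
```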
6. The method according to claim 3, wherein after recognizing the images in the image sequence of the garbage to be identified by using the pre-trained deep learning model corresponding to the category of the garbage to be identified to obtain the recognition result of the garbage to be identified, the method further comprises:
for each image in the image sequence of the garbage to be identified, if a preset target object exists in the image, framing the preset target object in the image with a target frame to serve as a marked target object;
selecting, from the image sequence of the garbage to be identified, a first preset number of images before the image and a second preset number of images after the image;
performing target tracking in the selected images based on the marked target object;
and updating the recognition result of the garbage to be identified based on the tracking result.
7. The method of claim 6, wherein the updating the recognition result of the garbage to be identified based on the tracking result comprises:
counting the number of the selected images in which the marked target object is tracked;
if the number of images in which the marked target object is tracked exceeds a third preset number, retaining the information of the marked target object in the recognition result of the garbage to be identified;
and if the number of images in which the marked target object is tracked does not exceed the third preset number, deleting the information of the marked target object from the recognition result of the garbage to be identified.
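A sketch of the verification logic in claims 6 and 7, with the tracker left abstract: the marked target object is tracked through a first preset number of preceding frames and a second preset number of following frames, and the detection is retained only if the object is tracked in more than a third preset number of them. The `track` callable and the preset numbers below are assumptions, not values from the patent.

```python
from typing import Callable, Optional, Sequence, Tuple

import numpy as np

Box = Tuple[int, int, int, int]                        # (x, y, width, height)
TrackFn = Callable[[np.ndarray, Box], Optional[Box]]   # box in new frame, or None if lost


def confirm_detection(frames: Sequence[np.ndarray],
                      hit_index: int,
                      marked_box: Box,
                      track: TrackFn,
                      n_before: int = 3,     # first preset number (illustrative)
                      n_after: int = 3,      # second preset number (illustrative)
                      min_hits: int = 4) -> bool:  # third preset number (illustrative)
    """Track the marked target object in neighboring frames and keep the
    detection only if it is found in more than `min_hits` of them."""
    start = max(0, hit_index - n_before)
    stop = min(len(frames), hit_index + n_after + 1)

    hits = 0
    box = marked_box
    for i in range(start, stop):
        if i == hit_index:
            continue                    # skip the frame where the object was marked
        found = track(frames[i], box)   # any tracking algorithm can be plugged in here
        if found is not None:
            hits += 1
            box = found                 # follow the object as it moves
    return hits > min_hits              # True -> retain, False -> delete the detection
```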
8. The method according to any one of claims 1 to 7, wherein the deep learning model is trained by:
acquiring a training sample set, wherein the training samples in the training sample set comprise sample garbage images and labeled sample garbage images, and the labeled sample garbage images are images obtained by labeling the preset target objects present in the sample garbage images;
and for the training samples in the training sample set, training the deep learning model by taking the sample garbage images in the training samples as input and the labeled sample garbage images in the training samples as output.
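Claim 8 describes ordinary supervised training on pairs of sample garbage images and their labeled images. The generic PyTorch loop below is a sketch of such training under the assumption of per-pixel class annotations; the batch size, optimizer, and cross-entropy loss are placeholders rather than the patent's actual training recipe.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def train_deep_learning_model(model: nn.Module,
                              sample_images: torch.Tensor,  # N x C x H x W sample garbage images
                              label_maps: torch.Tensor,     # N x H x W per-pixel class indices (long)
                              epochs: int = 10,
                              lr: float = 1e-3) -> nn.Module:
    """Generic supervised loop: sample garbage images in, annotations out."""
    loader = DataLoader(TensorDataset(sample_images, label_maps),
                        batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()   # placeholder loss for per-pixel labels

    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            outputs = model(images)     # assumed to output N x num_classes x H x W scores
            loss = criterion(outputs, targets)
            loss.backward()
            optimizer.step()
    return model
```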
9. The method of claim 8, wherein after the acquiring the training sample set, the method further comprises:
for the unlabeled training samples in the training sample set, generating low-entropy labels for the unlabeled training samples by a MixUp-based label-guessing data augmentation method;
and mixing the unlabeled training samples with the corresponding labeled training samples to augment the training sample set.
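The low-entropy label guessing and sample mixing in claim 9 resemble the MixMatch approach cited against this application: the model's predictions over several augmentations of an unlabeled sample are averaged and then sharpened with a temperature to give a low-entropy guessed label, and MixUp blends unlabeled and labeled samples to augment the training set. The sketch below follows that reading; the temperature and Beta parameter are illustrative assumptions.

```python
import torch
from torch import nn


def guess_low_entropy_label(model: nn.Module,
                            augmentations: torch.Tensor,  # K augmented views of one unlabeled image
                            temperature: float = 0.5) -> torch.Tensor:
    """Average predictions over K augmentations, then sharpen (MixMatch-style)."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(augmentations), dim=1)  # K x num_classes
        mean_probs = probs.mean(dim=0)
    sharpened = mean_probs ** (1.0 / temperature)           # lower entropy than the mean
    return sharpened / sharpened.sum()


def mixup(x1: torch.Tensor, y1: torch.Tensor,
          x2: torch.Tensor, y2: torch.Tensor,
          alpha: float = 0.75):
    """Blend an unlabeled sample (with its guessed label) into a labeled one."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)   # keep the mixture closer to the first sample
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```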
10. An apparatus for identifying garbage quality, comprising:
an acquisition unit configured to acquire an image sequence of the garbage to be identified;
a denoising unit configured to, for each image in the image sequence of the garbage to be identified, move a preset image capture frame over the image along a preset moving direction by a preset step length to capture a plurality of sub-images, and remove isolated noise points from each sub-image of the plurality of sub-images by using a median filtering method;
a recognition unit configured to recognize the images in the image sequence of the garbage to be identified by using a pre-trained deep learning model corresponding to the category of the garbage to be identified to obtain a recognition result of the garbage to be identified, wherein the recognition result comprises information on a preset target object present in the garbage to be identified;
and a statistics unit configured to analyze and perform statistics on the recognition result of the garbage to be identified to generate a quality result of the garbage to be identified.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-9.
12. A computer-readable medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-9.
CN201910547978.2A 2019-06-24 2019-06-24 Method and device for identifying quality of garbage Active CN110276300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910547978.2A CN110276300B (en) 2019-06-24 2019-06-24 Method and device for identifying quality of garbage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910547978.2A CN110276300B (en) 2019-06-24 2019-06-24 Method and device for identifying quality of garbage

Publications (2)

Publication Number Publication Date
CN110276300A CN110276300A (en) 2019-09-24
CN110276300B true CN110276300B (en) 2021-12-28

Family

ID=67961724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910547978.2A Active CN110276300B (en) 2019-06-24 2019-06-24 Method and device for identifying quality of garbage

Country Status (1)

Country Link
CN (1) CN110276300B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111017429B (en) * 2019-11-20 2021-05-25 重庆特斯联智慧科技股份有限公司 Community garbage classification method and system based on multi-factor fusion
CN113051963A (en) * 2019-12-26 2021-06-29 中移(上海)信息通信科技有限公司 Garbage detection method and device, electronic equipment and computer storage medium
CN111753661B (en) * 2020-05-25 2022-07-12 山东浪潮科学研究院有限公司 Target identification method, device and medium based on neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109201514B (en) * 2017-06-30 2019-11-08 京东方科技集团股份有限公司 Waste sorting recycle method, garbage classification device and classified-refuse recovery system
CN108639601A (en) * 2018-05-18 2018-10-12 赵欣 A kind of intelligent garbage classification storage device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036490A (en) * 2014-05-13 2014-09-10 重庆大学 Foreground segmentation method applied to mobile communication network transmission
CN106000904A (en) * 2016-05-26 2016-10-12 北京新长征天高智机科技有限公司 Automatic sorting system for household refuse
CN106423913A (en) * 2016-09-09 2017-02-22 华侨大学 Construction waste sorting method and system
CN106494789A (en) * 2016-11-14 2017-03-15 上海理工大学 Refuse classification statistic device, equipment and system
CN107054936A (en) * 2017-03-23 2017-08-18 广东数相智能科技有限公司 A kind of refuse classification prompting dustbin and system based on image recognition
CN108861183A (en) * 2018-03-26 2018-11-23 厦门快商通信息技术有限公司 A kind of intelligent garbage classification method based on machine learning
CN109472200A (en) * 2018-09-29 2019-03-15 深圳市锦润防务科技有限公司 A kind of intelligent sea rubbish detection method, system and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MixMatch: A Holistic Approach to Semi-Supervised Learning; David Berthelot et al.; arXiv; 2019-05-06; Abstract and Sections 1-4 of the main text *

Also Published As

Publication number Publication date
CN110276300A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN110276300B (en) Method and device for identifying quality of garbage
US9679354B2 (en) Duplicate check image resolution
CN110717426A (en) Garbage classification method based on domain adaptive learning, electronic equipment and storage medium
CN114169381A (en) Image annotation method and device, terminal equipment and storage medium
CN112784835B (en) Method and device for identifying authenticity of circular seal, electronic equipment and storage medium
CN113435407B (en) Small target identification method and device for power transmission system
CN113792578A (en) Method, device and system for detecting abnormity of transformer substation
CN117333776A (en) VOCs gas leakage detection method, device and storage medium
CN110683240A (en) Garbage classification processing system based on image processing
CN202815869U (en) Vehicle microcomputer image and video data extraction apparatus
CN113688905A (en) Harmful domain name verification method and device
CN112468509A (en) Deep learning technology-based automatic flow data detection method and device
KR102230559B1 (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN111325207A (en) Bill identification method and device based on preprocessing
CN116886869A (en) Video monitoring system and video tracing method based on AI
CN111259926A (en) Meat freshness detection method and device, computing equipment and storage medium
CN116824135A (en) Atmospheric natural environment test industrial product identification and segmentation method based on machine vision
KR102342495B1 (en) Method and Apparatus for Creating Labeling Model with Data Programming
CN115375936A (en) Artificial intelligent checking and monitoring method, system and storage medium
CN114445751A (en) Method and device for extracting video key frame image contour features
CN115035443A (en) Method, system and device for detecting fallen garbage based on picture shooting
CN114067242A (en) Method and device for automatically detecting garbage random throwing behavior and electronic equipment
CN116259091B (en) Method and device for detecting silent living body
CN112905812B (en) Media file auditing method and system
CN115115980A (en) Method and system for automatically acquiring and classifying images based on AI algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant