CN114077877A - Newly added garbage identification method and device, computer equipment and storage medium - Google Patents


Publication number
CN114077877A
Authority
CN
China
Prior art keywords: image, garbage, newly added, channel, internal image
Legal status: Granted
Application number: CN202210058643.6A
Other languages: Chinese (zh)
Other versions: CN114077877B (en)
Inventors: 亓胜章, 张朝, 王坚, 李兵, 余昊楠, 胡卫明
Current Assignee: Renmin Zhongke Jinan Intelligent Technology Co ltd
Original Assignee: Renmin Zhongke Jinan Intelligent Technology Co ltd
Application filed by Renmin Zhongke Jinan Intelligent Technology Co ltd
Priority application: CN202210058643.6A
Publication of CN114077877A; application granted and published as CN114077877B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques

Abstract

This document relates to the field of artificial intelligence and provides a newly added garbage identification method, apparatus, computer device, and storage medium. The method comprises the following steps: collecting an image group, wherein the image group comprises a first internal image of the garbage can before garbage is delivered and a second internal image of the garbage can after garbage is delivered; performing channel separation on the first and second internal images to obtain their channel maps; determining a target image containing only the newly added garbage according to the channel maps of the two images and the differences in the channel maps' influence on the newly added garbage contour; and inputting the target image into a garbage classification model, trained on target images of historical newly added garbage, to predict a newly added garbage category identifier. The method accurately isolates the target image of the newly added garbage, and by identifying only that target image it improves both the accuracy and the speed of newly added garbage identification.

Description

Newly added garbage identification method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a method and an apparatus for identifying new garbage, a computer device, and a storage medium.
Background
In recent years, with the promotion of garbage classification, regions have gradually begun to strictly manage residents' daily garbage delivery. Because management means lag behind, many garbage delivery points rely on dedicated personnel to supervise residents' delivery behavior or to sort the garbage a second time, and such manual supervision can hardly guarantee effective garbage classification. With the development and maturation of artificial intelligence, using AI and deep learning to replace this manual supervision has become a field of great market and research significance.
The garbage classification algorithms commonly used in the market today are mainly based on single-image classification models and detection models. A single-image classification model sees a large amount of redundant information (for example, garbage already present in the background), so it is difficult for it to accurately extract the newly delivered garbage target. A detection model's inference and training mechanism requires computing target position information in addition to the garbage category, so its computation is slow.
Disclosure of Invention
The present disclosure addresses the slow speed and low precision of delivered-garbage identification in the prior art.
In order to solve the above technical problem, a first aspect of the present disclosure provides a newly added garbage identification method, including:
collecting a group of images, wherein the group of images comprises a first internal image of the trash can before delivering the trash and a second internal image of the trash can after delivering the trash;
performing channel separation processing on the first internal image and the second internal image to obtain a channel map of the first internal image and a channel map of the second internal image;
determining a target image only containing newly added garbage according to the channel map of the first internal image, the channel map of the second internal image and the difference of the influence of the channel map on the newly added garbage contour;
and inputting the target image into a garbage classification model, and predicting to obtain a newly added garbage category identifier, wherein the garbage classification model is obtained by training according to the target image of the historical newly added garbage.
As a further embodiment herein, collecting the image group comprises:
taking a second internal image of the garbage can after last garbage delivery as a first internal image of the garbage can before the garbage delivery;
and when the garbage delivery action of the user is detected, acquiring a second internal image of the garbage can after the garbage is delivered.
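The reuse of the previous post-delivery image as the next pre-delivery image can be sketched as follows, assuming NumPy arrays as captured frames; `TrashCanCamera` is a hypothetical helper name, not a class from the patent:

```python
import numpy as np

class TrashCanCamera:
    """Keeps the post-delivery image of the previous delivery so it can
    serve as the pre-delivery (first internal) image of the next one."""

    def __init__(self, initial_interior: np.ndarray):
        self.last_after = initial_interior

    def on_delivery(self, new_capture: np.ndarray):
        """Called when a garbage-delivery action is detected; returns the
        (first internal image, second internal image) pair for this delivery."""
        group = (self.last_after, new_capture)
        self.last_after = new_capture
        return group

# Usage: three successive interior captures form two image groups.
img0 = np.zeros((2, 2, 3), dtype=np.uint8)
img1 = np.full_like(img0, 1)
img2 = np.full_like(img0, 2)
cam = TrashCanCamera(img0)
first1, second1 = cam.on_delivery(img1)
first2, second2 = cam.on_delivery(img2)
```

The previous delivery's second internal image becomes the current delivery's first internal image, so each interior photo is captured only once.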
In a further embodiment of this document, before performing the channel separation process on the first internal image and the second internal image, the method further includes:
resizing the first internal image and the second internal image.
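The resizing step can be sketched as a nearest-neighbour resize in plain NumPy (in practice a library routine such as `cv2.resize` with proper interpolation would be used); the 4 × 4 target size below is illustrative, not a size from the patent:

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x C image to out_h x out_w."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols[None, :]]

# Resize a dummy 6x8 RGB "interior" image to the grid a model expects.
img = np.arange(6 * 8 * 3, dtype=np.uint8).reshape(6, 8, 3)
resized = resize_nearest(img, 4, 4)
```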
As a further embodiment herein, before determining a target image containing only newly added garbage according to the channel map of the first internal image, the channel map of the second internal image, and the differences in the channel maps' influence on the newly added garbage contour, the method further includes:
and respectively carrying out effect enhancement processing on the channel map of the first internal image and the channel map of the second internal image.
As a further embodiment herein, determining a target image containing only newly added garbage according to the channel map of the first internal image, the channel map of the second internal image, and the differences in the channel maps' influence on the newly added garbage contour includes:
calculating a newly added garbage image based on pixel channels according to those channel maps and differences;
extracting a target contour from the pixel-channel-based newly added garbage image, wherein the target contour contains only the outer boundary contour of the newly added garbage;
and performing image mask processing on the second internal image using the target contour, taking the processed image as the target image containing only the newly added garbage.
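A minimal sketch of the contour-and-mask step, in NumPy. Real contour extraction would use something like `cv2.findContours` with `RETR_EXTERNAL` and fill the outer boundary; here, as a stated simplification, the bounding box of the changed pixels stands in for the filled target contour:

```python
import numpy as np

def mask_new_garbage(second_img: np.ndarray, changed: np.ndarray) -> np.ndarray:
    """Zero out everything in the post-delivery image except the region
    enclosed by the target contour. The bounding box of the pixels flagged
    as newly added approximates the filled outer contour (an assumption,
    for brevity)."""
    ys, xs = np.nonzero(changed)
    mask = np.zeros(changed.shape, dtype=bool)
    if ys.size:
        mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return np.where(mask[..., None], second_img, 0)

second = np.full((6, 6, 3), 50, dtype=np.uint8)     # post-delivery image
changed = np.zeros((6, 6), dtype=bool)
changed[2:4, 2:4] = True                            # pixels flagged as newly added
target = mask_new_garbage(second, changed)          # target image, background zeroed
```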
As a further embodiment herein, calculating a newly added garbage image based on pixel channels according to the channel map of the first internal image, the channel map of the second internal image, and the differences in the channel maps' influence on the newly added garbage contour includes:
carrying out weighted summation of the pixel values of at least two channel maps of the first internal image to obtain a first reconstructed image; carrying out weighted summation of the pixel values of at least two channel maps of the second internal image to obtain a second reconstructed image; comparing the first reconstructed image with the second reconstructed image, and retaining the pixels of the second reconstructed image whose difference from the first reconstructed image is greater than a predetermined threshold, so as to obtain a newly added garbage image based on pixel channels, wherein the weights of the weighted summation are determined by the differences in the channel maps' influence on the newly added garbage contour; or
comparing the corresponding channel maps of the first internal image and the second internal image, and retaining the pixels of each channel map of the second internal image whose difference from the corresponding channel map of the first internal image is greater than a preset threshold, so as to obtain a newly added garbage sub-image for each channel; and performing weighted summation on the newly added garbage sub-images of the channels to obtain a newly added garbage image based on pixel channels, wherein the weights of the weighted summation are determined by the differences in the channel maps' influence on the newly added garbage contour.
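The two alternatives above can be sketched as follows, assuming RGB NumPy images; the weights (R weighted highest, per the observation that the R channel shows contours most clearly) and the threshold are illustrative values, not ones given in the patent:

```python
import numpy as np

W = np.array([0.6, 0.25, 0.15])    # assumed R, G, B weights
THRESH = 30.0                      # assumed pixel-difference threshold

def variant_reconstruct_first(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Variant 1: weight-sum the channels of each image into a reconstructed
    image, then keep second-image pixels whose difference exceeds the threshold."""
    rec1 = (first.astype(float) * W).sum(axis=-1)
    rec2 = (second.astype(float) * W).sum(axis=-1)
    return np.where(rec2 - rec1 > THRESH, rec2, 0.0)

def variant_diff_first(first: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Variant 2: difference each channel pair first, keeping a per-channel
    sub-image of super-threshold pixels, then weight-sum the sub-images."""
    diff = second.astype(float) - first.astype(float)
    subs = np.where(diff > THRESH, second.astype(float), 0.0)
    return (subs * W).sum(axis=-1)

first = np.zeros((4, 4, 3), dtype=np.uint8)
second = first.copy()
second[1:3, 1:3] = 200             # "newly added garbage" region
out1 = variant_reconstruct_first(first, second)
out2 = variant_diff_first(first, second)
```

On this toy input both variants agree, but on real images they differ: variant 1 thresholds after mixing the channels, variant 2 thresholds each channel independently.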
As a further embodiment herein, determining a target image containing only newly added garbage according to the channel map of the first internal image, the channel map of the second internal image, and the differences in the channel maps' influence on the newly added garbage contour further includes:
comparing the first internal image with the second internal image, and retaining the pixels of the second internal image whose difference from the first internal image is greater than a predetermined threshold, so as to obtain a newly added garbage image based on pixel differences;
before performing image mask processing on the second internal image using the target contour, the method further includes: fine-tuning the target contour using the newly added garbage image based on pixel differences.
In a further embodiment, fine-tuning the target contour using the newly added garbage image based on pixel differences comprises:
using the newly added garbage image based on pixel differences to fill discontinuities in the target contour and to smooth rough sections of the target contour.
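One plausible reading of the filling and smoothing steps is a morphological closing of the contour mask (in practice `cv2.morphologyEx` with `MORPH_CLOSE`); below is a pure-NumPy closing with a 3 × 3 structuring element, offered as a sketch rather than the patent's exact procedure:

```python
import numpy as np

def binary_dilate(m: np.ndarray) -> np.ndarray:
    """3x3 dilation of a boolean mask (borders zero-padded)."""
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + m.shape[0], 1 + dx:1 + dx + m.shape[1]]
    return out

def binary_erode(m: np.ndarray) -> np.ndarray:
    """Erosion via duality: erode(m) = NOT dilate(NOT m)."""
    return ~binary_dilate(~m)

def close_gaps(contour_mask: np.ndarray) -> np.ndarray:
    """Morphological closing: fills small discontinuities in the target
    contour and smooths jagged edges."""
    return binary_erode(binary_dilate(contour_mask))

# A contour line with a one-pixel gap at column 5 gets closed.
m = np.zeros((5, 10), dtype=bool)
m[2, :] = True
m[2, 5] = False
closed = close_gaps(m)
```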
As a further embodiment herein, the garbage classification model training process comprises:
receiving a sample set, wherein the sample set comprises a plurality of image groups of various garbage cans collected in a historical preset time period and newly added garbage type labeling information of each image group;
carrying out channel separation processing on the images in each image group in the sample set to obtain a channel image of each image;
determining historical target images related to each image group in the sample set according to the channel images of each image group in the sample set and the difference of the influence of the channel images on the newly-added garbage contour;
and training parameters in a garbage classification model by using the newly added garbage type labeling information of each image group in the sample set and the related historical target image.
In a further embodiment of this document, after determining the historical target image associated with each image group in the sample set according to the channel maps of each image group and the differences in the channel maps' influence on the newly added garbage contour, the method further includes:
preprocessing the historical target images to obtain a plurality of processed historical target images;
and assigning the newly added garbage type marking information in the history target image before processing to the processed history target image.
As a further embodiment herein, pre-processing the historical target image includes performing one or more of the following:
up-down flipping, left-right flipping, image compression, gray-level transformation, blurring, affine transformation, sharpening, and pixel perturbation.
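A minimal NumPy sketch of this preprocessing; flips and pixel perturbation are shown, while compression, blurring, affine transformation and sharpening would be added the same way (e.g. via OpenCV or PIL). The noise range is an assumed value. Each augmented copy inherits the original image's category label:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> list:
    """Produce a few augmented copies of a historical target image."""
    noise = rng.integers(-5, 6, size=img.shape)             # assumed +/-5 range
    perturbed = np.clip(img.astype(int) + noise, 0, 255).astype(np.uint8)
    return [
        img[::-1],       # up-down flip
        img[:, ::-1],    # left-right flip
        perturbed,       # pixel perturbation
    ]

img = np.zeros((4, 4, 3), dtype=np.uint8)
copies = augment(img)
```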
A second aspect herein provides a newly added garbage identification apparatus, comprising:
an acquisition module, used for collecting an image group, wherein the image group comprises a first internal image of the garbage can before garbage is delivered and a second internal image of the garbage can after garbage is delivered;
the channel dividing module is used for carrying out channel separation processing on the first internal image and the second internal image to obtain a channel map of the first internal image and a channel map of the second internal image;
the target extraction module is used for determining a target image only containing newly added garbage according to the channel map of the first internal image, the channel map of the second internal image and the difference of the influence of the channel map on the newly added garbage contour;
and the prediction module is used for inputting the target image into a garbage classification model and predicting to obtain a newly added garbage category identifier, wherein the garbage classification model is obtained according to the target image training of the historical newly added garbage.
A third aspect herein provides a computer device comprising a memory, a processor, and a computer program stored on the memory; when executed by the processor, the computer program carries out the method of any of the preceding embodiments.
A fourth aspect herein provides a computer storage medium having stored thereon a computer program which, when executed by a processor of a computer device, carries out the method of any of the preceding embodiments.
According to the newly added garbage identification method and apparatus herein, the interference of background information (information other than the newly added garbage) in the image and the differences in the channel maps' influence on the newly added garbage contour are taken into account: a first internal image of the garbage can before garbage delivery and a second internal image after garbage delivery are acquired; channel separation is performed on both to obtain their channel maps; and the target image containing only the newly added garbage is determined from the channel maps and those differences. This greatly reduces background noise, eliminates redundant information, and accurately isolates the target image of the newly added garbage. By identifying only the newly added garbage target image, both the speed and the precision of newly added garbage identification can be improved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments or technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a system diagram illustrating a new spam identification system according to an embodiment herein;
FIG. 2 illustrates a first flowchart of a method for identifying new garbage according to an embodiment of the present disclosure;
FIG. 3 is a flow diagram of a garbage classification model training process according to an embodiment herein;
FIG. 4 is a first flowchart illustrating a new spam target image determination process according to an embodiment of the present disclosure;
FIG. 5 is a second flowchart illustrating a new spam target image determination process according to an embodiment herein;
FIG. 6 shows a first flowchart of a pixel channel based new garbage image determination process of embodiments herein;
FIG. 7 shows a second flowchart of a pixel channel-based new spam image determination process according to embodiments herein;
FIG. 8 is a block diagram illustrating a new garbage recognition apparatus according to an embodiment of the present disclosure;
FIG. 9A illustrates a flow diagram of a garbage classification model training process of an embodiment herein;
FIG. 9B is a flow diagram illustrating a new garbage identification process according to an embodiment herein;
FIG. 10A shows a schematic diagram of an incremental garbage image based on pixel differences according to an embodiment herein;
FIG. 10B shows a schematic representation of an R channel of embodiments herein;
FIG. 10C shows a schematic of a G channel of embodiments herein;
FIG. 10D shows a schematic diagram of a B channel of embodiments herein;
FIG. 10E is a schematic diagram illustrating a target image of newly added garbage according to an embodiment of the disclosure;
FIG. 11 shows a block diagram of a computer device according to an embodiment of the present disclosure.
Description of the symbols of the drawings:
110. an image pickup apparatus;
120. a server;
130. a database;
801. a collection module;
802. a channel division module;
803. a target extraction module;
804. a prediction module;
1104. a processor;
1106. a memory;
1108. a drive mechanism;
1110. an input/output module;
1112. an input device;
1114. an output device;
1116. a presentation device;
1118. a graphical user interface;
1120. a network interface;
1122. a communication link;
1124. a communication bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection.
It should be noted that the terms "first," "second," and the like in the description and claims herein and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments herein described are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or device.
The present specification provides method steps as described in the examples or flowcharts, but may include more or fewer steps based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual system or apparatus product executes, it can execute sequentially or in parallel according to the method shown in the embodiment or the figures.
The image information, garbage category identifiers, and other information referred to in this application are information and data authorized by the user or fully authorized by all parties.
In order to solve the problems of slow speed and low precision in identifying delivered garbage in the prior art, an embodiment of the present disclosure provides a newly added garbage identification system, as shown in FIG. 1, comprising: an imaging apparatus 110, a server 120, and a database 130.
The camera device 110 is disposed directly above the trash can and used to acquire images of the inside of the trash can, i.e., top-view images of the can's interior. In specific implementations, the camera device 110 is mounted on a bracket above the trash can, or on the side of the trash-can lid facing the inside of the can. The trash can described herein can be located at any kind of site (e.g., community, road, mall, school) and be of any type (box trash can, flip-lid trash can, etc.); neither the site nor the type is limited herein. The camera device 110 is connected to the server 120 and sends the acquired interior images to the server. In specific implementations, a garbage-delivery sensor (such as an infrared sensor or a camera) can be arranged at the opening of the trash can and connected to the camera device 110, which collects an image of the can's interior after the sensor senses a garbage delivery.
The server 120 receives the images transmitted by the camera devices 110 and stores them sorted by camera device. The server 120 determines the image group: specifically, for each camera device 110, the most recently received image is taken as the second internal image (after this delivery) and the previously received image as the first internal image (before this delivery). The server 120 further determines, according to the channel maps of the images in the group and the differences in the channel maps' influence on the newly added garbage contour, a target image containing only the newly added garbage; obtains the garbage classification model from the database 130, inputs the target image into the model, and predicts the newly added garbage category identifier; and stores the identifier, the target image, and the image group in the database 130 as one record, to be retrieved by a caller or actively pushed to a manager. Callers include, but are not limited to, managers and supervisory programs (which analyze whether deliveries are correct, count the delivery error rate from the analysis results, and organize garbage-sorting education and publicity accordingly). In specific implementations, the server 120 can also judge whether the user delivered correctly by comparing the newly added garbage category identifier with the (predetermined) garbage categories the trash can is meant to hold, and, if the identifier does not belong to those categories, send a prompt message to an alarm device at the trash can to remind the user of the delivery error.
The database 130 stores the image group and the garbage classification model. In particular, server 120 may retrain the garbage classification model based on the set of images in database 130.
In specific implementations, the processing of the server 120 can also be executed on a client, where clients include desktop computers, tablet computers, notebook computers, smartphones, digital assistants, smart wearable devices, and the like. Smart wearable devices can include smart bracelets, smart watches, smart glasses, smart helmets, etc. Of course, the client is not limited to a physical electronic device and may also be software running on an electronic device.
The system provided by this embodiment takes into account the interference of background information in the image and the differences in the channel maps' influence on the newly added garbage contour. The image group is acquired before the server identifies the newly added garbage category; the target image containing only the newly added garbage is determined from the image group and those channel-map differences; and only that target image is identified. This reduces the interference of background information in the directly acquired images and improves the speed and accuracy of newly added garbage identification.
In an embodiment of this document, a method for identifying newly added garbage is further provided, as shown in fig. 2, including:
step 201, collecting an image group, wherein the image group comprises a first internal image and a second internal image, the first internal image is an overhead view image of the garbage can before delivering garbage, and the second internal image is an overhead view image of the garbage can after delivering garbage;
step 202, performing channel separation processing on the first internal image and the second internal image to obtain a channel map of the first internal image and a channel map of the second internal image;
step 203, determining a target image only containing newly added garbage according to the channel maps of the first internal image and the second internal image and the difference of the influence of the channel maps on the newly added garbage contour;
and 204, inputting the target image into a garbage classification model, and predicting to obtain a newly added garbage category identifier, wherein the garbage classification model is obtained by training according to the target image of the historical newly added garbage.
In detail, "before" and "after" delivery in step 201 refer to before and after a single delivery. The images described herein are all color images, including but not limited to RGB and HSV images; unless otherwise specified, a color image herein means an RGB-mode image. The image group is determined from images collected by the camera device arranged above the trash can; the specific process is: taking the top-view image of the can's interior after the previous delivery as the first internal image, and, when a user's garbage-delivery action is detected, collecting a top-view image of the interior after this delivery as the second internal image.
When step 202 is implemented, the existing channel separation method may be used to perform channel separation processing on the first internal image and the second internal image, and the specific algorithm for channel separation is not limited herein.
Further, in order to meet the requirements of the garbage classification model, the first internal image and the second internal image are also resized before step 202 is performed. For example, they are adjusted to 448 × 448, 520 × 760, or 380 × 450 pixel matrices; the specific size is determined by the image size required by the garbage classification model and is not limited herein.
The channel map of the image described in step 202 refers to an image corresponding to each parameter in the image type, for example, the channel maps of the RGB image are an R (red) channel map, a G (green) channel map, and a B (blue) channel map, and the channel maps of the HSV image are an H (hue) channel map, an S (saturation) channel map, and a V (brightness) channel map.
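For an RGB image, the channel separation of step 202 amounts to slicing the last axis of the pixel array. A minimal NumPy sketch (`cv2.split` would do the same, with the caveat that OpenCV loads images in BGR order):

```python
import numpy as np

def split_channels(rgb: np.ndarray):
    """Separate an H x W x 3 RGB image into its R, G and B channel maps."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return r, g, b

# A 2x2 image whose R, G, B planes are constant 10, 20, 30.
img = np.dstack([np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)])
r, g, b = split_channels(img)
```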
Further, in order to enhance the image effect of the channel map, emphasize some interesting features (object boundaries), and expand the difference between different object features in the image, after the channel map of the first internal image and the channel map of the second internal image are obtained in step 202, effect enhancement processing is further performed on the channel map of the first internal image and the channel map of the second internal image, respectively, wherein the effect enhancement processing includes, but is not limited to, graying processing, binarization processing, pixel inversion processing, and the like.
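Two of the enhancement operations named above, binarization and pixel inversion, can be sketched on a single channel map as follows; the 128 threshold is an assumed illustrative value, not one fixed by the patent:

```python
import numpy as np

def enhance(channel: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Binarize a channel map to emphasise object boundaries, then apply
    pixel inversion so the boundaries of interest become bright."""
    binary = np.where(channel >= thresh, 255, 0).astype(np.uint8)
    return 255 - binary  # pixel inversion

ch = np.array([[0, 200], [100, 255]], dtype=np.uint8)
out = enhance(ch)
```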
The differences in the channel maps' influence on the newly added garbage contour in step 203 reflect how well each channel map (such as the R, G, and B channel maps) distinguishes article contours. Taking RGB images as an example, extensive image analysis shows that the R channel map renders article contours most distinctly, with the G and B channel maps less so. In specific implementations, these differences can be embodied by setting weights for the pixels of the R, G, and B channel maps: the greater a channel map's pixel weight, the greater that channel map's influence on the article contour, i.e., the more distinct the contour.
In specific implementation, the implementation programs corresponding to the steps 202 and 203 may be packaged as a target garbage extraction algorithm, so as to be called in the subsequent new garbage identification.
In step 204, the garbage classification model can directly output the newly added garbage category identifier, or output the probability that the newly added garbage belongs to each category; the model's output dimension matches the number of categories, and the category identifier is obtained by mapping. The newly added garbage category identifier can be a garbage name or a garbage code. In specific implementations, the identifier can be determined according to the site where the trash can is deployed; for example, for a trash can used in a community, the corresponding identifiers are shown in Table 1:
Table 1:
Class number    Name
1               Shoes with air-permeable layer
2               Fabric
3               Metal
4               Pop-top can
5               Plastic
6               Beverage bottle
7               Paper products
8               Small household appliance
9               Glass product
10              Suspected non-recyclable
The garbage classification model used in step 204 includes a feature extraction layer and a fully connected layer. The feature extraction layer extracts features through convolution and pooling; in a specific implementation, the feature extraction network of ResNet50 can be selected, or another neural network with cross-layer (skip) connections. The garbage classification model is trained on target images of historical newly added garbage; the specific training process is described in the next embodiment.
In the newly added garbage identification method provided in this embodiment, the interference of background information (information other than the newly added garbage) in the image and the differing influence of the channel maps on the newly added garbage contour are both taken into account. A first internal image of the garbage can before garbage is delivered and a second internal image after garbage is delivered are collected; channel separation is performed on both to obtain the channel maps of the first internal image and the second internal image; and a target image containing only the newly added garbage is determined from the two sets of channel maps and the differing influence of the channel maps on the newly added garbage contour. This greatly reduces noise interference from the background, eliminates redundant information, and allows the target image of the newly added garbage to be determined accurately. By identifying only the newly added garbage in the target image, both the speed and the accuracy of newly added garbage identification are improved.
In an embodiment of this document, there is further provided a training method of a garbage classification model, as shown in fig. 3, including:
step 301, receiving a sample set, wherein the sample set comprises a plurality of image groups of various garbage cans collected in a historical preset time period and newly added garbage type labeling information of each image group;
step 302, carrying out channel separation processing on the images in each image group in the sample set to obtain a channel map of each image;
step 303, determining historical target images related to each image group in the sample set according to the channel images of each image group in the sample set and the difference of the influence of the channel images on the newly-added garbage contour;
and step 304, training parameters in the garbage classification model by using the newly added garbage type labeling information of each image group in the sample set and the related historical target images.
In detail, the garbage classification model training method implemented herein can be applied to the server 120 of the newly added garbage recognition system, and can also be applied to a dedicated server for model training.
When step 301 is implemented, the sample set can be obtained from the database 130 of the newly added garbage recognition system: historical adjacent images collected by the image collection devices above various garbage cans are analyzed, and whenever two adjacent images differ, they are taken as an image group. The historical preset time period is, for example, the most recent half year or year, which is not limited herein. The sample set contains image groups for the garbage can categories common on the market, which helps ensure the generalization ability of the garbage classification model. To improve model accuracy, the sample set may include, for example, on the order of 100,000 image groups; the more data, the more accurate the trained model.
The newly added garbage types of each image group are labeled manually; only the newly added garbage (i.e., the garbage present in the second internal image but not in the first internal image) is labeled. The garbage labeling information follows Table 1 above. Several pieces of newly added garbage may be identified in one image group, in which case several newly added garbage category identifiers are obtained correspondingly.
Further, to ensure the validity of the image groups, data cleaning is performed on the image groups in the sample set before manual labeling. Data cleaning includes, for example: deleting image groups that do not meet the service requirements (e.g., the images are blurry or the newly added garbage is not visible); and performing brightness equalization on the images, to prevent brightness variations from biasing the training results. Of course, other data cleaning steps may also be included, which are not limited herein.
When steps 302 and 303 are performed, the method of steps 202 to 203 may be referred to, and history target images related to each image group are obtained according to each image group in the sample set, where each history target image only includes new garbage.
When step 304 is performed, the sample set may be divided into a training set and a test set. The parameters of the garbage classification model are trained using the newly added garbage type labeling information and the related historical target images of each image group in the training set, and the model is verified using the test set. The training process includes: constructing a loss function from the newly added garbage type labeling information of the image groups in the training set and the predictions of the garbage classification model on the historical target images; and training the garbage classification model with this loss function. Specifically, the training cycle comprises: (1) inputting the historical target images of the image groups in the training set into the garbage classification model to obtain predicted garbage category identifiers; (2) calculating the loss according to the loss function and updating the network parameters by gradient backpropagation, until a preset number of training rounds is reached or an evaluation metric (which can be set as required) meets a set value; otherwise, returning to step (1); (3) plotting the model convergence curve and metric curves, analyzing model performance, and selecting the best-performing model as the final model.
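The train-evaluate cycle above can be sketched with a toy stand-in for the model: a linear softmax classifier trained with cross-entropy loss and full-batch gradient descent on synthetic features (the patent's actual model uses a ResNet50 feature extractor; all sizes and the learning rate here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "target image" features and category labels standing in for
# the historical target images and their labeling information.
n_samples, n_features, n_classes = 300, 16, 4
X = rng.normal(size=(n_samples, n_features))
W_true = rng.normal(size=(n_features, n_classes))
y = np.argmax(X @ W_true, axis=1)          # synthetic category labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((n_features, n_classes))      # model parameters
lr, losses = 0.5, []
for epoch in range(300):                   # preset number of rounds
    probs = softmax(X @ W)                 # (1) predict category scores
    loss = -np.log(probs[np.arange(n_samples), y] + 1e-12).mean()
    losses.append(loss)
    grad = probs.copy()                    # (2) cross-entropy gradient
    grad[np.arange(n_samples), y] -= 1.0
    W -= lr * (X.T @ grad) / n_samples     # gradient update

train_acc = (np.argmax(X @ W, axis=1) == y).mean()
```

The output dimension of `W` equals the number of categories, matching the earlier note that the model's output dimension is consistent with the category count.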
In an embodiment of this document, in order to increase the training samples of the model and improve the generalization ability and robustness of the garbage classification model, after step 303, the method further includes: preprocessing each historical target image to obtain a plurality of processed historical target images; and assigning the newly added garbage type labeling information of the original historical target image to each processed historical target image.
In some embodiments, the preprocessing transforms are as shown in Table 2:

Table 2:

Transformation                                    Strength
Top-bottom flip                                   Configurable (set according to actual conditions)
Left-right flip                                   *
Simultaneous top-bottom and left-right flip       *
Image compression                                 *
Grayscale transformation                          *
Motion blur                                       *
Gaussian blur                                     *
Affine transformation                             *
Brightness, chroma, and saturation perturbation   *
Sharpening                                        *

(An asterisk indicates the same as the first row: configurable according to actual conditions.)
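A minimal sketch of the label-preserving augmentation, covering only the three flip transforms from Table 2 (the function name and return format are assumptions):

```python
import numpy as np

def augment(image: np.ndarray, labels: list) -> list:
    """Generate flipped copies of a historical target image; each copy
    inherits the newly added garbage labels of the original image."""
    variants = [
        np.flipud(image),             # top-bottom flip
        np.fliplr(image),             # left-right flip
        np.flipud(np.fliplr(image)),  # both flips simultaneously
    ]
    # Pair each processed image with the pre-processing labeling info.
    return [(v, list(labels)) for v in variants]
```

The remaining transforms in Table 2 (compression, blurs, affine and color perturbations, sharpening) would be added the same way, each keeping the original labels.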
In an embodiment of this document, as shown in fig. 4, the step 203 determining, according to the channel map of the first internal image, the channel map of the second internal image, and the difference of the influence of the channel maps on the new garbage added contour, a target image only containing new garbage includes:
step 401, calculating a new garbage image based on a pixel channel according to the channel map of the first internal image, the channel map of the second internal image and the difference of the influence of the channel map on the new garbage contour;
step 402, extracting a target contour from the newly added garbage image based on the pixel channel, wherein the target contour only comprises an outer boundary contour of the newly added garbage;
and 403, performing image mask processing on the second internal image by using the target contour, and taking the processed image as a target image of newly added garbage.
The pixel-channel-based newly added garbage image in step 401 refers to a newly added garbage image determined using the channel maps of the first internal image and the second internal image.
Step 402 may extract the target contour of the newly added garbage from the pixel-channel-based newly added garbage image by edge extraction. The target contour extracted in step 402 accurately reflects the real contour of the newly added garbage. To retain only the outermost contour, after the contours are extracted, all contour lines other than the outermost one are deleted.
In step 403, the pixels outside the target contour in the second internal image are set to 0, yielding a target image of the newly added garbage that contains only the newly added garbage, as shown in fig. 10E.
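Assuming the region enclosed by the target contour has already been rasterized into a boolean mask (contour filling itself is not shown), the masking of step 403 reduces to:

```python
import numpy as np

def apply_target_mask(second_image: np.ndarray,
                      inside_mask: np.ndarray) -> np.ndarray:
    """Zero every pixel of the post-delivery image outside the target
    contour; inside_mask is a boolean (H, W) array marking the region
    enclosed by the outer boundary contour of the newly added garbage."""
    out = np.zeros_like(second_image)
    out[inside_mask] = second_image[inside_mask]
    return out
```

Boolean indexing with an (H, W) mask works for both grayscale (H, W) and color (H, W, 3) images, so the same function serves either representation.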
Further, in order to make the target contour of the new garbage extracted in step 402 of this embodiment appear smoother, as shown in fig. 5, the method further includes:
step 404, comparing the first internal image with the second internal image, and retaining the pixel values of the second internal image whose pixel difference relative to the first internal image is greater than a predetermined threshold, so as to obtain a pixel-difference-based newly added garbage image;
step 405, fine-tuning the target contour with the pixel-difference-based newly added garbage image.
Specifically, the pixel-difference-based newly added garbage image obtained in step 404 is shown in fig. 10A. The pixel difference refers to the difference between pixel values at the same position in the first internal image and the second internal image. The predetermined threshold can be determined according to the required identification precision; preferably, it is 15%-25% of the total range of the image parameter, which for an RGB image (pixel values 0-255) corresponds to a threshold of roughly 40-60. In a specific implementation, if the pixel difference is greater than the predetermined threshold, the difference between the first internal image and the second internal image at that pixel is large, indicating newly added garbage, and the corresponding pixel in the second internal image is retained. If the pixel difference is within the predetermined threshold, the difference between the two images is small, and the corresponding pixel in the second internal image is set to a fixed value (e.g., 0).
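The per-pixel thresholding of step 404 can be sketched as follows (the default threshold of 50 sits in the 40-60 range derived above for 8-bit RGB; the function name is an assumption):

```python
import numpy as np

def pixel_difference_image(first: np.ndarray,
                           second: np.ndarray,
                           threshold: int = 50) -> np.ndarray:
    """Keep pixels of the post-delivery image whose difference from the
    pre-delivery image exceeds threshold; set the rest to 0."""
    # Signed difference; int16 avoids uint8 wrap-around.
    diff = np.abs(second.astype(np.int16) - first.astype(np.int16))
    changed = diff > threshold
    if changed.ndim == 3:
        # Multi-channel image: a pixel counts as changed if any channel
        # differs by more than the threshold.
        changed = changed.any(axis=2)
    out = np.where(changed[..., None] if second.ndim == 3 else changed,
                   second, 0)
    return out.astype(second.dtype)
```

The cast to a signed type before subtraction matters: subtracting uint8 arrays directly would wrap around and corrupt the difference.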
When step 405 is implemented, the fine tuning includes, but is not limited to, using the outermost contour of the pixel-difference-based newly added garbage image to close gaps at discontinuous positions in the target contour and to smooth unsmooth positions in it.
In an embodiment of this document, as shown in fig. 6, the specific implementation process of step 401 includes:
Step 601, performing weighted summation on the pixel values of at least two channel maps of the first internal image to obtain a first reconstructed image, and performing weighted summation on the pixel values of at least two channel maps of the second internal image to obtain a second reconstructed image;
step 602, comparing the first reconstructed image with the second reconstructed image, and retaining the pixel values of the second reconstructed image whose pixel difference relative to the first reconstructed image is greater than a predetermined threshold, so as to obtain the pixel-channel-based newly added garbage image.
When step 601 is implemented, the weights of the pixel values in each channel map are determined by the difference in the influence of the image channel maps on the object contour. Specifically, as shown in figs. 10B to 10D, the R channel map reflects the object contour best; therefore, the channel weights can be chosen by the following rule: the weight of the pixel values in the R channel map is greater than the weight of the pixel values in the G channel map and the weight of the pixel values in the B channel map.
In step 602, when the first reconstructed image and the second reconstructed image are compared, if the pixel difference at a pixel X is greater than the predetermined threshold, the value of pixel X in the second reconstructed image is retained; if the pixel difference is less than or equal to the predetermined threshold, the value of pixel X is set to a fixed value (e.g., 0).
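Steps 601-602 can be sketched as below. The weights 0.6/0.2/0.2 for R/G/B follow the stated rule that R outweighs G and B, but the exact values are illustrative assumptions, as are the function names:

```python
import numpy as np

# Illustrative channel weights obeying the rule: R above G and B.
WEIGHTS = np.array([0.6, 0.2, 0.2])  # R, G, B

def reconstruct(image: np.ndarray) -> np.ndarray:
    """Step 601: weighted sum of the channel maps of an RGB image."""
    return (image.astype(np.float32) * WEIGHTS).sum(axis=2)

def new_garbage_by_reconstruction(first: np.ndarray,
                                  second: np.ndarray,
                                  threshold: float = 50.0) -> np.ndarray:
    """Step 602: compare the reconstructed images and keep reconstructed
    pixels of the second image where the difference exceeds threshold."""
    r1, r2 = reconstruct(first), reconstruct(second)
    return np.where(np.abs(r2 - r1) > threshold, r2, 0.0)
```

Because the weighting collapses three channels to one map before thresholding, changes that show strongly in the R channel dominate the comparison, matching the contour-visibility rationale above.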
In an embodiment herein, as shown in fig. 7, the specific implementation process of step 401 may further be:
step 701, comparing the same channel map in the first internal image and the second internal image, and retaining, in the channel map of the second internal image, the pixel values whose pixel difference is greater than the predetermined threshold, so as to obtain a newly added garbage sub-image of each channel;
and step 702, performing weighted summation on the newly added garbage sub-images of the channels to obtain the pixel-channel-based newly added garbage image, wherein the weights of the weighted summation for the sub-image of each channel are determined by the difference in the influence of the channel maps on the newly added garbage contour.
When step 701 is performed, the R channel maps, G channel maps, and B channel maps of the first internal image and the second internal image are compared respectively. Taking the comparison of the two R channel maps as an example: if the pixel difference at any pixel of the two R channel maps is greater than the predetermined threshold, the value of that pixel in the R channel map of the second internal image is retained; if the pixel difference is less than or equal to the predetermined threshold, the value of that pixel is set to a fixed value (e.g., 0).
When step 702 is implemented, the weights of the per-channel newly added garbage sub-images are determined by the following principle: the weight of the sub-image corresponding to the R channel map is greater than the weights of the sub-images corresponding to the G channel and the B channel.
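Steps 701-702 can be sketched as below, thresholding each channel pair first and weighting afterwards (the 0.6/0.2/0.2 weights follow the stated R-dominant principle but are illustrative assumptions):

```python
import numpy as np

# Assumed per-channel weights: the R-channel sub-image dominates.
CHANNEL_WEIGHTS = (0.6, 0.2, 0.2)  # R, G, B

def new_garbage_per_channel(first: np.ndarray,
                            second: np.ndarray,
                            threshold: int = 50) -> np.ndarray:
    """Threshold each channel pair separately (step 701), then take a
    weighted sum of the per-channel sub-images (step 702)."""
    out = np.zeros(first.shape[:2], dtype=np.float64)
    for c, w in enumerate(CHANNEL_WEIGHTS):
        ch1 = first[..., c].astype(np.int16)   # avoid uint8 wrap-around
        ch2 = second[..., c].astype(np.int16)
        sub = np.where(np.abs(ch2 - ch1) > threshold, second[..., c], 0)
        out += w * sub
    return out
```

Unlike the fig. 6 path, here the threshold is applied before the channels are merged, so a change confined to a single channel is detected even if it would be diluted in a weighted reconstruction.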
Any of the embodiments shown in figs. 4-7 can be used as part of the target garbage extraction algorithm. Experiments show that, with the target garbage extraction algorithm, accuracy improves to over 75% on the basis of the original garbage classification model, an improvement of 15 percentage points over performing newly added garbage identification directly on the original images; the robustness and generalization ability of the garbage classification model are also stronger, and the recall rate can be guaranteed in real scenes. In addition, because the backbone network adopted by the system is a classification model, its execution efficiency is far higher than that of most garbage classification models on the market. While preserving the inference efficiency of the classification network, the accuracy and scene adaptability of the model are genuinely improved: after the service images are preprocessed with the pixel-channel-based target garbage extraction algorithm, noise from the background is greatly reduced, redundant information is removed, and the categories of newly added garbage can be extracted stably.
Based on the same inventive concept, the present disclosure also provides a device for identifying new garbage, as described in the following embodiments. Because the principle of solving the problem of the newly added garbage recognition device is similar to that of the newly added garbage recognition method, the newly added garbage recognition device can be implemented by the newly added garbage recognition method, and repeated parts are not described again.
Specifically, as shown in fig. 8, the newly added garbage recognition apparatus includes:
a collecting module 801, configured to collect an image group, where the image group includes a first internal image of a trash can before delivering trash and a second internal image of the trash can after delivering trash;
a channel dividing module 802, configured to perform channel separation processing on the first internal image and the second internal image to obtain a channel map of the first internal image and a channel map of the second internal image;
a target extracting module 803, configured to determine, according to the channel map of the first internal image, the channel map of the second internal image, and the difference of the influence of the channel map on the new garbage added contour, a target image that only contains new garbage;
and the predicting module 804 is configured to input the target image into a garbage classification model, and predict to obtain a new garbage category identifier, where the garbage classification model is obtained by training a target image of the historical new garbage.
In this embodiment, the interference of background information in the images and the differing influence of the channel maps on the newly added garbage contour are taken into account: the internal images of the garbage can before and after garbage delivery are collected, and the target image containing the newly added garbage is determined from these two internal images and from the differing influence of the channel maps on the newly added garbage contour, so that the target image of the newly added garbage can be determined accurately. By identifying only the newly added garbage in the target image, the speed and accuracy of newly added garbage identification are improved.
To more clearly illustrate the technical solution herein, the following detailed description is provided in an embodiment, as shown in fig. 9A and 9B, which specifically includes:
1. a preparation phase.
Step 911, data collection: overhead images of garbage cans in actual scenes are collected to form a sample set. The sample set comprises a plurality of image groups of various types of garbage cans, and each image group comprises a first internal image (i.e., a top view of the garbage can before garbage delivery) and a second internal image (i.e., a top view of the garbage can after garbage delivery).
Step 912, data cleaning and data labeling:
and (4) performing data cleaning on the image group in the sample set, for example, deleting the image group with unclear existence and unnoticeable new garbage.
The cleaned image groups are then labeled manually to obtain the identification information of the newly added garbage in each image group; for example, the identification information corresponding to one image group is {shoes, pop-top can, paper}.
And 913, performing data processing such as up-down turning, left-right turning, simultaneous up-down and left-right turning, image compression, gray scale transformation, motion blur, Gaussian blur, affine transformation, brightness, chromaticity, saturation disturbance transformation, sharpening transformation and the like on the image group obtained in the step 912.
And 914, determining the new garbage image corresponding to the image group obtained in the step 913 by using the target garbage extraction algorithm.
And 915, building a garbage classification model by using the Resnet50 network and the full connection layer, and training the garbage classification model by using the target image of the newly added garbage and the newly added garbage marking information determined in the step 914.
2. And an application stage.
And step 921, acquiring an image group in real time, wherein the image group comprises an overhead view image of the garbage can before garbage delivery and an overhead view image of the garbage can after garbage delivery.
In step 922, the images in the image group are resized to 448 x 448.
And step 923, determining the target image of the newly added garbage corresponding to the image obtained in the step 922 by using the target garbage extraction algorithm.
And 924, inputting the target image determined in the step 923 into a garbage classification model to obtain a new added garbage category identifier.
In an embodiment of the present disclosure, a computer device running the program of the new garbage recognition method or the program of the garbage classification model training method is also provided, and as shown in fig. 11, the computer device may include one or more processors 1104, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads. The computer device may also include any memory 1106 for storing any kind of information, such as code, settings, data, etc. For example, and without limitation, memory 1106 may include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of the computer device. In one case, when the processor 1104 executes the associated instructions, which are stored in any memory or combination of memories, the computer device can perform any of the operations of the associated instructions. The computer device also includes one or more drive mechanisms 1108, such as a hard disk drive mechanism, an optical disk drive mechanism, etc., for interacting with any memory.
The computer device may also include an input/output module 1110 (I/O) for receiving various inputs (via input device 1112) and providing various outputs (via output device 1114). One particular output mechanism may include a presentation device 1116 and an associated graphical user interface 1118 (GUI). In other embodiments, the input/output module 1110 (I/O), input device 1112, and output device 1114 may be omitted, the device acting only as a computer device in a network. The computer device may also include one or more network interfaces 1120 for exchanging data with other devices via one or more communication links 1122. One or more communication buses 1124 couple the above components together.
Communication link 1122 may be implemented in any manner, e.g., via a local area network, a wide area network (e.g., the Internet), a point-to-point connection, etc., or any combination thereof. Communications link 1122 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
Corresponding to the methods in fig. 2-7, the embodiments herein also provide a computer-readable storage medium having stored thereon a computer program, which, when executed by a processor, performs the steps of the above-described method.
Embodiments herein also provide computer readable instructions, wherein when executed by a processor, a program thereof causes the processor to perform the method as shown in fig. 2-7.
It should be understood that, in various embodiments herein, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments herein.
It should also be understood that, in the embodiments herein, the term "and/or" is only one kind of association relation describing an associated object, meaning that three kinds of relations may exist. For example, a and/or B, may represent: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided herein, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may also be an electrical, mechanical or other form of connection.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the embodiments herein.
In addition, functional modules in the embodiments herein may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present invention may be implemented in a form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The principles and embodiments of this document are explained herein using specific examples, which are presented only to aid in understanding the methods and their core concepts; meanwhile, for the general technical personnel in the field, according to the idea of this document, there may be changes in the concrete implementation and the application scope, in summary, this description should not be understood as the limitation of this document.

Claims (13)

1. A newly added garbage identification method is characterized by comprising the following steps:
collecting a group of images, wherein the group of images comprises a first internal image of the trash can before delivering the trash and a second internal image of the trash can after delivering the trash;
performing channel separation processing on the first internal image and the second internal image to obtain a channel map of the first internal image and a channel map of the second internal image;
determining a target image only containing newly added garbage according to the channel map of the first internal image, the channel map of the second internal image and the difference of the influence of the channel map on the newly added garbage contour;
and inputting the target image into a garbage classification model, and predicting to obtain a newly added garbage category identifier, wherein the garbage classification model is obtained by training according to the target image of the historical newly added garbage.
2. The method for identifying new garbage as claimed in claim 1, wherein before the performing the channel separation process on the first internal image and the second internal image, the method further comprises:
resizing the first internal image and the second internal image.
3. The method for identifying new garbage as claimed in claim 1, wherein before determining the target image containing only new garbage according to the channel map of the first internal image, the channel map of the second internal image, and the difference in the influence of the channel map on the new garbage contour, the method further comprises:
and respectively carrying out effect enhancement processing on the channel map of the first internal image and the channel map of the second internal image.
4. The method for identifying new garbage as claimed in claim 1, wherein determining the target image containing only new garbage according to the channel map of the first internal image, the channel map of the second internal image, and the difference in the influence of the channel maps on the new garbage contour comprises:
calculating a newly added garbage image based on a pixel channel according to the channel map of the first internal image, the channel map of the second internal image and the difference of the influence of the channel map on the newly added garbage profile;
extracting a target contour from the pixel channel-based newly added garbage image, wherein the target contour only contains the outer boundary contour of newly added garbage;
and performing image mask processing on the second internal image by using the target contour, and taking the processed image as a target image only containing newly added garbage.
5. The method for identifying newly added garbage as claimed in claim 4, wherein calculating the newly added garbage image based on pixel channels according to the channel map of the first internal image, the channel map of the second internal image, and the difference in the influence of the channel maps on the newly added garbage contour comprises:
performing weighted summation on pixel values of at least two channel maps of the first internal image to obtain a first reconstructed image; performing weighted summation on pixel values of at least two channel maps of the second internal image to obtain a second reconstructed image; and comparing the first reconstructed image with the second reconstructed image, retaining those pixel values of the second reconstructed image whose difference from the first reconstructed image is greater than a predetermined threshold, so as to obtain the newly added garbage image based on pixel channels, wherein the weights of the weighted summation of pixel values are determined by the difference in the influence of the channel maps on the newly added garbage contour; or
comparing the same channel map of the first internal image and the second internal image, and retaining those pixel values of the channel map of the second internal image whose difference from the channel map of the first internal image is greater than a predetermined threshold, so as to obtain a newly added garbage sub-image for each channel; and performing weighted summation on the newly added garbage sub-images of the channels to obtain the newly added garbage image based on pixel channels, wherein the weights of the weighted summation of the sub-images are determined by the difference in the influence of the channel maps on the newly added garbage contour.
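The two alternatives in claim 5 differ only in whether channel fusion happens before or after the thresholded differencing. A sketch with assumed weights and threshold (neither value is fixed by the claim):

```python
import numpy as np

def fuse(channels, weights):
    """Weighted sum of per-channel maps into one float image."""
    return sum(w * c.astype(np.float32) for w, c in zip(weights, channels))

def branch1(first_ch, second_ch, weights, thr=30.0):
    """Fuse channels first, then keep fused second-image pixels whose difference exceeds thr."""
    f1, f2 = fuse(first_ch, weights), fuse(second_ch, weights)
    return np.where(f2 - f1 > thr, f2, 0.0)

def branch2(first_ch, second_ch, weights, thr=30.0):
    """Difference each channel first, then fuse the per-channel sub-images."""
    subs = [np.where(c2.astype(np.float32) - c1 > thr, c2, 0).astype(np.float32)
            for c1, c2 in zip(first_ch, second_ch)]
    return fuse(subs, weights)

first_ch = [np.zeros((2, 2), dtype=np.uint8)] * 3        # empty-bin channel maps
second_ch = [np.full((2, 2), 100, dtype=np.uint8)] * 3   # after-deposit channel maps
weights = [0.5, 0.25, 0.25]   # assumed per-channel contour-influence weights
out1 = branch1(first_ch, second_ch, weights)
out2 = branch2(first_ch, second_ch, weights)
```

With a uniform change across channels the two branches agree; they differ when a deposit is visible in only some channels, which is why the weights reflect each channel's influence on the new-garbage contour.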
6. The method for identifying newly added garbage as claimed in claim 4, wherein determining the target image containing only newly added garbage according to the channel map of the first internal image, the channel map of the second internal image, and the difference in the influence of the channel maps on the newly added garbage contour further comprises:
comparing the first internal image with the second internal image, and retaining those pixel values of the second internal image whose difference from the first internal image is greater than a predetermined threshold, so as to obtain a newly added garbage image based on pixel differences;
and wherein before performing the image mask processing on the second internal image using the target contour, the method further comprises: fine-tuning the target contour using the newly added garbage image based on pixel differences.
7. The method for identifying newly added garbage as claimed in claim 6, wherein fine-tuning the target contour using the newly added garbage image based on pixel differences comprises:
filling discontinuous positions in the target contour and smoothing non-smooth positions in the target contour using the newly added garbage image based on pixel differences.
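One way to sketch the fill-and-smooth step is a morphological closing (dilation followed by erosion) over the contour mask; the claim additionally uses the pixel-difference image to guide where filling is allowed, which is omitted here for brevity:

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """4-neighbourhood binary dilation."""
    out = mask.copy()
    out[1:] |= mask[:-1]
    out[:-1] |= mask[1:]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask: np.ndarray) -> np.ndarray:
    """4-neighbourhood binary erosion."""
    out = mask.copy()
    out[1:] &= mask[:-1]
    out[:-1] &= mask[1:]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    return out

def close_contour(mask: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Morphological closing: bridges small discontinuities and smooths jagged edges."""
    for _ in range(iterations):
        mask = dilate(mask)
    for _ in range(iterations):
        mask = erode(mask)
    return mask

m = np.zeros((5, 5), dtype=bool)
m[1:4, 1:4] = True
m[2, 2] = False          # a one-pixel discontinuity inside the region
closed = close_contour(m)
```

Closing fills holes smaller than the structuring element while leaving the outer boundary essentially unchanged, which matches the claim's intent of repairing rather than reshaping the contour.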
8. The method for identifying newly added garbage as claimed in claim 1, wherein the training process of the garbage classification model comprises:
receiving a sample set, wherein the sample set comprises a plurality of image groups of various garbage cans collected in a historical preset time period and newly added garbage type labeling information for each image group;
performing channel separation processing on the images in each image group in the sample set to obtain channel maps of each image;
determining a historical target image associated with each image group in the sample set according to the channel maps of each image group in the sample set and the difference in the influence of the channel maps on the newly added garbage contour;
and training parameters of the garbage classification model using the newly added garbage type labeling information of each image group in the sample set and the associated historical target images.
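The patent does not fix a model family for the garbage classification model. As an illustrative stand-in only, a softmax classifier trained by gradient descent on flattened historical target images (all names, sizes, and the four-class split are invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 4                                  # e.g. recyclable / kitchen / hazardous / other
X = rng.normal(size=(32, 16 * 16)).astype(np.float32)  # flattened historical target images
y = rng.integers(0, n_classes, size=32)        # newly added garbage type labels

def cross_entropy(W):
    """Mean cross-entropy loss and class probabilities for weights W."""
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y]).mean(), p

W = np.zeros((X.shape[1], n_classes), dtype=np.float32)
loss0, _ = cross_entropy(W)
for _ in range(200):                           # plain gradient descent on the parameters
    _, p = cross_entropy(W)
    p[np.arange(len(y)), y] -= 1.0             # softmax cross-entropy gradient
    W -= 0.1 * (X.T @ p) / len(y)
final_loss, _ = cross_entropy(W)
```

Any supervised classifier (e.g. a CNN) could fill the same role; the key point from the claim is that training pairs each historical target image with its labeled garbage type.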
9. The method for identifying newly added garbage as claimed in claim 8, wherein after determining the historical target image associated with each image group in the sample set according to the channel maps of each image group in the sample set and the difference in the influence of the channel maps on the newly added garbage contour, the method further comprises:
preprocessing the historical target images to obtain a plurality of processed historical target images;
and assigning the newly added garbage type labeling information of each historical target image before processing to the corresponding processed historical target image.
10. The method for identifying newly added garbage as claimed in claim 9, wherein the preprocessing of the historical target images comprises one or more of the following:
up-down flipping, left-right flipping, image compression, gray-level transformation, blurring, affine transformation, sharpening, and pixel perturbation.
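A few of the listed augmentations in plain numpy (blurring, affine transformation, and sharpening are omitted to keep the sketch short); per claim 9, each output inherits the label of its source image:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
    """Up-down flip, left-right flip, crude compression, gray-level transform, pixel perturbation."""
    noise = rng.integers(-8, 9, size=img.shape, dtype=np.int16)
    return [
        img[::-1].copy(),                                      # up-down flip
        img[:, ::-1].copy(),                                   # left-right flip
        img[::2, ::2].copy(),                                  # crude compression by downsampling
        (img.astype(np.float32) * 0.8 + 20).astype(np.uint8),  # gray-level transformation
        np.clip(img.astype(np.int16) + noise, 0, 255).astype(np.uint8),  # pixel perturbation
    ]

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
variants = augment(img, np.random.default_rng(1))
```

Each transform changes appearance but not the identity of the deposited garbage, which is why the pre-processing label can be copied to every variant.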
11. A newly added garbage identification apparatus, comprising:
an acquisition module configured to collect an image group, wherein the image group comprises a first internal image of a garbage can before garbage is delivered and a second internal image of the garbage can after the garbage is delivered;
a channel separation module configured to perform channel separation processing on the first internal image and the second internal image to obtain a channel map of the first internal image and a channel map of the second internal image;
a target extraction module configured to determine a target image containing only newly added garbage according to the channel map of the first internal image, the channel map of the second internal image, and the difference in the influence of the channel maps on the newly added garbage contour;
and a prediction module configured to input the target image into a garbage classification model and predict a newly added garbage category identifier, wherein the garbage classification model is trained on target images of historically newly added garbage.
12. A computer device comprising a memory, a processor, and a computer program stored in the memory, wherein the computer program, when executed by the processor, performs the method of any one of claims 1-10.
13. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor of a computer device, performs the method of any one of claims 1-10.
CN202210058643.6A 2022-01-19 2022-01-19 Newly-added garbage identification method and device, computer equipment and storage medium Active CN114077877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210058643.6A CN114077877B (en) 2022-01-19 2022-01-19 Newly-added garbage identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114077877A 2022-02-22
CN114077877B 2022-05-13

Family

ID=80284685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210058643.6A Active CN114077877B (en) 2022-01-19 2022-01-19 Newly-added garbage identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114077877B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330993A (en) * 2022-10-18 2022-11-11 小手创新(杭州)科技有限公司 Recovery system new-entry discrimination method based on low computation amount

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070122000A1 (en) * 2005-11-29 2007-05-31 Objectvideo, Inc. Detection of stationary objects in video
CN103325112A (en) * 2013-06-07 2013-09-25 中国民航大学 Quick detecting method for moving objects in dynamic scene
CN106408554A (en) * 2015-07-31 2017-02-15 富士通株式会社 Remnant detection apparatus, method and system
CN107092914A (en) * 2017-03-23 2017-08-25 广东数相智能科技有限公司 Refuse classification method, device and system based on image recognition
CN108062525A (en) * 2017-12-14 2018-05-22 中国科学技术大学 A kind of deep learning hand detection method based on hand region prediction
CN110219290A (en) * 2019-05-07 2019-09-10 尹艺臻 Suspended matter identification and motor control method based on floating on water refuse collector
CN110648364A (en) * 2019-09-17 2020-01-03 华侨大学 Multi-dimensional space solid waste visual detection positioning and identification method and system
CN110795999A (en) * 2019-09-21 2020-02-14 万翼科技有限公司 Garbage delivery behavior analysis method and related product
CN111079639A (en) * 2019-12-13 2020-04-28 中国平安财产保险股份有限公司 Method, device and equipment for constructing garbage image classification model and storage medium
CN111582033A (en) * 2020-04-07 2020-08-25 苏宁云计算有限公司 Garbage classification identification method and system and computer readable storage medium
CN112132073A (en) * 2020-09-28 2020-12-25 中国银行股份有限公司 Garbage classification method and device, storage medium and electronic equipment
CN112232246A (en) * 2020-10-22 2021-01-15 深兰人工智能(深圳)有限公司 Garbage detection and classification method and device based on deep learning
CN112241667A (en) * 2019-07-18 2021-01-19 华为技术有限公司 Image detection method, device, equipment and storage medium
CN112651318A (en) * 2020-12-19 2021-04-13 重庆市信息通信咨询设计院有限公司 Image recognition-based garbage classification method, device and system
CN113255804A (en) * 2021-06-03 2021-08-13 图灵人工智能研究院(南京)有限公司 Garbage traceability method and device based on image change detection
CN113362333A (en) * 2021-07-07 2021-09-07 李有俊 Garbage classification box management method and device
CN113705638A (en) * 2021-08-13 2021-11-26 苏州凯利洁环保科技有限公司 Mobile vehicle-mounted intelligent garbage information management method and system
CN113859803A (en) * 2021-09-29 2021-12-31 嘉兴地星科技有限公司 Intelligent identification trash can and intelligent identification method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENG, Jun et al., "Research on the Electronic Control Design of a Multi-Target-Classification Intelligent Garbage Bin", Electronic Production *
LUO, Wenjun et al., "Design of a Lake-Surface Garbage Recognition Algorithm Based on Machine Vision", Industrial Control Computer *

Also Published As

Publication number Publication date
CN114077877B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN106845890B (en) Storage monitoring method and device based on video monitoring
JP2019035626A (en) Recognition method of tire image and recognition device of tire image
CN111461068B (en) Chromosome metaphase map identification and segmentation method
CN111524144B (en) Intelligent lung nodule diagnosis method based on GAN and Unet network
CN111079620B (en) White blood cell image detection and identification model construction method and application based on transfer learning
CN110334779A (en) A kind of multi-focus image fusing method based on PSPNet detail extraction
CN107341508B (en) Fast food picture identification method and system
CN112819821B (en) Cell nucleus image detection method
CN111369526B (en) Multi-type old bridge crack identification method based on semi-supervised deep learning
CN115909006B (en) Mammary tissue image classification method and system based on convolution transducer
CN114077877B (en) Newly-added garbage identification method and device, computer equipment and storage medium
CN107958253A (en) A kind of method and apparatus of image recognition
WO2021082433A1 (en) Digital pathological image quality control method and apparatus
CN110751191A (en) Image classification method and system
CN110704662A (en) Image classification method and system
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN113362277A (en) Workpiece surface defect detection and segmentation method based on deep learning
CN116030396A (en) Accurate segmentation method for video structured extraction
CN111310531B (en) Image classification method, device, computer equipment and storage medium
CN111199228B (en) License plate positioning method and device
CN109886320B (en) Human femoral X-ray intelligent recognition method and system
CN111860629A (en) Jewelry classification system, method, device and storage medium
CN116612347A (en) Deep learning model training method based on examination room violations
CN113159015A (en) Seal identification method based on transfer learning
Talukder et al. A Computer Vision and Deep CNN Modeling for Spices Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100080 floor 5, dream laboratory, No. 1, Haidian Street, Haidian District, Beijing

Applicant after: Renmin Zhongke (Beijing) Intelligent Technology Co.,Ltd.

Address before: 250101 Room 201, 2 / F, Hanyu Golden Valley new media building, No. 7000 Jingshi Road, Jinan area, free trade pilot zone, Jinan, Shandong Province

Applicant before: Renmin Zhongke (Jinan) Intelligent Technology Co.,Ltd.

GR01 Patent grant