CN118135308A - Image processing method and computer equipment for intelligent recognition of garbage station - Google Patents
- Publication number
- CN118135308A (application number CN202410274480.4A)
- Authority
- CN
- China
- Prior art keywords
- garbage station
- garbage
- station
- tag
- group
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
An embodiment of the disclosure discloses an image processing method and a computer device for intelligently identifying garbage stations. One embodiment of the method comprises the following steps: performing semantic classification on a garbage station tag set to obtain a set of garbage station tag groups; for each garbage station tag group in the set, assigning the tag group to a corresponding data processing end so that the data processing end generates the garbage station training image sample set corresponding to that tag group; performing model training on an initial garbage station identification model according to each training image sample set to obtain a garbage station identification model; acquiring a garbage station image of each target garbage station in a target garbage station group; inputting the resulting garbage station image group into the identification model to obtain a group of garbage station identification results; and, according to the identification results, putting at least one item of garbage to be treated into the corresponding garbage station. This embodiment improves the classification accuracy of garbage stations, so that garbage can be accurately put into the correct station.
Description
Technical Field
The embodiment of the disclosure relates to the field of computers, in particular to an image processing method for intelligently identifying a garbage station and computer equipment.
Background
As garbage classification becomes increasingly widespread, telling garbage stations apart has become a real difficulty. At present, garbage stations are usually identified and classified in one of two ways: manually, or by means of a signboard. Both approaches suffer from technical problems: manual identification is error-prone and easily leads to inaccurate classification of garbage stations, while signboard-based identification fails when the signboard is small or occluded. In addition, when throwing garbage into a station, people often cannot clearly distinguish the type of the garbage, so the garbage is deposited inaccurately and the efficiency of subsequent garbage treatment suffers.
Disclosure of Invention
This summary is provided to introduce, in simplified form, a selection of concepts that are further described in the detailed description below. It is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an image processing method, a computer device and a computer-readable storage medium for intelligently identifying a garbage station to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an image processing method for intelligently identifying a garbage station, the method comprising: acquiring a garbage station tag set; performing semantic classification on the garbage station tag set to obtain a set of garbage station tag groups; for each garbage station tag group in the set, assigning the tag group to a corresponding data processing end so that the data processing end generates the garbage station training image sample set corresponding to that tag group; performing model training on an initial garbage station identification model according to each garbage station training image sample set to obtain a garbage station identification model; acquiring a garbage station image of each target garbage station in a target garbage station group to obtain a garbage station image group; inputting the garbage station image group into the garbage station identification model to obtain a group of garbage station identification results, wherein one garbage station image corresponds to one identification result; and, according to the identification results, putting at least one item of garbage to be treated into the corresponding garbage station.
In a second aspect, the present disclosure also provides a computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements a method as described in any of the implementations of the first aspect.
In a third aspect, the present disclosure also provides a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: the image processing method for intelligently identifying a garbage station improves the classification accuracy of garbage stations and makes it easier to put garbage into the correct station. First, a garbage station tag set is acquired. Second, the tag set is semantically classified to obtain a set of garbage station tag groups. Third, each tag group is assigned to a corresponding data processing end, which generates the garbage station training image sample set for that group; this makes the garbage station images easier to process, shortens sample preparation time, and facilitates model training. Next, an initial garbage station identification model is trained on each training image sample set to obtain a garbage station identification model, with which garbage stations can be classified and identified. Then a garbage station image of each target garbage station in the target garbage station group is acquired, yielding a garbage station image group, so that the stations can be identified and classified. The image group is then fed into the identification model to obtain a group of identification results, one result per image, so the category of each garbage station can be identified. Finally, according to the identification results, at least one item of garbage to be treated is put into the corresponding garbage station.
Therefore, garbage can be classified and deposited according to the recognized garbage station categories. This improves the classification accuracy of garbage stations and makes it easier to put garbage accurately into the correct station.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow chart of some embodiments of an image processing method for intelligently identifying a garbage station according to the present disclosure;
Fig. 2 is a schematic structural block diagram of a computer device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that such references should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows a flow 100 of some embodiments of an image processing method for intelligently identifying a garbage station according to the present disclosure. The method comprises the following steps:
step 101, acquiring a garbage station tag set.
In some embodiments, an execution body of the image processing method (e.g., a computing device) may acquire the garbage station tag set through a wired or wireless connection. A garbage station tag in the set is a tag for which a corresponding garbage station training image sample set is to be generated; the sample labels of that training set are the corresponding garbage station tags. A garbage station tag may indicate the location, category, and maximum load of a station. For example, garbage stations may be divided into: a buried recoverable-waste station (No. 1 XX Road, capacity 9 tons), a vertical recoverable-waste station (No. 1 XA Road, capacity 8 tons), a mobile non-recoverable-waste station (No. 1 AX Road, capacity 9 tons), and a split non-recoverable-waste station (No. 1 YY Road, capacity 8 tons).
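As an illustration of what such a tag may carry, the following sketch models one garbage station tag as a small record. The field names (`category`, `form`, `location`, `max_load_tons`) are hypothetical, chosen only to mirror the example above; they are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StationTag:
    """Hypothetical record for one garbage station tag."""
    category: str       # e.g. "recoverable" / "non-recoverable"
    form: str           # e.g. "buried", "vertical", "mobile", "split"
    location: str       # street address of the station
    max_load_tons: int  # maximum load of the station

# The four example stations from the text above.
tag_set = [
    StationTag("recoverable", "buried", "No. 1 XX Road", 9),
    StationTag("recoverable", "vertical", "No. 1 XA Road", 8),
    StationTag("non-recoverable", "mobile", "No. 1 AX Road", 9),
    StationTag("non-recoverable", "split", "No. 1 YY Road", 8),
]
```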
Step 102, performing semantic classification on the garbage station tag set to obtain a set of garbage station tag groups.
In some embodiments, the execution body may semantically classify the garbage station tag set to obtain a set of garbage station tag groups. First, each garbage station tag in the set may be input into a word embedding model to output a tag vector, yielding a tag vector set. Then, vector clustering is performed on the tag vector set to obtain a set of tag vector groups. Finally, the set of garbage station tag groups corresponding to the set of tag vector groups is determined, wherein one tag vector group corresponds to one garbage station tag group.
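The embedding-then-clustering pipeline described in this step can be sketched as follows. This is a minimal illustration only: the fixed 2-D vectors stand in for the output of a real word embedding model, and plain k-means stands in for whatever vector clustering procedure the embodiment actually uses.

```python
import numpy as np

# Toy stand-in for the word embedding model: fixed 2-D vectors per tag.
tag_vectors = {
    "buried recoverable": np.array([1.0, 0.1]),
    "vertical recoverable": np.array([0.9, 0.2]),
    "mobile non-recoverable": np.array([0.1, 1.0]),
    "split non-recoverable": np.array([0.2, 0.9]),
}

def kmeans_groups(vectors, k=2, iters=10, seed=0):
    """Minimal k-means over tag vectors; returns k tag groups (name lists)."""
    names = list(vectors)
    X = np.stack([vectors[n] for n in names])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each tag vector to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned vectors.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return [[n for n, l in zip(names, labels) if l == j] for j in range(k)]

groups = kmeans_groups(tag_vectors)  # one tag group per cluster
```

On this toy data the recoverable tags and the non-recoverable tags land in separate groups, mirroring the "one tag vector group corresponds to one garbage station tag group" mapping.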
Step 103, for each garbage station tag group in the set of garbage station tag groups, assigning the tag group to a corresponding data processing end so that the data processing end generates the garbage station training image sample set corresponding to that tag group.
In some embodiments, the execution body may assign each garbage station tag group in the set to a corresponding data processing end, so that the data processing end generates the garbage station training image sample set for that tag group. A data processing end may be a computing terminal that labels garbage station training images — for example, a terminal at which a technician processes the training image labels. Each garbage station training image sample carries a corresponding label, and one tag group is assigned to one data processing end.
Step 104, performing model training on the initial garbage station identification model according to each garbage station training image sample set to obtain a garbage station identification model.
In some embodiments, the execution body performs model training on an initial garbage station identification model according to each garbage station training image sample set to obtain a garbage station identification model. The initial garbage station identification model comprises a first-attention-mechanism encoding network and a second-attention-mechanism decoding network, and may be an untrained garbage station classification recognition model — a neural network model that generates the garbage station category to which the image content of a garbage station image belongs. For example, the model may be a convolutional neural network (CNN) model of serially connected layers.
In a practical application scenario, the execution body may select one garbage station image sample from the garbage station training image sample sets as a target garbage station image sample, and execute the following training steps:
First, the image included in the target garbage station image sample is input into an image feature extraction network to obtain garbage station image features. The image feature extraction network may be an untrained neural network model for extracting the content feature information of the target garbage station image, and it is trained and updated together with the initial garbage station identification model. For example, it may be a residual neural network model of serially connected layers. The resulting image features characterize the image content of the garbage station image included in the target sample.
Second, the initial garbage station identification model is used to generate a tag feature group for each garbage station tag group in the set, yielding a set of tag feature groups. For example, each garbage station tag in a tag group may be input into the encoding network based on the first attention mechanism to obtain the tag feature group. The first-attention-mechanism encoding network may be an untrained encoding model that learns the semantic relations between the tags in a group, and can therefore generate tag features that more accurately characterize the content of each classification tag. The tag features in a tag feature group correspond one-to-one with the tags in the garbage station tag group. For example, the first-attention-mechanism encoding network may be a Transformer encoder, and the first attention mechanism may be a multi-head attention mechanism.
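A single-head, NumPy-only sketch of this attention-based encoding step follows. It is an illustration, not the disclosed network: the embodiment's encoder is a (multi-head) Transformer encoder, and the random weight matrices here are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def encode_tags(E, Wq, Wk, Wv):
    """Single-head self-attention over tag embeddings E (n_tags x d).
    Each output row (a tag feature) mixes in information from the other
    tags in the group — this is how the encoder can capture the semantic
    relations between the tags of one garbage station tag group."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])  # scaled dot-product scores
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 4
E = rng.standard_normal((3, d))                       # 3 tags in one group
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
tag_features = encode_tags(E, Wq, Wk, Wv)             # one feature per tag
```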
Third, the initial garbage station identification model is used to generate classification loss information from the set of tag feature groups and the garbage station image features. The classification loss information characterizes the difference between the predicted tags for the garbage station image features and the real tags of the image in the target garbage station image sample.
Wherein the classification loss information may be generated by the sub-steps of:
Sub-step 1: the set of tag feature groups and the garbage station image features are input into the second-attention-mechanism decoding network to obtain a classification result for each tag category. The decoding network, untrained initially, can learn the semantic relations both between the garbage station tags within the set and between the tag groups. For example, the second-attention-mechanism decoding network may be a Transformer decoder, and the second attention mechanism may be a multi-head attention mechanism. The tag categories are those corresponding to the garbage station tag set, and the classification result is the tag content to which the image content of the target garbage station image sample belongs.
Sub-step 2: classification loss information is generated from the tag category included in the target garbage station image sample and the classification result. The tag category and the classification result may be input into a binary cross-entropy loss function to generate the classification loss information.
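The binary cross-entropy computation of sub-step 2 can be illustrated as follows; the 0/1 tag vector and the predicted probabilities are made-up example values.

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over the tag categories.
    y_true: 0/1 per tag category; y_pred: predicted probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# One true category out of three, predicted fairly well.
loss = binary_cross_entropy([1, 0, 0], [0.9, 0.2, 0.1])
```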
Fourth, in response to determining that the classification loss information satisfies a preset loss condition, the initial garbage station identification model is determined as a garbage station identification model. The preset loss condition may be: the value represented by the classification loss information is smaller than or equal to a preset value.
Optionally, in response to determining that the classification loss information does not satisfy the preset loss condition, reselecting a target garbage station image sample from the respective garbage station image sample set, and performing the training step again.
In some embodiments, the executing body may reselect a target garbage station image sample from the respective garbage station image sample sets and execute the training step again in response to determining that the classification loss information does not satisfy the preset loss condition.
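The select-sample / compute-loss / check-condition / retrain cycle described in the fourth step and this optional step can be sketched as the skeleton below. `samples`, `compute_loss`, `update`, and `threshold` are placeholders standing in for the image sample sets, the classification loss, the parameter update, and the preset loss condition; none of these names come from the disclosure.

```python
def train_until_converged(samples, compute_loss, update, threshold,
                          max_steps=1000):
    """Repeatedly pick a target sample, compute the classification loss,
    and stop once the preset loss condition (loss <= threshold) holds."""
    loss = float("inf")
    for step in range(max_steps):
        sample = samples[step % len(samples)]  # reselect a target sample
        loss = compute_loss(sample)
        if loss <= threshold:                  # preset loss condition met
            return step, loss
        update(sample)                         # otherwise keep training
    return max_steps, loss

# Toy run: the "loss" halves on every update step.
state = {"loss": 1.0}
steps, final_loss = train_until_converged(
    samples=["s"],
    compute_loss=lambda s: state["loss"],
    update=lambda s: state.update(loss=state["loss"] / 2),
    threshold=0.1,
)
```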
Step 105, acquiring a garbage station image of each target garbage station in the target garbage station group to obtain a garbage station image group.
In some embodiments, the execution body may acquire the garbage station image of each target garbage station in the target garbage station group through a wired or wireless connection, thereby obtaining a garbage station image group. A target garbage station refers to a garbage station whose category is to be identified, and a garbage station image characterizes the overall appearance of the target garbage station.
Step 106, inputting the garbage station image group into the garbage station identification model to obtain a group of garbage station identification results.
In some embodiments, the execution entity may input the garbage station image set into the garbage station recognition model to obtain a garbage station recognition result set. Wherein, a garbage station image corresponds to a garbage station recognition result. The trash station identification result may represent the identified category of the trash station.
Step 107, putting at least one item of garbage to be treated into the corresponding garbage station according to the garbage station identification results.
In some embodiments, the executing body may throw at least one garbage to be processed to a corresponding garbage station according to the garbage station identification result set.
In an actual application scenario, the execution body may execute the following processing steps for each garbage to be processed in the at least one garbage to be processed:
Firstly, collecting a garbage image of the garbage to be treated. The trash image may show specific trash items in the trash to be treated. That is, a plurality of trash items are displayed in one trash image.
Second, the garbage image is input into the input layer of a pre-trained garbage classification model to obtain an input vector matrix. The garbage classification model may be a neural network model that takes a garbage image as input and outputs a garbage classification result. It may comprise an input layer, an encoding layer of several encoders, and a decoding layer of several decoders, with a basic structure following the Transformer. The input layer converts the garbage image into a vector matrix. For example, an encoder may include a linear layer, a pooling layer, a multi-head attention layer, a feed-forward network, and two residual-connection & layer-normalization layers: the linear layer generates a query matrix, a key matrix, and a key value matrix from the input vector matrix; the pooling layer compresses the key matrix and the key value matrix into a compressed key matrix and a compressed key value matrix; and the multi-head attention layer generates a pooled attention value from the query matrix, the compressed key matrix, and the compressed key value matrix as the input of the downstream residual-connection & layer-normalization layer.
Thirdly, generating a query matrix, a key matrix and a key value matrix according to the input vector matrix.
Wherein the query matrix, key matrix and key value matrix may be generated by the sub-steps of:
And a substep 1, determining a preset mapping matrix corresponding to the garbage classification model.
Sub-step 2: a query matrix, a key matrix, and a key value matrix are generated from the input vector matrix and the mapping matrices. The mapping matrices include a key mapping matrix, a key value mapping matrix, and a query mapping matrix. First, the product of the input vector matrix and the query mapping matrix is determined as the query matrix. Then, the product of the input vector matrix and the key mapping matrix is determined as the key matrix. Finally, the product of the input vector matrix and the key value mapping matrix is determined as the key value matrix.
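A minimal sketch of sub-steps 1 and 2: the mapping matrices are randomly initialized here as hypothetical stand-ins for the model's preset (learned) mapping matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 8                        # 5 input vectors of dimension 8
X = rng.standard_normal((n, d))    # input vector matrix from the input layer

# Sub-step 1: the preset mapping matrices (random placeholders here).
W_query = rng.standard_normal((d, d))  # query mapping matrix
W_key = rng.standard_normal((d, d))    # key mapping matrix
W_value = rng.standard_normal((d, d))  # key value mapping matrix

# Sub-step 2: each matrix is the product of the input vector matrix
# and the corresponding mapping matrix.
Q = X @ W_query    # query matrix
K = X @ W_key      # key matrix
V = X @ W_value    # key value matrix
```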
Fourth, the key matrix and the key value matrix are compressed to obtain a compressed key matrix and a compressed key value matrix. Wherein the dimensions of the compressed key matrix and the compressed key value matrix are smaller than the dimensions of the key matrix and the key value matrix.
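One plausible reading of this compression step is average pooling along the sequence (row) dimension, sketched below. The disclosure does not fix the exact pooling operation, so the window size and the choice of mean pooling are assumptions.

```python
import numpy as np

def pool_compress(M, window=2):
    """Average-pool the rows of M in non-overlapping windows, shrinking
    the sequence (row) dimension — the compression applied to the key
    matrix and the key value matrix before attention."""
    n, d = M.shape
    n_out = n // window
    return M[: n_out * window].reshape(n_out, window, d).mean(axis=1)

K = np.arange(24, dtype=float).reshape(6, 4)   # 6 positions, dimension 4
K_compressed = pool_compress(K, window=2)      # 3 positions, dimension 4
```

The compressed matrix keeps the feature dimension but halves the number of rows, which is why the downstream attention sees less redundant information.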
Fifth, the query matrix, the compressed key matrix, and the compressed key value matrix are input into the self-attention layer of the garbage classification model to obtain a garbage classification result, which represents the category of the garbage to be treated. The self-attention layer generates a pooled self-attention value for each garbage item from the query matrix, the compressed key matrix, and the compressed key value matrix. For example, the garbage classification result may be: kitchen waste, or non-recyclable waste. For each garbage item, the pooled self-attention value may be generated by the following formula:
A_i = softmax(q_i · K̃ᵀ / α) · Ṽ
where A_i denotes the pooled self-attention value of the i-th garbage item, α is a constant, q_i is the vector corresponding to the i-th garbage item in the query matrix, K̃ is the compressed key matrix (whose i-th row corresponds to the i-th garbage item), and Ṽ is the compressed key value matrix (whose i-th row corresponds to the i-th garbage item).
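The per-item pooled self-attention computation described above can be sketched as follows; the random matrices are placeholders for a real query vector and the compressed key / key value matrices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # numerically stable softmax
    return e / e.sum()

def pooled_self_attention(q_i, K_c, V_c, alpha):
    """Pooled self-attention value for one garbage item: the query vector
    q_i is scored against the compressed key matrix K_c, the scores are
    scaled by the constant alpha and normalized with softmax, and the
    resulting weights combine the rows of the compressed key value
    matrix V_c."""
    weights = softmax(q_i @ K_c.T / alpha)
    return weights @ V_c

rng = np.random.default_rng(2)
q_i = rng.standard_normal(4)          # query vector of the i-th item
K_c = rng.standard_normal((3, 4))     # compressed key matrix
V_c = rng.standard_normal((3, 4))     # compressed key value matrix
a_i = pooled_self_attention(q_i, K_c, V_c, alpha=2.0)
```

Because the softmax weights are non-negative and sum to one, each component of the output is a convex combination of the corresponding column of the compressed key value matrix.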
Sixth, the garbage station identification result corresponding to the garbage classification result is determined. For example, the garbage station identification result representing the same kind of garbage as the garbage classification result may be selected.
Seventh, the garbage to be treated is put into the garbage station corresponding to that garbage station identification result.
For the problem mentioned in the background — that garbage types often cannot be clearly distinguished when garbage is thrown into a station, so garbage is deposited inaccurately and subsequent treatment efficiency suffers — the above steps provide a solution. First, a garbage image of the garbage to be treated is collected. Second, the garbage image is input into the input layer of a pre-trained garbage classification model to obtain an input vector matrix. Then a query matrix, a key matrix, and a key value matrix are generated from the input vector matrix, so that the input can be converted into the three matrices in advance. The key matrix and the key value matrix are then compressed, reducing the dimensions of the vectors in those matrices. Next, the query matrix, the compressed key matrix, and the compressed key value matrix are input into the self-attention layer of the garbage classification model to obtain a garbage classification result, and the corresponding garbage station identification result is determined; the self-attention layer generates the pooled self-attention values from the three matrices, and the classification result is obtained after the attention values are passed downstream. Finally, the garbage to be treated is put into the garbage station corresponding to the identification result. In this way, less redundant and noisy information is introduced, and the accuracy of the garbage classification result is improved.
Therefore, the accuracy of the garbage classification result is improved, and the garbage treatment efficiency is improved.
Fig. 2 is a schematic block diagram of a structure of a computer device according to an embodiment of the disclosure. The computer device may be a terminal.
As shown in fig. 2, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions which, when executed, cause the processor to perform any of the image processing methods for intelligent recognition of a garbage station described herein.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for the execution of the computer program in the non-volatile storage medium; when executed by the processor, the computer program causes the processor to perform any of the image processing methods for intelligent recognition of a garbage station.
The network interface is used for network communication, such as transmitting assigned tasks. Those skilled in the art will appreciate that the architecture shown in fig. 2 is merely a block diagram of part of the architecture relevant to the disclosed aspects and does not limit the computer device to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine some of the components, or arrange the components differently.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or any conventional processor.
In one embodiment, the processor is configured to execute a computer program stored in the memory to implement the following steps: acquiring a garbage station tag set; carrying out semantic classification on the garbage station tag set to obtain a garbage station tag group set; for each garbage station tag group in the garbage station tag group set, distributing the garbage station tag group to a corresponding data processing end so that the data processing end generates a garbage station training image sample set corresponding to the garbage station tag group; performing model training on an initial garbage station identification model according to each garbage station training image sample set to obtain a garbage station identification model; acquiring a garbage station image of each target garbage station in a target garbage station group to obtain a garbage station image group; inputting the garbage station image group into the garbage station identification model to obtain a garbage station identification result group, wherein one garbage station image corresponds to one garbage station identification result; and putting at least one garbage to be treated into the corresponding garbage station according to the garbage station identification result group.
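The steps the processor performs can be sketched as the following orchestration. The grouping criterion, the recognition model, and every function name here are hypothetical stand-ins for components the text leaves unspecified.

```python
from collections import defaultdict

def group_tags_by_semantics(tag_set, semantic_of):
    """Semantic classification step: partition the garbage station tag set
    into tag groups according to a caller-supplied semantic key
    (`semantic_of` is a hypothetical stand-in for the classifier)."""
    groups = defaultdict(list)
    for tag in sorted(tag_set):
        groups[semantic_of(tag)].append(tag)
    return list(groups.values())

def dispose_waste(station_images, recognize, station_of_result):
    """Recognition and disposal steps: run the recognition model on each
    garbage station image and map each result to a target station."""
    results = [recognize(img) for img in station_images]  # one result per image
    return [station_of_result[r] for r in results]
```

For example, tags like "glass", "plastic" and "paper" would land in one recyclable-waste tag group, which is then dispatched to one data processing end for sample generation.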
Embodiments of the present disclosure also provide a computer readable storage medium having a computer program stored thereon, where the computer program includes program instructions; for the method implemented when the program instructions are executed, reference may be made to the embodiments of the image processing method for intelligent recognition of a garbage station of the present disclosure.
The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, for example, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash card (Flash Card), or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present disclosure are merely for description and do not represent advantages or disadvantages of the embodiments. While the invention has been described with reference to certain preferred embodiments, it will be apparent to one skilled in the art that various changes and substitutions can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (6)
1. An image processing method for intelligently identifying a garbage station, which is characterized by comprising the following steps:
acquiring a garbage station tag set;
carrying out semantic classification on the garbage station tag set to obtain a garbage station tag group set;
for each garbage station tag group in the garbage station tag group set, distributing the garbage station tag group to a corresponding data processing end, so that the data processing end generates a garbage station training image sample set corresponding to the garbage station tag group;
According to each garbage station training image sample set, performing model training on the initial garbage station identification model to obtain a garbage station identification model;
acquiring a garbage station image of each target garbage station in the target garbage station group to obtain a garbage station image group;
inputting the garbage station image group into the garbage station identification model to obtain a garbage station identification result group, wherein one garbage station image corresponds to one garbage station identification result;
and putting at least one garbage to be treated into the corresponding garbage station according to the garbage station identification result group.
2. The method of claim 1, wherein the model training the initial garbage station identification model based on each garbage station training image sample set to obtain the garbage station identification model comprises:
selecting one garbage station image sample from the garbage station training image sample sets as a target garbage station image sample, and executing the following training step:
inputting an image included in the target garbage station image sample into an image feature extraction network to obtain garbage station image features;
generating a tag feature group corresponding to each garbage station tag group in the garbage station tag group set by using the initial garbage station identification model, to obtain a tag feature group set;
generating classification loss information according to the tag feature group set and the garbage station image features by using the initial garbage station identification model;
and determining the initial garbage station identification model as a garbage station identification model in response to determining that the classification loss information meets a preset loss condition.
3. The method of claim 2, wherein the initial garbage station identification model comprises: an encoding network based on a first attention mechanism and a decoding network based on a second attention mechanism; and
The generating the tag feature group corresponding to each garbage station tag group in the garbage station tag group set includes:
and inputting each garbage station tag in the garbage station tag group into the encoding network based on the first attention mechanism to obtain a tag feature group.
4. The method according to claim 2, wherein the method further comprises:
in response to determining that the classification loss information does not satisfy the preset loss condition, reselecting a target garbage station image sample from the respective garbage station image sample set, and performing the training step again.
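The training procedure of claims 2-4 can be sketched as the loop below. This is only an illustration under stated assumptions: the claims do not describe how model parameters are updated or what the preset loss condition is, so the loss condition is assumed here to be a simple threshold, and the feature extractor, loss function, and all names are hypothetical.

```python
import random

def train_identification_model(sample_sets, extract_features, tag_feature_sets,
                               classification_loss, loss_threshold, max_steps=1000):
    """Sketch of the claimed training step: repeatedly (re)select a target
    garbage station image sample, compute classification loss between its
    image features and the tag features, and stop once the preset loss
    condition is met. Parameter updates are omitted, as in the claims."""
    samples = [s for sample_set in sample_sets for s in sample_set]
    for _ in range(max_steps):
        target = random.choice(samples)          # (re)select a target sample
        features = extract_features(target)      # image feature extraction
        loss = classification_loss(tag_feature_sets, features)
        if loss <= loss_threshold:               # preset loss condition met
            return True                          # model deemed trained
    return False                                 # condition never met
```

In a realistic system a gradient step on the model parameters would follow each loss evaluation; that update is outside what the claims specify.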
5. A computer device, wherein the computer device comprises a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any of claims 1-4.
6. A computer readable storage medium, wherein the computer readable storage medium has stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410274480.4A CN118135308A (en) | 2024-03-11 | 2024-03-11 | Image processing method and computer equipment for intelligent recognition of garbage station |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118135308A true CN118135308A (en) | 2024-06-04 |
Family
ID=91231174
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410274480.4A Pending CN118135308A (en) | 2024-03-11 | 2024-03-11 | Image processing method and computer equipment for intelligent recognition of garbage station |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118135308A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114937179A (en) * | 2022-07-27 | 2022-08-23 | 深圳市海清视讯科技有限公司 | Junk image classification method and device, electronic equipment and storage medium |
CN115082736A (en) * | 2022-06-23 | 2022-09-20 | 平安普惠企业管理有限公司 | Garbage identification and classification method and device, electronic equipment and storage medium |
CN116682098A (en) * | 2023-05-18 | 2023-09-01 | 小圾(上海)环保科技有限公司 | Automatic urban household garbage identification and classification system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||