CN113222063B - Express carton garbage classification method, device, equipment and medium

Publication number: CN113222063B (granted publication; earlier publication CN113222063A)
Application number: CN202110600085.7A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 于凤英, 王健宗
Applicant / assignee: Ping An Technology Shenzhen Co Ltd
Legal status: Active (application granted)

Abstract

The invention relates to the field of artificial intelligence and provides a method, a device, equipment and a medium for classifying express carton garbage. Image enhancement effectively suppresses the noise in express garbage images and mitigates the influence of image distortion on classification accuracy; image segmentation effectively avoids the influence of light reflection on classification; and the introduction of gray-scale features allows the texture features of the express cartons to be extracted clearly, improving the accuracy of subsequent express carton garbage classification. When the sample set is constructed, texture features and shape features are combined so that classification is more accurate, avoiding the loss of accuracy that training on a single feature would cause. A picture to be processed is input into the express carton garbage classification model to obtain its target category, so that accurate classification of express garbage is realized based on computer vision, assisting the recycling of express garbage. In addition, the invention also relates to blockchain technology, and the express carton garbage classification model can be stored in a blockchain node.

Description

Express carton garbage classification method, device, equipment and medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a medium for classifying express carton garbage.
Background
With the continuous development of the express delivery industry, the volume of express deliveries has grown exponentially, generating a large amount of waste carton garbage. When express cartons with the external adhesive tape still attached are reused, the resulting gelatinous chemical substances can pollute the environment, so waste express cartons need to be recovered and reused in order to protect the ecological environment. However, because of the efficiency requirements of factory assembly lines, classification of express cartons is difficult to achieve by manual operation.
In the field of garbage classification, the following advances have mainly been made:
(1) In 2015, ozkan et al work opened the door to CV (Computer Vision) technology in the field of garbage classification identification. Ozkan et al are plastic bottles, and the classification recognition with higher accuracy is realized through image preprocessing such as noise reduction.
(2) The study work of scholars such as SUDHAS, esakr and the like realizes more application of CV technology in the field of garbage classification and identification, and classification of garbage materials can be realized on a macroscopic level.
(3) There is little research effort in the field of garbage classification recognition based on CV technology in China. Expert students studied classification of water surface garbage in 2019, and classified construction garbage in the same year until 2020, classification and identification of domestic garbage based on CV technology can be realized under the condition of about 80% accuracy.
From the above, the classification and identification of the garbage based on the CV technology has been greatly developed, but there is no research on the development of single-variety garbage, especially on the classification of the garbage of the express cartons, and no research on the related classification and identification technology is available at present.
Disclosure of Invention
In view of the above, it is necessary to provide a method, a device, equipment and a medium for classifying express carton garbage, which can accurately classify express garbage based on computer vision so as to assist in recycling it.
An express carton garbage classification method, comprising the following steps:
Responding to an express carton garbage classification instruction, and acquiring sample data according to the express carton garbage classification instruction;
performing image enhancement processing on the sample data to obtain an enhanced sample;
performing image segmentation processing on the enhanced sample to obtain a contour information set;
Extracting texture features of each profile information in the profile information set, and constructing a texture feature set according to the extracted texture features;
extracting shape characteristics of each piece of contour information in the contour information set, and constructing a shape characteristic set according to the extracted shape characteristics;
combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all the training samples obtained by combination;
Training a designated classifier by using the sample set to obtain an express carton garbage classification model;
When receiving a picture to be processed, inputting the picture to be processed into the express carton garbage classification model, and acquiring output of the express carton garbage classification model as a target class of the picture to be processed.
According to a preferred embodiment of the present invention, the performing image enhancement processing on the sample data to obtain an enhanced sample includes:
for each sub-image in the sample data, blurring the sub-image according to a specified scale to obtain a blurred image;
Calculating the logarithmic value of the sub-image as a first logarithmic value, and calculating the logarithmic value of the blurred image as a second logarithmic value;
Calculating a difference between the first and second logarithmic values as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
According to a preferred embodiment of the present invention, the image segmentation processing is performed on the enhanced sample, and obtaining the contour information set includes:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI of each feature map in the feature map set;
inputting at least one candidate ROI of each feature map to a region suggestion network for filtering to obtain a target ROI of each feature map;
Performing an alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the profile information set by using the obtained profile information.
According to a preferred embodiment of the present invention, the extracting texture features of each profile information in the profile information set includes:
Acquiring the number of pixel points in each piece of contour information;
Determining the gray value of the pixel point in each profile information, and determining the probability of each pixel point to take the corresponding gray value;
calculating the average value corresponding to each piece of contour information according to the number of the pixel points in each piece of contour information, the gray value of the pixel points in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating the contrast corresponding to each piece of contour information according to the gray value of the pixel point in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating entropy corresponding to each profile information according to probability of taking corresponding gray value of each pixel point;
And combining the average value corresponding to each profile information, the contrast corresponding to each profile information and the entropy corresponding to each profile information to obtain the texture feature of each profile information.
According to a preferred embodiment of the present invention, the extracting the shape feature of each profile information in the profile information set includes:
Calculating the perimeter and the area of each piece of contour information in the contour information set;
the perimeter and area of each profile information are determined as the shape feature of each profile information.
According to a preferred embodiment of the present invention, training a specified classifier using the sample set to obtain an express carton waste classification model includes:
Determining a class of each training sample in the set of samples;
carrying out label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
Inputting the first sample set to a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model with the second set of samples;
And when the accuracy rate of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
According to a preferred embodiment of the present invention, after obtaining the output of the express carton waste classification model as the target category of the to-be-processed picture, the method further includes:
establishing an express garbage treatment table according to the category and the corresponding treatment measures;
matching in the express rubbish treatment table by utilizing the target category, and determining the treatment measure corresponding to the matched category as a target treatment measure;
Generating prompt information according to the target processing measure;
and sending the prompt information to the appointed terminal equipment.
An express carton garbage classification device, comprising:
the system comprises an acquisition unit, a storage unit and a processing unit, wherein the acquisition unit is used for responding to an express carton garbage classification instruction and acquiring sample data according to the express carton garbage classification instruction;
the enhancement unit is used for carrying out image enhancement processing on the sample data to obtain an enhanced sample;
the segmentation unit is used for carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
the construction unit is used for extracting the texture feature of each profile information in the profile information set and constructing a texture feature set according to the extracted texture feature;
The construction unit is also used for extracting the shape characteristic of each piece of contour information in the contour information set and constructing a shape characteristic set according to the extracted shape characteristic;
The construction unit is further configured to combine each texture feature in the texture feature set and a corresponding shape feature in the shape feature set into a training sample, and construct a sample set according to all the training samples obtained by combination;
the training unit is used for training the specified classifier by utilizing the sample set to obtain an express carton garbage classification model;
And the classification unit is used for inputting the picture to be processed into the express carton garbage classification model when receiving the picture to be processed, and acquiring the output of the express carton garbage classification model as the target category of the picture to be processed.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
And the processor executes the instructions stored in the memory to realize the express carton waste classification method.
A computer-readable storage medium having stored therein at least one instruction for execution by a processor in a computer device to implement the method of sorting express carton waste.
According to the technical scheme, image enhancement processing can be performed on the sample data to obtain an enhanced sample, and image segmentation processing is performed on the enhanced sample to obtain a contour information set. The texture feature of each piece of contour information in the contour information set is extracted, and a texture feature set is constructed from the extracted texture features; the shape feature of each piece of contour information is further extracted, and a shape feature set is constructed from the extracted shape features. Each texture feature in the texture feature set and the corresponding shape feature in the shape feature set are combined into a training sample, a sample set is constructed from all the training samples obtained by combination, and a specified classifier is trained with the sample set to obtain the express carton garbage classification model, so that accurate classification of express garbage can be realized based on computer vision.
Drawings
Fig. 1 is a flowchart of a method for sorting waste in an express carton according to a preferred embodiment of the invention.
Fig. 2 is a functional block diagram of a garbage sorting device for express cartons according to a preferred embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a computer device for implementing a preferred embodiment of the method for sorting waste in an express carton of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of a method for sorting waste in an express carton according to a preferred embodiment of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The express carton garbage classification method is applied to one or more computer devices. The computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device and the like.
The computer device may be any electronic product capable of human-computer interaction with a user, such as a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game console, an interactive Internet Protocol television (IPTV), a smart wearable device, etc.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud computing cluster composed of a large number of hosts or network servers.
The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
S10, responding to the express carton garbage classification instruction, and acquiring sample data according to the express carton garbage classification instruction.
In this embodiment, the express carton garbage classification instruction may be triggered by a worker responsible for classifying the express carton garbage, or may be triggered by a developer, which is not limited by the present invention.
In at least one embodiment of the present invention, the obtaining sample data according to the express carton waste classification instruction includes:
Analyzing the express carton garbage classification instruction to obtain information carried by the express carton garbage classification instruction;
Acquiring a preset label corresponding to the address;
constructing a regular expression according to the preset label;
Traversing information carried by the express carton garbage classification instruction by using the regular expression, and determining the traversed information as a target address;
And connecting to the target address, and acquiring the data stored by the target address to construct the sample data.
For example, when the preset label is ADD, the constructed regular expression is ADD(), and the regular expression ADD() is used to traverse the information carried by the express carton garbage classification instruction to obtain a target address. A large amount of data is stored at the target address, for example, photographed pictures of express garbage; the data stored at the target address is then integrated to obtain the sample data.
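The snippet below is a minimal Python sketch of this address-extraction step. The payload format, the helper name extract_target_address and the exact pattern are hypothetical assumptions, since the patent does not spell out the full regular expression or the instruction format.

```python
import re
from typing import Optional

def extract_target_address(instruction_payload: str, preset_label: str = "ADD") -> Optional[str]:
    """Build a regular expression from the preset label and traverse the
    instruction payload to find the target address (hypothetical sketch)."""
    # Assumed convention: the address is wrapped as ADD(...) inside the payload.
    pattern = re.compile(re.escape(preset_label) + r"\((.*?)\)")
    match = pattern.search(instruction_payload)
    return match.group(1) if match else None

# Usage example with a hypothetical instruction payload.
payload = "task=classify;ADD(http://10.0.0.8/express_waste_images/);user=staff01"
print(extract_target_address(payload))  # -> http://10.0.0.8/express_waste_images/
```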
S11, performing image enhancement processing on the sample data to obtain an enhanced sample.
It will be appreciated that due to the angle of the photographing, or the influence of illumination factors, or due to jitter during photographing, a great deal of noise may be included in the sample data, which affects the accuracy of the subsequent classification.
Therefore, in this embodiment, the image enhancement processing is performed on the sample data to obtain the enhanced sample, which specifically includes:
for each sub-image in the sample data, blurring the sub-image according to a specified scale to obtain a blurred image;
Calculating the logarithmic value of the sub-image as a first logarithmic value, and calculating the logarithmic value of the blurred image as a second logarithmic value;
Calculating a difference between the first and second logarithmic values as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
The specified scale can be configured in a self-defined way, and is the fuzzy radius.
In this embodiment, when the sub-image is subjected to blurring processing according to the specified scale to obtain the blurred image, a gaussian blur algorithm may be adopted, or a mean blur algorithm may be adopted instead of the gaussian blur algorithm.
Specifically, the third logarithmic value is calculated using the following formula:
Log[R(x,y)]=Log[I(x,y)]-Log[L(x,y)]
where R(x, y) represents the reflection component of the target object at pixel (x, y), which carries the image detail information; I(x, y) represents the original input at pixel (x, y); L(x, y) represents the illumination component of the ambient light; Log[R(x, y)] represents the third logarithmic value; Log[I(x, y)] represents the first logarithmic value; and Log[L(x, y)] represents the second logarithmic value.
Further, the third logarithmic value is converted into a pixel value ranging from 0 to 255, and the result after image enhancement, namely the enhanced image, can be obtained.
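A minimal single-scale Retinex-style sketch of this enhancement step (blurring, log-domain subtraction, and rescaling to 0-255) is shown below. It assumes OpenCV and NumPy are available; the blur scale value is illustrative, and the patent itself does not prescribe a particular library or radius.

```python
import cv2
import numpy as np

def enhance_sub_image(sub_image: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """Single-scale Retinex-style enhancement: Log[R] = Log[I] - Log[L]."""
    img = sub_image.astype(np.float32) + 1.0           # offset to avoid log(0)
    # Blur the sub-image at the specified scale to estimate the illumination L(x, y).
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)
    # First and second logarithmic values, and their difference (third logarithmic value).
    log_r = np.log(img) - np.log(blurred)
    # Convert the third logarithmic value back to pixel values in the range 0-255.
    enhanced = cv2.normalize(log_r, None, 0, 255, cv2.NORM_MINMAX)
    return enhanced.astype(np.uint8)
```

A mean (box) blur could be substituted for the Gaussian blur here, matching the alternative mentioned above.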
In this embodiment, the noise in the express garbage images can be effectively suppressed through image enhancement, and the influence of image distortion on classification accuracy is reduced.
S12, performing image segmentation processing on the enhanced sample to obtain a contour information set.
In at least one embodiment of the present invention, the performing image segmentation processing on the enhanced sample to obtain a contour information set includes:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
Acquiring at least one candidate ROI (Region Of Interest ) of each feature map in the set of feature maps;
inputting at least one candidate ROI of each feature map to a region suggestion network for filtering to obtain a target ROI of each feature map;
Performing an alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the profile information set by using the obtained profile information.
Wherein the pre-trained neural network model may include ResNeXt or the like.
For example, the enhanced sample A is input into a ResNeXt network for feature extraction to obtain a feature map, and a plurality of candidate ROIs of the feature map are determined according to preset areas. The candidate ROIs are then input into an RPN (Region Proposal Network), where binary classification (foreground or background) and bounding-box (BB) regression are carried out, and part of the candidate ROIs are filtered out to obtain the target ROIs of the feature map. An ROIAlign (ROI alignment) operation is performed on the target ROIs, the aligned ROIs are input into an FCN (Fully Convolutional Network), and classification and regression are performed in the FCN to obtain the contour information of the feature map.
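The pipeline described above (backbone + RPN + ROIAlign + FCN mask head) is a Mask R-CNN-style architecture. The sketch below approximates it with torchvision's off-the-shelf Mask R-CNN, which uses a ResNet-50-FPN backbone rather than the ResNeXt backbone named in the embodiment; this substitution and the score threshold are assumptions, not the patented configuration.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Off-the-shelf Mask R-CNN (ResNet-50-FPN backbone, torchvision >= 0.13) used here
# as a stand-in for the ResNeXt + RPN + ROIAlign + FCN pipeline described above.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def extract_contour_info(enhanced_sample, score_threshold: float = 0.5):
    """Return binary masks (contour information) for one enhanced sample image."""
    with torch.no_grad():
        prediction = model([to_tensor(enhanced_sample)])[0]
    keep = prediction["scores"] > score_threshold   # filter low-confidence ROIs
    masks = prediction["masks"][keep] > 0.5          # N x 1 x H x W boolean masks
    return masks.squeeze(1).cpu().numpy()
```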
In the embodiment, the image is segmented, so that the influence of reflection on classification is effectively avoided.
S13, extracting texture features of each piece of contour information in the contour information set, and constructing a texture feature set according to the extracted texture features.
In at least one embodiment of the present invention, the extracting texture features of each profile information in the profile information set includes:
Acquiring the number of pixel points in each piece of contour information;
Determining the gray value of the pixel point in each profile information, and determining the probability of each pixel point to take the corresponding gray value;
calculating the average value corresponding to each piece of contour information according to the number of the pixel points in each piece of contour information, the gray value of the pixel points in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating the contrast corresponding to each piece of contour information according to the gray value of the pixel point in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating entropy corresponding to each profile information according to probability of taking corresponding gray value of each pixel point;
And combining the average value corresponding to each profile information, the contrast corresponding to each profile information and the entropy corresponding to each profile information to obtain the texture feature of each profile information.
Specifically, the average value corresponding to each piece of contour information is calculated from the gray-level distribution of that contour, where Y_mean denotes the average value corresponding to the contour information, m denotes the number of pixels in the contour information, i denotes a gray value of a pixel, and p(i) denotes the probability that a pixel takes the gray value i.
Further, the contrast corresponding to each piece of contour information is calculated, where Y_con denotes the contrast corresponding to the contour information.
Further, the entropy corresponding to each piece of contour information is calculated, where Y_entropy denotes the entropy corresponding to the contour information.
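The formula images of the original publication are not available in this text, so the sketch below reconstructs Y_mean, Y_con and Y_entropy from the surrounding definitions using the standard first-order gray-level histogram statistics; the exact form used in the patent is an assumption.

```python
import numpy as np

def texture_features(contour_pixels: np.ndarray) -> tuple:
    """Compute (mean, contrast, entropy) of the gray values inside one contour.

    contour_pixels: 1-D array of gray values (0-255) of the m pixels in the contour.
    Assumed forms (standard histogram statistics):
        Y_mean    = sum_i i * p(i)
        Y_con     = sum_i (i - Y_mean)^2 * p(i)
        Y_entropy = -sum_i p(i) * log2(p(i))
    """
    hist = np.bincount(contour_pixels.ravel().astype(np.int64), minlength=256)
    p = hist / hist.sum()                     # p(i): probability of gray value i
    i = np.arange(256)
    y_mean = float((i * p).sum())
    y_con = float((((i - y_mean) ** 2) * p).sum())
    nonzero = p > 0
    y_entropy = float(-(p[nonzero] * np.log2(p[nonzero])).sum())
    return y_mean, y_con, y_entropy
```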
In the embodiment, when the texture features are extracted, the gray features are introduced, so that the texture features of the express cartons can be clearly extracted, and the accuracy of garbage classification of the subsequent express cartons is improved.
S14, extracting the shape characteristics of each piece of contour information in the contour information set, and constructing a shape characteristic set according to the extracted shape characteristics.
In at least one embodiment of the present invention, the extracting the shape feature of each profile information in the profile information set includes:
Calculating the perimeter and the area of each piece of contour information in the contour information set;
the perimeter and area of each profile information are determined as the shape feature of each profile information.
For example, when the identified contour information is a quadrangle, the corresponding shape features can be calculated according to the formulas for the perimeter and area of a quadrangle, which are not detailed here.
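A minimal sketch of computing the perimeter and area from a segmentation mask using OpenCV contour operations is shown below; using cv2.arcLength and cv2.contourArea is an implementation choice, not something mandated by the patent.

```python
import cv2
import numpy as np

def shape_features(mask: np.ndarray) -> tuple:
    """Perimeter and area of the largest contour in a binary mask (values 0/255)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, 0.0
    contour = max(contours, key=cv2.contourArea)   # keep the dominant contour
    perimeter = cv2.arcLength(contour, True)       # closed contour perimeter
    area = cv2.contourArea(contour)
    return perimeter, area
```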
In this embodiment, shape features are introduced, providing a richer data basis for the subsequent classification of express carton garbage.
S15, combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all the training samples obtained by combination.
Through the embodiment, the texture features and the shape features can be combined simultaneously when the sample set is constructed, so that more accurate classification can be performed, and the influence of training by adopting a single feature on the accuracy of a training model is avoided.
S16, training a specified classifier by using the sample set to obtain an express carton garbage classification model.
In at least one embodiment of the present invention, training a specified classifier using the sample set to obtain the express carton waste classification model includes:
Determining a class of each training sample in the set of samples;
carrying out label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
Inputting the first sample set to a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model with the second set of samples;
And when the accuracy rate of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
Wherein the specified classifier may comprise a support vector machine (Support Vector Machine, SVM) classifier.
It will be appreciated that during the training phase, the class of each training sample is known and therefore, in order to facilitate subsequent training, the training samples need to be labeled.
For example: the training samples may be marked as "express cartons", "other litter". The express delivery carton can be further refined into labels of a large-size express delivery carton, a medium-size express delivery carton and a small-size express delivery carton.
The preset proportion can be configured in a self-defined manner, such as 80%. The preset threshold may also be configured in a custom manner, such as 95%.
After the label sample set is divided according to the preset proportion, the obtained first sample set can be used as a training set, and the second sample set can be used as a verification set.
Splitting the sample set for training effectively ensures the accuracy of the model and improves its classification effect; in addition, using a support vector machine for classification and identification yields higher accuracy.
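A minimal scikit-learn sketch of this labeling, splitting, training and validation flow is given below. The 80% split ratio, the 95% accuracy threshold and the RBF kernel are illustrative values taken from the examples above or assumed, and the feature vectors are assumed to be the concatenated texture and shape features built earlier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_carton_classifier(features: np.ndarray, labels: np.ndarray,
                            train_ratio: float = 0.8, threshold: float = 0.95):
    """Train an SVM on (texture + shape) feature vectors and validate it."""
    x_train, x_val, y_train, y_val = train_test_split(
        features, labels, train_size=train_ratio, stratify=labels, random_state=0)
    model = SVC(kernel="rbf")                  # specified classifier: support vector machine
    model.fit(x_train, y_train)                # first sample set: training
    accuracy = accuracy_score(y_val, model.predict(x_val))   # second sample set: validation
    if accuracy < threshold:
        raise RuntimeError(f"validation accuracy {accuracy:.2%} is below the preset threshold")
    return model
```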
And S17, when receiving a picture to be processed, inputting the picture to be processed into the express carton garbage classification model, and acquiring the output of the express carton garbage classification model as the target category of the picture to be processed.
In this embodiment, the to-be-processed picture may be uploaded by a related staff, and the form of the to-be-processed picture may include a photo and the like.
In at least one embodiment of the present invention, after obtaining the output of the express carton waste classification model as the target category of the to-be-processed picture, the method further includes:
establishing an express garbage treatment table according to the category and the corresponding treatment measures;
matching in the express rubbish treatment table by utilizing the target category, and determining the treatment measure corresponding to the matched category as a target treatment measure;
Generating prompt information according to the target processing measure;
and sending the prompt information to the appointed terminal equipment.
The express garbage disposal table stores the corresponding relation between the category and the disposal measure, for example: the corresponding treatment measures of the large-size express delivery carton are as follows: delivering to a large dustbin, waiting for recovery.
The designated terminal device may include a terminal device of a worker responsible for garbage collection.
The prompt information comprises the class of garbage and corresponding treatment measures, and is used for prompting the relevant staff about the garbage recycling treatment mode.
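A minimal sketch of the disposal-table lookup and prompt generation described above is shown below; the table contents, category names and the notification call are hypothetical examples rather than values fixed by the patent.

```python
# Hypothetical express garbage disposal table: category -> disposal measure.
DISPOSAL_TABLE = {
    "large express carton": "deliver to the large recycling bin and wait for recovery",
    "medium express carton": "flatten and deliver to the carton recycling bin",
    "small express carton": "flatten and deliver to the carton recycling bin",
    "other garbage": "deliver to the general waste bin",
}

def build_prompt(target_category: str) -> str:
    """Match the predicted category in the table and generate the prompt message."""
    measure = DISPOSAL_TABLE.get(target_category, "manual inspection required")
    return f"Category: {target_category}; disposal measure: {measure}"

# The prompt would then be pushed to the designated terminal device, e.g.:
# send_to_terminal(device_id="staff-terminal-01", message=build_prompt(category))
```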
The embodiment assists staff in accurately recycling garbage by predicting categories and matching corresponding treatment measures.
In order to improve the security of the data and prevent it from being maliciously tampered with, the express carton garbage classification model can be deployed at a blockchain node.
According to the technical scheme, the invention can respond to an express carton garbage classification instruction and acquire sample data according to the instruction. Image enhancement processing is performed on the sample data to obtain an enhanced sample, which effectively suppresses the noise in express garbage images and reduces the influence of image distortion on classification accuracy. Image segmentation processing is performed on the enhanced sample to obtain a contour information set, which effectively avoids the influence of light reflection on classification. The texture feature of each piece of contour information in the contour information set is extracted and a texture feature set is constructed from the extracted texture features; gray-scale features are introduced when extracting the texture features, so the texture features of the express cartons can be extracted clearly, improving the accuracy of subsequent express carton garbage classification. The shape feature of each piece of contour information is extracted and a shape feature set is constructed from the extracted shape features, the introduction of shape features providing a richer data basis for subsequent express carton garbage classification. Each texture feature in the texture feature set and the corresponding shape feature in the shape feature set are combined into a training sample, and a sample set is constructed from all the training samples obtained by combination; texture and shape features are combined when constructing the sample set so that classification is more accurate, avoiding the loss of accuracy that training on a single feature would cause. A specified classifier is trained with the sample set to obtain the express carton garbage classification model; splitting the sample set effectively ensures the accuracy of the model and improves its classification effect, and classification and identification with a support vector machine yields higher accuracy. When a picture to be processed is received, it is input into the express carton garbage classification model and the output of the model is acquired as the target category of the picture, so that accurate classification of express garbage can be realized based on computer vision, assisting the recycling of express garbage.
Fig. 2 is a functional block diagram of a preferred embodiment of the sorting device for garbage in express cartons according to the present invention. The express carton waste classification device 11 comprises an acquisition unit 110, an enhancement unit 111, a segmentation unit 112, a construction unit 113, a training unit 114 and a classification unit 115. The module/unit referred to in the present invention refers to a series of computer program segments capable of being executed by the processor 13 and of performing a fixed function, which are stored in the memory 12. In the present embodiment, the functions of the respective modules/units will be described in detail in the following embodiments.
In response to the express carton waste classification instruction, the acquisition unit 110 acquires sample data according to the express carton waste classification instruction.
In this embodiment, the express carton garbage classification instruction may be triggered by a worker responsible for classifying the express carton garbage, or may be triggered by a developer, which is not limited by the present invention.
In at least one embodiment of the present invention, the obtaining unit 110 obtains the sample data according to the express carton waste classification instruction includes:
Analyzing the express carton garbage classification instruction to obtain information carried by the express carton garbage classification instruction;
Acquiring a preset label corresponding to the address;
constructing a regular expression according to the preset label;
Traversing information carried by the express carton garbage classification instruction by using the regular expression, and determining the traversed information as a target address;
And connecting to the target address, and acquiring the data stored by the target address to construct the sample data.
For example, when the preset label is ADD, the constructed regular expression is ADD(), and the regular expression ADD() is used to traverse the information carried by the express carton garbage classification instruction to obtain a target address. A large amount of data is stored at the target address, for example, photographed pictures of express garbage; the data stored at the target address is then integrated to obtain the sample data.
The enhancement unit 111 performs image enhancement processing on the sample data to obtain an enhanced sample.
It will be appreciated that due to the angle of the photographing, or the influence of illumination factors, or due to jitter during photographing, a great deal of noise may be included in the sample data, which affects the accuracy of the subsequent classification.
Therefore, in this embodiment, the image enhancement processing is performed on the sample data to obtain the enhanced sample, which specifically includes:
for each sub-image in the sample data, blurring the sub-image according to a specified scale to obtain a blurred image;
Calculating the logarithmic value of the sub-image as a first logarithmic value, and calculating the logarithmic value of the blurred image as a second logarithmic value;
Calculating a difference between the first and second logarithmic values as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
The specified scale can be configured in a self-defined way, and is the fuzzy radius.
In this embodiment, when the sub-image is subjected to blurring processing according to the specified scale to obtain the blurred image, a gaussian blur algorithm may be adopted, or a mean blur algorithm may be adopted instead of the gaussian blur algorithm.
Specifically, the third logarithmic value is calculated using the following formula:
Log[R(x,y)]=Log[I(x,y)]-Log[L(x,y)]
where R(x, y) represents the reflection component of the target object at pixel (x, y), which carries the image detail information; I(x, y) represents the original input at pixel (x, y); L(x, y) represents the illumination component of the ambient light; Log[R(x, y)] represents the third logarithmic value; Log[I(x, y)] represents the first logarithmic value; and Log[L(x, y)] represents the second logarithmic value.
Further, the third logarithmic value is converted into a pixel value ranging from 0 to 255, and the result after image enhancement, namely the enhanced image, can be obtained.
In this embodiment, the noise in the express garbage images can be effectively suppressed through image enhancement, and the influence of image distortion on classification accuracy is reduced.
The segmentation unit 112 performs image segmentation processing on the enhanced sample to obtain a contour information set.
In at least one embodiment of the present invention, the image segmentation processing is performed on the enhanced sample by the segmentation unit 112, so as to obtain a contour information set, which includes:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
Acquiring at least one candidate ROI (Region Of Interest ) of each feature map in the set of feature maps;
inputting at least one candidate ROI of each feature map to a region suggestion network for filtering to obtain a target ROI of each feature map;
Performing an alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the profile information set by using the obtained profile information.
Wherein the pre-trained neural network model may include ResNeXt or the like.
For example, the enhanced sample A is input into a ResNeXt network for feature extraction to obtain a feature map, and a plurality of candidate ROIs of the feature map are determined according to preset areas. The candidate ROIs are then input into an RPN (Region Proposal Network), where binary classification (foreground or background) and bounding-box (BB) regression are carried out, and part of the candidate ROIs are filtered out to obtain the target ROIs of the feature map. An ROIAlign (ROI alignment) operation is performed on the target ROIs, the aligned ROIs are input into an FCN (Fully Convolutional Network), and classification and regression are performed in the FCN to obtain the contour information of the feature map.
In the embodiment, the image is segmented, so that the influence of reflection on classification is effectively avoided.
The construction unit 113 extracts a texture feature of each profile information in the profile information set, and constructs a texture feature set from the extracted texture features.
In at least one embodiment of the present invention, the extracting, by the construction unit 113, texture features of each profile information in the profile information set includes:
Acquiring the number of pixel points in each piece of contour information;
Determining the gray value of the pixel point in each profile information, and determining the probability of each pixel point to take the corresponding gray value;
calculating the average value corresponding to each piece of contour information according to the number of the pixel points in each piece of contour information, the gray value of the pixel points in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating the contrast corresponding to each piece of contour information according to the gray value of the pixel point in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating entropy corresponding to each profile information according to probability of taking corresponding gray value of each pixel point;
And combining the average value corresponding to each profile information, the contrast corresponding to each profile information and the entropy corresponding to each profile information to obtain the texture feature of each profile information.
Specifically, the average value corresponding to each piece of contour information is calculated from the gray-level distribution of that contour, where Y_mean denotes the average value corresponding to the contour information, m denotes the number of pixels in the contour information, i denotes a gray value of a pixel, and p(i) denotes the probability that a pixel takes the gray value i.
Further, the contrast corresponding to each piece of contour information is calculated, where Y_con denotes the contrast corresponding to the contour information.
Further, the entropy corresponding to each piece of contour information is calculated, where Y_entropy denotes the entropy corresponding to the contour information.
In the embodiment, when the texture features are extracted, the gray features are introduced, so that the texture features of the express cartons can be clearly extracted, and the accuracy of garbage classification of the subsequent express cartons is improved.
The construction unit 113 extracts a shape feature of each profile information in the profile information set, and constructs a shape feature set from the extracted shape features.
In at least one embodiment of the present invention, the extracting, by the construction unit 113, a shape feature of each profile information in the profile information set includes:
Calculating the perimeter and the area of each piece of contour information in the contour information set;
the perimeter and area of each profile information are determined as the shape feature of each profile information.
For example, when the identified contour information is a quadrangle, the corresponding shape features can be calculated according to the formulas for the perimeter and area of a quadrangle, which are not detailed here.
In this embodiment, shape features are introduced, providing a richer data basis for the subsequent classification of express carton garbage.
The construction unit 113 combines each texture feature in the texture feature set and a corresponding shape feature in the shape feature set into one training sample, and constructs a sample set according to all training samples obtained by the combination.
Through the embodiment, the texture features and the shape features can be combined simultaneously when the sample set is constructed, so that more accurate classification can be performed, and the influence of training by adopting a single feature on the accuracy of a training model is avoided.
The training unit 114 trains the specified classifier by using the sample set to obtain the express carton waste classification model.
In at least one embodiment of the present invention, the training unit 114 training the specified classifier using the sample set to obtain the express carton waste classification model includes:
Determining a class of each training sample in the set of samples;
carrying out label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
Inputting the first sample set to a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model with the second set of samples;
And when the accuracy rate of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
Wherein the specified classifier may comprise a support vector machine (Support Vector Machine, SVM) classifier.
It will be appreciated that during the training phase, the class of each training sample is known and therefore, in order to facilitate subsequent training, the training samples need to be labeled.
For example: the training samples may be marked as "express cartons", "other litter". The express delivery carton can be further refined into labels of a large-size express delivery carton, a medium-size express delivery carton and a small-size express delivery carton.
The preset proportion can be configured in a self-defined manner, such as 80%. The preset threshold may also be configured in a custom manner, such as 95%.
After the label sample set is divided according to the preset proportion, the obtained first sample set can be used as a training set, and the second sample set can be used as a verification set.
Splitting the sample set for training effectively ensures the accuracy of the model and improves its classification effect; in addition, using a support vector machine for classification and identification yields higher accuracy.
When receiving a picture to be processed, the classification unit 115 inputs the picture to be processed into the express carton waste classification model, and obtains the output of the express carton waste classification model as the target category of the picture to be processed.
In this embodiment, the to-be-processed picture may be uploaded by a related staff, and the form of the to-be-processed picture may include a photo and the like.
In at least one embodiment of the invention, after the output of the express carton garbage classification model is obtained as the target category of the picture to be processed, an express garbage treatment table is established according to the category and corresponding treatment measures;
matching in the express rubbish treatment table by utilizing the target category, and determining the treatment measure corresponding to the matched category as a target treatment measure;
Generating prompt information according to the target processing measure;
and sending the prompt information to the appointed terminal equipment.
The express garbage disposal table stores the corresponding relation between the category and the disposal measure, for example: the corresponding treatment measures of the large-size express delivery carton are as follows: delivering to a large dustbin, waiting for recovery.
The designated terminal device may include a terminal device of a worker responsible for garbage collection.
The prompt information comprises the class of garbage and corresponding treatment measures, and is used for prompting the relevant staff about the garbage recycling treatment mode.
The embodiment assists staff in accurately recycling garbage by predicting categories and matching corresponding treatment measures.
In order to improve the security of the data and prevent it from being maliciously tampered with, the express carton garbage classification model can be deployed at a blockchain node.
According to the technical scheme, the invention can respond to an express carton garbage classification instruction and acquire sample data according to the instruction. Image enhancement processing is performed on the sample data to obtain an enhanced sample, which effectively suppresses the noise in express garbage images and reduces the influence of image distortion on classification accuracy. Image segmentation processing is performed on the enhanced sample to obtain a contour information set, which effectively avoids the influence of light reflection on classification. The texture feature of each piece of contour information in the contour information set is extracted and a texture feature set is constructed from the extracted texture features; gray-scale features are introduced when extracting the texture features, so the texture features of the express cartons can be extracted clearly, improving the accuracy of subsequent express carton garbage classification. The shape feature of each piece of contour information is extracted and a shape feature set is constructed from the extracted shape features, the introduction of shape features providing a richer data basis for subsequent express carton garbage classification. Each texture feature in the texture feature set and the corresponding shape feature in the shape feature set are combined into a training sample, and a sample set is constructed from all the training samples obtained by combination; texture and shape features are combined when constructing the sample set so that classification is more accurate, avoiding the loss of accuracy that training on a single feature would cause. A specified classifier is trained with the sample set to obtain the express carton garbage classification model; splitting the sample set effectively ensures the accuracy of the model and improves its classification effect, and classification and identification with a support vector machine yields higher accuracy. When a picture to be processed is received, it is input into the express carton garbage classification model and the output of the model is acquired as the target category of the picture, so that accurate classification of express garbage can be realized based on computer vision, assisting the recycling of express garbage.
Fig. 3 is a schematic structural diagram of a computer device for implementing a preferred embodiment of the method for sorting waste in express cartons according to the present invention.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program stored in the memory 12 and executable on the processor 13, such as an express carton waste sorting program.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation of the computer device 1. The computer device 1 may adopt a bus-type structure or a star-type structure, and may further comprise more or less hardware or software than illustrated, or a different arrangement of components; for example, the computer device 1 may further comprise an input-output device, a network access device, etc.
It should be noted that the computer device 1 is only an example; other existing or future electronic products that can be adapted to the present invention are also included in the scope of protection of the present invention and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, including flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, such as a removable hard disk of the computer device 1. The memory 12 may also be an external storage device of the computer device 1 in other embodiments, such as a plug-in mobile hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only for storing application software installed in the computer device 1 and various types of data, such as the code of the express carton garbage classification program, but also for temporarily storing data that has been output or is to be output.
The processor 13 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 13 is the control unit of the computer device 1; it connects the components of the entire computer device 1 using various interfaces and lines, and executes various functions of the computer device 1 and processes data by running or executing the programs or modules stored in the memory 12 (for example, the express carton garbage classification program) and calling the data stored in the memory 12.
The processor 13 executes the operating system of the computer device 1 and various types of applications installed. The processor 13 executes the application program to implement the steps in the above-described embodiments of the method for sorting the waste in the express cartons, for example, the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the computer device 1. For example, the computer program may be divided into an acquisition unit 110, an enhancement unit 111, a division unit 112, a construction unit 113, a training unit 114, a classification unit 115.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional module is stored in a storage medium, and includes several instructions for making a computer device (which may be a personal computer, a computer device, or a network device, etc.) or a processor (processor) execute a part of the method for sorting the express carton waste according to the embodiments of the present invention.
If the modules/units integrated in the computer device 1 are implemented in the form of software functional units and are sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may also implement all or part of the procedures of the above method embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the above method embodiments.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a chain of data blocks generated in association by cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one line is shown in fig. 3, but this does not mean that there is only one bus or only one type of bus. The bus is arranged to enable connection and communication between the memory 12, the at least one processor 13 and other components.
Although not shown, the computer device 1 may further comprise a power source (such as a battery) for powering the various components. Preferably, the power source may be logically connected to the at least one processor 13 via a power management device, so that functions such as charge management, discharge management and power consumption management are realized by the power management device. The power source may also include one or more of a direct-current or alternating-current power supply, a recharging device, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like. The computer device 1 may further include various sensors, a Bluetooth module, a Wi-Fi module and the like, which are not described in detail herein.
Further, the computer device 1 may also comprise a network interface, optionally comprising a wired interface and/or a wireless interface (e.g. a Wi-Fi interface, a Bluetooth interface, etc.), typically used for establishing a communication connection between the computer device 1 and other computer devices.
The computer device 1 may optionally further comprise a user interface, which may include a display and an input unit such as a keyboard; optionally, the user interface may also be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also be appropriately referred to as a display screen or a display unit, and is used for displaying the information processed in the computer device 1 and for displaying a visual user interface.
It should be understood that the described embodiments are for illustrative purposes only and that the scope of the patent application is not limited by this configuration.
Fig. 3 shows only a computer device 1 with the components 12-13. Those skilled in the art will understand that the structure shown in fig. 3 does not limit the computer device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
In conjunction with fig. 1, the memory 12 in the computer device 1 stores a plurality of instructions to implement an express carton garbage classification method, and the processor 13 can execute the plurality of instructions to implement:
Responding to an express carton garbage classification instruction, and acquiring sample data according to the express carton garbage classification instruction;
performing image enhancement processing on the sample data to obtain an enhanced sample;
performing image segmentation processing on the enhanced sample to obtain a contour information set;
Extracting texture features of each profile information in the profile information set, and constructing a texture feature set according to the extracted texture features;
extracting shape characteristics of each piece of contour information in the contour information set, and constructing a shape characteristic set according to the extracted shape characteristics;
combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all the training samples obtained by combination;
Training a designated classifier by using the sample set to obtain an express carton garbage classification model;
When receiving a picture to be processed, inputting the picture to be processed into the express carton garbage classification model, and acquiring output of the express carton garbage classification model as a target class of the picture to be processed.
Specifically, for the specific implementation of the above instructions by the processor 13, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated herein.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be realized in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. Several units or devices recited in the invention may also be implemented by one unit or device through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. An express carton garbage classification method, characterized in that the method comprises the following steps:
Responding to an express carton garbage classification instruction, and acquiring sample data according to the express carton garbage classification instruction;
performing image enhancement processing on the sample data to obtain an enhanced sample;
performing image segmentation processing on the enhanced sample to obtain a contour information set;
Extracting texture features of each profile information in the profile information set, and constructing a texture feature set according to the extracted texture features;
extracting shape characteristics of each piece of contour information in the contour information set, and constructing a shape characteristic set according to the extracted shape characteristics;
combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all the training samples obtained by combination;
Training a designated classifier by using the sample set to obtain an express carton garbage classification model;
When receiving a picture to be processed, inputting the picture to be processed into the express carton garbage classification model, and acquiring output of the express carton garbage classification model as a target class of the picture to be processed;
wherein the performing image segmentation processing on the enhanced sample to obtain the contour information set comprises:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI of each feature map in the feature map set;
Inputting at least one candidate ROI of each feature map into a region proposal network for filtering to obtain a target ROI of each feature map, which comprises: inputting a plurality of candidate ROIs into the region proposal network, and carrying out binary classification and bounding box regression in the region proposal network so as to filter the candidate ROIs and obtain the target ROI of each feature map;
Performing an alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the profile information set by using the obtained profile information.
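For illustration only, and not part of the claims: the segmentation steps recited in claim 1 — backbone feature extraction, a region proposal network with binary classification and bounding-box regression, ROI alignment and a fully convolutional mask head — follow the general Mask R-CNN pattern. The sketch below uses torchvision's off-the-shelf, COCO-pretrained Mask R-CNN as a stand-in for the pre-trained neural network model of the claim; the model choice, the 0.5 score threshold and the OpenCV contour extraction are assumptions made for the sketch, not the patented implementation.

```python
# Minimal sketch of the contour-information-set extraction, assuming a
# generic Mask R-CNN stands in for the patent's pre-trained model.
import cv2
import numpy as np
import torch
import torchvision

# weights="DEFAULT" requires torchvision >= 0.13; older versions use pretrained=True.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def extract_contour_set(bgr_image: np.ndarray, score_threshold: float = 0.5):
    """Return a contour information set: one or more contours per kept ROI."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]              # boxes, labels, scores, masks
    contour_set = []
    for mask, score in zip(output["masks"], output["scores"]):
        if float(score) < score_threshold:       # filter low-confidence ROIs
            continue
        binary = (mask[0].numpy() > 0.5).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contour_set.extend(contours)
    return contour_set
```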
2. The express carton garbage classification method according to claim 1, wherein the performing image enhancement processing on the sample data to obtain an enhanced sample comprises:
for each sub-image in the sample data, blurring the sub-image according to a specified scale to obtain a blurred image;
Calculating the logarithmic value of the sub-image as a first logarithmic value, and calculating the logarithmic value of the blurred image as a second logarithmic value;
Calculating a difference between the first and second logarithmic values as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
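For illustration only: the blur/log-difference enhancement of claim 2 corresponds to a single-scale Retinex-style operation. In the sketch below, the Gaussian scale (sigma) and the min-max conversion back to pixel values are assumptions, since the claim does not fix them.

```python
# Single-scale Retinex-style enhancement of one sub-image, assuming a
# Gaussian blur as the "specified scale" blur and min-max scaling back
# to 8-bit pixel values.
import cv2
import numpy as np

def enhance_sub_image(sub_image: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    img = sub_image.astype(np.float32) + 1.0          # avoid log(0)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)    # blur at the specified scale
    first_log = np.log(img)                           # logarithm of the sub-image
    second_log = np.log(blurred)                      # logarithm of the blurred image
    third_log = first_log - second_log                # difference of the two logarithms
    # Convert the log-domain result back into displayable pixel values.
    enhanced = cv2.normalize(third_log, None, 0, 255, cv2.NORM_MINMAX)
    return enhanced.astype(np.uint8)
```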
3. The express carton garbage classification method according to claim 1, wherein the extracting texture features of each profile information in the profile information set comprises:
Acquiring the number of pixel points in each piece of contour information;
Determining the gray value of the pixel point in each profile information, and determining the probability of each pixel point to take the corresponding gray value;
calculating the average value corresponding to each piece of contour information according to the number of the pixel points in each piece of contour information, the gray value of the pixel points in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating the contrast corresponding to each piece of contour information according to the gray value of the pixel point in each piece of contour information and the probability that each pixel point takes the corresponding gray value;
calculating entropy corresponding to each profile information according to probability of taking corresponding gray value of each pixel point;
And combining the average value corresponding to each profile information, the contrast corresponding to each profile information and the entropy corresponding to each profile information to obtain the texture feature of each profile information.
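For illustration only: a sketch of the gray-level texture features of claim 3. The probability of each gray value is estimated from a histogram of the pixels inside a contour, and the mean, contrast and entropy are computed from those probabilities; the variance-style contrast and the base-2 entropy are assumptions, since the claim does not spell out the exact formulas.

```python
# Histogram-based mean, contrast and entropy for the pixels inside one contour.
import cv2
import numpy as np

def texture_features(gray_image: np.ndarray, contour: np.ndarray):
    mask = np.zeros(gray_image.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    pixels = gray_image[mask == 255]                   # pixel points inside the contour
    if pixels.size == 0:
        return 0.0, 0.0, 0.0

    levels = np.arange(256)
    hist = np.bincount(pixels, minlength=256).astype(np.float64)
    prob = hist / hist.sum()                           # probability of each gray value

    mean = float(np.sum(levels * prob))                          # average value
    contrast = float(np.sum((levels - mean) ** 2 * prob))        # variance-style contrast
    nonzero = prob[prob > 0]
    entropy = float(-np.sum(nonzero * np.log2(nonzero)))         # entropy
    return mean, contrast, entropy
```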
4. The express carton garbage classification method according to claim 1, wherein the extracting the shape feature of each profile information in the profile information set comprises:
Calculating the perimeter and the area of each piece of contour information in the contour information set;
the perimeter and area of each profile information are determined as the shape feature of each profile information.
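For illustration only: claim 4 reduces the shape feature of each piece of contour information to its perimeter and area, which OpenCV computes directly for contours such as those produced by the segmentation sketch after claim 1.

```python
# Perimeter and area of one contour; combined with the texture features,
# they form the per-contour training sample. The contour is assumed to be
# in the format returned by cv2.findContours.
import cv2
import numpy as np

def shape_features(contour: np.ndarray):
    perimeter = cv2.arcLength(contour, True)   # closed-contour perimeter
    area = cv2.contourArea(contour)
    return perimeter, area
```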
5. The express carton garbage classification method according to claim 1, wherein the training a designated classifier by using the sample set to obtain the express carton garbage classification model comprises:
Determining a class of each training sample in the set of samples;
carrying out label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
Inputting the first sample set to a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model with the second set of samples;
And when the accuracy rate of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
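For illustration only: a scikit-learn sketch of the training flow of claim 5 — label the combined texture-and-shape samples, split them by a preset proportion, train a support vector machine classifier, validate it, and keep it only if the accuracy reaches a preset threshold. The 80/20 split, the RBF kernel and the 0.9 threshold are assumptions, not values taken from the patent.

```python
# Support-vector-machine training and validation, assuming an 80/20 split
# and a 0.9 accuracy threshold.
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_carton_classifier(feature_vectors, labels,
                            test_ratio=0.2, accuracy_threshold=0.9):
    # Divide the labelled sample set by a preset proportion.
    x_train, x_valid, y_train, y_valid = train_test_split(
        feature_vectors, labels, test_size=test_ratio,
        stratify=labels, random_state=0)
    classifier = SVC(kernel="rbf")              # the designated (SVM) classifier
    classifier.fit(x_train, y_train)            # train until the solver converges
    accuracy = accuracy_score(y_valid, classifier.predict(x_valid))
    if accuracy >= accuracy_threshold:
        return classifier                       # accepted as the classification model
    raise ValueError(f"validation accuracy {accuracy:.3f} is below the threshold")
```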
6. The express carton garbage classification method according to claim 1, wherein after acquiring the output of the express carton garbage classification model as the target category of the picture to be processed, the method further comprises:
establishing an express garbage treatment table according to the category and the corresponding treatment measures;
matching in the express garbage treatment table by using the target category, and determining the treatment measure corresponding to the matched category as a target treatment measure;
Generating prompt information according to the target processing measure;
and sending the prompt information to the appointed terminal equipment.
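For illustration only: a sketch of the post-classification handling of claim 6, with a small in-memory mapping standing in for the express garbage treatment table. The category names, the treatment measures and the commented-out terminal-delivery step are assumptions made for the sketch.

```python
# Toy express garbage treatment table and the match/prompt steps of claim 6.
TREATMENT_TABLE = {
    "clean_carton": "fold flat and route to the reuse line",
    "taped_carton": "strip the adhesive tape before pulping",
    "soiled_carton": "divert to non-recyclable waste",
}

def handle_target_category(target_category: str) -> str:
    measure = TREATMENT_TABLE.get(target_category)   # match by the target category
    if measure is None:
        raise KeyError(f"no treatment measure recorded for {target_category!r}")
    prompt = f"Detected '{target_category}': {measure}."
    # Sending the prompt to the designated terminal device is deployment-specific
    # (e.g. an HTTP or MQTT client) and is omitted here.
    return prompt
```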
7. An express carton garbage classification device, characterized in that the express carton garbage classification device comprises:
an acquisition unit, used for responding to an express carton garbage classification instruction and acquiring sample data according to the express carton garbage classification instruction;
the enhancement unit is used for carrying out image enhancement processing on the sample data to obtain an enhanced sample;
the segmentation unit is used for carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
the construction unit is used for extracting the texture feature of each profile information in the profile information set and constructing a texture feature set according to the extracted texture feature;
The construction unit is also used for extracting the shape characteristic of each piece of contour information in the contour information set and constructing a shape characteristic set according to the extracted shape characteristic;
The construction unit is further configured to combine each texture feature in the texture feature set and a corresponding shape feature in the shape feature set into a training sample, and construct a sample set according to all the training samples obtained by combination;
the training unit is used for training the specified classifier by utilizing the sample set to obtain an express carton garbage classification model;
the classification unit is used for, when a picture to be processed is received, inputting the picture to be processed into the express carton garbage classification model, and acquiring the output of the express carton garbage classification model as the target category of the picture to be processed;
wherein the segmentation unit performing image segmentation processing on the enhanced sample to obtain the contour information set comprises:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI of each feature map in the feature map set;
Inputting at least one candidate ROI of each feature map into a region proposal network for filtering to obtain a target ROI of each feature map, which comprises: inputting a plurality of candidate ROIs into the region proposal network, and carrying out binary classification and bounding box regression in the region proposal network so as to filter the candidate ROIs and obtain the target ROI of each feature map;
Performing an alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the profile information set by using the obtained profile information.
8. A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor, which executes the instructions stored in the memory to implement the express carton garbage classification method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that: the computer-readable storage medium has stored therein at least one instruction, the at least one instruction being executed by a processor in a computer device to implement the express carton garbage classification method according to any one of claims 1 to 6.
CN202110600085.7A 2021-05-31 Express carton garbage classification method, device, equipment and medium Active CN113222063B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110600085.7A CN113222063B (en) 2021-05-31 Express carton garbage classification method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN113222063A CN113222063A (en) 2021-08-06
CN113222063B (en) 2024-07-02

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897673A (en) * 2017-01-20 2017-06-27 南京邮电大学 A kind of recognition methods again of the pedestrian based on retinex algorithms and convolutional neural networks
CN107092914A (en) * 2017-03-23 2017-08-25 广东数相智能科技有限公司 Refuse classification method, device and system based on image recognition
CN111144322A (en) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Ruiz et al. Automatic image-based waste classification
Gazcón et al. Automatic vehicle identification for Argentinean license plates using intelligent template matching
CN110705583A (en) Cell detection model training method and device, computer equipment and storage medium
CN111695392B (en) Face recognition method and system based on cascade deep convolutional neural network
CN110723432A (en) Garbage classification method and augmented reality equipment
WO2022141858A1 (en) Pedestrian detection method and apparatus, electronic device, and storage medium
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN112052850A (en) License plate recognition method and device, electronic equipment and storage medium
CN112580684B (en) Target detection method, device and storage medium based on semi-supervised learning
CN111738212B (en) Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence
WO2021151277A1 (en) Method and apparatus for determining severity of damage on target object, electronic device, and storage medium
CN113033543A (en) Curved text recognition method, device, equipment and medium
CN108268641A (en) Invoice information recognition methods and invoice information identification device, equipment and storage medium
Rehman et al. An efficient approach for vehicle number plate recognition in Pakistan
CN115471775A (en) Information verification method, device and equipment based on screen recording video and storage medium
CN113704474A (en) Bank outlet equipment operation guide generation method, device, equipment and storage medium
CN116797864B (en) Auxiliary cosmetic method, device, equipment and storage medium based on intelligent mirror
CN113222063B (en) Express carton garbage classification method, device, equipment and medium
CN115439850A (en) Image-text character recognition method, device, equipment and storage medium based on examination sheet
CN116363365A (en) Image segmentation method based on semi-supervised learning and related equipment
CN113222063A (en) Express carton garbage classification method, device, equipment and medium
CN114267064A (en) Face recognition method and device, electronic equipment and storage medium
CN112183520A (en) Intelligent data information processing method and device, electronic equipment and storage medium
Kavitha et al. Text detection based on text shape feature analysis with intelligent grouping in natural scene images
Fernandes et al. A robust automatic license plate recognition system for embedded devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant