CN113222063A - Express carton garbage classification method, device, equipment and medium - Google Patents


Info

Publication number
CN113222063A
CN113222063A (application CN202110600085.7A)
Authority
CN
China
Prior art keywords
express
sample
contour information
waste classification
carton waste
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110600085.7A
Other languages
Chinese (zh)
Inventor
于凤英
王健宗
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110600085.7A
Publication of CN113222063A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence and provides a method, a device, equipment, and a medium for classifying express carton waste. Image enhancement effectively suppresses noise in express waste images and mitigates the effect of image distortion on classification accuracy; image segmentation effectively avoids the influence of light reflection on classification; introducing gray-scale features allows the texture features of an express carton to be extracted clearly, improving the accuracy of subsequent express carton waste classification. Texture features and shape features are combined when constructing the sample set, so that classification is more accurate and the loss of model accuracy caused by training on a single feature is avoided. A picture to be processed is input into the express carton waste classification model to obtain a target category, so that express waste can be classified accurately based on computer vision, assisting the recycling of express waste. In addition, the invention also relates to blockchain technology, and the express carton waste classification model can be stored in blockchain nodes.

Description

Express carton garbage classification method, device, equipment and medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an express carton waste classification method, device, equipment and medium.
Background
With the continuous development of the express delivery industry, the volume of deliveries has grown exponentially, generating a large amount of waste carton garbage. When an express carton coated with adhesive tape is reused, the colloidal chemical substances produced can pollute the environment, so recycling and reusing waste express cartons is urgently needed to protect the ecological environment. However, because factory assembly lines demand efficiency, sorting express cartons manually is difficult to achieve.
In the field of waste classification, the main developments are as follows:
(1) The work of Ozkan et al. in 2015 opened the door for CV (Computer Vision) technology in the field of garbage classification and recognition. Ozkan et al. targeted plastic bottles for classification and recognition and, through image preprocessing such as noise reduction, achieved classification and recognition with relatively high accuracy.
(2) The research of scholars such as SUDHAS and Esakr broadened the application of CV technology in the field of garbage classification and recognition, enabling the classification of garbage materials at a macroscopic level.
(3) Domestic research on garbage classification and recognition based on CV technology is comparatively sparse. Scholars studied the classification of water-surface garbage in 2019 and achieved the classification of construction garbage in the same year; it was not until 2020 that domestic-waste classification and recognition based on CV technology reached an accuracy of about 80%.
As can be seen from the above, garbage classification and recognition based on CV technology have developed considerably, but research on single-variety garbage is lacking; in particular, there is no classification and recognition research for express carton garbage.
Disclosure of Invention
In view of the above, it is necessary to provide an express carton waste classification method, device, equipment, and medium that can accurately classify express waste based on computer vision to assist in recycling express waste.
An express delivery carton waste classification method comprises the following steps:
responding to an express carton waste classification instruction, and acquiring sample data according to the express carton waste classification instruction;
carrying out image enhancement processing on the sample data to obtain an enhanced sample;
carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
extracting the texture features of each contour information in the contour information set, and constructing a texture feature set according to the extracted texture features;
extracting the shape feature of each contour information in the contour information set, and constructing a shape feature set according to the extracted shape feature;
combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all training samples obtained by combination;
training a designated classifier by using the sample set to obtain an express carton waste classification model;
when a picture to be processed is received, inputting the picture to be processed into the express carton waste classification model, and acquiring the output of the express carton waste classification model as the target category of the picture to be processed.
According to a preferred embodiment of the present invention, the performing image enhancement processing on the sample data to obtain an enhanced sample includes:
for each sub-image in the sample data, performing blur processing on the sub-image according to a specified scale to obtain a blurred image;
calculating a logarithm value of the sub-image as a first logarithm value, and calculating a logarithm value of the blurred image as a second logarithm value;
calculating a difference value between the first logarithmic value and the second logarithmic value as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
According to a preferred embodiment of the present invention, the performing image segmentation processing on the enhanced sample to obtain a contour information set includes:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI of each feature map in the feature map set;
inputting at least one candidate ROI of each feature map into a regional suggestion network for filtering to obtain a target ROI of each feature map;
performing alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the contour information set by using the obtained contour information.
According to a preferred embodiment of the present invention, the extracting the texture feature of each contour information in the contour information set includes:
acquiring the number of pixel points in each contour information;
determining the gray value of each pixel point in each contour information, and determining the probability of each pixel point taking the corresponding gray value;
calculating the average value corresponding to each contour information according to the number of pixel points in each contour information, the gray values of the pixel points, and the probability of each pixel point taking the corresponding gray value;
calculating the contrast corresponding to each contour information according to the gray values of the pixel points in each contour information and the probability of each pixel point taking the corresponding gray value;
calculating the entropy corresponding to each contour information according to the probability of each pixel point taking the corresponding gray value;
and combining the average value, the contrast, and the entropy corresponding to each contour information to obtain the texture feature of each contour information.
According to a preferred embodiment of the present invention, the extracting the shape feature of each contour information in the set of contour information includes:
calculating the perimeter and the area of each contour information in the contour information set;
and determining the perimeter and the area of each contour information as the shape feature of that contour information.
According to a preferred embodiment of the present invention, the training of the designated classifier by using the sample set to obtain the express carton waste classification model includes:
determining a class of each training sample in the sample set;
performing label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
inputting the first sample set into a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model using the second set of samples;
and when the accuracy of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
According to a preferred embodiment of the present invention, after the output of the express delivery carton waste classification model is obtained as the target category of the to-be-processed picture, the method further includes:
establishing an express garbage disposal table according to the category and the corresponding disposal measure;
matching in the express refuse handling table by using the target category, and determining the handling measures corresponding to the matched categories as target handling measures;
generating prompt information according to the target processing measure;
and sending the prompt information to appointed terminal equipment.
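The lookup described in the steps above can be illustrated with a minimal sketch; the categories, handling measures, and fallback below are hypothetical placeholders, not values from the invention.

```python
# Hypothetical express refuse handling table: category -> disposal measure
disposal_table = {
    "large-size express carton": "flatten and route to recycling line A",
    "small-size express carton": "flatten and route to recycling line B",
    "other garbage": "route to general waste disposal",
}

def handling_prompt(target_category):
    # match the target category in the table; fall back to manual inspection
    measure = disposal_table.get(target_category, "manual inspection")
    return f"category: {target_category}; measure: {measure}"
```

The resulting prompt string would then be sent to the designated terminal equipment.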
An express carton waste classification device comprises:
the acquisition unit is used for responding to an express carton waste classification instruction and acquiring sample data according to the express carton waste classification instruction;
the enhancement unit is used for carrying out image enhancement processing on the sample data to obtain an enhanced sample;
the segmentation unit is used for carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
the construction unit is used for extracting the texture features of each contour information in the contour information set and constructing a texture feature set according to the extracted texture features;
the constructing unit is further configured to extract a shape feature of each contour information in the contour information set, and construct a shape feature set according to the extracted shape feature;
the construction unit is further configured to combine each texture feature in the texture feature set and a corresponding shape feature in the shape feature set into one training sample, and construct a sample set according to all training samples obtained through combination;
the training unit is used for training a designated classifier by using the sample set to obtain an express carton waste classification model;
and the classification unit is used for inputting the picture to be processed into the express carton waste classification model when the picture to be processed is received, and acquiring the output of the express carton waste classification model as the target category of the picture to be processed.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor that executes the instructions stored in the memory to implement the express carton waste classification method.
A computer-readable storage medium having at least one instruction stored therein, the at least one instruction being executable by a processor in a computer device to implement the express carton waste sorting method.
According to the technical scheme, the method performs image enhancement processing on the sample data to obtain an enhanced sample, and performs image segmentation processing on the enhanced sample to obtain a contour information set. The texture feature of each contour information in the contour information set is extracted and a texture feature set is constructed from the extracted texture features; the shape feature of each contour information is further extracted and a shape feature set is constructed from the extracted shape features. Each texture feature in the texture feature set and the corresponding shape feature in the shape feature set are combined into one training sample, and a sample set is constructed from all the training samples obtained by combination. A designated classifier is trained with the sample set to obtain an express carton waste classification model, so that express waste can subsequently be classified accurately on the basis of computer vision. When a picture to be processed is received, it is input into the express carton waste classification model, and the output of the model is acquired as the target category of the picture; classifying waste through the express carton waste classification model thus assists in recycling express waste.
Drawings
Fig. 1 is a flowchart of a method for sorting express carton waste according to a preferred embodiment of the present invention.
Fig. 2 is a functional block diagram of the express carton waste sorting device according to the preferred embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a computer device according to a preferred embodiment of the method for sorting express carton waste according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart illustrating a method for sorting express carton waste according to a preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The express carton waste classification method is applied to one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be any electronic product capable of human-computer interaction with a user, for example, a personal computer, a tablet computer, a smartphone, a Personal Digital Assistant (PDA), a game machine, an Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The Network in which the computer device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
And S10, responding to the express carton waste classification instruction, and acquiring sample data according to the express carton waste classification instruction.
In this embodiment, the express carton waste classification instruction may be triggered by a worker responsible for express carton waste classification, or may be triggered by a developer, which is not limited in the present invention.
In at least one embodiment of the present invention, the obtaining sample data according to the express carton waste classification instruction includes:
analyzing the express carton waste classification instruction to obtain information carried by the express carton waste classification instruction;
acquiring a preset label corresponding to an address;
constructing a regular expression according to the preset label;
traversing information carried by the express carton waste classification instruction by using the regular expression, and determining the traversed information as a target address;
and connecting to the target address, and acquiring the data stored in the target address to construct the sample data.
For example: when the preset label is ADD, the constructed regular expression is ADD(). The regular expression ADD() is used to traverse the information carried by the express carton waste classification instruction to obtain a target address. A large amount of data is stored at the target address, including pictures of photographed express waste and the like. The data stored at the target address is then integrated to obtain the sample data.
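The example above can be sketched as follows; the ADD label and the payload format are assumptions for illustration, not formats defined by the invention.

```python
import re

# Regular expression built from the hypothetical preset label "ADD";
# it captures an address wrapped as ADD(<address>).
pattern = re.compile(r"ADD\((.*?)\)")

def extract_target_address(instruction_payload):
    # traverse the information carried by the classification instruction
    match = pattern.search(instruction_payload)
    return match.group(1) if match else None
```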
And S11, performing image enhancement processing on the sample data to obtain an enhanced sample.
It can be understood that, due to the shooting angle, lighting factors, or camera shake during shooting, the sample data may contain a large amount of noise, which affects the accuracy of subsequent classification.
Therefore, in this embodiment, image enhancement processing is first performed on the sample data to obtain the enhanced sample, which specifically includes:
for each sub-image in the sample data, performing blur processing on the sub-image according to a specified scale to obtain a blurred image;
calculating a logarithm value of the sub-image as a first logarithm value, and calculating a logarithm value of the blurred image as a second logarithm value;
calculating a difference value between the first logarithmic value and the second logarithmic value as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
The specified scale can be custom-configured, and the specified scale is the blur radius.
In this embodiment, when blurring the sub-image according to the specified scale to obtain the blurred image, a Gaussian blur algorithm may be used, or a mean blur algorithm may be used instead; the present invention is not limited in this respect.
Specifically, the third logarithmic value is calculated using the following formula:
Log[R(x,y)]=Log[I(x,y)]-Log[L(x,y)]
wherein R(x, y) represents the reflection component of the target object at pixel (x, y), which carries the image detail information; I(x, y) represents the original input at pixel (x, y); L(x, y) represents the illumination component of the ambient light; Log[R(x, y)] represents the third logarithmic value; Log[I(x, y)] represents the first logarithmic value; and Log[L(x, y)] represents the second logarithmic value.
Further, the third logarithmic value is converted into a pixel value ranging from 0 to 255, and a result after image enhancement, that is, the enhanced image, can be obtained.
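The enhancement steps above follow a single-scale Retinex scheme: blur the sub-image, take logarithms, subtract, and convert the result back to pixel values. A minimal numpy sketch, using the mean blur that this embodiment permits in place of Gaussian blur; the min-max stretch used for the final 0–255 conversion is an assumption about the unspecified conversion step.

```python
import numpy as np

def mean_blur(img, radius):
    # mean blur as the blurring step (the embodiment allows mean blur in
    # place of Gaussian blur); radius is the "specified scale"
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def single_scale_retinex(img, radius=3):
    img = img.astype(float) + 1.0                  # avoid log(0)
    blurred = mean_blur(img, radius)
    log_r = np.log(img) - np.log(blurred)          # Log[R] = Log[I] - Log[L]
    # convert the third logarithmic value back to the 0..255 pixel range
    span = log_r.max() - log_r.min()
    if span == 0:
        return np.zeros(img.shape, dtype=np.uint8)
    return np.round((log_r - log_r.min()) / span * 255).astype(np.uint8)
```

Applying this to each sub-image and recombining the results yields the enhanced sample.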
In this embodiment, image enhancement effectively suppresses the noise in express waste images and mitigates the effect of image distortion on classification accuracy.
And S12, carrying out image segmentation processing on the enhanced sample to obtain a contour information set.
In at least one embodiment of the present invention, the performing image segmentation processing on the enhanced sample to obtain a set of contour information includes:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI (Region of Interest) of each feature map in the feature map set;
inputting at least one candidate ROI of each feature map into a regional suggestion network for filtering to obtain a target ROI of each feature map;
performing alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the contour information set by using the obtained contour information.
Wherein the pre-trained neural network model may include ResNeXt, etc.
For example: an enhanced sample A is input into a ResNeXt network for feature extraction to obtain a feature map, and a plurality of candidate ROIs of the feature map are determined according to preset regions. The candidate ROIs are then input into an RPN (Region Proposal Network), in which binary classification (foreground or background) and bounding-box (BB) regression are performed to filter out part of the candidate ROIs and obtain the target ROI of the feature map. An ROIAlign (ROI alignment) operation is then performed on the target ROI, the aligned ROI is input into an FCN (Fully Convolutional Network), and classification and regression are performed in the FCN to obtain the contour information of the feature map.
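The pipeline above (backbone → candidate ROIs → RPN filtering → mask head) can be illustrated structurally with the sketch below. Every component is an untrained placeholder standing in for the real networks (ResNeXt, RPN, FCN), and the window size and score threshold are assumptions for illustration only.

```python
import numpy as np

def backbone(image):
    # placeholder feature extractor: 2x2 mean pooling instead of ResNeXt
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    return image[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def candidate_rois(feature_map, size=4):
    # slide a fixed window over the feature map as candidate ROIs (y, x, h, w)
    h, w = feature_map.shape
    return [(y, x, size, size)
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def rpn_filter(feature_map, rois, score_thresh=0.5):
    # placeholder "objectness": keep ROIs whose mean activation is high
    def score(r):
        y, x, h, w = r
        return feature_map[y:y + h, x:x + w].mean()
    return [r for r in rois if score(r) > score_thresh]

def mask_head(feature_map, roi):
    # placeholder FCN head: binary mask by thresholding at the ROI mean
    y, x, h, w = roi
    patch = feature_map[y:y + h, x:x + w]
    return patch > patch.mean()
```

The contour information of a feature map would then be built from the masks of its surviving ROIs.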
In this embodiment, segmenting the image effectively avoids the influence of light reflection on classification.
And S13, extracting the texture feature of each contour information in the contour information set, and constructing a texture feature set according to the extracted texture feature.
In at least one embodiment of the present invention, the extracting the texture feature of each contour information in the contour information set includes:
acquiring the number of pixel points in each contour information;
determining the gray value of each pixel point in each contour information, and determining the probability of each pixel point taking the corresponding gray value;
calculating the average value corresponding to each contour information according to the number of pixel points in each contour information, the gray values of the pixel points, and the probability of each pixel point taking the corresponding gray value;
calculating the contrast corresponding to each contour information according to the gray values of the pixel points in each contour information and the probability of each pixel point taking the corresponding gray value;
calculating the entropy corresponding to each contour information according to the probability of each pixel point taking the corresponding gray value;
and combining the average value, the contrast, and the entropy corresponding to each contour information to obtain the texture feature of each contour information.
Specifically, the average value corresponding to each contour information is calculated using the following formula:
Y_mean = Σ_i i · p(i)
wherein Y_mean represents the average value corresponding to the contour information, m represents the number of pixel points in the contour information, i represents the gray value of a pixel point, and p(i) represents the probability that a pixel point takes the gray value i (the number of pixel points with gray value i divided by m).
Further, the contrast corresponding to each contour information is calculated using the following formula:
Y_con = Σ_i (i − Y_mean)² · p(i)
wherein Y_con represents the contrast corresponding to the contour information.
Further, the entropy corresponding to each contour information is calculated using the following formula:
Y_entropy = −Σ_i p(i) · log₂ p(i)
wherein Y_entropy represents the entropy corresponding to the contour information.
In the above embodiment, gray-scale features are introduced when extracting texture features, so that the texture features of the express carton can be extracted clearly, improving the accuracy of subsequent express carton waste classification.
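The three gray-scale statistics above can be computed from the histogram of a contour's pixel gray values. The sketch below assumes the formulas take the standard histogram mean/variance/entropy form and that the contour's pixels are supplied as a flat array of gray values; both are illustrative assumptions.

```python
import numpy as np

def texture_features(gray_values):
    # gray_values: 1-D array of the gray values (0..255) of the pixel
    # points inside one contour (the input format is an assumption)
    m = gray_values.size                                   # number of pixel points
    values, counts = np.unique(gray_values, return_counts=True)
    p = counts / m                                         # p(i): probability of gray value i
    mean = float((values * p).sum())                       # Y_mean
    contrast = float((((values - mean) ** 2) * p).sum())   # Y_con (variance form)
    entropy = float(-(p * np.log2(p)).sum())               # Y_entropy
    return mean, contrast, entropy
```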
And S14, extracting the shape feature of each contour information in the contour information set, and constructing a shape feature set according to the extracted shape feature.
In at least one embodiment of the present invention, the extracting the shape feature of each contour information in the contour information set includes:
calculating the perimeter and the area of each contour information in the contour information set;
and determining the perimeter and the area of each contour information as the shape feature of that contour information.
For example: when the identified contour is a quadrangle, the corresponding shape features can be calculated according to the formulas for the perimeter and area of a quadrangle, which are not repeated here.
In the above embodiment, shape features are introduced, providing a richer data basis for subsequent express carton waste classification.
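For a contour represented as a closed polygon, the perimeter and area can be computed directly. The vertex-list input format below is an assumption about how contour information is stored; the area uses the shoelace formula.

```python
import math

def shape_features(contour):
    # contour: list of (x, y) vertices of a closed polygon (assumed format)
    n = len(contour)
    # perimeter: sum of edge lengths around the closed polygon
    perimeter = sum(math.dist(contour[i], contour[(i + 1) % n]) for i in range(n))
    # area via the shoelace formula
    area = abs(sum(contour[i][0] * contour[(i + 1) % n][1]
                   - contour[(i + 1) % n][0] * contour[i][1]
                   for i in range(n))) / 2
    return perimeter, area
```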
And S15, combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all the training samples obtained by combination.
Through the above embodiment, texture features and shape features are combined when constructing the sample set, so that classification is more accurate and the loss of model accuracy caused by training on a single feature is avoided.
And S16, training a designated classifier by using the sample set to obtain an express carton waste classification model.
In at least one embodiment of the present invention, the training of the designated classifier by using the sample set to obtain the express carton waste classification model includes:
determining a class of each training sample in the sample set;
performing label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
inputting the first sample set into a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model using the second set of samples;
and when the accuracy of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
Wherein the specified classifier may include a Support Vector Machine (SVM) classifier.
It will be appreciated that in the training phase, the class of each training sample is known, and therefore, in order to facilitate subsequent training, the training samples need to be labelled.
For example: the training samples can be labelled 'express carton' or 'other garbage'. Samples belonging to 'express carton' can be further refined with the labels 'large-size express carton', 'medium-size express carton' and 'small-size express carton'.
The preset ratio can be custom-configured, for example 80%; the preset threshold can likewise be custom-configured, for example 95%.
After the label sample set is divided according to the preset proportion, the obtained first sample set can be used as a training set, and the second sample set can be used as a verification set.
Splitting the sample set for training effectively guarantees the accuracy of the model and improves its classification effect; in addition, performing classification and identification with a support vector machine yields high accuracy.
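As a sketch of the labelling, splitting, training and validation steps above, the following uses scikit-learn's SVC as the support vector machine classifier. The feature layout, the randomly generated toy sample set, the 80% split and the 95% threshold are illustrative assumptions, not values fixed by this disclosure.

```python
# Sketch of: label samples -> split by preset ratio -> train SVM -> validate.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy sample set: each row stands for [mean, contrast, entropy, perimeter, area].
features = rng.normal(size=(200, 5))
# Label processing: 1 = "express carton", 0 = "other garbage" (synthetic rule).
labels = (features[:, 4] > 0).astype(int)

# Divide the label sample set by a preset ratio (80% train / 20% validation).
X_train, X_val, y_train, y_val = train_test_split(
    features, labels, train_size=0.8, random_state=0)

intermediate_model = SVC(kernel="rbf").fit(X_train, y_train)  # train until convergence
accuracy = intermediate_model.score(X_val, y_val)             # validate on second set

PRESET_THRESHOLD = 0.95
if accuracy >= PRESET_THRESHOLD:
    carton_waste_model = intermediate_model  # accept as the classification model
```

In practice the feature vectors would be the texture and shape features extracted in the preceding steps rather than random values.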
S17, when a picture to be processed is received, inputting the picture to be processed into the express carton waste classification model, and acquiring the output of the express carton waste classification model as the target category of the picture to be processed.
In this embodiment, the to-be-processed picture may be uploaded by a relevant worker, and the to-be-processed picture may include a photograph and the like.
In at least one embodiment of the present invention, after obtaining the output of the express carton waste classification model as the target category of the to-be-processed picture, the method further includes:
establishing an express garbage disposal table according to the category and the corresponding disposal measure;
matching in the express refuse handling table by using the target category, and determining the handling measures corresponding to the matched categories as target handling measures;
generating prompt information according to the target processing measure;
and sending the prompt information to appointed terminal equipment.
The express garbage disposal table stores the correspondence between categories and disposal measures. For example, the disposal measure corresponding to 'large-size express carton' is: deliver to the large-size garbage can for recovery.
Wherein, the appointed terminal device can comprise the terminal device of the staff responsible for garbage collection.
The prompt information comprises the garbage category and corresponding treatment measures, and is used for prompting the treatment mode of garbage collection of related workers.
According to the method and the device, the classes are predicted and corresponding processing measures are matched, so that workers are assisted to carry out accurate garbage recycling.
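A minimal sketch of the disposal-table matching and prompt generation described above; the categories, measures and message format are hypothetical examples rather than the actual express garbage disposal table.

```python
# Hypothetical express garbage disposal table: category -> handling measure.
disposal_table = {
    "large-size express carton": "deliver to the large-size garbage can for recovery",
    "medium-size express carton": "deliver to the medium-size garbage can for recovery",
    "small-size express carton": "deliver to the small-size garbage can for recovery",
    "other garbage": "deliver to the general waste bin",
}

def prompt_for(target_category: str) -> str:
    """Match the predicted category and build the prompt message."""
    measure = disposal_table[target_category]  # target handling measure
    return f"Category: {target_category}; measure: {measure}"

message = prompt_for("large-size express carton")  # sent to the designated terminal
```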
It should be noted that, in order to improve data security and avoid malicious tampering, the express carton waste classification model may be deployed on a blockchain node.
According to the technical solution above, the express carton waste classification method can respond to an express carton waste classification instruction and acquire sample data accordingly. Image enhancement is performed on the sample data to obtain enhanced samples, which effectively suppresses the noise in express waste images and resolves the impact of image distortion on classification accuracy. Image segmentation is then performed on the enhanced samples to obtain a contour information set, effectively avoiding the influence of reflections on classification. The texture feature of each piece of contour information is extracted and a texture feature set is constructed; by introducing gray-level features during texture extraction, the texture of express cartons can be clearly captured, improving the accuracy of subsequent classification. The shape feature of each piece of contour information is likewise extracted and a shape feature set is constructed, providing a further data basis for subsequent classification. Each texture feature and its corresponding shape feature are combined into one training sample, and a sample set is built from all the combined samples; combining both feature types enables more accurate classification and avoids the loss of accuracy caused by training on a single feature. A designated classifier is trained with the sample set to obtain an express carton waste classification model; splitting the sample set for training effectively guarantees the model's accuracy and improves its classification effect, and performing classification and identification with a support vector machine yields high accuracy. When a picture to be processed is received, it is input into the express carton waste classification model and the model's output is taken as the picture's target category, so that express waste can be accurately classified based on computer vision, assisting its recycling.
Fig. 2 is a functional block diagram of an express carton waste sorting device according to a preferred embodiment of the present invention. The express delivery carton waste classification device 11 comprises an acquisition unit 110, an enhancement unit 111, a segmentation unit 112, a construction unit 113, a training unit 114 and a classification unit 115. The module/unit referred to in the present invention refers to a series of computer program segments that can be executed by the processor 13 and that can perform a fixed function, and that are stored in the memory 12. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
In response to the express carton waste classification instruction, the obtaining unit 110 obtains sample data according to the express carton waste classification instruction.
In this embodiment, the express carton waste classification instruction may be triggered by a worker responsible for express carton waste classification, or may be triggered by a developer, which is not limited in the present invention.
In at least one embodiment of the present invention, the obtaining, by the obtaining unit 110, sample data according to the express carton waste classification instruction includes:
analyzing the express carton waste classification instruction to obtain information carried by the express carton waste classification instruction;
acquiring a preset label corresponding to the address;
constructing a regular expression according to the preset label;
traversing information carried by the express carton waste classification instruction by using the regular expression, and determining the traversed information as a target address;
and connecting to the target address, and acquiring the data stored in the target address to construct the sample data.
For example: when the preset label is ADD, the constructed regular expression is ADD(); traversing the information carried by the express carton waste classification instruction with the regular expression ADD() yields the target address. A large amount of data is stored at the target address, including pictures of photographed express waste and the like. The data stored at the target address is then integrated to obtain the sample data.
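The address extraction above can be sketched as follows; only the preset label ADD comes from the example, while the capturing-group pattern, the payload string and the extracted path are invented for illustration.

```python
# Build a regular expression from the preset label and traverse the instruction
# payload to find the target address (hypothetical payload format).
import re

preset_label = "ADD"
pattern = re.compile(preset_label + r"\((.*?)\)")  # matches e.g. ADD(...)

instruction_payload = "task=classify;ADD(/data/express_waste/images);mode=batch"
match = pattern.search(instruction_payload)
target_address = match.group(1) if match else None
```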
The enhancement unit 111 performs image enhancement processing on the sample data to obtain an enhanced sample.
It can be understood that, due to the shooting angle, lighting conditions, or camera shake during shooting, the sample data may contain a large amount of noise, which affects the accuracy of subsequent classification.
Therefore, in this embodiment, first, performing image enhancement processing on the sample data to obtain the enhanced sample, specifically including:
for each sub-image in the sample data, blurring the sub-image at a specified scale to obtain a blurred image;
calculating a logarithm value of the sub-image as a first logarithm value, and calculating a logarithm value of the blurred image as a second logarithm value;
calculating a difference value between the first logarithmic value and the second logarithmic value as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
The specified scale can be custom-configured; the specified scale is the blur radius.
In this embodiment, when the blurred image is obtained by blurring the sub-image according to the specified scale, a gaussian blur algorithm may be used, or a mean value blur algorithm may be used instead of the gaussian blur algorithm, which is not limited in the present invention.
Specifically, the third logarithmic value is calculated using the following formula:
Log[R(x,y)]=Log[I(x,y)]-Log[L(x,y)]
where R(x, y) represents the reflection component of the target object at pixel (x, y), which carries the image detail information; I(x, y) represents the original input at pixel (x, y); L(x, y) represents the illumination component of the ambient light; Log[R(x, y)] represents the third logarithmic value; Log[I(x, y)] represents the first logarithmic value; and Log[L(x, y)] represents the second logarithmic value.
Further, the third logarithmic value is converted into a pixel value ranging from 0 to 255, and a result after image enhancement, that is, the enhanced image, can be obtained.
In this embodiment, image enhancement effectively suppresses the noise in express waste images and resolves the impact of image distortion on the classification accuracy.
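A minimal, dependency-light sketch of the enhancement step, following the formula Log[R(x,y)] = Log[I(x,y)] - Log[L(x,y)] above: the blurred image stands in for the illumination component L, a mean blur replaces the Gaussian blur (which the text explicitly allows), and the linear rescaling to the 0-255 range is one common choice.

```python
import numpy as np

def enhance(sub_image: np.ndarray, radius: int = 2) -> np.ndarray:
    """Single-scale Retinex-style enhancement of one grayscale sub-image."""
    img = sub_image.astype(np.float64) + 1.0         # shift to avoid log(0)
    # Blur at the specified scale (mean blur with edge padding).
    kernel = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img)
    h, w = img.shape
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel

    log_r = np.log(img) - np.log(blurred)            # third logarithmic value
    lo, hi = log_r.min(), log_r.max()
    scaled = (log_r - lo) / (hi - lo + 1e-12) * 255  # convert back to pixel values
    return scaled.astype(np.uint8)

enhanced = enhance(np.arange(64, dtype=np.uint8).reshape(8, 8))
```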
The segmentation unit 112 performs image segmentation processing on the enhanced sample to obtain a contour information set.
In at least one embodiment of the present invention, the segmenting unit 112 performs image segmentation processing on the enhanced sample, and obtaining the set of contour information includes:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI (Region Of Interest) Of each feature map in the feature map set;
inputting at least one candidate ROI of each feature map into a region proposal network for filtering to obtain a target ROI of each feature map;
performing alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a fully convolutional network, and acquiring the output of the fully convolutional network as the contour information of each feature map;
and constructing the contour information set by using the obtained contour information.
Wherein the pre-trained neural network model may include ResNeXt, etc.
For example: the enhanced sample A is input into a ResNeXt network for feature extraction to obtain a feature map, and a plurality of candidate ROIs of the feature map are determined according to a preset region. The candidate ROIs are then input into an RPN (Region Proposal Network), where binary classification (foreground or background) and BB regression (Bounding-box regression) are performed to filter out part of the candidate ROIs, yielding the target ROI of the feature map. An ROIAlign (ROI alignment) operation is further performed on the target ROI, the aligned ROI is input into an FCN (Fully Convolutional Network), and classification and regression in the FCN produce the contour information of the feature map.
In the embodiment, the image is segmented, so that the influence of light reflection on classification is effectively avoided.
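The segmentation pipeline above (backbone, region proposal network, ROIAlign, fully convolutional network) requires a trained neural network and cannot be reproduced in a few lines. As a dependency-free stand-in, the following sketch illustrates only the form of the output, a per-region mask and bounding box, using simple thresholding and flood fill; the threshold and test image are illustrative.

```python
import numpy as np

def segment(image: np.ndarray, threshold: int = 128):
    """Return (mask, bounding_box) for each bright connected region."""
    binary = image > threshold
    visited = np.zeros_like(binary, dtype=bool)
    regions = []
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                stack, pixels = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:  # 4-connected flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                mask = np.zeros_like(binary)
                mask[ys, xs] = True
                regions.append((mask, (min(ys), min(xs), max(ys), max(xs))))
    return regions

img = np.zeros((6, 6), dtype=np.uint8)
img[1:3, 1:3] = 200  # one bright square region
regions = segment(img)
```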
The construction unit 113 extracts a texture feature of each contour information in the contour information set, and constructs a texture feature set according to the extracted texture feature.
In at least one embodiment of the present invention, the extracting, by the constructing unit 113, the texture feature of each contour information in the contour information set includes:
acquiring the number of pixel points in each piece of contour information;
determining the gray value of each pixel point in each piece of contour information, and determining the probability that each pixel point takes the corresponding gray value;
calculating the average value corresponding to each piece of contour information according to the number of pixel points in it, the gray values of its pixel points, and the probability that each pixel point takes the corresponding gray value;
calculating the contrast corresponding to each piece of contour information according to the gray values of its pixel points and the probability that each pixel point takes the corresponding gray value;
calculating the entropy corresponding to each piece of contour information according to the probability that each pixel point takes the corresponding gray value;
and combining the average value, the contrast and the entropy corresponding to each piece of contour information to obtain the texture feature of that contour information.
Specifically, the average value corresponding to each piece of contour information is calculated using the following formula:
Y_mean = Σ_i i·p(i), summed over the gray values i = 0, 1, …, 255
where Y_mean represents the average value corresponding to the contour information, m represents the number of pixel points in the contour information, i represents the gray value of a pixel point, and p(i) represents the probability that a pixel point takes gray value i (the count of pixels with gray value i divided by m).
Further, the contrast corresponding to each piece of contour information is calculated using the following formula:
Y_con = Σ_i (i − Y_mean)²·p(i)
where Y_con represents the contrast corresponding to the contour information.
Further, the entropy corresponding to each piece of contour information is calculated using the following formula:
Y_entropy = −Σ_i p(i)·log p(i)
where Y_entropy represents the entropy corresponding to the contour information.
In the above embodiment, gray-level features are introduced when extracting texture features, so the texture of express cartons can be clearly extracted, improving the accuracy of subsequent express carton waste classification.
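The three first-order gray-level statistics above can be sketched as follows. Reading the contrast as the variance about the mean and using a base-2 logarithm for the entropy are assumptions, since the original formulas are not fully recoverable from the disclosure.

```python
import numpy as np

def texture_features(region: np.ndarray):
    """Mean, contrast and entropy of the gray values in one contour region."""
    gray = region.ravel()
    m = gray.size                                   # number of pixel points
    values, counts = np.unique(gray, return_counts=True)
    p = counts / m                                  # probability p(i) of gray value i
    mean = float((values * p).sum())                # Y_mean
    contrast = float(((values - mean) ** 2 * p).sum())  # Y_con (variance reading)
    entropy = float(-(p * np.log2(p)).sum())        # Y_entropy (base-2 assumption)
    return mean, contrast, entropy

mean, contrast, entropy = texture_features(
    np.array([[0, 0], [255, 255]], dtype=np.uint8))
```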
The construction unit 113 extracts the shape feature of each contour information in the contour information set, and constructs a shape feature set from the extracted shape features.
In at least one embodiment of the present invention, the extracting, by the constructing unit 113, the shape feature of each contour information in the set of contour information includes:
calculating the perimeter and the area of each contour information in the contour information set;
the perimeter and the area of each piece of contour information are determined as the shape feature of that contour information.
For example: when the identified contour information is a quadrangle, the corresponding shape features can be calculated with the perimeter and area formulas for a quadrangle, which are not repeated here.
In the above embodiment, shape features are introduced, providing a further data basis for subsequent express carton waste classification.
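A sketch of the perimeter and area computation for a contour given as an ordered list of (x, y) vertices, using edge lengths and the shoelace formula; on real OpenCV contours, cv2.arcLength and cv2.contourArea serve the same purpose. The rectangle below is an illustrative quadrangle.

```python
import math

def shape_features(contour):
    """Perimeter and area of a simple polygon given by ordered vertices."""
    n = len(contour)
    perimeter = sum(
        math.dist(contour[k], contour[(k + 1) % n]) for k in range(n))
    # Shoelace formula for the enclosed area.
    area = abs(sum(
        contour[k][0] * contour[(k + 1) % n][1]
        - contour[(k + 1) % n][0] * contour[k][1]
        for k in range(n))) / 2.0
    return perimeter, area

# Illustrative quadrangle: a 4 x 3 rectangle.
perimeter, area = shape_features([(0, 0), (4, 0), (4, 3), (0, 3)])
```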
The constructing unit 113 combines each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into one training sample, and constructs a sample set according to all the training samples obtained by the combination.
Through this embodiment, texture features and shape features are combined when constructing the sample set, enabling more accurate classification and avoiding the loss of model accuracy that training on a single feature would cause.
The training unit 114 trains the designated classifier by using the sample set to obtain an express carton waste classification model.
In at least one embodiment of the present invention, the training unit 114 trains a specific classifier by using the sample set, and obtaining the express carton waste classification model includes:
determining a class of each training sample in the sample set;
performing label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
inputting the first sample set into a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model using the second set of samples;
and when the accuracy of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
Wherein the specified classifier may include a Support Vector Machine (SVM) classifier.
It will be appreciated that in the training phase, the class of each training sample is known, and therefore, in order to facilitate subsequent training, the training samples need to be labelled.
For example: the training samples can be labelled 'express carton' or 'other garbage'. Samples belonging to 'express carton' can be further refined with the labels 'large-size express carton', 'medium-size express carton' and 'small-size express carton'.
The preset ratio can be custom-configured, for example 80%; the preset threshold can likewise be custom-configured, for example 95%.
After the label sample set is divided according to the preset proportion, the obtained first sample set can be used as a training set, and the second sample set can be used as a verification set.
Splitting the sample set for training effectively guarantees the accuracy of the model and improves its classification effect; in addition, performing classification and identification with a support vector machine yields high accuracy.
When receiving a to-be-processed picture, the classifying unit 115 inputs the to-be-processed picture into the express carton waste classification model, and obtains an output of the express carton waste classification model as a target category of the to-be-processed picture.
In this embodiment, the to-be-processed picture may be uploaded by a relevant worker, and the to-be-processed picture may include a photograph and the like.
In at least one embodiment of the invention, after the output of the express carton waste classification model is obtained as the target category of the picture to be processed, an express waste processing table is established according to the category and the corresponding processing measure;
matching in the express refuse handling table by using the target category, and determining the handling measures corresponding to the matched categories as target handling measures;
generating prompt information according to the target processing measure;
and sending the prompt information to appointed terminal equipment.
The express garbage disposal table stores the correspondence between categories and disposal measures. For example, the disposal measure corresponding to 'large-size express carton' is: deliver to the large-size garbage can for recovery.
Wherein, the appointed terminal device can comprise the terminal device of the staff responsible for garbage collection.
The prompt information comprises the garbage category and corresponding treatment measures, and is used for prompting the treatment mode of garbage collection of related workers.
According to the method and the device, the classes are predicted and corresponding processing measures are matched, so that workers are assisted to carry out accurate garbage recycling.
It should be noted that, in order to improve data security and avoid malicious tampering, the express carton waste classification model may be deployed on a blockchain node.
According to the technical solution above, the express carton waste classification method can respond to an express carton waste classification instruction and acquire sample data accordingly. Image enhancement is performed on the sample data to obtain enhanced samples, which effectively suppresses the noise in express waste images and resolves the impact of image distortion on classification accuracy. Image segmentation is then performed on the enhanced samples to obtain a contour information set, effectively avoiding the influence of reflections on classification. The texture feature of each piece of contour information is extracted and a texture feature set is constructed; by introducing gray-level features during texture extraction, the texture of express cartons can be clearly captured, improving the accuracy of subsequent classification. The shape feature of each piece of contour information is likewise extracted and a shape feature set is constructed, providing a further data basis for subsequent classification. Each texture feature and its corresponding shape feature are combined into one training sample, and a sample set is built from all the combined samples; combining both feature types enables more accurate classification and avoids the loss of accuracy caused by training on a single feature. A designated classifier is trained with the sample set to obtain an express carton waste classification model; splitting the sample set for training effectively guarantees the model's accuracy and improves its classification effect, and performing classification and identification with a support vector machine yields high accuracy. When a picture to be processed is received, it is input into the express carton waste classification model and the model's output is taken as the picture's target category, so that express waste can be accurately classified based on computer vision, assisting its recycling.
Fig. 3 is a schematic structural diagram of a computer device according to a preferred embodiment of the method for sorting express carton waste according to the present invention.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, such as a courier carton waste sorter program, stored in the memory 12 and executable on the processor 13.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation on it: the computer device 1 may have a bus-type or star-shaped structure, may include more or fewer hardware or software components than shown, or a different arrangement of components; for example, it may further include input and output devices, network access devices, and the like.
It should be noted that the computer device 1 is only an example, and other electronic products that are currently available or may come into existence in the future, such as electronic products that can be adapted to the present invention, should also be included in the scope of the present invention, and are included herein by reference.
The memory 12 includes at least one type of readable storage medium, which includes flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, for example a removable hard disk of the computer device 1. The memory 12 may also be an external storage device of the computer device 1 in other embodiments, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only to store application software installed in the computer apparatus 1 and various types of data, such as codes of an express carton garbage sorting program, etc., but also to temporarily store data that has been output or is to be output.
The processor 13 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 13 is a Control Unit (Control Unit) of the computer device 1, connects various components of the entire computer device 1 by using various interfaces and lines, and executes various functions and processes data of the computer device 1 by running or executing programs or modules (for example, executing a delivery carton garbage classification program and the like) stored in the memory 12 and calling data stored in the memory 12.
The processor 13 executes the operating system of the computer device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in each of the above embodiments of the express carton waste sorting method, such as the steps shown in fig. 1.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the computer device 1. For example, the computer program may be segmented into an acquisition unit 110, an enhancement unit 111, a segmentation unit 112, a construction unit 113, a training unit 114, a classification unit 115.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute parts of the express carton waste classification method according to various embodiments of the present invention.
The integrated modules/units of the computer device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), random-access Memory, or the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 3, but this does not mean only one bus or one type of bus. The bus is arranged to enable connection communication between the memory 12 and at least one processor 13 or the like.
Although not shown, the computer device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The computer device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the computer device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the computer device 1 and other computer devices.
Optionally, the computer device 1 may further comprise a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally a standard wired interface and a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the computer device 1 and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Fig. 3 shows only the computer device 1 with the components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the computer device 1 and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
With reference to fig. 1, the memory 12 of the computer device 1 stores a plurality of instructions to implement an express carton waste classification method, and the processor 13 can execute the plurality of instructions to implement:
responding to an express carton waste classification instruction, and acquiring sample data according to the express carton waste classification instruction;
carrying out image enhancement processing on the sample data to obtain an enhanced sample;
carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
extracting the texture features of each contour information in the contour information set, and constructing a texture feature set according to the extracted texture features;
extracting the shape feature of each contour information in the contour information set, and constructing a shape feature set according to the extracted shape feature;
combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all training samples obtained by combination;
training a designated classifier by using the sample set to obtain an express carton waste classification model;
when a picture to be processed is received, inputting the picture to be processed into the express carton waste classification model, and acquiring the output of the express carton waste classification model as the target category of the picture to be processed.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the present invention may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An express carton waste classification method, characterized by comprising:
responding to an express carton waste classification instruction, and acquiring sample data according to the express carton waste classification instruction;
carrying out image enhancement processing on the sample data to obtain an enhanced sample;
carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
extracting the texture features of each contour information in the contour information set, and constructing a texture feature set according to the extracted texture features;
extracting the shape feature of each contour information in the contour information set, and constructing a shape feature set according to the extracted shape feature;
combining each texture feature in the texture feature set and the corresponding shape feature in the shape feature set into a training sample, and constructing a sample set according to all training samples obtained by combination;
training a designated classifier by using the sample set to obtain an express carton waste classification model;
when a picture to be processed is received, inputting the picture to be processed into the express carton waste classification model, and acquiring the output of the express carton waste classification model as the target category of the picture to be processed.
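The claimed pipeline in claim 1 — enhance the sample data, segment it into contours, extract texture and shape features, and pair them into training samples — can be sketched at a high level as follows. This is a minimal illustration only: every step function here is a hypothetical stub standing in for the processing that the later claims detail.

```python
# High-level sketch of the claimed pipeline; each step is a hypothetical stub.
def enhance(sample):       # image enhancement (claim 2 describes a Retinex-style scheme)
    return sample

def segment(enhanced):     # image segmentation -> contour information set (claim 3)
    return [enhanced]

def texture_of(contour):   # texture feature extraction (claim 4)
    return ("texture", contour)

def shape_of(contour):     # shape feature extraction (claim 5)
    return ("shape", contour)

def build_sample_set(sample_data):
    """Combine each texture feature with its corresponding shape feature
    into one training sample, and collect all of them into a sample set."""
    training_samples = []
    for sample in sample_data:
        for contour in segment(enhance(sample)):
            training_samples.append((texture_of(contour), shape_of(contour)))
    return training_samples
```

The resulting sample set is what claim 6 feeds to the designated classifier.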
2. The express carton waste classification method of claim 1, wherein the performing image enhancement processing on the sample data to obtain an enhanced sample comprises:
for each sub-image in the sample data, carrying out fuzzy processing on the sub-image according to a specified scale to obtain a fuzzy image;
calculating a logarithm value of the sub-image as a first logarithm value, and calculating a logarithm value of the blurred image as a second logarithm value;
calculating a difference value between the first logarithmic value and the second logarithmic value as a third logarithmic value;
converting the third logarithmic value into a pixel value to obtain an enhanced image corresponding to the sub-image;
and combining the obtained enhanced images to obtain the enhanced sample.
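The enhancement in claim 2 — blur each sub-image at a specified scale, subtract the logarithm of the blurred image from the logarithm of the sub-image, and convert the difference back to pixel values — matches a single-scale Retinex scheme. A minimal pure-Python sketch, assuming grayscale sub-images stored as nested lists and a naive box blur standing in for the "specified scale" filter:

```python
import math

def box_blur(img, k):
    # Naive box blur with clamped borders; k (odd) plays the role of the "specified scale".
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def single_scale_retinex(img, k=3):
    # log(sub-image) - log(blurred image), then rescale the difference to 0..255 pixels.
    blur = box_blur(img, k)
    h, w = len(img), len(img[0])
    logdiff = [[math.log(img[y][x] + 1.0) - math.log(blur[y][x] + 1.0)
                for x in range(w)] for y in range(h)]
    lo = min(min(row) for row in logdiff)
    hi = max(max(row) for row in logdiff)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[round((v - lo) * scale) for v in row] for row in logdiff]
```

The `+ 1.0` offsets avoid `log(0)`; the final min-max rescale is one plausible way to realize "converting the third logarithmic value into a pixel value".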
3. The express carton waste classification method of claim 1, wherein the performing image segmentation processing on the enhanced sample to obtain a contour information set comprises:
inputting the enhanced sample into a pre-trained neural network model for feature extraction to obtain a feature map set;
acquiring at least one candidate ROI of each feature map in the feature map set;
inputting at least one candidate ROI of each feature map into a regional suggestion network for filtering to obtain a target ROI of each feature map;
performing alignment operation on the target ROI of each feature map;
inputting the target ROI after the alignment operation into a full convolution network, and acquiring the output of the full convolution network as the contour information of each feature map;
and constructing the contour information set by using the obtained contour information.
4. The express carton waste classification method of claim 1, wherein the extracting the texture feature of each contour information in the contour information set comprises:
acquiring the number of pixel points in each contour information;
determining the gray value of each pixel point in each contour information, and determining the probability of each pixel point taking the corresponding gray value;
calculating the average value corresponding to each contour information according to the number of pixel points in each contour information, the gray values of the pixel points in each contour information, and the probability of each pixel point taking the corresponding gray value;
calculating the contrast corresponding to each contour information according to the gray values of the pixel points in each contour information and the probability of each pixel point taking the corresponding gray value;
calculating the entropy corresponding to each contour information according to the probability of each pixel point taking the corresponding gray value;
and combining the average value, the contrast, and the entropy corresponding to each contour information to obtain the texture feature of each contour information.
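The three statistics of claim 4 can be computed directly from the gray-level probability distribution of a contour region. A minimal sketch, assuming the region's pixels are given as a flat list of gray values; note the claim does not fix a formula for "contrast", so gray-level variance is used here as one common definition:

```python
import math
from collections import Counter

def texture_features(pixels):
    """pixels: flat list of gray values inside one contour region.
    Returns (mean, contrast, entropy) as in claim 4."""
    n = len(pixels)                                # number of pixel points
    probs = {g: c / n for g, c in Counter(pixels).items()}  # P(gray value)
    mean = sum(g * p for g, p in probs.items())    # average gray value
    # contrast taken here as the gray-level variance (an assumption)
    contrast = sum((g - mean) ** 2 * p for g, p in probs.items())
    # Shannon entropy of the gray-level distribution
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return mean, contrast, entropy
```

For a region split evenly between gray values 0 and 255, this yields a mean of 127.5 and an entropy of exactly 1 bit.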
5. The express carton waste classification method of claim 1, wherein the extracting the shape feature of each contour information in the contour information set comprises:
calculating the perimeter and the area of each contour information in the contour information set;
and determining the perimeter and the area of each contour information as the shape feature of each contour information.
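If a contour is represented as an ordered list of vertices (as segmentation outputs commonly are), the perimeter and area of claim 5 follow from summed edge lengths and the shoelace formula. A minimal sketch under that representation assumption:

```python
import math

def shape_features(contour):
    """contour: list of (x, y) vertices in order around the region.
    Returns (perimeter, area) as the shape feature of claim 5."""
    n = len(contour)
    # perimeter: sum of the edge lengths, closing the polygon at the end
    perimeter = sum(math.dist(contour[i], contour[(i + 1) % n]) for i in range(n))
    # area: shoelace formula for the enclosed polygon
    area = abs(sum(contour[i][0] * contour[(i + 1) % n][1]
                   - contour[(i + 1) % n][0] * contour[i][1]
                   for i in range(n))) / 2.0
    return perimeter, area
```

A unit square gives perimeter 4 and area 1, as expected.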
6. The express carton waste classification method of claim 1, wherein the training a designated classifier by using the sample set to obtain an express carton waste classification model comprises:
determining a class of each training sample in the sample set;
performing label processing on each training sample according to the category of each training sample to obtain a label sample set;
dividing the label sample set according to a preset proportion to obtain a first sample set and a second sample set;
inputting the first sample set into a support vector machine classifier for training until the support vector machine classifier reaches convergence, and stopping training to obtain an intermediate model;
validating the intermediate model using the second set of samples;
and when the accuracy of the intermediate model is greater than or equal to a preset threshold value, determining the intermediate model as the express carton waste classification model.
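The training flow of claim 6 — split the labeled sample set by a preset proportion, fit a classifier on the first part, validate on the second, and accept the intermediate model only above an accuracy threshold — can be sketched in pure Python. The claim specifies a support vector machine; a nearest-centroid classifier is substituted here purely so the sketch stays dependency-free, and is labeled as such:

```python
def train_express_classifier(samples, labels, ratio=0.8, threshold=0.9):
    """samples: list of feature tuples; labels: parallel list of categories.
    A nearest-centroid classifier stands in for the SVM of claim 6 (an assumption)."""
    split = int(len(samples) * ratio)              # preset proportion
    train_x, train_y = samples[:split], labels[:split]
    val_x, val_y = samples[split:], labels[split:]
    # "training": compute one centroid per class
    groups = {}
    for x, y in zip(train_x, train_y):
        groups.setdefault(y, []).append(x)
    centroids = {y: tuple(sum(col) / len(xs) for col in zip(*xs))
                 for y, xs in groups.items()}
    def predict(x):
        return min(centroids, key=lambda y: sum((a - b) ** 2
                                                for a, b in zip(x, centroids[y])))
    # validation: measure accuracy on the held-out second sample set
    acc = sum(predict(x) == y for x, y in zip(val_x, val_y)) / max(len(val_x), 1)
    # accept the intermediate model only when accuracy meets the preset threshold
    return (predict, acc) if acc >= threshold else (None, acc)
```

With well-separated classes the validation accuracy reaches 1.0 and the intermediate model is returned as the classification model.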
7. The express carton waste classification method of claim 1, wherein after obtaining the output of the express carton waste classification model as the target category of the to-be-processed picture, the method further comprises:
establishing an express garbage disposal table according to the category and the corresponding disposal measure;
matching in the express refuse handling table by using the target category, and determining the handling measures corresponding to the matched categories as target handling measures;
generating prompt information according to the target processing measure;
and sending the prompt information to appointed terminal equipment.
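The post-processing of claim 7 amounts to a lookup table from category to disposal measure plus a generated prompt. A minimal sketch; the categories and measures below are hypothetical placeholders, not ones named in the application:

```python
# Hypothetical express garbage disposal table (category -> disposal measure);
# the entries are illustrative only.
DISPOSAL_TABLE = {
    "clean_carton": "flatten it and place it in the recyclable bin",
    "tape_covered_carton": "remove tape and labels, then recycle the board",
    "soiled_carton": "place it in the residual (non-recyclable) waste bin",
}

def make_prompt(target_category):
    """Match the target category in the disposal table and generate prompt
    information for the appointed terminal device."""
    measure = DISPOSAL_TABLE.get(target_category)
    if measure is None:
        return "No matching disposal measure found."
    return f"Detected '{target_category}': please {measure}."
```

The generated string is what would then be sent to the appointed terminal device.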
8. An express carton waste classification apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for responding to an express carton waste classification instruction and acquiring sample data according to the express carton waste classification instruction;
the enhancement unit is used for carrying out image enhancement processing on the sample data to obtain an enhanced sample;
the segmentation unit is used for carrying out image segmentation processing on the enhanced sample to obtain a contour information set;
the construction unit is used for extracting the texture features of each contour information in the contour information set and constructing a texture feature set according to the extracted texture features;
the constructing unit is further configured to extract a shape feature of each contour information in the contour information set, and construct a shape feature set according to the extracted shape feature;
the construction unit is further configured to combine each texture feature in the texture feature set and a corresponding shape feature in the shape feature set into one training sample, and construct a sample set according to all training samples obtained through combination;
the training unit is used for training a designated classifier by using the sample set to obtain an express carton waste classification model;
and the classification unit is used for inputting the picture to be processed into the express carton waste classification model when the picture to be processed is received, and acquiring the output of the express carton waste classification model as the target category of the picture to be processed.
9. A computer device, characterized in that the computer device comprises:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the express carton waste classification method of any of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer-readable storage medium has stored therein at least one instruction that is executable by a processor in a computer device to implement the express carton waste sorting method of any of claims 1-7.
CN202110600085.7A 2021-05-31 2021-05-31 Express carton garbage classification method, device, equipment and medium Pending CN113222063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110600085.7A CN113222063A (en) 2021-05-31 2021-05-31 Express carton garbage classification method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110600085.7A CN113222063A (en) 2021-05-31 2021-05-31 Express carton garbage classification method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113222063A true CN113222063A (en) 2021-08-06

Family

ID=77082014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110600085.7A Pending CN113222063A (en) 2021-05-31 2021-05-31 Express carton garbage classification method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113222063A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897673A (en) * 2017-01-20 2017-06-27 南京邮电大学 A kind of recognition methods again of the pedestrian based on retinex algorithms and convolutional neural networks
CN107092914A (en) * 2017-03-23 2017-08-25 广东数相智能科技有限公司 Refuse classification method, device and system based on image recognition
CN111144322A (en) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897673A (en) * 2017-01-20 2017-06-27 南京邮电大学 A kind of recognition methods again of the pedestrian based on retinex algorithms and convolutional neural networks
CN107092914A (en) * 2017-03-23 2017-08-25 广东数相智能科技有限公司 Refuse classification method, device and system based on image recognition
CN111144322A (en) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu He et al.: "Digital Image Processing and Applications" (数字图像处理及应用), vol. 1, China Electric Power Press (中国电力出版社), 31 January 2006, p. 164 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658125A (en) * 2021-08-11 2021-11-16 全芯智造技术有限公司 Method, device and storage medium for evaluating layout hot spot
CN113658125B (en) * 2021-08-11 2024-02-23 全芯智造技术有限公司 Method, device and storage medium for evaluating layout hot spot
CN114549902A (en) * 2022-02-23 2022-05-27 平安普惠企业管理有限公司 Image classification method and device, computer equipment and storage medium
CN114841384A (en) * 2022-05-06 2022-08-02 扬州市职业大学(扬州开放大学) Express delivery package recycle's management system

Similar Documents

Publication Publication Date Title
Ruiz et al. Automatic image-based waste classification
CN113222063A (en) Express carton garbage classification method, device, equipment and medium
CN112528863A (en) Identification method and device of table structure, electronic equipment and storage medium
CN112395978A (en) Behavior detection method and device and computer readable storage medium
CN112052850A (en) License plate recognition method and device, electronic equipment and storage medium
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN112580684B (en) Target detection method, device and storage medium based on semi-supervised learning
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN111738212B (en) Traffic signal lamp identification method, device, equipment and medium based on artificial intelligence
WO2022141858A1 (en) Pedestrian detection method and apparatus, electronic device, and storage medium
CN112396005A (en) Biological characteristic image recognition method and device, electronic equipment and readable storage medium
CN112137591B (en) Target object position detection method, device, equipment and medium based on video stream
CN113034406A (en) Distorted document recovery method, device, equipment and medium
Rehman et al. An efficient approach for vehicle number plate recognition in Pakistan
CN111985449A (en) Rescue scene image identification method, device, equipment and computer medium
CN113704474A (en) Bank outlet equipment operation guide generation method, device, equipment and storage medium
CN112101191A (en) Expression recognition method, device, equipment and medium based on frame attention network
CN114913518A (en) License plate recognition method, device, equipment and medium based on image processing
CN114267064A (en) Face recognition method and device, electronic equipment and storage medium
CN114996386A (en) Business role identification method, device, equipment and storage medium
CN114385815A (en) News screening method, device, equipment and storage medium based on business requirements
CN112580505A (en) Method and device for identifying opening and closing states of network points, electronic equipment and storage medium
CN112183520A (en) Intelligent data information processing method and device, electronic equipment and storage medium
CN112132037A (en) Sidewalk detection method, device, equipment and medium based on artificial intelligence
CN113487630B (en) Matting method, device, equipment and storage medium based on material analysis technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination