CN113192017A - Package defect identification method, device, equipment and storage medium - Google Patents

Package defect identification method, device, equipment and storage medium

Info

Publication number
CN113192017A
Authority
CN
China
Prior art keywords
package
image
model
parcel
defect
Prior art date
Legal status
Pending
Application number
CN202110433708.6A
Other languages
Chinese (zh)
Inventor
徐梦佳
李斯
杨周龙
Current Assignee
Dongpu Software Co Ltd
Original Assignee
Dongpu Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Dongpu Software Co Ltd filed Critical Dongpu Software Co Ltd
Priority to CN202110433708.6A priority Critical patent/CN113192017A/en
Publication of CN113192017A publication Critical patent/CN113192017A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045 Neural network architectures; combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/048 Activation functions
    • G06N 3/084 Learning methods; backpropagation, e.g. using gradient descent
    • G06V 10/25 Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]

Abstract

The invention relates to the field of image recognition, and discloses a package defect identification method, device, equipment and storage medium. The method comprises the following steps: acquiring the package detection data and corresponding package images whose sorting result is a defective package; performing defect identification on the package images, and labeling the identified package images based on the package detection data to obtain labeled images; constructing a model training sample set from the labeled images, and inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model; and inputting package video frames acquired from the sorting production line into the package defect identification model for identification to obtain the corresponding package defect identification results. The trained package defect identification model identifies damaged packages in the package sorting center, which solves the technical problem that video streams cannot be fully used to identify damaged packages, and improves the sorting efficiency for defective packages.

Description

Package defect identification method, device, equipment and storage medium
Technical Field
The invention relates to the field of monitoring, in particular to a method, a device, equipment and a storage medium for identifying package defects.
Background
A remote video monitoring system can reach any corner of the world through a standard telephone line, a network, mobile broadband, an ISDN data line or a direct connection, and can control the pan/tilt and lens of the camera and store the surveillance video. A remote transmission monitoring system transmits a remote live scene to the viewer's computer screen through an ordinary telephone line, and can dial back to the receiving end to raise an alarm when an alarm is triggered. An existing video monitoring system generally comprises a monitor and a monitoring terminal; the monitor shoots the monitored object and transmits the shot video to the monitoring terminal.
With the rapid development of the logistics industry, logistics has penetrated many aspects of people's daily lives, and with the popularization of electronic commerce, more and more people buy goods online. As the number of damaged packages grows, packages are currently checked mainly by manual inspection at the front end of the assembly line, and damaged packages in the sorting center cannot be identified from the video stream. Therefore, making full use of the video stream to identify damaged packages is a technical problem faced by those skilled in the art.
Disclosure of Invention
The invention mainly aims to make full use of video streams to identify damaged packages, realize automatic sorting of defective packages, and improve the efficiency of identifying and sorting defective packages.
The invention provides a package defect identification method in a first aspect, which comprises the following steps: acquiring historical sorting data of packages; determining, from the historical sorting data, the package detection data and the corresponding package images whose sorting result is a defective package; performing defect identification on the package images, and labeling the identified package images based on the package detection data to obtain labeled images; constructing a model training sample set according to the labeled images, and inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model; and acquiring package video frames on a sorting production line, and inputting the package video frames into the package defect identification model for identification to obtain a package defect identification result of each package in the package video frames.
Optionally, in a first implementation manner of the first aspect of the present invention, the obtaining historical sorting data of the packages includes: acquiring a plurality of parcel sorting images in a historical parcel sorting scene, inputting the parcel sorting images into a preset parcel recognition model for recognition, and outputting the area range of each parcel in the parcel sorting images; extracting a parcel image corresponding to each parcel from the parcel sorting image according to the area range of each parcel in the parcel sorting image; and identifying the package image to obtain package information of each package, and acquiring historical sorting data of the packages according to the package information.
Optionally, in a second implementation manner of the first aspect of the present invention, before the inputting the package sorting image into a preset package identification model for identification and outputting an area range of each package in the package sorting image, the method further includes: acquiring a plurality of first images in a parcel sorting scene, labeling parcels in the first images to obtain a labeled file, and taking the first images as training sample images; inputting the training sample image into the ResNet-101 network, and extracting a first feature map of the training sample image through the ResNet-101 network; inputting the first characteristic diagram into the RPN network, and generating a prediction frame corresponding to the first characteristic diagram through the RPN network; inputting the first feature map and the prediction frame into the ROI Align layer, and fusing the prediction frame and the first feature map through the ROI Align layer to obtain a second feature map containing the prediction frame; inputting the second feature map into the classification network, and generating a prediction result corresponding to the second feature map through the classification network; and adjusting parameters of a preset MASK R-CNN model according to the prediction result and the label file until the MASK R-CNN model converges to obtain a package identification model.
Optionally, in a third implementation manner of the first aspect of the present invention, the performing defect identification on the package image, and labeling the identified package image based on the package detection data to obtain a labeled image includes: identifying defects in the package image to obtain a defect area range of the package image; extracting a corresponding defect package image from the package image based on the defect area range of the package image; performing feature extraction on the defect package image to obtain image features of the defect package image, wherein the image features comprise geometric features, texture features and semantic features; determining image information of the defect package image according to the image characteristics; and labeling the defective package image according to the image information and the package detection data to obtain a labeled image after labeling.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the obtaining a package video frame on a sorting assembly line, and inputting the package video frame into the package defect identification model for identification, and obtaining a package defect identification result of each package in the package video frame includes: acquiring a package video frame on a sorting production line, and inputting the package video frame into the package defect identification model; obtaining the area range of each parcel in the parcel video frame through the parcel defect identification model; according to the region range of each parcel in the parcel video frame, extracting a parcel image corresponding to each parcel from the parcel video frame respectively; and inputting the package image into the package defect identification model, and respectively carrying out defect identification on the package image through the package defect identification model to obtain a package defect identification result of each package in the package video frame.
Optionally, in a fifth implementation manner of the first aspect of the present invention, after the performing defect identification on the parcel images by the parcel defect identification model respectively to obtain a parcel defect identification result of each parcel in the parcel video frame, the method further includes: obtaining a parcel damage value of a corresponding parcel in the parcel video frame according to the parcel defect identification result; judging whether the package damage value is larger than a preset damage threshold value or not; and if so, sending a package defect notification to a preset monitoring center so as to repackage the package.
Optionally, in a sixth implementation manner of the first aspect of the present invention, the inputting the model training sample set into a preset Centernet model for package defect identification training to obtain the package defect identification model includes: inputting the labeled images in the model training sample set into a first convolution network of the Centernet model, and extracting feature maps of the labeled images; calculating a loss value of the Centernet model according to the feature maps of the labeled images; and updating the weight parameters of the Centernet model according to the loss value by using a back propagation algorithm to obtain the package defect identification model.
The second aspect of the present invention provides a package defect identifying apparatus, comprising: the acquisition module is used for acquiring historical sorting data of the packages; the determining module is used for determining that the sorting result in the historical sorting data is the package detection data of the defective package and the corresponding package image; the first labeling module is used for identifying defects of the package image and labeling the identified package image based on the package detection data to obtain a labeled image; the training module is used for constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training, and obtaining a package defect identification model; and the identification module is used for acquiring the package video frames on the sorting production line, inputting the package video frames into the package defect identification model for identification, and obtaining the package defect identification result of each package in the package video frames.
Optionally, in a first implementation manner of the second aspect of the present invention, the obtaining module is specifically configured to: acquiring a plurality of parcel sorting images in a historical parcel sorting scene, inputting the parcel sorting images into a preset parcel recognition model for recognition, and outputting the area range of each parcel in the parcel sorting images; extracting a parcel image corresponding to each parcel from the parcel sorting image according to the area range of each parcel in the parcel sorting image; and identifying the package image to obtain package information of each package, and acquiring historical sorting data of the packages according to the package information.
Optionally, in a second implementation manner of the second aspect of the present invention, the package defect identifying apparatus further includes: the second labeling module is used for acquiring a plurality of first images in a parcel sorting scene, labeling parcels in the first images to obtain labeled files, and taking the first images as training sample images;
the extraction module is used for inputting the training sample image into the ResNet-101 network and extracting a first feature map of the training sample image through the ResNet-101 network; the first generation module is used for inputting the first characteristic diagram into the RPN network and generating a prediction frame corresponding to the first characteristic diagram through the RPN network; the fusion module is used for inputting the first feature map and the prediction frame into the ROI Align layer and fusing the prediction frame and the first feature map through the ROI Align layer to obtain a second feature map containing the prediction frame; the second generation module is used for inputting the second feature map into the classification network and generating a prediction result corresponding to the second feature map through the classification network; and the adjusting module is used for adjusting the parameters of the preset MASK R-CNN model according to the prediction result and the label file until the MASK R-CNN model converges to obtain a package identification model.
Optionally, in a third implementation manner of the second aspect of the present invention, the first labeling module is specifically configured to: identifying defects in the package image to obtain a defect area range of the package image; extracting a corresponding defect package image from the package image based on the defect area range of the package image; performing feature extraction on the defect package image to obtain image features of the defect package image, wherein the image features comprise geometric features, texture features and semantic features; determining image information of the defect package image according to the image characteristics; and labeling the defective package image according to the image information and the package detection data to obtain a labeled image after labeling.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the identification module is specifically configured to: acquiring a package video frame on a sorting production line, and inputting the package video frame into the package defect identification model; obtaining the area range of each parcel in the parcel video frame through the parcel defect identification model; according to the region range of each parcel in the parcel video frame, extracting a parcel image corresponding to each parcel from the parcel video frame respectively; and inputting the package image into the package defect identification model, and respectively carrying out defect identification on the package image through the package defect identification model to obtain a package defect identification result of each package in the package video frame.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the package defect identifying apparatus further includes: the judging module is used for obtaining a package damage value of a corresponding package in the package video frame according to the package defect identification result; judging whether the package damage value is larger than a preset damage threshold value or not; and the sending module is used for sending a package defect notification to a preset monitoring center when the package damage value is larger than a preset damage threshold value so as to repackage the package.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the training module includes: an extracting unit, configured to input the labeled images in the model training sample set into a first convolution network of the Centernet model and extract feature maps of the labeled images; a computing unit, configured to compute a loss value of the Centernet model according to the feature maps of the labeled images; and an updating unit, configured to update the weight parameters of the Centernet model according to the loss value by using a back propagation algorithm to obtain the package defect identification model.
A third aspect of the present invention provides a package defect identifying apparatus comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the package defect identification device to perform the package defect identification method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to execute the above-mentioned package defect identification method.
In the technical scheme provided by the invention, the package detection data and corresponding package images whose sorting result is a defective package are acquired; defect identification is performed on the package images, and the identified package images are labeled based on the package detection data to obtain labeled images; a model training sample set is constructed according to the labeled images and input into a preset Centernet model for package defect identification training to obtain a package defect identification model; and the acquired package video frames on the sorting production line are input into the package defect identification model for identification to obtain the corresponding package defect identification results. The package defect identification model obtained through training identifies damaged packages in the package sorting center, which solves the technical problem that video streams cannot be fully used to identify damaged packages.
Drawings
FIG. 1 is a schematic diagram of a first embodiment of the package defect identification method of the present invention;
FIG. 2 is a schematic diagram of a second embodiment of the package defect identification method of the present invention;
FIG. 3 is a schematic diagram of a third embodiment of the package defect identification method of the present invention;
FIG. 4 is a schematic diagram of a fourth embodiment of the package defect identification method of the present invention;
FIG. 5 is a schematic diagram of a fifth embodiment of the package defect identification method of the present invention;
FIG. 6 is a schematic view of a first embodiment of the package defect identifying apparatus of the present invention;
FIG. 7 is a schematic view of a second embodiment of the package defect identifying apparatus of the present invention;
fig. 8 is a schematic diagram of an embodiment of the package defect identifying apparatus of the present invention.
Detailed Description
The embodiment of the invention provides a method, a device, equipment and a storage medium for identifying package defects, wherein in the technical scheme of the invention, a video stream in a package sorting scene is obtained at first, and package information in each frame of video image in the video stream is marked to obtain a corresponding marked image; generating a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model; acquiring a multi-frame parcel sorting image in a parcel sorting scene, inputting the image into a parcel defect identification model for identification, and obtaining a parcel defect identification result in the corresponding parcel sorting scene. The package defect recognition model obtained through training identifies damaged packages in the package sorting center, and the technical problem that the damaged packages cannot be recognized by fully utilizing video stream is solved.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For convenience of understanding, a specific flow of an embodiment of the present invention is described below, and referring to fig. 1, a first embodiment of a package defect identification method according to an embodiment of the present invention includes:
101. acquiring historical sorting data of packages;
in this embodiment, a video stream in a package sorting scene shot in advance is acquired, and a video image in the video stream is extracted. It is to be understood that the execution subject of the present invention may be a package defect identification apparatus, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described by taking a server as an execution subject.
In this embodiment, historical sorting data of packages is obtained. The historical sorting data refers to the package sorting video of the sorting area of a package sorting scene in a preset time period (working time); the package sorting video of the package sorting scene is shot by a camera or other equipment. For example, all the monitoring videos are accessed through a local area network, so that all cameras can be accessed through a DSS platform. The DSS has a screenshot function, and the package sorting images of the distribution center captured by screenshot are stored in bmp format, giving about 900 (or more) sample images. The area range of each package to be identified is then recognized in the images, and whether the package is damaged is judged.
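As an illustration of this data-collection step, the following is a minimal Python sketch that gathers the bmp screenshots captured through the DSS platform into a sample list for later annotation; the directory name, the OpenCV dependency and the 900-sample check are assumptions made for illustration, not values fixed by this disclosure.

    import glob
    import cv2  # OpenCV, used here only to read the bmp screenshots

    def load_sorting_screenshots(screenshot_dir="dss_screenshots", min_samples=900):
        # Collect every bmp screenshot saved from the DSS platform.
        paths = sorted(glob.glob(f"{screenshot_dir}/*.bmp"))
        if len(paths) < min_samples:
            print(f"warning: only {len(paths)} screenshots, {min_samples}+ recommended")
        # cv2.imread returns each screenshot as a BGR array (or None if unreadable).
        images = [cv2.imread(p) for p in paths]
        return paths, images

    paths, images = load_sorting_screenshots()
    print(f"loaded {len(images)} package-sorting screenshots")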
102. Determining the sorting result in the historical sorting data as the package detection data of the defective package and the corresponding package image;
in this embodiment, the package detection data and corresponding package images whose sorting result is a defective package are determined from the historical sorting data. Historical sorting data of a package sorting scene is first captured by a camera or other device. For example, the packages on the conveyor belt may be rectangular, circular, irregular and so on. The server then reads the stored package detection data and corresponding package images whose sorting result is a defective package, and uses them as training sample images.
103. Identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image;
in this embodiment, defect identification is performed on the package image, and the identified package image is labeled based on the package detection data to obtain a labeled image. The defective package information in the package image is identified and labeled to obtain the labeled image. The video image is input into preset image annotation software for display; Labelme is preferred as the image annotation software. The defective packages in the image are selected manually through an interactive device by drawing a closed polygon (a line whose end joins its start) around each one. The server then delimits the package area of the defective package in the video image according to the position coordinates of the closed polygon, obtaining an image containing the labeled package area range, i.e. the labeling information. Finally, the labeling information is written into a blank file in a preset JSON format, thereby obtaining a data set in JSON format.
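As a rough illustration of how the labeling information could end up in a JSON file, here is a minimal sketch that writes one closed polygon into a labelme-style annotation file; the field layout follows common labelme conventions, and the class name and polygon coordinates are made-up examples rather than values from this disclosure.

    import json

    def save_defect_annotation(image_path, width, height, polygon, out_path):
        # One closed polygon drawn around a defective package, labelme-style.
        annotation = {
            "version": "4.5.6",
            "flags": {},
            "shapes": [{
                "label": "defective_package",   # class name is an assumption
                "points": polygon,              # closed polygon [[x, y], ...]
                "shape_type": "polygon",
                "group_id": None,
                "flags": {},
            }],
            "imagePath": image_path,
            "imageHeight": height,
            "imageWidth": width,
        }
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(annotation, f, ensure_ascii=False, indent=2)

    # Example: a roughly rectangular damaged region on a 1920x1080 screenshot.
    save_defect_annotation("frame_0001.bmp", 1920, 1080,
                           [[412, 300], [655, 298], [660, 472], [410, 475]],
                           "frame_0001.json")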
104. Constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model;
in this embodiment, a model training sample set is constructed according to the labeled images, and the model training sample set is input into a preset Centernet model for package defect identification training to obtain the package defect identification model. The prepared data set is split by script into a training sample set, a validation sample set and a test sample set in the proportions of 60%, 30% and 10%, respectively. The Centernet model requires the pictures to be preprocessed: the RGB format is converted into the BGR format, the picture size is adjusted to 224 x 224 x 3, and the picture is normalized; the normalized picture is then input into the optimized Centernet model for training.
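A minimal sketch of the two preparatory steps just described, namely splitting the labeled data 60/30/10 into training, validation and test sample sets and preprocessing each picture (RGB to BGR, resize to 224 x 224 x 3, normalization); the random seed and the simple division by 255 for normalization are assumptions made for illustration.

    import random
    import cv2
    import numpy as np

    def split_dataset(sample_paths, seed=0):
        # Shuffle, then split 60% / 30% / 10% as described above.
        random.Random(seed).shuffle(sample_paths)
        n = len(sample_paths)
        n_train, n_val = int(0.6 * n), int(0.3 * n)
        return (sample_paths[:n_train],                 # training sample set
                sample_paths[n_train:n_train + n_val],  # validation sample set
                sample_paths[n_train + n_val:])         # test sample set

    def preprocess(image_rgb):
        bgr = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2BGR)   # RGB -> BGR
        resized = cv2.resize(bgr, (224, 224))              # 224 x 224 x 3
        return resized.astype(np.float32) / 255.0          # normalize to [0, 1]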
In this embodiment, the centeret model includes 16 Convolutional layers and 3 fully-connected layers, each Convolutional layer (Convolutional layer) in the Convolutional neural network is composed of a plurality of Convolutional units, and parameters of each Convolutional unit are optimized through a back propagation algorithm. The convolution operation aims to extract different input features, the convolution layer at the first layer can only extract some low-level features such as edges, lines, angles and other levels, and more layers of networks can iteratively extract more complex features from the low-level features. The effect of the convolutional layer is local perception, which is that, rather than identifying the whole picture at once when we see a picture, the convolutional layer firstly locally perceives each feature in the picture, and then performs comprehensive operation on the local part at a higher level, so as to obtain global information.
The fully connected layer, hereinafter referred to as FC, can act as a "firewall" during the migration of model representation capabilities. Specifically, for a model pre-trained on ImageNet, ImageNet can be regarded as the source domain (in the sense of transfer learning). Fine-tuning is the most common transfer learning technique in the deep learning field. For fine-tuning, if the images in the target domain differ greatly from the images in the source domain (for example, compared with ImageNet, the target domain images are not object-centered images but landscape images), the result after fine-tuning a network without FC layers is inferior to that of a network containing FC layers. The FC layers may therefore be viewed as a "firewall" of model representation capabilities; in particular, where the source domain differs significantly from the target domain, the FC layers maintain a large model capacity to ensure the migration of the model's representation capabilities.
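To make the "FC as firewall" idea concrete, the following PyTorch sketch freezes an ImageNet-pretrained backbone and fine-tunes only a newly attached fully connected head on the target domain; the use of ResNet-18, two output classes and these optimizer settings are illustrative assumptions, not part of the disclosed models.

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # Load a model pre-trained on ImageNet (the source domain).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False          # freeze the convolutional backbone
    # Replace the FC head so it maps to the target domain (defect / no defect).
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Only the FC parameters are updated during fine-tuning.
    optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()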
105. And acquiring a package video frame on the sorting production line, and inputting the package video frame into a package defect identification model for identification to obtain a package defect identification result of each package in the package video frame.
In this embodiment, the package video frames on the sorting production line are obtained and input into the package defect identification model for identification, so as to obtain the package defect identification result of each package in the package video frames. The video images of the package sorting scene of the distribution center, captured in real time, are input into the package defect identification model, and the packages in the package image are identified according to the size and shape information of the packages in the image. For example, by performing a series of processing on the picture, the size of the corresponding package in the picture and whether it is damaged are obtained, and it is determined whether the package is defective.
In the embodiment of the invention, the sorting result is the package detection data of the defective package and the corresponding package image; identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image; constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model; and inputting the obtained package video frames on the sorting production line into a package defect identification model for identification to obtain a corresponding package defect identification result. The package defect recognition model obtained through training identifies damaged packages in the package sorting center, and the technical problem that the damaged packages cannot be recognized by fully utilizing video stream is solved.
Referring to fig. 2, a second embodiment of the package defect identification method according to the embodiment of the present invention includes:
201. acquiring a plurality of package sorting images in a historical package sorting scene, inputting the package sorting images into a preset package identification model for identification, and outputting the area range of each package in the package sorting images;
in this embodiment, a plurality of package sorting images in a history package sorting scene are acquired, the package sorting images are input to a preset package identification model for identification, and the area range of each package in the package sorting images is output. Package video frames on a sorting line are first captured by a camera or other device. For example, the packages on the conveyor belt have rectangular, circular, irregular shapes and the like. And then the server reads the stored package video frame, inputs the package video frame as a training sample image into a package recognition model, and outputs the area range of each package in the package sorting image.
202. Extracting a parcel image corresponding to each parcel from the parcel sorting image according to the area range of each parcel in the parcel sorting image;
in this embodiment, the parcel images corresponding to the parcels are extracted from the parcel sorting image according to the area range of each parcel in the parcel sorting image. And cutting the area range of each parcel in the parcel image from the parcel image so as to extract the parcel image corresponding to each parcel.
203. Identifying the package image to obtain package information of each package, and acquiring historical sorting data of the package according to the package information;
in this embodiment, the package images are identified to obtain the package information of each package, and historical sorting data of the packages is obtained according to the package information. After a package image is extracted, it is input into another model, for example a model for obtaining package information, which outputs the corresponding package information. In this embodiment, a corner point information acquisition model that can identify a package is preferred. The upper left corner and the lower right corner of the package are identified through the package corner information acquisition model; the maximum value at the object boundary is extracted, and maxima continue to be extracted toward the interior (along the direction of the dotted line in the figure) and added to the boundary maximum, so as to provide richer associated object semantic information for the corner features and determine the volume information of the package.
204. Determining the sorting result in the historical sorting data as the package detection data of the defective package and the corresponding package image;
in this embodiment, the sorting result in the historical sorting data is determined to be the package detection data of the defective package and the corresponding package image. And determining a parcel image corresponding to the parcel with the defect according to the sorting result in the historical sorting data.
205. Identifying defects in the package image to obtain a defect area range of the package image;
in this embodiment, the defect in the package image is identified to obtain the defect area range of the package image. After a pre-trained parcel recognition model is called, each frame image in a parcel sorting scene video is obtained through modes of real-time snapshot or screenshot and the like, the images comprise parcel sorting scene images, and then the parcel sorting scene images are input into the parcel recognition model. The package recognition model can identify the packages in the field image through a circular, rectangular or other-shaped frame, and the area range of each package in the video image corresponding to the package sorting scene is obtained.
206. Extracting a corresponding defect package image from the package image based on the defect area range of the package image;
in this embodiment, the corresponding defective package image is extracted from the package image based on the defect area range of the package image. According to the area range of each package in the video image corresponding to the package sorting scene, the area range of each package is cut out from the video image so as to extract the package image corresponding to each package, the package image being a part of the video image.
207. Performing feature extraction on the defect package image to obtain image features of the defect package image;
in this embodiment, feature extraction is performed on the defect package image to obtain image features of the defect package image. The characteristics comprise bottom layer geometric characteristics, middle layer textural characteristics and high-layer semantic characteristics of the picture in the convolutional layer, wherein the bottom layer geometric characteristics are the geometric shapes and the geometric sizes of all objects in the picture, the middle layer textural characteristics are used for distinguishing the categories of all objects, such as plants, animals, buildings and the like, and the high-layer semantic characteristics are matting according to the meanings expressed by the objects in the picture, namely distinguishing the same object in the picture. The object types in the pictures can be more accurately expressed and distinguished by extracting the hierarchical features in the pictures, and the pictures are labeled based on the object types in the pictures.
The picture to be labeled is input into a deep convolutional neural network. After the deep convolutional neural network receives the picture to be labeled, it convolves the picture and extracts the hierarchical features of the picture at each convolutional layer and each pooling layer, including features passing through the convolutional and pooling layers at different scales, because in the network structure there is one pooling layer and more than one convolutional layer at the same scale. The convolutional layer performs feature extraction on the input picture, while the pooling layer compresses the input feature map: on the one hand it reduces the feature map and simplifies the computational complexity of the network, and on the other hand it compresses the features and extracts the main features of the image.
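A minimal PyTorch sketch of this convolution-plus-pooling behaviour, in which convolutional layers extract features and a pooling layer at each scale compresses the feature map; the layer sizes are illustrative and are not the networks described in this disclosure.

    import torch
    import torch.nn as nn

    features = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level edges/lines
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                                         # compress: 224 -> 112
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid-level texture
        nn.MaxPool2d(2),                                         # compress: 112 -> 56
    )

    x = torch.randn(1, 3, 224, 224)   # one preprocessed picture to be labeled
    print(features(x).shape)          # torch.Size([1, 32, 56, 56])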
208. Determining image information of the defect package image according to the image characteristics;
in this embodiment, the image information of the defective package image is determined according to the image features. The features comprise the bottom-layer geometric features, middle-layer texture features and high-layer semantic features of the picture in the convolutional layers: the bottom-layer geometric features are the geometric shapes and sizes of the objects in the picture, the middle-layer texture features are used to distinguish the categories of the objects, such as plants, animals and buildings, and the high-layer semantic features segment the picture according to the meanings expressed by the objects, i.e. distinguish the same object in the picture. By extracting the hierarchical features in a picture, the object types in the picture can be expressed and distinguished more accurately, and the picture is labeled based on these object types. For example, a picture may include the ground, traffic lines, sidewalks, pedestrians, buildings, trees and other infrastructure. Geometric features are, for example, the geometry and size of the ground, the shape and size of the traffic lines, and the shape and size of the trees; the texture features describe the traffic lines, the ground and the trees.
209. Labeling the defective package image according to the image information and the package detection data to obtain a labeled image after labeling;
in this embodiment, the defective package image is labeled according to the image information and the package detection data to obtain the labeled image. The packages in the image are selected manually through an interactive device by drawing a closed polygon around each one, and the interactive device sends the position coordinates of the closed polygon to the server. The server delimits the package area in the video image according to the position coordinates, obtaining an image containing the labeled package area range, so that the instance of the package image is segmented and labeled. The image containing the labeled package area range is the required labeled image.
In this embodiment, the annotation image is written into a blank file in a preset JSON format, so as to obtain a data set in the JSON format.
210. Constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model;
211. and acquiring a package video frame on the sorting production line, and inputting the package video frame into a package defect identification model for identification to obtain a package defect identification result of each package in the package video frame.
Steps 201 and 208 and 209 in this embodiment are similar to steps 101 and 103 and 104 in the first embodiment, and are not described herein again.
In the embodiment of the invention, the sorting result is the package detection data of the defective package and the corresponding package image; identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image; constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model; and inputting the obtained package video frames on the sorting production line into a package defect identification model for identification to obtain a corresponding package defect identification result. The package defect recognition model obtained through training identifies damaged packages in the package sorting center, and the technical problem that the damaged packages cannot be recognized by fully utilizing video stream is solved.
Referring to fig. 3, a third embodiment of the method for identifying a package defect according to the embodiment of the present invention includes:
301. acquiring a plurality of first images in a parcel sorting scene, labeling parcels in the first images to obtain a labeled file, and taking the first images as training sample images;
in this embodiment, a plurality of first images in a parcel sorting scene are obtained, a parcel in the first image is labeled to obtain an labeled file, and the first image is used as a training sample image.
302. Inputting the training sample image into a ResNet-101 network, and extracting a first feature map of the training sample image through the ResNet-101 network;
in the embodiment, the training sample image is input into a ResNet-101 network, and a first feature map of the training sample image is extracted through the ResNet-101 network. Where ResNet-101 is a member of the ResNet series of convolutional neural networks. ResNet also learns the loss between the features of the upper layer and the features of the lower layer, namely the residual error, through adding an identical quick link mode, so that the accumulation layer can learn new features on the basis of input features, and more features can be extracted. And the depth of ResNet-101 is 101 layers, so that the extracted features are finer and the precision is higher in example segmentation.
After the training sample image is input into the ResNet-101 network, the ResNet network extracts the features in the training sample image through convolution to obtain the first feature map. Since an image is composed of individual pixels, each of which can be represented by numerical values (for example, an RGB image can be represented by the three channel values R, G and B), the image can be represented as a mathematical matrix of size 3 x a x b. The essence of feature extraction is to convolve the pixel values with a convolution kernel of a certain size, such as c x d. The first feature map can therefore also be represented by an m x k matrix.
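For illustration, here is a minimal sketch, assuming a torchvision ResNet-101, of extracting the first feature map from a training sample image and inspecting its matrix shape; the 800 x 800 input size is an arbitrary example.

    import torch
    from torchvision import models

    resnet101 = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
    # Keep only the convolutional stages (drop the average pooling and FC head).
    backbone = torch.nn.Sequential(*list(resnet101.children())[:-2])

    image = torch.randn(1, 3, 800, 800)     # a training sample image, 3 x a x b
    with torch.no_grad():
        feature_map = backbone(image)       # the "first feature map"
    print(feature_map.shape)                # torch.Size([1, 2048, 25, 25])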
303. Inputting the first characteristic diagram into an RPN network, and generating a prediction frame corresponding to the first characteristic diagram through the RPN network;
in this embodiment, the first feature map is input to the RPN network, and a prediction frame corresponding to the first feature map is generated by the RPN network. The method comprises the following steps: inputting the first characteristic diagram into an RPN network, and acquiring preset anchor frame information; generating an anchor frame of the first characteristic diagram according to the anchor frame information; judging whether a package exists in the anchor frame or not through a first classifier; and if so, performing frame regression on the anchor frame to obtain a prediction frame corresponding to the first feature map.
In the past, a sliding window was used for target recognition; however, one window can only detect one target, and there is the problem of multiple sizes. Anchor boxes were therefore proposed. Anchor frame information is preset; for example, the number of anchor frames is 9, covering nine specifications such as 3x1 and 3x2. Since the first feature map obtained by convolution can be represented by an m x k matrix, 9 anchor frames corresponding to each value in the matrix can be generated according to the anchor frame information, with the nine specifications such as 3x1 and 3x2.
The RPN network includes a first classifier, and this embodiment preferably uses softmax as the first classifier. Softmax, also called the normalized exponential function, is a normalization of the gradient of the logarithm of a finite discrete probability distribution, which yields the corresponding probability values. For each anchor frame, a score for containing a package is calculated and then normalized, giving the probability that the anchor frame contains a package. If the probability is greater than a preset threshold, it is determined that the anchor frame contains a package; if the probability is smaller than the preset threshold, it is determined that the anchor frame contains no package. Bounding-box regression, also called BB regression, refers to the fine position adjustment of the retained anchor frames by regression analysis. The anchor frames containing packages can be screened out by the classifier, but since the sizes of the anchor frames are all fixed by the preset anchor frame information, an anchor frame does not necessarily enclose a package accurately, so fine adjustment is needed.
The fine-tuning approaches that are often employed are panning and size scaling. Since both of these two ways can be accomplished by simple linear mapping, a linear transformation formula can be preset and then learned by training. If the parcel exists in the anchor frame, the anchor frame containing the parcel is reserved, and the reserved anchor frame is finely adjusted through border regression, so that a preselected frame corresponding to the first characteristic diagram is obtained.
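The following sketch illustrates the two RPN ideas above: generating nine preset anchor frames around one feature-map position, and fine-tuning a retained anchor with the usual linear transformation (translation plus log-scale size scaling); the anchor scales, aspect ratios and regression deltas are made-up examples.

    import math

    def make_anchors(cx, cy, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
        # 3 scales x 3 aspect ratios = 9 anchor frames per feature-map position.
        anchors = []
        for s in scales:
            for r in ratios:
                w, h = s * math.sqrt(r), s / math.sqrt(r)
                anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
        return anchors

    def apply_box_regression(anchor, dx, dy, dw, dh):
        # Fine adjustment of a retained anchor: translate the centre, scale the size.
        x1, y1, x2, y2 = anchor
        w, h = x2 - x1, y2 - y1
        cx, cy = (x1 + x2) / 2 + dx * w, (y1 + y2) / 2 + dy * h
        w, h = w * math.exp(dw), h * math.exp(dh)
        return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

    anchors = make_anchors(100, 100)
    print(len(anchors))                                   # 9
    print(apply_box_regression(anchors[4], 0.1, -0.05, 0.2, 0.0))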
304. Inputting the first feature map and the prediction frame into an ROI Align layer, and fusing the prediction frame and the first feature map through the ROI Align layer to obtain a second feature map containing the prediction frame;
in this embodiment, the first feature map and the prediction frames are input into the ROI Align layer, and the prediction frames and the first feature map are fused through the ROI Align layer to obtain the second feature map containing the prediction frames. ROI Align is a region feature aggregation method. Since the grid size required by the subsequent network is generally smaller than the feature map, the ROI Pooling layer performs two quantizations, so the grid positions may contain decimals while the indices of the values in the feature map are integers, and the positions are therefore rounded to integers for matching. The matching is thus not exact, which causes a mismatch. ROI Align solves this problem.
Firstly, traversing a corresponding area of each preselected frame in the first feature map, keeping the boundary of a floating point number not to be quantized, then dividing the area into k x k units, finally calculating and fixing four position coordinates in each unit, calculating the values of the four positions by using a bilinear interpolation method, and then performing maximum pooling operation. Thereby obtaining a second profile comprising the preselected box.
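A minimal sketch of this region feature aggregation step, using torchvision's roi_align: each preselected frame is mapped onto the feature map without integer quantization and pooled into a fixed k x k grid via bilinear interpolation; the feature-map size, stride and k = 7 are illustrative assumptions.

    import torch
    from torchvision.ops import roi_align

    feature_map = torch.randn(1, 256, 50, 50)   # first feature map for one image
    # Preselected frames as (batch_index, x1, y1, x2, y2) in input-image coordinates.
    boxes = torch.tensor([[0.0, 48.0, 64.0, 200.0, 176.0]])

    pooled = roi_align(feature_map, boxes, output_size=(7, 7),
                       spatial_scale=1.0 / 16,   # image-to-feature-map stride
                       sampling_ratio=2)         # bilinear sample points per bin
    print(pooled.shape)                          # torch.Size([1, 256, 7, 7])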
305. Inputting the second feature map into a classification network, and generating a prediction result corresponding to the second feature map through the classification network;
in this embodiment, the second feature map is input to the classification network, and a prediction result corresponding to the second feature map is generated by the classification network. The method comprises the following steps: inputting the second characteristic diagram into a full connection layer to obtain a target vector corresponding to the second characteristic diagram through the full connection layer, wherein the classification network comprises the full connection layer and a second classifier; inputting the target vector into a second classifier so as to obtain the prediction probability that the prediction box contains the package through the second classifier; and if the prediction probability is larger than a preset threshold value, taking the area range corresponding to the prediction frame as a prediction area, and taking the prediction area as a prediction result. Each node in the fully connected layers (FC) is connected with all nodes in the previous layer, so as to integrate all the features extracted previously.
In this embodiment, the output of the fully connected layer is a one-dimensional vector. All the previously extracted features are integrated, and an activation function is then added for nonlinear mapping, so that all the features are mapped onto this one-dimensional vector to obtain the target vector corresponding to the second feature map.
In this embodiment, the preferred second classifier is a softmax classifier. And after the target vector is obtained, obtaining the probability value that each pre-selection frame contains the package or does not contain the package through a softmax classifier. And if the probability value of the packages is larger than the preset threshold value of the packages, judging that the pre-selection box contains the packages. And then taking the area range corresponding to the prediction frame as a prediction area with the parcel, and outputting the prediction area as a prediction result.
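A minimal PyTorch sketch of the classification head just described: the pooled region feature is flattened into a one-dimensional target vector by fully connected layers, and a softmax gives the probability that the preselected frame contains a package, which is compared with a preset threshold; the layer sizes and the 0.5 threshold are illustrative assumptions.

    import torch
    import torch.nn as nn

    head = nn.Sequential(
        nn.Flatten(),                          # 256 x 7 x 7 -> one-dimensional vector
        nn.Linear(256 * 7 * 7, 1024), nn.ReLU(),
        nn.Linear(1024, 2),                    # scores for "package" / "no package"
    )

    region_feature = torch.randn(1, 256, 7, 7)          # output of the ROI Align layer
    probs = torch.softmax(head(region_feature), dim=1)  # second classifier (softmax)
    contains_package = probs[0, 1].item() > 0.5         # preset probability threshold
    print(probs, contains_package)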
306. Adjusting parameters of a preset MASK R-CNN model according to the prediction result and the label file until the MASK R-CNN model converges to obtain a package identification model;
in this embodiment, the parameters of the preset MASK R-CNN model are adjusted according to the prediction result and the label file until the MASK R-CNN model converges, so as to obtain the package identification model. In the MASK R-CNN model, the loss function is L = Lcls + Lbox + Lmask, where Lcls is the classification loss, Lbox is the loss of the preselected frame, obtained by comparing the preselected frame with the coordinates corresponding to the area range circled in the label file, and Lmask is the mask loss. The loss value between the prediction result and the label file can be calculated through this preset loss function.
The loss value is propagated back to the MASK R-CNN model through back propagation, and the parameters of each network are adjusted according to the stochastic gradient descent method. When the MASK R-CNN model converges, the MASK R-CNN model at that point is taken as the package identification model.
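A small PyTorch-style sketch of this training step, in which the three terms are summed into L = Lcls + Lbox + Lmask, the loss is propagated back, and the parameters are updated by stochastic gradient descent; the tiny placeholder network and random targets stand in for the MASK R-CNN model and its labels, purely for illustration.

    import torch
    import torch.nn as nn

    class TinyDetector(nn.Module):
        # Placeholder standing in for MASK R-CNN: predicts class logits,
        # box regression deltas and a small mask from a region feature.
        def __init__(self):
            super().__init__()
            self.trunk = nn.Linear(64, 32)
            self.cls_head = nn.Linear(32, 2)
            self.box_head = nn.Linear(32, 4)
            self.mask_head = nn.Linear(32, 14 * 14)

        def forward(self, x):
            h = torch.relu(self.trunk(x))
            return self.cls_head(h), self.box_head(h), self.mask_head(h)

    model = TinyDetector()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    x = torch.randn(8, 64)                                   # region features
    cls_target = torch.randint(0, 2, (8,))
    box_target = torch.randn(8, 4)
    mask_target = torch.randint(0, 2, (8, 14 * 14)).float()

    cls_logits, box_pred, mask_pred = model(x)
    l_cls = nn.functional.cross_entropy(cls_logits, cls_target)                      # Lcls
    l_box = nn.functional.smooth_l1_loss(box_pred, box_target)                       # Lbox
    l_mask = nn.functional.binary_cross_entropy_with_logits(mask_pred, mask_target)  # Lmask
    loss = l_cls + l_box + l_mask                            # L = Lcls + Lbox + Lmask

    optimizer.zero_grad()
    loss.backward()      # propagate the loss value back through the network
    optimizer.step()     # stochastic gradient descent update
    print(loss.item())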
307. Acquiring historical sorting data of packages;
308. determining the sorting result in the historical sorting data as the package detection data of the defective package and the corresponding package image;
309. identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image;
310. constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model;
311. and acquiring a package video frame on the sorting production line, and inputting the package video frame into a package defect identification model for identification to obtain a package defect identification result of each package in the package video frame.
Steps 307-311 in this embodiment are similar to steps 101-105 in the first embodiment, and are not described herein again.
In the embodiment of the invention, the sorting result is the package detection data of the defective package and the corresponding package image; identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image; constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model; and inputting the obtained package video frames on the sorting production line into a package defect identification model for identification to obtain a corresponding package defect identification result. The package defect recognition model obtained through training identifies damaged packages in the package sorting center, and the technical problem that the damaged packages cannot be recognized by fully utilizing video stream is solved.
Referring to fig. 4, a fourth embodiment of the method for identifying a package defect according to the embodiment of the present invention includes:
401. acquiring historical sorting data of packages;
402. determining the sorting result in the historical sorting data as the package detection data of the defective package and the corresponding package image;
403. identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image;
404. constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model;
405. acquiring a package video frame on a sorting production line, and inputting the package video frame into a package defect identification model;
In this embodiment, package video frames on the sorting production line are acquired and input into the package defect identification model. The package video frames are first captured by a camera or other device; the packages on the conveyor belt may be rectangular, circular, irregular in shape and so on. The server then reads the stored package video frames and inputs them into the package defect identification model.
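A brief sketch of the frame acquisition is given below; it uses OpenCV, the video source and the sampling interval are purely illustrative assumptions, and `defect_model` in the usage comment stands for whatever form the trained model takes:

```python
import cv2

def read_parcel_frames(source, every_n=5):
    """Yield every n-th frame from a camera index or a stored video file of the
    sorting production line, to be fed to the package defect identification model."""
    cap = cv2.VideoCapture(source)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                      # end of stream or camera error
        if idx % every_n == 0:
            yield frame                # BGR image of the packages on the conveyor belt
        idx += 1
    cap.release()

# e.g.: for frame in read_parcel_frames("sorting_line.mp4"): result = defect_model(frame)
```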
406. Obtaining the area range of each parcel in the parcel video frame through a parcel defect identification model;
In this embodiment, the area range of each parcel in the parcel video frame is obtained through the parcel defect identification model; the area range of each parcel can be obtained from at least two frames of parcel sorting images. The video images shot in the parcel sorting scene contain the parcels, and each parcel carries information such as whether it is damaged.
After receiving the video image of the package sorting scene, the package defect identification model extracts the package information from the video image and converts it into a form the server can process. The server then obtains the area range of each parcel in the parcel video frame from this information.
407. According to the area range of each parcel in the parcel video frame, extracting a parcel image corresponding to each parcel from the parcel video frame;
In this embodiment, according to the area range of each parcel in the parcel video frame, the region occupied by each parcel is cut out of the frame, so that a separate parcel image is extracted for every parcel.
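The cropping described here can be sketched as follows, assuming each area range is given as pixel coordinates (x1, y1, x2, y2); the function name is illustrative:

```python
def crop_parcels(frame, regions):
    """Cut one parcel image out of the video frame for every region range,
    clamping the coordinates to the frame boundaries."""
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in regions:
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 > x1 and y2 > y1:
            crops.append(frame[y1:y2, x1:x2].copy())   # parcel image for this parcel
    return crops
```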
408. Inputting the package image into a package defect identification model, and respectively carrying out defect identification on the package image through the package defect identification model to obtain a package defect identification result of each package in the package video frame;
In this embodiment, the package images are input into the package defect identification model, which performs defect identification on each of them to obtain the package defect identification result of every package in the package video frame. After a package image is extracted, it can also be fed into other models, such as a package sorting model, to obtain the corresponding package information.
In this embodiment, a package defect identification model that can recognize whether a package is broken is preferred. Through this model, damaged packages are identified, and the server then determines from the package information whether the package in the image has a damage defect. The server accordingly issues an instruction and sends the identification result to the monitoring center.
409. Judging whether the package damage value is larger than a preset damage threshold value or not;
In this embodiment, it is determined whether the package damage value is greater than the preset damage threshold. A damage threshold, for example 10%, is preset; when the model outputs a package damage value, it is compared with this threshold.
410. And when the package damage value is larger than a preset damage threshold value, sending a package defect notification to a preset monitoring center so as to repackage the package.
In this embodiment, when the package damage value is greater than the preset damage threshold, a package defect notification is sent to the preset monitoring center so that the package can be repackaged. If, for example, the damage value exceeds 10%, the package needs to be repackaged to prevent the damage from worsening during subsequent transportation, so the server issues a repackaging notification so that staff can repackage the package.
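A minimal sketch of this check is shown below; the 10% threshold comes from the example above, while the callback used to reach the monitoring center is a hypothetical placeholder:

```python
DAMAGE_THRESHOLD = 0.10   # preset damage threshold (10% in the example above)

def check_and_notify(package_id, damage_value, notify_monitoring_center):
    """Compare the package damage value with the preset threshold and, if it is
    exceeded, send a package defect notification so the package can be repackaged.
    `notify_monitoring_center` is an assumed callback (e.g. HTTP or message queue)."""
    if damage_value > DAMAGE_THRESHOLD:
        notify_monitoring_center({
            "package_id": package_id,
            "damage_value": damage_value,
            "action": "repackage",
        })
        return True
    return False
```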
The steps 401-404 in this embodiment are similar to the steps 101-104 in the first embodiment, and are not described herein again.
In the embodiment of the invention, the package detection data whose sorting result is a defective package, together with the corresponding package image, are determined from the historical sorting data; defect identification is performed on the package image, and the identified package image is labeled based on the package detection data to obtain a labeled image; a model training sample set is constructed from the labeled images and input into a preset Centernet model for package defect identification training to obtain a package defect identification model; and the package video frames acquired on the sorting production line are input into the package defect identification model for identification to obtain the corresponding package defect identification results. The trained package defect identification model identifies damaged packages in the package sorting center, which solves the technical problem that video streams cannot be fully used to identify damaged packages.
Referring to fig. 5, a fifth embodiment of the method for identifying a package defect according to the present invention includes:
501. acquiring historical sorting data of packages;
502. determining the sorting result in the historical sorting data as the package detection data of the defective package and the corresponding package image;
503. identifying defects of the package image, and labeling the identified package image based on package detection data to obtain a labeled image;
504. inputting the labeled image in the model training sample set into a first convolution network of a Centernet model, and extracting a feature map of the labeled image;
In this embodiment, the labeled images in the model training sample set are input into the first convolution network of the Centernet model, and the feature map of each labeled image is extracted. The Centernet model comprises a first convolution network, a second convolution network, a third convolution network, a domain classifier, a gradient inversion layer and a target detector. The first convolution network extracts the features of the labeled image; the target detector recognizes the labeling information and classifies the target object; and the gradient inversion layer inverts the back-propagated partial derivatives so that the first convolution network and the domain classifier are trained adversarially.
505. Calculating the loss value of the Centernet model according to the feature map of the labeled image;
In this embodiment, the loss value of the Centernet model is calculated from the feature map of the labeled image. The labeling information of the labeled image is acquired, and a first loss value of the Centernet model is calculated from this labeling information and the labeling information output by the Centernet model, which comprises the target key points and the rectangular-box size information. The first loss value is calculated by the following formula:
L_det = L_k + λ_off · L_off + λ_size · L_size
where L_det represents the first loss value, L_k the target key-point loss value, L_off the target center offset loss value, and L_size the target size loss value.
The target key-point loss value L_k is calculated as follows: the center-point coordinates of the real labeling information are down-sampled and spread onto the intermediate feature map through a Gaussian kernel, and the key-point loss L_k is computed on the resulting heatmap. The target size loss value L_size is calculated as follows: assuming the bounding box of object k with category c_k is (x1^(k), y1^(k), x2^(k), y2^(k)), all center points in the image are predicted by the key-point estimator, and the object size s_k = (x2^(k) − x1^(k), y2^(k) − y1^(k)) is regressed for each object k. The target size loss value L_size is calculated as:
L_size = (1/N) · Σ_{k=1}^{N} | Ŝ_{p_k} − s_k |
where s_k denotes the true size of object k and Ŝ_{p_k} denotes the size predicted at its center point p_k. The target key-point loss value L_k, the target center offset loss value L_off and the target size loss value L_size are thereby obtained.
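A PyTorch-style sketch of the size loss and of combining the three terms into L_det is given below; the weighting factors λ_off = 1 and λ_size = 0.1 are the values commonly used with Centernet and are shown only as an assumption:

```python
import torch
import torch.nn.functional as F

def size_loss(pred_sizes, true_sizes):
    """L1 regression between the size predicted at each object's center point
    (Ŝ_pk) and the true box size s_k, averaged over the N labeled objects."""
    return F.l1_loss(pred_sizes, true_sizes, reduction="mean")

def detection_loss(l_k, l_off, l_size, lam_off=1.0, lam_size=0.1):
    """L_det = L_k + λ_off · L_off + λ_size · L_size."""
    return l_k + lam_off * l_off + lam_size * l_size

# illustrative usage with fabricated size tensors (N = 2 objects, width/height pairs)
pred = torch.tensor([[30.0, 42.0], [55.0, 60.0]])
true = torch.tensor([[32.0, 40.0], [50.0, 64.0]])
l_size = size_loss(pred, true)
```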
506. Updating the weight parameters of the Centernet model according to the loss values by using a back propagation algorithm to obtain a package defect identification model;
In this embodiment, the weight parameters of the Centernet model are updated from the loss value using the back-propagation algorithm to obtain the package defect identification model. The partial derivatives of the parameters are computed from the loss value, and the parameters are updated by gradient descent; for the gradient inversion layer, after its partial derivative is computed, its parameters are updated with the opposite sign. Back propagation passes the loss (the difference between the predicted value and the true value) backwards layer by layer, and each layer of the network computes partial derivatives from the transmitted error and updates its own parameters. The gradient inversion layer multiplies the partial derivative passed to it by a negative number, so that the networks before and after it are trained toward opposite objectives, realizing the adversarial effect. The weight parameters of the Centernet model are updated from the loss value in this way, yielding the package defect identification model.
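The gradient inversion layer described here is commonly implemented as a custom autograd function that is the identity in the forward pass and multiplies the incoming gradient by a negative factor in the backward pass; a sketch (with an assumed scaling-factor argument) follows:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; negated, scaled gradient backward, so the feature
    extractor and the domain classifier pursue opposite objectives."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # multiply the partial derivative passed to this layer by a negative number
        return grad_output.neg() * ctx.lam, None

def grad_reverse(features, lam=1.0):
    """Insert between the first convolution network and the domain classifier."""
    return GradReverse.apply(features, lam)
```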
507. And acquiring a package video frame on the sorting production line, and inputting the package video frame into a package defect identification model for identification to obtain a package defect identification result of each package in the package video frame.
Steps 501 to 503 and step 507 in this embodiment are similar to steps 101 to 103 and step 105 in the first embodiment, and are not described herein again.
In the embodiment of the invention, the package detection data whose sorting result is a defective package, together with the corresponding package image, are determined from the historical sorting data; defect identification is performed on the package image, and the identified package image is labeled based on the package detection data to obtain a labeled image; a model training sample set is constructed from the labeled images and input into a preset Centernet model for package defect identification training to obtain a package defect identification model; and the package video frames acquired on the sorting production line are input into the package defect identification model for identification to obtain the corresponding package defect identification results. The trained package defect identification model identifies damaged packages in the package sorting center, which solves the technical problem that video streams cannot be fully used to identify damaged packages.
The method for identifying package defects in the embodiments of the present invention is described above; referring to fig. 6, a package defect identification apparatus in an embodiment of the present invention is described below, where a first embodiment of the package defect identification apparatus includes:
an obtaining module 601, configured to obtain historical sorting data of packages;
a determining module 602, configured to determine that the sorting result in the historical sorting data is package detection data of a defective package and a corresponding package image;
a first labeling module 603, configured to perform defect identification on the package image, and label the identified package image based on the package detection data to obtain a labeled image;
the training module 604 is configured to construct a model training sample set according to the labeled image, and input the model training sample set into a preset Centernet model to perform package defect identification training, so as to obtain a package defect identification model;
the identification module 605 is configured to obtain a package video frame on the sorting production line, and input the package video frame into the package defect identification model for identification, so as to obtain a package defect identification result of each package in the package video frame.
In the embodiment of the invention, the package detection data whose sorting result is a defective package, together with the corresponding package image, are determined from the historical sorting data; defect identification is performed on the package image, and the identified package image is labeled based on the package detection data to obtain a labeled image; a model training sample set is constructed from the labeled images and input into a preset Centernet model for package defect identification training to obtain a package defect identification model; and the package video frames acquired on the sorting production line are input into the package defect identification model for identification to obtain the corresponding package defect identification results. The trained package defect identification model identifies damaged packages in the package sorting center, which solves the technical problem that video streams cannot be fully used to identify damaged packages.
Referring to fig. 7, a package defect recognition apparatus according to a second embodiment of the present invention specifically includes:
an obtaining module 601, configured to obtain historical sorting data of packages;
a determining module 602, configured to determine that the sorting result in the historical sorting data is package detection data of a defective package and a corresponding package image;
a first labeling module 603, configured to perform defect identification on the package image, and label the identified package image based on the package detection data to obtain a labeled image;
the training module 604 is configured to construct a model training sample set according to the labeled image, and input the model training sample set into a preset Centernet model to perform package defect identification training, so as to obtain a package defect identification model;
the identification module 605 is configured to obtain a package video frame on the sorting production line, and input the package video frame into the package defect identification model for identification, so as to obtain a package defect identification result of each package in the package video frame.
In this embodiment, the obtaining module 601 is specifically configured to:
acquiring a plurality of parcel sorting images in a historical parcel sorting scene, inputting the parcel sorting images into a preset parcel recognition model for recognition, and outputting the area range of each parcel in the parcel sorting images;
extracting a parcel image corresponding to each parcel from the parcel sorting image according to the area range of each parcel in the parcel sorting image;
and identifying the package image to obtain package information of each package, and acquiring historical sorting data of the packages according to the package information.
In this embodiment, the package defect recognition apparatus further includes:
a second labeling module 606, configured to obtain multiple first images in a parcel sorting scene, label a parcel in the first image to obtain a labeled file, and use the first image as a training sample image;
an extracting module 607, configured to input the training sample image into the ResNet-101 network, and extract a first feature map of the training sample image through the ResNet-101 network;
a first generating module 608, configured to input the first feature map into the RPN network, and generate a prediction frame corresponding to the first feature map through the RPN network;
a fusion module 609, configured to input the first feature map and the prediction frame into the ROI Align layer, and fuse the prediction frame and the first feature map through the ROI Align layer to obtain a second feature map including the prediction frame;
a second generating module 610, configured to input the second feature map into the classification network, and generate a prediction result corresponding to the second feature map through the classification network;
and the adjusting module 611 is configured to adjust parameters of a preset MASK R-CNN model according to the prediction result and the label file until the MASK R-CNN model converges to obtain a package identification model.
In this embodiment, the first labeling module 603 is specifically configured to:
identifying defects in the package image to obtain a defect area range of the package image;
extracting a corresponding defect package image from the package image based on the defect area range of the package image;
performing feature extraction on the defect package image to obtain image features of the defect package image, wherein the image features comprise geometric features, texture features and semantic features;
determining image information of the defect package image according to the image characteristics;
and labeling the defective package image according to the image information and the package detection data to obtain a labeled image after labeling.
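As a sketch of what one labeled record might contain (all field names here are assumptions made for illustration, not a format defined by the embodiment):

```python
import json

def build_annotation(image_path, defect_region, image_features, package_detection_data):
    """Combine the defect area range, the extracted image features and the package
    detection data into one labeled record for the defect package image."""
    record = {
        "image": image_path,
        "defect_region": list(defect_region),      # (x1, y1, x2, y2) of the defect area
        "features": {
            "geometric": image_features.get("geometric"),
            "texture": image_features.get("texture"),
            "semantic": image_features.get("semantic"),
        },
        "detection_data": package_detection_data,  # e.g. the sorting result for the package
        "label": "damaged",
    }
    return json.dumps(record, ensure_ascii=False)
```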
In this embodiment, the identification module 605 is specifically configured to:
acquiring a package video frame on a sorting production line, and inputting the package video frame into the package defect identification model;
obtaining the area range of each parcel in the parcel video frame through the parcel defect identification model;
according to the region range of each parcel in the parcel video frame, extracting a parcel image corresponding to each parcel from the parcel video frame respectively;
and inputting the package image into the package defect identification model, and respectively carrying out defect identification on the package image through the package defect identification model to obtain a package defect identification result of each package in the package video frame.
In this embodiment, the package defect recognition apparatus further includes:
a judging module 612, configured to obtain a package damage value of a corresponding package in the package video frame according to the package defect identification result; judging whether the package damage value is larger than a preset damage threshold value or not;
a sending module 613, configured to send a package defect notification to a preset monitoring center when the package damage value is greater than a preset damage threshold value, so as to repackage the package.
In this embodiment, the training module 604 includes:
an extracting unit 6041, configured to input an annotation image in the model training sample set into the first convolution network of the Centernet model, and extract a feature map of the annotation image;
a calculating unit 6042, configured to calculate a loss value of the Centernet model according to the feature map of the annotation image;
an updating unit 6043, configured to update, by using a back propagation algorithm, the weight parameters of the Centernet model according to the loss value, so as to obtain a package defect identification model.
In the embodiment of the invention, the package detection data whose sorting result is a defective package, together with the corresponding package image, are determined from the historical sorting data; defect identification is performed on the package image, and the identified package image is labeled based on the package detection data to obtain a labeled image; a model training sample set is constructed from the labeled images and input into a preset Centernet model for package defect identification training to obtain a package defect identification model; and the package video frames acquired on the sorting production line are input into the package defect identification model for identification to obtain the corresponding package defect identification results. The trained package defect identification model identifies damaged packages in the package sorting center, which solves the technical problem that video streams cannot be fully used to identify damaged packages.
Figs. 6 and 7 describe the parcel defect identification apparatus in the embodiments of the present invention in detail from the perspective of modular functional entities; the parcel defect identification device in the embodiments of the present invention is described in detail below from the perspective of hardware processing.
Fig. 8 is a schematic structural diagram of a package defect identification device according to an embodiment of the present invention. The package defect identification device 800 may vary considerably in configuration or performance and may include one or more processors (CPUs) 810 (e.g., one or more processors), a memory 820, and one or more storage media 830 (e.g., one or more mass storage devices) storing an application 833 or data 832. The memory 820 and the storage medium 830 may be transient or persistent storage. The program stored on the storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations for the package defect identification device 800. Further, the processor 810 may be configured to communicate with the storage medium 830 and execute the series of instruction operations in the storage medium 830 on the package defect identification device 800 to implement the steps of the package defect identification method provided by the above-mentioned method embodiments.
The package defect identification apparatus 800 may also include one or more power supplies 840, one or more wired or wireless network interfaces 850, one or more input-output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc. Those skilled in the art will appreciate that the package defect identification device configuration shown in fig. 8 does not constitute a limitation of the package defect identification devices provided herein, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and which may also be a volatile computer-readable storage medium, having stored therein instructions, which, when executed on a computer, cause the computer to perform the steps of the above package defect identification method.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A package defect identification method is characterized by comprising the following steps:
acquiring historical sorting data of packages;
determining that the sorting result in the historical sorting data is the package detection data of the defective package and the corresponding package image;
identifying defects of the package images, and labeling the identified package images based on the package detection data to obtain labeled images;
constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model;
and acquiring package video frames on a sorting production line, and inputting the package video frames into the package defect identification model for identification to obtain a package defect identification result of each package in the package video frames.
2. The package defect identification method of claim 1, wherein said obtaining historical sorting data for packages comprises:
acquiring a plurality of parcel sorting images in a historical parcel sorting scene, inputting the parcel sorting images into a preset parcel recognition model for recognition, and outputting the area range of each parcel in the parcel sorting images;
extracting a parcel image corresponding to each parcel from the parcel sorting image according to the area range of each parcel in the parcel sorting image;
and identifying the package image to obtain package information of each package, and acquiring historical sorting data of the package according to the package information.
3. The package defect identification method according to claim 2, before the inputting the package sorting image into a preset package identification model for identification and outputting the area range of each package in the package sorting image, further comprising:
acquiring a plurality of first images in a parcel sorting scene, labeling parcels in the first images to obtain a labeled file, and taking the first images as training sample images;
inputting the training sample image into the ResNet-101 network, and extracting a first feature map of the training sample image through the ResNet-101 network;
inputting the first characteristic diagram into the RPN network, and generating a prediction frame corresponding to the first characteristic diagram through the RPN network;
inputting the first feature map and the prediction frame into the ROI Align layer, and fusing the prediction frame and the first feature map through the ROI Align layer to obtain a second feature map containing the prediction frame;
inputting the second feature map into the classification network, and generating a prediction result corresponding to the second feature map through the classification network;
and adjusting parameters of a preset MASK R-CNN model according to the prediction result and the label file until the MASK R-CNN model converges to obtain a package identification model.
4. The package defect identification method according to claim 1, wherein the performing defect identification on the package image and labeling the identified package image based on the package detection data to obtain a labeled image comprises:
identifying defects in the package image to obtain a defect area range of the package image;
extracting a corresponding defect package image from the package image based on the defect area range of the package image;
performing feature extraction on the defect package image to obtain image features of the defect package image, wherein the image features comprise geometric features, texture features and semantic features;
determining image information of the defect package image according to the image characteristics;
and labeling the defective package image according to the image information and the package detection data to obtain a labeled image after labeling.
5. The package defect identification method according to claim 1, wherein the obtaining of the package video frames on the sorting line and inputting the package video frames into the package defect identification model for identification to obtain the package defect identification result of each package in the package video frames comprises:
acquiring a package video frame on a sorting production line, and inputting the package video frame into the package defect identification model;
obtaining the area range of each parcel in the parcel video frame through the parcel defect identification model;
according to the region range of each parcel in the parcel video frame, extracting a parcel image corresponding to each parcel from the parcel video frame respectively;
and inputting the package image into the package defect identification model, and respectively carrying out defect identification on the package image through the package defect identification model to obtain a package defect identification result of each package in the package video frame.
6. The package defect identification method according to claim 5, wherein after the package defect identification model respectively identifies the package images to obtain the package defect identification result of each package in the package video frame, the method further comprises:
obtaining a parcel damage value of a corresponding parcel in the parcel video frame according to the parcel defect identification result;
judging whether the package damage value is larger than a preset damage threshold value or not;
and if so, sending a package defect notification to a preset monitoring center so as to repackage the package.
7. The package defect identification method according to claim 1, wherein the inputting the model training sample set into a preset Centernet model for package defect identification training to obtain a package defect identification model comprises:
inputting the labeled image in the model training sample set into a first convolution network of the Centernet model, and extracting a feature map of the labeled image;
calculating a loss value of the Centernet model according to the feature map of the labeled image;
and updating the weight parameters of the Centernet model according to the loss value by utilizing a back propagation algorithm to obtain a package defect identification model.
8. A package defect identifying device, characterized in that the package defect identifying device comprises:
the acquisition module is used for acquiring historical sorting data of the packages;
the determining module is used for determining that the sorting result in the historical sorting data is the package detection data of the defective package and the corresponding package image;
the first labeling module is used for identifying defects of the package image and labeling the identified package image based on the package detection data to obtain a labeled image;
the training module is used for constructing a model training sample set according to the labeled image, inputting the model training sample set into a preset Centernet model for package defect identification training, and obtaining a package defect identification model;
and the identification module is used for acquiring the package video frames on the sorting production line, inputting the package video frames into the package defect identification model for identification, and obtaining the package defect identification result of each package in the package video frames.
9. A package defect identifying apparatus, characterized in that the package defect identifying apparatus comprises: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the package defect identification device to perform the steps of the package defect identification method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the package defect identification method according to any one of claims 1 to 7.
CN202110433708.6A 2021-04-21 2021-04-21 Package defect identification method, device, equipment and storage medium Pending CN113192017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110433708.6A CN113192017A (en) 2021-04-21 2021-04-21 Package defect identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113192017A true CN113192017A (en) 2021-07-30

Family

ID=76978051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110433708.6A Pending CN113192017A (en) 2021-04-21 2021-04-21 Package defect identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113192017A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113919780A (en) * 2021-10-18 2022-01-11 广东华智科技有限公司 Logistics management system and method based on Internet of things
CN114781982A (en) * 2022-06-17 2022-07-22 广州朗通科技有限公司 Cold chain warehouse management method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785289A (en) * 2018-12-18 2019-05-21 中国科学院深圳先进技术研究院 A kind of transmission line of electricity defect inspection method, system and electronic equipment
CN111428682A (en) * 2020-04-09 2020-07-17 上海东普信息科技有限公司 Express sorting method, device, equipment and storage medium
CN111709987A (en) * 2020-06-11 2020-09-25 上海东普信息科技有限公司 Package volume measuring method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
Du et al. Pavement distress detection and classification based on YOLO network
US11144889B2 (en) Automatic assessment of damage and repair costs in vehicles
CN109447169B (en) Image processing method, training method and device of model thereof and electronic system
US9830704B1 (en) Predicting performance metrics for algorithms
CN108304835B (en) character detection method and device
CN111709987B (en) Package volume measuring method, device, equipment and storage medium
CN111738231B (en) Target object detection method and device, computer equipment and storage medium
KR102166458B1 (en) Defect inspection method and apparatus using image segmentation based on artificial neural network
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
CN112529015A (en) Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
CN111415106A (en) Truck loading rate identification method, device, equipment and storage medium
CN110889446A (en) Face image recognition model training and face image recognition method and device
CN113192017A (en) Package defect identification method, device, equipment and storage medium
CN111428682B (en) Express sorting method, device, equipment and storage medium
CN114764778A (en) Target detection method, target detection model training method and related equipment
CN113312957A (en) off-Shift identification method, device, equipment and storage medium based on video image
CN113516146A (en) Data classification method, computer and readable storage medium
CN110555420A (en) fusion model network and method based on pedestrian regional feature extraction and re-identification
CN114821102A (en) Intensive citrus quantity detection method, equipment, storage medium and device
CN114331949A (en) Image data processing method, computer equipment and readable storage medium
US20230095533A1 (en) Enriched and discriminative convolutional neural network features for pedestrian re-identification and trajectory modeling
US20210034907A1 (en) System and method for textual analysis of images
CN111881958A (en) License plate classification recognition method, device, equipment and storage medium
CN114255377A (en) Differential commodity detection and classification method for intelligent container
CN114332457A (en) Image instance segmentation model training method, image instance segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210730)