CN114092396A - Method and device for detecting corner collision flaw of packaging box - Google Patents

Publication number: CN114092396A
Application number: CN202111189749.1A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 刘冲, 杨翠, 黄师化
Applicant and current assignee: Anqing Normal University
Prior art keywords: image, packaging box, model, collision, detection

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/217: Pattern recognition; validation; performance evaluation; active pattern learning techniques
    • G06F 18/253: Pattern recognition; fusion techniques of extracted features
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/13: Image analysis; edge detection
    • G06T 7/155: Image analysis; segmentation or edge detection involving morphological operators
    • G06T 2207/10144: Image acquisition modality; varying exposure
    • G06T 2207/10152: Image acquisition modality; varying illumination
    • G06T 2207/20032: Algorithmic details; median filtering
    • G06T 2207/20081: Algorithmic details; training, learning
    • G06T 2207/20084: Algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20221: Algorithmic details; image fusion, image merging

Abstract

The invention discloses a method and a device for detecting corner collision flaws of a packaging box. Packaging box samples with corner collision flaws are collected and processed by image compression, feature fusion, image segmentation and other operations to obtain a set of packaging box vertex angle characteristic images. A lightweight deep neural network is designed based on the YOLOV framework to construct a detection model, and the model is trained with the vertex angle characteristic images of the samples to obtain a high-precision corner collision flaw detection model. The packaging box is then detected with this high-precision model, so that complex, multi-scale corner collision flaws of the packaging box can be detected rapidly.

Description

Method and device for detecting corner collision flaw of packaging box
Technical Field
The invention relates to the field of flaw detection, in particular to a method and a device for detecting a corner collision flaw of a packaging box.
Background
A commodity packaging box prevents the commodity from being damaged, contaminated, leaking or deteriorating during circulation, and is one of the manufacturer's most effective and convenient selling tools. For example, an aesthetically pleasing outer package can improve public recognition of a tablet computer brand. However, owing to the manufacturing process and the transportation environment, tablet computer packaging boxes inevitably acquire many defects during production and transport, and corner collision damage on the inner and outer surfaces of the box is among the most common. Such damage seriously harms the brand value of the product, so manufacturers must strictly screen out products with corner collision flaws. Because the corner collision flaws on the surface of a tablet computer packaging box are of complex types and of varied sizes and scales, most manufacturers currently rely on manual inspection. Manual inspection is not only costly, but inspectors also fatigue during long shifts and the error rate becomes too high; replacing manual inspection with machine inspection is therefore necessary.
With the rapid development of computer vision, machine inspection has been widely applied in many fields, and a number of mature inspection schemes have formed. Machine vision is already used heavily for detecting outer-package flaws and product defects, but the corner collision flaw of a packaging box for smart devices such as tablet computers differs essentially from conventional product defects: its appearance is highly complex and diverse, the collision amplitude and angle are often slight, making it a micro flaw, and the shooting angle is frequently imperfect. Existing detection schemes therefore struggle to achieve high-precision results.
Disclosure of Invention
In order to solve the technical problems described in the background art, the invention provides a method and a device for detecting corner collision flaws of a packaging box, which achieve rapid target detection of complex, multi-scale corner collision flaws on the packaging box.
The invention provides a method for detecting a corner collision flaw of a packaging box, which comprises the following steps:
S1: acquiring a detection image of the packaging box, and processing the detection image to obtain a characteristic image;
S2: extracting a binary image of the characteristic image, and performing a closed operation on the binary image to obtain an image mask;
S3: searching the boundary of the packaging box in the image mask to obtain a boundary point set, carrying out equidistant sparse sampling on the boundary point set multiple times, and determining the central positions of the four vertex angles of the packaging box based on the slopes of the lines connecting adjacent boundary points;
S4: presetting a radius value, selecting vertex angle areas around the central positions of the four vertex angles on the characteristic image, and constructing a local mask of the vertex angle areas of the packaging box;
S5: multiplying the local mask of the vertex angle areas by the characteristic image to obtain vertex angle characteristic images of the packaging box;
S6: collecting packaging box samples with corner collision flaws, and acquiring a vertex angle characteristic image set of the packaging box according to steps S1-S5;
S7: designing a lightweight deep neural network based on the YOLOV framework, constructing a detection model of packaging box corner collision flaws, and training the detection model with the vertex angle characteristic image set;
S8: detecting the packaging box with the trained packaging box corner collision flaw detection model.
Preferably, step S1 specifically comprises:
S101: acquiring two detection images of the packaging box with the same size, the two being images of the same packaging surface shot in a high-light exposure mode and a low-light exposure mode respectively;
S102: carrying out same-scale nearest-neighbor interpolation compression on the two detection images respectively to obtain two compressed images;
S103: calculating the gradients of the two compressed images respectively in combination with high-pass filtering to obtain two gradient images;
S104: performing highlight fusion on the two gradient images by wiener filtering to obtain a characteristic image.
Preferably, the detection image is a gray-scale image.
Preferably, step S3 specifically comprises:
S301: searching the boundary of the packaging box in the image mask to obtain a boundary point set S:
S = {(x_i, y_i) | i = 1, 2, ..., N};
where N is the number of boundary points, and x_i and y_i are the horizontal and vertical coordinates of a boundary point respectively;
S302: carrying out equidistant sparse sampling on the boundary point set S to obtain a boundary point subset S_0:
S_0 = {(x_j, y_j) | j = 1, 2, ..., n}, where n is the number of samples;
S303: computing the slope k_j of the line connecting two adjacent points of the boundary point subset S_0:
k_j = (y_j - y_{j-1}) / (x_j - x_{j-1}), (j = 1, 2, ..., n-1);
S304: finding and recording the target boundary points satisfying 1 < |arctan((k_j - k_{j+1}) / (1 + k_j·k_{j+1}))| < 1.6;
S305: repeating steps S302-S304 to carry out equidistant sparse sampling on the boundary point set multiple times, obtaining a target boundary point set X:
X = {(x_k, y_k) | (x_k, y_k) ∈ S, k = 1, 2, ...};
S306: calculating the average coordinates of the target boundary points belonging to the same vertex angle and taking each average as the central position of that vertex angle, thereby obtaining the central positions of the four vertex angles.
Preferably, before calculating the average coordinates of the target boundary points belonging to the same vertex angle, the step further comprises judging whether the target boundary points belong to the same vertex angle, specifically comprising:
S3061: calculating the distance d_ij between target boundary points according to the formula
d_ij = sqrt((x_i - x_j)^2 + (y_i - y_j)^2);
S3062: if d_ij < δ, the target boundary points (x_i, y_i) and (x_j, y_j) belong to the same vertex angle, where δ is 2 times the maximum spacing between adjacent boundary points in the subset S_0.
Preferably, extracting the binary image of the characteristic image in step S2 specifically comprises: carrying out median filtering on the characteristic image and extracting features with the Canny operator to obtain a binary image of the characteristic image.
Preferably, step S7 specifically comprises:
S701: designing a lightweight deep neural network Model based on the YOLOV framework;
S702: marking the corner collision information of the vertex angle characteristic images in the vertex angle characteristic image set in sequence, selecting two thirds of the vertex angle characteristic images to form a model training set, and forming a model testing set from the remaining one third;
S703: setting the corresponding network parameters and training parameters for the Model, and inputting the model training set into the Model for training to obtain a corner collision flaw detection model PreModel;
S704: inputting the model testing set into the detection model PreModel for detection, comparing the detection results with the marked corner collision information, and calculating the model error ε;
S705: judging whether ε < c; if so, the high-precision corner collision flaw detection Model Pre-Model is obtained; if not, returning to step S703; here c denotes the industry standard error.
Preferably, the depth of the deep neural network Model is 27 layers.
The invention also provides a device for detecting the corner collision flaw of the packing box, which comprises:
the characteristic fusion module is used for acquiring a detection image of the packaging box and processing the detection image to acquire a characteristic image;
the image processing module is used for extracting a binary image of the characteristic image and performing a closed operation on the binary image to obtain an image mask; searching the boundary of the packaging box in the image mask to obtain a boundary point set, carrying out equidistant sparse sampling on the boundary point set multiple times, and determining the central positions of the four vertex angles of the packaging box based on the slopes of the lines connecting adjacent boundary points; presetting a radius value, selecting vertex angle areas around the central positions of the four vertex angles on the characteristic image, and constructing a local mask of the vertex angle areas of the packaging box; and multiplying the local mask of the vertex angle areas by the characteristic image to obtain the vertex angle characteristic images of the packaging box;
the data acquisition module is used for acquiring a packing box sample with a collision angle flaw and acquiring a top angle characteristic image set of the packing box;
the model building module is used for designing a lightweight deep neural network based on a YOLOV framework, building a packaging box corner collision flaw detection model and training the detection model according to a vertex angle characteristic image set;
and the detection module is used for detecting the packaging box according to the trained packaging box corner collision flaw detection model.
The invention also provides a computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method for detecting a corner impact defect of a packaging box.
According to the invention, packaging box samples with corner collision flaws are collected and processed by image compression, feature fusion, image segmentation and other operations to obtain a set of packaging box vertex angle characteristic images; a lightweight deep neural network is designed based on the YOLOV framework to construct a detection model; the model is trained with the vertex angle characteristic images of the samples to obtain a high-precision corner collision flaw detection model; and the packaging box is detected with this high-precision model, so that complex, multi-scale corner collision flaws of the packaging box are detected rapidly.
In the invention, the acquired detection images are shot by a gray-scale high-definition industrial camera, which helps capture information about slight corner collisions; two detection images shot under high-light and low-light double exposure are used, which enhances the shadow structure information of the corner collision flaw.
In the invention, the gradients of the compressed images are calculated in combination with high-pass filtering to obtain gradient images, which effectively reduces the complexity of the corner collision flaw types; highlight fusion of the gradient images by wiener filtering yields the characteristic image and effectively enhances the characteristic structure of the corner collision; the binary image of the characteristic image is extracted with the Canny operator, simply and effectively obtaining the edge structure of the packaging box; and the closed operation on the binary image quickly yields a complete mask of the closed packaging box region, making it convenient to search the region boundary and obtain the boundary points quickly and effectively.
According to the invention, equidistantly sparsely sampled boundary points are used to calculate the rate of change of adjacent connecting lines, which effectively reduces the amount of computation, and repeated sampling localization of the vertex angles allows the vertex angles to be judged accurately; using the local vertex angle characteristic images as detection input greatly reduces the target detection range.
In the invention, a 27-layer lightweight network based on the YOLOV framework is constructed, which ensures detection precision and training speed at the same time, and training verification with sample data ensures that a high-precision detection model is finally obtained.
The detection method provided by the invention is based on local positioning and local segmentation, adopts a lightweight learning network with a non-end-to-end training strategy, and directly takes the enhanced image gradient characteristic map as input, avoiding deeper learning-module designs. It can detect mixed-type, multi-scale corner collision flaws quickly and efficiently, and the designed deep network model is highly extensible and can be applied to transfer learning on other types of problems.
Drawings
FIG. 1 is a flow chart of a method for detecting corner impact defects of a packaging box according to the present invention.
Detailed Description
Fig. 1 is a flowchart illustrating a method for detecting a corner impact defect of a package according to an embodiment of the present invention.
Referring to fig. 1, an embodiment of a method for detecting a corner collision defect of a packaging box according to the present invention, which takes a packaging box of a tablet personal computer as an example, specifically includes:
s1: acquiring a detection image of the packaging box; processing the detection image to obtain a characteristic image;
specifically, the step S1 specifically includes:
s101: obtaining two packing box detection images I with the same size1、I2
It should be noted that the two detection images are two images of the packaging surface of the same packaging box of the tablet computer respectively captured in a high light exposure mode and a low light exposure mode, specifically, a fixed gray scale industrial camera is perpendicular to the packaging capture surface of the tablet computer, strip-shaped parallel light sources for controlling the high light exposure and the low light exposure are respectively selected, parallel light emitted by the light sources is perpendicular to the packaging box surface of the tablet computer, and then the two detection images are captured to obtain the detection images.
In this embodiment, the detection images are shot with a gray-scale high-definition industrial camera, which helps obtain information about small corner collisions. In addition, the edge color of the tablet computer packaging box is used as the reference color during shooting, and a solid background color differing strongly from the edge color is selected, so that interference from the surrounding background is eliminated as far as possible.
S102: respectively carrying out same-scale nearest neighbor interpolation compression on the two detection images to obtain two compressed images J1、J2
It should be noted that, by performing neighbor interpolation compression on the acquired high-definition image, the input scale and the computational complexity can be greatly reduced while feature structure information is ensured.
S103: combining high-pass filtering to respectively calculate the gradients of the two compressed images to obtain two gradient images C1、C2
S104: highlight fusion is carried out on the two gradient images by utilizing wiener filtering, and a characteristic image G is obtained.
In this embodiment, the gradient characteristic images computed by high-pass filtering and the gradient algorithm are fused to obtain the characteristic image, which is used as the input image for detection; this effectively reduces the complexity of the corner collision flaw types.
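As a rough illustration of steps S101-S104 (not part of the original disclosure), the numpy sketch below compresses two synthetic exposures by nearest-neighbor sampling, approximates the high-pass response by the gradient magnitude, and fuses the two gradient images; the per-pixel maximum is a simplified stand-in for the wiener-filter highlight fusion, and the names `nn_compress`, `gradient_magnitude` and `fuse` are illustrative, not from the patent.

```python
import numpy as np

def nn_compress(img, factor):
    """Nearest-neighbor compression: keep every factor-th pixel."""
    return img[::factor, ::factor]

def gradient_magnitude(img):
    """High-pass response approximated by the gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse(g1, g2):
    """Stand-in for wiener-filter highlight fusion: per-pixel maximum."""
    return np.maximum(g1, g2)

# Two synthetic exposures (I_1, I_2) of the same 8x8 surface
high = np.tile(np.arange(8, dtype=float), (8, 1)) * 10
low = high * 0.3
j1, j2 = nn_compress(high, 2), nn_compress(low, 2)              # J_1, J_2
feature = fuse(gradient_magnitude(j1), gradient_magnitude(j2))  # G
```

Nearest-neighbor compression keeps sharp intensity steps (and hence the collision-angle structure) better than averaging schemes, which is presumably why the patent prefers it.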
S2: extracting a binary image of the characteristic image, and performing closed operation on the binary image to obtain an image mask;
In this embodiment, extracting the binary image of the characteristic image specifically comprises: carrying out median filtering on the characteristic image G and extracting features with the Canny operator to obtain the binary characteristic image EdgeI, and then performing a closed operation on EdgeI to obtain a complete mask Mask of the tablet computer packaging box region.
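A minimal numpy-only sketch of step S2 (median filtering, edge extraction, closed operation) follows; it is illustrative only. The Canny operator is replaced here by a simple gradient-magnitude threshold, and the closed operation uses a 3x3 structuring element built from plain array shifts, so these are simplified stand-ins for the operators named in the embodiment.

```python
import numpy as np

def shift_stack(img):
    """Stack the nine 3x3-shifted copies of img (edge-padded)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def median3(img):
    """3x3 median filter."""
    return np.median(shift_stack(img), axis=0)

def edges(img, thresh):
    """Stand-in for the Canny operator: threshold the gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def close3(binary):
    """Closed operation (dilation then erosion) with a 3x3 element."""
    dilated = shift_stack(binary).max(axis=0)
    return shift_stack(dilated).min(axis=0)

# Synthetic 12x12 "characteristic image": a filled box region
box = np.zeros((12, 12), np.uint8)
box[3:9, 3:9] = 1
edge_img = edges(median3(box) * 10, 2.0)   # binary edge image (EdgeI)
mask = close3(edge_img)                    # region mask (Mask)
```

The closed operation is extensive (its result contains the input), which is what fills the small gaps in the edge image and yields a complete region mask.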
s3: searching the boundary of the packing box in an image Mask to obtain a boundary point set, carrying out equidistant sparse sampling on the boundary point set for multiple times, and determining the central positions of four vertex angles of the packing box based on the slope of the connecting line of adjacent boundary points;
in this embodiment, step S3 specifically includes:
s301: searching the boundary of the packing box in an image Mask to obtain a boundary point set S:
S={(xi,yi)|i=1,2,...,N};
where N is the number of boundary points, xi,yiRespectively as the horizontal and vertical coordinates of the boundary point;
s302: carrying out equidistant sparse sampling on the boundary point set S to obtain a boundary point subset S0
S0={(xj,yj) 1, 2.., n }, where n is the number of samples;
s303: computing a subset of boundary points S0Slope k of connecting line of two adjacent pointsj
kj=(yj-yj-1)/(xj-xj-1),(j=1,2,...,n-1);
S304: finding and recording a value satisfying 1 < | arctan ((k)j-kj+1)/(1+kjkj+1) ) | < 1.6 of the target boundary point;
It should be noted that the included angle α between two straight lines with slopes k_j and k_{j+1} satisfies
tan α = |(k_j - k_{j+1}) / (1 + k_j·k_{j+1})|,
and a vertex angle of the box corresponds to an included angle of roughly a right angle, i.e. α between about 1 and 1.6 radians (about 57° to 92°). Therefore the boundary points satisfying 1 < |arctan((k_j - k_{j+1}) / (1 + k_j·k_{j+1}))| < 1.6 are determined as the target boundary points.
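Steps S303-S304 can be sketched with numpy as follows (illustrative only, not part of the disclosure); the sketch computes adjacent-segment slopes and keeps points whose turn angle lies in (1, 1.6) radians. It assumes no exactly vertical segment (infinite slope), which the slope formulation of the patent also implies, and the sample points are synthetic.

```python
import numpy as np

def corner_candidates(pts):
    """Keep sampled boundary points whose adjacent-segment turn angle
    satisfies 1 < |arctan((k_j - k_{j+1}) / (1 + k_j * k_{j+1}))| < 1.6."""
    pts = np.asarray(pts, dtype=float)
    dx = np.diff(pts[:, 0])
    dy = np.diff(pts[:, 1])
    k = dy / dx                         # slopes; assumes no vertical segment
    turn = np.abs(np.arctan((k[:-1] - k[1:]) / (1 + k[:-1] * k[1:])))
    hits = np.where((turn > 1) & (turn < 1.6))[0] + 1
    return pts[hits]

# Two nearly perpendicular runs meeting at (2, 0.4)
pts = [(0, 0), (1, 0.2), (2, 0.4), (3, 5.4), (4, 10.4)]
cands = corner_candidates(pts)
```

Here only the meeting point of the two runs is flagged, since the turn angle there (about 1.18 rad) falls inside the (1, 1.6) window while the straight runs give a turn of 0.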
S305: repeating the steps S302-S304 to carry out equidistant sparse sampling on the boundary point set for multiple times to obtain a target boundary point set X:
X={(xk,yk)|(xk,yk)∈S,k=1,2,...};
in this embodiment, the steps S302 to S304 are repeated at least three times, that is, more than three times of sampling are required.
S306: and respectively calculating the average coordinates of the target boundary points belonging to the same vertex angle, and defining the average coordinates as the central position of the vertex angle, thereby obtaining the central positions of the four vertex angles.
In this embodiment, before calculating the average coordinates of the target boundary points belonging to the same vertex angle, it is further determined whether the target boundary points belong to the same vertex angle, which specifically includes:
s3061: according to the formula
Figure BDA0003297350370000091
Calculating the distance d between the boundary points of the targetij
S3062: if d isij< delta, then the vertex (x)i,yi) And (x)j,yj) Belonging to the same corner, where δ is the subset S0The medium maximum pitch is 2 times the pitch of the adjacent boundary points.
S4: presetting a radius value, selecting vertex angle areas at the center positions of four vertex angles on the characteristic image, and constructing a local mask of the vertex angle areas of the packaging box;
s5: multiplying the local mask and the characteristic image of the top corner area of the packaging box to obtain a top corner characteristic image of the packaging box;
s6: collecting a tablet personal computer packing box sample with a collision corner flaw, and acquiring a vertex angle characteristic image set of the packing box according to the steps S1-S5;
In this embodiment, steps S1 to S5 are executed multiple times to obtain the vertex angle characteristic image dataset F = {G_0, G_1, ..., G_n}.
S7: designing a lightweight deep neural network based on a YOLOV framework, constructing a detection model of the corner collision flaws of the packaging box, and training the detection model according to a top angle characteristic image set;
in this embodiment, the step S7 specifically includes:
s701: designing a lightweight deep neural network Model based on a YOLOV framework; in this embodiment, the optimal depth of the deep neural network Model is 27 layers.
S702: sequentially marking collision angle information of the top angle characteristic images in the top angle characteristic image set, selecting two thirds of the top angle characteristic images to form a model training set, and forming a model testing set by the remaining one third of the top angle characteristic images;
s703: setting corresponding network parameters and training parameters based on the Model, inputting a Model training set into the Model for training, and obtaining a collision angle flaw detection Model PreModel;
s704: inputting the model test set into a collision angle flaw detection model PreModel for detection, comparing a detection result with the marked collision angle information, and calculating a model error epsilon;
s705: judging whether epsilon is more than c, if so, obtaining a high-precision collision angle flaw detection Model Pre-Model; if the judgment result is no, returning to the step S703; where c represents the industry standard error.
In this embodiment, when the determination result is negative, the process returns to step S703, adjusts the network parameters and the training parameters, and performs training and detection again according to the training set and the test set until the error of the detection model is within the standard error range, so as to obtain a high-precision detection model.
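The retraining loop of S703-S705 (train, measure ε on the test set, repeat until ε falls below c) can be sketched generically; `train_step` and `evaluate` are hypothetical stand-ins for the YOLOV-based model's training and evaluation routines, and the simulated error schedule is purely illustrative.

```python
def train_until_accurate(train_step, evaluate, c, max_rounds=100):
    """Repeat S703-S705: retrain while the test error eps is not below the
    industry standard error c."""
    for round_no in range(1, max_rounds + 1):
        model = train_step(round_no)   # hypothetical training routine
        eps = evaluate(model)          # hypothetical test-set evaluation
        if eps < c:
            return model, eps, round_no
    raise RuntimeError("error never fell below the standard error c")

# Simulated run: the error halves each round; standard error c = 0.05
model, eps, rounds = train_until_accurate(
    train_step=lambda r: {"round": r},
    evaluate=lambda m: 0.4 / (2 ** m["round"]),
    c=0.05)
```

In a real setting the parameter adjustment between rounds would happen inside `train_step`, as described for step S703.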
In this embodiment, the high-precision corner collision flaw detection Model Pre-Model is a detection Model PreModel that is obtained by continuously training the detection Model PreModel and meets the error criterion, i.e., epsilon < c.
S8: and detecting the packaging box according to the trained packaging box corner collision flaw detection model.
In this embodiment, a 27-layer lightweight network based on the YOLOV framework is constructed, which ensures detection precision and training speed at the same time, and training verification with sample data ensures that a high-precision detection model for the tablet computer packaging box is finally obtained.
The embodiment of the invention also provides a device for detecting the corner collision flaw of the packaging box, which comprises:
the characteristic fusion module is used for acquiring a detection image of the packaging box and processing the detection image to acquire a characteristic image;
specifically, the feature fusion module performs feature fusion through steps S101 to S104 in the above-described packing box corner collision flaw detection method embodiment to obtain a feature image;
the image processing module is used for extracting a binary image of the characteristic image and performing a closed operation on the binary image to obtain an image mask; searching the boundary of the packaging box in the image mask to obtain a boundary point set, carrying out equidistant sparse sampling on the boundary point set multiple times, and determining the central positions of the four vertex angles of the packaging box based on the slopes of the lines connecting adjacent boundary points; presetting a radius value, selecting vertex angle areas around the central positions of the four vertex angles on the characteristic image, and constructing a local mask of the vertex angle areas of the packaging box; and multiplying the local mask of the vertex angle areas by the characteristic image to obtain the vertex angle characteristic images of the packaging box;
specifically, the image processing module in the embodiment of the invention performs image processing through steps S2-S5 in the embodiment of the method for detecting the corner collision flaws of the packing box to obtain a characteristic image of a top corner of the packing box;
the data acquisition module is used for acquiring a packing box sample with a collision angle flaw and acquiring a top angle characteristic image set of the packing box;
specifically, after a data acquisition module acquires a packing box sample with a corner collision flaw, a feature fusion module and an image processing module are used for performing feature extraction and image processing through steps S1-S5 in the packing box corner collision flaw detection method embodiment to obtain a vertex angle feature image set of the packing box;
the model building module is used for designing a lightweight deep neural network based on a YOLOV framework, building a detection model of the corner collision flaws of the packaging box and training the detection model according to a vertex angle characteristic image set;
specifically, the model construction module of the embodiment of the invention constructs and trains the model through steps S701 to S705 in the embodiment of the method for detecting corner collision flaws of the packaging box to obtain a detection model of the corner collision flaws of the packaging box;
and the detection module is used for detecting the packaging box according to the trained packaging box corner collision flaw detection model.
The invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above method for detecting corner collision flaws of a packaging box.
The detection method provided by the embodiment of the invention is based on local positioning and local segmentation, and adopts a lightweight learning network with a non-end-to-end training and application strategy. It takes the enhanced image gradient feature map directly as input, avoiding the design of deeper learning-module layers, and can quickly and efficiently detect mixed, multi-scale corner collision flaws. Owing to the high extensibility of the designed deep network model, it can also be used for transfer-learning applications on other types of problems.
The above description covers only preferred embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any modification or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A method for detecting corner collision flaws of a packaging box is characterized by comprising the following steps:
S1: acquiring a detection image of the packaging box; processing the detection image to obtain a characteristic image;
S2: extracting a binary image of the characteristic image, and performing a closed operation on the binary image to obtain an image mask;
S3: searching the boundary of the packing box in the image mask and acquiring a boundary point set, carrying out equidistant sparse sampling on the boundary point set for multiple times, and determining the central positions of the four vertex angles of the packing box based on the slope of the connecting line of adjacent boundary points;
S4: presetting a radius value, selecting vertex angle areas at the center positions of the four vertex angles on the characteristic image, and constructing a local mask of the vertex angle areas of the packaging box;
S5: multiplying the local mask and the characteristic image of the top corner area of the packaging box to obtain a top corner characteristic image of the packaging box;
S6: collecting packaging box samples with corner collision flaws, and acquiring a vertex angle characteristic image set of the packaging box according to the steps S1-S5;
S7: designing a lightweight deep neural network based on a YOLOV framework, constructing a detection model of the corner collision flaws of the packaging box, and training the detection model according to the vertex angle characteristic image set;
S8: detecting the packaging box according to the trained packaging box corner collision flaw detection model.
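Steps S4 and S5 above can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation: the circular shape of the vertex angle regions, the function names, and the test geometry are our own assumptions.

```python
import numpy as np

def corner_region_mask(shape, centers, radius):
    """Build a binary local mask keeping only circular regions of the
    given radius around each vertex-angle center (step S4).
    Centers are (cx, cy) pixel coordinates; the circle shape is an
    assumption, as the patent only specifies a preset radius value."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=np.uint8)
    for cx, cy in centers:
        mask |= ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(np.uint8)
    return mask

def corner_feature_image(feature_img, centers, radius):
    """Multiply the local mask with the feature image (step S5) so that
    only the vertex-angle regions survive."""
    mask = corner_region_mask(feature_img.shape, centers, radius)
    return feature_img * mask
```

Applied to a feature image, every pixel outside the four preset-radius corner regions is zeroed, which is what lets the later model focus on corner flaws only.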
2. The method for detecting corner collision defects of packaging boxes according to claim 1, wherein the step S1 specifically comprises:
S101: acquiring two detection images of the packing box with the same size, the two detection images being an image of the same packing surface of the packing box shot in a high-exposure mode and an image shot in a low-exposure mode;
S102: respectively carrying out same-scale nearest-neighbour interpolation compression on the two detection images to obtain two compressed images;
S103: respectively calculating the gradients of the two compressed images in combination with high-pass filtering to obtain two gradient images;
S104: performing highlight fusion on the two gradient images by using Wiener filtering to obtain the characteristic image.
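A minimal sketch of the feature fusion of steps S101-S104, under stated assumptions: nearest-neighbour compression is approximated by pixel striding, the high-pass-filtered gradient by a gradient-magnitude image, and the Wiener-filter highlight fusion by an elementwise maximum, since the patent does not give the filter parameters. All function names are hypothetical.

```python
import numpy as np

def nn_compress(img, factor):
    """Same-scale nearest-neighbour compression (step S102),
    approximated by keeping every `factor`-th pixel in each direction."""
    return img[::factor, ::factor]

def gradient_image(img):
    """Gradient magnitude, a stand-in for the high-pass-filtered
    gradient computation of step S103."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse_exposures(high_exp, low_exp, factor=2):
    """Fuse the high- and low-exposure shots into one feature image.
    The patent uses Wiener-filter highlight fusion (step S104); an
    elementwise maximum is used here as a simple placeholder."""
    g1 = gradient_image(nn_compress(high_exp, factor))
    g2 = gradient_image(nn_compress(low_exp, factor))
    return np.maximum(g1, g2)
```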
3. The method of claim 2, wherein the detection image is a gray-scale image.
4. The method for detecting corner collision defects of packaging boxes according to claim 1, wherein the step S3 specifically comprises:
S301: searching the boundary of the packing box in the image mask to obtain a boundary point set S:
S = {(xi, yi) | i = 1, 2, ..., N},
where N is the number of boundary points and xi, yi are respectively the horizontal and vertical coordinates of the i-th boundary point;
S302: carrying out equidistant sparse sampling on the boundary point set S to obtain a boundary point subset S0:
S0 = {(xj, yj) | j = 1, 2, ..., n}, where n is the number of samples;
S303: computing the slope kj of the line connecting two adjacent points in the boundary point subset S0:
kj = (yj − yj−1)/(xj − xj−1), (j = 1, 2, ..., n−1);
S304: finding and recording the target boundary points satisfying 1 < |arctan((kj − kj+1)/(1 + kj·kj+1))| < 1.6;
S305: repeating the steps S302-S304 to carry out equidistant sparse sampling on the boundary point set for multiple times to obtain a target boundary point set X:
X = {(xk, yk) | (xk, yk) ∈ S, k = 1, 2, ...};
S306: respectively calculating the average coordinates of the target boundary points belonging to the same vertex angle and defining the average coordinates as the central position of that vertex angle, thereby obtaining the central positions of the four vertex angles.
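One sampling pass of steps S302-S304 can be sketched as follows. The helper names, the epsilon guard against vertical segments, and the skipped wrap-around at the first/last sample are our own assumptions for this illustration.

```python
import numpy as np

def slope(p, q):
    """Slope of the segment p -> q; a tiny epsilon guards vertical lines."""
    dx = q[0] - p[0]
    return (q[1] - p[1]) / (dx if dx != 0 else 1e-9)

def corner_candidates(boundary, step=5, lo=1.0, hi=1.6):
    """One equidistant sparse-sampling pass of steps S302-S304:
    sample every `step`-th boundary point, compute the slopes k1, k2 of
    the segments on either side of each sample, and keep the points
    whose turning angle |arctan((k1 - k2)/(1 + k1*k2))| lies in
    (lo, hi) radians, which captures near-right-angle corners."""
    pts = list(boundary)[::step]
    hits = []
    for j in range(1, len(pts) - 1):
        k1 = slope(pts[j - 1], pts[j])
        k2 = slope(pts[j], pts[j + 1])
        denom = 1.0 + k1 * k2
        ang = abs(np.arctan((k1 - k2) / (denom if denom != 0 else 1e-12)))
        if lo < ang < hi:
            hits.append(pts[j])
    return hits
```

For a right-angle box corner the two segment slopes are perpendicular, so the turning angle is close to π/2 ≈ 1.57, which falls inside the claimed (1, 1.6) window.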
5. The method of claim 4, wherein before calculating the average coordinates of the target boundary points belonging to the same vertex angle, the method further comprises determining whether the target boundary points belong to the same vertex angle by:
S3061: computing the distance dij between target boundary points according to the formula
dij = √((xi − xj)² + (yi − yj)²);
S3062: if dij < δ, the target boundary points (xi, yi) and (xj, yj) belong to the same vertex angle, where δ is 2 times the maximum spacing between adjacent boundary points in the subset S0.
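The same-vertex-angle test of steps S3061-S3062 and the averaging of step S306 can be sketched as follows; the greedy grouping strategy and the function names are our own assumptions.

```python
import math

def group_corner_points(points, delta):
    """Greedy grouping per steps S3061-S3062: two target boundary
    points belong to the same vertex angle when their Euclidean
    distance d_ij is below the threshold delta."""
    groups = []
    for p in points:
        for g in groups:
            if any(math.dist(p, q) < delta for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

def corner_center(group):
    """Average coordinates of one group, i.e. the central position of
    that vertex angle (step S306)."""
    xs, ys = zip(*group)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```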
6. The method of claim 1, wherein the extracting of the binary image of the characteristic image in step S2 specifically comprises: carrying out median filtering on the characteristic image and extracting edges by adopting a Canny operator to obtain the binary image of the characteristic image.
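The closed operation of step S2, which fills small gaps in the binary boundary image before it is used as a mask, can be sketched with a 3x3 structuring element. This is a plain-NumPy stand-in for a morphology library; the kernel size is our assumption.

```python
import numpy as np

def dilate(img):
    """3x3 binary dilation: a pixel turns on if any 8-neighbour is on."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out

def erode(img):
    """3x3 binary erosion: a pixel stays on only if all 8-neighbours are on."""
    padded = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out

def close_binary(img):
    """Closing = dilation followed by erosion (step S2): fills small
    holes in the box boundary so the image mask is continuous."""
    return erode(dilate(img))
```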
7. The method for detecting corner collision defects of packaging boxes according to claim 1, wherein the step S7 specifically comprises:
S701: designing a lightweight deep neural network Model based on a YOLOV framework;
S702: sequentially marking the corner collision information of the top angle characteristic images in the top angle characteristic image set, selecting two thirds of the top angle characteristic images to form a model training set, and forming a model test set from the remaining one third;
S703: setting corresponding network parameters and training parameters based on the Model, and inputting the model training set into the Model for training to obtain a collision angle flaw detection model PreModel;
S704: inputting the model test set into the collision angle flaw detection model PreModel for detection, comparing the detection result with the marked collision angle information, and calculating a model error ε;
S705: judging whether ε > c; if so, returning to the step S703; if not, obtaining the high-precision collision angle flaw detection model PreModel; where c represents the industry-standard error.
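The data split of step S702 and the error-threshold training loop of steps S703-S705 can be sketched as follows. `train_fn` and `eval_fn` are placeholders, since the patent does not detail the YOLOV-based model's training and evaluation routines.

```python
import random

def split_dataset(images, seed=0):
    """Two-thirds / one-third split of the labelled vertex-angle
    feature images into training and test sets (step S702)."""
    items = list(images)
    random.Random(seed).shuffle(items)
    cut = (2 * len(items)) // 3
    return items[:cut], items[cut:]

def train_until_accurate(train_fn, eval_fn, c, max_rounds=10):
    """Training loop of steps S703-S705: keep retraining while the
    measured model error eps exceeds the industry-standard error c."""
    model, eps = None, float("inf")
    for _ in range(max_rounds):
        model = train_fn()     # step S703: train on the training set
        eps = eval_fn(model)   # step S704: measure error on the test set
        if eps <= c:           # step S705: stop once within tolerance
            break
    return model, eps
```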
8. The method of claim 7, wherein the deep neural network Model has a network depth of 27 layers.
9. A device for detecting corner collision flaws of a packaging box, characterized by comprising:
the characteristic fusion module is used for acquiring a detection image of the packaging box and processing the detection image to acquire a characteristic image;
the image processing module is used for extracting a binary image of the characteristic image and performing closed operation on the binary image to obtain an image mask; searching the boundary of the packing box in an image mask to obtain a boundary point set, carrying out equidistant sparse sampling on the boundary point set for multiple times, and determining the central positions of four vertex angles of the packing box based on the slope of the connecting line of adjacent boundary points; presetting a radius value, selecting vertex angle areas at the center positions of four vertex angles on the characteristic image, and constructing a local mask of the vertex angle areas of the packaging box; multiplying the local mask and the characteristic image of the top corner area of the packaging box to obtain a top corner characteristic image of the packaging box;
the data acquisition module is used for acquiring a packing box sample with a collision angle flaw and acquiring a top angle characteristic image set of the packing box;
the model building module is used for designing a lightweight deep neural network based on a YOLOV framework, building a detection model of the corner collision flaws of the packaging box and training the detection model according to a vertex angle characteristic image set;
and the detection module is used for detecting the packaging box according to the trained packaging box corner collision flaw detection model.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for detecting a corner impact defect of a package according to any one of claims 1 to 8.
CN202111189749.1A 2021-10-11 2021-10-11 Method and device for detecting corner collision flaw of packaging box Pending CN114092396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111189749.1A CN114092396A (en) 2021-10-11 2021-10-11 Method and device for detecting corner collision flaw of packaging box

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111189749.1A CN114092396A (en) 2021-10-11 2021-10-11 Method and device for detecting corner collision flaw of packaging box

Publications (1)

Publication Number Publication Date
CN114092396A true CN114092396A (en) 2022-02-25

Family

ID=80296739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111189749.1A Pending CN114092396A (en) 2021-10-11 2021-10-11 Method and device for detecting corner collision flaw of packaging box

Country Status (1)

Country Link
CN (1) CN114092396A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882033A (en) * 2022-07-11 2022-08-09 心鉴智控(深圳)科技有限公司 Flaw online detection method and system for medical packaging box product
CN114882033B (en) * 2022-07-11 2022-09-20 心鉴智控(深圳)科技有限公司 Flaw online detection method and system for medical packaging box product

Similar Documents

Publication Publication Date Title
CN111062915B (en) Real-time steel pipe defect detection method based on improved YOLOv3 model
CN108520274B (en) High-reflectivity surface defect detection method based on image processing and neural network classification
CN107230203B (en) Casting defect identification method based on human eye visual attention mechanism
CN111462120B (en) Defect detection method, device, medium and equipment based on semantic segmentation model
CN111783772A (en) Grabbing detection method based on RP-ResNet network
Garfo et al. Defect detection on 3d print products and in concrete structures using image processing and convolution neural network
CN110443791B (en) Workpiece detection method and device based on deep learning network
CN109284779A (en) Object detecting method based on the full convolutional network of depth
CN103424409A (en) Vision detecting system based on DSP
CN111932511B (en) Electronic component quality detection method and system based on deep learning
CN111127417B (en) Printing defect detection method based on SIFT feature matching and SSD algorithm improvement
CN109584206B (en) Method for synthesizing training sample of neural network in part surface flaw detection
CN112200790B (en) Cloth defect detection method, device and medium
CN113538331A (en) Metal surface damage target detection and identification method, device, equipment and storage medium
CN113012153A (en) Aluminum profile flaw detection method
CN115100116A (en) Plate defect detection method based on three-dimensional point cloud
CN114092396A (en) Method and device for detecting corner collision flaw of packaging box
CN113496480A (en) Method for detecting weld image defects
CN115546202B (en) Tray detection and positioning method for unmanned forklift
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method
CN116309817A (en) Tray detection and positioning method based on RGB-D camera
CN114066810A (en) Method and device for detecting concave-convex point defects of packaging box
CN114187269B (en) Rapid detection method for surface defect edge of small component
CN115330705A (en) Skin paint surface defect detection method based on adaptive weighting template NCC
Xue et al. Detection of Various Types of Metal Surface Defects Based on Image Processing.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination