CN115880699B - Food packaging bag detection method and system - Google Patents

Food packaging bag detection method and system

Info

Publication number
CN115880699B
CN115880699B
Authority
CN
China
Prior art keywords
pixel points
skeleton
distance
pixel
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310197298.9A
Other languages
Chinese (zh)
Other versions
CN115880699A (en)
Inventor
王俊凤
胥秀丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Laiwu District Comprehensive Inspection And Testing Center
Original Assignee
Jinan Laiwu District Comprehensive Inspection And Testing Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Laiwu District Comprehensive Inspection And Testing Center filed Critical Jinan Laiwu District Comprehensive Inspection And Testing Center
Priority to CN202310197298.9A priority Critical patent/CN115880699B/en
Publication of CN115880699A publication Critical patent/CN115880699A/en
Application granted granted Critical
Publication of CN115880699B publication Critical patent/CN115880699B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a food packaging bag detection method and system. Firstly, acquiring a code spraying area in an image of a food packaging bag; obtaining the skeleton fitting degree of the pixel points according to the distance between the pixel points and other pixel points and the similarity degree between the normal vectors of the pixel points; calculating the skeleton information density of the pixel points based on the skeleton fitting degree of the pixel points and the color feature similarity of the adjacent pixel points; calculating the distribution distance of the pixel points according to the density difference of skeleton information between the pixel points and the central point of each optimal window, and constructing a distance field according to the distribution distance; and extracting a skeleton extraction result from the distance field, and judging the code spraying quality of the code spraying area. The invention realizes the detection of the code spraying on the food packaging bag by utilizing the improved K3M sequential iterative algorithm, so that the food packaging bag belonging to the defective product can be removed from the production line conveniently.

Description

Food packaging bag detection method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a food packaging bag detection method and system.
Background
As material living standards rise, people pay more attention to information such as the ingredients and quality safety of foods. To facilitate the preservation and storage of food, the importance of food packaging bags has become increasingly prominent, the variety of food packaging bag designs keeps growing, and the demand for food packaging bag inspection grows accordingly. The inspection of food packaging bags mainly comprises printing inspection, size inspection and performance inspection of the packaging bag. The printing inspection standard requires that patterns do not smear, that marks, patterns and characters are clear and complete, and that the surface of the packaging bag is flat and odorless. Once the inspection of a food packaging bag fails to meet the standard, for example if the production date and shelf life printed on the bag are unclear or incomplete, the safety and quality of the food may be jeopardized and consumers' willingness to purchase is seriously affected, so inspection of food packaging bags is necessary.
At present, common printing inspection of food packaging bags generally uses the K3M sequential iterative algorithm to extract the skeleton of the code-sprayed characters, compares the extracted character skeleton with a template skeleton, and judges whether the printing is defective. When extracting the skeletons of code-sprayed characters of different shapes or of irregular codes, this method easily deviates from the character center, which affects the accuracy of code spraying detection.
Disclosure of Invention
In order to solve the technical problem that the traditional K3M sequential iterative algorithm is easy to deviate from the center when performing skeleton extraction and affects the accuracy of code spraying detection, the invention aims to provide a food packaging bag detection method and a system, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for detecting a food packaging bag, including the steps of:
acquiring a code spraying area containing code spraying characters in an image of the food packaging bag;
clustering the pixel points on each code-spraying character based on the normal direction corresponding to the pixel points for each code-spraying character in the code-spraying area to obtain a plurality of divided character areas; for each pixel point, obtaining the skeleton fitting degree of the pixel points in the food packaging bag image according to the distance between the pixel point and other pixel points and the similarity degree between the normal vectors of the pixel points;
calculating the skeleton information density of the pixel points based on the skeleton fitting degree of the pixel points and the color feature similarity of the adjacent pixel points; sliding a sliding window on the segmented character region, and screening an optimal window based on the sum of skeleton information densities of pixel points in the window; calculating the distribution distance of the pixel points in the food packaging bag image according to the density difference of the skeleton information between the pixel points and the central point of each optimal window, and constructing a distance field according to the distribution distance; screening out partial pixel points in the distance field based on the distribution distance of the pixel points in the neighborhood of the pixel points in the distance field, and extracting a skeleton extraction result from the distance field;
And judging the code spraying quality of the code spraying area based on the skeleton extraction result.
Preferably, the obtaining the skeleton fitting degree of the pixels in the food packaging bag image according to the distance between the pixels and other pixels and the similarity degree between the normal vectors of the pixels includes:
the initial skeleton fitting degree of a pixel point i in a segmented character region a is calculated from the following quantities: the Euclidean distance from pixel point i in segmented character region a to the x-th contour point in the region contour sequence; n, the number of contour points in the region contour sequence corresponding to segmented character region a; the vertical distance from pixel point i to the straight line on which its normal vector lies; the natural constant; the normal vector of pixel point i in segmented character region a; the normal vector of the maximum-curvature point in segmented character region a; the cosine similarity of these two normal vectors; a variance function; and the variance, taken over all pixel points on the contour sequence corresponding to pixel point i in segmented character region a, of their distances to the region contour sequence;
the ratio of the initial skeleton fitting degree corresponding to each pixel point to the sum of the initial skeleton fitting degrees corresponding to all the pixel points is the skeleton fitting degree corresponding to the pixel points.
Preferably, the calculating the skeleton information density of the pixel based on the skeleton fitting degree of the pixel and the color feature similarity of the adjacent pixel includes:
selecting any pixel point as a target pixel point, and calculating the maximum gray difference value in a window where the target pixel point is positioned; quantizing the pixel points in the window into a plurality of quantized colors through the color aggregation vector; calculating the difference value of the quantity of the polymerized pixel points and the non-polymerized pixel points in each quantized color; taking the product of the sum of the quantity differences corresponding to all quantized colors in the window and the maximum gray level difference as the adjustment information density; taking the product of the adjustment information density and the framework fitting degree as framework information quantity; and taking the reciprocal of the variance of the skeleton information quantity of each pixel point in the window corresponding to the target pixel point as the skeleton information density of the target pixel point.
Preferably, the screening the partial pixels in the distance field based on the distribution distance of the pixels in the neighborhood of the pixels in the distance field, and extracting the skeleton extraction result from the distance field includes:
for each segmented character area, acquiring the average value of the distribution distance of each pixel point in the segmented character area as an initial segmentation distance, and carrying out K3M sequential iterative judgment on the pixel points whose distribution distance in the distance field equals the initial segmentation distance; selecting any pixel point whose distribution distance is the initial segmentation distance as a first pixel point, and deleting the first pixel point when the number of pixel points in its neighborhood whose distribution distance is equal to that of the first pixel point reaches a preset pixel point number threshold; when all the pixel points whose distribution distance is the initial segmentation distance have been deleted, deleting all the pixel points whose distribution distance is greater than the initial segmentation distance;
Iterating an initial segmentation threshold value by a preset step length, deleting pixel points, marking the reserved pixel points when the deleted pixel points and reserved pixel points exist in the pixel points corresponding to the distribution distance, and obtaining a primary extraction result of the skeleton when the initial segmentation threshold value of the iteration reaches the minimum value of the distribution distance; when the width of each line of the skeleton in the primary extraction result is larger than 1 pixel point, reserving the pixel point with the maximum skeleton information density, and when the width of each line of the skeleton in the primary extraction result is 1 pixel point, stopping iteration to obtain a skeleton extraction result corresponding to the segmentation character region.
Preferably, the clustering of the pixel points on each code-spraying character based on the normal direction corresponding to the pixel points to obtain a plurality of divided character areas includes:
clustering the pixel points on each code-spraying character by using a k-means clustering algorithm based on the normal direction corresponding to the pixel points to obtain a plurality of clustering clusters; the pixels in a cluster form a segmented character region.
Preferably, the screening the optimal window based on the sum of skeleton information densities of the pixel points in the window includes:
and calculating the sum of the skeleton information densities of the pixel points in each window, and selecting, in descending order of the sum of skeleton information densities, a preset number of windows as the optimal windows.
Preferably, the calculating the distribution distance of the pixels in the image of the food packaging bag according to the density difference of the skeleton information between the pixels and the center point of each optimal window includes:
and for each pixel point, calculating the average value of the difference value of the skeleton information density of the center point of each pixel point and each optimal window, wherein the average value of the difference values is the distribution distance corresponding to the pixel point.
Preferably, the constructing a distance field according to the distribution distance includes:
and taking the distribution distance corresponding to each pixel point as the value corresponding to the pixel point on the distance field.
Preferably, the determining the code spraying quality of the code spraying area based on the skeleton extraction result includes:
when the skeleton extraction result is completely overlapped with the template skeleton in the template library, judging that the code spraying region corresponding to the skeleton extraction result has no code spraying defect; when the skeleton extraction result is not completely overlapped with all the template skeletons in the template library, judging that the code spraying region corresponding to the skeleton extraction result has the code spraying defect.
In a second aspect, an embodiment of the present invention provides a food packaging bag detection system, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements a food packaging bag detection method as described above when executing the computer program.
The embodiment of the invention has at least the following beneficial effects:
the invention relates to the technical field of image processing. When the traditional K3M sequential iterative algorithm is used for obtaining the skeleton extraction result of the code spraying characters on the food packaging bag, if the code spraying defect occurs, the skeleton extraction of the code spraying characters in different shapes or irregularly deviates from the center easily, and the accuracy of code spraying detection is affected. The invention constructs the skeleton fitting degree of the pixel points based on the influence of the pixel points on the character skeleton extraction result, namely, the skeleton fitting degree of the pixel points is obtained based on the distance between the pixel points and other pixel points and the similarity between normal vectors of the pixel points, the skeleton fitting degree realizes the evaluation of skeleton extraction results of the pixel points in the partitioned character area, and the problem that a small amount of pixel points in the partitioned character area are used as edge points to be deleted when the code spraying defect exists in the code spraying area is solved; the method comprises the steps of constructing skeleton information density corresponding to a pixel point based on image information in a pixel point neighborhood, namely, calculating the skeleton information density of the pixel point based on the skeleton fitting degree of the pixel point and the color feature similarity of adjacent pixel points, calculating the distribution distance of the pixel point based on the difference value of the skeleton information densities of the pixel points, constructing a distance field, accelerating the judgment speed of the pixel point through the distance field, and restricting the range of the pixel point in a K3M sequential iteration process, so that the finally obtained skeleton extraction result is more in accordance with the actual code spraying condition on a food packaging bag. And finally, judging the code spraying quality of the code spraying area based on the skeleton extraction result. The invention realizes the detection of the code spraying on the food packaging bag by utilizing the improved K3M sequential iterative algorithm, improves the accuracy of the code spraying detection, and is convenient for removing the food packaging bag belonging to defective products from the production line in the follow-up process.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for detecting a food packaging bag according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a plurality of divided character areas corresponding to the code-spraying character 3 according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a distance field provided by an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following description refers to the specific implementation, structure, characteristics and effects of a method and a system for detecting food packaging bags according to the invention in combination with the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment of the invention provides a specific implementation of a food packaging bag detection method and system, which is suitable for scenes in which the printed characters on food packaging bags are detected. In this scene, the code spraying on food packaging bags printed with information such as the production date is detected. To solve the technical problem that the traditional K3M sequential iterative algorithm easily deviates from the center during skeleton extraction and affects the accuracy of code spraying detection, after the food packaging bag image is acquired and the corresponding code spraying area is obtained, a skeleton fitting degree and a skeleton information density are constructed based on the influence of each pixel point on the character skeleton extraction result and on the image information in its neighborhood, and a distance field corresponding to each segmented character region is constructed from the skeleton information density. The distance field accelerates the judgment of the pixel points and constrains the range of pixel points considered in the K3M sequential iteration, so that the extracted code-sprayed character skeleton better matches the actual code spraying on the food packaging bag.
The invention provides a food packaging bag detection method and a system specific scheme by combining the drawings.
Referring to fig. 1, a method flowchart of a method for detecting a food packaging bag according to an embodiment of the invention is shown, and the method includes the following steps:
step S100, a code spraying area containing code spraying characters in the food packaging bag image is obtained.
The produced food packaging bags are conveyed to the detection area. Bar light sources are placed at suitable positions around the conveyor belt to avoid the influence of uneven illumination on the subsequent code spraying detection; the bar light source is a bar LED light source. An industrial CCD camera is placed above the conveyor belt to acquire the surface image of every food packaging bag; this surface image is an RGB image. During acquisition, interference from surrounding noise is unavoidable and degrades the quality of the obtained surface image, so the surface image needs to be denoised. Among common image denoising techniques, the invention uses bilateral filtering to denoise the obtained surface image, because bilateral filtering retains as much image information in the surface image as possible. It should be noted that bilateral filtering denoising is a known technique, and the specific process is not described in detail. The preprocessed surface image is taken as the food packaging bag image and is used for the subsequent code spraying detection.
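A minimal sketch of the acquisition-side preprocessing described above, assuming OpenCV is available; the bilateral-filter parameters are illustrative defaults, not values specified by the patent.

```python
import cv2

def preprocess_surface_image(path: str):
    """Denoise a captured RGB surface image with bilateral filtering.

    The diameter and the two sigma values are illustrative defaults,
    not parameters taken from the patent.
    """
    bgr = cv2.imread(path)  # frame from the industrial CCD camera
    if bgr is None:
        raise FileNotFoundError(path)
    # Bilateral filtering smooths sensor noise while preserving character edges.
    denoised = cv2.bilateralFilter(bgr, d=9, sigmaColor=75, sigmaSpace=75)
    return denoised
```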
The method realizes that the image of the food packaging bag to be detected is obtained by utilizing an industrial camera and a denoising technology and is used for the code spraying detection on the subsequent food packaging bag.
The invention aims to detect the code spraying on the produced food packaging bag and remove the food packaging bag with the code spraying defect. The code spraying result on the standard food packaging bag shows that the characters are clear and the intervals between the adjacent characters are uniform in the image of the food packaging bag, and the characteristic is destroyed by the code spraying defect. The code spraying defects comprise four defects of missing printing, incomplete printing, misprinting and pollution, so that if the code spraying defects occur in the code spraying region to be detected, certain differences exist between image information statistical results such as tone histograms, brightness histograms and the like corresponding to the code spraying region and the standard code spraying region in a color space. In addition, if skeleton extraction is performed on the code spraying region where the code spraying is located, pixel points capable of representing the code spraying character skeleton are gradually screened, and character skeleton results formed by the pixel points are also quite different. Therefore, the detection result of the code spraying area to be detected is obtained based on the image information of the code spraying characters in the food packaging bag image and the character skeleton extraction result, and the food packaging bag with the code spraying defect is removed.
Template matching is often used for image defect detection by matching the image to be detected with a template image. In the invention, however, the image to be detected is a food packaging bag image and the goal is to detect whether the code-sprayed characters carrying the production information are defective, and direct template matching cannot be performed for two reasons. First, the code-sprayed characters change: different production information corresponds to different code-sprayed content, and the position of the code spraying is not fixed. Second, the area occupied by the code-sprayed characters in the food packaging bag image is too small; matching against the template image with a sliding window would require too much computation, most of the covered area does not contain code-sprayed characters, and the requirement of fast matching cannot be met. Therefore, the invention first obtains the code spraying area where the code-sprayed characters are located from the food packaging bag image, and then refines the matching object to a width of one pixel with an improved K3M algorithm, which improves the detection rate.
The food packaging bag image contains various code-sprayed content, and the invention detects code spraying defects of the production information. The code spraying defects comprise four types: character missing printing, character incomplete printing, character misprinting and character pollution. The code spraying of the production information follows certain standards, such as the size of the code spraying area and the spacing between adjacent characters. If any of the four code spraying defects occurs, the code spraying result of the production information in the food packaging bag image differs from the standard code spraying result, and different types of defects produce different differences. For example, when a character of the production information is missing or damaged, part of the pixels belonging to the character are lost, and the pixel statistics of the defective area differ obviously from the pixel statistics of the same spatial region in the standard code spraying result.
Further, a neural network is utilized to obtain a code spraying area containing code spraying characters in the food packaging bag image, and the code spraying area is specifically:
A large number of food packaging bag images are obtained and manually annotated, with the target area and the non-target area represented by 0 and 1 respectively. The image annotations and the food packaging bag images are encoded by one-hot encoding, and a large number of encoded samples are used as the input of the neural network; the output of the neural network is the target area in the food packaging bag image, namely the code spraying area containing the code-sprayed characters. The neural network used in the embodiment of the invention is ResNet50, and the optimization algorithm is the Adam algorithm. It should be noted that training of a neural network is a technique well known to those skilled in the art, and the specific process is not repeated. After training, the neural network outputs the code spraying area containing the code-sprayed characters of the production information in the food packaging bag image. The code spraying area in the food packaging bag image is thus obtained through the neural network.
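The patent only states that a ResNet50 network trained with the Adam optimizer on one-hot encoded annotations outputs the code spraying area. The sketch below is an assumption-laden illustration that uses torchvision's FCN head on a ResNet50 backbone to predict a binary mask and derive the bounding box of the code spraying area; the model choice, class layout and function names are not from the patent, and the weights would have to come from training on the annotated bag images.

```python
import torch
import numpy as np
from torchvision.models.segmentation import fcn_resnet50

# Two classes: background (non-target) and code spraying area (target).
# Using an FCN head on ResNet50 is an assumption; the patent only names ResNet50 + Adam.
model = fcn_resnet50(weights=None, num_classes=2)
model.eval()

def code_spraying_bbox(image_chw: torch.Tensor):
    """Return (row0, row1, col0, col1) of the predicted code spraying region,
    or None if no pixel is predicted as target."""
    with torch.no_grad():
        logits = model(image_chw.unsqueeze(0))["out"]   # shape (1, 2, H, W)
    mask = logits.argmax(dim=1).squeeze(0).numpy().astype(bool)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max() + 1, cols.min(), cols.max() + 1
```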
Step S200, clustering the pixel points on each code-spraying character based on the normal direction corresponding to the pixel points for each code-spraying character in the code-spraying area to obtain a plurality of divided character areas; and for each pixel point, obtaining the skeleton fitting degree of the pixel points in the food packaging bag image according to the distance between the pixel point and other pixel points and the similarity degree between the normal vectors of the pixel points.
Further, based on the segmentation result of each code-spraying character in the code-spraying area, the skeleton fitting degree corresponding to each pixel point is calculated.
The code spraying area where the code-sprayed characters of the production information are located is obtained from the food packaging bag image according to the above steps, and the code-sprayed characters in the code spraying area are further analyzed and detected. Among the four types of code spraying defects, character misprinting, character incompleteness and character pollution affect the outline and shape of individual characters, while character missing printing is the loss of an entire character and changes the outline and shape of the whole code-sprayed content in the code spraying area. Therefore, the invention extracts the character skeleton in the code spraying area and performs the subsequent code spraying defect detection on the skeleton extraction result.
The basic principle of the K3M sequential iterative algorithm for extracting the image skeleton is that if eight adjacent pixels around a pixel are all object pixels, the pixel is the point in the image to be deleted. The method judges the pixel points through continuous iteration, the skeleton is easily deviated from the center of the image without distance constraint, and if the code spraying defect occurs, the K3M sequential iteration algorithm cannot be strictly carried out according to the direction of image contraction in the inward pushing process, so that the positioning of the code spraying character skeleton is not accurate enough.
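For intuition, the following minimal sketch shows the neighbor-counting test this principle relies on, applied to a binary (0/1) foreground image; the function name and the threshold parameter are illustrative, and the sketch is not the full multi-phase K3M border-marking procedure.

```python
import numpy as np

def deletable_by_neighbor_count(binary: np.ndarray, r: int, c: int,
                                min_neighbors: int = 3) -> bool:
    """Simplified K3M-style test: a foreground pixel whose 8-neighborhood contains
    at least `min_neighbors` foreground pixels is a candidate for deletion.
    The patent later applies a threshold of 3 neighbors with equal distribution
    distance; here the count is simply a parameter."""
    if binary[r, c] == 0:
        return False
    window = binary[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return int(window.sum()) - 1 >= min_neighbors   # subtract the center pixel itself
```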
The code spraying content of the production information exists in the food packaging bag image in the form of numbers, and the code spraying content comprises 10 numbers from 0 to 9, and the skeleton extraction of code spraying characters can be understood as the refinement process of different areas in each number. And as a result of extracting the skeleton with higher precision, the direction of the skeleton is consistent with the direction on the corresponding character area. Therefore, the invention considers that each code-spraying character in the code-spraying area is subjected to segmentation processing, and the purpose of segmentation is to gather pixel points with similar normal directions in each code-spraying character area together, so that the extracted character skeleton is more fit with the structure of the code-spraying character. Based on the normal direction corresponding to each pixel point in the code spraying character, dividing each code spraying character in the code spraying area by using a k-means clustering algorithm, and specifically: based on the normal direction corresponding to the pixel points, clustering the pixel points on each code-spraying character by using a k-means clustering algorithm to obtain a plurality of clustering clusters, wherein the pixel points in one clustering cluster form a segmentation character area. In the embodiment of the present invention, the value of k takes 4, that is, each code-spraying character is divided into 4 divided character areas, and in other embodiments, the practitioner can adjust the value according to the actual situation. Referring to fig. 2, fig. 2 is a schematic diagram of a plurality of divided character areas corresponding to the code injection character 3. The solid arrow intersecting the normal vector of the divided character area 1 in fig. 2 is a tangent to the point of maximum curvature in the divided character area 1; the solid arrow intersecting the normal vector of the divided character area 2 in fig. 2 is a tangent to the point of maximum curvature in the divided character area 2; the solid arrow intersecting the normal vector of the divided character area 3 in fig. 2 is a tangent to the point of maximum curvature in the divided character area 3; the solid arrow intersecting the normal vector of the divided character area 4 in fig. 2 is a tangent to the point of maximum curvature in the divided character area 4; the 3 broken line segments without arrows in fig. 2 divide the code-sprayed character 3 into 4 divided character areas, which are divided character area 1, divided character area 2, divided character area 3, and divided character area 4, respectively.
The method for calculating the measurement distance in the k-means clustering algorithm in the embodiment of the invention comprises the following steps: and calculating the open square of the Euclidean distance between the pixel point and the seed point of the segmentation character area, and calculating the open square of the gray level difference value between the pixel point and the seed point of the segmentation character area, wherein the sum of the open square of the Euclidean distance and the open square of the gray level difference value is used as the corresponding measurement distance of the pixel point.
The calculation formula of the measurement distance is:

$D(z,i)=\sqrt{d(z,i)}+\sqrt{\left|g_i-g_z\right|}$

where $D(z,i)$ is the measurement distance between the seed point z and the pixel point i in the k-means clustering algorithm; z is the seed point of the segmented character region; $d(z,i)$ is the Euclidean distance between the seed point z and the pixel point i; and $\left|g_i-g_z\right|$ is the gray difference between pixel point i and the seed point. In the embodiment of the invention, the same measurement distance formula is used when segmenting all code-sprayed characters.
The measurement distance reflects both the spatial distance between the pixel point and the seed point and the gray difference between them, which makes the classification of the pixel points possible. The gray difference is taken into account because pixel points that are affected by the illumination in a more similar way, or whose sprayed ink color is more similar, are more likely to belong to the same segmented character region. Therefore, the smaller the Euclidean distance between the pixel point and the seed point, the smaller the measurement distance, and vice versa: the Euclidean distance and the measurement distance are directly proportional. Similarly, the smaller the gray difference between the pixel point and the seed point, the smaller the measurement distance, and vice versa: the gray difference and the measurement distance are directly proportional.
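The clustering step can be sketched as follows. This is a minimal custom k-means loop that uses the measurement distance above on pixel coordinates and gray values; the patent additionally clusters by the normal direction of the pixel points, which is omitted here, and all function names are illustrative.

```python
import numpy as np

def measurement_distance(coords, grays, seed_xy, seed_gray):
    """Square root of the Euclidean distance to the seed plus square root of the gray difference."""
    d_e = np.linalg.norm(coords - seed_xy, axis=1)
    return np.sqrt(d_e) + np.sqrt(np.abs(grays - seed_gray))

def kmeans_segment_character(coords, grays, k=4, iters=50,
                             rng=np.random.default_rng(0)):
    """Cluster the pixel points of one code-sprayed character into k segmented
    character regions using the measurement distance above (illustrative loop)."""
    seeds = rng.choice(len(coords), size=k, replace=False)
    seed_xy = coords[seeds].astype(float)
    seed_gray = grays[seeds].astype(float)
    labels = np.zeros(len(coords), dtype=int)
    for _ in range(iters):
        dists = np.stack([measurement_distance(coords, grays, seed_xy[j], seed_gray[j])
                          for j in range(k)], axis=1)
        labels = dists.argmin(axis=1)
        for j in range(k):                      # update seeds as cluster means
            if np.any(labels == j):
                seed_xy[j] = coords[labels == j].mean(axis=0)
                seed_gray[j] = grays[labels == j].mean()
    return labels
```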
For each pixel point, obtaining the skeleton fitting degree of the pixel point in the food packaging bag image according to the distance between the pixel point and other pixel points and the similarity degree between the normal vectors of the pixel points, and specifically:
First, the region contour sequence of the segmented character region where the pixel point is located is acquired, specifically: the edge point with the maximum curvature in the segmented character region is obtained, 10 edge points are taken on each of its left and right sides, a local contour containing 21 pixel points is formed together with the maximum-curvature edge point, and the local contour is sorted from left to right to obtain the region contour sequence. Then the contour sequence corresponding to the pixel point is acquired in the same way: with the pixel point as the center, 10 pixel points are taken on each side, and the local contour containing 21 pixel points, including the center pixel point, is sorted from left to right to form the contour sequence of the pixel point.
The initial skeleton fitting degree of a pixel point i in a segmented character region a is calculated from the following quantities: the Euclidean distance from pixel point i in segmented character region a to the x-th contour point in the region contour sequence; n, the number of contour points in the region contour sequence corresponding to segmented character region a; the vertical distance from pixel point i to the straight line on which its normal vector lies; the natural constant; the normal vector of pixel point i in segmented character region a; the normal vector of the maximum-curvature point in segmented character region a; the cosine similarity of these two normal vectors; a variance function; and the variance, taken over all pixel points on the contour sequence corresponding to pixel point i in segmented character region a, of their distances to the region contour sequence.
The ratio of the initial skeleton fitting degree corresponding to each pixel point to the sum of the initial skeleton fitting degrees corresponding to all the pixel points is the skeleton fitting degree corresponding to the pixel points.
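The normalization in the preceding sentence is straightforward; in the sketch below, `initial_fitness` is a hypothetical array holding whatever initial skeleton fitting degree values have been computed for the pixel points of one segmented character region.

```python
import numpy as np

def normalize_fitting_degree(initial_fitness: np.ndarray) -> np.ndarray:
    """Skeleton fitting degree = each initial value divided by the sum over all pixel points."""
    total = initial_fitness.sum()
    return initial_fitness / total if total != 0 else np.zeros_like(initial_fitness)
```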
The farther a pixel point in the segmented character region is from the seed point of the cluster corresponding to that region, the farther it is from the possible skeleton refinement result of the region, and the less it affects the final position of the skeleton. The larger the difference between the normal direction of a pixel point and the normal direction of the segmented character region, the more likely the pixel point lies near the edge of the region, the weaker its constraint on the refinement result, and the less it affects the final position of the skeleton. The skeleton fitting degree is therefore jointly determined by the skeleton distance weight and the skeleton direction weight of the pixel points in the segmented character region.
The degree of fluctuation in the distances of pixel point i reflects how strongly the projection distance and the straight-line distance between the pixel point and the region contour change: the larger this change, the more the skeleton obtained by treating the pixel point as a skeleton point differs from the local contour. The variance is taken over all pixel points on the contour sequence corresponding to pixel point i in segmented character region a relative to the region contour sequence. The reason for this calculation is that the region contour sequence, composed of the tangent point and its adjacent edge points, has a large influence on the contour structure of the segmented character region and therefore should also have a large influence on the skeleton extraction result; for adjacent pixel points that truly lie on the skeleton extraction result, the distance between their contour sequences and the region contour sequence should remain almost unchanged. The tangent point here is the point of maximum curvature.
The skeleton distance weight of pixel point i reflects the instability of the contour distance between the pixel point and the pixel points around the normal vector of the segmented character region: the larger the skeleton distance weight, the more likely the pixel point lies on the edge of the segmented character region and the more it should be deleted during skeleton extraction.
The normal vector of the maximum-curvature point in segmented character region a is used as the normal vector of the segmented character region; the curvature can be computed with a least-squares fit, which is a technique known to those skilled in the art and is not repeated here.
The skeleton direction weight of pixel point i reflects the similarity between the normal direction of pixel point i in the code spraying area and the normal direction of segmented character region a: the larger the skeleton direction weight, the closer pixel point i is to the two side edges of segmented character region a and the smaller its influence on the skeleton extraction result.
The skeleton fitting degree reflects the influence of pixel point i in segmented character region a on the skeleton extraction result of that region. The farther pixel point i is from the distribution of the pixel points in segmented character region a, the more likely pixel points outside the region exist around it, that is, the larger the distance fluctuation and the smaller the skeleton distance weight; the more similar the normal direction of pixel point i is to the normal direction of the segmented character region, the closer pixel point i is to the maximum-curvature pixel point on the region edge, that is, the larger the cosine similarity, the smaller the influence on the refinement result and the smaller the skeleton fitting degree.
When a code spraying defect exists in the code spraying area, the traditional K3M erodes the 3×3 neighborhood centered on each pixel point, but not every pixel point needs its 3×3 neighborhood eroded. For example, an incomplete character breaks the segmented character region into disconnected parts; if every pixel point is eroded, each disconnected part yields its own skeleton extraction result and pixel points on the real skeleton are eroded in the process, that is, several discontinuous skeleton extraction results may exist within one segmented character region, which causes partial distortion of the character skeleton. The skeleton fitting degree evaluates the contribution of every pixel point in the segmented character region to the skeleton extraction result and takes into account the influence of pixel points of different segmented character regions on that result.
And obtaining the skeleton fitting degree of the pixel points through segmentation processing of each code-spraying character in the code-spraying area.
Step S300, calculating the skeleton information density of the pixel points based on the skeleton fitting degree of the pixel points and the color feature similarity of the adjacent pixel points; sliding a sliding window on the code spraying area, and screening an optimal window based on the sum of skeleton information densities of pixel points in the window; calculating the distribution distance of the pixel points in the food packaging bag image according to the density difference of the skeleton information between the pixel points and the central point of each optimal window, and constructing a distance field according to the distribution distance; and screening out partial pixel points in the distance field based on the distribution distance of the pixel points in the neighborhood of the pixel points in the distance field, and extracting a skeleton extraction result from the distance field.
The skeleton fitting degree of the pixel points in each segmented character region of the code spraying area is obtained according to step S200, and the local density of image information of the pixel points is then obtained from the skeleton fitting degree and the color feature similarity of the adjacent pixel points. The reason for obtaining this local density is that the K3M sequential iteration judges whether 3 or more adjacent pixel points exist in the neighborhood of a pixel point and deletes the pixel point if they do; the deleted points therefore all satisfy this condition and show a certain similarity. For the one-pixel-wide image that finally remains as the skeleton, the skeleton pixel points usually lie in the central area of the image, and in the invention the pixel points on the skeleton are usually close to the center point of each segmented character region. The center point of each segmented character region can therefore be taken as a target position, the distance from the remaining pixel points to the target position can be calculated, and a distance field can be obtained from the distances of all pixel points to the target positions. Based on this analysis, the skeleton information density is constructed to represent how similar the pixel points in the neighborhood of each pixel point are with respect to the skeleton extraction information, specifically: the skeleton information density of a pixel point is calculated from its skeleton fitting degree and the color feature similarity of the adjacent pixel points. The skeleton information density is obtained as follows: select any pixel point as the target pixel point and calculate the maximum gray difference within the window where the target pixel point is located; quantize the pixel points in the window into several quantized colors through the color aggregation vector, each quantized color containing aggregated and non-aggregated pixel points; calculate the difference between the numbers of aggregated and non-aggregated pixel points for each quantized color; take the product of the sum of these differences over all quantized colors in the window and the maximum gray difference as the adjustment information density; take the product of the adjustment information density and the skeleton fitting degree as the skeleton information quantity; and take the reciprocal of the variance of the skeleton information quantities of the pixel points in the window corresponding to the target pixel point as the skeleton information density of the target pixel point.
The calculation formula of the skeleton information quantity is:

$G_i^a = W_i^a \cdot \Delta g_w \cdot \sum_{b=1}^{B}\left(n_b^{c}-n_b^{nc}\right)$

where $G_i^a$ is the skeleton information quantity of pixel point i in segmented character region a; $W_i^a$ is the skeleton fitting degree of pixel point i in segmented character region a; $\Delta g_w$ is the maximum gray difference of pixel point i in segmented character region a within the window w; B is the number of quantized colors of the pixel points in window w; $n_b^{c}$ is the number of aggregated pixel points of the b-th quantized color in window w; $n_b^{nc}$ is the number of non-aggregated pixel points of the b-th quantized color in window w; $\left(n_b^{c}-n_b^{nc}\right)$ is the difference between the numbers of aggregated and non-aggregated pixel points of the b-th quantized color in window w; and $\Delta g_w\cdot\sum_{b=1}^{B}\left(n_b^{c}-n_b^{nc}\right)$ is the adjustment information density of the window w where pixel point i is located.
The reciprocal of the variance of the skeleton information quantities of the pixel points in the window w corresponding to pixel point i is taken as the skeleton information density of pixel point i.
In the embodiment of the present invention, the number of quantization colors in the window is set to be 6, that is, the value of B is set to be 6 in the embodiment of the present invention. In the embodiment of the invention, the magnitude of the aggregation threshold value takes an empirical value of 4, and in other embodiments, the practitioner can adjust the value according to the actual situation. It should be noted that the color aggregate vector is a well-known technique for those skilled in the art, and will not be described herein.
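As one possible reading of the window statistics described above, the sketch below quantizes a grayscale window into B bins in place of a full color quantization, separates aggregated from non-aggregated pixels with connected components and the aggregation threshold of 4, and combines the result with the maximum gray difference and the skeleton fitting degree; the use of gray bins, absolute count differences, the epsilon in the variance reciprocal and all function names are assumptions, not details from the patent.

```python
import numpy as np
from scipy.ndimage import label

def adjustment_info_density(window_gray: np.ndarray, n_colors: int = 6,
                            agg_threshold: int = 4) -> float:
    """Quantize the window into n_colors gray bins, split each bin into aggregated
    (connected component size >= agg_threshold) and non-aggregated pixels, and
    multiply the summed count differences by the maximum gray difference."""
    q = np.minimum((window_gray.astype(float) / 256.0 * n_colors).astype(int),
                   n_colors - 1)
    diff_sum = 0
    for b in range(n_colors):
        mask = (q == b)
        if not mask.any():
            continue
        comp, _ = label(mask)                       # connected components of this bin
        sizes = np.bincount(comp.ravel())[1:]
        aggregated = int(sizes[sizes >= agg_threshold].sum())
        non_aggregated = int(mask.sum()) - aggregated
        diff_sum += abs(aggregated - non_aggregated)
    max_gray_diff = int(window_gray.max()) - int(window_gray.min())
    return max_gray_diff * diff_sum

def skeleton_info_amount(fitting_degree: float, window_gray: np.ndarray) -> float:
    """Skeleton information quantity = fitting degree * adjustment information density."""
    return fitting_degree * adjustment_info_density(window_gray)

def skeleton_info_density(info_amounts_in_window: np.ndarray) -> float:
    """Reciprocal of the variance of the skeleton information quantities in the window
    (a small epsilon guards against a zero variance)."""
    return 1.0 / (np.var(info_amounts_in_window) + 1e-9)
```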
The local information difference of pixel point i reflects how much the image information of the pixel point differs from that of the pixel points in its neighborhood: the larger the local information difference, the larger this degree of difference. The skeleton fitting degree reflects how much the pixel point influences the skeleton extraction result of segmented character region a, and the adjustment information density reflects the distribution of image information around pixel point i in segmented character region a, so their product is taken as the skeleton information quantity of pixel point i in segmented character region a, which facilitates the subsequent calculation of the skeleton information density. The skeleton information density reflects how likely pixel point i is to become a pixel point of the skeleton extraction result of segmented character region a: the greater the skeleton information density, the more likely pixel point i becomes a pixel point of the skeleton extraction result.
In other words, the more the image information of a pixel point differs from that of the pixel points in its neighborhood, the larger the corresponding local information difference and the more dissimilar pixel point i is to the pixel points in its neighborhood; the larger the variance of the skeleton information quantities, the smaller the skeleton information density, which indicates that the pixel points in the neighborhood of pixel point i influence the skeleton extraction result to different degrees and that the pixel point is less likely to become a pixel point of the skeleton extraction result.
Further, the skeleton information density of every pixel point is calculated. Since the pixel points that finally form the skeleton are usually close to the center line of the image, their skeleton information density is considered the largest, gradually decreasing towards the pixel points on both sides in the horizontal direction, and such pixel points are taken as candidate points for the target positions.
A sliding window is slid over the segmented character region and the optimal windows are screened based on the sum of the skeleton information densities of the pixel points in each window, specifically: the sum of the skeleton information densities of the pixel points in each window is calculated, and a preset number of windows are taken as the optimal windows in descending order of this sum. In the embodiment of the present invention the preset number is 5; in other embodiments the practitioner can adjust the value according to the actual situation. That is, for each segmented character region, the M windows with the largest accumulated skeleton information density are obtained; in the invention M takes the empirical value 5, and the set of the center points of these 5 optimal windows is recorded.
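A sketch of the optimal-window screening, assuming a square window and no overlap suppression between the selected windows (the patent does not discuss either point); the window size and function names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def optimal_window_centers(density: np.ndarray, win: int = 5, m: int = 5):
    """Return the (row, col) centers of the m windows whose summed skeleton
    information density is largest."""
    density = np.asarray(density, dtype=float)
    # uniform_filter gives the windowed mean; multiplying by win*win gives the sum.
    window_sum = uniform_filter(density, size=win, mode="constant") * (win * win)
    top = np.argsort(window_sum, axis=None)[::-1][:m]
    return [tuple(np.unravel_index(i, density.shape)) for i in top]
```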
For each code-spraying character, skeleton extraction is carried out according to the distance field and skeleton information density, specifically, firstly, the distribution distance is calculated, the distance field is constructed, and the minimum value of the distribution distance is obtained from the distance field. And calculating the distribution distance of the pixel points in the food packaging bag image according to the density difference of the skeleton information between the pixel points and the central point of each optimal window, and constructing a distance field according to the distribution distance. The acquisition method of the distribution distance comprises the following steps: and for each pixel point, calculating the average value of the difference value of the skeleton information density of the center point of each pixel point and each optimal window, wherein the average value of the difference values is the distribution distance corresponding to the pixel point. And taking the distribution distance corresponding to each pixel point as the value corresponding to the pixel point on the distance field. Please refer to fig. 3, which is a schematic diagram of a distance field.
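A sketch of the distance-field construction, under the assumption that the "difference" between a pixel's skeleton information density and the densities at the optimal window centers is taken as an absolute difference before averaging.

```python
import numpy as np

def distance_field(density: np.ndarray, centers) -> np.ndarray:
    """Distribution distance of every pixel: mean absolute difference between its
    skeleton information density and the densities at the optimal window centers."""
    density = np.asarray(density, dtype=float)
    center_vals = np.array([density[r, c] for r, c in centers])   # densities of the M centers
    return np.abs(density[..., None] - center_vals[None, None, :]).mean(axis=-1)
```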
Further, for each segmented character region, acquiring a mean value of the distribution distance of each pixel point in the segmented character region as an initial segmentation distance, and performing K3M sequential iterative judgment on the pixel points with the distribution distance of the initial segmentation distance in the distance field; selecting any pixel point with the distribution distance being the initial segmentation distance as a first pixel point, and deleting the first pixel point when the number of the pixel points with the distribution distance being equal to a threshold value of the number of the preset pixel points exists in the neighborhood of the first pixel point; when all the pixels with the distribution distance being the initial segmentation distance are deleted, all the pixels with the distribution distance being greater than the initial segmentation distance are deleted, the value of the threshold value of the number of the preset pixels is 3 according to the conventional method in the embodiment of the invention, and in other embodiments, the operator can adjust the value according to the actual situation. Iterating an initial segmentation threshold value by a preset step length, deleting pixel points, marking the reserved pixel points when the deleted pixel points and reserved pixel points exist in the pixel points corresponding to the distribution distance, and obtaining a primary extraction result of the skeleton when the initial segmentation threshold value of continuous iteration reaches the minimum value of the distribution distance; when the width of each line of the skeleton in the primary extraction result is larger than 1 pixel point, reserving the pixel point with the maximum skeleton information density, and when the width of each line of the skeleton in the primary extraction result is 1 pixel point, stopping iteration to obtain a skeleton extraction result corresponding to the segmentation character region. In the embodiment of the present invention, the preset step length is 1, and in other embodiments, the practitioner can adjust the value according to the actual situation.
That is, the initial segmentation distance is set as the mean of the distribution distances of the pixel points in the segmented character region to which they belong. The pixel points x whose distribution distance in the distance field equals the initial segmentation distance are subjected to K3M sequential iterative judgment: pixel point x is deleted or retained according to whether 3 or more pixel points with an equal distribution distance exist in its neighborhood. When all pixel points whose distribution distance equals the initial segmentation distance have been judged for deletion, all pixel points whose distribution distance is greater than the initial segmentation distance are also deleted. It should be noted that K3M sequential iteration is a technique known in the art, and its details are not repeated. The segmentation distance is then iterated with a step length of 1 and pixel points are deleted accordingly; if both deleted and retained pixel points exist for one segmentation distance, the retained pixel points are marked. When the segmentation distance reaches the minimum value of the distribution distance, a preliminary extraction result of the skeleton is obtained. If the width of any row of the skeleton is greater than 1 pixel point, only the pixel point with the maximum skeleton information density is retained, which guarantees a skeleton width of 1. The iteration stops when the skeleton width is 1 pixel point, and the skeleton extraction result corresponding to segmented character region a is obtained.
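A heavily simplified sketch of the level-by-level deletion just described: it peels the distance field from the initial segmentation distance down to the minimum distribution distance with a neighbor-count test and then thins rows wider than one pixel by keeping the densest pixel. It is an illustration of the idea only, not a faithful re-implementation of the patent's modified K3M procedure, and every name in it is an assumption.

```python
import numpy as np

def peel_distance_field(dist: np.ndarray, density: np.ndarray, region_mask: np.ndarray,
                        step: float = 1.0, min_neighbors: int = 3) -> np.ndarray:
    """Return a boolean skeleton-candidate mask for one segmented character region."""
    keep = region_mask.copy()
    t = dist[region_mask].mean()                    # initial segmentation distance
    t_min = dist[region_mask].min()
    keep &= dist <= t                               # drop everything farther than t
    while t >= t_min:
        level = keep & np.isclose(dist, t)          # pixels at the current distance level
        for r, c in zip(*np.nonzero(level)):
            nb = keep[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            same = np.isclose(dist[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2], t) & nb
            if int(same.sum()) - 1 >= min_neighbors:   # >= 3 neighbors at the same distance
                keep[r, c] = False
        t -= step
    # Thin rows still wider than one pixel by keeping the pixel with the largest density.
    for r in range(keep.shape[0]):
        cols = np.nonzero(keep[r])[0]
        if len(cols) > 1:
            best = cols[np.argmax(density[r, cols])]
            keep[r, :] = False
            keep[r, best] = True
    return keep
```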
According to the same steps, the skeleton extraction results of all characters in the code spraying area are obtained, and the skeleton extraction result of the code spraying area is recorded as TF. In this way, the code spraying characters in the food packaging bag image are processed according to the skeleton fitting degree and the distance field, and the skeleton extraction result TF of each code spraying character in the code spraying area is obtained.
And step S400, judging the code spraying quality of the code spraying area based on the skeleton extraction result.
Skeleton extraction is performed on all standard code spraying results of the production information according to step S300, the skeleton extraction result corresponding to each piece of production information is taken as a template skeleton, and all template skeletons are put into a template library. The overlapping area between the skeleton extraction result TF of the code spraying area in the acquired food packaging bag image to be detected and each template skeleton in the template library is then detected. In the embodiment of the invention, the detection result of the overlapping area is obtained by using the scale-invariant feature transform; the acquisition of scale-invariant features is a technique well known to those skilled in the art, and the specific process is not repeated here. The code spraying detection result of the packaging bag is acquired according to the detection result of the overlapping area: when the skeleton extraction result TF completely overlaps one template skeleton in the template library, the code spraying on the packaging bag is normal, and it is judged that the code spraying area corresponding to the skeleton extraction result has no code spraying defect; when the skeleton extraction result TF does not completely overlap any template skeleton in the template library, a code spraying defect exists on the packaging bag, and it is judged that the code spraying area corresponding to the skeleton extraction result has a code spraying defect.
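As a purely illustrative aid, the sketch below shows a minimal way to realise the final overlap decision, assuming the skeleton extraction result TF and the template skeletons have already been brought into a common image frame; the scale-invariant feature step mentioned above is omitted, and the function name codes_match and the tol parameter are assumptions, not part of the invention.

```python
import numpy as np

def codes_match(skeleton_tf, template_skeletons, tol=0):
    """Return True if the extracted skeleton completely overlaps any template skeleton.

    skeleton_tf        : boolean 2D array, skeleton extraction result TF.
    template_skeletons : iterable of boolean 2D arrays, assumed to be already
                         registered to the same frame as skeleton_tf.
    tol                : number of non-overlapping pixel points tolerated (0 = exact).
    """
    for template in template_skeletons:
        if template.shape != skeleton_tf.shape:
            continue
        mismatched = np.logical_xor(skeleton_tf, template).sum()
        if mismatched <= tol:
            return True    # complete overlap: no code spraying defect
    return False           # no template matched: code spraying defect
```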
According to steps S100-S400, the food packaging bags with code spraying defects on the food packaging bag production line are obtained. Since a food packaging bag with a code spraying defect cannot realize safe food storage, it is classified as a defective product on the production line, and a rejecting device is placed at the tail end of the detection system to reject it. Among the rejecting devices commonly used in current processing production, in consideration of the weight of the food packaging bag and the subsequent treatment process, the embodiment of the invention selects a mechanical sucker to suck up the food packaging bag judged to be a defective product and put it into a storage box to complete the rejection; that is, the food packaging bag judged to have a code spraying defect is sucked up and put into the storage box. The rejection process of the defective packaging bag is as follows: when a packaging bag is conveyed below the sensor, the sensor outputs a high level to the switch controller; if the detection result indicates a defective packaging bag with a code spraying defect, the switch controller controls the mechanical sucker to reject the defective packaging bag, and a food packaging bag whose detection result shows no code spraying defect is conveyed onward to the next processing stage.
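For illustration, a minimal polling loop for the rejection stage described above might look as follows; sensor, controller, and inspect_bag are hypothetical interfaces standing in for the sensor, the switch controller, and the detection result, and the polling interval is an assumption.

```python
import time

def rejection_loop(sensor, controller, inspect_bag):
    """Illustrative control loop for the rejecting device described above.

    sensor.read()            -> True while a packaging bag is under the sensor (high level).
    controller.pick_to_box() -> drives the mechanical sucker to move the bag into the storage box.
    inspect_bag()            -> True when the current bag has a code spraying defect.
    All three callables are hypothetical interfaces, not part of the patent.
    """
    while True:
        if sensor.read():              # bag detected below the sensor
            if inspect_bag():          # code spraying defect found: defective product
                controller.pick_to_box()
            # otherwise the bag continues to the next processing stage
        time.sleep(0.01)               # polling interval (assumed)
```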
In summary, the present invention relates to the field of image processing technology. Firstly, a code spraying area containing code spraying characters in the food packaging bag image is acquired; for each code spraying character in the code spraying area, the pixel points on the code spraying character are clustered based on the normal direction corresponding to the pixel points to obtain a plurality of segmentation character regions; for each pixel point, the skeleton fitting degree of the pixel point in the food packaging bag image is obtained according to the distance between the pixel point and other pixel points and the similarity degree between the normal vectors of the pixel points; the skeleton information density of the pixel point is calculated based on the skeleton fitting degree of the pixel point and the color feature similarity of adjacent pixel points; a sliding window is slid over the code spraying area, and optimal windows are screened based on the sum of the skeleton information densities of the pixel points in each window; the distribution distance of the pixel points in the food packaging bag image is calculated according to the skeleton information density difference between the pixel points and the central point of each optimal window, and a distance field is constructed according to the distribution distances; part of the pixel points in the distance field are screened out based on the distribution distances of the pixel points in their neighborhoods, and the skeleton extraction result is extracted from the distance field; finally, the code spraying quality of the code spraying area is judged based on the skeleton extraction result. The improved K3M sequential iterative algorithm thus facilitates detection of the code spraying on the food packaging bag, so that food packaging bags belonging to defective products can be removed from the production line.
The embodiment of the invention also provides a food packaging bag detection system, which comprises a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program. Since the food packaging bag detection method has been described in detail above, the description is not repeated here.
It should be noted that the order of the embodiments of the present invention is for description only and does not represent the relative merits of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, each embodiment is described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments.

Claims (8)

1. The food packaging bag detection method is characterized by comprising the following steps of:
acquiring a code spraying area containing code spraying characters in an image of the food packaging bag;
clustering the pixel points on each code-spraying character based on the normal direction corresponding to the pixel points for each code-spraying character in the code-spraying area to obtain a plurality of divided character areas; for each pixel point, obtaining the skeleton fitting degree of the pixel points in the food packaging bag image according to the distance between the pixel point and other pixel points and the similarity degree between the normal vectors of the pixel points;
Calculating the skeleton information density of the pixel points based on the skeleton fitting degree of the pixel points and the color feature similarity of the adjacent pixel points; sliding a sliding window on the segmented character region, and screening an optimal window based on the sum of skeleton information densities of pixel points in the window; calculating the distribution distance of the pixel points in the food packaging bag image according to the density difference of the skeleton information between the pixel points and the central point of each optimal window, and constructing a distance field according to the distribution distance; screening out partial pixel points in the distance field based on the distribution distance of the pixel points in the neighborhood of the pixel points in the distance field, and extracting a skeleton extraction result from the distance field;
judging the code spraying quality of the code spraying area based on the skeleton extraction result;
the method for obtaining the skeleton fitting degree of the pixel points in the food packaging bag image according to the distance between the pixel points and other pixel points and the similarity degree between the normal vectors of the pixel points comprises the following steps:
the calculation formula of the initial skeleton fitting degree is as follows:

[formula provided as an image in the original document]

wherein the quantities in the formula are: the initial skeleton fitting degree of the pixel point i in the segmentation character region a; the Euclidean distance from the pixel point i in the segmentation character region a to the x-th contour point in the region contour sequence; N, the number of contour points in the region contour sequence corresponding to the segmentation character region a; the vertical distance from the pixel point i to the straight line where the normal vector is located; the natural constant; the normal vector of the pixel point i in the segmentation character region a; the normal vector of the maximum curvature point in the segmentation character region a; the cosine similarity between the normal vector of the pixel point i and the normal vector of the maximum curvature point; a variance function; and the variance of the Euclidean distances from the pixel point i in the segmentation character region a to all the contour points in the corresponding region contour sequence;
the ratio of the initial skeleton fitting degree corresponding to each pixel point to the sum of the initial skeleton fitting degrees corresponding to all the pixel points is the skeleton fitting degree corresponding to the pixel points;
the method for calculating the skeleton information density of the pixel points based on the skeleton fitting degree of the pixel points and the color feature similarity of the adjacent pixel points comprises the following steps: selecting any pixel point as a target pixel point, and calculating the maximum gray difference value in a window where the target pixel point is positioned; quantizing the pixel points in the window into a plurality of quantized colors through the color aggregation vector; calculating the difference value of the quantity of the polymerized pixel points and the non-polymerized pixel points in each quantized color; taking the product of the sum of the quantity differences corresponding to all quantized colors in the window and the maximum gray level difference as the adjustment information density; taking the product of the adjustment information density and the framework fitting degree as framework information quantity; and taking the reciprocal of the variance of the skeleton information quantity of each pixel point in the window corresponding to the target pixel point as the skeleton information density of the target pixel point.
2. The method for detecting food packaging bags according to claim 1, wherein the step of screening out a part of the pixels in the distance field based on the distribution distance of the pixels in the neighborhood of the pixels in the distance field, and extracting the skeleton extraction result from the distance field comprises the steps of:
for each segmented character area, acquiring the average value of the distribution distance of each pixel point in the segmented character area as an initial segmentation distance, and carrying out K3M sequential iterative judgment on the pixel points with the distribution distance of the initial segmentation distance in the distance field; selecting any pixel point with the distribution distance being the initial segmentation distance as a first pixel point, and deleting the first pixel point when the number of the pixel points with the distribution distance being equal to a threshold value of the number of the preset pixel points exists in the neighborhood of the first pixel point; when all the pixels with the distribution distance being the initial segmentation distance are deleted, all the pixels with the distribution distance being greater than the initial segmentation distance are deleted;
iterating an initial segmentation threshold value by a preset step length, deleting pixel points, marking the reserved pixel points when the deleted pixel points and reserved pixel points exist in the pixel points corresponding to the distribution distance, and obtaining a primary extraction result of the skeleton when the initial segmentation threshold value of the iteration reaches the minimum value of the distribution distance; when the width of each line of the skeleton in the primary extraction result is larger than 1 pixel point, reserving the pixel point with the maximum skeleton information density, and when the width of each line of the skeleton in the primary extraction result is 1 pixel point, stopping iteration to obtain a skeleton extraction result corresponding to the segmentation character region.
3. The method for detecting food packaging bags according to claim 1, wherein clustering the pixel points on each code-spraying character based on the normal direction corresponding to the pixel points to obtain a plurality of divided character areas comprises:
clustering the pixel points on each code-spraying character by using a k-means clustering algorithm based on the normal direction corresponding to the pixel points to obtain a plurality of clustering clusters; the pixels in a cluster form a segmented character region.
4. The method for detecting food packaging bags according to claim 1, wherein the screening the optimal window based on the sum of skeleton information densities of pixels in the window comprises:
and calculating the sum of the skeleton information densities of the pixel points in each window, and taking the window corresponding to the sum of the skeleton information densities of the threshold values of the preset number as the optimal window according to the sequence from the big to the small of the sum of the skeleton information densities.
5. The method for detecting a packaging bag for food according to claim 1, wherein the calculating the distribution distance of the pixels in the image of the packaging bag for food according to the density difference of the skeleton information between the pixels and the center point of each optimal window comprises:
And for each pixel point, calculating the average value of the difference value of the skeleton information density of the center point of each pixel point and each optimal window, wherein the average value of the difference values is the distribution distance corresponding to the pixel point.
6. The method of claim 1, wherein the constructing a distance field according to the distribution distance comprises:
and taking the distribution distance corresponding to each pixel point as the value corresponding to the pixel point on the distance field.
7. The method for detecting a packaging bag for food according to claim 1, wherein the determining the quality of the sprayed code in the sprayed code region based on the result of skeleton extraction comprises:
when the skeleton extraction result is completely overlapped with the template skeleton in the template library, judging that the code spraying region corresponding to the skeleton extraction result has no code spraying defect; when the skeleton extraction result is not completely overlapped with all the template skeletons in the template library, judging that the code spraying region corresponding to the skeleton extraction result has the code spraying defect.
8. A food package bag inspection system comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, performs the steps of a food package bag inspection method as claimed in any one of claims 1 to 7.
CN202310197298.9A 2023-03-03 2023-03-03 Food packaging bag detection method and system Active CN115880699B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310197298.9A CN115880699B (en) 2023-03-03 2023-03-03 Food packaging bag detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310197298.9A CN115880699B (en) 2023-03-03 2023-03-03 Food packaging bag detection method and system

Publications (2)

Publication Number Publication Date
CN115880699A CN115880699A (en) 2023-03-31
CN115880699B (en) 2023-05-09

Family

ID=85761903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310197298.9A Active CN115880699B (en) 2023-03-03 2023-03-03 Food packaging bag detection method and system

Country Status (1)

Country Link
CN (1) CN115880699B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152242B (en) * 2023-04-18 2023-07-18 济南市莱芜区综合检验检测中心 Visual detection system of natural leather defect for basketball
CN116205911B (en) * 2023-04-27 2023-07-18 济南市莱芜区综合检验检测中心 Machine vision-based method for detecting appearance defects of leather sports goods

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027563A (en) * 2019-12-09 2020-04-17 腾讯云计算(北京)有限责任公司 Text detection method, device and recognition system
CN115311292A (en) * 2022-10-12 2022-11-08 南通创铭伊诺机械有限公司 Strip steel surface defect detection method and system based on image processing
CN115620333A (en) * 2022-12-05 2023-01-17 蓝舰信息科技南京有限公司 Test paper automatic error correction method based on artificial intelligence
CN115690106A (en) * 2023-01-03 2023-02-03 菏泽城建新型工程材料有限公司 Deep-buried anchor sealing detection method based on computer vision

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4492258B2 (en) * 2004-08-26 2010-06-30 パナソニック電工株式会社 Character and figure recognition and inspection methods
ITRM20130022A1 (en) * 2013-01-11 2014-07-12 Natural Intelligent Technologies S R L PROCEDURE AND HAND-WRITING APPROVAL
US9911220B2 (en) * 2014-07-28 2018-03-06 Adobe Systems Incorporated Automatically determining correspondences between three-dimensional models
CN106504263B (en) * 2016-11-04 2019-07-12 辽宁工程技术大学 A kind of quick continuous boundary extracting method of image
CN107622263B (en) * 2017-02-20 2018-08-21 平安科技(深圳)有限公司 The character identifying method and device of document image
CN114005127A (en) * 2021-11-15 2022-02-01 中再云图技术有限公司 Image optical character recognition method based on deep learning, storage device and server
CN115619845A (en) * 2022-09-28 2023-01-17 上海致宇信息技术有限公司 Self-adaptive scanning document image inclination angle detection method
CN115272341B (en) * 2022-09-29 2022-12-27 华联机械集团有限公司 Packaging machine defect product detection method based on machine vision
CN115273088B (en) * 2022-09-30 2022-12-13 南通慕派商贸有限公司 Chinese character printing quality detection method based on machine vision
CN115601757A (en) * 2022-10-20 2023-01-13 上海致宇信息技术有限公司(Cn) Scanning document image inclination correction method based on segmented projection

Also Published As

Publication number Publication date
CN115880699A (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN115880699B (en) Food packaging bag detection method and system
CN115239735B (en) Communication cabinet surface defect detection method based on computer vision
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN111310558A (en) Pavement disease intelligent extraction method based on deep learning and image processing method
CN104680519B (en) Seven-piece puzzle recognition methods based on profile and color
CN114565614B (en) Injection molding surface defect analysis method and system based on machine vision
CN110427979B (en) Road water pit identification method based on K-Means clustering algorithm
CN114862855B (en) Textile defect detection method and system based on template matching
Aijazi et al. Detecting and analyzing corrosion spots on the hull of large marine vessels using colored 3D lidar point clouds
CN114926410A (en) Method for detecting appearance defects of brake disc
CN114882026A (en) Sensor shell defect detection method based on artificial intelligence
CN104778458A (en) Textile pattern retrieval method based on textural features
CN111753794A (en) Fruit quality classification method and device, electronic equipment and readable storage medium
CN114022439A (en) Flexible circuit board defect detection method based on morphological image processing
CN115272350A (en) Method for detecting production quality of computer PCB mainboard
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
JP4062987B2 (en) Image area dividing method, image area dividing apparatus, and image area dividing program
CN112102189B (en) Line structure light bar center line extraction method
CN117253024B (en) Industrial salt quality inspection control method and system based on machine vision
CN112784922A (en) Extraction and classification method of intelligent cloud medical images
CN104036232A (en) Image edge feature analysis-based necktie pattern retrieval method
CN108335296B (en) Polar plate identification device and method
US8699761B2 (en) Method for evaluating quality of image representing a fingerprint pattern
RU2163394C2 (en) Material entity identification method
CN115100608B (en) Electric drill shell glass fiber exposure identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant