CN116703899B - Bag type packaging machine product quality detection method based on image data - Google Patents

Bag type packaging machine product quality detection method based on image data

Info

Publication number
CN116703899B
CN116703899B (application CN202310966707.7A)
Authority
CN
China
Prior art keywords
area
preliminary
text
contrast
fusion
Prior art date
Legal status
Active
Application number
CN202310966707.7A
Other languages
Chinese (zh)
Other versions
CN116703899A (en)
Inventor
刘德成
孙鑫
冯成国
王建军
潘国强
Current Assignee
Qingdao Yilong Packaging Machinery Co ltd
Original Assignee
Qingdao Yilong Packaging Machinery Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Yilong Packaging Machinery Co ltd filed Critical Qingdao Yilong Packaging Machinery Co ltd
Priority to CN202310966707.7A
Publication of CN116703899A
Application granted
Publication of CN116703899B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/19007Matching; Proximity measures
    • G06V30/19093Proximity measures, i.e. similarity or distance measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a bag type packaging machine product quality detection method based on image data. The method comprises the steps of preliminarily dividing a panoramic surface gray image of a packaged product into different regions; obtaining the initial contrast of each pixel point; dividing the regions more accurately; obtaining complete text regions through fusion analysis, and adjusting the initial contrast of each pixel point in the complete text regions to obtain a final contrast, thereby obtaining the fusion contrast of each pixel point; and obtaining a comparison graph according to the fusion contrast, and detecting the quality of the packaged product according to the comparison graph. The invention completes the construction of the contrast by accurately dividing the regions and applying multiple factors, thereby improving the accuracy of packaged product quality detection.

Description

Bag type packaging machine product quality detection method based on image data
Technical Field
The invention relates to the technical field of image data processing, in particular to a bag type packaging machine product quality detection method based on image data.
Background
Packaging is an important carrier that protects a product and explains related information, and good packaging is a basic guarantee of product quality: it not only gives consumers a sense of security but also imposes processing requirements on product producers. During production, defects may appear on the surface of the packaged product in the packaging process due to the influence of external factors and faults of the bag type packaging machine. Defective packages trouble consumers and also harm merchants, so quality detection of the packaged product surface is required.
The contrast of the packaged product surface reflects the vividness of its surface color information, and controlling this contrast is an important step in quality detection. In the prior art, the surface information of a packaged product is rich and contains many regions with different contents, and the contrast influences of these regions differ; if the same contrast calculation method is applied to all regions, the acquired contrast is not accurate enough.
Disclosure of Invention
In order to solve the technical problem that the contrast is not high enough in acquisition accuracy due to the fact that the same contrast calculation method is adopted for different areas in the prior art, the invention aims to provide a bag type packaging machine product quality detection method based on image data, and the adopted technical scheme is as follows:
the invention provides a bag type packaging machine product quality detection method based on image data, which comprises the following steps:
acquiring a panoramic surface gray image of a packaged product of a bag type packaging machine, wherein the panoramic surface gray image comprises a background area, a preliminary graph area and a preliminary character area;
acquiring an initial contrast of each pixel point in a preset first-size neighborhood;
Scaling the preliminary graph area based on the preliminary standard character size to obtain a scaled graph area; screening the preliminary graph area according to the similarity degree of the scaled graph area and the preliminary character area with the preliminary standard character size to obtain a first character area;
sequentially carrying out fusion analysis on the first text region set and the preliminary text region set to respectively obtain a first complete text region and a second complete text region;
in the first complete text region set and the second complete text region set, aiming at each pixel point in each complete text region, according to the shape similarity of the complete text region and other complete text regions and the initial contrast of the pixel points at corresponding positions in other complete text regions, the initial contrast of each pixel point is adjusted, and the final contrast of each pixel point is obtained; taking the initial contrast of the pixel points in the non-text area as the final contrast;
obtaining the fusion contrast of each pixel point according to the final contrast of each pixel point in a neighborhood pixel point in a preset second-size neighborhood, and obtaining a surface comparison chart of the packaged product; and detecting the quality of the packaged product according to the surface comparison graph of the packaged product to obtain a detection result.
Further, the method for acquiring the initial contrast comprises the following steps:
in a preset first-size neighborhood taking each pixel point as a center, taking an average value of gray value differences of the neighborhood pixel points and the center pixel point as a neighborhood gray value difference of each pixel point;
and taking the ratio of the neighborhood gray level difference of each pixel point to the gray level value of the corresponding pixel point as the initial contrast of each pixel point.
Further, the method for obtaining the primary standard text size comprises the following steps:
acquiring the minimum circumscribed rectangles of all the preliminary text regions, acquiring the modes of the lengths and widths of all the minimum circumscribed rectangles, and taking the modes of the length and width as the preliminary standard text size.
Further, the method for acquiring the first text region includes:
obtaining the minimum circumscribed rectangle of all the preliminary graph areas, and taking the center point of the minimum circumscribed rectangle of the preliminary graph areas as the center, and carrying out equal-ratio scaling according to the preliminary standard character size to obtain a scaled graph area;
obtaining the similarity between each scaled graph area and each preliminary character area with the preliminary standard character size by using a template matching algorithm; and if the maximum similarity is greater than a preset threshold, the scaled graphic region is a first text region.
Further, the method of fusion analysis comprises:
acquiring the minimum circumscribed rectangle of all the text areas in the text area set, and taking the mode of the length and the width of the minimum circumscribed rectangle of all the text areas as a reference standard text size;
screening out a character area to be fused and a complete character area according to the size of the character area; performing preliminary fusion on two adjacent text areas to be fused to obtain a preliminary fusion area;
performing negative correlation mapping and normalization on the size difference between the size of the preliminary fusion area and the reference standard character size corresponding to the belonging character area set to obtain fusion size similarity;
taking the minimum distance between adjacent character areas with the reference standard character size as a reference distance; acquiring a distance difference between the reference distance and two text areas to be fused corresponding to the preliminary fusion area, and taking a ratio of the distance difference to the distance between the two corresponding text areas to be fused as a constraint distance;
taking the product of the similarity of the fusion size and the constraint distance as the fusion necessity, and judging whether the two corresponding text areas to be fused belong to the same text area according to the fusion necessity; and fusing the text regions to be fused, which belong to the same text region, to obtain a complete text region.
Further, the method for obtaining the shape similarity comprises the following steps:
the method comprises the steps of obtaining the number of pixel points in each row in each complete text area and the average gray value of the pixels in each row, and taking the product of the difference of the number of the pixel points in the corresponding row in the complete text area and the average gray value difference of the pixels in each row between every two as the difference of row information; and carrying out negative correlation mapping on the sum of the row information differences of each row in the complete text region and normalizing the sum to be the shape similarity.
Further, the method for obtaining the final contrast of each pixel point in the complete text region includes:
the final contrast of each pixel point $j$ in each complete text region is acquired as:

$$Z_j = V \cdot \frac{1}{n}\left( C_j + \sum_{l=1}^{n-1} X_l \cdot C_j^{\,l} \right)$$

wherein $Z_j$ is the final contrast of pixel point $j$ in the complete text region, $C_j$ is the initial contrast of pixel point $j$, $X_l$ is the shape similarity between the complete text region where pixel point $j$ is located and the $l$-th other complete text region, $C_j^{\,l}$ is the initial contrast of the co-located pixel point in the $l$-th other complete text region, $n$ is the number of complete text regions in the corresponding complete text region set, $\Sigma$ is the summation symbol, and $V$ is the fusion size similarity between the complete text region where pixel point $j$ is located and the corresponding reference standard text size.
Further, the fusion contrast obtaining method includes:
and taking the average value of the final contrast of the neighborhood pixel points of each pixel point in the preset second-size neighborhood as the fusion contrast of each pixel point.
Further, the method for obtaining the detection result comprises the following steps:
obtaining a surface comparison graph of the packaged product according to the fusion contrast of each pixel point; obtaining a surface gradient map of the packaged product by utilizing a Sobel operator; obtaining a surface texture feature map of the packaged product by using an LBP algorithm;
based on the phase spectrum of quaternion Fourier transform model PQFT, obtaining a packaged product surface fusion saliency map according to the packaged product surface comparison map, the packaged product surface gradient map and the packaged product surface texture feature map; and analyzing the packaged product surface fusion saliency map by using a neural network to obtain the defect regions of the packaged product surface, and detecting the quality of the packaged product according to the defect regions to obtain a detection result.
Further, the segmentation method of the background area, the preliminary graph area and the preliminary text area comprises the following steps:
detecting the panoramic surface gray image by using the Canny operator to obtain closed regions consisting of adjacent edge pixel points in the panoramic surface gray image, wherein the region outside the closed regions is the background region;
performing cluster analysis on the areas of the closed regions with K = 2 by using the K-means clustering algorithm, taking the area difference as the distance measure to obtain two categories; the closed regions in the category with the larger average area are preliminary graphic regions, and the closed regions in the category with the smaller average area are preliminary text regions.
The invention has the following beneficial effects:
the invention firstly carries out preliminary segmentation on the panoramic surface gray image of the packaged product, and the purpose of segmentation is to preliminarily divide different areas. Further, the preliminary graph area is further divided, the first character area is screened out, and the larger characters can be prevented from being misjudged as the graph area by further dividing the preliminary graph area, so that accurate segmentation of the areas is realized. Obtaining a complete text region through fusion analysis; through segmentation and identification of the complete text region, the subsequent contrast analysis process can conduct targeted analysis according to regions with different contents, and more accurate contrast can be obtained. In the contrast analysis process, the shape similarity represents the similarity degree between the complete text areas, so that the final contrast of each pixel point in the complete text area is obtained based on the contrast of the corresponding positions of other complete text areas and the corresponding shape similarity, so that the final contrast of the corresponding pixel point has higher reference property, and the contrast information at the corresponding positions can be represented more accurately. And the final contrast of each pixel point is further adjusted through the acquisition of the fusion contrast, so that contrast information in a contrast graph obtained by the fusion contrast has better referential property, and the accuracy of the quality detection process of the packaged product is ensured. According to the method, the panoramic surface image of the packaging product is divided more accurately, the specific contrast is acquired in different areas, and various factors such as initial contrast, shape similarity and the like are referred in the contrast acquisition process, so that the acquired contrast image is more accurate, and a more accurate detection result is obtained.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for detecting product quality of a bag-type packaging machine based on image data according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of the bag type packaging machine product quality detection method based on image data according to the invention with reference to the attached drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the product quality detection method of the bag type packaging machine based on image data.
Referring to fig. 1, a method flowchart of a method for detecting quality of a bag-type packaging machine based on image data according to an embodiment of the present invention is shown, where the method includes:
step S1: and acquiring a panoramic surface gray image of a packaged product of the bag type packaging machine, wherein the panoramic surface gray image comprises a background area, a preliminary graph area and a preliminary character area.
According to the invention, the quality of the packaged product of the bag type packaging machine is required to be detected, so that panoramic surface image data of the packaged product of the bag type packaging machine is required to be acquired firstly. In the embodiment of the invention, in order to facilitate subsequent image processing, after acquiring the acquired panoramic surface image of the packaging product, the panoramic surface image needs to be subjected to gray-scale processing. In one embodiment of the invention, the panoramic surface image is subjected to graying treatment by a weighted graying method to obtain a panoramic surface gray image U. It should be noted that, the arrangement of the specific image acquisition device, the viewing angle range of the camera, and other parameter settings, the practitioner can adjust according to specific actual conditions, and the invention is not limited herein; the image stitching technique and the weighted graying method are well known to those skilled in the art, and are not described herein.
Further, according to prior knowledge, the surface information of a packaged product is complex and generally falls into three categories: background information, graphic information and text information. In the embodiment of the invention, the panoramic surface gray image U therefore needs to be segmented into different regions; the purpose of segmentation is to divide the different regions preliminarily, which facilitates subsequent analysis.
Preferably, in one embodiment of the present invention, the method for segmenting the panoramic surface gray image U is:
The panoramic surface gray image U is detected with the Canny operator to obtain its edge pixel points; a closed region formed by adjacent edge pixel points is a region containing graphics or text, and the part of the panoramic surface gray image U outside the closed regions is the background region. Because the closed regions comprise both graphic regions and text regions, and graphic regions are usually few in number but large in area while text regions are numerous and small, the areas of the closed regions are clustered with the K-means clustering algorithm, with K set to 2 and the area difference as the distance measure, dividing the closed regions into two categories: the closed regions in the category with the larger average area are preliminary graphic regions, and the closed regions in the category with the smaller average area are preliminary text regions. It should be noted that both the Canny operator and the K-means clustering algorithm are technical means well known to those skilled in the art and are not described herein.
Thus, a background area, a preliminary graph area and a preliminary text area after the panoramic surface gray image U is segmented are obtained.
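As a concrete illustration of this segmentation step, the following Python sketch uses OpenCV and scikit-learn; the Canny thresholds, the use of external contours as a stand-in for the "closed regions of adjacent edge pixels", and all function names are assumptions of this sketch rather than the patent's exact implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_regions(gray):
    # Edge detection; closed regions are approximated by external contours.
    edges = cv2.Canny(gray, 100, 200)            # thresholds are assumed values
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Cluster the region areas into two categories (K = 2,
    # distance measure = area difference).
    areas = np.array([cv2.contourArea(c) for c in contours]).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(areas)
    # The category with the larger mean area holds the graphic regions.
    graphic = 0 if areas[labels == 0].mean() > areas[labels == 1].mean() else 1
    graphics = [c for c, l in zip(contours, labels) if l == graphic]
    texts = [c for c, l in zip(contours, labels) if l != graphic]
    return graphics, texts                       # everything outside = background
```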
Step S2: and acquiring the initial contrast of each pixel point in the preset first-size neighborhood.
For the background region and the graphic regions, the background information and graphic information often carry particular meanings, have corresponding texture characteristics and contain rich color information, so the initial contrast of a pixel point can be constructed from the gray value differences of the pixel points in the region. For a text region, a defect usually manifests as a change in the shape of the text while the gray values of the pixel points change little; if the contrast were built only from gray value differences, it would have little reference value and a poor effect. The initial contrast built from pixel gray value differences can, however, provide a reference for the final contrast of the pixel points constructed later within each region, so the initial contrast of every pixel point in all regions needs to be obtained.
Preferably, in one embodiment of the present invention, the method for constructing the initial contrast of each pixel includes:
All pixel points in all regions are analyzed. With each pixel point as the center, the average value of the gray value differences between the neighboring pixel points and the central pixel point within a preset first-size neighborhood is taken as the neighborhood gray difference of that central pixel point; averaging makes the acquired neighborhood gray difference more representative. The ratio of the neighborhood gray difference of the central pixel point to the gray value of the central pixel point is taken as the initial contrast of that central pixel point: the larger the neighborhood gray difference, the larger the obtained initial contrast. It should be noted that in the embodiment of the invention the preset first-size neighborhood is 3×3; the specific size can be adjusted according to the implementation scenario and is not limited herein. In the embodiment of the invention, the expression of the initial contrast is:

$$C = \frac{1}{g_c} \cdot \frac{1}{8} \sum_{i=1}^{8} \left| g_i - g_c \right|$$

wherein $C$ is the initial contrast, $g_c$ is the gray value of the central pixel point in the 3×3 neighborhood, $g_i$ is the gray value of the $i$-th neighboring pixel point, and $\Sigma$ is the summation symbol.
So far, the initial contrast of each pixel point in all the regions is obtained.
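A minimal numpy sketch of this initial contrast, assuming wrap-around handling at the image borders (a simplification; the patent does not specify border treatment):

```python
import numpy as np

def initial_contrast(gray):
    g = gray.astype(np.float64)
    diff_sum = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # Absolute gray difference to one of the eight 3x3 neighbours.
            diff_sum += np.abs(np.roll(g, (dy, dx), axis=(0, 1)) - g)
    neighborhood_diff = diff_sum / 8.0      # mean neighbourhood gray difference
    return neighborhood_diff / np.maximum(g, 1e-6)  # avoid division by zero
```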
Step S3: scaling the preliminary graph area based on the preliminary standard character size to obtain a scaled graph area; and screening the preliminary graph area according to the similarity degree of the scaled graph area and the preliminary character area with the preliminary standard character size to obtain a first character area.
For the preliminary graphic regions, if they were distinguished by area alone, larger characters could be misjudged as graphic regions, so the preliminary graphic regions need further division to improve the accuracy of region division. First the preliminary standard text size must be obtained from the preliminary text regions; it can be obtained representatively because the characters on a packaged product usually share the same font, so the minimum circumscribed rectangles of the preliminary text regions are consistent in size.
Preferably, the method for acquiring the primary standard text size in one embodiment of the invention comprises the following steps:
First, the minimum circumscribed rectangles of all the preliminary text regions are obtained, and the modes of their lengths and widths are taken as the preliminary standard text size, i.e. the preliminary standard text size contains both length information and width information. The mode is used because it represents the overall tendency of a set of data and is therefore more representative. In another embodiment of the invention, the minimum circumscribed ellipse may be obtained instead, and the preliminary standard text size obtained by counting the modes of the major and minor axes of the ellipses.
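Illustratively, the mode-based standard size could be computed as below; rounding the rectangle sides to integers before taking the mode is an assumption of this sketch, made so that a mode exists for real-valued sizes:

```python
from collections import Counter
import cv2

def standard_text_size(text_contours):
    rects = [cv2.minAreaRect(c) for c in text_contours]  # ((cx, cy), (w, h), angle)
    lengths = [int(round(max(r[1]))) for r in rects]     # longer side
    widths = [int(round(min(r[1]))) for r in rects]      # shorter side
    mode_length = Counter(lengths).most_common(1)[0][0]  # mode of the lengths
    mode_width = Counter(widths).most_common(1)[0][0]    # mode of the widths
    return mode_length, mode_width
```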
Further, each preliminary graphic region is scaled in equal proportion based on the preliminary standard text size to obtain a scaled graphic region; after scaling, the scaled graphic region has the preliminary standard text size, which prepares for the subsequent similarity acquisition. The preliminary graphic regions are then screened according to the similarity between each scaled graphic region and the preliminary text regions of the preliminary standard text size to obtain the first text regions.
Preferably, in one embodiment of the present invention, the method for acquiring the first text region includes:
obtaining the minimum circumscribed rectangle of all the preliminary graph areas, and carrying out equal-ratio scaling according to the size of the preliminary standard character size by taking the center point of the minimum circumscribed rectangle of the preliminary graph areas as the center to obtain a scaled graph area; the preliminary graphical region is scaled to a preliminary standard text size in order to provide for the subsequent acquisition of similarity.
The similarity between each scaled graphic region and every preliminary text region with the preliminary standard text size is obtained by using a template matching algorithm. To improve the segmentation accuracy, the maximum similarity XS among all the similarities is obtained, and the normalized result of the maximum similarity XS is compared with a preset threshold: if the result is greater than the preset threshold, the scaled graphic region is a first text region; if it is smaller, it remains a graphic region. It should be noted that the template matching algorithm and normalization are well known to those skilled in the art and are not described herein. The preset threshold in the embodiment of the invention is 0.8; the specific value can be adjusted according to the implementation scenario and is not limited herein.
So far, the text area in the preliminary graph area is segmented and marked as a first text area.
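A sketch of this screening, assuming `text_templates` are gray patches of preliminary text regions already resized to the standard size; `cv2.TM_CCOEFF_NORMED` is used here as one choice of normalized similarity, not necessarily the patent's:

```python
import cv2

def find_first_text_regions(gray, graphic_contours, text_templates,
                            std_size, threshold=0.8):
    length, width = std_size                  # standard text height and width
    first_text = []
    for c in graphic_contours:
        x, y, w, h = cv2.boundingRect(c)
        # Scale the graphic region to the preliminary standard text size.
        patch = cv2.resize(gray[y:y + h, x:x + w], (width, length))
        # Maximum similarity against all standard-size text regions.
        best = max(cv2.matchTemplate(patch, t, cv2.TM_CCOEFF_NORMED).max()
                   for t in text_templates)
        if best > threshold:                  # preset threshold (0.8)
            first_text.append(c)
    return first_text
```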
Step S4: and sequentially carrying out fusion analysis on the first text region set and the preliminary text region set to respectively obtain a first complete text region and a second complete text region.
Ideally, each text region should represent exactly one character. In actual detection, if the strokes of a character are all connected, such as '电' (electric), '集' (collection) or '团' (group), Canny detection yields a single text region; but if the strokes of a character are not all connected, such as '品' (article), '二' (two) or '元' (yuan), Canny detection may yield several text regions, so fusion analysis of the text regions is needed. The purpose of the fusion analysis is to obtain complete text regions: constructing the contrast of the pixel points within a complete text region reflects the possibility of defects in the text region more accurately. Because step S1 and step S3 yield the preliminary text regions and the first text regions respectively, and the text size of the first text regions differs greatly from that of the preliminary text regions, the first text region set and the preliminary text region set should be subjected to fusion analysis in turn.
Preferably, the fusion analysis process in one embodiment of the present invention includes:
firstly, respectively obtaining the minimum circumscribed rectangle of all the character areas in two character area sets, and taking the mode of the length and the width of the minimum circumscribed rectangle of the character areas as a reference standard character size; mode represents the overall trend of a set of data, more typically; thus, the reference standard character sizes of the first character area set and the preliminary character area set are obtained respectively.
The minimum circumscribed rectangle of each text region in the first text region set and the preliminary text region set is compared with the reference standard text size of the set it belongs to. If the length of the minimum circumscribed rectangle of a text region is greater than or equal to the length of the corresponding reference standard text size, and its width is greater than or equal to the corresponding width, the text region is a complete text region and needs no fusion. If the length or width of the minimum circumscribed rectangle of a text region is smaller than that of the corresponding reference standard text size, the text region belongs to part of some character and is a text region to be fused; take text region A in the first text region set as an example. Because the gaps between the parts of one complete character are small and the parts lie close together, an adjacent text region to be fused, text region B, is obtained, text region A and text region B are preliminarily fused, and the size difference between the minimum circumscribed rectangle of the preliminarily fused region and the reference standard text size is obtained. The smaller the size difference, the closer the minimum circumscribed rectangle of the preliminarily fused region is to the reference standard text size, and the more likely text region A and text region B belong to the same text region. The size difference is subjected to negative correlation mapping and normalization to obtain the fusion size similarity: the smaller the size difference, the larger the fusion size similarity, the smaller the difference between the minimum circumscribed rectangle of the fused region and the corresponding reference standard text size, the more likely the two text regions to be fused belong to the same text region, and the greater the necessity of fusion. In the embodiment of the invention, the expression of the fusion size similarity is:
wherein ,to fuse size similarity, +.>,/>The length and the width of the minimum circumscribed rectangle after the character area A and the character area B are preliminarily fused and then are regarded as a character area, and the length and the width of the minimum circumscribed rectangle after the character area A and the character area B are respectively treated as the minimum circumscribed rectangle after the character area A and the character area B are preliminarily fused>For the length of the corresponding reference standard letter size, < >>For the width of the corresponding reference standard letter size, < >>Is an exponential function based on a natural constant e. It should be noted that in the embodiment of the present invention, other basic mathematical operations may be used to implement the negative correlation mapping and normalization, and such operations are well known to those skilled in the art, and are not described herein.
Since two text regions to be fused that belong to the same character should lie at a distance clearly different from the distance between two adjacent text regions of the reference standard text size, the distance between the two text regions to be fused is acquired: the smaller it is, the more likely the two regions belong to the same character and the greater the fusion necessity. The distances between all adjacent preliminary text regions of the reference standard text size are counted as initial reference distances, and the minimum initial reference distance is taken as the reference distance; the minimum is chosen in preparation for acquiring the constraint distance. The difference between the reference distance and the distance between the two text regions to be fused is obtained, and the ratio of this difference to the distance between the two text regions to be fused is taken as the constraint distance; the constraint distance constrains the two text regions to be fused and prevents distant text regions from being fused. It should be noted that the distance calculation method is a technical means well known to those skilled in the art and is not described herein.
The product of the fusion size similarity and the constraint distance is the fusion necessity. The larger the fusion necessity, the more likely the two text regions to be fused belong to different parts of the same character and the more they need to be fused; the smaller the fusion necessity, the more likely the two regions do not belong to the same character and need no fusion. The expression of the fusion necessity is:
wherein ,for the fusion necessity->For reference distance->Is the distance between two text regions to be fused,for constraint distance->To fuse size similarity.
In the expression of the fusion necessity, the larger the fusion size similarity $V$, the closer the minimum circumscribed rectangle of the preliminarily fused region is to the corresponding reference standard text size, the more likely the two regions belong to the same text region, and the larger the fusion necessity $R$. The smaller the distance $d$ between the two text regions to be fused, the smaller the gap between them, the more likely they belong to the same text region, and the larger the value of the constraint distance $\frac{d_0 - d}{d}$; the constraint imposed by the constraint distance thus prevents distant text regions to be fused from being fused, and the larger the constraint distance, the larger the fusion necessity $R$.
The acquired fusion necessity $R$ of two text regions to be fused is compared with a preset judgment threshold $R_0$. If the fusion necessity $R$ is greater than or equal to the preset judgment threshold $R_0$, the two text regions to be fused need to be fused; if the fusion necessity $R$ is less than the preset judgment threshold $R_0$, they do not. It should be noted that in the embodiment of the invention the judgment threshold $R_0$ is 0.75; the specific value can be adjusted according to the implementation scenario and is not limited herein.
When the size of the minimum circumscribed rectangle of a fused text region is no smaller than the corresponding reference standard text size, a complete text region has been obtained; otherwise fusion continues according to step S4. This yields the first complete text regions corresponding to the first text region set. The same fusion analysis is performed on the preliminary text region set with its corresponding reference standard text size to obtain the second complete text regions.
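Under the formulas reconstructed above, the fusion decision reduces to a few lines; the helper names below are hypothetical:

```python
import math

def fusion_necessity(merged_rect, std_size, d, d0):
    L1, W1 = merged_rect                # min. circumscribed rect of the merge
    L0, W0 = std_size                   # reference standard text size
    V = math.exp(-(abs(L1 - L0) + abs(W1 - W0)))  # fusion size similarity
    R = V * (d0 - d) / d                # constraint distance times V
    return R

def should_fuse(R, R0=0.75):
    # Fuse when the necessity reaches the judgment threshold R0.
    return R >= R0
```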
Taking the "article" word as an example, when the canny operator is used for detection, the "article" word is divided into three parts, the upper "mouth" is marked as a first part, the lower left "mouth" is marked as a second part, and the lower right "mouth" is marked as a third part, and the sizes of the three parts are smaller than the reference standard character sizes corresponding to the belonging sets no matter whether the "article" word appears in the first character area set or the preliminary character area set, so that fusion analysis is needed at the moment; assuming that the first part and the second part are initially fused, at the moment, the size difference between the minimum circumscribed rectangle of the character area after the initial fusion and the reference standard character size corresponding to the belonging set should be smaller, so that the fusion size similarity obtained after the size difference is subjected to the negative correlation mapping and the normalization is larger, and the probability that the first part and the second part belong to the same character area is larger; and then, the minimum distance between the adjacent character areas with the reference standard character sizes corresponding to the first and second part groups is obtained as a reference distance, the reference distance is used for preparing for the subsequent obtaining of the constraint distance, the difference between the reference distance and the distance between the first and second part groups is obtained, the ratio of the difference to the distance between the first and second part groups is used as the constraint distance, and the first and second part groups belong to the same character area, so that the distance between the first and second part groups is smaller, the value of the constraint distance is larger, and the constraint effect generated by the constraint distance is better. Multiplying the fusion size similarity by the constraint distance to obtain the fusion necessity, wherein the larger the fusion size similarity is, the larger the fusion necessity is; the smaller the distance between the first part and the second part is, the larger the constraint distance is, and the larger the fusion necessity is; the greater the probability that the first part and the second part belong to the same text region; comparing the obtained fusion necessity with a preset judgment threshold value, and if the obtained fusion necessity is larger than or equal to the preset judgment threshold value, fusing; at this time, the size of the text region corresponding to the fused text region of the first part and the second part should be smaller than the reference standard text size corresponding to the integrated text region of the first part and the second part, so that the analysis needs to be continued until the fusion of the three parts is completed, and the text region at this time should not be smaller than the reference standard text size corresponding to the integrated text region of the three parts, which means that a complete text region is obtained at this time.
Thus, a first complete text region corresponding to the first text region set and a second complete text region corresponding to the preliminary text region set are obtained.
Step S5: in the first complete text region set and the second complete text region set, aiming at each pixel point in each complete text region, according to the shape similarity of the complete text region and other complete text regions and the initial contrast of the pixel points at corresponding positions in other complete text regions, the initial contrast of each pixel point is adjusted, and the final contrast of each pixel point is obtained; and taking the initial contrast of the pixel points in the non-text area as the final contrast.
Because the color information of the text regions is regular, the contrast information between different regions corresponding to the same text should be consistent, so that the initial contrast of a certain position in one complete text region can be adjusted by combining the initial contrast of the same position in other complete text regions, and the shape similarity between two complete text regions needs to be considered in the adjustment process, namely, the larger the shape similarity is, the stronger the information reference in other complete text regions is.
Because the shapes of some strokes of the fonts have similarity and the specifications of the fonts are basically consistent, the shape similarity between the complete text areas can be obtained according to the gray value difference of the pixel points between the complete text areas, the shape similarity represents the similarity degree between every two complete text areas, the construction of the contrast of the pixel points in the complete text areas is completed based on the shape similarity, and the obtaining precision of the contrast can be improved.
Preferably, the method for acquiring the shape similarity in one embodiment of the present invention includes:
The number of pixel points in each row of each complete text region and the average gray value of the pixels in each row are obtained. For every pair of complete text regions, the product of the difference in the number of pixel points in corresponding rows and the difference in the row average gray values is taken as the row information difference, and the negative correlation mapping and normalization of the sum of the row information differences over all rows is the shape similarity. In the embodiment of the invention, the expression of the shape similarity is:

$$X_{AL} = \exp\!\left(-\sum_{i=1}^{N_A} \left| n_i^A - n_i^L \right| \cdot \left| g_i^A - g_i^L \right|\right)$$

wherein, taking complete text regions A and L as the two regions, $X_{AL}$ is the shape similarity of complete text region A and complete text region L, $N_A$ is the number of pixel rows of complete text region A, $n_i^A$ is the number of pixels in the $i$-th row of complete text region A, $n_i^L$ is the number of pixels in the $i$-th row of complete text region L, $g_i^A$ is the average gray value of the pixels in the $i$-th row of complete text region A, $g_i^L$ is the average gray value of the pixels in the $i$-th row of complete text region L, $\Sigma$ is the summation symbol, and $\exp$ is the exponential function with the natural constant $e$ as base.
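A sketch of this row-wise shape similarity, assuming each complete text region is given as a binary mask plus the gray image, and that rows are aligned by truncating to the shorter region (the patent does not state how unequal row counts are handled):

```python
import numpy as np

def shape_similarity(mask_a, gray_a, mask_b, gray_b):
    def row_stats(mask, gray):
        counts = mask.sum(axis=1).astype(np.float64)   # pixels per row
        sums = (gray.astype(np.float64) * mask).sum(axis=1)
        means = np.divide(sums, counts,
                          out=np.zeros_like(sums), where=counts > 0)
        return counts, means
    n_a, g_a = row_stats(mask_a, gray_a)
    n_b, g_b = row_stats(mask_b, gray_b)
    rows = min(len(n_a), len(n_b))                     # align row counts
    row_diff = np.abs(n_a[:rows] - n_b[:rows]) * np.abs(g_a[:rows] - g_b[:rows])
    return float(np.exp(-row_diff.sum()))              # negative-exp mapping
```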
Further, based on the shape similarity between each complete text region and the other complete text regions, and the initial contrast of the pixel points at the corresponding positions in the other complete text regions, the initial contrast of each pixel point is adjusted to obtain the final contrast of each pixel point in each complete text region; the larger the difference between two complete text regions, the smaller the reference value of the contrast information from the other region during adjustment.
Preferably, the method for obtaining the final contrast in one embodiment of the present invention is:
the final contrast of each pixel point in each complete text region is acquired as:

$$Z_j = V \cdot \frac{1}{n}\left( C_j + \sum_{l=1}^{n-1} X_l \cdot C_j^{\,l} \right)$$

wherein, taking pixel point $j$ as an example, $Z_j$ is the final contrast of pixel point $j$, $C_j$ is the initial contrast of pixel point $j$, $X_l$ is the shape similarity between the complete text region where pixel point $j$ is located and the $l$-th other complete text region, $C_j^{\,l}$ is the initial contrast of the pixel point at the same position in the $l$-th other complete text region, $n$ is the number of complete text regions, $\Sigma$ is the summation symbol, and $V$ is the fusion size similarity between the complete text region where pixel point $j$ is located and the corresponding reference standard text size.
In the formula for the final contrast of each pixel point in a complete text region, the shape similarity between the complete text region where the pixel point is located and each other complete text region serves as the weight of the corresponding initial contrast, and the initial contrast is adjusted through these weights; meanwhile the fusion size similarity between the complete text region where the pixel point is located and the corresponding reference standard text size adjusts the confidence of the final result. The final contrast obtained in this way is more accurate, represents the contrast information at the corresponding position more faithfully, and has a higher reference value. Since the color information in non-text regions is usually complex, the initial contrast obtained from the pixel gray value differences in step S2 is already representative there, so the initial contrast of the pixel points in non-text regions is used as their final contrast.
And respectively acquiring the final contrast of each pixel point in the first complete text region set and the second complete text region set according to the step S5.
So far, the final contrast of all pixels is obtained.
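The adjustment of one pixel's contrast then looks like the following sketch, where `C_self` is the pixel's initial contrast, `C_others` holds the initial contrasts at the same position in the other complete text regions, `X_others` the corresponding shape similarities, and `V` the fusion size similarity; all names are illustrative:

```python
def final_contrast(C_self, C_others, X_others, V):
    # Shape-similarity-weighted blend of the pixel's own initial contrast
    # and the co-located contrasts, confidence-scaled by V.
    n = 1 + len(C_others)               # number of complete text regions
    weighted = C_self + sum(x * c for x, c in zip(X_others, C_others))
    return V * weighted / n
```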
Step S6: obtaining the fusion contrast of each pixel point according to the final contrast of each pixel point in a neighborhood pixel point in a preset second-size neighborhood, and obtaining a surface comparison chart of the packaged product; and detecting the quality of the packaged product according to the surface comparison chart of the packaged product to obtain a detection result.
According to step S5 the final contrast of all pixel points is obtained. However, a defect region may span several different regions, for example partly in a text region and partly in a graphic region, and the different regions in the invention use different contrast acquisition methods, so pixel points belonging to the same defect region could end up with excessively different final contrasts, which would interfere with the subsequent quality detection. In the embodiment of the invention, therefore, the final contrasts of the neighboring pixel points within a preset second-size neighborhood of each pixel point are fused to obtain the fusion contrast of each pixel point.
Preferably, the method for acquiring the fusion contrast of each pixel point in one embodiment of the present invention includes:
With each pixel point as the center, the average value of the final contrasts of its neighboring pixel points within the preset second-size neighborhood is taken as the fusion contrast of that central pixel point; averaging reduces the interference caused by excessive differences in the final contrasts of pixel points across different regions. It should be noted that in the embodiment of the invention the preset second-size neighborhood is 5×5; the specific size can be adjusted according to the implementation scenario and is not limited herein. In the embodiment of the invention, the expression of the fusion contrast is:

$$F = \frac{1}{24} \sum_{k=1}^{24} Z_k$$

wherein $F$ is the fusion contrast, $Z_k$ is the final contrast of the $k$-th neighboring pixel point in the 5×5 neighborhood, and $\Sigma$ is the summation symbol.
Therefore, the surface contrast graph DBU of the packaged product of the bag type packaging machine can be obtained according to the fusion contrast of each pixel point, and further the quality detection of the packaged product can be carried out according to the surface contrast graph DBU, but in order to obtain a more accurate result in the quality detection process, more image information can be considered to be fused and then the quality detection can be carried out.
Preferably, the method for detecting quality in one embodiment of the present invention includes:
A surface gradient map $T$ of the packaged product of the bag type packaging machine is obtained with the Sobel operator, and a surface texture feature map $W$ is obtained with the LBP algorithm. Image fusion is completed based on the phase spectrum of quaternion Fourier transform model PQFT: the panoramic surface gray image $U$, the surface contrast map $DBU$, the surface gradient map $T$ and the surface texture feature map $W$ are taken as the four channel parameters of a hypercomplex quaternion matrix $Q$:

$$Q = U + DBU \cdot i + T \cdot j + W \cdot k$$

wherein $Q$ is the hypercomplex quaternion matrix, $U$ is the panoramic surface gray image, $DBU$ is the surface contrast map, $T$ is the surface gradient map, $W$ is the surface texture feature map, and $i$, $j$, $k$ are imaginary units satisfying $i^2 = j^2 = k^2 = -1$ and $i \cdot j = k$.
After the hypercomplex quaternion matrix $Q$ is obtained, the hypercomplex Fourier transform is performed on $Q$ to obtain the amplitude spectrum and phase spectrum of the image; a scale space of the amplitude spectrum is obtained with Gaussian function kernels of different scales, the inverse quaternion Fourier transform is performed, and the inverse transform result is convolved with a Gaussian filter to obtain the surface fusion saliency map of the packaged product. It should be noted that the Sobel operator, the LBP algorithm, the construction of the hypercomplex quaternion matrix, Gaussian function kernels, the Fourier transform, the inverse Fourier transform and Gaussian filtering are all technical means well known to those skilled in the art and are not described herein.
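A compact PQFT-style sketch via the symplectic decomposition $Q = f_1 + f_2\,j$ with $f_1 = U + DBU\,i$ and $f_2 = T + W\,i$, keeping only the phase of each part; the amplitude-spectrum scale space mentioned above is omitted here, and the smoothing scale `sigma` is an assumed value:

```python
import numpy as np
import cv2

def pqft_saliency(U, DBU, T, W, sigma=3.0):
    f1 = np.fft.fft2(U + 1j * DBU)      # first symplectic part
    f2 = np.fft.fft2(T + 1j * W)        # second symplectic part
    p1 = f1 / (np.abs(f1) + 1e-12)      # keep phase only
    p2 = f2 / (np.abs(f2) + 1e-12)
    sal = np.abs(np.fft.ifft2(p1)) ** 2 + np.abs(np.fft.ifft2(p2)) ** 2
    sal = cv2.GaussianBlur(sal, (0, 0), sigma)    # Gaussian smoothing
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```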
The obtained surface fusion saliency map of the packaged product is used as the input of a trained neural network, wherein the neural network model is U-Net and the loss function is the cross-entropy function; the output of the neural network is the defect region of the packaged product surface image. The total area $S$ of the defect regions of the packaged product surface image is obtained, and a comparison threshold $S_0$ is set in proportion to the surface area of the packaged product; when the total defect area $S$ is greater than or equal to the comparison threshold $S_0$, the product quality is considered unqualified, and when it is less than the comparison threshold, the product quality is considered qualified. It should be noted that the training method of the neural network is a technical means well known to those skilled in the art and is not described herein; the setting of the comparison threshold can be adjusted according to the specific implementation and is not limited herein.
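The pass/fail decision itself is a simple area comparison; the proportionality factor of the comparison threshold is not given in the text, so the `ratio` below is a placeholder assumption:

```python
def quality_verdict(defect_mask, surface_area, ratio=0.05):
    S = int(defect_mask.sum())      # total defect area from the U-Net output
    S0 = ratio * surface_area       # comparison threshold (ratio is assumed)
    return "unqualified" if S >= S0 else "qualified"
```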
In summary, in the embodiment of the invention, the panoramic surface gray image of the packaged product is first segmented with the Canny operator and the K-means clustering algorithm to obtain a background area, a preliminary graphic area and a preliminary text area. Taking each pixel point as the center, the ratio of the neighborhood gray difference within the 3×3 neighborhood to the gray value of the center pixel point is taken as the initial contrast of each pixel point. Then, in order to divide the regions accurately and avoid larger characters being recognized as graphics, each preliminary graphic area is scaled about the center point of its minimum circumscribed rectangle to the preliminary standard text size to obtain a scaled graphic area; a template matching algorithm matches the scaled graphic areas against all preliminary text areas of the preliminary standard text size to obtain the maximum similarity, and the normalized maximum similarity is compared with a preset threshold to obtain the first text areas. Fusion analysis is then carried out on the first text area set and the preliminary text area set in turn to obtain the first complete text areas and the second complete text areas. Within the two complete-text-area sets, the initial contrast of each pixel point is adjusted according to the shape similarity between complete text areas and the initial contrast of the co-located pixel points, giving the final contrast of each pixel point in the complete text areas; for pixel points outside the text areas, the initial contrast serves as the final contrast. Taking each pixel point as the center, the average of the final contrasts of the neighborhood pixel points in the 5×5 neighborhood is taken as the fusion contrast of that pixel point; a contrast map is obtained from the fusion contrasts, and quality detection of the packaged product is carried out according to the contrast map. Through this image data processing, accurate segmentation of the regions is achieved and the contrast is built from several factors, so a more accurate contrast map is obtained and the accuracy of quality detection of the packaged product is improved.
It should be noted that the sequence of the embodiments of the present invention is for description only and does not imply any ranking of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve desirable results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.

In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the others.

Claims (10)

1. A method for detecting the quality of a bag type packaging machine product based on image data, the method comprising:
acquiring a panoramic surface gray image of a packaged product of a bag type packaging machine, wherein the panoramic surface gray image comprises a background area, a preliminary graphic area and a preliminary text area;
acquiring an initial contrast of each pixel point in a preset first-size neighborhood;
scaling the preliminary graphic area based on the preliminary standard text size to obtain a scaled graphic area; screening the preliminary graphic area according to the degree of similarity between the scaled graphic area and the preliminary text areas of the preliminary standard text size, to obtain a first text area;
sequentially carrying out fusion analysis on the first text area set and the preliminary text area set to obtain a first complete text area and a second complete text area respectively;

in the first complete text area set and the second complete text area set, for each pixel point in each complete text area, adjusting the initial contrast of the pixel point according to the shape similarity between that complete text area and the other complete text areas and the initial contrast of the co-located pixel points in those other areas, to obtain the final contrast of each pixel point; taking the initial contrast of the pixel points in the non-text area as their final contrast;

obtaining the fusion contrast of each pixel point from the final contrasts of the neighborhood pixel points within a preset second-size neighborhood, thereby obtaining the surface contrast map of the packaged product; and detecting the quality of the packaged product according to the surface contrast map of the packaged product to obtain a detection result.
2. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for acquiring the initial contrast comprises:
within a preset first-size neighborhood centered on each pixel point, taking the average of the gray value differences between the neighborhood pixel points and the center pixel point as the neighborhood gray difference of that pixel point;

and taking the ratio of the neighborhood gray difference of each pixel point to the gray value of that pixel point as its initial contrast.
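An illustrative implementation of this claim; border pixels wrap around here via `np.roll`, a choice the claim leaves open:

```python
import numpy as np

def initial_contrast(gray: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Initial contrast with a 3x3 first-size neighborhood: the mean
    absolute gray difference to the 8 neighbors, divided by the center
    pixel's gray value (eps guards against division by zero)."""
    g = gray.astype(np.float64)
    diffs = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # Shift so each pixel sees one of its 8 neighbors.
            diffs += np.abs(np.roll(np.roll(g, dy, axis=0), dx, axis=1) - g)
    neighborhood_gray_difference = diffs / 8.0
    return neighborhood_gray_difference / (g + eps)
```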
3. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for acquiring the preliminary standard text size comprises the following steps:

acquiring the minimum circumscribed rectangles of all the preliminary text areas, obtaining the modes of the lengths and the widths of all the minimum circumscribed rectangles, and taking these modes of length and width as the preliminary standard text size.
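A sketch of this step, assuming the preliminary text areas are available as OpenCV contours and rounding side lengths to integers so that a mode exists:

```python
from collections import Counter

import cv2
import numpy as np

def preliminary_standard_size(text_contours):
    """Mode of the minimum-bounding-rectangle lengths and widths."""
    lengths, widths = [], []
    for contour in text_contours:
        # cv2.minAreaRect returns ((cx, cy), (w, h), angle).
        (_, _), (w, h), _ = cv2.minAreaRect(contour)
        lengths.append(round(max(w, h)))
        widths.append(round(min(w, h)))
    mode_length = Counter(lengths).most_common(1)[0][0]
    mode_width = Counter(widths).most_common(1)[0][0]
    return mode_length, mode_width
```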
4. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for acquiring the first text area comprises the following steps:

obtaining the minimum circumscribed rectangles of all the preliminary graphic areas, and, taking the center point of each minimum circumscribed rectangle as the center, performing proportional scaling according to the preliminary standard text size to obtain the scaled graphic areas;

obtaining the similarity between each scaled graphic area and each preliminary text area of the preliminary standard text size by using a template matching algorithm; if the maximum similarity is greater than a preset threshold, the scaled graphic area is a first text area.
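One way this matching could look; the normalized cross-correlation measure and the preset threshold of 0.8 are assumptions, since the claim only requires a template-matching similarity normalized to a comparable scale:

```python
import cv2
import numpy as np

def is_first_text_area(scaled_graphic: np.ndarray, text_patches: list,
                       threshold: float = 0.8) -> bool:
    """True if the scaled graphic area matches some text area well enough.
    Inputs are assumed to be uint8 grayscale patches of identical size."""
    best = -1.0
    for patch in text_patches:
        # Same-size inputs give a single normalized correlation score.
        score = cv2.matchTemplate(scaled_graphic, patch, cv2.TM_CCORR_NORMED)
        best = max(best, float(score.max()))
    return best >= threshold
```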
5. The method for detecting the product quality of the bag type packaging machine based on the image data according to claim 1, wherein the method for fusion analysis comprises the following steps:
acquiring the minimum circumscribed rectangles of all the text areas in the text area set, and taking the modes of the lengths and the widths of these minimum circumscribed rectangles as the reference standard text size;

screening out the text areas to be fused and the complete text areas according to the size of each text area; performing preliminary fusion on two adjacent text areas to be fused to obtain a preliminary fusion area;

performing negative correlation mapping and normalization on the size difference between the preliminary fusion area and the reference standard text size corresponding to the text area set to which it belongs, to obtain the fusion size similarity;

taking the minimum distance between adjacent text areas of the reference standard text size as the reference distance; acquiring the distance difference between the reference distance and the distance between the two text areas to be fused corresponding to the preliminary fusion area, and taking the ratio of this distance difference to the distance between the two text areas to be fused as the constraint distance;

taking the product of the fusion size similarity and the constraint distance as the fusion necessity, and judging from the fusion necessity whether the two corresponding text areas to be fused belong to the same text area; and fusing the text areas to be fused that belong to the same text area, to obtain a complete text area.
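A sketch of the fusion-necessity computation; the exp(-x) form of the negative correlation mapping is one common choice and is assumed here:

```python
import math

def fusion_necessity(fused_w: float, fused_h: float,
                     std_w: float, std_h: float,
                     ref_distance: float, pair_distance: float) -> float:
    """Fusion necessity = fusion size similarity * constraint distance."""
    # Size difference between the preliminary fusion area and the
    # reference standard text size, mapped negatively into (0, 1].
    size_diff = abs(fused_w - std_w) + abs(fused_h - std_h)
    fusion_size_similarity = math.exp(-size_diff)
    # Constraint distance: |reference - pair| / pair distance.
    constraint_distance = abs(ref_distance - pair_distance) / pair_distance
    return fusion_size_similarity * constraint_distance
```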
6. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for acquiring the shape similarity comprises the following steps:
acquiring the number of pixel points in each row of each complete text area and the average gray value of the pixels in each row; taking, for each pair of complete text areas, the product of the difference in the number of pixel points in corresponding rows and the difference in the average gray values of those rows as the row information difference; and performing negative correlation mapping on the sum of the row information differences over all rows and normalizing it, to obtain the shape similarity.
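An illustrative reading of this claim, assuming the per-row pixel counts and mean gray values have already been extracted and row-aligned for the two areas, and again assuming an exp(-x) negative correlation mapping:

```python
import numpy as np

def shape_similarity(counts_a: np.ndarray, means_a: np.ndarray,
                     counts_b: np.ndarray, means_b: np.ndarray) -> float:
    """Row-wise shape similarity between two complete text areas."""
    # Row information difference per row:
    # |pixel-count difference| * |mean-gray difference|.
    row_diff = np.abs(counts_a - counts_b) * np.abs(means_a - means_b)
    # Negative correlation mapping of the summed differences into (0, 1].
    return float(np.exp(-row_diff.sum()))
```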
7. The method for detecting the product quality of the bag type packaging machine based on the image data according to claim 5, wherein the method for acquiring the final contrast of each pixel point in the complete text area comprises the following steps:
wherein $C'_j$ is the final contrast of pixel point $j$ in the complete text area, $C_j$ is the initial contrast of pixel point $j$, $X_i$ is the shape similarity between the complete text area in which pixel point $j$ lies and the $i$-th other complete text area, $C_{ij}$ is the initial contrast of the pixel co-located with pixel point $j$ in the $i$-th other complete text area, $n$ is the number of complete text areas in the corresponding complete text area set, $\sum$ is the summation sign, and $w$ is the fusion size similarity between the complete text area in which pixel point $j$ lies and the corresponding reference standard text size.
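The claim's equation image did not survive extraction, so the combination below is only one plausible reading of the listed symbols, an explicit assumption rather than the patented formula:

```python
import numpy as np

def final_contrast_pixel(c_j: float, co_located, shape_sims, w: float) -> float:
    """Hypothetical claim-7 relation: pool pixel j's own initial contrast
    with the shape-similarity-weighted co-located contrasts of the other
    complete text areas, then scale by the fusion size similarity w."""
    co_located = np.asarray(co_located, dtype=float)   # C_ij values
    shape_sims = np.asarray(shape_sims, dtype=float)   # X_i values
    n = len(co_located) + 1  # number of complete text areas in the set
    pooled = (c_j + np.sum(shape_sims * co_located)) / n
    return w * pooled
```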
8. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for acquiring the fusion contrast comprises the following steps:
and taking the average value of the final contrast of the neighborhood pixel points of each pixel point in the preset second-size neighborhood as the fusion contrast of each pixel point.
9. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for acquiring the detection result comprises the following steps:
obtaining the surface contrast map of the packaged product according to the fusion contrast of each pixel point; obtaining a surface gradient map of the packaged product by using the Sobel operator; obtaining a surface texture feature map of the packaged product by using the LBP algorithm;

based on the quaternion Fourier transform phase spectrum model PQFT, obtaining a surface fusion saliency map of the packaged product from the surface contrast map, the surface gradient map and the surface texture feature map; and analyzing the surface fusion saliency map with a neural network to obtain the defect region of the packaged product's surface, and detecting the quality of the packaged product according to the defect region to obtain a detection result.
10. The method for detecting the quality of a bag type packaging machine product based on image data according to claim 1, wherein the method for dividing the background area, the preliminary graphic area and the preliminary text area comprises the steps of:
detecting the panoramic surface gray image with the Canny operator to obtain the closed areas formed by adjacent edge pixel points in the panoramic surface gray image, the area outside the closed areas being the background area;

performing cluster analysis on the areas of the closed regions using the K-means clustering algorithm with K = 2, the distance measure being the area difference, to obtain two categories; the closed areas in the category with the larger average area form the preliminary graphic area, and the closed areas in the category with the smaller average area form the preliminary text area.
CN202310966707.7A 2023-08-03 2023-08-03 Bag type packaging machine product quality detection method based on image data Active CN116703899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310966707.7A CN116703899B (en) 2023-08-03 2023-08-03 Bag type packaging machine product quality detection method based on image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310966707.7A CN116703899B (en) 2023-08-03 2023-08-03 Bag type packaging machine product quality detection method based on image data

Publications (2)

Publication Number Publication Date
CN116703899A CN116703899A (en) 2023-09-05
CN116703899B (en) 2023-10-24

Family

ID=87831465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310966707.7A Active CN116703899B (en) 2023-08-03 2023-08-03 Bag type packaging machine product quality detection method based on image data

Country Status (1)

Country Link
CN (1) CN116703899B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005004334A (en) * 2003-06-10 2005-01-06 Ricoh Co Ltd Image processing apparatus, image processing method and program used for execution thereof
US7310445B2 (en) * 2003-11-26 2007-12-18 International Business Machines Corporation Classification of image blocks by region contrast significance and uses therefor in selective image enhancement in video and image coding
CN101561866A (en) * 2009-05-27 2009-10-21 上海交通大学 Character recognition method based on SIFT feature and gray scale difference value histogram feature
CN102332096A (en) * 2011-10-17 2012-01-25 中国科学院自动化研究所 Video caption text extraction and identification method
CN104200209A (en) * 2014-08-29 2014-12-10 南京烽火星空通信发展有限公司 Image text detecting method
CN106934386A (en) * 2017-03-30 2017-07-07 湖南师范大学 A kind of natural scene character detecting method and system based on from heuristic strategies
CN109190632A (en) * 2018-08-23 2019-01-11 甘肃政法学院 A kind of binarization method of ancient books file and picture
CN113657407A (en) * 2021-07-26 2021-11-16 扆亮海 High-recall-rate accurate positioning method for large-amplitude picture characters
CN115147409A (en) * 2022-08-30 2022-10-04 深圳市欣冠精密技术有限公司 Mobile phone shell production quality detection method based on machine vision
CN115272341A (en) * 2022-09-29 2022-11-01 华联机械集团有限公司 Packaging machine defect product detection method based on machine vision
CN115311290A (en) * 2022-10-12 2022-11-08 南通市通州区精华电器有限公司 Method for detecting defects of metal parts of precision instrument
CN115601364A (en) * 2022-12-14 2023-01-13 惠州威尔高电子有限公司(Cn) Golden finger circuit board detection method based on image analysis
CN116110053A (en) * 2023-04-13 2023-05-12 济宁能源发展集团有限公司 Container surface information detection method based on image recognition

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Research on binarization algorithms for low-quality document images; Xiong Wei; Zhao Shiyun; Xu Jingjing; Zhao Nan; Computer Applications and Software (07); full text *
Saliency detection based on image-unit contrast and statistical characteristics; Tang Yong; Yang Lin; Duan Liangliang; Acta Automatica Sinica (10); full text *
Infrared target detection method based on shape features; Gao Jing; Sun Jiyin; Wu Kun; Li Linlin; Laser & Infrared (01); full text *
Research on document image segmentation technology; Fu Min; Huang Xianglin; Gao Yun; Journal of Communication University of China (Natural Science Edition) (04); full text *
Image saliency detection method fusing dual feature-map information; Cui Lingling; Xu Jinlan; Xu Gang; Wu Qing; Journal of Image and Graphics (04); full text *

Also Published As

Publication number Publication date
CN116703899A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
US20210374940A1 (en) Product defect detection method, device and system
CN106803247B (en) Microangioma image identification method based on multistage screening convolutional neural network
CN105389593B (en) Image object recognition methods based on SURF feature
CN113724231B (en) Industrial defect detection method based on semantic segmentation and target detection fusion model
Ibrahim et al. Leaf recognition using texture features for herbal plant identification
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN108108753A (en) A kind of recognition methods of check box selection state based on support vector machines and device
CN110826408B (en) Face recognition method by regional feature extraction
CN108108760A (en) A kind of fast human face recognition
CN110738216A (en) Medicine identification method based on improved SURF algorithm
CN105320970A (en) Potato disease diagnostic device, diagnostic system and diagnostic method
CN109583493A (en) A kind of credit card detection and digit recognition method based on deep learning
CN106372624A (en) Human face recognition method and human face recognition system
CN111046881A (en) Pointer type instrument reading identification method based on computer vision and deep learning
CN115346227B (en) Method for vectorizing electronic file based on layout file
CN113609984A (en) Pointer instrument reading identification method and device and electronic equipment
CN116958125B (en) Electronic contest host power supply element defect visual detection method based on image processing
CN111950559A (en) Pointer instrument automatic reading method based on radial gray scale
CN109977899A (en) A kind of training, reasoning and the method and system for increasing New raxa of article identification
CN113793357A (en) Bronchopulmonary segment image segmentation method and system based on deep learning
CN108416304A (en) A kind of three classification method for detecting human face using contextual information
CN114821554A (en) Image recognition method, electronic device, and storage medium
CN106548195A (en) A kind of object detection method based on modified model HOG ULBP feature operators
CN116703899B (en) Bag type packaging machine product quality detection method based on image data
CN107247958A (en) A kind of skin disease feature extracting method based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant