CN113554591B - Label positioning method and device - Google Patents

Label positioning method and device

Info

Publication number
CN113554591B
Authority
CN
China
Prior art keywords
area
image
points
corner
connected domain
Prior art date
Legal status
Active
Application number
CN202110638410.9A
Other languages
Chinese (zh)
Other versions
CN113554591A (en)
Inventor
张伟
武春杰
赵兵
Current Assignee
LCFC Hefei Electronics Technology Co Ltd
Original Assignee
LCFC Hefei Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by LCFC Hefei Electronics Technology Co Ltd
Priority to CN202110638410.9A
Publication of CN113554591A
Application granted
Publication of CN113554591B
Legal status: Active (Current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a label positioning method. An original image is acquired and morphologically processed; a connected domain is then searched for in the processed image to determine the label area. The image within that area is processed to obtain the vertices of the white part of the label image, the white part is determined from these vertices and corrected, and the corrected white part is combined with the black part to obtain the complete label image.

Description

Label positioning method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for positioning a label.
Background
In notebook computer production, labels are usually attached to the outer packing box of the computer, and the labels carry information related to the computer; defect detection on these labels is therefore particularly important during production, and label positioning is a key step in label defect detection.
One prior-art approach positions the label by making a standard template, correcting the acquired image to the same size as the template, and comparing the acquired image with the standard template. Another approach applies threshold segmentation, opening, and closing operations to the label image to effectively remove the regions that interfere with the label, and then identifies the label in the image, thereby achieving positioning.
The former consumes considerable manpower and material resources in making the template, and detection can only be performed after the template has been made; the latter is difficult to parameterize uniformly because there are many notebook models, so it can only locate the label roughly.
Disclosure of Invention
Embodiments of the invention provide a label positioning method and apparatus capable of locating a label image within an original image.
In one aspect, an embodiment of the present invention provides a label positioning method, the method including: acquiring an original image; filtering the original image to obtain a filtered image; performing binarization processing on the filtered image to obtain a binarized image; performing dilation processing on the binarized image, and performing erosion processing on the image obtained after the dilation processing to obtain a denoised image; dividing the denoised image into connected domains based on a designated color to obtain one or more first connected domains; screening the first connected domains according to the size or the area of the label to obtain a second connected domain, the second connected domain being approximately polygonal with a plurality of sides, each side an irregular line segment that approaches a straight line segment; acquiring, from the original image, a first area corresponding to the position of the first connected domain; acquiring a second area circumscribing and tangent to the first area, the second area being polygonal; selecting, centered on a plurality of vertices of the second area, a plurality of third areas of a designated shape; setting the gray values of the image in the intersecting area of each third area and the second area to a specific value, and performing corner detection on the image in each third area according to the specific value to obtain at most one corner point; acquiring a fourth area circumscribing the first area, the fourth area being polygonal with each side parallel to the corresponding boundary of the original image; performing straight-line detection on each side in the fourth area to obtain at least one first line segment close to each side, determining the first line segments whose lengths satisfy a set threshold as second line segments, intersecting the second line segments pairwise to obtain a plurality of first intersection points, and determining the first intersection points in the fourth area as second intersection points; counting the corner points and the second intersection points in each third area, and, if there are several, calculating the Euclidean distance between any two of the corner points and the second intersection points and determining the midpoint of the two points with the smallest Euclidean distance as a vertex, or, if the count is one, confirming the corner point or second intersection point corresponding to the count as a vertex; and performing a perspective transformation, according to the size of the label, on the image in the polygonal area formed between the obtained vertices to obtain a first image with corrected size and position, obtaining a second image matched with the first image in position, and merging the first image and the second image to obtain a label image.
In an embodiment, screening the first connected domains according to the size or the area of the label to obtain the second connected domain includes: setting a first threshold according to the size or the area of the label, and judging whether exactly one first connected domain satisfies the first threshold; when exactly one first connected domain satisfies the first threshold, determining that first connected domain as the second connected domain; and when more than one or no first connected domain satisfies the first threshold, determining that the second connected domain does not exist.
In an embodiment, detecting the corner of the image in each third area according to the specific value to obtain at most one corner point includes: traversing the pixels in the intersection of the third area and the second area, and calculating the mean gray value of a designated region centered on each pixel; and determining the pixel whose designated region has the lowest mean gray value as the corner point, there being one such corner point.
In an embodiment, the method further includes: counting the corner points and the second intersection points in each third area; and when the count shows that neither a corner point nor a second intersection point exists, judging that the vertex does not exist.
In an embodiment, performing binarization processing on the filtered image to obtain a binarized image includes: setting a second threshold according to the gray value of the label, setting pixels greater than the second threshold to a first gray value and pixels smaller than the second threshold to a second gray value, the first gray value being greater than the second gray value.
Another aspect of an embodiment of the present invention provides a label positioning apparatus, the apparatus including: an acquisition module that acquires an original image; a processing module that filters the original image to obtain a filtered image, binarizes the filtered image to obtain a binarized image, and dilates the binarized image and erodes the dilated image to obtain a denoised image; a searching module that divides the denoised image into connected domains based on a designated color to obtain one or more first connected domains and screens the first connected domains according to the size or the area of the label to obtain a second connected domain, the second connected domain being approximately polygonal with a plurality of sides, each side an irregular line segment that approaches a straight line segment; the acquisition module being further configured to acquire, from the original image, a first area corresponding to the position of the first connected domain; a detection module that acquires a second area circumscribing and tangent to the first area, the second area being polygonal, selects a plurality of third areas of a designated shape centered on the vertices of the second area, sets the gray values of the image in the intersecting area of each third area and the second area to a specific value, and performs corner detection on the image in each third area according to the specific value to obtain at most one corner point; the detection module being further configured to acquire a fourth area circumscribing the first area, the fourth area being polygonal with each side parallel to the corresponding boundary of the original image, to perform straight-line detection on each side in the fourth area to obtain at least one first line segment close to each side, to determine the first line segments whose lengths satisfy a set threshold as second line segments, to intersect the second line segments pairwise to obtain a plurality of first intersection points, and to determine the first intersection points in the fourth area as second intersection points; a statistics module that counts the corner points and the second intersection points in each third area and, if there are several, calculates the Euclidean distance between any two of these points and takes the midpoint of the two points with the smallest Euclidean distance as a vertex, or, if the count is one, confirms the corner point or second intersection point corresponding to the count as a vertex; and a merging module that performs a perspective transformation, according to the size of the label, on the image in the polygonal area formed between the obtained vertices to obtain a first image with corrected size and position, obtains a second image matched with the first image in position, and merges the first image and the second image to obtain a label image.
In an embodiment, the searching module includes: a judging submodule configured to set a first threshold according to the size or the area of the label and judge whether exactly one first connected domain satisfies the first threshold; and a first determining submodule configured to determine that first connected domain as the second connected domain when exactly one satisfies the first threshold, and to determine that the second connected domain does not exist when more than one or no first connected domain satisfies the first threshold.
In one embodiment, the detection module includes: a traversing submodule configured to traverse the pixels in the intersection of the third area and the second area and calculate the mean gray value of a designated region centered on each pixel; and a second determining submodule configured to determine the pixel whose designated region has the lowest mean gray value as the corner point, there being one such corner point.
In one embodiment, the statistics module includes: a statistics submodule configured to count the corner points and the second intersection points in each third area; and a judging submodule configured to judge that the vertex does not exist when the count shows that neither a corner point nor a second intersection point exists.
In one embodiment, the processing module includes: a setting submodule configured to set a second threshold according to the gray value of the label, set pixels greater than the second threshold to a first gray value and pixels smaller than the second threshold to a second gray value, the first gray value being greater than the second gray value.
In the embodiment of the invention, a label positioning method is provided in which the vertices of the white part of the label image are determined by computing corner points and intersection points in the image; by this method the white part of the label image can be accurately located in the image. The white part is then corrected and combined with the black part of the label image to obtain the complete label image.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 is a schematic flowchart of a label positioning method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an original image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of binarization according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the dilation and erosion processing according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of region 8 according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the square regions 9 according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of region 10 according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of straight-line detection according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a label image according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a label positioning apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions according to the embodiments of the present invention will be clearly described in the following with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
FIG. 1 is a schematic flowchart of a label positioning method according to an embodiment of the present invention.
Referring to FIG. 1, in one aspect, a label positioning method according to an embodiment of the present invention includes: step 101, acquiring an original image.
In step 101, an original image of the label is acquired by imaging.
The imaging may be performed by photographing, filming, scanning, or similar means applied to the surface of the article to which the label is attached, so the original image is an image carrying the label. Here the label is a piece of paper, a card, a tag plate, or the like that serves to identify an object. Specifically, information about the object to which the label is attached can be obtained by recognizing the text, pictures, or other content on the label.
Step 102, filtering the original image to obtain a filtered image.
In step 102, filtering refers to removing frequencies in a specific band from the image: the acquired original image is filtered to remove that band and obtain a filtered image. Specifically, Gaussian filtering is applied to the original image to remove any Gaussian noise in it. Noise here refers to isolated pixels or pixel blocks that appear on the image and produce a strong visual disturbance, and Gaussian noise is noise whose probability density function follows a Gaussian (i.e. normal) distribution. Removing it avoids its influence on the subsequent detection.
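For illustration, the Gaussian filtering of step 102 can be sketched with a standard image-processing library. The following assumes Python with OpenCV; the file name and the 5x5 kernel size are placeholders and are not prescribed by the patent.

    import cv2

    # Load the captured image in grayscale; the path is a placeholder.
    original = cv2.imread("label_original.png", cv2.IMREAD_GRAYSCALE)

    # Gaussian filtering suppresses Gaussian (normally distributed) noise.
    # A 5x5 kernel with an automatically derived sigma is a common default;
    # the patent does not prescribe particular parameters.
    filtered = cv2.GaussianBlur(original, (5, 5), 0)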
Step 103, performing binarization processing on the filtered image to obtain a binarized image;
In step 103, binarization refers to setting the gray value of every pixel of the image to either 0 or 255. Specifically, according to the gray-value characteristics of the image, the image is divided into foreground and background; the gray value of the foreground is set to 255 (the color corresponding to gray value 255 is white) and the gray value of the background is set to 0 (the color corresponding to gray value 0 is black). The resulting image is the binarized image.
The binarization may use the Otsu algorithm, which adaptively computes the gray threshold from the gray values of the image.
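A minimal sketch of the Otsu binarization, again assuming Python with OpenCV; the function name is a hypothetical choice for this sketch.

    import cv2

    def binarize(filtered):
        # Otsu's method picks the gray threshold adaptively from the image
        # histogram; pixels above it become 255 (white foreground), pixels
        # below it become 0 (black background).
        _, binarized = cv2.threshold(filtered, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return binarized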
Step 104, performing dilation processing on the binarized image, and performing erosion processing on the dilated image to obtain a denoised image.
In step 104, the binarized image is dilated. Specifically, dilation sets to white every black pixel (gray value 0) that is adjacent to a white pixel (gray value 255) in the regions formed by black pixels of the binarized image.
The dilated image is then eroded. Specifically, erosion sets to black every white pixel that is adjacent to a black pixel in the regions formed by white pixels of the dilated image, which yields the denoised image.
The dilation expands the white regions and thereby removes the noise present in the binarized image. Noise here means isolated pixels or pixel blocks that produce a strong visual disturbance on the image, i.e. unnecessary or redundant interference in the image data; in this case it is chiefly the characters or patterns inside the label image.
Dilation changes the size of the white regions; eroding the dilated image restores the white and black regions so that they are consistent with those of the binarized image, which prevents the deformation introduced by dilation from causing the subsequent positioning to fail.
It should be noted that dilating first and eroding second removes black noise; eroding first and dilating second would instead remove white noise in the image.
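Dilation followed by erosion with the same structuring element is the morphological closing operation. A minimal sketch assuming Python with OpenCV; the function name and the 15x15 structuring element are illustrative assumptions, not values taken from the patent.

    import cv2
    import numpy as np

    def denoise(binarized, kernel_size=15):
        # Dilation fills the black text/pattern pixels inside the white label
        # region; erosion then restores the region to roughly its original size.
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        dilated = cv2.dilate(binarized, kernel, iterations=1)
        return cv2.erode(dilated, kernel, iterations=1)
        # Equivalent single call:
        # cv2.morphologyEx(binarized, cv2.MORPH_CLOSE, kernel)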
Step 105, dividing the denoised image into connected domains based on the designated color to obtain one or more first connected domains; screening the first connected domains according to the size or the area of the label to obtain a second connected domain; the second connected domain is approximately polygonal and has a plurality of sides, each side being an irregular line segment that approaches a straight line segment.
In step 105, dividing the denoised image into connected domains based on the designated color means searching the pixels of the denoised image for adjacent pixels of the designated color, which yields one or more first connected domains formed by those adjacent pixels. The second connected domain is polygonal because the label itself is polygonal; since the second connected domain is obtained through image processing, each of its sides is an irregular line segment that approaches a straight line segment.
All pixels of the denoised image are searched for adjacent pixels of the designated color (i.e. the designated gray value), yielding one or more first connected domains formed by those adjacent pixels; the designated color is preferably white.
The first connected domains obtained by the search are screened by the size of the label or the area of the label, and the first connected domain that passes the screening is determined to be the second connected domain.
Specifically, the size of each white first connected domain is compared with the size of the label, and a white first connected domain satisfying the size condition is taken as the second connected domain; and/or the area of each white first connected domain is compared with the area of the label, and a white first connected domain satisfying the area condition is taken as the second connected domain. The second connected domain consists of exactly one first connected domain.
A specific example in terms of area: the preset threshold is set to 60% to 100% of the label area; if the area of a first connected domain is at least 60% and at most 100% of the label area, that first connected domain is judged to be the second connected domain.
In terms of size: the preset threshold is set to 50% to 100% of each side length of the label; if every side of a first connected domain is longer than half of the corresponding label side and does not exceed that side length, the first connected domain is judged to be the second connected domain.
It should be noted that, owing to imaging conditions, if no first connected domain or more than one first connected domain passes the screening, it is determined that the second connected domain does not exist and the label positioning fails.
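The connected-domain search and screening can be sketched as follows, assuming Python with OpenCV. The function name is hypothetical, the 60% to 100% area band and 50% to 100% side-length band follow the thresholds stated above, and combining the two conditions with a logical OR is one interpretation of the "and/or" wording.

    import cv2

    def find_label_region(denoised, label_w, label_h):
        # Return the bounding box of the single white connected component whose
        # area or side lengths match the expected label, or None if zero or
        # several components match (positioning fails in that case).
        label_area = label_w * label_h
        num, _, stats, _ = cv2.connectedComponentsWithStats(denoised, connectivity=8)

        candidates = []
        for i in range(1, num):                      # component 0 is the background
            x, y, w, h, area = stats[i]
            area_ok = 0.6 * label_area <= area <= label_area
            size_ok = (0.5 * label_w < w <= label_w) and (0.5 * label_h < h <= label_h)
            if area_ok or size_ok:
                candidates.append((x, y, w, h))

        return candidates[0] if len(candidates) == 1 else None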
Step 106, acquiring a first region corresponding to the position of the first connected domain from the original image.
In step 106, a first region corresponding to the position of the second connected domain is acquired from the original image; that is, the position of the first region in the original image coincides with the position of the second connected domain in the denoised image. Because the features of the denoised image have been altered by the processing relative to the original image, mapping the second connected domain back to the corresponding first region in the original image makes the finally positioned label image more accurate.
Step 107, acquiring a second region circumscribing and tangent to the first region, the second region being polygonal; selecting a plurality of third regions centered on the vertices of the second region, each third region having a designated shape; setting the gray values of the image in the intersection of each third region and the second region to a specific value; and performing corner detection on the image in each third region according to the specific value to obtain at most one corner point.
In step 107, a second region circumscribing and tangent to the first region is acquired; here the second region is rectangular. Specifically, circumscribing means that each side of the first region has one and only one intersection point with the corresponding side of the second region, and the second region covers a larger range than the first region; tangent means that each intersection point of the first and second regions is the outermost peak of the corresponding side of the first region.
A plurality of third regions are selected centered on the vertices of the second region; each third region has a designated shape, specifically a square of designated side length whose sides are parallel to the corresponding boundaries of the original image. The designated side length of the third region may preferably be 1.5 times the length of the black portion of the label.
Then the gray values of the pixels lying in the third region and in the second region are set to a specified value, and corner detection is performed on the image in each third region according to that value; the specified value is preferably 255 (i.e. white).
Corner detection here means traversing the pixels in the intersection of the third region and the second region and computing, for each pixel, the mean gray value of a designated neighborhood centered on it; the pixel whose neighborhood has the lowest mean gray value is determined to be the corner point, and there is one such corner point. Specifically, the designated neighborhood may be a 3x3 pixel region, and the corner detection method may be Harris corner detection. Harris corner detection slides a fixed window over the image in every direction and compares the window content before and after sliding: if there is a large gray-value change for sliding in some direction, a corner is considered to exist inside the window. Further, according to the characteristics of the label, the corner point can be taken as the location of the largest gray-value change. Harris corner detection locates the corner more accurately and avoids false corner points.
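The "lowest mean gray value in a 3x3 neighborhood" criterion described above can be sketched as follows, assuming Python with OpenCV and NumPy. The function name and arguments are hypothetical: `roi` is a grayscale patch cut around one vertex of the second region, and `mask` marks the pixels of that patch lying in the intersection of the third region and the second region.

    import cv2
    import numpy as np

    def darkest_neighborhood_corner(roi, mask):
        # Mean gray value of every pixel's 3x3 neighborhood.
        means = cv2.blur(roi.astype(np.float32), (3, 3))
        # Only pixels selected by the mask may become the corner point.
        means = np.where(mask > 0, means, np.inf)
        if not np.isfinite(means).any():
            return None                            # no valid pixel in this ROI
        y, x = np.unravel_index(np.argmin(means), means.shape)
        return (int(x), int(y))                    # at most one corner per ROI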
Step 108, acquiring a fourth region circumscribing the first region, the fourth region being rectangular with each side parallel to the corresponding boundary of the original image; performing straight-line detection on each side within the fourth region to obtain at least one first line segment close to each side; determining the first line segments whose lengths satisfy a set threshold as second line segments; intersecting the second line segments pairwise to obtain a plurality of first intersection points; and determining the first intersection points lying inside the fourth region as second intersection points.
In step 108, straight-line detection is performed on each side within the fourth region to obtain at least one first line segment close to each side, and the first line segments that satisfy the set threshold are determined to be second line segments; specifically, the set threshold may be half the length of the side to which the first line segment corresponds.
The straight-line detection may use Hough line detection.
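A sketch of the line detection and pairwise intersection step, assuming Python with OpenCV. The function names are hypothetical, the Hough parameters are illustrative, and the additional filter that keeps only intersections lying inside the fourth region is omitted for brevity.

    import cv2
    import numpy as np

    def line_intersections(edge_img, min_len):
        # Probabilistic Hough transform; segments shorter than min_len (e.g.
        # half the corresponding side length) are discarded via minLineLength.
        segs = cv2.HoughLinesP(edge_img, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=min_len, maxLineGap=10)
        if segs is None:
            return []
        segs = segs[:, 0, :]                       # rows of (x1, y1, x2, y2)
        pts = []
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                p = intersect_lines(segs[i], segs[j])
                if p is not None:
                    pts.append(p)
        return pts

    def intersect_lines(s1, s2):
        # Intersection of the two infinite lines through segments s1 and s2.
        x1, y1, x2, y2 = map(float, s1)
        x3, y3, x4, y4 = map(float, s2)
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-9:                          # parallel: no intersection
            return None
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        return (px, py)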
Step 109, counting the corner points and second intersection points in each third region; if there are several, computing the Euclidean distance between every two of these points and determining the midpoint of the two points with the smallest Euclidean distance as a vertex; if the count is one, confirming that corner point or second intersection point as a vertex.
In step 109, if the count shows that neither a corner point nor a second intersection point exists, it is determined that the vertex does not exist.
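The vertex-selection rule of step 109 can be written compactly, assuming Python with NumPy; the function name is hypothetical and `candidates` is the list of corner points and second intersection points collected inside one third region.

    import numpy as np

    def choose_vertex(candidates):
        # No candidate: the vertex does not exist for this corner region.
        if len(candidates) == 0:
            return None
        # Exactly one candidate: it is taken directly as the vertex.
        if len(candidates) == 1:
            return tuple(candidates[0])
        # Several candidates: midpoint of the pair with the smallest
        # Euclidean distance.
        pts = np.asarray(candidates, dtype=float)
        best, best_d = None, np.inf
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = np.linalg.norm(pts[i] - pts[j])
                if d < best_d:
                    best_d, best = d, (pts[i] + pts[j]) / 2.0
        return tuple(best)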
Step 110, performing a perspective transformation, according to the label size, on the image inside the polygonal area formed by the obtained vertices to obtain a first image with corrected size and position; obtaining a second image corresponding to the position of the first image; and merging the first image and the second image to obtain the label image.
In step 110, the polygonal area formed by the obtained vertices refers to the polygonal area enclosed by the line segments obtained by connecting every two adjacent vertices.
A perspective transformation is applied to the image within the polygonal area, yielding a first image with corrected size and position; the first image is rectangular. A second image corresponding to the position of the first image is then obtained according to the size of the first image; the second image is also rectangular and its width is equal to that of the first image.
The first image and the second image are merged to obtain the label image. Specifically, the first image is the white portion of the label and the second image is the black portion of the label.
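A sketch of the rectification and merging step, assuming Python with OpenCV and NumPy. The function name, the vertex ordering, and the way the black head is located by shifting the top edge straight up by its known height are illustrative assumptions for a roughly upright label; the patent only states that the position of the black part follows from the position of the white part.

    import cv2
    import numpy as np

    def rectify_and_merge(original, vertices, label_w, label_h, head_h):
        # vertices: the four detected label vertices ordered top-left,
        # top-right, bottom-right, bottom-left.
        src = np.array(vertices, dtype=np.float32)
        dst = np.array([[0, 0], [label_w, 0], [label_w, label_h], [0, label_h]],
                       dtype=np.float32)
        m = cv2.getPerspectiveTransform(src, dst)
        white_part = cv2.warpPerspective(original, m, (label_w, label_h))

        # The black head lies above the white part and shares its width; its
        # source corners are approximated by shifting the top edge upward by
        # head_h (an externally known dimension, assumed here).
        head_src = np.array([src[0] - [0, head_h], src[1] - [0, head_h],
                             src[1], src[0]], dtype=np.float32)
        head_dst = np.array([[0, 0], [label_w, 0], [label_w, head_h], [0, head_h]],
                            dtype=np.float32)
        black_part = cv2.warpPerspective(
            original, cv2.getPerspectiveTransform(head_src, head_dst),
            (label_w, head_h))

        # Stack the black head on top of the rectified white part.
        return np.vstack([black_part, white_part])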
In the embodiment of the invention, a label positioning method is provided in which the vertices of the white part of the label image are determined by computing corner points and intersection points in the image; by this method the white part of the label image can be accurately located in the image. The white part is then corrected and combined with the black part of the label image to obtain the complete label image.
The above aspects of the present invention are described in detail below using a specific label as an example.
1. An original image is acquired by imaging the label.
In this embodiment, the label may be one attached to the outer packaging of an article; the imaging may be performed by photographing, video capture, scanning, or the like.
A label on the outer packing box of an article commonly consists of a black label head and a white content portion. The collected original image of the label is shown in FIG. 2: 1 is the background image, whose color and pattern are not limited by the invention; the label image consists of a black region 3 and a white region 2, the black region 3 lying above and adjoining the white region 2, both regions being rectangular. The black line segments inside the white region 2 schematically represent the graphic or text content of the label. Of course, the head of an actual label is not limited to black, nor is its content portion limited to white. When the position of the white region 2 changes, the position of the black region 3 can be derived from the changed position of the white region 2. For convenience of the following description, the background image 1 is referred to here as region 1, the white region 2 as region 2, and the black region 3 as region 3.
Depending on the nature of the label actually used, the imaged label may contain only one region, or more regions may be distinguished.
2. Gaussian filtering is applied to the original image to obtain a filtered image.
During imaging of the label, Gaussian noise can be introduced into the image by insufficient brightness, uneven illumination, overheating of the photographing device after long operation, and so on. Gaussian filtering is therefore applied to remove any Gaussian noise in the image and to prevent it from affecting the detection result.
Noise refers to isolated pixels or pixel blocks that appear on the image and produce a strong visual disturbance; Gaussian noise is noise whose probability density function follows a Gaussian (i.e. normal) distribution.
3. The filtered image is binarized to obtain a binarized image.
In this embodiment, the purpose of binarization is to divide the filtered image into background and foreground, where the foreground contains at least the label image and the rest is background.
Preferably, the image is divided into foreground and background according to the gray values of the filtered image; for example, a gray threshold is set, regions with gray values above the threshold are taken as foreground, and regions with gray values below the threshold are taken as background.
In this embodiment, the gray value of the foreground is set to 255 (the color corresponding to gray value 255 is white) and the gray value of the background is set to 0 (the color corresponding to gray value 0 is black), giving the binarized image.
In this embodiment, the Otsu algorithm may be used for the binarization; its advantage is that the gray threshold is computed adaptively from the gray values of the image.
It should be noted that if region 2 of the label is white and region 3 is black, then during binarization region 2 is recognized as foreground while region 3 and the background image 1 are recognized as background. If the gray values of region 2 and region 3 are close, both region 2 and region 3 are recognized as foreground.
Taking FIG. 2 as an example, binarizing it produces the image shown in FIG. 3: region 1 and region 3 of FIG. 2 are recognized as background, their gray value is set to 0 (black), and together they form the new region 4. Region 2 is recognized as foreground, its gray value is set to 255 (white), and its position is unchanged. The graphic or text content inside region 2, i.e. the black line segments representing text or patterns, may also be recognized as background. In region 1, a bright spot caused by illumination can raise the local gray value enough for that part to be recognized as foreground, such as region 5 in FIG. 3; in that case the foreground comprises region 2 and region 5. The formation of region 5 may be due to, but is not limited to, illumination.
4. The binarized image is dilated, and the dilated image is eroded, giving the denoised image.
The binarized image is dilated, i.e. the black pixels adjacent to the contour of region 2 in the binarized image are set to white, in order to eliminate noise in the image. The contour of region 2 comprises an outer contour and an inner contour: as shown in FIG. 3, the outer contour is the boundary between region 2 and region 4, and the inner contour is the boundary between region 2 and the black line segments inside it that represent text or patterns. As shown in FIG. 4, after the binarized image is dilated, the black line segments representing text or patterns in region 2 have been set to white.
The dilated image is then eroded, i.e. the white pixels adjacent to black pixels on the contour of region 2 in the dilated image are set to black, restoring the dilated region 2 to its original size. Specifically, the dilation removes the noise in the image but enlarges region 2 of the binarized image; eroding the dilated image restores the white region 2 of the binarized image to its original size.
The noise in the image refers to isolated pixels or pixel blocks that produce a strong visual disturbance, i.e. unnecessary or redundant interference in the image data. Unlike the Gaussian noise of step 2, the noise here mainly refers to the text or patterns in the label; for example, the black line segments present in region 2 of the label image represent the label's patterns or text.
5. All adjacent white pixels in the denoised image are searched to obtain one or more white connected domains formed by adjacent white pixels; the white connected domains are screened by the label size or label area to obtain region 6.
Specifically, the size of each white connected domain is compared with the size of the label, and a white connected domain satisfying the size condition is taken as region 6; and/or the area of each white connected domain is compared with the area of the label, and a white connected domain satisfying the area condition is taken as region 6. There is exactly one region 6.
Taking area as an example: the preset threshold is set to 60% to 100% of the label area; if the area of a white connected domain is at least 60% and at most 100% of the label area, that white connected domain is judged to be region 6.
Taking size as an example: the preset threshold is set to 50% to 100% of each side length of the label; if every side of a white connected domain is longer than half of the corresponding label side and does not exceed that side length, the white connected domain is judged to be region 6.
As shown in FIG. 4, the white connected domains are the one corresponding to region 2 and the two corresponding to region 5; after screening, only region 2, which meets the preset threshold, remains, and in FIG. 4 region 2 corresponds to the region 6 of this step 5.
In the special case where the bright spot is so large that region 5 merges with region 2, no region 6 meeting the preset threshold is found, i.e. the positioning is considered to have failed.
6. A region 7 corresponding to the position of region 6 is obtained in the original image.
The region 7 corresponding to the position of region 6 is obtained in the original image because the subsequent straight-line detection and corner detection give more accurate results on the original image, whereas the denoised image of step 5, having undergone binarization, erosion, and the other processing, is no longer consistent with the original image, so the complete label image cannot be located in the denoised image itself. Region 7 corresponding to the position of region 6 is therefore obtained in the original image, and the subsequent detection is performed on region 7.
Region 7 corresponding to the position of region 6 means that the position of region 7 in the original image is the same as the position of region 6 in the denoised image.
Region 7 is approximately rectangular, with four sides, each of which is an irregular line segment approaching a straight line.
If the four irregular sides happen to be exactly straight line segments, the scheme is essentially unaffected, so this case is not described separately.
A rectangular region 8 circumscribing and tangent to region 7 is obtained, as shown in FIG. 5. Circumscribing means that each side of rectangular region 8 has one and only one intersection point with the corresponding side of region 7, and tangent means that each such intersection point is the outermost peak of the corresponding side of region 7. Because rectangular region 8 lies very close to region 7, region 7 is drawn in FIG. 5 with a thicker black line.
As shown in FIG. 6, four square regions 9 are selected, centered on the four vertices of rectangular region 8; the side length of each square region 9 is greater than a preset threshold, and each side of each square region 9 is parallel to the corresponding boundary of the original image. For example, if the threshold is set to be larger than the width of region 3 of the label image, the side length of the square regions 9 is larger than the width of region 3.
The image belonging to both the square regions 9 and the rectangular region 8 is set to white, and corner detection is performed on the image in each square region 9. Specifically, the corner detection traverses the pixels belonging to both a square region 9 and the rectangular region 8 and computes the mean gray value of a designated neighborhood centered on each pixel; the pixel whose neighborhood has the lowest mean gray value is determined to be a candidate corner point, where the designated neighborhood may be a 3x3 region centered on the pixel. At most one candidate corner point can be detected in each square region 9.
Setting the image belonging to both the square region 9 and the rectangular region 8 to white prevents non-white pixels that may be present in the image from affecting the corner detection, thereby avoiding misjudged candidate corner points.
Specifically, the corner detection method may be Harris corner detection, which slides a fixed window over the image in every direction and compares the window content before and after sliding: if there is a large gray-value change for sliding in some direction, a corner is considered to exist inside the window.
Further, according to the characteristics of the label, the corner point can be taken as the location of the largest gray-value change. Harris corner detection locates the corner more accurately and avoids false corner points.
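As an illustration of the ROI-based corner search, a sketch assuming Python with OpenCV is given below. The function name and Harris parameters are illustrative, and painting white the ROI pixels that fall outside rectangular region 8 is one plausible reading of the masking step described above (so that clutter outside the rectangle cannot respond); the strongest Harris response inside the ROI is kept as the single candidate corner.

    import cv2
    import numpy as np

    def harris_corner_in_roi(gray, rect_mask, center, half_side):
        # gray: the original grayscale image; rect_mask: binary mask of
        # rectangular region 8; center: one vertex of rectangular region 8.
        cx, cy = center
        y0, x0 = max(cy - half_side, 0), max(cx - half_side, 0)
        roi = gray[y0:cy + half_side, x0:cx + half_side].copy()
        mask = rect_mask[y0:cy + half_side, x0:cx + half_side]
        roi[mask == 0] = 255                    # whiten pixels outside region 8

        response = cv2.cornerHarris(np.float32(roi), blockSize=2, ksize=3, k=0.04)
        ry, rx = np.unravel_index(np.argmax(response), response.shape)
        return (x0 + int(rx), y0 + int(ry))     # at most one candidate corner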
7. As shown in FIG. 7, a rectangular region 10 circumscribing region 7 and parallel to the corresponding boundaries of the original image is obtained; circumscribing here means that each side of rectangular region 10 has one and only one intersection point with the corresponding side of region 7.
As shown in FIG. 8, straight-line detection is performed on each side of region 7 to obtain at least one first line segment close to each side; the line segments shorter than half of the corresponding side length are discarded from the first line segments to obtain the second line segments; the intersection points formed by intersecting every pair of second line segments are then obtained, and the intersection points lying inside rectangular region 10 are the candidate intersection points. The number of candidate intersection points may be one, several, or zero.
The straight-line detection may use Hough line detection.
8. For each square region 9, the total number of candidate corner points and candidate intersection points inside it is counted.
If several candidate corner points and candidate intersection points exist inside the square region 9, the Euclidean distance between every two such points (each point being a candidate corner point or a candidate intersection point) is computed; the two points with the smallest Euclidean distance are selected, and their midpoint is taken as a vertex of the label.
If exactly one candidate corner point or candidate intersection point exists inside the square region 9, that point is the label vertex.
If neither a candidate corner point nor a candidate intersection point exists inside the square region 9, the search for the label vertex is considered to have failed, the label is considered defective, and the detection ends.
Note that since each side of region 7 is an irregular line segment that is close to a straight line, the detection may yield one or more straight lines, and the intersecting lines may produce several candidate intersection points, so more than two candidate corner points and candidate intersection points may exist inside a square region 9.
9. A perspective transformation is applied, according to the label size, to the image in the quadrilateral region formed by the four obtained label vertices, giving a size- and position-corrected region 11 image; the region 11 image corresponds to the white region of the label image.
Here the perspective transformation is the operation that corrects the size and position of the image so that it fills a rectangular area.
A region 12 image corresponding to the position of the region 11 image is acquired according to the position of the region 11 image; the region 12 image corresponds to the black-region image of the label image.
The region 11 image and the region 12 image are merged to obtain the label image.
As shown in FIG. 9, the label image consists of a white-region image (i.e. the region 11 image), in which black text or patterns may be present, and a black-region image (i.e. the region 12 image), in which white text or patterns may be present. The colors white and black correspond to this embodiment only.
FIG. 10 is a schematic diagram of a label positioning apparatus according to an embodiment of the present invention.
Referring to FIG. 10, another aspect of an embodiment of the present invention provides a label positioning apparatus, the apparatus including: an acquisition module 201 that acquires an original image; a processing module 202 that filters the original image to obtain a filtered image; the processing module 202 is further configured to binarize the filtered image to obtain a binarized image, and to dilate the binarized image and erode the dilated image to obtain a denoised image; a searching module 203 configured to divide the denoised image into connected domains based on the designated color to obtain one or more first connected domains, and to screen the first connected domains according to the size or the area of the label to obtain a second connected domain, the second connected domain being approximately polygonal with a plurality of sides, each side an irregular line segment that approaches a straight line segment; the acquisition module 201 is further configured to acquire, from the original image, a first region corresponding to the position of the first connected domain; a detection module 204 configured to acquire a second region circumscribing and tangent to the first region, the second region being polygonal, to select a plurality of third regions of a designated shape centered on the vertices of the second region, to set the gray values of the image in the intersection of each third region and the second region to a specific value, and to perform corner detection on the image in each third region according to the specific value to obtain at most one corner point; the detection module 204 is further configured to acquire a fourth region circumscribing the first region, the fourth region being polygonal with each side parallel to the corresponding boundary of the original image, to perform straight-line detection on each side within the fourth region to obtain at least one first line segment close to each side, to determine the first line segments whose lengths satisfy a set threshold as second line segments, to intersect the second line segments pairwise to obtain a plurality of first intersection points, and to determine the first intersection points inside the fourth region as second intersection points; a statistics module 205 configured to count the corner points and second intersection points in each third region and, if there are several, to compute the Euclidean distance between every two of these points and take the midpoint of the two points with the smallest Euclidean distance as a vertex, or, if the count is one, to take that corner point or second intersection point as a vertex; and a merging module 206 configured to perform a perspective transformation, according to the label size, on the image in the polygonal area formed by the obtained vertices to obtain a first image with corrected size and position, to obtain a second image matched in position with the first image, and to merge the first image and the second image to obtain the label image.
In one embodiment, the searching module 203 includes: a judging submodule 2031 configured to set a first threshold according to the size or the area of the label and judge whether exactly one first connected domain satisfies the first threshold; and a first determining submodule 2032 configured to determine that first connected domain as the second connected domain when exactly one satisfies the first threshold, and to determine that the second connected domain does not exist when more than one or no first connected domain satisfies the first threshold.
In one embodiment, the detection module 204 includes: a traversal submodule 2041 configured to traverse the pixels in the intersection of the third region and the second region and calculate the mean gray value of a designated neighborhood centered on each pixel; and a second determining submodule 2042 configured to determine the pixel whose designated neighborhood has the lowest mean gray value as the corner point, there being one such corner point.
In one embodiment, the statistics module 205 includes: a statistics submodule 2051 configured to count the corner points and second intersection points in each third region; and a judging submodule 2051 configured to judge that the vertex does not exist when the count shows that neither a corner point nor a second intersection point exists.
In one embodiment, the processing module 202 includes: a setting submodule 2021 configured to set a second threshold according to the gray value of the label, set pixels greater than the second threshold to a first gray value and pixels smaller than the second threshold to a second gray value, the first gray value being greater than the second gray value.
Another aspect of the invention provides a computer-readable storage medium comprising a set of computer-executable instructions which, when executed, perform the label positioning method of any of the above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing describes merely illustrative embodiments of the present invention, and the scope of the present invention is not limited thereto; variations or substitutions that any person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the present invention. The protection scope of the invention is therefore subject to the protection scope of the claims.

Claims (10)

1. A label positioning method, the method comprising:
acquiring an original image;
filtering the original image to obtain a filtered image;
performing binarization processing on the filtered image to obtain a binarized image;
performing dilation processing on the binarized image, and performing erosion processing on the dilated image to obtain a denoised image;
carrying out connected domain division on the denoised image based on a designated color to obtain one or more first connected domains; screening the first connected domains according to the size or the area of the tag to obtain a second connected domain, wherein the shape of the second connected domain approximates a polygon and the second connected domain has a plurality of sides, each side being an irregular line segment that approximates a straight line segment;
acquiring, from the original image, a first area corresponding to the position of the first connected domain;
acquiring a second area that circumscribes and is tangent to the first area, the second area being polygonal; selecting a plurality of third areas of a specified shape, each centered on one of the vertexes of the second area; setting the gray values of the image in the intersection areas of the third areas and the second area to a specific value; and carrying out corner detection on the image in each third area according to the specific value to obtain at most one corner point;
acquiring a fourth area circumscribing the first area, the fourth area being polygonal with each side parallel to the corresponding boundary of the original image; performing line detection within the fourth area to obtain at least one first line segment close to each side, determining the first line segments whose lengths meet a set threshold as second line segments, intersecting the second line segments pairwise to obtain a plurality of first intersection points, and determining the first intersection points located within the fourth area as second intersection points;
counting the number of corner points and second intersection points in each third area; if there are multiple corner points and second intersection points, calculating the Euclidean distance between every two of the corner points and second intersection points, and determining the midpoint of the two points with the minimum Euclidean distance as a vertex; if the counted number is one, confirming the corresponding corner point or second intersection point as a vertex;
and performing perspective transformation, according to the size of the label, on the image within the polygonal area formed by the plurality of vertexes to obtain a first image with corrected size and position, acquiring a second image whose position matches the first image, and combining the first image and the second image to obtain a label image.
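To make the later steps of claim 1 concrete, the sketch below shows one possible way (not prescribed by the claim) to obtain the second line segments, intersect them pairwise inside the fourth area, and perspective-correct the label once the vertexes are known; Hough-based line detection, the OpenCV calls and the known label size label_w x label_h are assumptions of this sketch.

```python
import cv2
import numpy as np

def detect_second_segments(edge_img, min_length):
    """Detect first line segments and keep those whose length meets the set threshold."""
    lines = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=50,
                            minLineLength=min_length, maxLineGap=5)
    return [] if lines is None else [l[0] for l in lines]   # each l[0] is (x1, y1, x2, y2)

def intersect(seg_a, seg_b):
    """Intersection of the two infinite lines through the segments, or None if parallel."""
    x1, y1, x2, y2 = seg_a
    x3, y3, x4, y4 = seg_b
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return (px, py)

def second_intersections(segments, fourth_area_rect):
    """Pairwise intersections of the second line segments that fall inside the fourth area."""
    x, y, w, h = fourth_area_rect
    points = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = intersect(segments[i], segments[j])
            if p and x <= p[0] <= x + w and y <= p[1] <= y + h:
                points.append(p)
    return points

def correct_label(original, vertexes, label_w, label_h):
    """Perspective-transform the quadrilateral spanned by the vertexes to a
    label-sized, axis-aligned first image."""
    src = np.array(vertexes, dtype=np.float32)               # order tl, tr, br, bl (assumed)
    dst = np.array([[0, 0], [label_w - 1, 0],
                    [label_w - 1, label_h - 1], [0, label_h - 1]], dtype=np.float32)
    m = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(original, m, (label_w, label_h))
```

The ordering of the four vertexes passed to correct_label is an assumption of the sketch; in practice they would be sorted (for example by angle around their centroid) before the transform is computed.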
2. The method of claim 1, wherein screening the first connected domains according to the size or the area of the tag to obtain a second connected domain comprises:
setting a first threshold according to the size or the area of the tag, and judging whether the number of the first connected domains meeting the first threshold is one;
when it is determined that the number of first connected domains satisfying the first threshold is one, determining the first connected domain satisfying the first threshold as the second connected domain;
and when it is determined that the number of first connected domains satisfying the first threshold exceeds one or is less than one, determining that the second connected domain does not exist.
3. The method according to claim 1, wherein performing corner detection on the image in each third region according to the specific value to obtain at most one corner comprises:
traversing the pixel points in the intersection area of the third area and the second area, and calculating the average gray value in a designated area centered on each pixel point;
and determining the pixel point corresponding to the designated area with the lowest average gray value as a corner point, the corner point being one.
4. The method according to claim 1, wherein the method further comprises:
counting the number of the corner points and the second intersection points in each third area;
and when the statistical result is that neither a corner point nor a second intersection point exists, judging that the vertex does not exist.
5. The method of claim 1, wherein binarizing the filtered image to obtain a binarized image comprises:
setting a second threshold according to the gray value of the label, setting pixel points larger than the second threshold to a first gray value, and setting pixel points smaller than the second threshold to a second gray value, the first gray value being larger than the second gray value.
6. A tag locating apparatus, the apparatus comprising:
the acquisition module is used for acquiring an original image;
the processing module is used for carrying out filtering processing on the original image to obtain a filtered image;
the processing module is also used for carrying out binarization processing on the filtered image to obtain a binarized image;
the processing module is also used for performing dilation processing on the binarized image, and performing erosion processing on the dilated image to obtain a denoised image;
the searching module is used for carrying out connected domain division on the denoised image based on a specified color to obtain one or more first connected domains, and for screening the first connected domains according to the size or the area of the tag to obtain a second connected domain, wherein the shape of the second connected domain approximates a polygon and the second connected domain has a plurality of sides, each side being an irregular line segment that approximates a straight line segment;
the acquisition module is further configured to acquire, from the original image, a first area corresponding to the position of the first connected domain;
the detection module is used for acquiring a second area that circumscribes and is tangent to the first area, the second area being polygonal; selecting a plurality of third areas of a specified shape, each centered on one of the vertexes of the second area; setting the gray values of the image in the intersection areas of the third areas and the second area to a specific value; and carrying out corner detection on the image in each third area according to the specific value to obtain at most one corner point;
the detection module is also used for acquiring a fourth area circumscribing the first area, the fourth area being polygonal with each side parallel to the corresponding boundary of the original image; performing line detection within the fourth area to obtain at least one first line segment close to each side, determining the first line segments whose lengths meet a set threshold as second line segments, intersecting the second line segments pairwise to obtain a plurality of first intersection points, and determining the first intersection points located within the fourth area as second intersection points;
the statistics module is used for counting the number of corner points and second intersection points in each third area; if there are multiple corner points and second intersection points, calculating the Euclidean distance between every two of the corner points and second intersection points and confirming the midpoint of the two points with the minimum Euclidean distance as a vertex; if the counted number is one, confirming the corresponding corner point or second intersection point as a vertex;
and the merging module is used for performing perspective transformation, according to the size of the label, on the image within the polygonal area formed by the plurality of vertexes to obtain a first image with corrected size and position, acquiring a second image whose position matches the first image, and merging the first image and the second image to obtain a label image.
7. The device of claim 6, wherein the lookup module comprises:
the judging submodule is used for setting a first threshold according to the size or the area of the tag and judging whether the number of the first connected domains meeting the first threshold is one or not;
a first determining submodule, configured to determine the first connected domain satisfying the first threshold as the second connected domain when it is determined that the number of first connected domains satisfying the first threshold is one;
the first determining submodule is further used for determining that the second connected domain does not exist when the number of first connected domains meeting the first threshold is judged to be more than one or less than one.
8. The apparatus of claim 6, wherein the detection module comprises:
a traversing submodule, configured to traverse the pixel points in the intersection area of the third area and the second area, and to calculate the average gray value in a specified area centered on each pixel point;
and a second determining submodule, configured to determine the pixel point corresponding to the specified area with the lowest average gray value as a corner point, the corner point being one.
9. The apparatus of claim 6, wherein the statistics module comprises:
a statistics submodule, configured to count the number of the corner points and the second intersection points in each third area;
and the judging submodule is used for judging that the vertex does not exist when the statistical result is that the corner point and the second intersection point do not exist.
10. The apparatus of claim 6, wherein the processing module comprises:
the setting submodule is used for setting a second threshold according to the gray value of the label, setting pixel points larger than the second threshold to a first gray value, and setting pixel points smaller than the second threshold to a second gray value, the first gray value being larger than the second gray value.
CN202110638410.9A 2021-06-08 2021-06-08 Label positioning method and device Active CN113554591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110638410.9A CN113554591B (en) 2021-06-08 2021-06-08 Label positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110638410.9A CN113554591B (en) 2021-06-08 2021-06-08 Label positioning method and device

Publications (2)

Publication Number Publication Date
CN113554591A CN113554591A (en) 2021-10-26
CN113554591B true CN113554591B (en) 2023-09-01

Family

ID=78102060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110638410.9A Active CN113554591B (en) 2021-06-08 2021-06-08 Label positioning method and device

Country Status (1)

Country Link
CN (1) CN113554591B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517089B (en) * 2013-09-29 2017-09-26 北大方正集团有限公司 A kind of Quick Response Code decodes system and method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835173A (en) * 2015-05-21 2015-08-12 东南大学 Positioning method based on machine vision
CN109410215A (en) * 2018-08-02 2019-03-01 北京三快在线科技有限公司 Image processing method, device, electronic equipment and computer-readable medium
CN112119399A (en) * 2019-10-14 2020-12-22 深圳市大疆创新科技有限公司 Connected domain analysis method, data processing apparatus, and computer-readable storage medium
CN112837257A (en) * 2019-11-06 2021-05-25 广州达普绅智能设备有限公司 Curved surface label splicing detection method based on machine vision
CN111291743A (en) * 2020-03-31 2020-06-16 深圳前海微众银行股份有限公司 Tool disinfection monitoring method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴鹏飞; 常君明. Label positioning detection based on computer vision. Journal of Jianghan University (Natural Science Edition), 2018, (No. 04), full text. *

Also Published As

Publication number Publication date
CN113554591A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN110309687B (en) Correction method and correction device for two-dimensional code image
CN106780486B (en) Steel plate surface defect image extraction method
CN115035122B (en) Artificial intelligence-based integrated circuit wafer surface defect detection method
CN112950540B (en) Bar code identification method and equipment
CN110648349A (en) Weld defect segmentation method based on background subtraction and connected region algorithm
CN113177959B (en) QR code real-time extraction method in rapid movement process
CN111768348B (en) Defect detection method, device and computer readable storage medium
CN113077437B (en) Workpiece quality detection method and system
CN111814673B (en) Method, device, equipment and storage medium for correcting text detection bounding box
CN113610774A (en) Glass scratch defect detection method, system, device and storage medium
CN114972575A (en) Linear fitting algorithm based on contour edge
CN115311279A (en) Machine vision identification method for warp and weft defects of fabric
JP4062987B2 (en) Image area dividing method, image area dividing apparatus, and image area dividing program
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN109977714B (en) Multi-QR-code integrated visual positioning method for warehoused goods
CN114693651A (en) Rubber ring flow mark detection method and device based on image processing
CN113554591B (en) Label positioning method and device
CN112926695A (en) Image recognition method and system based on template matching
CN112507751A (en) QR code positioning method and system
CN116469090A (en) Method and device for detecting code spraying pattern, electronic equipment and storage medium
CN115239595A (en) Method for detecting qualification of two-dimensional code of packaging printed matter
CN115471650A (en) Gas pressure instrument reading method, device, equipment and medium
CN115393290A (en) Edge defect detection method, device and equipment
JP4492258B2 (en) Character and figure recognition and inspection methods
Baharlou et al. Fast and adaptive license plate recognition algorithm for persian plates

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant