CN110309687B - Correction method and correction device for two-dimensional code image


Info

Publication number
CN110309687B
Authority
CN
China
Prior art keywords
dimensional code
code image
target
vertex positions
boundary
Prior art date
Legal status
Active
Application number
CN201910602805.6A
Other languages
Chinese (zh)
Other versions
CN110309687A (en)
Inventor
彭勤牧 (Peng Qinmu)
尤新革 (You Xinge)
张如 (Zhang Ru)
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201910602805.6A
Publication of CN110309687A
Application granted
Publication of CN110309687B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14: Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404: Methods for optical code recognition
    • G06K 7/1408: Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K 7/1417: 2D bar codes
    • G06K 7/1439: Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K 7/1452: Methods for optical code recognition, detecting bar code edges
    • G06K 7/146: Methods for optical code recognition, the method including quality enhancement steps

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Toxicology (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a correction method and a correction device for a two-dimensional code image. The correction method comprises: processing a two-dimensional code image to be corrected to obtain the boundary contour of the two-dimensional code image to be corrected; processing the boundary contour with a first algorithm to obtain a first set of vertex positions, and processing the boundary contour with a second algorithm to obtain a second set of vertex positions; blending the vertex positions of the same kind in the first and second sets to obtain target vertex positions; and correcting the two-dimensional code image to be corrected based on the target vertex positions to obtain a target two-dimensional code image. Using two algorithms improves the accuracy of the determined vertex positions, and correcting according to the target vertex positions removes the deformation of the two-dimensional code image so that the two-dimensional code can be recognized conveniently.

Description

Correction method and correction device for two-dimensional code image
Technical Field
The invention belongs to the technical field of two-dimensional code identification, and particularly relates to a correction method and a correction device for a two-dimensional code image.
Background
A two-dimensional code is a graphic in which black and white modules are distributed over a plane (in two dimensions) according to a specific rule using particular geometric shapes, so as to record data symbol information. Code design skillfully exploits the '0'/'1' bit streams that form the internal logical basis of computers: several geometric shapes corresponding to binary values represent textual and numerical information, which is read automatically by an image input device or an opto-electronic scanning device to realize automatic information processing. Two-dimensional codes share some commonalities with barcode technology: each code system has its specific character set; each character occupies a certain width; and each provides certain checking functions. At the same time, two-dimensional codes can automatically recognize information across different rows and handle rotation of the graphic.
Commonly used two-dimensional code systems include PDF417, Data Matrix, QR Code, Code 49, Code 16K, Code One, etc. Two-dimensional codes feature large information capacity, wide encoding range, strong fault tolerance, high decoding reliability, low cost, good confidentiality and anti-counterfeiting performance, and so on. They play an important role in information acquisition, website redirection, advertisement pushing, mobile e-commerce, anti-counterfeiting and tracing, promotions, member management, mobile payment, and other fields.
However, during printing of a two-dimensional code, factors such as limited equipment precision can blur the edges and deform the code to varying degrees, making positioning and recognition difficult, prolonging recognition time, or even making recognition impossible, which affects the use of the two-dimensional code.
In view of the above, overcoming the drawbacks of the prior art is an urgent problem in the art.
Disclosure of Invention
The invention provides a correction method and a correction device for a two-dimensional code image. Two sets of vertex positions of the two-dimensional code image are obtained through two different algorithms, and the vertices of the same kind in the two sets are blended to obtain target vertex positions, thereby determining the boundary points of the two-dimensional code image. Using two algorithms improves the accuracy of the determined vertex positions; correcting according to the target vertex positions then removes the deformation of the two-dimensional code image so that the two-dimensional code can be recognized conveniently.
To achieve the above object, according to an aspect of the present invention, there is provided a correction method of a two-dimensional code image, the correction method including:
processing a two-dimensional code image to be corrected to obtain a boundary contour of the two-dimensional code image to be corrected;
processing the boundary contour by adopting a first algorithm to obtain a first group of vertex positions, and processing the boundary contour by adopting a second algorithm to obtain a second group of vertex positions;
blending the vertex positions of the same kind in the first group of vertex positions and the second group of vertex positions to obtain a target vertex position;
and correcting the two-dimensional code image to be corrected based on the target vertex position to obtain a target two-dimensional code image.
Preferably, after the processing the boundary contour by using the first algorithm to obtain the first group of vertex positions and the processing the boundary contour by using the second algorithm to obtain the second group of vertex positions, the method further includes:
obtaining a similarity between the first set of vertex positions and the second set of vertex positions;
and when the similarity between the first group of vertex positions and the second group of vertex positions is not smaller than a preset similarity threshold value, performing a step of blending the same type of vertex positions in the first group of vertex positions and the second group of vertex positions to obtain a target vertex position.
Preferably, the first group of vertex positions includes four vertex positions, the second group of vertex positions includes four vertex positions, and obtaining the similarity between the first group of vertex positions and the second group of vertex positions includes:
respectively acquiring a first quadrangle formed by the positions of the first group of vertexes and a second quadrangle formed by the positions of the second group of vertexes;
respectively acquiring the areas of intersection regions formed by the first quadrangle and the second quadrangle and the areas of union regions formed by the first quadrangle and the second quadrangle;
calculating an intersection ratio of the area of the intersection region and the area of the union region, and marking the similarity between the first group of vertex positions and the second group of vertex positions through the intersection ratio.
Preferably, the first algorithm is an LSD line segment detection algorithm, the first group of vertex positions includes four vertex positions, and the processing the boundary contour by using the first algorithm to obtain the first group of vertex positions includes:
processing the boundary contour by adopting the LSD line segment detection algorithm to obtain a plurality of boundary lines;
judging whether the number of the boundary lines is more than four, and if the number of the boundary lines is more than four, marking the longest boundary line in the plurality of boundary lines as a target boundary line;
sequentially traversing the lengths of the remaining boundary lines, selecting the longest boundary line as a boundary line to be verified, and calculating whether the distance between the midpoint of the boundary line to be verified and the midpoint of the target boundary line is smaller than a preset distance threshold value;
if not, marking the boundary line to be verified as a target boundary line, and otherwise rejecting the boundary line to be verified, until four target boundary lines are screened out;
and acquiring intersection points of the four target boundary lines to obtain four vertex positions, thereby obtaining a first group of vertex positions.
Preferably, the second algorithm is hough transform, the second group of vertex positions includes four vertex positions, and the processing the boundary contour by using the second algorithm to obtain the second group of vertex positions includes:
carrying out Hough transform on the boundary contour to obtain a boundary line segment cluster;
performing slope and intercept dichotomous clustering on the boundary line segment cluster to obtain an upper boundary line segment cluster, a lower boundary line segment cluster, a left boundary line segment cluster and a right boundary line segment cluster;
respectively performing straight line fitting on the upper boundary line segment cluster, the lower boundary line segment cluster, the left boundary line segment cluster and the right boundary line segment cluster to obtain four target boundary lines;
and acquiring intersection points of the four target boundary lines to acquire four vertex positions so as to acquire a second group of vertex positions.
Preferably, the number of the target vertices is four, and the correcting the to-be-corrected two-dimensional code image based on the target vertex position to obtain the target two-dimensional code image includes:
determining the position of the target vertex on the two-dimensional code image to be corrected so as to determine the boundary point of the two-dimensional code image to be corrected;
and carrying out perspective transformation on the two-dimensional code image to be corrected, and correcting the deformation of the two-dimensional code image to be corrected to obtain a target two-dimensional code image.
Preferably, the correction method further comprises:
performing edge detection on the target two-dimensional code image by adopting a third algorithm to obtain a first edge contour map, and performing edge detection on the target two-dimensional code image by adopting a fourth algorithm to obtain a second edge contour map;
fusing the first edge contour map and the second edge contour map to obtain a target edge contour map;
and sharpening the target two-dimensional code image through the target edge contour map so as to enhance the edge of the target two-dimensional code image.
Preferably, the third algorithm is a Canny edge detection algorithm, and the fourth algorithm is a Sobel edge detection algorithm;
the fusing of the first edge contour map and the second edge contour map to obtain the target edge contour map comprises:
performing pixel superposition of the first edge contour map and the second edge contour map according to a preset proportion to obtain the target edge contour map.
Preferably, the processing the two-dimensional code image to be corrected to obtain the boundary contour of the two-dimensional code image to be corrected includes:
carrying out binarization processing on a two-dimensional code image to be corrected;
and performing morphological erosion on the binarized two-dimensional code image to be corrected to obtain the boundary contour of the two-dimensional code image to be corrected.
According to another aspect of the present invention, there is provided a correction apparatus comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are programmed to perform the correction method of the present invention.
Generally, compared with the prior art, the technical scheme of the invention has the following beneficial effects. The invention provides a correction method and a correction device for a two-dimensional code image. The correction method comprises: processing a two-dimensional code image to be corrected to obtain its boundary contour; processing the boundary contour with a first algorithm to obtain a first set of vertex positions, and with a second algorithm to obtain a second set of vertex positions; blending the vertex positions of the same kind in the two sets to obtain target vertex positions; and correcting the two-dimensional code image to be corrected based on the target vertex positions to obtain a target two-dimensional code image. Two sets of vertex positions are obtained through two different algorithms, and the same-kind vertices of the two sets are blended into target vertex positions, which determine the boundary points of the two-dimensional code image. Using two algorithms improves the accuracy of the determined vertex positions, and correcting according to the target vertex positions finally removes the deformation of the two-dimensional code image so that the two-dimensional code can be recognized conveniently.
Furthermore, an edge contour map of the two-dimensional code image is obtained through multi-operator edge detection and used to sharpen the image, enhancing its edge contours. This eliminates the influence of blurred two-dimensional code edges and yields an image with a clean background, clear bars and high contrast, which effectively improves the accuracy of subsequent recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a method for correcting a two-dimensional code image according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a boundary contour of a two-dimensional code image to be corrected according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a boundary line cluster converted into a boundary line according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an intersection region and a union region between two quadrangles provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a two-quadrilateral union region in FIG. 4 according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an intersection region of two quadrangles in FIG. 4 according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a target vertex position obtained by mutually harmonizing a first set of vertex positions and a second set of vertex positions according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a corrected two-dimensional code image according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating one implementation of step 102 in FIG. 1 according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a distribution of boundary line segment clusters according to an embodiment of the present invention;
fig. 11 is a schematic flowchart of another two-dimensional code image correction method according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a first edge profile provided by an embodiment of the present invention;
FIG. 13 is a schematic diagram of a second edge profile provided by an embodiment of the present invention;
fig. 14 is a schematic structural diagram of a correction apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
in an actual application scene, due to the precision of printing equipment or the photographing angle, the obtained two-dimensional code image is often deformed, and subsequent two-dimensional code identification is affected. In order to solve the foregoing problem, this embodiment provides a method for correcting a two-dimensional code image, which can correct a deformed two-dimensional code image, and convert the two-dimensional code image into an image with a standard shape, so as to facilitate subsequent two-dimensional code identification.
Referring to fig. 1, the method for correcting a two-dimensional code image of the present embodiment includes the following steps:
in step 101, a two-dimensional code image to be corrected is processed to obtain a boundary contour of the two-dimensional code image to be corrected.
The two-dimensional code image to be corrected is mainly an image that is deformed or distorted. The deformation may be caused by the printing equipment, or by a tilted shooting angle when a user photographs or scans the two-dimensional code with a smart device.
The code system of the two-dimensional code image to be corrected may be PDF417, Data Matrix, QR Code, Code 49, Code 16K, Code One, etc.
Before the boundary contour of the two-dimensional code image to be corrected is obtained, grayscale processing and binarization are first applied to the image, and the boundary contour is then segmented using morphological erosion.
As shown in fig. 2, the graph on the left side is the two-dimensional code image to be corrected, and the graph on the right side is the boundary contour of the two-dimensional code image to be corrected.
In step 102, the boundary contour is processed by a first algorithm to obtain a first set of vertex positions, and the boundary contour is processed by a second algorithm to obtain a second set of vertex positions.
In this embodiment, as shown in fig. 3, two different algorithms are used to process the boundary contour. A boundary line segment cluster is obtained first (left side of fig. 3), target boundary lines are derived from the cluster (right side of fig. 3), and the first and second sets of vertex positions are obtained from the intersection points between the target boundary lines. Each set of vertex positions contains the position coordinates of multiple vertices; a vertex can be understood as a boundary point of the two-dimensional code image, and the shape and area of the image are determined by its vertices.
In an actual application scenario, the two-dimensional code image is generally a quadrangle, in an alternative embodiment, each group of vertex positions includes position coordinates of four vertices, and an intersection point between four boundary lines enclosing the quadrangle is a vertex (boundary point) of the two-dimensional code image.
In an alternative scheme, the first algorithm is LSD (Line Segment Detector) line detection, and the second algorithm is the Hough transform. In other embodiments, the first and second algorithms may be other algorithms, which are not specifically limited herein.
In step 103, blending the vertex positions of the same kind in the first group of vertex positions and the second group of vertex positions to obtain a target vertex position.
Wherein vertex positions of the same kind refer to two vertices with the closest distance between the first set of vertex positions and the second set of vertex positions.
The vertex position can be determined according to coordinates in a preset coordinate system.
In an optional embodiment, in a preset coordinate system, harmonic averaging is performed according to horizontal and vertical coordinates of vertices of the same type to obtain a target vertex coordinate.
In step 104, the two-dimensional code image to be corrected is corrected based on the target vertex position, so as to obtain a target two-dimensional code image.
In this embodiment, the target vertex position is determined on the two-dimensional code image to be corrected, so as to determine the boundary point of the two-dimensional code image to be corrected; and then, carrying out perspective transformation on the two-dimensional code image to be corrected, correcting the deformation of the two-dimensional code image to be corrected, and obtaining a target two-dimensional code image.
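A minimal sketch of this correction step, assuming OpenCV and four target vertices ordered top-left, top-right, bottom-right, bottom-left; the output size is a hypothetical parameter, not a value from the patent:

```python
import cv2
import numpy as np

def correct_by_perspective(image, target_vertices, out_size=400):
    # target_vertices: the four blended target vertices, ordered
    # top-left, top-right, bottom-right, bottom-left.
    src = np.float32(target_vertices)
    dst = np.float32([[0, 0], [out_size - 1, 0],
                      [out_size - 1, out_size - 1], [0, out_size - 1]])
    # Homography mapping the deformed quadrilateral onto a square.
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_size, out_size))
```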
In this method, two sets of vertex positions of the two-dimensional code image are obtained through two different algorithms, and the same-kind vertices of the two sets are blended into target vertex positions, which determine the boundary points of the two-dimensional code image. Using two algorithms improves the accuracy of the determined vertex positions, and correcting according to the target vertex positions finally removes the deformation of the two-dimensional code image so that the two-dimensional code can be recognized conveniently.
In an actual application scenario, although the first and second algorithms obtain vertex positions by different methods, the positions they produce for the same vertex of the same two-dimensional code image should not differ greatly; otherwise a vertex may have been misidentified or miscalibrated. To avoid vertex-position errors caused by a failure of one algorithm, and to improve accuracy, in a preferred embodiment similarity matching is performed on the first and second sets of vertex positions, and the subsequent steps are executed only when the similarity condition is satisfied.
After step 102, obtaining a similarity between the first set of vertex positions and the second set of vertex positions, and when the similarity between the first set of vertex positions and the second set of vertex positions is not less than a preset similarity threshold, executing step 103.
In an alternative embodiment, the similarity between the first set of vertex positions and the second set of vertex positions may be measured by the intersection-over-union (IoU) ratio.
Specifically, the first group of vertex positions includes four vertex positions, the second group of vertex positions includes four vertex positions, and obtaining the similarity between the first group of vertex positions and the second group of vertex positions includes: respectively acquiring a first quadrangle formed by the positions of the first group of vertexes and a second quadrangle formed by the positions of the second group of vertexes; respectively acquiring the areas of intersection regions formed by the first quadrangle and the second quadrangle and the areas of union regions formed by the first quadrangle and the second quadrangle; calculating an intersection ratio of the area of the intersection region and the area of the union region, and marking the similarity between the first group of vertex positions and the second group of vertex positions through the intersection ratio.
The preset similarity threshold on the intersection ratio may be 0.8; that is, when the intersection-over-union ratio of the area $S_1$ of the first quadrangle and the area $S_2$ of the second quadrangle is not less than 0.8, the first set of vertex positions and the second set of vertex positions satisfy the similarity condition. Of course, the preset similarity threshold may also take other values, which are not specifically limited herein.
The intersection ratio IoU is calculated according to the following formula: $\mathrm{IoU} = (S_1 \cap S_2)/(S_1 \cup S_2)$, i.e. the area of the intersection region divided by the area of the union region.
In an actual application scenario, the intersection ratio IoU is calculated as follows: (1) obtain the boundary lines of the first and second quadrangles from the vertex coordinates in the first and second sets of vertex positions, and determine the cross points between the boundary lines of the two quadrangles; (2) determine the inner points and outer points of each quadrangle, where an inner point is a boundary point (vertex) of one quadrangle lying inside the other quadrangle, and an outer point is a boundary point (vertex) of one quadrangle lying outside the other; (3) connect the inner points and cross points in sequence to form the intersection region, and connect the outer points and cross points in sequence to form the union region; (4) divide the intersection region and the union region into several triangles according to their boundary points (vertices) and compute the triangle areas; (5) the ratio of the intersection area to the union area is the intersection ratio.
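The patent computes these regions with its own point-ordering algorithm, described below. Purely as an illustrative cross-check, the same intersection-over-union value can be obtained with the shapely library; this shortcut is an assumption of the sketch, not part of the patent's method:

```python
from shapely.geometry import Polygon

def quad_iou(quad1, quad2):
    # quad1, quad2: lists of four (x, y) vertices in boundary order.
    p1, p2 = Polygon(quad1), Polygon(quad2)
    inter = p1.intersection(p2).area   # area of the intersection region
    union = p1.union(p2).area          # area of the union region
    return inter / union if union > 0 else 0.0

# The similarity condition of the embodiment: IoU >= 0.8.
similar = quad_iou([(0, 0), (10, 0), (10, 10), (0, 10)],
                   [(1, 1), (11, 1), (11, 11), (1, 11)]) >= 0.8
```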
The triangle areas can be obtained using Heron's formula:

$S = \sqrt{p(p-a)(p-b)(p-c)}$

where $a$, $b$ and $c$ are the side lengths of the triangle and $p = (a+b+c)/2$ is the half-perimeter.
in the following description, reference is made to fig. 4 to 6 for an example, and it should be noted that, in order to clearly illustrate the implementation of the present embodiment, the two quadrangles shown in the drawings are subjected to deformation and exaggerated processing of the position difference, and in a practical application scenario, the position difference between the quadrangles obtained by using two different algorithms is very small.
Let ABCD and a 'B' C 'D' respectively be two quadrangles for similarity determination, which intersect at point E, F. Among them, E, F is called a cross point, B, D 'is an inner point, and a', B ', C', C, D, A are called outer points.
The intersection and the outer point are connected in sequence to obtain a polygon A ' B ' C ' FCDAE (as shown in FIG. 5), which is called a union region. And taking a certain point (which can be E) as a fixed point, sequentially taking two points which are continuous clockwise (or anticlockwise) from the rest points to respectively form a triangle with the E, and solving the area of the triangle through a Helen formula to further solve the area of the triangle and further solve the area.
The intersection and the interior point are connected in sequence to obtain a quadrilateral EBFD' (shown in FIG. 6), which is called an intersection region. And taking a certain point (which can be E) as a fixed point, sequentially taking two points which are continuous clockwise (or anticlockwise) from the rest points to respectively form a triangle with the E, and solving the area of the triangle through a Helen formula to further solve the area of the intersection region.
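A short sketch of the fan-triangulation area computation just described, assuming the region's points are already supplied in connection order:

```python
import math

def heron(a, b, c):
    # Heron's formula: S = sqrt(p(p-a)(p-b)(p-c)), p the half-perimeter.
    p = (a + b + c) / 2.0
    return math.sqrt(max(p * (p - a) * (p - b) * (p - c), 0.0))

def polygon_area(points):
    # Fan triangulation from a fixed point (e.g. E): triangles
    # (p0, p1, p2), (p0, p2, p3), ... summed via Heron's formula.
    dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    p0 = points[0]
    total = 0.0
    for p1, p2 in zip(points[1:], points[2:]):
        total += heron(dist(p0, p1), dist(p0, p2), dist(p1, p2))
    return total
```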
In the program execution process, the 'sequential connection' is executed according to a strict algorithm so as to correctly acquire the intersection region and the union region. The following describes the algorithm implementation process:
For a first polygon and a second polygon, the intersection points between their boundary lines are the cross points. The first polygon comprises a plurality of vertices: a vertex of the first polygon located outside the second polygon is an outer point of the first polygon, and a vertex located inside the second polygon is an inner point of the first polygon. Likewise, the second polygon comprises a plurality of vertices: a vertex of the second polygon located outside the first polygon is an outer point of the second polygon, and a vertex located inside the first polygon is an inner point of the second polygon.
Intersection region determination: taking any point of the first polygon as a starting point, record the first point sequence formed by the inner points and cross points of the first polygon in a preset direction (clockwise or anticlockwise); taking any point of the second polygon as a starting point, record the second point sequence formed by the inner points and cross points of the second polygon in the preset direction. A preset combination of the first and second point sequences then yields the connection order of the points on the intersection region, and hence the region itself. Specifically, determine the cross-point order within the first point sequence, determine the inner-point order lying between cross points within the second point sequence, and insert that inner-point order between the cross points of the first point sequence to obtain the connection order of the points on the intersection region.
Union region determination: taking any point of the first polygon as a starting point, record the third point sequence formed by the outer points and cross points of the first polygon in a preset direction (clockwise or anticlockwise); taking any point of the second polygon as a starting point, record the fourth point sequence formed by the outer points and cross points of the second polygon in the preset direction. A preset combination of the third and fourth point sequences then yields the connection order of the points on the union region, and hence the region itself. Specifically, determine the cross-point order within the third point sequence, determine the outer-point order lying between cross points within the fourth point sequence, and insert that outer-point order between the cross points of the third point sequence to obtain the connection order of the points on the union region.
How to acquire the intersection region and the union region when performing the similarity determination will be described in detail below with reference to fig. 4. Here, it should be noted that, in the present embodiment, a quadrilateral is exemplified, and in an actual application scenario, the foregoing algorithm process is applicable to any polygon.
As shown in fig. 4, it is assumed that the two quadrangles subjected to the similarity determination are the first quadrangle ABCD and the second quadrangle a 'B' C 'D', respectively, which intersect at the point E, F. Wherein E, F is called the intersection, B is the inner point of the first quadrilateral, D 'is the inner point of the second quadrilateral, A', B ', C' are the outer points of the second quadrilateral, C, D, A is the outer point of the first quadrilateral.
Intersection region determination: taking any point on the first quadrangle ABCD as a starting point, record clockwise the point sequence of inner points and cross points on ABCD, which may be EBF (the first point sequence); taking any point on the second quadrangle A'B'C'D' as a starting point, record clockwise the point sequence of inner points and cross points on A'B'C'D', which may be FD'E (the second point sequence). Combining the first and second point sequences gives the connection order of the points on the intersection region as EBFD'. Specifically, in the first point sequence, EF is the cross-point order; in the second point sequence, D' is the inner-point order between the cross points; inserting D' between the cross points FE of the first point sequence, following the cyclic cross-point order (EFEF…), gives EBFD'. Here the cross-point orders EF and FE are substantially the same; it suffices that the cycle can proceed in the EFEF… manner.
Union region determination: taking any point on the first quadrangle ABCD as a starting point, record clockwise the point sequence of outer points and cross points on ABCD, which may be AEFCD (the third point sequence); taking any point on the second quadrangle A'B'C'D' as a starting point, record clockwise the point sequence of outer points and cross points on it, which may be A'B'C'FE (the fourth point sequence). Combining the third and fourth point sequences gives the connection order of the points on the union region as AEA'B'C'FCD. Specifically, in the third point sequence, EF is the cross-point order; in the fourth point sequence, A'B'C' is the outer-point order between the cross points; inserting A'B'C' between the cross points EF of the third point sequence, following the cyclic cross-point order (EFEF…), gives AEA'B'C'FCD. In a specific application scenario, there may be more than two cross points; consecutive pairs of cross points (observing the cyclic cross-point order) are taken in turn and processed as above, which is not repeated here.
The method of this embodiment lends itself well to implementation in a program: the meaning of 'sequential connection' is made algorithmically precise, so the intersection region and the union region can be determined accurately and the similarity determination completed.
After the similarity determination, the harmonic mean of each pair of points of the same kind is taken as a final boundary point, where two points of the same kind are defined as points located in adjacent directions, i.e. the two closest points. If the similarity condition is not satisfied, the correction fails. The harmonic mean c of two numbers a and b is defined as:
$c = \dfrac{2ab}{a+b}$
as shown in fig. 7, A, B, C, D represents the boundary points (vertices) of one quadrangle and a ', B', C ', and D' represent the boundary points (vertices) of the other quadrangle, which are actually obtained by two algorithms. A, A ', B', C ', D and D' are called homogeneous points, and four new boundary points of the quadrangle are obtained by taking weighted average according to horizontal and vertical coordinates of the homogeneous points, namely a, B, C and D in the drawing, wherein the a, B, C and D are target vertexes.
As shown in fig. 8, the two-dimensional code image is a two-dimensional code image obtained by correcting the two-dimensional code image to be corrected based on the target vertex position, that is, the target two-dimensional code image obtained in step 104.
In step 101, processing the two-dimensional code image to be corrected to obtain its boundary contour specifically includes: first convert the two-dimensional code image to grayscale and binarize it, and then apply multiple morphological erosion operations to obtain the boundary contour.
In an optional scheme, n morphological erosion operations are performed in sequence on the two-dimensional code image to be corrected, and a difference is then taken against the image after the (n+1)-th morphological erosion to obtain the boundary contour, where n may be 5 or another value. In this embodiment, the morphological erosion enlarges the black region of the image and shrinks the white region, so the boundary contour of the two-dimensional code image to be corrected can be extracted effectively.
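A minimal sketch of this boundary-contour extraction, assuming OpenCV and n = 5 as suggested; taking the difference of successive erosions is one plausible reading of the description above:

```python
import cv2
import numpy as np

def boundary_contour(bgr, n=5):
    # Grayscale + Otsu binarization (cf. the Otsu derivation below).
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)      # square structuring element S
    eroded_n = cv2.erode(binary, kernel, iterations=n)
    eroded_n1 = cv2.erode(binary, kernel, iterations=n + 1)
    # The difference of successive erosions leaves a thin boundary ring.
    return cv2.absdiff(eroded_n, eroded_n1)
```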
The specific method of the morphological erosion is as follows. A square structuring element $S$ (for example $5 \times 5$) is set and used to erode the two-dimensional code image $X$: if the structuring element, translated by $x$, is contained in the image foreground $X_0$, then $x$ still belongs to the foreground $X_0$; otherwise it belongs to the image background $X_1$. Here the foreground $X_0$ is the white region and the background $X_1$ is the black region. The operation formula is

$X \ominus S = \{\, x \mid S_x \subseteq X_0 \,\}$

where $x$ is the translation vector of the structuring element, $S_x$ denotes $S$ translated by $x$, and the domain of $x$ is the set of distance vectors from the origin to all points of the foreground image $X_0$.
In an alternative scheme, binarization can be performed with the maximum between-class variance method (Otsu's method), which divides the two-dimensional code image $X$ into the image foreground $X_0$ and the image background $X_1$. Specifically, let the two-dimensional code image have size $M \cdot N$, let the number of pixels whose gray value is smaller than a threshold $T$ be $N_0$ (foreground pixels, forming the foreground $X_0$), and let the number of pixels whose gray value is greater than $T$ be $N_1$ (background pixels, forming the background $X_1$). Then: $N_0 + N_1 = M \cdot N$.
Let the proportion of foreground pixels in the image be $w_0$ with average gray $u_0$, and the proportion of background pixels be $w_1$ with average gray $u_1$. Let $u$ be the average gray of the whole image and $g$ the between-class variance. Then:

$w_0 = N_0/(M \cdot N), \quad w_1 = N_1/(M \cdot N), \quad w_0 + w_1 = 1$

$u = w_0 u_0 + w_1 u_1, \quad g = w_0 (u_0 - u)^2 + w_1 (u_1 - u)^2$

An equivalent formula is obtained: $g = w_0 w_1 (u_0 - u_1)^2$. The threshold $T$ that maximizes the between-class variance is found by traversal; this is the binarization threshold.
In this embodiment, the larger the between-class variance of background and foreground, the larger the difference between the two parts that compose the image, and the smaller the probability of misclassification.
Further, binarization is performed on the two-dimensional code image to be corrected with the obtained threshold, dividing the image $X$ into the foreground $X_0$ and the background $X_1$.
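For illustration, a direct NumPy implementation of the traversal that maximizes $g = w_0 w_1 (u_0 - u_1)^2$:

```python
import numpy as np

def otsu_threshold(gray):
    # gray: 2-D uint8 array. Traverse all thresholds T and keep the one
    # maximizing the between-class variance g = w0 * w1 * (u0 - u1)^2.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        u0 = (np.arange(t) * prob[:t]).sum() / w0
        u1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```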
In an optional embodiment, the first algorithm is the LSD line segment detection algorithm and the first set of vertex positions includes four vertex positions. The boundary contour is processed by the first algorithm to obtain four boundary lines, and the intersection points of these four boundary lines give four vertex coordinates, i.e. the first set of vertex positions.
The process of performing LSD line segment detection on the boundary contour mainly includes detecting short straight lines in the boundary contour, and then splicing the connected short straight lines to obtain the boundary line. The LSD line segment detection of this embodiment is improved based on the conventional detection method, and specifically includes the following steps:
(1) Perform multiple morphological thinnings on the boundary contour. The specific steps are: set a square structuring element $S$ (for example $3 \times 3$) and use it to apply several thinning operations to the boundary contour $X$, with the operation formula

$X \otimes S = X \setminus (X \circledast S)$

where $X \circledast S$ is the hit-or-miss transform, explained as follows. Let $X^C$ be the complement of the boundary contour $X$, and let the structuring element $S$ consist of two mutually disjoint parts $S_1$ and $S_2$, i.e. $S = S_1 \cup S_2$ and $S_1 \cap S_2 = \Phi$ ($\Phi$ denotes the empty set). The hit-or-miss transform is then defined as $X \circledast S = (X \ominus S_1) \cap (X^C \ominus S_2)$, where $\ominus$ is the morphological erosion operation.
(2) Break the boundary contour at line-segment connection points. After edge thinning of the boundary contour, edge pixels are classified by the number of neighbors in their 8-connected neighborhood: a pixel with two neighbors is an interior point of a line segment; a pixel with one neighbor is an endpoint of a line segment; a pixel with more than two neighbors is a connection point joining several line segments. Breaking each segment at its connection points yields independent boundary line segments.
(3) Set a Gaussian kernel scale s (which may be 0.8), Gaussian-downsample the boundary contour, and compute a gradient magnitude and direction for each point. Pseudo-order all points by gradient magnitude and build a status list: first preset all points to the UNUSED state (unprocessed), then set points whose gradient magnitude is smaller than a specific value ρ to the USED state (processed) and update the status list, where ρ is related to the angle tolerance t by ρ = 2/sin(t). Points with gradient magnitude below ρ are set to USED directly because such small gradients cannot yield a boundary line segment, so they need not go through step (4), which decides whether each remaining point lies on a boundary line segment.
(4) Take the point with the largest gradient in the status list as a seed point and set its state to USED. Starting from the seed point, search for points within the surrounding angle tolerance range [-t, t], change the state of the points satisfying the tolerance to USED, and generate a rectangle R containing all these points. Judge whether the density of points inside R is greater than a threshold D, where D may be 0.2 or other data; if not, truncate R into several rectangles until the point density inside each is greater than D.
(5) Compute the NFA (Number of False Alarms) of the rectangle R. If the NFA is smaller than a threshold ε, add R to the output list; otherwise, modify R until the NFA is less than ε, where ε may be 1 or other data. The NFA is calculated as

$\mathrm{NFA} = N \cdot P_{H_0}[\, k(r, I) \ge k(r, i) \,]$

where $N$ is the number of candidate line segments in a boundary contour image of the current size $m \cdot n$, taken as $(m \cdot n)^{2.5}$; $k(r, I)$ is the number of aligned points in the rectangle $r$ of the pre-established a-contrario model $I$; and $k(r, i)$ is the number of aligned points in the rectangle $r$ of the observed image $i$.
(6) Divide all rectangles R into two classes by a length threshold. One class is long rectangles: if the aspect ratio is greater than a threshold γ (γ may be 10), the corresponding rectangle is added directly to the candidate rectangles; if the aspect ratio is smaller than γ, the rectangle is cut at the midpoint of its long side, the pieces are added back to the rectangle set, and the judgment is repeated. The other class is short rectangles, which may be candidate rectangles, parts of a curve, or short segments; short segments with length less than a threshold d are deleted, where d may be 10 or another value.
(7) Fit the points in each short rectangle by the least squares method; if the average distance from the points in the short rectangle to the fitted line is less than a threshold μ (μ may be 0.5), the short rectangle is kept as a candidate rectangle; otherwise the short rectangle is deleted.
(8) Straight-line merging stage: merge any two candidate rectangles I and J, perform a least-squares fit, and compute the estimated variance

$\hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} d(x_i, L)^2$

where the $x_i$ are the points in the two candidate rectangles, $L$ is the fitted line equation, $d(x_i, L)$ is the distance from $x_i$ to $L$, and $N$ is the total number of points in the two candidate rectangles.
If the two candidate rectangles belong to the same straight line, the estimated variance will be smaller than a variance threshold; otherwise, fitting the points of the two candidate rectangles into one straight line is rejected.
The rejection region (variance threshold) is given by a formula that is not legibly reproduced in the source text; in it, $n_i$ and $n_j$ are the numbers of points in the two candidate rectangles $I$ and $J$, $\mu$ may be set to $-2$, and $\sigma_0^2$ is 1.
(9) Perform a least-squares fit on the points in the candidate rectangles to obtain the final straight-line cluster. The least squares method fits with the objective of minimum residual; the residual is

$E(a, b) = \sum_i (y_i - a x_i - b)^2$

where $a$ and $b$ are the slope and intercept of the line to be solved and $(x_i, y_i)$ are the point coordinates. Finally one obtains:

$a = \dfrac{\sum_i x_i y_i - N \bar{x} \bar{y}}{\sum_i x_i^2 - N \bar{x}^2}, \qquad b = \bar{y} - a \bar{x}$

where $\bar{x}$ and $\bar{y}$ are the means of the $x_i$ and $y_i$ and $N$ is the number of points.
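A sketch of the merge test of steps (8) and (9) under the reconstructed formulas above; since the original threshold formula is not legible, the variance threshold is left as a parameter, and vertical residuals stand in for point-to-line distances:

```python
import numpy as np

def try_merge(points_i, points_j, var_threshold):
    # points_i, points_j: (k, 2) arrays of points from candidate
    # rectangles I and J. Fit y = a*x + b by least squares, then accept
    # the merge only if the estimated variance of the vertical
    # residuals stays below the threshold.
    pts = np.vstack([points_i, points_j])
    a, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    residuals = pts[:, 1] - (a * pts[:, 0] + b)
    est_var = np.mean(residuals ** 2)
    return (a, b) if est_var < var_threshold else None
```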
and (4) obtaining a boundary line segment cluster corresponding to the two-dimensional code image to be corrected according to the steps (1) to (9).
In this embodiment, steps (1), (2), (6), (7), (8) and (9) are improvements over the standard LSD line segment detection algorithm. Step (1) eliminates the situation in which an overly thick boundary contour is recognized as several straight lines during line detection, which would blur the boundary determination. Step (2) breaks irregular segments so that the subsequent line detection result better meets the requirements. Step (6) optimizes the rectangles: it deletes short rectangles and breaks up polylines and curves inside them. Step (7) deletes unnecessary curves inside rectangles to ensure the correctness of the line detection result. The straight-line merging of steps (8) and (9) greatly reduces the number of boundary lines, making the solution of the vertex positions feasible.
In an actual application scenario there should in theory be only four boundary lines in the boundary line segment cluster, but there may be more than four; in that case the cluster needs to be screened down to four boundary lines, from which the first set of vertex positions is then obtained.
In an alternative embodiment, referring to FIG. 9, the step 102 optimization scheme is as follows:
in step 1021, the LSD line segment detection algorithm is used to process the boundary contour to obtain a plurality of boundary lines.
Wherein the plurality of boundary lines form the aforementioned cluster of boundary line segments.
In step 1022, it is determined whether the number of the boundary lines is greater than four, and if the number of the boundary lines is greater than four, the longest boundary line of the plurality of boundary lines is marked as the target boundary line.
For the sake of clarity, referring to fig. 10, the boundary line cluster includes 8 boundary lines, and each boundary line is sorted in descending order according to the length of the boundary line, where the longest boundary line has a serial number of 1 and the shortest boundary line has a serial number of 8.
In the present embodiment, the boundary line 1 is first marked as a target boundary line.
In step 1023, the lengths of the remaining boundary lines are sequentially traversed, the longest one is selected as the boundary line to be verified, and whether the distance between the midpoint of the boundary line to be verified and the midpoint of the target boundary line is smaller than a preset distance threshold is calculated.
In step 1024, if not, the boundary line to be verified is marked as a target boundary line; otherwise, the boundary line to be verified is rejected. This continues until four target boundary lines are screened out.
And traversing for multiple times to screen out other target boundary lines.
First traversal: traverse the lengths of the remaining boundary lines (boundary line 2 to boundary line 8), select boundary line 2 as the boundary line to be verified, and calculate whether the distance between the midpoint of the boundary line to be verified (boundary line 2) and the midpoint of the target boundary line (boundary line 1) is smaller than a preset distance threshold. If not, the boundary line to be verified is marked as a target boundary line; otherwise it is rejected. The preset distance threshold is related to the size of the two-dimensional code: if the size is $m \cdot n$, the preset distance threshold may be $0.5\,(m \cdot n)^{0.5}$.
If the distance between the middle point of the boundary line 2 and the middle point of the target boundary line (boundary line 1) is not less than the preset distance threshold, the boundary line 2 is marked as the target boundary line.
Second traversal: traverse the lengths of the remaining boundary lines (boundary line 3 to boundary line 8), select boundary line 3 as the boundary line to be verified, and calculate whether the distance between the midpoint of the boundary line to be verified (boundary line 3) and the midpoints of the target boundary lines (boundary line 1 and boundary line 2) is smaller than the preset distance threshold. If not, the boundary line to be verified is marked as a target boundary line; otherwise it is rejected.
Here the distance between the midpoint of boundary line 3 and the midpoint of a target boundary line (boundary line 2) is smaller than the preset distance threshold, so boundary line 3 is rejected. In general, a boundary line to be verified is marked as a target boundary line only if the distances between its midpoint and the midpoints of all target boundary lines are not smaller than the preset distance threshold; otherwise it is rejected.
Third traversal: traverse the lengths of the remaining boundary lines (boundary line 4 to boundary line 8), select boundary line 4 as the boundary line to be verified, and calculate whether the distance between the midpoint of the boundary line to be verified (boundary line 4) and the midpoints of the target boundary lines (boundary line 1 and boundary line 2) is smaller than the preset distance threshold. If not, the boundary line to be verified is marked as a target boundary line; otherwise it is rejected.
If the distance between the middle point of the boundary line 4 and the middle point of the target boundary line (the boundary line 1 and the boundary line 2) is not less than the preset distance threshold, the boundary line 4 is marked as the target boundary line.
And repeating the steps until four target boundary lines are screened out.
In step 1025, the intersection points of the four target boundary lines are obtained, yielding four vertex positions and thus the first group of vertex positions.
In this embodiment, screening by the distance between the midpoints of boundary lines effectively removes boundary lines that lie too close together, so that the boundary lines closest to the boundary contour of the two-dimensional code are obtained and detection accuracy is improved.
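As an illustration of steps 1022 to 1025, the following is a minimal sketch of the midpoint-distance screening, assuming the LSD segments are available as (x1, y1, x2, y2) tuples; the function name and numpy usage are illustrative and not from the patent, while the threshold 0.5·(m·n)^0.5 follows the text:

```python
import numpy as np

def screen_boundary_lines(lines, m, n, max_lines=4):
    """Select up to `max_lines` target boundary lines from LSD output.

    A segment is kept only if its midpoint is at least 0.5 * sqrt(m * n)
    away from the midpoints of all segments kept so far (m, n: size of
    the two-dimensional code)."""
    thresh = 0.5 * np.sqrt(m * n)
    # Traverse candidates from longest to shortest, as in step 1023.
    ordered = sorted(lines,
                     key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]),
                     reverse=True)
    kept, midpoints = [], []
    for x1, y1, x2, y2 in ordered:
        mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        # Step 1024: keep the candidate only if its midpoint is far
        # enough from every already-accepted target boundary line.
        if all(np.linalg.norm(mid - p) >= thresh for p in midpoints):
            kept.append((x1, y1, x2, y2))
            midpoints.append(mid)
        if len(kept) == max_lines:
            break
    return kept
```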
In a specific application scenario, the second algorithm is hough transform, the second group of vertex positions includes four vertex positions, and the processing the boundary contour by using the second algorithm to obtain the second group of vertex positions includes: carrying out Hough transform on the boundary contour to obtain a boundary line segment cluster; performing slope and intercept dichotomous clustering on the boundary line segment cluster to obtain an upper boundary line segment cluster, a lower boundary line segment cluster, a left boundary line segment cluster and a right boundary line segment cluster; respectively performing straight line fitting on the upper boundary line segment cluster, the lower boundary line segment cluster, the left boundary line segment cluster and the right boundary line segment cluster to obtain four target boundary lines; and acquiring intersection points of the four target boundary lines to acquire four vertex positions so as to acquire a second group of vertex positions.
In this embodiment, the RANSAC algorithm may be used to process the upper boundary line segment cluster, the lower boundary line segment cluster, the left boundary line segment cluster, and the right boundary line segment cluster to obtain four boundary lines.
First, the concept of the Hough transform is briefly explained: read the binary image to be processed; obtain the source pixel data of the binary image space; quantize the parameter space of the polar coordinate system into finite-value intervals, i.e., an accumulator grid over ρ and θ, where ρ and θ are the parameters of the (polar-coordinate) parameter space and represent the distance and the angle, respectively.
Each pixel coordinate point P(x, y) is converted by the Hough transform into a curve in (ρ, θ) space and its votes are accumulated into the corresponding grid cells; the maximum Hough value is then found, a threshold is set (if the size of the two-dimensional code is m·n, the threshold may take values between 0.1·(m·n)^0.5 and 0.2·(m·n)^0.5), and the result is inversely transformed back to image space to obtain the four boundary lines.
In this embodiment, the Hough transform is performed on the boundary contour to obtain a boundary line segment cluster, and slope and intercept dichotomous clustering is then performed on the cluster to obtain the upper, lower, left and right boundary line segment clusters. Specifically, the upper and lower boundary line segment clusters are separated from the left and right boundary line segment clusters according to the slope, and the upper cluster is separated from the lower, and the left from the right, according to the intercept.
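A non-authoritative sketch of this stage, assuming the boundary contour is available as a binary image; the cv2.HoughLinesP parameters are illustrative, and the segment midpoint coordinate stands in for the intercept when splitting the clusters:

```python
import cv2
import numpy as np

def cluster_boundary_segments(contour_img):
    """Hough-detect segments on a binary contour image, then split them
    into top/bottom/left/right clusters by slope and position."""
    segs = cv2.HoughLinesP(contour_img, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=20, maxLineGap=5)
    assert segs is not None, "no segments found"
    horiz, vert = [], []
    for x1, y1, x2, y2 in segs[:, 0]:
        # First split by slope: near-horizontal vs near-vertical.
        (horiz if abs(y2 - y1) <= abs(x2 - x1) else vert).append(
            (x1, y1, x2, y2))
    # Second split by intercept; the midpoint coordinate stands in
    # for the intercept here.
    mids_h = [(s[1] + s[3]) / 2.0 for s in horiz]
    mids_v = [(s[0] + s[2]) / 2.0 for s in vert]
    cut_h = (min(mids_h) + max(mids_h)) / 2.0
    cut_v = (min(mids_v) + max(mids_v)) / 2.0
    top    = [s for s, m in zip(horiz, mids_h) if m <  cut_h]
    bottom = [s for s, m in zip(horiz, mids_h) if m >= cut_h]
    left   = [s for s, m in zip(vert,  mids_v) if m <  cut_v]
    right  = [s for s, m in zip(vert,  mids_v) if m >= cut_v]
    return top, bottom, left, right
```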
Before clustering, K-means outlier detection is carried out, and the method specifically comprises the following steps:
(1) the initial abnormal data set M is the overall point set, i.e., the set of points formed by all the slopes and intercepts;
(2) arbitrarily selecting k objects (where k is 2) as initial cluster centers, assigning each object to the most similar cluster;
(3) calculating the average value of the objects in the cluster, and calculating the average radius of the cluster;
(4) determining the candidate abnormal object set N, which consists of the objects whose distance from their cluster center exceeds the average radius, together with any cluster containing only one object;
(5) taking the intersection with the previous abnormal data set, M = M ∩ N, as the new abnormal data set;
(6) repeating the above steps until the abnormal data set converges or the maximum number of iterations is reached, yielding the sample set.
Then, a K-medoids clustering algorithm is selected for the clustering, so as to again avoid the influence of abnormal values on the boundary-line determination. The K-medoids clustering algorithm comprises the following steps:
(1) selecting the maximum value and the minimum value of the sample as the initial medoids (central point set);
(2) assigning the remaining n − 2 points to the class represented by the nearest current medoid;
(3) if a class contains only one point, deleting that class and selecting the maximum value and the minimum value from the other class as the new medoids;
(4) for each class, computing in turn the value of the criterion function with each non-medoid point as the candidate new medoid, traversing all possibilities, and selecting the point with the minimum criterion value as the new medoid; the criterion function is the sum of the distances from all other points in the class to the medoid, which is to be minimized;
(5) repeating (2) to (4) until the medoids no longer change or the set maximum number of iterations is reached.
In this embodiment, the K-medoids clustering algorithm differs from the original in two respects: in step (1), the maximum and minimum values are used as the initial medoids instead of arbitrary values, which accelerates convergence; and step (3) is added, which guards against the influence of abnormal points.
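The following sketch applies the modified K-medoids procedure to one-dimensional data (the slopes or the intercepts), under the initialization and singleton-removal rules described above; the function name and the iteration cap are assumptions:

```python
import numpy as np

def two_medoids_1d(samples, max_iter=50):
    """Two-cluster K-medoids on 1-D data with the modified initialization:
    the extreme values are the initial medoids, and a one-point cluster
    is dropped as an outlier."""
    data = np.asarray(samples, dtype=float)
    medoids = [data.min(), data.max()]                    # step (1)
    for _ in range(max_iter):
        # Step (2): assign every point to its nearest medoid.
        labels = np.argmin(np.abs(data[:, None] - np.array(medoids)), axis=1)
        if 1 in (np.sum(labels == 0), np.sum(labels == 1)):
            # Step (3): delete the singleton class, re-seed from the rest.
            keep = labels == (0 if np.sum(labels == 1) == 1 else 1)
            data = data[keep]
            medoids = [data.min(), data.max()]
            continue
        # Step (4): move each medoid to the member minimizing the
        # within-cluster sum of absolute distances (criterion function).
        new = []
        for c in (0, 1):
            cluster = data[labels == c]
            cost = np.abs(cluster[:, None] - cluster).sum(axis=0)
            new.append(cluster[np.argmin(cost)])
        if new == medoids:                                # step (5)
            break
        medoids = new
    return medoids, data
```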
After the slopes and intercepts have been processed by the K-means outlier detection and the K-medoids clustering algorithm, the upper, lower, left and right boundary line segment clusters are obtained, and the RANSAC algorithm is then applied to each boundary line segment cluster to obtain the four boundary lines.
Specifically, the RANSAC algorithm achieves this by iteratively selecting random subsets of the data; each selected subset is hypothesized to consist of inliers and is verified by the following method:
(1) performing K-means + + clustering on the original data set, and dividing the original data set into two types of data points;
(2) selecting one point from each of the two classes to estimate a linear model; two points determine a unique line;
(3) testing all other data with the model obtained in step (2); if the distance from a point to the line model is less than a threshold L, where L may be the width of 10 (or some other number of) pixels, the point is also considered an inlier;
(4) if the proportion of inliers of the model exceeds a threshold N, where N may be 80%, the model is considered reasonable, and a final line is then fitted to all the inliers by least squares;
(5) if the number of inliers is below the threshold N but greater than that of the previous model, the model is retained; otherwise the model is discarded;
(6) repeating steps (2) to (5) until the maximum number of iterations is reached.
In this embodiment the original RANSAC algorithm is modified: step (1) and step (4) differ from the original algorithm, which improves its efficiency.
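A sketch of steps (2) to (6) for a single boundary line segment cluster follows; the seed pair is drawn uniformly here rather than one point from each K-means++ cluster of step (1), and dist_thresh (10 px) and inlier_ratio (80%) are the example values from the text:

```python
import numpy as np

def ransac_line(points, dist_thresh=10.0, inlier_ratio=0.8, iters=100,
                seed=0):
    """Fit one boundary line to an (N, 2) array of contour points."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best = None
    for _ in range(iters):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        dx, dy = p2 - p1
        norm = np.hypot(dx, dy)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p1, p2.
        dist = np.abs((pts[:, 0] - p1[0]) * dy
                      - (pts[:, 1] - p1[1]) * dx) / norm
        inliers = pts[dist < dist_thresh]
        if best is None or len(inliers) > len(best):
            best = inliers                     # step (5): keep better model
        if len(inliers) >= inlier_ratio * len(pts):
            break                              # step (4): model reasonable
    a, b = np.polyfit(best[:, 0], best[:, 1], 1)  # least-squares y = a*x + b
    return a, b
```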
In an alternative embodiment, the K-means + + clustering algorithm proceeds as follows:
(1) randomly selecting a sample point from the original data set as the initial cluster center c_1;
(2) for each sample point x_i in the original data set, calculating the shortest distance to the currently existing cluster centers, denoted D(x);
(3) selecting a new sample as the next cluster center by the following rule: the larger D(x) is, the more likely the point is to be selected. This probability is realized as follows: first, the distances of all points in the data set to their nearest cluster center are added up, the sum being denoted sum(D(x)); a random value Random is then taken in [0, sum(D(x))) and repeatedly updated as Random = Random − D(x) over the points until Random ≤ 0; the point at which this occurs is the next cluster center;
(4) repeating the steps (2) to (3) until k clustering centers are selected;
(5) for each sample x_i in the data set, calculating the Euclidean distances to the k cluster centers and assigning it to the cluster whose center is nearest;
(6) updating the cluster center of each cluster, the update function being denoted F(x); with C_i denoting the set of data points in cluster i and |C_i| the number of data points in the cluster, the update function F(x) is:

F(x) = (1/|C_i|) · Σ_{x ∈ C_i} x
(7) repeating (5) to (6) until the cluster centers no longer change and the sum-of-squared-errors criterion function E(x) converges; the objective is to minimize the mean square error E, the criterion function being:

E(x) = Σ_{i=1}^{k} Σ_{x ∈ C_i} ‖x − F(C_i)‖²
The initial data set is denoted X = {x_1, x_2, ..., x_n}, and the number of clusters is k.
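A compact sketch of the seeding steps (1) to (4); note it weights the roulette selection by D(x), exactly as described above, whereas the classical k-means++ weights by D(x)²:

```python
import numpy as np

def kmeans_pp_centers(X, k, rng=np.random.default_rng(0)):
    """K-means++ seeding per steps (1)-(4) above."""
    X = np.asarray(X, dtype=float)
    centers = [X[rng.integers(len(X))]]                   # step (1)
    for _ in range(1, k):                                 # step (4): loop
        # Step (2): D(x) = distance of each sample to its nearest center.
        D = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        # Step (3): draw Random in [0, sum(D)), then subtract D(x) point
        # by point until it drops to or below zero.
        r = rng.random() * D.sum()
        for i, d in enumerate(D):
            r -= d
            if r <= 0:
                centers.append(X[i])
                break
    return np.array(centers)
```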
In an actual application scenario, the intersection points may be taken as follows to obtain the first group and the second group of vertex positions. In this embodiment, the four target boundary lines are processed pairwise to find the intersection points. For example, for the line L1: y = ax + b and the line L2: y = cx + d (with a ≠ c), the intersection coordinates are:

x = (d − b)/(a − c), y = (a·d − b·c)/(a − c)
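In code, the intersection formula amounts to the following hypothetical helper (it requires a ≠ c):

```python
def line_intersection(a, b, c, d):
    """Intersection of y = a*x + b and y = c*x + d."""
    x = (d - b) / (a - c)
    return x, a * x + b
```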
in an optional scheme, in step 104, correcting the to-be-corrected two-dimensional code image based on the target vertex position to obtain a target two-dimensional code image specifically includes: determining the position of the target vertex on the two-dimensional code image to be corrected so as to determine the boundary point of the two-dimensional code image to be corrected; and carrying out perspective transformation on the two-dimensional code image to be corrected, and correcting the deformation of the two-dimensional code image to be corrected to obtain a target two-dimensional code image.
Wherein, the perspective transformation is mapping transformation for converting the image from an original two-dimensional space to a three-dimensional space and then converting the image from the three-dimensional space to another new two-dimensional space. The formula for mapping the original two-dimensional space to the three-dimensional space is as follows:
[x′, y′, w′] = [u, v, w] · A, where A is the 3 × 3 transformation matrix with elements a11 ... a33, and w = 1 for a point on the original image.
the mapping formula from the three-dimensional space to the new two-dimensional space is as follows:
x = x′/w′, y = y′/w′
Here, the original two-dimensional space corresponds to the original image and the new two-dimensional space to the new image; (u, v) is a point on the original image and (x, y) the corresponding transformed point on the new image. Rewriting the transformation formulas above gives:

x = x′/w′ = (a11·u + a21·v + a31)/(a13·u + a23·v + a33)
y = y′/w′ = (a12·u + a22·v + a32)/(a13·u + a23·v + a33)
and correcting the deformation of the two-dimensional code image to be corrected according to the perspective transformation to obtain a target two-dimensional code image.
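Using OpenCV, the whole correction step can be sketched as follows; the vertex ordering (clockwise from top-left) and the output size are assumptions:

```python
import cv2
import numpy as np

def rectify_qr(image, src_pts, side=400):
    """Warp the region bounded by the four target vertices to a square.
    src_pts: the four target vertex positions, ordered clockwise from
    top-left; side: output size in pixels."""
    dst_pts = np.float32([[0, 0], [side - 1, 0],
                          [side - 1, side - 1], [0, side - 1]])
    # The 3x3 homography A of the perspective transform described above.
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(image, H, (side, side))
```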
Different from the prior art, two groups of vertex positions of the two-dimensional code image are obtained by two different algorithms, and the same-type vertices of the two groups are blended to obtain the target vertex positions, which determine the boundary points of the two-dimensional code image. Using two algorithms improves the accuracy of vertex localization, and the deformation of the two-dimensional code image is finally corrected according to the target vertex positions so that the two-dimensional code can be identified.
Example 2:
in order to solve the foregoing problems, this embodiment further improves on embodiment 1: an edge contour image of the two-dimensional code image is obtained through multi-operator edge detection, the two-dimensional code image is thereby sharpened, its edge contour is enhanced, and the effect of edge blurring is eliminated, yielding a two-dimensional code image with a clean background, clear bars and strong contrast, which effectively improves the accuracy of subsequent identification.
Referring to fig. 11, the present embodiment provides another two-dimensional code image calibration method, which further includes the following steps:
here, when the two-dimensional code image has the problems of deformation and edge blurring, the target two-dimensional code image is obtained by correcting the image according to steps 101 to 104 of embodiment 1, and then the target two-dimensional code image is edge-sharpened according to the method of this embodiment, so as to solve the problem of edge blurring. When the two-dimensional code image has no deformation but the edge of the two-dimensional code image is fuzzy, the method of the embodiment can be directly adopted to carry out edge sharpening on the target two-dimensional code image, so that the problem of edge blurring is solved. That is, the correction method of the present embodiment is applicable to different application scenarios.
In step 105, a third algorithm is used to perform edge detection on the target two-dimensional code image to obtain a first edge contour map, and a fourth algorithm is used to perform edge detection on the target two-dimensional code image to obtain a second edge contour map.
The third algorithm is a Canny edge detection algorithm, and the fourth algorithm is a Sobel edge detection algorithm. The third and fourth algorithms may also be other algorithms.
In this embodiment, two algorithms are fused. In an actual application scenario, the number and types of algorithms used in the multi-operator fusion may exceed two and are not specifically limited here: each algorithm performs edge detection in the manner described above, and the edge contour maps corresponding to the algorithms are then fused according to the following steps to obtain the target edge contour map.
In an actual application scene, before edge detection the target two-dimensional code image is smoothed and denoised by median filtering and an anisotropy-based detail-preserving algorithm, after which the multi-operator edge detection is performed.
In step 106, the first edge contour map and the second edge contour map are fused to obtain a target edge contour map.
In this embodiment, the first edge contour map and the second edge contour map are superposed pixel-wise according to a preset ratio to obtain the target edge contour map.
The preset ratio may be 1:1 or another ratio, and is not limited here.
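A minimal sketch of steps 105 and 106 with OpenCV; the Canny thresholds and Sobel kernel size are illustrative, while the 0.5 superposition ratio follows the text:

```python
import cv2

def fused_edge_map(gray, ratio=0.5):
    """Pixel-wise superposition of a Canny edge map and a Sobel magnitude
    map at the preset ratio (0.5 : 0.5 here)."""
    canny = cv2.Canny(gray, 50, 150)
    gx = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
    sobel = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                            cv2.convertScaleAbs(gy), 0.5, 0)
    # Step 106: fuse the two contour maps at the preset ratio.
    return cv2.addWeighted(canny, ratio, sobel, 1.0 - ratio, 0)
```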
In step 107, the target two-dimensional code image is sharpened through the target edge contour map to enhance the edge of the target two-dimensional code image.
In an alternative, the principle of median filtering is: the value of a point in the digital image is replaced by the median value of the points of a region around the point. Take a 3 × 3 window as an example:
g(x, y) = median(f(x−1, y−1), f(x, y−1), f(x+1, y−1), f(x−1, y), f(x, y), f(x+1, y), f(x−1, y+1), f(x, y+1), f(x+1, y+1))
wherein f (x, y) is a pixel value of a point (x, y), g (x, y) is a pixel value after median filtering, and median is a median taking function.
In an alternative approach, the anisotropy-based detail-preserving algorithm is embodied as follows. In the conventional PM anisotropic diffusion model, the discrete form is realized with the four nearest neighbours and the Laplace operator, where ∇I_i^t(x, y) (i = 1, ..., 4) denotes the gradient in each of the four directions, c_i^t(x, y) the associated diffusion coefficient, I^t(x, y) the pixel value before processing and I^{t+1}(x, y) the pixel value after processing:

I^{t+1}(x, y) = I^t(x, y) + λ · Σ_{i=1}^{4} c_i^t(x, y) · ∇I_i^t(x, y)

c_i^t(x, y) = g(∇I_i^t(x, y))
The function g(∇I) must be monotonically decreasing, satisfying g(0) = 1 and g(∞) = 0; it can be expressed as (where K is a constant):

g(∇I) = 1/[1 + (|∇I|/K)²]
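For reference, a sketch of the conventional PM diffusion described above, in numpy only; K, λ and the iteration count are illustrative, and np.roll wraps at the image border, a simplification of edge handling:

```python
import numpy as np

def pm_diffusion(img, iters=5, K=15.0, lam=0.25):
    """Conventional PM diffusion with four nearest-neighbour gradients
    and g(dI) = 1 / (1 + (|dI| / K)^2)."""
    I = img.astype(float)
    g = lambda d: 1.0 / (1.0 + (np.abs(d) / K) ** 2)
    for _ in range(iters):
        dN = np.roll(I, -1, axis=0) - I   # gradients toward N, S, E, W
        dS = np.roll(I,  1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I,  1, axis=1) - I
        I = I + lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return I
```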
In an actual application scene, if K is too large, the diffusion process smooths across transitions and blurs the image; if K is too small, the diffusion stops smoothing during the early iterations and produces a restored image close to the original. An improved anisotropic diffusion model is therefore considered, taking the gradient and the variance as two local features. Taking a 3 × 3 region as an example, the gray-level variance is:

σ²(x, y) = (1/9) · Σ_{(i,j) ∈ N(x,y)} [I(i, j) − μ(x, y)]²

where N(x, y) is the 3 × 3 neighbourhood of (x, y) and μ(x, y) its mean gray level.
Since the gradient and the variance have very different dynamic ranges, the variance is normalized so that the two features are mutually compatible (the normalization formula is given as an image in the original).
The diffusion coefficient function is modified accordingly (the modified diffusion coefficient function is given as an image in the original).
in an optional scheme, performing multi-operator fused edge detection on the image substantially comprises: and fusing and overlapping the first edge contour map obtained by the Canny edge detection algorithm and the second edge contour map obtained by the Sobel edge detection algorithm to obtain a target edge contour map, wherein the overlapping proportion can be 0.5, and other values can be selected and set according to actual conditions.
The following describes the processing procedure of the Canny edge detection algorithm.
First, the conventional steps of the Canny edge detection algorithm are briefly described:
(1) gaussian filtered smoothed images
The Canny edge detection algorithm processes the original image using the derivatives of a two-dimensional Gaussian function, where (x, y) are the image point coordinates. The two-dimensional Gaussian function is:

G(x, y) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²))

and the gradient vector is:

∇G(x, y) = [∂G/∂x, ∂G/∂y]ᵀ
(2) calculating magnitude and direction of gradient
From the image I(x, y), the partial derivatives in the x and y directions are P_x(i, j) and P_y(i, j), respectively. Converting from rectangular to polar coordinates, P_x(i, j) and P_y(i, j) of a pixel are converted into the gradient magnitude M(i, j) and the gradient direction θ(i, j), where M(i, j) represents the edge strength at a point and θ(i, j) its normal direction:

M(i, j) = sqrt(P_x(i, j)² + P_y(i, j)²), θ(i, j) = arctan(P_y(i, j)/P_x(i, j))

where P_x and P_y are defined by the finite differences:

P_x(i, j) = [I(i, j+1) − I(i, j) + I(i+1, j+1) − I(i+1, j)]/2
P_y(i, j) = [I(i+1, j) − I(i, j) + I(i+1, j+1) − I(i, j+1)]/2
(3) non-maximum suppression of gradient amplitude
A 3 × 3 neighbourhood is examined, and the gradient direction at each point is quantized into one of the four sectors 0°, 45°, 90° and 135°. If the gradient magnitude at the point is greater than that of its neighbours along the gradient direction, the point may be an edge point and is retained for further determination; otherwise the point is not an edge point and its value is set to 0.
(4) Detecting and connecting edges using dual thresholds
The value of point (i, j) is compared to a threshold and if the point gradient magnitude is greater than a high threshold Th, an edge point is determined. If Tl ≦ M (i, j) ≦ Th, further determination may be awaited, and if the point gradient magnitude is less than the low threshold Tl, it may be determined not to be an edge point.
In the further determination, a high threshold is used to obtain an edge image, but the edge image is discontinuous because the high threshold is used. At this time, edges that can be connected to the contour are found in the 8 neighborhoods of all edge line endpoints in the high threshold edge image, and are connected when Tl ≦ M (i, j) is satisfied.
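In OpenCV, steps (2) to (4) correspond to what cv2.Canny performs, with the Gaussian smoothing of step (1) applied beforehand; the file name and the threshold values below are illustrative:

```python
import cv2

gray = cv2.imread("qr.png", cv2.IMREAD_GRAYSCALE)   # illustrative path
smooth = cv2.GaussianBlur(gray, (5, 5), 1.4)        # step (1)
edges = cv2.Canny(smooth, threshold1=50, threshold2=150)  # Tl, Th
```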
Since the conventional Canny edge detection algorithm is prone to the problems of detecting false edges and losing details, in a preferred embodiment, the conventional Canny edge detection algorithm is modified as follows:
The improved Canny edge detection algorithm convolves the original image f(x, y) with multiple templates, where the original image is the two-dimensional code image after smoothing and denoising. At each point covered by a template, the computation follows the gradient magnitude, the weights being continually updated, and the Gaussian function G(x, y, σ) is convolved with f(x, y) to obtain the smoothed image.
When the pixel value changes significantly, the filter weight can be set to 0, which avoids smoothing across the change and improves computational efficiency. In the improved adaptive filtering algorithm, the weight coefficients are adjusted automatically in regions without significant change.
[The iterative update formula for I^{(k+1)}(x, y) is given as an image in the original.]

Here k is the number of iterations, I^{(k)}(x, y) is the image pixel value and I′^{(k)}(x, y) its derivative; Φ(I′^{(k)}(x, y)) is a monotonically decreasing function with maximum Φ(0) = 1, and Φ(I′^{(k)}(x, y)) → 0 as I′^{(k)}(x, y) increases. I′^{(k)}(x, y) detects whether the gray level changes abruptly. The weight function w^{(k)}(x, y), which involves a constant h, the definition of I′^{(k)}(x, y) and the resulting weight coefficients are likewise given as images in the original.
in a practical application scenario, the inventor finds that the edge sharpening effect is the best after 5 iterations.
In this embodiment, a first edge contour map is obtained by using a modified Canny edge detection algorithm.
The processing procedure of the Sobel edge detection algorithm is specifically described below.
First, the conventional Sobel edge detection algorithm is as follows:
The gradient of the image f(x, y) at position (i, j) is defined as:

∇f(i, j) = [G_x(i, j), G_y(i, j)]ᵀ

and the magnitude of the gradient vector is:

|∇f(i, j)| = sqrt(G_x(i, j)² + G_y(i, j)²)
In the classical Sobel operator, the horizontal and vertical templates are convolved with the image f(x, y) to approximate the gradient values, which can be computed by the following formulas:
G_x = f(i+1, j−1) + 2f(i+1, j) + f(i+1, j+1) − f(i−1, j−1) − 2f(i−1, j) − f(i−1, j+1)
G_y = f(i−1, j+1) + 2f(i, j+1) + f(i+1, j+1) − f(i−1, j−1) − 2f(i, j−1) − f(i+1, j−1)
The traditional Sobel edge detection algorithm proceeds as follows: ① move the horizontal and vertical templates over the image from left to right and top to bottom, pixel by pixel, with the template center aligned to the corresponding image pixel; ② multiply each template weight by the corresponding image pixel value and compute the gradient value from the weighted sum; ③ replace the image pixel value at the template center with the gradient magnitude, denoted px; ④ set a suitable threshold T (which may be 150); if px ≥ T, the pixel is taken as an edge point.
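These four steps can be sketched directly with the classical templates; the file name is illustrative and the threshold T = 150 follows the text (cv2.filter2D computes correlation, which for these templates only flips the sign and leaves the magnitude unchanged):

```python
import cv2
import numpy as np

gray = cv2.imread("qr.png", cv2.IMREAD_GRAYSCALE).astype(float)
# Classical horizontal/vertical Sobel templates matching Gx and Gy above.
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
gx = cv2.filter2D(gray, -1, kx)   # steps 1-2: slide template, weighted sum
gy = cv2.filter2D(gray, -1, ky)
px = np.sqrt(gx ** 2 + gy ** 2)   # step 3: gradient magnitude
edge = (px >= 150).astype(np.uint8) * 255   # step 4: threshold T = 150
```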
Since the traditional Sobel operator is insensitive to edges in other directions, is not ideal for extracting complex texture and oblique image contours, and has low noise resistance, in a preferred embodiment, the following improvements are made to the traditional Sobel operator:
(1) alternately smoothing the image using open and closed operations in morphology;
(2) carrying out edge detection by using an expanded Sobel operator;
(3) and selecting a proper threshold value for binarization by using an edge binary Otsu algorithm.
The morphological operations are defined as follows, where F(x, y) is a grayscale image, (x, y) are image point coordinates, S(m, n) is a morphological structuring element (which may be of size 3 × 3) and (m, n) are structuring-element point coordinates:
Dilation: F ⊕ S = max{F(x − m, y − n)}

Erosion: F Θ S = min{F(x + m, y + n)}

Opening: F ∘ S = (F Θ S) ⊕ S

Closing: F • S = (F ⊕ S) Θ S
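Improvement (1) can be sketched with OpenCV's morphology primitives; the 3 × 3 flat structuring element matches the size suggested above, and the file name is illustrative:

```python
import cv2
import numpy as np

gray = cv2.imread("qr.png", cv2.IMREAD_GRAYSCALE)
S = np.ones((3, 3), np.uint8)   # 3x3 structuring element, as above
# Improvement (1): alternately smooth with opening, then closing.
smoothed = cv2.morphologyEx(gray, cv2.MORPH_OPEN, S)
smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, S)
```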
Compared with the traditional Sobel edge detection algorithm, the improved Sobel edge detection algorithm adds six directions (45°, 135°, 180°, 225°, 270° and 315°), so that more accurate edge images can be obtained.
In this embodiment, the second edge contour map is obtained by a Sobel edge detection algorithm.
In this implementation, the first edge contour map and the second edge contour map are fused to obtain a target edge contour map; and sharpening the target two-dimensional code image through the target edge contour map so as to enhance the edge of the target two-dimensional code image.
Referring to fig. 12 and 13, the graph on the right of fig. 12 is the first edge contour map obtained by the Canny operator: it contains very fine contours, but noise is easily misjudged as boundary. The graph on the right of fig. 13 is the second edge contour map obtained by the Sobel operator: its noise handling is better, but its edge localization is less accurate. In this embodiment the two maps are fused into the target edge contour map, which preserves the fine contours and accurate edge localization while retaining good noise handling, and can therefore effectively sharpen the two-dimensional code image.
In an actual application scene, further performing self-adaptive threshold binarization on the sharpened target two-dimensional code image, and then sending the processed two-dimensional code image to an identification component for identification of the two-dimensional code.
In an alternative scheme, the adaptive threshold binarization algorithm is introduced as follows:
The basic idea of the algorithm is to traverse the image while computing a moving average; a pixel is set to black if it is significantly below this average and to white otherwise. Suppose P_n is the pixel at point n in the image and f_s(n) is the sum of the last s pixels up to point n:

f_s(n) = Σ_{i=0}^{s−1} P_{n−i}
Finally, whether T(n) is 1 (black) or 0 (white) depends on whether the pixel is more than t percent darker than the average of its previous s pixels; s equal to one-eighth of the image width and t = 15 give good results:

T(n) = 1 if P_n ≤ (f_s(n)/s) · (100 − t)/100, and T(n) = 0 otherwise.
The average may also be computed symmetrically on both sides of point n, and a two-dimensional smoothed value may even be used instead of the one-dimensional one. Suppose P_{i0, j0} is the pixel at point (i0, j0) in the image and f_s(i0, j0) is the sum of the (s+1)·(s+1) pixel values in the window centered at (i0, j0):

f_s(i0, j0) = Σ_{i=i0−s/2}^{i0+s/2} Σ_{j=j0−s/2}^{j0+s/2} P_{i, j}
The sum of pixel values within a specified rectangular region of the image may be computed with an integral image. Let ii(x, y) denote the sum of all pixels above and to the left of point (x, y):

ii(x, y) = Σ_{x′ ≤ x} Σ_{y′ ≤ y} P(x′, y′)

Then the sum of pixel values within the rectangle defined by the corner points (x0, y0) and (x, y) is:

S = ii(x, y) + ii(x0, y0) − ii(x, y0) − ii(x0, y)
in order to make the image pixel transitions relatively smooth, a zigzag traversal is employed to traverse the image. In order to allow the threshold calculation to consider information in the vertical direction, the average effect of the previous line is kept during traversal, and then the average value of the current line and the average value of the previous line are averaged to be used as a new average value.
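A vectorized sketch of the two-dimensional variant with an integral image; the window is centered rather than trailing, per the symmetric option above, and s = width/8 and t = 15 follow the text:

```python
import numpy as np

def adaptive_binarize(gray, t=15):
    """Integral-image moving-average thresholding: a pixel becomes 1
    (black) when it is more than t percent darker than the mean of the
    s x s window centered on it, with s = width / 8."""
    h, w = gray.shape
    s = max(w // 8, 2)
    # ii[y, x] = sum of gray[0:y, 0:x]; one row/column of zero padding.
    ii = np.pad(gray.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    y, x = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(y - s // 2, 0, h), np.clip(y + s // 2 + 1, 0, h)
    x0, x1 = np.clip(x - s // 2, 0, w), np.clip(x + s // 2 + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    # S = ii(x,y) + ii(x0,y0) - ii(x,y0) - ii(x0,y), per the formula above.
    win = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return (gray * area * 100 <= win * (100 - t)).astype(np.uint8)
```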
Different from the prior art, the edge contour image of the two-dimensional code image is obtained through multi-operator edge detection, so that the two-dimensional code image is sharpened, the edge contour of the two-dimensional code image is enhanced, the influence of fuzzy edges of the two-dimensional code can be eliminated, the two-dimensional code image with a clean background, clear bar codes and high contrast is obtained, and the accuracy of subsequent identification can be effectively improved.
Example 3:
in an actual application scenario, there may be a vertex position recognition error caused by a problem of the algorithm itself, that is, in embodiment 1, a similarity between the first group of vertex positions and the second group of vertex positions is smaller than a preset threshold, and in order to avoid a correction error or an excessively long correction time, at this time, a vertex position obtained by one of the algorithms may be used as a target vertex position to perform image correction.
In an actual application scenario, although the LSD line segment detection algorithm is fast, it tends to have a low recognition rate when the two-dimensional code image to be corrected is blurred. With reference to embodiment 1, when the similarity between the first group of vertex positions and the second group of vertex positions is smaller than the preset similarity threshold, the total number of pixels per unit area of the two-dimensional code image to be corrected is obtained; this value reflects the sharpness of the image. When the total number of pixels per unit area is greater than a preset number threshold, i.e., when the image is relatively sharp, the first group of vertex positions (obtained by the LSD line segment detection algorithm) can be used as the target vertex positions to determine the boundary points of the two-dimensional code image to be corrected, thereby avoiding correction errors or an excessively long correction time.
When the total number of pixels per unit area of the two-dimensional code image to be corrected is not greater than the preset number threshold, i.e., when the image is relatively blurry, the second group of vertex positions (obtained through the Hough transform) can be used as the target vertex positions to determine the boundary points of the two-dimensional code image to be corrected, likewise avoiding correction errors or an excessively long correction time.
In an optional scheme, the total number of pixels per unit area may be determined from the memory space occupied by a unit-area image. That is, obtaining the total number of pixels per unit area of the two-dimensional code image to be corrected specifically comprises: establishing in advance a mapping between the memory space occupied by a unit-area image and the total number of pixels per unit area, obtaining the memory space occupied by a unit-area image of the two-dimensional code image to be corrected, and calculating the total number of pixels per unit area based on the mapping.
In this embodiment, several optional execution branches are provided to avoid correction errors caused by the algorithms themselves. Preferably, the target vertex positions are determined by fusing the two algorithms, which improves accuracy; when the vertex positions determined by the two algorithms differ greatly, either the LSD line segment detection algorithm or the Hough transform is selected according to the total number of pixels per unit area of the two-dimensional code image to be corrected, avoiding correction errors, or the user is notified to re-acquire the image, which improves the user experience.
Other processes of the two-dimensional code image correction method are described in embodiment 1 and embodiment 2, and are not described herein again.
Example 4:
referring to fig. 14, fig. 14 is a schematic structural diagram of a calibration apparatus according to an embodiment of the present invention. The correction device of the present embodiment includes one or more processors 41 and a memory 42. In fig. 14, one processor 41 is taken as an example.
The processor 41 and the memory 42 may be connected by a bus or other means, and fig. 14 illustrates the connection by a bus as an example.
The memory 42, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the correction method of a two-dimensional code image and the corresponding program instructions in embodiment 1 and/or embodiment 2. By running the non-volatile software programs, instructions and modules stored in the memory 42, the processor 41 executes the functional applications and data processing of the correction method, thereby implementing the functions of the correction method of the two-dimensional code image of embodiment 1 and/or embodiment 2.
The memory 42 may include, among other things, high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 42 may optionally include memory located remotely from processor 41, which may be connected to processor 41 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
For a method for correcting a two-dimensional code image, please refer to the description of the text in embodiments 1 and 2, which is not repeated herein.
It should be noted that, for the information interaction, execution process and other contents between the modules and units in the apparatus and system, the specific contents may refer to the description in the embodiment of the method of the present invention because the same concept is used as the embodiment of the processing method of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A correction method of a two-dimensional code image is characterized by comprising the following steps:
processing a two-dimensional code image to be corrected to obtain a boundary contour of the two-dimensional code image to be corrected;
processing the boundary contour by adopting a first algorithm to obtain a first group of vertex positions, and processing the boundary contour by adopting a second algorithm to obtain a second group of vertex positions;
blending the vertex positions of the same kind in the first group of vertex positions and the second group of vertex positions to obtain a target vertex position;
correcting the two-dimensional code image to be corrected based on the target vertex position to obtain a target two-dimensional code image;
after the boundary contour is processed by adopting the first algorithm to obtain a first group of vertex positions and the boundary contour is processed by adopting the second algorithm to obtain a second group of vertex positions, the method further comprises the following steps:
obtaining a similarity between the first set of vertex positions and the second set of vertex positions;
when the similarity between the first group of vertex positions and the second group of vertex positions is not smaller than a preset similarity threshold, performing the step of blending the same type of vertex positions in the first group of vertex positions and the second group of vertex positions to obtain a target vertex position;
the first group of vertex positions comprises four vertex positions, the second group of vertex positions comprises four vertex positions, and the obtaining of the similarity between the first group of vertex positions and the second group of vertex positions comprises:
respectively acquiring a first quadrangle formed by the positions of the first group of vertexes and a second quadrangle formed by the positions of the second group of vertexes;
respectively acquiring the areas of intersection regions formed by the first quadrangle and the second quadrangle and the areas of union regions formed by the first quadrangle and the second quadrangle;
calculating an intersection ratio of the area of the intersection region and the area of the union region, and marking the similarity between the first group of vertex positions and the second group of vertex positions through the intersection ratio.
2. The method of claim 1, wherein the first algorithm is an LSD line segment detection algorithm, wherein the first set of vertex positions includes four vertex positions, and wherein the processing the boundary contour using the first algorithm to obtain the first set of vertex positions includes:
processing the boundary contour by adopting the LSD line segment detection algorithm to obtain a plurality of boundary lines;
judging whether the number of the boundary lines is more than four, and if the number of the boundary lines is more than four, marking the longest boundary line in the plurality of boundary lines as a target boundary line;
sequentially traversing the lengths of the remaining boundary lines, selecting the longest boundary line as a boundary line to be verified, and calculating whether the distance between the midpoint of the boundary line to be verified and the midpoint of the target boundary line is smaller than a preset distance threshold value;
if not, marking the boundary line to be verified as a target boundary line, and if not, rejecting the boundary line to be verified until four target boundary lines are screened out;
and acquiring intersection points of the four target boundary lines to obtain four vertex positions, thereby obtaining a first group of vertex positions.
3. The correction method of claim 1, wherein the second algorithm is hough transform, the second set of vertex positions includes four vertex positions, and the processing the boundary contour using the second algorithm to obtain the second set of vertex positions includes:
carrying out Hough transform on the boundary contour to obtain a boundary line segment cluster;
performing slope and intercept dichotomous clustering on the boundary line segment cluster to obtain an upper boundary line segment cluster, a lower boundary line segment cluster, a left boundary line segment cluster and a right boundary line segment cluster;
respectively performing straight line fitting on the upper boundary line segment cluster, the lower boundary line segment cluster, the left boundary line segment cluster and the right boundary line segment cluster to obtain four target boundary lines;
and acquiring intersection points of the four target boundary lines to acquire four vertex positions so as to acquire a second group of vertex positions.
4. The correction method according to claim 1, wherein the number of the target vertices is four, and the correcting the to-be-corrected two-dimensional code image based on the target vertex positions to obtain the target two-dimensional code image includes:
determining the position of the target vertex on the two-dimensional code image to be corrected so as to determine the boundary point of the two-dimensional code image to be corrected;
and carrying out perspective transformation on the two-dimensional code image to be corrected, and correcting the deformation of the two-dimensional code image to be corrected to obtain a target two-dimensional code image.
5. The correction method according to any one of claims 1 to 4, characterized in that the correction method further comprises:
performing edge detection on the target two-dimensional code image by adopting a third algorithm to obtain a first edge contour map, and performing edge detection on the target two-dimensional code image by adopting a fourth algorithm to obtain a second edge contour map;
fusing the first edge contour map and the second edge contour map to obtain a target edge contour map;
and sharpening the target two-dimensional code image through the target edge contour map so as to enhance the edge of the target two-dimensional code image.
6. The correction method according to claim 5, characterized in that the third algorithm is a Canny edge detection algorithm and the fourth algorithm is a Sobel edge detection algorithm;
fusing the first edge contour map and the second edge contour map to obtain a target edge contour map, wherein the target edge contour map comprises:
and performing pixel superposition on the first edge contour map and the second edge contour map according to a preset proportion to obtain a target edge contour.
7. The correction method according to any one of claims 1 to 4, wherein the processing the two-dimensional code image to be corrected to obtain the boundary contour of the two-dimensional code image to be corrected comprises:
carrying out binarization processing on a two-dimensional code image to be corrected;
and performing morphological corrosion on the binarized two-dimensional code image to be corrected to obtain a boundary contour of the two-dimensional code image to be corrected.
8. A calibration device, comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor and programmed to perform the correction method of any of claims 1 to 7.
CN201910602805.6A 2019-07-05 2019-07-05 Correction method and correction device for two-dimensional code image Active CN110309687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910602805.6A CN110309687B (en) 2019-07-05 2019-07-05 Correction method and correction device for two-dimensional code image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910602805.6A CN110309687B (en) 2019-07-05 2019-07-05 Correction method and correction device for two-dimensional code image

Publications (2)

Publication Number Publication Date
CN110309687A CN110309687A (en) 2019-10-08
CN110309687B true CN110309687B (en) 2020-06-05

Family

ID=68079002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910602805.6A Active CN110309687B (en) 2019-07-05 2019-07-05 Correction method and correction device for two-dimensional code image

Country Status (1)

Country Link
CN (1) CN110309687B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807783B (en) * 2019-10-28 2023-07-18 衢州学院 Efficient visual field region segmentation method and device for achromatic long video
CN111337011A (en) * 2019-12-10 2020-06-26 亿嘉和科技股份有限公司 Indoor positioning method based on laser and two-dimensional code fusion
CN111127357B (en) * 2019-12-18 2021-05-04 北京城市网邻信息技术有限公司 House type graph processing method, system, device and computer readable storage medium
CN111311497B (en) * 2020-02-12 2023-05-05 广东工业大学 Bar code image angle correction method and device
CN111310508B (en) * 2020-02-14 2021-08-10 北京化工大学 Two-dimensional code identification method
CN111563930B (en) * 2020-04-29 2023-07-07 达闼机器人股份有限公司 Positioning method, device, medium, electronic equipment and auxiliary positioning module
CN112651259A (en) * 2020-12-29 2021-04-13 芜湖哈特机器人产业技术研究院有限公司 Two-dimensional code positioning method and mobile robot positioning method based on two-dimensional code
CN112766012B (en) * 2021-02-05 2021-12-17 腾讯科技(深圳)有限公司 Two-dimensional code image recognition method and device, electronic equipment and storage medium
CN113034415B (en) * 2021-03-23 2021-09-14 哈尔滨市科佳通用机电股份有限公司 Method for amplifying small parts of railway locomotive image
CN113128246A (en) * 2021-03-25 2021-07-16 维沃移动通信有限公司 Information processing method and device and electronic equipment
CN113158704B (en) * 2021-04-07 2023-06-09 福州符号信息科技有限公司 Method and system for rapidly positioning Dotcode code
CN113111674A (en) * 2021-04-12 2021-07-13 广东奥普特科技股份有限公司 Aztec code positioning and decoding method, system, equipment and storage medium
CN113536822B (en) * 2021-07-28 2024-05-03 中移(杭州)信息技术有限公司 Two-dimensional code correction method and device and computer readable storage medium
CN116030120B (en) * 2022-09-09 2023-11-24 北京市计算中心有限公司 Method for identifying and correcting hexagons
CN116385742B (en) * 2023-03-20 2024-04-12 北京兆讯恒达技术有限公司 Low-quality bar code image signal extraction method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999772A (en) * 2012-11-14 2013-03-27 韩偲铭 Novel array-type two-dimension code encoding and decoding methods
CN104424457A (en) * 2013-08-20 2015-03-18 复旦大学 Method for identifying two-dimensional code under the condition of nonlinear distortion
CN108564557A (en) * 2018-05-31 2018-09-21 京东方科技集团股份有限公司 Method for correcting image and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745221B (en) * 2014-01-08 2017-05-24 杭州晟元数据安全技术股份有限公司 Two-dimensional code image correction method
CN105069389B (en) * 2015-07-27 2017-10-31 福建联迪商用设备有限公司 Quick Response Code piecemeal coding/decoding method and system
CN106156684B (en) * 2016-06-30 2019-01-18 南京理工大学 A kind of two-dimensional code identification method and device
CN107944324A (en) * 2017-11-16 2018-04-20 凌云光技术集团有限责任公司 A kind of Quick Response Code distortion correction method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999772A (en) * 2012-11-14 2013-03-27 韩偲铭 Novel array-type two-dimension code encoding and decoding methods
CN104424457A (en) * 2013-08-20 2015-03-18 复旦大学 Method for identifying two-dimensional code under the condition of nonlinear distortion
CN108564557A (en) * 2018-05-31 2018-09-21 京东方科技集团股份有限公司 Method for correcting image and device

Also Published As

Publication number Publication date
CN110309687A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN110309687B (en) Correction method and correction device for two-dimensional code image
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN109948393B (en) Method and device for positioning one-dimensional bar code
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
CN114529459B (en) Method, system and medium for enhancing image edge
CN102354363A (en) Identification method of two-dimensional barcode image on high-reflect light cylindrical metal
CN106709500B (en) Image feature matching method
CN108573184B (en) Two-dimensional code positioning method, module and computer readable storage medium
CN105844277B (en) Label identification method and device
CN112580383B (en) Two-dimensional code identification method and device, electronic equipment and storage medium
CN114359042A (en) Point cloud splicing method and device, three-dimensional scanner and electronic equipment
CN107578011A (en) The decision method and device of key frame of video
CN109508571B (en) Strip-space positioning method and device, electronic equipment and storage medium
WO2022021687A1 (en) Method for positioning quick response code area, and electronic device and storage medium
CN115082888A (en) Lane line detection method and device
CN115456003A (en) DPM two-dimensional code identification method and storage medium
CN112699704B (en) Method, device, equipment and storage device for detecting bar code
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
TW201727534A (en) Barcode decoding method
CN111753573B (en) Two-dimensional code image recognition method and device, electronic equipment and readable storage medium
CN115880228A (en) Multi-defect merging method and device, computer equipment and storage medium
CN113095102B (en) Method for positioning bar code area
CN114549649A (en) Feature matching-based rapid identification method for scanned map point symbols
CN111753723B (en) Fingerprint identification method and device based on density calibration
CN111178111A (en) Two-dimensional code detection method, electronic device, storage medium and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant