CN116358448A - Coding target and positioning and decoding method based on same - Google Patents
- Publication number
- CN116358448A (application CN202310358717.2A)
- Authority
- CN
- China
- Prior art keywords
- point
- target
- contour
- circle
- coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2509—Color coding
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention provides a coding target and a positioning and decoding method based on the coding target. Correcting and optimizing the target center through the inner and outer concentric positioning circles yields higher positioning accuracy; the annularly distributed coding points reduce the area of the coding region; and computing signal similarity improves decoding accuracy.
Description
Technical Field
The invention relates to a coding target and a positioning and decoding method based on the coding target.
Background
In many vision-measurement applications, a three-dimensional measurement task is typically completed by arranging coded targets on the surface of the measured object, identifying each target and solving its central image coordinates, and finally applying a vision-measurement algorithm. According to their coding pattern characteristics, coding targets are mainly divided into annular, square, and distributed coding targets. Existing targets have large coding regions, so the overall target size is large and the targets occupy more area when arranged; their positioning-circle regions are small, so the positioning accuracy of the target center is limited. Traditional decoding methods directly binarize and decode the detected coding points, giving a low tolerance for misidentified coding points.
Disclosure of Invention
The invention aims to provide a coding target and a high-precision positioning and decoding method thereof.
The target consists of two concentric positioning circles and coding points. The concentric positioning circles are used for high-precision positioning of the target: the inner concentric disk is white, and the annular area between the outer and inner concentric circles is black. The coding points are white, distributed in the annular area, and used for target decoding and identification.
The high-precision positioning and decoding method of the coding target comprises the following steps:
step S1, shooting the target with a calibrated camera to obtain the target gray-scale image Image0;
step S2, correcting distortion in Image0 using the camera's calibrated distortion coefficients to obtain an undistorted target image Image1;
step S3, performing adaptive binarization on Image1 to obtain the binary image Image2;
step S4, extracting the contours in Image2;
step S5, traversing each contour and performing the following steps:
step S6, finding the leftmost and rightmost points of the contour and determining the straight line through them; traversing each point of the contour and judging whether the point is above the line: if so, performing step S7, otherwise performing step S8;
step S7, if the gray value of the point's upper neighbor is larger than that of its lower neighbor, the point may lie on the target's outer positioning circle contour and is marked as an effective contour point;
step S8, if the gray value of the point's upper neighbor is smaller than that of its lower neighbor, the point may lie on the target's outer positioning circle contour and is marked as an effective contour point;
step S9, if the ratio of effective contour points to total contour points is greater than the threshold 0.4, the contour may be the target's outer positioning circle contour; ellipse fitting is performed on it to obtain the major and minor axes, rotation angle, and center coordinates of the ellipse;
step S10, traversing all points of the contour and calculating each point's nearest Euclidean distance to the fitted ellipse; if the distance is smaller than a threshold, the point may lie on the outer positioning circle contour;
step S11, if the ratio of such points to total contour points is greater than the threshold 0.7, the contour is the coarse contour of the target's outer positioning circle and is marked counter_out_pre;
step S12, taking the bounding rectangle of counter_out_pre's fitted ellipse as the ROI and extracting all contours within the corresponding ROI of Image1;
step S13, traversing each section of contour of the ROI, and performing the following steps:
step S14, finding the leftmost and rightmost points of the contour and determining the straight line through them; traversing each point of the contour and judging whether the point is above the line: if so, performing step S15, otherwise performing step S16;
step S15, if the gray value of the point's upper neighbor is smaller than that of its lower neighbor, the point may lie on the target's inner positioning circle or a coding-point contour and is marked as an inner-circle effective contour point; otherwise it is marked as an outer-circle effective contour point;
step S16, if the gray value of the point's upper neighbor is larger than that of its lower neighbor, the point may lie on the target's inner positioning circle or a coding-point contour and is marked as an inner-circle effective contour point; otherwise it is marked as an outer-circle effective contour point;
step S17, if the ratio of inner-circle effective contour points to the contour's total points is greater than the threshold 0.4, the contour may be the fine contour of the target's inner positioning circle or of a coding point; it is marked counter_in, and the set of such contours is marked counter_in_collection;
step S18, if the ratio of outer-circle effective contour points to the contour's total points is greater than the threshold 0.4, the contour may be the fine contour of the target's outer positioning circle; it is marked counter_out, and the set of such contours is marked counter_out_collection;
step S19, if counter_in_collection contains fewer than 2 contours, or counter_out_collection does not contain exactly 1 contour, the ROI is not a target; skip it and process the next ROI; otherwise continue with the following steps;
step S20, the single counter_out in counter_out_collection is the target's outer positioning circle contour; the major and minor axes of its fitted ellipse are recorded as a_o and b_o, and the ellipse center coordinates as (x_o, y_o);
step S21, calculating the Euclidean distance between the center of each contour in counter_in_collection and the center of counter_out; the counter_in with the smallest distance is the target's inner positioning circle contour, and the major and minor axes of its fitted ellipse are recorded as a_i and b_i, with ellipse center coordinates (x_i, y_i);
step S22, combining the inner-ellipse center (x_i, y_i) and the outer-ellipse center (x_o, y_o) gives the corrected center coordinates (x_c, y_c) of the inner and outer concentric circles;
Step S23, coordinates of four vertexes of a rectangular region of the target ROI are obtained, and homogeneous coordinates of pixel points in the region are recorded as P;
step S24, a square area is set and is used as a forward projection area after perspective transformation of the target ROI graph;
step S25, calculating a homography matrix H between two projections through four vertex coordinates of the ROI rectangle and four vertex coordinates of the square area;
step S26, the homogeneous coordinates of the ROI pixels after perspective transformation are P' = HP, giving the orthographic projection image of the single target image, in which the target positioning circles are projected as circles;
step S27, determining, on the target's orthographic projection image, the pixel position of the circle on which the coding-point centers lie, from the proportional relation between the design sizes of the outer positioning circle and the coding points, together with the pixel size of the outer positioning circle and the pixel position of the target center.
Step S28, taking one point every 1 DEG on the circle where the coding point is located, traversing 360 DEG to obtain 360 points, and recording an array formed by gray values of the 360 points as signals;
step S29, normalizing the signals;
step S30, calculating an angle theta occupied by each coding point on a circle where the coding point is located according to the size design values of the target and the coding point;
step S31, recording CodeLib as all binary codes which can be formed by the n-bit coding target;
step S32, traversing each binary code in the CodeLib, generating a reference signal array signal_ref1 with the same length as the signal length for each code, wherein the rule is as follows: traversing each bit of the code, wherein when the code bit is 1, writing corresponding theta elements in the signal_ref1 into 1, when the code bit is 0, writing corresponding theta elements in the signal_ref1 into-1, writing (360/n-theta) 1 in the signal_ref1 between two adjacent code bits, and finally writing (360/n-theta) 1 in the signal_ref1 when the last code bit is the last code bit;
step S33, recording the length of a reference signal array signal_ref2 as 2 times as long as the length of the signal_ref1, wherein the array elements are formed by connecting two signal_ref1 end to end;
step S34, calculating the correlation between the acquisition signal and each coded reference signal array signal_ref2, wherein the code corresponding to the maximum correlation is the two-dimensional code value of the target;
step S35, the two-dimensional coding is circularly shifted left by 1 bit, decimal code values corresponding to each left shift are calculated, and the minimum code value is the coding value of the target.
The invention provides a novel coding target and a high-precision identification, positioning and decoding method thereof. By optimizing the pattern features and structural dimensions of the coding target, high-precision positioning of the target center can be achieved, while the target decoding algorithm provides highly robust target identification and decoding.
Drawings
FIG. 1 is a flow chart of a target high precision positioning and decoding method according to an embodiment of the present invention;
FIG. 2 is a schematic representation of a target according to an embodiment of the present invention;
FIG. 3 is the target gray-scale image Image0;
FIG. 4 is the binary image Image2;
FIG. 5 shows the outer positioning circle contours of all targets;
FIG. 6 is a target ROI area;
FIG. 7 shows the inner and outer positioning circle contours and the center of the target;
FIG. 8 is the orthographic projection image obtained by perspective transformation of a single target image;
FIG. 9 is the circle on which the coding-point centers lie;
FIG. 10 illustrates the angle θ occupied by each coding point;
FIG. 11 illustrates generation of the reference signal signal_ref1.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention is rendered below with reference to the accompanying drawings and the detailed description.
FIG. 1 is a flow chart of a target high-precision positioning and decoding method according to an embodiment of the invention.
As shown in FIG. 2, the target consists of two concentric positioning circles and coding points. The concentric positioning circles are used for high-precision positioning of the target: the inner concentric disk is white, and the annular area between the outer and inner concentric circles is black. The coding points are white, distributed in the annular area, and used for target decoding and identification.
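As a visualization aid, a pattern of this kind can be reproduced with the sketch below; all radii and the example code word are illustrative assumptions, not the patent's design values:

```python
import numpy as np

def render_target(size=200, code=(1, 0, 1, 1, 0, 0, 1, 0, 0, 0)):
    """Render the described target: a white inner concentric disk, a black
    annulus, and a white coding point on a ring inside the annulus for each
    '1' bit. Radii and the code word are illustrative placeholders."""
    r_outer, r_inner, r_code, r_dot = 90, 30, 60, 10
    c = size / 2
    Y, X = np.mgrid[0:size, 0:size]
    d2 = (X - c) ** 2 + (Y - c) ** 2
    img = np.full((size, size), 255, dtype=np.uint8)   # white background
    img[d2 <= r_outer ** 2] = 0                        # black annular region
    img[d2 <= r_inner ** 2] = 255                      # white inner disk
    n = len(code)
    for k, bit in enumerate(code):
        if bit:                                        # white coding point
            a = 2 * np.pi * k / n
            dx, dy = c + r_code * np.cos(a), c + r_code * np.sin(a)
            img[(X - dx) ** 2 + (Y - dy) ** 2 <= r_dot ** 2] = 255
    return img
```

Rendering such synthetic targets is also a convenient way to test the detection pipeline before imaging printed ones.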
The high-precision positioning and decoding method of the coding target comprises the following steps:
step S1, shooting the target with a calibrated camera to obtain the target gray-scale image Image0, as shown in FIG. 3;
step S2, correcting distortion in Image0 using the camera's calibrated distortion coefficients to obtain an undistorted target image Image1;
step S3, performing adaptive binarization on Image1 to obtain the binary image Image2, as shown in FIG. 4;
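The text does not pin the adaptive binarization of step S3 to a particular algorithm; a minimal mean-based sketch in NumPy follows, where the block size and offset are illustrative assumptions rather than values from the invention:

```python
import numpy as np

def adaptive_binarize(gray, block=15, offset=5):
    """Mean-based adaptive threshold: a pixel becomes foreground (255) when
    it exceeds the local block mean minus a small offset. Local means are
    computed with an integral image so the cost is independent of block."""
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # leading zero row/column
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = ys[:, None], ys[:, None] + block  # window rows per output pixel
    x0, x1 = xs[None, :], xs[None, :] + block  # window cols per output pixel
    local_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    local_mean = local_sum / (block * block)
    return np.where(gray > local_mean - offset, 255, 0).astype(np.uint8)
```

OpenCV's `cv2.adaptiveThreshold` provides an equivalent, heavily optimized operation for production use.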
step S4, extracting the contours in Image2;
step S5, traversing each contour and performing the following steps:
step S6, finding the leftmost and rightmost points of the contour and determining the straight line through them; traversing each point of the contour and judging whether the point is above the line: if so, performing step S7, otherwise performing step S8;
step S7, if the gray value of the point's upper neighbor is larger than that of its lower neighbor, the point may lie on the target's outer positioning circle contour and is marked as an effective contour point;
step S8, if the gray value of the point's upper neighbor is smaller than that of its lower neighbor, the point may lie on the target's outer positioning circle contour and is marked as an effective contour point;
step S9, if the ratio of effective contour points to total contour points is greater than the threshold 0.4, the contour may be the target's outer positioning circle contour; ellipse fitting is performed on it to obtain the major and minor axes, rotation angle, and center coordinates of the ellipse;
step S10, traversing all points of the contour and calculating each point's nearest Euclidean distance to the fitted ellipse; if the distance is smaller than a threshold, the point may lie on the outer positioning circle contour;
step S11, if the ratio of such points to total contour points is greater than the threshold 0.7, the contour is the outer positioning circle contour of a target, as shown in FIG. 5, and is marked counter_out_pre;
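The effective-contour-point test of steps S6 to S9 can be sketched as follows. This is a hedged illustration, assuming the contour arrives as an (N, 2) array of integer (x, y) pixel coordinates and that image y grows downward, so "above the line" corresponds to a negative signed side:

```python
import numpy as np

def effective_point_ratio(contour, gray):
    """For a candidate outer positioning circle (dark ring on a light
    background), count contour points whose vertical gray gradient matches
    the expected light-outside/dark-inside transition, and return the ratio
    of such points to the total."""
    xs, ys = contour[:, 0], contour[:, 1]
    left = contour[np.argmin(xs)]
    right = contour[np.argmax(xs)]
    # Signed side of the line through the leftmost and rightmost points.
    dx, dy = right[0] - left[0], right[1] - left[1]
    side = dx * (ys - left[1]) - dy * (xs - left[0])
    valid = 0
    for (x, y), s in zip(contour, side):
        if not (0 < y < gray.shape[0] - 1):
            continue  # skip points whose vertical neighbors leave the image
        above, below = int(gray[y - 1, x]), int(gray[y + 1, x])
        # In image coordinates y grows downward: side < 0 means "above".
        if (s < 0 and above > below) or (s > 0 and above < below):
            valid += 1
    return valid / len(contour)
```

A contour passing the 0.4 threshold would then proceed to ellipse fitting (e.g. `cv2.fitEllipse`) as in step S9.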
step S12, taking the bounding rectangle of counter_out_pre's fitted ellipse as the ROI, as shown in FIG. 6, and accurately extracting all sub-pixel contours within the corresponding ROI of Image1;
step S13, traversing each section of contour of the ROI, and performing the following steps:
step S14, finding the leftmost and rightmost points of the contour and determining the straight line through them; traversing each point of the contour and judging whether the point is above the line: if so, performing step S15, otherwise performing step S16;
step S15, if the gray value of the point's upper neighbor is smaller than that of its lower neighbor, the point may lie on the target's inner positioning circle or a coding-point contour and is marked as an inner-circle effective contour point; otherwise it is marked as an outer-circle effective contour point;
step S16, if the gray value of the point's upper neighbor is larger than that of its lower neighbor, the point may lie on the target's inner positioning circle or a coding-point contour and is marked as an inner-circle effective contour point; otherwise it is marked as an outer-circle effective contour point;
step S17, if the ratio of inner-circle effective contour points to the contour's total points is greater than the threshold 0.4, the contour may be the fine contour of the target's inner positioning circle or of a coding point; it is marked counter_in, and the set of such contours is marked counter_in_collection;
step S18, if the ratio of outer-circle effective contour points to the contour's total points is greater than the threshold 0.4, the contour may be the fine contour of the target's outer positioning circle; it is marked counter_out, and the set of such contours is marked counter_out_collection;
step S19, if counter_in_collection contains fewer than 2 contours, or counter_out_collection does not contain exactly 1 contour, the ROI is not a target; skip it and process the next ROI; otherwise continue with the following steps;
step S20, the single counter_out in counter_out_collection is the target's outer positioning circle contour, as shown in FIG. 7; the major and minor axes of its fitted ellipse are recorded as a_o and b_o, and the ellipse center coordinates as (x_o, y_o);
step S21, calculating the Euclidean distance between the center of each contour in counter_in_collection and the center of counter_out; the counter_in with the smallest distance is the target's inner positioning circle contour, as shown in FIG. 7, and the major and minor axes of its fitted ellipse are recorded as a_i and b_i, with ellipse center coordinates (x_i, y_i);
step S22, combining the inner-ellipse center (x_i, y_i) and the outer-ellipse center (x_o, y_o) gives the corrected target center coordinates (x_c, y_c); the result of the target centering is shown in FIG. 7.
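The exact combination formula of step S22 did not survive the text extraction. As a loudly hedged placeholder, the sketch below assumes an area-weighted mean of the two fitted-ellipse centers, which reduces to the plain midpoint when the fitted ellipses are the same size; the patent's actual weighting may differ:

```python
def corrected_center(xo, yo, ao, bo, xi, yi, ai, bi):
    """Combine the outer-ellipse center (xo, yo) and inner-ellipse center
    (xi, yi) into one corrected target center. The weights ao*bo and ai*bi
    (proportional to the fitted-ellipse areas) are an assumption standing in
    for the garbled formula in the source text."""
    wo, wi = ao * bo, ai * bi
    xc = (wo * xo + wi * xi) / (wo + wi)
    yc = (wo * yo + wi * yi) / (wo + wi)
    return xc, yc
```

Any symmetric combination of the two centers serves the same purpose here: averaging two independent ellipse fits suppresses the eccentricity bias of a single fitted contour.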
step S23, obtaining the coordinates of the four vertices of the target ROI rectangle; the homogeneous coordinates of the pixels in the region are recorded as P;
step S24, setting a square area as the orthographic projection area after perspective transformation of the target ROI image;
step S25, calculating the homography matrix H between the two projections from the four vertex coordinates of the ROI rectangle and the four vertex coordinates of the square area;
step S26, the homogeneous coordinates of the ROI pixels after perspective transformation are P' = HP, giving the orthographic projection image of the single target image, as shown in FIG. 8, in which the target positioning circles are projected as circles;
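Steps S23 to S26 amount to estimating a homography from four point correspondences and mapping homogeneous pixel coordinates through it. A standard direct-linear-transform (DLT) sketch follows; with exactly four vertex pairs the system is exactly determined, so no least-squares refinement is needed:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping the four ROI-rectangle
    vertices (src) onto the four square-region vertices (dst). Each
    correspondence contributes two linear rows; H is the null vector of the
    stacked system, i.e. the singular vector of the smallest singular value."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map one point through H using homogeneous coordinates P' = H P."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

In practice the same result is obtained with `cv2.getPerspectiveTransform` followed by `cv2.warpPerspective` over the whole ROI.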
step S27, determining, on the target's orthographic projection image, the pixel position of the circle on which the coding-point centers lie, from the proportional relation between the design sizes of the outer positioning circle and the coding points, together with the pixel size of the outer positioning circle and the pixel position of the target center, as shown in FIG. 9.
Step S28, taking one point every 1 DEG on the circle where the coding point is located, traversing 360 DEG to obtain 360 points, and recording an array formed by gray values of the 360 points as signals;
step S29, normalizing the signals;
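Steps S28 and S29 sample the code circle once per degree and normalize the result. The zero-mean, unit-peak normalization below is an assumption (the text only says "normalizing"), and nearest-pixel lookup stands in for whatever interpolation the implementation uses:

```python
import numpy as np

def sample_code_circle(img, cx, cy, r):
    """Sample the orthographic image every 1 degree on the circle of radius
    r where the coding-point centers lie, then normalize the 360 gray values
    to zero mean and unit peak amplitude so white coding points read near +1
    and the dark annulus background near -1."""
    ang = np.deg2rad(np.arange(360))
    xs = np.round(cx + r * np.cos(ang)).astype(int)
    ys = np.round(cy + r * np.sin(ang)).astype(int)
    signal = img[ys, xs].astype(np.float64)
    signal -= signal.mean()
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal
```

This normalization makes the subsequent correlation against the ±1-valued reference arrays insensitive to the absolute brightness of the capture.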
step S30, as shown in FIG. 10, calculating the angle θ occupied by each coding point on its circle from the design sizes of the target and the coding points;
step S31, recording as CodeLib all binary codes that the n-bit coding target can form;
step S32, as shown in FIG. 11, traversing each binary code in CodeLib and generating for it a reference signal array signal_ref1 of the same length as signal, by the following rule: traverse each bit of the code; when the bit is 1, write θ elements of value 1 into signal_ref1; when the bit is 0, write θ elements of value -1; between two adjacent bits, and after the last bit, write (360/n - θ) elements of value -1;
step S33, forming a reference signal array signal_ref2 of twice the length of signal_ref1 by joining two copies of signal_ref1 end to end;
step S34, calculating the correlation between the acquired signal and each code's reference array signal_ref2; the code with the maximum correlation gives the binary code of the target;
step S35, cyclically left-shifting the binary code by 1 bit at a time and calculating the decimal value after each shift; the minimum of these values is the code value of the target.
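Steps S30 to S35 can be sketched end to end as below. Two assumptions are made explicit: the inter-bit gap is written as -1 (the sign of the gap value is garbled in the source text, but the gap is dark background), and a small candidate list stands in for the full CodeLib enumeration; correlation is evaluated by brute force over all 360 cyclic shifts via the doubled reference:

```python
import numpy as np

def make_ref(code_bits, theta):
    """Reference signal for one n-bit code (step S32): each bit spans theta
    samples of +1 (bit 1) or -1 (bit 0), followed by 360//n - theta samples
    of -1 for the dark gap between coding-point positions."""
    gap = 360 // len(code_bits) - theta
    ref = []
    for b in code_bits:
        ref += [1.0 if b else -1.0] * theta
        ref += [-1.0] * gap
    return np.array(ref)

def decode(signal, codes, theta):
    """Steps S33-S35: correlate the acquired 360-sample signal against each
    candidate code at every cyclic shift (using a doubled reference array),
    keep the best-correlating code, then take the minimum decimal value over
    its cyclic bit-rotations as the rotation-invariant code value."""
    best_code, best_corr = None, -np.inf
    for code in codes:
        ref2 = np.tile(make_ref(code, theta), 2)   # signal_ref2
        for s in range(360):
            c = float(np.dot(signal, ref2[s:s + 360]))
            if c > best_corr:
                best_corr, best_code = c, code
    n = len(best_code)
    vals = [int("".join(str(b) for b in best_code[i:] + best_code[:i]), 2)
            for i in range(n)]
    return best_code, min(vals)
```

Because the correlation is taken over whole reference waveforms rather than per-point binary decisions, a single misread coding point degrades one correlation score instead of flipping a decoded bit outright, which is the fault-tolerance claim of the method.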
Claims (10)
1. The coding target is characterized by comprising two concentric positioning circles and coding points, wherein the concentric positioning circles are used for positioning the target, the inner concentric disk is white, the annular area between the outer and inner concentric circles is black, and the coding points are white and distributed in the annular area for target decoding and identification.
2. The encoded target of claim 1, wherein the number of codes is not limited to 10.
3. A method of coding-target-based localization and decoding employing a coding target according to any one of claims 1 to 2, the method comprising: a detection process for the inner and outer positioning circles of the target.
4. The method for coding-target-based localization and decoding as claimed in claim 3, wherein the detection process for the inner and outer positioning circles of the coding target comprises:
step S14, finding the leftmost and rightmost points of the coding target contour and determining the straight line through them; traversing each point of the contour and judging whether the point is above the line: if so, performing step S15; if not, performing step S16;
step S15, if the gray value of the point's upper neighbor is smaller than that of its lower neighbor, the point may lie on the target's inner positioning circle or a coding-point contour and is marked as an inner-circle effective contour point; otherwise it is marked as an outer-circle effective contour point;
step S16, if the gray value of the point's upper neighbor is larger than that of its lower neighbor, the point may lie on the target's inner positioning circle or a coding-point contour and is marked as an inner-circle effective contour point; otherwise it is marked as an outer-circle effective contour point;
step S17, if the ratio of inner-circle effective contour points to the contour's total points is greater than the threshold 0.4, the contour may be the fine contour of the target's inner positioning circle or of a coding point; it is marked counter_in, and the set of such contours is marked counter_in_collection;
step S18, if the ratio of outer-circle effective contour points to the contour's total points is greater than the threshold 0.4, the contour may be the fine contour of the target's outer positioning circle; it is marked counter_out, and the set of such contours is marked counter_out_collection;
step S19, if counter_in_collection contains fewer than 2 contours, or counter_out_collection does not contain exactly 1 contour, the ROI is not a target; skip it and process the next ROI; otherwise continue with the following steps;
step S20, the single counter_out in counter_out_collection is the target's outer positioning circle contour; the major and minor axes of its fitted ellipse are recorded as a_o and b_o, and the ellipse center coordinates as (x_o, y_o);
step S21, calculating the Euclidean distance between the center of each contour in counter_in_collection and the center of counter_out; the counter_in with the smallest distance is the target's inner positioning circle contour, and the major and minor axes of its fitted ellipse are recorded as a_i and b_i, with ellipse center coordinates (x_i, y_i).
5. The coding-target-based localization and decoding method of claim 3, further comprising:
a perspective transformation process during coding-target decoding.
6. The method of coding-target-based localization and decoding of claim 5, wherein the perspective transformation process during coding-target decoding comprises:
step S23, obtaining the coordinates of the four vertices of the coding-target ROI rectangle; the homogeneous coordinates of the pixels in the region are recorded as P;
step S24, setting a square area as the orthographic projection area after perspective transformation of the target ROI image;
step S25, calculating the homography matrix H between the two projections from the four vertex coordinates of the ROI rectangle and the four vertex coordinates of the square area;
step S26, obtaining the orthographic projection image of the single target image from the homogeneous coordinates P' = HP of the ROI pixels after perspective transformation, in which the target positioning circle is projected as a circle.
7. The coding-target-based localization and decoding method of claim 3, further comprising: a code-value signal acquisition process during target decoding.
8. The method of claim 7, wherein the code value signal acquisition process when decoding the target comprises:
step S28, sampling one point per 1° over 360° on the circle where the coding points lie, obtaining 360 points; the array of their gray values is recorded as signal;
step S29, normalizing signal.
9. The method for coding-target-based localization and decoding as defined in claim 3, further comprising: a reference-signal generation and decoding process.
10. The method of coding-target-based localization and decoding of claim 9, wherein the reference-signal generation and decoding process comprises:
step S30, calculating an angle theta occupied by each coding point on a circle where the coding point is located according to the size design values of the target and the coding point;
step S31, recording CodeLib as all binary codes which can be formed by the n-bit coding target;
step S32, traversing each binary code in CodeLib and generating for it a reference signal array signal_ref1 of the same length as signal, by the following rule: traverse each bit of the code; when the bit is 1, write θ elements of value 1 into signal_ref1; when the bit is 0, write θ elements of value -1; between two adjacent bits, and after the last bit, write (360/n - θ) elements of value -1;
step S33, forming a reference signal array signal_ref2 of twice the length of signal_ref1 by joining two copies of signal_ref1 end to end;
step S34, calculating the correlation between the acquired signal and each code's reference array signal_ref2; the code with the maximum correlation gives the binary code value of the target.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310358717.2A | 2023-04-04 | 2023-04-04 | Coding target and positioning and decoding method based on same |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116358448A | 2023-06-30 |
Family
ID=86931784
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310358717.2A | Coding target and positioning and decoding method based on same | 2023-04-04 | 2023-04-04 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN116358448A |
Similar Documents

| Publication | Title |
|---|---|
| CN107633192B | Bar code segmentation and reading method based on machine vision under complex background |
| CN108764004B | Annular coding mark point decoding and identifying method based on coding ring sampling |
| US7571864B2 | Method and system for creating and using barcodes |
| CN109285198B | Method for coding and identifying annular coding mark points |
| CN110415257B | Gas-liquid two-phase flow overlapped bubble image segmentation method |
| WO2017041600A1 | Chinese-sensitive code feature pattern detection method and system |
| CN109215016B | Identification and positioning method for coding mark |
| US20100034467A1 | Image recognition and distance calculation methods and devices |
| CN113129384B | Binocular vision system flexible calibration method based on one-dimensional coding target |
| CN111241860A | Positioning and decoding method for arbitrary material annular code |
| CN114792104A | Method for identifying and decoding ring-shaped coding points |
| CN111256607B | Deformation measurement method based on three-channel mark points |
| CN115345870A | Method for realizing large-scene precise deformation monitoring based on monocular camera and self-luminous target with code |
| CN113313628B | Affine transformation and mean pixel method-based annular coding point robustness identification method |
| CN113129397B | Decoding method of parallelogram coding mark based on graphic geometric relation |
| CN107808165B | Infrared image matching method based on SUSAN corner detection |
| Zou et al. | Design of a new coded target with large coding capacity for close-range photogrammetry and research on recognition algorithm |
| CN110969612B | Two-dimensional code printing defect detection method |
| CN116358448A | Coding target and positioning and decoding method based on same |
| CN111931537B | Granular QR two-dimensional code positioning method |
| CN111199163A | Edge detection and positioning identification method of annular code |
| CN115512343A | Method for correcting and recognizing reading of circular pointer instrument |
| CN115376131A | Design and identification method of dot-shaped coding mark |
| CN114781417A | Two-dimensional code identification method, two-dimensional code identification device and electronic equipment |
| CN112184803B | Calibration plate and calibration method |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |