CN107490346B - RFID multi-label network three-dimensional measurement modeling method based on vision - Google Patents

RFID multi-label network three-dimensional measurement modeling method based on vision

Info

Publication number
CN107490346B
CN107490346B CN201710727530.XA CN201710727530A
Authority
CN
China
Prior art keywords
image
rfid label
rfid
vertical
ith
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710727530.XA
Other languages
Chinese (zh)
Other versions
CN107490346A (en)
Inventor
俞晓磊
庄笑
汪东华
赵志敏
陆东升
刘振鲁
刘梦婕
董定邦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Institute Of Quality And Standardization
Original Assignee
Jiangsu Institute Of Quality And Standardization
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Institute Of Quality And Standardization filed Critical Jiangsu Institute Of Quality And Standardization
Priority to CN201710727530.XA priority Critical patent/CN107490346B/en
Publication of CN107490346A publication Critical patent/CN107490346A/en
Application granted granted Critical
Publication of CN107490346B publication Critical patent/CN107490346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Aiming at the requirements of warehouse entry/exit information acquisition and cargo inventory in modern smart warehouse logistics, a vision-based RFID multi-tag network three-dimensional measurement modeling method is provided. Images of the RFID tags are acquired from multiple angles by two cameras, the acquired images are processed with an iterative threshold segmentation and morphology method, and the three-dimensional coordinates of the RFID tags are obtained by template matching. The method effectively reduces the complexity of the algorithm and can acquire the three-dimensional coordinates of the RFID tags in real time. The invention has important theoretical and practical value for modern intelligent warehouse logistics.

Description

RFID multi-label network three-dimensional measurement modeling method based on vision
Technical Field
The invention relates to the field of RFID technology and Internet of Things applications, in particular to three-dimensional coordinate determination and network modeling of an RFID multi-tag network, and more particularly to three-dimensional measurement modeling of the RFID multi-tag network by introducing a three-dimensional visual analysis method; it belongs to the field of metrological testing.
Background
Radio frequency identification (RFID) is a non-contact automatic identification technology that has been widely applied in many fields such as modern logistics, intelligent transportation and production automation. In intelligent logistics in particular, the identification efficiency of the system means that a large number of RFID tags are needed for warehouse entry/exit information acquisition and inventory of bulk cargo. At present, however, there is no effective automatic measurement and calculation means for modeling an RFID multi-tag network or for the identification, distribution, arrangement, combination and positioning of the RFID multi-tags.
In the field of three-dimensional image measurement modeling, existing methods use a single camera: the camera orientation must be adjusted continuously to obtain images of the object in the same state from different angles, which is complex to operate, gives poor real-time performance, and is difficult to apply where real-time requirements are strict. In the existing literature, single-camera three-dimensional measurement modeling of images is little used, and three-dimensional measurement modeling of an RFID multi-tag network by image-based methods is reported even more rarely. To meet the requirements of warehouse entry/exit information acquisition and cargo inventory in modern intelligent warehouse logistics, a method for three-dimensional measurement modeling of the RFID multi-tag network based on image correlation theory, with low algorithmic complexity, a small amount of calculation, high efficiency and good robustness, is therefore of real significance. In the present invention, a horizontal camera and a vertical camera are used to obtain image information of the RFID tags from multiple angles, which reduces the complexity of three-dimensional measurement modeling of the RFID tags by an image-based method.
Iterative threshold segmentation is an effective threshold-based image segmentation method: the optimal segmentation threshold is obtained iteratively, which gives the method a degree of adaptivity. On the basis of threshold segmentation, a morphological algorithm can apply operations such as erosion and dilation to the image so as to retain and extract specific geometric features; morphological algorithms are easy to implement in a parallel fashion and have good real-time performance. The template matching algorithm has the advantages of a simple calculation method, convenient template selection and easy implementation, and is particularly suitable for Internet of Things applications with strict real-time requirements.
Disclosure of Invention
The invention provides a vision-based RFID multi-tag network three-dimensional measurement modeling method, which comprises the following steps:
the first step is as follows: building a test platform, wherein the test platform consists of an RFID reader-writer, a reader-writer antenna, an RFID tag support, a control computer, a servo motor, a vertical camera, a horizontal camera, a guide rail and a turntable;
the second step is as follows: the method comprises the steps of RFID label vertical overlook image acquisition, namely, using a vertical camera to acquire images of a turntable and an RFID label, processing the images by using an iterative threshold segmentation and morphology method, and numbering all RFID label mark points;
the third step: an RFID label radial distance extraction step, namely taking two chords in the turntable vertical overhead image obtained in the second step, constructing the perpendicular bisectors of the chords, taking their intersection point as the center of the turntable, selecting one RFID label mark point as a template, matching the ith RFID label mark point by template matching, and obtaining the distance r_i from the ith RFID label mark point to the center of the turntable, which is the radial distance of the ith RFID label;
the fourth step: an RFID label horizontal coordinate measuring step, namely adjusting the servo motor to drive the turntable to rotate and obtaining the rotation angle θ_i of the ith RFID label; θ_i and the r_i obtained in the third step are the horizontal coordinate parameters of the ith RFID label mark point, and the horizontal coordinate of the ith RFID label is thus (r_i cos θ_i, r_i sin θ_i);
The fifth step: an RFID label side-view image acquisition step, in which the control computer controls the servo motor to drive the turntable to rotate and controls the horizontal camera to adjust back and forth so that the horizontal camera clearly images one of the labels; this label is taken as a template, and the vertical distance h from the template center point to the turntable, namely the vertical coordinate of the template, is measured; the horizontal camera is then made to focus accurately on the ith RFID label to obtain the ith RFID label side-view image, the template and the ith RFID label are matched by template matching, and the pixel-count difference c_i between the center point of the ith RFID label and the center point of the template is calculated;
A sixth step: an RFID label vertical coordinate measuring step, in which the side length a of each pixel in the image is determined according to the triangle similarity principle, and the vertical coordinate H_i of the ith RFID label in the vertical direction is calculated from the vertical coordinate of the side-view image label template and the pixel-count difference between the ith RFID label and the template in the vertical direction: when the center point of the ith RFID label lies below the center point of the template, H_i = h − c_i × a, and when the center point of the ith RFID label lies above the center point of the template, H_i = h + c_i × a;
A seventh step: an RFID label three-dimensional coordinate determining step, namely determining the three-dimensional coordinate of the ith RFID label as (r_i cos θ_i, r_i sin θ_i, H_i);
An eighth step: and repeating the third, fourth, fifth, sixth and seventh steps, and measuring the horizontal coordinates and the vertical coordinates of all the RFID labels to obtain the three-dimensional coordinates of all the RFID labels.
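As an illustration of how the third through seventh steps combine, the following sketch (Python; the function and variable names are illustrative only) assembles one label's three-dimensional coordinate from its measured radial distance r_i, rotation angle θ_i, the template height h, the pixel-count difference c_i and the pixel side length a; the example values are those of the first label in the detailed description.

```python
import math

def tag_3d_coordinate(r_i, theta_i_deg, h, c_i, a, below_template):
    """Assemble one RFID label's 3D coordinate from the measured quantities."""
    theta = math.radians(theta_i_deg)
    x = r_i * math.cos(theta)          # horizontal coordinate, fourth step
    y = r_i * math.sin(theta)
    # sixth step: vertical coordinate from the template height and pixel offset
    H_i = h - c_i * a if below_template else h + c_i * a
    return (x, y, H_i)                 # seventh step: (r_i cos θ_i, r_i sin θ_i, H_i)

# Values from the detailed description for the 1st label (lengths in mm):
# r_1 = 198.4, θ_1 = 48.4°, h = 327.85 (as used in its arithmetic), c_1 = 71, a = 0.75
print(tag_3d_coordinate(198.4, 48.4, 327.85, 71, 0.75, below_template=False))
# -> approximately (131.7, 148.4, 381.1)
```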
In the vision-based RFID multi-tag network three-dimensional measurement modeling method described above, the iterative threshold segmentation in the second step comprises the following steps:
Step one: normalize the vertical overhead image, set an iterative convergence parameter T_0, and select an initial estimated threshold T_1 = (min g(x, y) + max g(x, y))/2, where (x, y) are the coordinates of a pixel point in the image, g(x, y) is the normalized image gray value, min g(x, y) is the minimum value of g(x, y), and max g(x, y) is the maximum value of g(x, y);
Step two: using the estimated threshold T_1, divide the vertical overhead image into two parts G_1 and G_2, where G_1 is {(x, y) | g(x, y) > T_1} and G_2 is {(x, y) | g(x, y) < T_1};
Step three: calculate the average gray values μ_1 and μ_2 of all pixels in G_1 and G_2, and a new threshold T_2 = (μ_1 + μ_2)/2;
Step four: if |T_2 − T_1| < T_0, then T_2 is the optimal threshold; otherwise, assign the value of T_2 to T_1;
Step five: repeat steps two, three and four until the optimal threshold is obtained, and use the finally obtained optimal threshold T_2 to perform threshold segmentation on the vertical overhead image, obtaining the binarized image I of the vertical overhead image.
The vision-based RFID multi-tag network three-dimensional measurement modeling method described above, wherein the morphological method in the second step includes the following steps:
Step one: perform a dilation operation on the binarized image of the vertical overhead image to obtain the dilated image P_1 = I ⊕ B, where B is a structuring element, I is the binarized image of the vertical overhead image, ⊕ denotes dilation, and P_1 is the dilated image;
Step two: perform an erosion operation on the image to obtain the eroded image P_2 = P_1 ⊖ B, where B is the structuring element, P_1 is the dilated image, ⊖ denotes erosion, and P_2 is the eroded image;
Step three: repeat steps one and two and fill the image to obtain the image P_3; apply the operation of step two to P_3 to obtain the morphologically processed image.
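A sketch of this morphological processing using SciPy's binary morphology routines; the 3 × 3 square structuring element B is an assumption, since the text does not specify its shape or size:

```python
import numpy as np
from scipy import ndimage

def morphological_cleanup(binary_image, struct_size=3):
    """Dilate, erode, fill, then erode again, following steps one to three above."""
    B = np.ones((struct_size, struct_size), dtype=bool)       # structuring element B
    p1 = ndimage.binary_dilation(binary_image, structure=B)   # P1: dilated image
    p2 = ndimage.binary_erosion(p1, structure=B)              # P2: eroded image
    # step three: repeat steps one and two, then fill holes to obtain P3
    p3 = ndimage.binary_fill_holes(
        ndimage.binary_erosion(ndimage.binary_dilation(p2, structure=B), structure=B))
    # apply the operation of step two (erosion) to P3: morphologically processed image
    return ndimage.binary_erosion(p3, structure=B).astype(np.uint8)
```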
The vision-based RFID multi-tag network three-dimensional measurement modeling method described above, wherein the template matching in the third step and the fifth step, comprises the following steps:
Step one: select a specific image as the reference template image, translate the known reference template image point by point within the search area of the image, and traverse every pixel point in the search area;
Step two: determine the normalized cross-correlation coefficient
NC(i, j) = Σ_m Σ_n [T(m, n) − T̄][S_{i,j}(m, n) − S̄_{i,j}] / √( Σ_m Σ_n [T(m, n) − T̄]² · Σ_m Σ_n [S_{i,j}(m, n) − S̄_{i,j}]² ),
where the sums run over m = 1, …, M and n = 1, …, N, and use it to match the reference template image against the search-area image of the same size in the original image; here (m, n) are pixel coordinates, T(m, n) is the reference template image, T̄ is the mean of the reference template image, M is the maximum value of the pixel row coordinate, N is the maximum value of the pixel column coordinate, S_{i,j}(m, n) is the search-area image, and S̄_{i,j} is the mean of the search-area image;
Step three: translate the reference template image to the next pixel point and repeat step two until all positions have been matched; the regions whose normalized cross-correlation coefficient NC(i, j) exceeds the set threshold λ are the matching regions to be found.
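A direct, unoptimized Python sketch of the normalized cross-correlation matching described in steps one to three (function and variable names are illustrative; the default λ = 0.75 follows the value used later in the embodiment):

```python
import numpy as np

def ncc_match(image, template, lam=0.75):
    """Slide the template over the image; return NC(i, j) and positions above lambda."""
    t = template.astype(float)
    t_zero = t - t.mean()                              # T(m, n) minus its mean
    t_norm = np.sqrt((t_zero ** 2).sum())
    M, N = template.shape
    H, W = image.shape
    nc = np.zeros((H - M + 1, W - N + 1))
    for i in range(nc.shape[0]):                       # translate point by point
        for j in range(nc.shape[1]):
            s = image[i:i + M, j:j + N].astype(float)  # search-area sub-image S_{i,j}
            s_zero = s - s.mean()
            denom = t_norm * np.sqrt((s_zero ** 2).sum())
            nc[i, j] = (t_zero * s_zero).sum() / denom if denom > 0 else 0.0
    return nc, np.argwhere(nc > lam)                   # matches: NC(i, j) > lambda
```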
In the above method for modeling RFID multi-tag network three-dimensional measurement based on vision, in the fifth step, the control computer controls the horizontal camera to adjust back and forth, and the steps are as follows:
Step one: measure the distance L_1 from the initial position of the horizontal camera to the center of the turntable;
Step two: the control computer controls the servo motor to drive the turntable to rotate so that the ith RFID label faces the horizontal camera, and the distance between the ith RFID label and the horizontal camera is calculated as d_i = L_1 − r_i;
Step three: calculate the object-space distance l_i required for the horizontal camera to focus accurately on the ith RFID label from the lens imaging relation between l′ and f, where l′ is the distance from the center of the horizontal camera lens to the CCD sensor in the horizontal camera and f is the focal length of the horizontal camera;
Step four: the distance that the horizontal camera needs to be adjusted to focus accurately on the ith RFID label is ΔL_i = d_i − l_i; if ΔL_i is greater than zero, the horizontal camera moves ΔL_i toward the ith RFID label, and if ΔL_i is less than zero, the horizontal camera moves ΔL_i away from the ith RFID label.
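The focusing adjustment can be sketched numerically as below; the thin-lens relation used for l_i is an assumption consistent with the definitions of l′ and f, and the sign handling is chosen so as to reproduce the worked example given later in the detailed description (L_1 = 1.512 m, r_1 = 0.406 m, l′ = 0.00597 m, f = 0.006 m, ΔL_1 ≈ −0.088 m):

```python
def focus_adjustment(L1, r_i, l_prime, f):
    """Return (d_i, l_i, delta_L_i) for the back-and-forth camera adjustment."""
    d_i = L1 - r_i                      # step two: current label-to-camera distance
    # step three: object-space distance for sharp focus (thin-lens assumption;
    # magnitude taken so that the worked example below is reproduced)
    l_i = abs(f * l_prime / (l_prime - f))
    delta = d_i - l_i                   # step four: > 0 move toward, < 0 move away
    return d_i, l_i, delta

print(focus_adjustment(1.512, 0.406, 0.00597, 0.006))
# -> approximately (1.106, 1.194, -0.088)
```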
The RFID multi-tag network three-dimensional measurement modeling method based on vision described above, wherein the triangle similarity principle described in the sixth step includes the following steps:
Step one: the control computer controls the horizontal camera to adjust back and forth so that the horizontal camera clearly images one of the labels; the distance from this label to the horizontal camera is measured and recorded as S, the distance from the center of the horizontal camera lens to the CCD sensor in the horizontal camera is l′, and the side length of a CCD pixel is e;
Step two: calculate the pixel side length of the image as a = e × S / l′.
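As a quick check of the similar-triangles relation, the pixel side length a = e × S / l′ can be evaluated with the values used later in the embodiment (e = 3.75 μm, S = 1.194 m, l′ = 0.00597 m), giving a = 0.75 mm:

```python
def pixel_side_length(e, S, l_prime):
    """Object-space side length of one pixel by similar triangles: a = e * S / l'."""
    return e * S / l_prime

print(pixel_side_length(3.75e-6, 1.194, 0.00597))   # 0.00075 m, i.e. 0.75 mm
```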
Drawings
FIG. 1: test platform structure diagram
FIG. 2: 7 RFID tag top view
FIG. 3: schematic diagram for measuring horizontal coordinate of 1 st RFID (radio frequency identification) tag
FIG. 4: iterative threshold segmentation processing result graph
FIG. 5: swelling treatment result graph
FIG. 6: graph of corrosion treatment results
FIG. 7: filling processing result graph
FIG. 8: graph of morphological processing results
FIG. 9: 7 RFID label vertical overlook image template matching processing result graph
FIG. 10: 1 st RFID label horizontal image template matching processing result graph
Detailed Description
A RFID multi-tag network three-dimensional measurement modeling method based on vision comprises the following steps:
the first step is as follows: building a test platform, wherein the test platform is composed of an RFID reader-writer 1, a reader-writer antenna 2, an RFID label 3, an RFID label support 4, a control computer 5, a servo motor 6, a vertical camera 7, a horizontal camera 8, a guide rail 9 and a turntable 10; a mark point is pasted at the bottom of the RFID label support 4; the RFID reader-writer 1 is connected to the reader-writer antenna 2 and to the control computer 5, and the vertical camera 7 and the horizontal camera 8 are each connected to the control computer 5; the RFID label is a UHF electronic label H47, the reader-writer is an Impinj Speedway Revolution R420, and the reader-writer antenna is a Laird A9028 far-field antenna with a maximum RF output power of 30 dBm; the test platform is shown in FIG. 1;
the second step is as follows: an RFID label vertical overlook image acquisition step, wherein a vertical camera is used for carrying out image acquisition on the turntable and the RFID label, an iterative threshold segmentation and morphology method is used for processing the vertical overlook image, all 7 RFID label mark points are numbered, and the 7 RFID label top views are shown in FIG. 2;
the third step: an RFID label radial distance extraction step, namely taking two chords in the turntable image obtained in the second step, constructing the perpendicular bisectors of the chords, taking their intersection point as the center of the turntable, selecting one RFID label mark point as a template, matching the 1st RFID label mark point with the template matching algorithm, and obtaining the distance r_1 = 198.4 mm from the 1st RFID label mark point to the center of the turntable, namely the radial distance of the 1st RFID label;
the fourth step: an RFID label horizontal coordinate measuring step, namely adjusting the servo motor to drive the turntable to rotate and obtaining the rotation angle of the 1st RFID label as θ_1 = 48.4°; θ_1 and the r_1 = 198.4 mm obtained in the third step are the horizontal coordinate parameters of the 1st RFID label mark point, and the horizontal two-dimensional coordinate of the 1st RFID label is then calculated as (131.7 mm, 148.4 mm); the schematic diagram of measuring the horizontal coordinate of the 1st RFID label is shown in FIG. 3;
the fifth step: an RFID label side-view image acquisition step, in which the servo motor is controlled to drive the turntable to rotate and the horizontal camera is adjusted back and forth so that the horizontal camera clearly images one of the labels; this label is taken as a template, and the vertical distance from the template center point to the turntable is measured as h = 328.0 mm, namely the vertical coordinate of the template; the horizontal camera is then made to focus accurately on the 1st RFID label to obtain the 1st RFID label side-view image, the template and the 1st RFID label are matched by template matching, and the pixel-count difference between the center point of the 1st RFID label and the center point of the template is calculated as c_1 = 71;
A sixth step: an RFID label vertical coordinate measuring step, in which, according to the triangle similarity principle, the side length of each pixel in the image is determined to be a = 0.75 mm; from the vertical coordinate of the template and the pixel-count difference between the 1st RFID label and the template in the vertical direction, and since the vertical coordinate of the center point of the 1st RFID label is greater than that of the center point of the template, the center point of the 1st RFID label is judged to lie above the center point of the template, the vertical coordinate of the 1st RFID label in the vertical direction is calculated as H_1 = h + c_1 × a = 327.85 + 0.75 × 71 = 381.1 mm;
A seventh step of: an RFID label three-dimensional coordinate determining step, namely determining the 1 st RFID label three-dimensional coordinate to be (131.7, 148.4, 381.1);
an eighth step: repeating the third, fourth, fifth, sixth and seventh steps, measuring the horizontal two-dimensional coordinates and the vertical coordinates of all 7 RFID tags, namely obtaining the three-dimensional coordinates of all 7 RFID tags as (131.7, 148.4, 381.1), (116.5, 179.2, 355.9), (-232.4, 200.3, 350.5), (-168.7, -58.8, 350.3), (47.5, -106.4, 328.5), (174.2, 40.7, 340.1), (174.9, -16.7, 379.9).
The second step of the vision-based RFID multi-tag network three-dimensional measurement modeling method is iterative threshold segmentation, and the steps are as follows:
Step one: normalize the vertical overhead image, set the iterative convergence parameter T_0 = 0.01, and select the initial estimated threshold T_1 = 0.5178;
Step two: using the estimated threshold T_1 = 0.5178, divide the vertical overhead image into two parts G_1 and G_2, where G_1 is {(x, y) | g(x, y) > T_1} and G_2 is {(x, y) | g(x, y) < T_1};
Step three: calculate the average gray values of all pixels in G_1 and G_2 as μ_1 = 0.6615 and μ_2 = 0.1165, and the new threshold T_2 = (0.6615 + 0.1165)/2 = 0.3890;
Step four: if |T_2 − T_1| is less than 0.01, then T_2 is the optimal threshold; otherwise, assign the value of T_2 to T_1;
Step five: repeat steps two, three and four until the optimal threshold is obtained, and use the finally obtained optimal threshold T_2 to perform threshold segmentation on the vertical overhead image; the binarized image I of the vertical overhead image is shown in FIG. 4.
The morphological method in the second step of the vision-based RFID multi-tag network three-dimensional measurement modeling method comprises the following steps:
Step one: perform a dilation operation on the binarized image of the vertical overhead image to obtain the dilated image P_1 = I ⊕ B, where B is a structuring element, I is the binarized image of the vertical overhead image, and ⊕ denotes dilation; the dilated image is shown in FIG. 5;
Step two: perform an erosion operation on the image to obtain the eroded image P_2 = P_1 ⊖ B, where B is the structuring element, P_1 is the dilated image, and ⊖ denotes erosion; the eroded image is shown in FIG. 6;
Step three: repeat steps one and two and fill the image to obtain the image P_3 (the filled image is shown in FIG. 7); apply the operation of step two to P_3 to obtain the morphologically processed image, whose final result is shown in FIG. 8.
The template matching used in the third step and the fifth step of the vision-based RFID multi-tag network three-dimensional measurement modeling method proceeds as follows:
Step one: select a specific image as the reference template image, translate the known reference template image point by point within the search area of the image, and traverse every pixel point in the search area;
Step two: determine the normalized cross-correlation coefficient
NC(i, j) = Σ_m Σ_n [T(m, n) − T̄][S_{i,j}(m, n) − S̄_{i,j}] / √( Σ_m Σ_n [T(m, n) − T̄]² · Σ_m Σ_n [S_{i,j}(m, n) − S̄_{i,j}]² ),
where the sums run over m = 1, …, M and n = 1, …, N, and use it to match the reference template image against the search-area image of the same size in the original image; here (m, n) are pixel coordinates, T(m, n) is the reference template image, T̄ is the mean of the reference template image, M is the maximum value of the pixel row coordinate, N is the maximum value of the pixel column coordinate, S_{i,j}(m, n) is the search-area image, and S̄_{i,j} is the mean of the search-area image;
Step three: translate the reference template image to the next pixel point and repeat step two until all positions have been matched; the regions whose normalized cross-correlation coefficient NC(i, j) exceeds the set threshold λ = 0.75 are the matching regions to be found; the template matching result for the vertical overhead image of the 7 RFID labels is shown in FIG. 9, and the template matching result for the horizontal image of the 1st RFID label is shown in FIG. 10.
In the fifth step of the vision-based three-dimensional measurement modeling method for the RFID multi-tag network, the control computer controls the horizontal camera to adjust back and forth, and the steps are as follows:
Step one: measure the distance from the initial position of the horizontal camera to the center of the turntable as L_1 = 1.512 m;
Step two: the control computer controls the servo motor to drive the turntable to rotate so that the 1st RFID label faces the horizontal camera, and the distance between the 1st RFID label and the horizontal camera is calculated as d_1 = 1.512 m − 0.406 m = 1.106 m;
Step three: calculate the object-space distance l_1 required for the horizontal camera to focus accurately on the 1st RFID label, with |l_1| = 1.194 m, where l′ = 0.00597 m is the distance from the center of the horizontal camera lens to the CCD sensor in the horizontal camera and f = 0.006 m is the focal length;
Step four: the distance that the horizontal camera needs to be adjusted to focus accurately on the 1st RFID label is ΔL_1 = d_1 − |l_1| = 1.106 m − 1.194 m = −0.088 m; since ΔL_1 is less than zero, the horizontal camera moves 0.088 m away from the 1st RFID label.
The triangle similarity principle used in the sixth step of the vision-based RFID multi-tag network three-dimensional measurement modeling method comprises the following steps:
Step one: the control computer controls the horizontal camera to adjust back and forth so that the horizontal camera clearly images one of the labels; the distance from this label to the horizontal camera is measured and recorded as S = 1.194 m, the distance from the center of the horizontal camera lens to the CCD sensor in the horizontal camera is l′ = 0.00597 m, and the side length of a CCD pixel is e = 3.75 μm;
Step two: calculate the pixel side length of the image as a = e × S / l′ = 3.75 μm × 1.194 m / 0.00597 m = 0.75 mm.

Claims (6)

1. A RFID multi-tag network three-dimensional measurement modeling method based on vision comprises the following steps:
the first step is as follows: building a test platform, wherein the test platform consists of an RFID reader-writer, a reader-writer antenna, an RFID tag support, a control computer, a servo motor, a vertical camera, a horizontal camera, a guide rail and a turntable;
the second step is as follows: the method comprises the steps of RFID label vertical overlook image acquisition, namely, using a vertical camera to acquire images of a turntable and an RFID label, processing the images by using an iterative threshold segmentation and morphology method, and numbering all RFID label mark points;
the third step: an RFID label radial distance extraction step, namely taking two chords in the turntable vertical overhead image obtained in the second step, constructing the perpendicular bisectors of the chords, taking their intersection point as the center of the turntable, selecting one RFID label mark point as a template, matching the ith RFID label mark point by template matching, and obtaining the distance r_i from the ith RFID label mark point to the center of the turntable, which is the radial distance of the ith RFID label;
the fourth step: an RFID label horizontal coordinate measuring step, namely adjusting the servo motor to drive the turntable to rotate and obtaining the rotation angle θ_i of the ith RFID label; θ_i and the r_i obtained in the third step are the horizontal coordinate parameters of the ith RFID label mark point, and the horizontal coordinate of the ith RFID label is thus (r_i cos θ_i, r_i sin θ_i);
The fifth step: an RFID label side-view image acquisition step, in which the control computer controls the servo motor to drive the turntable to rotate and controls the horizontal camera to adjust back and forth so that the horizontal camera clearly images one of the labels; this label is taken as a template, and the vertical distance h from the template center point to the turntable, namely the vertical coordinate of the template, is measured; the horizontal camera is then made to focus accurately on the ith RFID label to obtain the ith RFID label side-view image, the template and the ith RFID label are matched by template matching, and the pixel-count difference c_i between the center point of the ith RFID label and the center point of the template is calculated;
A sixth step: an RFID label vertical coordinate measuring step, in which the side length a of each pixel in the image is determined according to the triangle similarity principle, and the vertical coordinate H_i of the ith RFID label in the vertical direction is calculated from the vertical coordinate of the side-view image label template and the pixel-count difference between the ith RFID label and the template in the vertical direction: when the center point of the ith RFID label lies below the center point of the template, H_i = h − c_i × a, and when the center point of the ith RFID label lies above the center point of the template, H_i = h + c_i × a;
A seventh step: an RFID label three-dimensional coordinate determining step, namely determining the three-dimensional coordinate of the ith RFID label as (r_i cos θ_i, r_i sin θ_i, H_i);
An eighth step: and repeating the third, fourth, fifth, sixth and seventh steps, and measuring the horizontal coordinates and the vertical coordinates of all the RFID labels to obtain the three-dimensional coordinates of all the RFID labels.
2. The vision-based RFID multi-tag network three-dimensional measurement modeling method of claim 1, wherein the iterative threshold segmentation of the second step comprises the following steps:
Step one: normalize the vertical overhead image, set an iterative convergence parameter T_0, and select an initial estimated threshold T_1 = (min g(x, y) + max g(x, y))/2, where (x, y) are the coordinates of a pixel point in the image, g(x, y) is the normalized image gray value, min g(x, y) is the minimum value of g(x, y), and max g(x, y) is the maximum value of g(x, y);
Step two: using the estimated threshold T_1, divide the vertical overhead image into two parts G_1 and G_2, where G_1 is {(x, y) | g(x, y) > T_1} and G_2 is {(x, y) | g(x, y) < T_1};
Step three: calculate the average gray values μ_1 and μ_2 of all pixels in G_1 and G_2, and a new threshold T_2 = (μ_1 + μ_2)/2;
Step four: if |T_2 − T_1| < T_0, then T_2 is the optimal threshold; otherwise, assign the value of T_2 to T_1;
Step five: repeat steps two, three and four until the optimal threshold is obtained, and use the finally obtained optimal threshold T_2 to perform threshold segmentation on the vertical overhead image, obtaining the binarized image I of the vertical overhead image.
3. The vision-based RFID multi-tag network three-dimensional measurement modeling method of claim 1, wherein the morphological method of the second step comprises the following steps:
Step one: perform a dilation operation on the binarized image of the vertical overhead image to obtain the dilated image P_1 = I ⊕ B, where B is a structuring element, I is the binarized image of the vertical overhead image, ⊕ denotes dilation, and P_1 is the dilated image;
Step two: perform an erosion operation on the image to obtain the eroded image P_2 = P_1 ⊖ B, where B is the structuring element, P_1 is the dilated image, ⊖ denotes erosion, and P_2 is the eroded image;
Step three: repeat steps one and two and fill the image to obtain the image P_3; apply the operation of step two to P_3 to obtain the morphologically processed image.
4. The vision-based RFID multi-tag network three-dimensional measurement modeling method of claim 1, wherein the template matching in the third step and the fifth step comprises the following steps:
Step one: select a specific image as the reference template image, translate the known reference template image point by point within the search area of the image, and traverse every pixel point in the search area;
Step two: determine the normalized cross-correlation coefficient
NC(i, j) = Σ_m Σ_n [T(m, n) − T̄][S_{i,j}(m, n) − S̄_{i,j}] / √( Σ_m Σ_n [T(m, n) − T̄]² · Σ_m Σ_n [S_{i,j}(m, n) − S̄_{i,j}]² ),
where the sums run over m = 1, …, M and n = 1, …, N, and use it to match the reference template image against the search-area image of the same size in the original image; here (m, n) are pixel coordinates, T(m, n) is the reference template image, T̄ is the mean of the reference template image, M is the maximum value of the pixel row coordinate, N is the maximum value of the pixel column coordinate, S_{i,j}(m, n) is the search-area image, and S̄_{i,j} is the mean of the search-area image;
Step three: translate the reference template image to the next pixel point and repeat step two until all positions have been matched; the regions whose normalized cross-correlation coefficient NC(i, j) exceeds the set threshold λ are the matching regions to be found.
5. The vision-based RFID multi-tag network three-dimensional measurement modeling method of claim 1, wherein in the fifth step the control computer controls the horizontal camera to adjust back and forth, the steps are as follows:
Step one: measure the distance L_1 from the initial position of the horizontal camera to the center of the turntable;
Step two: the control computer controls the servo motor to drive the turntable to rotate so that the ith RFID label faces the horizontal camera, and the distance between the ith RFID label and the horizontal camera is calculated as d_i = L_1 − r_i;
Step three: calculate the object-space distance l_i required for the horizontal camera to focus accurately on the ith RFID label from the lens imaging relation between l′ and f, where l′ is the distance from the center of the horizontal camera lens to the CCD sensor in the horizontal camera and f is the focal length of the horizontal camera;
Step four: the distance that the horizontal camera needs to be adjusted to focus accurately on the ith RFID label is ΔL_i = d_i − l_i; if ΔL_i is greater than zero, the horizontal camera moves ΔL_i toward the ith RFID label, and if ΔL_i is less than zero, the horizontal camera moves ΔL_i away from the ith RFID label.
6. The vision-based RFID multi-tag network three-dimensional measurement modeling method of claim 1, wherein the triangle similarity principle in the sixth step comprises the following steps:
Step one: the control computer controls the horizontal camera to adjust back and forth so that the horizontal camera clearly images one of the labels; the distance from this label to the horizontal camera is measured and recorded as S, the distance from the center of the horizontal camera lens to the CCD sensor in the horizontal camera is l′, and the side length of a CCD pixel is e;
Step two: calculate the pixel side length of the image as a = e × S / l′.
CN201710727530.XA 2017-08-17 2017-08-17 RFID multi-label network three-dimensional measurement modeling method based on vision Active CN107490346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710727530.XA CN107490346B (en) 2017-08-17 2017-08-17 RFID multi-label network three-dimensional measurement modeling method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710727530.XA CN107490346B (en) 2017-08-17 2017-08-17 RFID multi-label network three-dimensional measurement modeling method based on vision

Publications (2)

Publication Number Publication Date
CN107490346A CN107490346A (en) 2017-12-19
CN107490346B true CN107490346B (en) 2021-05-28

Family

ID=60645492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710727530.XA Active CN107490346B (en) 2017-08-17 2017-08-17 RFID multi-label network three-dimensional measurement modeling method based on vision

Country Status (1)

Country Link
CN (1) CN107490346B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108426527B (en) * 2018-01-23 2019-08-16 淮阴工学院 A kind of RFID label tag three-dimensional coordinate automatic testing method based on DLT
CN109003261A (en) * 2018-06-29 2018-12-14 北京科技大学 Heterogeneous phase three-dimensional construction method and device
CN109061616B (en) * 2018-08-31 2022-11-04 南通大学 Moving target positioning method
CN111561869B (en) * 2020-05-22 2021-11-23 延锋彼欧武汉汽车外饰系统有限公司 Intelligent detection device and method for vehicle body space position and application thereof
CN111780689B (en) * 2020-07-13 2021-06-29 四川大学 Optimal rotation angle determination method based on cross-correlation structured light 360-degree measurement
CN113139395A (en) * 2021-03-29 2021-07-20 南京航空航天大学 Multi-label optimal distribution method of radio frequency identification system based on computer vision

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100866459B1 (en) * 2007-07-20 2008-10-31 재단법인서울대학교산학협력재단 Location tracking system
CN101325694A (en) * 2007-06-13 2008-12-17 中国科学院自动化研究所 System and method for monitoring intelligent video combining wireless radio frequency recognition technique
CN102184436A (en) * 2011-05-16 2011-09-14 重庆大学 Method for sensing position of object in Internet of things
CN102656422A (en) * 2010-01-20 2012-09-05 法罗技术股份有限公司 Multi-functional coordinate measurement machines
JP2012221481A (en) * 2011-04-07 2012-11-12 Lsi Japan Kk Walking support device for vision-impaired person
US8847761B1 (en) * 2004-10-01 2014-09-30 Emc Corporation Anonymous transaction tokens
US8947678B2 (en) * 2013-12-11 2015-02-03 Faro Technologies, Inc. Method for correcting three-dimensional measurements of a spherically mounted retroreflector
CN105182319A (en) * 2015-08-12 2015-12-23 西安斯凯智能科技有限公司 Target positioning system and target positioning method based on radio frequency and binocular vision
CN105354521A (en) * 2015-11-27 2016-02-24 江苏省标准化研究院 BP neural network-based RFID label distribution optimum configuration method
CN105718929A (en) * 2016-01-21 2016-06-29 成都信息工程大学 All-weather unknown environment high precision rapid circular object positioning method and system
CN106295936A (en) * 2015-05-29 2017-01-04 深圳镭博万科技有限公司 Wheel hub type identification device and wheel hub mark system for tracing and managing
CN106557906A (en) * 2016-11-30 2017-04-05 长威信息科技发展股份有限公司 A kind of warehouse management method and system based on three-dimensional modeling
CN108426527A (en) * 2018-01-23 2018-08-21 淮阴工学院 A kind of RFID tag three-dimensional coordinate automatic testing method based on DLT

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8847761B1 (en) * 2004-10-01 2014-09-30 Emc Corporation Anonymous transaction tokens
CN101325694A (en) * 2007-06-13 2008-12-17 中国科学院自动化研究所 System and method for monitoring intelligent video combining wireless radio frequency recognition technique
KR100866459B1 (en) * 2007-07-20 2008-10-31 재단법인서울대학교산학협력재단 Location tracking system
CN102656422A (en) * 2010-01-20 2012-09-05 法罗技术股份有限公司 Multi-functional coordinate measurement machines
JP2012221481A (en) * 2011-04-07 2012-11-12 Lsi Japan Kk Walking support device for vision-impaired person
CN102184436A (en) * 2011-05-16 2011-09-14 重庆大学 Method for sensing position of object in Internet of things
US8947678B2 (en) * 2013-12-11 2015-02-03 Faro Technologies, Inc. Method for correcting three-dimensional measurements of a spherically mounted retroreflector
CN106295936A (en) * 2015-05-29 2017-01-04 深圳镭博万科技有限公司 Wheel hub type identification device and wheel hub mark system for tracing and managing
CN105182319A (en) * 2015-08-12 2015-12-23 西安斯凯智能科技有限公司 Target positioning system and target positioning method based on radio frequency and binocular vision
CN105354521A (en) * 2015-11-27 2016-02-24 江苏省标准化研究院 BP neural network-based RFID label distribution optimum configuration method
CN105718929A (en) * 2016-01-21 2016-06-29 成都信息工程大学 All-weather unknown environment high precision rapid circular object positioning method and system
CN106557906A (en) * 2016-11-30 2017-04-05 长威信息科技发展股份有限公司 A kind of warehouse management method and system based on three-dimensional modeling
CN108426527A (en) * 2018-01-23 2018-08-21 淮阴工学院 A kind of RFID tag three-dimensional coordinate automatic testing method based on DLT

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Radio Frequency Identification (RFID) and communication technologies for solid waste bin and truck monitoring system; M. A. Hannan; Waste Management; 20111231; full text *
Design and Implementation of a Machine Vision System in RFID Tag Packaging Equipment; Liu Hui; China Master's Theses Full-text Database, Engineering Science and Technology II; 20080315; full text *

Also Published As

Publication number Publication date
CN107490346A (en) 2017-12-19

Similar Documents

Publication Publication Date Title
CN107490346B (en) RFID multi-label network three-dimensional measurement modeling method based on vision
CN112017240B (en) Tray identification and positioning method for unmanned forklift
CN103020945A (en) Remote sensing image registration method of multi-source sensor
CN113538486B (en) Method for improving identification and positioning accuracy of automobile sheet metal workpiece
CN103136525B (en) High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation
CN110378957B (en) Torpedo tank car visual identification and positioning method and system for metallurgical operation
CN111260289A (en) Micro unmanned aerial vehicle warehouse checking system and method based on visual navigation
CN103530590A (en) DPM (direct part mark) two-dimensional code recognition system
CN101556647A (en) mobile robot visual orientation method based on improved SIFT algorithm
CN115609591B (en) Visual positioning method and system based on 2D Marker and compound robot
CN110084830B (en) Video moving object detection and tracking method
CN106127258B (en) A kind of target matching method
CN111198496A (en) Target following robot and following method
Du et al. Object tracking based on improved MeanShift and SIFT
CN111311618A (en) Circular arc workpiece matching and positioning method based on high-precision geometric primitive extraction
Tribak et al. QR code recognition based on principal components analysis method
CN111964680A (en) Real-time positioning method of inspection robot
CN112734844A (en) Monocular 6D pose estimation method based on octahedron
CN111167731B (en) Product sorting method, product sorting system and intelligent sorting robot
CN104268499A (en) Method for detecting barcode tags of commodities with barcodes
CN113313701A (en) Electric vehicle charging port two-stage visual detection positioning method based on shape prior
CN107895166B (en) Method for realizing target robust recognition based on feature descriptor by geometric hash method
Sun et al. Multi-stage refinement feature matching using adaptive ORB features for robotic vision navigation
CN111932617B (en) Method and system for realizing real-time detection and positioning of regular objects
CN109635692B (en) Scene re-identification method based on ultrasonic sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant