CN108399377B - Optical positioning method based on pattern classification


Info

Publication number
CN108399377B
CN108399377B (application CN201810129583.6A)
Authority
CN
China
Prior art keywords
receiver
coordinate
coordinates
transmitter
label
Prior art date
Legal status
Active
Application number
CN201810129583.6A
Other languages
Chinese (zh)
Other versions
CN108399377A (en)
Inventor
Weng Dongdong
Li Yue
Li Dong
Xun Hang
Hu Xiang
Luo Le
Current Assignee
Nanchang Virtual Reality Detection Technology Co ltd
Beijing Institute of Technology BIT
Original Assignee
Nanchang Virtual Reality Detection Technology Co ltd
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Nanchang Virtual Reality Detection Technology Co ltd, Beijing Institute of Technology BIT filed Critical Nanchang Virtual Reality Detection Technology Co ltd
Priority to CN201810129583.6A
Publication of CN108399377A
Application granted
Publication of CN108399377B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/22 Source localisation; Inverse modelling


Abstract

The invention provides an optical positioning method based on pattern classification that determines the correspondence between scanning signals and transmitters simply and efficiently, yields positioning results in real time, and raises the refresh rate of the tracking data. Compared with the computation-heavy traversal method, it uses pattern classification to simplify the matching of received signals to transmitting signal sources, improving efficiency when several transmitters are cascaded: all possible cases are reduced to at most 3 coordinate combinations, and in most cases the match between received signal and transmitting source is completed in a single attempt, greatly reducing the amount of computation and the running time. Moreover, the classification model can be trained offline; in actual use, data is fed in directly to obtain the output, and as long as the relative positions of the transmitters are fixed, the model can be reused.

Description

Optical positioning method based on pattern classification
Technical Field
The invention belongs to the technical field of tracking and positioning. It builds on the extended tracking method for HTC VIVE and relates in particular to an optical positioning method based on pattern classification.
Background
An HTC VIVE system consists of transmitters and photosensitive receivers. Each transmitter emits periodic optical signals that scan the tracking area; upon receiving a scanning signal, a receiver converts it into a digital signal, from which the image coordinates of the receiver relative to the transmitter are obtained. Once a sufficient number of receivers have been scanned by the transmitters, the spatial pose of the rigid body they form can be recovered by computer-vision algorithms. However, HTC VIVE tracking requires a transmitter to send a frame-synchronization scanning signal first and then scan the horizontal and vertical directions in turn. When multiple transmitters are cascaded, only one transmitter may operate in any given time slot to avoid signal interference, so the tracking-data refresh rate falls in proportion to the number of transmitters. Because a larger tracking area requires more transmitters, current HTC VIVE systems use only two transmitters in order to guarantee a sufficient refresh rate, which limits the tracking area to a 5 m x 5 m space.
The extended tracking method based on HTC VIVE of patent application No. 201710545643.8 extends HTC VIVE with a transmitter coding scheme combining a synchronization controller and a stroboscope, together with a corresponding decoding algorithm. This solves the signal-interference problem of multiple transmitters in principle, raises the upper limit on the number of transmitters, and enlarges the tracking range of the system. However, to determine the correspondence between scanning signals and transmitters it traverses all possibilities. As the number of transmitters grows, the amount of computation rises sharply and the computation time with it: matching signals to transmitters takes longer, the positioning result is delayed, and the refresh rate of the tracking data drops.
Disclosure of Invention
In view of this, the present invention provides an optical positioning method based on pattern classification that determines the correspondence between scanning signals and transmitters simply and efficiently, obtains positioning results in real time, and improves the refresh rate of the tracking data.
To achieve this, the optical positioning method based on pattern classification of the present invention comprises the following steps:
Step 1: deploy the HTC VIVE system with all its M transmitters emitting signals synchronously; select receiver sampling points within the tracking range and image each sampling point in turn to obtain the corresponding training data. The training data for a sampling point are obtained as follows:
Step 1.1: set up labels for the coordinate-ordering combinations: the image coordinates (u, v) of the receiver in the M transmitters admit $(M!)^2$ possible combinations of the u-direction and v-direction coordinate orderings; attach a label to each combination;
Step 1.2: for each receiver sampling point, sort its image coordinates (u, v) in the M transmitters by the same fixed rule to obtain the label of the sampling point under that rule; the sorted image coordinates together with the corresponding label are the training data of that sampling point;
Step 2: train a classifier on the training data from step 1 to obtain a classification model F(); the input of F() is the image-coordinate combination formed by sorting the receiver's image coordinates in the M transmitters by the rule of step 1.2, and the output is the probability that this combination corresponds to each label of step 1;
Step 3: position the receiver using the classification model F() from step 2, via the following substeps:
Step 3.1: sort the receiver's image coordinates in the M transmitters by the rule of step 1.2 to obtain the coordinate combination under test, and feed it into F() to obtain the probability of each label;
Step 3.2: select the coordinate-ordering combinations corresponding to the labels in order of decreasing probability, and execute step 3.3;
Step 3.3: compute the theoretical three-dimensional position of the receiver under the selected transmitter coordinate combination;
then, from the theoretical three-dimensional position, compute the theoretical coordinates of the receiver in each transmitter's image coordinate system and compare them with the actual image coordinates. If the error is below a set threshold, the image coordinates of the receiver in each transmitter's image coordinate system are determined by the current combination and tracking of the receiver is complete; otherwise discard the current combination, return to step 3.2 to select a new one, and repeat step 3.3 until the error falls below the threshold.
In step 3.2, the transmitter coordinate combinations whose label probability is 0 are excluded, giving the total number S of transmitter coordinate combinations possible for the receiver in the current scanning period; a combination is then selected from the S candidates in order of decreasing probability, and step 3.3 is executed.
The threshold is set according to the receiver noise and the calculation error.
The coordinates in the u direction and the v direction are sorted either ascending or descending.
Advantages:
Compared with the computation-heavy traversal method, the invention uses pattern classification to simplify the matching of received signals to transmitting signal sources (i.e., of receiver image coordinates to transmitters), improving efficiency when several transmitters are cascaded: all possible cases are reduced to at most 3 coordinate combinations, and in most cases the matching is completed in a single attempt, greatly reducing the amount of computation and the running time.
In addition, the classification model can be trained offline; in actual use, data is fed in directly to obtain the output, and the model can be reused as long as the relative positions of the transmitters are fixed.
Drawings
FIG. 1 is a flow chart of matching image coordinates to transmitters based on pattern classification according to the present invention.
FIG. 2 is a structural diagram of an extended tracking system constructed on the HTC VIVE system according to the present invention.
FIG. 3 is a diagram of the training data according to the present invention.
FIG. 4 is a schematic diagram of the probabilities P_label of labels 1 to 36.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention builds on the extended tracking method for HTC VIVE, which synchronizes the signals of multiple transmitters by adding a synchronization controller and a strobe unit to the HTC VIVE transmitters, raising the number of simultaneously working transmitters from the original 2 to dozens (in theory the number is unlimited, but more transmitters mean more computation). The extended tracking method may be implemented as in patent application No. 201710545643.8. On this basis, the invention applies pattern classification to put the multi-transmitter HTC VIVE signals received by a receiver into one-to-one correspondence with their sources. This simplifies the matching of the receiver's multiple signals to the signal sources and raises the limit on the number of simultaneously working transmitters while preserving the tracking-data refresh rate, thereby greatly enlarging the tracking range of the system. The invention can be used wherever tracking and positioning are needed, such as motion capture, surgical navigation, and virtual reality.
FIG. 1 shows the flow of matching image coordinates to transmitters by pattern classification, and FIG. 2 shows the structure of the extended tracking system built on the HTC VIVE system, taking 3 transmitters as an example. Once the tracking setup is built, the local coordinate systems of the three transmitters are determined, shown as O1, O2 and O3 in FIG. 3. In FIG. 3 the interior of the cuboid is the tracking range. A world coordinate system, such as $O_w X_w Y_w Z_w$ in FIG. 3, is established at an arbitrarily chosen point of the tracking area; the coordinates of the three transmitters in the world coordinate system can then be determined, which fixes the rotation and translation between each transmitter's local coordinate system and the world coordinate system. Sampling points are selected within the tracking range at a chosen sampling interval (set according to the required calculation accuracy and speed), shown as the grid intersections in FIG. 3.
The embodiment comprises the following steps:
Step 1: deploy the HTC VIVE system with all its M transmitters emitting signals synchronously; select receiver sampling points within the tracking range and position each sampling point in turn to obtain the corresponding training data. The training data for a sampling point are obtained as follows:
Step 1.1: set up labels for the coordinate-ordering combinations: the image coordinates (u, v) of the receiver in the M transmitters admit $(M!)^2$ possible combinations of the u-direction and v-direction coordinate orderings; attach a label to each combination.
Let the three-dimensional coordinate of the receiver in the world coordinate system be $X_w = [x, y, z]^T$, with corresponding image coordinates $x_i = [u_i, v_i]^T$ in the different transmitters, $i = 1, 2, \ldots, M$, where the subscript i is the transmitter number and M is the total number of transmitters; in this example M = 3. By the principle of projective imaging, $X_w$ and $x_i$ satisfy

$$\lambda_i \tilde{x}_i = P_i \tilde{X}_w \tag{1}$$

where $\tilde{x}_i$ and $\tilde{X}_w$ are the homogeneous-coordinate representations of $x_i$ and $X_w$, $\lambda_i$ is a non-zero scale factor, and $P_i$ is the projection matrix of the i-th transmitter, which can be obtained through initial calibration.

Computing equation (1) for each transmitter gives the image coordinates of the receiver in the image coordinate systems of the three transmitters, i.e., for the receiver sampling point $P_w$ a set of image coordinates $(u_1, u_2, u_3; v_1, v_2, v_3)$.
The set $(u_1, u_2, u_3; v_1, v_2, v_3)$ is processed further: $(u_1, u_2, u_3)$ and $(v_1, v_2, v_3)$ are each sorted by a set rule, generally ascending or descending; in this embodiment both are sorted ascending. There are $3! = 6$ possible ascending orderings of $(u_1, u_2, u_3)$, and likewise 6 of $(v_1, v_2, v_3)$; combining the separate orderings gives $6 \times 6 = 36$ cases in all. Each of the 36 cases is given a label, i.e., label 1, label 2, ..., label k, ..., label 36, with $k = 1, 2, \ldots, 36$; for image-coordinate data in a specific order, each label corresponds uniquely to one specific ordering. Table 1 gives an example of the correspondence between label k and the ordering of the image-coordinate data. In practice the assignment of labels to orderings is not fixed in advance, but a specific ordering may correspond to only one label.
Table 1: example correspondence between label k and the ordering of the image-coordinate data
(The table is reproduced as an image in the original; each label k in {1, ..., 36} is paired with one u-direction ordering and one v-direction ordering of the transmitter numbers, e.g., label 20 ↔ [(2,1,3); (3,1,2)] in the worked example below.)
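Since the assignment of labels to orderings is arbitrary but fixed, the full table can be generated mechanically. A minimal Python sketch for M = 3 follows (the numbering it produces differs from the patent's Table 1):

```python
from itertools import permutations

# All (u-order, v-order) pairs for M = 3 transmitters: 3! * 3! = 36 labels.
# The numbering is one arbitrary but fixed assignment, as Table 1 allows.
ORDERINGS = [(u, v) for u in permutations((1, 2, 3)) for v in permutations((1, 2, 3))]
LABEL_OF = {pair: k for k, pair in enumerate(ORDERINGS, start=1)}

print(len(LABEL_OF))                      # 36
print(LABEL_OF[((2, 1, 3), (3, 1, 2))])   # one fixed label k (the patent's Table 1 assigns 20)
```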
Step 1.2: for each receiver sampling point, sort its image coordinates (u, v) in the M transmitters by the same set rule to obtain the label of the sampling point under that rule; the sorted image coordinates and the corresponding label are the training data of that sampling point. In this embodiment a function sorted() is defined such that $(b_1, b_2, b_3) = \mathrm{sorted}(a_1, a_2, a_3)$ orders $(a_1, a_2, a_3)$ ascending, i.e., $b_1 < b_2 < b_3$ with $b_i \in \{a_1, a_2, a_3\}$, $i = 1, 2, 3$. Applying sorted() to $(u_1, u_2, u_3)$ and $(v_1, v_2, v_3)$ gives $\mathrm{sorted}(u_1, u_2, u_3)$ and $\mathrm{sorted}(v_1, v_2, v_3)$. If $[\mathrm{sorted}(u_1, u_2, u_3); \mathrm{sorted}(v_1, v_2, v_3)]$ has label K, this is recorded as

$[\mathrm{sorted}(u_1, u_2, u_3); \mathrm{sorted}(v_1, v_2, v_3)] \leftrightarrow \mathrm{label} = K$, where $K \in \{1, 2, \ldots, 36\}$;

this pair is one complete set of training data for the sampling point $P_w$. Concretely, suppose that at a point $P_w$ of the tracking area in the world coordinate system, the image coordinates of the receiver are $(u_1 = 300, v_1 = 200)$ under transmitter 1, $(u_2 = 100, v_2 = 400)$ under transmitter 2, and $(u_3 = 500, v_3 = 100)$ under transmitter 3. Sorting the u-direction image coordinates of the three transmitters ascending gives $u_2 < u_1 < u_3$, so the transmitter numbers in u-order are (2, 1, 3). Similarly, $v_3 < v_1 < v_2$, so the transmitter numbers in v-order are (3, 1, 2). Combining the two orderings gives [(2,1,3); (3,1,2)], and looking this up in Table 1 yields label = 20. The training data corresponding to the sampling point $P_w$ are therefore:

[(100, 300, 500); (100, 200, 400)] ↔ 20
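The worked example can be reproduced in a few lines, reusing LABEL_OF from the previous sketch; the label value itself depends on the arbitrary numbering chosen there.

```python
import numpy as np

def label_of_coords(us, vs):
    """Step 1.2: sort the u- and v-coordinates ascending, record which
    transmitter number lands in each rank, and look up the label."""
    u_order = tuple(int(i) + 1 for i in np.argsort(us))   # transmitter numbers, u ascending
    v_order = tuple(int(i) + 1 for i in np.argsort(vs))
    return sorted(us) + sorted(vs), LABEL_OF[(u_order, v_order)]

# the worked example: (u1,v1)=(300,200), (u2,v2)=(100,400), (u3,v3)=(500,100)
feats, lab = label_of_coords([300, 100, 500], [200, 400, 100])
# feats == [100, 300, 500, 100, 200, 400]; the ordering pair is ((2,1,3), (3,1,2)),
# for which the patent's Table 1 gives label 20 (our arbitrary numbering differs)
```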
A large number of sampling points are collected in the tracking area under the world coordinate system, and the corresponding training data are obtained for each through steps 1.1 and 1.2. For example, within the tracking range -400 cm < x < 400 cm, -400 cm < y < 400 cm, 0 cm < z < 200 cm, a sampling interval of 4 cm yields 200 x 200 x 50 = 2,000,000 sampling points, i.e., 2 million sets of training data.
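Putting steps 1.1 and 1.2 together, a hedged sketch of assembling the training set over the sampling grid, reusing project() and label_of_coords() from the sketches above; P2 and P3 are assumed to be calibrated projection matrices like the placeholder P1.

```python
import numpy as np

def label_of_point(X_w, P_mats):
    """Image a sampling point in all M transmitters via equation (1),
    then form its training sample with label_of_coords()."""
    coords = [project(P, X_w) for P in P_mats]   # (u_i, v_i) per transmitter
    return label_of_coords([c[0] for c in coords], [c[1] for c in coords])

# sampling grid of the example (4 cm spacing; subsample for a quick test);
# P1, P2, P3 are assumed calibrated projection matrices of the three transmitters
training_X, training_y = [], []
for x in np.arange(-400, 400, 4):
    for y in np.arange(-400, 400, 4):
        for z in np.arange(0, 200, 4):
            feats, lab = label_of_point(np.array([x, y, z], dtype=float), [P1, P2, P3])
            training_X.append(feats)
            training_y.append(lab)
```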
Step 2: train a classifier on the training data obtained in step 1 to obtain the classification model F(), i.e., a functional relation F().
The input of F() is the image-coordinate combination obtained by sorting the receiver's image coordinates in the M transmitters by the rule of step 1.2, and the output is the probability of each label of step 1, i.e.,

$$F([u_a, u_b, u_c; v_a, v_b, v_c]) = P_{\mathrm{label}}$$

where $P_{\mathrm{label}}$ is a multi-dimensional vector giving, for the input data $[u_a, u_b, u_c; v_a, v_b, v_c]$, the probability of each label of Table 1, as shown in FIG. 4. As can be seen from FIG. 4, only two labels have a non-zero probability, so this set of data can correspond only to one of those two labels, directly narrowing the calculation to them.
Generally a Bayesian classifier, a softmax classifier, a maximum-likelihood classifier, or the like is used to obtain the classification model F(); feeding a set of image-coordinate data into F() yields the probability of each label for that input.
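As one concrete choice among the classifiers named here, a sketch using scikit-learn's logistic regression (a softmax classifier); the library choice is an assumption, and training_X, training_y come from the grid sketch above. predict_proba plays the role of F().

```python
from sklearn.linear_model import LogisticRegression

# softmax classifier over the 6-dimensional sorted-coordinate features;
# a Bayesian classifier such as GaussianNB would be a drop-in alternative
clf = LogisticRegression(max_iter=1000)
clf.fit(training_X, training_y)

def F(sorted_coords):
    """Return {label: probability} for one coordinate combination under test."""
    probs = clf.predict_proba([sorted_coords])[0]
    return dict(zip(clf.classes_, probs))
```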
Step 3: position the receiver using the classification model F() obtained in step 2, via the following substeps:
Step 3.1: in actual use it is only known that the receiver received 3 u-direction scanning signals and 3 v-direction scanning signals; which transmitter each of them came from is unknown. Suppose the image coordinates of the 3 u-direction scanning signals, sorted by the rule of step 1.2, are $(u_a, u_b, u_c)$, and those of the v-direction signals are $(v_a, v_b, v_c)$. Combining them gives the coordinate combination under test $[(u_a, u_b, u_c); (v_a, v_b, v_c)]$, which is fed into the classification model F() to obtain the probability of each label. Suppose the result is:

P([(u_a, u_b, u_c); (v_a, v_b, v_c)] → label = 2) = 0.3
P([(u_a, u_b, u_c); (v_a, v_b, v_c)] → label = 12) = 0.1
P([(u_a, u_b, u_c); (v_a, v_b, v_c)] → label = 23) = 0.6
P([(u_a, u_b, u_c); (v_a, v_b, v_c)] → label = else) = 0
From the classification result, $[(u_a, u_b, u_c); (v_a, v_b, v_c)]$ most probably corresponds to label 23, but this is not guaranteed, since labels 2 and 12 cannot be excluded. Step 3.2 therefore makes a further decision to find the unique correct label.
Step 3.2: sort the probabilities of the coordinate combination under test for the various labels in descending order:

P(label = 23) > P(label = 2) > P(label = 12) > P(label = else)

Since P(label = else) = 0, those possibilities are excluded, and the total number of transmitter coordinate combinations possible for the receiver in the current scanning period is

S = 3.   (2)

A transmitter coordinate combination is then selected from the S candidates in order of decreasing probability, and step 3.3 is executed.
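Step 3.2 in code: a short sketch operating on the output of F().

```python
def candidate_labels(P_label, eps=1e-9):
    """Step 3.2: drop labels whose probability is (numerically) 0 and
    return the rest from most to least probable; len(result) is S of (2)."""
    nonzero = [(lab, p) for lab, p in P_label.items() if p > eps]
    return [lab for lab, _ in sorted(nonzero, key=lambda t: t[1], reverse=True)]

# with the probabilities of the running example this returns [23, 2, 12], so S = 3
```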
Step 3.3: among the transmitter coordinate combinations selected in step 3.2, let the actual image coordinate of the receiver corresponding to the i-th transmitter under the j-th image coordinate combination be $x_{ij}$ (obtained from the scan time of the transmitter), $j = 1, 2, \ldots, S$, and denote the corresponding homogeneous image coordinates by $\tilde{x}_{ij}$. From equation (1), the projection equations between the receiver and the 3 transmitters are

$$\lambda_i \tilde{x}_{ij} = P_i \tilde{X}_w, \quad i = 1, 2, 3 \tag{3}$$

where $P_1, P_2, P_3$ are the projection matrices of the 3 transmitters, obtained by initial calibration.

Substituting $\tilde{x}_{ij}$ into equation (1) yields equation (4), where λ is an unknown non-zero coefficient (by the definition of homogeneous coordinates, when λ ≠ 0, $\lambda\tilde{x}_{ij}$ and $\tilde{x}_{ij}$ are equivalent, representing the same coordinates):

$$\lambda \tilde{x}_{ij} = P_i \tilde{X}_w \tag{4}$$
Writing $P_i = [p_{i1}, p_{i2}, p_{i3}]^T$ ($p_{i1}, p_{i2}, p_{i3}$ being the three rows of the matrix $P_i$) and $\tilde{x}_{ij} = [u_{ij}, v_{ij}, 1]^T$, equation (4) expands into the following three equations:

$$\lambda u_{ij} = p_{i1} \tilde{X}_w, \qquad \lambda v_{ij} = p_{i2} \tilde{X}_w, \qquad \lambda = p_{i3} \tilde{X}_w \tag{5}$$

If $\tilde{x}_{ij}$ and $P_i = [p_{i1}, p_{i2}, p_{i3}]^T$ are known, the value of λ is obtained from the third equation of (5); substituting λ into the first two equations then yields two independent equations in the unknown $\tilde{X}_w$. Thus one set of projection equations from three-dimensional space coordinates to two-dimensional image coordinates provides 2 independent equations in $\tilde{X}_w$.
Equation (3) comprises 3 sets of projection equations and can therefore provide 2 x 3 = 6 independent equations in $\tilde{X}_w$, whereas the spatial three-dimensional coordinate of the receiver $\tilde{X}_w$ (in homogeneous form) contains only 3 unknowns $[x, y, z]^T$ (the inhomogeneous spatial coordinate of the receiver being $X_w = [x, y, z]^T$ and $\tilde{X}_w = [x, y, z, 1]^T$). $\tilde{X}_w$ can thus be obtained by solving the over-determined linear system for its optimal solution in the least-squares sense, giving the theoretical three-dimensional position of the receiver under the j-th image coordinate combination, denoted $\tilde{X}_w^{(j)}$ (in homogeneous form).
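The least-squares solve can be written compactly: each transmitter contributes the two independent equations derived from (5), giving a 6 by 3 overdetermined system in [x, y, z]. A numpy sketch:

```python
import numpy as np

def triangulate(P_mats, img_coords):
    """Least-squares solve of equation (3) (step 3.3): each transmitter
    contributes the two independent equations derived from (5)."""
    rows, rhs = [], []
    for P, (u, v) in zip(P_mats, img_coords):
        # eliminate lambda: (p_i1 - u*p_i3).X~_w = 0 and (p_i2 - v*p_i3).X~_w = 0
        for a in (P[0] - u * P[2], P[1] - v * P[2]):
            rows.append(a[:3])          # coefficients of [x, y, z]
            rhs.append(-a[3])           # the homogeneous 1 moves to the right side
    X_w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return X_w                          # theoretical 3-D receiver position
```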
Substituting the obtained theoretical three-dimensional position $\tilde{X}_w^{(j)}$ into equation (1) gives, under the j-th image coordinate combination, the theoretical image coordinate $x'_{ij}$ of the receiver corresponding to the i-th transmitter (i = 1, 2, 3):

$$\lambda x'_{ij} = P_i \tilde{X}_w^{(j)}, \quad i = 1, 2, 3 \tag{6}$$

The j-th image coordinate combination and its three-dimensional reconstruction are evaluated with the discriminant function $f(x_{ij})$, whose concrete form is

$$f(x_{ij}) = \sum_{i=1}^{3} \left\| x_{ij} - x^{*}_{ij} \right\|^2 \tag{7}$$

where $x^{*}_{ij}$ is the inhomogeneous-coordinate form of $x'_{ij}$, i.e., $x'_{ij} = [x^{*}_{ij}, 1]^T$.
If the currently tried image coordinate combination is the correct one, the theoretical value of the discriminant function is 0; if the combination is wrong, the value of the discriminant function is necessarily greater than 0.
In practical application, to allow for receiver noise, calculation error and similar factors, a corresponding threshold is set, e.g., thresh = 1. When the value of the discriminant function is below thresh, the currently tried j-th image coordinate combination is considered to match the actual situation and the computed $X_w$ is valid; when it exceeds thresh, the combination is discarded, a new image coordinate combination is selected by returning to step 3.2, and step 3.3 is repeated until the value of the discriminant function falls below thresh.
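Tying the loop together, a sketch of the accept/reject cycle of steps 3.2 and 3.3, reusing the earlier sketches; coords_for_label() is an assumed helper that inverts Table 1's ordering pairs, and the squared-error form of f follows the reconstruction of equation (7) above.

```python
import numpy as np

def coords_for_label(sorted_coords, label):
    """Hypothetical helper inverting Table 1: distribute the sorted u's and
    v's back onto transmitter numbers according to the label's orderings."""
    u_order, v_order = ORDERINGS[label - 1]
    us, vs = sorted_coords[:3], sorted_coords[3:]
    u = {t: us[i] for i, t in enumerate(u_order)}
    v = {t: vs[i] for i, t in enumerate(v_order)}
    return [(u[t], v[t]) for t in (1, 2, 3)]

def match_receiver(P_mats, sorted_coords, P_label, thresh=1.0):
    """Steps 3.2-3.3: try candidate labels from most to least probable until
    the discriminant of equation (7) falls below thresh."""
    for label in candidate_labels(P_label):
        img = coords_for_label(sorted_coords, label)
        X_w = triangulate(P_mats, img)
        err = sum(np.linalg.norm(np.asarray(x) - project(P, X_w)) ** 2
                  for P, x in zip(P_mats, img))   # f(x_ij) of equation (7)
        if err < thresh:
            return X_w, label                     # unique correct combination
    return None                                   # no candidate matched
```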
Based on the discriminant function $f(x_{ij})$, the unique correct image coordinate combination can be screened out, yielding the three-dimensional coordinates of the receiver and the correct image-coordinate correspondence in this embodiment. Assuming label 23 is correct, the corresponding result is shown in Table 2.
Table 2: correspondence between transmitter and receiver image coordinates after discrimination

                               Emitter 1   Emitter 2   Emitter 3
u-direction image coordinates  u1 = ub     u2 = ua     u3 = uc
v-direction image coordinates  v1 = va     v2 = vc     v3 = vb
The invention thus matches image coordinates to transmitters by pattern classification, reducing the amount of computation and hence the delay caused by the matching.
In summary, the above is only a preferred embodiment of the present invention and is not intended to limit its protection scope. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in its protection scope.

Claims (4)

1. An optical positioning method based on pattern classification, characterized by comprising the following steps:
Step 1: deploy the HTC VIVE system with all its M transmitters emitting signals synchronously; select receiver sampling points within the tracking range and image each sampling point in turn to obtain the corresponding training data; the training data for a sampling point are obtained as follows:
Step 1.1: set up labels for the coordinate-ordering combinations: the image coordinates (u, v) of the receiver in the M transmitters admit $(M!)^2$ possible combinations of the u-direction and v-direction coordinate orderings; attach a label to each combination;
Step 1.2: for each receiver sampling point, sort its image coordinates (u, v) in the M transmitters by the same rule used when attaching labels to the coordinate-ordering combinations, obtaining the label under that rule; the sorted image coordinates and the corresponding label are the training data of that sampling point;
Step 2: train a classifier on the training data from step 1 to obtain a classification model F(); the input of F() is the image-coordinate combination formed by sorting the receiver's image coordinates in the M transmitters by the rule of step 1.2, and the output is the probability that this combination corresponds to each label of step 1;
Step 3: position the receiver using the classification model F() from step 2, via the following substeps:
Step 3.1: sort the receiver's image coordinates in the M transmitters by the rule of step 1.2 to obtain the coordinate combination under test, and feed it into F() to obtain the probability of each label;
Step 3.2: select the coordinate-ordering combinations corresponding to the labels in order of decreasing probability, and execute step 3.3;
Step 3.3: compute the theoretical three-dimensional position of the receiver under the selected transmitter coordinate combination;
then, from the theoretical three-dimensional position, compute the theoretical coordinates of the receiver in each transmitter's image coordinate system and compare them with the actual image coordinates; if the error is below a set threshold, the image coordinates of the receiver in each transmitter's image coordinate system are determined by the current combination and tracking of the receiver is complete; otherwise discard the current combination, return to step 3.2 to select a new one, and repeat step 3.3 until the error falls below the threshold.
2. The optical positioning method based on pattern classification according to claim 1, characterized in that in step 3.2 the transmitter coordinate combinations whose label probability is 0 are excluded, giving the total number S of transmitter coordinate combinations possible for the receiver in the current scanning period; a combination is then selected from the S candidates in order of decreasing probability, and step 3.3 is executed.
3. The optical positioning method based on pattern classification according to claim 1 or 2, characterized in that the threshold is determined according to the receiver noise and the calculation error.
4. The optical positioning method based on pattern classification according to claim 1, characterized in that the coordinates in the u direction and the v direction are sorted ascending or descending.
CN201810129583.6A 2018-02-08 2018-02-08 Optical positioning method based on pattern classification Active CN108399377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810129583.6A CN108399377B (en) Optical positioning method based on pattern classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810129583.6A CN108399377B (en) Optical positioning method based on pattern classification

Publications (2)

Publication Number Publication Date
CN108399377A CN108399377A (en) 2018-08-14
CN108399377B true CN108399377B (en) 2022-04-08

Family

ID=63096424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810129583.6A Active CN108399377B (en) Optical positioning method based on pattern classification

Country Status (1)

Country Link
CN (1) CN108399377B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135620A (en) * 2010-01-21 2011-07-27 郭瑞 Geometric statistical characteristic-based global scan matching method
CN106908764A (en) * 2017-01-13 2017-06-30 北京理工大学 A kind of multiple target optical tracking method
CN107610173A (en) * 2017-08-11 2018-01-19 北京圣威特科技有限公司 A kind of real-time location method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013086678A1 (en) * 2011-12-12 2013-06-20 北京航空航天大学 Point matching and pose synchronization determining method for planar models and computer program product
CN104375117B (en) * 2013-08-12 2018-05-04 无锡知谷网络科技有限公司 Object localization method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135620A (en) * 2010-01-21 2011-07-27 郭瑞 Geometric statistical characteristic-based global scan matching method
CN106908764A (en) * 2017-01-13 2017-06-30 北京理工大学 A kind of multiple target optical tracking method
CN107610173A (en) * 2017-08-11 2018-01-19 北京圣威特科技有限公司 A kind of real-time location method and device

Also Published As

Publication number Publication date
CN108399377A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
CN108052896B (en) Human body behavior identification method based on convolutional neural network and support vector machine
CN104112292B (en) Medical image-processing apparatus, medical image processing method and medical imaging processing routine
Deng et al. Deep bingham networks: Dealing with uncertainty and ambiguity in pose estimation
CN110234085B (en) Indoor position fingerprint map generation method and system based on anti-migration network
Wang et al. STORM: Structure-based overlap matching for partial point cloud registration
CN110136202A (en) A kind of multi-targets recognition and localization method based on SSD and dual camera
JP2012128744A (en) Object recognition device, object recognition method, learning device, learning method, program and information processing system
CN109165540A (en) A kind of pedestrian's searching method and device based on priori candidate frame selection strategy
Wang et al. Point linking network for object detection
CN107509245B (en) Extended tracking method based on HTC VIVE
CN109348410A (en) Indoor orientation method based on global and local joint constraint transfer learning
KR101758064B1 (en) Estimator learning method and pose estimation mehtod using a depth image
Spera et al. EgoCart: A benchmark dataset for large-scale indoor image-based localization in retail stores
CN109840518A (en) A kind of visual pursuit method of combining classification and domain adaptation
Jiang et al. Active object detection in sonar images
Dong et al. Visual localization via few-shot scene region classification
Tan et al. An efficient fingerprint database construction approach based on matrix completion for indoor localization
CN108399377B (en) Optical positioning method based on mode classification
Bui et al. D2S: Representing local descriptors and global scene coordinates for camera relocalization
JP5450703B2 (en) Method and apparatus for determining a spatial area in which a target is located
Xu et al. Partial descriptor update and isolated point avoidance based template update for high frame rate and ultra-low delay deformation matching
Wang et al. MapLoc: LSTM-based Location Estimation using Uncertainty Radio Maps
Wang et al. Deep Weakly Supervised Positioning for Indoor Mobile Robots
Li et al. Learning Feature Matching via Matchable Keypoint-Assisted Graph Neural Network
CN112597954B (en) Multi-person gesture estimation method and system based on bottom-up

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Weng Dongdong

Inventor after: Li Yue

Inventor after: Li Dong

Inventor after: Xun Hang

Inventor after: Hu Xiang

Inventor after: Luo Le

Inventor before: Weng Dongdong

Inventor before: Li Yue

Inventor before: Li Dong

Inventor before: Xun Hang

Inventor before: Hu Xiang

GR01 Patent grant