CN110321902B - Indoor automatic visual fingerprint acquisition method based on SOCP - Google Patents


Info

Publication number
CN110321902B
CN110321902B (application CN201910384564.2A)
Authority
CN
China
Prior art keywords
position information
socp
calculating
representing
fingerprint acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910384564.2A
Other languages
Chinese (zh)
Other versions
CN110321902A (en)
Inventor
谭学治
殷锡亮
马琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201910384564.2A priority Critical patent/CN110321902B/en
Publication of CN110321902A publication Critical patent/CN110321902A/en
Application granted granted Critical
Publication of CN110321902B publication Critical patent/CN110321902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention discloses an indoor automatic visual fingerprint acquisition method based on SOCP (second-order cone programming), and relates to indoor visual fingerprint acquisition. The invention aims to solve the problems that the traditional manual visual fingerprint acquisition method is time-consuming and labor-intensive, and that the accuracy of the off-line database generated by the automatic fingerprint acquisition method based on particle filtering is not high. Step one: estimate the step frequency; step two: estimate the traveling step-length set according to a Gaussian model; step three: calculate the position information of each frame of image; step four: extract the SURF features of the images; step five: calculate the matched SURF features of two adjacent sampled frames; step six: calculate the relative rotation and position information of the two sampled frames; step seven: fuse the per-frame position information obtained in step three with the relative rotation and position information of the two sampled frames obtained in step six, establish an SOCP model, and solve for the global optimum with an interior-point method to obtain the position information of the visual fingerprints. The invention belongs to the technical field of indoor positioning and data fusion.

Description

Indoor automatic visual fingerprint acquisition method based on SOCP
Technical Field
The invention relates to the technical field of indoor positioning and data fusion, in particular to an indoor automatic visual fingerprint acquisition method.
Background
In the field of visual positioning, the positioning work relies on rich image fingerprint information, so any visual indoor positioning method must acquire a certain amount of image fingerprints in an off-line stage. The traditional off-line acquisition method relies mainly on manual collection: images are captured at preset positions in the indoor scene to be localized, and the position information of each image fingerprint is obtained by accurately measuring the acquisition position. As a result, completing the visual fingerprint acquisition for an indoor scene of any appreciable scale consumes a large amount of manpower, material resources and time. With the development of visual positioning technology in recent years, automatic fingerprint acquisition methods have been derived to replace the traditional manual acquisition method. Automatic fingerprint acquisition mainly relies on recording video in the indoor area to be localized, and such methods fall into two categories: one relies on dedicated fingerprint acquisition equipment, such as a laser range finder, to calibrate the position information of the fingerprints; the other uses ordinary fingerprint acquisition equipment and corrects the position information of the fingerprints algorithmically, by presetting the video recording path in the area to be collected and combining it with a kinematic equation.
A typical algorithm for automatically acquiring visual fingerprints with ordinary fingerprint acquisition equipment is the particle filter algorithm: a zero-mean Gaussian random process is established as the motion equation, a visual odometer is used as the system equation, the indoor environment is divided into several acquisition routes, a stereo camera records the visual fingerprint acquisition video while the collector walks along an appointed route, and the solutions of the motion equation and the system equation are corrected by the particle filter. Theoretical analysis of this algorithm shows that the kinematic equation cannot accurately describe the visual fingerprint information during travel, and that the system equation has no solution when the traveling path is close to a straight line, so the accuracy of the visual database generated by this algorithm is not high.
Disclosure of Invention
The invention aims to solve the problems that the traditional manual visual fingerprint acquisition method is time-consuming and labor-intensive, and that the accuracy of the off-line database generated by the automatic fingerprint acquisition method based on particle filtering is not high, and provides an indoor automatic visual fingerprint acquisition method based on SOCP (second-order cone programming).
An indoor automatic visual fingerprint acquisition method based on SOCP comprises the following specific processes:
step one: estimating the step frequency under the condition of minimum work cost by using a human walking motion model;
step two: estimating a traveling step set according to a Gaussian model based on the step frequency obtained in the step one;
step three: calculating the position information of each frame of image according to the interval time, the step length and the step frequency of two adjacent sampling images in the collected video;
step four: extracting SURF characteristics of the image, and saving the positions of the SURF characteristics and descriptors of the SURF characteristics;
step five: calculating matched SURF characteristics of two adjacent sampling images based on the descriptors of the SURF characteristics of the images in the step four;
step six: calculating the relative rotation and position information of two sampled frames with the five-point method, according to the matched SURF features and the intrinsic parameter matrix of the acquisition camera;
step seven: fusing the position information of each frame obtained in step three with the relative rotation and position information of the two sampled frames obtained in step six, establishing an SOCP model, and solving for the global optimum with an interior-point method to obtain the position information of the visual fingerprints.
The invention has the beneficial effects that:
in order to overcome the defects of the traditional manual acquisition method and the automatic fingerprint acquisition algorithm, the invention adopts a new algorithm and improves the precision of the visual fingerprint database.
When the method is used for indoor automatic visual fingerprint acquisition, the precision is higher than that of the automatic fingerprint acquisition method based on particle filtering, the time consumption is lower than that of the traditional fingerprint acquisition method, and the requirement on visual fingerprint acquisition equipment is lower in the same positioning scene, since the automatic fingerprint acquisition method based on the particle filter algorithm needs a stereo camera. The visual fingerprint position information closest to the true value is obtained by establishing an SOCP model and fusing the solutions of the motion equation and the vision equation.
According to the experimental results, in the same positioning scene the average time for visual fingerprint acquisition with the invention is only 11 minutes, whereas the traditional manual visual fingerprint acquisition method takes close to 5 hours. Positioning with the EPnP algorithm on the database generated by this visual fingerprint acquisition gives an average error of 1.04 m, a minimum positioning error of 0.81 m and a maximum positioning error of 1.47 m. The confidence probabilities that the error of the visual fingerprint database generated by the invention is within 1.2 m and 1.4 m are close to 83.3% and 93.3% respectively, while those of the automatic fingerprint acquisition method based on the particle filter algorithm are only 35% and 66.7%.
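The confidence probabilities quoted above are read off the cumulative error curve of Fig. 2. As a minimal illustration of that bookkeeping only (the error values below are placeholders, not the measured data), the fraction of fingerprint positions whose error falls within a given bound can be computed as follows:

    import numpy as np

    def confidence_within(errors_m, bound_m):
        """Fraction of fingerprint position errors that fall within bound_m metres."""
        errors = np.asarray(errors_m, dtype=float)
        return float(np.mean(errors <= bound_m))

    # Illustrative placeholder errors only; the figures in the text come from Fig. 2.
    errors = [0.6, 0.9, 1.1, 1.3, 1.0, 0.8, 1.5, 1.2]
    print(confidence_within(errors, 1.2), confidence_within(errors, 1.4))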
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a graph of error curves for indoor positioning using a visual fingerprint database acquired according to the present invention.
Detailed Description
Embodiment one: the indoor automatic visual fingerprint acquisition method based on SOCP of this embodiment comprises the following specific process:
step one: estimating the step frequency under the condition of minimum work cost by using a human walking motion model;
step two: estimating a traveling step set according to a Gaussian model based on the step frequency obtained in the step one;
step three: calculating the position information of each frame of image according to the interval time, the step length and the step frequency of two adjacent sampling images in the collected video;
step four: extracting SURF characteristics of the image, and saving the positions of the SURF characteristics and descriptors of the SURF characteristics;
step five: calculating matched SURF characteristics of two adjacent sampling images based on the descriptors of the SURF characteristics of the images in the step four;
step six: calculating the relative rotation and position information of two sampled frames with the five-point method, according to the matched SURF features and the intrinsic parameter matrix of the acquisition camera;
step seven: fusing the position information of each frame obtained in step three with the relative rotation and position information of the two sampled frames obtained in step six, establishing an SOCP model, and solving for the global optimum with an interior-point method to obtain the position information of the visual fingerprints.
Embodiment two: this embodiment differs from embodiment one in that in step one the step frequency is estimated under the condition of minimum work cost using a human walking motion model; the specific process is as follows:
Step one-one: calculating the total energy consumption in the human walking motion model:
When a person walks in an indoor environment consisting of flat traveling paths, the person in a normal state tends to walk in the most labor-saving way. During walking, the interference of other factors is not considered and the person is taken to move at a constant speed; the motion model mainly considers the energy loss caused by the changes of gravitational potential energy and kinetic energy, as shown in formula (1):
W_t = W_g + W_k  (1)
wherein W_t represents the total energy consumption, W_g represents the work required to overcome the change of gravitational potential energy, and W_k represents the work required due to the change of kinetic energy;
Step one-two: solving the work W_g required to overcome the change of gravitational potential energy and the work W_k required to overcome the change of kinetic energy:
The geometric relationship among the legs, the ground and the vertical direction when a person walks is given by formula (2):
sin θ = s_l / (2l),  h = l(1 - cos θ)  (2)
wherein l is the leg length of the person, s_l represents the step length, θ represents the included angle between the stepping leg and the vertical direction, and h is the change of the height of the center of gravity during travel;
From formula (2), the work W_g required to overcome the change of gravitational potential energy is obtained, as given in formula (3):
[formula (3): W_g expressed in terms of the mass M, the gravitational acceleration g, the movement speed v and the step geometry of formula (2)]
wherein M is the mass of the person, g is the acceleration of gravity, and v is the speed of the person's movement;
The work W_k required to overcome the change of kinetic energy is calculated from formula (4):
[formula (4): W_k expressed in terms of the mass M, the step length s_l and the movement speed v]
wherein M is the mass of the person, s_l represents the step length, and v is the speed of the person's movement;
Step one-three: solving the step frequency:
The step frequency f is given by formula (5):
f = v / s_l  (5)
wherein s_l represents the step length and v is the speed of the person's movement;
According to formulas (2) to (5), formula (1) can be rewritten as formula (6):
[formula (6): the total energy consumption W_t expressed as a function of the step frequency f]
Differentiating formula (6) with respect to f and setting dW_t/df = 0 yields formula (7):
[formula (7): the step frequency f under minimum work cost, expressed in terms of g and l]
wherein g is the acceleration of gravity and l is the leg length of the person.
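For illustration only, the sketch below evaluates step one numerically under an assumed cost of the common inverted-pendulum form, W_t(f) = M*g*v^2 / (8*l*f) + (1/2)*M*v^2*f, whose minimum over f is attained at f = (1/2)*sqrt(g/l). The concrete expressions of formulas (3) to (7) are not reproduced here, so this particular cost is an assumption rather than the patented formula; with g = 9.8 and l = 1 m (the values of embodiment eight) the assumed closed form gives roughly 1.57 steps per second.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def total_power(f, M=70.0, v=1.4, g=9.8, l=1.0):
        """Assumed walking cost per unit time (illustrative stand-in for formula (6))."""
        w_gravity = M * g * v ** 2 / (8.0 * l * f)   # term from the gravitational potential change
        w_kinetic = 0.5 * M * v ** 2 * f             # term from the kinetic energy change
        return w_gravity + w_kinetic

    # Step frequency at minimum work cost: numeric minimisation vs. the assumed closed form
    res = minimize_scalar(total_power, bounds=(0.5, 4.0), method="bounded")
    print(f"numeric f = {res.x:.3f} Hz, closed form f = {0.5 * np.sqrt(9.8 / 1.0):.3f} Hz")

For this assumed cost the minimizer does not depend on M or v, only on g and l, which is consistent with the variables named after formula (7).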
Other steps and parameters are the same as those in the first embodiment.
Embodiment three: this embodiment differs from embodiment one or two in that in step two the traveling step-length set is estimated according to a Gaussian model; the specific process is as follows:
Step two-one: according to the duration T_v of the recorded video and the step frequency f, the total number of steps required for the traveling route is n = T_v × f;
Step two-two: according to the Gaussian model, a step-length data set S obeying a Gaussian distribution is generated, with each step length drawn from N(μ_s, δ_s), wherein μ_s represents the mean of the step length, δ_s represents the standard deviation of the step length, and N() represents a normal distribution;
The step-length data set is then multiplied by the step frequency to obtain the set of lengths traveled per second, where L_t denotes the length traveled in the t-th second (1 ≤ t ≤ T_v).
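A minimal sketch of steps two-one and two-two follows. The numeric values of μ_s and δ_s are not fixed by this embodiment, so the ones below are illustrative placeholders, and the mapping from sampled step lengths to per-second travel lengths (one sampled step length per second, multiplied by f) is one straightforward reading of the text:

    import numpy as np

    def estimate_travel_lengths(T_v, f, mu_s=0.7, delta_s=0.05, seed=0):
        """Step two sketch: Gaussian step-length set S and per-second travel lengths L."""
        rng = np.random.default_rng(seed)
        n_steps = int(round(T_v * f))                  # n = T_v x f   (step two-one)
        S = rng.normal(mu_s, delta_s, size=n_steps)    # step-length data set (step two-two)
        # Length covered in second t: (step length associated with that second) x f
        L = S[: int(T_v)] * f
        return S, L

    S, L = estimate_travel_lengths(T_v=60, f=1.57)
    print(len(S), np.round(L[:3], 3))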
Other steps and parameters are the same as those in the first or second embodiment.
Embodiment four: this embodiment differs from embodiments one to three in that in step three the position information of each frame of image is calculated according to the interval time of two adjacent sampled frames in the collected video, the step length and the step frequency; the specific process is as follows:
Combining the step-length data set obtained in step two with the hand shake, model formula (9) is established and the position information of each sampled frame is generated.
The visual fingerprints are collected with a handheld mobile phone (the phone collects images, and the images are localized to obtain the image visual fingerprints). During collection, because the hands shake, the camera shakes three-dimensionally relative to the position of the human body, and the shake in each dimension obeys a Gaussian distribution, giving formula (9):
h_x ~ N(0, δ_h), h_y ~ N(0, δ_h), h_z ~ N(0, δ_h)  (9)
wherein h_x, h_y and h_z are respectively the deviation values (three-dimensional distances) of the hand relative to the human body on the x-axis, y-axis and z-axis of the world coordinate system, and δ_h is the standard deviation of the shake;
The position of each frame of image can then be calculated by formula (10):
[formula (10): the world coordinates C_x, C_y, C_z of each sampled frame, obtained from the per-second travel lengths, the time of the sampled frame within the current second and the hand-shake offsets of formula (9)]
wherein C_x, C_y and C_z are respectively the x-axis, y-axis and z-axis coordinates (world coordinate system) of each frame of image, δ_t is the interval time between two adjacent sampled frames, 1 ≤ t ≤ T_v, L_{t-1} is the length traveled in the (t-1)-th second, and τ_t is the time within the t-th second at which the sampled frame is located.
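A sketch of step three under stated assumptions: the acquisition route is treated as a straight line along the world x-axis, the forward displacement of a frame is the accumulated per-second travel length up to the frame's timestamp (linear within the current second), and the Gaussian hand-shake offsets of formula (9) are added on each axis. The exact accumulation in formula (10) is not reproduced above, so this is an interpretation rather than a verbatim implementation:

    import numpy as np

    def frame_positions(L, frame_times, delta_h=0.02, seed=0):
        """Step three sketch: per-frame world positions from per-second travel lengths L."""
        rng = np.random.default_rng(seed)
        cum = np.concatenate(([0.0], np.cumsum(L)))      # distance covered at whole seconds
        positions = []
        for ts in frame_times:                           # ts: frame timestamp in seconds
            t = min(int(ts), len(L) - 1)                 # index of the current second
            forward = cum[t] + (ts - t) * L[t]           # linear advance within that second
            h = rng.normal(0.0, delta_h, size=3)         # hand-shake offsets h_x, h_y, h_z
            positions.append([forward + h[0], h[1], h[2]])
        return np.asarray(positions)

    # Frames sampled every 0.5 s against per-second lengths L from step two
    C = frame_positions(L=np.full(60, 1.1), frame_times=np.arange(0.0, 60.0, 0.5))
    print(np.round(C[:2], 3))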
Other steps and parameters are the same as those in one of the first to third embodiments.
Embodiment five: this embodiment differs from embodiments one to four in that in step five the matched SURF features of two adjacent sampled frames are calculated; the process specifically comprises the following steps:
After the feature point description is finished, feature point matching is carried out. For one feature point in the current frame image, the Euclidean distances to all feature points in the next frame image are calculated, and among them the Euclidean distance E_min1 of the nearest-neighbor feature point and the Euclidean distance E_min2 of the second-nearest-neighbor feature point are selected (i.e. the smallest Euclidean distance E_min1 and the second smallest Euclidean distance E_min2). The ratio γ of the two is calculated; if the ratio γ is less than or equal to the threshold Thre, the feature point is considered a correctly matched feature point, otherwise it is an incorrectly matched feature point, and the correctly matched feature points are connected to form feature point pairs. The feature point matching formula is shown in formula (11):
γ = E_min1 / E_min2 ≤ Thre  (11)
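A sketch of steps four and five with OpenCV; SURF lives in the contrib xfeatures2d module (non-free, so it must be enabled in the OpenCV build), and the nearest/second-nearest ratio test uses Thre = 0.8 as in embodiment nine:

    import cv2

    def matched_surf_features(img1, img2, thre=0.8):
        """Steps four/five sketch: SURF extraction plus the ratio test of formula (11)."""
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # needs opencv-contrib (non-free)
        kp1, des1 = surf.detectAndCompute(img1, None)              # SURF positions + descriptors
        kp2, des2 = surf.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des1, des2, k=2)                    # E_min1, E_min2 per feature point
        good = [m for m, n in knn if m.distance <= thre * n.distance]   # gamma <= Thre
        pts1 = [kp1[m.queryIdx].pt for m in good]
        pts2 = [kp2[m.trainIdx].pt for m in good]
        return pts1, pts2

    # Usage: pts1, pts2 = matched_surf_features(cv2.imread("frame1.png", 0), cv2.imread("frame2.png", 0))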
Other steps and parameters are the same as in one of the first to fourth embodiments.
Embodiment six: this embodiment differs from embodiments one to five in that in step six the relative rotation and position information of the two frames of images is calculated with the five-point method, according to the matched SURF features and the intrinsic parameter matrix of the acquisition camera; the specific process is as follows:
Step six-one: calculating the relative rotation matrix R and the translation vector t using the five-point method; the specific process is as follows:
According to epipolar geometry, any pair of matched feature points with homogeneous coordinates q″ = (x″, y″, 1) and q‴ = (x‴, y‴, 1) in the pixel coordinate system satisfies
q″^T K^T E K q‴ = 0  (12)
wherein x″ is the abscissa and y″ the ordinate of the feature point in the pixel coordinate system of one frame, x‴ is the abscissa and y‴ the ordinate of the matched feature point in the pixel coordinate system of the other frame, K represents the inverse matrix of the camera intrinsic parameter matrix, E is the essential matrix containing the relative rotation and position information of the two frames of images, and T denotes transposition;
According to the properties of the essential matrix, E should also satisfy formulas (13) and (14):
Det(K^T E K) = 0  (13)
E E^T E - (1/2) tr(E E^T) E = 0  (14)
Combining (12), (13) and (14) and solving the system of equations with the five-point method requires a minimum of 5 pairs of matching points; tr() represents the trace of a matrix;
Thus, according to E = [t]× R, a group consisting of a relative rotation matrix R and a translation vector t is calculated from every 5 matching pairs;
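OpenCV exposes a five-point solver; the sketch below recovers E, R and t from the matched points of step five given the camera intrinsic matrix (denoted K_int here to avoid confusion with K of formula (12), which is its inverse). The RANSAC wrapper and the cheirality check are OpenCV defaults rather than anything prescribed by this step:

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K_int):
        """Step six-one sketch: essential matrix by the five-point method, then R and t."""
        p1 = np.asarray(pts1, dtype=np.float64)
        p2 = np.asarray(pts2, dtype=np.float64)
        # Five-point solver inside RANSAC; at least 5 matched pairs are required
        E, inliers = cv2.findEssentialMat(p1, p2, K_int, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        # Decompose E = [t]x R and keep the physically valid (R, t) via the cheirality check
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K_int, mask=inliers)
        return R, t, inliers

    K_int = np.array([[1000.0, 0.0, 640.0],
                      [0.0, 1000.0, 360.0],
                      [0.0, 0.0, 1.0]])     # illustrative intrinsics
    # R, t, _ = relative_pose(pts1, pts2, K_int)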
Step six-two: calculating the probability of each group of R and t; the specific process is as follows:
The number of matching pairs is m, so the number of combinations of 5 matching pairs is C(m,5). When this number is large, the corresponding computation cost is high, so sampling is adopted instead of traversing all combinations: a threshold N_th is set; when C(m,5) > N_th, N_th combinations are selected; otherwise all C(m,5) combinations are selected;
From formula (15), the probability p_s of each combination is calculated:
[formula (15): p_s expressed in terms of the probabilities p_e of the matching pairs contained in the combination]
wherein p_e is the probability that one of the feature pairs in the combination appears among all combinations, e is the label of one matching pair, and s is the label of a group of matching pairs;
Step six-three: calculating the weight of each group of R and t; the specific process is as follows:
After R and t are calculated, substituting them back into formula (12) gives the error ε_r of each matched pair. In general, the smaller this error, the greater the weight of the feature pair should be, so w_e = 1/ε_r. According to formula (16), the weight w_i of each combination is calculated:
[formula (16): the combination weight w_i obtained from the weights w_e of the matching pairs in the combination]
wherein w_e is the weight of a matching pair in the combination of matching pairs;
After all combination weights are obtained, the final weights are obtained through normalization.
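A sketch of steps six-two and six-three under stated assumptions: combinations of 5 matched pairs are sampled (up to N_th = 100, the value of embodiment ten), each pair's weight is w_e = 1/ε_r with ε_r taken as the absolute residual of formula (12), and a combination's weight is taken as the sum of its pairs' weights before normalization. The aggregation rule and the use of a single essential matrix E (rather than re-estimating E per combination as step six-one does) are simplifications for illustration, not the patented formulas (15) and (16):

    import numpy as np
    from itertools import combinations
    from math import comb

    def combination_weights(pts1, pts2, E, K_inv, N_th=100, seed=0):
        """Steps six-two/six-three sketch: sample 5-pair combinations and weight them."""
        rng = np.random.default_rng(seed)
        m = len(pts1)
        q1 = np.hstack([np.asarray(pts1, float), np.ones((m, 1))])   # homogeneous pixel coords q''
        q2 = np.hstack([np.asarray(pts2, float), np.ones((m, 1))])   # homogeneous pixel coords q'''
        # Residual of formula (12) for every matched pair, with K_inv the inverse intrinsics
        eps = np.abs(np.einsum("ij,jk,ik->i", q1 @ K_inv.T, E, q2 @ K_inv.T))
        w_e = 1.0 / np.maximum(eps, 1e-12)                           # per-pair weight w_e = 1 / eps_r
        if comb(m, 5) > N_th:                                        # sample instead of full traversal
            combos = [tuple(rng.choice(m, 5, replace=False)) for _ in range(N_th)]
        else:
            combos = list(combinations(range(m), 5))
        w = np.array([w_e[list(c)].sum() for c in combos])           # assumed per-combination aggregation
        return combos, w / w.sum()                                   # normalized combination weights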
Other steps and parameters are the same as those in one of the first to fifth embodiments.
Embodiment seven: this embodiment differs from embodiments one to six in that in step seven the position information of each frame obtained in step three is fused with the relative rotation and position information of the two sampled frames obtained in step six, an SOCP model is established, and the global optimum is solved with the interior-point method to obtain the position information of the visual fingerprints; the specific process is as follows:
For each frame of image, k pieces of position information are obtained through step three and k through step six; among these 2k pieces of position information the position closest to the true value is sought, and the SOCP model of formula (17) is established:
[formula (17): the SOCP objective and second-order cone constraints over the quantities defined below]
Formula (17) is a typical SOCP problem, wherein the vector x represents the optimization vector, i.e. the true 3D position coordinates of each frame of image; the vector c_o represents the k sets of image 3D position coordinates obtained through step six; the vector b_o represents the k sets of image 3D position coordinates obtained through step three; the vector x_up represents the upper limit of the 3D position coordinates of the current frame image; the vector x_low represents the lower limit of the 3D position coordinates of the current frame image; p_E represents the probability vector of each set of data calculated in step three; P_V represents the weight probability vector of each set of data calculated in step six; the remaining symbols in formula (17) are unknown auxiliary variables; and α, β are equalization parameters, which can be calculated from formula (18):
α + β = 1  (18)
The global optimum is then solved with the interior-point method to obtain the position information of the visual fingerprint.
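Formula (17) is not reproduced above, so the sketch below poses one common SOCP of this kind as an assumption: the unknown frame position x minimizes a weighted sum of its Euclidean distances to the k motion-model estimates b_o (weighted by p_E and α) and to the k vision estimates c_o (weighted by P_V and β), subject to α + β = 1 and box bounds on x. CVXPY with the ECOS solver (an interior-point method, requires the ecos package) handles the second-order cone constraints:

    import cvxpy as cp
    import numpy as np

    def fuse_positions(b_o, c_o, p_E, P_V, lower, upper, alpha=0.5):
        """Step seven sketch: an assumed SOCP fusing motion-model and vision position estimates.

        b_o, c_o: (k, 3) arrays of position estimates from step three / step six;
        p_E, P_V: length-k probability / weight vectors; lower, upper: box bounds on x.
        The objective below is an assumption about formula (17), not a verbatim copy of it.
        """
        beta = 1.0 - alpha                               # alpha + beta = 1  (formula (18))
        k = len(b_o)
        x = cp.Variable(3)                               # true 3D position of the frame
        r = cp.Variable(k, nonneg=True)                  # epigraph variables for the motion terms
        s = cp.Variable(k, nonneg=True)                  # epigraph variables for the vision terms
        cons = [cp.norm(x - b_o[i]) <= r[i] for i in range(k)]
        cons += [cp.norm(x - c_o[i]) <= s[i] for i in range(k)]
        cons += [x >= lower, x <= upper]
        problem = cp.Problem(cp.Minimize(alpha * p_E @ r + beta * P_V @ s), cons)
        problem.solve(solver=cp.ECOS)                    # interior-point SOCP solver
        return x.value

    # x_hat = fuse_positions(b_o, c_o, p_E, P_V, lower=np.zeros(3), upper=np.full(3, 50.0))

The epigraph variables r and s turn each Euclidean-distance term into a second-order cone constraint, which is what makes the fused problem an SOCP solvable by interior-point methods.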
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Embodiment eight: this embodiment differs from embodiments one to seven in that in step one g takes 9.8 m/s² and l takes 1 m.
Other steps and parameters are the same as those in one of the first to seventh embodiments.
Embodiment nine: this embodiment differs from embodiments one to eight in that in step five the threshold Thre takes the value 0.8.
Other steps and parameters are the same as those in embodiments one to eight.
Embodiment ten: this embodiment differs from embodiments one to nine in that the threshold N_th in step six-two takes the value 100.
Other steps and parameters are the same as those in one of the first to ninth embodiments.
The following examples were used to demonstrate the beneficial effects of the present invention:
the first embodiment is as follows:
1. and at the 3 layers of the student activity center of Harbin university, video acquisition equipment is used for acquiring videos of the positioning area.
2. And establishing a minimum work cost motion model, solving a motion equation, and combining the interval time of two adjacent sampling frames in the video file to obtain a motion equation solution.
3. And (3) extracting features of the acquired video file by using an SURF algorithm, and calculating relative rotation and displacement information of two adjacent sampling frames by using a five-point method.
4. And establishing an SOCP model by using solutions of the visual equation and the motion equation, and solving a global optimum value by using an interior point method.
5. The error accumulation probability curve of the generated visual fingerprint database obtained by the algorithm of the present invention according to the accurate visual positioning result is shown in fig. 2.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (9)

1. An indoor automatic visual fingerprint acquisition method based on SOCP is characterized in that: the method comprises the following specific processes:
step one: estimating the step frequency under the condition of minimum work cost by using a human walking motion model;
step two: estimating a traveling step set according to a Gaussian model based on the step frequency obtained in the step one;
step three: calculating the position information of each frame of image according to the interval time, the step length and the step frequency of two adjacent sampling images in the collected video;
step four: extracting SURF characteristics of the image, and saving the positions of the SURF characteristics and descriptors of the SURF characteristics;
step five: calculating matched SURF characteristics of two adjacent sampling images based on the descriptors of the SURF characteristics of the images in the step four;
step six: calculating the relative rotation and position information of two sampled frames with the five-point method, according to the matched SURF features and the intrinsic parameter matrix of the acquisition camera;
step seven: fusing the position information of each frame obtained in the third step with the relative rotation and position information of the two frames of sampling images obtained in the sixth step, establishing an SOCP model, and solving a global optimal value by using an interior point method to obtain the position information of the visual fingerprint;
in step one, the step frequency is estimated under the condition of minimum work cost by using the human walking motion model;
the specific process is as follows:
step one-one: calculating the total energy consumption in the human walking motion model:
as shown in formula (1),
W_t = W_g + W_k  (1)
wherein W_t represents the total energy consumption, W_g represents the work required to overcome the change of gravitational potential energy, and W_k represents the work required due to the change of kinetic energy;
step one-two: solving the work W_g required to overcome the change of gravitational potential energy and the work W_k required to overcome the change of kinetic energy:
the geometric relationship among the legs, the ground and the vertical direction when a person walks is given by formula (2),
sin θ = s_l / (2l),  h = l(1 - cos θ)  (2)
wherein l is the leg length of the person, s_l represents the step length, θ represents the included angle between the stepping leg and the vertical direction, and h is the change of the height of the center of gravity during travel;
from formula (2), the work W_g required to overcome the change of gravitational potential energy is obtained, as given in formula (3):
[formula (3): W_g expressed in terms of the mass M, the gravitational acceleration g, the movement speed v and the step geometry of formula (2)]
wherein M is the mass of the person, g is the acceleration of gravity, and v is the speed of the person's movement;
the work W_k required to overcome the change of kinetic energy is calculated from formula (4):
[formula (4): W_k expressed in terms of the mass M, the step length s_l and the movement speed v]
wherein M is the mass of the person, s_l represents the step length, and v is the speed of the person's movement;
step one-three: solving the step frequency:
the step frequency f is given by formula (5):
f = v / s_l  (5)
wherein s_l represents the step length and v is the speed of the person's movement;
according to formulas (2) to (5), formula (1) is rewritten as formula (6):
[formula (6): the total energy consumption W_t expressed as a function of the step frequency f]
differentiating formula (6) with respect to f and setting dW_t/df = 0 gives formula (7):
[formula (7): the step frequency f under minimum work cost, expressed in terms of g and l]
wherein g is the acceleration of gravity and l is the leg length of the person.
2. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 1, characterized in that: in step two, the traveling step-length set is estimated according to a Gaussian model;
the specific process is as follows:
step two-one: according to the duration T_v of the recorded video and the step frequency f, the total number of steps required for the traveling route is n = T_v × f;
step two-two: according to the Gaussian model, a step-length data set S obeying a Gaussian distribution is generated, with each step length drawn from N(μ_s, δ_s), wherein μ_s represents the mean of the step length, δ_s represents the standard deviation of the step length, and N() represents a normal distribution;
the step-length data set is then multiplied by the step frequency to obtain the set of lengths traveled per second, where L_t denotes the length traveled in the t-th second (1 ≤ t ≤ T_v).
3. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 2, characterized in that: in step three, the position information of each frame of image is calculated according to the interval time of two adjacent sampled frames in the collected video, the step length and the step frequency;
the specific process is as follows:
combining the step-length data set obtained in step two with the hand shake, model formula (9) is established and the position information of each sampled frame is generated:
the visual fingerprints are collected with a handheld camera; because the hands shake, the camera shakes three-dimensionally relative to the position of the human body, and the shake in each dimension obeys a Gaussian distribution, giving formula (9):
h_x ~ N(0, δ_h), h_y ~ N(0, δ_h), h_z ~ N(0, δ_h)  (9)
wherein h_x, h_y and h_z are respectively the deviation values of the hand relative to the human body on the x-axis, y-axis and z-axis, and δ_h is the standard deviation of the shake;
the position of each frame of image is calculated by formula (10):
[formula (10): the coordinates C_x, C_y, C_z of each sampled frame, obtained from the per-second travel lengths, the time of the sampled frame within the current second and the hand-shake offsets of formula (9)]
wherein C_x, C_y and C_z are respectively the x-axis, y-axis and z-axis coordinates of each frame of image, δ_t is the interval time between two adjacent sampled frames, 1 ≤ t ≤ T_v, L_{t-1} is the length traveled in the (t-1)-th second, and τ_t is the time within the t-th second at which the sampled frame is located.
4. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 3, characterized in that: calculating the matched SURF characteristics of two adjacent frames of sampling images in the step five;
the process specifically comprises the following steps:
for one feature point in the current frame image, the Euclidean distances to all feature points in the next frame image are respectively calculated, and among them the Euclidean distance E_min1 of the nearest-neighbor feature point and the Euclidean distance E_min2 of the second-nearest-neighbor feature point are selected, and the ratio γ of the two is calculated;
the feature points whose ratio γ is less than or equal to the threshold Thre are considered correctly matched feature points, otherwise they are incorrectly matched feature points, and the correctly matched feature points are connected to form feature point pairs;
the feature point matching formula is shown in formula (11):
γ = E_min1 / E_min2 ≤ Thre  (11)
5. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 4, characterized in that: in step six, the relative rotation and position information of the two frames of images is calculated with the five-point method, according to the matched SURF features and the intrinsic parameter matrix of the acquisition camera;
the specific process is as follows:
step six-one: calculating the relative rotation matrix R and the translation vector t using the five-point method; the specific process is as follows:
according to epipolar geometry, any pair of matched feature points with homogeneous coordinates q″ = (x″, y″, 1) and q‴ = (x‴, y‴, 1) in the pixel coordinate system satisfies
q″^T K^T E K q‴ = 0  (12)
wherein x″ is the abscissa and y″ the ordinate of the feature point in the pixel coordinate system of one frame, x‴ is the abscissa and y‴ the ordinate of the matched feature point in the pixel coordinate system of the other frame, K represents the inverse matrix of the camera intrinsic parameter matrix, E is the essential matrix containing the relative rotation and position information of the two frames of images, and T denotes transposition;
according to the properties of the essential matrix, E should also satisfy formulas (13) and (14):
Det(K^T E K) = 0  (13)
E E^T E - (1/2) tr(E E^T) E = 0  (14)
combining (12), (13) and (14) and solving the system of equations with the five-point method requires a minimum of 5 pairs of matching points, where tr() represents the trace of a matrix;
thus, according to E = [t]× R, a group consisting of a relative rotation matrix R and a translation vector t is calculated from every 5 matching pairs;
step six-two: calculating the probability of each group of R and t; the specific process is as follows:
the number of matching pairs is m, so the number of combinations of 5 matching pairs is C(m,5);
a threshold N_th is set; when C(m,5) > N_th, N_th combinations are selected; otherwise all C(m,5) combinations are selected;
from formula (15), the probability p_s of each combination is calculated:
[formula (15): p_s expressed in terms of the probabilities p_e of the matching pairs contained in the combination]
wherein p_e is the probability that one of the feature pairs in the combination appears among all combinations, e is the label of one matching pair, and s is the label of a group of matching pairs;
step six-three: calculating the weight of each group of R and t; the specific process is as follows:
according to formula (16), the weight w_i of each combination is calculated:
[formula (16): the combination weight w_i obtained from the weights w_e of the matching pairs in the combination]
wherein w_e is the weight of a matching pair in the combination of matching pairs;
after all combination weights are obtained, the final weights are obtained through normalization.
6. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 5, characterized in that: in step seven, the position information of each frame obtained in step three is fused with the relative rotation and position information of the two sampled frames obtained in step six, an SOCP model is established, and the global optimum is solved with the interior-point method to obtain the position information of the visual fingerprints;
the specific process is as follows:
for each frame of image, k pieces of position information are obtained through step three and k through step six; among these 2k pieces of position information the position closest to the true value is sought, and the SOCP model of formula (17) is established:
[formula (17): the SOCP objective and second-order cone constraints over the quantities defined below]
wherein the vector x in the formula represents the optimization vector, i.e. the true 3D position coordinates of each frame of image; the vector c_o represents the k sets of image 3D position coordinates obtained through step six; the vector b_o represents the k sets of image 3D position coordinates obtained through step three; the vector x_up represents the upper limit of the 3D position coordinates of the current frame image; the vector x_low represents the lower limit of the 3D position coordinates of the current frame image; p_E represents the probability vector of each set of data calculated in step three; P_V represents the weight probability vector of each set of data calculated in step six; the remaining symbols in formula (17) are unknown auxiliary variables; and α, β are equalization parameters;
α and β are calculated from formula (18):
α + β = 1  (18)
the global optimum is then solved with the interior-point method to obtain the position information of the visual fingerprint.
7. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 6, characterized in that: in step one, g takes 9.8 m/s² and l takes 1 m.
8. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 7, characterized in that: in step five, the threshold Thre takes the value 0.8.
9. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 8, characterized in that: the threshold N_th in step six-two takes the value 100.
CN201910384564.2A 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP Active CN110321902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910384564.2A CN110321902B (en) 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910384564.2A CN110321902B (en) 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP

Publications (2)

Publication Number Publication Date
CN110321902A CN110321902A (en) 2019-10-11
CN110321902B true CN110321902B (en) 2021-07-13

Family

ID=68118955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910384564.2A Active CN110321902B (en) 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP

Country Status (1)

Country Link
CN (1) CN110321902B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110967014B (en) * 2019-10-24 2023-10-31 国家电网有限公司 Machine room indoor navigation and equipment tracking method based on augmented reality technology
CN111198365A (en) * 2020-01-16 2020-05-26 东方红卫星移动通信有限公司 Indoor positioning method based on radio frequency signal
CN112905798B (en) * 2021-03-26 2023-03-10 深圳市阿丹能量信息技术有限公司 Indoor visual positioning method based on character identification

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9576183B2 (en) * 2012-11-02 2017-02-21 Qualcomm Incorporated Fast initialization for monocular visual SLAM
CN106162555B (en) * 2016-09-26 2019-09-10 湘潭大学 Indoor orientation method and system
CN106595653A (en) * 2016-12-08 2017-04-26 南京航空航天大学 Wearable autonomous navigation system for pedestrian and navigation method thereof
CN107103056B (en) * 2017-04-13 2021-01-29 哈尔滨工业大学 Local identification-based binocular vision indoor positioning database establishing method and positioning method
CN107367709B (en) * 2017-06-05 2019-07-26 宁波大学 Arrival time robust weighted least-squares localization method is based in hybird environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226061A (en) * 2008-02-21 2008-07-23 上海交通大学 Method for locating walker
CN104023228A (en) * 2014-06-12 2014-09-03 北京工业大学 Self-adaptive indoor vision positioning method based on global motion estimation
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
CN109342993A (en) * 2018-09-11 2019-02-15 宁波大学 Wireless sensor network target localization method based on RSS-AoA hybrid measurement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Visual Fingerprinting for Indoor Image-Based Localization Applications; Farhang Vedadi et al.; IEEE Trans. Syst., Man; 2017-05-16; pp. 1-13 *
基于二阶锥规划SOCP的RSS测距的定位方案 (RSS ranging localization scheme based on second-order cone programming, SOCP); 刘潇潇; 办公自动化 (Office Automation); 2018-12-25; pp. 17-21 *

Also Published As

Publication number Publication date
CN110321902A (en) 2019-10-11

Similar Documents

Publication Publication Date Title
CN110321902B (en) Indoor automatic visual fingerprint acquisition method based on SOCP
CN109410321B (en) Three-dimensional reconstruction method based on convolutional neural network
CN107403163B (en) A kind of laser SLAM closed loop automatic testing method based on deep learning
CN110866079B (en) Generation and auxiliary positioning method of intelligent scenic spot live-action semantic map
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
CN109648558B (en) Robot curved surface motion positioning method and motion positioning system thereof
CN110446159A (en) A kind of system and method for interior unmanned plane accurate positioning and independent navigation
CN108107462B (en) RTK and high-speed camera combined traffic sign post attitude monitoring device and method
CN106887037B (en) indoor three-dimensional reconstruction method based on GPU and depth camera
CN107680133A (en) A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm
CN111060924B (en) SLAM and target tracking method
CN103413352A (en) Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN108734737A (en) The method that view-based access control model SLAM estimation spaces rotate noncooperative target shaft
US20230112991A1 (en) Method of high-precision 3d reconstruction of existing railway track lines based on uav multi-view images
CN111156997B (en) Vision/inertia combined navigation method based on camera internal parameter online calibration
CN110992487B (en) Rapid three-dimensional map reconstruction device and reconstruction method for hand-held airplane fuel tank
CN114011608B (en) Spraying process optimization system based on digital twinning and spraying optimization method thereof
CN110319772A (en) Vision large span distance measuring method based on unmanned plane
CN112833892B (en) Semantic mapping method based on track alignment
CN114111799B (en) Unmanned aerial vehicle aerial-shooting path planning method for high-macromonomer fine modeling
CN110223380A (en) Fusion is taken photo by plane and the scene modeling method of ground multi-view image, system, device
CN110675453A (en) Self-positioning method for moving target in known scene
CN109459759A (en) City Terrain three-dimensional rebuilding method based on quadrotor drone laser radar system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant