CN110321902A - Indoor automatic visual fingerprint acquisition method based on SOCP - Google Patents

Indoor automatic visual fingerprint acquisition method based on SOCP Download PDF

Info

Publication number
CN110321902A
CN110321902A CN201910384564.2A
Authority
CN
China
Prior art keywords
socp
position information
calculating
representing
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910384564.2A
Other languages
Chinese (zh)
Other versions
CN110321902B (en)
Inventor
谭学治
殷锡亮
马琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201910384564.2A priority Critical patent/CN110321902B/en
Publication of CN110321902A publication Critical patent/CN110321902A/en
Application granted granted Critical
Publication of CN110321902B publication Critical patent/CN110321902B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

An indoor automatic visual fingerprint acquisition method based on SOCP; the invention relates to indoor automatic visual fingerprint acquisition methods. The purpose of the invention is to solve the problems that traditional manual visual fingerprint acquisition is time-consuming and labor-intensive and that the offline database generated by the particle-filter-based automatic fingerprint acquisition method has low accuracy. Step one: estimate the step frequency; step two: estimate the traveling step set according to a Gaussian model; step three: calculate the position information of each frame of image; step four: extract the SURF features of the images; step five: calculate the matched SURF features of two adjacent sampled images; step six: calculate the relative rotation and position information of the two sampled images; step seven: fuse the position information of each frame obtained in step three with the relative rotation and position information of the two sampled images obtained in step six, establish an SOCP model, and solve for the global optimum with an interior point method to obtain the position information of the visual fingerprints. The invention belongs to the technical field of indoor positioning and data fusion.

Description

Indoor automatic visual fingerprint acquisition method based on SOCP
Technical Field
The invention relates to the technical field of indoor positioning and data fusion, in particular to an indoor automatic visual fingerprint acquisition method.
Background
In the field of visual positioning, positioning work relies on abundant image fingerprint information, and any visual indoor positioning method needs to acquire a certain amount of image fingerprint information in an offline stage. The traditional offline acquisition method mainly depends on manual collection: image information is collected at preset positions in the indoor scene to be positioned, and the position information of each image fingerprint is obtained by accurately measuring the acquisition position, so a large amount of manpower, material resources and time is consumed to complete the acquisition of visual fingerprint information in an indoor scene of any appreciable scale. With the development of visual positioning technology in recent years, automatic fingerprint acquisition methods have been derived to replace the traditional manual acquisition methods. Automatic fingerprint acquisition mainly relies on recording video in the indoor area to be positioned, and automatic acquisition methods fall into two categories: one relies on specific acquisition equipment, such as a laser range finder, to calibrate the position information of the fingerprints; the other uses ordinary acquisition equipment and corrects the position information of the fingerprints through an algorithm, by presetting a video recording path in the area to be collected and combining it with a kinematic equation.
A typical algorithm for automatically acquiring visual fingerprints with ordinary acquisition equipment is the particle filter algorithm: a zero-mean Gaussian random process is established as the motion equation, a visual odometer is established as the system equation, the indoor environment is divided into several acquisition routes, a stereo camera records the visual fingerprint acquisition video while the collector walks along the appointed route, and the solutions of the motion equation and the system equation are corrected by the particle filter algorithm. Theoretical analysis shows that in this algorithm the kinematic equation cannot accurately describe the visual fingerprint information during travel, and the system equation has no solution when the traveling path is close to a straight line, so the accuracy of the visual database generated by the algorithm is not high.
Disclosure of Invention
The invention aims to solve the problems that the traditional manual visual fingerprint acquisition method is time-consuming and labor-intensive and that the accuracy of the offline database generated by the particle-filter-based automatic fingerprint acquisition method is not high, and provides an indoor automatic visual fingerprint acquisition method based on SOCP (second-order cone programming).
An indoor automatic visual fingerprint acquisition method based on SOCP comprises the following specific processes:
the method comprises the following steps: estimating the step frequency under the condition of minimum work cost by using a human walking motion model;
step two: estimating a traveling step set according to a Gaussian model based on the step frequency obtained in the step one;
step three: calculating the position information of each frame of image according to the interval time, the step length and the step frequency of two adjacent sampling images in the collected video;
step four: extracting SURF characteristics of the image, and saving the positions of the SURF characteristics and descriptors of the SURF characteristics;
step five: calculating matched SURF characteristics of two adjacent sampling images based on the descriptors of the SURF characteristics of the images in the step four;
step six: matching SURF characteristics according to the internal reference matrix information of the acquisition camera, and calculating the relative rotation and position information of two frames of sampling images by using a five-point method;
step seven: and (4) fusing the position information of each frame obtained in the third step with the relative rotation and position information of the two frames of sampling images obtained in the sixth step, establishing an SOCP model, and solving a global optimum value by using an interior point method to obtain the position information of the visual fingerprint.
The invention has the beneficial effects that:
in order to overcome the defects of the traditional manual acquisition method and the automatic fingerprint acquisition algorithm, the invention adopts a new algorithm and improves the precision of the visual fingerprint database.
When the method is used for indoor automatic visual fingerprint acquisition, its accuracy is higher than that of the particle-filter-based automatic fingerprint acquisition method and its time consumption is lower than that of the traditional manual acquisition method. In the same positioning scene, its requirements on the acquisition equipment are also lower: the particle-filter-based automatic method needs a stereo camera, whereas the present method obtains the visual fingerprint position information closest to the true value by establishing an SOCP model and fusing the solutions of the motion equation and the visual equation.
According to the experimental results, in the same positioning scene the average time for visual fingerprint acquisition with the invention is only 11 minutes, while the traditional manual visual fingerprint acquisition method takes close to 5 hours. Positioning with the EPnP algorithm on the database generated by the invention gives an average error of 1.04 m, a minimum positioning error of 0.81 m and a maximum positioning error of 1.47 m. The confidence probabilities that the error of the visual fingerprint database generated by the invention is within 1.2 m and 1.4 m are close to 83.3% and 93.3% respectively, versus only 35% and 66.7% for the particle-filter-based automatic fingerprint acquisition method.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a graph of error curves for indoor positioning using a visual fingerprint database acquired according to the present invention.
Detailed Description
Embodiment one: the indoor automatic visual fingerprint acquisition method based on SOCP comprises the following specific process:
the method comprises the following steps: estimating the step frequency under the condition of minimum work cost by using a human walking motion model;
step two: estimating a traveling step set according to a Gaussian model based on the step frequency obtained in the step one;
step three: calculating the position information of each frame of image according to the interval time, the step length and the step frequency of two adjacent sampling images in the collected video;
step four: extracting SURF characteristics of the image, and saving the positions of the SURF characteristics and descriptors of the SURF characteristics;
step five: calculating matched SURF characteristics of two adjacent sampling images based on the descriptors of the SURF characteristics of the images in the step four;
step six: matching SURF characteristics according to the internal reference matrix information of the acquisition camera, and calculating the relative rotation and position information of two frames of sampling images by using a five-point method;
step seven: and (4) fusing the position information of each frame obtained in the third step with the relative rotation and position information of the two frames of sampling images obtained in the sixth step, establishing an SOCP model, and solving a global optimum value by using an interior point method to obtain the position information of the visual fingerprint.
Embodiment two: this embodiment differs from embodiment one in that: in step one, the step frequency is estimated under the condition of minimum work cost using a human walking motion model; the specific process is as follows:
Step one-one: calculate the total energy consumption in the human walking motion model:
when a person walks in an indoor environment, which is constituted by a flat traveling path, the person is more apt to walk in a most labor-saving mode in a normal state. In the walking process, the interference of other factors is not considered, people move at a constant speed, energy loss caused by the change of gravitational potential energy and kinetic energy is mainly considered in a motion model, as shown in a formula (1),
wherein W_t represents the total energy consumption, W_g represents the work required to overcome the change in gravitational potential energy, and the remaining term represents the work required due to the change in kinetic energy;
Step one-two: solve for the work W_g required to overcome the change in gravitational potential energy and the work required to overcome the change in kinetic energy:
The geometric relationship between the legs, the ground and the vertical direction when a person walks is given by formula (2),
wherein l is the leg length of the person, θ represents the angle between the stepping leg and the vertical direction, h is the change in the height of the center of gravity during travel, and the remaining quantity in formula (2) is the step length;
the work W required to overcome the change of the gravitational potential energy is calculated by the formula (2)g
Wherein M is the mass of the person, g is the acceleration of gravity, and v is the speed of the person's movement;
the work required to overcome the kinetic energy change is calculated from equation (4)
Wherein M is the mass of a human,representing the step length, v is the speed of the human movement;
Step one-three: solve for the step frequency:
the step frequency f is shown in equation (5):
whereinRepresenting the step length, v is the speed of the human movement;
according to formulas (2) to (5), formula (1) can be rewritten as formula (6):
equation (6) derives f bytWhere/df is 0, formula (7) can be obtained:
wherein g is the acceleration of gravity and l is the leg length of the person.
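As a worked illustration of step one, the sketch below numerically searches for the step frequency that minimizes the total work per unit time. Because formulas (1)-(7) are not reproduced in the text above, the function total_work_per_second uses assumed placeholder energy terms (a simple gravity-bobbing term and an assumed kinetic limb-swing term), and the mass and speed values are illustrative; only the minimization procedure itself is the point of the example.

```python
from scipy.optimize import minimize_scalar

M = 60.0   # assumed body mass (kg)
g = 9.8    # acceleration of gravity (m/s^2), value per embodiment eight
l = 1.0    # leg length (m), value per embodiment eight
v = 1.2    # assumed walking speed (m/s)

def total_work_per_second(f):
    """Placeholder for W_t(f); stands in for formulas (1)-(6), which are not reproduced here."""
    s = v / f                                  # step length from f = v / step length (formula (5))
    h = s ** 2 / (8.0 * l)                     # assumed rise of the centre of gravity per step
    work_gravity = M * g * h * f               # assumed work against gravity per unit time
    work_kinetic = 0.1 * M * l ** 2 * f ** 3   # assumed kinetic (limb-swing) cost per unit time
    return work_gravity + work_kinetic

res = minimize_scalar(total_work_per_second, bounds=(0.5, 3.0), method="bounded")
print("estimated energy-optimal step frequency (Hz):", res.x)
```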
Other steps and parameters are the same as those in the first embodiment.
Embodiment three: this embodiment differs from embodiment one or two in that: in step two, the traveling step set is estimated according to the Gaussian model; the specific process is as follows:
Step two-one: according to the duration T_v of the recorded video and the step frequency f, the total number of steps required for the traveling route is n = T_v × f;
Step two-two: generate a step-length data set obeying a Gaussian distribution according to the Gaussian model,
wherein the two parameters of the normal distribution N(·) are the mean and the standard deviation of the step length, respectively;
the step-length data set is then multiplied by the step frequency to obtain the set of lengths traveled per second.
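As an illustration of steps two-one and two-two, the following is a minimal numpy sketch, in which the video duration, the step frequency and the mean and standard deviation of the step length are assumed values chosen only for illustration:

```python
import numpy as np

T_v = 660          # assumed duration of the recorded video in seconds
f = 1.8            # step frequency obtained in step one (assumed value, steps per second)
step_mean = 0.65   # assumed mean of the step length (m)
step_std = 0.05    # assumed standard deviation of the step length (m)

n = int(T_v * f)                                    # total number of steps: n = T_v x f
rng = np.random.default_rng(0)
step_lengths = rng.normal(step_mean, step_std, n)   # step-length data set drawn from N(mean, std)
lengths_per_second = step_lengths * f               # set of lengths traveled per second
```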
Other steps and parameters are the same as those in the first or second embodiment.
Embodiment four: this embodiment differs from embodiments one to three in that: in step three, the position information of each frame of image is calculated according to the interval time between two adjacent sampled images in the collected video, the step length and the step frequency; the specific process is as follows:
and (3) establishing a model formula 9 by combining the step length data set obtained in the step two with the hand shaking, and generating the position information of each sampling frame:
the method comprises the following steps that a handheld mobile phone collects visual fingerprints (the handheld mobile phone collects images and positions the images to obtain image visual fingerprints), in the process of collecting the visual fingerprints, due to shaking of hands, a camera can shake three-dimensionally relative to the position of a human body, and shaking of each dimension obeys Gaussian distribution to obtain a formula (9):
wherein h_x, h_y, h_z are respectively the deviations (three-dimensional distances) of the hand relative to the human body on the x-axis, y-axis and z-axis (world coordinate system), and δ_h is the standard deviation of the shake;
the position of each frame of image can be calculated by equation (10):
wherein C isx,Cy,CzCoordinates of x-axis, y-axis, z-axis (world coordinate system), delta, respectively, for each frame imagetThe interval time between two adjacent sampling frames is,t is more than or equal to 1 and less than or equal to TvThe length of travel of the t-1 second,is tth secondThe time at which a sample frame is located, δtIs the time interval between two adjacent sample frames.
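The exact form of formulas (9) and (10) is not reproduced in the text above, so the following is only a sketch of the idea of step three under simplifying assumptions: the route is taken as a straight line along the x-axis, the per-second travel lengths are accumulated up to each frame's sampling time, and an independent zero-mean Gaussian hand-shake offset with standard deviation δ_h is added on each axis; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lengths_per_second = rng.normal(1.2, 0.1, 60)  # placeholder per-second travel lengths from step two (m)
frame_interval = 0.5                           # assumed sampling interval delta_t between frames (s)
delta_h = 0.02                                 # assumed hand-shake standard deviation delta_h (m)

T_v = len(lengths_per_second)
frame_times = np.arange(0.0, T_v, frame_interval)

positions = []
for t in frame_times:
    sec = int(t)
    # distance covered in the completed seconds plus the fraction of the current second
    travelled = lengths_per_second[:sec].sum() + lengths_per_second[sec] * (t - sec)
    shake = rng.normal(0.0, delta_h, size=3)                      # (h_x, h_y, h_z), formula (9)
    positions.append([travelled + shake[0], shake[1], shake[2]])  # (C_x, C_y, C_z) of the frame
positions = np.asarray(positions)
```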
Other steps and parameters are the same as those in one of the first to third embodiments.
Embodiment five: this embodiment differs from embodiments one to four in that: in step five, the matched SURF features of two adjacent sampled frames are calculated; the specific process is as follows:
after the feature point description is finished, feature point matching is carried out, Euclidean distances are respectively calculated for one feature point in the current frame image and all feature points in the next frame image, and the Euclidean distance E of the nearest neighbor feature point is selected from the Euclidean distancesmin1Euclidean distance E of sub-nearest neighbor feature pointsmin2(minimum Euclidean distance E is selected)min1And a second small Euclidean distance Emin2) Calculating the ratio gamma of the two, regarding the feature points of which the ratio gamma is less than or equal to a threshold Thre as the feature points which are correctly matched, and otherwise, regarding the feature points which are incorrectly matched, connecting the correctly matched feature points to form feature point pairs; the feature point matching formula is shown in formula (11)
Other steps and parameters are the same as in one of the first to fourth embodiments.
Embodiment six: this embodiment differs from embodiments one to five in that: in step six, according to the intrinsic matrix information of the acquisition camera and the matched SURF features, the relative rotation and position information of the two frames of images is calculated using the five-point method; the specific process is as follows:
Step six-one: calculate the relative rotation matrix R and the translation vector t using the five-point method; the specific process is as follows:
according to the epipolar geometry theorem, a pair of feature point homogeneous coordinates q ″ (x ", y", 1) (pixel coordinate system) and q ″ ' (x ' ", y '", 1) (pixel coordinate system) which match arbitrarily can be obtained, and the following equation is satisfied
q″ᵀKᵀEKq‴ = 0 (12)
wherein x″ and y″ are the abscissa and ordinate of the feature point in the pixel coordinate system of one frame, and x‴ and y‴ are the abscissa and ordinate of the matched feature point in the pixel coordinate system of the other frame; K represents the inverse of the camera intrinsic matrix, E is the essential matrix containing the relative rotation and position information of the two frames of images, and T denotes transposition;
According to the properties of the essential matrix, E should also satisfy formulas (13) and (14):
Det(KᵀEK) = 0 (13)
Combining (12), (13) and (14) into a system of equations and solving it with the five-point method requires at least 5 pairs of matched points; tr() denotes the trace of a matrix;
thus, according to E ≡ t]×R, calculating a group of relative rotation matrixes R and translation vectors t by every 5 pairs of matching pairs;
Step six-two: calculate the probability of each group of R and t; the specific process is as follows:
the number of matching pairs is m, so the number of matching pair combinations using 5 pairs is mWhen the number is large, the corresponding operation cost is high, so that a sampling method is adopted to replace the traversal of all combinations, and a threshold value is setWhen in useWhen it is selectedA combination ofThen, select allA combination of two;
from equation (15), the probability p for each combination is calculateds
Wherein p iseE is the probability of one of the feature pairs in the combination appearing in all combinations, e is the label of one matching pair, and s is the label of a group of matching pairs;
Step six-three: calculate the weight of each group of R and t; the specific process is as follows:
after R and t are calculated, the reflection error epsilon of each group of matched pairs can be calculated according to the return band of the formula (12)rGenerally, the weight of the set of feature pairs should be greater when the reflection error is smaller, and thus there is we=1/εr. According to equation (16), the weight of each combination is calculated as wi
wherein w_e is the weight of a matched pair within the combination of matched pairs;
after all the combination weights are obtained, the final weight is obtained through normalization
Other steps and parameters are the same as those in one of the first to fifth embodiments.
Embodiment seven: this embodiment differs from embodiments one to six in that: in step seven, the position information of each frame obtained in step three is fused with the relative rotation and position information of the two sampled frames obtained in step six, an SOCP model is established, and a global optimum is solved using an interior point method to obtain the position information of the visual fingerprint; the specific process is as follows:
for each frame image, k pieces of position information can be obtained through the third step and the sixth step, and for 2k pieces of position information, the position information closest to the true value is found, and an SOCP model is established, wherein the process is as follows (17):
equation (17) is a typical SOCP problem, where the vectorRepresenting an optimization vector, i.e. the true 3D position coordinates of each frame of the image, vector coRepresenting the k sets of image 3D position coordinates obtained by step six, vector boRepresenting the 3D position coordinates, vectors, of the k sets of images obtained by step threeRepresenting the upper limit, vector, of the 3D position coordinates of the current frame imageRepresenting the lower limit, p, of the 3D position coordinates of the current frame imageEProbability vector, P, representing each set of data calculated in step threeVRepresenting the weight probability vector of each group of data calculated in the step six, represents an unknown variable; α, β are equalization parameters, and can be calculated by the following equation (18):
α+β=1 (18)
The global optimum is then solved using an interior point method to obtain the position information of the visual fingerprint.
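Formula (17) itself is not reproduced in the text, so the sketch below only shows one plausible way the fusion of step seven could be posed as an SOCP in CVXPY and handed to an interior-point solver: the per-frame position estimates from step three (b) and step six (c), weighted by p_E and P_V and balanced by α and β with α + β = 1, are fused into a single 3D position x subject to box bounds; all numerical values are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

k = 4
rng = np.random.default_rng(2)
b = rng.normal([3.0, 1.0, 1.5], 0.2, size=(k, 3))   # k position estimates from step three (assumed)
c = rng.normal([3.0, 1.0, 1.5], 0.3, size=(k, 3))   # k position estimates from step six (assumed)
p_E = np.full(k, 1.0 / k)                            # probability vector of the motion-model data
P_V = np.full(k, 1.0 / k)                            # weight probability vector of the vision data
alpha, beta = 0.5, 0.5                               # equalization parameters, alpha + beta = 1
lower = np.array([0.0, 0.0, 0.0])                    # assumed lower bound on the 3D coordinates
upper = np.array([10.0, 10.0, 3.0])                  # assumed upper bound on the 3D coordinates

x = cp.Variable(3)  # fused 3D position of the current frame
objective = cp.Minimize(
    alpha * sum(p_E[i] * cp.norm(x - b[i]) for i in range(k))
    + beta * sum(P_V[i] * cp.norm(x - c[i]) for i in range(k))
)
problem = cp.Problem(objective, [x >= lower, x <= upper])
problem.solve()  # CVXPY dispatches this SOCP to an interior-point solver (e.g. ECOS or Clarabel)
print("fused visual-fingerprint position:", x.value)
```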
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Embodiment eight: this embodiment differs from embodiments one to seven in that: in step one, g takes the value 9.8 m/s² and l takes the value 1 m.
Other steps and parameters are the same as those in one of the first to seventh embodiments.
Embodiment nine: this embodiment differs from embodiments one to eight in that: in step five, the threshold Thre takes the value 0.8.
Other steps and parameters are the same as those in embodiments one to eight.
Embodiment ten: this embodiment differs from embodiments one to nine in that: the threshold in step six-two takes the value 100.
Other steps and parameters are the same as those in one of the first to ninth embodiments.
The following examples were used to demonstrate the beneficial effects of the present invention:
the first embodiment is as follows:
1. and at the 3 layers of the student activity center of Harbin university, video acquisition equipment is used for acquiring videos of the positioning area.
2. A minimum-work-cost motion model is established and, combined with the interval time between two adjacent sampled frames in the video file, the motion equation is solved.
3. Features are extracted from the acquired video file using the SURF algorithm, and the relative rotation and displacement information of two adjacent sampled frames is calculated using the five-point method.
4. An SOCP model is established from the solutions of the visual equation and the motion equation, and the global optimum is solved using an interior point method.
5. The cumulative error probability curve of the visual fingerprint database generated by the algorithm of the present invention, evaluated against accurate visual positioning results, is shown in fig. 2.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (10)

1. An indoor automatic visual fingerprint acquisition method based on SOCP is characterized in that: the method comprises the following specific processes:
the method comprises the following steps: estimating the step frequency under the condition of minimum work cost by using a human walking motion model;
step two: estimating a traveling step set according to a Gaussian model based on the step frequency obtained in the step one;
step three: calculating the position information of each frame of image according to the interval time, the step length and the step frequency of two adjacent sampling images in the collected video;
step four: extracting SURF characteristics of the image, and saving the positions of the SURF characteristics and descriptors of the SURF characteristics;
step five: calculating matched SURF characteristics of two adjacent sampling images based on the descriptors of the SURF characteristics of the images in the step four;
step six: matching SURF characteristics according to the internal reference matrix information of the acquisition camera, and calculating the relative rotation and position information of two frames of sampling images by using a five-point method;
step seven: and (4) fusing the position information of each frame obtained in the third step with the relative rotation and position information of the two frames of sampling images obtained in the sixth step, establishing an SOCP model, and solving a global optimum value by using an interior point method to obtain the position information of the visual fingerprint.
2. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 1, characterized in that: in step one, the step frequency is estimated under the condition of minimum work cost using a human walking motion model;
the specific process is as follows:
Step one-one: calculate the total energy consumption in the human walking motion model:
as shown in the formula (1),
wherein W_t represents the total energy consumption, W_g represents the work required to overcome the change in gravitational potential energy, and the remaining term represents the work required due to the change in kinetic energy;
Step one-two: solve for the work W_g required to overcome the change in gravitational potential energy and the work required to overcome the change in kinetic energy:
The geometric relationship between the legs, the ground and the vertical direction when a person walks is given by formula (2),
wherein l is the leg length of the person, θ represents the angle between the stepping leg and the vertical direction, h is the change in the height of the center of gravity during travel, and the remaining quantity in formula (2) is the step length;
the work W required to overcome the change of the gravitational potential energy is calculated by the formula (2)g
Wherein M is the mass of the person, g is the acceleration of gravity, and v is the speed of the person's movement;
the work required to overcome the kinetic energy change is calculated from equation (4)
Wherein M is the mass of a human,representing the step length, v is the speed of the human movement;
Step one-three: solve for the step frequency:
the step frequency f is shown in equation (5):
whereinRepresenting the step length, v is the speed of the human movement;
according to the formulae (2) to (5), the formula (1) is rewritten to the formula (6):
equation (6) derives f byt(ii)/df ═ 0, to give formula (7):
wherein g is the acceleration of gravity and l is the leg length of the person.
3. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 2, characterized in that: in step two, the traveling step set is estimated according to the Gaussian model;
the specific process is as follows:
Step two-one: according to the duration T_v of the recorded video and the step frequency f, the total number of steps required for the traveling route is n = T_v × f;
Step two-two: generate a step-length data set obeying a Gaussian distribution according to the Gaussian model,
wherein the two parameters of the normal distribution N(·) are the mean and the standard deviation of the step length, respectively;
the step-length data set is then multiplied by the step frequency to obtain the set of lengths traveled per second.
4. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 3, characterized in that: in step three, the position information of each frame of image is calculated according to the interval time between two adjacent sampled images in the collected video, the step length and the step frequency;
the specific process is as follows:
and (3) establishing a model formula (9) by combining the step length data set obtained in the step two with the hand shaking, and generating the position information of each sampling frame:
the handheld camera collects visual fingerprints, due to shaking of hands, the camera can shake three-dimensionally relative to the position of a human body, and shaking of each dimension obeys Gaussian distribution to obtain a formula (9):
wherein h_x, h_y, h_z are respectively the deviations of the hand relative to the human body on the x-axis, y-axis and z-axis, and δ_h is the standard deviation of the shake;
the position of each frame of image is calculated by equation (10):
wherein C isx,Cy,CzCoordinates of x-axis, y-axis, z-axis, delta, respectively, for each frame imagetThe interval time between two adjacent sampling frames is,t is more than or equal to 1 and less than or equal to TvThe length of travel of the t-1 second,is tth secondThe time at which a sample frame is located, δtIs the time interval between two adjacent sample frames.
5. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 4, characterized in that: in step five, the matched SURF features of two adjacent sampled frames are calculated;
the process specifically comprises the following steps:
For one feature point in the current frame image, the Euclidean distances to all feature points in the next frame image are respectively calculated, and from these the Euclidean distance E_min1 of the nearest-neighbor feature point and the Euclidean distance E_min2 of the second-nearest-neighbor feature point are selected; the ratio γ of the two is calculated;
feature points for which the ratio γ is less than or equal to the threshold Thre are regarded as correctly matched, otherwise as incorrectly matched, and the correctly matched feature points are connected to form feature point pairs;
the feature point matching formula is shown in formula (11)
6. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 5, characterized in that: in step six, according to the intrinsic matrix information of the acquisition camera and the matched SURF features, the relative rotation and position information of the two frames of images is calculated using the five-point method;
the specific process is as follows:
Step six-one: calculate the relative rotation matrix R and the translation vector t using the five-point method; the specific process is as follows:
According to the epipolar geometry constraint, any pair of matched feature points with homogeneous coordinates q″ = (x″, y″, 1) and q‴ = (x‴, y‴, 1) satisfies the following equation
q″ᵀKᵀEKq‴ = 0 (12)
wherein x″ and y″ are the abscissa and ordinate of the feature point in the pixel coordinate system of one frame, and x‴ and y‴ are the abscissa and ordinate of the matched feature point in the pixel coordinate system of the other frame; K represents the inverse of the camera intrinsic matrix, E is the essential matrix containing the relative rotation and position information of the two frames of images, and T denotes transposition;
According to the properties of the essential matrix, E should also satisfy formulas (13) and (14):
Det(KᵀEK) = 0 (13)
Combining (12), (13) and (14) into a system of equations and solving it with the five-point method requires at least 5 pairs of matched points; tr() denotes the trace of a matrix;
thus, according to E ≡ t]×R, calculating a group of relative rotation matrixes R and translation vectors t by every 5 pairs of matching pairs;
Step six-two: calculate the probability of each group of R and t; the specific process is as follows:
the number of matching pairs is m, so the number of matching pair combinations using 5 pairs is mSetting a threshold valueWhen in useWhen it is selectedA combination ofThen, select allA combination of two;
from equation (15), the probability p for each combination is calculateds
Wherein p iseE is the probability of one of the feature pairs in the combination appearing in all combinations, e is the label of one matching pair, and s is the label of a group of matching pairs;
Step six-three: calculate the weight of each group of R and t; the specific process is as follows:
according to equation (16), the weight of each combination is calculated as wi
wherein w_e is the weight of a matched pair within the combination of matched pairs;
After all the combination weights are obtained, the final weights are obtained through normalization.
7. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 6, characterized in that: in step seven, the position information of each frame obtained in step three is fused with the relative rotation and position information of the two sampled frames obtained in step six, an SOCP model is established, and a global optimum is solved using an interior point method to obtain the position information of the visual fingerprint;
the specific process is as follows:
For each frame of image, k pieces of position information are obtained through step three and step six respectively; for these 2k pieces of position information, the position information closest to the true value is found by establishing the SOCP model of formula (17):
wherein the optimization vector represents the true 3D position coordinates of each frame of image, the vector c_o represents the k groups of image 3D position coordinates obtained in step six, the vector b_o represents the k groups of image 3D position coordinates obtained in step three, two further vectors represent the upper and lower limits of the 3D position coordinates of the current frame image, p_E represents the probability vector of each group of data calculated in step three, P_V represents the weight probability vector of each group of data calculated in step six, an auxiliary unknown variable is introduced, and α and β are equalization parameters;
α, β are calculated from the following formula (18):
α+β=1 (18)
The global optimum is solved using an interior point method to obtain the position information of the visual fingerprint.
8. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 7, characterized in that: in step one, g takes the value 9.8 m/s² and l takes the value 1 m.
9. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 8, characterized in that: in step five, the threshold Thre takes the value 0.8.
10. The SOCP-based indoor automatic visual fingerprint acquisition method according to claim 9, characterized in that: the threshold in step six-two takes the value 100.
CN201910384564.2A 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP Expired - Fee Related CN110321902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910384564.2A CN110321902B (en) 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910384564.2A CN110321902B (en) 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP

Publications (2)

Publication Number Publication Date
CN110321902A true CN110321902A (en) 2019-10-11
CN110321902B CN110321902B (en) 2021-07-13

Family

ID=68118955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910384564.2A Expired - Fee Related CN110321902B (en) 2019-05-09 2019-05-09 Indoor automatic visual fingerprint acquisition method based on SOCP

Country Status (1)

Country Link
CN (1) CN110321902B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110967014A (en) * 2019-10-24 2020-04-07 国家电网有限公司 Method for indoor navigation and equipment tracking of machine room based on augmented reality technology
CN111198365A (en) * 2020-01-16 2020-05-26 东方红卫星移动通信有限公司 Indoor positioning method based on radio frequency signal
CN112905798A (en) * 2021-03-26 2021-06-04 深圳市阿丹能量信息技术有限公司 Indoor visual positioning method based on character identification

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226061A (en) * 2008-02-21 2008-07-23 上海交通大学 Method for locating walker
US20140126769A1 (en) * 2012-11-02 2014-05-08 Qualcomm Incorporated Fast initialization for monocular visual slam
CN104023228A (en) * 2014-06-12 2014-09-03 北京工业大学 Self-adaptive indoor vision positioning method based on global motion estimation
CN106162555A (en) * 2016-09-26 2016-11-23 湘潭大学 Indoor orientation method and system
CN106595653A (en) * 2016-12-08 2017-04-26 南京航空航天大学 Wearable autonomous navigation system for pedestrian and navigation method thereof
CN107103056A (en) * 2017-04-13 2017-08-29 哈尔滨工业大学 A kind of binocular vision indoor positioning database building method and localization method based on local identities
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
CN107367709A (en) * 2017-06-05 2017-11-21 宁波大学 Arrival time robust weighted least-squares localization method is based in hybird environment
CN109342993A (en) * 2018-09-11 2019-02-15 宁波大学 Wireless sensor network target localization method based on RSS-AoA hybrid measurement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FARHANG VEDADI et al.: "Automatic Visual Fingerprinting for Indoor Image-Based Localization Applications", 《IEEE TRANS. SYST., MAN》 *
刘潇潇: "基于二阶锥规划SOCP的RSS测距的定位方案" [A positioning scheme for RSS ranging based on second-order cone programming (SOCP)], 《办公自动化》 [Office Automation] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110967014A (en) * 2019-10-24 2020-04-07 国家电网有限公司 Method for indoor navigation and equipment tracking of machine room based on augmented reality technology
CN110967014B (en) * 2019-10-24 2023-10-31 国家电网有限公司 Machine room indoor navigation and equipment tracking method based on augmented reality technology
CN111198365A (en) * 2020-01-16 2020-05-26 东方红卫星移动通信有限公司 Indoor positioning method based on radio frequency signal
CN112905798A (en) * 2021-03-26 2021-06-04 深圳市阿丹能量信息技术有限公司 Indoor visual positioning method based on character identification
CN112905798B (en) * 2021-03-26 2023-03-10 深圳市阿丹能量信息技术有限公司 Indoor visual positioning method based on character identification

Also Published As

Publication number Publication date
CN110321902B (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN111739063B (en) Positioning method of power inspection robot based on multi-sensor fusion
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN107945220B (en) Binocular vision-based reconstruction method
CN110321902B (en) Indoor automatic visual fingerprint acquisition method based on SOCP
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109648558B (en) Robot curved surface motion positioning method and motion positioning system thereof
CN107590827A (en) A kind of indoor mobile robot vision SLAM methods based on Kinect
CN107680133A (en) A kind of mobile robot visual SLAM methods based on improvement closed loop detection algorithm
CN107193279A (en) Robot localization and map structuring system based on monocular vision and IMU information
CN106887037B (en) indoor three-dimensional reconstruction method based on GPU and depth camera
CN112233177A (en) Unmanned aerial vehicle pose estimation method and system
CN103791902B (en) It is applicable to the star sensor autonomous navigation method of high motor-driven carrier
CN110675453B (en) Self-positioning method for moving target in known scene
CN111156997A (en) Vision/inertia combined navigation method based on camera internal parameter online calibration
CN110992487B (en) Rapid three-dimensional map reconstruction device and reconstruction method for hand-held airplane fuel tank
CN103150728A (en) Vision positioning method in dynamic environment
CN108107462A (en) The traffic sign bar gesture monitoring device and method that RTK is combined with high speed camera
Zhang et al. Vision-aided localization for ground robots
CN112833892B (en) Semantic mapping method based on track alignment
CN108519102A (en) A kind of binocular vision speedometer calculation method based on reprojection
CN111307146A (en) Virtual reality wears display device positioning system based on binocular camera and IMU
Huai et al. Stereo-inertial odometry using nonlinear optimization
CN116429116A (en) Robot positioning method and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210713

CF01 Termination of patent right due to non-payment of annual fee