CN104484881B - Image capture-based Visual Map database construction method and indoor positioning method using database - Google Patents


Info

Publication number
CN104484881B
CN104484881B (application CN201410810776.XA; published as CN104484881A)
Authority
CN
China
Prior art keywords
image
matrix
point
prime
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410810776.XA
Other languages
Chinese (zh)
Other versions
CN104484881A (en)
Inventor
谭学治
关凯
马琳
郭士增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Technology Robot Group Co., Ltd.
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201410810776.XA priority Critical patent/CN104484881B/en
Publication of CN104484881A publication Critical patent/CN104484881A/en
Application granted granted Critical
Publication of CN104484881B publication Critical patent/CN104484881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for constructing a Visual Map database based on image capture, and an indoor positioning method that uses the database. It relates to the field of indoor positioning and navigation, and aims to solve the problem of low WLAN positioning accuracy caused by factors such as doors opening and closing, people walking about, and walls blocking the signal. The indoor positioning method comprises the following steps: build an offline Visual Map; extract feature points from a photo taken by the user's smartphone and match them against the feature points of the images stored on the fingerprint of each position in the Visual Map; then estimate the user's position precisely using epipolar geometry. The database construction method and the indoor positioning method using the database are suitable for indoor positioning applications.

Description

A Visual Map database construction method based on image capture, and an indoor positioning method using the database
Technical field
The present invention relates to the field of indoor positioning and navigation.
Background technology
With the popularization of smartphones, location-based services are receiving growing attention. Existing satellite positioning systems solve only the outdoor positioning problem: in indoor environments, satellite signals are severely attenuated, which causes a heavy loss of positioning accuracy. In recent years, indoor positioning methods based on WLAN have been widely studied because they are convenient to deploy, but current research shows that their positioning accuracy is strongly affected by factors such as the density of people in a room and blocking by walls. Research on vision-based positioning methods originated in the field of robot localization; with the recent development of camera-equipped smartphones and the growth of their computing power, vision-based indoor positioning has attracted wide attention because it requires simple equipment, needs no hardware beyond the camera-equipped smartphone the user already carries, and is affected by relatively few environmental factors.
The content of the invention
The present invention solves the problem of low WLAN positioning accuracy caused indoors by the multiple factors affecting the radio signal, such as the opening and closing of doors, people walking about, and blocking by walls. To this end it provides a Visual Map database construction method based on image capture, and an indoor positioning method using the database.
The Visual Map database construction method based on image capture is realized by the following steps:
Step 1: according to the indoor environment to be positioned, select a coordinate origin P0(X0, Y0) and establish a two-dimensional rectangular coordinate system.
Step 2: starting from the origin, set up a reference point every 1 m along the x-axis and every 0.5 m along the y-axis. At each reference point, collect 4 images at the angles 0°, 90°, 180°, 270°, or at the evenly spaced angles 45°, 135°, 225°, 315°, and calculate the rotation matrix Rr of the camera when each image is collected at the reference point.
Step 3: use the 128-dimensional SURF algorithm to extract feature points from the images collected at the 4 angles of each reference point, associate them with the reference-point coordinates obtained in step 1 and the rotation matrix Rr obtained in step 2, and store them, completing the construction of the offline Visual Map database.
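As a minimal sketch of the per-angle rotation matrix Rr of step 2, assuming a pure yaw rotation of the camera about the vertical axis (an assumption; the patent does not specify the rotation parameterization):

```python
import numpy as np

def yaw_rotation(deg: float) -> np.ndarray:
    """3x3 rotation about the vertical (z) axis for a camera turned by `deg` degrees."""
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# The first capture-angle set of step 2: one rotation matrix per collected image.
rotations = {deg: yaw_rotation(deg) for deg in (0, 90, 180, 270)}
```

The second angle set (45°, 135°, 225°, 315°) is generated the same way.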
The indoor positioning method using the above Visual Map database is realized by the following steps:
Step A: in the online stage, extract feature-point information from the image provided by the user requesting the positioning service, using the 128-dimensional SURF algorithm.
Step B: apply the KNN algorithm to the feature points of the image, retrieve from the Visual Map database the feature points of the k images that best match the provided feature points, and record the position coordinates P(X, Y) and rotation matrix Rr corresponding to each of the k matches.
Step C: use the RANSAC algorithm to reject the mismatched points between each matching image and the image provided by the user.
Step D: using the 8 pairs of matched points filtered out of each pair of matching images, perform precise positioning with epipolar geometry, completing the indoor positioning.
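Step B's coarse retrieval can be sketched as a brute-force nearest-neighbour search over 128-dimensional SURF descriptors (pure NumPy; the flat database layout below is an illustrative assumption, not specified by the patent):

```python
import numpy as np

def knn_match(query: np.ndarray, db: np.ndarray, k: int = 2) -> np.ndarray:
    """For each 128-d query descriptor, return indices of its k nearest db descriptors."""
    # Squared Euclidean distance between every query/db descriptor pair.
    d2 = ((query[:, None, :] - db[None, :, :]) ** 2).sum(axis=2)
    return np.argsort(d2, axis=1)[:, :k]

# Toy example: 3 query descriptors against a 5-descriptor database.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 128))
query = db[[4, 1, 2]] + 0.01 * rng.normal(size=(3, 128))
nearest = knn_match(query, db, k=1)[:, 0]
```

In practice the votes of many query descriptors would be aggregated per database image to pick the k best-matching images.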
In step C, the method of using the RANSAC algorithm to reject the mismatched points between a matching image and the image provided by the user comprises the following steps:
Step C1: select 4 pairs of matched points from the matches and calculate the homography matrix H.
Step C2: using the homography matrix H obtained in step C1, calculate the theoretical coordinates of the remaining matched points in the corresponding image.
Step C3: add 1 to the value of i, and calculate the Euclidean distance between the theoretical and actual coordinates of the i-th matched point; the initial value of i is 0.
Step C4: judge whether the Euclidean distance obtained in step C3 is less than the preset Euclidean distance threshold t; if yes, execute step C5; if no, return to step C3.
Step C5: add 1 to the value of n, and judge whether the value of n exceeds the preset number of iterations n0; if yes, terminate; if no, return to step C1.
Here n denotes the iteration count, with initial value 0.
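A compact sketch of the homography-based RANSAC rejection of steps C1 to C5 (the 4-point DLT estimator and inlier bookkeeping follow the standard algorithm; the loop structure and threshold handling are illustrative, not the patent's exact flow):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_inliers(src, dst, t=1.0, n0=200, seed=0):
    """Boolean inlier mask: keep matches whose reprojection error is below t."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(n0):                               # n0 preset iterations
        idx = rng.choice(len(src), 4, replace=False)  # step C1: 4 random pairs
        H = homography_dlt(src[idx], dst[idx])
        proj = (H @ np.c_[src, np.ones(len(src))].T).T  # step C2: theoretical coords
        proj = proj[:, :2] / proj[:, 2:3]
        mask = np.linalg.norm(proj - dst, axis=1) < t   # steps C3/C4: distance test
        if mask.sum() > best.sum():
            best = mask
    return best
```

Matched points outside the mask are the rejected mismatches.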
The method of precise positioning using epipolar geometry in step D comprises the following steps:
Step D1: compute the fundamental matrix F using the 8-point method. The fundamental matrix F is the mathematical description of the constraint relation between two images; the basic relation satisfied by F in epipolar geometry is:
X'^T F X = 0    (1)
where X and X' denote the coordinates, in their respective pixel coordinate systems, of a pair of matched points in the two matching images. Substitute the 8 pairs of matched points Xi(ui, vi, 1), Xi'(ui', vi', 1), i = 1, 2, ..., 8 filtered out of each pair of matching images in step C into formula (2), writing the fundamental matrix as F = {f'ij} (i = 1, 2, 3; j = 1, 2, 3):

[ u1'u1  u1'v1  u1'  v1'u1  v1'v1  v1'  u1  v1  1 ]
[  ...    ...   ...   ...    ...   ...  ...  ... ... ] f' = 0    (2)
[ u8'u8  u8'v8  u8'  v8'u8  v8'v8  v8'  u8  v8  1 ]

The fundamental matrix F is calculated by solving this system of linear equations, where f' = (f'11, f'12, f'13, f'21, f'22, f'23, f'31, f'32, f'33)^T.
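Step D1's linear solve can be sketched with the classical 8-point algorithm (a standard stand-in; for real, noisy pixel coordinates one would normally add Hartley coordinate normalization, which the patent does not mention):

```python
import numpy as np

def fundamental_8point(x1, x2):
    """Estimate F from 8+ matched point pairs x1 <-> x2 (shape (N, 2)),
    solving the linear system of formula (2) and enforcing rank 2."""
    u, v = x1[:, 0], x1[:, 1]
    up, vp = x2[:, 0], x2[:, 1]
    # One row per match: [u'u, u'v, u', v'u, v'v, v', u, v, 1] . f' = 0
    A = np.stack([up * u, up * v, up, vp * u, vp * v, vp, u, v, np.ones_like(u)], axis=1)
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)          # null-space direction of A
    # A fundamental matrix has rank 2: zero out the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

With exact correspondences the recovered F satisfies x2^T F x1 = 0 up to numerical precision.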
Step D2: calculate the internal parameter matrix K1 of the camera used to build the Visual Map database, and the internal parameter matrix K2 of the camera of the user to be positioned. A camera internal parameter matrix K is given by formula (3):

K = [ ku*f   ku*cot(θ)     u0 ]
    [ 0      kv*f/sin(θ)   v0 ]
    [ 0      0             1  ]    (3)

where f is the camera focal length, ku and kv represent the pixel size of the camera, u0 and v0 represent the size of the image, i.e. the number of pixels along the u-axis and v-axis of the image coordinate system, and θ is the angle between the u-axis and v-axis of the image coordinate system.
The essential matrix E is then obtained from formula (4):
E = K2^T F K1    (4)
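The construction of K (formula (3)) and E (formula (4)) can be sketched as follows. Note the element placement in the first row follows formula (3) as reconstructed from the garbled published text, so treat it as an assumption; for square, rectangular pixels θ = 90° and the cot term vanishes:

```python
import numpy as np

def intrinsic_matrix(f, ku, kv, u0, v0, theta=np.pi / 2):
    """Camera internal parameter matrix of formula (3); theta is the u/v axis angle."""
    return np.array([
        [ku * f, ku / np.tan(theta),     u0],
        [0.0,    kv * f / np.sin(theta), v0],
        [0.0,    0.0,                    1.0],
    ])

def essential_from_fundamental(F, K1, K2):
    """Formula (4): E = K2^T F K1."""
    return K2.T @ F @ K1
```

With θ = 90° this reduces to the familiar diagonal-plus-principal-point form.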
Step D3: the essential matrix E calculated in step D2 contains the rotation matrix R and the translation vector t between the position where the user took the picture and the position of the matching image in the database, as shown in formula (5),
E = [t]×R    (5)
where [·]× denotes the skew-symmetric (antisymmetric) matrix of a vector, as shown in formula (6),

[x1]      [  0   -x3   x2 ]
[x2]   =  [  x3   0   -x1 ]    (6)
[x3]×     [ -x2   x1   0  ]

The rotation matrix R and translation vector t are then obtained from the essential matrix E.
Step D4: the translation vector t obtained in step D3 is expressed with the user's picture-taking position as the reference frame; this step transforms it into the coordinate system of step A, as shown in formula (7),
tw = -Rr^-1 R^-1 t    (7)
where tw denotes the direction vector between the user's picture-taking position and the matching image position in the defined world coordinate system, and Rr^-1 is the inverse of the rotation matrix of the camera when the matching image was collected.
Step D5: with the direction vector of the two images in the world coordinate system known, together with the two-dimensional position coordinates (Xd, Yd) of the matching image, a straight line through the matching image is determined, as shown in formula (8),

y = (tw(2) / tw(1)) * (x - Xd) + Yd    (8)

where tw is the translation vector obtained in step D4, a 3×1 column vector; tw(2) denotes the element in the second row of tw, and likewise tw(1) denotes the element in the first row.
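Formulas (7) and (8) can be sketched together: transform t into the world frame, then build the bearing line through the matching image's position (a minimal sketch; the identity rotations in the example are illustrative):

```python
import numpy as np

def world_direction(Rr, R, t):
    """Formula (7): t_w = -Rr^-1 R^-1 t."""
    return -np.linalg.inv(Rr) @ np.linalg.inv(R) @ t

def bearing_line(tw, Xd, Yd):
    """Formula (8): return y(x) for the line through (Xd, Yd) with slope tw(2)/tw(1)."""
    slope = tw[1] / tw[0]   # tw(2) / tw(1) in the patent's 1-based indexing
    return lambda x: slope * (x - Xd) + Yd

# Example: identity rotations, so tw = -t.
t = np.array([1.0, 2.0, 0.0])
tw = world_direction(np.eye(3), np.eye(3), t)
line = bearing_line(tw, Xd=3.0, Yd=4.0)
```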
Step D6: the k images obtained by coarse matching, each paired with the image uploaded by the user, yield 4 straight lines. The 4 lines have several intersection points; the optimal point, i.e. the positioning result, is determined using formula (9):

min over (x, y) of { Nj * dij(xi, yi) }    (9)

where dij(xi, yi) denotes the distance from the i-th intersection point (i = 1, 2, ..., 6) to the j-th straight line (j = 1, 2, 3, 4), and Nj denotes the number of matched points between the image corresponding to the j-th line and the image provided by the user; dij(xi, yi) is expressed by formula (10):

dij(xi, yi) = |ai*xi + bi*yi + ci| / sqrt(ai^2 + bi^2)    (10)

where ai = t(2), bi = -t(1), ci = -t(2)*xi + t(1)*yi, and xi and yi denote the position coordinates of the i-th intersection point.
The algorithm of step D3 for recovering the rotation matrix R and the translation vector t from the essential matrix E comprises the following steps:
Step D31: decompose the 3×3 essential matrix E into the form E = [ea eb ec], where ea, eb, ec are 3×1 column vectors. Compute the pairwise cross products ea×eb, ea×ec and eb×ec, and select the one with the largest magnitude; suppose the magnitude of ea×eb is the largest.
Step D32: according to formulas (11) and (12), calculate the matrices V = [va vb vc] and U = [ua ub uc].
Step D33: construct the matrix D as shown. From the matrices V and U, the translation vector t is obtained as shown in formula (14):
t = [u13 u23 u33]    (14)
where u13 denotes the element in row 1, column 3 of the matrix U, u23 the element in row 2, column 3, and u33 the element in row 3, column 3.
The rotation matrix R is given by formula (15); it can be seen that R has two possible values, Ra or Rb.
Step D34: construct the matrices Ha = [Ra|t], Hb = [Ra|-t], Hc = [Rb|t], Hd = [Rb|-t]. The construction of Ha = [Ra|t] is as follows: the 3×3 rotation matrix R and the 3×1 translation vector t are merged into the 4×4 matrix Ha, as shown in formula (16). Hb, Hc and Hd are constructed in the same way.
Step D35: let the vector P = [1, 1, 1, 1]^T, and calculate L1 = HaP, L2 = HbP, L3 = HcP, L4 = HdP. When Li (i = 1, 2, 3, 4) satisfies condition (17), take the R and t corresponding to Li as the final rotation matrix R and translation vector t:
where Li is a 4×1 column vector, and Li(3) and Li(4) denote the elements in row 3, column 1 and row 4, column 1 of Li respectively.
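Formulas (11) to (17) were published as images and are not recoverable from this text, but the standard SVD-based decomposition produces the same four candidate pairs (Ra, Rb) × (t, -t); the cheirality test of steps D34/D35 (keeping the candidate that places the scene in front of both cameras) then selects one of them. A sketch of the candidate generation, as a stand-in for the patent's exact construction:

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates encoded in an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    # Keep proper rotations (determinant +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    Ra, Rb = U @ W @ Vt, U @ W.T @ Vt   # the two rotation candidates
    t = U[:, 2]                          # translation direction (up to sign/scale)
    return [(Ra, t), (Ra, -t), (Rb, t), (Rb, -t)]
```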
The present invention builds, by image capture, a Visual Map that records the position of each reference point, the rotation matrix of the camera, and the feature-point information produced by the SURF algorithm. Feature points are extracted from the photo taken by the mobile phone of the user to be positioned and matched against the feature points of each image in the database; the few images with the highest matching degree are selected, together with their position information and rotation matrices; the RANSAC algorithm then rejects the mismatched points in the matching images; finally, epipolar geometry completes the estimation of the user's position. The present invention achieves higher positioning accuracy than WLAN positioning.
Description of the drawings
Fig. 1 is a schematic diagram of the image-capture-based Visual Map construction and indoor positioning process of the present invention.
Fig. 2 is a schematic diagram of the collection of fingerprint information and corresponding image information at the reference point positions.
Fig. 3 is a flow diagram of the RANSAC algorithm.
Specific embodiment
Embodiment 1: the Visual Map database construction method based on image capture comprises the following steps:
Step 1: according to the indoor environment to be positioned, select a suitable coordinate origin P0(X0, Y0) and establish a two-dimensional rectangular coordinate system.
Step 2: starting from the origin, set up a reference point every 1 m along the x-axis and every 0.5 m along the y-axis. At each reference point, collect 4 images, at the angles 0°, 90°, 180°, 270° or at the evenly spaced angles 45°, 135°, 225°, 315°, as shown in Fig. 2, and calculate the rotation matrix Rr of the camera when each image is collected at the reference point.
Step 3: use the 128-dimensional SURF algorithm to extract feature points from the images collected at the 4 angles of each reference point, and associate them with the reference-point coordinates obtained in step 1 and the rotation matrices obtained in step 2, finally completing the construction of the offline Visual Map.
Embodiment 2: the indoor positioning method using the Visual Map database of embodiment 1 is realized by the following steps:
Step A: in the online stage, the image provided by the user requesting the positioning service is processed at the mobile phone end with the 128-dimensional SURF algorithm to extract feature-point information, and the feature points are sent to the server.
Step B: apply the KNN algorithm to the feature points of the image; retrieve on the server the feature points of the k images that best match the provided feature points, and record the position coordinates P(X, Y) and rotation matrix Rr corresponding to each of the k matches.
Step C: use the RANSAC algorithm to reject the mismatched points between each matching image and the image provided by the user, improving the positioning accuracy. The algorithm flow is shown in Fig. 3, where t denotes the preset Euclidean distance threshold, n the iteration count with initial value 0, and n0 the preset number of iterations.
Step D: using the 8 pairs of matched points filtered out of each pair of matching images, perform precise positioning with epipolar geometry, and send the position information from the server to the user's mobile phone, completing the positioning.
The fine indoor positioning method based on epipolar geometry comprises the following steps:
Step D1: compute the fundamental matrix F using the 8-point method. F is the mathematical description of the constraint relation between two images; in epipolar geometry the basic relation satisfied by F is:
X'^T F X = 0    (1)
where X and X' denote the coordinates, in their respective pixel coordinate systems, of a pair of matched points in the two matching images. Substitute the 8 pairs of matched points Xi(ui, vi, 1), Xi'(ui', vi', 1), i = 1, 2, ..., 8 obtained in step C into formula (2), writing the fundamental matrix as F = {f'ij} (i = 1, 2, 3; j = 1, 2, 3),
where f' = (f'11, f'12, f'13, f'21, f'22, f'23, f'31, f'32, f'33)^T.
The fundamental matrix F is calculated directly by solving this system of linear equations.
Step D2: calculate the internal parameter matrix K1 of the camera used to build the Visual Map database, and the internal parameter matrix K2 of the camera of the user to be positioned. A camera internal parameter matrix K is given by formula (3),
where f is the camera focal length, ku and kv represent the pixel size of the camera, u0 and v0 represent the size of the image, i.e. the number of pixels along the u-axis and v-axis of the image coordinate system, and θ is the angle between the u-axis and v-axis of the image coordinate system. On this basis, the essential matrix E is obtained from formula (4):
E = K2^T F K1    (4)
Step D3: the essential matrix E calculated in step D2 contains the rotation matrix R and translation vector t between the user's picture-taking position and the position of the matching image in the database, as shown in formula (5):
E = [t]×R    (5)
where [·]× denotes the skew-symmetric (antisymmetric) matrix of a vector, as shown in formula (6).
Thus R and t can be obtained from the essential matrix E.
Step D4: the translation vector t obtained in step D3 is expressed with the user's picture-taking position as the reference frame; this step transforms it into the coordinate system of step A, as shown in formula (7):
tw = -Rr^-1 R^-1 t    (7)
where tw denotes the direction vector between the user's picture-taking position and the matching image position in the defined world coordinate system, and Rr^-1 is the inverse of the rotation matrix of the camera when the matching image was collected.
Step D5: with the direction vector of the two images in the world coordinate system known, together with the two-dimensional position coordinates (Xd, Yd) of one of the images (here, the matching image), a straight line through the matching image can be determined, as shown in formula (8),
where tw is the translation vector obtained in step D4, a 3×1 column vector; tw(2) denotes the element in its second row, and likewise tw(1) the element in its first row.
Step D6: each of the k coarsely matched images, paired with the image uploaded by the user, yields a straight line according to the above steps, giving 4 lines with several intersection points. The optimal point, i.e. the positioning result, is determined using formula (9),
where dij(xi, yi) denotes the distance from the i-th intersection point (i = 1, 2, ..., 6) to the j-th straight line (j = 1, 2, 3, 4), and Nj the number of matched points between the image corresponding to the j-th line and the image provided by the user; dij(xi, yi) is expressed by formula (10),
where ai = t(2), bi = -t(1), ci = -t(2)*xi + t(1)*yi, and xi and yi denote the position coordinates of the i-th intersection point.
The algorithm for recovering the rotation matrix R and the translation vector t from the essential matrix E comprises the following steps:
Step D31: decompose the 3×3 essential matrix E into the form E = [ea eb ec], where ea, eb, ec are 3×1 column vectors. Compute the pairwise cross products ea×eb, ea×ec and eb×ec, and select the one with the largest magnitude; for ease of describing the following step, assume the magnitude of ea×eb is the largest.
Step D32: according to formulas (11) and (12), calculate the matrices V = [va vb vc] and U = [ua ub uc].
Step D33: construct the matrix D as shown. From the matrices V and U, the translation vector t is obtained as shown in formula (14),
t = [u13 u23 u33]    (14)
where u13 denotes the element in row 1, column 3 of the matrix U, u23 the element in row 2, column 3, and u33 the element in row 3, column 3. The rotation matrix R is given by formula (15); it can be seen that R has two possible values, Ra or Rb.
Step D34: construct the matrices Ha = [Ra|t], Hb = [Ra|-t], Hc = [Rb|t], Hd = [Rb|-t]. The construction of Ha = [Ra|t] merges the 3×3 rotation matrix R and the 3×1 translation vector t into the 4×4 matrix Ha, as shown in formula (16). Hb, Hc and Hd are constructed in the same way.
Step D35: let the vector P = [1, 1, 1, 1]^T and calculate L1 = HaP, L2 = HbP, L3 = HcP, L4 = HdP. When Li (i = 1, 2, 3, 4) satisfies condition (17), take the R and t corresponding to Li as the final rotation matrix R and translation vector t,
where Li is a 4×1 column vector, and Li(3) and Li(4) denote the elements in row 3, column 1 and row 4, column 1 of Li respectively.
The present invention first builds an offline Visual Map, extracts the feature points of the photo taken by the user's smartphone, matches them against the feature points of the images on each position fingerprint in the Visual Map, and then estimates the user's position using epipolar geometry. The vision positioning system is thus divided into two stages: the offline stage in which the Visual Map is built, and the online position estimation stage.
The Visual Map built in the offline stage is shown in Table 1.
Table 1
RP1 (X1,Y1) Rr1 f11 f12 …… f1m
RP2 (X2,Y2) Rr2 f21 f22 …… f2m
…… …… …… …… …… …… ……
RPn (Xn,Yn) Rrn fn1 fn2 …… fnm
where Rri (i = 1, 2, ..., n) denotes the rotation matrix of the camera when the i-th picture was collected, and fij (i = 1, 2, ..., n; j = 1, 2, ..., m) denotes the j-th feature vector obtained by applying the SURF algorithm to the i-th picture.
The Visual Map is made up of reference points (Reference Point, abbreviated RP). Each reference point contains three parts of information: the physical position of the reference point, the rotation matrices of the several photos shot at different angles at that position, and the feature vectors representing the feature-point information of each photo. Here n is the product of the number of reference points and the number of shooting angles per reference point, and m is the number of feature points extracted from each image with the SURF algorithm. The SURF algorithm is used for feature-point extraction because it is invariant to rotation, scaling and brightness changes, and also maintains a degree of stability under viewpoint changes, affine transformations and noise.
In the online stage, the SURF algorithm extracts the feature points of the photo taken by the user's smartphone; these are matched against the feature points of each image in the database, and the few images with the highest matching degree are selected together with their position information and rotation matrices. The RANSAC (Random Sample Consensus) algorithm then rejects the mismatched points in the matching images, and finally epipolar geometry completes the estimation of the user's position.
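The record layout of Table 1 might be modelled as follows (a sketch; the field and class names are illustrative, not from the patent):

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class ReferencePointEntry:
    """One row of Table 1: a reference point's fingerprint in the Visual Map."""
    position: Tuple[float, float]   # physical coordinates (X_i, Y_i)
    rotation: np.ndarray            # camera rotation matrix R_ri for this capture
    descriptors: List[np.ndarray] = field(default_factory=list)  # m SURF vectors f_i1..f_im

# A two-entry toy Visual Map following the 1 m / 0.5 m reference-point grid.
visual_map: List[ReferencePointEntry] = [
    ReferencePointEntry((0.0, 0.0), np.eye(3), [np.zeros(128)]),
    ReferencePointEntry((1.0, 0.5), np.eye(3), [np.zeros(128)]),
]
```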

Claims (4)

1. A Visual Map database construction method based on image capture, characterized in that it is realized by the following steps:
Step 1: according to the indoor environment to be positioned, select a coordinate origin P0(X0, Y0) and establish a two-dimensional rectangular coordinate system;
Step 2: starting from the origin, set up a reference point every 1 m along the x-axis and every 0.5 m along the y-axis; at each reference point, collect 4 images at the angles 0°, 90°, 180°, 270°, or at the evenly spaced angles 45°, 135°, 225°, 315°, and calculate the rotation matrix Rr of the camera when each image is collected at the reference point;
Step 3: use the 128-dimensional SURF algorithm to extract feature points from the images collected at the 4 angles of each reference point, associate them with the reference-point coordinates obtained in step 1 and the rotation matrix Rr obtained in step 2, and store them, completing the construction of the offline Visual Map database.
2. An indoor positioning method using the Visual Map database of claim 1, characterized in that it is realized by the following steps:
Step A: in the online stage, extract feature-point information from the image provided by the user requesting the positioning service, using the 128-dimensional SURF algorithm;
Step B: apply the KNN algorithm to the feature points of the image, retrieve from the Visual Map database the feature points of the k images that best match the provided feature points, and record the position coordinates P(X, Y) and rotation matrix Rr corresponding to each of the k matches;
Step C: use the RANSAC algorithm to reject the mismatched points between each matching image and the image provided by the user;
Step D: using the 8 pairs of matched points filtered out of each pair of matching images, perform precise positioning with epipolar geometry, completing the indoor positioning.
3. The indoor positioning method using the Visual Map database according to claim 2, characterized in that in step C, the method of using the RANSAC algorithm to reject the mismatched points between a matching image and the image provided by the user comprises the following steps:
Step C1: select 4 pairs of matched points from the matches and calculate the homography matrix H;
Step C2: using the homography matrix H obtained in step C1, calculate the theoretical coordinates of the remaining matched points in the corresponding image;
Step C3: add 1 to the value of i, and calculate the Euclidean distance between the theoretical and actual coordinates of the i-th matched point; the initial value of i is 0;
Step C4: judge whether the Euclidean distance obtained in step C3 is less than the preset Euclidean distance threshold t; if yes, execute step C5; if no, return to step C3;
Step C5: add 1 to the value of n, and judge whether the value of n exceeds the preset number of iterations n0; if yes, terminate; if no, return to step C1;
where n denotes the iteration count, with initial value 0.
4. The indoor positioning method using the Visual Map database according to claim 3, characterized in that the method of precise positioning using epipolar geometry in step D comprises the following steps:
Step D1: compute the fundamental matrix F using the 8-point method; the fundamental matrix F is the mathematical description of the constraint relation between two images, and the basic relation satisfied by F in epipolar geometry is:
X'^T F X = 0    (1)
where X and X' denote the coordinates, in their respective pixel coordinate systems, of a pair of matched points in the two matching images; substitute the 8 pairs of matched points Xi(ui, vi, 1), Xi'(ui', vi', 1), i = 1, 2, ..., 8 filtered out of each pair of matching images in step C into formula (2), writing the fundamental matrix as F = {f'ij} (i = 1, 2, 3; j = 1, 2, 3), as shown in formula (2):

[ u1'u1  u1'v1  u1'  v1'u1  v1'v1  v1'  u1  v1  1 ]
[  ...    ...   ...   ...    ...   ...  ...  ... ... ] f' = 0    (2)
[ u8'u8  u8'v8  u8'  v8'u8  v8'v8  v8'  u8  v8  1 ]

The fundamental matrix F is calculated by solving this system of linear equations;
where f' = (f'11, f'12, f'13, f'21, f'22, f'23, f'31, f'32, f'33)^T;
Step D2: calculate the internal parameter matrix K1 of the camera used to build the Visual Map database, and the internal parameter matrix K2 of the camera of the user to be positioned; a camera internal parameter matrix K is given by formula (3):

K = [ ku*f   ku*cot(θ)     u0 ]
    [ 0      kv*f/sin(θ)   v0 ]
    [ 0      0             1  ]    (3)

where f is the camera focal length, ku and kv represent the pixel size of the camera, u0 and v0 represent the size of the image, i.e. the number of pixels along the u-axis and v-axis of the image coordinate system, and θ is the angle between the u-axis and v-axis of the image coordinate system;
The essential matrix E is obtained according to formula (4):
E = K2^T F K1    (4)
Step D3: the essential matrix E calculated in step D2 contains the rotation matrix R and the translation vector t between the user's picture-taking position and the position of the matching image in the database, as shown in formula (5),
E = [t]×R    (5)
where [·]× denotes the skew-symmetric (antisymmetric) matrix of a vector, as shown in formula (6),

[x1]      [  0   -x3   x2 ]
[x2]   =  [  x3   0   -x1 ]    (6)
[x3]×     [ -x2   x1   0  ]

Then the rotation matrix R and translation vector t are obtained from the essential matrix E;
Step D4: the translation vector t obtained in step D3 is expressed with the user's picture-taking position as the reference frame; this step transforms it into the coordinate system of step A, as shown in formula (7),
tw = -Rr^-1 R^-1 t    (7)
where tw denotes the direction vector between the user's picture-taking position and the matching image position in the defined world coordinate system, and Rr^-1 is the inverse of the rotation matrix of the camera when the matching image was collected;
Step D5: with the direction vector of the two images in the world coordinate system known, together with the two-dimensional position coordinates (Xd, Yd) of the matching image, determine a straight line through the matching image, as shown in formula (8),

y = (tw(2) / tw(1)) * (x - Xd) + Yd    (8)

where tw is the translation vector obtained in step D4, a 3×1 column vector; tw(2) denotes the element in the second row of tw, and likewise tw(1) denotes the element in the first row;
Step D6: pairing each of the k images obtained by coarse matching with the image uploaded by the user yields 4 straight lines. These 4 lines have several intersection points; the optimal point, i.e. the positioning result, is determined using formula (9):
$$\min_{(x_i, y_i)} \sum_{j} N_j \, d_{ij}(x_i, y_i) \quad (9)$$
where d_ij(x_i, y_i) denotes the distance from the i-th (i = 1, 2, ..., 6) intersection point to the j-th (j = 1, 2, 3, 4) straight line, and N_j denotes the number of matched feature points between the image corresponding to the j-th line and the image provided by the user. d_ij(x_i, y_i) is expressed by formula (10):
$$d_{ij}(x_i, y_i) = \frac{|a_j x_i + b_j y_i + c_j|}{\sqrt{a_j^2 + b_j^2}} \quad (10)$$
where, for the j-th line, a_j = t_w(2), b_j = -t_w(1), and c_j = -t_w(2)X_d + t_w(1)Y_d, and x_i and y_i denote the position coordinates of the i-th intersection point.
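Step D6 can be sketched as follows: intersect every pair of the 4 lines (at most C(4,2) = 6 points), score each intersection by the N_j-weighted sum of its distances to all lines per formulas (9)–(10), and keep the best-scoring point. This is our own minimal reading of the step, with illustrative names; the patent itself gives only the formulas.

```python
import math
from itertools import combinations

def point_line_distance(x, y, line):
    """Formula (10): distance from (x, y) to the line a*x + b*y + c = 0."""
    a, b, c = line
    return abs(a * x + b * y + c) / math.hypot(a, b)

def locate(lines, weights):
    """Formula (9): pick the intersection minimizing the weighted distance sum.

    lines   -- list of (a, b, c) implicit line coefficients
    weights -- match counts N_j, one per line
    """
    best, best_score = None, float("inf")
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:            # parallel pair: no intersection
            continue
        x = (b1 * c2 - b2 * c1) / det   # Cramer's rule for the 2x2 system
        y = (a2 * c1 - a1 * c2) / det
        score = sum(w * point_line_distance(x, y, ln)
                    for w, ln in zip(weights, lines))
        if score < best_score:
            best, best_score = (x, y), score
    return best
```

Weighting by N_j means intersections are judged mainly by their distance to the lines backed by the strongest feature matches, so a line from a poorly matched image has little influence on the final position.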
CN201410810776.XA 2014-12-23 2014-12-23 Image capture-based Visual Map database construction method and indoor positioning method using database Active CN104484881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410810776.XA CN104484881B (en) 2014-12-23 2014-12-23 Image capture-based Visual Map database construction method and indoor positioning method using database

Publications (2)

Publication Number Publication Date
CN104484881A CN104484881A (en) 2015-04-01
CN104484881B true CN104484881B (en) 2017-05-10

Family

ID=52759421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410810776.XA Active CN104484881B (en) 2014-12-23 2014-12-23 Image capture-based Visual Map database construction method and indoor positioning method using database

Country Status (1)

Country Link
CN (1) CN104484881B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105157681B (en) * 2015-08-23 2018-07-24 西北工业大学 Indoor orientation method, device and video camera and server
CN105225240B (en) * 2015-09-25 2017-10-03 哈尔滨工业大学 The indoor orientation method that a kind of view-based access control model characteristic matching is estimated with shooting angle
CN106228538B (en) * 2016-07-12 2018-12-11 哈尔滨工业大学 Binocular vision indoor orientation method based on logo
CN106295512B (en) * 2016-07-27 2019-08-23 哈尔滨工业大学 Vision data base construction method and indoor orientation method in more correction lines room based on mark
CN106643690A (en) * 2016-09-21 2017-05-10 中国第汽车股份有限公司 Method for high-precision positioning of automobile through scene recognition
CN108921899B (en) * 2018-07-13 2021-05-28 哈尔滨工业大学 Indoor visual positioning method for solving fundamental matrix based on pixel threshold
CN109407699A (en) * 2018-10-29 2019-03-01 宋永端 Autonomous flight localization method in a kind of unmanned plane room
CN114494376B (en) * 2022-01-29 2023-06-30 山西华瑞鑫信息技术股份有限公司 Mirror image registration method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100484655B1 (en) * 2005-01-22 2005-04-22 (주)바른기술 Line scan camera using penta-prism
CN1719477A (en) * 2005-05-19 2006-01-11 上海交通大学 Calibration method of pick up camera or photographic camera geographic distortion
CN102353684A (en) * 2011-06-23 2012-02-15 南京林业大学 Method for acquiring laser meat image by double-laser triangle method
CN103729842A (en) * 2013-12-20 2014-04-16 中原工学院 Fabric defect detection method based on local statistical characteristics and overall significance analysis


Also Published As

Publication number Publication date
CN104484881A (en) 2015-04-01

Similar Documents

Publication Publication Date Title
CN104484881B (en) Image capture-based Visual Map database construction method and indoor positioning method using database
CN104050681B (en) A kind of road vanishing Point Detection Method method based on video image
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN104596519B (en) Vision positioning method based on RANSAC algorithms
CN102404595B (en) Epipolar line rectification method capable of providing instruction for shooting of 3-dimensional programs
CN106846378B (en) A kind of across the video camera object matching and tracking of the estimation of combination topology of spacetime
CN104457758B (en) Video-acquisition-based Visual Map database establishing method and indoor visual positioning method using database
CN105447441A (en) Face authentication method and device
CN107016646A (en) One kind approaches projective transformation image split-joint method based on improved
CN106709950A (en) Binocular-vision-based cross-obstacle lead positioning method of line patrol robot
CN106595702B (en) A kind of multisensor spatial registration method based on astronomy calibration
CN106096621B (en) Based on vector constraint drop position detection random character point choosing method
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
WO2023284070A1 (en) Weakly paired image style transfer method based on pose self-supervised generative adversarial network
CN106295512A (en) Many correction line indoor vision data base construction method based on mark and indoor orientation method
CN107958443A (en) A kind of fingerprint image joining method based on crestal line feature and TPS deformation models
CN109117851A (en) A kind of video image matching process based on lattice statistical constraint
CN106023187A (en) Image registration method based on SIFT feature and angle relative distance
CN105426872A (en) Face age estimation method based on correlation Gaussian process regression
WO2015165227A1 (en) Human face recognition method
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN103955950A (en) Image tracking method utilizing key point feature matching
CN109919832A (en) One kind being used for unpiloted traffic image joining method
CN107845107A (en) A kind of optimization method of perspective image conversion
CN107610136A (en) Well-marked target detection method based on the sequence of convex closure structure center query point

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190614

Address after: 150000 Heilongjiang Harbin Dalian economic and Trade Zone, the North Road and Xingkai Road intersection

Patentee after: Harbin University of Technology Robot Group Co., Ltd.

Address before: 150001 No. 92 West straight street, Nangang District, Heilongjiang, Harbin

Patentee before: Harbin Institute of Technology

TR01 Transfer of patent right