Binocular vision indoor positioning method based on logos
Technical field
The present invention relates to binocular vision indoor positioning methods, and in particular to a logo-based binocular vision indoor positioning method.
Background art
The rapid development of the internet, wearable devices, and location-aware applications has made position awareness an increasingly indispensable kind of information on which people rely. Reliable location information gives users a better experience in many different environments, and in daily life people increasingly depend on location services and navigation systems based on GPS and maps. Outdoor positioning technology is highly developed, and many mobile devices already make use of it: GPS, the GLONASS navigation system, the Galileo navigation system, and the BeiDou navigation system are the most widely used global positioning systems at present. However, because of the influence of the indoor environment, satellite positioning signals cannot directly meet the demands of indoor location-based services, so GPS is suitable only for outdoor scenes. The technologies and industry of location-based services are therefore developing in the indoor direction, with the goal of providing ubiquitous location services.
Many indoor positioning methods exist at present, but they suffer from complex implementation conditions and low positioning accuracy. For example, indoor positioning through GPS or WiFi signals cannot continuously provide good user location information, which means that in such a closed, interference-prone indoor environment traditional methods all have shortcomings in accuracy, sensitivity, and robustness. A vision-based method, by contrast, can directly and effectively extract location information in a geometrically stable indoor environment without depending on other external information, so solving indoor positioning with machine vision is a feasible scheme.
In the field of indoor positioning, precision and cost are the principal factors restricting adoption. Proposing a marker that is simple, convenient, easy to identify, carries absolute location coordinates, and has a certain error-correcting capability, capturing the marker information with a camera, performing image processing, and finally determining the position is exactly an application of computer-vision target recognition to indoor positioning. These markers can be signs that already exist everywhere in buildings, such as emergency exits and room numbers, or artificially placed pattern icons; once matched in software, the positioning problem can be solved with reliable precision and at low cost.
In machine vision, binocular vision is currently a research hot spot: compared with a monocular system it extracts the three-dimensional information of a target more easily, and compared with a multi-camera system it is simpler to realize, so building a binocular stereo vision system for indoor positioning is effective. A binocular stereo vision system usually obtains two digital images of a measured object simultaneously from different viewpoints with twin cameras, or obtains two digital images of the object at different moments from different viewpoints with a single camera, recovers the three-dimensional geometric information of the object based on the parallax principle, reconstructs the three-dimensional contour and position of the object, and can thereby obtain relative coordinates.
Unlike common image matching, with a wearable device the direction in which a person walks and faces indoors is not fixed. In an open shopping mall many scene images can be captured, but more often the user walks forward along a narrow long corridor, facing the front. For a better user experience, when the user faces straight ahead the objects on the two side walls appear in the image at a small angle, which makes target identification difficult.
Summary of the invention
The purpose of the present invention is to solve the problem of the prior art that a monocular system cannot easily extract the three-dimensional information of a target, by proposing a logo-based binocular vision indoor positioning method.
The above object of the invention is achieved through the following technical solution:
Step 1: establish the visual map database of logo image acquisition;
Step 2: calibrate the internal and external parameters of the binocular camera; using Zhang Zhengyou's chessboard calibration method, acquire the intrinsic parameters of the left camera and of the right camera, and determine with Matlab the extrinsic parameters R and T of the relative pose relation of the left and right cameras;
Step 3: shoot a logo image with the binocular camera; match the visual information features of the logo image with the visual information features of the Visual Map database images; reject mismatched points and retain the matched points;
Step 4: from the pixel coordinates P_l^1 = (u_l^1, v_l^1) and P_r^1 = (u_r^1, v_r^1) obtained in Step 3, calculate the three-dimensional coordinate of vertex P1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl; from the pixel coordinates P_l^2 = (u_l^2, v_l^2) and P_r^2 = (u_r^2, v_r^2) obtained in Step 3, calculate the three-dimensional coordinate of vertex P2 in the same coordinate system;
Step 5: from the three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) acquired in Step 4 and the coordinates of the logo image vertices P1 and P2 stored in the visual map database under the world coordinate system, establish the coordinate transformation relation and acquire the coordinate of the left camera under the world coordinate system, which is the position coordinate of the user.
Invention effect
In machine vision, binocular vision is currently a hot spot: compared with a monocular system it extracts target three-dimensional information more easily, and compared with a multi-camera system it is simple to realize, so using two cameras as the eyes of a person for indoor positioning is very effective. A binocular stereo vision system is in fact a triangulation system: two cameras shoot the same target object from different left and right positions, or a monocular camera shoots the same object from different positions while moving; the distance information of the target is recovered from the parallax principle, the three-dimensional contour of the object can be reconstructed, and relative coordinates can then be obtained. Visual information suffers little interference and the precision is high. Therefore this work mainly chooses a binocular vision system to realize high-precision indoor positioning.
The present invention realizes online user positioning through feature point matching between the shot image pair and the database images and through the conversion between the coordinate system of the shot image and the real-world coordinate system of the database; indoor positioning is carried out on the logo image information on the two walls of a long corridor. The positioning system is divided into two stages: an offline database-building stage and an online location-estimation stage. In the offline stage a visual map database containing visual feature information must be constructed in the indoor scene: a certain number of logo images are specially arranged in the indoor long corridor, guaranteeing that at every moment some logo falls into both images of the binocular camera; the logos on the facing corridor wall are photographed under good illumination, and the two-dimensional world coordinates of the top-left and top-right vertices of each image are recorded. When a user comes online, a pair of images is shot by the binocular camera, SURF feature matching and ranging are carried out against the images in the offline database, and the coordinate of the shooting position can be determined by coordinate transformation. When the distance from the camera to the logo is 3.5 m, the positioning error can reach 0.22 m; the cumulative distribution function in Fig. 6 shows that the result at 68% probability is 0.22 m.
Brief description of the drawings
Fig. 1 is the flowchart of the RANSAC algorithm of specific embodiment 5 for rejecting mismatched points and estimating the homography matrix H;
Fig. 2 is the schematic diagram of the relations between the coordinate systems of specific embodiment 4;
Fig. 3 is the relation diagram between the camera coordinate systems and the image physical coordinate systems of specific embodiment 7 (d = 1, 2);
Fig. 4 is the binocular vision system diagram of specific embodiment 3 (d = 1, 2);
Fig. 5 is the indoor coordinate system schematic diagram (top view) of specific embodiment 2;
Fig. 6 is the cumulative distribution function diagram of specific embodiment 1.
Specific embodiments
Specific embodiment 1: the logo-based binocular vision indoor positioning method of the present embodiment is specifically carried out according to the following steps:
Step 1: establish the visual map (Visual Map) database of logo image acquisition;
Step 2: calibrate the internal and external parameters of the binocular camera; using Zhang Zhengyou's chessboard calibration method, acquire the intrinsic parameters of the left camera and of the right camera, and determine with Matlab the extrinsic parameters R and T of the relative pose relation of the left and right cameras;
Step 3: shoot a logo image with the binocular camera; match the visual information features of the logo image with the visual information features of the Visual Map database images; reject mismatched points and retain the matched points;
Step 4: from the pixel coordinates P_l^1 = (u_l^1, v_l^1) and P_r^1 = (u_r^1, v_r^1) obtained in Step 3, calculate the three-dimensional coordinate of vertex P1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl; from the pixel coordinates P_l^2 = (u_l^2, v_l^2) and P_r^2 = (u_r^2, v_r^2) obtained in Step 3, calculate the three-dimensional coordinate of vertex P2 in the same coordinate system;
Step 5: from the three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) acquired in Step 4 and the coordinates of the logo image vertices P1 and P2 stored in the visual map (Visual Map) database under the world coordinate system, establish the coordinate transformation relation and acquire the coordinate of the left camera under the world coordinate system, which is the position coordinate of the user.
According to Step 1, the location information of every visual information feature is available; through Step 3 the database image that matches the shot image is determined, so the position coordinates of the visual information features contained in the shot image are known; through Step 4 the position coordinates of the visual information under the camera coordinate system are obtained; from two different visual information features, two position coordinates under the camera coordinate system are determined; by coordinate calculation the position of the camera under the world coordinate system is then acquired, which yields the online user position.
Effect of the present embodiment:
The present embodiment realizes online user positioning through feature point matching between the shot image pair and the database images and through the conversion between the coordinate system of the shot image and the real-world coordinate system of the database; indoor positioning is carried out on the logo image information on the two walls of a long corridor. The positioning system is divided into two stages: an offline database-building stage and an online location-estimation stage. In the offline stage a visual map database containing visual feature information must be constructed in the indoor scene: a certain number of logo images are specially arranged in the indoor long corridor, guaranteeing that at every moment some logo falls into both images of the binocular camera; the logos on the facing corridor wall are photographed under good illumination, and the two-dimensional world coordinates of the top-left and top-right vertices of each image are recorded. When a user comes online, a pair of images is shot by the binocular camera, SURF feature matching and ranging are carried out against the images in the offline database, and the coordinate of the shooting position can be determined by coordinate transformation. When the distance from the camera to the logo is 3.5 m, the positioning error can reach 0.22 m; the cumulative distribution function in Fig. 6 shows that the result at 68% probability is 0.22 m.
Specific embodiment 2: the present embodiment differs from specific embodiment 1 in that the detailed process of establishing the visual map (Visual Map) database of logo image acquisition in Step 1 is as follows:
Step 1.1: for the indoor environment to be positioned, select the world coordinate system origin O(x_w0, z_w0) and establish the plane two-dimensional rectangular coordinate system O_w X_w Z_w;
Step 1.2: acquire the logo images fixed on the indoor walls with a monocular camera, and record the position coordinate (x_w^1, z_w^1) of vertex P1 of each shot logo image under the world coordinate system and the position coordinate (x_w^2, z_w^2) of vertex P2 under the world coordinate system; save the pixel coordinate p1 = (u1, v1) of vertex P1 of the logo image shot by the monocular camera and define the pixel coordinate p2 = (u2, v2) of vertex P2; store the pixel coordinates (u1, v1) and (u2, v2) and the position coordinates (x_w^1, z_w^1) and (x_w^2, z_w^2) in the visual map (Visual Map) database; the position coordinates are indicated in Fig. 5;
Step 1.3: extract the feature point information of each logo image with the SURF algorithm to obtain its visual information features, and store the visual information features in the visual map (Visual Map) database; q = 1, ..., n, where n is the number of logo images. The other steps and parameters are the same as in specific embodiment 1.
Specific embodiment 3: the present embodiment differs from specific embodiments 1 and 2 in the calibration of the internal and external parameters of the binocular camera in Step 2, in which Zhang Zhengyou's chessboard calibration method is used to acquire the intrinsic parameter of the left camera and the intrinsic parameter of the right camera, and Matlab is used to determine the extrinsic parameters R and T of the relative pose relation of the left and right cameras, specifically:
Step 2.1: using Zhang Zhengyou's chessboard calibration method, calibrate the model parameter matrices of the left and right cameras in Matlab with a planar chessboard target; read out the intrinsic parameter f_l of the left camera and the intrinsic parameter f_r of the right camera from the model parameter matrices, and save f_l and f_r;
Step 2.2: determine with Matlab the extrinsic parameters R and T of the relative pose relation of the left and right cameras, and save R and T in Matlab. Here R denotes the rotation matrix between the left camera coordinate origin O_cl and the right camera coordinate origin O_cr in Fig. 4. If the rotation relation is decomposed into a clockwise rotation by angle α about the x-axis of the coordinate system, a clockwise rotation by angle β about the y-axis, and a clockwise rotation by angle γ about the z-axis, then R is the product of the three corresponding elementary rotation matrices. T = (t_x, t_y, t_z)^T denotes the translation vector between the left camera coordinate origin O_cl and the right camera coordinate origin O_cr in Fig. 4, where t_x, t_y, and t_z denote the offsets of O_cr relative to O_cl in the x-, y-, and z-axis directions.
The shot image size is 1292 pixels × 964 pixels. A chessboard calibration template is printed first, each square being 30 mm × 30 mm with a 9 × 6 specification, and the calibration board is attached to a rigid flat cardboard and kept flat; by continually adjusting the distance and angle between the calibration template and the binocular camera, 15 planar chessboard target images are acquired with each of the left and right cameras. The other steps and parameters are the same as in specific embodiments 1 and 2.
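The decomposition of the extrinsic rotation R in Step 2.2 can be sketched as below; the multiplication order and the sign convention for "clockwise" are assumptions, since the patent names only the three angles α, β, γ.

```python
import math

def rot_x(a):
    """Elementary rotation about the x-axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0],
            [0.0, c, -s],
            [0.0, s, c]]

def rot_y(b):
    """Elementary rotation about the y-axis by angle b (radians)."""
    c, s = math.cos(b), math.sin(b)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def rot_z(g):
    """Elementary rotation about the z-axis by angle g (radians)."""
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0.0],
            [s, c, 0.0],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def extrinsic_R(alpha, beta, gamma):
    # Compose R from the three elementary rotations; the x-then-y-then-z
    # application order chosen here is an assumption.
    return matmul(rot_z(gamma), matmul(rot_y(beta), rot_x(alpha)))
```

With all three angles zero the composition reduces to the identity, as expected for two cameras mounted with parallel axes.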
Specific embodiment 4: the present embodiment differs from specific embodiments 1 to 3 in Step 3, in which a logo image is shot with the binocular camera, the visual information features of the logo image are matched with the visual information features of the Visual Map database images, mismatched points are rejected, and matched points are retained, specifically:
Step 3.1: define the camera coordinate systems, the image physical coordinate systems, and the image pixel coordinate systems; the positional relations between the world coordinate system, the camera coordinate systems, the image physical coordinate systems, and the image pixel coordinate systems are shown in Fig. 2, in which the CCD plane has been mirrored about the lens origin;
(1) define the position O_cl of the left camera optical center of the binocular camera as the binocular camera coordinate origin, and let the Z_c axis of the binocular camera coincide with the optical axis of the left camera, where the binocular camera comprises the left camera and the right camera; define the coordinate system of the left camera as O_cl X_cl Y_cl Z_cl and the coordinate system of the right camera as O_cr X_cr Y_cr Z_cr;
(2) define the logo image pixel coordinate system as the two-dimensional rectangular coordinate system O_1 U V as shown in Fig. 2; in this coordinate system the vertex O_1 is the coordinate origin and the pixel center point coordinate is (u0, v0); the positional relations between the world coordinate system, the camera coordinate systems, the image physical coordinate systems, and the image pixel coordinate systems are shown in Fig. 2;
Step 3.2: extract the feature point information of the logo image shot by the left camera with the SURF algorithm to obtain its visual information features;
Step 3.3: calculate the nearest-neighbor Euclidean distances between the visual information feature points of the logo image shot by the left camera and the visual information feature points of the visual map (Visual Map) database; according to the nearest-neighbor Euclidean distance, retrieve the Visual Map database image that best matches the logo image shot by the left camera, i.e. the database image; carry out feature point matching between the visual information features of the database image and the visual information features extracted from the left camera image, obtaining m pairs of matched points;
Step 3.4: extract the feature point information of the logo image shot by the right camera with the SURF algorithm to obtain its visual information features; carry out feature point matching between the visual information features of the database image and the visual information features extracted from the right camera image, obtaining K pairs of matched points;
Step 3.5: reject the mismatched points between the matched images of Step 3.3 using the RANSAC algorithm;
Step 3.6: reject the mismatched points among the K matched point pairs of Step 3.4 using the RANSAC algorithm;
Step 3.7: through the estimated homography matrices H1 and H2 and the pixel coordinates of the logo image vertices P1 and P2 stored in the Visual Map database, find the pixels P_l^1 and P_l^2 of the vertices of the logo image shot by the left camera corresponding to H1, and the pixels P_r^1 and P_r^2 of the vertices of the logo image shot by the right camera corresponding to H2; the subscripts l and r represent the left and right cameras, and the superscripts 1 and 2 represent the top-left and top-right vertices of the image.
The relation between a homography matrix and a matched image pair is expressed as
P_e^j = H_i · p_j    (1)
where P_e^j is P_l^1, P_l^2, P_r^1, or P_r^2; e is the left camera l (for H_i = H1) or the right camera r (for H_i = H2); H_i is H1 or H2; and p_j is p1 or p2.
Bringing the pixel coordinates p1 and p2 of the vertices P1 and P2 of the logo image stored in the visual map (Visual Map) database of Step 1 into formula (1) in turn calculates the pixel coordinates of pixel P_l^1 corresponding to vertex P1 of the logo image shot by the left camera, pixel P_l^2 corresponding to vertex P2 of the logo image shot by the left camera, pixel P_r^1 corresponding to vertex P1 of the logo image shot by the right camera, and pixel P_r^2 corresponding to vertex P2 of the logo image shot by the right camera; the pixel coordinates of P_l^1, P_l^2, P_r^1, and P_r^2 are expressed as (u_l^1, v_l^1), (u_l^2, v_l^2), (u_r^1, v_r^1), and (u_r^2, v_r^2). A matched image pair is the logo image shot by the left camera matched with its corresponding image in the Visual Map database, or the logo image shot by the right camera matched with its corresponding image in the Visual Map database.
When the user comes online for positioning, two industrial cameras with lenses fixed on a tripod simultaneously shoot the left and right indoor photos containing the fixed visual information features; the left and right images shot by the online user are each coarsely matched with the offline database images by SURF feature points. Because some mismatched points occur when the SURF algorithm computes matched points, and these mismatched points lower the accuracy of indoor positioning, the RANSAC (RANdom SAmple Consensus) algorithm is used to reject mismatched points and improve matching accuracy; the homography matrix is also estimated with the RANSAC algorithm to calculate the pixel coordinates of the top-left and top-right corners of the logo image in the two camera images under the pixel coordinate system. The role of a homography matrix is to describe the two-dimensional projective transformation relation between two planes; it can be found from 4 pairs of corresponding points in the two image planes. A threshold t and a number n0 are set so that the following condition is met: when for at least n0 matched points the Euclidean distance between the actual position coordinate and the position coordinate back-calculated through the homography matrix is less than t, the homography matrix is considered to satisfy the transformation relation of this group of matched images; by this method the RANSAC algorithm weeds out the mismatched points that do not satisfy the homography matrix. The other steps and parameters are identical to those of specific embodiments 1 to 3.
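The vertex-transfer relation of formula (1), mapping a database vertex pixel through an estimated homography, can be sketched as follows; the matrix values below are illustrative, not values from the patent.

```python
def apply_homography(H, p):
    """Map a database pixel p = (u, v) through the 3x3 homography H
    (formula (1)) and de-homogenize, giving the corresponding pixel
    in the shot image."""
    u, v = p
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

# Example: a pure-translation homography shifting pixels by (5, -2);
# a homography estimated from real matches would have a full 3x3 form.
H_shift = [[1.0, 0.0, 5.0],
           [0.0, 1.0, -2.0],
           [0.0, 0.0, 1.0]]
```

Applying H1 to p1 and p2 in this way gives P_l^1 and P_l^2, and applying H2 gives P_r^1 and P_r^2.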
Specific embodiment 5: the present embodiment differs from specific embodiments 1 to 4 in that the rejection of the mismatched points between the matched images of Step 3.3 with the RANSAC algorithm in Step 3.5 is specifically:
Step 3.5.1: set the initial values of the iteration count n and the matched point index i to 0;
Step 3.5.2: from the m matched point pairs obtained in Step 3.3, select 4 pairs and estimate the homography matrix H1 between the logo image shot by the left camera and the database logo image; with the calculated homography matrix H1, calculate the theoretical coordinates of the remaining matched points of the m pairs of Step 3.3 in the corresponding logo image coordinate system;
Step 3.5.3: calculate the Euclidean distance between the theoretical coordinate of the i-th matched point of the database logo image from Step 3.5.2 and the coordinate of the i-th visual information feature point obtained in Step 3.2, where 1 ≤ i ≤ m;
Step 3.5.4: judge whether the Euclidean distance obtained in Step 3.5.3 is less than the preset Euclidean distance threshold t = 0.01; if the judgment result is yes, execute Step 3.5.5; if no, return to Step 3.5.3;
Step 3.5.5: increase n by 1 and judge whether n exceeds the preset iteration count n0 = 20; if the judgment result is yes, save the matched point pair and the homography matrix H1; if no, reject the matched point pair and return to Step 3.5.2. The detailed process is shown in Fig. 1. The other steps and parameters are identical to those of specific embodiments 1 to 4.
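The sample-fit-score loop of Steps 3.5.1 to 3.5.5 can be sketched with a toy model; here a pure 2-D translation stands in for the 4-point homography so the sketch stays dependency-free, and all data values are invented for illustration.

```python
import random

def ransac_translation(matches, t=0.01, n_iters=20, seed=0):
    """Toy RANSAC in the spirit of Steps 3.5.1-3.5.5: repeatedly fit a
    model from a random minimal sample, count matches whose reprojection
    error is below threshold t, and keep the model with the most inliers.
    matches is a list of ((u, v), (u', v')) pixel-coordinate pairs; a
    2-D translation replaces the homography purely for simplicity."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        a, b = rng.choice(matches)            # minimal sample: 1 pair
        model = (b[0] - a[0], b[1] - a[1])    # candidate translation
        inliers = [m for m in matches
                   if abs(m[0][0] + model[0] - m[1][0]) < t
                   and abs(m[0][1] + model[1] - m[1][1]) < t]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Mismatched points (outliers) never agree with a model fitted from correct pairs, so they are excluded from the returned inlier set, which is the rejection step the embodiment describes.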
Specific embodiment 6: the present embodiment differs from specific embodiments 1 to 5 in that the rejection of the mismatched points among the K matched point pairs of Step 3.4 with the RANSAC algorithm in Step 3.6 is specifically:
Step 3.6.1: set the initial values of the iteration count n' and the matched point index k among the K matched point pairs to 0;
Step 3.6.2: from the K matched point pairs obtained in Step 3.4, select 4 pairs and estimate the homography matrix H2 between the logo image shot by the right camera and the database logo image; with the calculated homography matrix H2, calculate the theoretical coordinates of the remaining matched points of the K pairs of Step 3.4 in the corresponding logo image coordinate system;
Step 3.6.3: calculate the Euclidean distance between the theoretical coordinate of the k-th matched point and the coordinate of the k-th visual information feature point obtained in Step 3.4, where 1 ≤ k ≤ K;
Step 3.6.4: judge whether the Euclidean distance obtained in Step 3.6.3 is less than the preset Euclidean distance threshold t; if the judgment result is yes, execute Step 3.6.5; if no, return to Step 3.6.3;
Step 3.6.5: increase n' by 1 and judge whether n' exceeds the preset iteration count n'0 = 20; if the judgment result is yes, terminate and save the homography matrix H2; if no, return to Step 3.6.2. The detailed process is shown in Fig. 1. The other steps and parameters are identical to those of specific embodiments 1 to 5.
Specific embodiment 7: the present embodiment differs from specific embodiments 1 to 6 in Step 4, in which the three-dimensional coordinate of vertex P1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl is calculated from the pixel coordinates P_l^1 and P_r^1 obtained in Step 3, and the three-dimensional coordinate of vertex P2 in the same coordinate system is calculated from the pixel coordinates P_l^2 and P_r^2 obtained in Step 3, specifically:
Step 4.1: from the pixel coordinate (u_l^1, v_l^1), (u_r^1, v_r^1), (u_l^2, v_l^2), or (u_r^2, v_r^2), the converted coordinate (x, y) of logo image vertex P1 or P2 in the physical coordinate system shown in Fig. 2 is given by formula (2):
x = (u − u0) · dx,  y = (v − v0) · dy    (2)
where the logo image shot by the binocular camera is the logo image shot by the left camera or the logo image shot by the right camera; (x, y) is (x_1^l, y_1^l), (x_1^r, y_1^r), (x_2^l, y_2^l), or (x_2^r, y_2^r); (x_1^l, y_1^l) are the converted physical coordinates of vertex P1 of the logo image shot by the left camera and (x_1^r, y_1^r) those of vertex P1 shot by the right camera; (x_2^l, y_2^l) are the converted physical coordinates of vertex P2 shot by the left camera and (x_2^r, y_2^r) those of vertex P2 shot by the right camera; (u, v) is (u_l^1, v_l^1), (u_r^1, v_r^1), (u_l^2, v_l^2), or (u_r^2, v_r^2); dx is the scaling coefficient along x and dy the scaling coefficient along y. In homogeneous coordinates and matrix form this is formula (3):
[u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]] · [x, y, 1]^T    (3)
The pixel center point coordinate (u0, v0) is defined as the coordinate origin O2, and the logo image physical coordinate system is defined as the two-dimensional rectangular coordinate system O2XY, whose X-axis and Y-axis are parallel to the X_cl-axis and Y_cl-axis of the camera coordinate system, usually with millimetres (mm) as the coordinate unit;
Step 4.2: find the relation between the converted logo image physical coordinate system O2XY and the left camera coordinate system O_cl X_cl Y_cl Z_cl; the relation between a camera coordinate system and an image physical coordinate system is shown in Fig. 3.
The intersection point p_cl(x, y) of O_cl P with the image plane, or the intersection point p_cr(x, y) of O_cr P with the image plane, satisfies the fundamental perspective relation of formula (4):
x = f · x_c / z_c,  y = f · y_c / z_c    (4)
where point P is point P1 or point P2; (x_c, y_c, z_c) is the coordinate of P in the corresponding camera coordinate system, i.e. (x_cl^1, y_cl^1, z_cl^1) for vertex P1 under the left camera coordinate system, (x_cr^1, y_cr^1, z_cr^1) for vertex P1 under the right camera coordinate system, (x_cl^2, y_cl^2, z_cl^2) for vertex P2 under the left camera coordinate system, or (x_cr^2, y_cr^2, z_cr^2) for vertex P2 under the right camera coordinate system.
In homogeneous coordinates and matrix form this is formula (6):
z_c · [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [x_c, y_c, z_c, 1]^T    (6)
Let the left camera coordinate system be O_cl X_cl Y_cl Z_cl and the right camera coordinate system be O_cr X_cr Y_cr Z_cr, with image physical coordinate systems O_l X_l Y_l Z_l and O_r X_r Y_r Z_r respectively; the left camera focal length is the intrinsic parameter f_l and the right camera focal length is the intrinsic parameter f_r, and f is f_l or f_r.
The perspective transformation model of the cameras is then given by formula (8), in which (x_l, y_l) is (x_1^l, y_1^l) or (x_2^l, y_2^l) and (x_r, y_r) is (x_1^r, y_1^r) or (x_2^r, y_2^r).
The position transformation relation between the left and right camera coordinate systems is expressed by the space conversion matrix M_lr, as shown in formula (9):
M_lr = [R | T]    (9)
where R is the rotation matrix, T is the translation vector, and M_lr is the matrix formed by R and T;
Step 4.3: solve the coordinates under the left camera coordinate system. Using the intrinsic and extrinsic parameters obtained in Step 2 and the coordinates (x, y) obtained according to Step 4.1, the coordinate (x_cl, y_cl, z_cl) of vertex P1 under the left camera coordinate system is calculated; this is represented by matrix equation (11), and solving the matrix equation finds the coordinate of point P1 under the left camera coordinate system, with ρ_r as the proportionality coefficient, as shown in formula (12).
Here the homogeneous coordinates of pixel P_l^1 in the image physical coordinate system are expressed as (x_1^l, y_1^l, 1) and the homogeneous coordinates of pixel P_r^1 as (x_1^r, y_1^r, 1); the coordinate of point P1 under the left camera coordinate system is solved as (x_cl^1, y_cl^1, z_cl^1), and the coordinate of point P2 under the left camera coordinate system as (x_cl^2, y_cl^2, z_cl^2). Fig. 4 is the schematic diagram of the binocular vision system used in the present invention. At this point the position coordinates of the logo image top-left vertex under the left camera coordinate system are solved; next, the position of the left camera under the world coordinate system is solved.
Using the matching results of Step 3 between the shot images and the Visual Map database images, the database image matching the left and right shot images is obtained; from the homography matrices and the projection matrices the image pixel coordinates of the logo image vertices P1 and P2 in the camera imaging coordinate systems can be obtained, and then the three-dimensional coordinates of the top-left and top-right vertices of the logo image under the left camera coordinate system are acquired. The other steps and parameters are identical to those of specific embodiments 1 to 6.
Specific embodiment 8: the present embodiment differs from specific embodiments 1 to 7 in Step 5, in which the coordinate transformation relation is established from the three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) acquired in Step 4 and the coordinates of the logo image vertices P1 and P2 stored in the visual map (Visual Map) database under the world coordinate system, and the coordinate of the left camera under the world coordinate system, i.e. the position coordinate of the user, is acquired, specifically:
Step 5.1: in the established world coordinate system O_w X_w Z_w shown in Fig. 5, let the direction of the X_w axis be east and the direction of the Z_w axis be north; for the fairly regular floor buildings of today there are the four wall directions shown in Fig. 5. For the case in which the camera shoots from north to south, θ denotes the angle between the line connecting left camera point C with database logo image vertex P1 and the line between vertices P1 and P2, where θ ∈ (0, π);
Step 5.2: calculate the distance D1 from point C to point P1 and the distance D2 from point C to point P2; with the coordinate (x_w^1, z_w^1) of point P1 and the length D3 of P1P2 known, calculate the position of camera point C under the world coordinate system; camera point C is the optical center of the camera. The three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) of points P1 and P2 under the left camera coordinate system O_cl X_cl Y_cl Z_cl are calculated by formula (12), and the lengths D1, D2, and D3 are then acquired;
Step 5.3: find the angle θ; from the three side lengths D1, D2, and D3 of triangle CP1P2, the angle θ is acquired by the law of cosines, as shown in formula (15):
cos θ = (D1² + D3² − D2²) / (2 · D1 · D3)    (15)
Step 5.4: obtain the positioning coordinate from Step 5.3, i.e. the coordinate (x_c, z_c) of point C under the world coordinate system, as shown in formula (16).
Similarly, for targets on walls of the other directions, each side length of the triangle is first found by this method, the angle θ is then determined, and the position of the camera under the world coordinate system is finally calculated. The other steps and parameters are identical to those of specific embodiments 1 to 7.
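Steps 5.2 to 5.4 can be sketched as follows for the north-wall case of Fig. 5; the sign convention placing the camera south of the wall, and the use of horizontal-plane (x, z) components for the distances, are assumptions made for the sketch.

```python
import math

def camera_world_position(P1w, P2w, p1_cam, p2_cam):
    """Place camera point C in O_w X_w Z_w: D1, D2 are horizontal-plane
    distances from C to the logo vertices (from their left-camera-frame
    (x, z) components), D3 is the vertex separation from the world
    coordinates, formula (15) gives the angle theta at P1, and plane
    trigonometry then yields (x_c, z_c).  Assumes the wall runs along
    X_w with the camera to its south."""
    D1 = math.hypot(p1_cam[0], p1_cam[1])
    D2 = math.hypot(p2_cam[0], p2_cam[1])
    D3 = math.hypot(P2w[0] - P1w[0], P2w[1] - P1w[1])
    theta = math.acos((D1**2 + D3**2 - D2**2) / (2 * D1 * D3))  # formula (15)
    # Offset from P1: along the wall by D1*cos(theta), away from it
    # (southward, i.e. toward negative Z_w) by D1*sin(theta).
    return (P1w[0] + D1 * math.cos(theta), P1w[1] - D1 * math.sin(theta))
```

With P1 at (0, 0), P2 at (1, 0), and camera-frame (x, z) offsets (0.6, 0.8) and (−0.4, 0.8) to the two vertices, the camera is placed at approximately (0.6, −0.8), consistent with the 3-4-5 triangle used in the example.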