Marker-based multi-correction-line visual database construction method and indoor positioning method
Technical Field
The present invention relates to a marker-based multi-correction-line visual database construction method and indoor positioning method, and belongs to the field of indoor scene positioning technology.
Background Art
In recent years, owing to broad advances in relevant devices and technologies and the need for seamless location-based services in practical application scenarios, indoor positioning systems have attracted extensive attention and research, opening a new technical field of fully automatic target detection and localization. In outdoor environments, the positioning results of Global Navigation Satellite Systems (GNSS) are among the most reliable sources for accurately estimating a user's location. Indoors or in enclosed environments, however, satellite signals suffer severe attenuation, causing a serious loss of positioning accuracy, so GNSS is infeasible.
At present, research hotspots in indoor positioning technology mainly include WiFi positioning and Bluetooth positioning. WiFi positioning benefits from the wide deployment and broad coverage of wireless access points, and estimates the user's location from the signal strengths of the access points detected by the user's smartphone. However, this technique depends heavily on the number of access points, and its accuracy drops sharply in border areas such as the entrances and exits of real environments like shopping malls. Bluetooth positioning estimates the user's location from the strength of the received Bluetooth signals and can achieve 1-meter accuracy, but the high latency of the Bluetooth signaling stage limits its practical application. Weighing the advantages and disadvantages of the above indoor positioning technologies, a positioning system is needed that is highly scalable, cheap to deploy, low in latency, accurate, and stable. Because images contain rich information, and thanks to the high penetration and portability of smartphones with image sensors and the great advances in computer processing technology, vision-based indoor positioning has become a new research hotspot.
Vision-based indoor positioning requires no additional deployment: only an indoor scene picture taken by the user terminal needs to be matched against an established indoor scene database, yielding a more accurate and more robust estimate of the user's location. Moreover, since images contain rich scene information, vision-based indoor positioning can also provide better visualization services for the user, which other positioning methods cannot match. Existing vision-based indoor positioning techniques mainly use location-fingerprint algorithms: a database is built jointly from the scene images taken by the terminal and the corresponding user locations, and fast retrieval then finds the best-matching scene image and its position in the database to complete the positioning service. However, the traditional epipolar-geometry-based location-fingerprint method has an obvious defect: its positioning result depends heavily on the density of the location fingerprints. When the fingerprints are dense, accuracy improves, but database storage grows and retrieval latency increases, and vice versa.
For the above reasons, a marker-based vision indoor positioning method is needed that replaces the location fingerprints and image information of traditional algorithms with a few marker images in the scene and their location information, and that uses the Speeded-Up Robust Features (SURF) algorithm together with a multi-correction-line positioning algorithm to estimate the user's location. The multi-correction-line positioning algorithm provides additional constraint relationships that reduce the algorithm's dependence on prior information, so that it can be applied in changeable real-world scenes.
Summary of the Invention
The purpose of the present invention is to overcome the shortcomings of the prior art, which uses only a single correction line and cannot avoid the camera lens parameters in the location calculation, by proposing a marker-based multi-correction-line visual database construction method and indoor positioning method.
A marker-based multi-correction-line visual database construction method, characterized by comprising:
Step 1-1: acquire an image of the marker target with a camera sensor; denote the obtained image I_k (k = 1, 2, 3).
Step 1-2: establish two correction lines l_k1 and l_k2 on the marker target; select n sampling points on each of l_k1 and l_k2, and obtain the line equations of l_k1 and l_k2 from the sampling points. The correction line l_k1 lies parallel to the horizontal ground in the world coordinate system, and l_k1 is a ray whose starting point lies on the marker target; the correction line l_k2 lies perpendicular to the horizontal ground and perpendicular to l_k1 in the world coordinate system, with its starting point on the marker target.
Step 1-3: extract a feature-point matrix from I_k with the Speeded-Up Robust Features algorithm, obtaining the feature-point matrix F_k that characterizes image I_k.
Step 1-4: solve the line equation, sampling-point coordinates, and starting point of correction line l_k1 in the image pixel coordinate system. Specifically: let the n sampling points form the pixel coordinate matrix [X_n Y_n], where X_n and Y_n denote the abscissa vector and the ordinate vector of the n sampling points, respectively, each an n × 1 column vector; the starting point is the left endpoint of l_k1. Let the line l_k1 have the expression y = α_k1·x + β_k1, and let the parameters α_k1 and β_k1 be denoted by the matrix W_k1 = [α_k1 β_k1]; then W_k1 is obtained by the least-squares formula:
W_k1^T = (X^T·X)^{-1}·X^T·Y_n (1)
where X is the new n × 2 matrix formed by appending a column of ones to the right of the column vector X_n.
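The least-squares fit of Step 1-4 can be sketched as follows; this is a minimal illustration in NumPy, where the function name `fit_correction_line` and the sample values are ours, not part of the invention:

```python
import numpy as np

def fit_correction_line(xs, ys):
    """Fit y = alpha*x + beta to sampled points by least squares.

    xs, ys: length-n abscissa and ordinate vectors of the sampling
    points, as in Step 1-4. Returns [alpha, beta].
    """
    X = np.column_stack([xs, np.ones(len(xs))])  # n x 2: append a column of ones
    W, *_ = np.linalg.lstsq(X, ys, rcond=None)   # solves (X^T X)^{-1} X^T y
    return W

# Sampling points lying exactly on y = 2x + 1
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs + 1.0
alpha, beta = fit_correction_line(xs, ys)
```

In practice the sampled pixel coordinates are noisy, which is precisely why a least-squares fit over n points is used rather than a two-point line.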
Step 1-5: solve the line equation, sampling-point coordinates, and starting point of correction line l_k2 in the image pixel coordinate system. Specifically: let the n sampling points form the pixel coordinate matrix [X_n Y_n], where X_n and Y_n denote the abscissa vector and the ordinate vector of the n sampling points, respectively, each an n × 1 column vector. Let the line l_k2 have the expression y = α_k2·x + β_k2, and let the parameters α_k2 and β_k2 be denoted by the matrix W_k2 = [α_k2 β_k2]; then W_k2 is obtained by the least-squares formula:
W_k2^T = (X^T·X)^{-1}·X^T·Y_n (2)
Step 1-6: store the elevation angle φ of the camera sensor in the real-world coordinate system, the relative height Z_0 between the camera sensor and the correction line, the parameter matrix W_k1, the parameter matrix W_k2, and the feature-point matrix F_k in the database as the complete information of image I_k.
Step 1-7: repeat Steps 1-1 to 1-6 until the database is built.
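As a concrete illustration of what Step 1-6 stores per image, one possible record layout is sketched below; the field names and the sample values are our own, the invention only specifies the quantities themselves:

```python
import numpy as np

def make_record(elevation_phi, z0, W_k1, W_k2, F_k):
    """Bundle the per-image quantities of Step 1-6 into one database entry."""
    return {
        "elevation": elevation_phi,   # camera elevation angle (measured prior)
        "z0": z0,                     # relative height camera <-> correction line
        "W_k1": np.asarray(W_k1),     # line parameters [alpha_k1, beta_k1]
        "W_k2": np.asarray(W_k2),     # line parameters [alpha_k2, beta_k2]
        "features": np.asarray(F_k),  # SURF feature-point matrix
    }

database = []
# One entry with a 10-point, 64-dimensional descriptor matrix (placeholder data)
database.append(make_record(0.1, 1.5, [2.0, 1.0], [-0.5, 3.0], np.zeros((10, 64))))
```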
The present invention also provides a marker-based multi-correction-line indoor positioning method, comprising the following steps:
Step 2-1: the user captures an image containing the marker with a camera sensor; denote this image I_on. Extract a feature-point matrix from image I_on with the Speeded-Up Robust Features algorithm and send the feature-point matrix to the server; the marker contains two correction lines. All feature-point matrices referred to in the present invention are extracted by the Speeded-Up Robust Features algorithm.
Step 2-2: at the server, compute the nearest-neighbor Euclidean distances between the feature-point matrix of image I_on and the feature-point matrices of the images stored in the database, and according to these nearest-neighbor Euclidean distances retrieve from the database the image bearing the same marker that best matches image I_on; denote this image I_opt. Match I_opt against I_on with Speeded-Up Robust Features to obtain m matched point pairs. The database is the database described in any one of claims 1 to 3; the nearest-neighbor Euclidean distance is the Euclidean distance computed with the nearest-neighbor algorithm.
Step 2-3: reject the mismatched points among the m matched point pairs with the RANSAC algorithm, and obtain the homography matrix H from the feature descriptors of image I_opt to the feature descriptors of image I_on. Define the coordinates P_i of the n sampling points of a correction line of image I_opt in the image pixel coordinate system as (x_i, y_i, 1)^T, and their coordinates mapped into the image pixel coordinate system of image I_on as Q_i = (x_i', y_i', 1)^T, where i = 1, 2, ..., n; then the mapping relation is expressed as:
[Q_1 Q_2 … Q_n] = H·[P_1 P_2 … P_n] (3)
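The mapping of formula (3) applies the homography to the homogeneous sampling-point coordinates and renormalizes; a minimal sketch, with a purely illustrative translation homography:

```python
import numpy as np

def map_points(H, P):
    """Map homogeneous points P (3 x n) of I_opt into I_on via Q = H P,
    rescaling each column so its third coordinate is 1, as in formula (3)."""
    Q = H @ P
    return Q / Q[2]  # divide each column by its last entry

# A pure translation homography: shift every point by (5, -2)
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
# Two sampling points (0, 0) and (1, 1) in homogeneous coordinates
P = np.array([[0.0, 1.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Q = map_points(H, P)
```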
Step 2-4: solve the parametric equation of correction line l_k1 in image I_on. Specifically: define the coordinate matrix of l_k1 as Q = [Q_1 Q_2 … Q_n]; split Q, forming its 1st and 3rd rows into a new matrix X and taking the remaining part of Q as the vector Y; then the line equation of correction line l_k1 in image I_on is expressed as Y = W_1·X, where Y and X are known. Defining the parameter matrix W_1 = [α_1, β_1], we obtain:
W_1 = Y·X^T·(X·X^T)^{-1} (4)
W_1 represents the correction line l_k1 in image I_on.
Step 2-5: solve the parametric equation of correction line l_k2 in image I_on; the detailed process is the same as in Step 2-4, and the obtained parameter matrix W_2 is:
W_2 = Y·X^T·(X·X^T)^{-1} (5)
W_2 represents the correction line l_k2 in image I_on.
Step 2-6: solve the conversion relation mapping points on correction line l_k1 from the reference coordinate system to the image physical coordinate system.
Step 2-7: solve the conversion relation mapping points on correction line l_k2 from the reference coordinate system to the image physical coordinate system.
Step 2-8: solve the position coordinates [X_0, Y_0] of the camera in the reference coordinate system and the steering angle θ. θ is found from formula (13), where the variables b and c are known quantities obtained from the offline phase and the geometric constraint relations of the homography matrix; the variables Z_0 and φ are obtained as prior information by manual measurement; the focal length f of the camera is solved from formula (18). Substituting the value of θ into formula (13) yields the value of Y_0, and X_0 is found from formulas (10) and (11).
Step 2-9: solve the position coordinates of the user in the real world. Specifically: define the coordinate value in the world coordinate system of the starting point of correction line l_k1 of the reference coordinate system as T_r = [X_r, Y_r, Z_r]^T, and the rotation-angle relation between the plane of the marker target and the world coordinate system as R_r, where X_r, Y_r, and Z_r denote the offsets between the reference coordinate system and the world coordinate system along the X-, Y-, and Z-axis directions, respectively. Let L_s = [X_0, Y_0, 1]^T denote the coordinates of any point in the reference coordinate system; then its corresponding real-world coordinates L_r satisfy the relation:
L_r = R_r^{-1}·L_s - T_r (22)
According to formula (22), the user's coordinate information in the reference coordinate system is converted into the user's position coordinates in the real-world coordinate system, completing the estimate of the user's location.
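Formula (22) is an inverse rotation followed by a translation offset; a minimal numeric sketch, with an illustrative 90-degree in-plane rotation and offset values that are not taken from the invention:

```python
import numpy as np

def to_world(R_r, T_r, L_s):
    """Convert reference-frame coordinates L_s to world coordinates
    via L_r = R_r^{-1} . L_s - T_r, as in formula (22)."""
    return np.linalg.inv(R_r) @ L_s - T_r

# 90-degree rotation about the third axis, plus an offset T_r
theta = np.pi / 2
R_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
T_r = np.array([1.0, 2.0, 0.0])
L_s = np.array([3.0, 0.0, 1.0])   # a point in the reference frame
L_r = to_world(R_r, T_r, L_s)
```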
The beneficial effects of the invention are: compared with the prior art, the present invention adds an additional correction line so that the camera parameters of the user terminal need not be acquired during indoor positioning. Camera parameters are often difficult to obtain; by avoiding them, the method of the invention makes the indoor positioning result more efficient and more accurate.
Brief Description of the Drawings
Fig. 1 is a flowchart of the marker-based multi-correction-line visual database construction method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the reference coordinate system, camera coordinate system, and image physical coordinate system of an embodiment of the present invention;
Fig. 3 is a schematic diagram of correction-line selection of an embodiment of the present invention;
Fig. 4 is a flowchart of the marker-based multi-correction-line visual positioning method of an embodiment of the present invention;
Fig. 5 is a flowchart of Steps 2-3-1 to 2-3-5 of an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is suitable for indoor positioning, for example accurately locating a user inside a shopping mall. The method of the invention is divided into a database construction method and an indoor positioning method; since indoor positioning uses the image information in the database, the database must be built first in actual use. Indoor positioning is carried out with the user's camera-equipped device, and the user's position is finally obtained through matching computations between the captured image and the images in the database.
The present invention involves four coordinate systems: the reference coordinate system, the camera coordinate system, the image pixel coordinate system, and the image physical coordinate system. A schematic is shown in Fig. 2, where RCS is the reference coordinate system, CCS is the camera coordinate system, and ICS is the image physical coordinate system; R_0 and C_0 are the coordinate origins. The reference coordinate system is composed of three orthogonal axes (X, Y, Z) on the marker target: the rightward direction along the upper boundary of the marker target (correction line l_k1) is the X-axis, the direction perpendicular to and into the marker target is the Y-axis, and l_k2, pointing downward perpendicular to the ground, is the Z-axis; the origin is the upper-left corner of the marker target. The origin of the camera coordinate system is the optical center of the camera, with the optical axis of the camera as the W-axis and the U-V plane parallel to the imaging plane of the camera. The image physical coordinate system is a two-dimensional coordinate system representing the image of real-world objects projected onto the imaging plane of the camera's CCD (charge-coupled device) sensor; its u-axis and v-axis correspond to the U-axis and V-axis of the camera coordinate system, respectively. The image pixel coordinate system takes the upper-left corner of the marker target (e.g. a poster) in the image as its origin, with the X-axis pointing right and the Y-axis pointing down from the origin; the image pixel coordinate system and the image physical coordinate system are related by a scaling correspondence.
Specific Embodiment 1: the marker-based multi-correction-line visual database construction method of this embodiment includes:
Step 1-1: acquire an image of the marker target with a camera sensor; denote the obtained image I_k (k = 1, 2, 3).
Step 1-2: establish two correction lines l_k1 and l_k2 on the marker target; select n sampling points on each of l_k1 and l_k2, and obtain the line equations of l_k1 and l_k2 in the image pixel coordinate system from the sampling points. The correction line l_k1 lies parallel to the horizontal ground in the world coordinate system, and l_k1 is a ray whose starting point lies on the marker target; the correction line l_k2 lies perpendicular to the horizontal ground and perpendicular to l_k1 in the world coordinate system, with its starting point on the marker target.
Step 1-3: extract a feature-point matrix from I_k with the Speeded-Up Robust Features algorithm, obtaining the feature-point matrix F_k that characterizes image I_k.
Step 1-4: solve the line equation, sampling-point coordinates, and starting point of correction line l_k1 in the image pixel coordinate system. Specifically: let the n sampling points form the pixel coordinate matrix [X_n Y_n], where X_n and Y_n denote the abscissa vector and the ordinate vector of the n sampling points, respectively, each an n × 1 column vector; the starting point is the left endpoint of l_k1. Let the line l_k1 have the expression y = α_k1·x + β_k1, with the parameters α_k1 and β_k1 denoted by the matrix W_k1 = [α_k1 β_k1]; then W_k1 is obtained by the least-squares formula (1), where X is the new n × 2 matrix formed by appending a column of ones to the right of the column vector X_n.
Step 1-5: solve the line equation, sampling-point coordinates, and starting point of correction line l_k2 in the image pixel coordinate system in the same way: let the line l_k2 have the expression y = α_k2·x + β_k2, with the parameters α_k2 and β_k2 denoted by the matrix W_k2 = [α_k2 β_k2]; then W_k2 is obtained by formula (2).
Step 1-6: store the elevation angle φ of the camera sensor in the real-world coordinate system, the relative height Z_0 between the camera sensor and the correction line, the parameter matrix W_k1, the parameter matrix W_k2, and the feature-point matrix F_k in the database as the complete information of image I_k.
Step 1-7: repeat Steps 1-1 to 1-6 until the database is built.
It should be particularly noted that, by setting the additional correction line, the present invention obtains a parameter not provided in the prior art, namely the elevation angle φ. The present invention uses the elevation angle φ in place of the camera parameters of the prior art. Since the elevation angle is very easy to obtain by measurement and calculation, while the two camera parameters are often difficult to obtain and differ from camera to camera, the calculation and use of the elevation angle here well embodies the innovation of the invention: a quantity that is easy to acquire replaces quantities that are hard to acquire, greatly increasing the versatility and accuracy of the method.
The database construction method of this embodiment is also called the offline phase.
Specific Embodiment 2: this embodiment differs from Specific Embodiment 1 in that the marker target is a poster. That is, when building the database a poster is chosen as the target, the poster is photographed, and the correction lines are drawn on the poster by hand. The selection of sampling points in Step 1-2 is the process of drawing the correction lines by hand; specifically, 8 points may be selected by human judgment, these 8 points being judged by eye to lie approximately on a straight line.
The other steps and parameters are the same as in Specific Embodiment 1.
Specific Embodiment 3: this embodiment differs from Specific Embodiments 1 and 2 in that l_k1 is the upper boundary line of the poster, l_k2 is the left boundary line of the poster, and the origin is the corner point at the upper left of the poster. That is, the poster is assumed to be rectangular and the two boundary lines are a pair of non-parallel sides; two mutually orthogonal lines are selected to ease the subsequent calculation, but other selections can also realize this scheme.
The other steps and parameters are the same as in Specific Embodiment 1 or 2.
Specific Embodiment 4: the marker-based multi-correction-line indoor positioning method of this embodiment includes:
Step 2-1: the user captures an image containing the marker with a camera sensor; denote this image I_on. Extract a feature-point matrix from image I_on with the Speeded-Up Robust Features algorithm and send the feature-point matrix to the server; the marker contains two correction lines, i.e. the image containing the marker is an image containing the two correction lines.
The process by which the user obtains the image with a camera sensor may be shooting with a camera-equipped device such as the user's mobile phone.
Step 2-2: at the server, compute the nearest-neighbor Euclidean distances between the feature-point matrix of image I_on and the feature-point matrices of the images stored in the database, and according to these nearest-neighbor Euclidean distances retrieve from the database the image bearing the same marker that best matches image I_on; denote this image I_opt. Match I_opt against I_on with Speeded-Up Robust Features to obtain m matched point pairs. The database is the database of Specific Embodiment 1; the nearest-neighbor Euclidean distance is the Euclidean distance computed with the nearest-neighbor algorithm.
"Best matching" means the shortest Euclidean distance, and the process of "obtaining m matched point pairs" may preset a Euclidean distance threshold: if the Euclidean distance of a pair of points matched by Speeded-Up Robust Features across the two images is less than this threshold, the pair is considered a matched pair.
Step 2-3: reject the mismatched points among the m matched point pairs with the RANSAC algorithm, and obtain the homography matrix H from the feature descriptors of image I_opt to the feature descriptors of image I_on. Define the coordinates P_i of the n sampling points of a correction line of image I_opt in the image pixel coordinate system as (x_i, y_i, 1)^T, and their coordinates mapped into the image pixel coordinate system of image I_on as Q_i = (x_i', y_i', 1)^T, where i = 1, 2, ..., n; then the mapping relation is expressed as:
[Q_1 Q_2 … Q_n] = H·[P_1 P_2 … P_n] (3)
Step 2-4: solve the parametric equation of correction line l_k1 in image I_on. Specifically: define the coordinate matrix of l_k1 as Q = [Q_1 Q_2 … Q_n]; split Q, forming its 1st and 3rd rows into a new matrix X and taking the remaining part of Q as the vector Y; then the line equation of correction line l_k1 in image I_on is expressed as Y = W_1·X, where Y and X are known. Defining the parameter matrix W_1 = [α_1, β_1], we obtain:
W_1 = Y·X^T·(X·X^T)^{-1} (4)
W_1 represents the correction line l_k1 in image I_on.
Step 2-5: solve the parametric equation of correction line l_k2 in image I_on; the detailed process is the same as in Step 2-4, and the obtained parameter matrix W_2 is:
W_2 = Y·X^T·(X·X^T)^{-1} (5)
W_2 represents the correction line l_k2 in image I_on.
Step 2-6: solve the conversion relation mapping points on correction line l_k1 from the reference coordinate system to the image physical coordinate system.
Step 2-7: solve the conversion relation mapping points on correction line l_k2 from the reference coordinate system to the image physical coordinate system.
Step 2-8: solve the position coordinates [X_0, Y_0] of the camera in the reference coordinate system and the steering angle θ. θ is found from formula (13), where the variables b and c are known quantities obtained from the offline phase and the geometric constraint relations of the homography matrix; the variables Z_0 and φ are obtained as prior information by manual measurement; the focal length f of the camera is solved from formula (18). Substituting the value of θ into formula (13) yields the value of Y_0, and X_0 is found from formulas (10) and (11).
Step 2-9: solve the position coordinates of the user in the real world. Specifically: define the coordinate value in the world coordinate system of the starting point of correction line l_k1 of the reference coordinate system as T_r = [X_r, Y_r, Z_r]^T, and the rotation-angle relation between the plane of the marker target and the world coordinate system as R_r, where X_r, Y_r, and Z_r denote the offsets between the reference coordinate system and the world coordinate system along the X-, Y-, and Z-axis directions, respectively. Let L_s = [X_0, Y_0, 1]^T denote the coordinates of any point in the reference coordinate system; then its corresponding real-world coordinates L_r satisfy the relation:
L_r = R_r^{-1}·L_s - T_r (22)
According to formula (22), the user's coordinate information in the reference coordinate system is converted into the user's position coordinates in the real-world coordinate system, completing the estimate of the user's location.
The indoor positioning method of this embodiment is also called the online phase.
Specific Embodiment 5: this embodiment differs from Specific Embodiment 4 in that, in Step 2-3, rejecting the mismatched points among the m matched point pairs with the RANSAC algorithm specifically includes:
Step 2-3-1: set the initial values of the iteration count n and the match-point index i to 0.
Step 2-3-2: select 4 pairs of matched points from the m pairs obtained in Step 2-2, compute the homography matrix H between image I_on and image I_opt, and use the homography matrix H to compute, for the remaining pairs among the m matched pairs, the theoretical coordinates in the image coordinate system of the corresponding marker target.
Step 2-3-3: compute the Euclidean distance between the theoretical coordinates of the i-th matched point of image I_opt and the coordinates of the i-th feature point of image I_on, where 1 ≤ i ≤ m.
Step 2-3-4: judge whether the Euclidean distance of Step 2-3-3 is less than the Euclidean distance threshold; if so, execute Step 2-3-5; if not, execute Step 2-3-3. The Euclidean distance threshold here may be 0.01.
Step 2-3-5: increment n by 1 and judge whether the value of n exceeds the preset iteration count n_0 = 20; if so, keep the matched point pair and save the homography matrix H; if not, reject the matched point pair and execute Step 2-3-2.
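The loop of Steps 2-3-1 to 2-3-5 follows the general RANSAC pattern: hypothesize a model from a minimal sample, measure Euclidean reprojection error, and keep the pairs under the threshold. The sketch below implements that pattern for a translation-only model; this is a deliberate simplification of ours, since the embodiment fits a full homography from 4 pairs, which would need a DLT solver omitted here for brevity, and it tries every minimal sample deterministically instead of drawing random samples:

```python
import numpy as np

def ransac_translation(src, dst, threshold=0.01):
    """Keep the correspondences consistent with the best translation model.

    src, dst: (m, 2) arrays of matched point coordinates.
    Returns the indices of the largest inlier set found.
    """
    best_inliers = []
    for k in range(len(src)):                 # each pair is one minimal sample
        t = dst[k] - src[k]                   # hypothesized translation
        err = np.linalg.norm(src + t - dst, axis=1)  # Euclidean residuals
        inliers = [i for i in range(len(src)) if err[i] < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
dst = src + np.array([3.0, 4.0])
dst[3] = [0.0, 0.0]                           # one gross mismatch
inliers = ransac_translation(src, dst)
```

The mismatched fourth pair is rejected because no translation explains it together with the other three.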
The other steps and parameters are the same as one of Specific Embodiments 1 to 4.
Specific Embodiment 6: this embodiment differs from Specific Embodiment 4 or 5 in that, in Step 2-6, the pixel-coordinate equation of correction line l_k1 already found in the pixel coordinate system of image I_on is, after transformation, expressed in the image physical coordinate system as the equation l_1: u + b·v + c = 0. It is stipulated that, when the user acquires the image, the user's position has the coordinates (X_0, Y_0, Z_0) in the reference coordinate system of the current marker target, where Z_0 is a prior condition obtained by manual measurement and (X_0, Y_0) is the user location to be solved. The correction line l_k1 has another expression l_1' in the image physical coordinate system, and the expression of l_1' is equivalent to the correction-line equation l_1: u + b·v + c = 0; solving the conversion relation mapping points on correction line l_k1 from the reference coordinate system to the image physical coordinate system is therefore equivalent to solving the parameters of the equation l_1: u + b·v + c = 0. The detailed process is as follows:
Step 2-6-1: convert the reference coordinate system into the camera coordinate system. Specifically: translate the origin R_0 of the reference coordinate system by the vector (-X_0, -Y_0, -Z_0) to the origin of the camera coordinate system; rotate the X-Y plane counterclockwise about the Z-axis so that the Y-Z plane is parallel to the V-W plane of the camera coordinate system, the rotation angle being θ; rotate the Y-Z plane counterclockwise about the X-axis so that the X-Y plane is parallel to the U-V plane, the rotation angle being φ.
Step 2-6-2: solve the conversion relation mapping points on correction line l_k1 from the reference coordinate system to the camera coordinate system. It is stipulated that the counterclockwise direction of rotation is positive and that the reference coordinate system follows the left-handed coordinate-system criterion. Any point [x, y, z]^T in the reference coordinate system maps to the coordinates [u, v, w]^T in the camera coordinate system as:
[u, v, w]^T = R·[x, y, z]^T + [x_0, y_0, z_0]^T
where [u, v, w]^T and [x, y, z]^T are any point in the camera coordinate system and the reference coordinate system, respectively, and the variables x_0, y_0, z_0 on the right-hand side are obtained from the rotation matrix R and the coordinate vector [X_0, Y_0, Z_0] of the reference coordinate system, which can be expressed as formula (7):
[x_0, y_0, z_0]^T = R·[-X_0, -Y_0, -Z_0]^T (7)
The rotation matrix R is obtained by formula (8):
R = R_X·R_Z (8)
where the matrix R_Z denotes the rotation matrix obtained by first rotating counterclockwise about the Z-axis by θ degrees, and the matrix R_X denotes the rotation matrix obtained by rotating counterclockwise about the X-axis by φ degrees.
Define the coordinates of a point P on the X-axis of the reference coordinate system as [x, 0, 0]^T; then the coordinates [u_x, v_x, w_x]^T to which it maps in the camera coordinate system are obtained from:
[u_x, v_x, w_x]^T = R·[x, 0, 0]^T + [x_0, y_0, z_0]^T
so that u_x, v_x, and w_x are given by the first column of R scaled by x, shifted by [x_0, y_0, z_0]^T (formula (10)).
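The composition R = R_X·R_Z of formula (8) can be sketched as follows. The explicit matrix entries below are the standard counterclockwise elemental rotations and are our assumption, since the patent does not print the matrices and its sign conventions (left-handed frame) may differ:

```python
import numpy as np

def rot_z(theta):
    """Counterclockwise rotation by theta about the Z-axis (R_Z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(phi):
    """Counterclockwise rotation by phi about the X-axis (R_X)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def reference_to_camera(p, theta, phi, origin_offset):
    """Map a reference-frame point p into the camera frame:
    rotate by R = R_X . R_Z, then shift by [x_0, y_0, z_0]^T."""
    R = rot_x(phi) @ rot_z(theta)
    return R @ p + origin_offset

# With zero rotation angles the mapping reduces to the pure translation
p = np.array([1.0, 0.0, 0.0])
q = reference_to_camera(p, theta=0.0, phi=0.0,
                        origin_offset=np.array([0.0, 0.0, 2.0]))
```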
Step 2-6-3: solve the conversion relation mapping points on correction line l_k1 from the camera coordinate system to the image physical coordinate system. Define [u_p, v_p] as the coordinates in the image physical coordinate system of the point [u_x, v_x] of the camera coordinate system; from the geometric mapping relation of the camera one obtains:
u_p = f·u_x/w_x, v_p = f·v_x/w_x (11)
where f denotes the focal length of the camera. Substituting formula (11) into formula (10) and eliminating the variable x yields an equation (12) in u_p and v_p. Formula (12) and u + b·v + c = 0 represent the same straight line on the ICS, so the parameters follow from the method of undetermined coefficients.
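The perspective mapping of formula (11) is the standard pinhole projection; a minimal sketch, with a purely illustrative focal length and point:

```python
def project(f, u, v, w):
    """Project a camera-frame point (u, v, w) onto the image physical
    plane: u_p = f*u/w, v_p = f*v/w, as in formula (11)."""
    return f * u / w, f * v / w

# A point 2 units in front of the camera, offset 1 unit along the u-axis
up, vp = project(f=0.5, u=1.0, v=0.0, w=2.0)
```

Dividing by the depth w is what makes the mapping an equal-proportion scaling and projection, as the later Step 2-7-2 also relies on.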
The other steps and parameters are the same as in Specific Embodiment 4 or 5.
Specific Embodiment 7: this embodiment differs from one of Specific Embodiments 4 to 6 as follows:
In Step 2-7, the image pixel coordinate equation of correction line l_k2 already found in the image pixel coordinate system of I_on is, after transformation, expressed on the ICS as the equation l_2: u + b_1·v + c_1 = 0; the parameters of equation l_2 are known, and l_2 is equivalent to l_2'. Solving the conversion relation mapping points on correction line l_k2 from the reference coordinate system to the image physical coordinate system is equivalent to solving the equation l_2'. The detailed process is as follows:
Step 2-7-1: solve the conversion relation mapping points on correction line l_k2 from the reference coordinate system to the camera coordinate system. Define the coordinates of a point on correction line l_k2 in the reference coordinate system as [0, 0, z]^T; after conversion from the reference coordinate system to the camera coordinate system, its coordinates [u_y, v_y, w_y]^T in the camera coordinate system are:
[u_y, v_y, w_y]^T = R·[0, 0, z]^T + [x_0, y_0, z_0]^T
where the matrix R denotes the rotation matrix from the RCS to the CCS, identical to the R obtained by formula (8); substituting the values of x_0, y_0, z_0 from formula (7) yields the explicit form.
Step 2-7-2: solve the conversion relation mapping points on correction line l_k2 from the camera coordinate system to the image physical coordinate system, i.e. solve for the point [u_p, v_p] in the image physical coordinate system to which the point [u_y, v_y, w_y]^T of the camera coordinate system maps. This mapping is a process of equal-proportion scaling and projection; applying the mapping relation of formula (11) and eliminating the variable z yields formula (16). Formula (16) and the equation l_2: u + b_1·v + c_1 = 0 represent the same straight line in the image physical coordinate system, so by the method of undetermined coefficients formula (17) is obtained, in which the variables b_1, c_1, Z_0, and φ are known quantities; combining the two equations of formula (17) yields formula (18).
Step 2-7-3: solve the position coordinates [X_0, Y_0] of the camera in the reference coordinate system and the steering angle θ. θ is found from formula (13), where the variables b and c are known quantities obtained from the database construction process and the geometric constraint relations of the homography matrix, and the variables Z_0 and φ are obtained as prior information by manual measurement; the focal length f of the camera is solved from formula (18). Substituting the value of θ into formula (13) yields the value of Y_0, and X_0 is found from formulas (10) and (11).
The other steps and parameters are the same as one of Specific Embodiments 4 to 6.
The present invention can also have various other embodiments. Without departing from the spirit and substance of the present invention, those skilled in the art can make various corresponding changes and modifications in accordance with the present invention, but all such corresponding changes and modifications shall fall within the protection scope of the appended claims of the present invention.