CN103438904B - Inertial positioning method and system using vision-assisted correction - Google Patents

Inertial positioning method and system using vision-assisted correction

Info

Publication number
CN103438904B
CN103438904B (application CN201310386423.7A)
Authority
CN
China
Prior art keywords
data
correction
location
location mark
inertial sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310386423.7A
Other languages
Chinese (zh)
Other versions
CN103438904A (en)
Inventor
罗富强
纪家纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Qingqiu Intelligent Manufacturing Co.,Ltd.
Original Assignee
SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co Ltd filed Critical SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co Ltd
Priority to CN201310386423.7A priority Critical patent/CN103438904B/en
Publication of CN103438904A publication Critical patent/CN103438904A/en
Application granted
Publication of CN103438904B publication Critical patent/CN103438904B/en
Legal status: Active

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Navigation (AREA)

Abstract

The invention discloses an inertial positioning method and system using vision-assisted correction. The system includes an inertial sensor, a location mark for identifying a location point, a camera module for acquiring image information of the location mark, an image processing unit, and a correction unit. The image processing unit calculates spatial position information from the image information produced by the camera module and transmits it to the correction unit, which corrects the sensing data of the inertial sensor according to the spatial position information. The method provides a camera module and a location mark that are separate from each other: the camera module is arranged at a correction end and the location mark at an inertial sensor end, or the location mark is arranged at the correction end and the camera module at the inertial sensor end. The camera module photographs the location mark, correction data are calculated from the photographing result, and the sensing data of the inertial sensor are corrected with the correction data. The method has the advantages of short correction time, high correction accuracy, and low power consumption.

Description

Inertial positioning method and system using vision-assisted correction
Technical field
The present invention relates to inertial positioning methods and systems, and more particularly to an inertial positioning method and system using vision-assisted correction.
Background technology
Inertial sensors are now widely used for spatial positioning, but obvious problems remain in practical applications. The biggest problem arises in three-dimensional space: after a calibration point is set and the device is moved, the position and orientation information at the new point is no longer correct. A user who needs accurate orientation information must press a calibration key again, and when the user moves continuously, constant manual recalibration is needed to achieve a point-and-shoot effect. The inconvenience caused by this problem seriously harms the user experience.
Summary of the invention
The technical problem to be solved by the present invention is to provide an inertial positioning method and system using vision-assisted correction that is simple to calibrate and consumes little power.
The technical solution adopted by the present invention to solve this problem is to provide an inertial positioning method using vision-assisted correction, comprising the following steps:
S1: providing a camera module and a location mark that are separate from each other; the camera module is arranged at a correction end and the location mark is arranged at an inertial sensor end, or the location mark is arranged at the correction end and the camera module is arranged at the inertial sensor end;
S2: photographing the location mark with the camera module, calculating correction data from the photographing result, and correcting the sensing data of the inertial sensor with the correction data.
In step S1 of the inertial positioning method using vision-assisted correction of the present invention, the camera module includes two camera devices separated by a fixed distance, and the location mark includes one or more location marks.
Step S2 of the inertial positioning method using vision-assisted correction of the present invention includes:
S2-1: a benchmark setting step;
S2-2: a correction step;
The benchmark setting step includes:
S2-1-1: arranging the camera devices and the location mark at a known relative position, and measuring and recording the distance data;
S2-1-2: at this relative position, photographing the location mark with the camera devices to obtain image information of the location mark at this relative position, and converting this image information into start position data;
S2-1-3: calculating the benchmark data of the camera module from the start position data, the distance data, and the fixed distance data between the camera devices;
The correction step includes:
S2-2-1: photographing the location mark with the camera devices to obtain image information of the location mark at an arbitrary correction position, and converting this image information into correction position data;
S2-2-2: calculating the actual position data of the location mark from the correction position data, the benchmark data of the camera module, and the fixed distance data between the camera devices, and performing data-fusion correction on the position sensing data of the inertial sensor end according to the actual position data.
In step S2-1-2 of the inertial positioning method using vision-assisted correction of the present invention, the start position data includes start position coordinates converted from the image information at the relative position;
In step S2-1-3, the benchmark data of the camera module are calculated from the start position coordinates, the distance data, and the fixed distance data between the camera devices;
In step S2-2-1, the correction position data includes correction position coordinates converted from the image information at the correction position;
In step S2-2-2, the actual position data of the location mark includes the angles of the lines connecting the two camera devices at the correction position with the location mark, the magnitudes of the vectors connecting the camera devices with the location mark, and the calculated actual position coordinates of the location mark.
Step S2 of the inertial positioning method using vision-assisted correction of the present invention includes:
S2-1: at a start position, obtaining image information of the location mark at the start position with the camera devices, and converting this image information into start position data;
S2-2: at an arbitrary correction position, obtaining image information of the location mark at this arbitrary correction position with the camera devices, and converting this image information into correction position data;
S2-3: computing the correction position data against the start position data to obtain displacement correction data describing the movement of the inertial sensor from the start position to the arbitrary correction position;
S2-4: sensing, with the inertial sensor, its displacement from the start position to the arbitrary correction position to obtain measured displacement data;
S2-5: performing data fusion on the displacement correction data and the measured displacement data to correct the sensing data of the inertial sensor.
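Steps S2-1 through S2-5 can be sketched as follows. This is a minimal illustration under invented 2D positions and an assumed fixed fusion weight; it is not the patent's implementation, and the function names are hypothetical:

```python
# Hypothetical sketch of steps S2-1..S2-5: the camera-derived start and
# correction positions give a displacement correction vector, which is then
# blended with the inertially measured displacement. All numbers are invented.

def displacement_correction(start_pos, corr_pos):
    """S2-3: vision-derived displacement between the two photographed positions."""
    return tuple(c - s for s, c in zip(start_pos, corr_pos))

def fuse_displacements(vision_disp, inertial_disp, w_vision=0.8):
    """S2-5: a simple weighted fusion standing in for the patent's data fusion."""
    return tuple(w_vision * v + (1 - w_vision) * m
                 for v, m in zip(vision_disp, inertial_disp))

vision = displacement_correction((0.0, 0.0), (1.0, 0.5))   # S2-1..S2-3
inertial = (1.1, 0.45)                                     # S2-4 (drifted)
print(fuse_displacements(vision, inertial))                # S2-5
```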
In step S2-1 of the inertial positioning method using vision-assisted correction of the present invention, the start position data includes the start position coordinates converted from the image information at the start position, together with the angles of the lines connecting the two camera devices at the start position with the location mark and the magnitudes of the vectors connecting the camera devices with the location mark, calculated from the benchmark data of the camera devices and the fixed distance data between them;
In step S2-2, the correction position data includes the correction position coordinates converted from the image information at the correction position, together with the angles of the lines connecting the two camera devices at the correction position with the location mark and the magnitudes of the vectors connecting the camera devices with the location mark, calculated from the benchmark data of the two camera devices and the fixed distance data between them.
In step S1 of the inertial positioning method using vision-assisted correction of the present invention, the camera module includes at least one camera device, and the location mark includes two or more location marks separated by a fixed relative distance.
Step S2 of the inertial positioning method using vision-assisted correction of the present invention includes:
S2-1: a benchmark setting step;
S2-2: a correction step;
The benchmark setting step includes:
S2-1-1: arranging the camera device and the location marks at a known relative position, and measuring and recording the distance data;
S2-1-2: at this relative position, photographing the location marks with the camera device to obtain image information of the location marks at this relative position, and converting this image information into start position data;
S2-1-3: sensing, with the inertial sensor of the inertial sensor end, the reference attitude angle information at this relative position;
S2-1-4: calculating the benchmark data of the camera module from the reference attitude angle information, the start position data, the distance data, and the fixed distance data between the location marks;
The correction step includes:
S2-2-1: at an arbitrary correction position, obtaining correction position attitude angle information with the inertial sensor;
S2-2-2: photographing the location marks with the camera device to obtain image information of the location marks at this arbitrary correction position, and converting this image information into correction position data;
S2-2-3: calculating actual position data from the correction position attitude angle information, the correction position data, the benchmark data of the camera module, and the fixed distance data between the location marks, and performing data-fusion correction on the sensing data of the inertial sensor end according to the actual position data.
Step S2 of the inertial positioning method using vision-assisted correction of the present invention includes:
S2-1: at a start position, obtaining image information of the location marks at the start position with the camera device, and combining it with the initial attitude angle information of the inertial sensor to obtain start position data;
S2-2: at an arbitrary correction position, obtaining image information of the location marks at this arbitrary correction position with the camera device, and combining it with the correction position attitude angle information of the inertial sensor to obtain correction position data;
S2-3: computing the correction position data against the start position data to obtain displacement correction data describing the movement of the inertial sensor from the start position to the arbitrary correction position;
S2-4: sensing, with the inertial sensor, its displacement from the start position to the arbitrary correction position to obtain measured displacement data;
S2-5: performing data fusion on the displacement correction data and the measured displacement data to correct the sensing data of the inertial sensor.
The present invention also provides an inertial positioning system using vision-assisted correction, including an inertial sensor, and further including: a location mark for identifying a location point, a camera module for obtaining image information of the location mark, an image processing unit, and a correction unit;
The camera module is arranged at a correction end and the location mark is arranged at an inertial sensor end, with the correction unit communicatively connected to the inertial sensor in a wired or wireless manner; or the location mark is arranged at the correction end and the camera module is arranged at the inertial sensor end, with the correction unit communicatively connected to the inertial sensor in a wired or wireless manner;
The image processing unit calculates spatial position information from the image information produced by the camera module and transmits the spatial position information to the correction unit, and the correction unit corrects the sensing data of the inertial sensor according to the spatial position information.
In the inertial positioning system using vision-assisted correction of the present invention, the camera module includes two camera devices separated by a fixed distance and the location mark includes one or more location marks; or
the camera module includes at least one camera device and the location mark includes two or more location marks separated by a fixed relative distance.
Implementing the present invention has the following beneficial effects: by photographing the separately arranged location mark with the camera module, correction data are calculated and the sensing data of the inertial sensor are corrected, with the advantages of short correction time, high correction accuracy, and low power consumption.
Brief description of the drawings
The invention will be further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a schematic block diagram of the first embodiment of the inertial positioning system using vision-assisted correction of the present invention;
Fig. 2 is a schematic diagram of the principle of the benchmark setting step of the first embodiment of the inertial positioning method using vision-assisted correction of the present invention;
Fig. 3 is a schematic diagram of the principle of the correction step of the first embodiment of the inertial positioning method using vision-assisted correction of the present invention;
Fig. 4 is a schematic block diagram of the third embodiment of the inertial positioning system using vision-assisted correction of the present invention;
Fig. 5 is a schematic diagram of the principle of the benchmark setting step of the third embodiment of the inertial positioning method using vision-assisted correction of the present invention;
Fig. 6 is a schematic diagram of the principle of the correction step of the third embodiment of the inertial positioning method using vision-assisted correction of the present invention.
Detailed description of the invention
In order to understand the technical features, objects, and effects of the present invention more clearly, the detailed embodiments of the present invention are now described with reference to the drawings.
In the inertial positioning method and system using vision-assisted correction of the present invention, a camera module and a location mark that are separate from each other are provided; the camera module is arranged at the correction end and the location mark at the inertial sensor end, or the location mark is arranged at the correction end and the camera module at the inertial sensor end. The camera module then photographs the location mark, correction data are calculated from the photographing result, and the sensing data of the inertial sensor are corrected with the correction data, so that the error of the inertial sensor in the inertial positioning device can be corrected while consuming little power.
As shown in Figs. 1-3, the first embodiment of the inertial positioning system and method using vision-assisted correction of the present invention is described.
The inertial positioning system using vision-assisted correction includes an inertial sensor 21, a location mark 22 for identifying a location point, a camera module for obtaining image information of the location mark 22, an image processing unit 12, and a correction unit 13.
In this embodiment, the camera module is arranged at the correction end 10 and includes at least two camera devices 11 separated by a fixed distance; of course, the number of camera devices 11 may be larger. The correction end 10 may be any fixedly installed device such as a display screen, a projection screen, a television, or a game console, and the camera devices 11 may use CMOS positioning sensors or other kinds of imaging devices to capture images of the location mark 22.
The location mark 22 is arranged at the inertial sensor end 20 and identifies its position. In this embodiment, the inertial sensor end 20 is a light gun, a 3D mouse, or the like, and a single location mark 22 is arranged at the front of the inertial sensor end 20 for the camera module to photograph and capture. The location mark 22 may be an infrared LED lamp; of course, other marks may also be chosen, such as a reflective sticker or a sticker with strong color contrast, as long as it is easy to identify in the images photographed by the camera module.
It should be understood that the number of location marks 22 may also be two or more; by identifying the ID information of the different location marks 22, their positions are obtained in turn. For example, by controlling the times at which the LEDs light up separately, or by controlling the flicker pattern of the LED light, the light wave (light is also an electromagnetic wave) can carry an individual ID message for identification.
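The LED ID scheme described above can be illustrated as follows. This is a hypothetical sketch assuming each LED is assigned a unique on/off code sampled once per camera frame; the codes and names are invented, not taken from the patent:

```python
# Hypothetical sketch: identifying LED location marks by their blink patterns.
# Assumes each LED blinks a fixed binary code and the camera observes one
# on/off bit per frame; the codes below are illustrative only.

LED_CODES = {
    "mark_1": (1, 0, 1, 1),  # on-off-on-on over four frames
    "mark_2": (1, 1, 0, 0),
}

def identify_mark(observed_bits):
    """Match an observed on/off sequence against the known LED codes."""
    for mark_id, code in LED_CODES.items():
        if tuple(observed_bits) == code:
            return mark_id
    return None  # unknown or corrupted pattern

print(identify_mark([1, 0, 1, 1]))  # → mark_1
```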
The image processing unit 12 and the correction unit 13 are both located at the correction end 10. The image processing unit 12 calculates the spatial position information of the location mark 22 from the image information produced by the camera module and sends the spatial position information to the correction unit 13; the correction unit 13 then processes the spatial position information to obtain position correction information and corrects the sensing data of the inertial sensor 21. The correction unit 13 can be connected to the inertial sensor 21 in a wired or wireless manner, so that the sensing data of the inertial sensor 21 can be corrected.
When vision-assisted correction is used for inertial positioning, benchmark setting must be carried out first, and the correction step is performed afterwards.
When the benchmark setting step is performed, the camera devices 11 and the location mark 22 are arranged at a known relative position, and the distance between the camera devices 11 and the location mark 22 is measured to obtain the distance data; the camera devices 11 photograph the location mark 22 to obtain image information of the location mark 22 at this relative position, and this image information is converted into start position data; the benchmark data of the camera module are then calculated from the start position data, the distance data, and the fixed distance data between the two camera devices 11, and these benchmark data serve as the correction reference.
For example, as shown in Fig. 2, a user stands at a distance of 2 meters (i.e., the distance data) with the inertial sensor end 20 carrying the location mark 22 (point B in the figure, e.g., an infrared LED) facing the midpoint of the line between the two camera devices 11 (point A in the figure). The camera devices 11 photograph the location mark 22 at the 2-meter distance to obtain image information, and this image information is converted into start position data. According to the principle of rectilinear propagation of light, the proportionality of the sides of similar triangles can be used with the following equations to obtain the benchmark data of the camera module:
$$\frac{H x_1}{h_1} + \frac{H x_2}{h_2} = L$$

$$\tan\alpha = \frac{x_1}{h_1}, \qquad \tan\beta = \frac{x_2}{h_2}$$
where H is the distance data (which can be measured in advance), L is the fixed distance data between the two camera devices 11 (which can be measured in advance), x1 and x2 are the start position data (obtained from the projections photographed by the camera devices 11), and h1 and h2 are the benchmark data to be solved for, which serve as the benchmark data of the two camera devices 11. It should be understood that the benchmark data may cover the horizontal direction, the vertical direction, or other directions, for use in the subsequent correction step.
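Under the additional assumption that both camera devices share a single benchmark parameter (h1 = h2 = h, as when the mark faces the midpoint of the camera baseline), the equations above reduce to h = H(x1 + x2)/L. A minimal sketch under that assumption, with invented numbers:

```python
import math

def calibrate(H, L, x1, x2):
    """Benchmark setting: solve h from H*(x1 + x2)/h = L, then the angles."""
    h = H * (x1 + x2) / L          # shared benchmark (focal-length-like) parameter
    alpha = math.atan2(x1, h)      # tan(alpha) = x1 / h
    beta = math.atan2(x2, h)       # tan(beta)  = x2 / h
    return h, alpha, beta

# H = 2 m to the mark, L = 0.4 m camera baseline, symmetric projections:
h, alpha, beta = calibrate(2.0, 0.4, 0.01, 0.01)
print(round(h, 6))  # → 0.1
```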
It will be appreciated that the benchmark data can also be measured again at multiple positions: the camera devices 11 and the location mark 22 are arranged at a second relative position, and second distance data are obtained; the camera devices 11 photograph the location mark 22 to obtain image information of the location mark 22 at this second relative position, and this image information is converted into second start position data; second benchmark data of the camera module are calculated from the second start position data, the second distance data, and the fixed distance data between the camera devices 11, and the second benchmark data are merged with the first benchmark data to obtain the final benchmark data. For example, calibrating again at a position 3 meters away, using the same method as the first time, yields more accurate benchmark data.
When the sensing data drift and the sensing of the sensor end becomes inaccurate, correction can be started by a trigger button at the sensor end, by an automatic system start, or by other triggers.
When correction starts, the camera devices 11 photograph the location mark 22 to obtain image information at an arbitrary correction position, and this image information is converted into correction position data. Then, from the correction position data, the benchmark data of the camera module, the fixed distance data between the camera devices 11, and so on, the angles of the lines connecting the two camera devices 11 with the location mark 22, the magnitudes of the vectors connecting the camera devices 11 with the location mark 22, and the calculated actual position coordinates of the location mark are obtained; the actual position data of the location mark 22 (i.e., of the inertial sensor end 20) are thereby obtained, and the sensing data at this correction position are corrected with these actual position data.
For example, as shown in Fig. 3, when the inertial sensor end 20 is at an arbitrary correction position, A in the figure indicates the positions of the two camera devices 11 and B indicates the position of the location mark; the following equations can be used:
$$\frac{H x_1}{h_1} + \frac{H x_2}{h_2} = L$$

$$\tan\alpha = \frac{x_1}{h_1}, \qquad \tan\beta = \frac{x_2}{h_2}$$
where H is the distance to be solved for (i.e., the distance between the inertial sensor end 20 and the camera devices 11), h1 and h2 are the benchmark parameters of the camera devices 11 obtained in the benchmark setting part, x1 and x2 are the correction position data of the location mark 22 obtained by photographing and processing with the camera devices 11, and L is the fixed distance data between the two camera devices 11, which can be measured in advance.
With the two equations above, the distance and attitude of the inertial sensor end 20 relative to the camera devices 11 can be solved, yielding the actual position data of the inertial sensor end 20, and the sensing data sensed by the inertial sensor 21 are corrected according to these actual position data. The correction of the sensing data of the inertial sensor 21 with the actual position data can use methods such as Kalman filtering, extended Kalman filtering, or least squares, in which the actual position data and the sensing data are fused and then filtered.
It should be understood that, in addition to the perpendicular distance H, the horizontal distances and height differences between the location mark 22 and each of the two camera devices 11 can also be solved from similar proportional relationships; once an origin of the space is chosen, the three-dimensional spatial coordinates of the location mark 22 can be obtained.
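The correction-step triangulation can be sketched as follows, reusing the same illustrative benchmark values: solving H·x1/h1 + H·x2/h2 = L for H gives the distance, and the arctangents recover the viewing angles. The numbers are invented for illustration:

```python
import math

def locate(h1, h2, L, x1, x2):
    """Correction step: distance H and viewing angles from new projections."""
    H = L / (x1 / h1 + x2 / h2)    # from H*x1/h1 + H*x2/h2 = L
    alpha = math.atan2(x1, h1)
    beta = math.atan2(x2, h2)
    return H, alpha, beta

# With benchmark h1 = h2 = 0.1 and baseline L = 0.4 m, projections of 0.01
# on each sensor place the mark 2 m from the camera line:
H, alpha, beta = locate(0.1, 0.1, 0.4, 0.01, 0.01)
print(round(H, 6))  # → 2.0
```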
In the second embodiment of the inertial positioning system and method using vision-assisted correction of the present invention, the sensing data are corrected by fusing the data of the start position and the correction position.
Similar to the previous embodiment, the system includes an inertial sensor 21, a location mark 22 for identifying a location point, a camera module for obtaining image information of the location mark 22, an image processing unit 12, a correction unit 13, and so on. The camera module includes two camera devices 11 separated by a fixed distance and fixedly arranged at the correction end 10; there are one or more location marks 22, fixed at the inertial sensor end 20.
At the start position, the camera devices 11 obtain image information of the location mark 22 at this start position, and this image information is converted into start position data. The start position data include the start position coordinates converted from the image information at the start position, together with the angles of the lines connecting the two camera devices 11 at the start position with the location mark 22 and the magnitudes of the vectors connecting the camera devices 11 with the location mark 22, calculated from the benchmark data of the camera devices 11 and the fixed distance data.
Then, the inertial sensor end 20 is moved to an arbitrary position serving as the correction position; the camera devices 11 obtain image information of the location mark 22 at this arbitrary correction position, and this image information is converted into correction position data. Likewise, the correction position data include the correction position coordinates converted from the image information at the correction position, together with the angles of the lines connecting the two camera devices 11 at the correction position with the location mark 22 and the magnitudes of the vectors connecting the camera devices 11 with the location mark 22, calculated from the benchmark data of the two camera devices 11 and the fixed distance data.
Then, the obtained correction position data are computed against the start position data to obtain the displacement correction data describing the movement of the inertial sensor 21 from the start position to the arbitrary correction position. For example, data such as the movement of the inertial sensor end 20 from the start position to the correction position, the vector difference, and the angle difference relative to the camera module are digitally filtered with a digital filter to calculate the displacement correction data. It should be understood that the digital filtering may be Kalman filtering, extended Kalman filtering, least squares, weighted averaging, and so on.
Meanwhile, as the inertial sensor end 20 moves from the start position to the correction position, the inertial sensor 21 senses the movement to obtain the measured displacement data from the start position to the arbitrary correction position. Likewise, a digital filter can be used to filter the measured displacement data; the digital filtering may be Kalman filtering, extended Kalman filtering, least squares, weighted averaging, and so on.
Then, the obtained displacement correction data and measured displacement data are fused, eliminating the accumulated drift and correcting the sensing data of the inertial sensor 21, so that the inertial sensor end 20 is positioned accurately.
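The fusion of the two displacement estimates can be sketched as a one-step weighted update, standing in for the Kalman-filter or least-squares fusion named in the text. The noise variances are assumed for illustration:

```python
# A minimal data-fusion sketch, not the patent's implementation: a static
# inverse-variance weighting (a one-step Kalman-style update) that blends the
# vision-derived displacement with the inertially measured displacement.

def fuse(vision_disp, inertial_disp, var_vision=0.01, var_inertial=0.25):
    """Fuse two displacement estimates; the variances are illustrative."""
    gain = var_inertial / (var_vision + var_inertial)   # Kalman-like gain
    return inertial_disp + gain * (vision_disp - inertial_disp)

# The inertial path has drifted to 1.30 m while vision measures 1.00 m;
# the fused estimate leans toward the lower-variance vision measurement.
fused = fuse(1.00, 1.30)
print(round(fused, 6))  # → 1.011538
```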
It should be understood that in this embodiment the method of the first embodiment can be used to obtain the benchmark data of the two camera devices 11, which is not repeated here.
As shown in Fig. 4, the third embodiment of the inertial positioning system and method using vision-assisted correction of the present invention is described. The system includes an inertial sensor 41, location marks 31 for identifying location points, a camera module for obtaining image information of the location marks 31, an image processing unit 32, a correction unit 33, and so on; it is essentially identical to the first embodiment and is not repeated here.
In this embodiment, the camera module includes at least one camera device 42 fixedly mounted at the inertial sensor end 40, and there are two or more location marks 31 arranged at the correction end 30 with a fixed relative distance between them.
The method likewise includes a benchmark setting step and a correction step. When the benchmark setting step is performed, the camera device 42 and the location marks 31 are arranged at a known relative position, and the distance between the camera device 42 and the location marks 31 is measured; the camera device 42 photographs the location marks 31 to obtain image information of the location marks 31 at this relative position, and this image information is converted into start position data; at the same time, the inertial sensor 41 of the inertial sensor end 40 senses the reference attitude angle information at this relative position. Then, the benchmark data of the camera module are calculated from the reference attitude angle information, the start position data, the distance data, and the fixed distance data between the location marks 31, and these benchmark data serve as the correction reference.
For example, as shown in Fig. 5, a user stands at a distance of 2 meters (i.e., the distance data) with the inertial sensor end 40 carrying the camera device 42 (point A in the figure) facing the midpoint of the line between the two location marks 31 (e.g., infrared LEDs, points B in the figure). The camera device 42 photographs the location marks 31 at the 2-meter distance to obtain image information, and this image information is converted into start position data. According to the principle of rectilinear propagation of light, the proportionality of the sides of similar triangles can be used with the following equations to obtain the benchmark data of the camera module:
$$h = \frac{L H}{x_1 + x_2}$$

$$\tan\alpha = \frac{x_1}{h}, \qquad \tan\beta = \frac{x_2}{h}$$
where H is the distance data (which can be measured in advance), L is the fixed distance data between the two location marks 31 (which can be measured in advance), x1 and x2 are the start position data (obtained from the projection transform of the images photographed by the camera device 42), and h is the benchmark data to be solved for, which serves as the benchmark data of the camera device 42. It should be understood that the benchmark data may cover the horizontal direction, the vertical direction, or other directions, for use in the subsequent correction step.
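The single-camera benchmark calculation above can be sketched as follows; the distances and projections are invented for illustration:

```python
import math

def calibrate_single_camera(H, L, x1, x2):
    """Benchmark setting for one camera viewing two marks a distance L apart."""
    h = L * H / (x1 + x2)          # benchmark parameter of the camera device
    alpha = math.atan2(x1, h)      # tan(alpha) = x1 / h
    beta = math.atan2(x2, h)       # tan(beta)  = x2 / h
    return h, alpha, beta

# H = 2 m to the marks, marks L = 0.4 m apart, symmetric projections of 0.05:
h, alpha, beta = calibrate_single_camera(2.0, 0.4, 0.05, 0.05)
print(round(h, 6))  # → 8.0
```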
It will be appreciated that the benchmark data may also be measured again at further positions: the camera device 42 and the location markers 31 are placed at a second relative position and second distance data is obtained; the camera device 42 captures image information of the location markers 31 at this second relative position, and the image information is converted into second start position data; second benchmark data of the photographing module is then calculated from the second start position data, the second distance data and the fixed distance data between the location markers 31, and the second benchmark data is merged with the first benchmark data to obtain the final benchmark data. For example, repeating the procedure at a distance of 3 metres yields more accurate benchmark data.
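The patent does not spell out how benchmark measurements from several positions are merged; a plain average is one minimal interpretation (illustrative only, not the patent's stated rule):

```python
def merge_benchmarks(benchmarks):
    """Merge benchmark values measured at several relative positions
    (e.g. at 2 m and again at 3 m) by simple averaging.  The patent
    leaves the merge rule open, so averaging is only one possible choice."""
    return sum(benchmarks) / len(benchmarks)
```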
When the sensing data drifts and the readings at the sensor end become inaccurate, correction can be started by pressing a button at the sensor end, by an automatic system trigger, or in other ways.
When correction starts, the inertial sensor 41 obtains correction position attitude angle information, and the camera device 42 captures image information of the location markers 31 at an arbitrary correction position; this image information is converted into correction position data. Then, from the correction position attitude angle information, the correction position data, the benchmark data of the camera device 42 and the fixed distance data between the location markers 31, the angles between the camera device 42 and the lines to the two location markers 31, the magnitudes of the vectors connecting the camera device 42 and the location markers 31, and the actual position coordinates of the camera device are calculated, giving the actual position data of the camera device 42 (that is, of the inertial sensor end 40). This actual position data is then used to correct the sensing data at the correction position.
For example, as shown in Figure 6, when the inertial sensor end 40 is at an arbitrary correction position, position A in the figure is the position of the camera device 42 and position B is that of the two location markers 31. The following formulas apply:
H = L·h / (x1 + x2)
tan α = x1 / h,  tan β = x2 / h
Here H is the distance to be found (the distance between the inertial sensor end 40 and the location markers 31), h is the benchmark parameter of the camera device 42 obtained in the benchmark setting step, x1 and x2 are the correction position data of the location markers 31 obtained by shooting and processing with the camera device 42, and L is the fixed distance between the two location markers 31, which can be measured in advance.
From the two formulas above, the distance and attitude of the inertial sensor end 40 relative to the location markers 31 can each be found, giving the actual position data of the inertial sensor end 40, and the sensing data of the inertial sensor 41 is corrected according to this actual position data. The correction fuses the actual position data with the sensing data and filters the result, using methods such as Kalman filtering, extended Kalman filtering or least squares.
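A sketch of this correction pass: the distance is recovered from the patent's relation H = L·h/(x1 + x2), and a one-dimensional Kalman-style update stands in for the filtering methods listed above. The scalar simplification and all names are assumptions, not the patent's implementation.

```python
def corrected_distance(L, h, x1, x2):
    """Distance between inertial sensor end 40 and the markers,
    from the relation H = L*h / (x1 + x2)."""
    return L * h / (x1 + x2)

def kalman_blend(inertial_pos, inertial_var, vision_pos, vision_var):
    """One scalar Kalman update: blend the drifting inertial estimate
    with the vision-derived fix, weighted by their variances."""
    k = inertial_var / (inertial_var + vision_var)  # Kalman gain
    pos = inertial_pos + k * (vision_pos - inertial_pos)
    var = (1.0 - k) * inertial_var
    return pos, var
```

When both estimates carry equal variance the blend lands halfway between them and the variance halves, which is the expected behaviour of a single Kalman measurement update.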
It should be understood that, in addition to the perpendicular distance H, the horizontal distance and the height difference between the location marker 31 and each of the two camera devices 42 can be found by the same proportional relation; choosing an origin in space then yields the three-dimensional spatial coordinates of the location marker 31.
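By the same proportional relation, a full 3-D coordinate follows once an origin is chosen. A minimal pinhole-style sketch, where the scale parameter f plays the role of the benchmark data; the names and this simplified form are assumptions:

```python
def marker_xyz(u, v, H, f):
    """3-D coordinate of a location marker: lateral offsets scale with
    depth by the same similar-triangles proportion.

    u, v -- horizontal and vertical image offsets of the marker
    H    -- perpendicular distance found in the correction step
    f    -- benchmark scale of the camera device
    """
    x = u * H / f  # horizontal distance
    y = v * H / f  # height difference
    z = H          # perpendicular distance
    return (x, y, z)
```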
In the fourth embodiment of the vision-aided-correction inertial positioning method of the present invention, the sensing data is corrected by fusing the data of the start position and of the correction position.
As in the previous embodiment, the location markers 31 comprise two location markers 31 at a fixed distance, fixedly mounted on the correction end 30; and there are one or more camera devices 42, fixed to the inertial sensor end 40.
At the start position, the camera device 42 obtains image information of the location markers 31 at this start position, which is combined with the initial attitude angle information of the inertial sensor 41 to obtain start position data. The start position data includes the start position coordinates converted from the image information of the start position, as well as the angles between the camera device 42 at the start position and the lines to the two location markers 31 and the magnitudes of the vectors connecting the camera device 42 and the location markers 31, calculated from the benchmark data of the camera device 42 and the fixed distance data of the location markers 31.
Then, the inertial sensor end 40 is moved to an arbitrary position, taken as the correction position. The camera device 42 obtains image information of the location markers 31 at this arbitrary correction position, which is combined with the correction position attitude angle information of the inertial sensor 41 to obtain correction position data. Likewise, the correction position data includes the correction position coordinates converted from the image information of the correction position, as well as the angles between the camera device 42 at the correction position and the lines to the two location markers 31 and the magnitudes of the vectors connecting the camera device 42 and the location markers 31, calculated from the benchmark data of the camera device 42 and the fixed distance data of the location markers 31.
Then, the correction position data is computed together with the start position data to obtain displacement correction data for the movement of the inertial sensor 41 from the start position to the arbitrary correction position. For example, for the movement of the inertial sensor end 40 from the start position to the correction position, data such as the angle differences and vector differences relative to the location markers 31 are digitally filtered with a digital filter to obtain the displacement correction data. It should be understood that the digital filtering may be Kalman filtering, extended Kalman filtering, least squares, weighted averaging, and the like.
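The "digital filter" here can be any of the options listed; a windowed moving average is the simplest stand-in. This is an illustrative choice, not the filter the patent prescribes:

```python
from collections import deque

class MovingAverage:
    """Minimal digital filter: a windowed moving average applied to
    angle/vector difference samples before forming the displacement
    correction data."""

    def __init__(self, window):
        self.buf = deque(maxlen=window)

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)
```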
Meanwhile, as the inertial sensor end 40 moves from the start position to the correction position, the inertial sensor 41 senses the motion, and measured displacement data for the movement from the start position to the arbitrary correction position is obtained. Likewise, the measured displacement data may be digitally filtered with a digital filter, which may be a Kalman filter, an extended Kalman filter, least squares, weighted averaging, and the like.
Then, the displacement correction data and the measured displacement data are fused to eliminate accumulated drift and correct the sensing data of the inertial sensor 41, achieving accurate positioning of the inertial sensor end 40.
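The drift-elimination fusion can be sketched as a complementary blend: integrate the inertial displacement steps, and pull the running estimate toward the vision-derived displacement whenever a fix is available. The weight and all names are illustrative assumptions:

```python
def fuse_track(inertial_steps, vision_fixes, weight=0.5):
    """Integrate inertial displacement steps; where a vision-derived
    fix exists, blend toward it so accumulated drift cannot grow.

    inertial_steps -- per-interval displacements from the inertial sensor
    vision_fixes   -- {step index: vision-derived absolute displacement}
    """
    pos = 0.0
    track = []
    for i, step in enumerate(inertial_steps):
        pos += step  # dead-reckoned position (drifts over time)
        fix = vision_fixes.get(i)
        if fix is not None:
            pos = weight * pos + (1.0 - weight) * fix  # drift correction
        track.append(pos)
    return track
```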
It should be understood that, in the present embodiment, the method of the first embodiment may be used to obtain the benchmark data of the two camera devices 42, which is not repeated here.
The above are merely preferred embodiments of the present invention and do not limit the present invention; for those skilled in the art, the present invention admits various modifications and variations. Any modification, equivalent substitution, improvement and the like made within the spirit and principle of the present invention shall fall within the scope of the claims of the present invention.

Claims (7)

1. A vision-aided-correction inertial positioning method, characterised by comprising the following steps:
S1: providing a separate photographing module and location marker; the photographing module being arranged at a correction end and the location marker at an inertial sensor end, or the location marker being arranged at the correction end and the photographing module at the inertial sensor end;
S2: photographing the location marker with the photographing module, calculating correction data from the photographing result, and correcting the sensing data of the inertial sensor with the correction data;
wherein, in step S1, the photographing module comprises two camera devices at a fixed distance, and the location marker comprises one or more location markers;
wherein step S2 comprises:
S2-1: a benchmark setting step;
S2-2: a correction step;
the benchmark setting step comprising:
S2-1-1: placing the camera devices and the location marker at a known relative position, and measuring distance data;
S2-1-2: at this relative position, photographing with the camera devices to obtain image information of the location marker at this relative position, and converting the image information into start position data;
S2-1-3: calculating benchmark data of the photographing module from the start position data, the distance data and the fixed distance data between the camera devices;
the correction step comprising:
S2-2-1: photographing with the camera devices to obtain image information of the location marker at an arbitrary correction position, and converting the image information into correction position data;
S2-2-2: calculating actual position data of the location marker from the correction position data, the benchmark data of the photographing module and the fixed distance data between the camera devices, and applying data-fusion correction to the position sensing data of the inertial sensor end according to the actual position data.
2. The vision-aided-correction inertial positioning method according to claim 1, characterised in that, in step S2-1-2, the start position data includes start position coordinates converted from the image information at the relative position; and in step S2-1-3, the benchmark data of the photographing module is calculated from the start position coordinates, the distance data and the fixed distance data between the camera devices;
in step S2-2-1, the correction position data includes correction position coordinates converted from the image information at the correction position;
in step S2-2-2, the actual position data of the location marker includes the calculated angles between the two camera devices at the correction position and the lines to the location marker, the magnitudes of the vectors connecting the camera devices and the location marker, and the actual position coordinates of the location marker.
3. The vision-aided-correction inertial positioning method according to claim 1, characterised in that step S2 further comprises:
S2-3: computing the correction position data together with the start position data to obtain displacement correction data for the movement of the inertial sensor from the start position to the arbitrary correction position;
S2-4: sensing, with the inertial sensor, its displacement from the start position to the arbitrary correction position, to obtain measured displacement data;
S2-5: fusing the displacement correction data and the measured displacement data to correct the sensing data of the inertial sensor.
4. The vision-aided-correction inertial positioning method according to claim 3, characterised in that, in step S2-1, the start position data includes start position coordinates converted from the image information of the start position, together with the angles between the two camera devices at the start position and the lines to the location marker, and the magnitudes of the vectors connecting the camera devices and the location marker, calculated from the benchmark data of the camera devices and the fixed distance data between the camera devices;
in step S2-2, the correction position data includes correction position coordinates converted from the image information of the correction position, together with the angles between the two camera devices at the correction position and the lines to the location marker, and the magnitudes of the vectors connecting the camera devices and the location marker, calculated from the benchmark data of the two camera devices and the fixed distance data between the camera devices.
5. A vision-aided-correction inertial positioning method, characterised by comprising the following steps:
S1: providing a separate photographing module and location markers; the photographing module being arranged at a correction end and the location markers at an inertial sensor end, or the location markers being arranged at the correction end and the photographing module at the inertial sensor end;
S2: photographing the location markers with the photographing module, calculating correction data from the photographing result, and correcting the sensing data of the inertial sensor with the correction data;
wherein, in step S1, the photographing module comprises at least one camera device, and the location markers comprise two or more location markers at a fixed relative distance;
wherein step S2 comprises:
S2-1: a benchmark setting step;
S2-2: a correction step;
the benchmark setting step comprising:
S2-1-1: placing the camera device and the location markers at a known relative position, and measuring distance data;
S2-1-2: at this relative position, photographing with the camera device to obtain image information of the location markers at this relative position, and converting the image information into start position data;
S2-1-3: sensing reference attitude angle information at this relative position with the inertial sensor of the inertial sensor end;
S2-1-4: calculating benchmark data of the photographing module from the reference attitude angle information, the start position data, the distance data and the fixed distance data between the location markers;
the correction step comprising:
S2-2-1: at an arbitrary correction position, obtaining correction position attitude angle information with the inertial sensor;
S2-2-2: photographing with the camera device to obtain image information of the location markers at this arbitrary correction position, and converting the image information into correction position data;
S2-2-3: calculating actual position data from the correction position attitude angle information, the correction position data, the benchmark data of the photographing module and the fixed distance data between the location markers, and applying data-fusion correction to the sensing data of the inertial sensor end according to the actual position data.
6. The vision-aided-correction inertial positioning method according to claim 5, characterised in that step S2 further comprises:
S2-3: computing the correction position data together with the start position data to obtain displacement correction data for the movement of the inertial sensor from the start position to the arbitrary correction position;
S2-4: sensing, with the inertial sensor, its displacement from the start position to the arbitrary correction position, to obtain measured displacement data;
S2-5: fusing the displacement correction data and the measured displacement data to correct the sensing data of the inertial sensor.
7. A vision-aided-correction inertial positioning system, comprising an inertial sensor, characterised by
further comprising: a location marker for marking a positioning point, a photographing module for obtaining image information of the location marker, an image processing unit, and a correction unit;
the photographing module being arranged at a correction end and the location marker at an inertial sensor end, with the correction unit in wired or wireless communication with the inertial sensor; or the location marker being arranged at the correction end and the photographing module at the inertial sensor end, with the correction unit in wired or wireless communication with the inertial sensor;
the image processing unit calculating spatial position information from the image information produced by the photographing module and transmitting the spatial position information to the correction unit, and the correction unit correcting the sensing data of the inertial sensor according to the spatial position information;
the photographing module comprising two camera devices at a fixed distance and the location marker comprising one or more location markers; or
the photographing module comprising at least one camera device, and the location markers comprising two or more location markers at a fixed relative distance.
CN201310386423.7A 2013-08-29 2013-08-29 A kind of inertial positioning method and system using vision auxiliary corrective Active CN103438904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310386423.7A CN103438904B (en) 2013-08-29 2013-08-29 A kind of inertial positioning method and system using vision auxiliary corrective


Publications (2)

Publication Number Publication Date
CN103438904A CN103438904A (en) 2013-12-11
CN103438904B true CN103438904B (en) 2016-12-28

Family

ID=49692600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310386423.7A Active CN103438904B (en) 2013-08-29 2013-08-29 A kind of inertial positioning method and system using vision auxiliary corrective

Country Status (1)

Country Link
CN (1) CN103438904B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101782642A (en) * 2010-03-09 2010-07-21 山东大学 Method and device for absolutely positioning measurement target by multi-sensor fusion
CN102104791A (en) * 2009-12-17 2011-06-22 财团法人工业技术研究院 Video camera calibration system and coordinate data generation system, and method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5559997B2 (en) * 2009-07-17 2014-07-23 株式会社トプコン Position measuring method and position measuring apparatus


Also Published As

Publication number Publication date
CN103438904A (en) 2013-12-11


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231026

Address after: C-128, No. 69 Shuangfeng Road, Economic Development Zone, Pukou District, Nanjing City, Jiangsu Province, 210000

Patentee after: Jiangsu Qingqiu Intelligent Manufacturing Co.,Ltd.

Address before: 518049, Guangdong, Shenzhen Futian District Meilin Road, No. three, blue sky green homes on the third floor of the annex

Patentee before: SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co.,Ltd.