CN103438904A - Inertial positioning method and system using vision-aided correction - Google Patents


Info

Publication number
CN103438904A
Authority
CN
China
Prior art keywords
data
correction
positioning mark
inertial sensor
camera head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103864237A
Other languages
Chinese (zh)
Other versions
CN103438904B (en)
Inventor
罗富强
纪家纬
Current Assignee (the listed assignees may be inaccurate)
Jiangsu Qingqiu Intelligent Manufacturing Co.,Ltd.
Original Assignee
SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co Ltd filed Critical SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co Ltd
Priority to CN201310386423.7A priority Critical patent/CN103438904B/en
Publication of CN103438904A publication Critical patent/CN103438904A/en
Application granted granted Critical
Publication of CN103438904B publication Critical patent/CN103438904B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Navigation (AREA)

Abstract

The invention discloses an inertial positioning method and system using vision-aided correction. The system comprises an inertial sensor, a positioning mark, a camera module, an image processing unit and a correction unit. The positioning mark identifies a positioning point, and the camera module obtains image information of the positioning mark. The image processing unit computes spatial position information from the image information produced by the camera module and transmits it to the correction unit, which corrects the sensed data of the inertial sensor according to the spatial position information. The method comprises: separately arranging the camera module and the positioning mark, with the camera module at the correction end and the positioning mark at the inertial sensor end, or the positioning mark at the correction end and the camera module at the inertial sensor end; photographing the positioning mark with the camera module and calculating correction data from the shooting result; and correcting the sensed data of the inertial sensor with the correction data. The method offers short correction time, accurate correction, low power consumption and other advantages.

Description

Inertial positioning method and system using vision-aided correction
Technical field
The present invention relates to inertial positioning methods and systems, and in particular to an inertial positioning method and system using vision-aided correction.
Background technology
Inertial sensors are now widely used for spatial positioning, but clear problems remain in practice. The biggest is that after a correction is performed at one point in three-dimensional space, the position and orientation information reported once the device moves away from that point is no longer accurate. To recover accurate bearing information, the user must press the calibration key again; a user who moves continuously must recalibrate manually over and over to achieve a point-where-you-aim effect. The inconvenience caused by this problem seriously harms the user experience.
Summary of the invention
The technical problem to be solved by the present invention is to provide an inertial positioning method and system using vision-aided correction that are simple to calibrate and consume little power.
The technical solution adopted by the present invention is to provide an inertial positioning method using vision-aided correction, comprising the following steps:
S1: separately arranging a camera module and a positioning mark, with the camera module at the correction end and the positioning mark at the inertial sensor end, or the positioning mark at the correction end and the camera module at the inertial sensor end;
S2: photographing the positioning mark with the camera module, calculating correction data from the shooting result, and correcting the sensed data of the inertial sensor with the correction data.
In step S1 of the inertial positioning method using vision-aided correction of the present invention, the camera module comprises two cameras at a fixed distance from each other, and the positioning mark comprises one or more marks.
Step S2 of the inertial positioning method using vision-aided correction of the present invention comprises:
S2-1: a benchmark-setting step;
S2-2: a correction step.
The benchmark-setting step comprises:
S2-1-1: placing the separated cameras and positioning mark at a known relative position and measuring the distance data between them;
S2-1-2: at this relative position, photographing the positioning mark with the cameras to obtain its image information, and converting the image information into initial position data;
S2-1-3: calculating the reference data of the camera module from the initial position data, the distance data and the fixed distance data between the cameras.
The correction step comprises:
S2-2-1: photographing the positioning mark with the cameras at an arbitrary correction position to obtain its image information, and converting the image information into correction position data;
S2-2-2: calculating the actual position data of the positioning mark from the correction position data, the reference data of the camera module and the fixed distance data between the cameras, and performing a data-fusion correction of the position sensing data of the inertial sensor end according to the actual position data.
In step S2-1-2 of the inertial positioning method using vision-aided correction of the present invention, the initial position data comprises the initial position coordinates converted from the image information at the relative position;
in step S2-1-3, the reference data of the camera module is calculated from the initial position coordinates, the distance data and the fixed distance data between the cameras;
in step S2-2-1, the correction position data comprises the correction position coordinates converted from the image information at the correction position;
in step S2-2-2, the actual position data of the positioning mark comprises the calculated angles of the lines joining the two cameras to the positioning mark at the correction position, the magnitudes of the vectors joining the cameras to the positioning mark, and the actual position coordinates of the positioning mark.
Step S2 of the inertial positioning method using vision-aided correction of the present invention may instead comprise:
S2-1: at an initial position, obtaining the image information of the positioning mark with the cameras and converting it into initial position data;
S2-2: at an arbitrary correction position, obtaining the image information of the positioning mark with the cameras and converting it into correction position data;
S2-3: calculating from the correction position data and the initial position data the displacement correction data for the movement of the inertial sensor from the initial position to the correction position;
S2-4: sensing that same movement from the initial position to the correction position with the inertial sensor to obtain measured displacement data;
S2-5: fusing the displacement correction data with the measured displacement data to correct the sensed data of the inertial sensor.
In step S2-1 of this variant, the initial position data comprises the initial position coordinates converted from the image information at the initial position, together with the angles of the lines joining the two cameras to the positioning mark and the magnitudes of the joining vectors at the initial position, calculated using the reference data of the cameras and the fixed distance data between them;
in step S2-2, the correction position data comprises the correction position coordinates converted from the image information at the correction position, together with the corresponding angles and vector magnitudes at the correction position, calculated in the same way.
In step S1 of the inertial positioning method using vision-aided correction of the present invention, the camera module may instead comprise at least one camera, and the positioning mark may comprise two or more marks at a fixed relative distance.
Step S2 of the inertial positioning method using vision-aided correction of the present invention may then comprise:
S2-1: a benchmark-setting step;
S2-2: a correction step.
The benchmark-setting step comprises:
S2-1-1: placing the separated camera and positioning marks at a known relative position and measuring the distance data between them;
S2-1-2: at this relative position, photographing the positioning marks with the camera to obtain their image information, and converting the image information into initial position data;
S2-1-3: sensing the reference attitude angle information at this relative position with the inertial sensor of the inertial sensor end;
S2-1-4: calculating the reference data of the camera module from the reference attitude angle information, the initial position data, the distance data and the fixed distance data between the positioning marks.
The correction step comprises:
S2-2-1: obtaining correction attitude angle information with the inertial sensor at an arbitrary correction position;
S2-2-2: photographing the positioning marks with the camera at this correction position to obtain their image information, and converting the image information into correction position data;
S2-2-3: calculating the actual position data from the correction attitude angle information, the correction position data, the reference data of the camera module and the fixed distance data between the positioning marks, and performing a data-fusion correction of the sensed data of the inertial sensor end according to the actual position data.
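Step S2-2-3 combines the camera measurement with the attitude angle sensed by the inertial sensor, but the patent does not give the exact combination formula. One hedged single-axis sketch (hypothetical function and parameter names) is to correct the bearing measured in the image for the camera's own tilt before solving for position:

```python
import math

def compensated_bearing(x, h, pitch_rad):
    # Bearing to a positioning mark in the world frame: the angle measured
    # in the image (tan a = x/h, per the formulas below) plus the camera's
    # own pitch, which here comes from the inertial sensor's attitude output.
    return math.atan2(x, h) + pitch_rad

# Assumed values: a 100-pixel offset with reference data h = 1000 pixels,
# seen by a camera pitched up by 0.1 rad.
bearing = compensated_bearing(100.0, 1000.0, 0.1)
```

This is only an illustration of the idea of attitude compensation; the actual fusion in step S2-2-3 may use all three attitude angles.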
Step S2 of the inertial positioning method using vision-aided correction of the present invention may alternatively comprise:
S2-1: at an initial position, obtaining the image information of the positioning marks with the camera and combining it with the initial attitude angle information of the inertial sensor to obtain initial position data;
S2-2: at an arbitrary correction position, obtaining the image information of the positioning marks with the camera and combining it with the correction attitude angle information of the inertial sensor to obtain correction position data;
S2-3: calculating from the correction position data and the initial position data the displacement correction data for the movement of the inertial sensor from the initial position to the correction position;
S2-4: sensing that same movement from the initial position to the correction position with the inertial sensor to obtain measured displacement data;
S2-5: fusing the displacement correction data with the measured displacement data to correct the sensed data of the inertial sensor.
The present invention also provides an inertial positioning system using vision-aided correction. In addition to an inertial sensor, the system comprises a positioning mark for marking a positioning point, a camera module for obtaining image information of the positioning mark, an image processing unit and a correction unit.
The camera module is arranged at the correction end and the positioning mark at the inertial sensor end, or the positioning mark is arranged at the correction end and the camera module at the inertial sensor end; in either case the correction unit communicates with the inertial sensor through a wireless or wired connection.
The image processing unit computes spatial position information from the image information produced by the camera module and transfers it to the correction unit, which corrects the sensed data of the inertial sensor according to this position information.
In the inertial positioning system using vision-aided correction of the present invention, the camera module comprises two cameras at a fixed distance from each other and the positioning mark comprises one or more marks; or
the camera module comprises at least one camera and the positioning mark comprises two or more marks at a fixed relative distance.
The benefit of implementing the present invention is that by photographing the separately arranged positioning mark with the camera module and calculating correction data from the result, the sensed data of the inertial sensor is corrected, with the advantages of short correction time, accurate correction and low power consumption.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the first embodiment of the inertial positioning system using vision-aided correction of the present invention;
Fig. 2 is a schematic diagram of the benchmark-setting step of the first embodiment of the inertial positioning method using vision-aided correction of the present invention;
Fig. 3 is a schematic diagram of the correction step of the first embodiment of the method;
Fig. 4 is a schematic diagram of the third embodiment of the inertial positioning system using vision-aided correction of the present invention;
Fig. 5 is a schematic diagram of the benchmark-setting step of the third embodiment of the method;
Fig. 6 is a schematic diagram of the correction step of the third embodiment of the method.
Detailed description of the embodiments
For a clearer understanding of the technical features, objects and effects of the present invention, specific embodiments are now described in detail with reference to the drawings.
In the inertial positioning method and system using vision-aided correction of the present invention, a camera module and a positioning mark are arranged separately: the camera module at the correction end and the positioning mark at the inertial sensor end, or the positioning mark at the correction end and the camera module at the inertial sensor end. The camera module then photographs the positioning mark, correction data is calculated from the shooting result, and the sensed data of the inertial sensor is corrected with that data. In this way the error of the inertial sensor in the inertial positioning device can be corrected while consuming very little power.
Figs. 1-3 show the first embodiment of the inertial positioning system and method using vision-aided correction of the present invention.
The system comprises an inertial sensor 21, a positioning mark 22 for marking a positioning point, a camera module for obtaining image information of the positioning mark 22, an image processing unit 12 and a correction unit 13.
In this embodiment the camera module is arranged at the correction end 10 and comprises at least two cameras 11 at a fixed distance from each other; the number of cameras 11 can of course be larger. The correction end 10 can be any fixed installation such as a display screen, projection screen, television or game console, and the cameras 11 can use CMOS position sensors or other types of imaging device to capture images of the positioning mark 22.
The positioning mark 22 is arranged at the inertial sensor end 20 and identifies its position. In this embodiment the inertial sensor end 20 is a light gun, 3D mouse or the like, and a single positioning mark 22 is arranged at its front for the camera module to capture. The positioning mark 22 can be an infrared LED, or any other sign that is easy to identify in the images taken by the camera module, such as a reflective patch or a sticker of strongly contrasting colour.
Understandably, there can also be two or more positioning marks 22, in which case the position of each mark is obtained in turn by identifying its ID. For example, the times at which the LEDs light up can be staggered, or the LEDs can be flickered so that the emitted light (light being an electromagnetic wave) carries an individual ID message for identification.
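The patent does not specify how the blink pattern encodes the ID. One hedged sketch of the idea, assuming a hypothetical 4-bit scheme with one on/off sample of each LED per camera frame:

```python
def decode_led_id(samples, frame_bits=4):
    # Read the first frame_bits on/off samples of a blinking LED as a
    # big-endian binary ID: 1 = LED visible in that frame, 0 = dark.
    bits = samples[:frame_bits]
    return sum(b << (frame_bits - 1 - i) for i, b in enumerate(bits))

# e.g. a mark flashing on-off-on-on over four frames reports ID 0b1011 = 11
mark_id = decode_led_id([1, 0, 1, 1])
```

A real scheme would also need framing and error checking; this only illustrates how distinct flicker patterns can distinguish marks.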
The image processing unit 12 and correction unit 13 are both arranged at the correction end 10. The image processing unit 12 computes the spatial position of the positioning mark 22 from the image information produced by the camera module and sends it to the correction unit 13, which processes the spatial position information into position correction information and corrects the sensed data of the inertial sensor 21. The correction unit 13 can be connected to the inertial sensor 21 wirelessly or by wire for this correction.
When vision-aided correction is used for inertial positioning, a benchmark must first be set, after which the correction step can be carried out.
In the benchmark-setting step, the separated cameras 11 and positioning mark 22 are placed at a known relative position, and the distance data between the cameras 11 and the positioning mark 22 is measured. The cameras 11 photograph the positioning mark 22 at this relative position, the resulting image information is converted into initial position data, and the reference data of the camera module is calculated from the initial position data, the distance data and the fixed distance data between the two cameras 11. This reference data serves as the correction reference.
For example, as shown in Fig. 2, the inertial sensor end 20 carrying the positioning mark 22 (e.g. an infrared LED; point B in the figure) stands at a distance of 2 metres (the distance data) directly facing the midpoint of the line joining the two cameras 11 (point A in the figure). The cameras 11 photograph the positioning mark 22 at this 2-metre distance and the image information is converted into initial position data. Since light travels in straight lines, the side-length proportions of similar triangles allow the reference data of the camera module to be calculated from the following formulas:

H·x₁/h₁ + H·x₂/h₂ = L

tan α = x₁/h₁, tan β = x₂/h₂

where H is the distance data (measurable in advance), L is the fixed distance data between the two cameras 11 (measurable in advance), x₁ and x₂ are the initial position data (obtained from the projections captured by the cameras 11), and h₁ and h₂ are the reference data to be solved for, taken as the reference data of the two cameras 11. Understandably, the reference data can cover the horizontal direction, the vertical direction or other directions, for use in the subsequent correction step.
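Under the pictured setup, with the mark on the perpendicular bisector of the camera baseline, each camera sees the mark offset laterally by L/2, so the two reference parameters follow directly from the formulas above. A minimal sketch with assumed example values:

```python
def calibrate_stereo(H, L, x1, x2):
    # At calibration the mark sits midway between the cameras at depth H,
    # so tan(alpha) = x1/h1 = (L/2)/H, giving h1 = 2*H*x1/L (same for h2).
    h1 = 2.0 * H * x1 / L
    h2 = 2.0 * H * x2 / L
    return h1, h2

# Assumed example values: 2 m depth, 0.20 m camera spacing,
# 50-pixel projected offsets on each sensor.
h1, h2 = calibrate_stereo(H=2.0, L=0.20, x1=50.0, x2=50.0)
# The solved parameters satisfy the relation H*x1/h1 + H*x2/h2 = L.
```

The reference data h₁, h₂ play the role of per-camera focal parameters in pixels; the numeric values here are illustrative, not from the patent.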
Understandably, the reference data can also be determined again at several positions: the separated cameras 11 and positioning mark 22 are placed at a second relative position and second distance data is obtained; the cameras 11 photograph the positioning mark 22 there and the image information is converted into second initial position data; a second set of reference data for the camera module is calculated from the second initial position data, the second distance data and the fixed distance data between the cameras 11; and the second reference data is fused with the first to obtain the final reference data. For example, correcting again at a 3-metre position with the same method yields more accurate reference data.
When the sensing data drifts and the sensing of the sensor end becomes inaccurate, correction can be started by a trigger button, automatically by the sensor-end system, or by some other trigger.
Once correction starts, the cameras 11 photograph the positioning mark 22 at an arbitrary correction position and the image information is converted into correction position data. Then, from the correction position data, the reference data of the camera module and the fixed distance data between the cameras 11, the angles of the lines joining the two cameras 11 to the positioning mark 22, the magnitudes of the joining vectors and the actual position coordinates of the mark are calculated. This gives the actual position data of the positioning mark 22 (i.e. of the inertial sensor end 20), with which the sensed data at this correction position is corrected.
For example, as shown in Fig. 3, when the inertial sensor end 20 is at an arbitrary correction position (A marks the positions of the two cameras 11 and B the position of the positioning mark), the following formulas apply:

H·x₁/h₁ + H·x₂/h₂ = L

tan α = x₁/h₁, tan β = x₂/h₂

where H is now the distance sought (the distance between the inertial sensor end 20 and the cameras 11), h₁ and h₂ are the basic parameters of the cameras 11 obtained in the benchmark part, x₁ and x₂ are the correction position data obtained by photographing and processing the positioning mark 22, and L is the fixed distance data between the two cameras 11, measurable in advance.
From these two formulas the distance and attitude of the inertial sensor end 20 relative to the cameras 11 can be solved, giving the actual position data of the inertial sensor end 20, according to which the data sensed by the inertial sensor 21 is corrected. The correction of the sensed data of the inertial sensor 21 by the actual position data can use methods such as Kalman filtering, extended Kalman filtering or least squares, fusing the actual position data with the sensed data and then filtering and calculating.
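The fusion methods are only named here (Kalman filtering, extended Kalman filtering, least squares). As one hedged illustration, a single scalar Kalman-style measurement update blending the drifting inertial estimate with the vision-derived position could look like this (all values assumed for illustration):

```python
def fuse(inertial_est, inertial_var, vision_pos, vision_var):
    # Standard scalar Kalman measurement update: the gain k weights the
    # vision fix by how uncertain the inertial estimate has become.
    k = inertial_var / (inertial_var + vision_var)
    est = inertial_est + k * (vision_pos - inertial_est)
    var = (1.0 - k) * inertial_var
    return est, var

# A drifted inertial estimate (1.2 m, variance 0.04) pulled toward a
# vision fix (1.0 m, variance 0.01): gain k = 0.8, corrected estimate 1.04 m.
est, var = fuse(1.2, 0.04, 1.0, 0.01)
```

In practice this update would run per axis (or as a full-state EKF) each time the camera module provides a fix.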
Understandably, besides the vertical distance H, the horizontal distances and height differences between the positioning mark 22 and each of the two cameras 11 can be solved from the same proportional relations, so that with a chosen spatial origin the three-dimensional coordinates of the positioning mark 22 are obtained.
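Combining the depth formula with the same similar-triangle proportions for the lateral and vertical offsets yields a full 3D fix. A sketch with the origin placed at the midpoint of the camera baseline (the vertical image offset y₁ and all numeric values are assumptions for illustration):

```python
def mark_position_3d(h1, h2, L, x1, x2, y1):
    # Depth by inverting H*x1/h1 + H*x2/h2 = L.
    H = L / (x1 / h1 + x2 / h2)
    # Lateral offset: camera 1 sits at -L/2 on the baseline, and the mark
    # lies H*x1/h1 to its side; height follows the same proportion.
    X = H * x1 / h1 - L / 2.0
    Y = H * y1 / h1
    return X, Y, H

# Assumed values: calibrated h1 = h2 = 1000 px, 0.20 m baseline,
# symmetric 100 px offsets -> the mark is centred, 1 m away, 5 cm high.
X, Y, H = mark_position_3d(1000.0, 1000.0, 0.20, 100.0, 100.0, 50.0)
```

The sign conventions (which side of the image centre x₁ and x₂ are measured on) are an assumption; the patent leaves them implicit.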
In the second embodiment of the inertial positioning system and method using vision-aided correction of the present invention, the sensing data is corrected by fusing the data of an initial position and a correction position.
As in the previous embodiment, the system comprises an inertial sensor 21, a positioning mark 22 for marking a positioning point, a camera module for obtaining image information of the positioning mark 22, an image processing unit 12 and a correction unit 13. The camera module comprises two cameras 11 at a fixed distance, fixed at the correction end 10, and one or more positioning marks 22 are fixed at the inertial sensor end 20.
At the initial position, the cameras 11 obtain the image information of the positioning mark 22 and it is converted into initial position data. The initial position data comprises the initial position coordinates converted from the image information, together with the angles of the lines joining the two cameras 11 to the positioning mark 22 and the magnitudes of the joining vectors at the initial position, calculated from the reference data and the fixed distance data of the cameras 11.
The inertial sensor end 20 is then moved to an arbitrary position, taken as the correction position, where the cameras 11 obtain the image information of the positioning mark 22 and it is converted into correction position data. Likewise, the correction position data comprises the correction position coordinates together with the corresponding angles and vector magnitudes at the correction position, calculated from the reference data and the fixed distance data of the two cameras 11.
Next, the correction position data and the initial position data are processed to obtain the displacement correction data for the movement of the inertial sensor 21 from the initial position to the correction position. For example, the angle differences and vector differences of the inertial sensor end 20 relative to the camera module between the two positions are digitally filtered to calculate the displacement correction data. Understandably, the digital filter can be a Kalman filter, an extended Kalman filter, least squares, a weighted mean, etc.
Meanwhile, as the inertial sensor end 20 moves from the initial position to the correction position, the inertial sensor 21 senses the movement and yields the measured displacement data. The measured displacement data can likewise be digitally filtered with a Kalman filter, extended Kalman filter, least squares, weighted mean, etc.
The displacement correction data and the measured displacement data are then fused, eliminating the accumulated drift and correcting the sensed data of the inertial sensor 21, so that the inertial sensor end 20 is accurately positioned.
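A minimal sketch of this displacement fusion under the weighted-mean option listed above (the weight is an assumed value; a Kalman filter would replace the fixed weight with a computed gain):

```python
def displacement_correction(p_start, p_corr):
    # Vision-derived displacement between the two camera fixes (step S2-3).
    return tuple(c - s for s, c in zip(p_start, p_corr))

def fuse_displacement(d_vision, d_imu, w_vision=0.7):
    # Weighted mean of the vision displacement and the IMU-integrated
    # displacement (step S2-4), damping the accumulated drift (step S2-5).
    return tuple(w_vision * v + (1.0 - w_vision) * i
                 for v, i in zip(d_vision, d_imu))

# Assumed 3D fixes in metres; the IMU reading has drifted on the x axis.
d_vis = displacement_correction((0.0, 0.0, 1.0), (0.3, 0.1, 1.2))
d_fused = fuse_displacement(d_vis, (0.4, 0.1, 0.2))
```

The vision displacement here is (0.3, 0.1, 0.2) m, so the fused x component is pulled from the drifted 0.4 m toward 0.3 m.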
Understandably, the method of the first embodiment can be used in this embodiment to obtain the reference data of the two cameras 11; this is not repeated here.
As shown in Figure 4, be the auxiliary inertial positioning system of proofreading and correct of use vision of the present invention and the 3rd embodiment of method.This system comprises inertial sensor 41, for the positioning mark 31 of mark location point, for photographing module, graphics processing unit 32 and the correcting unit 33 etc. of the image information of obtaining positioning mark 31, basic identical with the first embodiment, therefore do not repeat.
In the present embodiment, photographing module comprises at least one camera head 42, is fixedly mounted on inertial sensor end 40, and that positioning mark 31 comprises is two or more, and is arranged on and proofreaies and correct end 30 with fixing relative distance.
The method comprises benchmark setting step and aligning step equally.While carrying out this benchmark setting step, camera head 42 and positioning mark 31 separately are arranged on to known relative position place, and measurement obtains the range data between camera head 42 and positioning mark 31; Utilize camera head 42 to take and obtain the image information of positioning mark 31 at this relative position place, and this image information is converted into to start position data; Simultaneously, utilize the reference attitude angle information of inertial sensor 41 sensings of inertial sensor end 40 at this relative position place.Then, the fixed range data according between reference attitude angle information, start position data, range data and positioning mark 31, calculate the reference data of photographing module, and using this reference data as correction reference.
For example, as shown in Figure 5, the inertial sensor end 40 carrying the camera device 42 (position A in the figure) stands at a distance of 2 meters (the distance data) directly facing the midpoint of the line connecting the two positioning marks 31 (e.g. infrared LEDs, position B in the figure). The camera device 42 captures image information of the positioning marks 31 at this 2-meter distance, and the image information is converted into start position data. According to the principle that light travels in straight lines, the proportionality of corresponding sides of similar triangles gives the reference data of the photographing module:
h = (x1 + x2)·H / L
tan α = x1 / h,  tan β = x2 / h
where H is the distance data (measurable in advance), L is the fixed distance between the two positioning marks 31 (measurable in advance), x1 and x2 are the start position data (obtained by converting the projections captured by the camera device 42), and h is the reference data to be solved, which serves as the reference data of the camera device 42. Understandably, this reference data may cover the horizontal direction, the vertical direction, or other directions, for use in the subsequent correction step.
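The reference-setting computation above can be sketched in a few lines of Python. The function name `reference_parameter` and the sample values (H = 2 m, L = 0.5 m, pixel offsets of 250) are illustrative assumptions, not values fixed by the patent; the relation used is the similar-triangles proportion (x1 + x2)/h = L/H.

```python
import math

def reference_parameter(H, L, x1, x2):
    """Reference-setting step: solve for the camera parameter h from the
    measured camera-to-marker distance H, the fixed marker spacing L, and
    the imaged marker offsets x1, x2 (similar triangles: (x1+x2)/h = L/H)."""
    h = (x1 + x2) * H / L
    # Angles from the optical axis to each positioning mark.
    alpha = math.atan2(x1, h)
    beta = math.atan2(x2, h)
    return h, alpha, beta

# Illustrative calibration at 2 m with markers 0.5 m apart:
h, alpha, beta = reference_parameter(2.0, 0.5, 250.0, 250.0)
```

With symmetric image offsets the camera sits on the perpendicular bisector of the marker line, so α = β.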
Understandably, the reference data can also be determined again at additional positions: the camera device 42 and the positioning marks 31 are placed at a second known relative position, and second distance data is obtained; the camera device 42 captures image information of the positioning marks 31 at this second relative position, and the image information is converted into second start position data; second reference data of the photographing module is calculated from the second start position data, the second distance data, and the fixed distance between the positioning marks 31; and the second reference data is fused with the first reference data. For example, the calibration can be repeated at a distance of 3 meters using the same method, yielding more accurate reference data.
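The patent leaves the fusion of the first and second reference data unspecified; a weighted mean is one simple possibility, sketched here with an assumed helper `fuse_references`:

```python
def fuse_references(h_estimates, weights=None):
    """Fuse camera reference parameters calibrated at several distances
    (e.g. 2 m and 3 m) into one value. A weighted mean is an assumed,
    simple fusion rule; the patent does not prescribe one."""
    if weights is None:
        weights = [1.0] * len(h_estimates)
    # Weighted average of the per-position calibrations.
    return sum(h * w for h, w in zip(h_estimates, weights)) / sum(weights)

# Equal weighting of two calibrations:
h_fused = fuse_references([2000.0, 2020.0])
```

Weights could, for instance, favor the calibration whose distance measurement is more trustworthy.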
When the sensed data drifts and the sensing at the sensor end becomes inaccurate, the correction can be started by pressing a trigger button, by an automatic trigger from the sensor-end system, or in other ways.
When the correction starts, the inertial sensor 41 obtains correction position attitude angle information, and the camera device 42 captures image information of the positioning marks 31 at an arbitrary correction position, which is converted into correction position data. From the correction position attitude angle information, the correction position data, the reference data of the camera device 42, and the fixed distance between the positioning marks 31, the angles between the camera device 42 and the lines to the two positioning marks 31, the magnitudes of the vectors connecting the camera device 42 with the positioning marks 31, and the actual position coordinates of the camera device are calculated. This yields the actual location data of the camera device 42 (i.e. of the inertial sensor end 40), which is used to correct the sensed data at this correction position.
For example, as shown in Figure 6, when the inertial sensor end 40 is at an arbitrary correction position, position A in the figure being the camera device 42 and position B the two positioning marks 31, the following formulas apply:
H = L·h / (x1 + x2)
tan α = x1 / h,  tan β = x2 / h
where H is the distance to be solved (the distance between the inertial sensor end 40 and the positioning marks 31), h is the basic parameter of the camera device 42 obtained in the reference-setting step, x1 and x2 are the correction position data of the positioning marks 31 obtained by capture and processing with the camera device 42, and L is the fixed distance between the two positioning marks 31, which can be measured in advance.
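Inverting the same similar-triangles relation gives the distance at an arbitrary correction position. The helper name `marker_distance` and the sample numbers are assumptions for illustration:

```python
import math

def marker_distance(h, L, x1, x2):
    """Correction step: with the calibrated camera parameter h and fixed
    marker spacing L, recover the camera-to-marker distance H from the
    imaged offsets x1, x2 (H = L*h / (x1 + x2)), plus the view angles."""
    H = L * h / (x1 + x2)
    alpha = math.atan2(x1, h)
    beta = math.atan2(x2, h)
    return H, alpha, beta

# With h = 2000 from calibration, a smaller image separation of the two
# marks corresponds to a larger distance:
H, alpha, beta = marker_distance(2000.0, 0.5, 150.0, 250.0)
```

The asymmetry of x1 and x2 (150 vs. 250) also encodes the lateral offset of the camera from the midpoint of the marker line.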
From the two formulas above, the distance and attitude between the inertial sensor end 40 and the positioning marks 31 can be obtained, giving the actual location data of the inertial sensor end 40, which is used to correct the data sensed by the inertial sensor 41. The correction of the sensed data by the actual location data may use Kalman filtering, extended Kalman filtering, the least squares method, or similar: the actual location data and the sensed data are fused and then filtered.
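As a minimal sketch of the fusion mentioned above, a one-dimensional Kalman-style (equivalently, least-squares) update blends the drifting inertial estimate with the vision-derived position; the variance arguments are assumed tuning parameters, not values from the patent:

```python
def fuse_position(inertial_pos, vision_pos, inertial_var, vision_var):
    """Least-squares / Kalman-update-style fusion of an inertial position
    estimate with the vision-derived actual location data."""
    gain = inertial_var / (inertial_var + vision_var)   # Kalman gain
    fused = inertial_pos + gain * (vision_pos - inertial_pos)
    fused_var = (1.0 - gain) * inertial_var             # reduced uncertainty
    return fused, fused_var

# Trusting both sources equally splits the difference:
pos, var = fuse_position(1.0, 2.0, 1.0, 1.0)
```

The more the inertial data has drifted (larger inertial variance), the harder the update pulls toward the vision measurement.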
Understandably, besides the vertical distance H, the horizontal distance and height difference between the positioning marks 31 and the camera device 42 can also be obtained from similar proportional relations, so that, with an origin selected in space, the three-dimensional spatial coordinates of the positioning marks 31 can be established.
In the fourth embodiment of the inertial positioning method using vision-aided correction of the present invention, the sensed data is corrected by fusing data from a reference position and a correction position.
As in the previous embodiment, two positioning marks 31 with a fixed distance are fixedly arranged at the correction end 30, and one or more camera devices 42 are fixed to the inertial sensor end 40.
At the reference position, the camera device 42 acquires image information of the positioning marks 31, and start position data is obtained in combination with the initial attitude angle information of the inertial sensor 41. The start position data includes the reference position coordinates converted from the image information, as well as the angles between the camera device 42 and the lines to the two positioning marks 31 and the magnitudes of the vectors connecting the camera device 42 with the positioning marks 31, calculated using the reference data of the camera device 42 and the fixed distance between the positioning marks 31.
The inertial sensor end 40 is then moved to an arbitrary position taken as the correction position. The camera device 42 acquires image information of the positioning marks 31 at this correction position, and correction position data is obtained in combination with the correction position attitude angle information of the inertial sensor 41. Likewise, the correction position data includes the correction position coordinates converted from the image information, as well as the angles between the camera device 42 and the lines to the two positioning marks 31 and the magnitudes of the connecting vectors, calculated using the reference data of the camera device 42 and the fixed distance between the positioning marks 31.
The correction position data and the start position data are then processed to obtain the displacement correction data of the inertial sensor 41 from the reference position to the arbitrary correction position. For example, as the inertial sensor end 40 moves from the reference position to the correction position, the angle differences and vector differences relative to the positioning marks 31 are passed through a digital filter to calculate the displacement correction data. Understandably, the digital filtering may be Kalman filtering, extended Kalman filtering, the least squares method, a weighted mean, or the like.
Meanwhile, as the inertial sensor end 40 moves from the reference position to the correction position, the inertial sensor 41 performs sensing to obtain measured displacement data for the move. Likewise, the measured displacement data may be digitally filtered, using Kalman filtering, extended Kalman filtering, the least squares method, a weighted mean, or the like.
The displacement correction data and the measured displacement data thus obtained are then fused to eliminate the accumulated drift and correct the sensed data of the inertial sensor 41, achieving accurate positioning of the inertial sensor end 40.
Understandably, in this embodiment the method of the first embodiment can be used to obtain the reference data of the camera devices 42, which is not repeated here.
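The displacement-level fusion of the fourth embodiment can be sketched as a complementary blend; the blend weight `alpha` and the helper name `correct_displacement` are assumptions for illustration, standing in for whichever filter (Kalman, least squares, weighted mean) is chosen:

```python
def correct_displacement(vision_disp, inertial_disp, alpha=0.8):
    """Blend the vision-derived displacement (reference position to
    correction position) with the inertially measured displacement to
    cancel accumulated drift; also report the per-axis drift estimate."""
    corrected = [alpha * v + (1.0 - alpha) * m
                 for v, m in zip(vision_disp, inertial_disp)]
    drift = [m - v for v, m in zip(vision_disp, inertial_disp)]
    return corrected, drift

# Two-axis example: the inertial data has drifted by (0.2, 0.1).
corrected, drift = correct_displacement([1.0, 0.0], [1.2, 0.1], alpha=0.5)
```

The drift estimate could also be fed back to re-zero the inertial integration after each correction.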
The foregoing describes only preferred embodiments of the present invention and is not intended to limit it; for those skilled in the art, the present invention admits various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of the claims of the present invention.

Claims (11)

1. An inertial positioning method using vision-aided correction, characterized by comprising the following steps:
S1: separately arranging a photographing module and a positioning mark, the photographing module being arranged at a correction end and the positioning mark at an inertial sensor end, or the positioning mark being arranged at the correction end and the photographing module at the inertial sensor end;
S2: photographing the positioning mark with the photographing module, calculating correction data from the photographing result, and correcting the sensing data of the inertial sensor using the correction data.
2. The inertial positioning method using vision-aided correction according to claim 1, characterized in that in step S1 the photographing module comprises two camera devices with a fixed distance, and the positioning mark comprises one or more positioning marks.
3. The inertial positioning method using vision-aided correction according to claim 2, characterized in that step S2 comprises:
S2-1: a reference-setting step;
S2-2: a correction step;
the reference-setting step comprising:
S2-1-1: placing the camera devices and the positioning mark at a known relative position, and measuring to obtain distance data;
S2-1-2: at this relative position, photographing with the camera devices to obtain image information of the positioning mark, and converting the image information into start position data;
S2-1-3: calculating reference data of the photographing module from the start position data, the distance data, and the fixed distance data between the camera devices;
the correction step comprising:
S2-2-1: photographing with the camera devices to obtain image information of the positioning mark at an arbitrary correction position, and converting the image information into correction position data;
S2-2-2: calculating actual location data of the positioning mark from the correction position data, the reference data of the photographing module, and the fixed distance data between the camera devices, and performing data-fusion correction of the location sensing data of the inertial sensor end according to the actual location data.
4. The inertial positioning method using vision-aided correction according to claim 3, characterized in that in step S2-1-2 the start position data comprises reference position coordinates converted from the image information at the relative position, and in step S2-1-3 the reference data of the photographing module is calculated from the reference position coordinates, the distance data, and the fixed distance data between the camera devices;
in step S2-2-1, the correction position data comprises correction position coordinates converted from the image information at the correction position;
in step S2-2-2, the actual location data of the positioning mark includes the angles between the two camera devices and the lines to the positioning mark at the correction position, the magnitudes of the vectors connecting the camera devices with the positioning mark, and the calculated actual position coordinates of the positioning mark.
5. The inertial positioning method using vision-aided correction according to claim 2, characterized in that step S2 comprises:
S2-1: at a reference position, acquiring image information of the positioning mark with the camera devices, and converting the image information into start position data;
S2-2: at an arbitrary correction position, acquiring image information of the positioning mark with the camera devices, and converting the image information into correction position data;
S2-3: processing the correction position data and the start position data to obtain displacement correction data of the inertial sensor moving from the reference position to the arbitrary correction position;
S2-4: sensing with the inertial sensor its displacement from the reference position to the arbitrary correction position, obtaining measured displacement data;
S2-5: fusing the displacement correction data and the measured displacement data to correct the sensed data of the inertial sensor.
6. The inertial positioning method using vision-aided correction according to claim 5, characterized in that in step S2-1 the start position data comprises reference position coordinates converted from the image information at the reference position, together with the angles between the two camera devices and the lines to the positioning mark and the magnitudes of the vectors connecting the camera devices with the positioning mark at the reference position, calculated using the reference data of the camera devices and the fixed distance data between the camera devices;
in step S2-2, the correction position data comprises correction position coordinates converted from the image information at the correction position, together with the angles between the two camera devices and the lines to the positioning mark and the magnitudes of the connecting vectors at the correction position, calculated using the reference data of the two camera devices and the fixed distance data between the camera devices.
7. The inertial positioning method using vision-aided correction according to claim 1, characterized in that in step S1 the photographing module comprises at least one camera device, and the positioning mark comprises two or more positioning marks with a fixed relative distance.
8. The inertial positioning method using vision-aided correction according to claim 7, characterized in that step S2 comprises:
S2-1: a reference-setting step;
S2-2: a correction step;
the reference-setting step comprising:
S2-1-1: placing the camera device and the positioning marks at a known relative position, and measuring to obtain distance data;
S2-1-2: at this relative position, photographing with the camera device to obtain image information of the positioning marks, and converting the image information into start position data;
S2-1-3: sensing, with the inertial sensor of the inertial sensor end, reference attitude angle information at this relative position;
S2-1-4: calculating reference data of the photographing module from the reference attitude angle information, the start position data, the distance data, and the fixed distance data between the positioning marks;
the correction step comprising:
S2-2-1: at an arbitrary correction position, obtaining correction position attitude angle information with the inertial sensor;
S2-2-2: photographing with the camera device to obtain image information of the positioning marks at the arbitrary correction position, and converting the image information into correction position data;
S2-2-3: calculating actual location data from the correction position attitude angle information, the correction position data, the reference data of the photographing module, and the fixed distance data between the positioning marks, and performing data-fusion correction of the sensed data of the inertial sensor end according to the actual location data.
9. The inertial positioning method using vision-aided correction according to claim 7, characterized in that step S2 comprises:
S2-1: at a reference position, acquiring image information of the positioning marks with the camera device, and obtaining start position data in combination with initial attitude angle information of the inertial sensor;
S2-2: at an arbitrary correction position, acquiring image information of the positioning marks with the camera device, and obtaining correction position data in combination with correction position attitude angle information of the inertial sensor;
S2-3: processing the correction position data and the start position data to obtain displacement correction data of the inertial sensor moving from the reference position to the arbitrary correction position;
S2-4: sensing with the inertial sensor its displacement from the reference position to the arbitrary correction position, obtaining measured displacement data;
S2-5: fusing the displacement correction data and the measured displacement data to correct the sensed data of the inertial sensor.
10. An inertial positioning system using vision-aided correction, comprising an inertial sensor, characterized in that it
further comprises: a positioning mark for marking a location point, a photographing module for acquiring image information of the positioning mark, an image processing unit, and a correcting unit;
the photographing module is arranged at a correction end and the positioning mark at an inertial sensor end, or the positioning mark is arranged at the correction end and the photographing module at the inertial sensor end, the correcting unit communicating with the inertial sensor wirelessly or by wire;
the image processing unit calculates spatial positional information from the image information produced by the photographing module and transmits the spatial positional information to the correcting unit, and the correcting unit corrects the sensed data of the inertial sensor according to the spatial positional information.
11. The inertial positioning system using vision-aided correction according to claim 10, characterized in that the photographing module comprises two camera devices with a fixed distance and the positioning mark comprises one or more positioning marks; or
the photographing module comprises at least one camera device and the positioning mark comprises two or more positioning marks with a fixed relative distance.
CN201310386423.7A 2013-08-29 2013-08-29 Inertial positioning method and system using vision-aided correction Active CN103438904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310386423.7A CN103438904B (en) 2013-08-29 2013-08-29 Inertial positioning method and system using vision-aided correction

Publications (2)

Publication Number Publication Date
CN103438904A true CN103438904A (en) 2013-12-11
CN103438904B CN103438904B (en) 2016-12-28

Family

ID=49692600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310386423.7A Active CN103438904B (en) Inertial positioning method and system using vision-aided correction

Country Status (1)

Country Link
CN (1) CN103438904B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104035115A (en) * 2014-06-06 2014-09-10 中国科学院光电研究院 Vision-aided satellite navigation and positioning method, and positioning machine
CN105698784A (en) * 2016-03-22 2016-06-22 成都电科创品机器人科技有限公司 Indoor robot positioning system and method
CN106340043A (en) * 2016-08-24 2017-01-18 深圳市虚拟现实技术有限公司 Image identification spatial localization method and image identification spatial localization system
CN106597562A (en) * 2016-12-02 2017-04-26 哈尔滨工业大学 Mars gravitation ground simulation system based on double-duct vertical propulsion technology
CN106595635A (en) * 2016-11-30 2017-04-26 北京特种机械研究所 AGV (automated guided vehicle) positioning method with combination of multiple positioning sensor data
CN108007460A (en) * 2016-11-01 2018-05-08 博世汽车部件(苏州)有限公司 The method and system of mobile object location are determined in environment is predicted
CN108225309A (en) * 2016-12-21 2018-06-29 波音公司 Enhance the method and apparatus of multiple raw sensor images by geographic registration
CN109405850A (en) * 2018-10-31 2019-03-01 张维玲 A kind of the inertial navigation positioning calibration method and its system of view-based access control model and priori knowledge
CN109631875A (en) * 2019-01-11 2019-04-16 京东方科技集团股份有限公司 The method and system that a kind of pair of sensor attitude fusion measurement method optimizes
CN109631887A (en) * 2018-12-29 2019-04-16 重庆邮电大学 Inertial navigation high-precision locating method based on binocular, acceleration and gyroscope
CN109716062A (en) * 2016-09-15 2019-05-03 株式会社电装 Posture estimation device
CN109791048A (en) * 2016-08-01 2019-05-21 无限增强现实以色列有限公司 Usage scenario captures the method and system of the component of data calibration Inertial Measurement Unit (IMU)
CN110398258A (en) * 2019-08-13 2019-11-01 广州广电计量检测股份有限公司 A kind of performance testing device and method of inertial navigation system
CN111197982A (en) * 2020-01-10 2020-05-26 北京航天众信科技有限公司 Heading machine pose deviation rectifying method, system and terminal based on vision and strapdown inertial navigation
CN111780748A (en) * 2020-05-16 2020-10-16 北京航天众信科技有限公司 Heading machine pose deviation rectifying method and system based on binocular vision and strapdown inertial navigation
WO2022088613A1 (en) * 2020-10-26 2022-05-05 北京市商汤科技开发有限公司 Robot positioning method and apparatus, device and storage medium
CN115022604A (en) * 2022-04-21 2022-09-06 新华智云科技有限公司 Projection book and use method and system thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101782642A (en) * 2010-03-09 2010-07-21 山东大学 Method and device for absolutely positioning measurement target by multi-sensor fusion
CN102104791A (en) * 2009-12-17 2011-06-22 财团法人工业技术研究院 Video camera calibration system and coordinate data generation system, and method thereof
US20110158475A1 (en) * 2009-07-17 2011-06-30 Kabushiki Kaisha Topcon Position Measuring Method And Position Measuring Instrument



Also Published As

Publication number Publication date
CN103438904B (en) 2016-12-28


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231026

Address after: C-128, No. 69 Shuangfeng Road, Economic Development Zone, Pukou District, Nanjing City, Jiangsu Province, 210000

Patentee after: Jiangsu Qingqiu Intelligent Manufacturing Co.,Ltd.

Address before: 518049, Guangdong, Shenzhen Futian District Meilin Road, No. three, blue sky green homes on the third floor of the annex

Patentee before: SHENZHEN YUHENG INTERACTIVE TECHNOLOGY DEVELOPMENT Co.,Ltd.