CN208751577U - A kind of robot indoor locating system - Google Patents


Info

Publication number
CN208751577U
CN208751577U (application CN201821540787.0U)
Authority
CN
China
Prior art keywords
sensor
robot
axis
inertial
camera
Prior art date
Legal status
Expired - Fee Related
Application number
CN201821540787.0U
Other languages
Chinese (zh)
Inventor
施敏慧
马国军
何康
缪晓磊
陈扣化
Current Assignee
Jiangyin City Leiao Robot Technology Co Ltd
Original Assignee
Jiangyin City Leiao Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jiangyin City Leiao Robot Technology Co Ltd
Priority to CN201821540787.0U
Application granted
Publication of CN208751577U


Abstract

The utility model discloses a robot indoor positioning system comprising an inertial sensor, a sensor control module, a host computer, and a visual sensor, arranged on a mobile robot and connected in sequence. As the robot moves, the inertial sensor generates inertial data in real time and the visual sensor captures scene images in real time. The sensor control module is connected to the inertial sensor and handles its operating-mode control and data transceiving; the host computer is connected to the sensor control module and processes the received data. The host computer also communicates with the visual sensor to acquire and process the captured images. Together, the inertial sensor and the visual sensor are used to locate and track the robot's trajectory. The utility model combines the advantages of the two classes of sensors: it is more robust, can reconstruct the scene in three dimensions, achieves higher overall positioning accuracy, and is simple and flexible to build in hardware, giving it high practical value.

Description

A kind of robot indoor locating system
Technical field
The utility model relates to an indoor positioning system for robots.
Background art
For robots to be usable in everyday life, the underlying algorithms, beyond the complexities of mechanical construction, drive mechanisms, and control systems, are the core of their intelligence. A key technology is autonomous navigation: when a robot moves through a real environment, it must know at least three things about itself: "Where am I?", "What do my surroundings look like?", and "How do I get to the specified place?"
A visual sensor is the closest analogue to the way humans acquire images of the outside world. A visual sensor (typically a camera) can provide the robot with rich environmental feature information, and camera data does not drift. Feature-matching algorithms can recover the camera motion between two frames, and three-dimensional environment information can also be reconstructed. However, when the scene is poorly textured or the camera moves too fast, feature matching on the captured images suffers an excessively high mismatch rate or fails outright, so the computed pose information is erroneous, and the images are difficult to use for three-dimensional scene reconstruction.
Positioning with an inertial sensor can provide a good initial motion state, and an inertial sensor can also provide the direction of gravity. But when positioning relies on an inertial sensor alone, accuracy is high only over short time spans; over longer periods the result drifts excessively. In general, the drift of an inertial sensor's angle measurement grows linearly with time, while the drift of its position measurement grows quadratically with time.
The patent application with publication number CN107560613A discloses an indoor robot trajectory tracking system and method based on a nine-axis inertial sensor. The method introduces a PI control algorithm: the attitude matrix computed from the gyroscope is used to transform the accelerometer and magnetometer measurements between coordinate systems, and the transformation error generated in this conversion is used to correct the attitude matrix, improving overall positioning accuracy. The method has the following disadvantages: 1) the positioning result accumulates error over long runs; 2) it relies solely on inertial sensor data, which is unreliable in environments with strong electromagnetic interference; 3) it can only track the robot's trajectory and cannot reconstruct an environment map, so it cannot satisfy the robot's need for an environment map when navigating and positioning.
To improve the accuracy of indoor robot positioning and to meet the need for an environment map during robot navigation and positioning, the advantages of both inertial and visual sensors are fused, improving the accuracy and robustness of the localization algorithm while also enabling reconstruction of a three-dimensional map of the scene.
Summary of the invention
The purpose of the utility model is to provide a robot indoor positioning system and localization method that improve the accuracy and robustness of indoor robot positioning and produce a three-dimensional map of the scene.
The utility model uses a nine-axis inertial sensor composed of a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer as the inertial sensing module. A quaternion-based description of the rotation relationship between coordinate systems is introduced, and the outputs of the three-axis accelerometer and the three-axis magnetometer are used to correct the output of the three-axis gyroscope, yielding an accurate rotation relationship. This rotation relationship also contains the robot's heading information. The acquired three-axis acceleration values are transformed between coordinate systems and have gravity removed; integrating the processed three-axis acceleration values yields the velocity of the robot during motion. A stationary-period (i.e., zero-velocity) detection method is used to detect the stationary intervals during the robot's motion; the detected stationary intervals are used to correct the velocity, and integrating the corrected velocity once more yields the robot's displacement during motion.
The utility model first performs calibration processing on the camera fixed on the robot; after calibration, the calibrated parameters are used to rectify the captured scene images and remove image distortion. A feature-extraction algorithm is applied to the rectified images to extract the feature points between consecutive frames, and the robot's motion state is estimated from the change in relation between those feature points. A pose graph is then constructed to describe the relationship between the robot's motion and the sensor observations; graph optimization, solved with the g2o graph-optimization library, refines the initial pose into a more accurate robot pose, on which basis a three-dimensional scene map is constructed. Finally, the inertial data and the vision data are fused to localize the robot.
To achieve the above purpose, the utility model adopts the following technical solution.
A robot indoor positioning system comprises a mobile robot and, connected in sequence and arranged on the robot, an inertial sensor, a sensor control module, and a host computer, characterized in that: the host computer is additionally connected to a visual sensor arranged on the robot. When the robot moves, the inertial sensor generates three-axis acceleration values, three-axis angular rate values, and three-axis magnetic field intensity values in real time, and the visual sensor captures scene images of the motion in real time. The sensor control module is connected to the inertial sensor and handles its operating-mode control and data transceiving; the host computer is connected to the sensor control module and processes the received data. The host computer also communicates with the visual sensor to acquire and process image information of the environment. The inertial sensor and the visual sensor together are used to locate and track the robot's trajectory. The inertial sensor is a nine-axis inertial sensor integrating a three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer; or a six-axis inertial sensor integrating a three-axis accelerometer and three-axis gyroscope plus a separate three-axis magnetometer, together forming a nine-axis unit; or separately packaged three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer whose data are combined to achieve the effect of a nine-axis inertial sensor. The sensor control module is an ARM processor, a single-chip microcomputer, or a microprocessor. The visual sensor is a monocular camera, a binocular camera, an RGB-D camera, or another visual device capable of capturing scene images.
A localization method for the robot indoor positioning system of the utility model comprises the following steps:
Step 1: the host computer acquires the nine-axis inertial data generated while the robot moves and pre-processes the data to remove noise. The specific method is:
Step 1-1: the host computer reads the nine-axis inertial sensor data at sampling frequency Fs and saves the read data into a csv file;
Step 1-2: the saved csv file is imported into the host computer software, and the three-axis acceleration values are filtered with a Butterworth low-pass filter;
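Step 1-2 can be sketched without signal-processing libraries; the second-order Butterworth section below is derived with the bilinear transform (scipy.signal.butter would be the usual tool). The 3 Hz cutoff comes from the embodiment; the 100 Hz sampling rate and the test signal are assumptions for illustration.

```python
import math

def butter2_lowpass(fc, fs):
    """Second-order Butterworth low-pass coefficients via the bilinear transform."""
    k = math.tan(math.pi * fc / fs)               # pre-warped analog cutoff
    n = 1.0 / (1.0 + math.sqrt(2.0) * k + k * k)
    b = (k * k * n, 2.0 * k * k * n, k * k * n)   # numerator b0, b1, b2
    a = (2.0 * n * (k * k - 1.0),                 # denominator a1, a2 (a0 = 1)
         n * (1.0 - math.sqrt(2.0) * k + k * k))
    return b, a

def lowpass(x, b, a):
    """Causal direct-form II transposed style filtering of a sample list."""
    y = []
    for i, xi in enumerate(x):
        x1 = x[i - 1] if i >= 1 else 0.0
        x2 = x[i - 2] if i >= 2 else 0.0
        y1 = y[i - 1] if i >= 1 else 0.0
        y2 = y[i - 2] if i >= 2 else 0.0
        y.append(b[0] * xi + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2)
    return y

# Smooth a constant 1 m/s^2 acceleration trace carrying 25 Hz vibration noise,
# sampled at an assumed 100 Hz, with the 3 Hz cutoff from the embodiment.
b, a = butter2_lowpass(fc=3.0, fs=100.0)
raw = [1.0 + 0.2 * math.sin(2 * math.pi * 25.0 * i / 100.0) for i in range(400)]
smooth = lowpass(raw, b, a)
```

The DC gain of the section is exactly 1, so the underlying constant acceleration survives while the 25 Hz vibration is attenuated by roughly two orders of magnitude.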
Step 2: a quaternion method is used to describe the rotation matrix that transforms the sensor module's output from the carrier coordinate system to the navigation coordinate system. The specific method is:
Step 2-1: the coordinate system of the inertial measurement unit composed of the three-axis accelerometer, three-axis magnetometer, and three-axis gyroscope is chosen as the carrier coordinate system b; the origin of this coordinate system is the carrier's center of gravity, and its three axes correspond to the carrier's transverse, longitudinal, and vertical axes. The geographic coordinate system is chosen as the navigation reference coordinate system, denoted n, and the rotation relationship between the n system and the b system is expressed with the quaternion method;
Step 2-2: the rotation-quaternion update equation is obtained by solving the quaternion differential equation;
Step 2-3: the rotation-matrix update equation is obtained from the quaternion update equation;
Step 3: the three-axis angular rate values are corrected with the three-axis acceleration values and three-axis magnetic field intensity values to obtain a more accurate rotation matrix. The specific method is:
Step 3-1: using the three-axis acceleration values, the error matrix err_a is obtained from the cross product
err_a = v × a
where vx, vy, vz are the components of the gravitational acceleration in the navigation coordinate system (n system) after transformation by the rotation matrix into the carrier coordinate system (b system), and ax, ay, az are the gravitational acceleration outputs measured by the sensor module along the three axes of the carrier coordinate system;
Step 3-2: using the three-axis magnetometer data, the error matrix err_m is obtained from the cross product
err_m = m × c
where mx, my, mz are the components of the magnetic field intensity in the navigation coordinate system (n system) after transformation by the rotation matrix into the carrier coordinate system (b system), and cx, cy, cz are the magnetic field intensity outputs measured by the sensor module along the three axes of the carrier coordinate system;
Step 3-3: the total error err is obtained by matrix addition of err_a and err_m;
Step 3-4: using a PI algorithm, the error err corrects the output of the three-axis gyroscope:
gyro_new = gyro + kp·err + ki·err·Δt
where gyro is the output of the three-axis gyroscope, gyro_new is the corrected three-axis gyroscope value, kp is the proportional gain of the PI algorithm, ki is the integral gain, and Δt is the sampling period;
Step 3-5: substituting the corrected three-axis gyroscope data back into step 2, a more accurate rotation matrix is obtained by the quaternion method;
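Steps 2-2 through 3-5 amount to a Mahony-style complementary filter. Below is a minimal one-step sketch with the accelerometer error term only (the magnetometer term is analogous); the gains and sensor readings are placeholder values, not values from the patent.

```python
import math

def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def quat_update(q, gyro, dt):
    """Formula (4) style update: q <- normalize(q + (dt/2) * q (x) (0, wx, wy, wz))."""
    dq = quat_mul(q, (0.0,) + tuple(gyro))
    qn = [qi + 0.5 * dt * di for qi, di in zip(q, dq)]
    norm = math.sqrt(sum(c * c for c in qn))
    return tuple(c / norm for c in qn)

def gravity_in_body(q):
    """Third column of C_n^b: the gravity direction (0, 0, 1) predicted in the b frame."""
    w, x, y, z = q
    return (2*(x*z - w*y), 2*(y*z + w*x), w*w - x*x - y*y + z*z)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def correct_gyro(q, gyro, accel, kp, ki, err_int, dt):
    """err_a = v x a, then gyro_new = gyro + kp*err + ki*integral(err)."""
    norm = math.sqrt(sum(c * c for c in accel)) or 1.0
    a = tuple(c / norm for c in accel)       # normalized measured gravity
    err = cross(gravity_in_body(q), a)       # accelerometer error term only
    err_int = tuple(ei + e * dt for ei, e in zip(err_int, err))
    gyro_new = tuple(g + kp * e + ki * i for g, e, i in zip(gyro, err, err_int))
    return gyro_new, err_int

# Level attitude and a perfect accelerometer: the error is zero, so the gyro
# reading passes through unchanged and then drives one quaternion update.
q = (1.0, 0.0, 0.0, 0.0)
gyro_new, _ = correct_gyro(q, (0.01, 0.0, 0.0), (0.0, 0.0, 9.81),
                           kp=0.5, ki=0.01, err_int=(0.0, 0.0, 0.0), dt=0.01)
q2 = quat_update(q, gyro_new, 0.01)
```

When the predicted and measured gravity directions disagree, the cross-product error tilts the estimated attitude back toward the accelerometer reading, which is exactly the role of formulas (5) through (8).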
Step 4: the three-axis acceleration values in the carrier coordinate system are transformed into the navigation coordinate system by the rotation matrix, and the transformed three-axis acceleration values are integrated to obtain the velocity of the robot during motion. The specific method is:
Step 4-1: the three-axis acceleration values in the carrier coordinate system are transformed onto the geographic coordinate system by the rotation matrix;
Step 4-2: the three-axis acceleration values are unit-converted, gravity is removed to obtain the velocity update equation, and the integration drift is removed to correct the velocity update equation, as follows:
Step 4-2-1: the units of the three-axis acceleration values are converted to m/s²;
Step 4-2-2: the influence of the intrinsic 1 g vertical gravity component is removed from the Z-axis acceleration value;
Step 4-2-3: the gravity-free three-axis acceleration values are integrated to obtain the velocity update equation;
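Steps 4-2-1 through 4-2-3 can be sketched as rectangular-rule integration of gravity-compensated navigation-frame samples. The sample values and rates below are illustrative, not from the patent.

```python
def remove_gravity(acc_n, g=9.81):
    """Subtract the 1 g component from the Z axis of each navigation-frame sample."""
    return [(ax, ay, az - g) for ax, ay, az in acc_n]

def integrate_velocity(acc, dt, v0=(0.0, 0.0, 0.0)):
    """vel(t) = vel(t-1) + acc(t)*dt, applied per axis (formula (13) style)."""
    vel = [v0]
    for a in acc:
        prev = vel[-1]
        vel.append(tuple(p + ai * dt for p, ai in zip(prev, a)))
    return vel[1:]

# One second of samples at 100 Hz: 1 m/s^2 forward, gravity still present on Z.
dt = 0.01
samples = [(1.0, 0.0, 9.81)] * 100
vel = integrate_velocity(remove_gravity(samples), dt)
```

After one second of constant 1 m/s² forward acceleration the X velocity reaches 1 m/s, while the Z velocity stays at zero because the gravity component was removed before integrating.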
Step 5: the three-axis acceleration values are used to judge whether the robot is in a stationary period, i.e., at zero velocity; the stationary periods are used to correct the velocity, and integrating the corrected velocity again yields the robot's displacement during motion. The specific method is:
Step 5-1: a threshold a_min is set, and stationary periods are found by numerically comparing the three-axis acceleration values against it;
Step 5-2: the integration drift d produced by the velocity not being zero during a stationary period is computed as
d = dr·dt
where dr is the drift rate and dt is the length of the stationary period, i.e., the drift duration;
Step 5-3: subtracting the computed integration drift from the velocity obtained in step 4 yields the corrected velocity update equation;
Step 5-4: integrating the corrected velocity again yields the displacement update equation;
The method is characterized by further comprising:
Step 6: the camera is pre-processed to obtain the camera parameters used for image rectification;
Step 7: feature points are extracted from the rectified images, matched, and outliers removed;
Step 8: using the matched feature points, the camera motion transformation is computed from the geometric relationship between the feature points;
Step 9: a pose graph is constructed from the acquired motion transformations;
Step 10: the constructed pose-graph optimization problem is solved to correct the camera motion pose, and the scene map is reconstructed;
Step 11: the inertial data and the vision data are fused to complete the accurate positioning of the robot.
Further preferably, the acquisition of camera parameters in step 6 specifically comprises:
Step 1: a calibration board is made from a printed black-and-white checkerboard pattern;
Step 2: the calibration board is photographed with the camera from different angles, and the captured images are saved;
Step 3: a corner detection algorithm detects the corners in the images and saves the corner coordinates;
Step 4: camera calibration is performed from the relationship between the saved corner coordinates and the true corner coordinates on the original calibration board;
Step 5: image rectification is performed with the calibrated camera parameters; the rectified images are used in the subsequent feature point extraction steps.
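As a sketch of how calibrated distortion parameters are applied in Step 5, the two-coefficient radial model can be applied and inverted by fixed-point iteration; the coefficient values and point below are made up, and a real pipeline would normally use OpenCV's calibrateCamera and undistort instead.

```python
def distort(p, k1, k2):
    """Map an undistorted normalized image point to its radially distorted position."""
    x, y = p
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * f, y * f)

def undistort(p, k1, k2, iters=20):
    """Invert distort() by fixed-point iteration, as done during image rectification."""
    x, y = p
    xu, yu = x, y
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / f, y / f
    return (xu, yu)

# Hypothetical calibrated coefficients; the round trip recovers the original point.
k1, k2 = -0.28, 0.07
pt = (0.31, -0.22)
back = undistort(distort(pt, k1, k2), k1, k2)
```

For mild distortion the iteration contracts quickly, so twenty iterations recover the undistorted point to well below a pixel in normalized coordinates.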
Further preferably, the feature point extraction, matching, and outlier removal in step 7 specifically comprise:
Step 1: a feature point extraction algorithm processes the rectified images and extracts the feature points;
Step 2: a feature point matching algorithm processes the feature points of two images to obtain the matching feature points between the images;
Step 3: mismatch rejection is applied to the matched points to remove incorrect matches from the matching result.
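The matching of Step 2 and the rejection of Step 3 can be sketched with toy binary descriptors, using Hamming distance (as for ORB-style features) and a mutual cross-check as a simple mismatch filter; the 8-bit descriptor values are made up.

```python
def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def match(desc_a, desc_b):
    """Nearest-neighbour matching with a mutual cross-check to reject mismatches."""
    best_ab = [min(range(len(desc_b)), key=lambda j: hamming(a, desc_b[j]))
               for a in desc_a]
    best_ba = [min(range(len(desc_a)), key=lambda i: hamming(b, desc_a[i]))
               for b in desc_b]
    # Keep a pair only if each descriptor is the other's nearest neighbour.
    return [(i, j) for i, j in enumerate(best_ab) if best_ba[j] == i]

# Toy 8-bit descriptors: two clean correspondences plus one ambiguous point.
img1 = [0b10110010, 0b01001101, 0b11110000]
img2 = [0b01001100, 0b10110011]   # near-copies of img1[1] and img1[0]
pairs = match(img1, img2)
```

The ambiguous third point in the first image matches something in the second image, but the cross-check discards it because the relationship is not mutual, which is the spirit of the mismatch rejection step.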
Further preferably, the computation of the camera motion transformation in step 8 specifically comprises:
Step 1: the epipolar geometry between correctly matched feature points is constructed from the inherent projective constraints between them;
Step 2: analysis of the epipolar geometry yields the fundamental matrix and the essential matrix, which encode the camera motion information;
Step 3: according to the mathematical relationship between the fundamental matrix and the essential matrix, the essential matrix is chosen for the computation, from which the camera motion is recovered.
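Step 2's claim that the essential matrix encodes the camera motion can be checked numerically: building E = [t]x·R from a known motion, corresponding normalized image points satisfy the epipolar constraint x2ᵀ·E·x1 = 0. The rotation, translation, and 3D point below are assumed values for illustration.

```python
import math

def cross_matrix(t):
    """Skew-symmetric matrix [t]x such that [t]x v = t x v."""
    tx, ty, tz = t
    return [[0.0, -tz, ty], [tz, 0.0, -tx], [-ty, tx, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Second camera: rotation about Y by 10 degrees plus a translation t.
th = math.radians(10.0)
R = [[math.cos(th), 0.0, math.sin(th)],
     [0.0, 1.0, 0.0],
     [-math.sin(th), 0.0, math.cos(th)]]
t = [0.5, 0.1, 0.0]
E = matmul(cross_matrix(t), R)   # essential matrix E = [t]x R

# A 3D point seen in both views, in normalized image coordinates x = X / Z.
X1 = [0.4, -0.3, 3.0]
X2 = [c + ti for c, ti in zip(matvec(R, X1), t)]
x1 = [X1[0] / X1[2], X1[1] / X1[2], 1.0]
x2 = [X2[0] / X2[2], X2[1] / X2[2], 1.0]
residual = dot(x2, matvec(E, x1))   # epipolar constraint: x2^T E x1 = 0
```

The residual vanishes for any true correspondence, which is why mismatches (whose residuals do not vanish) can be rejected and why E can be estimated from enough matched pairs.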
Further preferably, correcting the camera motion via the pose graph in steps 9 and 10 specifically comprises:
Step 1: a graph optimization algorithm constructs the constraint relationships between the camera motion and the image data, and a least-squares expression is built from these constraints;
Step 2: minimizing this expression minimizes the error between the estimated camera motion and the true motion;
Step 3: iterating the computation finds the camera motion at minimum error, i.e., the camera position and attitude information;
Step 4: on the basis of the optimal camera motion information, triangulation computes the depth of the feature points, yielding their three-dimensional coordinates, from which a three-dimensional point cloud map can be constructed.
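The patent does not fix a particular triangulation algorithm for Step 4; the midpoint method below is one simple stand-in, recovering a point from its two viewing rays under an assumed known camera motion (same illustrative rotation and translation as the epipolar example).

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add(u, v): return [a + b for a, b in zip(u, v)]
def scale(u, s): return [a * s for a in u]

def rot_T(R, v):
    """R^T v for a 3x3 rotation given as row lists."""
    return [sum(R[k][i] * v[k] for k in range(3)) for i in range(3)]

def triangulate_midpoint(x1, x2, R, t):
    """Midpoint of the closest approach of the two viewing rays (camera-1 frame)."""
    d1 = x1                            # ray 1: s * d1 from the origin
    c2 = rot_T(R, scale(t, -1.0))      # camera-2 centre: -R^T t
    d2 = rot_T(R, x2)                  # ray 2: c2 + u * d2
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e1, e2 = dot(d1, c2), dot(d2, c2)
    den = a * c - b * b                # zero only for parallel rays
    s = (c * e1 - b * e2) / den        # depth along ray 1
    u = (b * e1 - a * e2) / den        # depth along ray 2
    p1 = scale(d1, s)
    p2 = add(c2, scale(d2, u))
    return scale(add(p1, p2), 0.5)

# Same assumed two-view setup: rotation about Y by 10 degrees, translation t.
th = math.radians(10.0)
R = [[math.cos(th), 0.0, math.sin(th)],
     [0.0, 1.0, 0.0],
     [-math.sin(th), 0.0, math.cos(th)]]
t = [0.5, 0.1, 0.0]
X = [0.4, -0.3, 3.0]
Xc2 = add([sum(R[i][k] * X[k] for k in range(3)) for i in range(3)], t)
x1 = [X[0] / X[2], X[1] / X[2], 1.0]
x2 = [Xc2[0] / Xc2[2], Xc2[1] / Xc2[2], 1.0]
P = triangulate_midpoint(x1, x2, R, t)
```

With noise-free observations the two rays intersect exactly and the midpoint is the original 3D point; with noisy matches the midpoint gives a reasonable depth estimate, which is what populates the point cloud map.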
Further preferably, in step 11 the robot motion state solved from the inertial data and the motion state solved from the vision data are fused under constraints, making the positioning of the robot more accurate.
Compared with the prior art, the utility model has the following advantages and beneficial effects:
(1) the utility model combines the advantages of inertial navigation positioning and visual sensing to locate and track indoor robot motion, and can reconstruct an environment map;
(2) the utility model not only locates and tracks the robot but can also reconstruct the scene in three dimensions; the reconstructed scene map can be used for path planning of the robot's motion;
(3) the utility model overcomes the unreliability of relying on the single data source of an inertial-only system, and therefore has higher practical value.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of a robot indoor positioning system;
Fig. 2 is a flow diagram of the inertial-data positioning portion of the robot indoor localization method;
Fig. 3 is the flow chart of the rotation matrix computation portion;
Fig. 4 is the partial flow chart of the rotation matrix correction;
Fig. 5 is the partial flow chart of the velocity update equation computation;
Fig. 6 is the partial flow chart of the velocity update equation correction;
Fig. 7 is the partial flow chart of the displacement update equation computation;
Fig. 8 is the camera calibration flow chart;
Fig. 9 is the epipolar geometry relationship diagram;
Fig. 10 is the constructed pose graph.
Specific embodiment
The technical solution of the utility model is described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, a structural schematic diagram of a robot indoor positioning system of the utility model: the robot indoor positioning system comprises a mobile robot and, connected in sequence and arranged on the robot, an inertial sensor, a sensor control module, and a host computer, characterized in that the host computer is additionally connected to a visual sensor arranged on the robot. When the robot moves, the inertial sensor generates three-axis acceleration values, three-axis angular rate values, and three-axis magnetic field intensity values in real time, and the visual sensor captures scene images of the motion in real time. The sensor control module is connected to the inertial sensor and handles its operating-mode control and data transceiving; the host computer is connected to the sensor control module and processes the received data; the host computer also communicates with the visual sensor to acquire and process image information of the environment. The inertial sensor and the visual sensor together locate and track the robot's trajectory. The inertial sensor is a nine-axis inertial sensor integrating a three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer; or a six-axis inertial sensor integrating a three-axis accelerometer and three-axis gyroscope plus a separate three-axis magnetometer, forming a nine-axis unit; or separately packaged three-axis accelerometer, three-axis gyroscope, and three-axis magnetometer whose data are combined to achieve the effect of a nine-axis inertial sensor. The sensor control module is an ARM processor, single-chip microcomputer, or microprocessor. The visual sensor is a monocular camera, binocular camera, RGB-D camera, or other visual device capable of capturing scene images.
This embodiment takes a monocular camera as an example: the monocular camera is connected to the host computer via USB and receives in real time the scene images taken during the robot's motion.
An ARM processor, single-chip microcomputer, or microprocessor may be used as the sensor control module. The input of the sensor control module is connected to the inertial sensor output via a serial port; the module configures the inertial sensor's operating mode, receives the data the inertial sensor generates, and finally transmits the received data to the host computer via USB.
The host computer is preferably a notebook personal computer on which software for analyzing and processing the experimental data is installed.
Any mobile robot may be chosen. Here the miniature mobile robot TurtleBot, produced by Willow Garage of the U.S., is taken as an example. The robot is compact and flexible, with a three-layer structure: the lower layer is the mobile robot chassis, while the middle and upper layers carry load-bearing discs supported and fixed by brackets between the layers. The inertial sensor and the sensor control module can be fixed on the robot's middle-layer disc, a notebook can be placed on the upper disc, and the visual sensor can then be fixed on the notebook.
A localization method of the robot indoor positioning system of the utility model is described with reference to Fig. 2, the flow diagram of the inertial-data positioning portion of the localization method; Fig. 3, the flow chart of the rotation matrix computation; Fig. 4, the partial flow chart of the rotation matrix correction; Fig. 5, the partial flow chart of the velocity update equation computation; Fig. 6, the partial flow chart of the velocity update equation correction; Fig. 7, the partial flow chart of the displacement update equation computation; Fig. 8, the camera calibration flow chart; Fig. 9, the epipolar geometry diagram of the feature points; and Fig. 10, the constructed pose graph. The method specifically comprises the following steps:
(1) The host computer acquires the nine-axis inertial data generated while the robot moves and pre-processes the data to remove noise.
This specifically includes the following:
a) The nine-axis inertial sensing unit is fixed on the robot; the host computer reads, at sampling frequency Fs, the data the nine-axis inertial sensing unit generates while the robot moves, and saves the data as a csv file.
b) The saved csv file is read with MATLAB. To detect the stationary periods in the robot's motion from the three-axis acceleration values, those values must first be filtered. To observe the robot's environment in real time, the robot's speed is controlled to lie between a normal person's walking speed and running speed; the cadence of normal walking is between 1 Hz and 2.4 Hz, and the cadence of running reaches up to 3.5 Hz. To accurately detect all stationary periods during motion, a Butterworth low-pass filter with a passband cutoff frequency of 3 Hz and a stopband cutoff frequency of 10 Hz is applied to the three-axis acceleration values; the filter window of this filter is shown in Fig. 2.
(2) A quaternion method is used to describe the rotation matrix that transforms the sensor values from the carrier coordinate system to the navigation coordinate system.
The specific steps are as follows:
Step 1: in the navigation system, the coordinate system of the inertial measurement unit composed of the three-axis accelerometer, three-axis magnetometer, and three-axis gyroscope is usually chosen as the carrier coordinate system b; the origin of this coordinate system is the carrier's center of gravity, and its three axes correspond to the carrier's transverse, longitudinal, and vertical axes. The geographic coordinate system is chosen as the navigation reference coordinate system and is generally denoted n. The quaternion method conveniently expresses the transformation from the n system to the b system:
q = q0 + q1·i + q2·j + q3·k  (1)
The rotation matrix from the n system to the b system is then expressed as:
        | q0²+q1²−q2²−q3²   2(q1q2+q0q3)      2(q1q3−q0q2)    |
C_n^b = | 2(q1q2−q0q3)      q0²−q1²+q2²−q3²   2(q2q3+q0q1)    |  (2)
        | 2(q1q3+q0q2)      2(q2q3−q0q1)      q0²−q1²−q2²+q3² |
Step 2: the rotation quaternion can be solved from the quaternion differential equation:
q̇ = (1/2)·q ⊗ ω  (3)
where ⊗ denotes quaternion multiplication and ω = (0, ωx, ωy, ωz) is the three-axis angular rate of the carrier coordinate system b relative to the navigation coordinate system n. If the sampling period is Δt, the update equation of the quaternion q is:
q(t+Δt) = q(t) + (Δt/2)·q(t) ⊗ ω  (4)
Updating the rotation matrix is achieved through this update equation of the quaternion q.
(3) The outputs of the three-axis accelerometer and the three-axis magnetometer are used to correct the output of the three-axis gyroscope and obtain a more accurate rotation matrix.
The specific correction steps are as follows:
Assume the theoretical output of the gravitational acceleration vector in the navigation coordinate system (n system) is the unit vector ĝ = (0, 0, 1) (the circumflex ^ denotes normalization). Transformed by the rotation matrix C_n^b, its value in the carrier coordinate system (b system) is v = (vx, vy, vz), while the normalized gravitational acceleration measured by the three-axis accelerometer in the carrier coordinate system is a = (ax, ay, az). If C_n^b were exact, then v would equal a; since C_n^b contains error, this error is used for the correction. Taking the cross product of the two vectors gives the error matrix:
err_a = v × a  (5)
Likewise, assume the normalized magnetic field intensity in the n system, transformed by the rotation matrix into the b system, is m = (mx, my, mz), while the normalized magnetic field measured by the three-axis magnetometer in the carrier coordinate system is c = (cx, cy, cz). The error matrix is then:
err_m = m × c  (6)
The total transformation error is then:
err = err_a + err_m  (7)
Denoting the compensated three-axis gyroscope value gyro_new, the proportional gain of the PI algorithm kp, and the integral gain ki, we have:
gyro_new = gyro + kp·err + ki·err·Δt  (8)
Substituting the corrected three-axis gyroscope value into formula (3) corrects the rotation quaternion and thereby the rotation matrix.
(4) The three-axis acceleration values in the carrier coordinate system are transformed into the navigation coordinate system by the rotation matrix, and the transformed three-axis acceleration values are integrated to obtain the velocity of the robot during motion.
The specific steps are as follows:
Step 1: the three-axis acceleration values in the carrier coordinate system are transformed onto the geographic coordinate system by the inverse of the corrected rotation matrix obtained in step (3):
a_n = C_b^n·a_b  (9)
where a_n is the acceleration output along the three axes in the geographic coordinate system and a_b is the acceleration output along the three axes in the carrier coordinate system. C_b^n, the rotation matrix from the carrier coordinate system (b system) to the geographic coordinate system (n system), is expressed in terms of the quaternion as the transpose of the matrix of formula (2):
C_b^n = (C_n^b)^T  (10)
Step 2: a_n is unit-converted, gravity is removed to obtain the velocity update equation, and the integration drift is removed to correct the velocity update equation. The solution procedure is as follows:
Step ①: convert the units of a_n to m/s²; the unit-converted three-axis acceleration output is denoted acc:
acc = a_n·9.81  (11)
Step ②: remove the influence of the intrinsic 1 g vertical gravity component; denoting the Z-axis accelerometer output in acc as accz:
accz = accz − 9.81  (12)
Step ③: integrate the gravity-free three-axis acceleration values to obtain the velocity update equation:
vel(t) = vel(t−1) + acc(t)·Δt  (13)
where Δt is the sampling period, acc(t) is the three-axis acceleration value at time t, and vel(t) is the required velocity value at time t.
(5) judge whether robot is in idle period by 3-axis acceleration value, i.e., whether be zero velocity.Utilize this Idle period is modified speed, to the integral operation again of revised speed, then can get in robot moving process Displacement information.Specific step is as follows:
Step 1: judging idle period by 3-axis acceleration value.Solution procedure is as follows:
If t moment x, y, z 3-axis acceleration value is respectively accx (t), accy (t), accz (t), then three axis of t moment Acceleration value vector sum are as follows:
Set a minimum threshold amin, when the 3-axis acceleration value vector sum modulus value of t moment is 0 or is less than or equal to this Then think that robot is in idle period at this time when threshold value;Idle period is indicated with stationary, is examined when meeting idle period The punctual stationary of mark is 1, and stationary is 0 in the case of other, then then having:
Step 2: compute the integration drift d produced when the velocity during an idle period is not 0. The solution procedure is as follows:
d = dr·dt (16)
In the formula, dr is the drift rate and dt is the length of the idle period, i.e. the drift duration; i and j are respectively the start and end of the idle period, and vel_stationary(j−1) is the velocity sample immediately preceding the stationary interval.
Step 3: correct the velocity obtained in step (4) using the integration drift. Denoting the corrected velocity at time t as v(t), we have:
v(t) = vel(t) − d, when stationary(t) = 0 (17)
v(t) = 0, when stationary(t) = 1 (18)
In the formulas above, the velocity is set to 0 whenever stationary is 1, i.e. during an idle period; in all other periods the velocity is computed from the update formula.
Step 4: integrate the corrected velocity again; the update equation for the displacement pos(t) at time t is then:
pos(t) = pos(t−1) + v(t)·Δt (19)
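The zero-velocity detection and displacement integration of steps (5).1-(5).4 can be sketched as follows (a simplified illustration: the drift correction of eq. (16) is reduced to zeroing the velocity in static intervals, and the threshold value is an assumption):

```python
import numpy as np

def zupt_displacement(acc, vel, dt, a_min=0.05):
    """Zero-velocity-update sketch for steps (5).1-(5).4.

    acc: (N, 3) accelerations with gravity removed, m/s^2
    vel: (N, 3) velocities integrated as in eq. (13)
    dt:  sampling period, s
    """
    # eqs. (14)-(15): stationary flag from the acceleration vector modulus
    stationary = np.linalg.norm(acc, axis=1) <= a_min
    v = vel.copy()
    v[stationary] = 0.0                 # eq. (18): zero velocity while static
    # eq. (19): pos(t) = pos(t-1) + v(t) * dt
    pos = np.cumsum(v * dt, axis=0)
    return stationary, v, pos
```

In practice the residual velocity just before each stationary interval would also be spread back over the motion as a linear drift correction (eqs. (16)-(17)).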
(6) Calibrate the camera; the calibrated intrinsic and extrinsic parameters are used to correct the acquired images. The specific steps are as follows:
Step 1: in this embodiment the camera is calibrated with Zhang's calibration method using a black-and-white checkerboard calibration board. The board is a 7 × 9 black-and-white checkerboard whose squares have an actual size of 30 mm × 30 mm, giving a total of 48 interior corners; the resolution of the monocular camera is 752 × 480 pixels;
Step 2: to make the calibration board easier to use, the checkerboard is printed on paper and fixed to a wall; at least 20 images of the paper board are shot with the camera from different angles and directions and saved;
Step 3: a corner detection algorithm is used to detect and mark the corners in the images, and the coordinate information of the corners is saved;
Step 4: the camera is calibrated using the relationship between the saved estimated corner coordinates and the true corner coordinates; the intrinsic matrix and the radial and tangential distortion parameters of the camera can thus be calibrated;
Step 5: using the calibrated parameters of the monocular camera, the captured scene images can be corrected to remove the distortion in the images.
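The radial and tangential distortion parameters of step 4 follow the standard model used in Zhang-style calibration; applying it to a normalised image point can be sketched as follows (the coefficient values in the usage example are made-up illustrations, not calibrated values):

```python
def distort_point(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a
    normalised image point (x, y); undistortion inverts this mapping."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

For example, with k1 = 0.1 and all other coefficients zero, the point (0.1, 0) is pushed outward to roughly (0.1001, 0).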
(7) Extract and match feature points in the corrected images and remove outliers. The specific steps are as follows:
Step 1: commonly used feature extraction algorithms include SIFT, SURF, FAST and ORB; any of these algorithms can be used to extract feature points from the images;
Step 2: the feature points of the images are matched between frame and frame using a fast approximate nearest-neighbour search library;
Step 3: mismatched feature points, i.e. outliers, are rejected from the matched feature points using the random sample consensus (RANSAC) algorithm.
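The RANSAC outlier rejection of step 3 can be illustrated with a minimal sketch; here the model is a pure 2D translation between matched point sets rather than the full epipolar model, and the threshold and iteration count are arbitrary illustrative values:

```python
import random
import numpy as np

def ransac_translation(pts1, pts2, thresh=1.0, iters=50, seed=0):
    """Toy RANSAC: hypothesise a 2D translation from one random match,
    score it against all matches, keep the largest inlier set."""
    rng = random.Random(seed)
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    best_mask = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        i = rng.randrange(len(pts1))          # minimal sample: one match
        t = pts2[i] - pts1[i]                 # hypothesised translation
        err = np.linalg.norm(pts2 - (pts1 + t), axis=1)
        mask = err < thresh                   # inliers agree with the model
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

A gross mismatch disagrees with the translation supported by the majority of matches and is flagged as an outlier.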
(8) Obtain the motion transformation of the camera from the correctly matched point pairs obtained in step (7). The specific steps are as follows:
Step 1: from the correctly matched feature points between two frames, the epipolar geometry shown in Figure 9 can be constructed through the intrinsic epipolar constraint:
The image planes of the two frames are I_l and I_r respectively; moving from frame I_l to frame I_r requires a rotation transformation R and a translation transformation t. O_l and O_r are the positions of the camera's optical centre for the two frames. Suppose P_l is a feature point in the plane I_l, matched by the feature matching algorithm to the corresponding point P_r in the plane I_r. Assuming the match between P_l and P_r is correct and there is no mismatch, P_l and P_r are the projections of the same spatial point onto the two imaging planes. Connecting the segments O_l P_l and O_r P_r and extending the two segments, they intersect at a point P; the plane formed by the three points P, O_l and O_r is called the epipolar plane. The line O_l O_r is called the baseline; it intersects the planes I_l and I_r at the points e_l and e_r, which are called the epipoles. The rays e_l P_l and e_r P_r are respectively the intersections of the epipolar plane with the imaging planes I_l and I_r.
Step 2: the epipolar geometry involves the fundamental matrix and the essential matrix, from which the camera motion between the two frames, consisting of the rotation transformation R and the translation transformation t, can be recovered. The solution procedure is as follows:
Step ①: suppose the coordinates of the spatial point P in the coordinate system of the plane I_l are:
P = [X, Y, Z]^T (20)
Step ②: a monocular camera is usually described by the pinhole camera model, from which the pixel positions of the two feature points P_l and P_r can be computed; after homogeneous normalisation they are:
P_l = KP, P_r = K(RP + t) (21)
In the formula above, K is the camera intrinsic matrix calibrated in step (6), and R and t describe the camera motion between the two frames.
Step ③: let c_1 and c_2 be the normalised coordinates of the pixel coordinates of P_l and P_r:
c_1 = K⁻¹P_l, c_2 = K⁻¹P_r (22)
Step ④: substituting formula (22) into formula (21) gives:
c_2 = Rc_1 + t (23)
Step ⑤: denote the antisymmetric (skew-symmetric) matrix of t as t^. Pre-multiplying both sides of formula (23) by t^ gives:
t^c_2 = t^Rc_1 (24)
Step ⑥: pre-multiplying both sides of formula (24) by c_2^T gives:
c_2^T t^ c_2 = c_2^T t^ R c_1 (25)
Since O_l, P and O_r are coplanar, the left-hand side of the formula above is 0 by the product of perpendicular vectors (t^c_2 is perpendicular to c_2), so we obtain:
c_2^T t^ R c_1 = 0 (26)
Step ⑦: substituting formula (22) into the formula above gives:
P_r^T K^{-T} t^ R K^{-1} P_l = 0 (27)
Step ⑧: formulas (26) and (27) are the two forms of the epipolar constraint, and the constraint contains both the rotation and the translation relationship. Denoting the fundamental matrix by F and the essential matrix by E, we have:
E = t^R, F = K^{-T} E K^{-1} (28)
From formula (28), the fundamental matrix F and the essential matrix E differ only by the camera intrinsic matrix K, and the intrinsic matrix has already been obtained through calibration; therefore the camera rotation matrix R and translation vector t can be obtained by decomposing the essential matrix.
Step 3: decompose the essential matrix to obtain the camera rotation matrix R and translation vector t. The specific steps are as follows:
The essential matrix E is a 3 × 3 matrix containing 9 unknown parameters. The essential matrix has an intrinsic property: after singular value decomposition (SVD), its singular values necessarily have the form [σ, σ, 0]^T. Since a monocular camera cannot determine scale, E multiplied by any non-zero constant still satisfies its definition; this constant can be regarded as a scale factor between different scales, so the singular values of the essential matrix can be set directly to (1, 1, 0), leaving only 8 unknowns in E. Let the SVD of E be:
E = UΣV^T (29)
In formula (29), U and V are orthogonal matrices and Σ is the singular value matrix; from the above, Σ = diag(1, 1, 0). After the SVD, the decomposition of E admits two possible pairs of solutions for R and t:
t_1^ = U R_Z(π/2) Σ U^T, R_1 = U R_Z^T(π/2) V^T; t_2^ = U R_Z(−π/2) Σ U^T, R_2 = U R_Z^T(−π/2) V^T (30)
In formula (30), R_Z(π/2) denotes the matrix obtained by rotating 90° about the Z axis. Since t and −t yield the same essential matrix (E and −E are equivalent), decomposing E gives 4 possible pairs of solutions for R and t. However, any spatial point P must lie in front of the camera imaging plane, so it suffices to compute, for each decomposed R and t, the depth of the spatial points relative to the camera; if the depths of all spatial points solved with a particular pair R and t are all greater than 0, that pair is the required R and t.
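The SVD decomposition of formulas (29)-(30) can be sketched in NumPy as follows (a minimal illustration of the standard four-candidate recipe; here E is synthesised from a known motion so that the recovered candidates can be checked, and the positive-depth check of the last paragraph is left out):

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix t^ of eq. (24)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def decompose_essential(E):
    """Return the four candidate (R, t) pairs of eq. (30)."""
    U, _, Vt = np.linalg.svd(E)
    # force proper rotations (det = +1) before forming the candidates
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])   # R_Z(pi/2)
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]            # translation direction, up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Building E = t^R from R = I and t = e_z, one of the four candidates recovers the identity rotation and the translation direction (up to sign), as the text describes.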
(9) After the motion transformation between frame and frame is obtained in step (8), a pose graph is built to describe the relationship between the motion of the carrier robot (or camera) and the observations of the camera sensor; this constrains the camera motion between two frames so as to obtain a more accurate motion transformation. The specific steps are as follows:
Graph optimisation describes a nonlinear optimisation problem through graph theory: the problem is expressed as a graph composed of a number of vertices and a number of edges connecting those vertices. A node of the pose graph can be the camera pose corresponding to each frame, or the robot pose computed through the transformation matrices, while an edge expresses the constraint given by the motion transformation between two frames. Suppose P_i is a pose sequence computed through the motion transformation matrices; each transformation matrix consists of a 3 × 3 rotation matrix R and a 3 × 1 translation vector t, so each pose in the sequence has 6 degrees of freedom. Denote the transformation matrix between two adjacent poses i−1 and i by T_{i−1}^i; the point P_{i−1} is then transformed to P_i by the formula P_i = T_{i−1}^i P_{i−1}. For a group of consecutive image frames, let P_0 denote the camera pose corresponding to the initial frame; the pose P_k corresponding to any later frame k can then be computed with the following formula:
P_k = T_{k−1}^k T_{k−2}^{k−1} … T_0^1 P_0 (31)
The pose graph to be built is shown in Figure 10; the specific construction steps are as follows:
Step ①: select an initial pose P_0 and add it to the pose graph as the start node;
Step ②: after front-end processing, the motion transformation matrix T_0^1 between the first and second frames is known; on this basis, the robot pose P_1 at this time is obtained with formula (31) and added to the pose graph as the second node, and the transformation constraint between the first and second nodes is established through this transformation matrix, creating an edge E_0;
Step ③: similarly, from the obtained motion transformation matrix T_1^2 between the second and third frames, the robot pose P_2 corresponding to the third frame is again computed with formula (31) and added to the pose graph as the third node, and the constraint between the second and third nodes is established with the motion transformation matrix T_1^2 as the second edge E_1 of the pose graph;
Step ④: and so on: from the obtained motion transformation matrix T_{N−1}^N between frame N−1 and frame N, the robot pose P_N corresponding to frame N is computed with formula (31) and added to the pose graph as the N-th node, and the constraint between the (N−1)-th and N-th nodes is established with the motion transformation matrix T_{N−1}^N as the N-th edge E_{N−1} of the pose graph.
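The pose chaining of formula (31), which underlies the node construction steps above, can be sketched with 4 × 4 homogeneous transformation matrices:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_poses(P0, transforms):
    """eq. (31): apply T_0^1, then T_1^2, ... to the initial pose P_0,
    returning the pose of every frame along the way (the graph nodes)."""
    poses = [P0]
    for T in transforms:
        poses.append(T @ poses[-1])
    return poses
```

Two successive pure translations of (1, 0, 0) and (0, 2, 0) accumulate into a final pose translated by (1, 2, 0) from the start node.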
(10) The pose graph built in step (9) is in essence a least-squares problem; the g2o graph optimisation library is used to solve this least-squares problem, minimising the pose error and solving for more accurate pose information. The specific steps are as follows:
Step 1: optimise the pose graph using g2o:
g2o is an open framework for general graph optimisation. It provides the operations required for graph optimisation problems and can optimise and adjust the positions of the nodes in the pose graph. The framework integrates iterative solvers for most least-squares problems and is applicable to most nonlinear optimisation problems.
Calling g2o to solve a graph optimisation problem involves the following 5 steps:
Step ①: select, or independently define, suitable vertex and edge types;
Step ②: build the pose graph;
Step ③: select a linear equation solver in g2o;
Step ④: select a matrix-block solving algorithm and iteration strategy in g2o;
Step ⑤: iterate repeatedly until the optimal result is solved.
By optimising the pose graph with the g2o framework, the finally solved node poses receive a further adjustment, making the solved poses accurate and robust.
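The least-squares nature of the pose graph can be illustrated with a toy one-dimensional pose chain solved through the same normal-equation machinery that g2o automates (values and the 1-D simplification are illustrative; a real pose graph has 6-DoF nodes and nonlinear residuals):

```python
import numpy as np

def optimize_chain(meas, loop, x0=0.0):
    """Toy 1-D pose-graph least squares: nodes x_1..x_n with odometry
    measurements meas[i] ~ x_{i+1} - x_i and one loop-closure
    measurement loop ~ x_n - x_0, with x_0 fixed. Residuals are linear
    in x, so one Gauss-Newton step (J^T J x = J^T b) is exact."""
    n = len(meas)
    J = np.zeros((n + 1, n))
    b = np.zeros(n + 1)
    for i in range(n):                  # odometry edges
        J[i, i] = 1.0
        if i > 0:
            J[i, i - 1] = -1.0
        b[i] = meas[i] + (x0 if i == 0 else 0.0)
    J[n, n - 1] = 1.0                   # loop-closure edge x_n - x_0
    b[n] = loop + x0
    return np.linalg.solve(J.T @ J, J.T @ b)
```

With odometry steps of 1.0, 1.0 but a loop-closure measurement of 2.2, the optimiser spreads the disagreement over both nodes instead of trusting either measurement alone.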
Step 2: estimate depth with the triangulation principle so as to determine the three-dimensional coordinates of the feature points; once the three-dimensional coordinates of a large number of feature points have been obtained, a three-dimensional point cloud scene map can be drawn with the PCL point cloud library. The specific steps are as follows:
Suppose a camera coordinate system I_1 is established on the camera plane I_l, and the I_1 coordinate system is chosen as the reference world coordinate system. The coordinates of the spatial point P in this coordinate system are P = (X, Y, Z, 1)^T. The conversion between the camera coordinate system and the world coordinate system is described by a transformation matrix T, the extrinsic matrix of the camera, which contains the rotation matrix R and translation vector t after g2o optimisation. Since the camera coordinate system itself is used as the reference world coordinate system, no rotation or translation occurs between the two coordinate systems; the extrinsic matrix between the two coordinate systems for the first frame image is therefore defined as the identity transformation and denoted T_0:
T_0 = [I | 0] (32)
T_0 is the fixed, standard extrinsic matrix. Similarly, with the camera coordinate system I_1 as the reference frame, the extrinsic matrix of the camera coordinate system I_2 is denoted T; since T has undergone a relative rotation and translation with respect to the standard extrinsic matrix T_0, it can be expressed as:
T = [R | t] (33)
From the projection relations of the points P_l and P_r with the spatial point P, we obtain:
P_l = K T P, P_r = K T′ P (34)
In the formula above, T and T′ are respectively the extrinsic matrices of the two camera coordinate systems relative to the standard extrinsic matrix T_0, obtained from the essential matrix decomposition. Combining them with the two-dimensional coordinates of the two points P_l and P_r obtained by the feature matching algorithm, the three-dimensional coordinates of the point P in the world coordinate system can be computed through the corresponding relations in the formula. The visual scene point cloud can then be drawn with the PCL point cloud library.
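The triangulation of formula (34) can be sketched with the standard linear (DLT) method: each 2D observation contributes two linear constraints on the homogeneous point, and the solution is the null-space singular vector (a generic illustration, not the specific PCL pipeline; K is taken as the identity):

```python
import numpy as np

def triangulate(P_mats, pts):
    """Linear triangulation: P_mats is a list of 3x4 projection
    matrices K[R|t], pts the matching 2D points; returns the 3D point."""
    A = []
    for P, (u, v) in zip(P_mats, pts):
        A.append(u * P[2] - P[0])   # u * (row 3) - (row 1)
        A.append(v * P[2] - P[1])   # v * (row 3) - (row 2)
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                       # null-space singular vector
    return X[:3] / X[3]              # dehomogenise
```

With the first camera at the origin ([I | 0], matching eq. (32)) and a second camera translated along x, the point observed at (0, 0) and (−0.2, 0) triangulates to (0, 0, 5).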
(11) The robot motion positioning results solved from the inertial data and from the visual data are fused subject to constraints. The specific constraint conditions are as follows:
Step ①: when the robot is stationary, the positioning result solved from the inertial data is used; the stationary state of the robot is detected with the static detection algorithm, and the camera-solved data at that time are discarded;
Step ②: when the robot is moving, if abundant feature points are extracted from the captured images and the ratio of the number of correctly matched feature points to all feature points is greater than a set threshold, the robot motion pose solved from the inertial data and the motion pose solved from the visual data are combined by a weighted average, with the specific weight ratio set according to the actual situation, and the weighted-average pose is used for the robot positioning solution;
Step ③: when the robot is moving, if few feature points are extracted from the captured images, or the ratio of the number of correctly matched feature points to all feature points is less than the set threshold, the visual data are discarded and the inertial data are used for the robot positioning solution.
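The fusion logic of constraints ①-③ can be sketched as a simple rule-based selector; the threshold and the weight are illustrative assumptions, since the text leaves them to be "set according to the actual situation":

```python
def fuse_pose(inertial_pose, visual_pose, is_static, n_matches, n_features,
              ratio_thresh=0.5, w_visual=0.6):
    """Rule-based fusion of inertial and visual pose estimates,
    following constraints (1)-(3) of step (11)."""
    if is_static:
        return inertial_pose              # constraint 1: discard camera data
    match_ratio = n_matches / n_features if n_features else 0.0
    if n_features and match_ratio > ratio_thresh:
        # constraint 2: weighted average of the two pose estimates
        return [w_visual * v + (1 - w_visual) * i
                for i, v in zip(inertial_pose, visual_pose)]
    return inertial_pose                  # constraint 3: discard vision
```

For example, with 90 correct matches out of 100 features the two estimates are blended; with only 10 the inertial pose is used alone.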
The above embodiment only illustrates the technical idea of the utility model and cannot limit the scope of protection of the utility model; any change made on the basis of the technical scheme according to the technical idea proposed by the utility model falls within the scope of protection of the utility model.

Claims (3)

1. A robot indoor locating system, comprising a mobile robot and, connected in sequence and arranged on the robot, an inertial sensor, a sensor control module and a host computer, characterised in that: the host computer is also connected to a visual sensor arranged on the robot; when the robot moves, the inertial sensor generates three-axis acceleration values, three-axis angular velocity values and three-axis magnetic field intensity values in real time, and the visual sensor acquires scene images during the motion in real time; the sensor control module is connected to the inertial sensor and is used to control the operation of the inertial sensor and to send and receive its data; the host computer is connected to the sensor control module and is used to process the acquired data; the host computer is also in communication with the visual sensor, for obtaining ambient image information and processing the information; the inertial sensor and the visual sensor are used to position and track the motion trajectory of the robot; wherein the inertial sensor is a nine-axis inertial sensor integrating a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer; or a six-axis inertial sensor integrating a three-axis accelerometer and a three-axis gyroscope, combined with a separate three-axis magnetometer to form a nine-axis inertial sensor; or an inertial sensor using individually packaged three-axis accelerometer, three-axis gyroscope and three-axis magnetometer whose data are combined and processed to achieve the effect of a nine-axis inertial sensor.
2. The robot indoor locating system according to claim 1, characterised in that: the sensor control module is an ARM processor, a single-chip microcomputer or a microprocessor.
3. The robot indoor locating system according to claim 1, characterised in that: the visual sensor is a monocular camera, a binocular camera, an RGB-D camera, or another visual apparatus capable of acquiring scene images.
CN201821540787.0U 2018-09-20 2018-09-20 A kind of robot indoor locating system Expired - Fee Related CN208751577U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201821540787.0U CN208751577U (en) 2018-09-20 2018-09-20 A kind of robot indoor locating system


Publications (1)

Publication Number Publication Date
CN208751577U true CN208751577U (en) 2019-04-16

Family

ID=66083674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201821540787.0U Expired - Fee Related CN208751577U (en) 2018-09-20 2018-09-20 A kind of robot indoor locating system

Country Status (1)

Country Link
CN (1) CN208751577U (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111220155A (en) * 2020-03-04 2020-06-02 广东博智林机器人有限公司 Method, device and processor for estimating pose based on binocular vision inertial odometer
CN111595332A (en) * 2020-04-13 2020-08-28 宁波深寻信息科技有限公司 Full-environment positioning method integrating inertial technology and visual modeling
CN112862818A (en) * 2021-03-17 2021-05-28 合肥工业大学 Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN113267181A (en) * 2021-06-22 2021-08-17 武汉大学 Construction method of foot-type odometer suitable for foot-type robot
CN114199275A (en) * 2020-09-18 2022-03-18 阿里巴巴集团控股有限公司 Parameter determination method and device for sensor



Legal Events

Date Code Title Description
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190416

Termination date: 20200920