CN107314770A - Mobile robot and its master controller, positioning system and method - Google Patents

Mobile robot and its master controller, positioning system and method

Info

Publication number
CN107314770A
Authority
CN
China
Prior art keywords
mobile robot
position coordinate
theoretical position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710499985.0A
Other languages
Chinese (zh)
Other versions
CN107314770B (en)
Inventor
张国亮
管林波
刘力上
吴光号
王培建
黄鸿
卓云之
施江林
关慧敏
陶熠昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Guozi Robot Technology Co Ltd
Original Assignee
Zhejiang Guozi Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Guozi Robot Technology Co Ltd
Priority to CN201710499985.0A
Publication of CN107314770A
Application granted
Publication of CN107314770B
Status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention discloses a positioning method for a mobile robot, including: calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval; obtaining the absolute position coordinates of a graphic code photographed by the mobile robot while it is moving; obtaining, from at least two adjacent theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed; obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates; and compensating the current theoretical position coordinate reckoned by the mobile robot with the position error. The invention also discloses a positioning system for a mobile robot, a master controller for a mobile robot, and a mobile robot equipped with the master controller. With the above positioning method, the position offset of the robot can be obtained and an accurate position correction can be performed.

Description

Mobile robot and its master controller, positioning system and method
Technical field
The present invention relates to the technical field of mobile robot positioning, and more particularly to a mobile robot and its master controller, positioning system and method.
Background technology
With the progress of technology, mobile robots are being applied ever more widely, which brings higher requirements on the efficiency and precision of their positioning function. For indoor mobile robots, the current mainstream scheme is to photograph a two-dimensional code placed at a specified location with a camera, recognize the captured two-dimensional code image, and obtain the position information stored in the code, thereby positioning the mobile robot.
At present, the typical flow from photographing a two-dimensional code to completing its recognition is: first, the image data is transferred from the camera's CMOS sensor to the camera's processing chip; second, the camera's processing chip performs processing such as noise reduction and compression; then the camera transmits the picture over the network to the master controller; next, the master controller decompresses the compressed picture; finally, the master controller recognizes the position information contained in the two-dimensional code in the picture by an algorithm.
Depending on conditions such as picture resolution, CPU processing capability, network bandwidth and illumination, even on current high-performance hardware platforms it takes at least about 20 ms to 50 ms from photographing the two-dimensional code to completing its recognition. Since the mobile robot is constantly in motion, and the running speed of current indoor mobile robots commonly reaches 2 m/s to 3 m/s, there is an error of 4 cm to 15 cm between the actual position at the moment recognition completes and the position information obtained from the recognition.
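As a quick back-of-the-envelope check of the figures above (a sketch added for illustration, not part of the patent text), the 4 cm to 15 cm range follows directly from speed × latency:

```python
# Position error accumulated between capture and recognition: error = speed x latency.
latencies_s = (0.020, 0.050)   # 20 ms to 50 ms from capture to completed recognition
speeds_mps = (2.0, 3.0)        # 2 m/s to 3 m/s running speed

min_error_cm = speeds_mps[0] * latencies_s[0] * 100   # 2 m/s x 20 ms = 4 cm
max_error_cm = speeds_mps[1] * latencies_s[1] * 100   # 3 m/s x 50 ms = 15 cm
print(f"error range: {min_error_cm:.0f} cm to {max_error_cm:.0f} cm")
```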
To eliminate the above error, the schemes currently in common use are:
In the first method, the mobile robot stops moving while photographing the two-dimensional code, and only resumes moving after the code has been parsed and the position information obtained;
In the second method, the mobile robot is assumed to move at a uniform velocity (speed v), and the time from photographing to completing recognition is assumed constant (time difference t), so the compensation value equals v × t;
In the third method, the two-dimensional code is recognized directly by the camera, which reduces the time overhead of network transmission and picture processing (noise reduction, compression, decompression, etc.), and compensation is then applied together with the second method.
The first scheme, stopping to shoot, forces the robot to decelerate, pause and accelerate every time it passes a two-dimensional code, which makes the robot inefficient; it is rarely used in practice.
The second, compensation-based scheme has several problems. First, for mobile robots that use cameras and controllers of different models, the time difference from photographing to completing recognition varies enormously, even by tens to hundreds of milliseconds, because picture size, CPU processing capability, network bandwidth and other conditions differ, so the time difference must be determined separately for each set of conditions. Second, differences in illumination and ground conditions between sites can also cause the time difference from photographing to completing recognition to vary by more than 10 ms. Finally, because of the running pose of the mobile robot, the two-dimensional code in the captured picture is generally rotated by some angle, and since that rotation angle varies, the recognition time can differ by several milliseconds. This compensation method therefore first requires the hardware to be calibrated, which adds workload, and second can only provide rough compensation of limited effect; accurate compensation cannot be achieved.
In the third compensation scheme, a camera with built-in two-dimensional code recognition is, first of all, more expensive; second, it still cannot solve the problem that each recognition takes a slightly different time owing to differences in illumination, ground conditions and the rotation angle of the two-dimensional code.
Summary of the invention
It is an object of the present invention to provide a mobile robot and its master controller, positioning system and method, which can accurately determine the position offset of the mobile robot and thereby perform accurate position correction.
To achieve the above object, the present invention provides a positioning method for a mobile robot, including:
calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval;
obtaining the absolute position coordinates of a graphic code photographed by the mobile robot while it is moving;
obtaining, from at least two adjacent theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed;
obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates;
compensating the current theoretical position coordinate reckoned by the mobile robot with the position error.
Compared with the background art described above, the positioning method provided by the present invention works as follows. While the mobile robot is moving, its theoretical position coordinate is calculated and recorded at every preset time interval. When the mobile robot passes a graphic code, it photographs the code without stopping, and the absolute position coordinate associated with the graphic code is obtained from the code. For the moment at which the graphic code was photographed, the dead-reckoned position coordinate at that moment is obtained from the theoretical position coordinates recorded at different times; in other words, taking time as the reference, the moment at which the mobile robot reached the graphic code is known, and the dead-reckoned position coordinate corresponding to that moment is derived from the theoretical position coordinates calculated and recorded at the preset time interval. The dead-reckoned position coordinate is compared with the absolute position coordinate to obtain the position error. From the moment the graphic code is photographed until the position error has been calculated, the mobile robot keeps moving; once the position error is available, the current theoretical position coordinate at that moment is compensated with it. The time from photographing the graphic code to obtaining the position error is typically 20 ms to 50 ms, i.e. the mobile robot has only moved on by the distance covered in 20 ms to 50 ms, and such a short time does not accumulate a significant additional error. The position error can therefore be applied directly to the current theoretical position coordinate, ensuring that the current accurate coordinate of the mobile robot is known and its position is accurately calibrated.
Preferably, the step of calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval includes:
calculating the minimum time interval t from the maximum running speed V and the positioning precision Q of the mobile robot by the formula t = Q/V;
determining the preset time interval according to the minimum time interval t.
Preferably, the step of obtaining, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed includes:
when the moment T at which the graphic code was photographed is an end point of the preset time interval, the dead-reckoned position coordinate is the theoretical position coordinate corresponding to that end point;
when the moment T at which the graphic code was photographed is not an end point of the preset time interval, the dead-reckoned position coordinate (X_T, Y_T) is calculated by the following equation:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, t_n and t_{n+1} are the end points of two adjacent time intervals,
the theoretical position coordinate recorded at t_n is (x_n, y_n), and
the theoretical position coordinate recorded at t_{n+1} is (x_{n+1}, y_{n+1}).
Preferably, the step of obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates includes:
calculating the position error (ΔX, ΔY) from the dead-reckoned position coordinate (X_T, Y_T) and the absolute position coordinate (X, Y) by the formula (ΔX, ΔY) = (X - X_T, Y - Y_T).
Preferably, the step of compensating the current theoretical position coordinate reckoned by the mobile robot with the position error includes:
when the current moment T_now of the mobile robot is an end point of a preset time interval, the current theoretical position coordinate is the coordinate (X_now, Y_now) corresponding to that end point; the current accurate coordinate (X_precise, Y_precise) is calculated from the current theoretical position coordinate (X_now, Y_now) and the position error (ΔX, ΔY) by the formula (X_precise, Y_precise) = (X_now + ΔX, Y_now + ΔY).
The present invention also provides a positioning system for a mobile robot, including:
a theoretical position calculation module, configured to calculate and record the theoretical position coordinates of the mobile robot at a preset time interval;
an absolute position acquisition module, configured to obtain the absolute position coordinates of the graphic code photographed by the mobile robot while it is moving;
a dead reckoning module, configured to obtain, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed;
a position error acquisition module, configured to obtain the position error from the dead-reckoned position coordinate and the absolute position coordinates;
a current absolute position module, configured to compensate the current theoretical position coordinate reckoned by the mobile robot with the position error.
Preferably, the theoretical position calculation module includes:
a minimum time interval calculation unit, configured to calculate the minimum time interval t from the maximum running speed V and the positioning precision Q of the mobile robot by the formula t = Q/V;
a preset time interval determination unit, configured to determine the preset time interval according to the minimum time interval t.
Preferably, the dead reckoning module specifically includes:
a time-interval end point position calculation module, configured such that, when the moment T at which the graphic code was photographed is an end point of the preset time interval, the dead-reckoned position coordinate is the theoretical position coordinate corresponding to that end point;
a non-end-point dead reckoning module, configured such that, when the moment T at which the graphic code was photographed is not an end point of the preset time interval, the dead-reckoned position coordinate (X_T, Y_T) is calculated by the following equation:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, t_n and t_{n+1} are the end points of two adjacent time intervals, the theoretical position coordinate recorded at t_n is (x_n, y_n), and the theoretical position coordinate recorded at t_{n+1} is (x_{n+1}, y_{n+1}).
The present invention also provides a master controller for a mobile robot, which includes the positioning system described in any one of the above and further includes an RTC module that provides the theoretical position calculation module and the absolute position acquisition module with a synchronized absolute time.
The present invention also provides a mobile robot, which includes the above master controller.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the positioning method for a mobile robot provided by an embodiment of the present invention;
Fig. 2 is a structural block diagram of the positioning system for a mobile robot provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To help those skilled in the art better understand the solution of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Please refer to Fig. 1 and Fig. 2. Fig. 1 is a flow chart of the positioning method for a mobile robot provided by an embodiment of the present invention; Fig. 2 is a structural block diagram of the positioning system for a mobile robot provided by an embodiment of the present invention.
The positioning method for a mobile robot provided by the present invention, as shown in Fig. 1, mainly includes:
S1, calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval;
S2, obtaining the absolute position coordinates of the graphic code photographed by the mobile robot while it is moving;
S3, obtaining, from at least two adjacent theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed;
S4, obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates;
S5, compensating the current theoretical position coordinate reckoned by the mobile robot with the position error.
In step S1, while the mobile robot is moving, its theoretical position coordinate is calculated at every preset time interval. For example, if the preset time interval is 10 ms, then when the mobile robot starts moving, the theoretical position coordinate at 0 ms is obtained and recorded; when the mobile robot has moved for 10 ms, the theoretical position coordinate at 10 ms is calculated and recorded; and so on, giving a table or similar array of theoretical position coordinates indexed by time. The coordinates are calculated from the travelling speed, travelling direction and other state of the mobile robot; according to the differential principle, within a very short period the mobile robot can be regarded as travelling at a uniform velocity, which simplifies the calculation.
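A minimal sketch of this recording step in Python, assuming (as the paragraph does) constant velocity within each interval; the table layout, function and variable names are illustrative assumptions rather than the patent's implementation:

```python
PRESET_INTERVAL_S = 0.010   # e.g. a 10 ms preset time interval, as in the example

# Table of (absolute time, x, y) samples; this is the time-indexed array referred to in step S1.
position_table = []

def record_theoretical_positions(x0, y0, vx, vy, t0, duration_s):
    """Dead-reckon (x, y) at constant velocity and log one sample per preset interval."""
    t, x, y = t0, x0, y0
    while t <= t0 + duration_s:
        position_table.append((t, x, y))
        x += vx * PRESET_INTERVAL_S
        y += vy * PRESET_INTERVAL_S
        t += PRESET_INTERVAL_S

# Example: robot starts at the origin and moves at 2 m/s along x for 100 ms.
record_theoretical_positions(0.0, 0.0, 2.0, 0.0, t0=0.0, duration_s=0.100)
```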
In step S2, while moving, the mobile robot photographs a graphic code as it passes it, thereby obtaining the absolute position coordinate contained in the graphic code. Continuing the example above, suppose the mobile robot photographs the graphic code when it has moved for 15 ms and obtains the absolute position coordinate by existing means; the absolute position coordinate at 15 ms of travel is then known. From photographing the graphic code until the absolute position coordinate is obtained, the mobile robot keeps moving. The graphic code may be, for example, a two-dimensional code.
In step S3, the dead-reckoned position coordinate of the mobile robot at the moment the graphic code was photographed is obtained. That is, the graphic code was photographed at 15 ms, so the dead-reckoned position coordinate at 15 ms is calculated from the table obtained in step S1. Because that table is itself derived from the travelling speed, travelling direction and so on, it carries a certain error; the dead-reckoned position coordinate at 15 ms therefore differs from the absolute position coordinate contained in the graphic code by a position error.
Step S4 calculates the position error between the dead-reckoned position coordinate and the absolute position coordinate.
Step S5 compensates the current theoretical position coordinate reckoned by the mobile robot with the position error. That is, if the time from photographing the graphic code to obtaining the absolute position coordinate is 20 ms to 50 ms, the current theoretical position coordinate at the moment the absolute position coordinate becomes available is calculated as in step S1, and the position error is added directly to that current theoretical position coordinate, achieving accurate positioning.
In the above step S1, the step of calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval includes:
calculating the minimum time interval t from the maximum running speed V and the positioning precision Q of the mobile robot by the formula t = Q/V;
determining the preset time interval according to the minimum time interval t.
Suppose the maximum running speed of the mobile robot is 3 m/s and the required positioning precision is 1 cm; coordinate values should then be saved at intervals of at most 3.3 ms (= 0.01 m ÷ 3 m/s), so the minimum time interval t is 3.3 ms. In practice, to further improve precision, coordinate values can be saved at 1 ms intervals, i.e. the preset time interval is 1 ms.
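The interval bound itself is a one-line calculation; this sketch simply reproduces the arithmetic of the example (the variable names are assumptions):

```python
V = 3.0        # maximum running speed, m/s
Q = 0.01       # required positioning precision, m
t_min = Q / V  # minimum time interval: ~0.0033 s, i.e. about 3.3 ms

# The embodiment then chooses a preset interval no larger than t_min, here 1 ms.
PRESET_INTERVAL_S = 0.001
assert PRESET_INTERVAL_S <= t_min
```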
A data table is thus kept in the master controller of the mobile robot, recording the theoretical position coordinate at each point in time at 1 ms intervals. The time points in this table are the absolute time of the master controller.
In step S3, the step of obtaining, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed includes:
when the moment T at which the graphic code was photographed is an end point of the preset time interval, the dead-reckoned position coordinate is the theoretical position coordinate corresponding to that end point;
when the moment T at which the graphic code was photographed is not an end point of the preset time interval, the dead-reckoned position coordinate (X_T, Y_T) is calculated by the following equation:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, t_n and t_{n+1} are the end points of two adjacent time intervals, the theoretical position coordinate recorded at t_n is (x_n, y_n), and the theoretical position coordinate recorded at t_{n+1} is (x_{n+1}, y_{n+1}).
We continue the example above, in which the preset time interval is 10 ms: when the mobile robot starts moving, the theoretical position coordinate at 0 ms is obtained and recorded; when it has moved for 10 ms, the theoretical position coordinate at 10 ms is calculated and recorded; and so on.
If the moment T at which the mobile robot photographs the graphic code is exactly 10 ms, the theoretical position coordinate recorded at 10 ms is the dead-reckoned position coordinate.
If the moment T at which the mobile robot photographs the graphic code is 15 ms, the table obtained in step S1 contains no theoretical position coordinate for 15 ms (only for 10 ms, 20 ms and so on), so the dead-reckoned position coordinate (X_T, Y_T) at 15 ms must be calculated by the interpolation formula:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, and t_n and t_{n+1} are the end points of two adjacent time intervals.
Here T is 15 ms, t_n is 10 ms and t_{n+1} is 20 ms; the theoretical position coordinate recorded at 10 ms is (x_n, y_n) and the one recorded at 20 ms is (x_{n+1}, y_{n+1}); both are coordinate values calculated in step S1.
Of course, the dead-reckoned position coordinate (X_T, Y_T) at 15 ms may also be obtained with other interpolation formulas from numerical analysis; it is not limited to the interpolation formula described here.
In step S4, the step of obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates includes:
calculating the position error (ΔX, ΔY) from the dead-reckoned position coordinate (X_T, Y_T) and the absolute position coordinate (X, Y) by the formula (ΔX, ΔY) = (X - X_T, Y - Y_T).
Obviously, once the dead-reckoned position coordinate (X_T, Y_T) has been calculated, it is subtracted from the absolute position coordinate (X, Y) obtained from the graphic code, giving the position error (ΔX, ΔY).
In step S5, the step of compensating the current theoretical position coordinate reckoned by the mobile robot with the position error includes:
when the current moment T_now of the mobile robot is an end point of a preset time interval, the current theoretical position coordinate is the coordinate (X_now, Y_now) corresponding to that end point; the current accurate coordinate (X_precise, Y_precise) is calculated from the current theoretical position coordinate (X_now, Y_now) and the position error (ΔX, ΔY) by the formula (X_precise, Y_precise) = (X_now + ΔX, Y_now + ΔY).
Continuing the example above with a preset time interval of 10 ms: the theoretical position coordinate at 0 ms is recorded when the mobile robot starts moving, the one at 10 ms when it has moved for 10 ms, and so on. If the moment T at which the graphic code is photographed is 15 ms, the position error (ΔX, ΔY) is obtained; suppose that by the time (ΔX, ΔY) becomes available the mobile robot has been moving for 50 ms. The current theoretical position coordinate (X_now, Y_now) at 50 ms is obtained as in step S1, and the position error (ΔX, ΔY) is added to (X_now, Y_now), giving the current accurate coordinate (X_precise, Y_precise) of the mobile robot at 50 ms.
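A minimal sketch of steps S4 and S5 taken together (function names and numeric values are illustrative assumptions, not the patent's implementation):

```python
def position_error(absolute_xy, reckoned_xy):
    """(dX, dY) = absolute coordinate from the graphic code minus the dead-reckoned coordinate."""
    return (absolute_xy[0] - reckoned_xy[0], absolute_xy[1] - reckoned_xy[1])

def compensate(current_theoretical_xy, error_xy):
    """Add the position error to the current theoretical coordinate to get the precise one."""
    return (current_theoretical_xy[0] + error_xy[0],
            current_theoretical_xy[1] + error_xy[1])

# Dead-reckoned vs. decoded coordinate at the 15 ms capture moment (illustrative values):
dX, dY = position_error(absolute_xy=(1.020, 2.005), reckoned_xy=(1.010, 2.000))
# Current theoretical coordinate at 50 ms, corrected by the offset:
X_precise, Y_precise = compensate((1.090, 2.010), (dX, dY))   # -> (1.100, 2.015)
```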
The present invention also provides a positioning system for a mobile robot, as shown in Fig. 2, including:
a theoretical position calculation module 101, configured to calculate and record the theoretical position coordinates of the mobile robot at a preset time interval;
an absolute position acquisition module 102, configured to obtain the absolute position coordinates of the graphic code photographed by the mobile robot while it is moving;
a dead reckoning module 103, configured to obtain, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed;
a position error acquisition module 104, configured to obtain the position error from the dead-reckoned position coordinate and the absolute position coordinates;
a current absolute position module 105, configured to compensate the current theoretical position coordinate reckoned by the mobile robot with the position error.
The theoretical position calculation module 101 includes:
a minimum time interval calculation unit, configured to calculate the minimum time interval t from the maximum running speed V and the positioning precision Q of the mobile robot by the formula t = Q/V;
a preset time interval determination unit, configured to determine the preset time interval according to the minimum time interval t.
The dead reckoning module 103 specifically includes:
a time-interval end point position calculation module, configured such that, when the moment T at which the graphic code was photographed is an end point of the preset time interval, the dead-reckoned position coordinate is the theoretical position coordinate corresponding to that end point;
a non-end-point dead reckoning module, configured such that, when the moment T at which the graphic code was photographed is not an end point of the preset time interval, the dead-reckoned position coordinate (X_T, Y_T) is calculated by:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, t_n and t_{n+1} are the end points of two adjacent time intervals, the theoretical position coordinate recorded at t_n is (x_n, y_n), and the theoretical position coordinate recorded at t_{n+1} is (x_{n+1}, y_{n+1}).
The present invention also provides a master controller for a mobile robot, including the positioning system described in any one of the above and further including an RTC module that provides the theoretical position calculation module 101 and the absolute position acquisition module 102 with a synchronized absolute time.
From the point of view of the mobile robot, its core is the master controller. The master controller is connected to the camera and receives pictures containing the graphic code, and then recognizes the picture to obtain the absolute position coordinate contained in the graphic code; at the same time, the master controller is connected to the travelling-mechanism driver and drives the travelling mechanism to move the mobile robot. In a specific embodiment of the present invention, the master controller and the camera may be connected by Fast Ethernet; of course, other communication means such as USB may also be used.
The master controller has an RTC module and thus an independent absolute time. The camera synchronizes its time with the master controller, so the camera also obtains the absolute time.
In a specific embodiment of the present invention, the time synchronization between the master controller and the camera is based on IEEE 1588, the "Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems", over Ethernet, with a synchronization accuracy better than 10 µs. Of course, other synchronization means such as NTP or a pulse-per-second signal may also be used.
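The patent does not detail the message exchange of IEEE 1588 or NTP; purely as an illustration of what such synchronization computes, the following sketch estimates the camera-to-master clock offset from one request/response round trip using the standard two-way time-transfer formula (all names here are assumptions):

```python
def estimate_clock_offset(t1, t2, t3, t4):
    """Two-way time-transfer estimate, as used by NTP/PTP-style protocols.

    t1: request send time on the camera clock
    t2: request receive time on the master controller clock
    t3: response send time on the master controller clock
    t4: response receive time on the camera clock
    Returns the estimated offset (master time minus camera time),
    assuming the network delay is symmetric in both directions.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0

# The camera would add this offset to its local capture timestamps so that the
# timestamp T attached to each picture is expressed in the master controller's absolute time.
```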
While the mobile robot is moving, the camera captures a picture containing the graphic code. Since the camera has already obtained the absolute time, it knows that the absolute time of the capture moment is T. The camera then sends the picture data carrying the timestamp T to the master controller. In this arrangement, time synchronization is performed between the master controller and the camera; the camera stamps each captured picture with a timestamp containing the absolute time; and the dead reckoning unit of the master controller has recorded in memory the reckoned coordinate value at each point in time.
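As a minimal sketch of this association step (the table layout and function name are illustrative assumptions, not taken from the patent), the master controller can look up or interpolate the reckoned coordinate for the camera's timestamp T:

```python
from bisect import bisect_right

def reckoned_position_at(position_table, T):
    """Look up the dead-reckoned coordinate for absolute time T.

    position_table: list of (t, x, y) tuples sorted by t, as recorded in step S1.
    T must lie within the recorded time range.
    """
    times = [row[0] for row in position_table]
    i = bisect_right(times, T)
    if times[i - 1] == T:                      # T falls exactly on a recorded sample
        _, x, y = position_table[i - 1]
        return x, y
    t_n, x_n, y_n = position_table[i - 1]      # sample just before T
    t_n1, x_n1, y_n1 = position_table[i]       # sample just after T
    ratio = (T - t_n) / (t_n1 - t_n)
    return x_n + ratio * (x_n1 - x_n), y_n + ratio * (y_n1 - y_n)
```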
The present invention also provides a mobile robot including the above master controller. The other parts of the mobile robot may follow the prior art and are not expanded on here.
In the mobile robot provided by the present invention, once the time synchronization function is added between the camera and the master controller, an absolute-time timestamp can be attached to every captured picture. Even if the robot keeps moving while the picture is being parsed, the parsed position information can still be associated with the picture timestamp. The actual position at a particular moment and the position reckoned by the master controller for that moment can then be obtained accurately, the position offset of the robot can be derived, and accurate position calibration can be performed.
It should be noted that in this specification relational terms such as "first" and "second" are used only to distinguish one entity from another and do not necessarily require or imply any such actual relationship or order between those entities.
The mobile robot and its master controller, positioning system and method provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that those of ordinary skill in the art may make improvements and modifications to the present invention without departing from the principles of the invention, and such improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

1. A positioning method for a mobile robot, characterized in that it includes:
calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval;
obtaining the absolute position coordinates of a graphic code photographed by the mobile robot while it is moving;
obtaining, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed;
obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates;
compensating the current theoretical position coordinate reckoned by the mobile robot with the position error.
2. The positioning method according to claim 1, characterized in that the step of calculating and recording the theoretical position coordinates of the mobile robot at a preset time interval includes:
calculating the minimum time interval t from the maximum running speed V and the positioning precision Q of the mobile robot by the formula t = Q/V;
determining the preset time interval according to the minimum time interval t.
3. The positioning method according to claim 2, characterized in that the step of obtaining, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed includes:
when the moment T at which the graphic code was photographed is an end point of the preset time interval, the dead-reckoned position coordinate is the theoretical position coordinate corresponding to that end point;
when the moment T at which the graphic code was photographed is not an end point of the preset time interval, the dead-reckoned position coordinate (X_T, Y_T) is calculated by the following equation:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, t_n and t_{n+1} are the end points of two adjacent time intervals,
the theoretical position coordinate recorded at t_n is (x_n, y_n), and
the theoretical position coordinate recorded at t_{n+1} is (x_{n+1}, y_{n+1}).
4. The positioning method according to any one of claims 1 to 3, characterized in that the step of obtaining the position error from the dead-reckoned position coordinate and the absolute position coordinates includes:
calculating the position error (ΔX, ΔY) from the dead-reckoned position coordinate (X_T, Y_T) and the absolute position coordinate (X, Y) by the formula (ΔX, ΔY) = (X - X_T, Y - Y_T).
5. The positioning method according to claim 4, characterized in that the step of compensating the current theoretical position coordinate reckoned by the mobile robot with the position error includes:
when the current moment T_now of the mobile robot is an end point of a preset time interval, the current theoretical position coordinate is the coordinate (X_now, Y_now) corresponding to that end point; the current accurate coordinate (X_precise, Y_precise) is calculated from the current theoretical position coordinate (X_now, Y_now) and the position error (ΔX, ΔY) by the formula (X_precise, Y_precise) = (X_now + ΔX, Y_now + ΔY).
6. A positioning system for a mobile robot, characterized in that it includes:
a theoretical position calculation module, configured to calculate and record the theoretical position coordinates of the mobile robot at a preset time interval;
an absolute position acquisition module, configured to obtain the absolute position coordinates of a graphic code photographed by the mobile robot while it is moving;
a dead reckoning module, configured to obtain, from the theoretical position coordinates, the dead-reckoned position coordinate corresponding to the moment at which the graphic code was photographed;
a position error acquisition module, configured to obtain the position error from the dead-reckoned position coordinate and the absolute position coordinates;
a current absolute position module, configured to compensate the current theoretical position coordinate reckoned by the mobile robot with the position error.
7. The positioning system according to claim 6, characterized in that the theoretical position calculation module includes:
a minimum time interval calculation unit, configured to calculate the minimum time interval t from the maximum running speed V and the positioning precision Q of the mobile robot by the formula t = Q/V;
a preset time interval determination unit, configured to determine the preset time interval according to the minimum time interval t.
8. The positioning system according to claim 7, characterized in that the dead reckoning module specifically includes:
a time-interval end point position calculation module, configured such that, when the moment T at which the graphic code was photographed is an end point of the preset time interval, the dead-reckoned position coordinate is the theoretical position coordinate corresponding to that end point;
a non-end-point dead reckoning module, configured such that, when the moment T at which the graphic code was photographed is not an end point of the preset time interval, the dead-reckoned position coordinate (X_T, Y_T) is calculated by the following equation:
(X_T, Y_T) = (x_n + (T - t_n) × (x_{n+1} - x_n) / (t_{n+1} - t_n), y_n + (T - t_n) × (y_{n+1} - y_n) / (t_{n+1} - t_n)),
where t_n ≤ T ≤ t_{n+1}, t_n and t_{n+1} are the end points of two adjacent time intervals,
the theoretical position coordinate recorded at t_n is (x_n, y_n), and
the theoretical position coordinate recorded at t_{n+1} is (x_{n+1}, y_{n+1}).
9. A master controller for a mobile robot, characterized in that it includes the positioning system according to any one of claims 6 to 8, and further includes an RTC module that provides the theoretical position calculation module and the absolute position acquisition module with a synchronized absolute time.
10. A mobile robot, characterized in that it includes the master controller according to claim 9.
CN201710499985.0A 2017-06-27 2017-06-27 Mobile robot and its master controller, positioning system and method Active CN107314770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710499985.0A CN107314770B (en) 2017-06-27 2017-06-27 Mobile robot and its master controller, positioning system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710499985.0A CN107314770B (en) 2017-06-27 2017-06-27 Mobile robot and its master controller, positioning system and method

Publications (2)

Publication Number Publication Date
CN107314770A true CN107314770A (en) 2017-11-03
CN107314770B CN107314770B (en) 2019-08-30

Family

ID=60179667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710499985.0A Active CN107314770B (en) 2017-06-27 2017-06-27 Mobile robot and its master controller, positioning system and method

Country Status (1)

Country Link
CN (1) CN107314770B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120158282A1 (en) * 2010-12-15 2012-06-21 Electronics And Telecommunications Research Institute Camera-based indoor position recognition apparatus and method
US20150202774A1 (en) * 2014-01-23 2015-07-23 Lam Research Corporation Touch auto-calibration of process modules
CN105737820A (en) * 2016-04-05 2016-07-06 芜湖哈特机器人产业技术研究院有限公司 Positioning and navigation method for indoor robot
CN106168803A (en) * 2016-04-18 2016-11-30 深圳众为兴技术股份有限公司 A kind of location aware method for moving robot
CN106444750A (en) * 2016-09-13 2017-02-22 哈尔滨工业大学深圳研究生院 Two-dimensional code positioning-based intelligent warehousing mobile robot system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108709559A (en) * 2018-06-11 2018-10-26 浙江国自机器人技术有限公司 A kind of mobile robot positioning system and its localization method
CN108709559B (en) * 2018-06-11 2020-05-22 浙江国自机器人技术有限公司 Mobile robot positioning system and positioning method thereof
CN108759853A (en) * 2018-06-15 2018-11-06 浙江国自机器人技术有限公司 A kind of robot localization method, system, equipment and computer readable storage medium
CN108871339A (en) * 2018-06-29 2018-11-23 深圳市富微科创电子有限公司 A kind of positioning system and method based on OID coding
CN109443392A (en) * 2018-12-10 2019-03-08 北京艾瑞思机器人技术有限公司 Navigation error determines method and device, navigation control method, device and equipment
CN109443392B (en) * 2018-12-10 2022-09-27 北京旷视机器人技术有限公司 Navigation error determination method and device, navigation control method, device and equipment
CN112230256A (en) * 2019-07-15 2021-01-15 苏州宝时得电动工具有限公司 Autonomous robot, positioning calibration method and device thereof, and storage medium
CN112230256B (en) * 2019-07-15 2024-04-09 苏州宝时得电动工具有限公司 Autonomous robot, positioning calibration method and device thereof, and storage medium
CN112543415A (en) * 2020-12-24 2021-03-23 安标国家矿用产品安全标志中心有限公司 Method and system for determining maximum dynamic positioning error
CN112543415B (en) * 2020-12-24 2024-02-23 安标国家矿用产品安全标志中心有限公司 Method and system for determining maximum dynamic positioning error

Also Published As

Publication number Publication date
CN107314770B (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN107314770A (en) A kind of mobile robot and its master controller, alignment system and method
CN107255476B (en) Indoor positioning method and device based on inertial data and visual features
WO2018077176A1 (en) Wearable device and method for determining user displacement in wearable device
CN107179091B (en) A kind of AGV walking vision positioning error correcting method
US20170337701A1 (en) Method and system for 3d capture based on structure from motion with simplified pose detection
KR20160146196A (en) Embedded system, fast structured light based 3d camera system and method for obtaining 3d images using the same
CN105196292B (en) Visual servo control method based on iterative duration variation
EP1498745A3 (en) Enhanced real time kinematics determination method and apparatus
CN101859439A (en) Movement tracking device for man-machine interaction and tracking method thereof
CN111405139B (en) Time synchronization method, system, visual mileage system and storage medium
CN103994765A (en) Positioning method of inertial sensor
CN109143205A (en) Integrated transducer external parameters calibration method, apparatus
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN104864866A (en) Aerial vehicle flight error correcting device and correcting method as well as unmanned aerial vehicle
JP2016006415A (en) Method and apparatus for estimating position of optical marker in optical motion capture
CN110096152A (en) Space-location method, device, equipment and the storage medium of physical feeling
KR102084252B1 (en) System and method for simultaneous reconsttuction of initial 3d trajectory and velocity using single camera images
CN106646441A (en) Indoor mobile robot positioning system combining environment information and indoor mobile robot positioning method thereof
CN113587934A (en) Robot, indoor positioning method and device and readable storage medium
CN106595601A (en) Camera six-degree-of-freedom pose accurate repositioning method without hand eye calibration
CN109035343A (en) A kind of floor relative displacement measurement method based on monitoring camera
KR101380852B1 (en) Slam system and method for mobile robots with environment picture input from user
CN110428461A (en) In conjunction with the monocular SLAM method and device of deep learning
WO2019080879A1 (en) Data processing method, computer device, and storage medium
CN111158482B (en) Human body motion gesture capturing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant