CN108036786A - Position and posture detection method, device and computer-readable recording medium based on auxiliary line - Google Patents


Publication number
CN108036786A
Authority
CN
China
Prior art keywords
sampled point
ceiling
axis coordinate
posterior
course angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711249452.3A
Other languages
Chinese (zh)
Other versions
CN108036786B (en)
Inventor
吕文君
皮明
杜晓冬
李泽瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Youth Tiancheng Technology Co Ltd
Original Assignee
Anhui Youth Tiancheng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Youth Tiancheng Technology Co Ltd filed Critical Anhui Youth Tiancheng Technology Co Ltd
Priority to CN201711249452.3A priority Critical patent/CN108036786B/en
Publication of CN108036786A publication Critical patent/CN108036786A/en
Application granted granted Critical
Publication of CN108036786B publication Critical patent/CN108036786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation

Abstract

The invention discloses an auxiliary-line-based position and posture (pose) detection method, a corresponding device, and a computer-readable recording medium. The method comprises the following steps: initializing the detection process, collecting sensor data, computing a pose prior estimate, extracting a set of course-angle detection values, computing the course-angle posterior estimate, extracting a set of position detection values, computing the position posterior estimate, and outputting the pose information. The invention combines sensors such as an inertial measurement unit and a camera with an estimation algorithm to detect the pose of a robot in real time. Compared with the prior art, the invention features low cost, high accuracy and high reliability, and provides a cost-effective means of acquiring pose information for many kinds of robots.

Description

Position and posture detection method, device and computer-readable recording medium based on auxiliary line
Technical field
The present invention relates to the technical field of robot perception, and more particularly to an auxiliary-line-based position and posture detection method, device and computer-readable recording medium.
Background technology
A robot is a machine that performs work automatically. It can accept human command, run pre-programmed routines, or act according to principles formulated with artificial-intelligence techniques. Its task is to assist or replace human labour; robots are widely used not only in industry, agriculture, national defence, medical care and the service sector, but also in harmful and dangerous situations such as mine clearance, search and capture, rescue, radiation and space work. The research of robot technology has therefore attracted attention worldwide.
A basic capability of a robot is movement, i.e. pose control, which is the foundation of its other functions. Accurate pose control in turn relies on accurate pose detection; a common belief holds that "the precision of control cannot exceed the precision of feedback (detection)", so an accurate pose detection system is particularly important.
For a robot operating on a two-dimensional plane, pose detection comprises detection of the course angle (heading) and of the two-dimensional position coordinates. Traditional methods suffer from the following problems:
1) course-angle detection based on a magnetometer is mostly usable only in open outdoor environments; in indoor environments with strong ferromagnetic or electromagnetic interference it may fail;
2) course-angle detection based on a gyroscope becomes increasingly unreliable over time, and gyroscopes with low drift are too expensive for many low-cost applications;
3) satellite positioning is mostly usable only in open outdoor environments and is unsuitable for forests, valleys, indoor, underground, underwater or tunnel environments that satellite signals cannot reach;
4) radio positioning based on WiFi or UWB suffers from problems such as multipath interference, and its range is limited; extending the range inevitably increases equipment cost;
5) vision-based positioning is affected by illumination; in scenes that are too bright, too dark or rapidly changing, its precision is low, and it also suffers from feature mismatching and line-of-sight occlusion;
6) fingerprint positioning based on vision, WiFi or the magnetic field requires a large amount of prior surveying to obtain the fingerprint features of robot locations, so it is not practical enough; moreover, in scenes where the fingerprint changes (particularly magnetic fingerprints), it is unreliable.
Summary of the invention
An object of the present invention is to provide an auxiliary-line-based position and posture detection method for indoor environments.
Another object of the present invention is to provide an auxiliary-line-based position and posture detection device for indoor environments.
To this end, one aspect of the present invention provides an auxiliary-line-based position and posture detection method, in which the ceiling or floor of the localization region is configured with auxiliary lines, comprising the following steps:
S101: perform initialization assignment of the sampled-point index t, the sampling interval T, the first-colour auxiliary-line spacing E_X, the second-colour auxiliary-line spacing E_Y, the forward velocity V_{F,t} and lateral velocity V_{L,t} of the t-th sampled point, and the pose posterior estimate vector of the t-th sampled point, whose components are the posterior estimates of the X coordinate, Y coordinate and course angle of the t-th sampled point; the course angle is defined as the counter-clockwise rotation angle of the robot's forward direction relative to the X axis;
S102: increment the sampled-point index, t ← t+1; collect the data of the inertial measurement unit to obtain the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point; collect the image sensor to obtain the ceiling or floor image of the t-th sampled point;
S103: using the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point obtained in step S102, and based on the pose posterior estimate vector of the (t-1)-th sampled point, carry out pose prior estimation to obtain the pose prior estimate vector of the t-th sampled point;
S104: according to the ceiling or floor image of the t-th sampled point obtained in step S102, extract the ceiling- or floor-vision course-angle detection value set Θ_t and the Hough distance sets ρ_{X,t}, ρ_{Y,t} of the t-th sampled point;
S105: according to the course-angle detection value set Θ_t obtained in step S104 and the course-angle prior estimate obtained in step S103, carry out course-angle posterior estimation to obtain the course-angle posterior estimate of the t-th sampled point;
S106: according to the Hough distance sets ρ_{X,t}, ρ_{Y,t} obtained in step S104 and the course-angle posterior estimate obtained in step S105, extract the ceiling- or floor-vision X-coordinate detection value set and Y-coordinate detection value set of the t-th sampled point; and
S107: according to the X-coordinate and Y-coordinate detection value sets obtained in step S106, and the X-coordinate and Y-coordinate prior estimates obtained in step S103, carry out position-coordinate posterior estimation to obtain the X-coordinate and Y-coordinate posterior estimates of the t-th sampled point; and
S108: repeat steps S102 to S107, and output the pose posterior estimate vector of each sampled point, i.e. the pose detected value.
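Steps S101 to S108 form a predict-correct loop: the IMU supplies a dead-reckoning prior, and the auxiliary-line detections correct it. The sketch below illustrates one such loop in Python; the function and field names, the left-positive lateral-velocity convention, the use of degrees, and the restriction to heading fusion only (the position fusion of S106-S107 is omitted) are all assumptions of this sketch, not the patent's exact formulation.

```python
import math

def detect_pose(samples, T, pose0, delta_theta=10.0):
    """One predict-correct loop in the spirit of S101-S108.

    `samples` yields dicts with IMU readings (yaw rate `r`, forward and
    lateral accelerations `aF`, `aL`) and, when the camera sees auxiliary
    lines, candidate headings `Theta` in degrees.
    """
    x, y, theta = pose0              # S101: initialise the posterior state
    VF = VL = 0.0
    poses = []
    for s in samples:                # S102: collect sensor data
        # S103: pose prior by Euler integration over one period T
        theta_prior = theta + T * s["r"]
        VF += T * s["aF"]
        VL += T * s["aL"]
        rad = math.radians(theta_prior)
        x_prior = x + T * (VF * math.cos(rad) - VL * math.sin(rad))
        y_prior = y + T * (VF * math.sin(rad) + VL * math.cos(rad))
        # S104-S105: screen vision detections against the prior, average
        cand = [c for c in s.get("Theta", [])
                if abs(theta_prior - c) <= delta_theta]
        theta = sum(cand) / len(cand) if cand else theta_prior
        x, y = x_prior, y_prior      # S106-S107 position fusion omitted
        poses.append((x, y, theta))  # S108: output the pose detected value
    return poses
```

A robot accelerating forward at 1 m/s² with a single heading detection of 0.5° thus ends its first 1-second step at (1.0, 0.0) with heading 0.5°.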
According to another aspect of the present invention, there is provided an auxiliary-line-based position and posture detection device, comprising: an inertial measurement unit for detecting the yaw rate, forward acceleration and lateral acceleration; an image sensor for collecting ceiling or floor images, the lens of the image sensor being aimed vertically at the ceiling when collecting ceiling images, with the vertical projection of the robot centre onto the ceiling falling exactly at the bottom-left corner of the ceiling image, and aimed vertically at the floor when collecting floor images, with the vertical projection of the robot centre onto the floor falling exactly at the bottom-left corner of the floor image; and a data processing unit for executing a pose detection program to obtain the pose information, the pose detection program implementing the above steps S101 to S108 when executed.
The present invention also provides a computer-readable recording medium storing the pose detection program described above.
The present invention also provides a robot employing the above position and posture detection method.
Compared with conventional techniques, the present invention has the following advantages: 1) since no magnetometer is involved, the invention can be applied in environments rich in ferromagnetic and electromagnetic interference, with strong detection precision and environmental adaptability; 2) since the features on the ceiling are laid out in advance, landmark mismatching seldom occurs, which improves system stability; 3) since the image processing algorithms involved are fairly simple, the computational complexity of the invention is low, so it can run on a low-cost data processor, which reduces hardware cost on the one hand and raises the detection frequency and system reliability on the other; 4) since laying auxiliary lines on a ceiling is relatively easy, the invention can cover a large positioning area.
In addition to the objects, features and advantages described above, the present invention has other objects, features and advantages, which are described in further detail below with reference to the figures.
Brief description of the drawings
The accompanying drawings, which form a part of this application, are provided for a further understanding of the present invention; the illustrative embodiments of the invention and their explanations serve to explain the invention and do not constitute an inappropriate limitation of it. In the drawings:
Fig. 1 is a flow chart of the position and posture detection method based on ceiling vision according to the present invention;
Fig. 2 is a schematic diagram of the ceiling auxiliary lines in the position and posture detection method based on ceiling vision according to the present invention; and
Fig. 3 is a structural diagram of the position and posture detection device based on ceiling vision according to the present invention.
Embodiment
It should be noted that, in the absence of conflict, the embodiments in this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
The present invention combines sensors such as an inertial measurement unit and a camera with an estimation algorithm to detect the pose of a robot in real time. Compared with the prior art, the invention features low cost, high accuracy and high reliability, and provides a cost-effective means of acquiring pose information for many kinds of robots.
The position and posture detection method based on ceiling auxiliary lines of the present invention comprises the following steps: detection process initialization, sensor data collection, pose prior estimation, extraction of the course-angle detection value set, course-angle posterior estimation, extraction of the position-coordinate detection value set, position-coordinate posterior estimation, and final output of the pose information. As shown in Fig. 1, the details are as follows:
Step 1, detection process initialization
Initialize the sampled-point index t: t ← 0. According to the actual conditions, perform initialization assignment of the sampling interval T, the red auxiliary-line spacing E_X, the blue auxiliary-line spacing E_Y, the forward velocity V_{F,t} and lateral velocity V_{L,t} of the t-th sampled point, and the pose posterior estimate vector of the t-th sampled point, whose components are the posterior estimates of the X coordinate, Y coordinate and course angle of the t-th sampled point. The course angle is defined as the counter-clockwise rotation angle of the robot's forward direction relative to the X axis; A' denotes the transpose of a matrix A.
Before this scheme is implemented, auxiliary lines are drawn on a surface such as the ceiling of the region requiring the localization service, as follows:
The ceiling of the localization region is configured with a series of red parallel lines parallel to the X axis, adjacent lines having an equal interval E_X; meanwhile, it is also configured with a series of blue parallel lines parallel to the Y axis, adjacent lines having an equal interval E_Y. The effect is shown in Fig. 2, where the red auxiliary lines are drawn as dashed lines only for distinction; in practice, solid lines or dashed lines with small gaps should be used.
Step 2, sensor data collection
Increment the sampled-point index, t ← t+1; collect the data of the inertial measurement unit to obtain the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point; collect the image sensor to obtain the ceiling image of the t-th sampled point.
Step 3, pose prior estimation
Using the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point obtained in step 2, and based on the pose posterior estimate vector of the (t-1)-th sampled point, carry out pose prior estimation to obtain the pose prior estimate vector of the t-th sampled point. Specifically:
θ̂_{t,t-1} = θ̂_{t-1} + T·r_t;
V_{F,t} = V_{F,t-1} + T·a_{F,t};
V_{L,t} = V_{L,t-1} + T·a_{L,t}.
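The three update equations of step 3 are a plain forward-Euler integration of the IMU over one sampling period; the X- and Y-coordinate prior formulas, not legible in this text, are presumably an analogous Euler step on the position. A minimal sketch (variable names are illustrative):

```python
def prior_step(theta_prev, VF_prev, VL_prev, r, aF, aL, T):
    # Step 3: forward-Euler integration over one sampling period T.
    theta_prior = theta_prev + T * r   # heading prior from the yaw rate
    VF = VF_prev + T * aF              # forward velocity update
    VL = VL_prev + T * aL              # lateral velocity update
    return theta_prior, VF, VL
```

With T = 0.1 s, r = 10 deg/s and a_{F,t} = 2 m/s², one step advances the heading by 1 degree and the forward speed by 0.2 m/s.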
Step 4, extraction of the course-angle detection value set
According to the ceiling image of the t-th sampled point obtained in step 2, extract the ceiling-vision course-angle detection value set Θ_t of the t-th sampled point. Specifically:
Step 401: perform threshold segmentation based on red and on blue, respectively, on the ceiling image of the t-th sampled point collected in step 2, to obtain two binary images each containing only the auxiliary lines of the corresponding colour; the white regions of the first are the red auxiliary lines, the white regions of the second are the blue auxiliary lines, and the remainder is black;
Step 402: apply, in turn, a morphological closing operation, a skeleton-extraction operation and a pruning operation to the binary images obtained in step 401; their purposes are, respectively, to fill tiny holes in the images, to extract the centre lines of the auxiliary lines, and to eliminate small branches detached from or attached to the auxiliary-line skeletons, yielding the binary images of the t-th sampled point that contain only the auxiliary-line centre lines;
Step 403: perform the Hough transform on the binary images obtained in step 402, to obtain the Hough distance sets ρ_{X,t}, ρ_{Y,t} and Hough angle sets ∈_{X,t}, ∈_{Y,t} of the t-th sampled point;
Step 404: based on the Hough angle set ∈_{X,t} obtained in step 403, compute the ceiling-vision course-angle detection value set Θ_t, specifically: Θ_t = ∪_{e∈∈_{X,t}} {-e, -e+180}.
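Step 404 maps each Hough angle of the X-parallel (red) lines to two heading hypotheses, since a line image cannot distinguish a heading from its opposite direction. A sketch of that mapping; normalising the angles to [0, 360) degrees is an assumption added here for convenience:

```python
def heading_candidates(hough_angles_deg):
    """Step 404 (sketch): each Hough angle e of an X-parallel auxiliary
    line yields two heading hypotheses {-e, -e+180}; angles are
    normalised to [0, 360) degrees."""
    cands = set()
    for e in hough_angles_deg:
        cands.add((-e) % 360.0)           # first hypothesis
        cands.add((-e + 180.0) % 360.0)   # opposite direction
    return sorted(cands)
```

A single detected Hough angle of 30 degrees thus yields the candidate set {150, 330}; the prior-based screening of step 5 later picks the consistent one.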
Step 5, course-angle posterior estimation
According to the ceiling-vision course-angle detection value set Θ_t of the t-th sampled point obtained in step 4, and the course-angle prior estimate θ̂_{t,t-1} obtained in step 3, carry out course-angle posterior estimation to obtain the course-angle posterior estimate θ̂_t of the t-th sampled point. Specifically:
If Θ_t is not empty, compute the course-angle posterior set of the t-th sampled point:
Θ̂_t = { Θ_t^(i) | abs(θ̂_{t,t-1} - Θ_t^(i)) ≤ δ_θ, i = 1, 2, …, N_Θ },
where abs(·) is the absolute value, Θ_t^(i) is the i-th element of Θ_t, δ_θ > 0 is the course-angle screening threshold, and N_Θ is the number of elements of Θ_t;
If Θ̂_t is not empty, compute the course-angle posterior estimate as the mean of its elements:
θ̂_t = (Σ_{i=1}^{N_θ} Θ̂_t^(i)) / N_θ,
where Θ̂_t^(i) is the i-th element of Θ̂_t and N_θ is the number of elements of Θ̂_t;
If Θ_t is empty or Θ̂_t is empty, take the prior as the posterior: θ̂_t = θ̂_{t,t-1}.
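The screen-then-average rule of step 5 can be sketched as follows; the same pattern reappears for the X and Y coordinates in step 7. Angle wrap-around (e.g. a prior of 359 degrees against a detection of 1 degree) is ignored here for brevity:

```python
def fuse_heading(theta_prior, Theta, delta_theta):
    """Step 5 (sketch): keep detections within delta_theta of the prior
    and average them; if none survive (or there are no detections),
    fall back to the prior."""
    gated = [c for c in Theta if abs(theta_prior - c) <= delta_theta]
    if not gated:
        return theta_prior   # empty set: posterior equals the prior
    return sum(gated) / len(gated)
```

With a prior of 10 degrees, detections {12, 8, 95} and a threshold δ_θ = 5, the outlier 95 is screened out and the posterior is the mean of 12 and 8, i.e. 10.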
Step 6, extraction of the position-coordinate detection value sets
According to the Hough distance sets ρ_{X,t}, ρ_{Y,t} of the t-th sampled point obtained in step 404, and the course-angle posterior estimate θ̂_t obtained in step 5, extract the ceiling-vision X-coordinate detection value set and Y-coordinate detection value set of the t-th sampled point. Specifically, one of four cases applies:
If …, then …;
If …, then …;
If …, then …;
If …, then …;
where M_X and M_Y are the numbers of red and blue auxiliary lines, respectively.
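The four case formulas of step 6 are not legible in this text. The underlying idea, however, is that a single Hough distance only fixes the robot's offset to one of the equally spaced auxiliary lines, so the absolute coordinate is ambiguous up to the line index; the detection value set enumerates that ambiguity, and the prior of step 3 later resolves it. A sketch of that enumeration under the stated assumptions (uniform spacing, offset already converted to metres); the exact branch conditions in the patent are not reproduced:

```python
def coordinate_candidates(offset, spacing, num_lines):
    """Step 6 (sketch, assumed form): each of the `num_lines` equally
    spaced auxiliary lines could be the one observed, so each line
    index m yields one candidate coordinate m*spacing + offset."""
    return [m * spacing + offset for m in range(num_lines)]
```

For spacing E_X = 2 m, three lines and a measured offset of 0.3 m, the candidate set is {0.3, 2.3, 4.3}; whichever candidate lies within δ_x of the prior survives the screening of step 701.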
Step 7, position-coordinate posterior estimation
According to the ceiling-vision X-coordinate detection value set and Y-coordinate detection value set of the t-th sampled point obtained in step 6, and the X-coordinate prior estimate x̂_{t,t-1} and Y-coordinate prior estimate ŷ_{t,t-1} obtained in step 3, carry out position-coordinate posterior estimation to obtain the X-coordinate posterior estimate x̂_t and Y-coordinate posterior estimate ŷ_t of the t-th sampled point. Specifically:
Step 701: compute the X-coordinate posterior estimate:
If the X-coordinate detection value set is not empty, compute the X-coordinate posterior set of the t-th sampled point by keeping the elements within the X-coordinate screening threshold δ_x > 0 of the prior x̂_{t,t-1}, where abs(·) is the absolute value and N_X is the number of elements of the detection value set;
If the posterior set is not empty, compute the X-coordinate posterior estimate x̂_t as the mean of its N_x elements;
If the detection value set is empty or the posterior set is empty, take the prior as the posterior: x̂_t = x̂_{t,t-1}.
Step 702: compute the Y-coordinate posterior estimate:
If the Y-coordinate detection value set is not empty, compute the Y-coordinate posterior set of the t-th sampled point by keeping the elements within the Y-coordinate screening threshold δ_y > 0 of the prior ŷ_{t,t-1}, where N_Y is the number of elements of the detection value set;
If the posterior set is not empty, compute the Y-coordinate posterior estimate ŷ_t as the mean of its N_y elements;
If the detection value set is empty or the posterior set is empty, take the prior as the posterior: ŷ_t = ŷ_{t,t-1}.
The resulting X-coordinate, Y-coordinate and course-angle posterior estimates form the pose posterior estimate vector.
Repeat steps 2 to 7, and output the pose posterior estimate vector of each sampled point, i.e. the pose detected value.
In another embodiment, a position and posture detection method based on floor vision is provided. It differs from the above embodiment in that the auxiliary lines are configured on the floor; the floor auxiliary lines are configured in the same way as the ceiling auxiliary lines, the image sensor collects floor images instead of ceiling images, and the other steps likewise process the floor images.
The above detection method of the present invention is implemented by running a pose detection program on the data processing unit of a robot; to this end, the invention provides a computer-readable recording medium storing the pose detection program.
The present invention also provides a device for realizing the above position and posture detection method. As shown in Fig. 3, a position and posture detection device based on ceiling vision comprises:
an inertial measurement unit, for detecting the yaw rate, forward acceleration and lateral acceleration;
an image sensor, for collecting ceiling images, the lens of the image sensor being aimed vertically at the ceiling, with the vertical projection of the robot centre onto the ceiling falling exactly at the bottom-left corner of the ceiling image; and
a data processing unit, for executing a pose detection program to obtain the pose information, the pose detection program implementing the following steps when executed: detection process initialization, sensor data collection, pose prior estimation, extraction of the course-angle detection value set, course-angle posterior estimation, extraction of the position-coordinate detection value set, and position-coordinate posterior estimation.
The benefit of ensuring that the vertical projection of the robot centre onto the ceiling falls exactly at the bottom-left corner of the ceiling image is that, with the robot centre coinciding with the bottom-left corner of the image, the subsequent computation of the position coordinates is simplified.
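The corner alignment removes the translation term from the pixel-to-world conversion: a line crossing observed at pixel (px, py) maps to a world offset from the robot by pure scaling. The single scale constant below (camera height and focal length folded into metres-per-pixel) and the row-flip convention are assumptions of this sketch, not parameters given in the patent:

```python
def pixel_to_world_offset(px, py, img_height, metres_per_pixel):
    """With the robot centre at the bottom-left image corner, pixel
    coordinates convert to world offsets by scaling alone.  Image rows
    grow downward, so the world Y offset flips the row axis."""
    dx = px * metres_per_pixel
    dy = (img_height - py) * metres_per_pixel
    return dx, dy
```

For example, a point 100 pixels to the right of the bottom-left corner of a 480-row image, at 0.01 m/pixel, lies 1 m along X from the robot centre.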
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may be modified and varied in various ways. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the invention shall be included in the protection scope of the invention.

Claims (10)

1. An auxiliary-line-based position and posture detection method, characterised in that the ceiling or floor of the localization region is configured with auxiliary lines, the detection method comprising the following steps:
S101: perform initialization assignment of the sampled-point index t, the sampling interval T, the first-colour auxiliary-line spacing E_X, the second-colour auxiliary-line spacing E_Y, the forward velocity V_{F,t} and lateral velocity V_{L,t} of the t-th sampled point, and the pose posterior estimate vector of the t-th sampled point, whose components are the posterior estimates of the X coordinate, Y coordinate and course angle of the t-th sampled point; the course angle is defined as the counter-clockwise rotation angle of the robot's forward direction relative to the X axis;
S102: increment the sampled-point index, t ← t+1; collect the data of the inertial measurement unit to obtain the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point; collect the image sensor to obtain the ceiling or floor image of the t-th sampled point;
S103: using the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point obtained in step S102, and based on the pose posterior estimate vector of the (t-1)-th sampled point, carry out pose prior estimation to obtain the pose prior estimate vector of the t-th sampled point;
S104: according to the ceiling or floor image of the t-th sampled point obtained in step S102, extract the ceiling- or floor-vision course-angle detection value set Θ_t and the Hough distance sets ρ_{X,t}, ρ_{Y,t} of the t-th sampled point;
S105: according to the course-angle detection value set Θ_t obtained in step S104 and the course-angle prior estimate obtained in step S103, carry out course-angle posterior estimation to obtain the course-angle posterior estimate of the t-th sampled point;
S106: according to the Hough distance sets ρ_{X,t}, ρ_{Y,t} obtained in step S104 and the course-angle posterior estimate obtained in step S105, extract the ceiling- or floor-vision X-coordinate detection value set and Y-coordinate detection value set of the t-th sampled point; and
S107: according to the X-coordinate and Y-coordinate detection value sets obtained in step S106, and the X-coordinate and Y-coordinate prior estimates obtained in step S103, carry out position-coordinate posterior estimation to obtain the X-coordinate and Y-coordinate posterior estimates of the t-th sampled point; and
S108: repeat steps S102 to S107, and output the pose posterior estimate vector of each sampled point, i.e. the pose detected value.
2. The auxiliary-line-based position and posture detection method according to claim 1, characterised in that the auxiliary lines are configured as follows:
the ceiling or floor of the localization region is configured with a series of parallel lines of a first colour C_X parallel to the X axis, adjacent lines having an equal interval E_X; meanwhile, it is also configured with a series of parallel lines of a second colour C_Y parallel to the Y axis, adjacent lines having an equal interval E_Y.
3. The auxiliary-line-based position and posture detection method according to claim 2, characterised in that the pose prior estimate vector of the t-th sampled point in step S103 is computed as follows:
θ̂_{t,t-1} = θ̂_{t-1} + T·r_t;
V_{F,t} = V_{F,t-1} + T·a_{F,t};
V_{L,t} = V_{L,t-1} + T·a_{L,t}; and
…
4. The auxiliary-line-based position and posture detection method according to claim 3, characterised in that step S104 comprises the following steps:
S401: perform threshold segmentation based on the first colour and on the second colour, respectively, on the ceiling or floor image of the t-th sampled point collected in step S102, to obtain two binary images each containing only the auxiliary lines of the corresponding colour, where the white regions of the first are the first-colour auxiliary lines, the white regions of the second are the second-colour auxiliary lines, and the remainder is black;
S402: apply, in turn, a morphological closing operation, a skeleton-extraction operation and a pruning operation to the binary images obtained in step S401, to fill tiny holes in the images, extract the centre lines of the auxiliary lines, and eliminate small branches detached from or attached to the auxiliary-line skeletons, obtaining the binary images of the t-th sampled point that contain only the auxiliary-line centre lines;
S403: perform the Hough transform on the binary images obtained in step S402, to obtain the Hough distance sets ρ_{X,t}, ρ_{Y,t} and Hough angle sets ∈_{X,t}, ∈_{Y,t} of the t-th sampled point; and
S404: based on the Hough angle set ∈_{X,t} obtained in step S403, compute the ceiling- or floor-vision course-angle detection value set Θ_t, specifically: Θ_t = ∪_{e∈∈_{X,t}} {-e, -e+180}.
5. The auxiliary-line-based position and posture detection method according to claim 4, characterised in that the course-angle posterior estimate θ̂_t of the t-th sampled point in step S105 is computed as follows:
if Θ_t is not empty, compute the course-angle posterior set of the t-th sampled point:
Θ̂_t = { Θ_t^(i) | abs(θ̂_{t,t-1} - Θ_t^(i)) ≤ δ_θ, i = 1, 2, …, N_Θ },
where abs(·) is the absolute value, Θ_t^(i) is the i-th element of Θ_t, δ_θ > 0 is the course-angle screening threshold, and N_Θ is the number of elements of Θ_t;
if Θ̂_t is not empty, compute the course-angle posterior estimate:
θ̂_t = (Σ_{i=1}^{N_θ} Θ̂_t^(i)) / N_θ,
where Θ̂_t^(i) is the i-th element of Θ̂_t and N_θ is the number of elements of Θ̂_t;
if Θ_t is empty or Θ̂_t is empty, compute the course-angle posterior estimate of the t-th sampled point as θ̂_t = θ̂_{t,t-1}.
6. The auxiliary-line-based position and posture detection method according to claim 5, characterized in that step S107 comprises the following steps:
S701: calculate the X-axis coordinate posterior estimate x̂_t, comprising the following steps:
If X_t is not the empty set, the X-axis coordinate posterior estimation set X̂_t of the t-th sampled point is calculated as

\hat X_t = \{ X_t^{(i)} \mid \mathrm{abs}(\hat x_{t,t-1} - X_t^{(i)}) \le \delta_x,\ i = 1, 2, \ldots, N_X \},

where abs(·) denotes the absolute value, X_t^{(i)} is the i-th element of X_t, δ_x > 0 is the X-axis coordinate screening threshold, and N_X is the number of elements of X_t;
If X̂_t is not the empty set, the X-axis coordinate posterior estimate x̂_t is calculated as

\hat x_t = \frac{1}{N_x} \sum_{i=1}^{N_x} \hat X_t^{(i)},

where X̂_t^{(i)} is the i-th element of X̂_t and N_x is the number of elements of X̂_t;
If X_t is the empty set or X̂_t is the empty set, the X-axis coordinate posterior estimate of the t-th sampled point falls back to the prior, x̂_t = x̂_{t,t-1};
S702: calculate the Y-axis coordinate posterior estimate ŷ_t, comprising the following steps:
If Y_t is not the empty set, the Y-axis coordinate posterior estimation set Ŷ_t of the t-th sampled point is calculated as

\hat Y_t = \{ Y_t^{(i)} \mid \mathrm{abs}(\hat y_{t,t-1} - Y_t^{(i)}) \le \delta_y,\ i = 1, 2, \ldots, N_Y \},

where abs(·) denotes the absolute value, Y_t^{(i)} is the i-th element of Y_t, δ_y > 0 is the Y-axis coordinate screening threshold, and N_Y is the number of elements of Y_t;
If Ŷ_t is not the empty set, the Y-axis coordinate posterior estimate ŷ_t is calculated as

\hat y_t = \frac{1}{N_y} \sum_{i=1}^{N_y} \hat Y_t^{(i)},

where Ŷ_t^{(i)} is the i-th element of Ŷ_t and N_y is the number of elements of Ŷ_t;
If Y_t is the empty set or Ŷ_t is the empty set, the Y-axis coordinate posterior estimate of the t-th sampled point falls back to the prior, ŷ_t = ŷ_{t,t-1}. The vector [x̂_t, ŷ_t, θ̂_t]^T is then taken as the pose posterior estimate vector.
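Steps S701 and S702 apply the same screen-and-average rule to each axis independently. A compact sketch, with illustrative names and an assumed fall-back to the prior in the empty-set case:

```python
def coord_posterior(prior, candidates, delta):
    """Screen axis-coordinate detections against the prior estimate and
    average the survivors; retain the prior when nothing survives
    (illustrative sketch of steps S701/S702, not the patent's exact text)."""
    kept = [c for c in candidates if abs(prior - c) <= delta]
    return sum(kept) / len(kept) if kept else prior

# S701 and S702 are the identical rule applied per axis:
x_post = coord_posterior(3.0, [2.9, 3.1, 9.0], delta=0.2)  # outlier 9.0 rejected
y_post = coord_posterior(1.0, [], delta=0.2)               # empty set: prior kept
```

Keeping the X and Y updates symmetric means a single helper covers both sub-steps; only the detection sets and thresholds differ.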
7. A computer-readable storage medium storing a pose detection program, characterized in that the pose detection program, when executed on a processor, implements the following steps:
S101: initialize the sampled-point sequence number t, the sampling interval T, the first-color auxiliary line spacing E_X, the second-color auxiliary line spacing E_Y, the forward velocity V_{F,t} and lateral velocity V_{L,t} of the t-th sampled point, and the pose posterior estimate vector [x̂_t, ŷ_t, θ̂_t]^T of the t-th sampled point, where x̂_t, ŷ_t and θ̂_t are the posterior estimates of the X-axis coordinate, Y-axis coordinate and course angle of the t-th sampled point, respectively, and the course angle is defined as the counterclockwise rotation angle of the robot's direction of advance relative to the X-axis;
S102: increment the sampled-point sequence number, t ← t + 1; acquire the inertial measurement unit data to obtain the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point; and acquire the image sensor output to obtain the ceiling or ground image of the t-th sampled point;
S103: using the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point obtained in step S102, and based on the pose posterior estimate vector of the (t−1)-th sampled point, perform pose prior estimation to obtain the pose prior estimate vector of the t-th sampled point;
S104: from the ceiling or ground image of the t-th sampled point obtained in step S102, extract the ceiling or ground visual course angle detection value set Θ_t and the Hough distance sets ρ_{X,t} and ρ_{Y,t} of the t-th sampled point;
S105: according to the ceiling or ground visual course angle detection value set Θ_t of the t-th sampled point obtained in step S104 and the course angle prior estimate θ̂_{t,t-1} of the t-th sampled point obtained in step S103, perform course angle posterior estimation to obtain the course angle posterior estimate θ̂_t of the t-th sampled point;
S106: according to the Hough distance sets ρ_{X,t} and ρ_{Y,t} of the t-th sampled point obtained in step S104 and the course angle posterior estimate θ̂_t obtained in step S105, extract the ceiling or ground visual X-axis coordinate detection value set X_t and Y-axis coordinate detection value set Y_t of the t-th sampled point;
S107: according to the ceiling or ground visual X-axis coordinate detection value set X_t and Y-axis coordinate detection value set Y_t of the t-th sampled point obtained in step S106, and the X-axis coordinate prior estimate x̂_{t,t-1} and Y-axis coordinate prior estimate ŷ_{t,t-1} of the t-th sampled point obtained in step S103, perform position coordinate posterior estimation to obtain the X-axis coordinate posterior estimate x̂_t and Y-axis coordinate posterior estimate ŷ_t of the t-th sampled point; and
S108: repeat steps S102 to S107 and output the pose posterior estimate vector of each sampled point, i.e., the pose detection values.
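Step S103 propagates the previous posterior pose with the IMU readings to form the prior for the current sampled point. The excerpt does not spell out the propagation model, so the first-order dead-reckoning model below is an assumption; the function name and argument order are illustrative:

```python
import math

def pose_prior(x, y, theta, v_F, v_L, r, a_F, a_L, T):
    """One S103-style prior-estimation step (assumed first-order model).

    x, y, theta : previous posterior pose (world frame, counterclockwise heading)
    v_F, v_L    : previous forward and lateral velocities (body frame)
    r, a_F, a_L : IMU yaw rate, forward and lateral accelerations
    T           : sampling interval
    """
    theta_p = theta + r * T        # integrate yaw rate into the heading
    v_F = v_F + a_F * T            # integrate body-frame accelerations
    v_L = v_L + a_L * T
    # Rotate the body-frame velocity into the world frame and integrate position.
    x_p = x + (v_F * math.cos(theta_p) - v_L * math.sin(theta_p)) * T
    y_p = y + (v_F * math.sin(theta_p) + v_L * math.cos(theta_p)) * T
    return x_p, y_p, theta_p, v_F, v_L
```

The prior (x_p, y_p, theta_p) is then what steps S105 and S107 screen the visual detections against; its drift is bounded because every cycle ends with a vision-corrected posterior.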
8. A position and posture detection apparatus, characterized by comprising:
an inertial measurement unit for detecting yaw rate, forward acceleration and lateral acceleration;
an image sensor for acquiring ceiling or ground images, wherein when a ceiling image is acquired the lens of the image sensor points vertically at the ceiling and the vertical projection point of the robot center onto the ceiling coincides exactly with the lower-left corner of the ceiling image, and when a ground image is acquired the lens points vertically at the ground and the vertical projection point of the robot center onto the ground coincides exactly with the lower-left corner of the ground image; and
a data processing unit for executing a pose detection program to obtain pose information, the pose detection program implementing the following steps when executed:
S101: initialize the sampled-point sequence number t, the sampling interval T, the first-color auxiliary line spacing E_X, the second-color auxiliary line spacing E_Y, the forward velocity V_{F,t} and lateral velocity V_{L,t} of the t-th sampled point, and the pose posterior estimate vector [x̂_t, ŷ_t, θ̂_t]^T of the t-th sampled point, where x̂_t, ŷ_t and θ̂_t are the posterior estimates of the X-axis coordinate, Y-axis coordinate and course angle of the t-th sampled point, respectively, and the course angle is defined as the counterclockwise rotation angle of the robot's direction of advance relative to the X-axis;
S102: increment the sampled-point sequence number, t ← t + 1; acquire the inertial measurement unit data to obtain the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point; and acquire the image sensor output to obtain the ceiling image of the t-th sampled point;
S103: using the yaw rate r_t, forward acceleration a_{F,t} and lateral acceleration a_{L,t} of the t-th sampled point obtained in step S102, and based on the pose posterior estimate vector of the (t−1)-th sampled point, perform pose prior estimation to obtain the pose prior estimate vector of the t-th sampled point;
S104: from the ceiling image of the t-th sampled point obtained in step S102, extract the ceiling visual course angle detection value set Θ_t and the Hough distance sets ρ_{X,t} and ρ_{Y,t} of the t-th sampled point;
S105: according to the ceiling visual course angle detection value set Θ_t of the t-th sampled point obtained in step S104 and the course angle prior estimate θ̂_{t,t-1} of the t-th sampled point obtained in step S103, perform course angle posterior estimation to obtain the course angle posterior estimate θ̂_t of the t-th sampled point;
S106: according to the Hough distance sets ρ_{X,t} and ρ_{Y,t} of the t-th sampled point obtained in step S104 and the course angle posterior estimate θ̂_t obtained in step S105, extract the ceiling visual X-axis coordinate detection value set X_t and Y-axis coordinate detection value set Y_t of the t-th sampled point;
S107: according to the ceiling visual X-axis coordinate detection value set X_t and Y-axis coordinate detection value set Y_t of the t-th sampled point obtained in step S106, and the X-axis coordinate prior estimate x̂_{t,t-1} and Y-axis coordinate prior estimate ŷ_{t,t-1} of the t-th sampled point obtained in step S103, perform position coordinate posterior estimation to obtain the X-axis coordinate posterior estimate x̂_t and Y-axis coordinate posterior estimate ŷ_t of the t-th sampled point; and
S108: repeat steps S102 to S107 and output the pose posterior estimate vector of each sampled point, i.e., the pose detection values.
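Step S104 recovers the auxiliary-line parameters with a Hough transform, which represents each line as ρ = x·cos(θ) + y·sin(θ). The patent's image pipeline is not reproduced in this excerpt; the minimal NumPy accumulator below only illustrates that (ρ, θ) parameterization on pre-extracted edge points:

```python
import numpy as np

def dominant_hough_line(points, n_theta=180):
    """Vote edge points into a coarse (rho, theta) accumulator and return
    the strongest line, using rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    xs = points[:, 0].astype(float)
    ys = points[:, 1].astype(float)
    # rho for every (point, theta) pair, binned to integers for voting.
    rhos = np.round(xs[:, None] * np.cos(thetas)
                    + ys[:, None] * np.sin(thetas)).astype(int)
    votes = {}
    for j in range(n_theta):
        for rb in rhos[:, j]:
            votes[(rb, j)] = votes.get((rb, j), 0) + 1
    rho_star, j_star = max(votes, key=votes.get)
    return float(rho_star), float(thetas[j_star])

# Points on the vertical line x = 5 vote most strongly at rho = 5, theta = 0;
# on a grid of auxiliary lines, the detected thetas feed the set Theta_t and
# the detected rhos feed the distance sets rho_{X,t} and rho_{Y,t}.
pts = np.array([[5, y] for y in range(0, 100, 10)])
rho, theta = dominant_hough_line(pts)
```

A production pipeline would instead threshold the ceiling image by auxiliary-line color, take edges, and accumulate all peaks rather than only the strongest one.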
9. A robot comprising a pose detection apparatus, characterized in that the pose detection apparatus is the position and posture detection apparatus according to claim 8.
10. The robot according to claim 9, characterized in that the robot is an unmanned aerial vehicle or a wheeled robot.
CN201711249452.3A 2017-12-01 2017-12-01 Pose detection method and device based on auxiliary line and computer readable storage medium Active CN108036786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711249452.3A CN108036786B (en) 2017-12-01 2017-12-01 Pose detection method and device based on auxiliary line and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN108036786A true CN108036786A (en) 2018-05-15
CN108036786B CN108036786B (en) 2021-02-09

Family

ID=62095108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711249452.3A Active CN108036786B (en) 2017-12-01 2017-12-01 Pose detection method and device based on auxiliary line and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108036786B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109458977A (en) * 2018-10-21 2019-03-12 合肥优控科技有限公司 Robot orientation method, system and computer readable storage medium
CN109709804A (en) * 2018-12-20 2019-05-03 安徽优思天成智能科技有限公司 A kind of attitude detecting method of servomechanism
CN111650936A (en) * 2020-06-03 2020-09-11 杭州迦智科技有限公司 Servo control method, processor, storage medium and movable platform
CN112773272A (en) * 2020-12-29 2021-05-11 深圳市杉川机器人有限公司 Moving direction determining method and device, sweeping robot and storage medium
CN114115212A (en) * 2020-08-26 2022-03-01 宁波方太厨具有限公司 Cleaning robot positioning method and cleaning robot adopting same

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090210092A1 (en) * 2008-02-15 2009-08-20 Korea Institute Of Science And Technology Method for self-localization of robot based on object recognition and environment information around recognized object
CN103207634A (en) * 2013-03-20 2013-07-17 北京工业大学 Data fusion system and method of differential GPS (Global Position System) and inertial navigation in intelligent vehicle
CN106123908A (en) * 2016-09-08 2016-11-16 北京京东尚科信息技术有限公司 Automobile navigation method and system
CN106323294A (en) * 2016-11-04 2017-01-11 新疆大学 Positioning method and device for patrol robot of transformer substation
CN106525053A (en) * 2016-12-28 2017-03-22 清研华宇智能机器人(天津)有限责任公司 Indoor positioning method for mobile robot based on multi-sensor fusion




Similar Documents

Publication Publication Date Title
CN108036786A (en) Position and posture detection method, device and computer-readable recording medium based on auxiliary line
CN109658461B (en) Unmanned aerial vehicle positioning method based on cooperation two-dimensional code of virtual simulation environment
CN102435188B (en) Monocular vision/inertia autonomous navigation method for indoor environment
CN102313536B (en) Method for barrier perception based on airborne binocular vision
Bazin et al. Motion estimation by decoupling rotation and translation in catadioptric vision
CN107451593B (en) High-precision GPS positioning method based on image feature points
CN107808407A (en) Unmanned plane vision SLAM methods, unmanned plane and storage medium based on binocular camera
CN104596502A (en) Object posture measuring method based on CAD model and monocular vision
CN106556412A (en) The RGB D visual odometry methods of surface constraints are considered under a kind of indoor environment
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
CN105222788A (en) The automatic correcting method of the aircraft course deviation shift error of feature based coupling
CN108665499B (en) Near distance airplane pose measuring method based on parallax method
CN112967339B (en) Vehicle pose determining method, vehicle control method and device and vehicle
CN109141396A (en) The UAV position and orientation estimation method that auxiliary information is merged with random sampling unification algorism
Sehgal et al. Real-time scale invariant 3D range point cloud registration
CN112805766A (en) Apparatus and method for updating detailed map
Kostavelis et al. Visual odometry for autonomous robot navigation through efficient outlier rejection
Deng et al. A binocular vision-based measuring system for UAVs autonomous aerial refueling
CN113740864A (en) Self-pose estimation method for soft landing tail segment of detector based on laser three-dimensional point cloud
Lv et al. FVC: A novel nonmagnetic compass
Butt et al. Monocular SLAM initialization using epipolar and homography model
Brink et al. Probabilistic outlier removal for robust landmark identification in stereo vision based SLAM
CN105225232A (en) A kind of colour of view-based access control model attention mechanism warship cooperative target detection method
CN107945212A (en) Infrared small and weak Detection of Moving Objects based on inertial navigation information auxiliary and background subtraction
CN108088446B (en) Mobile robot course angle detection method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant