CN107917710A - Indoor real-time localization and three-dimensional map construction method based on a single-line laser - Google Patents

Indoor real-time localization and three-dimensional map construction method based on a single-line laser

Info

Publication number
CN107917710A
Authority
CN
China
Prior art keywords
line segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711092661.1A
Other languages
Chinese (zh)
Other versions
CN107917710B (en)
Inventor
姚剑
谈彬
罗磊
李礼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201711092661.1A priority Critical patent/CN107917710B/en
Publication of CN107917710A publication Critical patent/CN107917710A/en
Application granted granted Critical
Publication of CN107917710B publication Critical patent/CN107917710B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an indoor real-time localization and three-dimensional map construction method based on a single-line laser. By fixing a single-line laser rangefinder on a rotating device, the limitation that a single-line laser rangefinder can only acquire planar 2D data is overcome, enabling the acquisition of indoor 3D environment data. For the acquired 3D scan data, the invention uses an accurate structure-map growing algorithm and a robust scan-line matching algorithm to estimate the six-degree-of-freedom pose of each laser scan line, thereby obtaining the motion trajectory of the sensor, and combines the laser scan data to construct a three-dimensional map of the scanned environment. The invention thus realizes an accurate real-time indoor localization and map construction method.

Description

Indoor real-time localization and three-dimensional map construction method based on a single-line laser
Technical field
The invention belongs to the field of indoor three-dimensional environment mapping and relates to an indoor real-time localization and three-dimensional map construction method based on a single-line laser. It can be used to provide real-time localization services in indoor environments and to meet the demand for constructing indoor three-dimensional environment maps.
Background art
With the rapid development of computer technology, automation, and artificial intelligence, and the steady improvement of living standards, the demand for indoor service robots keeps growing. Unlike traditional industrial robots, service robots must cope with more complex and changeable working environments. Intelligence and autonomy are therefore two essential characteristics of today's indoor service robots: an autonomous, intelligent service robot can work effectively in complex, unknown environments. To achieve such autonomy, a robot must be able to perceive the three-dimensional environment and to determine its own position within it. Two kinds of sensors are traditionally used to acquire environmental data: visual sensors and laser sensors. Visual sensors are relatively cheap and convenient to use; typical examples are monocular cameras (e.g. industrial cameras), binocular cameras, and depth cameras (e.g. RGB-D cameras). However, such visual sensors generally have a small field of view and are very sensitive to changes in ambient illumination. Compared with visual sensors, laser sensors have a larger field of view and are far less sensitive to illumination changes. In addition, laser sensors offer higher ranging accuracy, which helps to build high-precision environment maps.
Laser sensors can be further divided into 2D and 3D laser sensors. A 3D laser sensor can directly acquire three-dimensional scan data of the environment and is therefore widely used in outdoor navigation. However, 3D laser sensors are expensive and thus ill-suited to service robots working indoors. In contrast, 2D laser sensors cost much less while offering ranging accuracy and a field of view comparable to those of 3D laser sensors. Because a 2D laser sensor can only acquire planar two-dimensional scan data, it is usually used only to estimate the planar motion trajectory of the sensor and to build indoor planar maps; the open-source hector_slam and gmapping are typical such algorithms. But for an indoor service robot, determining only its planar pose and building only a planar environment map is far from sufficient.
Summary of the invention
To determine the six-degree-of-freedom pose of the robot, to build an indoor three-dimensional environment map, and also to reduce sensor cost, the present invention provides an indoor real-time localization and three-dimensional environment map construction method based on a 2D laser sensor.
The technical solution adopted by the present invention is an indoor real-time localization and three-dimensional map construction method based on a single-line laser, characterised by comprising the following steps:
Step 1: acquire the raw point cloud data of each two-dimensional scan line (Scan), extract straight-line segments from each scan line, and group the scan lines that together cover the three-dimensional space into one set, referred to as a Sweep;
Step 2: extract planes from the straight-line segment set obtained in step 1;
Step 3: for the first Sweep, use the extracted plane information to extract the principal directions and thin the point cloud;
Step 4: build the initial structure map;
Step 5: match a single Scan against the structure map and extract the point pairs to be optimized;
Step 6: optimize the distance d between corresponding points and compute the pose estimate of the Scan;
Step 7: iterate steps 5-6 until all Scans in one Sweep have been optimized, then add the point cloud into the structure map and grow the structure map;
Step 8: iterate steps 5-7 until all point cloud data have been processed.
Compared with the prior art, the beneficial effects of the present invention are: a lower-cost laser scanning strategy is adopted, and an accurate structure-map growing algorithm together with a robust scan-line matching algorithm realizes real-time pose estimation of the sensor and accurate mapping of the indoor three-dimensional environment, providing an effective technical means for the development of indoor intelligent service robots.
Brief description of the drawings
Fig. 1 is the flowchart of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the structure map built in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the three-dimensional scene map superimposed with the motion trajectory in an embodiment of the present invention.
Detailed description of the embodiments
To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here serve only to illustrate and explain the present invention, not to limit it.
Referring to Fig. 1, the indoor real-time localization and three-dimensional map construction method based on a single-line laser provided by the present invention comprises the following steps:
Step 1.1: starting from a point in the scan line, perform a least-squares line fit on the N scan points that follow it (including the starting point), and compute the fitting error;
The fitting error is expressed as:
$\varepsilon_l = \sum_{i=j}^{N} \left| y_i - (a x_i + b) \right|^2$,
where a, b are the line fitting parameters, (x_i, y_i) is the two-dimensional coordinate of the i-th scan point, j is the index of the starting point of the current segment in the current scan line, and N is a preset value; this embodiment takes N = 10;
Step 1.2: when the fitting error ε_l < T_l, an initial straight-line segment is considered found; otherwise, skip the current starting point and repeat the operation of step 1.1 from the next point until an initial segment is found; here T_l is the threshold on the line fitting error and can be taken as 50 mm;
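As an illustration, the fit and initial-segment search of steps 1.1-1.2 can be sketched in Python as follows; the function names and the window-sliding loop are illustrative, not part of the patent, and the thresholds default to the values used in this embodiment:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) scan points.
    Returns (a, b, eps) where eps = sum |y_i - (a*x_i + b)|^2 is the
    fitting error of step 1.1."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    eps = sum((y - (a * x + b)) ** 2 for x, y in points)
    return a, b, eps


def find_initial_segment(scan, N=10, T_l=50.0):
    """Step 1.2: slide an N-point window along the scan line until the
    fitting error drops below T_l; returns the start index j, or None
    if no initial segment exists."""
    for j in range(len(scan) - N + 1):
        _, _, eps = fit_line(scan[j:j + N])
        if eps < T_l:
            return j
    return None
```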
Step 1.3: after obtaining an initial straight-line segment, expand it; this specifically includes the following sub-steps:
Step 1.3.1: starting from the point P_i after the current initial segment, compute the distance d from P_i to the current segment, where i > j + N, j is the index of the segment's starting point in the scan line, and N can be taken as 10; if d < T_d, the point belongs to the current segment: add P_i to the segment to extend it, refit the segment, and update its fitting parameters; otherwise, the current segment cannot be expanded further and the expansion operation ends; here T_d is the maximum threshold on the point-to-line distance and can be taken as 50 mm;
Step 1.3.2: if the current segment was successfully expanded, repeat the operation of step 1.3.1 until the segment can no longer be expanded;
Step 1.4: after the expansion of a segment is completed, add it to the candidate segment set, and repeat the operations of steps 1.1-1.3 starting from the point after the current segment, until no new segment can be extracted from the current scan line;
Step 1.5: merge and prune all candidate segments on the current scan line to obtain the final segments; this specifically includes the following sub-steps:
Step 1.5.1: starting from the i-th candidate segment, judge whether the adjacent (i+1)-th candidate segment can be merged with it, using the conditions:
$\theta(v_i, v_{i+1}) < T_\theta$ and $d(P_e^i, P_s^{i+1}) < T_d$,
where v_i is the direction vector of the i-th segment, θ(v_i, v_{i+1}) is the angle between the direction vectors of the two segments, T_θ is a threshold that can be taken as 1°, P_e^i is the last point of the i-th candidate segment, P_s^{i+1} is the first point of the (i+1)-th candidate segment, d(P_e^i, P_s^{i+1}) is the distance between these two points, and T_d is a threshold that can be taken as 200 mm;
When the angle between the direction vectors of two adjacent candidate segments is below the threshold and the distance between their end and start points is also below the threshold, merge the two candidate segments into one and recompute the fitting parameters of the merged segment;
Step 1.5.2: repeat the operation of step 1.5.1 until no candidate segments can be merged; compute the length of each merged segment, reject segments shorter than 100 mm, take the remaining segments as the final segments extracted from the current scan line, and store them in order.
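A minimal sketch of the merge test of step 1.5.1, with each segment represented only by its endpoints (an assumption for illustration; the embodiment stores fitted direction vectors):

```python
import math


def can_merge(seg_a, seg_b, T_theta_deg=1.0, T_d=200.0):
    """Step 1.5.1 merge test for two adjacent candidate segments, each
    given as ((x_start, y_start), (x_end, y_end)) in mm: merge when the
    direction-vector angle is below T_theta_deg and the distance from
    seg_a's last point to seg_b's first point is below T_d."""
    (axs, ays), (axe, aye) = seg_a
    (bxs, bys), (bxe, bye) = seg_b
    va = (axe - axs, aye - ays)
    vb = (bxe - bxs, bye - bys)
    cos_t = ((va[0] * vb[0] + va[1] * vb[1])
             / (math.hypot(*va) * math.hypot(*vb)))
    theta = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    gap = math.hypot(bxs - axe, bys - aye)
    return theta < T_theta_deg and gap < T_d
```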
Step 2: extract planes from the segment set obtained in step 1. The specific steps are as follows:
Step 2.1: using the pose of each scan line, convert the two-dimensional segments of that scan line into three-dimensional segments. If the scan line was acquired while the platform was stationary, its pose can be obtained directly from the motor rotation angle reported by the scanning device; if not, the pose estimate of the scan line is obtained by the method of steps 5-6. After the three-dimensional segment extraction has been completed for all scan lines in one Sweep, a set of three-dimensional segments distributed over the whole scanned environment is obtained, denoted L, whose element l_i^j is the j-th segment of the i-th scan line;
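Step 2.1 lifts 2D segments into 3D using the scan-line pose. The sketch below assumes, purely for illustration, a nodding platform whose motor rotates the scan plane about the device's y-axis; the actual rotation axis depends on how the rangefinder is mounted on the rotating platform:

```python
import math


def scan_point_to_3d(x, y, motor_angle_rad):
    """Lift a 2D scan point into 3D using the scan line's pose.
    Assumption for illustration: the point (x, y, 0) in the scanner
    plane is rotated about the y-axis by the motor angle, R_y(angle)."""
    c, s = math.cos(motor_angle_rad), math.sin(motor_angle_rad)
    return (c * x, y, -s * x)


def segment_to_3d(p_start, p_end, motor_angle_rad):
    """Convert a 2D segment (given by its endpoints) into a 3D segment."""
    return (scan_point_to_3d(*p_start, motor_angle_rad),
            scan_point_to_3d(*p_end, motor_angle_rad))
```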
Step 2.2: choose a segment l_i^j in L; on the adjacent scan lines S_{i+1}, S_{i-1} of its scan line S_i, find segments coplanar with l_i^j;
The plane equation is written as
$x \cdot n = d$,
where x is a point on the plane, n is the normal vector of the plane, and d is a constant. Fit l_i^j together with the segments on its adjacent scan lines to this plane equation by least squares; if the fitting error is below a preset value, an initial plane Pl is considered found; otherwise, change the chosen segment and repeat the operation of step 2.2 until an initial plane is found;
Step 2.3: search around Pl for segments belonging to Pl; a segment l around Pl is considered to belong to plane Pl if it satisfies
$\Delta d^2 = \frac{1}{n} \sum_{i=0}^{n} (x_i \cdot n - d)^2 < T_{\Delta d}$,
where x_i are the points on segment l;
Add segment l to plane Pl and recompute the fitting parameters of Pl;
Step 2.4: repeat the operation of step 2.3 to keep expanding Pl until no segment around Pl can be merged into it; at that point a complete plane Pl has been obtained;
Step 2.5: repeat the operations of steps 2.2-2.4 for the remaining segments in L until no new plane can be extracted.
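The membership test of step 2.3 is a mean squared point-to-plane residual; a sketch, assuming a unit normal and an illustrative threshold value:

```python
def point_plane_ms_residual(points, normal, d):
    """Step 2.3: delta_d^2 = (1/n) * sum_i (x_i . n - d)^2, the mean
    squared residual of a segment's points against the plane x . n = d
    (the normal is assumed to be unit length)."""
    return sum((px * normal[0] + py * normal[1] + pz * normal[2] - d) ** 2
               for px, py, pz in points) / len(points)


def belongs_to_plane(points, normal, d, T_dd=25.0):
    """The segment belongs to the plane when delta_d^2 < T_dd
    (T_dd is an illustrative threshold, not a value from the patent)."""
    return point_plane_ms_residual(points, normal, d) < T_dd
```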
Step 3: for the first Sweep, extract the principal directions and thin the point cloud using the extracted plane information. The main steps include:
Step 3.1: initialize three orthogonal principal directions D = {d_1, d_2, d_3} = {{1,0,0}, {0,1,0}, {0,0,1}}, compute the normal vector n_p of each plane, and normalize it;
Step 3.2: take the cross product of each plane normal with each of the three principal directions, n_i = n_p × d_i, and choose the principal direction for which n_i is smallest in magnitude as the principal direction d_p matched to that plane;
Step 3.3: using the matched normal-vector pairs, optimize the principal directions with ceres and extract the optimal principal directions;
The principal direction optimization with ceres specifically includes the following sub-steps:
Step 3.3.1: take the transformation of D = {d_1, d_2, d_3}, parameterized as r = {rotateX, rotateY, rotateZ}, as the optimization variable of ceres;
Step 3.3.2: for each plane, compute the cross product of the matched vector pair, s = n_p × d_p, and take ||s|| as the residual of the ceres optimization;
Step 3.3.3: by iteratively optimizing r, obtain the rotation of the principal directions when the optimization error falls below a preset value; r is then converted into a 3×3 rotation matrix R, and applying R to D gives the optimized principal directions;
Step 3.4: uniformly thin the structure map point cloud.
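The matching of step 3.2 can be sketched as follows; the same cross-product magnitude ||n_p × d_p|| is the per-plane residual that the ceres optimization of step 3.3 drives toward zero (the helper names are illustrative):

```python
def cross(u, v):
    """Cross product of two 3D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])


def norm(u):
    """Euclidean length of a 3D vector."""
    return (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 0.5


def match_principal_direction(n_p, D):
    """Step 3.2: choose the principal direction d_i for which the cross
    product n_i = n_p x d_i has the smallest magnitude; ||n_p x d_p||
    is also the residual minimized in step 3.3."""
    return min(D, key=lambda d: norm(cross(n_p, d)))
```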
Step 4: build the initial structure map. The basic steps are as follows:
Step 4.1: for each plane, determine the principal direction closest to that plane and compute the angle θ between the plane normal and that principal direction;
If |θ| ≤ a preset threshold, which can be taken as 15°, perform the in-plane optimization;
The in-plane optimization specifically includes the following sub-steps:
Step 4.1.1: translate the point cloud C so that its centroid is at the origin, forming the point cloud C_g;
Step 4.1.2: compute the rotation matrix R from the rotation axis s = n_p × d_p and the rotation angle θ;
Step 4.1.3: rotate the point cloud: C'_g = R C_g;
Step 4.1.4: transform the point cloud C'_g back to the world coordinate system;
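Steps 4.1.1-4.1.4 can be sketched with Rodrigues' rotation formula, which builds the rotation matrix of step 4.1.2 from the axis s = n_p × d_p and the angle θ between n_p and d_p; the function names are illustrative:

```python
import math


def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])


def rodrigues(axis, theta):
    """Rotation matrix for a unit axis and angle theta (step 4.1.2)."""
    x, y, z = axis
    c, s = math.cos(theta), math.sin(theta)
    t = 1.0 - c
    return [[t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
            [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
            [t * x * z - s * y, t * y * z + s * x, t * z * z + c]]


def rotate(R, p):
    return tuple(sum(R[i][k] * p[k] for k in range(3)) for i in range(3))


def align_plane(points, n_p, d_p):
    """Steps 4.1.1-4.1.4: shift the cloud to its centroid, rotate the
    (unit) plane normal n_p onto the principal direction d_p, shift
    back so the centroid is preserved."""
    n = len(points)
    g = tuple(sum(p[i] for p in points) / n for i in range(3))
    s = cross(n_p, d_p)
    ns = math.sqrt(sum(c * c for c in s))
    if ns < 1e-12:  # normal already aligned with d_p
        return list(points)
    axis = tuple(c / ns for c in s)
    dot = sum(a * b for a, b in zip(n_p, d_p))
    R = rodrigues(axis, math.acos(max(-1.0, min(1.0, dot))))
    out = []
    for p in points:
        q = rotate(R, tuple(a - b for a, b in zip(p, g)))
        out.append(tuple(a + b for a, b in zip(q, g)))
    return out
```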
Step 4.2: judge whether the structure map has been initialized. If the plane map has not yet been initialized, add the optimized plane directly into the structure map to complete the initialization of the structure map. If the structure map has been initialized, judge whether a plane similar to the optimized plane already exists in the structure map;
If a similar plane exists, merge the two planes. The merging condition is that the plane normal vectors are equal and the plane distance d is below a preset threshold (this embodiment takes 0.1 m):
$d = |A x_p + B y_p + C z_p + D| / \sqrt{A^2 + B^2 + C^2}$,
where (x_p, y_p, z_p) is the centroid of one plane and A, B, C, D are the plane equation coefficients of the other plane;
The merging process includes the following sub-steps:
Step 4.2.1: compute the merged centroid
$P_g = (size_1 P_{g1} + size_2 P_{g2}) / (size_1 + size_2)$,
where P_{g1}, P_{g2} are the centroids of the two planes and size_1, size_2 are their point counts;
Step 4.2.2: translate each plane's point cloud along the plane normal direction to the merged centroid, where T_1, T_2 denote the translations of the two planar point clouds;
If no similar plane exists, project the points of the plane fully onto its plane equation and add the plane to the structure map;
Step 4.3: loop over steps 4.1-4.2 until no plane needs optimization; the structure map is then formed.
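The merge of steps 4.2.1-4.2.2 can be sketched as follows; the translation along the plane normal is an assumed form, since the patent text leaves the exact formula to the embodiment:

```python
def merge_centroid(pg1, size1, pg2, size2):
    """Step 4.2.1: centroid of the merged plane, weighted by the point
    counts of the two planes:
    P_g = (size1 * P_g1 + size2 * P_g2) / (size1 + size2)."""
    total = size1 + size2
    return tuple((size1 * a + size2 * b) / total for a, b in zip(pg1, pg2))


def normal_translation(pg, pg_old, normal):
    """Step 4.2.2, assumed form: translate a plane's cloud along its
    unit normal by the normal component of the shift P_g - P_g_old."""
    shift = tuple(a - b for a, b in zip(pg, pg_old))
    t = sum(s * n for s, n in zip(shift, normal))
    return tuple(t * n for n in normal)
```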
Step 5: match a single Scan against the structure map and extract the point pairs to be optimized.
The basic steps are as follows:
Step 5.1: determine the initial pose of the current Scan from the pose of the (i-1)-th Scan and the poses of the motor corresponding to the (i-1)-th and i-th Scans;
Step 5.2: extract the match points: down-sample the segments of the Scan, and for each down-sampled point search the structure map for its match point, forming the matching point pairs.
Step 6: optimize the distance d between corresponding points, where d is computed from the three-dimensional coordinates of the scan points. The pose of each Scan is the optimization variable and the distances between corresponding points are the residuals, optimized with ceres; when the difference between two successive optimized poses is small enough, the optimization terminates, yielding the optimized pose of each scan line.
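The optimization of step 6 uses ceres over the full six-DoF scan pose; as a stand-in for illustration, the sketch below restricts the pose to a pure translation, for which minimizing the summed squared corresponding-point distances has a closed form:

```python
def optimize_translation(pairs):
    """Translation-only stand-in for the ceres pose optimization of
    step 6: minimizing sum ||(p + t) - q||^2 over matched pairs (p, q)
    has the closed form t = mean(q - p)."""
    n = len(pairs)
    return tuple(sum(q[i] - p[i] for p, q in pairs) / n for i in range(3))


def residual(pairs, t):
    """Sum of squared corresponding-point distances after applying t,
    i.e. the quantity the optimization drives down."""
    return sum(sum((p[i] + t[i] - q[i]) ** 2 for i in range(3))
               for p, q in pairs)
```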
Step 7: iterate steps 5-6 until the point cloud optimization of one Sweep has finished, then add the point cloud into the structure map and grow the structure map. The specific steps are as follows:
Step 7.1: extract planes from the new Sweep according to steps 1-2;
Step 7.2: for each extracted plane P_i, find candidate planes in the structure map; the candidate condition is that the angle between the normal vector n_new of the extracted plane and the normal vector n_struct of the candidate plane in the structure map is below an angle threshold T_θ, which can be taken as 15°, and that the distance between the two planes, computed from the centroid (x_p, y_p, z_p) of one plane and the plane equation coefficients A, B, C, D of the other, is below a distance threshold T_d, which can be taken as 100 mm.
If no candidate plane exists, project the points of the plane fully onto its plane equation and add the plane to the structure map; if candidate planes exist, go to step 7.3;
Step 7.3: determine the merge plane P_merge by building a k-d tree over the candidate planes and searching it;
This specifically includes the following sub-steps:
Step 7.3.1: build a k-d tree over the candidate-plane point clouds, and record for each candidate point the index of the plane it belongs to;
Step 7.3.2: for every segment composing the plane P_i, take its two endpoints and its midpoint as query points;
Step 7.3.3: search the k-d tree for the nearest neighbour of each query point and record the plane that each nearest neighbour belongs to; finally, take the plane owning the most nearest neighbours as the merge plane P_merge;
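Steps 7.3.1-7.3.3 can be sketched as a nearest-neighbour vote; a brute-force search stands in for the k-d tree here, purely for illustration:

```python
from collections import Counter


def pick_merge_plane(query_points, map_points):
    """Steps 7.3.1-7.3.3 sketch: for each query point (segment endpoints
    and midpoints of the extracted plane P_i), find its nearest structure
    map point by brute force (the embodiment builds a k-d tree instead),
    record that point's plane, and return the plane with the most votes.
    map_points is a list of (point, plane_id) tuples."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    votes = Counter()
    for q in query_points:
        _, plane_id = min(map_points, key=lambda mp: d2(mp[0], q))
        votes[plane_id] += 1
    return votes.most_common(1)[0][0]
```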
Step 7.4: optimize the direction of P_i so that it is consistent with the direction of P_merge, then merge the two planes and grow the structure map.
Step 8: iterate steps 5-7 until all point cloud data have been processed; the final structure map is shown in Fig. 2, and the three-dimensional scene map superimposed with the motion trajectory is shown in Fig. 3.
In terms of hardware, the present invention fixes a laser scanning rangefinder onto a stable, high-precision hand-held rotating platform. The rotating platform can drive the laser rangefinder in two rotation modes, continuous rotation and nodding rotation, thereby acquiring three-dimensional environment data. For the acquired three-dimensional data, the present invention designs an accurate structure-map growing algorithm and a robust scan-line matching algorithm, realizing real-time localization in indoor environments and three-dimensional environment map construction.
The present invention provides an accurate structure-map growing algorithm based on structural information and a robust scan-line matching algorithm, realizing real-time pose tracking of the sensor and accurate mapping of the indoor three-dimensional environment. It provides an effective technical foundation for the development of indoor intelligent service robots, is fast, accurate, and highly practical, and can be applied in many scenarios such as three-dimensional modeling and service robotics.
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and must not therefore be regarded as limiting the scope of patent protection of the present invention. Under the inspiration of the present invention, those of ordinary skill in the art may also make substitutions or variations without departing from the scope protected by the claims of the present invention, all of which fall within the protection scope of the present invention. The claimed scope of the present invention is determined by the appended claims.

Claims (8)

1. An indoor real-time localization and three-dimensional map construction method based on a single-line laser, characterised by comprising the following steps:
Step 1: acquire the raw point cloud data of each two-dimensional scan line Scan, extract straight-line segments from each scan line, and group the scan lines that together cover the three-dimensional space into one set, denoted a Sweep;
Step 2: extract planes from the straight-line segment set obtained in step 1;
Step 3: for the first Sweep, use the extracted plane information to extract the principal directions and thin the point cloud;
Step 4: build the initial structure map;
Step 5: match a single Scan against the structure map and extract the point pairs to be optimized;
Step 6: optimize the distance d between corresponding points and compute the pose estimate of the Scan;
Step 7: iterate steps 5-6 until all Scans in one Sweep have been optimized, then add the point cloud into the structure map and grow the structure map;
Step 8: iterate steps 5-7 until all point cloud data have been processed.
2. The indoor real-time localization and three-dimensional map construction method based on a single-line laser according to claim 1, characterised in that the extraction of straight-line segments from the two-dimensional scan line Scan in step 1 is specifically implemented by the following sub-steps:
Step 1.1: starting from a point in the scan line, perform a least-squares line fit on the N scan points that follow it, and compute the fitting error;
The fitting error is expressed as:
$\varepsilon_l = \sum_{i=j}^{N} \left| y_i - (a x_i + b) \right|^2$,
where a, b are the line fitting parameters, (x_i, y_i) is the two-dimensional coordinate of the i-th scan point, j is the index of the starting point of the current segment in the current scan line, and N is a preset value;
Step 1.2: when the fitting error ε_l < T_l, an initial straight-line segment is considered found; otherwise, skip the current starting point and repeat the operation of step 1.1 from the next point until an initial segment is found; here T_l is the threshold on the line fitting error;
Step 1.3: after obtaining an initial straight-line segment, expand it; this specifically includes the following sub-steps:
Step 1.3.1: starting from the point P_i after the current initial segment, compute the distance d from P_i to the current segment, where i > j + N and j is the index of the segment's starting point in the scan line; if d < T_d, the point belongs to the current segment: add P_i to the segment to extend it, refit the segment, and update its fitting parameters; otherwise, the current segment cannot be expanded further and the expansion operation ends; here T_d is the maximum threshold on the point-to-line distance;
Step 1.3.2: if the current segment was successfully expanded, repeat the operation of step 1.3.1 until the segment can no longer be expanded;
Step 1.4: after the expansion of a segment is completed, add it to the candidate segment set, and repeat the operations of steps 1.1-1.3 starting from the point after the current segment, until no new segment can be extracted from the current scan line;
Step 1.5: merge and prune all candidate segments on the current scan line to obtain the final segments; this specifically includes the following sub-steps:
Step 1.5.1: starting from the i-th candidate segment, judge whether the adjacent (i+1)-th candidate segment can be merged with it, using the conditions:
$\theta(v_i, v_{i+1}) < T_\theta$ and $d(P_e^i, P_s^{i+1}) < T_d$,
where v_i is the direction vector of the i-th segment, θ(v_i, v_{i+1}) is the angle between the direction vectors of the two segments, T_θ is a threshold, P_e^i is the last point of the i-th candidate segment, P_s^{i+1} is the first point of the (i+1)-th candidate segment, d(P_e^i, P_s^{i+1}) is the distance between these two points, and T_d is a threshold;
When the angle between the direction vectors of two adjacent candidate segments is below the threshold and the distance between their end and start points is also below the threshold, merge the two candidate segments into one and recompute the fitting parameters of the merged segment;
Step 1.5.2: repeat the operation of step 1.5.1 until no candidate segments can be merged; compute the length of each merged segment, reject segments shorter than a preset threshold, take the remaining segments as the final segments extracted from the current scan line, and store them in order.
3. The indoor real-time localization and three-dimensional map construction method based on a single-line laser according to claim 1, characterised in that step 2 is specifically implemented by the following sub-steps:
Step 2.1: using the pose of each scan line, convert the two-dimensional segments of that scan line into three-dimensional segments;
Judge whether the scan line was acquired while stationary; if it was, its pose is obtained directly from the motor rotation angle reported by the scanning device; if it was not, the pose estimate of the scan line is obtained by the method of steps 5-6;
After the three-dimensional segment extraction has been completed for all scan lines in one Sweep, a set of three-dimensional segments distributed over the whole scanned environment is obtained, denoted L, whose element l_i^j is the j-th segment of the i-th scan line;
Step 2.2: choose a segment l_i^j in L; on the adjacent scan lines S_{i+1}, S_{i-1} of its scan line S_i, find segments coplanar with l_i^j;
The plane equation is written as
$x \cdot n = d$,
where x is a point on the plane, n is the normal vector of the plane, and d is a constant; fit l_i^j together with the segments on its adjacent scan lines to this plane equation by least squares; if the fitting error is below a preset value, an initial plane Pl is considered found; otherwise, change the chosen segment and repeat the operation of step 2.2 until an initial plane is found;
Step 2.3:Search belongs to the straightway of Pl around Pl, for the straight line section l around Pl, if its satisfaction is following During condition, then it is assumed that l belongs to plane Pl:
<mrow> <msup> <mi>&amp;Delta;d</mi> <mn>2</mn> </msup> <mo>=</mo> <mfrac> <mn>1</mn> <mi>n</mi> </mfrac> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>i</mi> <mo>=</mo> <mn>0</mn> </mrow> <mi>n</mi> </munderover> <msup> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>i</mi> </msub> <mo>&amp;CenterDot;</mo> <mi>n</mi> <mo>-</mo> <mi>d</mi> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>&lt;</mo> <msub> <mi>T</mi> <mrow> <mi>&amp;Delta;</mi> <mi>d</mi> </mrow> </msub> <mo>,</mo> </mrow>
where x_i are the points on straight-line segment l;
Add straight-line segment l to plane Pl, and recompute the fitting parameters of plane Pl;
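The membership test of step 2.3 — the mean squared point-to-plane distance Δd² against the threshold T_Δd — can be sketched as follows (the threshold value is illustrative):

```python
import numpy as np

def segment_in_plane(seg_points, n, d, t_delta=1e-3):
    """Step 2.3 condition: Delta_d^2 = (1/n) * sum((x_i . n - d)^2) < T_delta.

    seg_points are the points sampled along a candidate segment l; n, d are
    the current fit of plane Pl (n assumed unit-length).
    """
    seg_points = np.asarray(seg_points, dtype=float)
    delta_d2 = float(np.mean((seg_points @ n - d) ** 2))
    return delta_d2 < t_delta
```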
Step 2.4: Repeat the operation of step 2.3 to grow plane Pl continuously, until no straight-line segment around Pl can be merged into it; at that point a complete plane Pl has been obtained;
Step 2.5: Repeat the operations of steps 2.2-2.4 for the remaining straight-line segments in L, until no new planes can be extracted.
4. The indoor real-time positioning and three-dimensional map construction method based on a single-line laser according to claim 1, wherein the specific implementation of step 3 includes the following sub-steps:
Step 3.1: Initialize three orthogonal principal directions D = {d_1, d_2, d_3} = {{1,0,0}, {0,1,0}, {0,0,1}}; compute the normal vector n_p of each plane and normalize it to unit length;
Step 3.2: Cross each plane normal with the three principal directions respectively, n_i = n_p × d_i; choose the principal direction corresponding to the minimum ||n_i|| as the principal direction d_p matched with that plane;
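Step 3.2's matching rule — pick the principal direction whose cross product with the plane normal has the smallest norm, i.e. the direction most nearly parallel to the normal — can be sketched as:

```python
import numpy as np

def match_principal_direction(n_p, directions):
    """Return the index of the principal direction d_i minimizing
    ||n_p x d_i|| (step 3.2): the cross-product norm vanishes when
    n_p and d_i are parallel."""
    norms = [np.linalg.norm(np.cross(n_p, d)) for d in directions]
    return int(np.argmin(norms))
```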
Step 3.3: Using the matched normal-vector pairs, perform principal-direction optimization with Ceres, and then extract the optimal principal directions;
The principal-direction optimization with Ceres specifically includes the following sub-steps:
Step 3.3.1: Take the transformation parameters r = {rotateX, rotateY, rotateZ} of D = {d_1, d_2, d_3} as the optimization variables of Ceres;
Step 3.3.2: For each plane, compute the cross product of the matched vector pair, s = n_p × d_p, and take ||s|| as the residual term of the Ceres optimization;
Step 3.3.3: By continuously iterating the optimization of r, obtain the rotation parameters of the optimal principal directions once the error is less than a preset value; r is then converted into a 3×3 rotation matrix R, which yields the optimized principal directions;
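Steps 3.3.1-3.3.3 describe a small nonlinear least-squares problem. The patent solves it with Ceres (a C++ library); the sketch below reproduces the same formulation with SciPy — this substitution is an assumption of this rewrite, not the patent's implementation. Euler angles r are the optimization variables and the cross products s = n_p × d_p are the residuals:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_principal_directions(normals, matches, directions):
    """Optimize Euler angles r = (rotateX, rotateY, rotateZ) applied to the
    principal directions so that the residuals s = n_p x d_p vanish for
    every matched (normal, direction) pair; return the rotated directions.
    """
    directions = np.asarray(directions, dtype=float)

    def residuals(r):
        rot = Rotation.from_euler('xyz', r).as_matrix()
        res = []
        for n_p, j in zip(normals, matches):
            res.extend(np.cross(n_p, rot @ directions[j]))
        return res

    sol = least_squares(residuals, x0=np.zeros(3))
    R = Rotation.from_euler('xyz', sol.x).as_matrix()  # 3x3 rotation matrix
    return directions @ R.T  # each row is R @ d_i, the optimized direction
```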
Step 3.4: Uniformly thin out the point cloud of the structure map.
5. The indoor real-time positioning and three-dimensional map construction method based on a single-line laser according to claim 1, wherein the specific implementation of step 4 includes the following sub-steps:
Step 4.1: For each plane, determine the principal direction nearest to that plane, and compute the angle between the two vectors:
$$\theta = \arccos\left(\frac{n_p \cdot d_p}{|n_p|\,|d_p|}\right);$$
If |θ| ≤ a preset threshold, perform plane-direction optimization;
The plane-direction optimization specifically includes the following sub-steps:
Step 4.1.1: Translate the point cloud C into a coordinate system whose origin is the centroid, forming the point cloud C_g:
$$\begin{bmatrix} x_i' \\ y_i' \\ z_i' \end{bmatrix} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} - \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix}$$
Step 4.1.2: Compute the rotation matrix R from the rotation axis s = n_p × d_p and the rotation angle θ;
Step 4.1.3: Rotate the point cloud C_g as C'_g = R C_g;
Step 4.1.4: Transform the point cloud C'_g back to the world coordinate system:
$$\begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = \begin{bmatrix} x_i' \\ y_i' \\ z_i' \end{bmatrix} + \begin{bmatrix} x_g \\ y_g \\ z_g \end{bmatrix};$$
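Steps 4.1.1-4.1.4 can be sketched as one function. SciPy's rotation utilities stand in for the patent's own rotation-matrix computation, and n_p, d_p are assumed to be unit vectors:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def align_plane_cloud(points, n_p, d_p):
    """Rotate a planar point cloud about its centroid so that its normal
    n_p lines up with the matched principal direction d_p (steps 4.1.1-4.1.4).
    Both n_p and d_p are assumed unit-length."""
    points = np.asarray(points, dtype=float)
    g = points.mean(axis=0)                       # centroid (step 4.1.1)
    axis = np.cross(n_p, d_p)                     # rotation axis s = n_p x d_p
    sin_theta = np.linalg.norm(axis)
    if sin_theta < 1e-12:
        return points.copy()                      # already aligned
    theta = np.arctan2(sin_theta, float(np.dot(n_p, d_p)))
    R = Rotation.from_rotvec(axis / sin_theta * theta).as_matrix()  # 4.1.2
    return (points - g) @ R.T + g                 # steps 4.1.3-4.1.4
```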
Step 4.2: Judge whether the structure map has been initialized;
If the plane map has not yet been initialized, add the optimized plane directly to the structure map, to complete the initialization of the structure map;
If the structure map has been initialized, judge whether a plane similar to the optimized plane exists in the structure map;
If a similar plane exists, merge the two planes; the merging condition is that the plane normal vectors are equal, and the plane distance d < a preset threshold:
$$d = \frac{|Ax_p + By_p + Cz_p + D|}{\sqrt{A^2 + B^2 + C^2}};$$
where x_p, y_p, z_p is the centroid of one plane, and A, B, C, D are the plane-equation coefficients of the other plane;
The merging process includes the following sub-steps:
Step 4.2.1: Compute the merged centroid P_g:
$$P_g = \frac{P_{g1} \cdot size_1 + P_{g2} \cdot size_2}{size_1 + size_2};$$
where P_{g1}, P_{g2} are the centroids of the two planes, and size_1, size_2 are the numbers of points in the two planes;
Step 4.2.2: Translate each point cloud to the merged centroid along its plane normal direction:
$$\begin{cases} T_1 = (P_g - P_{g1}) \cdot n_1 \cdot n_1 \\ T_2 = (P_g - P_{g2}) \cdot n_2 \cdot n_2 \end{cases}$$
where T_1, T_2 are the translations of the two planar point clouds;
If no similar plane exists, project the points of the plane onto the fitted plane equation so that the plane is fully flattened, and add the plane to the structure map;
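The merging of steps 4.2.1-4.2.2 can be sketched as follows (normals n_1, n_2 are assumed unit-length):

```python
import numpy as np

def merge_planes(cloud1, cloud2, n1, n2):
    """Merge two similar planar clouds: compute the size-weighted common
    centroid (step 4.2.1), then translate each cloud toward it along its
    own unit normal (step 4.2.2)."""
    cloud1 = np.asarray(cloud1, dtype=float)
    cloud2 = np.asarray(cloud2, dtype=float)
    g1, g2 = cloud1.mean(axis=0), cloud2.mean(axis=0)
    s1, s2 = len(cloud1), len(cloud2)
    g = (g1 * s1 + g2 * s2) / (s1 + s2)          # merged centroid P_g
    t1 = np.dot(g - g1, n1) * n1                 # T_1 = (P_g - P_g1).n1 n1
    t2 = np.dot(g - g2, n2) * n2                 # T_2 = (P_g - P_g2).n2 n2
    return np.vstack([cloud1 + t1, cloud2 + t2])
```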
Step 4.3: Loop through steps 4.1-4.2 until no plane remains to be optimized; the structure map is then formed.
6. The indoor real-time positioning and three-dimensional map construction method based on a single-line laser according to claim 1, wherein the specific implementation of step 5 includes the following sub-steps:
Step 5.1: Determine the initial pose of the current Scan:
$$P_i^s = P_{i-1}^s - P_{i-1}^m + P_i^m;$$
where P_{i-1}^s, P_i^s denote the poses of the (i-1)-th Scan and the i-th Scan; P_{i-1}^m, P_i^m denote the corresponding motor poses of the (i-1)-th Scan and the i-th Scan;
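Step 5.1's prediction is a simple pose increment. A sketch, with poses reduced to translation vectors for brevity (the patent's poses also carry rotation):

```python
import numpy as np

def predict_scan_pose(p_prev_s, p_prev_m, p_m):
    """Step 5.1: P_i^s = P_{i-1}^s - P_{i-1}^m + P_i^m — the previous Scan
    pose plus the motor pose increment since that Scan."""
    return (np.asarray(p_prev_s, dtype=float)
            - np.asarray(p_prev_m, dtype=float)
            + np.asarray(p_m, dtype=float))
```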
Step 5.2: Extract matching points; down-sample the straight-line segments of the Scan, and search the structure map for the match of each down-sampled point to form matched point pairs.
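Step 5.2 can be sketched as a nearest-neighbor search over the structure map. SciPy's cKDTree and the down-sampling rate `step` are illustrative choices, not the patent's:

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_matches(scan_points, map_points, step=4):
    """Down-sample the scan points (every `step`-th point) and pair each
    with its nearest neighbor in the structure map, returning the matched
    point pairs as two aligned arrays."""
    scan_ds = np.asarray(scan_points, dtype=float)[::step]
    map_points = np.asarray(map_points, dtype=float)
    tree = cKDTree(map_points)
    _, idx = tree.query(scan_ds)       # index of nearest map point
    return scan_ds, map_points[idx]
```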
7. The indoor real-time positioning and three-dimensional map construction method based on a single-line laser according to claim 1, wherein the specific implementation process of step 6 is:
$$d = \sqrt{(x_l - x_m)^2 + (y_l - y_m)^2 + (z_l - z_m)^2};$$
where (x_l, y_l, z_l) is the three-dimensional coordinate of a scan point, and (x_m, y_m, z_m) is that of its matched map point; take the pose of each Scan as the optimization variable and the distances between corresponding points as the residual terms, and optimize with Ceres; when the difference dP between two successive optimized poses satisfies dP < T_dP, the optimization terminates, and the optimized pose of each scan line is obtained.
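The iterative pose optimization of step 6 can be sketched as follows. SciPy stands in for Ceres, and the pose is reduced to a translation for brevity, so this illustrates the residual formulation and the termination rule dP < T_dP rather than the full rigid-body estimate:

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_scan_pose(scan_pts, map_pts, t0=(0.0, 0.0, 0.0), t_dp=1e-6):
    """Take the Scan pose (here: a translation t) as the optimization
    variable and the corresponding-point offsets as residuals; iterate
    until the pose change dP drops below T_dP."""
    scan_pts = np.asarray(scan_pts, dtype=float)
    map_pts = np.asarray(map_pts, dtype=float)
    pose = np.asarray(t0, dtype=float)
    while True:
        sol = least_squares(lambda t: (scan_pts + t - map_pts).ravel(), pose)
        dp = np.linalg.norm(sol.x - pose)  # difference of two optimizations
        pose = sol.x
        if dp < t_dp:
            return pose
```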
8. The indoor real-time positioning and three-dimensional map construction method based on a single-line laser according to claim 1, wherein the specific implementation of step 7 includes the following sub-steps:
Step 7.1: Perform plane extraction on the new Sweep according to steps 1-2;
Step 7.2: For each extracted plane P_i, find candidate planes in the structure map; the candidate condition is that the angle between the two planes is less than the angle threshold T_θ, and the distance between the two planes is less than the distance threshold T_d:
$$\begin{cases} \theta = \arcsin\left(|n_{new} \times n_{struct}|\right) < T_\theta \\ d = \dfrac{|Ax_p + By_p + Cz_p + D|}{\sqrt{A^2 + B^2 + C^2}} < T_d \end{cases};$$
where x_p, y_p, z_p is the centroid of one plane, and A, B, C, D are the plane-equation coefficients of the other plane; n_{new} denotes the normal vector of the extracted plane, and n_{struct} denotes the normal vector of the candidate plane in the structure map;
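The candidate test of step 7.2 can be sketched as (threshold values are supplied by the caller; normals are assumed unit-length):

```python
import numpy as np

def is_candidate(n_new, n_struct, centroid, coeffs, t_theta, t_d):
    """Step 7.2 condition: the angle between the two unit normals (via the
    cross-product norm) must be below T_theta, and the distance from one
    plane's centroid to the other plane must be below T_d."""
    A, B, C, D = coeffs
    theta = np.arcsin(min(1.0, np.linalg.norm(np.cross(n_new, n_struct))))
    d = (abs(A * centroid[0] + B * centroid[1] + C * centroid[2] + D)
         / np.sqrt(A * A + B * B + C * C))
    return theta < t_theta and d < t_d
```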
If no candidate plane exists, project the points on this plane onto the fitted plane equation so that the plane is fully flattened, and add the plane to the structure map; if candidate planes exist, proceed to step 7.3;
Step 7.3: Determine the merging plane P_merge by building a k-d tree over the candidate planes and searching it;
This specifically includes the following sub-steps:
Step 7.3.1: Build a k-d tree over the candidate-plane point clouds, and establish an index from each candidate point to the plane it belongs to;
Step 7.3.2: For each line segment composing plane P_i, take its two endpoints and midpoint as query points;
Step 7.3.3: For each query point, search the k-d tree for its nearest point and record the plane to which that nearest point belongs; finally, take the plane receiving the most nearest points as the merging plane P_merge;
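Steps 7.3.1-7.3.3 — the nearest-neighbor vote that selects P_merge — can be sketched as (SciPy's cKDTree is an illustrative stand-in for the patent's k-d tree):

```python
import numpy as np
from collections import Counter
from scipy.spatial import cKDTree

def find_merge_plane(candidate_points, point_to_plane, query_points):
    """Each query point (segment endpoints and midpoints of P_i) votes for
    the plane of its nearest candidate point; the plane with the most votes
    becomes the merging plane P_merge."""
    tree = cKDTree(np.asarray(candidate_points, dtype=float))   # step 7.3.1
    _, idx = tree.query(np.asarray(query_points, dtype=float))  # step 7.3.3
    votes = Counter(point_to_plane[i] for i in idx)
    return votes.most_common(1)[0][0]
```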
Step 7.4: Optimize the direction of P_i to make it consistent with the direction of P_merge, merge the two planes, and grow the structure map.
CN201711092661.1A 2017-11-08 2017-11-08 Indoor real-time positioning and three-dimensional map construction method based on single line laser Active CN107917710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711092661.1A CN107917710B (en) 2017-11-08 2017-11-08 Indoor real-time positioning and three-dimensional map construction method based on single line laser

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711092661.1A CN107917710B (en) 2017-11-08 2017-11-08 Indoor real-time positioning and three-dimensional map construction method based on single line laser

Publications (2)

Publication Number Publication Date
CN107917710A true CN107917710A (en) 2018-04-17
CN107917710B CN107917710B (en) 2021-03-16

Family

ID=61895285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711092661.1A Active CN107917710B (en) 2017-11-08 2017-11-08 Indoor real-time positioning and three-dimensional map construction method based on single line laser

Country Status (1)

Country Link
CN (1) CN107917710B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109655805A (en) * 2019-01-25 2019-04-19 南京理工大学 A kind of laser radar positioning method being overlapped length estimate based on scan lines
CN110764096A (en) * 2019-09-24 2020-02-07 浙江华消科技有限公司 Three-dimensional map construction method for disaster area, robot and robot control method
CN110895833A (en) * 2018-09-13 2020-03-20 北京京东尚科信息技术有限公司 Method and device for three-dimensional modeling of indoor scene
CN111077495A (en) * 2019-12-10 2020-04-28 亿嘉和科技股份有限公司 Positioning recovery method based on three-dimensional laser
CN111308481A (en) * 2020-02-21 2020-06-19 深圳市银星智能科技股份有限公司 Laser positioning method and device and mobile robot
CN112418288A (en) * 2020-11-17 2021-02-26 武汉大学 GMS and motion detection-based dynamic vision SLAM method
CN112700464A (en) * 2021-01-15 2021-04-23 腾讯科技(深圳)有限公司 Map information processing method and device, electronic equipment and storage medium
CN113256722A (en) * 2021-06-21 2021-08-13 浙江华睿科技有限公司 Pose determination method, pose determination device and storage medium
CN114332232A (en) * 2022-03-11 2022-04-12 中国人民解放军国防科技大学 Smart phone indoor positioning method based on space point, line and surface feature hybrid modeling

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150020900A (en) * 2013-08-19 2015-02-27 부경대학교 산학협력단 System for location recognization and mapping using laser scanner, and method for location recognization using the same
CN104615880A (en) * 2015-01-31 2015-05-13 电子科技大学中山学院 Rapid ICP (inductively coupled plasma) method for point cloud matching of three-dimensional laser radar
CN105354883A (en) * 2015-11-25 2016-02-24 武汉大学 3ds Max fast and precise three-dimensional modeling method and system based on point cloud
CN106599108A (en) * 2016-11-30 2017-04-26 浙江大学 Method for constructing multi-mode environmental map in three-dimensional environment
CN107239076A (en) * 2017-06-28 2017-10-10 仲训昱 The AGV laser SLAM methods matched based on virtual scan with ranging


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI LI, ET AL.: "LASER-BASED SLAM WITH EFFICIENT OCCUPANCY LIKELIHOOD MAP LEARNING FOR DYNAMIC INDOOR SCENES", 《ISPRS ANNALS OF THE PHOTOGRAMMETRY REMOTE SENSING AND SPATIAL INFORMATION SCIENCES》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110895833A (en) * 2018-09-13 2020-03-20 北京京东尚科信息技术有限公司 Method and device for three-dimensional modeling of indoor scene
CN109655805A (en) * 2019-01-25 2019-04-19 南京理工大学 A kind of laser radar positioning method being overlapped length estimate based on scan lines
CN110764096A (en) * 2019-09-24 2020-02-07 浙江华消科技有限公司 Three-dimensional map construction method for disaster area, robot and robot control method
CN111077495A (en) * 2019-12-10 2020-04-28 亿嘉和科技股份有限公司 Positioning recovery method based on three-dimensional laser
CN111077495B (en) * 2019-12-10 2022-02-22 亿嘉和科技股份有限公司 Positioning recovery method based on three-dimensional laser
CN111308481A (en) * 2020-02-21 2020-06-19 深圳市银星智能科技股份有限公司 Laser positioning method and device and mobile robot
CN112418288A (en) * 2020-11-17 2021-02-26 武汉大学 GMS and motion detection-based dynamic vision SLAM method
CN112418288B (en) * 2020-11-17 2023-02-03 武汉大学 GMS and motion detection-based dynamic vision SLAM method
CN112700464A (en) * 2021-01-15 2021-04-23 腾讯科技(深圳)有限公司 Map information processing method and device, electronic equipment and storage medium
CN113256722A (en) * 2021-06-21 2021-08-13 浙江华睿科技有限公司 Pose determination method, pose determination device and storage medium
CN113256722B (en) * 2021-06-21 2021-10-15 浙江华睿科技股份有限公司 Pose determination method, pose determination device and storage medium
CN114332232A (en) * 2022-03-11 2022-04-12 中国人民解放军国防科技大学 Smart phone indoor positioning method based on space point, line and surface feature hybrid modeling

Also Published As

Publication number Publication date
CN107917710B (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN107917710A (en) A kind of positioning in real time of the interior based on single line laser and three-dimensional map construction method
Zhou et al. To learn or not to learn: Visual localization from essential matrices
CN113409410B (en) Multi-feature fusion IGV positioning and mapping method based on 3D laser radar
CN105843223B (en) A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
CN112070770B (en) High-precision three-dimensional map and two-dimensional grid map synchronous construction method
CN110223379A (en) Three-dimensional point cloud method for reconstructing based on laser radar
CN110146099B (en) Synchronous positioning and map construction method based on deep learning
Nüchter et al. Heuristic-based laser scan matching for outdoor 6D SLAM
US10402978B1 (en) Method for detecting pseudo-3D bounding box based on CNN capable of converting modes according to poses of objects using instance segmentation and device using the same
CN111523545B (en) Article searching method combined with depth information
CN111523610B (en) Article identification method for efficient labeling of samples
KR102309712B1 (en) Method for detecting pseudo-3d bounding box to be used for military purpose, smart phone or virtual driving based-on cnn capable of converting modes according to conditions of objects and device using the same
Moreau et al. Crossfire: Camera relocalization on self-supervised features from an implicit representation
CN114266821A (en) Online positioning method and device, terminal equipment and storage medium
Zhang et al. Lidar odometry and mapping based on two-stage feature extraction
Song et al. Robot autonomous sorting system for intelligent logistics
Mahajan et al. Intelligent image correlation using genetic algorithms for measuring surface deformations in the autonomous inspection of structures
CN111339342A (en) Three-dimensional model retrieval method based on angle ternary center loss
Nuchter et al. Extracting drivable surfaces in outdoor 6d slam
Chai et al. Fast vision-based object segmentation for natural landmark detection on Indoor Mobile Robot
Hernández et al. Visual SLAM with oriented landmarks and partial odometry
Kostusiak et al. On the efficiency of population-based optimization in finding best parameters for RGB-D visual odometry
Tazaki et al. Loop detection of outdoor environment using proximity points of 3D pointcloud
Chen et al. Multi-robot point cloud map fusion algorithm based on visual SLAM
Zhang et al. Encode: a deep point cloud odometry network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant