CN107563308A - SLAM closed loop detection methods based on particle swarm optimization algorithm - Google Patents
Abstract
The invention discloses a SLAM closed-loop detection method based on a particle swarm optimization algorithm, which mainly solves the problem that the offline training process of the bag-of-words (BOW) method in existing closed-loop detection techniques is relatively complex. The detection steps are: (1) judge whether the acquired current frame picture is a key frame picture; (2) compute the descriptor of the current key frame picture; (3) judge whether the current key frame picture is the first key frame picture; (4) create the frame-picture descriptor library; (5) expand the frame-picture descriptor library; (6) judge whether the number of key frame pictures in the descriptor library equals 50; (7) obtain the optimal frame picture; (8) match the current key frame picture against the optimal frame picture; (9) judge whether there are 25 brute-force matched pairs; (10) output the optimal frame picture that satisfies the closed-loop condition.
Description
Technical field
The invention belongs to the field of image processing, and further relates, within the field of robot vision, to a simultaneous localization and mapping (SLAM) closed-loop detection method based on a particle swarm optimization algorithm. The invention acquires pictures to be detected with a camera and uses a particle swarm optimization algorithm to find, among the acquired pictures, the frame picture most similar to the picture to be detected; it can be used to realize closed-loop detection.
Background technology
Robotics is currently regarded as one of the ten most promising fields of the 21st century. Simultaneous localization and mapping (SLAM) means that a robot in an unknown environment determines its own spatial position from sensor information and builds a model of the surrounding environment. In recent years, with the appearance of efficient and cheap RGB-D cameras, visual SLAM has become a research focus. However, when the robot pose is estimated only from the information of a single vision sensor, the error of each moment is passed on to the next, accumulated pose drift inevitably occurs, and a consistent map cannot be established. To solve the pose-drift problem, closed-loop detection is responsible for judging, while the robot explores an unknown environment, whether the scene the robot currently observes has already appeared during earlier exploration. A correctly detected closed loop adds an extra pose constraint, which is necessary for accurately correcting the robot's global pose; it can eliminate the accumulated error and yield a globally consistent trajectory and map.
Dong Haixia and Zeng Liansun, in their paper "Research on closed-loop detection algorithms in visual SLAM" (Microcomputer and its Applications, 2016, 35(5): 1-3, 7), propose a closed-loop detection algorithm that combines the bag-of-words technique with the visual-dictionary technique in computer vision, and uses the BRIEF+FAST keypoint method when processing images. The method discretizes the binary descriptor space of an image with a dictionary tree generated offline; the image database built from the training images mainly consists of a hierarchical bag of words, an inverted index, and a direct index. The inverted index and the direct index improve the efficiency of the algorithm, and the matched images are verified to ensure the reliability of the closed-loop detection result. The shortcoming that remains in this method is that offline training of the feature points occupies a large amount of memory and consumes a serious amount of time. The offline training process cannot satisfy a robot that performs SLAM over a wide area for a long time, because the visual words used for closed-loop detection are generated from scene images the robot has observed before, and cannot describe well the scene images the robot will observe in the future.
Patent document " a kind of robot closed loop detection method based on the deep learning " (application that Shandong University applies at it
Number:CN201710018162.1 publication numbers:CN106780631A a kind of robot closed loop inspection based on deep learning is disclosed in)
Survey method.This method obtains the RGB image and three-dimensional data of first frame environment by (1), by the RGB image and three-dimensional data of environment
The RGB+DEPTH four-way images that registration obtains environment are carried out, the RGB+DEPTH four-ways image is input to convolutional Neural
In network, the feature extraction result as first frame is exported using the intermediate layer of convolutional neural networks;(2) obtained using the method for (1)
Take the feature extraction result of continuous N frames;(3) nth frame and the feature extraction result of M frames are subjected to characteristic matching, according to feature
Matching result judges whether closed loop occurs.Weak point is existing for this method:Because the vision word of training generation is originally played a game
Portion's feature point description symbol is quantified, and is not accounted for data of the local scale consistency FAST characteristic points in scene image and is closed
Connection problem, causing robot that the high similar scene of diverse location can be mistakenly considered to Same Scene causes closed loop detection that error occurs.
Patent document " a kind of closed loop detection method of indoor scene identification " (application that Shanghai Communications University applies at it
Number:CN201710033034.4 publication numbers:CN106897666A a kind of closed loop detection method based on indoor scene) is disclosed.
Cluster generation visual vocabulary is carried out using description subvector of the K-means clustering algorithms to all images in this method, and utilized
Bag of words BOW (the Bag Of Words) vectors of visual vocabulary generation current scene image;Calculate the BOW vectors of current scene image
Judge current field with the similarity of bag of words BOW vectors and the uniformity of detection current scene image for having stored historic scenery image
Whether scape image occurs closed loop.The patent application propose method existing for weak point be:The closed loop detection of bag of words BOW methods
Effect is highly relied on word quantity, and the closed loop detection of high accuracy needs to safeguard the word folder that size is constantly incremental, detects
Journey is complex.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention proposes a SLAM closed-loop detection method based on a particle swarm optimization algorithm. The invention computes a descriptor for each current key frame picture acquired by a depth camera, and uses a particle swarm optimization algorithm to find, quickly and accurately in the descriptor library of acquired key frame pictures, the optimal frame picture similar to the current key frame picture, thereby realizing the SLAM closed-loop detection process.
The technical idea for realizing the invention is to solve the data-association problem of locally scale-invariant FAST feature points by extracting a descriptor of the current key frame, then to store the current key-frame descriptor in the frame-picture descriptor library, and to search it in real time to realize closed-loop detection.
To achieve the above object, the main steps of the present invention are as follows:
(1) Acquire the current frame picture with a depth RGB-D camera and judge whether the acquired current frame picture is a key frame picture; if so, perform step (2), otherwise discard the acquired current frame picture;
(2) Compute the descriptor of the current key frame picture:
(2a) extract 500 scale-invariant FAST feature points from the current key frame picture;
(2b) using the rotation-invariant BRIEF descriptor formula, compute the rotation-invariant BRIEF descriptor of each scale-invariant FAST feature point in the current key frame picture;
(2c) compute the descriptor of the current key frame picture according to the following formula:
l = Σ_{u=1}^{500} g(u)
where l denotes the descriptor of the current key frame picture, Σ denotes summation, u denotes the u-th scale-invariant FAST feature point in the current key frame picture, u = 1, 2, ..., 500, and g(u) denotes the rotation-invariant BRIEF descriptor of the u-th scale-invariant FAST feature point in the current key frame picture;
(3) Judge whether the current key frame picture is the first key frame picture; if so, perform step (4), otherwise perform step (5);
(4) Create the frame-picture descriptor library:
create in computer memory an empty set for storing frame pictures;
(5) Expand the frame-picture descriptor library:
add the descriptor of the previous key frame picture to the frame-picture descriptor library;
(6) Judge whether the number of key frame pictures in the frame-picture descriptor library is greater than 50; if so, perform step (7), otherwise perform step (1);
(7) Obtain the optimal frame picture:
(7a) compute, according to the following formula, the Manhattan distance between the descriptor of the current key frame picture and the descriptor of any one frame picture in the frame-picture descriptor library:
d(l, r) = Σ_{n=1}^{256} |l_n − r_n|
where d(l, r) denotes the Manhattan distance between the current key-frame descriptor l and the descriptor r of the j-th frame picture in the frame-picture descriptor library, the value of j is less than the number of frames in the library, n denotes the dimension of the descriptors l and r, n = 1, 2, ..., 256, | | denotes the absolute-value operation, l_n denotes the value of the n-th dimension of l, and r_n denotes the value of the n-th dimension of r;
(7b) set the number of particles in the swarm to 10, let the position of each particle represent the index j of a picture in the frame-picture descriptor library, and randomly assign each particle a different initial position and initial velocity;
(7c) using the fitness formula, compute the fitness value between the current key-frame descriptor and the descriptor corresponding to the position of each particle in the frame-picture descriptor library after initialization;
(7d) take the initial position of each particle as that particle's individual historical best position, and take the individual best position of the particle with the highest fitness value among all particles as the global best position of the swarm;
(7e) set the number of iterations of each particle to 30;
(7f) using the particle velocity formula, compute the velocity of each particle;
(7g) using the particle position formula, compute the position of each particle;
(7h) using the fitness formula, compute the fitness value between the current key-frame descriptor and the descriptor corresponding to the position of each particle in the current generation;
(7i) compare each particle's fitness value in the current iteration with the fitness value of its individual historical best position from the previous iteration, take the position with the larger fitness value as that particle's current individual historical best position, and take the individual best position of the particle with the highest current fitness value among all particles as the global best position of the swarm;
(7j) judge whether the current iteration count equals 30; if so, perform step (7k), otherwise add 1 to the iteration count and perform step (7f);
(7k) obtain the optimal frame picture at the global best position of the swarm;
(8) Obtain brute-force matched pairs:
using the brute-force matching method, match the current key frame picture against the optimal frame picture to obtain the brute-force matched pairs between the current key frame picture and the optimal frame picture;
(9) Judge whether there are 25 brute-force matched pairs; if so, perform step (10), otherwise perform step (1);
(10) The current key frame picture and the optimal frame picture match successfully and form a closed loop; output the matched optimal frame picture.
Compared with the prior art, the present invention has the following advantages:
First, because the invention introduces a descriptor of the current key frame picture that combines all of its scale-invariant FAST feature points, it overcomes the prior-art problem that the visual words generated by training quantize the local feature-point descriptors without considering the data association of the feature points within the scene image, which causes the robot to mistake highly similar scenes at different locations for the same scene and produces errors in closed-loop detection. The invention therefore has the advantage of tighter data association of the locally scale-invariant FAST feature points in the scene image.
Second, because the invention introduces a frame-picture descriptor library into which the descriptors of the acquired key frame pictures are stored, it overcomes the prior-art problems of large memory occupation and serious time consumption during offline feature-point training, giving the invention the advantage of real-time online operation.
Third, because the invention obtains the optimal frame picture with a particle swarm optimization algorithm in which each particle shares the global best position, the swarm can quickly and accurately find the frame most similar to the current key frame in the frame-picture descriptor library. This overcomes the prior-art problems that the closed-loop detection performance of the BOW method depends heavily on the number of words, that high-accuracy detection requires maintaining an ever-growing vocabulary, and that the detection process is relatively complex; the invention has the advantages of short closed-loop detection time and a comparatively simple detection process.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the time-cost figure of computing the descriptor of each of 200 frame pictures in the nyuv2 data set with the present invention;
Fig. 3 is the flow chart of the particle swarm optimization algorithm used by the present invention;
Fig. 4 is the global-best fitness convergence curve of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
With reference to Fig. 1, the steps realized by the present invention are described in further detail.
Step 1: acquire the current frame picture with a depth RGB-D camera, and judge whether the acquired current frame picture is a key frame picture; if so, perform step 2, otherwise discard the acquired current frame picture.
A key frame picture means the 1st frame picture acquired by the depth RGB-D camera, and every acquired frame picture whose index is an integer multiple of 10.
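As a minimal sketch, the key-frame rule above (the 1st frame plus every frame whose index is an integer multiple of 10) can be written as a simple predicate; the function name and 1-based frame indexing are illustrative assumptions, not part of the patent.

```python
def is_key_frame(frame_index: int) -> bool:
    """Key-frame rule: the 1st acquired frame, and every frame
    whose index is an integer multiple of 10."""
    return frame_index == 1 or frame_index % 10 == 0

# Indices 1..30 that would be kept as key frames: [1, 10, 20, 30]
kept = [i for i in range(1, 31) if is_key_frame(i)]
```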
Step 2: compute the descriptor of the current key frame picture.
Extract 500 scale-invariant FAST feature points from the current key frame picture.
The step of extracting 500 scale-invariant FAST feature points from the key frame picture is: take any pixel in the current key frame picture as a circle center, take the 16 pixels on the circle of radius 3 pixels and number them clockwise from 1 to 16, subtract the gray value of each of the 16 circle pixels from the gray value of the center pixel in turn, and take as scale-invariant FAST feature points those center pixels for which 9 of the differences are greater than 10.
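The segment test above can be sketched as follows. Note this is a literal rendering of the patent's brighter-than-ring test (the classical FAST detector also checks the darker case); the circle offsets and toy patch are illustrative assumptions.

```python
import numpy as np

# Offsets of the 16 pixels on a circle of radius 3, numbered
# clockwise as in the patent's description.
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, thresh=10, min_count=9):
    """Patent's segment test: subtract each circle pixel's gray value
    from the center's; the center is a FAST feature point when at
    least `min_count` differences exceed `thresh`."""
    center = int(img[y, x])
    diffs = [center - int(img[y + dy, x + dx]) for dx, dy in CIRCLE16]
    return sum(d > thresh for d in diffs) >= min_count

# Toy 7x7 patch: a bright center dot on a dark background is a corner.
patch = np.zeros((7, 7), dtype=np.uint8)
patch[3, 3] = 200
```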
Using the rotation-invariant BRIEF descriptor formula, compute the rotation-invariant BRIEF descriptor of each scale-invariant FAST feature point in the current key frame picture.
The rotation-invariant BRIEF descriptor formula is: the m-th dimension of f(p) equals τ(x_i, y_i), with m = i,
where f(p) denotes the rotation-invariant BRIEF descriptor of the p-th scale-invariant FAST feature point, i denotes the i-th pixel pair in the neighborhood centered on the scale-invariant FAST feature point p, i = 1, 2, ..., 256, m denotes the m-th dimension of the descriptor f(p), the values of m and i are equal, and τ(x_i, y_i) denotes the rotation-invariant BRIEF test on the i-th pixel pair x_i, y_i in the neighborhood centered on p: when the gray value of pixel x_i is greater than the gray value of pixel y_i, τ(x_i, y_i) = 1; otherwise τ(x_i, y_i) = 0.
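The per-bit test above can be sketched as follows; the pixel-pair sampling pattern is an illustrative assumption (the patent does not fix one), and the random image is only a stand-in for a real key frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def brief_descriptor(img, kp, pairs):
    """Per the patent: the m-th dimension of f(p) is tau(x_i, y_i)
    with m = i, where tau is 1 when gray(x_i) > gray(y_i), else 0.
    `pairs` holds 256 pixel-pair offsets (dx1, dy1, dx2, dy2)."""
    x, y = kp
    desc = np.zeros(256, dtype=np.uint8)
    for i, (dx1, dy1, dx2, dy2) in enumerate(pairs):
        desc[i] = 1 if img[y + dy1, x + dx1] > img[y + dy2, x + dx2] else 0
    return desc

# 256 random pair offsets inside a 9x9 patch (illustrative sampling).
pairs = rng.integers(-4, 5, size=(256, 4))
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
d = brief_descriptor(img, (16, 16), pairs)
```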
Compute the descriptor of the current key frame picture according to the following formula:
l = Σ_{u=1}^{500} g(u)
where l denotes the descriptor of the current key frame picture, Σ denotes summation, u denotes the u-th scale-invariant FAST feature point in the current key frame picture, u = 1, 2, ..., 500, and g(u) denotes the rotation-invariant BRIEF descriptor of the u-th scale-invariant FAST feature point in the current key frame picture.
Fig. 2 is the time-cost figure of computing the descriptor of each of the 200 frame pictures in the nyuv2 data set according to the frame-descriptor formula. The horizontal axis in Fig. 2 represents the 200 frame pictures of the nyuv2 data set, and the vertical axis represents the time needed to compute each frame descriptor. The curve in Fig. 2 shows how the time required to compute the descriptor varies over the frames. The curve demonstrates that the proposed method of computing the per-frame descriptor is fast: computing one frame descriptor takes about 2.1 ms.
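The formula l = Σ g(u) simply accumulates the 500 binary BRIEF descriptors element-wise into a 256-dimensional count vector; a sketch with random stand-in descriptors (an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the 500 rotation-invariant BRIEF descriptors g(u),
# each a 256-dimensional 0/1 vector (random here for illustration).
g = rng.integers(0, 2, size=(500, 256), dtype=np.int64)

# Frame descriptor l = sum_{u=1}^{500} g(u): a 256-d integer vector
# whose n-th entry counts how many feature points set bit n.
l = g.sum(axis=0)
```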
Step 3: judge whether the current key frame picture is the first key frame picture; if so, perform step 4, otherwise perform step 5.
Step 4: create the frame-picture descriptor library.
Create in computer memory an empty set for storing frame pictures.
Step 5: expand the frame-picture descriptor library.
Add the descriptor of the previous key frame picture to the frame-picture descriptor library.
Step 6: judge whether the number of key frame pictures in the frame-picture descriptor library is greater than 50; if so, perform step 7, otherwise perform step 1.
Step 7: obtain the optimal frame picture.
Compute, according to the following formula, the Manhattan distance between the descriptor of the current key frame picture and the descriptor of any one frame picture in the frame-picture descriptor library:
d(l, r) = Σ_{n=1}^{256} |l_n − r_n|
where d(l, r) denotes the Manhattan distance between the current key-frame descriptor l and the descriptor r of the j-th frame picture in the frame-picture descriptor library, the value of j is less than the number of frames in the library, n denotes the dimension of the descriptors l and r, n = 1, 2, ..., 256, | | denotes the absolute-value operation, l_n denotes the value of the n-th dimension of l, and r_n denotes the value of the n-th dimension of r.
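The Manhattan distance above can be sketched directly; the 3-dimensional toy vectors are an illustrative assumption (the patent's descriptors are 256-dimensional).

```python
import numpy as np

def manhattan(l_desc, r_desc):
    """d(l, r) = sum over dimensions of |l_n - r_n|."""
    return int(np.abs(np.asarray(l_desc) - np.asarray(r_desc)).sum())

# |3-1| + |0-4| + |2-2| = 2 + 4 + 0 = 6
d_toy = manhattan([3, 0, 2], [1, 4, 2])
```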
With reference to Fig. 3, this step is further explained.
The 1st step: set the number of particles in the swarm to 10, let the position of each particle represent the index j of a picture in the frame-picture descriptor library, and randomly assign each particle a different initial position and initial velocity.
The 2nd step: using the fitness formula, compute the fitness value between the current key-frame descriptor and the descriptor corresponding to the position of each particle in the frame-picture descriptor library after initialization.
In the fitness formula, h denotes the fitness value between the current key-frame descriptor l and the descriptor r at the particle's position (picture index j) in the frame-picture descriptor library.
The 3rd step: take the initial position of each particle as that particle's individual historical best position, and take the individual best position of the particle with the highest fitness value among all particles as the global best position of the swarm.
The 4th step: set the number of iterations of each particle to 30.
The 5th step: using the particle velocity formula, compute the velocity of each particle.
The velocity formula of a particle is as follows:
V_v(k) = ωV_v(k−1) + cσ_1(w_vb(k−1) − w_v(k−1)) + cσ_2(w_g(k−1) − w_v(k−1))
where V_v(k) denotes the velocity of the v-th particle at the k-th iteration, ω denotes the inertia weight, V_v(k−1) denotes the velocity of the v-th particle at the (k−1)-th iteration, c denotes the learning factor, σ_1 denotes a random number on the interval [0, 1], w_vb(k−1) denotes the individual historical best position of the v-th particle at the (k−1)-th iteration, w_v(k−1) denotes the position of the v-th particle at the (k−1)-th iteration, σ_2 denotes a random number on the interval [0, 1], and w_g(k−1) denotes the best position experienced by all particles up to the (k−1)-th iteration.
The 6th step: using the particle position formula, compute the position of each particle.
The position formula of a particle is as follows:
w_v(k) = w_v(k−1) + V_v(k)
where w_v(k) denotes the position of the v-th particle at the k-th iteration.
The 7th step: using the fitness formula, compute the fitness value between the current key-frame descriptor and the descriptor corresponding to the position of each particle in the current generation.
In the fitness formula, h denotes the fitness value between the current key-frame descriptor l and the descriptor r at the particle's position (picture index j) in the frame-picture descriptor library.
The 8th step: compare each particle's fitness value in the current iteration with the fitness value of its individual historical best position from the previous iteration, take the position with the larger fitness value as that particle's current individual historical best position, and take the individual best position of the particle with the highest current fitness value among all particles as the global best position of the swarm.
The 9th step: judge whether the current iteration count equals 30; if so, perform the 10th step; otherwise add 1 to the iteration count and perform the 5th step.
The 10th step: obtain the optimal frame picture at the global best position of the swarm.
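The 1st through 10th steps above can be sketched as follows. Because the fitness formula itself is elided in this text, the sketch assumes fitness is the negated Manhattan distance to the query descriptor (smaller distance = fitter); continuous particle positions are rounded and clipped to valid frame indices, and the values of ω and c are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_best_frame(query, library, n_particles=10, n_iter=30,
                   omega=0.7, c=1.5):
    """PSO search over frame indices, following steps (7b)-(7k)."""
    n_frames = len(library)

    def fitness(p):
        # Assumed fitness: negated Manhattan distance to the query.
        idx = int(round(float(np.clip(p, 0, n_frames - 1))))
        return -int(np.abs(query - library[idx]).sum())

    pos = rng.uniform(0, n_frames - 1, n_particles)    # the 1st step
    vel = rng.uniform(-1.0, 1.0, n_particles)
    pbest = pos.copy()                                 # the 3rd step
    pbest_fit = np.array([fitness(p) for p in pos])    # the 2nd step
    gbest = pbest[pbest_fit.argmax()]

    for _ in range(n_iter):                            # the 4th-9th steps
        s1 = rng.random(n_particles)
        s2 = rng.random(n_particles)
        vel = omega * vel + c * s1 * (pbest - pos) + c * s2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit                       # the 8th step
        pbest[better] = pos[better]
        pbest_fit[better] = fit[better]
        gbest = pbest[pbest_fit.argmax()]

    return int(round(float(np.clip(gbest, 0, n_frames - 1))))  # the 10th step

# Unimodal toy library: frame i's descriptor is farther from the
# all-zero query the farther i is from frame 42.
library = np.stack([np.full(256, abs(i - 42)) for i in range(60)])
query = np.zeros(256, dtype=int)
best = pso_best_frame(query, library)
```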
Step 8: obtain brute-force matched pairs.
Using the brute-force matching method, match the current key frame picture against the optimal frame picture to obtain the brute-force matched pairs between the current key frame picture and the optimal frame picture.
The concrete steps of the brute-force matching method are as follows:
The 1st step: obtain the rotation-invariant BRIEF descriptor of each scale-invariant FAST feature point in the current key frame picture and in the optimal frame picture.
The 2nd step: XOR each rotation-invariant BRIEF descriptor of the current key frame picture with each rotation-invariant BRIEF descriptor of the optimal frame picture.
The 3rd step: count the number of ones in the binary result of each XOR operation; when the number of ones is less than 3, take the corresponding pair of descriptors as a brute-force matched pair between the key frame picture and the optimal frame picture.
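The three brute-force steps above can be sketched on short toy descriptors (8 bits instead of 256; the 0/1 array layout is an illustrative assumption): XOR the descriptors, count the ones, and accept pairs with fewer than 3 ones.

```python
import numpy as np

def hamming(d1, d2):
    """XOR the two 0/1 descriptor arrays and count the ones."""
    return int(np.count_nonzero(np.bitwise_xor(d1, d2)))

def brute_force_pairs(descs_a, descs_b):
    """A pair matches when its XOR result contains fewer than 3 ones."""
    return [(i, j)
            for i, da in enumerate(descs_a)
            for j, db in enumerate(descs_b)
            if hamming(da, db) < 3]

# Toy 8-bit descriptors: only a[0] and b[0] differ in fewer than 3 bits.
a = np.array([[0, 1, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 1, 0, 0, 0, 0]], dtype=np.uint8)
b = np.array([[0, 1, 1, 0, 1, 0, 1, 1],
              [0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
matches = brute_force_pairs(a, b)
```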
Step 9: judge whether there are 25 brute-force matched pairs; if so, perform step 10, otherwise perform step 1.
Step 10: the current key frame picture and the optimal frame picture match successfully and form a closed loop; output the matched optimal frame picture.
The effect of the present invention can be further described by the following experiment.
1. Simulation experiment conditions:
The hardware test platform of the simulation experiment is: an i5-3317U CPU with a main frequency of 1.7 GHz and 4 GB of memory. The software platform is: the Windows 7 operating system, Matlab R2015b, and Visual Studio 2013. Data sets: the nyuv2 data set of 200 frame pictures, and 10000 laboratory frame pictures recorded by the camera.
2. Simulation experiment content and results:
The purpose of the simulation experiment is to realize closed-loop detection by searching the frame-picture descriptor library in real time with the current key-frame descriptor.
Fig. 4 is the convergence curve of finding the optimal frame picture with the particle swarm optimization algorithm after the laboratory pictures were recorded with the depth RGB-D camera. The horizontal axis in Fig. 4 represents the number of particle iterations, and the vertical axis represents the global best fitness value of the swarm. The curve in Fig. 4 shows how the global best fitness value changes as the number of iterations increases. The curve demonstrates that with the proposed particle swarm optimization algorithm, the frame picture with the highest global best fitness value can be found after the particles converge within 30 iterations, which satisfies the real-time requirement of closed-loop detection.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and the concrete implementation of the present invention shall not be regarded as confined to these descriptions. For ordinary technical personnel in the technical field of the present invention, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them shall be regarded as belonging to the protection scope of the present invention.
Claims (8)
1. A SLAM closed-loop detection method based on a particle swarm optimization algorithm, comprising the following steps:
(1) acquiring the current frame picture with a depth RGB-D camera, and judging whether the acquired current frame picture is a key frame picture; if so, performing step (2), otherwise discarding the acquired current frame picture;
(2) computing the descriptor of the current key frame picture:
(2a) extracting 500 scale-invariant FAST feature points from the current key frame picture;
(2b) using the rotation-invariant BRIEF descriptor formula, computing the rotation-invariant BRIEF descriptor of each scale-invariant FAST feature point in the current key frame picture;
(2c) computing the descriptor of the current key frame picture according to the following formula:
l = Σ_{u=1}^{500} g(u)
where l denotes the descriptor of the current key frame picture, Σ denotes summation, u denotes the u-th scale-invariant FAST feature point in the current key frame picture, u = 1, 2, ..., 500, and g(u) denotes the rotation-invariant BRIEF descriptor of the u-th scale-invariant FAST feature point in the current key frame picture;
(3) judging whether the current key frame picture is the first key frame picture; if so, performing step (4), otherwise performing step (5);
(4) creating the frame-picture descriptor library:
creating in computer memory an empty set for storing frame pictures;
(5) expanding the frame-picture descriptor library:
adding the descriptor of the previous key frame picture to the frame-picture descriptor library;
(6) judging whether the number of key frame pictures in the frame-picture descriptor library is greater than 50; if so, performing step (7), otherwise performing step (1);
(7) obtaining the optimal frame picture:
(7a) computing, according to the following formula, the Manhattan distance between the descriptor of the current key frame picture and the descriptor of any one frame picture in the frame-picture descriptor library:
d(l, r) = Σ_{n=1}^{256} |l_n − r_n|
Wherein, d (l, r) represents that current key frame picture describes sub- l and frame picture and describe jth frame picture in word bank to describe between sub- r
Manhatton distance, j value is less than frame picture and describes frame number in word bank, and n represents current key frame picture description and frame
Picture describes the dimension that jth frame picture in word bank describes sub- r, n=1,2 ..., 256, | | expression takes absolute value operation, lnTable
Show that current key frame picture describes the value that sub- l n-th is tieed up, rnRepresent that frame picture describes jth frame picture in word bank and describe sub- r n-th to tie up
Value;
(7b) setting the number of particles in the swarm to 10, where the position of each particle represents a frame index j in the frame picture descriptor library, and randomly assigning each particle a different initial position and initial velocity;
(7c) using the fitness formula, calculating the fitness value between the descriptor of the current key frame picture and the descriptor stored at each initialized particle's position in the frame picture descriptor library;
(7d) taking the initial position of each particle as that particle's individual history-best position, and taking the individual best position of the particle with the highest fitness value among all particles as the global best position of the swarm;
(7e) setting the number of iterations of each particle to 30;
(7f) using the particle velocity formula, calculating the velocity of each particle;
(7g) using the particle position formula, calculating the position of each particle;
(7h) using the fitness formula, calculating the fitness value between the descriptor of the current key frame picture and the descriptor stored at each particle's position in the frame picture descriptor library for the current generation;
(7i) comparing, for each particle, the fitness value of the current iteration with the fitness value of its individual history-best position from the previous iteration, taking the position with the larger fitness value as that particle's current individual history-best position, and taking the individual best position of the particle with the highest current fitness value among all particles as the global best position of the swarm;
(7j) judging whether the current iteration count equals 30; if so, performing step (7k); otherwise, adding 1 to the current iteration count and performing step (7f);
(7k) obtaining the optimal frame picture at the global best position of the swarm;
(8) obtaining brute-force matching pairs:
using a brute-force matching method, matching the current key frame picture with the optimal frame picture to obtain brute-force matching pairs between the current key frame picture and the optimal frame picture;
(9) judging whether the number of brute-force matching pairs equals 25; if so, performing step (10); otherwise, performing step (1);
(10) the current key frame picture and the optimal frame picture match successfully and form a closed loop; outputting the matched optimal frame picture.
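The particle-swarm search of steps (7a)–(7k) can be sketched in Python as follows. The swarm size (10), iteration count (30), and fitness h = 1/(1 + d(l, r)) come from the claims; the parameter values ω = 0.5 and c = 1.5, and the rounding of continuous particle positions to frame indices, are illustrative assumptions, not the patent's implementation.

```python
import random

def manhattan(l, r):
    """Manhattan distance between two descriptors (step 7a)."""
    return sum(abs(a - b) for a, b in zip(l, r))

def fitness(l, r):
    """Fitness of a particle position, h = 1 / (1 + d(l, r)) (step 7c)."""
    return 1.0 / (1.0 + manhattan(l, r))

def pso_best_frame(query, library, n_particles=10, n_iters=30,
                   omega=0.5, c=1.5):
    """Search the frame descriptor library for the frame whose descriptor
    best matches `query`, following steps (7b)-(7k).  Particle positions
    are continuous indices into `library`, rounded when evaluated."""
    hi = len(library) - 1
    pos = [random.uniform(0, hi) for _ in range(n_particles)]
    vel = [random.uniform(-1, 1) for _ in range(n_particles)]
    fit = [fitness(query, library[int(round(p))]) for p in pos]
    pbest = pos[:]                       # individual history-best positions
    pbest_fit = fit[:]
    g = max(range(n_particles), key=lambda v: fit[v])
    gbest, gbest_fit = pos[g], fit[g]    # global best position of the swarm
    for _ in range(n_iters):
        for v in range(n_particles):
            s1, s2 = random.random(), random.random()
            vel[v] = (omega * vel[v]
                      + c * s1 * (pbest[v] - pos[v])
                      + c * s2 * (gbest - pos[v]))
            pos[v] = min(max(pos[v] + vel[v], 0), hi)  # keep index in range
            f = fitness(query, library[int(round(pos[v]))])
            if f > pbest_fit[v]:
                pbest[v], pbest_fit[v] = pos[v], f
            if f > gbest_fit:
                gbest, gbest_fit = pos[v], f
    return int(round(gbest))             # index of the optimal frame picture
```

Because the fitness landscape over frame indices is discrete and non-smooth, a real system would likely clamp and re-seed stagnant particles; the sketch omits such refinements.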
2. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the key frame pictures described in step (1) are the 1st frame picture acquired by the depth RGB-D camera and every acquired frame picture whose index is an integer multiple of 10.
3. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the step of extracting 500 scale-invariant FAST feature points from the key frame picture described in step (2a) is: taking any pixel in the current key frame picture as the center of a circle of radius 3 pixels, numbering the 16 pixels on that circumference clockwise from 1 to 16, subtracting the gray value of the center pixel from the gray value of each of the 16 circumference pixels in turn, and taking as a scale-invariant FAST feature point any center pixel for which 9 of the differences are greater than 10.
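The segment test of claim 3 can be sketched as follows. The circle offsets and the use of absolute differences are assumptions (the claim counts 9 differences greater than 10 without specifying sign), and a standard FAST detector additionally requires the passing pixels to be contiguous on the circle, which this sketch omits.

```python
# Offsets of the 16 pixels on a radius-3 Bresenham circle, numbered
# clockwise from the top, as used by the FAST segment test.
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1),
            (2, 2), (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1),
            (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, threshold=10, required=9):
    """Claim-3 test: (x, y) is a FAST feature point when at least
    `required` of the 16 circle pixels differ from the center gray
    value by more than `threshold`.  `img` is a 2-D list of grays."""
    h, w = len(img), len(img[0])
    if not (3 <= x < w - 3 and 3 <= y < h - 3):
        return False                      # circle would leave the image
    c = img[y][x]
    diffs = sum(1 for dx, dy in CIRCLE16
                if abs(img[y + dy][x + dx] - c) > threshold)
    return diffs >= required
```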
4. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the rotation-invariant BRIEF descriptor formula described in step (2b) is as follows:
f(p) = Σ_{1 ≤ i ≤ 256} 2^(m-1) · τ(x_i, y_i)
wherein f(p) denotes the rotation-invariant BRIEF descriptor of the p-th scale-invariant FAST feature point; i indexes the i-th pixel pair in the neighborhood centered on the scale-invariant FAST feature point p, i = 1, 2, ..., 256; m denotes the m-th dimension of the rotation-invariant BRIEF descriptor of the p-th scale-invariant FAST feature point, the value of m being equal to the value of i; and τ(x_i, y_i) denotes the rotation-invariant BRIEF binary test on the i-th pixel pair x_i, y_i in the neighborhood centered on p: when the gray value of pixel x_i of the i-th pair is greater than the gray value of pixel y_i, τ(x_i, y_i) = 1; otherwise, τ(x_i, y_i) = 0.
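A minimal sketch of the claim-4 descriptor, assuming a precomputed sampling pattern of pixel-pair offsets (the patent's actual 256-pair pattern and its rotation compensation are not reproduced here). Each binary test τ fills one bit of the descriptor, matching f(p) = Σ 2^(m-1) · τ(x_i, y_i):

```python
def brief_descriptor(img, px, py, pairs):
    """Compute a BRIEF-style descriptor for the feature point (px, py).
    `pairs` is a list of ((dx1, dy1), (dx2, dy2)) offsets around the
    point; test i sets bit i (the claim's 2^(m-1) term, 1-based) when
    the first pixel of the pair is brighter than the second."""
    f = 0
    for i, ((dx1, dy1), (dx2, dy2)) in enumerate(pairs):
        tau = 1 if img[py + dy1][px + dx1] > img[py + dy2][px + dx2] else 0
        f |= tau << i          # bit m-1 in the claim's 1-based numbering
    return f
```

With 256 pairs the result is a 256-bit integer, which is convenient for the XOR-based matching of claim 8.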
5. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the fitness formula described in steps (7c) and (7h) is as follows:
h = 1 / (1 + d(l, r))
wherein h denotes the fitness value between descriptor l of the current key frame picture and descriptor r of the j-th frame picture at the particle's position in the frame picture descriptor library.
6. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the particle velocity formula described in step (7f) is as follows:
V_v(k) = ω·V_v(k-1) + c·σ_1·(w_vb(k-1) - w_v(k-1)) + c·σ_2·(w_g(k-1) - w_v(k-1))
wherein V_v(k) denotes the velocity of the v-th particle at the k-th iteration; ω denotes the inertia weight; V_v(k-1) denotes the velocity of the v-th particle at the (k-1)-th iteration; c denotes the learning factor; σ_1 denotes a random number on the interval [0, 1]; w_vb(k-1) denotes the individual history-best position of the v-th particle at the (k-1)-th iteration; w_v(k-1) denotes the position of the v-th particle at the (k-1)-th iteration; σ_2 denotes a random number on the interval [0, 1]; and w_g(k-1) denotes the best position experienced by all particles up to the (k-1)-th iteration.
7. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the particle position formula described in step (7g) is as follows:
w_v(k) = w_v(k-1) + 1·V_v(k)
wherein w_v(k) denotes the position of the v-th particle at the k-th iteration.
8. The SLAM closed-loop detection method based on the particle swarm optimization algorithm according to claim 1, wherein the brute-force matching method described in step (8) comprises the following steps:
Step 1: obtaining the rotation-invariant BRIEF descriptor of every scale-invariant FAST feature point of the current key frame picture and of the optimal frame picture;
Step 2: performing an XOR operation between the rotation-invariant BRIEF descriptor of each scale-invariant FAST feature point of the current key frame picture and that of each scale-invariant FAST feature point of the optimal frame picture;
Step 3: counting the number of 1s in the binary result of each XOR operation, and taking the feature point pair of each XOR operation whose count of 1s is less than 3 as a brute-force matching pair between the key frame picture and the optimal frame picture.
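The three steps of claim 8 amount to thresholded Hamming-distance matching. A sketch with descriptors represented as Python integers (standing in for the patent's 256-bit strings), keeping pairs whose XOR popcount is below 3:

```python
def brute_force_matches(desc_a, desc_b, max_ones=3):
    """Claim-8 brute-force matching: XOR every descriptor of frame A
    against every descriptor of frame B and keep index pairs whose XOR
    result contains fewer than `max_ones` set bits (Hamming distance)."""
    matches = []
    for i, a in enumerate(desc_a):
        for j, b in enumerate(desc_b):
            if bin(a ^ b).count("1") < max_ones:   # popcount of XOR
                matches.append((i, j))
    return matches
```

Closed-loop acceptance per step (9) would then check `len(matches) == 25`; a practical system would more likely use a threshold plus mutual-best filtering, but the sketch follows the claim literally.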
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710685453.6A CN107563308B (en) | 2017-08-11 | 2017-08-11 | SLAM closed loop detection method based on particle swarm optimization algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107563308A true CN107563308A (en) | 2018-01-09 |
CN107563308B CN107563308B (en) | 2020-01-31 |
Family
ID=60974045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710685453.6A Active CN107563308B (en) | 2017-08-11 | 2017-08-11 | SLAM closed loop detection method based on particle swarm optimization algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107563308B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1684143B1 (en) * | 2005-01-25 | 2009-06-17 | Samsung Electronics Co., Ltd. | Apparatus and method for estimating location of mobile body and generating map of mobile body environment using upper image of mobile body environment, and computer readable recording medium storing computer program controlling the apparatus |
CN102426606A (en) * | 2011-11-11 | 2012-04-25 | 南京财经大学 | Method for retrieving multi-feature image based on particle swarm algorithm |
CN104331911A (en) * | 2014-11-21 | 2015-02-04 | 大连大学 | Improved second-order oscillating particle swarm optimization based key frame extraction method |
CN106568432A (en) * | 2016-10-20 | 2017-04-19 | 上海物景智能科技有限公司 | Moving robot primary pose obtaining method and system |
Non-Patent Citations (4)
Title |
---|
余杰: "Research on SLAM Based on an ORB Key-Frame Closed-Loop Detection Algorithm", China Master's Theses Full-text Database, Information Science and Technology Series *
施尚杰: "Kinect-Based 3D Simultaneous Localization and Mapping for Multiple Mobile Robots", China Master's Theses Full-text Database, Information Science and Technology Series *
王开宇: "Research on a Particle Swarm Optimized FastSLAM Algorithm for Panoramic-Vision Robots", Technology Innovation and Application *
辛冠希: "Research on Simultaneous Localization and Mapping Based on RGB-D Cameras", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109117851A (en) * | 2018-07-06 | 2019-01-01 | 航天星图科技(北京)有限公司 | A kind of video image matching process based on lattice statistical constraint |
CN109186616A (en) * | 2018-09-20 | 2019-01-11 | 禾多科技(北京)有限公司 | Lane line assisted location method based on high-precision map and scene search |
CN109902619A (en) * | 2019-02-26 | 2019-06-18 | 上海大学 | Image closed loop detection method and system |
CN109887033A (en) * | 2019-03-01 | 2019-06-14 | 北京智行者科技有限公司 | Localization method and device |
CN109887033B (en) * | 2019-03-01 | 2021-03-19 | 北京智行者科技有限公司 | Positioning method and device |
CN111551184A (en) * | 2020-03-27 | 2020-08-18 | 上海大学 | Map optimization method and system for SLAM of mobile robot |
CN111551184B (en) * | 2020-03-27 | 2021-11-26 | 上海大学 | Map optimization method and system for SLAM of mobile robot |
CN111899905A (en) * | 2020-08-05 | 2020-11-06 | 哈尔滨工程大学 | Fault diagnosis method and system based on nuclear power device |
CN112086010A (en) * | 2020-09-03 | 2020-12-15 | 中国第一汽车股份有限公司 | Map generation method, map generation device, map generation equipment and storage medium |
CN112861988A (en) * | 2021-03-04 | 2021-05-28 | 西南科技大学 | Feature matching method based on attention-seeking neural network |
CN115100365A (en) * | 2022-08-25 | 2022-09-23 | 国网天津市电力公司高压分公司 | Camera optimal baseline acquisition method based on particle swarm optimization |
CN115100365B (en) * | 2022-08-25 | 2023-01-20 | 国网天津市电力公司高压分公司 | Camera optimal baseline acquisition method based on particle swarm optimization |
Also Published As
Publication number | Publication date |
---|---|
CN107563308B (en) | 2020-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107563308A (en) | SLAM closed loop detection methods based on particle swarm optimization algorithm | |
CN107506702B (en) | Multi-angle-based face recognition model training and testing system and method | |
CN110135249B (en) | Human behavior identification method based on time attention mechanism and LSTM (least Square TM) | |
CN111156984A (en) | Monocular vision inertia SLAM method oriented to dynamic scene | |
CN109341703B (en) | Visual SLAM algorithm adopting CNNs characteristic detection in full period | |
CN107563286A (en) | A kind of dynamic gesture identification method based on Kinect depth information | |
CN111709311A (en) | Pedestrian re-identification method based on multi-scale convolution feature fusion | |
CN110008913A (en) | Pedestrian re-identification method based on fusion of attitude estimation and viewpoint mechanism | |
CN103984936A (en) | Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition | |
CN111161315A (en) | Multi-target tracking method and system based on graph neural network | |
Sun et al. | R4 Det: Refined single-stage detector with feature recursion and refinement for rotating object detection in aerial images | |
CN113779643B (en) | Signature handwriting recognition system and method based on pre-training technology and storage medium | |
CN114662497A (en) | False news detection method based on cooperative neural network | |
CN110533661A (en) | Adaptive real-time closed-loop detection method based on characteristics of image cascade | |
Xie et al. | Hierarchical forest based fast online loop closure for low-latency consistent visual-inertial SLAM | |
Abdullah et al. | Vehicle counting using deep learning models: a comparative study | |
Singh et al. | Simultaneous tracking and action recognition for single actor human actions | |
Guo et al. | Optimal path planning in field based on traversability prediction for mobile robot | |
CN116310416A (en) | Deformable object similarity detection method based on Radon transformation and electronic equipment | |
CN113435398B (en) | Signature feature identification method, system, equipment and storage medium based on mask pre-training model | |
CN106558065A (en) | The real-time vision tracking to target is realized based on color of image and texture analysiss | |
CN115235505A (en) | Visual odometer method based on nonlinear optimization | |
CN112102399B (en) | Visual mileage calculation method based on generative antagonistic network | |
CN114373091A (en) | Gait recognition method based on deep learning fusion SVM | |
CN111832548A (en) | Train positioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||