CN103186896A - Spatial-invariant secondary detection method and device for spatial map construction - Google Patents
Spatial-invariant secondary detection method and device for spatial map construction
- Publication number
- CN103186896A CN103186896A CN2011104442178A CN201110444217A CN103186896A CN 103186896 A CN103186896 A CN 103186896A CN 2011104442178 A CN2011104442178 A CN 2011104442178A CN 201110444217 A CN201110444217 A CN 201110444217A CN 103186896 A CN103186896 A CN 103186896A
- Authority
- CN
- China
- Prior art keywords
- spatial invariant
- space
- dij
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a secondary detection method and device for spatial invariants used in spatial map construction. The method comprises: acquiring n preliminarily detected spatial invariants from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose; calculating the distances between corresponding spatial invariants in the two observations, where the distance between the i-th and j-th spatial invariants is dij in the first observation and dij' in the second observation, with i and j both less than or equal to n and j ≠ i; voting on a specific number of spatial invariants according to a predetermined voting strategy; and determining, from the n spatial invariants, N correct spatial invariants according to the voting result and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, where N is a natural number less than or equal to n.
Description
Technical field
The present invention relates to a spatial-invariant secondary detection method and device for spatial map construction, and to a fast localization estimation method and device for a robot. More particularly, it relates to a spatial-invariant secondary detection method and device that reduce the amount of computation and increase running speed, and to a robot fast localization estimation method and device that use this secondary detection method and device.
Background art
Three-dimensional localization and map construction are among the key enabling technologies for a mobile robot performing navigation, path planning, object recognition and semantic understanding in an unknown environment. By matching three-dimensional observation data taken at different observation positions, the robot can compute its position during continuous motion. The core of matching observation data and computing position information is to accurately find the spatial invariants in the two observation results, and then to compute the change of position between the two observation points with the spatial invariants as reference points, thereby completing the localization estimation of the robot.
Figures 1A and 1B are schematic diagrams of pose change estimation based on two sets of observation data. Figure 1A shows the observation data of the robot at pose (0, 0, 0, 0, 0, 0). Figure 1B shows the observation data after the robot has moved to pose (x, y, z, roll, pitch, yaw). The six boxes in Figures 1A and 1B indicate six groups of spatial invariants (six pairs of three-dimensional points) detected in the two sets of observation data. Note that a so-called spatial invariant refers to the same spatial point seen in two observations carried out by the observation subject (here, the robot) at different poses. Because of the robot's motion, the positions of the six invariant points relative to the robot change between the two observations; the concrete values of (x, y, z, roll, pitch, yaw) can be computed from this change, completing the pose estimation of the robot. Although Figures 1A and 1B are described with six groups of spatial invariants as an example, in theory only three non-collinear spatial invariant points are needed to solve for the concrete values of (x, y, z, roll, pitch, yaw).
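Purely as an illustration of the pose computation referred to above (and not part of the claimed method), the rigid transformation between two observations can be recovered from three or more non-collinear matched 3D points, for example with an SVD-based least-squares fit. The following is a minimal sketch assuming NumPy; all function and variable names are illustrative.

```python
import numpy as np

def estimate_rigid_transform(p_first, p_second):
    """Estimate rotation R and translation t such that p_second ≈ R @ p_first + t.

    p_first, p_second: (k, 3) arrays of matched 3D points, k >= 3, not collinear.
    """
    c1 = p_first.mean(axis=0)           # centroid of the first observation
    c2 = p_second.mean(axis=0)          # centroid of the second observation
    q1 = p_first - c1                   # centred coordinates
    q2 = p_second - c2
    h = q1.T @ q2                       # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c2 - r @ c1
    return r, t                         # roll/pitch/yaw can be extracted from r
```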
This shows that accurately detecting the spatial invariants (also called feature points) in the two sets of observation data is the key to the problem. Many existing research results can detect spatial invariants from two-dimensional grayscale images or three-dimensional point-cloud observation data with reasonable efficiency (for example, by grayscale-value matching), but erroneous detections and mismatches still occur.
Figures 2A and 2B are schematic diagrams showing how incorrect spatial invariants lead to erroneous pose estimation. In Figures 2A and 2B, the incorrect matching of the fourth and sixth groups of spatial invariants causes an erroneous estimate of the robot's pose. Existing solutions therefore generally post-process the preliminarily detected spatial invariants, performing a secondary detection of the spatial invariants to filter out the erroneous data.
Existing secondary detection of spatial invariants uses the RANSAC (RANdom SAmple Consensus) method. This method selects spatial invariants through repeated random trials. In each trial, three groups of spatial invariants (A1, A2, A3) are picked at random, the pose information is computed from these three invariants, and with this pose information as the reference, the other spatial invariants are judged to be correct or not; all spatial invariants that keep the spatial-invariance property consistently with (A1, A2, A3) are collected into a set M1 = (A1, A2, A3, A4, ..., An), and the error of the resulting set M1 is computed. After repeated random trials, several candidate sets of spatial invariants (M1, M2, M3, ..., Mn) are obtained, and the set with the smallest error is chosen as the final set of spatial invariants with the erroneous data filtered out.
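For comparison, the prior-art RANSAC-style procedure described above can be sketched roughly as follows. This is an illustrative sketch only: the trial count, inlier tolerance and error measure are assumptions, and estimate_rigid_transform is the helper sketched in the previous example.

```python
import random
import numpy as np

def ransac_filter(p_first, p_second, trials=100, inlier_tol=0.05):
    """Prior-art style filtering: repeated random three-point trials; keep the
    consensus set whose fitted pose has the smallest mean residual error."""
    n = len(p_first)
    best_set, best_err = [], float("inf")
    for _ in range(trials):
        sample = random.sample(range(n), 3)       # three random invariant groups
        # estimate_rigid_transform is the SVD-based helper sketched above
        r, t = estimate_rigid_transform(p_first[sample], p_second[sample])
        residuals = np.linalg.norm(p_first @ r.T + t - p_second, axis=1)
        inliers = np.flatnonzero(residuals < inlier_tol)
        err = residuals[inliers].mean() if len(inliers) >= 3 else float("inf")
        if err < best_err:
            best_set, best_err = inliers, err
    return best_set
```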
This method must compute the robot's pose change repeatedly, once per random trial, so the amount of computation is large and the running speed is low. In addition, when the number of trials is small, the best set of spatial invariants may not be found.
Summary of the invention
In view of the above, the present invention proposes a spatial-invariant secondary detection method and device based on a new voting strategy over the spatial invariants, and a robot fast localization estimation method and device that use this secondary detection method and device. They do not need to compute the robot's pose change repeatedly, so the amount of computation is small and the running speed is high; in tests, the speed improves by one to two orders of magnitude compared with the RANSAC method.
According to one aspect of an embodiment of the invention, a spatial-invariant secondary detection method for spatial map construction is provided. The method comprises: acquiring n preliminarily detected spatial invariants from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose, wherein the observation subject possesses at least one image acquisition unit for obtaining observation images, and n is a natural number; calculating the distances between corresponding spatial invariants in the two observations, wherein for the i-th spatial invariant, its distance to the j-th spatial invariant is dij in the first observation and dij' in the second observation, with i and j both less than or equal to n and j ≠ i; voting on a specific number of spatial invariants according to a predetermined voting strategy, wherein voting on a given spatial invariant means that all the other spatial invariants cast votes on it; and determining, among the n spatial invariants, N correct spatial invariants according to the voting result and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, wherein N is a natural number less than or equal to n.
Preferably, in the method according to this embodiment of the invention, a spatial invariant refers to the same spatial point seen in the two observations carried out by the observation subject at different poses.
Preferably, in the method according to this embodiment of the invention, the predetermined voting strategy is:
judging whether |dij - dij'| / max(|dij|, |dij'|) is less than a predetermined threshold; if |dij - dij'| / max(|dij|, |dij'|) is less than the predetermined threshold, the i-th spatial invariant and the j-th spatial invariant cast affirmative votes for each other, otherwise they cast negative votes for each other.
Preferably, in the method according to this embodiment of the invention, the observation subject is a robot.
Preferably, in the method according to this embodiment of the invention, all spatial invariants are voted on, and the predetermined judgment strategy is: the N spatial invariants that receive the most affirmative votes are judged to be the correct spatial invariants.
Preferably, in the method according to this embodiment of the invention, the value of N is determined according to the design accuracy.
Preferably, in the method according to this embodiment of the invention, an arbitrary spatial invariant is selected and voted on, and the predetermined judgment strategy is: if the selected spatial invariant receives more than n/2 affirmative votes, it is determined to be a correct spatial invariant, the spatial invariants that cast affirmative votes for it are also judged to be correct, and the spatial invariants that cast negative votes for it are deleted; if the selected spatial invariant does not receive more than n/2 affirmative votes, it is determined to be an erroneous spatial invariant, another one of the remaining spatial invariants is chosen arbitrarily, and the above processing is repeated until a spatial invariant judged to be correct is obtained.
According to another aspect of an embodiment of the invention, a robot fast localization estimation method is provided, comprising the steps of: the robot performing two observations of an unknown space, at an initial pose and at the current pose respectively, to obtain a group of observation images; preliminarily detecting, in this group of observation images, the spatial invariants representing the same spatial points; performing secondary detection on the spatial invariants by the above method, so as to filter out the incorrectly matched spatial invariants; and obtaining the current pose of the robot from the correctly matched spatial invariants.
According to a further aspect of an embodiment of the invention, a spatial-invariant secondary detection device for spatial map construction is provided. The device comprises: an acquisition unit for acquiring n preliminarily detected spatial invariants from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose, wherein the observation subject possesses at least one image acquisition unit for obtaining observation images, and n is a natural number; a distance calculation unit for calculating the distances between corresponding spatial invariants in the two observations, wherein for the i-th spatial invariant, its distance to the j-th spatial invariant is dij in the first observation and dij' in the second observation, with i and j both less than or equal to n and j ≠ i; a voting unit for voting on a specific number of spatial invariants according to a predetermined voting strategy, wherein voting on a given spatial invariant means that all the other spatial invariants cast votes on it; and a judgment unit for determining, among the n spatial invariants, N correct spatial invariants according to the voting result and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, wherein N is a natural number less than or equal to n.
Preferably, in the device according to this embodiment of the invention, a spatial invariant refers to the same spatial point seen in the two observations carried out by the observation subject at different poses.
Preferably, in the device according to this embodiment of the invention, the predetermined voting strategy is: judging whether |dij - dij'| / max(|dij|, |dij'|) is less than a predetermined threshold; if it is, the i-th spatial invariant and the j-th spatial invariant cast affirmative votes for each other, otherwise they cast negative votes for each other.
Preferably, in the device according to this embodiment of the invention, the observation subject is a robot.
Preferably, in the device according to this embodiment of the invention, the voting unit votes on all spatial invariants, and the predetermined judgment strategy is: the N spatial invariants that receive the most affirmative votes are judged to be the correct spatial invariants.
Preferably, in the device according to this embodiment of the invention, the value of N is determined according to the design accuracy.
Preferably, in the device according to this embodiment of the invention, the voting unit selects an arbitrary spatial invariant and votes on it, and the predetermined judgment strategy is: if the selected spatial invariant receives more than n/2 affirmative votes, it is determined to be a correct spatial invariant, the spatial invariants that cast affirmative votes for it are also judged to be correct, and the spatial invariants that cast negative votes for it are deleted; if the selected spatial invariant does not receive more than n/2 affirmative votes, it is determined to be an erroneous spatial invariant, another one of the remaining spatial invariants is chosen arbitrarily, and the above processing is repeated until a spatial invariant judged to be correct is obtained.
According to yet another aspect of an embodiment of the invention, a robot fast localization estimation device is provided, comprising: an image acquisition unit for performing two observations of an unknown space, at an initial pose and at the current pose respectively, to obtain a group of observation images; a spatial-invariant preliminary detection unit for preliminarily detecting, in the group of observation images obtained by the image acquisition unit, the spatial invariants representing the same spatial points; the spatial-invariant secondary detection device described above, for performing secondary detection on the spatial invariants detected by the preliminary detection unit, so as to filter out the incorrectly matched spatial invariants; and a pose calculation unit for obtaining the current pose of the robot from the correctly matched spatial invariants.
The spatial-invariant secondary detection method and device according to embodiments of the invention greatly reduce the amount of computation and increase processing speed while improving the matching accuracy of the spatial invariants. A robot fast localization estimation method and device that adopt this secondary detection method and device can therefore complete the pose estimation better while reducing the amount of computation and increasing processing speed.
Brief description of the drawings
Figures 1A-1B are schematic diagrams of pose change estimation based on two sets of observation data;
Figures 2A-2B are schematic diagrams showing how incorrect spatial invariants lead to erroneous pose estimation;
Figures 3A-3B show the relative distances between the spatial invariants in the two observations;
Figure 4 is a flowchart of the spatial-invariant secondary detection method for spatial map construction according to an embodiment of the invention;
Figure 5 is a flowchart of the robot fast localization estimation method according to an embodiment of the invention;
Figure 6 is a block diagram of the configuration of the spatial-invariant secondary detection device for spatial map construction according to an embodiment of the invention; and
Figure 7 is a block diagram of the configuration of the robot fast localization estimation device according to an embodiment of the invention.
Detailed description of the embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. The following description is provided with reference to the drawings to help understand the example embodiments of the invention defined by the claims and their equivalents. It includes various details to aid understanding, but these should be regarded as merely exemplary. Accordingly, those skilled in the art will recognize that various changes and modifications can be made to the embodiments described here without departing from the scope and spirit of the invention. For clarity and conciseness, detailed descriptions of well-known functions and structures are omitted.
In the method and device according to embodiments of the invention, the relative distances between the spatial invariants are used as the criterion: the invariants vote on one another, and the correct spatial invariants are screened out quickly. Figures 3A and 3B show the relative distances between the spatial invariants in the two observations. As shown in Figures 3A and 3B, for correct spatial invariants (1, 2, 3, 4, 5) the relative distances between them remain unchanged across the two observations.
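The distance-preservation property can be checked numerically: a rigid motion (rotation plus translation) leaves all pairwise distances unchanged, so correctly matched invariants satisfy dij ≈ dij', while a wrongly matched point breaks some of its distances. The following is a small illustrative check assuming NumPy; the point coordinates and the motion are made up for the example.

```python
import numpy as np

# Five correct 3D invariant points as seen in the first observation.
p_first = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0],
                    [1.0, 1.0, 1.0], [2.0, 0.5, 1.5]])

# Simulate the second observation: the same points after a rigid motion
# (rotation about z by 30 degrees plus a translation).
a = np.deg2rad(30.0)
rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
p_second = p_first @ rz.T + np.array([0.4, -0.2, 0.1])

d_first = np.linalg.norm(p_first[:, None] - p_first[None, :], axis=-1)
d_second = np.linalg.norm(p_second[:, None] - p_second[None, :], axis=-1)
print(np.allclose(d_first, d_second))  # True: pairwise distances are preserved
```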
Figure 4 is a flowchart of the spatial-invariant secondary detection method for spatial map construction according to an embodiment of the invention.
As shown in Figure 4, the method comprises the following steps:
First, at step S401, n preliminarily detected spatial invariants are acquired from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose, wherein the observation subject possesses at least one image acquisition unit for obtaining observation images, and n is a natural number. For example, the observation subject here may be a robot, the first pose may be (0, 0, 0, 0, 0, 0), and the second pose may be (x, y, z, roll, pitch, yaw). Note again that a spatial invariant refers to the same spatial point seen in the two observations carried out by the observation subject at different poses.
Then, at step S402, the distances between corresponding spatial invariants in the two observations are calculated: for the i-th spatial invariant, its distance to the j-th spatial invariant is dij in the first observation and dij' in the second observation, where i and j are both less than or equal to n and j ≠ i.
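A minimal sketch of step S402, assuming NumPy and that each observation is given as an (n, 3) array of the 3D coordinates of the preliminarily detected invariants (all names are illustrative):

```python
import numpy as np

def pairwise_distances(points):
    """Return the n x n matrix of Euclidean distances between the invariants,
    i.e. entry [i, j] is the distance dij for the given observation."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# d_first[i, j] corresponds to dij, d_second[i, j] to dij'
# d_first = pairwise_distances(p_first)
# d_second = pairwise_distances(p_second)
```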
Then, at step S403, a specific number of spatial invariants are voted on according to a predetermined voting strategy. Depending on the number of spatial invariants voted on, two embodiments can be distinguished: full voting and partial voting. Note that voting on a given spatial invariant means that all the other spatial invariants cast votes on it.
For example, the predetermined voting strategy is: judge whether |dij - dij'| / max(|dij|, |dij'|) is less than a predetermined threshold; if it is, the i-th spatial invariant and the j-th spatial invariant cast affirmative votes for each other, otherwise they cast negative votes for each other.
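A minimal sketch of this voting strategy over the distance matrices from step S402 (assuming NumPy; the function name and the default threshold value are illustrative assumptions, not values fixed by the patent):

```python
import numpy as np

def vote(d_first, d_second, threshold=0.05):
    """Return an n x n vote matrix: +1 where invariants i and j cast mutual
    affirmative votes, -1 where they cast mutual negative votes, 0 on the
    diagonal (an invariant does not vote on itself, since j != i)."""
    denom = np.maximum(np.abs(d_first), np.abs(d_second))
    with np.errstate(divide="ignore", invalid="ignore"):
        rel_change = np.abs(d_first - d_second) / denom
    votes = np.where(rel_change < threshold, 1, -1)
    np.fill_diagonal(votes, 0)
    return votes
```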
Finally, at step S404, N correct spatial invariants are determined among the n spatial invariants according to the voting result and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, where N is a natural number less than or equal to n.
The full-voting embodiment is described first. In this embodiment, all spatial invariants are voted on. In this case, the predetermined judgment strategy in step S404 above is: the N spatial invariants that receive the most affirmative votes are judged to be the correct spatial invariants.
Note that N here can be adjusted according to the design accuracy. For example, the higher the design accuracy, the larger the number of spatial invariants chosen in the above judgment strategy, i.e., the larger the value of N; conversely, the smaller the value of N. In general, N ranges from 50 to 300.
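A minimal sketch of the full-voting judgment strategy, reusing the vote matrix above (assuming NumPy; n_keep plays the role of N and is an input chosen from the design accuracy):

```python
import numpy as np

def select_full_voting(votes, n_keep):
    """Return the indices of the n_keep invariants receiving the most
    affirmative votes; these are judged to be the correct invariants."""
    affirmative = (votes == 1).sum(axis=0)   # affirmative votes received by each invariant
    order = np.argsort(affirmative)[::-1]    # most-voted invariants first
    return np.sort(order[:n_keep])
```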
As mentioned above, the accuracy of the existing RANSAC method is limited by the number of trials, so it can only find reasonably accurate spatial invariants, whereas the full-voting embodiment of the invention can find the most accurate spatial invariants. Moreover, since the pose change of the observation subject (i.e., the robot) does not have to be computed repeatedly, and all calculations involve only simple distance computations and comparisons, the amount of computation is significantly reduced and the processing speed is improved. In tests, the performance improves by one to two orders of magnitude.
Next, the partial-voting embodiment is described. It differs from the full-voting embodiment described above in that not all spatial invariants are voted on. Instead, an arbitrary spatial invariant is selected and voted on first. In this case, the predetermined judgment strategy in step S404 above is: if the selected spatial invariant receives more than n/2 affirmative votes, it is determined to be a correct spatial invariant, the spatial invariants that cast affirmative votes for it are also judged to be correct, and the spatial invariants that cast negative votes for it are deleted; if the selected spatial invariant does not receive more than n/2 affirmative votes, it is determined to be an erroneous spatial invariant, another one of the remaining spatial invariants is chosen arbitrarily, and the above processing is repeated until a spatial invariant judged to be correct is obtained.
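A minimal sketch of the partial-voting embodiment under the same assumptions (only the currently selected invariant is voted on; the candidate order and tie handling below are illustrative choices, not specified by the patent):

```python
import numpy as np

def select_partial_voting(d_first, d_second, threshold=0.05):
    """Pick an arbitrary invariant, let all others vote on it, and keep it
    together with its supporters if it gets more than n/2 affirmative votes;
    otherwise discard it and try another remaining invariant."""
    n = d_first.shape[0]
    remaining = list(range(n))
    while remaining:
        k = remaining[0]                          # arbitrary candidate
        denom = np.maximum(np.abs(d_first[k]), np.abs(d_second[k]))
        with np.errstate(divide="ignore", invalid="ignore"):
            rel_change = np.abs(d_first[k] - d_second[k]) / denom
        supporters = [j for j in range(n)
                      if j != k and rel_change[j] < threshold]
        if len(supporters) > n / 2:
            return sorted([k] + supporters)       # judged correct; opposers dropped
        remaining.remove(k)                       # judged erroneous; try another
    return []
```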
As is clear from the above description, the partial-voting embodiment further reduces the amount of computation and improves processing speed compared with the full-voting embodiment. Its drawback is lower accuracy: unlike the full-voting embodiment, it cannot guarantee finding the most accurate spatial invariants. Therefore, full voting or partial voting can be chosen as appropriate according to the concrete design requirements.
The spatial-invariant secondary detection method according to an embodiment of the invention has been described above. Obviously, this method can be applied to robot fast localization estimation in the prior art. Because the secondary detection method according to an embodiment of the invention finds the correctly matched spatial invariants while reducing the amount of computation and improving processing speed, the pose estimation of the robot can be completed better.
The flow process of having used according to the quick location estimation method of robot of the space invariance amount secondary detection method of the embodiment of the invention is described below with reference to Fig. 5.
As shown in Figure 5, the quick location estimation method of robot comprises the steps:
First, at step S501, the robot performs two observations of an unknown space, at an initial pose and at the current pose respectively, to obtain a group of observation images.
Then, at step S502, the spatial invariants representing the same spatial points in this group of observation data are preliminarily detected. For example, as mentioned earlier, many existing research results can detect spatial invariants from two-dimensional grayscale images or three-dimensional point-cloud observation data with reasonable efficiency (for example, by grayscale-value matching). This is the preliminary matching of the spatial invariants.
Next, at step S503, secondary detection is performed on the spatial invariants by the spatial-invariant secondary detection method described above with reference to Figure 4, so as to filter out the incorrectly matched spatial invariants.
Finally, at step S504, the current pose of the robot is obtained from the correctly matched spatial invariants.
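Tying the earlier sketches together, an illustrative outline of steps S501-S504 under the same assumptions is given below. The preliminary matching of step S502 is taken as given; pairwise_distances, vote, select_full_voting and estimate_rigid_transform are the illustrative helpers sketched above, and n_keep must be at least 3.

```python
import numpy as np

def estimate_current_pose(p_first, p_second, n_keep, threshold=0.05):
    """p_first, p_second: (n, 3) arrays of preliminarily matched invariant points
    from the initial-pose and current-pose observations (steps S501-S502)."""
    d_first = pairwise_distances(p_first)            # step S402
    d_second = pairwise_distances(p_second)
    votes = vote(d_first, d_second, threshold)       # step S403 (full voting)
    keep = select_full_voting(votes, n_keep)         # step S404 / S503
    # Step S504: recover rotation and translation from the correct matches.
    return estimate_rigid_transform(p_first[keep], p_second[keep])
```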
The spatial-invariant secondary detection device for spatial map construction according to an embodiment of the invention is described below with reference to Figure 6. As shown in Figure 6, the device 600 comprises an acquisition unit 601, a distance calculation unit 602, a voting unit 603 and a judgment unit 604.
The acquisition unit 601 acquires n preliminarily detected spatial invariants from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose. The observation subject possesses at least one image acquisition unit for obtaining observation images, and n is a natural number.
The distance calculation unit 602 calculates the distances between corresponding spatial invariants in the two observations, and the voting unit 603 votes on a specific number of spatial invariants according to the predetermined voting strategy, based on the relative distances between the spatial invariants calculated by the distance calculation unit 602.
Here, depending on the number of spatial invariants voted on, two embodiments can be distinguished: full voting and partial voting; voting on a given spatial invariant means that all the other spatial invariants cast votes on it.
The judgment unit 604 determines, among the n spatial invariants, N correct spatial invariants according to the voting result obtained by the voting unit 603 and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, where N is a natural number less than or equal to n.
In the spatial-invariant secondary detection device according to this embodiment of the invention, the "predetermined voting strategy", "full-voting embodiment", "partial-voting embodiment" and "predetermined judgment strategy" are the same as the corresponding content in the spatial-invariant secondary detection method described above; for brevity, their details are not repeated.
Similarly, the spatial-invariant secondary detection device 600 according to an embodiment of the invention can also be applied to robot fast localization estimation equipment in the prior art. The robot fast localization estimation device 700 that uses the spatial-invariant secondary detection device according to an embodiment of the invention is described below with reference to Figure 7.
As shown in Figure 7, the robot fast localization estimation device 700 comprises an image acquisition unit 701, a spatial-invariant preliminary detection unit 702, the spatial-invariant secondary detection device 600 described above, and a pose calculation unit 703.
The image acquisition unit 701 performs two observations of an unknown space, at an initial pose and at the current pose respectively, to obtain a group of observation images.
The spatial-invariant preliminary detection unit 702 preliminarily detects, in the group of observation images obtained by the image acquisition unit 701, the spatial invariants representing the same spatial points.
The spatial-invariant secondary detection device 600 performs secondary detection on the spatial invariants detected by the preliminary detection unit 702, so as to filter out the incorrectly matched spatial invariants.
The pose calculation unit 703 obtains the current pose of the robot from the correctly matched spatial invariants that remain after filtering by the secondary detection device 600.
The spatial-invariant secondary detection method and device according to embodiments of the invention, and the robot fast localization estimation method and device that use them, have been described above with reference to the drawings. The secondary detection method and device greatly reduce the amount of computation and increase processing speed while improving the matching accuracy of the spatial invariants. The robot fast localization estimation method and device that adopt them can therefore complete the pose estimation better while reducing the amount of computation and increasing processing speed.
It should be noted that, in this specification, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that comprises a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements inherent to the process, method, article or device. In the absence of further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that comprises that element.
Finally, it should also be noted that the series of processing described above includes not only processing performed in chronological order as described here, but also processing performed in parallel or individually rather than in chronological order.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary hardware platform, or entirely by software. Based on this understanding, the contribution of the technical solution of the invention over the background art can be embodied wholly or partly in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, magnetic disk or optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments, or in some parts of the embodiments, of the invention.
The present invention has been described in detail above. Specific examples are used herein to set forth the principles and implementations of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes can be made to the specific implementations and the scope of application according to the idea of the invention. In summary, the contents of this description should not be construed as limiting the invention.
Claims (16)
1. A spatial-invariant secondary detection method for spatial map construction, the method comprising:
acquiring n preliminarily detected spatial invariants from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose, wherein the observation subject possesses at least one image acquisition unit for obtaining observation images, and n is a natural number;
calculating the distances between corresponding spatial invariants in the two observations, wherein for the i-th spatial invariant, its distance to the j-th spatial invariant is dij in the first observation and dij' in the second observation, with i and j both less than or equal to n and j ≠ i;
voting on a specific number of spatial invariants according to a predetermined voting strategy, wherein voting on a given spatial invariant means that all the other spatial invariants cast votes on it; and
determining, among the n spatial invariants, N correct spatial invariants according to the voting result and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, wherein N is a natural number less than or equal to n.
2. The method according to claim 1, wherein a spatial invariant refers to the same spatial point seen in the two observations carried out by the observation subject at different poses.
3. The method according to claim 1, wherein the predetermined voting strategy is:
judging whether |dij - dij'| / max(|dij|, |dij'|) is less than a predetermined threshold; if |dij - dij'| / max(|dij|, |dij'|) is less than the predetermined threshold, the i-th spatial invariant and the j-th spatial invariant cast affirmative votes for each other, otherwise they cast negative votes for each other.
4. The method according to claim 1, wherein the observation subject is a robot.
5. The method according to claim 1, wherein
all spatial invariants are voted on, and
the predetermined judgment strategy is: the N spatial invariants that receive the most affirmative votes are judged to be the correct spatial invariants.
6. The method according to claim 1, wherein
an arbitrary spatial invariant is selected and voted on, and
the predetermined judgment strategy is: if the selected spatial invariant receives more than n/2 affirmative votes, it is determined to be a correct spatial invariant, the spatial invariants that cast affirmative votes for it are also judged to be correct, and the spatial invariants that cast negative votes for it are deleted;
if the selected spatial invariant does not receive more than n/2 affirmative votes, it is determined to be an erroneous spatial invariant, another one of the remaining spatial invariants is chosen arbitrarily, and the above processing is repeated until a spatial invariant judged to be correct is obtained.
7. The method according to claim 5, wherein the value of N is determined according to the design accuracy.
8. A robot fast localization estimation method, comprising the steps of:
the robot performing two observations of an unknown space, at an initial pose and at the current pose respectively, to obtain a group of observation images;
preliminarily detecting the spatial invariants representing the same spatial points in the group of observation images;
performing secondary detection on the spatial invariants by the method according to claim 1, so as to filter out the incorrectly matched spatial invariants; and
obtaining the current pose of the robot from the correctly matched spatial invariants.
9. A spatial-invariant secondary detection device for spatial map construction, the device comprising:
an acquisition unit for acquiring n preliminarily detected spatial invariants from a group of observation images obtained by an observation subject through two successive observations of an unknown space at a first pose and a second pose, wherein the observation subject possesses at least one image acquisition unit for obtaining observation images, and n is a natural number;
a distance calculation unit for calculating the distances between corresponding spatial invariants in the two observations, wherein for the i-th spatial invariant, its distance to the j-th spatial invariant is dij in the first observation and dij' in the second observation, with i and j both less than or equal to n and j ≠ i;
a voting unit for voting on a specific number of spatial invariants according to a predetermined voting strategy, wherein voting on a given spatial invariant means that all the other spatial invariants cast votes on it; and
a judgment unit for determining, among the n spatial invariants, N correct spatial invariants according to the voting result and a predetermined judgment strategy, thereby filtering out the incorrectly matched spatial invariants, wherein N is a natural number less than or equal to n.
10. The device according to claim 9, wherein a spatial invariant refers to the same spatial point seen in the two observations carried out by the observation subject at different poses.
11. The device according to claim 9, wherein the predetermined voting strategy is:
judging whether |dij - dij'| / max(|dij|, |dij'|) is less than a predetermined threshold; if |dij - dij'| / max(|dij|, |dij'|) is less than the predetermined threshold, the i-th spatial invariant and the j-th spatial invariant cast affirmative votes for each other, otherwise they cast negative votes for each other.
12. The device according to claim 9, wherein the observation subject is a robot.
13. The device according to claim 9, wherein
the voting unit votes on all spatial invariants, and
the predetermined judgment strategy is: the N spatial invariants that receive the most affirmative votes are judged to be the correct spatial invariants.
14. The device according to claim 9, wherein
the voting unit selects an arbitrary spatial invariant and votes on it, and
the predetermined judgment strategy is: if the selected spatial invariant receives more than n/2 affirmative votes, it is determined to be a correct spatial invariant, the spatial invariants that cast affirmative votes for it are also judged to be correct, and the spatial invariants that cast negative votes for it are deleted;
if the selected spatial invariant does not receive more than n/2 affirmative votes, it is determined to be an erroneous spatial invariant, another one of the remaining spatial invariants is chosen arbitrarily, and the above processing is repeated until a spatial invariant judged to be correct is obtained.
15. The device according to claim 13, wherein the value of N is determined according to the design accuracy.
16. A robot fast localization estimation device, comprising:
an image acquisition unit for performing two observations of an unknown space, at an initial pose and at the current pose respectively, to obtain a group of observation images;
a spatial-invariant preliminary detection unit for preliminarily detecting, in the group of observation images obtained by the image acquisition unit, the spatial invariants representing the same spatial points;
the spatial-invariant secondary detection device according to claim 9, for performing secondary detection on the spatial invariants detected by the preliminary detection unit, so as to filter out the incorrectly matched spatial invariants; and
a pose calculation unit for obtaining the current pose of the robot from the correctly matched spatial invariants.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110444217.8A CN103186896B (en) | 2011-12-27 | 2011-12-27 | Spatial-invariant secondary detection method and device for spatial map construction
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110444217.8A CN103186896B (en) | 2011-12-27 | 2011-12-27 | Spatial-invariant secondary detection method and device for spatial map construction
Publications (2)
Publication Number | Publication Date |
---|---|
CN103186896A | 2013-07-03 |
CN103186896B CN103186896B (en) | 2018-06-01 |
Family
ID=48678054
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110444217.8A Active CN103186896B (en) | 2011-12-27 | 2011-12-27 | Spatial-invariant secondary detection method and device for spatial map construction
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103186896B (en) |
- 2011-12-27: CN application CN201110444217.8A granted as patent CN103186896B, status Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1698067A (en) * | 2003-04-28 | 2005-11-16 | 索尼株式会社 | Image recognition device and method, and robot device |
CN101295363A (en) * | 2007-04-23 | 2008-10-29 | 三菱电机株式会社 | Method and system for determining objects poses from range images |
US20100312386A1 (en) * | 2009-06-04 | 2010-12-09 | Microsoft Corporation | Topological-based localization and navigation |
Non-Patent Citations (4)
Title |
---|
Li Maohai et al., "Global localization of mobile robots based on monocular vision", Robot, 31 March 2007 (2007-03-31) *
Cui Pingyuan et al., "Research on navigation method for lunar soft landing based on 3D terrain matching", Journal of Astronautics *
Zhang Manman, "Seamless image stitching method based on SIFT feature points", Baike Luntan (Encyclopedia Forum) *
Also Published As
Publication number | Publication date |
---|---|
CN103186896B (en) | 2018-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112847343B (en) | Dynamic target tracking and positioning method, device, equipment and storage medium | |
CN108692739B (en) | Integrity monitoring method for navigation system with heterogeneous measurements | |
CN104615986B (en) | The method that pedestrian detection is carried out to the video image of scene changes using multi-detector | |
Li et al. | Automatically and accurately matching objects in geospatial datasets | |
CN104006740B (en) | Object detecting method and article detection device | |
CN107274679A (en) | Vehicle identification method, device, equipment and computer-readable recording medium | |
CN109902141A (en) | The method and autonomous agents of motion planning | |
CN109964182A (en) | Method and system for vehicle analysis | |
Roquel et al. | Decomposition of conflict as a distribution on hypotheses in the framework on belief functions | |
CN104240542B (en) | A kind of airdrome scene maneuvering target recognition methods based on geomagnetic sensor network | |
Nalepa et al. | Adaptive guided ejection search for pickup and delivery with time windows | |
Chen et al. | Multiscale geometric and spectral analysis of plane arrangements | |
Harada et al. | Experiments on learning-based industrial bin-picking with iterative visual recognition | |
CN113378694A (en) | Method and device for generating target detection and positioning system and target detection and positioning | |
CN115667849A (en) | Method for determining a starting position of a vehicle | |
US9547983B2 (en) | Analysis method and analyzing device | |
CN106772358A (en) | A kind of multisensor distribution method based on CPLEX | |
Shuai et al. | A ship target automatic recognition method for sub-meter remote sensing images | |
CN103186896A (en) | Secondary detection method and equipment for space invariables constructed by space map | |
Gempita et al. | Implementation of K-NN fingerprint method on receiving server for indoor mobile object tracking | |
CN103871048B (en) | Straight line primitive-based geometric hash method real-time positioning and matching method | |
CN115983007A (en) | Method and device for extracting coincident track, electronic equipment and storage medium | |
Galba et al. | Public transportation bigdata clustering | |
CN106537090A (en) | Device and method for determining at least one position of a mobile terminal | |
CN117178292A (en) | Target tracking method, device, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||