CN106352877A - Moving device and positioning method thereof - Google Patents

Moving device and positioning method thereof

Info

Publication number
CN106352877A
Authority
CN
China
Prior art keywords
mobile device
feature group
feature descriptor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610652818.0A
Other languages
Chinese (zh)
Other versions
CN106352877B (en)
Inventor
庞富民
陈子冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weilan Continental Beijing Technology Co ltd
Original Assignee
Ninebot Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninebot Beijing Technology Co Ltd
Priority to CN201610652818.0A
Publication of CN106352877A
Priority to PCT/CN2017/096945 (published as WO2018028649A1)
Application granted
Publication of CN106352877B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a mobile device and a positioning method thereof. The positioning method includes: while the mobile device is moving and collecting visual feature points, extracting a first group of feature descriptors from the visual feature points collected at the current moment; performing closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously; and, when a closed loop is detected between the first group of feature descriptors and a second group of feature descriptors (one of the previously extracted groups), determining the pose of the mobile device at the current moment from the spatial coordinates of the visual feature points described by the second group. The mobile device and the positioning method solve the technical problem that accumulated pose-estimation errors seriously degrade positioning accuracy as the mobile device moves, and improve the accuracy of positioning based on the mobile device.

Description

A mobile device and a positioning method thereof
Technical field
The present invention relates to the field of positioning technology, and more particularly to a mobile device and a positioning method thereof.
Background technology
Existing indoor positioning methods for robots based on vision and inertial devices fall broadly into two classes: 1) methods that build an environmental map, such as visual SLAM (simultaneous localization and mapping); and 2) methods that do not build an environmental map, such as visual/inertial odometry.
Map-building positioning methods: while estimating its own position and attitude, the robot builds a map of the environment, and obtains its position by jointly optimizing each pose along its trajectory together with the relative positions of landmarks in the map. Such map-building methods are highly accurate, but bringing indoor map construction and the associated optimization over environmental information into the positioning pipeline consumes a large amount of the robot's computational resources, so the computational load of the optimization algorithm often becomes the bottleneck limiting the real-time performance of map-building methods. Applying an existing map-free positioning method on a mobile device does guarantee real-time performance, but as the trajectory grows, the mobile device accumulates errors in the estimate of its own pose, so the pose-estimation error keeps increasing and severely degrades positioning accuracy.
Summary of the invention
The embodiments of the present invention provide a mobile device and a positioning method thereof, solving the prior-art technical problem that the error in estimating the pose of a mobile device keeps increasing during positioning based on the mobile device and severely degrades positioning accuracy.
In a first aspect, an embodiment of the present invention provides a positioning method for a mobile device, comprising:
while the mobile device is moving and collecting visual feature points, extracting a first group of feature descriptors from the visual feature points collected at the current moment;
performing closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously; and
when a closed loop is detected between the first group of feature descriptors and a second group of feature descriptors, determining the pose of the mobile device at the current moment from the spatial coordinates of the visual feature points described by the second group of feature descriptors, wherein the second group of feature descriptors is one of the groups of feature descriptors extracted previously.
Preferably, performing closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously comprises:
comparing the first group of feature descriptors for similarity with each previously extracted group of feature descriptors, and determining, for each previously extracted group, the number of descriptors satisfying a preset similarity condition with the first group;
judging, for each previously extracted group, whether the number of descriptors satisfying the preset similarity condition with the first group exceeds a preset number threshold, wherein a closed loop is detected when the number of descriptors satisfying the preset similarity condition exceeds the preset number threshold.
Preferably, comparing the first group of feature descriptors for similarity with each previously extracted group of feature descriptors and judging whether the preset similarity condition is satisfied comprises:
comparing each feature descriptor in the first group with each feature descriptor in each previously extracted group;
judging whether the vector angle between the compared feature descriptors is less than a preset angle threshold, wherein the compared feature descriptors satisfy the preset similarity condition when the vector angle is less than the preset angle threshold.
Preferably, determining the pose of the mobile device at the current moment from the spatial coordinates of the visual feature points described by the second group of feature descriptors comprises:
determining a plurality of feature descriptors in the second group of feature descriptors;
determining the two-dimensional image coordinates corresponding to the plurality of feature descriptors in the image frame collected at the current moment;
building, from the spatial coordinates of the visual feature points described by the plurality of feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image-acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:
$$ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \arg\min_{T} \sum_{i} \left\| K T X_i - \hat{p}_i \right\| $$
wherein T is the transfer matrix, X_i is the spatial coordinates of the visual feature points described by the plurality of feature descriptors, \hat{p}_i is the corresponding two-dimensional image coordinates of the plurality of feature descriptors in the image frame collected at the current moment, K is the intrinsic matrix of the image-acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device;
solving the transfer matrix to obtain the pose of the mobile device at the current moment.
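The minimization above can be read as a reprojection-error objective: each spatial coordinate X_i is projected through the candidate pose (R, t) and the intrinsic matrix K, and compared with its observed two-dimensional image coordinate. A minimal Python sketch of that objective, assuming a pinhole intrinsic matrix and hypothetical numeric values; the patent does not prescribe a solver, so only the cost being minimized is shown, not the iterative routine (e.g. a PnP solver) that would search over (R, t):

```python
import math

def project(K, R, t, X):
    """Project a 3-D point X through pose (R, t) and intrinsics K,
    returning pixel coordinates after homogeneous normalisation."""
    # camera-frame point: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (u[0] / u[2], u[1] / u[2])

def reprojection_cost(K, R, t, points3d, points2d):
    """The objective of the argmin: sum of reprojection distances."""
    cost = 0.0
    for X, p in zip(points3d, points2d):
        px, py = project(K, R, t, X)
        cost += math.hypot(px - p[0], py - p[1])
    return cost

# Identity pose and simple pinhole intrinsics (hypothetical values).
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
points3d = [(0.0, 0.0, 2.0), (0.5, -0.2, 3.0)]
points2d = [project(K, R, t, X) for X in points3d]
print(reprojection_cost(K, R, t, points3d, points2d))  # 0.0 at the true pose
```

At the true pose the cost is zero; a solver would vary R and t to find the minimum over noisy correspondences.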
Preferably, while the mobile device is moving and collecting visual feature points, the method further comprises:
collecting inertial data and visual information of the mobile device during the movement;
estimating, based on the inertial data and the visual information, the motion trajectory of the mobile device during the movement.
Preferably, after the pose of the mobile device at the current moment is determined from the spatial coordinates of the visual feature points described by the second group of feature descriptors, the method further comprises:
replacing the pose estimated for the corresponding moment from the inertial data and the visual information with the determined pose of the mobile device at the current moment, so as to revise the motion trajectory.
Preferably, each previously extracted group of feature descriptors is specifically a group extracted from a key image frame each time a key image frame is collected, wherein the key image frames are determined successively, at a preset spatial interval, from all the image frames collected by the mobile device.
In a second aspect, an embodiment of the present invention provides a mobile device, comprising:
an extraction unit, configured to extract, while the mobile device is moving and collecting visual feature points, a first group of feature descriptors from the visual feature points collected at the current moment;
a detection unit, configured to perform closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted previously;
a determination unit, configured to determine, when a closed loop is detected between the first group of feature descriptors and a second group of feature descriptors, the pose of the mobile device at the current moment from the spatial coordinates of the visual feature points described by the second group of feature descriptors, wherein the second group of feature descriptors is one of the groups of feature descriptors extracted previously.
Preferably, the detection unit comprises:
a comparison subunit, configured to compare the first group of feature descriptors for similarity with each previously extracted group of feature descriptors and determine, for each previously extracted group, the number of descriptors satisfying a preset similarity condition with the first group;
a judgment subunit, configured to judge, for each previously extracted group, whether the number of descriptors satisfying the preset similarity condition with the first group exceeds a preset number threshold, wherein a closed loop is detected when the number of descriptors satisfying the preset similarity condition exceeds the preset number threshold.
Preferably, the comparison subunit is specifically configured to:
compare each feature descriptor in the first group with each feature descriptor in each previously extracted group; and
judge whether the vector angle between the compared feature descriptors is less than a preset angle threshold, wherein the compared feature descriptors satisfy the preset similarity condition when the vector angle is less than the preset angle threshold.
Preferably, the determination unit comprises:
a first determination subunit, configured to determine a plurality of feature descriptors in the second group of feature descriptors;
a second determination subunit, configured to determine the two-dimensional image coordinates corresponding to the plurality of feature descriptors in the image frame collected at the current moment;
a matrix-building subunit, configured to build, from the spatial coordinates of the visual feature points described by the plurality of feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image-acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:
$$ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \arg\min_{T} \sum_{i} \left\| K T X_i - \hat{p}_i \right\| $$
wherein T is the transfer matrix, X_i is the spatial coordinates of the visual feature points described by the plurality of feature descriptors, \hat{p}_i is the corresponding two-dimensional image coordinates of the plurality of feature descriptors in the image frame collected at the current moment, K is the intrinsic matrix of the image-acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device;
a solving subunit, configured to solve the transfer matrix to obtain the pose of the mobile device at the current moment.
Preferably, the mobile device further comprises:
a collection unit, configured to collect inertial data and visual information of the mobile device during the movement;
a trajectory estimation unit, configured to estimate, based on the inertial data and the visual information, the motion trajectory of the mobile device during the movement.
Preferably, the mobile device further comprises:
a revision unit, configured to replace the pose estimated for the corresponding moment from the inertial data and the visual information with the determined pose of the mobile device at the current moment, so as to revise the motion trajectory.
Preferably, each previously extracted group of feature descriptors is specifically a group extracted from a key image frame each time a key image frame is collected, wherein the key image frames are determined successively, at a preset spatial interval, from all the image frames collected by the mobile device.
One or more of the technical solutions provided in the embodiments of the present invention achieve at least the following technical effects or advantages:
While moving, the mobile device collects visual feature points, extracts a first group of feature descriptors from the visual feature points collected at the current moment, and performs closed-loop detection against each previously extracted group of feature descriptors, thereby determining whether the mobile device has returned to an area it passed through before. Then, when a closed loop is detected between the first group and a previously extracted second group of feature descriptors, the pose of the mobile device at the current moment is determined from the spatial coordinates of the visual feature points described by the second group. In this way, when the mobile device passes through the same area again, its current pose can be recomputed from the spatial coordinates of the visual feature points recorded earlier, revising the pose of the mobile device at the closed-loop location and eliminating the accumulated deviation of the pose estimate. This solves the technical problem that the errors accumulated by the mobile device in estimating its own pose during movement severely degrade positioning accuracy, and effectively improves the precision of positioning based on the mobile device without building an environmental map, thereby achieving accurate positioning without an environmental map while guaranteeing both the real-time performance and the precision of the positioning.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the positioning method of the mobile device in an embodiment of the present invention;
Fig. 2 is a detailed flow chart of step S103 in Fig. 1;
Fig. 3 is a functional unit diagram of the mobile device in an embodiment of the present invention.
Specific embodiment
The embodiments of the present invention provide a positioning method for a mobile device and a mobile device, to solve the technical problem that the errors accumulated by a mobile device in estimating its own pose during movement severely degrade positioning accuracy. The general idea of the technical solutions of the embodiments of the present invention for solving the above technical problem is as follows:
While the mobile device is moving and collecting visual feature points, a first group of feature descriptors is extracted from the visual feature points collected at the current moment, and closed-loop detection is performed between the first group and each previously extracted group of feature descriptors. For example, the mobile device may be a robot equipped with an image-acquisition unit, and the image-acquisition unit may be a fisheye camera, another camera whose performance exceeds that of a fisheye camera, or a scanning device. As these two steps show, the feature descriptors used for closed-loop detection are gathered while the mobile device moves; each time a group of feature descriptors is collected, closed-loop detection is performed against every previously extracted group, so by running closed-loop detection in this cycle it is determined whether the area reached at each current moment is the same area reached before.
Then, when a closed loop is detected between the first group of feature descriptors and one of the previously extracted groups, the pose of the mobile device at the current moment is determined from the spatial coordinates of the visual feature points described by that previously extracted group. In other words, because the pose at the current moment is determined from the spatial coordinates of visual feature points described by a previously extracted group of descriptors, the pose of the mobile device can be corrected whenever a closed loop is detected, eliminating the errors accumulated by the mobile device in estimating its own pose during movement. This improves the precision of positioning based on the mobile device, achieving accurate positioning without building an environmental map while guaranteeing both real-time performance and positioning precision.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, which is a flow chart of the positioning method of the mobile device in an embodiment of the present invention, the positioning method comprises the following steps:
S101: while the mobile device is moving and collecting visual feature points, extract a first group of feature descriptors from the visual feature points collected at the current moment.
Specifically, a feature descriptor is a vector, in particular a vector describing a visual feature point in a collected image frame. The first group of feature descriptors is a group of vectors describing the visual feature points in the image frame collected at the current moment. Specifically, a visual feature point is a point with distinctive surroundings; for example, a table corner, a stool leg, or a door corner is a visual feature point, and such points are not enumerated here.
While the mobile device moves, the image-acquisition unit it carries collects images. Each time the image-acquisition unit collects an image frame, the visual feature points in the collected frame are detected and their feature descriptors are extracted, so that one group of feature descriptors is obtained for each image frame.
In one embodiment, extracting the first group of feature descriptors of the visual feature points collected at the current moment is specifically: extracting the feature descriptors of the visual feature points in the image frame collected at the current moment as the first group of feature descriptors, and recording the extracted first group of descriptors. To reduce the consumption of the mobile device's computing resources, in another specific embodiment, extracting the first group of feature descriptors of the visual feature points collected at the current moment is specifically: when the image frame collected at the current moment is determined to be a key image frame, extracting the feature descriptors of the visual feature points in that image frame as the first group of feature descriptors.
S102: perform closed-loop detection between the first group of feature descriptors and each previously extracted group of feature descriptors.
In S102, the way each previously extracted group of feature descriptors was obtained is the same as, or similar to, the way the first group of feature descriptors is extracted in S101:
Specifically, in one embodiment, each previously extracted group of feature descriptors is obtained as follows: each time the image-acquisition unit previously collected an image frame, the feature descriptors of the visual feature points in the collected frame were extracted, one group of feature descriptors per image frame, and each extracted group was recorded, yielding the previously extracted groups of feature descriptors referred to in S102.
Specifically, to reduce the consumption of the mobile device's computing resources, in another specific embodiment, each previously extracted group of feature descriptors is a group extracted from a key image frame each time a key image frame is collected; when a collected image frame is not a key image frame, no feature descriptors are extracted. The key image frames are determined successively, at a preset spatial interval, from all the image frames collected by the mobile device. Specifically, each time the image-acquisition unit collects an image frame, whether the collected frame is a key image frame is judged based on the preset spatial interval; if it is judged to be a key image frame, the feature descriptors of the visual feature points in the collected frame are extracted, and otherwise no feature descriptors are extracted.
In a specific implementation, the preset spatial interval is set according to the computing resources of the mobile device and the required positioning precision, and is not specifically limited here. For example, if the preset spatial interval is 0.5 m, then after the image frame collected at the mobile device's starting position is determined to be a key frame, the image frame collected each time the mobile device has moved a further 0.5 m is judged to be a key image frame, while the image frames collected at other positions are not; for example, the frames collected within the intervals (0 m, 0.5 m), (0.5 m, 1 m), (1 m, 1.5 m), ... are all judged not to be key image frames.
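The key-frame rule above reduces to a simple distance test. A minimal sketch, assuming a planar position representation and the 0.5 m spacing from the example; the function name is hypothetical:

```python
import math

KEYFRAME_SPACING = 0.5  # preset spatial interval in metres (0.5 m in the example)

def is_keyframe(position, last_keyframe_position):
    """Return True when the device has moved at least the preset
    spatial interval since the last key image frame was collected."""
    dx = position[0] - last_keyframe_position[0]
    dy = position[1] - last_keyframe_position[1]
    return math.hypot(dx, dy) >= KEYFRAME_SPACING

# Frames gathered between keyframes are skipped, so no descriptors
# are extracted for them.
print(is_keyframe((0.3, 0.0), (0.0, 0.0)))  # False: only 0.3 m travelled
print(is_keyframe((0.5, 0.0), (0.0, 0.0)))  # True: spacing reached
```

A frame that passes the test becomes the new reference key frame for the next comparison.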
The way closed-loop detection is performed in S102 is now described in detail: the first group of feature descriptors is compared for similarity with each previously extracted group, and for each previously extracted group the number of descriptors satisfying a preset similarity condition with the first group is determined; for each previously extracted group, whether that number exceeds a preset number threshold is then judged, a closed loop being detected when the number of descriptors satisfying the preset similarity condition exceeds the preset number threshold. Thus, when a closed loop is detected based on the first group of feature descriptors, the mobile device is considered to have reached the same area it passed through before, namely the area corresponding to the matching previously extracted group of feature descriptors.
Specifically, taking as an example the case where three groups of feature descriptors were extracted before the current moment, one round of closed-loop detection is illustrated below; from this illustration, a person skilled in the art can work out how closed-loop detection is performed at other moments:
Three groups of feature descriptors were extracted at moments t1, t2, and t3 before the current moment (moment t4). For convenience, they are named: group a extracted at t1, group b extracted at t2, group c extracted at t3, and group d extracted at t4 (i.e. the first group of feature descriptors). The following three steps are then executed independently: group d is compared for similarity with group a, and the number of descriptors satisfying the preset similarity condition between group d and group a is determined to be a; group d is compared for similarity with group b, and the number of descriptors satisfying the preset similarity condition between group d and group b is determined to be b; and group d is compared for similarity with group c, and the number of descriptors satisfying the preset similarity condition between group d and group c is determined to be c. Next, whether count a exceeds the preset number threshold, whether count b exceeds the preset number threshold, and whether count c exceeds the preset number threshold are judged. If count a exceeds the preset number threshold, the current moment is considered to have reached the same area reached at t1; if count b exceeds the preset number threshold, the current moment is considered to have reached the same area reached at t2; and if count c exceeds the preset number threshold, the current moment is considered to have reached the same area reached at t3.
In a specific implementation, the preset number threshold is set according to actual requirements. For example, in this embodiment the preset number threshold is set to 3, and a closed loop is detected when the number of descriptors satisfying the preset similarity condition exceeds 3; a count of 4, 5, or 6 such descriptors, for example, all indicates that a closed loop is detected.
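The counting-and-threshold rule can be sketched as follows. The toy two-dimensional descriptors and the exact counting rule (each current descriptor counts once if it is similar to at least one previous descriptor) are assumptions for illustration, since the text does not fix how pairs are tallied:

```python
import math

ANGLE_THRESHOLD_DEG = 30.0  # preset angle threshold (30 degrees in the later example)
COUNT_THRESHOLD = 3         # preset number threshold (3, per this paragraph)

def _angle_deg(d1, d2):
    """Vector angle in degrees between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    norm = math.sqrt(sum(a * a for a in d1)) * math.sqrt(sum(b * b for b in d2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def matching_count(current_group, previous_group):
    """Number of descriptors in current_group satisfying the similarity
    condition with at least one descriptor of previous_group."""
    return sum(
        1 for dc in current_group
        if any(_angle_deg(dc, dp) <= ANGLE_THRESHOLD_DEG for dp in previous_group)
    )

def loop_detected(current_group, previous_group):
    """A closed loop is detected when more than the preset number of
    descriptors satisfy the preset similarity condition."""
    return matching_count(current_group, previous_group) > COUNT_THRESHOLD

previous = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
current = [(2.0, 0.0), (0.0, 2.0), (2.0, 2.0), (1.0, 1.1)]
print(matching_count(current, previous))  # 4
print(loop_detected(current, previous))   # True: 4 exceeds the threshold of 3
```

Note that the angle test is scale-invariant, so (2, 0) matches (1, 0) exactly even though their magnitudes differ.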
Specifically, the cycle of closed-loop detection performed in S102 is described in detail below:
When the current moment is t2, the first group of feature descriptors of the collected visual feature points (here, group b) is extracted, and closed-loop detection is performed against group a extracted at t1. Then, when the current moment is t3, the first group of feature descriptors of the collected visual feature points (here, group c) is extracted, and closed-loop detection is performed against group a extracted at t1 and also against group b extracted at t2. Then, when the current moment is t4, the first group of feature descriptors of the collected visual feature points (here, group d) is extracted, and closed-loop detection is performed against group a extracted at t1, against group b extracted at t2, and against group c extracted at t3. The cycle continues in this way, so that at each current moment t2, t3, t4, t5, t6, ..., when the first group of feature descriptors is extracted, closed-loop detection is performed against every group of feature descriptors extracted before that moment.
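The cycle described above amounts to comparing each newly extracted group against every group recorded before it. A schematic sketch, with the pairwise closed-loop test abstracted into a callback; the toy set-overlap test below stands in for the real descriptor-similarity count and is purely illustrative:

```python
def run_detection_cycle(groups_in_order, detect):
    """groups_in_order: descriptor groups extracted at t1, t2, t3, ...
    detect(current, previous) -> bool is the pairwise closed-loop test.
    Returns (current_index, previous_index) pairs where a loop was found."""
    recorded = []  # groups extracted before the current moment
    loops = []
    for i, current in enumerate(groups_in_order):
        # compare the newest group with every previously recorded group
        for j, previous in enumerate(recorded):
            if detect(current, previous):
                loops.append((i, j))
        recorded.append(current)  # then record it for future moments
    return loops

# Toy stand-in for the similarity test: a loop is "detected" when the
# two groups share an element (the real test counts similar descriptors).
share = lambda cur, prev: len(set(cur) & set(prev)) > 0
print(run_detection_cycle([{'a'}, {'b'}, {'a', 'c'}], share))  # [(2, 0)]
```

The bookkeeping guarantees that the group at t4 is tested against the groups from t1, t2, and t3, exactly as in the cycle above.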
Specifically, the preset similarity condition is that the vector angle between descriptors is less than a preset angle threshold. A specific embodiment of judging whether the preset similarity condition is met is as follows: each feature descriptor in the first group of feature descriptors is compared with each feature descriptor in every group of feature descriptors extracted before; it is then judged whether the vector angle between the compared feature descriptors is less than the preset angle threshold, where a vector angle below the preset angle threshold indicates that the two compared feature descriptors meet the preset similarity condition. The degree of matching between descriptors is thereby judged.
In a specific implementation, the preset angle threshold is set according to actual requirements. For example, if the preset angle threshold is set to 30 degrees, then two compared feature descriptors meet the preset similarity condition only when the vector angle between them lies in [0, 30] degrees, and otherwise do not meet it. Likewise, if the preset angle threshold is set to 15 degrees, then two compared feature descriptors meet the preset similarity condition only when the vector angle between them lies in [0, 15] degrees, and otherwise do not meet it.
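The angle test above can be sketched as follows, assuming descriptors are real-valued vectors (the function name `meets_similarity` is illustrative, not from the patent):

```python
import numpy as np

def meets_similarity(d1, d2, angle_threshold_deg):
    """Return True when the vector angle between descriptors d1 and d2
    lies in [0, angle_threshold_deg] degrees, i.e. the two descriptors
    meet the preset similarity condition."""
    cos = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    # clip guards against floating-point values slightly outside [-1, 1]
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle <= angle_threshold_deg
```

With the examples from the text, a pair at roughly 27 degrees apart meets a 30-degree threshold but not a 15-degree one.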
S103: when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, the pose of the mobile device at the current time is determined from the spatial coordinates of the visual feature points described by the second group of feature descriptors, where the second group of feature descriptors is one of the groups of feature descriptors extracted before the current time.
Specifically, in s103 the pose of the mobile device at the current time includes the position and the attitude of the mobile device at the current time. As shown in Fig. 2, in one embodiment, determining the pose of the mobile device at the current time from the spatial coordinates of the visual feature points described by the second group of feature descriptors comprises the following steps:
S1031: determine multiple feature descriptors in the second group of feature descriptors.
Specifically, the multiple feature descriptors determined are those feature descriptors that meet the preset similarity condition with feature descriptors in the first group of feature descriptors. The number of feature descriptors determined from the second group of feature descriptors is set according to the preset number threshold. For example, if the preset number threshold is 3, then 4 feature descriptors meeting the preset similarity condition with the first group of feature descriptors are determined from the second group of feature descriptors.
Taking a preset number threshold of 3 as an example: step s102 has determined the feature descriptors in the second group that meet the preset similarity condition with the first group of feature descriptors, for instance 5, 6, or 7 such feature descriptors; then in s1031, 4 are determined from these 5, 6, or 7 feature descriptors. If only 4 feature descriptors in the second group meet the preset similarity condition with the first group, then all 4 are determined. Taking a preset number threshold of 4 as an example: step s102 determines the feature descriptors in the second group that meet the preset similarity condition with the first group, for instance 5, 6, 7, or 8 such feature descriptors; then in s1031, 5 are determined from these 5, 6, 7, or 8 feature descriptors.
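Under the reading above, s1031 keeps one more descriptor than the preset number threshold (4 for a threshold of 3, 5 for a threshold of 4). A minimal sketch of that selection rule, with illustrative names:

```python
def select_descriptors(matching_descriptors, number_threshold):
    """Keep the first number_threshold + 1 of the descriptors that met
    the preset similarity condition in s102; if exactly that many
    matched, all of them are kept."""
    return matching_descriptors[:number_threshold + 1]
```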
S1032: determine the two-dimensional image coordinates, in the image frame collected at the current time, corresponding to the multiple feature descriptors.
Specifically, the multiple feature descriptors determined are distinct. For example, the determined feature descriptors may be: the feature descriptor of "table corner 1", the feature descriptor of "table corner 2", the feature descriptor of "stool leg 1", and the feature descriptor of "stool leg 2". Then the two-dimensional image coordinates of the feature descriptor of "table corner 1" in the image frame collected at the current time are determined, the two-dimensional image coordinates of the feature descriptor of "table corner 2" in the image frame collected at the current time are determined, the two-dimensional image coordinates of the feature descriptor of "stool leg 1" in the image frame collected at the current time are determined, and the two-dimensional image coordinates of the feature descriptor of "stool leg 2" in the image frame collected at the current time are determined. In a specific implementation, visual feature matching is used to locate, in the image frame collected at the current time, the two-dimensional image coordinates of the multiple visual feature points described by the multiple feature descriptors determined in s1031.
S1033: based on the spatial coordinates of the visual feature points described by the multiple feature descriptors, the determined two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, establish a transfer matrix representing the pose of the mobile device:
$$ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \arg\min_{T} \sum_{i} \left\| K T X_i - \hat{p}_i \right\| $$
where T is the transfer matrix, X_i denotes the spatial coordinates of the visual feature point described by each of the multiple feature descriptors, \hat{p}_i denotes the corresponding two-dimensional image coordinates of that feature descriptor in the image frame collected at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device.
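The minimization in s1033 can be sketched numerically. The following is an illustrative least-squares implementation, not the patent's own solver: the pose is parameterized by a rotation vector and a translation, each point X_i is projected through the pinhole model with intrinsic matrix K, and the reprojection error against the observed image coordinates \hat{p}_i is minimized with SciPy's `least_squares` (the function name `solve_pose` and the zero initial guess are assumptions).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose(X, p, K):
    """X: (n, 3) spatial coordinates of the visual feature points;
    p: (n, 2) observed two-dimensional image coordinates;
    K: (3, 3) intrinsic matrix of the image acquisition unit.
    Returns the attitude R (3x3) and position t (3,) minimizing the
    reprojection error."""
    def residual(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        cam = X @ R.T + t                 # points in the camera frame
        proj = cam @ K.T                  # homogeneous pixel coordinates
        proj = proj[:, :2] / proj[:, 2:3] # perspective division
        return (proj - p).ravel()
    sol = least_squares(residual, np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

On noiseless synthetic correspondences the solver recovers the pose used to generate them; in practice a robust variant (e.g. with RANSAC over the matches) would be used.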
In one embodiment, the spatial coordinates of the visual feature points described by the multiple feature descriptors, as required in s1033, are obtained as follows: each time the image acquisition unit collects an image frame, the spatial coordinates of each visual feature point in the collected image frame are recorded. In another embodiment, each time the image acquisition unit collects a key image frame, the spatial coordinates of each visual feature point in the collected key image frame are recorded. Then, after s1032, the spatial coordinates of the visual feature points described by the multiple feature descriptors are determined from the recorded spatial coordinates of the visual feature points corresponding to the second group of feature descriptors.
Specifically, the transfer matrix T is established according to the number of feature descriptors determined in s1031. In a specific embodiment, s1031 determines, from the second group of feature descriptors, 4 feature descriptors meeting the preset similarity condition with the first group of feature descriptors; then, based on the spatial coordinates X1, X2, X3, X4 of the visual feature points described by these 4 feature descriptors, the corresponding two-dimensional image coordinates of these visual feature points in the image frame collected at the current time, and the intrinsic matrix K of the image acquisition unit built into the mobile device, a 4x4 transfer matrix T representing the pose of the mobile device is established.
Finally, s1034 is executed: the transfer matrix established in s1033 is solved to obtain the pose of the mobile device at the current time. Specifically, solving yields the attitude R and the position t of the mobile device at the current time, which together constitute the pose of the mobile device at the current time.
In a further technical solution, the pose of the mobile device at the current time, as determined by the embodiment of the present invention, is used to revise the movement trajectory estimated based on inertial data and visual information.
A specific embodiment is as follows: during the movement in which the mobile device collects visual feature points, the inertial data and visual information of the mobile device during movement are collected, and the movement trajectory of the mobile device during movement is estimated based on the inertial data and the visual information. Specifically, an IMU (inertial measurement unit) carried on the mobile device collects the inertial data of the mobile device during movement. The IMU includes an accelerometer and a gyroscope; after the accelerometer and the gyroscope respectively measure the acceleration and the angular velocity of the mobile device during its movement, the position and attitude of the mobile device at each time are derived. The image acquisition unit carried on the mobile device collects the visual information of the mobile device during movement, and the derived position and attitude of the mobile device are further refined using the visual information, so as to obtain the movement trajectory of the mobile device during movement. Then, the pose of the mobile device at the current time determined in s103 replaces the pose at the corresponding time estimated based on the inertial data and the visual information, thereby revising the movement trajectory estimated based on the inertial data and the visual information. Specifically, the attitude R and position t of the mobile device at the current time, obtained by solving the transfer matrix established in s1033, replace the attitude R and position t at the corresponding time estimated based on the inertial data and the visual information, achieving the effect of revising the movement trajectory estimated based on the inertial data and the visual information.
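The revision step above can be sketched minimally (illustrative names; the real pipeline would also propagate the correction to neighbouring poses):

```python
def revise_trajectory(trajectory, time_index, recomputed_pose):
    """trajectory: list of poses (e.g. (R, t) pairs) estimated from
    inertial data and visual information, indexed by time. The entry at
    time_index is replaced by the pose recomputed at the closed loop;
    the input list is left unmodified."""
    revised = list(trajectory)
    revised[time_index] = recomputed_pose
    return revised
```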
Based on the same inventive concept, an embodiment of the present invention provides a mobile device which, as shown in Fig. 3, includes the following functional units:
an extraction unit 201, configured to, during the movement in which the mobile device collects visual feature points, extract a first group of feature descriptors of the visual feature points collected at the current time;
a detection unit 202, configured to carry out closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted before, respectively;
a determining unit 203, configured to, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determine the pose of the mobile device at the current time from the spatial coordinates of the visual feature points described by the second group of feature descriptors, where the second group of feature descriptors is one of the groups of feature descriptors extracted before.
Preferably, the detection unit 202 comprises:
a comparison subunit, configured to compare the first group of feature descriptors for similarity with each group of feature descriptors extracted before, respectively, and to determine, for each group of feature descriptors extracted before, the count of descriptors meeting the preset similarity condition with the first group of feature descriptors;
a judgment subunit, configured to judge, for each group of feature descriptors extracted before, whether the count of descriptors meeting the preset similarity condition with the first group of feature descriptors is greater than the preset number threshold, where a count of descriptors meeting the preset similarity condition greater than the preset number threshold indicates that a closed loop is detected.
Preferably, the comparison subunit is specifically configured to:
compare each feature descriptor in the first group of feature descriptors with each feature descriptor in every group of feature descriptors extracted before;
judge whether the vector angle between the compared feature descriptors is less than the preset angle threshold, where a vector angle below the preset angle threshold indicates that the compared feature descriptors meet the preset similarity condition.
Preferably, the determining unit 203 comprises:
a first determination subunit, configured to determine multiple feature descriptors in the second group of feature descriptors;
a second determination subunit, configured to determine the two-dimensional image coordinates, in the image frame collected at the current time, corresponding to the multiple feature descriptors;
a matrix establishment subunit, configured to establish, based on the spatial coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates, and the intrinsic matrix of the image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:

$$ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \arg\min_{T} \sum_{i} \left\| K T X_i - \hat{p}_i \right\| $$

where T is the transfer matrix, X_i denotes the spatial coordinates of the visual feature point described by each of the multiple feature descriptors, \hat{p}_i denotes the corresponding two-dimensional image coordinates of that feature descriptor in the image frame collected at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device;
a solving subunit, configured to solve the transfer matrix to obtain the pose of the mobile device at the current time.
Preferably, the mobile device further includes:
a collection unit, configured to collect the inertial data and visual information of the mobile device during movement;
a trajectory estimation unit, configured to estimate the movement trajectory of the mobile device during movement based on the inertial data and the visual information.
Preferably, the mobile device further includes:
a revision unit, configured to replace the pose at the corresponding time estimated based on the inertial data and the visual information with the determined pose of the mobile device at the current time, so as to revise the movement trajectory.
Preferably, each group of feature descriptors extracted before is specifically: one group extracted from a key image frame each time a key image frame is collected, where the key image frames are determined successively, at a preset spatial interval, from all image frames collected by the mobile device.
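Key-frame selection at a preset spatial interval can be sketched as follows, under the assumed reading that a frame becomes a key frame once the device has moved at least the interval away from the previous key frame (names are illustrative):

```python
import numpy as np

def select_key_frames(positions, spacing):
    """positions: per-frame device positions, shape (n, d);
    spacing: preset spatial interval. Returns the indices of key image
    frames, starting with frame 0."""
    keys = [0]
    for i in range(1, len(positions)):
        dist = np.linalg.norm(np.asarray(positions[i]) -
                              np.asarray(positions[keys[-1]]))
        if dist >= spacing:   # moved far enough from the last key frame
            keys.append(i)
    return keys
```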
One or more of the technical solutions provided in the embodiments of the present invention achieve at least the following technical effects or advantages:
The mobile device collects visual feature points during movement, extracts the first group of feature descriptors of the visual feature points collected at the current time, and carries out closed-loop detection between this group and every group of feature descriptors extracted before, so as to determine whether the mobile device has once again passed through the same area it passed through before. Then, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors extracted before, the pose of the mobile device at the current time is determined from the spatial coordinates of the visual feature points described by the second group of feature descriptors. Thus, when the mobile device passes through the same area again, its current pose can be recalculated from the spatial coordinates of the visual feature points recorded at the earlier time, revising the pose of the mobile device at the closed-loop location and eliminating the accumulated deviation of the pose estimate. This solves the technical problem that, without establishing an environment map, the mobile device accumulates errors in estimating its own pose during movement, which severely affects positioning precision; it effectively improves the precision of mobile-device positioning without establishing an environment map, thereby achieving accurate positioning without an environment map while guaranteeing both the real-time performance and the precision of mobile-device positioning.
In the description provided herein, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure an understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, the method of the disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components in an embodiment may be combined into one module or unit or component, and may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including any accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device according to embodiments of the present invention. The present invention may also be implemented as device or apparatus programs (for example, computer programs and computer program products) for carrying out some or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media, or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
Although preferred embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass these changes and modifications.

Claims (14)

1. A positioning method of a mobile device, characterized by comprising:
during movement in which the mobile device collects visual feature points, extracting a first group of feature descriptors of the visual feature points collected at a current time;
carrying out closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted before, respectively;
when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determining a pose of the mobile device at the current time from spatial coordinates of the visual feature points described by the second group of feature descriptors, wherein the second group of feature descriptors is one of the groups of feature descriptors extracted before.
2. The positioning method of a mobile device according to claim 1, characterized in that carrying out closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted before comprises:
comparing the first group of feature descriptors for similarity with each group of feature descriptors extracted before, respectively, and determining, for each group of feature descriptors extracted before, a count of descriptors meeting a preset similarity condition with the first group of feature descriptors;
judging, for each group of feature descriptors extracted before, whether the count of descriptors meeting the preset similarity condition with the first group of feature descriptors is greater than a preset number threshold, wherein a count of descriptors meeting the preset similarity condition greater than the preset number threshold indicates that a closed loop is detected.
3. The positioning method of a mobile device according to claim 2, characterized in that comparing the first group of feature descriptors for similarity with each group of feature descriptors extracted before and judging whether the preset similarity condition is met comprises:
comparing each feature descriptor in the first group of feature descriptors with each feature descriptor in every group of feature descriptors extracted before;
judging whether a vector angle between the compared feature descriptors is less than a preset angle threshold, wherein a vector angle below the preset angle threshold indicates that the compared feature descriptors meet the preset similarity condition.
4. The positioning method of a mobile device according to claim 1, 2 or 3, characterized in that determining the pose of the mobile device at the current time from the spatial coordinates of the visual feature points described by the second group of feature descriptors comprises:
determining multiple feature descriptors in the second group of feature descriptors;
determining two-dimensional image coordinates, in an image frame collected at the current time, corresponding to the multiple feature descriptors;
establishing, based on the spatial coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates, and an intrinsic matrix of an image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:

$$ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \arg\min_{T} \sum_{i} \left\| K T X_i - \hat{p}_i \right\| $$

wherein T is the transfer matrix, X_i denotes the spatial coordinates of the visual feature point described by each of the multiple feature descriptors, \hat{p}_i denotes the corresponding two-dimensional image coordinates of that feature descriptor in the image frame collected at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device;
solving the transfer matrix to obtain the pose of the mobile device at the current time.
5. The positioning method of a mobile device according to claim 1, characterized in that, during the movement in which the mobile device collects visual feature points, the method further comprises:
collecting inertial data and visual information of the mobile device during the movement;
estimating a movement trajectory of the mobile device during the movement based on the inertial data and the visual information.
6. The positioning method of a mobile device according to claim 5, characterized in that, after determining the pose of the mobile device at the current time from the spatial coordinates of the visual feature points described by the second group of feature descriptors, the method further comprises:
replacing the pose at the corresponding time estimated based on the inertial data and the visual information with the determined pose of the mobile device at the current time, so as to revise the movement trajectory.
7. The positioning method of a mobile device according to claim 1, characterized in that each group of feature descriptors extracted before is specifically: one group extracted from a key image frame each time a key image frame is collected, wherein the key image frames are determined successively, at a preset spatial interval, from all image frames collected by the mobile device.
8. A mobile device, characterized by comprising:
an extraction unit, configured to, during movement in which the mobile device collects visual feature points, extract a first group of feature descriptors of the visual feature points collected at a current time;
a detection unit, configured to carry out closed-loop detection between the first group of feature descriptors and each group of feature descriptors extracted before, respectively;
a determining unit, configured to, when a closed loop is detected based on the first group of feature descriptors and a second group of feature descriptors, determine a pose of the mobile device at the current time from spatial coordinates of the visual feature points described by the second group of feature descriptors, wherein the second group of feature descriptors is one of the groups of feature descriptors extracted before.
9. The mobile device according to claim 8, characterized in that the detection unit comprises:
a comparison subunit, configured to compare the first group of feature descriptors for similarity with each group of feature descriptors extracted before, respectively, and to determine, for each group of feature descriptors extracted before, a count of descriptors meeting a preset similarity condition with the first group of feature descriptors;
a judgment subunit, configured to judge, for each group of feature descriptors extracted before, whether the count of descriptors meeting the preset similarity condition with the first group of feature descriptors is greater than a preset number threshold, wherein a count of descriptors meeting the preset similarity condition greater than the preset number threshold indicates that a closed loop is detected.
10. The mobile device according to claim 9, characterized in that the comparison subunit is specifically configured to:
compare each feature descriptor in the first group of feature descriptors with each feature descriptor in every group of feature descriptors extracted before;
judge whether a vector angle between the compared feature descriptors is less than a preset angle threshold, wherein a vector angle below the preset angle threshold indicates that the compared feature descriptors meet the preset similarity condition.
11. The mobile device according to claim 8, 9 or 10, characterized in that the determining unit comprises:
a first determination subunit, configured to determine multiple feature descriptors in the second group of feature descriptors;
a second determination subunit, configured to determine two-dimensional image coordinates, in an image frame collected at the current time, corresponding to the multiple feature descriptors;
a matrix establishment subunit, configured to establish, based on the spatial coordinates of the visual feature points described by the multiple feature descriptors, the two-dimensional image coordinates, and an intrinsic matrix of an image acquisition unit built into the mobile device, a transfer matrix representing the pose of the mobile device:

$$ T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} = \arg\min_{T} \sum_{i} \left\| K T X_i - \hat{p}_i \right\| $$

wherein T is the transfer matrix, X_i denotes the spatial coordinates of the visual feature point described by each of the multiple feature descriptors, \hat{p}_i denotes the corresponding two-dimensional image coordinates of that feature descriptor in the image frame collected at the current time, K is the intrinsic matrix of the image acquisition unit built into the mobile device, R is the attitude of the mobile device, and t is the position of the mobile device;
a solving subunit, configured to solve the transfer matrix to obtain the pose of the mobile device at the current time.
12. The mobile device according to claim 8, characterized in that the mobile device further comprises:
a collection unit, configured to collect inertial data and visual information of the mobile device during the movement;
a trajectory estimation unit, configured to estimate a movement trajectory of the mobile device during the movement based on the inertial data and the visual information.
13. The mobile device according to claim 12, characterized in that the mobile device further comprises:
a revision unit, configured to replace the pose at the corresponding time estimated based on the inertial data and the visual information with the determined pose of the mobile device at the current time, so as to revise the movement trajectory.
14. The mobile device as claimed in claim 8, characterized in that each group of feature descriptors is extracted as follows: each time a key image frame is collected, one group is extracted from that key image frame, wherein the key image frames are determined successively, at a pre-set spatial interval, from all the image frames collected by the mobile device.
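The key-frame rule in claim 14 selects frames spaced at least a pre-set spatial interval apart, with one group of feature descriptors extracted per key frame. A minimal sketch of that selection rule (function and variable names are illustrative assumptions):

```python
import numpy as np

def select_keyframes(positions, interval):
    """Return indices of frames whose positions are at least `interval`
    apart from the previously selected key frame."""
    keys = [0]                      # the first frame starts the sequence
    for i in range(1, len(positions)):
        dist = np.linalg.norm(np.asarray(positions[i], dtype=float)
                              - np.asarray(positions[keys[-1]], dtype=float))
        if dist >= interval:        # moved far enough: new key image frame
            keys.append(i)
    return keys

# Device positions at each collected frame; interval of 0.5 distance units.
positions = [(0.0, 0.0), (0.2, 0.0), (0.6, 0.0), (0.7, 0.0), (1.3, 0.0)]
print(select_keyframes(positions, 0.5))  # -> [0, 2, 4]
```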
CN201610652818.0A 2016-08-10 2016-08-10 A kind of mobile device and its localization method Active CN106352877B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610652818.0A CN106352877B (en) 2016-08-10 2016-08-10 A kind of mobile device and its localization method
PCT/CN2017/096945 WO2018028649A1 (en) 2016-08-10 2017-08-10 Mobile device, positioning method therefor, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610652818.0A CN106352877B (en) 2016-08-10 2016-08-10 A kind of mobile device and its localization method

Publications (2)

Publication Number Publication Date
CN106352877A true CN106352877A (en) 2017-01-25
CN106352877B CN106352877B (en) 2019-08-23

Family

ID=57843765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610652818.0A Active CN106352877B (en) 2016-08-10 2016-08-10 A kind of mobile device and its localization method

Country Status (2)

Country Link
CN (1) CN106352877B (en)
WO (1) WO2018028649A1 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019228520A1 (en) * 2018-06-01 2019-12-05 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for indoor positioning
CN111383282B (en) * 2018-12-29 2023-12-01 杭州海康威视数字技术股份有限公司 Pose information determining method and device
CN110293563B (en) * 2019-06-28 2022-07-26 炬星科技(深圳)有限公司 Method, apparatus, and storage medium for estimating pose of robot
CN110334560B (en) * 2019-07-16 2023-04-07 山东浪潮科学研究院有限公司 Two-dimensional code positioning method and device
CN112284399B (en) * 2019-07-26 2022-12-13 北京魔门塔科技有限公司 Vehicle positioning method based on vision and IMU and vehicle-mounted terminal
CN112634360B (en) * 2019-10-08 2024-03-05 北京京东乾石科技有限公司 Visual information determining method, device, equipment and storage medium
CN111105459B (en) * 2019-12-24 2023-10-20 广州视源电子科技股份有限公司 Descriptive sub map generation method, positioning method, device, equipment and storage medium
CN114415698B (en) * 2022-03-31 2022-11-29 深圳市普渡科技有限公司 Robot, positioning method and device of robot and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120306847A1 (en) * 2011-05-31 2012-12-06 Honda Motor Co., Ltd. Online environment mapping
CN103869814A (en) * 2012-12-17 2014-06-18 联想(北京)有限公司 Terminal positioning and navigation method and mobile terminal
US20140324249A1 (en) * 2013-03-19 2014-10-30 Alberto Daniel Lacaze Delayed Telop Aid
CN104374395A (en) * 2014-03-31 2015-02-25 南京邮电大学 Graph-based vision SLAM (simultaneous localization and mapping) method
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
CN105783913A (en) * 2016-03-08 2016-07-20 中山大学 SLAM device integrating multiple vehicle-mounted sensors and control method of device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1569558A (en) * 2003-07-22 2005-01-26 中国科学院自动化研究所 Moving robot's vision navigation method based on image representation feature
US20080195316A1 (en) * 2007-02-12 2008-08-14 Honeywell International Inc. System and method for motion estimation using vision sensors
CN102109348B (en) * 2009-12-25 2013-01-16 财团法人工业技术研究院 System and method for positioning carrier, evaluating carrier gesture and building map
US9243916B2 (en) * 2013-02-21 2016-01-26 Regents Of The University Of Minnesota Observability-constrained vision-aided inertial navigation
CN106352877B (en) * 2016-08-10 2019-08-23 纳恩博(北京)科技有限公司 A kind of mobile device and its localization method


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018028649A1 (en) * 2016-08-10 2018-02-15 纳恩博(北京)科技有限公司 Mobile device, positioning method therefor, and computer storage medium
CN108364310A (en) * 2017-01-26 2018-08-03 三星电子株式会社 Solid matching method and equipment, image processing equipment and its training method
US11900628B2 (en) 2017-01-26 2024-02-13 Samsung Electronics Co., Ltd. Stereo matching method and apparatus, image processing apparatus, and training method therefor
CN107907131A (en) * 2017-11-10 2018-04-13 珊口(上海)智能科技有限公司 Alignment system, method and the robot being applicable in
US10436590B2 (en) 2017-11-10 2019-10-08 Ankobot (Shanghai) Smart Technologies Co., Ltd. Localization system and method, and robot using the same
WO2019219077A1 (en) * 2018-05-18 2019-11-21 京东方科技集团股份有限公司 Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
US11295472B2 (en) 2018-05-18 2022-04-05 Boe Technology Group Co., Ltd. Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
WO2019242628A1 (en) * 2018-06-19 2019-12-26 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for pose determination
US11781863B2 (en) 2018-06-19 2023-10-10 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for pose determination
CN110207537A (en) * 2019-06-19 2019-09-06 赵天昊 Fire Control Device and its automatic targeting method based on computer vision technique
WO2020258187A1 (en) * 2019-06-27 2020-12-30 深圳市大疆创新科技有限公司 State detection method and apparatus and mobile platform
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and its system for sweeping robot

Also Published As

Publication number Publication date
WO2018028649A1 (en) 2018-02-15
CN106352877B (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN106352877A (en) Moving device and positioning method thereof
CN107677285B (en) The path planning system and method for robot
CN105751230B (en) A kind of controlling of path thereof, paths planning method, the first equipment and the second equipment
CN106647742B (en) Movement routine method and device for planning
CN111274974B (en) Positioning element detection method, device, equipment and medium
CN108489482A (en) The realization method and system of vision inertia odometer
CN109298629B (en) System and method for guiding mobile platform in non-mapped region
CN106092104A (en) The method for relocating of a kind of Indoor Robot and device
CN110377025A (en) Sensor aggregation framework for automatic driving vehicle
CN110268354A (en) Update the method and mobile robot of map
CN104732187B (en) A kind of method and apparatus of image trace processing
CN109959377A (en) A kind of robot navigation's positioning system and method
CN110084832A (en) Correcting method, device, system, equipment and the storage medium of camera pose
CN107888828A (en) Space-location method and device, electronic equipment and storage medium
CN109540148A (en) Localization method and system based on SLAM map
CN107478220A (en) Unmanned plane indoor navigation method, device, unmanned plane and storage medium
EP3146729A1 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
CN103919663B (en) Blind person's outdoor environment cognitive method
CN105593877A (en) Object tracking based on dynamically built environment map data
CN110986969B (en) Map fusion method and device, equipment and storage medium
CN106643801A Detection method of positioning accuracy and electronic equipment
CN104931057B (en) A kind of any position localization method, the apparatus and system of robot
CN106708037A (en) Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
CN110533694A (en) Image processing method, device, terminal and storage medium
CN104320759A (en) Fixed landmark based indoor positioning system fingerprint database construction method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221229

Address after: 100192 203, floor 2, building A-1, North Territory, Dongsheng science and Technology Park, Zhongguancun, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee after: Weilan continental (Beijing) Technology Co.,Ltd.

Address before: Room C206, B-2 Building, North Territory of Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing, 100192

Patentee before: NINEBOT (BEIJING) TECH Co.,Ltd.
