CN109579844B - Positioning method and system - Google Patents

Positioning method and system

Info

Publication number
CN109579844B
CN109579844B (application CN201811473786.3A)
Authority
CN
China
Prior art keywords
main control
control subsystem
data
robot main
pose data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811473786.3A
Other languages
Chinese (zh)
Other versions
CN109579844A (en)
Inventor
于慧君
唐尚华
彭倍
马俊
曾志
周吴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201811473786.3A priority Critical patent/CN109579844B/en
Publication of CN109579844A publication Critical patent/CN109579844A/en
Application granted granted Critical
Publication of CN109579844B publication Critical patent/CN109579844B/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a positioning method and a positioning system, and relates to the technical field of robot positioning. The method comprises the steps that a chassis control chip obtains first pose data of a wheel type odometer model according to rotation speed data and angle data; the robot main control subsystem calculates second pose data of the visual odometer model according to the image data acquired by the monocular camera; the robot main control subsystem performs time stamp alignment and motion track alignment on the first pose data and the second pose data, and restores the optimal camera scale of the monocular camera; the robot main control subsystem performs scale recovery on the second pose data according to the optimal camera scale; and the robot main control subsystem fuses the first pose data with the second pose data after the scale recovery to obtain final pose data of the wheeled mobile robot. The method and the system disclosed by the application can solve the problems that the monocular camera has no scale and poor robustness in the positioning process, and can also solve the problems of accumulated error of the wheel type odometer and skidding of the wheels of the robot.

Description

Positioning method and system
Technical Field
The application relates to the technical field of robot positioning, in particular to a positioning method and a positioning system.
Background
Positioning navigation is one of the preconditions for realizing the intellectualization of the robot, and is a key factor for endowing the robot with sensing and action capabilities.
At present, traditional robot positioning usually relies on a wheel type odometer for calculation. The wheel type odometer has the advantage of very high positioning accuracy over short times and short distances, but as a dead-reckoning method it accumulates errors that it cannot eliminate from its own information, and it cannot overcome the influence of factors such as wheel slipping.
Disclosure of Invention
Accordingly, the present application is directed to a positioning method and system for improving the above-mentioned problems.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a positioning method applied to a positioning system of a wheeled mobile robot, where the positioning system includes an encoder, a gyroscope, a monocular camera, a chassis control chip, and a robot main control subsystem, and the encoder is mounted on a wheel of the wheeled mobile robot, and the method includes:
the chassis control chip obtains first pose data of a wheel type odometer model according to the rotating speed data of the wheels collected by the encoder and the angle data of the wheel type mobile robot collected by the gyroscope, wherein the first pose data comprises a first position and a first speed;
the robot main control subsystem calculates second pose data of the visual odometer model according to the image data acquired by the monocular camera, wherein the second pose data comprises a second position and a second speed;
the robot main control subsystem performs time stamp alignment and motion trail alignment on the first pose data and the second pose data, and restores the optimal camera scale of the monocular camera;
the robot main control subsystem performs scale recovery on the second pose data according to the optimal camera scale;
and the robot main control subsystem fuses the first pose data with the second pose data after scale recovery to obtain final pose data of the wheeled mobile robot.
Optionally, the method further comprises:
the robot main control subsystem carries out loop detection on each key frame image in the image data;
and when loop-back occurs, the robot main control subsystem performs repositioning calculation on the wheeled mobile robot.
Optionally, the robot main control subsystem performs loop detection on each key frame image in the image data, including:
the robot main control subsystem extracts a plurality of FAST corner points from each key frame image in the image data, and calculates BRIEF descriptors of each FAST corner point;
the robot main control subsystem calculates the similarity between the current frame and the previous key frame through a DBoW2 algorithm according to each FAST corner and the corresponding BRIEF descriptor;
and when the similarity is larger than a set threshold value, the robot main control subsystem judges that loop-back occurs.
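The loop check in these steps can be sketched with a toy similarity measure. The real system uses the DBoW2 bag-of-words score, so the Hamming-distance voting below is only an illustrative stand-in, and `max_dist` is an assumed tuning parameter, not a value from the patent:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def frame_similarity(desc_a, desc_b, max_dist=64):
    """Fraction of descriptors in frame A that have a close match in frame B.
    A toy stand-in for the DBoW2 similarity score used in the patent."""
    if not desc_a:
        return 0.0
    hits = sum(1 for da in desc_a
               if any(hamming(da, db) <= max_dist for db in desc_b))
    return hits / len(desc_a)
```

A loop would then be declared when `frame_similarity(...)` exceeds the set threshold, as in the step above.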
Optionally, there are multiple encoders corresponding one-to-one to the multiple wheels of the wheeled mobile robot, and the chassis control chip obtaining first pose data of the wheeled odometer model according to the rotation speed data of the wheels collected by the encoders and the angle data of the wheeled mobile robot collected by the gyroscope includes:
the chassis control chip obtains speed data and a first attitude angle of the wheeled mobile robot under a current coordinate system according to the rotating speed data of the wheels collected by each encoder;
the chassis control chip calculates a second attitude angle of the wheeled mobile robot under a global coordinate system according to a pre-established gyroscope error model;
the chassis control chip performs Kalman filtering fusion on the first attitude angle and the second attitude angle to obtain a final attitude angle of the wheeled mobile robot;
and the chassis control chip calculates the speed and position information of the wheeled mobile robot under a world coordinate system according to the speed data and the final attitude angle, obtaining the first pose data.
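The last step is plain dead reckoning: the body-frame speed is rotated into the world frame with the fused attitude angle and then integrated. A minimal sketch, assuming planar motion and a simple Euler update (both assumptions of this sketch, not statements of the patent's exact discretization):

```python
import math

def wheel_odometry_step(pose, vx, vy, yaw, dt):
    """One dead-reckoning update: rotate the body-frame velocity (vx, vy)
    into the world frame using the fused attitude angle `yaw`, then
    integrate over the timestep dt. pose = (x, y); returns the new (x, y)."""
    x, y = pose
    x += (vx * math.cos(yaw) - vy * math.sin(yaw)) * dt
    y += (vx * math.sin(yaw) + vy * math.cos(yaw)) * dt
    return x, y
```

For example, driving forward at 1 m/s with yaw = 90 degrees moves the robot along the world y-axis rather than the x-axis.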
Optionally, the number of the wheels is 3, the included angle between every two wheels is 120°, and the speed data of the wheeled mobile robot under the current coordinate system is:
v_x = R(ω_2 + ω_3 - 2ω_1)/3, v_y = √3·R(ω_3 - ω_2)/3, ω = R(ω_1 + ω_2 + ω_3)/(3L),
wherein v_x and v_y respectively represent the speeds along the x-axis and the y-axis in the current coordinate system, ω represents the rotational speed about its own geometric center in the current coordinate system, ω_1, ω_2 and ω_3 respectively represent the rotation speeds of the three wheels, L is the radius of the chassis of the wheeled mobile robot, and R is the radius of the wheels.
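Under one common convention for a 120°-spaced three-omni-wheel chassis (drive directions tangential to the chassis circle; the axis and sign conventions here are assumptions of this sketch, not taken from the patent), the forward kinematics described above can be computed as:

```python
import math

def omni3_forward_kinematics(w1, w2, w3, R, L):
    """Body-frame velocity of a three-omni-wheel chassis, wheels 120 deg apart.

    w1..w3: wheel angular speeds [rad/s]; R: wheel radius [m];
    L: chassis radius [m]. Returns (vx, vy, omega)."""
    vx = R * (w2 + w3 - 2.0 * w1) / 3.0
    vy = math.sqrt(3.0) * R * (w3 - w2) / 3.0
    omega = R * (w1 + w2 + w3) / (3.0 * L)
    return vx, vy, omega
```

As a sanity check, equal wheel speeds produce pure rotation about the geometric center with no translation.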
Optionally, the robot main control subsystem calculates second pose data of the visual odometer model according to the image data acquired by the monocular camera, including:
the robot main control subsystem extracts FAST corner points from the image data and performs LK optical flow tracking to obtain image feature point matching information;
the robot main control subsystem issues the image feature point matching information according to preset image feature point issuing frequency;
the robot main control subsystem sets the first frame of the image data as a key frame; for each other image frame, whether it is set as a key frame is determined according to the number of feature points of the last key frame image still tracked in the current image and the average parallax of those feature points;
the robot main control subsystem establishes a sliding window for image tracking;
the robot main control subsystem obtains a rotation matrix from the positional relation of the image frames in the sliding window through epipolar geometry, three-dimensional reconstruction and PnP calculation; from the obtained rotation matrix the yaw angle is selected as the initial rotation, the translation vector is restricted to translation in the horizontal plane along the x-axis and the y-axis, a reprojection error cost function is established, a 3-degree-of-freedom minimum reprojection error calculation is carried out, and rotation and translation matrices lacking scale between image key frames are obtained.
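The key-frame rule in the steps above can be sketched as a simple two-condition test; the threshold values below are illustrative assumptions, not values from the patent:

```python
def is_keyframe(tracked_count, mean_parallax,
                min_tracked=50, parallax_thresh=10.0):
    """Decide whether the current image becomes a key frame, following the
    two criteria in the text: too few feature points of the last key frame
    are still tracked, or the average parallax is large enough."""
    return tracked_count < min_tracked or mean_parallax > parallax_thresh
```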
Optionally, the robot main control subsystem performs timestamp alignment and motion track alignment on the first pose data and the second pose data, and restores an optimal camera scale of the monocular camera, including:
the robot main control subsystem performs timestamp alignment on the first pose data and the second pose data;
the robot main control subsystem performs track alignment on the first pose data and the second pose data after the time stamps are aligned;
and the robot main control subsystem obtains the optimal camera scale of the monocular camera by solving the least square solution of the loss function.
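Once the two trajectories are timestamp- and trajectory-aligned, the least-squares scale has a closed form: minimising Σ‖p_wheel - s·p_visual‖² over aligned position pairs gives s = Σ⟨p_wheel, p_visual⟩ / Σ‖p_visual‖². A minimal sketch of this idea (the patent's exact loss function may differ):

```python
def optimal_scale(wheel_xy, visual_xy):
    """Closed-form least-squares scale s minimising
    sum ||p_wheel - s * p_visual||^2 over aligned 2-D position pairs."""
    num = sum(wx * vx + wy * vy
              for (wx, wy), (vx, vy) in zip(wheel_xy, visual_xy))
    den = sum(vx * vx + vy * vy for (vx, vy) in visual_xy)
    return num / den
```

For example, if the visual trajectory is exactly half the size of the wheel-odometer trajectory, the recovered scale is 2.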
In a second aspect, an embodiment of the present application provides a positioning system applied to a wheeled mobile robot, including: the system comprises an encoder, a gyroscope, a monocular camera, a chassis control chip and a robot main control subsystem, wherein the encoder is arranged on a wheel of a wheeled mobile robot;
the chassis control chip is used for obtaining first pose data of a wheel type odometer model according to the rotating speed data of the wheels collected by the encoder and the angle data of the wheel type mobile robot collected by the gyroscope, wherein the first pose data comprises a first position and a first speed;
the robot main control subsystem is used for calculating second pose data of the visual odometer model according to the image data acquired by the monocular camera, and the second pose data comprises a second position and a second speed;
the robot main control subsystem is further used for performing time stamp alignment and motion track alignment on the first pose data and the second pose data, and recovering the optimal camera scale of the monocular camera;
the robot main control subsystem is further used for performing scale recovery on the second pose data according to the optimal camera scale;
the robot main control subsystem is further used for fusing the first pose data with the second pose data after scale recovery to obtain final pose data of the wheeled mobile robot.
Optionally, the robot main control subsystem is further configured to perform loop detection on each image key frame in the image data; and performing repositioning calculation on the wheeled mobile robot when loop-back occurs.
Optionally, there are multiple encoders corresponding one-to-one to the multiple wheels of the wheeled mobile robot;
the chassis control chip is used for obtaining speed data and a first attitude angle of the wheeled mobile robot under a current coordinate system according to the rotating speed data of the wheels collected by each encoder;
the chassis control chip is also used for calculating a second attitude angle of the wheeled mobile robot currently under the global coordinate system according to a pre-established gyroscope error model;
the chassis control chip is further used for carrying out Kalman filtering fusion on the first attitude angle and the second attitude angle to obtain a final attitude angle of the wheeled mobile robot;
the chassis control chip is further used for calculating speed and position information of the wheeled mobile robot under a world coordinate system according to the speed data and the final attitude angle, obtaining the first pose data.
Compared with the prior art, the application has the beneficial effects that:
the positioning method and the positioning system provided by the application aim at the problem that the wheel type odometer has accumulated errors and cannot be automatically eliminated, the influence of external factors such as wheel slipping and the like on positioning precision is further optimized through the idea of sensor fusion, the wheel type odometer and the monocular vision odometer are combined, the mutual defects are overcome, the respective advantages are reserved, the problem that the monocular camera has no scale and poor robustness in the positioning process can be overcome, and meanwhile, the problem of accumulated errors of the wheel type odometer can be solved.
Drawings
Fig. 1 is a schematic structural diagram of a positioning system according to a preferred embodiment of the present application.
Fig. 2 is a flowchart of a positioning method according to a preferred embodiment of the present application.
Fig. 3 is a flow chart of the sub-steps of step S101 in fig. 2.
Fig. 4 is a flow chart of the sub-steps of step S102 in fig. 2.
Fig. 5 is a flow chart of the sub-steps of step S103 in fig. 2.
Fig. 6 is a flow chart of the sub-steps of step S106 in fig. 2.
Reference numerals illustrate: a 110-encoder; 120-gyroscopes; 130-monocular camera; 140-chassis control chip; 150-a robot master control subsystem.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, a schematic structural diagram of a positioning system according to a preferred embodiment of the present application is provided, the positioning system is applied to a wheeled mobile robot, the positioning system includes an encoder 110, a gyroscope 120, a monocular camera 130, a chassis control chip 140 and a robot main control subsystem 150, the chassis control chip 140 is respectively connected to the encoder 110, the gyroscope 120 and the robot main control subsystem 150 for data communication or interaction, and the robot main control subsystem 150 is connected to the monocular camera 130 for data communication or interaction.
The chassis control chip 140 is configured to obtain first pose data of a wheel type odometer model according to rotational speed data of wheels collected by the encoder 110 and angle data of the wheel type mobile robot collected by the gyroscope 120.
In the embodiment of the present application, the encoder 110 is mounted on a wheel of the wheeled mobile robot, and is configured to collect rotational speed data of a corresponding wheel, and send the collected rotational speed data to the chassis control chip 140. The number of wheels of the wheeled mobile robot may be plural, and in this case, the number of encoders 110 may be plural, and the number of encoders 110 is equal to the number of wheels and is set in one-to-one correspondence. When the number of the encoders 110 is plural, each encoder 110 transmits the collected rotational speed data of the corresponding wheel to the chassis control chip 140. The gyroscope 120 is mounted on the chassis of the wheeled mobile robot, and is configured to collect angle data of the wheeled mobile robot, and send the collected angle data to the chassis control chip 140. The chassis control chip 140 performs operation according to the rotation speed data of the wheels collected by each encoder 110 and the angle data of the wheeled mobile robot collected by the gyroscope 120, so as to obtain first pose data of the wheeled odometer model, wherein the first pose data comprises a first position and a first speed.
Specifically, first, the chassis control chip 140 obtains speed data and a first attitude angle of the wheeled mobile robot under the current coordinate system according to the rotational speed data of the wheels collected by each encoder 110. For example, when the number of wheels and encoders 110 is 3, and the included angle between each two wheels is 120°, the speed data of the wheeled mobile robot in the current coordinate system is: v_x = R(ω_2 + ω_3 - 2ω_1)/3, v_y = √3·R(ω_3 - ω_2)/3, ω = R(ω_1 + ω_2 + ω_3)/(3L), wherein v_x and v_y respectively represent the speeds along the x-axis and the y-axis in the current coordinate system, ω represents the rotational speed about its own geometric center in the current coordinate system, ω_1, ω_2 and ω_3 respectively represent the rotation speeds of the three wheels, L is the radius of the chassis of the wheeled mobile robot, and R is the radius of the wheels.
The chassis control chip 140 establishes a gyroscope error model in advance from angle data detected by the gyroscope 120 multiple times and the corresponding actual angle data. After obtaining the angle data collected by the gyroscope 120, the chassis control chip 140 calculates the second attitude angle of the wheeled mobile robot currently under the global coordinate system according to this pre-established gyroscope error model, and at the same time applies a single degree-of-freedom constraint to the second attitude angle, i.e. selects the rotation angle (yaw angle) of the wheeled mobile robot around the vertical axis.
After the first attitude angle and the second attitude angle are obtained, the chassis control chip 140 performs Kalman filtering fusion on the first attitude angle and the second attitude angle to obtain a final attitude angle of the wheeled mobile robot. Finally, the chassis control chip 140 calculates the speed and position information of the wheeled mobile robot in the world coordinate system from the obtained speed data and the final attitude angle, thereby obtaining the first pose data, which comprises a first position and a first speed.
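A scalar, variance-weighted update conveys the idea of this Kalman filtering fusion step. It is a minimal stand-in: the actual filter state and noise models are not specified in the text, so the variances here are assumed inputs:

```python
def fuse_attitude(theta_enc, var_enc, theta_gyro, var_gyro):
    """One measurement-update step fusing the encoder-derived and
    gyroscope-derived yaw angles, weighted by their variances.
    Returns the fused angle and its (reduced) variance."""
    k = var_enc / (var_enc + var_gyro)       # gain toward the gyroscope estimate
    theta = theta_enc + k * (theta_gyro - theta_enc)
    var = (1.0 - k) * var_enc                # fusion shrinks the uncertainty
    return theta, var
```

With equal variances the fused angle is simply the mean of the two estimates, and the variance is halved.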
The robot main control subsystem 150 is configured to calculate second pose data of the visual odometer model from the image data acquired by the monocular camera 130, the second pose data including a second position and a second speed.
The monocular camera 130 is mounted on the wheeled mobile robot and connected to the robot main control subsystem 150. During movement, the monocular camera 130 acquires image data in the field of view of the wheeled mobile robot and transmits the acquired image data to the robot main control subsystem 150. After obtaining the image data, the robot main control subsystem 150 extracts FAST corner points (Features from Accelerated Segment Test) from the image data and performs LK optical flow tracking to obtain image feature point matching information, which it issues according to a preset image feature point issuing frequency. Then, the robot main control subsystem 150 sets the first frame of the image data as a key frame, determines for each subsequent frame whether it becomes a key frame according to the number of feature points of the previous key frame still tracked in the current image and their average parallax, and establishes a sliding window for image tracking from the key frames. Finally, the robot main control subsystem 150 obtains a rotation matrix from the positional relation of the image frames in the sliding window through epipolar geometry, three-dimensional reconstruction and PnP calculation, selects the yaw angle from it as the initial rotation, restricts the translation vector to the horizontal plane along the x-axis and the y-axis, establishes a reprojection error cost function, carries out a 3-degree-of-freedom minimum reprojection error calculation, and obtains the scale-less rotation and translation matrices between image key frames, namely the second pose data of the visual odometer model.
The robot main control subsystem 150 is further configured to perform time stamp alignment and motion trajectory alignment on the first pose data and the second pose data, and restore an optimal camera scale of the monocular camera 130.
Specifically, after the first pose data and the second pose data are obtained, the robot main control subsystem 150 performs timestamp alignment on the first pose data and the second pose data. The robot main control subsystem 150 then performs trajectory alignment on the timestamp-aligned first pose data and second pose data. Finally, the robot main control subsystem 150 obtains the optimal camera scale of the monocular camera 130 by solving the least-squares solution of the loss function. The optimal camera scale refers to the correspondence between distance in the image and actual distance; for example, every 100 pixels may correspond to an actual distance of 1 meter.
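Timestamp alignment typically interpolates the higher-rate wheel-odometer stream onto the camera timestamps. A linear-interpolation sketch (the text does not specify the interpolation scheme, so this is an illustrative choice):

```python
def interpolate_pose(t, t0, p0, t1, p1):
    """Linearly interpolate a 2-D position to timestamp t, bringing
    wheel-odometer samples onto a camera timestamp.
    Assumes t0 <= t <= t1; p0 and p1 are (x, y) tuples."""
    a = (t - t0) / (t1 - t0)
    return (p0[0] + a * (p1[0] - p0[0]),
            p0[1] + a * (p1[1] - p0[1]))
```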
The robot main control subsystem 150 is further configured to scale recover the second pose data according to the optimal camera scale.
After obtaining the optimal camera scale of the monocular camera 130, the robot main control subsystem 150 may recalculate the translation matrix of the key frames in the world coordinate system and the 3D coordinates of the feature points in the visual odometer model according to the optimal camera scale.
The robot main control subsystem 150 is further configured to fuse the first pose data with the scale restored second pose data to obtain final pose data of the wheeled mobile robot.
After the second pose data is scale restored, the robot main control subsystem 150 fuses the first pose data with the scale restored second pose data to obtain final pose data of the wheeled mobile robot.
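The fusion step can be sketched as a weighted blend of the two aligned positions. The description later notes that the fusion parameters are adjusted from feature-match counts and the distance between the two models' estimates; this fixed-weight sketch omits that adjustment:

```python
def fuse_pose(p_wheel, p_visual, w_visual=0.5):
    """Blend the wheel-odometer position with the scale-recovered
    visual-odometer position using a tunable weight w_visual in [0, 1]."""
    return tuple(w_visual * v + (1.0 - w_visual) * w
                 for w, v in zip(p_wheel, p_visual))
```

With `w_visual=0.5` the result is the simple average of the two estimates.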
The robot main control subsystem 150 is also used for loop detection of each image key frame in the image data and repositioning calculation of the wheeled mobile robot when loop back occurs.
In the embodiment of the application, the specific steps of loop detection and repositioning are as follows:
step 1, the robot main control subsystem 150 extracts a plurality of FAST corner points from each key frame image in the image data, and calculates a BRIEF descriptor of each FAST corner point, and the BRIEF descriptor has a rotation scale invariance and a high calculation speed, so that the robot main control subsystem is suitable for real-time feature point matching.
Step 2, the robot main control subsystem 150 calculates the similarity between the current frame and the previous key frame according to each FAST corner and the corresponding BRIEF descriptor by using a DBoW2 algorithm.
Step 3, when the similarity between the current frame and a previous key frame is greater than the set threshold, the robot main control subsystem 150 determines that loop-back occurs. At this time, the robot main control subsystem 150 performs position calculation from the loop candidate frame (i.e. the key frame whose loop-detected similarity exceeds the set threshold) and corrects the positions of the other key frames in the loop.
In step 3, the specific calculation steps are as follows:
step S31, the characteristic points of the current frame are matched with the BRIEF descriptors of the frame detected by the loop and the frames nearby the frame, and the matching criterion is the Hamming distance of the corresponding descriptors.
Step S32, carrying out RANSAC mismatching elimination on the obtained matching points.
Step S33, the known 3D world coordinates of the matched points are solved through a PnP algorithm to obtain the position of the current frame in the world coordinate system, eliminating the accumulated error.
And step S34, according to the matched characteristic points of the key frames in the loop, establishing a minimized reprojection error optimization function, optimizing to obtain a rotation matrix and a translation matrix after repositioning each key frame, and updating the 3D coordinates of the characteristic points.
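The reprojection error minimised in steps S33 and S34 can be written down for the 3-degree-of-freedom (yaw plus planar translation) parameterisation used earlier. The pinhole intrinsics below, and the choice of the camera's z-axis as the rotation axis, are illustrative assumptions of this sketch, not values or conventions stated in the patent:

```python
import math

def reprojection_error(points3d, observations, yaw, tx, ty,
                       fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Sum of squared reprojection errors for a 3-DoF pose (yaw, tx, ty)
    under a pinhole model. points3d: list of (X, Y, Z) world points;
    observations: list of (u, v) pixel measurements, paired by index."""
    err = 0.0
    c, s = math.cos(yaw), math.sin(yaw)
    for (X, Y, Z), (u, v) in zip(points3d, observations):
        # rotate about the assumed vertical axis, then translate in the plane
        Xc = c * X - s * Y + tx
        Yc = s * X + c * Y + ty
        Zc = Z
        u_hat = fx * Xc / Zc + cx   # pinhole projection
        v_hat = fy * Yc / Zc + cy
        err += (u - u_hat) ** 2 + (v - v_hat) ** 2
    return err
```

An optimizer would search (yaw, tx, ty) to minimise this cost; at the true pose the error is zero for noise-free observations.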
In the embodiment of the application, a single loop detection and repositioning process can eliminate the accumulated error of the wheel type odometer up to that point, ensuring positioning accuracy. The fusion parameters are adjusted according to the number of matched feature point pairs and the distance between the positions given by the two odometer models, which enhances the robustness of the system; the fusion of the wheel type odometer and the visual odometer overcomes the limitations of the wheel type odometer and further improves the positioning accuracy of the system.
Referring to fig. 2, a flowchart of a positioning method applied to the positioning system shown in fig. 1 according to a preferred embodiment of the present application will be described below.
Step S101, the chassis control chip obtains first pose data of a wheel type odometer model according to rotation speed data of wheels collected by an encoder and angle data of a wheel type mobile robot collected by a gyroscope.
Referring to fig. 3, step S101 includes the following sub-steps:
and S1011, the chassis control chip obtains the speed data and the first attitude angle of the wheeled mobile robot under the current coordinate system according to the rotation speed data of the wheels acquired by each encoder.
In sub-step S1012, the chassis control chip calculates a second attitude angle of the wheeled mobile robot currently under the global coordinate system according to the pre-established gyroscope error model.
And step S1013, the chassis control chip performs Kalman filtering fusion on the first attitude angle and the second attitude angle to obtain a final attitude angle of the wheeled mobile robot.
And step S1014, the chassis control chip calculates the speed and position information of the wheeled mobile robot under the world coordinate system according to the speed data and the final attitude angle to obtain first attitude data.
Step S102, the robot main control subsystem calculates second pose data of the visual odometer model according to the image data acquired by the monocular camera.
Referring to fig. 4, step S102 includes the following sub-steps:
and S1021, extracting FAST corner points from the image data by the robot main control subsystem, and tracking LK optical flow to obtain image feature point matching information.
In sub-step S1022, the robot main control subsystem issues image feature point matching information according to a preset image feature point issuing frequency.
In sub-step S1023, the robot main control subsystem sets the first frame of the image data as a key frame; each other image frame is set as a key frame or not according to the number of feature points of the last key frame image tracked in the current image and the average parallax of the feature points.
Sub-step S1024, the robot main control subsystem establishes a sliding window for image tracking.
In sub-step S1025, the robot main control subsystem calculates the positional relation of the image frames in the sliding window to obtain a rotation matrix, selects the yaw angle as the initial rotation, restricts the translation vector to the horizontal plane along the x-axis and the y-axis, establishes a reprojection error cost function, and performs a 3-degree-of-freedom minimum reprojection error calculation to obtain the scale-less rotation and translation matrices between the image key frames.
Step S103, the robot main control subsystem performs time stamp alignment and motion track alignment on the first pose data and the second pose data, and recovers the optimal camera scale of the monocular camera.
Referring to fig. 5, step S103 includes the following sub-steps:
sub-step S1031, the robot master subsystem time-aligns the first pose data with the second pose data stamp.
Sub-step S1032, the robot main control subsystem track aligns the time-stamp aligned first pose data with the second pose data.
Sub-step S1033, the robot main control subsystem obtains the optimal camera scale of the monocular camera by solving the least squares solution of the loss function.
Step S104, the robot main control subsystem performs scale recovery on the second pose data according to the optimal camera scale.
Step S105, the robot main control subsystem fuses the first pose data with the second pose data after scale recovery to obtain final pose data of the wheeled mobile robot.
Step S106, the robot main control subsystem carries out loop detection on each key frame image in the image data.
Referring to fig. 6, step S106 includes the following sub-steps:
In sub-step S1061, the robot main control subsystem extracts a plurality of FAST corner points from each key frame image in the image data and calculates a BRIEF descriptor for each FAST corner point.
In sub-step S1062, the robot main control subsystem calculates the similarity between the current frame and the previous key frames through the DBoW2 algorithm, according to each FAST corner point and its corresponding BRIEF descriptor.
In sub-step S1063, when the similarity is greater than a set threshold, the robot main control subsystem determines that a loop closure has occurred.
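DBoW2 accelerates this comparison with a vocabulary tree; as a simplified stand-in (an assumption, not the DBoW2 algorithm itself), the similarity of two frames can be sketched as the fraction of BRIEF descriptors in one frame that find a close Hamming-distance match in the other, with `max_dist` and `threshold` as illustrative parameters:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors (byte tuples)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def frame_similarity(desc_a, desc_b, max_dist=40):
    """Fraction of descriptors in frame A with a close match in frame B."""
    if not desc_a or not desc_b:
        return 0.0
    matched = sum(1 for da in desc_a
                  if min(hamming(da, db) for db in desc_b) <= max_dist)
    return matched / len(desc_a)

def is_loop(desc_a, desc_b, threshold=0.6):
    """Declare a loop closure when frame similarity exceeds the set threshold."""
    return frame_similarity(desc_a, desc_b) > threshold
```

Brute-force matching like this is O(N·M) per frame pair, which is exactly the cost the bag-of-words vocabulary in DBoW2 is designed to avoid.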
Step S107, the robot main control subsystem judges whether a loop closure has occurred; if so, step S108 is executed.
Step S108, the robot main control subsystem performs relocalization calculation on the wheeled mobile robot.
In summary, the positioning method and system provided by the application address the problem that the wheel odometer accumulates errors that cannot be eliminated automatically, and that external factors such as wheel slip degrade positioning accuracy. Through the idea of sensor fusion, the wheel odometer and the monocular visual odometer are combined so that their respective shortcomings are compensated and their respective advantages retained: the lack of scale and poor robustness of the monocular camera 130 during positioning are overcome, while the accumulated error of the wheel odometer is eliminated.
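The claims below recite a three-omni-wheel chassis with wheels 120 degrees apart; the published formula image is not reproduced in this text, so the sketch below assumes the standard rolling-constraint model R·ω_i = −sin(θ_i)·v_x + cos(θ_i)·v_y + L·ω with θ_i ∈ {0°, 120°, 240°}. Sign and angle conventions may differ from the patent's figure:

```python
import numpy as np

def omni3_forward(w1, w2, w3, L, R):
    """Recover chassis velocity (vx, vy, omega) from three omni-wheel speeds.

    Assumes wheel i sits at angle theta_i on the chassis rim (radius L, wheel
    radius R) and rolls according to R*w_i = -sin(t)*vx + cos(t)*vy + L*omega.
    """
    thetas = np.deg2rad([0.0, 120.0, 240.0])
    A = np.array([[-np.sin(t), np.cos(t), L] for t in thetas])
    return np.linalg.solve(A, R * np.array([w1, w2, w3]))
```

Under this convention, equal wheel speeds produce a pure rotation about the chassis centre, matching the intuition that three identical tangential pushes spin the robot in place.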
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (2)

1. A positioning method applied to a positioning system of a wheeled mobile robot, the positioning system comprising an encoder, a gyroscope, a monocular camera, a chassis control chip and a robot main control subsystem, the encoder being mounted on a wheel of the wheeled mobile robot, the method comprising:
the chassis control chip obtains first pose data of a wheel type odometer model according to the rotating speed data of the wheels collected by the encoder and the angle data of the wheel type mobile robot collected by the gyroscope, wherein the first pose data comprises a first position and a first speed;
the robot main control subsystem calculates second pose data of the visual odometer model according to the image data acquired by the monocular camera, wherein the second pose data comprises a second position and a second speed;
the robot main control subsystem performs time stamp alignment and motion trajectory alignment on the first pose data and the second pose data, and recovers the optimal camera scale of the monocular camera;
the robot main control subsystem performs scale recovery on the second pose data according to the optimal camera scale;
the robot main control subsystem fuses the first pose data with the second pose data after scale recovery to obtain final pose data of the wheeled mobile robot;
the robot main control subsystem performs loop closure detection on each key frame image in the image data: the robot main control subsystem extracts a plurality of FAST corner points from each key frame image in the image data and calculates a BRIEF descriptor for each FAST corner point; the robot main control subsystem calculates the similarity between the current frame and the previous key frames through the DBoW2 algorithm according to each FAST corner point and its corresponding BRIEF descriptor; and when the similarity is greater than a set threshold, the robot main control subsystem determines that a loop closure has occurred;
when a loop closure occurs, the robot main control subsystem performs relocalization calculation on the wheeled mobile robot;
wherein the number of encoders is plural and corresponds one-to-one with the wheels of the wheeled mobile robot, and the step in which the chassis control chip obtains the first pose data of the wheel type odometer model according to the wheel rotating speed data collected by the encoders and the angle data of the wheeled mobile robot collected by the gyroscope comprises:
the chassis control chip obtains speed data and a first attitude angle of the wheeled mobile robot under a current coordinate system according to the rotating speed data of the wheels collected by each encoder;
the chassis control chip calculates a second attitude angle of the wheeled mobile robot under a global coordinate system according to a pre-established gyroscope error model;
the chassis control chip performs Kalman filtering fusion on the first attitude angle and the second attitude angle to obtain a final attitude angle of the wheeled mobile robot;
the chassis control chip calculates the speed and position information of the wheeled mobile robot under a world coordinate system according to the speed data and the final attitude angle, so as to obtain the first pose data;
wherein the number of the wheels is 3, the included angle between every two adjacent wheels is 120 degrees, and the speed data of the wheeled mobile robot under the current coordinate system is as follows:
wherein v_x and v_y respectively represent the speeds along the x axis and the y axis in the current coordinate system, ω represents the rotational speed about the robot's own geometric center in the current coordinate system, ω_1, ω_2 and ω_3 respectively represent the rotation speeds of the three wheels, L is the radius of the chassis of the wheeled mobile robot, and R is the radius of the wheels;
wherein the step in which the robot main control subsystem calculates the second pose data of the visual odometer model according to the image data acquired by the monocular camera comprises:
the robot main control subsystem extracts FAST corner points from the image data and performs LK optical flow tracking to obtain image feature point matching information;
the robot main control subsystem issues the image feature point matching information according to preset image feature point issuing frequency;
the robot main control subsystem sets the first frame of the image data as a key frame, and for each subsequent image frame, determines whether it is set as a key frame according to the number of feature points of the previous key frame image tracked by the current image and the average parallax of those feature points;
the robot main control subsystem establishes a sliding window for image tracking;
the robot main control subsystem calculates the positional relation of each image frame in the sliding window through epipolar geometry, three-dimensional reconstruction and the PnP algorithm to obtain a rotation matrix; for the obtained rotation matrix, the yaw angle is taken as the initial rotation, the translation is measured as its x-axis and y-axis components in the horizontal plane, a reprojection error cost function is established, and a 3-degree-of-freedom minimum reprojection error calculation is performed to obtain scale-free rotation and translation matrices between image key frames;
wherein the step in which the robot main control subsystem performs time stamp alignment and motion trajectory alignment on the first pose data and the second pose data and recovers the optimal camera scale of the monocular camera comprises:
the robot main control subsystem performs time stamp alignment on the first pose data and the second pose data;
the robot main control subsystem performs track alignment on the first pose data and the second pose data after the time stamps are aligned;
and the robot main control subsystem obtains the optimal camera scale of the monocular camera by solving the least square solution of the loss function.
2. A positioning system for use with a wheeled mobile robot, comprising: the system comprises an encoder, a gyroscope, a monocular camera, a chassis control chip and a robot main control subsystem, wherein the encoder is arranged on a wheel of a wheeled mobile robot;
the chassis control chip is used for obtaining first pose data of a wheel type odometer model according to the rotating speed data of the wheels collected by the encoder and the angle data of the wheel type mobile robot collected by the gyroscope, wherein the first pose data comprises a first position and a first speed;
the robot main control subsystem is used for calculating second pose data of the visual odometer model according to the image data acquired by the monocular camera, and the second pose data comprises a second position and a second speed;
the robot main control subsystem is further used for performing time stamp alignment and motion trajectory alignment on the first pose data and the second pose data, and recovering the optimal camera scale of the monocular camera;
the robot main control subsystem is further used for performing scale recovery on the second pose data according to the optimal camera scale;
the robot main control subsystem is further used for fusing the first pose data with the second pose data after scale recovery to obtain final pose data of the wheeled mobile robot;
the robot main control subsystem performs loop closure detection on each key frame image in the image data: the robot main control subsystem extracts a plurality of FAST corner points from each key frame image in the image data and calculates a BRIEF descriptor for each FAST corner point; the robot main control subsystem calculates the similarity between the current frame and the previous key frames through the DBoW2 algorithm according to each FAST corner point and its corresponding BRIEF descriptor; and when the similarity is greater than a set threshold, the robot main control subsystem determines that a loop closure has occurred;
when a loop closure occurs, the robot main control subsystem performs relocalization calculation on the wheeled mobile robot;
wherein the number of encoders is plural and corresponds one-to-one with the wheels of the wheeled mobile robot, and the step in which the chassis control chip obtains the first pose data of the wheel type odometer model according to the wheel rotating speed data collected by the encoders and the angle data of the wheeled mobile robot collected by the gyroscope comprises:
the chassis control chip obtains speed data and a first attitude angle of the wheeled mobile robot under a current coordinate system according to the rotating speed data of the wheels collected by each encoder;
the chassis control chip calculates a second attitude angle of the wheeled mobile robot under a global coordinate system according to a pre-established gyroscope error model;
the chassis control chip performs Kalman filtering fusion on the first attitude angle and the second attitude angle to obtain a final attitude angle of the wheeled mobile robot;
the chassis control chip calculates the speed and position information of the wheeled mobile robot under a world coordinate system according to the speed data and the final attitude angle, so as to obtain the first pose data;
wherein the number of the wheels is 3, the included angle between every two adjacent wheels is 120 degrees, and the speed data of the wheeled mobile robot under the current coordinate system is as follows:
wherein v_x and v_y respectively represent the speeds along the x axis and the y axis in the current coordinate system, ω represents the rotational speed about the robot's own geometric center in the current coordinate system, ω_1, ω_2 and ω_3 respectively represent the rotation speeds of the three wheels, L is the radius of the chassis of the wheeled mobile robot, and R is the radius of the wheels;
wherein the step in which the robot main control subsystem calculates the second pose data of the visual odometer model according to the image data acquired by the monocular camera comprises:
the robot main control subsystem extracts FAST corner points from the image data and performs LK optical flow tracking to obtain image feature point matching information;
the robot main control subsystem issues the image feature point matching information according to preset image feature point issuing frequency;
the robot main control subsystem sets the first frame of the image data as a key frame, and for each subsequent image frame, determines whether it is set as a key frame according to the number of feature points of the previous key frame image tracked by the current image and the average parallax of those feature points;
the robot main control subsystem establishes a sliding window for image tracking;
the robot main control subsystem calculates the positional relation of each image frame in the sliding window through epipolar geometry, three-dimensional reconstruction and the PnP algorithm to obtain a rotation matrix; for the obtained rotation matrix, the yaw angle is taken as the initial rotation, the translation is measured as its x-axis and y-axis components in the horizontal plane, a reprojection error cost function is established, and a 3-degree-of-freedom minimum reprojection error calculation is performed to obtain scale-free rotation and translation matrices between image key frames;
wherein the step in which the robot main control subsystem performs time stamp alignment and motion trajectory alignment on the first pose data and the second pose data and recovers the optimal camera scale of the monocular camera comprises:
the robot main control subsystem performs time stamp alignment on the first pose data and the second pose data;
the robot main control subsystem performs track alignment on the first pose data and the second pose data after the time stamps are aligned;
and the robot main control subsystem obtains the optimal camera scale of the monocular camera by solving the least square solution of the loss function.
CN201811473786.3A 2018-12-04 2018-12-04 Positioning method and system Active CN109579844B (en)


Publications (2)

Publication Number Publication Date
CN109579844A CN109579844A (en) 2019-04-05
CN109579844B true CN109579844B (en) 2023-11-21







Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant