CN113390408A - Robot positioning method and device, robot and storage medium - Google Patents
Robot positioning method and device, robot and storage medium
Info
- Publication number
- CN113390408A (application number CN202110744785.3A)
- Authority
- CN
- China
- Prior art keywords
- robot
- data
- image data
- odometer
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C22/00—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
- G01C22/02—Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers by conversion into electric waveforms and subsequent integration, e.g. using tachometer generator
- G01C22/025—Differential odometers
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Manipulator (AREA)
Abstract
The application is applicable to the technical field of robots and provides a robot positioning method and device, a robot and a storage medium, wherein the method comprises the following steps: acquiring first pose data of the robot according to image data acquired by two monocular cameras with different fields of view; acquiring second pose data of the robot according to odometer data acquired by a wheeled odometer; and fusing the first pose data and the second pose data to obtain the real pose data of the robot. By adopting two monocular cameras with different fields of view, the application enlarges the visual range of the robot; after the first pose data obtained from the two monocular cameras is fused with the second pose data obtained from the wheeled odometer, accurate pose data of the robot can be obtained, thereby improving the positioning accuracy of the robot.
Description
Technical Field
The present application relates to the field of robot technology, and in particular, to a robot positioning method and apparatus, a robot, and a storage medium.
Background
Positioning and mapping of a robot require an accurate estimate of the robot's pose. Vision sensors are low-cost and information-rich, and are therefore commonly used for visual positioning and mapping of robots. At present, most visual positioning is based on a monocular camera and lacks real-world scale. Positioning schemes that combine a camera with an Inertial Measurement Unit (IMU) exist, but in practice the IMU tends to exhibit zero-velocity drift when the robot is stationary, which corrupts the robot's pose, and it vibrates when the robot moves, which makes the IMU pre-integration values inaccurate and degrades positioning accuracy.
Disclosure of Invention
The embodiments of the present application provide a robot positioning method and device, a robot and a storage medium, aiming to solve the problems that visual positioning based on a monocular camera lacks real-world scale, and that an Inertial Measurement Unit (IMU) is prone to zero-velocity drift when the robot is stationary, corrupting the robot's pose, and vibrates when the robot moves, making the IMU pre-integration values inaccurate and degrading positioning accuracy.
A first aspect of an embodiment of the present application provides a robot positioning method, including:
acquiring first pose data of the robot according to image data acquired by two monocular cameras with different fields of view;
acquiring second pose data of the robot according to odometer data acquired by a wheeled odometer; and
fusing the first pose data and the second pose data to obtain real pose data of the robot.
A second aspect of an embodiment of the present application provides a robot positioning device, including:
a first pose acquisition unit, configured to acquire first pose data of the robot according to image data acquired by two monocular cameras with different fields of view;
a second pose acquisition unit, configured to acquire second pose data of the robot according to odometer data acquired by a wheeled odometer; and
a pose fusion unit, configured to fuse the first pose data and the second pose data to obtain real pose data of the robot.
A third aspect of embodiments of the present application provides a robot, including a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the robot positioning method according to the first aspect of embodiments of the present application when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, which, when executed by a processor, implements the steps of the robot positioning method according to the first aspect of embodiments of the present application.
According to the robot positioning method provided by the first aspect of the embodiments of the present application, first pose data of the robot is obtained from image data acquired by two monocular cameras with different fields of view; second pose data of the robot is obtained from odometer data acquired by a wheeled odometer; and the real pose data of the robot is obtained by fusing the first pose data and the second pose data. Adopting two monocular cameras with different fields of view enlarges the visual range of the robot, and fusing the first pose data obtained from the two monocular cameras with the second pose data obtained from the wheeled odometer yields accurate pose data of the robot, thereby improving its positioning accuracy.
It can be understood that the beneficial effects of the second to fourth aspects are the same as those described for the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a first schematic structural diagram of a robot provided in an embodiment of the present application;
Fig. 2 is a first flowchart of a robot positioning method according to an embodiment of the present disclosure;
fig. 3 is a second flowchart of a robot positioning method according to an embodiment of the present disclosure;
fig. 4 is a third flowchart illustrating a robot positioning method according to an embodiment of the present disclosure;
FIG. 5 is a timing diagram of image data and odometry data provided by an embodiment of the application;
fig. 6 is a fourth flowchart illustrating a robot positioning method according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a robot positioning device according to an embodiment of the present disclosure;
fig. 8 is a second structural schematic diagram of a robot provided in the embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The embodiment of the application provides a robot positioning method, which can be executed by a processor of a robot when a corresponding computer program is run, and can obtain accurate pose data of the robot by fusing first pose data obtained based on two monocular cameras with different visual field ranges and second pose data obtained based on a wheel type odometer, so that the positioning accuracy of the robot is improved.
In application, the robot may be any type of robot with moving rollers. Robots with moving rollers exist among various types of robots such as service robots, entertainment robots, production robots and agricultural robots, for example bionic education robots, bionic usher robots, bionic dancing robots and bionic nanny robots.
In application, the robot is provided with two monocular cameras and a wheeled odometer. The two monocular cameras are arranged at any positions that can collect image data in the moving direction of the robot. The monocular cameras and the wheeled odometer may be integrated into the robot as part of it, or may be external devices mounted on the robot and communicatively connected to it. The wheeled odometer is arranged on the driving rollers: a robot generally comprises driving rollers that drive it to move and may also comprise driven rollers. When the robot comprises a plurality of driving rollers, the wheeled odometer is arranged at the center of the driving rollers, i.e. the geometric center of the polygon whose corner points are the geometric centers of the individual driving rollers, the polygon being parallel to the robot's plane of motion.
As shown in fig. 1, which exemplarily shows a schematic structural diagram of a robot 100: one monocular camera 1 is arranged on the front side of the upper portion of the robot 100, the other monocular camera 2 is arranged on the rear side of the upper portion, and the wheeled odometer 3 is arranged on a driving roller of the chassis of the robot 100. The dashed arrow indicates the movement direction of the robot 100. The coordinate system drawn at the monocular camera 1 is the camera coordinate system: its X axis points to the right side of the robot 100, its Y axis points along the direction of gravity, and its Z axis points in the movement direction of the robot 100. The coordinate system drawn at the wheeled odometer 3 is the wheeled-odometer coordinate system: its X axis points in the movement direction of the robot 100, its Y axis points to the left side of the robot 100, and its Z axis is opposite to the direction of gravity.
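For illustration, a minimal sketch of the fixed rotation implied by these axis definitions is given below (Python/NumPy). The matrix follows directly from the frame conventions of fig. 1; the function name and the lever arm t_odom_cam with its default value are hypothetical placeholders, not values taken from the application.

```python
import numpy as np

# Rotation from the camera frame (X right, Y along gravity, Z forward)
# to the wheeled-odometer frame (X forward, Y left, Z up), per fig. 1.
# Columns are the camera axes expressed in odometer coordinates.
R_odom_cam = np.array([
    [0.0,  0.0, 1.0],   # camera Z (forward) -> odometer +X
    [-1.0, 0.0, 0.0],   # camera X (right)   -> odometer -Y
    [0.0, -1.0, 0.0],   # camera Y (down)    -> odometer -Z
])

def cam_to_odom(p_cam, t_odom_cam=np.zeros(3)):
    """Map a 3D point from camera coordinates to wheeled-odometer coordinates.

    t_odom_cam is a (roughly measured) lever arm between the two frames;
    rotation and lever arm are the extrinsic parameters refined later in the EKF.
    """
    return R_odom_cam @ np.asarray(p_cam) + t_odom_cam

# Example: a point 1 m straight ahead of the camera lies 1 m ahead of the odometer origin.
print(cam_to_odom([0.0, 0.0, 1.0]))  # -> [1. 0. 0.]
```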
As shown in fig. 2, the robot positioning method provided in the embodiment of the present application includes the following steps S201 to S203:
step S201, acquiring first position data of the robot according to image data acquired by two monocular cameras with different visual field ranges.
In application, the two monocular cameras acquire image data synchronously, i.e. the timestamps of the image data acquired by the two cameras are synchronized and each camera acquires one frame of image data at the same moment. After the image data has been acquired synchronously by the two monocular cameras, one set of first pose data of the robot is obtained from the image data acquired by one monocular camera and another set of first pose data is obtained from the image data acquired by the other monocular camera; that is, two sets of first pose data of the robot can be obtained from the image data acquired by the two monocular cameras respectively.
Step S202, acquiring second pose data of the robot according to odometer data collected by the wheeled odometer.
Step S203, fusing the first pose data and the second pose data to obtain real pose data of the robot.
In application, because visual positioning based on a monocular camera lacks real-world scale, odometer data is collected by the wheeled odometer while the robot moves, and second pose data that reflects real-world scale is obtained from the odometer data. The two sets of first pose data obtained from the two monocular cameras are then fused with the second pose data obtained from the wheeled odometer to finally obtain the real pose data of the robot.
As shown in fig. 3, in one embodiment, step S201 includes the following steps S301 to S304:
step S301, for each monocular camera, feature point detection is carried out on first frame image data collected by the monocular camera, and a preset number of feature points in the first frame image data are extracted.
In application, for each monocular camera, feature point detection is performed separately on the first frame of image data acquired by that camera, so that a preset number of feature points are extracted from the first frame of image data of each camera. Any corner detection method may be adopted to extract the feature points, for example corner detection based on grayscale images, on binary images or on contour curves; specifically, the feature points may be extracted with the FAST (Features from Accelerated Segment Test) corner detector. Before feature point extraction, the first frame of image data may be pre-processed, for example by histogram equalization, to adjust its contrast. The preset number can be set according to actual needs, for example any value between 70 and 150, specifically 100.
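A minimal sketch of this step, assuming OpenCV's FAST detector and histogram equalization; the detector threshold of 20 and the preset number of 100 are illustrative choices within the ranges mentioned above, and the function name is hypothetical.

```python
import cv2

PRESET_NUM = 100  # preset number of feature points (the text suggests 70-150)

def detect_features(gray_frame, max_points=PRESET_NUM):
    """Equalize contrast, run FAST corner detection and keep the strongest corners."""
    equalized = cv2.equalizeHist(gray_frame)             # histogram equalization
    fast = cv2.FastFeatureDetector_create(threshold=20)  # FAST corner detector
    keypoints = fast.detect(equalized, None)
    # Keep the max_points strongest responses so every camera starts from
    # the same preset number of feature points.
    keypoints = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:max_points]
    return cv2.KeyPoint_convert(keypoints)               # (N, 2) array of pixel coordinates
```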
Step S302, performing feature point tracking on the (k+1)-th frame of image data acquired by the monocular camera, where k = 1, 2, …, n and n is a positive integer.
In application, after feature point detection has been performed on the first frame of image data acquired by each monocular camera, feature point tracking is performed continuously on each frame of image data subsequently acquired by that camera. Any corner tracking algorithm, such as optical flow, may be used to track the preset number of feature points through the subsequent frames, as shown in the sketch after the next embodiment.
In one embodiment, step S302 is followed by:
If the number of feature points tracked in the (k+1)-th frame of image data is smaller than the preset number, feature point detection is performed on the (k+1)-th frame of image data and additional feature points are extracted from it, such that the sum of the number of feature points tracked in the (k+1)-th frame of image data and the number of feature points newly extracted from it equals the preset number.
In application, for each monocular camera, if the number of feature points tracked in any frame of image data after the first frame does not reach the preset number, additional feature points are extracted from that frame with the same feature point detection method used for the first frame, so as to replenish the missing feature points; the sum of the number of tracked feature points and the number of newly extracted feature points in that frame then equals the preset number.
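A minimal sketch of the tracking-and-replenishment step, assuming OpenCV's pyramidal Lucas-Kanade optical flow and the same FAST detector as used on the first frame; the window size, pyramid depth, masking radius and FAST threshold are illustrative assumptions, not values from the application.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts, max_points=100, fast_threshold=20):
    """Track feature points from frame k to frame k+1 with pyramidal Lucas-Kanade
    optical flow and, if tracks were lost, replenish the set back to the preset
    number with the same FAST detector used on the first frame."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts.astype(np.float32).reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3)
    kept = curr_pts.reshape(-1, 2)[status.ravel() == 1]

    missing = max_points - len(kept)
    if missing > 0:
        # Re-detect only as many corners as were lost, away from the kept tracks.
        mask = np.full(curr_gray.shape, 255, dtype=np.uint8)
        for x, y in kept:
            cv2.circle(mask, (int(x), int(y)), 10, 0, -1)
        fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)
        candidates = sorted(fast.detect(curr_gray, mask),
                            key=lambda kp: kp.response, reverse=True)[:missing]
        if candidates:
            kept = np.vstack([kept, cv2.KeyPoint_convert(candidates)])
    return kept
```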
Step S303, constructing a reprojection-error cost function from co-visible feature points among the multiple frames of image data acquired by the monocular camera;
Step S304, solving the reprojection-error cost function for the minimum reprojection error to obtain three-dimensional coordinates, in a world coordinate system, of the preset number of feature points in each frame of image data acquired by the monocular camera, which serve as the first pose data of the robot.
In application, for each monocular camera, a least-squares optimization problem is constructed from the co-visible feature points among the multiple frames of image data collected by that camera, i.e. a cost function that minimizes the reprojection error of the feature points. Solving this cost function yields the solution with the minimum reprojection error and thereby the three-dimensional coordinates, in the world coordinate system, of the preset number of feature points in each frame of image data; these three-dimensional coordinates are the first pose data of the robot.
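A simplified sketch of such a least-squares problem, assuming a pinhole camera with intrinsic matrix K, per-frame poses parameterized as rotation vectors and translations, and SciPy's least_squares solver; the parameter packing, variable names and initial guess x0 are illustrative and stand in for the full optimization used in practice.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, observations, K, n_frames, n_points):
    """Residuals of the reprojection-error cost over co-visible feature points.

    params packs, per frame, a rotation vector and a translation (camera pose),
    followed by the 3D world coordinates of every feature point.
    observations is a list of (frame_idx, point_idx, u, v) pixel measurements.
    """
    poses = params[:n_frames * 6].reshape(n_frames, 6)
    points = params[n_frames * 6:].reshape(n_points, 3)
    res = []
    for f, j, u, v in observations:
        R = Rotation.from_rotvec(poses[f, :3]).as_matrix()
        pc = R @ points[j] + poses[f, 3:]          # point in the camera frame
        uv = (K @ pc)[:2] / pc[2]                  # pinhole projection
        res.extend([uv[0] - u, uv[1] - v])         # reprojection error
    return np.array(res)

# solution = least_squares(reprojection_residuals, x0,
#                          args=(observations, K, n_frames, n_points))
```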
As shown in fig. 4, in one embodiment, step S202 includes the following steps S401 and S402:
step S401, aligning the time stamp of the image data collected by the monocular camera with the time stamp of the odometer data collected by the wheel type odometer, and obtaining the odometer data synchronized with the time stamp of the image data.
In application, the two monocular cameras have the same frame rate and acquire image data synchronously, so the timestamps of the image data acquired by the two cameras are the same. The frame rate of the monocular cameras and the sampling frequency of the wheeled odometer, however, usually differ, so the timestamps of the image data need to be aligned with the timestamps of the odometer data to obtain odometer data whose acquisition time is synchronized with the image data.
As shown in fig. 5, which exemplarily shows a timing diagram of image data and odometer data, the dashed lines mark the odometer data synchronized with the timestamps of the image data.
In one embodiment, step S401 includes:
If the frame rate of the monocular cameras differs from the sampling frequency of the wheeled odometer, the timestamps of the image data collected by the monocular cameras are aligned with the timestamps of the odometer data collected by the wheeled odometer to obtain odometer data synchronized with the timestamps of the image data.
In application, the frame rate of the monocular cameras and the sampling frequency of the wheeled odometer may also be the same; timestamp alignment is performed only when they differ, and is not needed when they are the same.
In one embodiment, step S401 includes:
The odometer data collected by the wheeled odometer is linearly interpolated according to the timestamps of the image data collected by the monocular cameras, so that the timestamps of the image data are aligned with the timestamps of the odometer data and odometer data synchronized with the timestamps of the image data is obtained.
In application, linear interpolation may be employed to align the time stamp of the image data with the time stamp of the odometry data.
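A minimal sketch of the interpolation, assuming the odometer yields an unwrapped heading angle and a planar position sampled at its own timestamps; the names and array layouts are illustrative.

```python
import numpy as np

def interpolate_odometry(image_stamps, odom_stamps, odom_yaw, odom_xy):
    """Linearly interpolate wheeled-odometer samples onto the image timestamps.

    odom_yaw is the unwrapped heading angle and odom_xy the planar position,
    both sampled at odom_stamps; the returned arrays are synchronized with the
    camera frames, as indicated by the dashed lines in fig. 5.
    """
    yaw = np.interp(image_stamps, odom_stamps, odom_yaw)
    x = np.interp(image_stamps, odom_stamps, odom_xy[:, 0])
    y = np.interp(image_stamps, odom_stamps, odom_xy[:, 1])
    return yaw, np.column_stack([x, y])
```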
Step S402, pre-integrating the odometer data synchronized with the timestamps of the image data to obtain an integrated value of the wheeled odometer as the second pose data of the robot.
In application, after the timestamps of the image data are aligned with the timestamps of the odometer data, the synchronized odometer data is pre-integrated to obtain the integrated value of the odometer data collected by the wheeled odometer, and this integrated value serves as the second pose data that reflects real-world scale.
In one embodiment, the expression of the second pose data is:

$R^{G}_{O_{k+1}} = R^{G}_{O_{k}}\,\Delta R, \qquad p^{G}_{O_{k+1}} = p^{G}_{O_{k}} + R^{G}_{O_{k}}\,\Delta p$

where $R^{G}_{O_{k+1}}$ denotes the integrated value of the rotation angle of the wheeled odometer synchronized with the timestamp of the (k+1)-th frame of image data, $R^{G}_{O_{k}}$ the integrated value of the rotation angle synchronized with the timestamp of the k-th frame of image data, ΔR the change of the integrated rotation angle of the wheeled odometer between the timestamps of the (k+1)-th and k-th frames of image data, $p^{G}_{O_{k+1}}$ the integrated value of the displacement of the wheeled odometer synchronized with the timestamp of the (k+1)-th frame of image data, $p^{G}_{O_{k}}$ the integrated value of the displacement synchronized with the timestamp of the k-th frame of image data, Δp the change of the integrated displacement of the wheeled odometer between the two timestamps, G the world coordinate system, and O the wheeled-odometer coordinate system.
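A minimal sketch of one propagation step of this expression; delta_R and delta_p are assumed to be the rotation and displacement increments integrated from the wheel encoders between the two synchronized timestamps, expressed in the odometer frame at time k.

```python
import numpy as np

def propagate_odometry(R_prev, p_prev, delta_R, delta_p):
    """Compose the wheeled-odometer pre-integration between two image timestamps:
    R_{k+1} = R_k @ dR,   p_{k+1} = p_k + R_k @ dp
    (increments expressed in the odometer frame at time k)."""
    R_next = R_prev @ delta_R
    p_next = p_prev + R_prev @ delta_p
    return R_next, p_next
```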
As shown in fig. 6, in one embodiment, step S203 includes the following steps S601 to S603:
step S601, projecting the three-dimensional coordinates of the preset number of feature points in the world coordinate system back to the image coordinate system to obtain the two-dimensional coordinates of the preset number of feature points in the image coordinate system.
In application, for each monocular camera, the three-dimensional coordinates in the world coordinate system of the preset number of feature points in each frame of image data, i.e. the first pose data, are projected back into the image coordinate system, yielding the two-dimensional coordinates of the preset number of feature points in each frame of image data in the image coordinate system.
Step S602, obtaining differences between the two-dimensional coordinates, in the image coordinate system, of the preset number of feature points and the two-dimensional coordinates, in the image coordinate system, of the preset number of feature points extracted from the first frame of image data.
In application, after the two-dimensional coordinates of the preset number of feature points in each frame of image data are obtained in step S601, their differences from the two-dimensional coordinates, in the image coordinate system, of the preset number of feature points extracted from the first frame of image data in step S301 are further obtained. Specifically, for each feature point, the difference between its two-dimensional coordinates in the (k+1)-th frame of image data and its two-dimensional coordinates in the first frame of image data is computed.
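A minimal sketch of steps S601 and S602 combined, assuming a pinhole intrinsic matrix K and a world-to-camera pose (R_cw, t_cw); all names are illustrative.

```python
import numpy as np

def observation_residuals(points_world, R_cw, t_cw, K, tracked_uv):
    """Project the feature points' world coordinates back into the image and
    subtract the pixel coordinates tracked from the first frame, giving the
    observation used in the filter update."""
    pc = (R_cw @ points_world.T).T + t_cw        # world -> camera frame
    uv = (K @ pc.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    return (uv - tracked_uv).ravel()             # per-point 2D differences
```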
Step S603, performing extended Kalman filter fusion with the second pose data as the estimate and the differences as the observation variables to obtain the real pose data of the robot in the wheeled-odometer coordinate system.
In application, after the differences for the feature points in the image data acquired by each monocular camera have been obtained according to step S602, the second pose data and the differences are used as the estimate and the observation variables of an Extended Kalman Filter (EKF) fusion, and after the data fusion the real pose data of the robot in the wheeled-odometer coordinate system is finally obtained.
In one embodiment, step S603 includes:
Extended Kalman filter fusion is performed under a constraint condition, with the second pose data as the estimate and the differences as the observation variables, to obtain the real pose data of the robot in the wheeled-odometer coordinate system, wherein the constraint condition is that the X-axis component and the Y-axis component of the rotation of the robot in the wheeled-odometer coordinate system are limited to 0 and that the Z-axis component of the real pose data is fixed.
In application, since the robot generally moves on a plane, a plane constraint can be introduced as the constraint condition of the extended Kalman filter fusion to further improve positioning accuracy. The pose data estimated from the image data is three-dimensional and introduces height drift when tightly coupled with the odometer data, so a plane constraint is added to eliminate this drift: the three-dimensional rotation of the robot solved in the wheeled-odometer coordinate system under the constraint can only be about the Z axis, its X-axis and Y-axis components are 0, and the Z-axis component of the pose data is held fixed.
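A simplified sketch of one constrained EKF update, under the assumption of a minimal state [px, py, pz, roll, pitch, yaw] in the wheeled-odometer frame; the full filter state would also carry the camera-odometer extrinsics (see the next embodiment), and all names are illustrative.

```python
import numpy as np

def ekf_update_planar(x, P, z, z_pred, H, R_meas):
    """One extended-Kalman-filter update followed by the plane constraint.

    x holds at least [px, py, pz, roll, pitch, yaw]; z is the stacked
    pixel-difference observation from both cameras, z_pred its prediction
    from the current state, H the measurement Jacobian and R_meas the
    measurement noise covariance."""
    pz_fixed = x[2]                         # height before the update
    y = z - z_pred                          # innovation
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P

    # Plane constraint: rotation only about Z, height held fixed.
    x[3] = 0.0                              # X-axis rotation component -> 0
    x[4] = 0.0                              # Y-axis rotation component -> 0
    x[2] = pz_fixed                         # Z component of the pose fixed
    return x, P
```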
In one embodiment, step S603 includes:
Extended Kalman filter fusion is performed with the second pose data as the estimate, the differences as the observation variables, and the relative rotation angle and relative displacement between the wheeled-odometer coordinate system and the camera coordinate system as extrinsic parameters, to obtain the real pose data of the robot in the wheeled-odometer coordinate system.
In one embodiment, because the wheeled-odometer coordinate system differs from the camera coordinate system, a coordinate transformation between the two is needed to align their coordinates and realize coordinate calibration. The relative rotation angle and relative displacement between the wheeled-odometer coordinate system and the camera coordinate system can be obtained by rough manual measurement and used as extrinsic parameters; these extrinsic parameters are added to the state variables of the EKF for online calibration and updating, which improves the accuracy of the final real pose data.
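For illustration, a possible state layout with the extrinsics appended is sketched below; the numerical values are assumed placeholders for a rough hand measurement, not values from the application.

```python
import numpy as np

# Illustrative EKF state layout with the camera-odometer extrinsics appended,
# so the roughly measured values are refined online during fusion:
# [ x  y  z  roll  pitch  yaw | ext_roll  ext_pitch  ext_yaw  ext_tx  ext_ty  ext_tz ]
x0 = np.zeros(12)
x0[6:9] = np.deg2rad([0.0, -90.0, 90.0])   # rough hand-measured rotation (placeholder)
x0[9:12] = [0.10, 0.00, 0.35]              # rough lever arm in metres (placeholder)

P0 = np.diag([1e-4] * 6 + [1e-2] * 6)      # larger initial uncertainty on the extrinsics
```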
In one embodiment, step S603 is followed by:
and outputting the real pose data.
In application, the robot can perform offline positioning and navigation according to its real pose data, or output the real pose data to other devices so that users of those devices can know the robot's real pose data in real time and perform positioning and navigation control, for example controlling the robot to move to a designated position along a designated path. The other device may be any terminal device capable of wireless communication with the robot, for example a mobile phone, a tablet, a personal computer, a smart band, a personal digital assistant or a (cloud) server.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The embodiment of the present application further provides a robot positioning device for executing the method steps of the foregoing method embodiments. The device may be a virtual appliance in the robot run by the robot's processor, or the robot itself.
As shown in fig. 7, a robot positioning device 200 provided in the embodiment of the present application includes:
a first pose acquisition unit 201, configured to acquire first pose data of the robot according to image data acquired by two monocular cameras with different fields of view;
a second pose acquisition unit 202, configured to acquire second pose data of the robot according to odometer data acquired by a wheeled odometer; and
a pose fusion unit 203, configured to fuse the first pose data and the second pose data to obtain real pose data of the robot.
In one embodiment, the robot positioning device further comprises an output unit for outputting the real pose data.
In application, each module in the above apparatus may be a software program module, may be implemented by different logic circuits integrated in a processor or by a separate physical component connected to the processor, or may be implemented by a plurality of distributed processors.
As shown in fig. 8, an embodiment of the present application further provides a robot 300, including: at least one processor 301 (only one processor is shown in fig. 8), a memory 302, and a computer program 303 stored in the memory 302 and executable on the at least one processor 301, the steps in the various robot positioning method embodiments described above being implemented when the computer program 303 is executed by the processor 301.
In application, the robot may include, but is not limited to, a processor and a memory. Fig. 8 is merely an example of the robot and does not constitute a limitation; the robot may include more or fewer components than shown, combine some components, or include different components, for example two monocular cameras, a wheeled odometer, a moving component, input/output devices and network access devices. The moving component may include moving rollers, steering gears, motors, drivers and the like that drive the robot to move. The input/output devices may include human-computer interaction devices and a display screen for displaying the robot's working parameters. The network access device may include a communication module for the robot to communicate with a user terminal.
In an Application, the Processor may be a Central Processing Unit (CPU), and the Processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field-Programmable Gate arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The processor may specifically be a PID controller.
In application, the memory may in some embodiments be an internal storage unit of the robot, such as a hard disk or memory of the robot. In other embodiments the memory may also be an external storage device of the robot, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the robot. The memory may also include both an internal storage unit and an external storage device of the robot. The memory is used for storing the operating system, application programs, a boot loader, data and other programs, such as the program code of the computer program, and may also be used to temporarily store data that has been output or is to be output.
In application, the Display screen may be a Thin Film Transistor Liquid Crystal Display (TFT-LCD), a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), a Quantum Dot Light Emitting Diode (QLED) Display screen, a seven-segment or eight-segment digital tube, or the like.
In application, the communication module may be any device capable of direct or indirect wired or wireless communication with a client according to actual needs. For example, the communication module may provide communication solutions applied to network devices, including Wireless Local Area Network (WLAN) (e.g. Wi-Fi), Bluetooth, Zigbee, mobile communication networks, Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC) and Infrared (IR). The communication module may include an antenna, which may have a single element or be an antenna array with multiple elements. The communication module can receive electromagnetic waves through the antenna, frequency-modulate and filter the signals, and send the processed signals to the processor; it can also receive signals to be transmitted from the processor, frequency-modulate and amplify them, and radiate them as electromagnetic waves through the antenna.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/modules, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and reference may be made to the part of the embodiment of the method specifically, and details are not described here.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules to perform all or part of the above described functions. Each functional module in the embodiments may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module, and the integrated module may be implemented in a form of hardware, or in a form of software functional module. In addition, specific names of the functional modules are only used for distinguishing one functional module from another, and are not used for limiting the protection scope of the application. The specific working process of the modules in the system may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments can be implemented.
The embodiments of the present application provide a computer program product, which, when running on a robot, enables the robot to implement the steps in the above-mentioned method embodiments.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and implements the steps of the above method embodiments when executed by a processor. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the robot, a recording medium, computer memory, Read-Only Memory (ROM), Random-Access Memory (RAM), electrical carrier signals, telecommunication signals and software distribution media, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (12)
1. A robot positioning method, comprising:
acquiring first pose data of the robot according to image data acquired by two monocular cameras with different fields of view;
acquiring second pose data of the robot according to odometer data acquired by a wheeled odometer; and
fusing the first pose data and the second pose data to obtain real pose data of the robot.
2. The robot positioning method according to claim 1, wherein the acquiring first pose data of the robot according to image data acquired by two monocular cameras with different fields of view comprises:
for each monocular camera, performing feature point detection on a first frame of image data acquired by the monocular camera, and extracting a preset number of feature points from the first frame of image data;
performing feature point tracking on a (k+1)-th frame of image data acquired by the monocular camera, where k = 1, 2, …, n and n is an arbitrary positive integer;
constructing a reprojection-error cost function from co-visible feature points among multiple frames of image data acquired by the monocular camera; and
solving the reprojection-error cost function for the minimum reprojection error to obtain three-dimensional coordinates, in a world coordinate system, of the preset number of feature points in each frame of image data acquired by the monocular camera, as the first pose data of the robot.
3. The robot positioning method according to claim 2, comprising, after the feature point tracking is performed on the (k+1)-th frame of image data acquired by the monocular camera:
if the number of feature points tracked in the (k+1)-th frame of image data is smaller than the preset number, performing feature point detection on the (k+1)-th frame of image data and extracting additional feature points from the (k+1)-th frame of image data, such that the sum of the number of feature points tracked in the (k+1)-th frame of image data and the number of feature points extracted from the (k+1)-th frame of image data equals the preset number.
4. The robot positioning method according to claim 2, wherein the fusing the first pose data and the second pose data to obtain the real pose data of the robot comprises:
projecting the three-dimensional coordinates, in the world coordinate system, of the preset number of feature points back into an image coordinate system to obtain two-dimensional coordinates of the preset number of feature points in the image coordinate system;
obtaining differences between the two-dimensional coordinates, in the image coordinate system, of the preset number of feature points and two-dimensional coordinates, in the image coordinate system, of the preset number of feature points extracted from the first frame of image data; and
performing extended Kalman filter fusion with the second pose data as an estimate and the differences as observation variables to obtain the real pose data of the robot in a wheeled-odometer coordinate system.
5. The robot positioning method according to claim 4, wherein the performing extended Kalman filter fusion with the second pose data as an estimate and the differences as observation variables to obtain the real pose data of the robot in the wheeled-odometer coordinate system comprises:
performing extended Kalman filter fusion under a constraint condition, with the second pose data as the estimate and the differences as the observation variables, to obtain the real pose data of the robot in the wheeled-odometer coordinate system, wherein the constraint condition is that the X-axis component and the Y-axis component of the rotation of the robot in the wheeled-odometer coordinate system are limited to 0 and that the Z-axis component of the real pose data is fixed.
6. The robot positioning method according to claim 4, wherein the performing extended Kalman filter fusion with the second pose data as an estimate and the differences as observation variables to obtain the real pose data of the robot in the wheeled-odometer coordinate system comprises:
performing extended Kalman filter fusion with the second pose data as the estimate, the differences as the observation variables, and a relative rotation angle and a relative displacement between the wheeled-odometer coordinate system and a camera coordinate system as extrinsic parameters, to obtain the real pose data of the robot in the wheeled-odometer coordinate system.
7. The robot positioning method according to any one of claims 1 to 6, wherein the acquiring second pose data of the robot according to odometer data acquired by a wheeled odometer comprises:
aligning timestamps of the image data acquired by the monocular cameras with timestamps of the odometer data acquired by the wheeled odometer to obtain odometer data synchronized with the timestamps of the image data; and
pre-integrating the odometer data synchronized with the timestamps of the image data to obtain an integrated value of the wheeled odometer as the second pose data of the robot.
8. The robot positioning method according to claim 7, wherein the aligning timestamps of the image data acquired by the monocular cameras with timestamps of the odometer data acquired by the wheeled odometer to obtain odometer data synchronized with the timestamps of the image data comprises:
linearly interpolating the odometer data acquired by the wheeled odometer according to the timestamps of the image data acquired by the monocular cameras, so that the timestamps of the image data are aligned with the timestamps of the odometer data and odometer data synchronized with the timestamps of the image data is obtained.
9. The robot positioning method according to claim 7, wherein the expression of the second pose data is:

$R^{G}_{O_{k+1}} = R^{G}_{O_{k}}\,\Delta R, \qquad p^{G}_{O_{k+1}} = p^{G}_{O_{k}} + R^{G}_{O_{k}}\,\Delta p$

wherein $R^{G}_{O_{k+1}}$ denotes the integrated value of the rotation angle of the wheeled odometer synchronized with the timestamp of the (k+1)-th frame of image data, $R^{G}_{O_{k}}$ the integrated value of the rotation angle synchronized with the timestamp of the k-th frame of image data, ΔR the change of the integrated rotation angle of the wheeled odometer between the timestamps of the (k+1)-th and k-th frames of image data, $p^{G}_{O_{k+1}}$ the integrated value of the displacement of the wheeled odometer synchronized with the timestamp of the (k+1)-th frame of image data, $p^{G}_{O_{k}}$ the integrated value of the displacement synchronized with the timestamp of the k-th frame of image data, Δp the change of the integrated displacement of the wheeled odometer between the two timestamps, G the world coordinate system, and O the wheeled-odometer coordinate system.
10. A robot positioning device, comprising:
a first pose acquisition unit, configured to acquire first pose data of the robot according to image data acquired by two monocular cameras with different fields of view;
a second pose acquisition unit, configured to acquire second pose data of the robot according to odometer data acquired by a wheeled odometer; and
a pose fusion unit, configured to fuse the first pose data and the second pose data to obtain real pose data of the robot.
11. A robot, comprising a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the robot positioning method according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the robot positioning method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110744785.3A CN113390408A (en) | 2021-06-30 | 2021-06-30 | Robot positioning method and device, robot and storage medium |
PCT/CN2021/126714 WO2023273057A1 (en) | 2021-06-30 | 2021-10-27 | Robot positioning method and apparatus, robot and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110744785.3A CN113390408A (en) | 2021-06-30 | 2021-06-30 | Robot positioning method and device, robot and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113390408A (en) | 2021-09-14 |
Family
ID=77624929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110744785.3A Pending CN113390408A (en) | 2021-06-30 | 2021-06-30 | Robot positioning method and device, robot and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113390408A (en) |
WO (1) | WO2023273057A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114485653A (en) * | 2022-02-23 | 2022-05-13 | 广州高新兴机器人有限公司 | Positioning method, device, medium and equipment based on fusion of vision and wheel type odometer |
CN115388884A (en) * | 2022-08-17 | 2022-11-25 | 南京航空航天大学 | Joint initialization method for intelligent body pose estimator |
CN115493579A (en) * | 2022-09-02 | 2022-12-20 | 松灵机器人(深圳)有限公司 | Positioning correction method, positioning correction device, mowing robot and storage medium |
WO2023273057A1 (en) * | 2021-06-30 | 2023-01-05 | 深圳市优必选科技股份有限公司 | Robot positioning method and apparatus, robot and storage medium |
CN118505756A (en) * | 2024-07-18 | 2024-08-16 | 比亚迪股份有限公司 | Pose generation method and device, electronic equipment, storage medium, product and vehicle |
CN115493579B (en) * | 2022-09-02 | 2024-10-22 | 深圳库犸科技有限公司 | Positioning correction method, positioning correction device, mowing robot and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116149327B (en) * | 2023-02-08 | 2023-10-20 | 广州番禺职业技术学院 | Real-time tracking prospective path planning system, method and device |
CN116372941B (en) * | 2023-06-05 | 2023-08-15 | 北京航空航天大学杭州创新研究院 | Robot parameter calibration method and device and wheeled robot |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108682036A (en) * | 2018-04-27 | 2018-10-19 | 腾讯科技(深圳)有限公司 | Pose determines method, apparatus and storage medium |
CN109579844A (en) * | 2018-12-04 | 2019-04-05 | 电子科技大学 | Localization method and system |
CN110009681A (en) * | 2019-03-25 | 2019-07-12 | 中国计量大学 | A kind of monocular vision odometer position and posture processing method based on IMU auxiliary |
CN111811506A (en) * | 2020-09-15 | 2020-10-23 | 中国人民解放军国防科技大学 | Visual/inertial odometer combined navigation method, electronic equipment and storage medium |
CN112734852A (en) * | 2021-03-31 | 2021-04-30 | 浙江欣奕华智能科技有限公司 | Robot mapping method and device and computing equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3451288A1 (en) * | 2017-09-04 | 2019-03-06 | Universität Zürich | Visual-inertial odometry with an event camera |
CN107808407B (en) * | 2017-10-16 | 2020-12-18 | 亿航智能设备(广州)有限公司 | Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium |
CN108242079B (en) * | 2017-12-30 | 2021-06-25 | 北京工业大学 | VSLAM method based on multi-feature visual odometer and graph optimization model |
JP7173471B2 (en) * | 2019-01-31 | 2022-11-16 | 株式会社豊田中央研究所 | 3D position estimation device and program |
CN112212852B (en) * | 2019-07-12 | 2024-06-21 | 浙江未来精灵人工智能科技有限公司 | Positioning method, mobile device and storage medium |
CN111161337B (en) * | 2019-12-18 | 2022-09-06 | 南京理工大学 | Accompanying robot synchronous positioning and composition method in dynamic environment |
CN113390408A (en) * | 2021-06-30 | 2021-09-14 | 深圳市优必选科技股份有限公司 | Robot positioning method and device, robot and storage medium |
2021
- 2021-06-30 CN CN202110744785.3A patent/CN113390408A/en active Pending
- 2021-10-27 WO PCT/CN2021/126714 patent/WO2023273057A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108682036A (en) * | 2018-04-27 | 2018-10-19 | 腾讯科技(深圳)有限公司 | Pose determines method, apparatus and storage medium |
CN109579844A (en) * | 2018-12-04 | 2019-04-05 | 电子科技大学 | Localization method and system |
CN110009681A (en) * | 2019-03-25 | 2019-07-12 | 中国计量大学 | A kind of monocular vision odometer position and posture processing method based on IMU auxiliary |
CN111811506A (en) * | 2020-09-15 | 2020-10-23 | 中国人民解放军国防科技大学 | Visual/inertial odometer combined navigation method, electronic equipment and storage medium |
CN112734852A (en) * | 2021-03-31 | 2021-04-30 | 浙江欣奕华智能科技有限公司 | Robot mapping method and device and computing equipment |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023273057A1 (en) * | 2021-06-30 | 2023-01-05 | 深圳市优必选科技股份有限公司 | Robot positioning method and apparatus, robot and storage medium |
CN114485653A (en) * | 2022-02-23 | 2022-05-13 | 广州高新兴机器人有限公司 | Positioning method, device, medium and equipment based on fusion of vision and wheel type odometer |
CN115388884A (en) * | 2022-08-17 | 2022-11-25 | 南京航空航天大学 | Joint initialization method for intelligent body pose estimator |
CN115493579A (en) * | 2022-09-02 | 2022-12-20 | 松灵机器人(深圳)有限公司 | Positioning correction method, positioning correction device, mowing robot and storage medium |
CN115493579B (en) * | 2022-09-02 | 2024-10-22 | 深圳库犸科技有限公司 | Positioning correction method, positioning correction device, mowing robot and storage medium |
CN118505756A (en) * | 2024-07-18 | 2024-08-16 | 比亚迪股份有限公司 | Pose generation method and device, electronic equipment, storage medium, product and vehicle |
Also Published As
Publication number | Publication date |
---|---|
WO2023273057A1 (en) | 2023-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113390408A (en) | Robot positioning method and device, robot and storage medium | |
CN109506642B (en) | Robot multi-camera visual inertia real-time positioning method and device | |
CN109297510B (en) | Relative pose calibration method, device, equipment and medium | |
US20240230335A1 (en) | Vision-Aided Inertial Navigation System for Ground Vehicle Localization | |
CN104750969B (en) | The comprehensive augmented reality information superposition method of intelligent machine | |
CN104748746B (en) | Intelligent machine attitude determination and virtual reality loaming method | |
CN107748569B (en) | Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system | |
US12062210B2 (en) | Data processing method and apparatus | |
CN113820735B (en) | Determination method of position information, position measurement device, terminal and storage medium | |
CN111156998A (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
CN105953796A (en) | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone | |
US20220398767A1 (en) | Pose determining method and apparatus, electronic device, and storage medium | |
CN110879400A (en) | Method, equipment and storage medium for fusion positioning of laser radar and IMU | |
CN108253963A (en) | A kind of robot active disturbance rejection localization method and alignment system based on Multi-sensor Fusion | |
EP4105766A1 (en) | Image display method and apparatus, and computer device and storage medium | |
CN113048980B (en) | Pose optimization method and device, electronic equipment and storage medium | |
CN112729327A (en) | Navigation method, navigation device, computer equipment and storage medium | |
CN109544630A (en) | Posture information determines method and apparatus, vision point cloud construction method and device | |
CN113420678A (en) | Gaze tracking method, device, apparatus, storage medium, and computer program product | |
Gomez-Jauregui et al. | Quantitative evaluation of overlaying discrepancies in mobile augmented reality applications for AEC/FM | |
CN109767470B (en) | Tracking system initialization method and terminal equipment | |
CN112229424B (en) | Parameter calibration method and device for visual inertial system, electronic equipment and medium | |
CN109997150A (en) | System and method for classifying to roadway characteristic | |
CN110470333A (en) | Scaling method and device, the storage medium and electronic device of sensor parameters | |
CN114111776B (en) | Positioning method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||