CN114415698B - Robot, positioning method and device of robot and computer equipment - Google Patents

Robot, positioning method and device of robot and computer equipment

Info

Publication number
CN114415698B
Authority
CN
China
Prior art keywords
frame image
odometer
current frame
map
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210329641.6A
Other languages
Chinese (zh)
Other versions
CN114415698A (en)
Inventor
闫瑞君
吴翔
何科君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd filed Critical Shenzhen Pudu Technology Co Ltd
Priority to CN202210329641.6A
Publication of CN114415698A
Application granted
Publication of CN114415698B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0272 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a robot, a positioning method and device of the robot, computer equipment, and a storage medium. The robot acquires a current frame image through its vision module, acquires the latest odometer data corresponding to the current frame image through its odometer and calculates an odometer pose, matches the current frame image against a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer, and corrects the odometer pose based on the accumulated error to obtain the current pose of the robot, which improves the accuracy of the acquired current pose of the robot.

Description

Robot, positioning method and device of robot and computer equipment
Technical Field
The present application relates to the field of robotics, and in particular, to a robot, a positioning method and apparatus for the robot, a computer device, and a storage medium.
Background
During autonomous navigation, a robot must be able to obtain its pose at any time for use by downstream planning modules. Existing vision-based localization schemes are typically feature-based. A feature-based scheme requires a visual map to be built in advance; at runtime, a global descriptor is extracted from the image information of the current frame and compared with the global descriptors of all frames in the map to obtain the best-matching map frame.
Feature points and their descriptors are then extracted from the image information of the current frame and matched against the map points and descriptors of the matched map frame, yielding a matching relationship between the 2D feature points of the current frame and the 3D map points, from which the pose of the current frame is finally calculated. In the prior art, nearest-neighbor (NN) matching is usually used for this local feature matching; because it uses only the descriptor information of the feature points, it tends to produce many mismatches and a poor positioning result.
Disclosure of Invention
In view of the above, it is necessary to provide a robot positioning method, an apparatus, a computer device, a computer readable storage medium, and a computer program product capable of improving positioning accuracy.
In a first aspect, the present application provides a robot equipped with a vision module and an odometer. The robot includes a processor and a memory, where the memory stores executable program code, and the processor is configured to implement the following steps when calling and executing the executable program code:
acquiring a current frame image through the vision module;
acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
matching the current frame image with a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
and correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
In one embodiment, the acquiring, by the odometer, latest odometer data of the robot corresponding to the current frame image includes:
outputting odometry data in real time through the odometer to maintain an odometer queue of the robot;
and acquiring the odometer data closest to the timestamp in the odometer queue according to the timestamp of the current frame image as the latest odometer data.
In one embodiment, the vision module comprises a camera installed at the top of the robot, and the current frame image is taken by the camera.
In one embodiment, the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model, and the visual map comprises more than two frames of map images;
the matching of the current frame image and a pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer comprises the following steps:
extracting a global descriptor of the current frame image based on the global feature extraction model;
carrying out global matching on the global descriptor of the current frame image and each frame of map image in the visual map to obtain a map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the more than two frames of map images;
extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model;
locally matching the feature points and the feature point descriptors of the current frame image, the feature points and the feature point descriptors of the map frame image and the feature point poses of the map frame image to obtain the matching relationship between the feature points of the current frame image and the feature points of the map frame image;
and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
In one embodiment, before the step of extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model, the processor, when calling and executing the executable program code, further implements:
acquiring a first current frame pose of the robot based on the odometer pose and an initial error of the odometer;
judging whether the current frame image and the map frame image face opposite directions based on the first current frame pose and the map frame image pose;
if so, inverting the current frame image;
if not, keeping the current frame image unchanged.
In one embodiment, the acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters includes:
calculating the pose of the camera in the world coordinate system according to the matching relationship and the camera intrinsic parameters;
calculating a second current frame pose of the robot according to the pose of the camera in the world coordinate system and the camera extrinsic parameters;
and acquiring the accumulated error of the odometer based on the second current frame pose and the odometer pose.
In a second aspect, the present application provides a positioning method for a robot having a vision module and an odometer mounted thereon, the positioning method including:
acquiring a current frame image through the vision module;
acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
matching the current frame image with a pre-constructed visual map based on a deep learning model to obtain the accumulated error of the odometer;
and correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
In a third aspect, the present application also provides a positioning apparatus for a robot, the apparatus comprising:
the acquisition module is used for acquiring the current frame image through the vision module;
the computing module is used for acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer and computing to obtain an odometer pose;
the matching module is used for matching the current frame image with a pre-constructed visual map based on a deep learning model to obtain the accumulated error of the odometer;
and the correction module is used for correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
In a fourth aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor that, when executing the computer program, implements the steps implemented by the robot in any of the above embodiments.
In a fifth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps implemented by the robot in any of the above embodiments.
According to the robot, the positioning method and device of the robot, the computer device, and the storage medium, the robot acquires the current frame image through the vision module, acquires the latest odometer data corresponding to the current frame image through the odometer and calculates an odometer pose, matches the current frame image against a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer, and corrects the odometer pose based on the accumulated error to obtain the current pose of the robot, thereby improving the accuracy of the acquired current pose of the robot.
Drawings
FIG. 1 is a diagram of an application environment of a positioning method of a robot in one embodiment;
FIG. 2 is a schematic flow chart illustrating positioning of a robot in one embodiment;
FIG. 3 is a schematic flow chart of a positioning method of a robot according to another embodiment;
FIG. 4 is a schematic flow chart of a positioning method of a robot according to still another embodiment;
FIG. 5 is a block diagram of a robotic positioning device in accordance with an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The positioning method of the robot provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a communication network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server.
Wherein the terminal 102 is a robot. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In one embodiment, a robot is provided. The robot is equipped with a vision module and an odometer, and includes a processor and a memory, where the memory stores executable program code; when the processor calls and executes the executable program code, as shown in fig. 2, the following steps are implemented:
step S202, acquiring a current frame image through the vision module;
step S204, obtaining the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
specifically, the robot in this embodiment acquires the current frame image through a vision module mounted on the robot. And acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain the position and the posture of the odometer according to the acquired latest odometer data. In order to solve the problem of time consumption in the positioning calculation process, an odometer with a high output frequency, such as a wheel-type odometer, is used in this embodiment.
It can be understood that the odometer pose can be directly calculated through the latest odometer data, and data information of other sensors, such as imu data acquisition direction information, can be fused on the basis of the odometer to calculate the odometer pose.
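As an illustrative sketch of such fusion (the (x, y, yaw) pose tuple and the assumption that the IMU heading is already expressed in the odometer frame are choices of this description, not requirements of the method):

    import math

    def fuse_odometer_with_imu(odom_pose, imu_yaw):
        # odom_pose: (x, y, yaw) integrated from the wheel odometer.
        # imu_yaw: heading from the IMU, assumed to be expressed in the same
        # frame as the odometer; it replaces the wheel-derived yaw, which
        # drifts quickly when the wheels slip.
        x, y, _ = odom_pose
        # Normalize the heading to (-pi, pi] before returning the fused pose.
        return (x, y, math.atan2(math.sin(imu_yaw), math.cos(imu_yaw)))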
Step S206, matching the current frame image with a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
specifically, matching a current frame image with a pre-constructed visual map based on a pre-trained deep learning model, inputting the current frame image into the pre-trained deep learning model, and acquiring a global descriptor of the current frame image; the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model. And comparing the global descriptor of the current frame image with the global descriptor of each frame image in the pre-constructed visual map respectively to acquire the related information of the map frame image, wherein the information comprises the following steps: the pose of the map frame image, the pose of the feature point of the map frame image, the feature point of the map frame image and the feature point descriptor. Wherein the visual map comprises more than two frames of map images; the map frame image is the image which is most similar to the current frame image in the map image of the visual map. Inputting the current frame image into a local feature extraction model, extracting feature points and feature point descriptors of the current frame image, locally matching the feature points and the feature point descriptors of the current frame image, the feature points and the feature point descriptors of the map frame image and the feature point poses of the map frame image, and obtaining the matching relationship between the feature points of the current frame image and the feature points of the map frame image. And acquiring the accumulated error of the odometer based on the matching relationship between the feature points of the current frame image and the feature points of the map frame image, the position and the pose of the odometer, the camera internal reference and the camera external reference.
Step S208, correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
Specifically, the current pose of the robot is acquired from the latest odometer pose of the robot and the accumulated error of the odometer, for example by taking the product of the accumulated error of the odometer and the latest odometer pose as the current pose of the robot.
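As a minimal sketch of this correction (the 4x4 homogeneous-transform representation of poses is an assumption of this description; the patent does not prescribe a representation):

    import numpy as np

    def correct_pose(T_err, T_world_odom):
        # T_err: 4x4 accumulated error of the odometer, obtained by matching
        # the current frame image against the visual map.
        # T_world_odom: 4x4 latest odometer pose of the robot.
        # The corrected current pose is the product of the accumulated error
        # and the latest odometer pose, as described above.
        return T_err @ T_world_odom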
In this embodiment, the robot acquires the current frame image through the vision module, acquires the latest odometer data corresponding to the current frame image through the odometer and calculates the odometer pose, matches the current frame image against the pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer, and corrects the odometer pose based on the accumulated error to obtain the current pose of the robot, which improves the accuracy of the acquired current pose of the robot.
In one embodiment, the acquiring, by the odometer, latest odometer data of the robot corresponding to the current frame image includes:
outputting odometry data in real time through the odometer to maintain an odometer queue of the robot;
and acquiring the odometer data closest to the timestamp in the odometer queue according to the timestamp of the current frame image as the latest odometer data.
Specifically, the odometer is mounted on the robot. As the robot moves, it outputs odometer data in real time through the odometer; the data generated in real time together form the odometer queue of the robot, which is continuously maintained. The queue is then queried with the timestamp of the current frame image, and the odometer data whose timestamp is closest to that of the current frame image is acquired as the latest odometer data.
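As a minimal sketch of this queue and lookup (the sample layout, the queue capacity, and the use of Python's bisect module are illustrative assumptions of this description):

    import bisect
    from collections import namedtuple

    OdomSample = namedtuple("OdomSample", ["stamp", "x", "y", "yaw"])

    class OdometerQueue:
        # Keeps odometer samples ordered by timestamp as they arrive in real time.

        def __init__(self, max_len=1000):
            self.samples = []        # stays sorted: samples arrive in time order
            self.max_len = max_len

        def push(self, sample):
            self.samples.append(sample)
            if len(self.samples) > self.max_len:
                self.samples.pop(0)  # drop the oldest sample

        def closest_to(self, image_stamp):
            # Return the sample whose timestamp is nearest to the image timestamp.
            if not self.samples:
                raise ValueError("odometer queue is empty")
            stamps = [s.stamp for s in self.samples]
            i = bisect.bisect_left(stamps, image_stamp)
            candidates = self.samples[max(i - 1, 0):i + 1]
            return min(candidates, key=lambda s: abs(s.stamp - image_stamp))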
In this embodiment, the robot outputs odometer data in real time through the odometer, maintains the odometer queue of the robot with that data, and acquires from the queue the odometer data closest in time to the current frame image as the latest odometer data, which improves the accuracy of the acquired odometer pose.
In one embodiment, the vision module comprises a camera mounted on the top of the robot, and the current frame image is captured by the camera.
Specifically, the vision module of the robot is installed on top of the robot, and the current frame image is captured by the camera of the vision module, so the camera observes the information directly above the robot (such as information on a ceiling) and rarely observes dynamic objects such as pedestrians.
In this embodiment, the vision module includes a camera installed at the top of the robot, and the current frame image is captured by this top-mounted camera, which reduces interference from dynamic obstacles in the acquired current frame image and improves the accuracy of the acquired current pose of the robot.
In one embodiment, the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model, and the visual map comprises more than two frames of map images;
the matching of the current frame image and a pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer comprises the following steps:
extracting a global descriptor of the current frame image based on the global feature extraction model;
carrying out global matching on the global descriptor of the current frame image and each frame of map image in the visual map to obtain a map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the more than two frames of map images;
extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model;
locally matching the feature points and the feature point descriptors of the current frame image, the feature points and the feature point descriptors of the map frame image and the feature point poses of the map frame image to obtain the matching relationship between the feature points of the current frame image and the feature points of the map frame image;
and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
Specifically, the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model. The global descriptor of the current frame image is extracted by inputting the current frame image into the global feature extraction model. Global matching between this global descriptor and each frame of map image in the visual map then yields the relevant information of the map frame image, including: the pose of the map frame image, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image. The visual map comprises more than two frames of map images, and the map frame image is the image in the visual map most similar to the current frame image.
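As a minimal sketch of this global matching step (it assumes the global feature extraction model outputs one L2-normalized descriptor vector per image, and that each map frame record carries its pose, feature points, feature point descriptors, and feature point poses; these layout details are assumptions of this description):

    import numpy as np

    def retrieve_map_frame(query_descriptor, map_frames):
        # query_descriptor: 1-D L2-normalized global descriptor of the current
        # frame image, produced by the global feature extraction model.
        # map_frames: list of dicts with keys 'global_desc', 'pose',
        # 'keypoints', 'keypoint_descs', and 'point_poses' (the feature point
        # poses, i.e. 3-D positions of the map points).
        descs = np.stack([f["global_desc"] for f in map_frames])  # (N, D)
        scores = descs @ query_descriptor      # cosine similarity per map frame
        best = int(np.argmax(scores))
        return map_frames[best]                # the most similar map frame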
The current frame image is then input into the local feature extraction model to extract its feature points and feature point descriptors. Local matching over the feature points and feature point descriptors of the current frame image, the feature points and feature point descriptors of the map frame image, and the feature point poses of the map frame image yields the matching relationship between the feature points of the current frame image and the feature points of the map frame image. The accumulated error of the odometer is then acquired based on this matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
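For the local matching step above, the background criticizes plain nearest-neighbor matching; the sketch below is a simplified, descriptor-only instantiation that at least adds a ratio test and a mutual check to reject ambiguous matches (a learned matcher that also consumes the feature point poses, as this embodiment describes, could be substituted; the thresholds are assumptions of this description):

    import numpy as np

    def match_local_features(desc_query, desc_map, ratio=0.8):
        # desc_query: (Nq, D) descriptors of the current frame image.
        # desc_map:   (Nm, D) descriptors of the map frame image (Nm >= 2).
        # Returns index pairs (i, j): query keypoint i <-> map keypoint j.
        d = np.linalg.norm(desc_query[:, None, :] - desc_map[None, :, :], axis=2)
        matches = []
        for i in range(d.shape[0]):
            order = np.argsort(d[i])
            best, second = int(order[0]), int(order[1])
            # Lowe-style ratio test: the best match must be clearly better
            # than the second best, and i must also be the best match for j.
            if d[i, best] < ratio * d[i, second] and np.argmin(d[:, best]) == i:
                matches.append((i, best))
        return matches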
In this embodiment, feature extraction is performed on the current frame image through the pre-trained deep learning model to obtain the global descriptor, feature points, and feature point descriptors of the current frame image; local matching over the feature points and feature point descriptors of the current frame image, those of the map frame image, and the feature point poses of the map frame image yields the matching relationship between the feature points of the two images; and the accumulated error of the odometer is finally acquired based on this matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters, which improves the accuracy of the acquired accumulated error.
In one embodiment, before the step of extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model, the processor, when calling and executing the executable program code, further implements:
acquiring a first current frame pose of the robot based on the odometer pose and an initial error of the odometer;
judging whether the current frame image and the map frame image face opposite directions based on the first current frame pose and the map frame image pose;
if so, inverting the current frame image;
if not, keeping the current frame image unchanged.
Specifically, before the step of extracting the feature points and feature point descriptors of the current frame image based on the local feature extraction model, it must be determined whether the orientation of the current frame image is correct. A first current frame pose of the robot is acquired based on the odometer pose and the initial error of the odometer, for example as the product of the two; the initial error of the odometer is the accumulated error obtained in the previous calculation, so that each time the pose of the robot is recalculated, the currently obtained accumulated error is automatically updated as the initial error. The first current frame pose is compared with the map frame image pose to judge whether the current frame image and the map frame image face opposite directions; if so, the current frame image is inverted, and if not, the current frame image is kept unchanged.
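As a sketch of this orientation check (the yaw-only comparison, the 90-degree threshold, and implementing the inversion as a 180-degree image rotation are assumptions of this description):

    import math
    import numpy as np

    def orientations_opposite(first_frame_yaw, map_frame_yaw, threshold=math.pi / 2):
        # True when the two headings differ by more than the threshold.
        diff = first_frame_yaw - map_frame_yaw
        diff = math.atan2(math.sin(diff), math.cos(diff))  # wrap to (-pi, pi]
        return abs(diff) > threshold

    def maybe_invert(image, first_frame_yaw, map_frame_yaw):
        # Rotate the image by 180 degrees when the current frame and the map
        # frame face opposite directions, so that local features are extracted
        # in a comparable orientation; otherwise keep the image unchanged.
        if orientations_opposite(first_frame_yaw, map_frame_yaw):
            return np.rot90(image, 2)
        return image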
In this embodiment, the first current frame pose of the robot is acquired based on the odometer pose and the initial error of the odometer, whether the current frame image and the map frame image face opposite directions is judged based on the first current frame pose and the map frame image pose, and the current frame image is inverted when the orientations are opposite and kept unchanged otherwise. This realizes orientation detection for the current frame image and improves the accuracy of the extracted feature points and feature point descriptors.
In one embodiment, the acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters comprises:
calculating the pose of the camera in the world coordinate system according to the matching relationship and the camera intrinsic parameters;
calculating a second current frame pose of the robot according to the pose of the camera in the world coordinate system and the camera extrinsic parameters;
and acquiring the accumulated error of the odometer based on the second current frame pose and the odometer pose.
Specifically, the pose of the camera in the world coordinate system is calculated from the matching relationship between the feature points of the current frame image and the feature points of the map frame image together with the camera intrinsic parameters. The second current frame pose of the robot is then calculated from the pose of the camera in the world coordinate system and the camera extrinsic parameters, for example as the quotient of the camera pose in the world coordinate system and the camera extrinsic parameters (i.e., composition with the inverse of the extrinsic transform). The accumulated error of the odometer is acquired based on the second current frame pose and the odometer pose, for example as the quotient of the second current frame pose and the odometer pose of the robot.
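As a sketch of this computation using OpenCV's PnP solver (the patent names the PnP algorithm in a later embodiment, but the specific solvePnPRansac call, the 4x4 homogeneous-matrix conventions, and reading the 'quotient' of two poses as composition with an inverse transform are assumptions of this description):

    import cv2
    import numpy as np

    def odometer_accumulated_error(pts_2d, pts_3d, K, T_robot_cam, T_world_odom):
        # pts_2d: (N, 2) feature points of the current frame image matched to
        # pts_3d: (N, 3) feature point poses (map points) in world coordinates.
        # K: 3x3 camera intrinsic matrix; T_robot_cam: 4x4 camera extrinsics
        # (camera pose in the robot frame); T_world_odom: 4x4 odometer pose.
        ok, rvec, tvec, _ = cv2.solvePnPRansac(
            pts_3d.astype(np.float64), pts_2d.astype(np.float64), K, None)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        T_cam_world = np.eye(4)            # solvePnP yields world -> camera
        T_cam_world[:3, :3] = R
        T_cam_world[:3, 3] = tvec.ravel()
        T_world_cam = np.linalg.inv(T_cam_world)   # camera pose in the world
        # "Quotient" with the extrinsics: robot pose in the world frame.
        T_world_robot = T_world_cam @ np.linalg.inv(T_robot_cam)
        # "Quotient" with the odometer pose: the accumulated error that maps
        # the odometer pose onto the visually estimated (second) pose.
        return T_world_robot @ np.linalg.inv(T_world_odom)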
In this embodiment, the pose of the camera in the world coordinate system is calculated according to the matching relationship and the camera intrinsic parameters, the second current frame pose of the robot is calculated according to the camera pose in the world coordinate system and the camera extrinsic parameters, and the accumulated error of the odometer is finally acquired based on the second current frame pose and the odometer pose, which improves the prediction accuracy of the robot's final current pose.
In one embodiment, as shown in fig. 3, there is provided a positioning method of a robot having a vision module and an odometer mounted thereon, the positioning method including:
step S302, acquiring a current frame image through the vision module;
step S304, acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
specifically, the robot in this embodiment acquires the current frame image through a vision module mounted on the robot. And acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating according to the acquired latest odometer data to obtain the odometer pose. In order to solve the problem of time consumption in the positioning calculation process, an odometer with high output frequency, such as a wheel-type odometer, is used in this embodiment.
Step S306, matching the current frame image with a pre-constructed visual map based on a deep learning model to obtain the accumulated error of the odometer;
specifically, matching a current frame image with a pre-constructed visual map based on a pre-trained deep learning model, inputting the current frame image into the pre-trained deep learning model, and acquiring a global descriptor of the current frame image; the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model. And comparing the global descriptor of the current frame image with the global descriptor of each frame image in the pre-constructed visual map respectively to obtain the relevant information of the map frame image, wherein the information comprises the following steps: the pose of the map frame image, the pose of the feature point of the map frame image, the feature point of the map frame image and the feature point descriptor. Wherein the visual map comprises more than two frames of map images; the map frame image is the image which is most similar to the current frame image in the map image of the visual map. Inputting the current frame image into a local feature extraction model, extracting feature points and feature point descriptors of the current frame image, locally matching the feature points and the feature point descriptors of the current frame image, the feature points and the feature point descriptors of the map frame image and the feature point poses of the map frame image, and obtaining the matching relationship between the feature points of the current frame image and the feature points of the map frame image. And acquiring the accumulated error of the odometer based on the matching relationship between the feature points of the current frame image and the feature points of the map frame image, the position and the pose of the odometer, the camera internal reference and the camera external reference.
Step S308, correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
Specifically, the current pose of the robot is acquired from the latest odometer pose of the robot and the accumulated error of the odometer, for example by taking the product of the accumulated error of the odometer and the latest odometer pose as the current pose of the robot.
In this embodiment, the robot acquires the current frame image through the vision module, acquires the latest odometer data corresponding to the current frame image through the odometer and calculates the odometer pose, matches the current frame image against the pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer, and corrects the odometer pose based on the accumulated error to obtain the current pose of the robot, which improves the accuracy of the acquired current pose of the robot.
In one embodiment, as shown in fig. 4, another positioning method for a robot is provided, which specifically includes: acquiring, through the odometer, the latest odometer data of the robot corresponding to the current frame image, and calculating the odometer pose from that data; acquiring a first current frame pose of the robot based on the odometer pose and the initial error of the odometer; judging whether the current frame image and the map frame image face opposite directions based on the first current frame pose and the map frame image pose, inverting the current frame image if so and keeping it unchanged if not; extracting the global descriptor of the current frame image based on the global feature extraction model and performing global matching with each frame of map image in the visual map to obtain the map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image; performing local matching according to the feature points and feature point descriptors of the current frame image, the feature points and feature point descriptors of the map frame image, and the feature point poses of the map frame image to acquire the matching relationship between the feature points of the current frame image and the feature points of the map frame image; and finally, calculating the accumulated error of the odometer from the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters based on the PnP (Perspective-n-Point) algorithm.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a robot positioning device for implementing the robot positioning method. Since the solution to the problem provided by the device is similar to that of the method described above, for the specific limitations of one or more embodiments of the robot positioning device provided below, reference may be made to the limitations of the robot positioning method above; details are not repeated here.
In one embodiment, as shown in fig. 5, there is provided a positioning apparatus 500 of a robot, including: an obtaining module 501, a calculating module 502, a matching module 503 and a correcting module 504, wherein:
an obtaining module 501, configured to obtain a current frame image through the vision module;
a calculating module 502, configured to obtain the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculate to obtain an odometer pose;
the matching module 503 is configured to match the current frame image with a pre-constructed visual map based on a deep learning model to obtain an accumulated error of the odometer;
and a correction module 504, configured to correct the odometer pose based on the accumulated error to obtain the current pose of the robot.
According to the positioning device of the robot, the current frame image is acquired through the vision module, the latest odometer data of the robot corresponding to the current frame image is acquired through the odometer and an odometer pose is calculated, the current frame image is matched against the pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer, and the odometer pose is corrected based on the accumulated error to obtain the current pose of the robot, which improves the accuracy of the acquired current pose of the robot.
The various modules in the robot positioning device described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a robot positioning method.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of:
acquiring a current frame image through the vision module;
acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
matching the current frame image with a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
and correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
In one embodiment, the processor, when executing the computer program, further performs the steps of: outputting odometry data in real time by the odometer to maintain an odometer queue for the robot; and acquiring the odometer data closest to the timestamp in the odometer queue according to the timestamp of the current frame image as the latest odometer data.
In one embodiment, the processor, when executing the computer program, further implements the following steps: extracting a global descriptor of the current frame image based on the global feature extraction model; carrying out global matching on the global descriptor of the current frame image and each frame of map image in the visual map to obtain a map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the more than two frames of map images; extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model; locally matching the feature points and feature point descriptors of the current frame image, the feature points and feature point descriptors of the map frame image, and the feature point poses of the map frame image to obtain the matching relationship between the feature points of the current frame image and the feature points of the map frame image; and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
In one embodiment, the processor, when executing the computer program, further implements the following steps: acquiring a first current frame pose of the robot based on the odometer pose and the initial error of the odometer; judging whether the current frame image and the map frame image face opposite directions based on the first current frame pose and the map frame image pose; if so, inverting the current frame image; if not, keeping the current frame image unchanged.
In one embodiment, the processor, when executing the computer program, further implements the following steps: calculating the pose of the camera in the world coordinate system according to the matching relationship and the camera intrinsic parameters; calculating a second current frame pose of the robot according to the pose of the camera in the world coordinate system and the camera extrinsic parameters; and acquiring the accumulated error of the odometer based on the second current frame pose and the odometer pose.
According to the computer device, the current frame image is acquired through the vision module, the latest odometer data of the robot corresponding to the current frame image is acquired through the odometer and an odometer pose is calculated, the current frame image is matched against the pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer, and the odometer pose is corrected based on the accumulated error to obtain the current pose of the robot, which improves the accuracy of the acquired current pose of the robot.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a current frame image through the vision module;
acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
matching the current frame image with a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
and correcting the odometer pose based on the accumulated error to obtain the current pose of the robot.
In one embodiment, the computer program when executed by the processor further performs the steps of: outputting odometry data in real time by the odometer to maintain an odometer queue for the robot; and acquiring the odometer data closest to the timestamp in the odometer queue according to the timestamp of the current frame image as the latest odometer data.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: extracting a global descriptor of the current frame image based on the global feature extraction model; carrying out global matching on the global descriptor of the current frame image and each frame of map image in the visual map to obtain a map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the more than two frames of map images; extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model; locally matching the feature points and feature point descriptors of the current frame image, the feature points and feature point descriptors of the map frame image, and the feature point poses of the map frame image to obtain the matching relationship between the feature points of the current frame image and the feature points of the map frame image; and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: acquiring a first current frame pose of the robot based on the odometer pose and the initial error of the odometer; judging whether the current frame image and the map frame image face opposite directions based on the first current frame pose and the map frame image pose; if so, inverting the current frame image; if not, keeping the current frame image unchanged.
In one embodiment, the computer program, when executed by the processor, further implements the following steps: calculating the pose of the camera in the world coordinate system according to the matching relationship and the camera intrinsic parameters; calculating a second current frame pose of the robot according to the pose of the camera in the world coordinate system and the camera extrinsic parameters; and acquiring the accumulated error of the odometer based on the second current frame pose and the odometer pose.
According to the storage medium, the current frame image is acquired through the vision module, the latest odometer data of the robot corresponding to the current frame image is acquired through the odometer and an odometer pose is calculated, the current frame image is matched against the pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer, and the odometer pose is corrected based on the accumulated error to obtain the current pose of the robot, which improves the accuracy of the acquired current pose of the robot.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, or the like.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (9)

1. A robot having a vision module and an odometer mounted thereon, the robot comprising a processor and a memory, the memory storing executable program code, the processor being configured to, when calling and executing the executable program code, implement the following steps:
acquiring a current frame image through the vision module;
acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
respectively performing global matching and local matching on the current frame image and a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
correcting the odometer pose based on the accumulated error to obtain the current pose of the robot;
the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model, and the visual map comprises more than two frames of map images;
the matching of the current frame image and a pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer comprises the following steps:
extracting a global descriptor of the current frame image based on the global feature extraction model;
carrying out global matching on the global descriptor of the current frame image and each frame of map image in the visual map to obtain a map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the more than two frames of map images;
extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model;
locally matching the feature points and the feature point descriptors of the current frame image, the feature points and the feature point descriptors of the map frame image and the feature point poses of the map frame image to obtain the matching relationship between the feature points of the current frame image and the feature points of the map frame image;
and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
2. The robot of claim 1, wherein said obtaining, by the odometer, latest odometer data of the robot corresponding to the current frame image comprises:
outputting odometry data in real time by the odometer to maintain an odometer queue for the robot;
and acquiring the odometer data closest to the timestamp in the odometer queue according to the timestamp of the current frame image as the latest odometer data.
3. The robot of claim 1, wherein the vision module comprises a camera mounted on top of the robot, and wherein the current frame image is captured by the camera.
4. The robot of claim 1, wherein, before the step of extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model, the processor, when calling and executing the executable program code, further implements:
acquiring a first current frame pose of the robot based on the odometer pose and an initial error of the odometer;
judging whether the current frame image and the map frame image face opposite directions based on the first current frame pose and the map frame image pose;
if so, inverting the current frame image;
if not, keeping the current frame image unchanged.
5. The robot of claim 1, wherein said acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters comprises:
calculating the pose of the camera in the world coordinate system according to the matching relationship and the camera intrinsic parameters;
calculating a second current frame pose of the robot according to the pose of the camera in the world coordinate system and the camera extrinsic parameters;
and acquiring the accumulated error of the odometer based on the second current frame pose and the odometer pose.
6. A robot positioning method, wherein the robot is mounted with a vision module and an odometer, the positioning method comprising:
acquiring a current frame image through the vision module;
acquiring the latest odometer data of the robot corresponding to the current frame image through the odometer, and calculating to obtain an odometer pose;
respectively carrying out global matching and local matching on the current frame image and a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
correcting the odometer pose based on the accumulated error to obtain the current pose of the robot;
the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model, and the visual map comprises more than two frames of map images;
the matching of the current frame image and a pre-constructed visual map based on the pre-trained deep learning model to obtain the accumulated error of the odometer comprises the following steps:
extracting a global descriptor of the current frame image based on the global feature extraction model;
performing global matching between the global descriptor of the current frame image and each frame of map image in the visual map to obtain the map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the two or more frames of map images;
extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model;
performing local matching on the feature points and feature point descriptors of the current frame image against the feature points, the feature point descriptors, and the feature point poses of the map frame image, to obtain a matching relationship between the feature points of the current frame image and the feature points of the map frame image;
and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
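(For illustration only: the global matching step is, in essence, image retrieval over global descriptors, e.g. NetVLAD-style vectors produced by the global feature extraction model. A minimal cosine-similarity sketch with illustrative names; in practice the returned index would be used to look up the map frame's pose, feature points, and descriptors.)

```python
import numpy as np

def retrieve_map_frame(query_desc, map_descs):
    # query_desc: (D,) global descriptor of the current frame image
    # map_descs:  (M, D) global descriptors, one row per map frame image
    q = query_desc / np.linalg.norm(query_desc)
    m = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
    sims = m @ q                 # cosine similarity to every map frame
    best = int(np.argmax(sims))  # index of the most similar map frame
    return best, float(sims[best])
```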
7. A positioning device for a robot, the device comprising:
the acquisition module is used for acquiring the current frame image through the vision module;
the computing module is used for acquiring, through the odometer, the latest odometer data of the robot corresponding to the current frame image and calculating an odometer pose;
the matching module is used for performing global matching and local matching, respectively, between the current frame image and a pre-constructed visual map based on a pre-trained deep learning model to obtain the accumulated error of the odometer;
the correction module is used for correcting the odometer pose based on the accumulated error to obtain the current pose of the robot;
the pre-trained deep learning model comprises a global feature extraction model and a local feature extraction model, and the visual map comprises two or more frames of map images; the matching module is specifically configured to:
extracting a global descriptor of the current frame image based on the global feature extraction model;
performing global matching between the global descriptor of the current frame image and each frame of map image in the visual map to obtain the map frame image pose, the feature point poses of the map frame image, and the feature points and feature point descriptors of the map frame image, wherein the map frame image is the image most similar to the current frame image among the two or more frames of map images;
extracting feature points and feature point descriptors of the current frame image based on the local feature extraction model;
performing local matching on the feature points and feature point descriptors of the current frame image against the feature points, the feature point descriptors, and the feature point poses of the map frame image, to obtain a matching relationship between the feature points of the current frame image and the feature points of the map frame image;
and acquiring the accumulated error of the odometer based on the matching relationship, the odometer pose, the camera intrinsic parameters, and the camera extrinsic parameters.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps performed by the robot according to any one of claims 1 to 5, or implements the robot positioning method according to claim 6.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps performed by the robot of any one of claims 1 to 5, or implements the robot positioning method of claim 6.
CN202210329641.6A 2022-03-31 2022-03-31 Robot, positioning method and device of robot and computer equipment Active CN114415698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210329641.6A CN114415698B (en) 2022-03-31 2022-03-31 Robot, positioning method and device of robot and computer equipment

Publications (2)

Publication Number Publication Date
CN114415698A CN114415698A (en) 2022-04-29
CN114415698B true CN114415698B (en) 2022-11-29

Family

ID=81263712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210329641.6A Active CN114415698B (en) 2022-03-31 2022-03-31 Robot, positioning method and device of robot and computer equipment

Country Status (1)

Country Link
CN (1) CN114415698B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114812540B (en) * 2022-06-23 2022-11-29 深圳市普渡科技有限公司 Picture construction method and device and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105928505B (en) * 2016-04-19 2019-01-29 深圳市神州云海智能科技有限公司 The pose of mobile robot determines method and apparatus
US10593060B2 (en) * 2017-04-14 2020-03-17 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN110631554B (en) * 2018-06-22 2021-11-30 北京京东乾石科技有限公司 Robot posture determining method and device, robot and readable storage medium
CN109556596A (en) * 2018-10-19 2019-04-02 北京极智嘉科技有限公司 Air navigation aid, device, equipment and storage medium based on ground texture image
CN111161334B (en) * 2019-12-31 2023-06-02 南通大学 Semantic map construction method based on deep learning
CN112476433B (en) * 2020-11-23 2023-08-04 深圳怪虫机器人有限公司 Mobile robot positioning method based on identification array boundary

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017224280A (en) * 2016-05-09 2017-12-21 ツーアンツ インク.TwoAntz Inc. Visual positioning-based navigation apparatus and method
WO2018028649A1 (en) * 2016-08-10 2018-02-15 纳恩博(北京)科技有限公司 Mobile device, positioning method therefor, and computer storage medium
CN112183171A (en) * 2019-07-05 2021-01-05 杭州海康机器人技术有限公司 Method and device for establishing beacon map based on visual beacon
CN112184824A (en) * 2019-07-05 2021-01-05 杭州海康机器人技术有限公司 Camera external parameter calibration method and device
CN111127557A (en) * 2019-12-13 2020-05-08 中国电子科技集团公司第二十研究所 Visual SLAM front-end attitude estimation method based on deep learning
CN113409368A (en) * 2020-03-16 2021-09-17 北京京东乾石科技有限公司 Drawing method and device, computer readable storage medium and electronic equipment
CN114255323A (en) * 2021-12-22 2022-03-29 深圳市普渡科技有限公司 Robot, map construction method, map construction device and readable storage medium

Also Published As

Publication number Publication date
CN114415698A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN107990899B (en) Positioning method and system based on SLAM
CN107025662B (en) Method, server, terminal and system for realizing augmented reality
CN112179330B (en) Pose determination method and device of mobile equipment
CN109506642B (en) Robot multi-camera visual inertia real-time positioning method and device
CN111445526B (en) Method, device and storage medium for estimating pose of image frame
CN112837352B (en) Image-based data processing method, device and equipment, automobile and storage medium
CN114332415B (en) Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology
CN111986261B (en) Vehicle positioning method and device, electronic equipment and storage medium
JP2017528833A (en) Method for determining motion between a first coordinate system and a second coordinate system
WO2023005457A1 (en) Pose calculation method and apparatus, electronic device, and readable storage medium
CN114415698B (en) Robot, positioning method and device of robot and computer equipment
CN112541423A (en) Synchronous positioning and map construction method and system
CN113190120A (en) Pose acquisition method and device, electronic equipment and storage medium
WO2020019117A1 (en) Localization method and apparatus, electronic device, and readable storage medium
CN110880003B (en) Image matching method and device, storage medium and automobile
CN112330727A (en) Image matching method and device, computer equipment and storage medium
CN115972198B (en) Mechanical arm visual grabbing method and device under incomplete information condition
CN111882494A (en) Pose graph processing method and device, computer equipment and storage medium
CN113744236B (en) Loop detection method, device, storage medium and computer program product
CN113255700A (en) Image feature map processing method and device, storage medium and terminal
CN114812540B (en) Picture construction method and device and computer equipment
CN113554711A (en) Camera online calibration method and device, computer equipment and storage medium
CN112990003B (en) Image sequence repositioning judging method, device and computer equipment
US20230281862A1 (en) Sampling based self-supervised depth and pose estimation
US20230410338A1 (en) Method for optimizing depth estimation model, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant