CN113223077A - Method and device for automatic initial positioning based on vision-assisted laser


Info

Publication number
CN113223077A
CN113223077A (application CN202110561050.7A)
Authority
CN
China
Prior art keywords
image
pose
mobile robot
map
relative
Prior art date
Legal status
Pending
Application number
CN202110561050.7A
Other languages
Chinese (zh)
Inventor
袁国斌
柏林
刘彪
舒海燕
宿凯
沈创芸
祝涛剑
雷宜辉
Current Assignee
Guangzhou Gosuncn Robot Co Ltd
Original Assignee
Guangzhou Gosuncn Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Gosuncn Robot Co Ltd filed Critical Guangzhou Gosuncn Robot Co Ltd
Priority to CN202110561050.7A
Publication of CN113223077A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a method and a device for automatic initial positioning based on vision-assisted laser. The method includes: loading an image map after the mobile robot is started; acquiring a first image at the mobile robot's current pose with a camera and searching for a second image that matches the first image; when the number of matched feature points between the first image and the second image reaches a predetermined number, calculating the relative pose between the two images; and calculating the pose of the first image from the pose of the second image and the relative pose, taking it as the current pose of the mobile robot to complete pose initialization. The method and device automatically acquire the initial pose when the mobile robot starts, so the initial accurate pose no longer needs to be determined manually. Because images carry rich color and texture information, global positioning is more robust; moreover, cameras are inexpensive, simple to install, and self-contained, which greatly reduces positioning cost.

Description

Method and device for automatic initial positioning based on vision-assisted laser
Technical Field
The invention belongs to the field of autonomous navigation mobile robot design, and relates to a method and a device for automatic initial positioning based on vision-assisted laser.
Background
An autonomous navigation mobile robot must be able to find and follow a path from point to point on its own; the prerequisite for this ability is that the robot knows both its own position and the position of the target point to be reached. Positioning technology for autonomous navigation mobile robots has therefore been a research hotspot in recent years. Positioning approaches currently in wide use for indoor autonomous navigation mobile robots include laser positioning, two-dimensional-code positioning, UWB positioning, and visual positioning. Owing to its high positioning accuracy, mature technical solutions, reasonable price, and ease of installation, laser positioning has become the approach preferred by autonomous navigation mobile robot manufacturers.
At present, the positioning technology of indoor autonomous navigation mobile robots (such as logistics handling robots, service robots, and indoor patrol robots) is basically dominated by laser positioning, whose standard algorithmic implementation is AMCL (Adaptive Monte Carlo Localization, a Bayesian probabilistic positioning technique). AMCL offers high positioning accuracy, good stability, and a mature algorithm, and is widely used. However, the algorithm requires that a reasonably accurate current pose be supplied manually before the autonomous navigation mobile robot starts, and for the customer of such a robot an accurate initial pose is often difficult to give.
Disclosure of Invention
In the prior art, when a mobile robot is positioned by laser, a reasonably accurate current pose must be supplied manually before the robot starts, which makes operation difficult for the customer. To solve this problem, the application provides a method and a device for automatic initial positioning based on vision-assisted laser. The specific technical scheme is as follows:
in a first aspect, the present application provides a method for automatic initial positioning based on vision-assisted laser, which is applied to a mobile robot equipped with a camera, and the method includes:
after the mobile robot is started, loading an image map, wherein the image map comprises a plurality of groups of images in a positioning space to be positioned by the mobile robot and poses corresponding to the images; acquiring a first image of the mobile robot in the current pose by using the camera, and searching a second image matched with the first image in the image map;
specifically, a second image matched with the first image is searched for in the map; an existing image-similarity measure may be used, and the second image with the highest similarity is selected;
performing feature point matching on the first image and the second image;
calculating the relative pose between the first image and the second image when the number of the matched feature points reaches a preset number;
and calculating the pose of the first image by using the pose of the second image in the image map and the relative pose obtained by calculation, and taking the pose of the first image as the current pose of the mobile robot to finish pose initialization.
Optionally, the relative pose between the first image and the second image is R_BA and T_BA, and the pose of the second image in the world coordinate system is R_wB and T_wB; calculating the pose of the first image by using the pose of the second image in the image map and the calculated relative pose includes:
calculating the pose R_wA and T_wA of the first image in the world coordinate system according to the pose calculation formulas:
R_wA = R_wB * R_BA
T_wA = T_wB + R_wB * T_BA
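These formulas follow from composing rigid-body transforms: a point x_A expressed in the first image's camera frame satisfies x_w = R_wB * (R_BA * x_A + T_BA) + T_wB = (R_wB * R_BA) * x_A + (R_wB * T_BA + T_wB), from which R_wA = R_wB * R_BA and T_wA = T_wB + R_wB * T_BA follow directly.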
Optionally, before loading the image map, the method further comprises:
when the mobile robot is started in the positioning space for the first time, the camera is turned on, and the mobile robot is controlled to move;
and when the relative displacement of the mobile robot is greater than a predetermined displacement threshold or its relative angle is greater than a predetermined angle threshold, storing the currently acquired image and the pose of the mobile robot at the current moment as a corresponding pair, until the mobile robot has traversed the positioning space once, thereby establishing the image map.
Optionally, the performing feature point matching on the first image and the second image includes:
calling the image processing library OpenCV to detect the feature points of the first image and the feature points of the second image with the ORB feature method;
and matching the feature points of the first image and the second image with a brute-force matching algorithm.
Optionally, the calculating the relative pose between the first image and the second image comprises:
and calling the image processing library OpenCV to calculate the relative pose between the first image and the second image with solvePnP.
In a second aspect, the present application further provides a device for automatic initial positioning based on vision-assisted laser, the device comprising:
the loading module is configured to load an image map after the mobile robot is started, wherein the image map comprises a plurality of groups of images in a positioning space to be positioned by the mobile robot and poses corresponding to the images;
the acquisition and search module is configured to acquire a first image of the mobile robot in the current pose by using a camera, and search an image map loaded by the loading module for a second image matched with the first image, wherein the camera is installed on the mobile robot;
the matching module is configured to perform feature point matching on the first image acquired by the acquisition and search module and the second image searched by the acquisition and search module;
a first calculation module configured to calculate a relative pose between the first image and the second image when the number of matched feature points reaches a predetermined number;
and the second calculation module is configured to calculate the pose of the first image by using the pose of the second image in the image map and the relative pose calculated by the first calculation module, and use the pose of the first image as the current pose of the mobile robot to finish pose initialization.
Optionally, the relative pose between the first image and the second image is R_BA and T_BA, and the pose of the second image in the world coordinate system is R_wB and T_wB; the second computing module is further configured to:
calculate the pose R_wA and T_wA of the first image in the world coordinate system according to the pose calculation formulas:
R_wA = R_wB * R_BA
T_wA = T_wB + R_wB * T_BA
optionally, the apparatus further includes a mapping module, where the mapping module is configured to:
when the mobile robot is started in the positioning space for the first time, the camera is turned on, and the mobile robot is controlled to move;
and when the relative displacement of the mobile robot is greater than a predetermined displacement threshold or its relative angle is greater than a predetermined angle threshold, storing the currently acquired image and the pose of the mobile robot at the current moment as a corresponding pair, until the mobile robot has traversed the positioning space once, thereby establishing the image map.
Optionally, the matching module is further configured to:
calling the image processing library OpenCV to detect the feature points of the first image and the feature points of the second image with the ORB feature method;
and matching the feature points of the first image and the second image with a brute-force matching algorithm.
Optionally, the first computing module is further configured to:
and calling the image processing library OpenCV to calculate the relative pose between the first image and the second image with solvePnP.
The application can at least realize the following beneficial effects:
When the mobile robot is started, the camera mounted on it captures an image of the current positioning space, and the closest matching image is selected from the preloaded image map; when enough feature points match between the two, the pose of the captured image is calculated from the pose of the matched image and taken as the robot's initial pose. Because the image at startup is acquired automatically and the robot's pose is derived from it, the initial accurate pose no longer needs to be determined manually, and accurate positioning of the initial pose is achieved. At the same time, because images carry rich color and texture information, global positioning is more robust; moreover, cameras are inexpensive, simple to install, and self-contained, which greatly reduces positioning cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of a method for automatic initial positioning based on vision-assisted laser provided in one embodiment of the present application;
FIG. 2A is a flow chart of a method for automatic initial positioning based on vision-assisted laser provided in one embodiment of the present application;
fig. 2B is a schematic diagram of an acquired image map in a positioning space provided in an embodiment of the present application;
FIG. 2C is a comparison of similar images provided in one embodiment of the present application;
FIG. 2D is a schematic illustration of image matching provided in one embodiment of the present application;
fig. 3 is a schematic structural diagram of a device for automatic initial positioning based on vision-assisted laser provided in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 is a flowchart of a method for automatic initial positioning based on vision-assisted laser according to an embodiment of the present application. The method is applied to a mobile robot with a camera mounted on it and may include the following steps:
step 101, loading an image map after starting the mobile robot;
the image map described herein may generally include a plurality of sets of images and poses corresponding to the images in a localization space where the mobile robot is to be localized.
The positioning space can be an indoor space to be positioned or an outdoor space to be positioned.
Step 102, acquiring a first image of the mobile robot in the current pose by using a camera, and searching a second image matched with the first image in an image map;
Specifically, the second image matched with the first image is searched for in the map; an existing image-similarity measure may be used, and the second image with the highest similarity is selected.
Since the camera is mounted on the mobile robot, any camera can be used as long as it has a fixed viewing angle relative to the mobile robot.
103, matching the characteristic points of the first image and the second image;
104, when the number of the matched feature points reaches a preset number, calculating the relative pose between the first image and the second image;
and 105, calculating the pose of the first image by using the pose of the second image in the image map and the calculated relative pose, and finishing pose initialization by taking the pose of the first image as the current pose of the mobile robot.
In summary, the method for automatic initial positioning based on vision-assisted laser provided by the application captures an image of the current positioning space with the camera mounted on the mobile robot when the robot is started, selects the closest matching image from the preloaded image map, and, when enough feature points match, calculates the pose of the captured image from the pose of the matched image and takes it as the robot's initial pose. Because the startup image is acquired automatically and the robot's pose is derived from it, the initial accurate pose no longer needs to be determined manually, and accurate positioning of the initial pose is achieved. At the same time, because images carry rich color and texture information, global positioning is more robust; moreover, cameras are inexpensive, simple to install, and self-contained, which greatly reduces positioning cost.
Fig. 2A is a flow chart of a method for automatic initial positioning based on vision-assisted laser provided in one embodiment of the present application; the method may include the following steps:
Before step 201 is executed, an image map needs to be established first; the procedure for establishing the image map is given in steps S1 and S2 below.
Step S1, starting for the first time, opening a camera and controlling the mobile robot to move;
when the mobile robot is started in the positioning space for the first time, the camera on the mobile robot is turned on, the robot is put through a laser SLAM (simultaneous localization and mapping) process, and the mobile robot is controlled to move according to a predetermined movement pattern. The predetermined movement pattern may specify factors such as the moving speed, moving direction, moving time interval, and moving angle.
Step S2, establishing an image map;
In practical implementation, when the relative displacement of the mobile robot is greater than the predetermined displacement threshold or its relative angle is greater than the predetermined angle threshold, the currently acquired image and the pose of the mobile robot at the current moment are stored as a corresponding pair; once the mobile robot has traversed the positioning space, establishment of the image map is complete.
The relative displacement here generally refers to the displacement between the robot's position after moving and its position before moving, and the relative angle is understood analogously.
The predetermined displacement threshold may be defined according to factors such as the size of the positioning space, the complexity of an object in the positioning space, and the like, may also be defined according to factors such as a matching algorithm of an image, and may also be defined by combining other factors, for example, the predetermined displacement threshold may be 0.3m, 0.5m, 0.8m, 1m, and the like, and in an embodiment of the present application, the predetermined displacement threshold is 0.5 m.
Similarly, the predetermined angle threshold may also be defined according to factors such as the size of the positioning space, the complexity of an object in the positioning space, and the like, may also be defined according to factors such as a matching algorithm of the image, and may also be defined by combining other factors, for example, the predetermined angle threshold may be set to 5 °, 8 °, 10 °, 15 °, or 20 °, and the predetermined angle threshold may be set to 10 ° in an embodiment of the present application.
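As an illustrative sketch (not the patented implementation), the keyframe-saving rule above can be written as follows; the pose format and helper name are assumptions, while the 0.5 m and 10° values follow the embodiment thresholds just described:

```python
import math

DISP_THRESHOLD_M = 0.5      # predetermined displacement threshold (0.5 m in this embodiment)
ANGLE_THRESHOLD_DEG = 10.0  # predetermined angle threshold (10 degrees in this embodiment)

def should_save_keyframe(last_pose, current_pose):
    """Poses are assumed (x, y, yaw_deg) tuples; save a new (image, pose)
    pair when the robot has moved or turned past either threshold."""
    dx = current_pose[0] - last_pose[0]
    dy = current_pose[1] - last_pose[1]
    displacement = math.hypot(dx, dy)
    # wrap the yaw difference into [-180, 180] degrees before comparing
    dyaw = (current_pose[2] - last_pose[2] + 180.0) % 360.0 - 180.0
    return displacement > DISP_THRESHOLD_M or abs(dyaw) > ANGLE_THRESHOLD_DEG
```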
The image map in the present application may include a plurality of sets of images in a positioning space where the mobile robot is to be positioned and poses corresponding to the images, as shown in fig. 2B, which is a schematic diagram of the image map in the positioning space obtained in an embodiment of the present application, where each image corresponds to one pose.
Step 201, starting a mobile robot;
step 202, loading an image map;
Each time the mobile robot is used, i.e., each time it is started, the program in the mobile robot automatically loads the image map.
Step 203, collecting a current image;
After the image map is loaded, the camera is controlled to acquire an image at the mobile robot's current pose; the image acquired after startup is recorded as the first image.
Step 204, loop detection;
After the first image is acquired, the second image most similar to it is searched for: the first image at the mobile robot's current pose is acquired with the camera, and the most similar second image is searched for in the image map.
That is to say, each time the mobile robot is started, the program first automatically loads the image map; once loading is complete, it controls the camera to acquire an image in the current pose and searches the image map for the second image most similar to the first image by calling the DBoW3 library (an open-source library for image loop detection), thereby completing loop detection. Please refer to fig. 2C, which is a comparison diagram of similar images provided in an embodiment of the present application.
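DBoW3 itself is a C++ library with no official Python bindings, so as a rough stand-in for its bag-of-words lookup, the sketch below scores each map image against the query image by counting close ORB matches; it illustrates the loop-detection step only and is not the library's actual API:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def most_similar_image(first_image, image_map):
    """Return the (image, pose) entry most similar to first_image.
    Images are assumed 8-bit grayscale; image_map is a list of
    (image, pose) pairs as built during mapping."""
    _, query_desc = orb.detectAndCompute(first_image, None)
    best_entry, best_score = None, -1
    for image, pose in image_map:
        _, desc = orb.detectAndCompute(image, None)
        if query_desc is None or desc is None:
            continue
        matches = bf.match(query_desc, desc)
        # count reasonably close descriptor matches as a crude similarity score
        score = sum(1 for m in matches if m.distance < 50)
        if score > best_score:
            best_entry, best_score = (image, pose), score
    return best_entry
```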
Step 205, image matching;
The feature points of the first image and the second image are extracted, and feature point matching is performed between the two images.
In a possible implementation, the image processing library OpenCV (an open-source image processing library) is first called to detect the feature points of the first image and of the second image with the ORB feature method (an image feature point detection method provided by the OpenCV library); then the feature points of the first image and the second image are matched with a brute-force matching algorithm.
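A minimal OpenCV sketch of this step, assuming 8-bit grayscale inputs (ORB detection plus brute-force matching with Hamming distance, the standard pairing for ORB's binary descriptors):

```python
import cv2

def match_features(first_image, second_image):
    """Detect ORB feature points in both images and brute-force match them."""
    orb = cv2.ORB_create()
    kp1, desc1 = orb.detectAndCompute(first_image, None)
    kp2, desc2 = orb.detectAndCompute(second_image, None)
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only matches that are mutually nearest neighbours
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
    return kp1, kp2, matches
```

Whether the length of the returned match list reaches the predetermined number then decides step 206 below.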
Step 206, whether the number of the matched feature points reaches a preset number or not is judged;
the predetermined number may be determined according to actual requirements or experiments, for example, the value may be 5, 6, 7, 10, 15, 20, and the like, and the value of the predetermined number is not limited in this application. In the verification process, the experiment verification is carried out with the preset number value being 6.
Step 207, calculating relative poses when the number of the matched feature points reaches a preset number;
In one possible implementation, step 207 may be implemented by calling the image processing library OpenCV and calculating the relative pose between the first image and the second image with solvePnP (a pose-solving function between two frames of images provided by the OpenCV library).
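cv2.solvePnP recovers a camera pose from 3D-2D correspondences, so it needs 3D coordinates for the matched feature points; since the method pairs vision with laser, one plausible reading is that this depth comes from the laser map, and the sketch below treats the 3D points as an assumed input. Note also that solvePnP returns the transform taking the 3D points into the first camera's frame, which may need inverting to match the R_BA/T_BA convention used here:

```python
import cv2
import numpy as np

def relative_pose(points_3d, points_2d, camera_matrix):
    """points_3d: Nx3 matched feature points in the second image's frame
    (their depth is an assumed input, e.g. recovered via the laser map);
    points_2d: corresponding Nx2 pixel coordinates in the first image."""
    dist_coeffs = np.zeros(5)  # assume an undistorted / rectified camera
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```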
Step 208, calculating the global pose;
and calculating the pose of the first image by using the pose of the second image in the image map and the relative pose obtained by calculation, and finishing pose initialization by taking the pose of the first image as the current pose of the mobile robot.
Let the relative pose between the first image and the second image be R_BA and T_BA, and the pose of the second image in the world coordinate system be R_wB and T_wB. The pose R_wA and T_wA of the first image in the world coordinate system can then be calculated by the pose formulas:
R_wA = R_wB * R_BA
T_wA = T_wB + R_wB * T_BA
Since the first image is taken at the current pose, the pose R_wA and T_wA is exactly the current pose of the mobile robot, and pose initialization is complete.
In summary, the method for automatic initial positioning based on vision-assisted laser provided by the application captures an image of the current positioning space with the camera mounted on the mobile robot when the robot is started, selects the closest matching image from the preloaded image map, and, when enough feature points match, calculates the pose of the captured image from the pose of the matched image and takes it as the robot's initial pose. Because the startup image is acquired automatically and the robot's pose is derived from it, the initial accurate pose no longer needs to be determined manually, and accurate positioning of the initial pose is achieved. At the same time, because images carry rich color and texture information, global positioning is more robust; moreover, cameras are inexpensive, simple to install, and self-contained, which greatly reduces positioning cost.
The following is an embodiment of the device for automatic initial positioning based on vision-assisted laser provided by the present application. Its technical features are the same as, or correspond to, those of the method embodiment above, so the device embodiment may be read in combination with the method embodiment; features already described there are not repeated below.
Fig. 3 is a schematic structural diagram of a device for automatic initial positioning based on vision-assisted laser provided in an embodiment of the present application. The device may include:
the loading module 310 is configured to load an image map after the mobile robot is started, wherein the image map comprises a plurality of groups of images in a positioning space to be positioned by the mobile robot and poses corresponding to the images;
the acquisition and search module 320 is configured to acquire a first image of the mobile robot in the current pose with a camera and to search the image map loaded by the loading module 310 for the second image most similar to the first image, wherein the camera is mounted on the mobile robot;
a matching module 330 configured to perform feature point matching on the first image acquired by the acquisition and search module 320 and the second image searched by the acquisition and search module 320;
a first calculation module 340 configured to calculate a relative pose between the first image and the second image when the number of matched feature points reaches a predetermined number;
The predetermined number may be obtained from practical experience or experimental data; for example, it may be 5, 6, 7, 10, 15, or 20, and its specific value is not limited here.
A second calculating module 350, configured to calculate the pose of the first image by using the pose of the second image in the image map and the relative pose calculated by the first calculating module 340, and use the pose of the first image as the current pose of the mobile robot, thereby completing pose initialization.
In one possible implementation, the relative pose between the first image and the second image is R_BA and T_BA, and the pose of the second image in the world coordinate system is R_wB and T_wB; the second calculation module 350 may be further configured to:
calculate the pose R_wA and T_wA of the first image in the world coordinate system according to the pose calculation formulas:
R_wA = R_wB * R_BA
T_wA = T_wB + R_wB * T_BA
Optionally, the apparatus provided in the present application may further include a mapping module, which may be configured to: when the mobile robot is started in the positioning space for the first time, turn on the camera and control the mobile robot to move; and when the relative displacement of the mobile robot is greater than a predetermined displacement threshold or its relative angle is greater than a predetermined angle threshold, store the currently acquired image and the pose of the mobile robot at the current moment as a corresponding pair, until the mobile robot has traversed the positioning space once, thereby establishing the image map.
Optionally, the matching module 330 may be further configured to: call the image processing library OpenCV to detect the feature points of the first image and of the second image with the ORB feature method; and match the feature points of the first image and the second image with a brute-force matching algorithm.
Optionally, the first calculation module 340 may be further configured to: call the image processing library OpenCV to calculate the relative pose between the first image and the second image with solvePnP.
In summary, the device for automatic initial positioning based on vision-assisted laser provided by the application captures an image of the current positioning space with the camera mounted on the mobile robot when the robot is started, selects the closest matching image from the preloaded image map, and, when enough feature points match, calculates the pose of the captured image from the pose of the matched image and takes it as the robot's initial pose. Because the startup image is acquired automatically and the robot's pose is derived from it, the initial accurate pose no longer needs to be determined manually, and accurate positioning of the initial pose is achieved. At the same time, because images carry rich color and texture information, global positioning is more robust; moreover, cameras are inexpensive, simple to install, and self-contained, which greatly reduces positioning cost.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. A method for automatic initial positioning based on vision-assisted laser, applied to a mobile robot provided with a camera, the method comprising the following steps:
after the mobile robot is started, loading an image map, wherein the image map comprises a plurality of groups of images in a positioning space to be positioned by the mobile robot and poses corresponding to the images;
acquiring a first image of the mobile robot in the current pose by using the camera, and searching a second image matched with the first image in the image map;
performing feature point matching on the first image and the second image;
calculating the relative pose between the first image and the second image when the number of the matched feature points reaches a preset number;
and calculating the pose of the first image by using the pose of the second image in the image map and the relative pose obtained by calculation, and taking the pose of the first image as the current pose of the mobile robot to finish pose initialization.
2. The method of claim 1, wherein the relative pose between the first image and the second image is R_BA and T_BA, the pose of the second image in the world coordinate system is R_wB and T_wB, and calculating the pose of the first image by using the pose of the second image in the image map and the calculated relative pose comprises:
calculating the pose R_wA and T_wA of the first image in the world coordinate system according to the pose calculation formulas:
R_wA = R_wB * R_BA
T_wA = T_wB + R_wB * T_BA
3. The method of claim 1, wherein prior to loading the image map, the method further comprises:
when the mobile robot is started in the positioning space for the first time, the camera is turned on, and the mobile robot is controlled to move;
and when the relative displacement of the mobile robot is greater than a predetermined displacement threshold or its relative angle is greater than a predetermined angle threshold, storing the currently acquired image and the pose of the mobile robot at the current moment as a corresponding pair, until the mobile robot has traversed the positioning space once, thereby establishing the image map.
4. The method of claim 1, wherein performing feature point matching on the first image and the second image comprises:
calling the image processing library OpenCV to detect the feature points of the first image and the feature points of the second image with the ORB feature method;
and matching the feature points of the first image and the second image with a brute-force matching algorithm.
5. The method of claim 1, wherein the calculating the relative pose between the first image and the second image comprises:
and calling the image processing library OpenCV to calculate the relative pose between the first image and the second image with solvePnP.
6. An apparatus for automatic initial positioning based on vision-assisted laser, the apparatus comprising:
the loading module is configured to load an image map after the mobile robot is started, wherein the image map comprises a plurality of groups of images in a positioning space to be positioned by the mobile robot and poses corresponding to the images;
the acquisition and search module is configured to acquire a first image of the mobile robot in the current pose by using a camera, and search an image map loaded by the loading module for a second image matched with the first image, wherein the camera is installed on the mobile robot;
the matching module is configured to perform feature point matching on the first image acquired by the acquisition and search module and the second image searched by the acquisition and search module;
a first calculation module configured to calculate a relative pose between the first image and the second image when the number of matched feature points reaches a predetermined number;
and the second calculation module is configured to calculate the pose of the first image by using the pose of the second image in the image map and the relative pose calculated by the first calculation module, and use the pose of the first image as the current pose of the mobile robot to finish pose initialization.
7. The apparatus of claim 6, wherein the relative pose between the first image and the second image is R_BA and T_BA, the pose of the second image in the world coordinate system is R_wB and T_wB, and the second calculation module is further configured to:
calculate the pose R_wA and T_wA of the first image in the world coordinate system according to the pose calculation formulas:
R_wA = R_wB * R_BA
T_wA = T_wB + R_wB * T_BA
8. the apparatus of claim 6, further comprising a mapping module configured to:
when the mobile robot is started in the positioning space for the first time, the camera is turned on, and the mobile robot is controlled to move;
and when the relative displacement of the mobile robot is greater than a predetermined displacement threshold or its relative angle is greater than a predetermined angle threshold, storing the currently acquired image and the pose of the mobile robot at the current moment as a corresponding pair, until the mobile robot has traversed the positioning space once, thereby establishing the image map.
9. The apparatus of claim 6, wherein the matching module is further configured to:
calling the image processing library OpenCV to detect the feature points of the first image and the feature points of the second image with the ORB feature method;
and matching the feature points of the first image and the second image with a brute-force matching algorithm.
10. The apparatus of claim 6, wherein the first computing module is further configured to:
and calling the image processing library OpenCV to calculate the relative pose between the first image and the second image with solvePnP.
CN202110561050.7A 2021-05-21 2021-05-21 Method and device for automatic initial positioning based on vision-assisted laser Pending CN113223077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110561050.7A CN113223077A (en) 2021-05-21 2021-05-21 Method and device for automatic initial positioning based on vision-assisted laser

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110561050.7A CN113223077A (en) 2021-05-21 2021-05-21 Method and device for automatic initial positioning based on vision-assisted laser

Publications (1)

Publication Number Publication Date
CN113223077A (en) 2021-08-06

Family

ID=77099305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110561050.7A Pending CN113223077A (en) 2021-05-21 2021-05-21 Method and device for automatic initial positioning based on vision-assisted laser

Country Status (1)

Country Link
CN (1) CN113223077A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114603555A (en) * 2022-02-24 2022-06-10 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
US20180297207A1 (en) * 2017-04-14 2018-10-18 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN110533722A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of the robot fast relocation method and system of view-based access control model dictionary
CN112179330A (en) * 2020-09-14 2021-01-05 浙江大华技术股份有限公司 Pose determination method and device of mobile equipment
CN112639883A (en) * 2020-03-17 2021-04-09 华为技术有限公司 Relative attitude calibration method and related device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
US20180297207A1 (en) * 2017-04-14 2018-10-18 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN110533722A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of the robot fast relocation method and system of view-based access control model dictionary
CN112639883A (en) * 2020-03-17 2021-04-09 华为技术有限公司 Relative attitude calibration method and related device
CN112179330A (en) * 2020-09-14 2021-01-05 浙江大华技术股份有限公司 Pose determination method and device of mobile equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114603555A (en) * 2022-02-24 2022-06-10 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot
CN114603555B (en) * 2022-02-24 2023-12-08 江西省智能产业技术创新研究院 Mobile robot initial pose estimation method and system, computer and robot

Similar Documents

Publication Publication Date Title
US11392146B2 (en) Method for detecting target object, detection apparatus and robot
CN111445526B (en) Method, device and storage medium for estimating pose of image frame
US20180246515A1 (en) Vehicle Automated Parking System and Method
KR101976241B1 (en) Map building system and its method based on multi-robot localization
CN106125724A (en) A kind of method and system of robot autonomous charging
CN111427361B (en) Recharging method, recharging device and robot
KR20110047797A (en) Apparatus and Method for Building and Updating a Map for Mobile Robot Localization
CN107969995B (en) Visual floor sweeping robot and repositioning method thereof
JP2019132664A (en) Vehicle position estimating device, vehicle position estimating method, and vehicle position estimating program
JP2016517981A (en) Method for estimating the angular deviation of a moving element relative to a reference direction
CN111239763A (en) Object positioning method and device, storage medium and processor
CN113223077A (en) Method and device for automatic initial positioning based on vision-assisted laser
CN108445882A (en) Automatic guided vehicle with following function
CN111540013B (en) Indoor AGV trolley positioning method based on multi-camera visual slam
CN110378898A (en) A kind of method, apparatus, storage medium and the equipment of beacon positioning
US20150168155A1 (en) Method and system for measuring a vehicle position indoors
WO2022002149A1 (en) Initial localization method, visual navigation device, and warehousing system
JP2016148956A (en) Positioning device, positioning method and positioning computer program
CN106204516B (en) Automatic charging method and device for robot
CN111127542B (en) Image-based non-cooperative target docking ring extraction method
CN109238286B (en) Intelligent navigation method, intelligent navigation device, computer equipment and storage medium
JP7056840B2 (en) Vehicle position estimation device, vehicle position estimation method, and vehicle position estimation program
KR20210070690A (en) Robotic systems and a returning method of robot for automatic charging
EP3985609A1 (en) Positioning system and method for determining the three-dimensional position of a movable object
Tang et al. Indoor navigation for mobile robots using memorized omni-directional images and robot's motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210806)