WO2019196478A1 - Robot positioning - Google Patents

Robot positioning Download PDF

Info

Publication number
WO2019196478A1
WO2019196478A1 (PCT/CN2018/121180)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
coordinate system
image collector
robot
Prior art date
Application number
PCT/CN2018/121180
Other languages
French (fr)
Chinese (zh)
Inventor
郝立良
申浩
程保山
Original Assignee
北京三快在线科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京三快在线科技有限公司
Publication of WO2019196478A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • the present application relates to the field of navigation technology, and in particular to robot positioning.
  • in current laser-based robot navigation, a laser scans the obstacles in the area where the robot is located to generate a two-dimensional map, and the robot determines a driving route based on the positions of the obstacles in the two-dimensional map, thereby realizing navigation.
  • although a current robot can detect surrounding obstacles by laser scanning at startup, it cannot determine its own position in the coordinate system. It is therefore necessary to manually indicate the robot's position in the coordinate system at startup, so that the robot can plan a path in the two-dimensional map according to that position to realize navigation.
  • embodiments of the present invention provide a robot positioning method, a robot positioning device, and a computer readable storage medium, and a robot.
  • a robot positioning method includes: acquiring an image by an image collector on the robot, and extracting feature points from the image; determining, in a pre-stored feature point library, a target sample feature point that matches the feature point; and determining the coordinates of the image collector in a preset coordinate system according to the pre-stored coordinates of the target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
  • a robot positioning apparatus includes: a feature point extraction module configured to extract feature points from an image collected by an image collector on the robot; a feature point matching module configured to determine, in a pre-stored feature point library, a target sample feature point that matches the feature point; and a first coordinate determining module configured to determine the coordinates of the image collector in a preset coordinate system according to the pre-stored coordinates of the target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the above-described robot positioning method.
  • a robot comprising a laser sensor and an image collector, further comprising a processor, wherein the processor is configured to perform the robot positioning method described above.
  • since the image collector is disposed on the robot, once the coordinates of the image collector in the preset coordinate system are determined, the coordinates of the robot in the preset coordinate system can be determined; and since coordinates are calibrated in the navigation map, operations can be performed on these coordinates so that a navigation path can be planned in the navigation map starting from the coordinates of the image collector in the preset coordinate system. There is thus no need to manually indicate the robot's coordinates, realizing fully autonomous navigation of the robot.
  • FIG. 1 is a schematic flow chart of a robot positioning method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of another robot positioning method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart showing determining a positional relationship and a posture relationship of a laser sensor and an image collector, according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of determining the coordinate mapping relationship between a sample feature point and the image collector in the preset coordinate system according to the depth information and position information of the sample feature point in the sample image and the angle information when the image collector acquires the sample image, according to an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of acquiring an image by the image collector and extracting feature points from the image, according to an embodiment of the invention.
  • FIG. 6 is a schematic flow chart of still another robot positioning method according to an embodiment of the present invention.
  • FIG. 7 is a schematic flow chart showing navigation path planning in a navigation map matching a preset coordinate system according to coordinates of the image collector in a preset coordinate system according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing a hardware configuration of a device in which a robot positioning device is located, according to an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram of a robotic positioning device, in accordance with an embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of another robotic positioning device shown in accordance with an embodiment of the present invention.
  • FIG. 11 is a schematic block diagram of still another robotic positioning device shown in accordance with an embodiment of the present invention.
  • FIG. 12 is a schematic block diagram of a path planning module, shown in accordance with an embodiment of the present invention.
  • the terms first, second, third, etc. may be used to describe various information in this application, but such information should not be limited to these terms; the terms are only used to distinguish information of the same type from one another.
  • for example, without departing from the scope of the present application, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • FIG. 1 is a schematic flow chart of a robot positioning method according to an embodiment of the present invention.
  • the method shown in this embodiment can be applied to a robot, which can include a laser sensor and an image collector.
  • the image collector can be a monocular camera or a binocular camera.
  • the image acquired by the image collector can be a depth image.
  • the image collector can be rotated, for example, by 360° in a preset plane to capture images in different directions.
  • the laser sensor can emit and receive laser light. Also, the laser sensor can be rotated, for example, by rotating 360° in a predetermined plane to emit laser light in a direction in which it is directed.
  • the laser sensor and image collector may be in the same plane or in different planes.
  • the robot positioning method may include the following steps:
  • step S1 an image is acquired by the image collector, and feature points are extracted from the image.
  • the image collector may collect one image or multiple images; the number of images collected can be set as needed.
  • for different images, the number of feature points extracted from the image may be the same or different, and can likewise be set as needed. For one image, for example, the number of extracted feature points may be greater than or equal to six. A concrete sketch of this step follows below.
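  • as a concrete illustration of step S1, the sketch below extracts feature points with OpenCV's ORB detector. The patent does not name a specific detector or library, so ORB and OpenCV are assumptions here; any detector that yields keypoints plus descriptors would fit the described flow.

```python
# Minimal sketch of step S1, assuming OpenCV and the ORB detector
# (the patent itself does not prescribe a detector).
import cv2

def extract_feature_points(image, max_points=500):
    """Extract feature points (keypoints + descriptors) from one image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_points)
    # Keypoints carry pixel positions; descriptors summarize the pixels
    # around each keypoint, playing the role of the "description
    # information" discussed below.
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```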
  • Step S2 determining a target sample feature point that matches the feature point in a pre-stored feature point library.
  • the sample feature points may be pre-stored, for example, a feature point library composed of sample feature points is generated in advance, and the feature point library stores coordinates of each sample feature point in a preset coordinate system.
  • the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system may also be stored in the feature point library (for example, including a positional relationship and an attitude relationship, which may be represented by a matrix).
  • the coordinate mapping relationship can also be stored in other storage spaces than the feature point library.
  • the description information of the sample feature points may also be stored in the feature point library. For example, if the granularity of the sample feature points is a pixel in the image, the description information may include gray values of several (eg, 8) pixels around the pixel as the sample feature point. The description information may also include information such as the category of the sample feature points, the position of the sample feature points in the image, and the like.
  • for an extracted feature point, its description information can be determined, and the pre-stored feature point library can then be queried for the sample feature point whose description information matches it; that sample feature point is the target sample feature point.
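  • a matching step in the spirit of step S2 could compare the extracted descriptors against those stored in the feature point library. The brute-force Hamming matcher below is an illustrative choice for binary descriptors such as ORB's; the patent only requires that description information be matched, not any particular matcher.

```python
import cv2

def match_against_library(query_descriptors, library_descriptors,
                          max_distance=40):
    """Return matches whose description information is close enough;
    the distance threshold is illustrative, not from the patent."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_descriptors, library_descriptors)
    # Each surviving match pairs an extracted feature point with a
    # target sample feature point in the library.
    return [m for m in matches if m.distance < max_distance]
```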
  • Step S3 Determine coordinates of the image collector in a preset coordinate system according to a coordinate mapping relationship between the target sample feature point and the image collector in a preset coordinate system.
  • since the feature point library may store both the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system and the coordinates of the target sample feature point in the preset coordinate system, the stored coordinates can be converted according to the coordinate mapping relationship, thereby deriving the coordinates of the image collector in the preset coordinate system.
  • the image collector is disposed on the robot, after determining the coordinates of the image collector in the preset coordinate system, the coordinates of the robot in the preset coordinate system can be determined. Therefore, it is not necessary to manually instruct the coordinates of the robot, and it is convenient to realize the completely autonomous navigation of the robot.
  • FIG. 2 is a schematic flow chart of another robot positioning method according to an embodiment of the present invention. As shown in FIG. 2, the robot positioning method may include the following steps:
  • Step S4 Before acquiring the sample image by the image collector, determining a positional relationship and a posture relationship of the laser sensor with respect to the image collector.
  • the positional relationship may refer to an offset of the image collector relative to the laser sensor along the x-axis in the predetermined coordinate system, an offset along the y-axis, and an offset along the z-axis.
  • the attitude relationship may refer to a rotation angle and an elevation angle of a direction in which the image collector acquires an image with respect to a laser sensor emission direction.
  • Step S5 determining a first coordinate of the laser sensor in a preset coordinate system.
  • the first coordinate of the laser sensor in a preset coordinate system can be determined by SLAM (Simultaneous Localization And Mapping).
  • the coordinates determined by SLAM may be two-dimensional; if the preset coordinate system is a three-dimensional coordinate system, the two-dimensional coordinates may be converted into three-dimensional coordinates with the z-axis coordinate set to 0.
  • Step S6 Convert the first coordinate according to the position relationship and the attitude relationship to obtain a second coordinate of the image collector in a preset coordinate system.
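  • steps S5 and S6 amount to lifting the 2-D SLAM coordinate to 3-D and applying the laser-to-collector relationship. A minimal sketch, assuming the positional and attitude relationships are packaged as a translation vector and a 3x3 rotation matrix (the names and the packaging are illustrative, not from the patent):

```python
import numpy as np

def collector_coordinate(first_coordinate_2d, rotation, offset):
    """Steps S5-S6 sketch: lift the SLAM (x, y) to (x, y, 0), then apply
    the attitude relationship `rotation` (3x3 matrix) and positional
    relationship `offset` (3-vector) of the image collector relative to
    the laser sensor to obtain the second coordinate."""
    x, y = first_coordinate_2d
    p_laser = np.array([x, y, 0.0])     # z-axis coordinate is 0
    return rotation @ p_laser + offset  # second coordinate
```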
  • Step S7 collecting a sample image by the image collector, and extracting a plurality of sample feature points from the sample image.
  • multiple sample images may be acquired by the image collector; one or more sample feature points may be extracted from each sample image, and the number of sample feature points extracted per sample image may be the same or different.
  • the plurality of sample feature points thus acquired are stored in the feature point library.
  • Step S8 Determine the coordinate mapping relationship according to depth information, position information of the sample feature point in the sample image, and angle information when the image collector collects the sample image.
  • the image acquired by the image collector may be a depth image.
  • the feature point has a granularity of pixels, and the depth image contains depth information for each pixel. Based on the depth information, the distance from the pixel to the image collector, that is, the distance from the feature point to the image collector, can be determined.
  • based on this distance, the position information of the sample feature point in the sample image (for example, the row and column of the pixel corresponding to the sample feature point), and the angle information when the image collector collected the sample image, the coordinate mapping relationship can be determined.
  • for example, suppose the sample feature point is 100 pixels directly above the center point of the image collected by the image collector, where the length of each pixel is preset, say L, so 100 pixels correspond to a length of 100L. The sample feature point, the center of the image, and the image collector form a right triangle, with the distance D from the image collector to the sample feature point as the hypotenuse and 100L as one right-angle side; the length of the other right-angle side, that is, the distance d from the center of the image to the image collector, is obtained by the Pythagorean theorem.
  • if the image collector had rotated by an angle α when it collected the sample image containing this sample feature point, the coordinate mapping relationship may be: the sample feature point lies in the plane at distance d from the image collector in the direction of rotation angle α, at a position 100L directly above the center of that plane. This can be represented by a matrix, as in the sketch below.
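  • the right-triangle computation in this example is mechanical; here it is as a short sketch, with the returned dictionary standing in for the matrix representation mentioned above (the field names are illustrative):

```python
import math

def mapping_from_example(D, n_pixels, pixel_length, alpha):
    """Worked example of step S8: D is the collector-to-feature-point
    distance (hypotenuse), n_pixels * pixel_length is one right-angle
    side (e.g. 100 * L), and d is the other side by the Pythagorean
    theorem. alpha is the collector's rotation angle for this image."""
    offset = n_pixels * pixel_length        # e.g. 100L
    d = math.sqrt(D * D - offset * offset)  # image center to collector
    return {"rotation_angle": alpha,
            "plane_distance": d,
            "offset_above_center": offset}
```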
  • Step S9 Determine, according to the second coordinate and the coordinate mapping relationship, a third coordinate of the sample feature point in a preset coordinate system.
  • the second coordinate may be further converted according to the coordinate mapping relationship to determine a third coordinate of the sample feature point.
  • for example, if the coordinate mapping relationship is represented by a matrix, the third coordinate can be obtained by multiplying the second coordinate by the matrix.
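  • if the mapping relationship is taken to be a 4x4 homogeneous matrix, step S9 reduces to one matrix-vector product. A sketch (the homogeneous packaging is an assumption; the patent only says the relationship "can be represented by a matrix"):

```python
import numpy as np

def third_coordinate(second_coordinate, mapping_matrix):
    """Step S9 sketch: multiply the second coordinate (in homogeneous
    form) by the 4x4 coordinate mapping matrix to get the sample feature
    point's third coordinate in the preset coordinate system."""
    p = np.append(np.asarray(second_coordinate, dtype=float), 1.0)
    q = mapping_matrix @ p
    return q[:3] / q[3]  # back from homogeneous coordinates
```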
  • Step S10 storing the third coordinate and the coordinate mapping relationship.
  • after the coordinates of the sample feature points in the preset coordinate system are obtained, a feature point library may be generated; for each sample feature point, the feature point library may contain the coordinates of that sample feature point in the preset coordinate system, as well as the coordinate mapping relationship between the sample feature point and the image collector in the preset coordinate system (for example, represented by a matrix).
  • steps S4 to S10 may be performed in advance, before robot navigation; by pre-storing in the feature point library the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system, the coordinates of the image collector in the preset coordinate system can be determined later. For example, when steps S1 to S3 are performed, the target sample feature points matching the feature points extracted from the image can be determined in the feature point library, and the pre-stored coordinates of the target sample feature points in the preset coordinate system can be converted according to the pre-stored coordinate mapping relationship, yielding the coordinates of the image collector in the preset coordinate system.
  • since the laser emitted by the laser sensor is easily disturbed by the environment (for example, fog or haze), determining the coordinates of feature points in the preset coordinate system based only on laser scan results has low accuracy.
  • images acquired by the image collector are comparatively insensitive to environmental interference; that is, the coordinate mapping relationship between feature points in an image and the image collector in the preset coordinate system is relatively accurate. Since the feature point library of this embodiment combines the laser scan results with the images acquired by the image collector, matching in the feature point library can determine the coordinates of feature points relatively accurately.
  • determining the positional relationship and the attitude relationship of the laser sensor with respect to the image collector may include: step S401, determining a positional relationship of the laser sensor with respect to the image collector according to a nonlinear optimization algorithm. And attitude relationship.
  • the positional relationship and attitude relationship of the laser sensor relative to the image collector can be determined by manual measurement or by a nonlinear optimization algorithm.
  • for example, the nonlinear optimization algorithm adopted is the least squares method. Since the positions of the laser sensor and the image collector on the robot are relatively fixed, the positional relationship and attitude relationship of the laser sensor relative to the image collector are also relatively fixed.
  • for example, a plurality of known points may be set in space. For a given known point, the laser sensor can emit laser light toward it and receive the reflected laser light, thereby determining the positional relationship and attitude relationship of the known point relative to the laser sensor (represented, for example, by a matrix A, with the spatial coordinates of the laser sensor represented as a matrix P).
  • correspondingly, the image collector can capture an image of the point, thereby determining the positional relationship and attitude relationship of the known point relative to the image collector (represented, for example, by a matrix B, with the spatial coordinates of the image collector represented as a matrix Q).
  • in this case P·A = Q·B holds, so the positional relationship and attitude relationship of the laser sensor relative to the image collector can be represented by the transformation matrix C from matrix A to matrix B, for example P = Q·C. Multiple sets of matrices P and Q can be measured separately and fitted by the least squares method to obtain a relatively accurate matrix C representing the positional relationship and attitude relationship of the laser sensor relative to the image collector. Since the nonlinear optimization algorithm can be executed by software, these relationships can be determined more accurately than by manual measurement.
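  • given the relation P = Q·C above, one least-squares formulation stacks the measured coordinates as rows and solves for C directly. This is a minimal sketch assuming plain numpy; a real calibration would additionally constrain C to be a rigid transform (orthonormal rotation part):

```python
import numpy as np

def estimate_extrinsic(laser_points, camera_points):
    """Solve P = Q @ C in the least-squares sense, where P stacks the
    homogeneous coordinates measured via the laser sensor and Q those
    measured via the image collector, one known point per row."""
    P = np.hstack([np.asarray(laser_points, float),
                   np.ones((len(laser_points), 1))])
    Q = np.hstack([np.asarray(camera_points, float),
                   np.ones((len(camera_points), 1))])
    C, *_ = np.linalg.lstsq(Q, P, rcond=None)  # minimizes ||Q @ C - P||
    return C
```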
  • determining the coordinate mapping relationship according to the depth information of the sample feature points in the sample image, the position information, and the angle information when the image collector acquires the sample image includes:
  • Step S801 determining the coordinate mapping relationship according to the depth information, the position information, the angle information, and the imaging model of the image collector of the sample feature point in the sample image.
  • different image collectors have different imaging models: for example, if the image collector is a pinhole camera, the imaging model is a pinhole model; if the image collector is a fisheye camera, the imaging model is a fisheye model.
  • under different imaging models, the correspondence between the depth information, position information, and angle information of the sample feature points and the coordinate mapping relationship differs. Therefore, taking the imaging model of the image collector into account when determining the coordinates of feature points helps determine those coordinates more accurately.
  • for example, the pinhole model can be represented by the following relationship: s·m′ = A·[R|t]·M′, where m′ is the (u, v) pixel coordinate of the feature point, A is the camera intrinsic matrix (the camera internal parameters), [R|t] is the relationship between the camera and the preset coordinate system (such as the world coordinate system), M′ is the coordinate of the feature point in the preset coordinate system (for example, world coordinates), and s is the z coordinate of the point in the camera coordinate system.
  • the camera internal parameters mentioned here are parameters determined only by the camera itself; once computed for a given camera, they do not need to be computed again.
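  • a projection sketch of the relationship s·m′ = A·[R|t]·M′, assuming numpy and treating A, R, and t as already known (for example, from a prior calibration):

```python
import numpy as np

def project_point(M_world, A, R, t):
    """Pinhole model sketch: project a 3-D point M' in the preset (world)
    coordinate system to pixel coordinates m' = (u, v). A is the 3x3
    intrinsic matrix; [R | t] relates the camera to the world frame; s is
    the point's z coordinate in the camera coordinate system."""
    M = np.append(np.asarray(M_world, dtype=float), 1.0)  # homogeneous
    Rt = np.hstack([R, np.asarray(t, float).reshape(3, 1)])
    s_m = A @ Rt @ M           # equals s * [u, v, 1]
    s = s_m[2]
    return s_m[:2] / s, s
```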
  • FIG. 5 is a schematic flow chart of acquiring an image by the image collector and extracting feature points from the image, according to an embodiment of the invention.
  • acquiring the image by the image collector and extracting feature points from the image may include: step S101, when the robot is started, collecting an image by the image collector and extracting feature points from the image.
  • in other words, step S1 may be performed when the robot is started: an image is acquired by the image collector at startup and feature points are extracted from it. This ensures that whenever the robot is started, its coordinates in the preset coordinate system can be determined, enabling autonomous navigation.
  • FIG. 6 is a schematic flow chart of still another robot positioning method according to an embodiment of the present invention. As shown in FIG. 6, on the basis of the embodiment shown in FIG. 1, the robot positioning method may further include:
  • Step S11 generating a navigation map by scanning the area where the robot is located by the laser sensor;
  • the navigation map generated by the laser sensor scanning the area where the robot is located may be a two-dimensional map.
  • the laser sensor can scan for obstacles within a predetermined distance of the robot in the area where it is located; from the laser light reflected by an obstacle, the position of the obstacle in the area can be determined, and a navigation map can then be generated from the obstacle positions.
  • a navigation map is generated, for example, by SLAM.
  • Step S12 matching the navigation map and the preset coordinate system.
  • if the navigation map is a two-dimensional map and the preset coordinate system is a three-dimensional coordinate system, only two dimensions of the three-dimensional coordinate system may be matched to the navigation map.
  • for example, if the two-dimensional map is parallel to the horizontal plane and the x-axis and y-axis are the axes parallel to the horizontal plane, the x-axis and y-axis of the three-dimensional coordinate system can be matched to the navigation map, so that x-axis and y-axis coordinates are calibrated in the navigation map.
  • Step S13 Perform navigation path planning on the navigation map matching the preset coordinate system according to the coordinates of the image collector in the preset coordinate system.
  • since the image collector is disposed on the robot, once the coordinates of the image collector in the preset coordinate system are determined, the coordinates of the robot in the preset coordinate system are determined as well; and since coordinates are calibrated in the navigation map, operations can be performed on these coordinates so that a navigation path can be planned in the navigation map starting from the coordinates of the image collector in the preset coordinate system. There is thus no need to manually indicate the robot's coordinates, realizing fully autonomous navigation of the robot.
  • for example, the navigation path planning can be performed according to the amcl (adaptive Monte Carlo Localization) positioning algorithm, a costmap, and a path planning algorithm, as sketched below.
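  • the patent names amcl and a costmap but does not fix the path planning algorithm. For illustration, the sketch below runs a plain grid A* over a costmap; treating A* as the planner is an assumption:

```python
import heapq

def plan_path(costmap, start, goal, lethal=100):
    """Grid A* sketch over a 2-D costmap (list of lists of cell costs);
    cells with cost >= lethal are treated as obstacles."""
    rows, cols = len(costmap), len(costmap[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    visited = {start}
    while frontier:
        _, g, current, path = heapq.heappop(frontier)
        if current == goal:
            return path  # list of (row, col) cells
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in visited
                    and costmap[nxt[0]][nxt[1]] < lethal):
                visited.add(nxt)
                ng = g + 1 + costmap[nxt[0]][nxt[1]]
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt,
                                          path + [nxt]))
    return None  # no path found
```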
  • FIG. 7 is a schematic flow chart showing navigation path planning in a navigation map matching a preset coordinate system according to coordinates of the image collector in a preset coordinate system according to an embodiment of the present invention.
  • the navigation path planning in the navigation map matching the preset coordinate system according to the coordinates of the image collector in the preset coordinate system includes: Step S1301, according to the image collector Positioning on the robot determines a projection of the contour of the robot in the preset coordinate system; step S1302, according to the projection, performs navigation path planning in a navigation map matching the preset coordinate system.
  • the projection of the robot's contour in the preset coordinate system can be determined from the position of the image collector on the robot, and navigation path planning can be performed in the navigation map matching the preset coordinate system according to that projection. This ensures the robot does not touch obstacles along the path and moves smoothly. In addition, the orientation of the robot in the preset coordinate system can be determined from the projection, which makes the navigation path easier to plan. A footprint sketch follows below.
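  • one way to use the projection during planning, sketched under the assumption that the contour is approximated by a rectangle on the costmap grid (the rectangle and the cell arithmetic are illustrative, not from the patent):

```python
def footprint_cells(center_cell, half_x, half_y):
    """Steps S1301-S1302 sketch: project a rectangular robot contour,
    centered on the cell containing the image collector, onto the grid.
    A planner can reject any pose whose footprint overlaps an obstacle."""
    cx, cy = center_cell
    return {(cx + i, cy + j)
            for i in range(-half_x, half_x + 1)
            for j in range(-half_y, half_y + 1)}

def pose_is_free(costmap, center_cell, half_x, half_y, lethal=100):
    """True if every cell under the robot footprint is traversable."""
    return all(0 <= r < len(costmap) and 0 <= c < len(costmap[0])
               and costmap[r][c] < lethal
               for r, c in footprint_cells(center_cell, half_x, half_y))
```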
  • the present application also provides an embodiment of the robot positioning device.
  • the embodiment of the robot positioning device of the present application can be applied to a device such as a robot.
  • the device embodiment may be implemented by software, or may be implemented by hardware or a combination of hardware and software.
  • taking software implementation as an example, the processor of the device where the robot positioning apparatus is located reads the corresponding computer program instructions from non-volatile memory into memory and runs them.
  • FIG. 8 shows a hardware structure diagram of the device where the robot positioning apparatus is located; in addition to the processor 801, the memory 802, the network interface 803, and the non-volatile memory shown in FIG. 8, the device generally includes other hardware according to its actual function, which is not described in detail herein.
  • FIG. 9 is a schematic block diagram of a robotic positioning device, in accordance with an embodiment of the present invention.
  • the robot includes a laser sensor and an image collector, as shown in FIG. 9, the robot positioning device includes:
  • a feature point extraction module 1 for acquiring an image by the image collector, and extracting feature points from the image
  • a feature point matching module 2 configured to determine, in a pre-stored feature point library, a target sample feature point that matches the feature point;
  • a first coordinate determining module 3 configured to: according to pre-stored coordinates of the target sample feature point in a preset coordinate system, and coordinate mapping between the target sample feature point and the image collector in a preset coordinate system And determining coordinates of the image collector in a preset coordinate system.
  • FIG. 10 is a schematic block diagram of another robotic positioning device shown in accordance with an embodiment of the present invention. As shown in FIG. 10, on the basis of the embodiment shown in FIG. 9, the robot positioning device further includes:
  • a relationship determining module 4 configured to determine a positional relationship and a posture relationship of the laser sensor with respect to the image collector;
  • a second coordinate determining module 5 configured to determine a first coordinate of the laser sensor in a preset coordinate system
  • a coordinate conversion module 6 configured to convert the first coordinate according to the positional relationship and the attitude relationship to obtain a second coordinate of the image collector in a preset coordinate system
  • the feature point extraction module 1 is further configured to collect a sample image by using the image collector, and extract a plurality of sample feature points from the sample image;
  • the mapping determination module 7 is configured to determine the coordinate mapping relationship according to the depth information, the position information of the sample feature point in the sample image, and the angle information when the image collector acquires the sample image;
  • the third coordinate determining module 8 is configured to determine, according to the second coordinate and the coordinate mapping relationship, a third coordinate of the sample feature point in a preset coordinate system;
  • the storage module 9 is configured to store a third coordinate of the sample feature point in a preset coordinate system and a coordinate mapping relationship between the sample feature point and the image collector in a preset coordinate system.
  • the relationship determination module 4 is configured to determine a positional relationship and an attitude relationship of the laser sensor relative to the image collector according to a nonlinear optimization algorithm.
  • the mapping determining module 7 is configured to determine, according to the depth information, the position information, the angle information, and the imaging model of the image collector of the sample feature point in the sample image. Coordinate mapping relationship.
  • the feature point extraction module 1 is configured to acquire an image from the image by the image collector when the robot is started, and extract feature points from the image.
  • FIG. 11 is a schematic block diagram of still another robotic positioning device shown in accordance with an embodiment of the present invention. As shown in FIG. 11, on the basis of the embodiment shown in FIG. 9, the robot positioning device further includes:
  • a map generating module 10 configured to generate a navigation map by scanning, by the laser sensor, an area where the robot is located;
  • a map matching module 11 configured to match the navigation map and the preset coordinate system
  • the path planning module 12 is configured to perform navigation path planning in a navigation map matching the preset coordinate system according to coordinates of the image collector in a preset coordinate system.
  • FIG. 12 is a schematic block diagram of a path planning module according to an embodiment of the present invention. As shown in FIG. 12, on the basis of the embodiment shown in FIG. 11, the path planning module 12 includes:
  • a projection determining sub-module 1201 configured to determine a projection of the contour of the robot in the preset coordinate system according to a position of the image collector on the robot;
  • the path planning sub-module 1202 is configured to perform navigation path planning in the navigation map matching the preset coordinate system according to the projection.
  • Embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the robot positioning method of any of the above embodiments.
  • Embodiments of the present invention also provide a robot, including a laser sensor and an image collector, further comprising a processor, wherein the processor is configured to perform the robot positioning method of any of the above embodiments.
  • since the device embodiments basically correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present application. Those of ordinary skill in the art can understand and implement them without creative effort.

Abstract

A robot positioning method and device. The robot positioning method comprises: acquiring an image by means of an image acquisition unit on a robot, and extracting a feature point from the image (S1); determining, in the pre-stored feature point library, a target sample feature point that matches the feature point (S2); and determining the coordinate of the image acquisition unit in a preset coordinate system according to the coordinate of the pre-stored target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image acquisition unit in the preset coordinate system (S3).

Description

Robot positioning
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 2018103308968, filed on April 13, 2018 and entitled "Robot positioning method and robot positioning device", which is incorporated herein by reference in its entirety.
Technical field
The present application relates to the field of navigation technology, and in particular to robot positioning.
Background
In current laser-based robot navigation, a laser scans the obstacles in the area where the robot is located to generate a two-dimensional map, and the robot determines a driving route based on the positions of the obstacles in the two-dimensional map, thereby realizing navigation.
However, although a current robot can detect surrounding obstacles by laser scanning at startup, it cannot determine its own position in the coordinate system. It is therefore necessary to manually indicate the robot's position in the coordinate system at startup, so that the robot can plan a path in the two-dimensional map according to that position to realize navigation.
Summary of the invention
In view of this, embodiments of the present invention provide a robot positioning method, a robot positioning device, a computer readable storage medium, and a robot.
According to a first aspect of the embodiments of the present invention, a robot positioning method is provided, including: acquiring an image by an image collector on the robot, and extracting feature points from the image; determining, in a pre-stored feature point library, a target sample feature point that matches the feature point; and determining the coordinates of the image collector in a preset coordinate system according to the pre-stored coordinates of the target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
According to a second aspect of the embodiments of the present invention, a robot positioning apparatus is provided, including: a feature point extraction module configured to extract feature points from an image collected by an image collector on the robot; a feature point matching module configured to determine, in a pre-stored feature point library, a target sample feature point that matches the feature point; and a first coordinate determining module configured to determine the coordinates of the image collector in a preset coordinate system according to the pre-stored coordinates of the target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
According to a third aspect of the embodiments of the present invention, a computer readable storage medium is provided, on which a computer program is stored; when executed by a processor, the program performs the above robot positioning method.
According to a fourth aspect of the embodiments of the present invention, a robot is provided, including a laser sensor, an image collector, and a processor, wherein the processor is configured to perform the above robot positioning method.
It can be seen from the above embodiments that, since the image collector is disposed on the robot, once the coordinates of the image collector in the preset coordinate system are determined, the coordinates of the robot in the preset coordinate system can be determined; and since coordinates are calibrated in the navigation map, operations can be performed on these coordinates so that a navigation path can be planned in the navigation map starting from the coordinates of the image collector in the preset coordinate system. There is thus no need to manually indicate the robot's coordinates, realizing fully autonomous navigation of the robot.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present invention.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.
FIG. 1 is a schematic flow chart of a robot positioning method according to an embodiment of the present invention.
FIG. 2 is a schematic flow chart of another robot positioning method according to an embodiment of the present invention.
FIG. 3 is a schematic flow chart of determining the positional relationship and attitude relationship of the laser sensor and the image collector according to an embodiment of the present invention.
FIG. 4 is a schematic flow chart of determining the coordinate mapping relationship between a sample feature point and the image collector in the preset coordinate system according to the depth information and position information of the sample feature point in the sample image and the angle information when the image collector acquires the sample image, according to an embodiment of the present invention.
FIG. 5 is a schematic flow chart of acquiring an image by the image collector and extracting feature points from the image according to an embodiment of the present invention.
FIG. 6 is a schematic flow chart of still another robot positioning method according to an embodiment of the present invention.
FIG. 7 is a schematic flow chart of performing navigation path planning in a navigation map matching the preset coordinate system according to the coordinates of the image collector in the preset coordinate system, according to an embodiment of the present invention.
FIG. 8 is a schematic diagram of a hardware structure of the device where a robot positioning apparatus is located according to an embodiment of the present invention.
FIG. 9 is a schematic block diagram of a robot positioning apparatus according to an embodiment of the present invention.
FIG. 10 is a schematic block diagram of another robot positioning apparatus according to an embodiment of the present invention.
FIG. 11 is a schematic block diagram of still another robot positioning apparatus according to an embodiment of the present invention.
FIG. 12 is a schematic block diagram of a path planning module according to an embodiment of the present invention.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. The singular forms "a", "the", and "said" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various information, such information should not be limited to these terms; the terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
FIG. 1 is a schematic flow chart of a robot positioning method according to an embodiment of the present invention. The method shown in this embodiment can be applied to a robot, which may include a laser sensor and an image collector.
In one embodiment, the image collector may be a monocular camera or a binocular camera. The image acquired by the image collector may be a depth image. Moreover, the image collector can rotate, for example by 360° in a preset plane, to capture images in different directions.
In one embodiment, the laser sensor can emit and receive laser light. Moreover, the laser sensor can rotate, for example by 360° in a preset plane, to emit laser light in the direction it faces.
In one embodiment, the laser sensor and the image collector may be located in the same plane or in different planes.
As shown in FIG. 1, the robot positioning method may include the following steps:
Step S1: acquire an image by the image collector, and extract feature points from the image.
In one embodiment, the image collector may collect one image or multiple images; the number of images collected can be set as needed. For different images, the number of feature points extracted from the image may be the same or different, and can likewise be set as needed; for one image, for example, the number of extracted feature points may be greater than or equal to six.
Step S2: determine, in a pre-stored feature point library, a target sample feature point that matches the feature point.
In one embodiment, sample feature points may be pre-stored; for example, a feature point library composed of sample feature points is generated in advance, and the feature point library stores the coordinates of each sample feature point in the preset coordinate system.
The feature point library may also store the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system (including, for example, a positional relationship and an attitude relationship, which may be represented by a matrix). Of course, the coordinate mapping relationship may also be stored in storage space other than the feature point library.
The feature point library may also store description information of the sample feature points. For example, if the granularity of the sample feature points is a pixel in the image, the description information may include the gray values of several (for example, 8) pixels around the pixel serving as the sample feature point. The description information may also include information such as the category of the sample feature point and the position of the sample feature point in the image.
For an extracted feature point, its description information can be determined, and the pre-stored feature point library can then be queried for the sample feature point whose description information matches it; that sample feature point is the target sample feature point.
Step S3: determine the coordinates of the image collector in the preset coordinate system according to the pre-stored coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
In one embodiment, since the feature point library may store both the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system and the coordinates of the target sample feature point in the preset coordinate system, the coordinates of the target sample feature point can be converted according to the coordinate mapping relationship, thereby deriving the coordinates of the image collector in the preset coordinate system.
Since the image collector is disposed on the robot, once the coordinates of the image collector in the preset coordinate system are determined, the coordinates of the robot in the preset coordinate system can be determined. There is thus no need to manually indicate the robot's coordinates, which facilitates fully autonomous navigation of the robot.
FIG. 2 is a schematic flow chart of another robot positioning method according to an embodiment of the present invention. As shown in FIG. 2, the robot positioning method may include the following steps:
Step S4: before acquiring a sample image by the image collector, determine the positional relationship and attitude relationship of the laser sensor relative to the image collector.
In one embodiment, the positional relationship may refer to the offsets of the image collector relative to the laser sensor along the x-axis, the y-axis, and the z-axis in the preset coordinate system. The attitude relationship may refer to the rotation angle and elevation angle of the direction in which the image collector acquires images relative to the emission direction of the laser sensor.
Step S5: determine a first coordinate of the laser sensor in the preset coordinate system.
In one embodiment, the first coordinate of the laser sensor in the preset coordinate system can be determined by SLAM (Simultaneous Localization And Mapping).
In one embodiment, the coordinates determined by SLAM may be two-dimensional; if the preset coordinate system is a three-dimensional coordinate system, the two-dimensional coordinates may be converted into three-dimensional coordinates with the z-axis coordinate set to 0.
Step S6: convert the first coordinate according to the positional relationship and the attitude relationship to obtain a second coordinate of the image collector in the preset coordinate system.
Step S7: collect a sample image by the image collector, and extract a plurality of sample feature points from the sample image.
In one embodiment, multiple sample images may be acquired by the image collector; one or more sample feature points may be extracted from each sample image, and the number of sample feature points extracted per sample image may be the same or different. The sample feature points thus acquired are stored in the feature point library.
Step S8: determine the coordinate mapping relationship according to the depth information and position information of the sample feature point in the sample image and the angle information when the image collector collects the sample image.
In one embodiment, the image acquired by the image collector may be a depth image. For example, the granularity of a feature point is a pixel, and the depth image contains depth information for each pixel. From the depth information, the distance from the pixel to the image collector, that is, the distance from the feature point to the image collector, can be determined.
Based on this distance, the position information of the sample feature point in the sample image (for example, the row and column of the pixel corresponding to the sample feature point), and the angle information when the image collector collected the sample image, the coordinate mapping relationship can be determined.
For example, suppose the sample feature point is 100 pixels directly above the center point of the image collected by the image collector, where the length of each pixel is preset, say L, so 100 pixels correspond to a length of 100L. The sample feature point, the center of the image, and the image collector form a right triangle, with the distance D from the image collector to the sample feature point as the hypotenuse and 100L as one right-angle side; the length of the other right-angle side, that is, the distance d from the center of the image to the image collector, is obtained by the Pythagorean theorem.
If the image collector had rotated by an angle α when it collected the sample image containing this sample feature point, the coordinate mapping relationship may be: the sample feature point lies in the plane at distance d from the image collector in the direction of rotation angle α, at a position 100L directly above the center of that plane. This can be represented by a matrix.
步骤S9,根据所述第二坐标和所述坐标映射关系,确定所述样本特征点在预设坐标系中的第三坐标。Step S9: Determine, according to the second coordinate and the coordinate mapping relationship, a third coordinate of the sample feature point in a preset coordinate system.
在一个实施例中,在得到所述坐标映射关系之后,进一步可以根据所述坐标映射关系对第二坐标进行转换以确定样本特征点的第三坐标。例如,坐标映射关系通过矩阵来表示,那么通过将第二坐标与该矩阵做乘法,即可得到第三坐标。In an embodiment, after the coordinate mapping relationship is obtained, the second coordinate may be further converted according to the coordinate mapping relationship to determine a third coordinate of the sample feature point. For example, the coordinate mapping relationship is represented by a matrix, and then the third coordinate can be obtained by multiplying the second coordinate with the matrix.
步骤S10,存储所述第三坐标和所述坐标映射关系。Step S10, storing the third coordinate and the coordinate mapping relationship.
在一个实施例中,在得到样本特征点在预设坐标系中的坐标后,可以生成特征点库,特征点库中针对每个样本特征点可以包含样本特征点在预设坐标系中的坐标,以及样本特征点和图像采集器在预设坐标系中的坐标映射关系(例如可以通过矩阵表示)。In an embodiment, after obtaining the coordinates of the sample feature points in the preset coordinate system, a feature point library may be generated, and the feature point library may include coordinates of the sample feature points in the preset coordinate system for each sample feature point. And the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system (for example, can be represented by a matrix).
需要说明的是,上述步骤S4至S10可以是在机器人导航之前预先执行的,通过特征点库预先存储样本特征点和图像采集器在预设坐标系中的坐标映射关系,便于后续确定图像采集器在预设坐标系中的坐标,例如在执行步骤S1至S3时,可以在特征点库中确定与从图像提取的特征点相匹配的目标样本特征点,并根据预先存储的坐标映射关系对预先存储的目标样本特征点在预设坐标系中的坐标进行转换,从而得到述图像采集器在预设坐标系中的坐标。It should be noted that the foregoing steps S4 to S10 may be pre-executed before the robot navigation, and the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system is pre-stored through the feature point library, so as to facilitate subsequent determination of the image collector. The coordinates in the preset coordinate system, for example, when performing steps S1 to S3, the target sample feature points matching the feature points extracted from the image may be determined in the feature point library, and the pre-stored coordinate mapping relationship is used in advance. The stored target sample feature points are converted in the coordinates in the preset coordinate system, thereby obtaining the coordinates of the image collector in the preset coordinate system.
In addition, since the laser light emitted by a laser sensor is easily disturbed by the environment (for example, fog or haze), the coordinates of feature points in the preset coordinate system determined solely from the results of laser sensor scans have low accuracy. Images collected by the image collector, in contrast, are relatively insusceptible to environmental interference; that is, the coordinate mapping relationship between the feature points in an image and the image collector in the preset coordinate system is relatively accurate. The feature point library of this embodiment is obtained by combining the results of the laser sensor scans with the images collected by the image collector, so the coordinates of feature points can be determined relatively precisely by matching within the feature point library.

FIG. 3 is a schematic flowchart of determining the positional relationship and attitude relationship of the laser sensor with respect to the image collector according to an embodiment of the present invention. As shown in FIG. 3, determining the positional relationship and attitude relationship of the laser sensor with respect to the image collector may include: step S401, determining the positional relationship and attitude relationship of the laser sensor with respect to the image collector according to a nonlinear optimization algorithm.

In one embodiment, the positional relationship and attitude relationship of the laser sensor with respect to the image collector can be determined by manual measurement, or can be computed by a nonlinear optimization algorithm.

For example, the nonlinear optimization algorithm may be the method of least squares. Since the positions of the laser sensor and the image collector on the robot are relatively fixed, the positional relationship and attitude relationship of the laser sensor with respect to the image collector are also relatively fixed.

For example, a number of known points can be set up in space. For a given known point, the laser sensor can emit laser light toward the known point and receive the reflected laser light, thereby determining the position and attitude of the known point relative to the laser sensor (represented, for example, by a matrix A, with the spatial coordinates of the laser sensor represented as a matrix P). Correspondingly, an image of the same point can be collected by the image collector, thereby determining the position and attitude of the known point relative to the image collector (represented, for example, by a matrix B, with the spatial coordinates of the image collector represented as a matrix Q). In this case, P·A = Q·B holds, so the positional relationship and attitude relationship of the laser sensor with respect to the image collector can be represented by the transformation matrix C from matrix A to matrix B, for example P = Q·C.

Then, for multiple known points, several sets of the matrices P, Q and C can be measured, and the method of least squares can be applied to these sets to compute a relatively accurate matrix C representing the positional relationship and attitude relationship of the laser sensor with respect to the image collector. Since the nonlinear optimization algorithm can be executed by software, the positional relationship and attitude relationship of the laser sensor with respect to the image collector can be determined more accurately than by manual measurement.
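A minimal sketch of the least-squares step, assuming the relation P = Q·C is stacked over many known points and solved for C (the measurements below are randomly generated stand-ins for real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown laser-to-camera transform to be recovered (ground truth for the demo).
C_true = rng.normal(size=(4, 4))

# Stacked camera-side measurements Q and noisy laser-side measurements P
# over 20 known points, related by P = Q @ C.
Q = rng.normal(size=(20, 4))
P = Q @ C_true + 0.01 * rng.normal(size=(20, 4))

# Least squares: minimize ||Q @ C - P||^2 over C.
C_est, _, _, _ = np.linalg.lstsq(Q, P, rcond=None)
print(np.round(C_est - C_true, 2))  # near zero for small noise
```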
FIG. 4 is a schematic flowchart of determining the coordinate mapping relationship according to the depth information and position information of the sample feature points in the sample image and the angle information when the image collector collected the sample image, according to an embodiment of the present invention. As shown in FIG. 4, determining the coordinate mapping relationship according to the depth information and position information of the sample feature points in the sample image and the angle information when the image collector collected the sample image includes:

Step S801: determining the coordinate mapping relationship according to the depth information and position information of the sample feature point in the sample image, the angle information, and the imaging model of the image collector.

In one embodiment, imaging models differ between image collectors: for example, if the image collector is a pinhole camera, the imaging model is a pinhole model, and if the image collector is a fisheye camera, the imaging model is a fisheye model. The correspondence between the coordinate mapping relationship and the depth information, position information and angle information of the sample feature points in the sample image differs between imaging models. Therefore, taking the imaging model of the image collector into account when determining the coordinates of feature points helps determine those coordinates more accurately.

In one embodiment, taking the pinhole model as an example, the model can be expressed by the following relation:
s·m' = A·[R|t]·M'
where m' is the uv coordinate of the feature point, A is the camera intrinsic matrix, [R|t] is the relationship between the camera and the preset coordinate system (for example, the world coordinate system), M' is the relationship between the feature point and the preset coordinate system, and s is the z coordinate of the object in the camera coordinate system. The camera intrinsics referred to here are parameters determined solely by the camera itself; once this value has been computed for a given camera, it does not need to be computed again.
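A minimal sketch of this pinhole relation (the intrinsic matrix, pose and point below are illustrative, not calibrated values):

```python
import numpy as np

# Illustrative camera intrinsic matrix A.
A = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])

# Illustrative extrinsics [R|t]: identity rotation, small translation.
Rt = np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])

# Homogeneous world point M'.
M = np.array([0.5, -0.2, 2.0, 1.0])

# s * m' = A @ [R|t] @ M'
m_scaled = A @ Rt @ M
s = m_scaled[2]          # z coordinate in the camera coordinate system
u, v = m_scaled[:2] / s  # uv (pixel) coordinates of the feature point
print(s, u, v)
```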
FIG. 5 is a schematic flowchart of collecting an image by the image collector and extracting feature points from the image according to an embodiment of the present invention. As shown in FIG. 5, collecting an image by the image collector and extracting feature points from the image includes: step S101, when the robot starts up, collecting an image by the image collector and extracting feature points from the image.

In one embodiment, step S1 can be performed as soon as the robot starts up; that is, when the robot starts up, an image is collected by the image collector and feature points are extracted from the image. This ensures that as soon as the robot has started, it can determine its own coordinates in the preset coordinate system and thus complete autonomous navigation.

FIG. 6 is a schematic flowchart of yet another robot positioning method according to an embodiment of the present invention. As shown in FIG. 6, on the basis of the embodiment shown in FIG. 1, the robot positioning method may further include:

Step S11: generating a navigation map by scanning the area where the robot is located with the laser sensor.

The navigation map generated by scanning the area where the robot is located with the laser sensor may be a two-dimensional map. The laser sensor can scan obstacles in the robot's area whose distance to the robot is within a preset range; the position of an obstacle within the area can be determined from the laser light it reflects, and the navigation map can then be generated from the obstacle positions, for example by SLAM.
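A minimal sketch of turning laser range-and-bearing returns into obstacle cells of a two-dimensional grid map (a simplified stand-in for a full SLAM pipeline; the grid size, resolution and returns are assumptions):

```python
import numpy as np

resolution = 0.05                            # metres per cell (assumption)
grid = np.zeros((400, 400), dtype=np.uint8)  # 20 m x 20 m map (assumption)
robot_cell = np.array([200, 200])            # robot at the map centre

# Illustrative laser returns: ranges in metres and bearings in degrees.
ranges = np.array([1.0, 1.5, 2.0])
bearings = np.deg2rad([0.0, 45.0, 90.0])

for r, b in zip(ranges, bearings):
    # Cell hit by the reflected laser beam (row ~ x, column ~ y here).
    hit = robot_cell + np.array([np.cos(b), np.sin(b)]) * r / resolution
    grid[tuple(hit.astype(int))] = 1         # mark the obstacle cell
print(int(grid.sum()), "obstacle cells marked")
```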
Step S12: matching the navigation map and the preset coordinate system.

In one embodiment, if the navigation map is a two-dimensional map and the preset coordinate system is a three-dimensional coordinate system, only two dimensions of the three-dimensional coordinate system may be matched to the navigation map. For example, if the two-dimensional map is a map parallel to the horizontal plane, and the x axis and y axis of the three-dimensional coordinate system are the axes parallel to the horizontal plane, then the x axis and y axis of the three-dimensional coordinate system can be matched to the navigation map, so that x-axis coordinates and y-axis coordinates can be marked in the navigation map.
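A minimal sketch of this matching, assuming a grid map whose origin and resolution are known (both values below are assumptions): the z axis is simply dropped when projecting a 3D coordinate onto the map.

```python
import numpy as np

resolution = 0.05                  # metres per map cell (assumption)
origin = np.array([-10.0, -10.0])  # world x/y of map cell (0, 0) (assumption)

def world_to_map(xyz):
    """Project a 3D preset-coordinate-system point onto the 2D map grid."""
    xy = np.asarray(xyz, dtype=float)[:2]  # keep x and y, drop z
    return tuple(((xy - origin) / resolution).astype(int))

print(world_to_map([1.25, -0.4, 0.9]))     # the z value 0.9 is ignored
```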
Step S13: performing navigation path planning in the navigation map matched with the preset coordinate system according to the coordinates of the image collector in the preset coordinate system.

In one embodiment, since the image collector is mounted on the robot, once the coordinates of the image collector in the preset coordinate system have been determined, the coordinates of the robot in the preset coordinate system can also be determined. Because coordinates have been marked in the navigation map, computations can be performed on them, so that a navigation path can be planned in the navigation map starting from the coordinates of the image collector in the preset coordinate system. There is thus no need to manually indicate the robot's coordinates, and the robot can navigate fully autonomously.

The navigation path planning can be performed using, for example, the amcl (adaptive Monte Carlo Localization) algorithm, a costmap, and a path planning algorithm.
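As a minimal stand-in for a real planner (amcl and costmap-based planners are considerably more involved), a breadth-first search over the free cells of the grid map illustrates the planning step:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of grid cells from start to goal, avoiding obstacles (1s)."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None                           # goal unreachable

demo = [[0] * 5 for _ in range(5)]
demo[2][1] = demo[2][2] = demo[2][3] = 1  # a wall of obstacle cells
print(plan_path(demo, (0, 0), (4, 4)))
```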
FIG. 7 is a schematic flowchart of performing navigation path planning in a navigation map matched with the preset coordinate system according to the coordinates of the image collector in the preset coordinate system, according to an embodiment of the present invention. As shown in FIG. 7, performing navigation path planning in the navigation map matched with the preset coordinate system according to the coordinates of the image collector in the preset coordinate system includes: step S1301, determining the projection of the robot's contour in the preset coordinate system according to the position of the image collector on the robot; step S1302, performing navigation path planning in the navigation map matched with the preset coordinate system according to the projection.

In one embodiment, since the robot has a certain volume, in order to prevent it from being blocked or losing balance by colliding with obstacles while moving, the projection of the robot's contour in the preset coordinate system can be determined according to the position of the image collector on the robot, and navigation path planning can be performed in the navigation map matched with the preset coordinate system according to that projection. This ensures that the robot does not touch obstacles on the path and moves smoothly and steadily. Furthermore, the robot's orientation in the preset coordinate system can also be determined from the projection, which facilitates planning the navigation path.
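A minimal sketch of deriving the contour projection from the image collector's pose (the rectangular footprint, the collector's offset from the robot centre and the heading are all assumptions):

```python
import numpy as np

collector_xy = np.array([2.0, 3.0])      # collector position in the map frame
heading = np.deg2rad(30.0)               # robot orientation (assumption)
collector_offset = np.array([0.2, 0.0])  # collector relative to robot centre

# 2D rotation for the robot's heading.
R = np.array([[np.cos(heading), -np.sin(heading)],
              [np.sin(heading),  np.cos(heading)]])
centre = collector_xy - R @ collector_offset

# Rectangular robot contour in the robot's own frame (half-extents 0.3 x 0.2 m).
footprint = np.array([[0.3, 0.2], [0.3, -0.2], [-0.3, -0.2], [-0.3, 0.2]])

# Contour projected into the preset coordinate system.
projected = footprint @ R.T + centre
print(np.round(projected, 3))
```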
Corresponding to the embodiments of the robot positioning method described above, the present application also provides embodiments of a robot positioning device.

The embodiments of the robot positioning device of the present application can be applied to devices such as robots. The device embodiments may be implemented by software, or by hardware or a combination of hardware and software. Taking software implementation as an example, the device, in the logical sense, is formed by the processor of the device in which it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them. At the hardware level, FIG. 8 is a schematic diagram of the hardware structure of the device in which the robot positioning device of the present application is located; besides the processor 801, memory 802, network interface 803 and non-volatile memory 804 shown in FIG. 8, the device in which the device of the embodiment is located may generally also include other hardware according to its actual function, which will not be described again here.

FIG. 9 is a schematic block diagram of a robot positioning device according to an embodiment of the present invention. The robot includes a laser sensor and an image collector. As shown in FIG. 9, the robot positioning device includes:

a feature point extraction module 1, configured to collect an image by the image collector and extract feature points from the image;

a feature point matching module 2, configured to determine, in a pre-stored feature point library, target sample feature points that match the feature points;

a first coordinate determining module 3, configured to determine the coordinates of the image collector in the preset coordinate system according to the pre-stored coordinates of the target sample feature points in the preset coordinate system and the coordinate mapping relationship between the target sample feature points and the image collector in the preset coordinate system.
FIG. 10 is a schematic block diagram of another robot positioning device according to an embodiment of the present invention. As shown in FIG. 10, on the basis of the embodiment shown in FIG. 9, the robot positioning device further includes:

a relationship determining module 4, configured to determine the positional relationship and attitude relationship of the laser sensor with respect to the image collector;

a second coordinate determining module 5, configured to determine a first coordinate of the laser sensor in the preset coordinate system;

a coordinate conversion module 6, configured to convert the first coordinate according to the positional relationship and the attitude relationship to obtain a second coordinate of the image collector in the preset coordinate system;

wherein the feature point extraction module 1 is further configured to collect a sample image by the image collector and extract a plurality of sample feature points from the sample image;

a mapping determining module 7, configured to determine the coordinate mapping relationship according to the depth information and position information of the sample feature points in the sample image and the angle information when the image collector collected the sample image;

a third coordinate determining module 8, configured to determine a third coordinate of the sample feature point in the preset coordinate system according to the second coordinate and the coordinate mapping relationship;

a storage module 9, configured to store the third coordinate of the sample feature point in the preset coordinate system and the coordinate mapping relationship between the sample feature point and the image collector in the preset coordinate system.

In one embodiment, the relationship determining module 4 is configured to determine the positional relationship and attitude relationship of the laser sensor with respect to the image collector according to a nonlinear optimization algorithm.

In one embodiment, the mapping determining module 7 is configured to determine the coordinate mapping relationship according to the depth information and position information of the sample feature points in the sample image, the angle information, and the imaging model of the image collector.

In one embodiment, the feature point extraction module 1 is configured to collect an image by the image collector when the robot starts up and extract feature points from the image.

FIG. 11 is a schematic block diagram of yet another robot positioning device according to an embodiment of the present invention. As shown in FIG. 11, on the basis of the embodiment shown in FIG. 9, the robot positioning device further includes:

a map generating module 10, configured to generate a navigation map by scanning the area where the robot is located with the laser sensor;

a map matching module 11, configured to match the navigation map and the preset coordinate system;

a path planning module 12, configured to perform navigation path planning in the navigation map matched with the preset coordinate system according to the coordinates of the image collector in the preset coordinate system.
FIG. 12 is a schematic block diagram of a path planning module according to an embodiment of the present invention. As shown in FIG. 12, on the basis of the embodiment shown in FIG. 11, the path planning module 12 includes:
a projection determining submodule 1201, configured to determine the projection of the robot's contour in the preset coordinate system according to the position of the image collector on the robot;

a path planning submodule 1202, configured to perform navigation path planning in the navigation map matched with the preset coordinate system according to the projection.

An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, performs the robot positioning method of any of the above embodiments.

An embodiment of the present invention further provides a robot, including a laser sensor and an image collector, and further including a processor, wherein the processor is configured to perform the robot positioning method of any of the above embodiments.

For the implementation of the functions and roles of each module in the above devices, reference can be made to the implementation of the corresponding steps in the above methods; details are not repeated here.

As the device embodiments substantially correspond to the method embodiments, reference can be made to the partial description of the method embodiments for the relevant parts. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solution of the present application. Those of ordinary skill in the art can understand and implement this without creative effort.

The above are only preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (10)

  1. A robot positioning method, comprising:

    collecting an image by an image collector on the robot;

    extracting feature points from the image;

    determining, in a pre-stored feature point library, target sample feature points that match the feature points;

    determining the coordinates of the image collector in a preset coordinate system according to pre-stored coordinates of the target sample feature points in the preset coordinate system and a coordinate mapping relationship between the target sample feature points and the image collector in the preset coordinate system.

  2. The method according to claim 1, further comprising:

    before a sample image is collected by the image collector, determining the positional relationship and attitude relationship of a laser sensor on the robot with respect to the image collector;

    determining a first coordinate of the laser sensor in the preset coordinate system;

    converting the first coordinate according to the positional relationship and the attitude relationship to obtain a second coordinate of the image collector in the preset coordinate system;

    collecting the sample image by the image collector;

    extracting a plurality of sample feature points from the sample image and storing them in the feature point library, the target sample feature point being one of the plurality of sample feature points;

    determining the coordinate mapping relationship according to the depth information and position information of each of the sample feature points in the sample image and the angle information when the image collector collected the sample image;

    determining, according to the second coordinate and the coordinate mapping relationship, a third coordinate of the sample feature point in the preset coordinate system;

    storing the third coordinate and the coordinate mapping relationship.

  3. The method according to claim 2, wherein determining the positional relationship and attitude relationship of the laser sensor with respect to the image collector comprises:

    determining the positional relationship and attitude relationship of the laser sensor with respect to the image collector according to a nonlinear optimization algorithm.

  4. The method according to claim 2, wherein determining the mapping relationship according to the depth information and position information of the sample feature points in the sample image and the angle information when the image collector collected the sample image comprises:

    determining the mapping relationship according to the depth information, the position information, the angle information and the imaging model of the image collector.

  5. The method according to any one of claims 1 to 4, wherein collecting an image by the image collector on the robot comprises:

    collecting the image by the image collector when the robot starts up.

  6. The method according to any one of claims 1 to 4, further comprising:

    generating a navigation map by scanning the area where the robot is located with the laser sensor;

    matching the navigation map and the preset coordinate system;

    performing navigation path planning in the navigation map matched with the preset coordinate system according to the coordinates of the image collector in the preset coordinate system.

  7. The method according to claim 6, wherein performing navigation path planning in the navigation map matched with the preset coordinate system according to the coordinates of the image collector in the preset coordinate system comprises:

    determining the projection of the robot's contour in the preset coordinate system according to the position of the image collector on the robot;

    performing navigation path planning in the navigation map matched with the preset coordinate system according to the projection.

  8. A robot positioning device, comprising:

    a feature point extraction module, configured to extract feature points from an image collected by an image collector on the robot;

    a feature point matching module, configured to determine, in a pre-stored feature point library, target sample feature points that match the feature points;

    a first coordinate determining module, configured to determine the coordinates of the image collector in a preset coordinate system according to pre-stored coordinates of the target sample feature points in the preset coordinate system and a coordinate mapping relationship between the target sample feature points and the image collector in the preset coordinate system.

  9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, performs the robot positioning method according to any one of claims 1 to 7.

  10. A robot, comprising a laser sensor and an image collector, and further comprising a processor, wherein the processor is configured to perform the robot positioning method according to any one of claims 1 to 7.
