WO2020098532A1 - Method for positioning mobile robot, and mobile robot

Method for positioning mobile robot, and mobile robot

Info

Publication number
WO2020098532A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
area
mobile robot
pixel
Application number
PCT/CN2019/115745
Other languages
French (fr)
Chinese (zh)
Inventor
刘干
苏辉
蒋海青
Original Assignee
杭州萤石软件有限公司
Application filed by 杭州萤石软件有限公司
Publication of WO2020098532A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

A method for positioning a mobile robot, and a mobile robot. The mobile robot can (S101) collect a first image in the current field of view, (S102) match the first image against pre-stored sample data of specified positions, and (S103) determine the specified position whose sample data is successfully matched with the first image as the current position of the mobile robot.

Description

Method for Positioning a Mobile Robot, and Mobile Robot

Technical Field

The present application relates to the technical field of mobile robots, and in particular to a method for positioning a mobile robot and to a mobile robot.

Background

Positioning is a key technology in mobile robot research: for a mobile robot, accurate spatial positioning is the prerequisite for autonomous navigation and obstacle avoidance.

At present, methods that obtain a mobile robot's position through image analysis need to train on a large number of image samples in order to compute the relative position of the robot and its environment, and from that the robot's own accurate position. Obtaining the robot's accurate position in this way requires high-performance computing hardware and is therefore costly.

Summary

In view of this, the present application provides a low-cost method for positioning a mobile robot, and a mobile robot.

A first aspect of the present application provides a method for positioning a mobile robot. The method is performed by the mobile robot and includes: collecting a first image in the current field of view; matching the first image against pre-stored sample data of specified positions; and determining the specified position whose sample data is successfully matched with the first image as the current position of the mobile robot.

A second aspect of the present application provides a mobile robot. The mobile robot includes an acquisition module, a memory, and a processor. The acquisition module collects a first image in the current field of view; the memory stores sample data of specified positions; and the processor matches the first image against the sample data and determines the specified position whose sample data is successfully matched with the first image as the current position of the mobile robot.

A third aspect of the present application provides a computer-readable storage medium on which a computer program is stored. When executed by a processor in a mobile robot, the computer program causes the mobile robot to: collect a first image in the current field of view; match the first image against pre-stored sample data of specified positions; and determine the specified position whose sample data is successfully matched with the first image as the current position of the mobile robot.

According to the embodiments of the present application, the position of the mobile robot can be located accurately and at low cost.
Brief Description of the Drawings

FIG. 1 is a flowchart of a method for positioning a mobile robot according to an exemplary embodiment of the present application;

FIG. 2 is a flowchart of a process of extracting features from an image according to an exemplary embodiment of the present application;

FIG. 3 is a flowchart of a process of performing de-redundancy processing on a first image according to an exemplary embodiment of the present application;

FIG. 4 exemplarily shows the effect of applying de-redundancy processing to a first image;

FIG. 5 is a schematic structural diagram of a mobile robot according to an exemplary embodiment of the present application.

Detailed Description

Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same reference numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the application as recited in the appended claims.

The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "said", and "the" used in this application and the appended claims are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms; these terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information and, similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to".

Several specific embodiments of the present application are described below. These embodiments may be combined with one another, and identical or similar concepts or processes may not be repeated in every embodiment.
FIG. 1 is a flowchart of a method for positioning a mobile robot according to an exemplary embodiment of the present application. Referring to FIG. 1, the method may include steps S101 to S103.

In step S101, the mobile robot collects a first image in the current field of view.

It should be noted that when the mobile robot needs to determine its own position, it moves to one of a plurality of specified positions and collects the first image in the current field of view.

In step S102, the mobile robot matches the first image against pre-stored sample data of the specified positions.

Specifically, in one embodiment, the sample data is a second image of the specified position collected in advance at different shooting angles. In this case, step S102 may be implemented by the following operations.

(1) Perform feature extraction on the first image and the second image respectively to obtain a first feature descriptor of the first image and a second feature descriptor of the second image. Feature extraction may be based on the SIFT feature extraction algorithm, the SURF feature extraction algorithm, the HOG feature extraction algorithm, the Haar feature extraction algorithm, the shape context algorithm, and so on. For the implementation principles and procedures of these feature extraction algorithms, refer to the descriptions in the related art; they are not repeated here.

(2) Calculate the similarity between the first feature descriptor and the second feature descriptor. For methods of calculating the similarity between two feature descriptors, refer to the descriptions in the related art. If the similarity between the first feature descriptor and the second feature descriptor is greater than a preset threshold, the two descriptors can be considered to match.

In another embodiment, the sample data is a second feature descriptor of a second image, where the second image is an image of the specified position collected in advance at different shooting angles. In this case, step S102 may be implemented by the following operations.

(1) Perform feature extraction on the first image to obtain a first feature descriptor of the first image.

(2) Calculate the similarity between the first feature descriptor and the second feature descriptor. If the similarity between the first feature descriptor and the second feature descriptor is greater than a preset threshold, the two descriptors can be considered to match.

For example, if the room in which the mobile robot is located has six corners, those six corners may serve as the specified positions, and the different shooting angles may include 20 degrees, 50 degrees, 80 degrees, and so on.

In step S103, the mobile robot determines the specified position whose sample data is successfully matched with the first image as its current position.

According to the embodiments of the present application, the position of the mobile robot can be located accurately and at low cost.
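To make steps S101 to S103 concrete, here is a minimal Python sketch of the matching flow. It is an illustration only, not the application's implementation: the use of cosine similarity, the 0.8 threshold, and all function names are assumptions, and the descriptors are taken as already extracted.

```python
import numpy as np

def cosine_similarity(d1: np.ndarray, d2: np.ndarray) -> float:
    # One possible similarity measure between two feature descriptors.
    denom = np.linalg.norm(d1) * np.linalg.norm(d2)
    return float(np.dot(d1, d2) / denom) if denom > 0 else 0.0

def locate(first_descriptor, samples, threshold=0.8):
    """Return the specified position whose sample data matches the first image.

    `samples` maps each specified position (e.g. a room corner) to the
    descriptors of the second images captured there at different shooting
    angles (e.g. 20, 50, and 80 degrees).
    """
    best_position, best_similarity = None, threshold
    for position, descriptors in samples.items():
        for second_descriptor in descriptors:
            s = cosine_similarity(first_descriptor, second_descriptor)
            if s > best_similarity:  # similarity above the preset threshold
                best_position, best_similarity = position, s
    return best_position  # step S103: the current position, or None if no match
```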
FIG. 2 is a flowchart of a process of extracting features from an image according to an exemplary embodiment of the present application. Referring to FIG. 2, the process may include steps S201 to S204.

In step S201, non-maximum suppression is performed on the image to obtain the feature points of the image.

For the implementation principles and procedure of this step, refer to the descriptions in the related art; they are not repeated here. For example, Table 1 and Table 2 each show the gray values of the 9 pixels in a 3*3 neighborhood. In the example shown in Table 1, the pixel with gray value 87 has a larger gray value than all surrounding pixels, so it is considered a feature point. In the example shown in Table 2, some of the pixels surrounding the pixel with gray value 40 have gray values larger than 40 and some smaller, so the pixel with gray value 40 is not a feature point.
Table 1

43  26  32
43  87  26
65  77  45

Table 2

43  26  32
43  40  26
65  77  45
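As a hedged illustration of this non-maximum suppression step (the text defers the exact procedure to the related art), the following Python sketch keeps a pixel as a feature point only if its gray value strictly exceeds that of all 8 neighbors; the function name and the strict-inequality rule are assumptions drawn from the Table 1 and Table 2 examples.

```python
import numpy as np

def feature_points(gray: np.ndarray):
    """Return (row, col) positions whose gray value beats all 8 neighbors."""
    points = []
    h, w = gray.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = gray[r - 1:r + 2, c - 1:c + 2].ravel()
            neighbors = np.delete(patch, 4)   # drop the center pixel
            if gray[r, c] > neighbors.max():  # strictly larger than all neighbors
                points.append((r, c))
    return points

table1 = np.array([[43, 26, 32], [43, 87, 26], [65, 77, 45]])
table2 = np.array([[43, 26, 32], [43, 40, 26], [65, 77, 45]])
print(feature_points(table1))  # [(1, 1)]: the pixel with gray value 87
print(feature_points(table2))  # []: 40 is not larger than all its neighbors
```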
In step S202, for each feature point, a specified neighborhood of the feature point is divided into multiple sub-regions, and for each sub-region the gradient magnitude and gradient direction of each pixel in the sub-region are calculated.

For example, the specified neighborhood may be a 16*16 neighborhood, and the 16*16 neighborhood of each feature point may be divided into 16 sub-regions of 4*4 pixels.

For the specific methods of calculating the gradient magnitude and gradient direction of each pixel, refer to the descriptions in the related art; they are not repeated here.

In step S203, the gradient direction of each pixel is corrected so that the corrected gradient direction lies within a specified range.

The gradient directions obtained by the above calculation may lie in the range of 0° to 360°, whereas the corrected gradient directions lie within a specified range (for example, 0° to 180°).

For example, when correcting the gradient direction of a pixel: if the gradient direction is greater than 180°, it can be rotated counterclockwise by 180° so that it lies in the first or second quadrant of a planar rectangular coordinate system, yielding the corrected gradient direction; if the gradient direction is less than 180°, it can be taken directly as the corrected gradient direction.
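The sketch below shows one way steps S202 and S203 could be computed in Python. The text does not fix a gradient operator, so the use of central differences (numpy.gradient) is an assumption; only the folding of directions above 180° mirrors the description.

```python
import numpy as np

def gradients(gray: np.ndarray):
    """Per-pixel gradient magnitude and direction in degrees (0 to 360)."""
    gy, gx = np.gradient(gray.astype(float))  # central differences (assumed)
    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx)) % 360.0
    return magnitude, direction

def correct_directions(direction: np.ndarray) -> np.ndarray:
    """Step S203: directions above 180 degrees are folded back by 180 so the
    corrected direction lies in the specified range (here 0 to 180 degrees),
    e.g. 270 becomes 90 and 350 becomes 170, as in Table 3 versus Table 4."""
    return np.where(direction > 180.0, direction - 180.0, direction)
```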
In step S204, the feature vector of each sub-region is obtained from the gradient magnitudes and corrected gradient directions of the pixels in that sub-region; the feature vector of the feature point is determined from the feature vectors of its sub-regions; and the feature descriptor of the image is determined from the feature vectors of all feature points.

For example, for a certain 4*4 sub-region, the calculated gradient magnitude and gradient direction of each pixel are shown in Table 3 (where the value to the left of the slash is the gradient magnitude and the value to the right is the gradient direction):
Table 3

10/45°    50/270°   90/180°   150/220°
50/60°    60/70°    100/80°   120/130°
80/350°   90/200°   50/30°    30/80°
130/160°  110/50°   40/70°    90/160°
After the gradient directions are corrected in step S203, the gradient magnitudes and corrected gradient directions of the pixels in this sub-region are as shown in Table 4:
Table 4

10/45°    50/90°    90/180°   150/40°
50/60°    60/70°    100/80°   120/130°
80/170°   90/20°    50/30°    30/80°
130/160°  110/50°   40/70°    90/160°
From Table 4, the feature vector of the sub-region can be obtained. As shown in Table 5, the feature vector of the sub-region may be a 4-dimensional feature vector, whose first dimension corresponds to 0°, second dimension to 45°, third dimension to 90°, and fourth dimension to 135°. The feature vector can be computed as follows: for a pixel whose corrected gradient direction falls exactly on a boundary point (i.e., 0°, 45°, 90°, or 135°), the gradient magnitude of the pixel is added directly to the dimension corresponding to that boundary point. Referring to Table 5, for example, the corrected gradient direction of the first pixel is 45°, which falls exactly on a boundary point, so its gradient magnitude is added directly to the second dimension of the feature vector. As another example, for the pixel with gradient magnitude 90 and gradient direction 180°, the gradient magnitude is added to the dimension corresponding to 0°, i.e., to the first dimension of the feature vector.

For a pixel whose corrected gradient direction falls inside the interval between two adjacent boundary points, a first distance (i.e., a first angle difference) between the corrected gradient direction and the start of the interval, and a second distance (i.e., a second angle difference) between the corrected gradient direction and the end of the interval, are computed first. The gradient magnitude of the pixel is then distributed according to the ratio of the second distance to the first distance, so that the ratio of the magnitude component assigned to the dimension of the interval's start to the magnitude component assigned to the dimension of the interval's end equals the ratio of the second distance to the first distance. For example, for a pixel with gradient magnitude 150 and gradient direction 40°, the corrected direction falls in the interval from 0° to 45°; the first distance from the interval start (0°) is 40°, and the second distance from the interval end (45°) is 5°. The ratio of the second distance to the first distance is therefore 1:8, so the gradient magnitude is divided into 9 equal parts, of which 1 part is added to the dimension corresponding to 0° and 8 parts are added to the dimension corresponding to 45°. That is, a magnitude component of 133.33 (8 parts) is added to the 45° dimension, and a magnitude component of 16.67 (1 part) is added to the 0° dimension.

It should be noted that when a pixel's corrected gradient direction falls between 135° and 180°, the interval end is 180°, which represents the same direction as 0°, so the magnitude component corresponding to the second distance (to 180°) is added to the dimension corresponding to 0°.
Table 5

[Table 5 is reproduced as an image (PCTCN2019115745-appb-000001) in the original publication.]
In this way, the feature vector of each sub-region can be obtained, and the feature vectors of the sub-regions can then be combined (for example, the feature vectors of the other sub-regions are arranged in order after the feature vector of the first sub-region) to obtain the feature vector of the feature point. For example, the feature vector of a feature point may be a 64-dimensional feature vector (16 sub-regions of 4 dimensions each). The feature vectors of all feature points can then be combined to obtain the feature descriptor of the image.

Through the above image feature extraction process, the number of dimensions of the computed feature descriptor is reduced while robustness is maintained, thereby lowering the computational cost.
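A hedged Python sketch of this step S204 binning follows; the function names are assumptions, and only the 4-bin soft assignment and the 16 * 4-dimensional concatenation come from the text. With the Table 4 values it reproduces the accumulation described above (e.g. the 150/40° pixel contributes 16.67 to the 0° dimension and 133.33 to the 45° dimension).

```python
import numpy as np

def subregion_vector(magnitude: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """4-dimensional vector of one 4x4 sub-region from corrected directions.

    Bin centers are 0, 45, 90, and 135 degrees. A direction exactly on a
    boundary point sends the whole magnitude to that bin; otherwise the
    magnitude is split between the two flanking bins in proportion to the
    distance to the opposite boundary. Directions in (135, 180] wrap to 0.
    """
    vec = np.zeros(4)
    for mag, ang in zip(magnitude.ravel(), direction.ravel()):
        ang = ang % 180.0               # 180 degrees counts as 0 degrees
        lo = int(ang // 45.0)           # bin at the interval's start
        hi = (lo + 1) % 4               # bin at the interval's end
        first = ang - 45.0 * lo         # first distance (to interval start)
        second = 45.0 * (lo + 1) - ang  # second distance (to interval end)
        vec[lo] += mag * second / 45.0  # start bin: share set by second distance
        vec[hi] += mag * first / 45.0   # end bin: share set by first distance
    return vec

def feature_point_vector(mag16: np.ndarray, dir16: np.ndarray) -> np.ndarray:
    """Concatenate the 16 sub-region vectors of a 16x16 neighborhood into the
    64-dimensional feature vector of one feature point."""
    parts = [subregion_vector(mag16[r:r + 4, c:c + 4], dir16[r:r + 4, c:c + 4])
             for r in range(0, 16, 4) for c in range(0, 16, 4)]
    return np.concatenate(parts)
```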
Optionally, according to an embodiment of the present application, before step S102 the mobile robot may also perform filtering, enhancement, and/or de-redundancy processing on the first image. For example, the mobile robot may apply relevant filtering and enhancement algorithms to the first image.
FIG. 3 is a flowchart of a process of performing de-redundancy processing on the first image according to an exemplary embodiment of the present application. Referring to FIG. 3, the process may include steps S301 and S302.

In step S301, a redundant area in the first image is determined.

In some cases, the first image captured by the mobile robot may contain a ground portion. The ground portion usually has weak texture, may even be very smooth, and can contain a large amount of redundant information. For example, when the mobile robot is a sweeping robot working in a home scene, the floor is often tile or wood, which reflects light easily and has weak texture. Feature extraction on the floor portion of the first image captured by the sweeping robot therefore often yields no feature points, or only a few; and even when feature points are extracted, the feature points of the floor portion are highly similar to one another and easily cause mismatches. Removing the floor portion during image processing improves the efficiency of single-frame processing and, to some extent, the reliability of matching.

As an example, step S301 may be implemented by the following operations.

(1) Calculate the mean gray value of the pixels in a first designated area of the first image.

As described above, if the first image contains a ground portion, the ground portion generally occupies no less than 5% of the image height. Therefore, the bottom 5% of the first image may be designated as the first designated area, and the mean gray value of the pixels in this area can then be calculated.

(2) Obtain a first updated image by subtracting the mean from the gray value of each pixel in a second designated area of the first image.

The second designated area may be set according to actual needs. For example, the lower half of the first image may be designated as the second designated area; the first updated image is then obtained by subtracting the mean from the gray value of each pixel in the lower half of the first image.

(3) In the second designated area of the first updated image, update the gray value of each pixel whose gray value is greater than a first preset threshold to 255, and update the gray value of each pixel whose gray value is less than or equal to the first preset threshold to 0, obtaining a second updated image.

(4) For each row area of the second designated area in the second updated image, count the proportion of pixels with gray value 255 to obtain the proportion corresponding to that row area. Note that the proportion of pixels with gray value 255 in a row area is the ratio of the number of such pixels in the row to the image width.

(5) For the second designated area in the second updated image, when the proportions corresponding to a specified number of consecutive (top-to-bottom) row areas are all greater than a second preset threshold, determine the target row number, in the second updated image, of the last row of those consecutive row areas. The specified number may be set according to actual needs, for example 2; the second preset threshold may likewise be set according to actual needs, for example 50%.

(6) Determine the area between the row indicated by the target row number and the last row of the first image as the redundant area.

In step S302, the gray value of each pixel in the determined redundant area is updated to 0, yielding the de-redundancy-processed image.
FIG. 4 illustrates, through images a to e, the effect of applying the above de-redundancy processing to the first image. Image a is an example of a first image collected by the mobile robot.

Performing operations (1) to (3) on image a yields image b (i.e., the second updated image). In image b, the gray values of most pixels in the floor portion have been set to 0, but scattered white dots remain. Therefore, before operation (4), a morphological operation may be applied to image b to remove the scattered white dots (for the implementation principles and procedure of morphological operations, refer to the descriptions in the related art; they are not repeated here), yielding image c.

Further, performing operations (4) to (6) on image c determines the redundant area in image a, as shown in image d. The gray value of each pixel in the redundant area can then be updated to 0, yielding the de-redundancy-processed image e.

According to the above embodiment, the redundant area in the first image can be removed, thereby improving the efficiency of subsequent processing and the accuracy of matching.
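Purely as an illustration of operations (1) to (6) and step S302, here is a Python sketch. The value of the first preset threshold and the scan used in operation (5) are assumptions (the text does not fix them; this sketch takes the bottom-most qualifying run of rows as marking where the floor begins), and the morphological cleanup shown in FIG. 4 is omitted for brevity.

```python
import numpy as np

def remove_redundancy(gray: np.ndarray,
                      first_threshold: float = 10.0,   # assumed value
                      rows_needed: int = 2,            # specified number
                      second_threshold: float = 0.5):  # 50%
    """De-redundancy sketch: zero out the floor area of a grayscale image."""
    h, w = gray.shape
    half = h // 2

    # (1) mean gray value of the bottom 5% of the image (first designated area)
    mean = gray[int(h * 0.95):, :].astype(float).mean()

    # (2) subtract the mean inside the lower half (second designated area)
    lower = gray[half:, :].astype(float) - mean

    # (3) binarize the lower half against the first preset threshold
    binary = np.where(lower > first_threshold, 255, 0)

    # (4) per-row proportion of 255-valued pixels relative to the image width
    ratios = (binary == 255).sum(axis=1) / w

    # (5) last row of the bottom-most run of `rows_needed` consecutive rows
    #     whose proportion exceeds the second preset threshold (assumed scan)
    target, run = None, 0
    for i, ratio in enumerate(ratios):
        run = run + 1 if ratio > second_threshold else 0
        if run >= rows_needed:
            target = half + i  # row number in the full image

    # (6) + S302: the area from the target row to the last row is redundant
    result = gray.copy()
    if target is not None:
        result[target:, :] = 0
    return result
```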
FIG. 5 is a schematic structural diagram of a mobile robot according to an exemplary embodiment of the present application. Referring to FIG. 5, the mobile robot may include an acquisition module 510, a memory 520, and a processor 530. The acquisition module 510 collects a first image in the current field of view. The memory 520 stores sample data of specified positions. The processor 530 matches the first image against the sample data and determines the specified position whose sample data is successfully matched with the first image as the current position of the mobile robot.

According to one embodiment, the sample data may be a second image of the specified position collected in advance at different shooting angles. The processor 530 may match the first image against the sample data through the following operations: performing feature extraction on the first image and the second image respectively to obtain a first feature descriptor of the first image and a second feature descriptor of the second image, and calculating the similarity between the first feature descriptor and the second feature descriptor.

According to another embodiment, the sample data may be a second feature descriptor of a second image, where the second image is an image of the specified position collected in advance at different shooting angles. The processor 530 may match the first image against the sample data through the following operations: performing feature extraction on the first image to obtain a first feature descriptor of the first image, and calculating the similarity between the first feature descriptor and the second feature descriptor.

When the similarity between the first feature descriptor and the second feature descriptor is greater than a preset threshold, the processor 530 may determine that the first feature descriptor and the second feature descriptor match.
Specifically, the processor 530 may perform feature extraction on an image through the following operations:

performing non-maximum suppression on the image to obtain the feature points of the image;

for each feature point, dividing a specified neighborhood of the feature point into multiple sub-regions, and for each sub-region, calculating the gradient magnitude and gradient direction of each pixel in the sub-region;

correcting the gradient direction of each pixel so that the corrected gradient direction lies within a specified range;

obtaining the feature vector of each sub-region from the gradient magnitudes and corrected gradient directions of the pixels in that sub-region, determining the feature vector of the feature point from the feature vectors of its sub-regions, and determining the feature descriptor of the image from the feature vectors of all feature points.
In addition, before matching the first image against the sample data, the processor 530 may perform filtering, enhancement, and/or de-redundancy processing on the first image.

Specifically, the de-redundancy processing performed by the processor 530 on the first image may include: determining a redundant area in the first image; and updating the gray value of each pixel in the redundant area to 0 to obtain the de-redundancy-processed image.

Specifically, the processor 530 may determine the redundant area in the first image through the following operations:

calculating the mean gray value of the pixels in a first designated area of the first image;

obtaining a first updated image by subtracting the mean from the gray value of each pixel in a second designated area of the first image;

in the second designated area of the first updated image, updating the gray value of each pixel whose gray value is greater than a first preset threshold to 255, and updating the gray value of each pixel whose gray value is less than or equal to the first preset threshold to 0, obtaining a second updated image;

for each row area of the second designated area in the second updated image, counting the proportion of pixels with gray value 255 to obtain the proportion corresponding to that row area;

for the second designated area in the second updated image, when the proportions corresponding to a specified number of consecutive top-to-bottom row areas are all greater than a second preset threshold, determining the target row number, in the second updated image, of the last row of those consecutive row areas;

determining the area between the row indicated by the target row number and the last row of the first image as the redundant area.

In addition, the present application may also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor in a mobile robot, the computer program causes the mobile robot to implement any of the above methods according to the embodiments of the present application.

The above are only some embodiments of the present application and are not intended to limit it. Any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (15)

  1. A positioning method for a mobile robot, the method being performed by the mobile robot and comprising:
    collecting a first image in the current field of view;
    matching the first image with pre-stored sample data of designated positions; and
    determining a designated position whose sample data is successfully matched with the first image as the current position of the mobile robot.
  2. The method according to claim 1, wherein
    the sample data is a second image of the designated position collected in advance at different shooting angles; and
    matching the first image with the pre-stored sample data of the designated position comprises:
    performing feature extraction on the first image and the second image respectively to obtain a first feature descriptor of the first image and a second feature descriptor of the second image; and
    calculating the similarity between the first feature descriptor and the second feature descriptor.
  3. The method according to claim 1, wherein
    the sample data is a second feature descriptor of a second image, the second image being an image of the designated position collected in advance at different shooting angles; and
    matching the first image with the pre-stored sample data of the designated position comprises:
    performing feature extraction on the first image to obtain a first feature descriptor of the first image; and
    calculating the similarity between the first feature descriptor and the second feature descriptor.
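Claims 2 and 3 leave the similarity measure unspecified; the sketch below uses cosine similarity between the two descriptors as one common choice (an assumption, not the patent's stated metric).

```python
import numpy as np

def descriptor_similarity(first_descriptor, second_descriptor):
    """Cosine similarity between two feature descriptors (assumed metric)."""
    a = np.asarray(first_descriptor, dtype=np.float64).ravel()
    b = np.asarray(second_descriptor, dtype=np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

Under this reading, the designated position whose sample descriptor yields the highest similarity above a chosen threshold would be reported as the match.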
  4. The method according to claim 2 or 3, wherein the mobile robot performs feature extraction on the first image or the second image through the following operations:
    performing non-maximum suppression processing on the image to obtain feature points of the image;
    for each of the feature points, dividing the specified neighborhood of the feature point into multiple sub-regions, and for each of the sub-regions, calculating the gradient magnitude and gradient direction of each pixel in the sub-region;
    correcting the gradient direction of each pixel so that the corrected gradient direction falls within a specified range; and
    obtaining the feature vector corresponding to the sub-region from the gradient magnitudes and corrected gradient directions of the pixels in the sub-region, determining the feature vector corresponding to the feature point from the feature vectors of its sub-regions, and determining the feature descriptor of the image from the feature vectors of all feature points.
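Claim 4's non-maximum suppression step is not spelled out further. A minimal sketch, assuming a precomputed corner-response map (e.g., a Harris response, which the patent does not name) and a 3×3 suppression window; both parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def feature_points_by_nms(response, threshold=0.01, window=3):
    """Keep pixels that are the maximum of their local window and whose
    response exceeds a fraction of the global maximum; returns (row, col)
    coordinates of the surviving feature points."""
    local_max = maximum_filter(response, size=window)
    mask = (response == local_max) & (response > threshold * response.max())
    return np.argwhere(mask)
```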
  5. The method according to any one of claims 1 to 3, further comprising:
    before matching the first image with the pre-stored sample data of the designated position, performing at least one of filtering processing, enhancement processing, and de-redundancy processing on the first image.
  6. The method according to claim 5, wherein performing de-redundancy processing on the first image comprises:
    determining a redundant area in the first image; and
    updating the gray value of each pixel in the redundant area to 0 to obtain a de-redundancy-processed image.
  7. The method according to claim 6, wherein determining the redundant area in the first image comprises:
    calculating the mean of the gray values of the pixels in a first designated area of the first image;
    obtaining a first updated image by subtracting the mean from the gray value of each pixel in a second designated area of the first image;
    updating to 255 the gray value of each pixel in the second designated area of the first updated image whose gray value is greater than a first preset threshold, and updating to 0 the gray value of each pixel in the second designated area of the first updated image whose gray value is less than or equal to the first preset threshold, to obtain a second updated image;
    for each row area of the second designated area in the second updated image, counting the proportion of pixels with a gray value of 255 in that row area to obtain the proportion corresponding to that row area;
    for the second designated area in the second updated image, when the proportions corresponding to a designated number of consecutive row areas from top to bottom are all greater than a second preset threshold, determining the target row number, in the second updated image, of the last row of the designated number of consecutive row areas; and
    determining the area between the row indicated by the target row number and the last row of the first image as the redundant area.
  8. A mobile robot, comprising a collection module, a memory, and a processor, wherein:
    the collection module is configured to collect a first image in the current field of view;
    the memory is configured to store sample data of designated positions; and
    the processor is configured to match the first image with the sample data and to determine a designated position whose sample data is successfully matched with the first image as the current position of the mobile robot.
  9. The mobile robot according to claim 8, wherein
    the sample data is a second image of the designated position collected in advance at different shooting angles; and
    the processor matches the first image with the sample data through the following operations:
    performing feature extraction on the first image and the second image respectively to obtain a first feature descriptor of the first image and a second feature descriptor of the second image; and
    calculating the similarity between the first feature descriptor and the second feature descriptor.
  10. The mobile robot according to claim 8, wherein
    the sample data is a second feature descriptor of a second image, the second image being an image of the designated position collected in advance at different shooting angles; and
    the processor matches the first image with the sample data through the following operations:
    performing feature extraction on the first image to obtain a first feature descriptor of the first image; and
    calculating the similarity between the first feature descriptor and the second feature descriptor.
  11. The mobile robot according to claim 9 or 10, wherein the processor performs feature extraction on the first image or the second image through the following operations:
    performing non-maximum suppression processing on the image to obtain feature points of the image;
    for each of the feature points, dividing the specified neighborhood of the feature point into multiple sub-regions, and for each of the sub-regions, calculating the gradient magnitude and gradient direction of each pixel in the sub-region;
    correcting the gradient direction of each pixel so that the corrected gradient direction falls within a specified range; and
    obtaining the feature vector corresponding to the sub-region from the gradient magnitudes and corrected gradient directions of the pixels in the sub-region, determining the feature vector corresponding to the feature point from the feature vectors of its sub-regions, and determining the feature descriptor of the image from the feature vectors of all feature points.
  12. The mobile robot according to any one of claims 8 to 10, wherein the processor is further configured to:
    before matching the first image with the sample data, perform at least one of filtering processing, enhancement processing, and de-redundancy processing on the first image.
  13. The mobile robot according to claim 12, wherein performing de-redundancy processing on the first image comprises:
    determining a redundant area in the first image; and
    updating the gray value of each pixel in the redundant area to 0 to obtain a de-redundancy-processed image.
  14. The mobile robot according to claim 13, wherein determining the redundant area in the first image comprises:
    calculating the mean of the gray values of the pixels in a first designated area of the first image;
    obtaining a first updated image by subtracting the mean from the gray value of each pixel in a second designated area of the first image;
    updating to 255 the gray value of each pixel in the second designated area of the first updated image whose gray value is greater than a first preset threshold, and updating to 0 the gray value of each pixel in the second designated area of the first updated image whose gray value is less than or equal to the first preset threshold, to obtain a second updated image;
    for each row area of the second designated area in the second updated image, counting the proportion of pixels with a gray value of 255 in that row area to obtain the proportion corresponding to that row area;
    for the second designated area in the second updated image, when the proportions corresponding to a designated number of consecutive row areas from top to bottom are all greater than a second preset threshold, determining the target row number, in the second updated image, of the last row of the designated number of consecutive row areas; and
    determining the area between the row indicated by the target row number and the last row of the first image as the redundant area.
  15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor in a mobile robot, causes the mobile robot to:
    collect a first image in the current field of view;
    match the first image with pre-stored sample data of designated positions; and
    determine a designated position whose sample data is successfully matched with the first image as the current position of the mobile robot.
PCT/CN2019/115745 2018-11-12 2019-11-05 Method for positioning mobile robot, and mobile robot WO2020098532A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811339179.8A CN111178366B (en) 2018-11-12 2018-11-12 Mobile robot positioning method and mobile robot
CN201811339179.8 2018-11-12

Publications (1)

Publication Number Publication Date
WO2020098532A1 (en)

Family

ID=70646223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115745 WO2020098532A1 (en) 2018-11-12 2019-11-05 Method for positioning mobile robot, and mobile robot

Country Status (2)

Country Link
CN (1) CN111178366B (en)
WO (1) WO2020098532A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822095B (en) * 2020-06-02 2024-01-12 苏州科瓴精密机械科技有限公司 Method, system, robot and storage medium for identifying working position based on image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1569558A (en) * 2003-07-22 2005-01-26 中国科学院自动化研究所 Moving robot's vision navigation method based on image representation feature
CN101398689A (en) * 2008-10-30 2009-04-01 中控科技集团有限公司 Real-time color auto acquisition robot control method and the robot
US20120121132A1 (en) * 2009-05-12 2012-05-17 Albert-Ludwigs University Freiburg Object recognition method, object recognition apparatus, and autonomous mobile robot
CN102915039A (en) * 2012-11-09 2013-02-06 河海大学常州校区 Multi-robot combined target searching method of animal-simulated space cognition
CN104036494A (en) * 2014-05-21 2014-09-10 浙江大学 Fast matching computation method used for fruit picture
CN104915949A (en) * 2015-04-08 2015-09-16 华中科技大学 Image matching algorithm of bonding point characteristic and line characteristic

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488224B (en) * 2008-01-16 2011-01-19 中国科学院自动化研究所 Characteristic point matching method based on relativity measurement
CN103697882A (en) * 2013-12-12 2014-04-02 深圳先进技术研究院 Geographical three-dimensional space positioning method and geographical three-dimensional space positioning device based on image identification
CN104936283B (en) * 2014-03-21 2018-12-25 中国电信股份有限公司 Indoor orientation method, server and system
CN106558043B (en) * 2015-09-29 2019-07-23 阿里巴巴集团控股有限公司 A kind of method and apparatus of determining fusion coefficients
CN105246039B (en) * 2015-10-20 2018-05-29 深圳大学 A kind of indoor orientation method and system based on image procossing
CN107345812A (en) * 2016-05-06 2017-11-14 湖北淦德智能消防科技有限公司 A kind of image position method, device and mobile phone
CN106355577B (en) * 2016-09-08 2019-02-12 武汉科技大学 Rapid image matching method and system based on significant condition and global coherency
CN107452028B (en) * 2017-07-28 2020-05-26 浙江华睿科技有限公司 Method and device for determining position information of target image
CN108646280A (en) * 2018-04-16 2018-10-12 宇龙计算机通信科技(深圳)有限公司 A kind of localization method, device and user terminal

Also Published As

Publication number Publication date
CN111178366B (en) 2023-07-25
CN111178366A (en) 2020-05-19

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19883878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19883878

Country of ref document: EP

Kind code of ref document: A1