CN113390408A - Robot positioning method and device, robot and storage medium - Google Patents


Info

Publication number
CN113390408A
Authority
CN
China
Prior art keywords
robot
image data
odometer
data
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110744785.3A
Other languages
Chinese (zh)
Inventor
全王飞
王涛
赵勇胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202110744785.3A
Publication of CN113390408A
Priority to PCT/CN2021/126714 (published as WO2023273057A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20: Instruments for performing navigational calculations
    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G01C22/02: Measuring distance traversed on the ground by conversion into electric waveforms and subsequent integration, e.g. using tachometer generator
    • G01C22/025: Differential odometers

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)

Abstract

The present application is applicable to the field of robotics, and provides a robot positioning method and device, a robot, and a storage medium. The method includes: obtaining first pose data of the robot according to image data collected by two monocular cameras with different fields of view, respectively; obtaining second pose data of the robot according to odometer data collected by a wheeled odometer; and fusing the first pose data and the second pose data to obtain the true pose data of the robot. In the present application, using two monocular cameras with different fields of view enlarges the robot's visual range, and fusing the first pose data obtained from the two monocular cameras with the second pose data obtained from the wheeled odometer yields accurate pose data of the robot, thereby improving its positioning accuracy.

Description

A robot positioning method and device, robot and storage medium

Technical field

The present application belongs to the field of robot technology, and in particular relates to a robot positioning method and device, a robot, and a storage medium.

Background art

In robot localization and mapping, an accurate estimate of the robot's pose is required. Vision sensors have the advantages of low cost and rich information, and are therefore often used for visual localization and mapping of robots. At present, most visual positioning is based on a single monocular camera and thus lacks true world scale. Other approaches combine a camera with an inertial measurement unit (IMU), but in practice the IMU is prone to zero-velocity drift while the robot is stationary, causing pose errors, and is shaken while the robot moves, making the IMU pre-integration values inaccurate and degrading positioning accuracy.

Summary of the invention

The embodiments of the present application provide a robot positioning method and device, a robot, and a storage medium, to solve the problems that visual positioning based on a monocular camera lacks true world scale, and that in camera-plus-IMU positioning the IMU is prone to zero-velocity drift when the robot is stationary, causing pose errors, and is shaken when the robot moves, making the IMU pre-integration values inaccurate and degrading positioning accuracy.

A first aspect of the embodiments of the present application provides a robot positioning method, including:

obtaining first pose data of the robot according to image data collected by two monocular cameras with different fields of view, respectively;

obtaining second pose data of the robot according to odometer data collected by a wheeled odometer; and

fusing the first pose data and the second pose data to obtain the true pose data of the robot.

A second aspect of the embodiments of the present application provides a robot positioning device, including:

a first pose obtaining unit, configured to obtain the first pose data of the robot according to image data collected by two monocular cameras with different fields of view, respectively;

a second pose obtaining unit, configured to obtain the second pose data of the robot according to odometer data collected by a wheeled odometer; and

a pose fusion unit, configured to fuse the first pose data and the second pose data to obtain the true pose data of the robot.

A third aspect of the embodiments of the present application provides a robot, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the robot positioning method described in the first aspect of the embodiments of the present application.

A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the robot positioning method described in the first aspect of the embodiments of the present application.

In the robot positioning method provided by the first aspect of the embodiments of the present application, first pose data of the robot is obtained from image data collected by two monocular cameras with different fields of view; second pose data of the robot is obtained from odometer data collected by a wheeled odometer; and the first pose data and the second pose data are fused to obtain the true pose data of the robot. Using two monocular cameras with different fields of view enlarges the robot's visual range, and fusing the first pose data obtained from the two monocular cameras with the second pose data obtained from the wheeled odometer yields accurate pose data of the robot, thereby improving its positioning accuracy.

It can be understood that, for the beneficial effects of the second to fourth aspects above, reference may be made to the relevant description of the first aspect, which is not repeated here.

Brief description of the drawings

In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a first schematic structural diagram of a robot provided by an embodiment of the present application;

FIG. 2 is a first schematic flowchart of the robot positioning method provided by an embodiment of the present application;

FIG. 3 is a second schematic flowchart of the robot positioning method provided by an embodiment of the present application;

FIG. 4 is a third schematic flowchart of the robot positioning method provided by an embodiment of the present application;

FIG. 5 is a timing diagram of image data and odometer data provided by an embodiment of the present application;

FIG. 6 is a fourth schematic flowchart of the robot positioning method provided by an embodiment of the present application;

FIG. 7 is a schematic structural diagram of the robot positioning device provided by an embodiment of the present application;

FIG. 8 is a second schematic structural diagram of the robot provided by an embodiment of the present application.

Detailed description

In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.

It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.

It should also be understood that, as used in this specification and the appended claims, the term "and/or" refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.

As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to determining", "once the [described condition or event] is detected" or "in response to detecting the [described condition or event]".

In addition, in the description of the specification of the present application and the appended claims, the terms "first", "second", "third", etc. are used only to distinguish the description and should not be construed as indicating or implying relative importance.

Reference in this specification to "one embodiment" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", "in some other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise. The terms "including", "comprising", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.

The embodiments of the present application provide a robot positioning method, which can be executed by a processor of a robot when running a corresponding computer program. By fusing first pose data obtained from two monocular cameras with different fields of view with second pose data obtained from a wheeled odometer, accurate pose data of the robot can be obtained, thereby improving the positioning accuracy of the robot.

In application, the robot may be any type of robot with moving wheels. Robots with moving wheels exist among service robots, entertainment robots, production robots, agricultural robots and other types, for example bionic educational robots, bionic welcome robots, bionic dancing robots, bionic nanny robots, etc.

In application, the robot is provided with two monocular cameras with different fields of view and one wheeled odometer. The two monocular cameras may be arranged at any positions from which image data in the robot's direction of motion can be collected. The monocular cameras and the wheeled odometer may be integrated with the robot as part of it, or may be external devices additionally mounted on the robot and communicatively connected to it. The wheeled odometer is arranged on a driving wheel: a robot typically includes driving wheels for propelling it and may also include driven wheels. When the robot includes multiple driving wheels, the wheeled odometer should be arranged at the center of the driving wheels, that is, at the geometric center of the polygon whose corner points are the geometric centers of the individual driving wheels, the polygon being parallel to the robot's plane of motion.

As shown in FIG. 1, a schematic structural diagram of the robot 100 is exemplarily shown. One monocular camera 1 is arranged on the upper front side of the robot 100, the other monocular camera 2 is arranged on the upper rear side of the robot 100, and the wheeled odometer 3 is arranged on a driving wheel of the chassis of the robot 100. The dashed arrow indicates the direction of motion of the robot 100. The coordinate system shown at the monocular camera 1 is the camera coordinate system, whose X axis points to the right side of the robot 100, whose Y axis points in the direction of gravity, and whose Z axis coincides with the direction of motion of the robot 100. The coordinate system shown at the wheeled odometer 3 is the wheeled odometer coordinate system, whose X axis coincides with the direction of motion of the robot 100, whose Y axis points to the left side of the robot 100, and whose Z axis is opposite to the direction of gravity.

As shown in FIG. 2, the robot positioning method provided by the embodiments of the present application includes the following steps S201 to S203:

Step S201: obtain the first pose data of the robot according to image data collected by two monocular cameras with different fields of view, respectively.

In application, image data is collected synchronously by the two monocular cameras; that is, the timestamps of the image data collected by the two cameras are synchronized, and at any given moment each camera collects one frame of image data. After the two cameras synchronously collect image data, one set of first pose data of the robot is obtained from the image data collected by one camera, and another set of first pose data is obtained from the image data collected by the other camera; that is, two sets of first pose data of the robot can be obtained from the image data collected by the two monocular cameras, respectively.

Step S202: obtain the second pose data of the robot according to the odometer data collected by the wheeled odometer.

Step S203: fuse the first pose data and the second pose data to obtain the true pose data of the robot.

In application, since visual positioning based on a monocular camera lacks true world scale, the odometer data generated while the robot moves must also be collected by the wheeled odometer, and second pose data reflecting true world scale is obtained from it. The two sets of first pose data obtained from the two monocular cameras are then fused with the second pose data obtained from the wheeled odometer to finally obtain the true pose data of the robot.

As shown in FIG. 3, in one embodiment, step S201 includes the following steps S301 to S304:

Step S301: for each monocular camera, perform feature point detection on the first frame of image data collected by that camera, and extract a preset number of feature points from the first frame of image data.

In application, feature point detection is performed separately on the first frame of image data collected by each monocular camera, so as to extract the preset number of feature points from each camera's first frame. Any corner detection method may be used to extract the feature points, for example corner detection based on grayscale images, binary images, or contour curves; specifically, the FAST (Features from Accelerated Segment Test) detector may be used. Before feature point extraction, image processing may first be performed on the first frame, for example histogram equalization to adjust its contrast. The preset number can be set according to actual needs, for example any value between 70 and 150, and specifically may be 100.
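As a concrete illustration of the FAST segment test named above, the sketch below implements a simplified single-pixel FAST-9 check in pure NumPy (no non-maximum suppression or multi-scale handling); the threshold t, the arc length n = 9, and the synthetic test image are illustrative assumptions rather than the patent's parameters.

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST,
# listed in circular order as (dr, dc).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=20, n=9):
    """Simplified FAST-9 segment test: pixel (r, c) is a corner if at least
    n contiguous circle pixels are all brighter than center + t, or all
    darker than center - t."""
    center = int(img[r, c])
    ring = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (+1, -1):
        flags = [(p - center) * sign > t for p in ring]
        run = best = 0
        for f in flags + flags:  # doubled list handles wrap-around runs
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# Synthetic frame: dark background with a bright square whose top-left
# corner sits at (10, 10); flat regions should not trigger the test.
frame = np.zeros((20, 20), dtype=np.uint8)
frame[10:, 10:] = 255
```

A full detector would run this test at every interior pixel and keep the strongest responses until the preset number (e.g., 100) of feature points is reached.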

Step S302: perform feature point tracking on the (k+1)-th frame of image data collected by the monocular camera, where k = 1, 2, ..., n, and n is any positive integer.

In application, after feature point detection is performed on the first frame collected by each monocular camera, feature point tracking must continue on every subsequent frame collected by that camera. Any corner tracking algorithm, for example the optical flow method, may be used to track the preset number of feature points in each subsequent frame.
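The optical flow tracking mentioned above can be sketched with a minimal single-point Lucas-Kanade step in pure NumPy (one Gauss-Newton iteration, no image pyramid); the window size and the synthetic Gaussian-blob frames are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

def lk_track(I0, I1, pt, win=15):
    """One Lucas-Kanade step: estimate the displacement (dx, dy) of the
    feature at pt = (row, col) between frames I0 and I1 by solving the
    2x2 normal equations of the brightness-constancy constraint over a
    square window."""
    r, c = pt
    h = win // 2
    P0 = I0[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    P1 = I1[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    Iy, Ix = np.gradient(P0)           # spatial gradients (rows, cols)
    It = P1 - P0                       # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # (dx, dy)

# Synthetic frame pair: a smooth Gaussian blob shifted right by one pixel,
# so the true flow at any tracked point is approximately (dx, dy) = (1, 0).
y, x = np.mgrid[0:41, 0:41]
I0 = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / (2 * 5.0 ** 2))
I1 = np.roll(I0, 1, axis=1)
dx, dy = lk_track(I0, I1, (20, 18))
```

A production tracker would iterate this step and use a coarse-to-fine pyramid to handle larger displacements, as pyramidal Lucas-Kanade implementations do.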

In one embodiment, after step S302 the method further includes:

if the number of feature points tracked in the (k+1)-th frame of image data is less than the preset number, performing feature point detection on the (k+1)-th frame of image data and extracting feature points from it, so that the sum of the number of feature points tracked in the (k+1)-th frame and the number of feature points newly extracted from it equals the preset number.

In application, for each monocular camera, if the number of feature points tracked in any frame after the first falls short of the preset number, feature points are extracted from that frame using the same detection method as for the first frame, so as to replenish the missing feature points and make the sum of the tracked and newly extracted feature points in that frame equal to the preset number.
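The replenishment logic described above can be sketched as follows; detect_new is a hypothetical stand-in for re-running the feature detector on the (k+1)-th frame, and the preset number 100 follows the example value given earlier.

```python
def replenish(tracked, detect_new, preset=100):
    """Keep the per-frame feature count at the preset number: if tracking
    lost points, detect just enough new ones in the current frame so that
    tracked-plus-new equals the preset number."""
    missing = preset - len(tracked)
    if missing <= 0:
        return tracked
    return tracked + detect_new(missing)

# Toy usage: 80 surviving tracks; the stand-in detector returns dummy
# (row, col) points to top the list back up to 100.
survivors = [(i, i) for i in range(80)]
topped_up = replenish(survivors, lambda k: [(0, 0)] * k)
```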

Step S303: construct a reprojection error cost function according to the co-visible feature points among the multiple frames of image data collected by the monocular camera.

Step S304: solve the reprojection error cost function by minimizing the reprojection error, to obtain the three-dimensional coordinates, in the world coordinate system, of the preset number of feature points in each frame of image data collected by the monocular camera, as the first pose data of the robot.

In application, for each monocular camera, a least-squares optimization problem is constructed from the co-visible feature points among the multiple frames of image data collected by that camera; that is, a cost function minimizing the reprojection error of the feature points is constructed. Solving this cost function yields the solution with the smallest reprojection error, namely the three-dimensional coordinates in the world coordinate system of the preset number of feature points in each frame, and these three-dimensional coordinates constitute the first pose data of the robot.
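As an illustration of the reprojection error term the cost function is built from, the sketch below projects a 3D landmark through a pinhole camera and measures the pixel residual against an observation; the intrinsics, pose, and landmark values are made-up numbers, and an actual solver would minimize the sum of squared residuals over all co-visible feature points and camera poses jointly.

```python
import numpy as np

def reprojection_error(K, R, t, X_world, uv_observed):
    """Residual between the observed pixel and the pinhole projection of
    the 3D point X_world under camera pose (R, t) with intrinsics K."""
    X_cam = R @ X_world + t                      # world -> camera frame
    u = K[0, 0] * X_cam[0] / X_cam[2] + K[0, 2]  # u = fx * X/Z + cx
    v = K[1, 1] * X_cam[1] / X_cam[2] + K[1, 2]  # v = fy * Y/Z + cy
    return np.array([u, v]) - uv_observed

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                    # identity pose for the sketch
t = np.zeros(3)
X = np.array([0.2, -0.1, 2.0])                   # landmark 2 m in front

# A consistent observation yields a zero residual; a perturbed one does not.
uv = np.array([500.0 * 0.2 / 2.0 + 320.0, 500.0 * -0.1 / 2.0 + 240.0])
res = reprojection_error(K, R, t, X, uv)
```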

As shown in FIG. 4, in one embodiment, step S202 includes the following steps S401 and S402:

Step S401: align the timestamps of the image data collected by the monocular cameras with the timestamps of the odometer data collected by the wheeled odometer, to obtain odometer data synchronized with the timestamps of the image data.

In application, the two monocular cameras have the same frame rate and collect image data synchronously, so the timestamps of their image data are identical. Since the frame rate of the monocular cameras usually differs from the sampling frequency of the wheeled odometer, the timestamps of the image data must be aligned with the timestamps of the odometer data to obtain odometer data whose collection times are synchronized with those of the image data.

As shown in FIG. 5, a timing diagram of the image data and the odometer data is exemplarily shown, where the dotted lines mark the odometer data synchronized with the timestamps of the image data.

In one embodiment, step S401 includes:

if the frame rate of the monocular cameras differs from the sampling frequency of the wheeled odometer, aligning the timestamps of the image data collected by the monocular cameras with the timestamps of the odometer data collected by the wheeled odometer, to obtain odometer data synchronized with the timestamps of the image data.

In application, the frame rate of the monocular cameras may also be the same as the sampling frequency of the wheeled odometer. Timestamp alignment is performed only when the two differ; if they are the same, no alignment is needed. Therefore, it may first be determined whether the frame rate of the monocular cameras is the same as the sampling frequency of the wheeled odometer.

In one embodiment, step S401 includes:

performing linear interpolation on the odometer data collected by the wheeled odometer according to the timestamps of the image data collected by the monocular cameras, so as to align the timestamps of the image data with the timestamps of the odometer data and obtain odometer data synchronized with the timestamps of the image data.

In application, linear interpolation can be used to align the timestamps of the image data with the timestamps of the odometer data.
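A minimal sketch of the linear interpolation, assuming the odometer reports an accumulated displacement at its own sample times (the numbers below are illustrative): the odometer samples are resampled at the camera frame timestamps with np.interp.

```python
import numpy as np

# Odometer timestamps (seconds) and an accumulated x-displacement per sample.
odo_t = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
odo_x = np.array([0.00, 0.10, 0.20, 0.30, 0.40])

# Camera frame timestamps that fall between odometer samples.
cam_t = np.array([0.025, 0.125])

# Linear interpolation yields odometer values aligned to the image timestamps.
odo_x_at_cam = np.interp(cam_t, odo_t, odo_x)  # -> [0.05, 0.25]
```

Each pose component would be interpolated the same way; rotational components would additionally need angle wrap-around handling before linear interpolation.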

Step S402: pre-integrate the odometer data synchronized with the timestamps of the image data to obtain the integral values of the wheeled odometer, as the second pose data of the robot.

In application, after the timestamps of the image data and the odometer data are aligned, the timestamp-synchronized odometer data is pre-integrated to obtain the integral values of the odometer data collected by the wheeled odometer, and these integral values serve as the second pose data, which reflects true world scale.

In one embodiment, the second pose data is expressed as:

\hat{R}^{G}_{O_{k+1}} = \hat{R}^{G}_{O_{k}}\,\Delta R

\hat{p}^{G}_{O_{k+1}} = \hat{p}^{G}_{O_{k}} + \hat{R}^{G}_{O_{k}}\,\Delta p

where \hat{R}^{G}_{O_{k+1}} denotes the integral value of the rotation of the wheel odometer synchronized with the timestamp of the (k+1)-th frame of image data; \hat{R}^{G}_{O_{k}} denotes the integral value of the rotation synchronized with the timestamp of the k-th frame; \Delta R denotes the change in the integral value of the rotation of the wheel odometer between the timestamps of the (k+1)-th and k-th frames; \hat{p}^{G}_{O_{k+1}} denotes the integral value of the displacement of the wheel odometer synchronized with the timestamp of the (k+1)-th frame; \hat{p}^{G}_{O_{k}} denotes the integral value of the displacement synchronized with the timestamp of the k-th frame; \Delta p denotes the change in the integral value of the displacement between the two timestamps; G denotes the world coordinate system; and O denotes the wheel odometer coordinate system.
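Reduced to the planar case (a heading angle θ instead of a rotation matrix), each integration step composes the rotation increment and rotates the local displacement increment Δp from the odometer frame at time k into the world frame G before accumulating. A sketch under these assumptions, not the patent's exact formulation:

```python
import math

def propagate(pose, d_theta, d_p):
    # pose: integrated (x, y, theta) at frame k, expressed in the world frame G.
    # d_p:  displacement increment expressed in the odometer frame at time k.
    x, y, theta = pose
    dx, dy = d_p
    x += math.cos(theta) * dx - math.sin(theta) * dy  # rotate into G, then add
    y += math.sin(theta) * dx + math.cos(theta) * dy
    return (x, y, theta + d_theta)  # compose the rotation increment

pose = (0.0, 0.0, 0.0)
# Drive 1 m forward while turning 90 degrees, then 1 m forward again.
for d_theta, d_p in [(math.pi / 2, (1.0, 0.0)), (0.0, (1.0, 0.0))]:
    pose = propagate(pose, d_theta, d_p)
print(tuple(round(v, 9) for v in pose))  # ends near (1.0, 1.0, pi/2)
```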

As shown in FIG. 6, in one embodiment, step S203 includes the following steps S601 to S603:

Step S601: Project the three-dimensional coordinates of the preset number of feature points in the world coordinate system back into the image coordinate system to obtain the two-dimensional coordinates of the preset number of feature points in the image coordinate system.

In application, for each monocular camera, the three-dimensional coordinates in the world coordinate system of the preset number of feature points in each frame of image data obtained by that camera, that is, the first pose data, are projected back into the image coordinate system, yielding the two-dimensional coordinates of the preset number of feature points in each frame of image data.

Step S602: Obtain the differences between the two-dimensional coordinates of the preset number of feature points in the image coordinate system and the two-dimensional coordinates, in the image coordinate system, of the preset number of feature points extracted from the first frame of image data.

In application, after the two-dimensional coordinates of the preset number of feature points in each frame of image data are obtained in step S601, the differences between these two-dimensional coordinates and the two-dimensional coordinates of the preset number of feature points extracted from the first frame of image data in step S301 are computed. Specifically, for each feature point, the difference between its two-dimensional coordinates in the image coordinate system of the (k+1)-th frame of image data and its two-dimensional coordinates in the image coordinate system of the first frame of image data is obtained.
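Steps S601 and S602 can be sketched with a pinhole camera model. The intrinsics fx, fy, cx, cy and all numeric values below are hypothetical, and for brevity the world and camera frames are taken to coincide:

```python
def project(point_3d, fx, fy, cx, cy):
    # Step S601: pinhole projection of a 3-D point back to pixel coordinates.
    X, Y, Z = point_3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def residual(point_3d, observed_uv, fx=400.0, fy=400.0, cx=320.0, cy=240.0):
    # Step S602: difference between the reprojected and the observed pixel.
    u, v = project(point_3d, fx, fy, cx, cy)
    return (u - observed_uv[0], v - observed_uv[1])

r = residual((0.5, -0.2, 2.0), (421.0, 199.0))
print(r)  # (-1.0, 1.0)
```

In the method, these per-feature differences are the observation variables fed to the extended Kalman filter in step S603.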

Step S603: Using the second pose data as the estimate and the differences as observation variables, perform extended Kalman filter fusion to obtain the real pose data of the robot in the wheel odometer coordinate system.

In application, after the differences between the feature points in the image data collected by each monocular camera are obtained according to step S602, the second pose data and the differences are used as the estimate and the observation variables of an extended Kalman filter (EKF) fusion; after the data fusion, the real pose data of the robot in the wheel odometer coordinate system is finally obtained.
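A deliberately reduced one-dimensional sketch of the EKF cycle in step S603 — scalar state and hypothetical noise values Q and R — showing where the prediction (from the pre-integrated odometry) and the observation enter:

```python
def ekf_step(x, P, odom_increment, z, Q=0.01, R=1.0, H=1.0):
    # Predict: propagate the state with the pre-integrated odometer increment.
    x_pred = x + odom_increment
    P_pred = P + Q
    # Update: correct the prediction with the observation z.
    y = z - H * x_pred          # innovation
    S = H * P_pred * H + R      # innovation covariance
    K = P_pred * H / S          # Kalman gain
    return x_pred + K * y, (1.0 - K * H) * P_pred

x, P = 0.0, 1.0
x, P = ekf_step(x, P, odom_increment=1.0, z=1.2)
print(round(x, 4), round(P, 4))  # estimate pulled toward the observation
```

The real filter carries a full pose state and a vector of pixel differences, but the predict/update structure is the same.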

In one embodiment, step S603 includes:

Using the second pose data as the estimate and the differences as observation variables, performing extended Kalman filter fusion under a constraint to obtain the real pose data of the robot in the wheel odometer coordinate system, where the constraint restricts the X-axis and Y-axis components of the robot's rotation angle in the wheel odometer coordinate system to zero and keeps the Z-axis component of the real pose data fixed.

In application, since the robot generally moves on a plane, a planar constraint can be introduced as the constraint for the extended Kalman filter fusion to further improve positioning accuracy. Because the pose data estimated from the image data is three-dimensional, tightly coupling it with the odometer data introduces drift in height; the planar constraint is added to eliminate this drift. That is, under the constraint, the three-dimensional rotation of the robot solved in the wheel odometer coordinate system can only be a rotation about the Z axis (its X-axis and Y-axis components are zero), and the Z-axis component of the pose data is held fixed.
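What the constraint pins down can be illustrated on a 6-DoF pose tuple (x, y, z, roll, pitch, yaw). Inside the filter the constraint is applied during the solve; this sketch only shows the projection it enforces:

```python
def apply_plane_constraint(pose, fixed_z=0.0):
    # Rotation may only be about the Z axis (roll = pitch = 0),
    # and the height component is held fixed.
    x, y, _z, _roll, _pitch, yaw = pose
    return (x, y, fixed_z, 0.0, 0.0, yaw)

drifted = (1.2, 3.4, 0.07, 0.01, -0.02, 0.8)  # small height/tilt drift
print(apply_plane_constraint(drifted))  # (1.2, 3.4, 0.0, 0.0, 0.0, 0.8)
```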

In one embodiment, step S603 includes:

Using the second pose data as the estimate, the differences as observation variables, and the relative rotation angle and relative displacement between the wheel odometer coordinate system and the camera coordinate system as extrinsic parameters, performing extended Kalman filter fusion to obtain the real pose data of the robot in the wheel odometer coordinate system.

In one embodiment, since the wheel odometer coordinate system and the camera coordinate system differ, a coordinate transformation between the two is required to align the coordinates and achieve calibration. The relative rotation angle and relative displacement between the wheel odometer coordinate system and the camera coordinate system can be obtained by rough manual measurement and used as extrinsic parameters; these extrinsics are then added to the state variables of the EKF for online calibration updates, improving the accuracy of the final real pose data.
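The extrinsic is a rigid transform between the camera frame and the wheel odometer frame. A planar sketch of applying a roughly measured extrinsic (tx, ty, yaw) — the numbers are invented for illustration; in the method this transform is refined online as part of the EKF state:

```python
import math

def camera_to_odom(p_cam, extrinsic):
    # Rotate a camera-frame point by the extrinsic yaw, then translate
    # by the lever arm (tx, ty) to express it in the odometer frame.
    tx, ty, yaw = extrinsic
    x, y = p_cam
    return (tx + math.cos(yaw) * x - math.sin(yaw) * y,
            ty + math.sin(yaw) * x + math.cos(yaw) * y)

# Rough manual measurement: camera 10 cm ahead of the wheel axle,
# mounted facing backward (yaw offset of pi).
rough_extrinsic = (0.10, 0.00, math.pi)
print(camera_to_odom((1.0, 0.0), rough_extrinsic))
```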

In one embodiment, after step S603, the method further includes:

Outputting the real pose data.

In application, the robot can perform offline positioning and navigation according to its own real pose data, or output the real pose data to other devices so that their users can learn the robot's real pose data in real time and, based on it, perform positioning and navigation control of the robot, controlling it to move along a specified path to a specified position. The other devices can be any terminal devices capable of wireless communication with the robot, for example, mobile phones, tablet computers, personal computers, smart wristbands, personal digital assistants, or (cloud) servers.

It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.

An embodiment of the present application further provides a robot positioning apparatus for performing the method steps in the above method embodiments. The apparatus may be a virtual appliance in the robot, run by the robot's processor, or may be the robot itself.

As shown in FIG. 7, the robot positioning apparatus 200 provided by an embodiment of the present application includes:

a first pose obtaining unit 201, configured to obtain the first pose data of the robot according to image data collected by two monocular cameras with different fields of view;

a second pose obtaining unit 202, configured to obtain the second pose data of the robot according to the odometer data collected by the wheel odometer; and

a pose fusion unit 203, configured to fuse the first pose data and the second pose data to obtain the real pose data of the robot.

In one embodiment, the robot positioning apparatus further includes an output unit configured to output the real pose data.

In application, each module in the above apparatus may be a software program module, may be implemented by different logic circuits integrated in a processor or by independent physical components connected to a processor, or may be implemented by multiple distributed processors.

As shown in FIG. 8, an embodiment of the present application further provides a robot 300, including: at least one processor 301 (only one processor is shown in FIG. 8), a memory 302, and a computer program 303 stored in the memory 302 and executable on the at least one processor 301. When the processor 301 executes the computer program 303, the steps in each of the above robot positioning method embodiments are implemented.

In application, the robot may include, but is not limited to, a processor and a memory. FIG. 8 is merely an example of a robot and does not constitute a limitation; the robot may include more or fewer components than shown, combine certain components, or use different components. For example, it may also include two monocular cameras, a wheel odometer, moving components, input/output devices, and network access devices. The moving components may include moving rollers, steering gears, motors, drivers, and other devices for driving the movement of the robot. The input/output devices may include the aforementioned human-computer interaction devices, and may also include a display screen for displaying the robot's working parameters. The network access devices may include a communication module for the robot to communicate with a user terminal.

In application, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor. The processor may specifically be a PID controller.

In application, in some embodiments the memory may be an internal storage unit of the robot, such as the robot's hard disk or internal memory. In other embodiments, the memory may also be an external storage device of the robot, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the robot. The memory may also include both an internal storage unit of the robot and an external storage device. The memory is used to store the operating system, application programs, a boot loader, data, and other programs, such as the program code of a computer program. The memory may also be used to temporarily store data that has been output or is to be output.

In application, the display screen may be a thin-film-transistor liquid crystal display (TFT-LCD), a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a quantum-dot light-emitting diode (QLED) display, a seven-segment or eight-segment digital tube, etc.

In application, the communication module may be any device capable of direct or indirect long-distance wired or wireless communication with a client, as actual needs require. For example, the communication module may provide communication solutions applied on network devices, including wireless local area networks (WLAN) (such as Wi-Fi networks), Bluetooth, Zigbee, mobile communication networks, the Global Navigation Satellite System (GNSS), frequency modulation (FM), near-field communication (NFC), and infrared (IR) technology. The communication module may include an antenna, which may have a single element or be an antenna array with multiple elements. The communication module can receive electromagnetic waves through the antenna, frequency-modulate and filter the electromagnetic wave signals, and send the processed signals to the processor. The communication module can also receive signals to be sent from the processor, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna.

It should be noted that, since the information exchange and execution processes between the above apparatuses/modules are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiments section and are not repeated here.

Those skilled in the art can clearly understand that, for convenience and conciseness of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. The functional modules in the embodiments may be integrated into one processing module, each module may exist physically alone, or two or more modules may be integrated into one module; the integrated modules may be implemented in the form of hardware or in the form of software functional modules. In addition, the specific names of the functional modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.

An embodiment of the present application provides a computer program product which, when run on a robot, enables the robot to implement the steps in each of the foregoing method embodiments.

If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments may be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.

In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.

Those of ordinary skill in the art will appreciate that the modules and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.

In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the modules is only a division by logical function, and in actual implementation there may be other division methods; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or modules, and may be electrical, mechanical, or in other forms.

The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or equivalently replace some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (12)

1. A robot positioning method, characterized by comprising:
obtaining first pose data of the robot according to image data collected by two monocular cameras with different fields of view;
obtaining second pose data of the robot according to odometer data collected by a wheel odometer; and
fusing the first pose data and the second pose data to obtain real pose data of the robot.

2. The robot positioning method according to claim 1, wherein obtaining the first pose data of the robot according to the image data collected by the two monocular cameras with different fields of view comprises:
for each monocular camera, performing feature point detection on the first frame of image data collected by the monocular camera, and extracting a preset number of feature points in the first frame of image data;
performing feature point tracking on the (k+1)-th frame of image data collected by the monocular camera, where k = 1, 2, ..., n, and n is an arbitrary positive integer;
constructing a reprojection error cost function according to the co-visible feature points between the multiple frames of image data collected by the monocular camera; and
solving the reprojection error cost function by minimizing the reprojection error to obtain the three-dimensional coordinates, in the world coordinate system, of the preset number of feature points in each frame of image data collected by the monocular camera, as the first pose data of the robot.

3. The robot positioning method according to claim 2, wherein after performing feature point tracking on the (k+1)-th frame of image data collected by the monocular camera, the method comprises:
if the number of tracked feature points in the (k+1)-th frame of image data is less than the preset number, performing feature point detection on the (k+1)-th frame of image data and extracting feature points in the (k+1)-th frame of image data, such that the sum of the number of tracked feature points and the number of extracted feature points in the (k+1)-th frame of image data equals the preset number.

4. The robot positioning method according to claim 2, wherein fusing the first pose data and the second pose data to obtain the real pose data of the robot comprises:
projecting the three-dimensional coordinates of the preset number of feature points in the world coordinate system back into the image coordinate system to obtain the two-dimensional coordinates of the preset number of feature points in the image coordinate system;
obtaining the differences between the two-dimensional coordinates of the preset number of feature points in the image coordinate system and the two-dimensional coordinates, in the image coordinate system, of the preset number of feature points extracted from the first frame of image data; and
using the second pose data as the estimate and the differences as observation variables, performing extended Kalman filter fusion to obtain the real pose data of the robot in the wheel odometer coordinate system.

5. The robot positioning method according to claim 4, wherein using the second pose data as the estimate and the differences as observation variables and performing extended Kalman filter fusion to obtain the real pose data of the robot in the wheel odometer coordinate system comprises:
using the second pose data as the estimate and the differences as observation variables, performing extended Kalman filter fusion under a constraint to obtain the real pose data of the robot in the wheel odometer coordinate system, where the constraint restricts the X-axis and Y-axis components of the robot's rotation angle in the wheel odometer coordinate system to zero and keeps the Z-axis component of the real pose data fixed.

6. The robot positioning method according to claim 4, wherein using the second pose data as the estimate and the differences as observation variables and performing extended Kalman filter fusion to obtain the real pose data of the robot in the wheel odometer coordinate system comprises:
using the second pose data as the estimate, the differences as observation variables, and the relative rotation angle and relative displacement between the wheel odometer coordinate system and the camera coordinate system as extrinsic parameters, performing extended Kalman filter fusion to obtain the real pose data of the robot in the wheel odometer coordinate system.

7. The robot positioning method according to any one of claims 1 to 6, wherein obtaining the second pose data of the robot according to the odometer data collected by the wheel odometer comprises:
aligning the timestamps of the image data collected by the monocular cameras with the timestamps of the odometer data collected by the wheel odometer to obtain odometer data synchronized with the timestamps of the image data; and
pre-integrating the odometer data synchronized with the timestamps of the image data to obtain the integral values of the wheel odometer as the second pose data of the robot.

8. The robot positioning method according to claim 7, wherein aligning the timestamps of the image data collected by the monocular cameras with the timestamps of the odometer data collected by the wheel odometer to obtain odometer data synchronized with the timestamps of the image data comprises:
performing linear interpolation on the odometer data collected by the wheel odometer according to the timestamps of the image data collected by the monocular cameras, so as to align the timestamps of the image data with the timestamps of the odometer data and obtain odometer data synchronized with the timestamps of the image data.

9. The robot positioning method according to claim 7, wherein the second pose data is expressed as:
$${}^{G}R_{O_{k+1}} = {}^{G}R_{O_{k}} \cdot \Delta R$$

$${}^{G}p_{O_{k+1}} = {}^{G}p_{O_{k}} + {}^{G}R_{O_{k}} \cdot \Delta p$$

where ${}^{G}R_{O_{k+1}}$ denotes the integral value of the rotation of the wheeled odometer synchronized with the timestamp of the (k+1)-th frame of image data, ${}^{G}R_{O_{k}}$ denotes the integral value of the rotation of the wheeled odometer synchronized with the timestamp of the k-th frame of image data, $\Delta R$ denotes the change in the integrated rotation of the wheeled odometer between the timestamps of the (k+1)-th and k-th frames of image data, ${}^{G}p_{O_{k+1}}$ denotes the integral value of the displacement of the wheeled odometer synchronized with the timestamp of the (k+1)-th frame of image data, ${}^{G}p_{O_{k}}$ denotes the integral value of the displacement of the wheeled odometer synchronized with the timestamp of the k-th frame of image data, $\Delta p$ denotes the change in the integrated displacement of the wheeled odometer between the timestamps of the (k+1)-th and k-th frames of image data, $G$ denotes the world coordinate frame, and $O$ denotes the wheeled-odometer coordinate frame.
10. A robot positioning device, comprising:

a first pose acquisition unit, configured to obtain first pose data of the robot according to image data collected by two monocular cameras with different fields of view;

a second pose acquisition unit, configured to obtain second pose data of the robot according to odometer data collected by a wheeled odometer; and

a pose fusion unit, configured to fuse the first pose data and the second pose data to obtain the true pose data of the robot.

11. A robot, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the robot positioning method according to any one of claims 1 to 9.

12. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the robot positioning method according to any one of claims 1 to 9.
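The pose fusion unit in claim 10 is described only functionally; the claims quoted here do not disclose the fusion algorithm itself. As a purely illustrative stand-in (not the patented method), a fixed-weight blend of the two (x, y, yaw) estimates might look like:

```python
import numpy as np

def fuse_poses(visual_pose, odom_pose, w_visual=0.7):
    """Illustrative fixed-weight fusion of a visual pose estimate and a
    wheel-odometer pose estimate, each given as (x, y, yaw). Real
    systems typically use an EKF or graph optimization instead."""
    visual_pose = np.asarray(visual_pose, dtype=float)
    odom_pose = np.asarray(odom_pose, dtype=float)
    fused = w_visual * visual_pose + (1.0 - w_visual) * odom_pose
    # Wrap the yaw component back into (-pi, pi].
    fused[2] = np.arctan2(np.sin(fused[2]), np.cos(fused[2]))
    return fused
```

Note that averaging yaw angles component-wise is only valid when the two estimates are close; a robust implementation would blend rotations on SO(2)/SO(3) and weight each source by its uncertainty.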
CN202110744785.3A 2021-06-30 2021-06-30 Robot positioning method and device, robot and storage medium Pending CN113390408A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110744785.3A CN113390408A (en) 2021-06-30 2021-06-30 Robot positioning method and device, robot and storage medium
PCT/CN2021/126714 WO2023273057A1 (en) 2021-06-30 2021-10-27 Robot positioning method and apparatus, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110744785.3A CN113390408A (en) 2021-06-30 2021-06-30 Robot positioning method and device, robot and storage medium

Publications (1)

Publication Number Publication Date
CN113390408A (en) 2021-09-14

Family

ID=77624929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110744785.3A Pending CN113390408A (en) 2021-06-30 2021-06-30 Robot positioning method and device, robot and storage medium

Country Status (2)

Country Link
CN (1) CN113390408A (en)
WO (1) WO2023273057A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114485653A (en) * 2022-02-23 2022-05-13 广州高新兴机器人有限公司 Positioning method, device, medium and equipment based on fusion of vision and wheel odometer
CN115388884A (en) * 2022-08-17 2022-11-25 南京航空航天大学 A joint initialization method for agent pose estimators
CN115493579A (en) * 2022-09-02 2022-12-20 松灵机器人(深圳)有限公司 Positioning correction method, positioning correction device, mowing robot and storage medium
WO2023273057A1 (en) * 2021-06-30 2023-01-05 深圳市优必选科技股份有限公司 Robot positioning method and apparatus, robot and storage medium
CN116295352A (en) * 2023-03-17 2023-06-23 浙江华睿科技股份有限公司 A data fusion map construction method, system and electronic equipment
CN118505756A (en) * 2024-07-18 2024-08-16 比亚迪股份有限公司 Pose generation method and device, electronic equipment, storage medium, product and vehicle

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN116030093A (en) * 2023-01-09 2023-04-28 哈尔滨工程大学 NDT-based imaging sonar odometer method
CN116149327B (en) * 2023-02-08 2023-10-20 广州番禺职业技术学院 Real-time tracking prospective path planning system, method and device
CN116372941B (en) * 2023-06-05 2023-08-15 北京航空航天大学杭州创新研究院 Robot parameter calibration method and device and wheeled robot
CN119784851B (en) * 2025-03-10 2025-07-01 杭州海康机器人股份有限公司 Method and device for generating map building data, electronic equipment and storage medium
CN119880935B (en) * 2025-03-25 2025-07-04 中建深圳装饰有限公司 Curtain defect detection system based on computer vision

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108682036A (en) * 2018-04-27 2018-10-19 腾讯科技(深圳)有限公司 Pose determines method, apparatus and storage medium
CN109579844A (en) * 2018-12-04 2019-04-05 电子科技大学 Localization method and system
CN110009681A (en) * 2019-03-25 2019-07-12 中国计量大学 A monocular visual odometry pose processing method based on IMU assistance
CN111811506A (en) * 2020-09-15 2020-10-23 中国人民解放军国防科技大学 Visual/inertial odometry integrated navigation method, electronic device and storage medium
CN112734852A (en) * 2021-03-31 2021-04-30 浙江欣奕华智能科技有限公司 Robot mapping method and device and computing equipment

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
EP3451288A1 (en) * 2017-09-04 2019-03-06 Universität Zürich Visual-inertial odometry with an event camera
CN107808407B (en) * 2017-10-16 2020-12-18 亿航智能设备(广州)有限公司 Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium
CN108242079B (en) * 2017-12-30 2021-06-25 北京工业大学 A VSLAM method based on multi-feature visual odometry and graph optimization model
JP7173471B2 (en) * 2019-01-31 2022-11-16 株式会社豊田中央研究所 3D position estimation device and program
CN112212852B (en) * 2019-07-12 2024-06-21 浙江未来精灵人工智能科技有限公司 Positioning method, mobile device and storage medium
CN111161337B (en) * 2019-12-18 2022-09-06 南京理工大学 Accompanying robot synchronous positioning and composition method in dynamic environment
CN113390408A (en) * 2021-06-30 2021-09-14 深圳市优必选科技股份有限公司 Robot positioning method and device, robot and storage medium

Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2023273057A1 (en) * 2021-06-30 2023-01-05 深圳市优必选科技股份有限公司 Robot positioning method and apparatus, robot and storage medium
CN114485653A (en) * 2022-02-23 2022-05-13 广州高新兴机器人有限公司 Positioning method, device, medium and equipment based on fusion of vision and wheel odometer
CN115388884A (en) * 2022-08-17 2022-11-25 南京航空航天大学 A joint initialization method for agent pose estimators
CN115493579A (en) * 2022-09-02 2022-12-20 松灵机器人(深圳)有限公司 Positioning correction method, positioning correction device, mowing robot and storage medium
CN115493579B (en) * 2022-09-02 2024-10-22 深圳库犸科技有限公司 Positioning correction method, positioning correction device, mowing robot and storage medium
CN116295352A (en) * 2023-03-17 2023-06-23 浙江华睿科技股份有限公司 A data fusion map construction method, system and electronic equipment
CN118505756A (en) * 2024-07-18 2024-08-16 比亚迪股份有限公司 Pose generation method and device, electronic equipment, storage medium, product and vehicle

Also Published As

Publication number Publication date
WO2023273057A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
WO2023273057A1 (en) Robot positioning method and apparatus, robot and storage medium
CN110246147A (en) Vision inertia odometer method, vision inertia mileage counter device and mobile device
CN107748569B (en) Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
CN111127563A (en) Joint calibration method, device, electronic device and storage medium
CN110879400A (en) Method, equipment and storage medium for fusion positioning of laser radar and IMU
CN112652016A (en) Point cloud prediction model generation method, pose estimation method and device
KR102694715B1 (en) Method for detecting obstacle, electronic device, roadside device and cloud control platform
EP3852065A1 (en) Data processing method and apparatus
CN112229424B (en) Parameter calibration method and device for visual inertial system, electronic equipment and medium
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
US10578880B2 (en) Augmenting reality via antenna and interaction profile
JP7182020B2 (en) Information processing method, device, electronic device, storage medium and program
US11042984B2 (en) Systems and methods for providing image depth information
CN109767470B (en) Tracking system initialization method and terminal equipment
US20230236219A1 (en) Visual inertial odometry with machine learning depth
CN110769245A (en) Calibration method and related equipment
Wei et al. Fusionportablev2: A unified multi-sensor dataset for generalized slam across diverse platforms and scalable environments
US10582186B1 (en) Approaches for identifying misaligned cameras
Yang et al. Tightly-coupled fusion of iGPS measurements in optimization-based visual SLAM
US11158119B2 (en) Systems and methods for reconstructing a three-dimensional object
WO2023160445A1 (en) Simultaneous localization and mapping method and apparatus, electronic device, and readable storage medium
CN115578432A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114777773A (en) Camera position and posture compensation method and device, electronic equipment and readable storage medium
US10175778B1 (en) Method and apparatus for real-time motion direction detection via acceleration-magnetic fusion
HK40040129B (en) Visual inertial system of parameter calibration method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210914