WO2019196478A1 - Positionnement de robot - Google Patents

Positionnement de robot

Info

Publication number
WO2019196478A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
coordinate system
image collector
robot
Prior art date
Application number
PCT/CN2018/121180
Other languages
English (en)
Chinese (zh)
Inventor
郝立良
申浩
程保山
Original Assignee
北京三快在线科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京三快在线科技有限公司 filed Critical 北京三快在线科技有限公司
Publication of WO2019196478A1 publication Critical patent/WO2019196478A1/fr

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis

Definitions

  • the present application relates to the field of navigation technology, and in particular to robot positioning.
  • In laser-based robot navigation, a two-dimensional map can be generated by scanning, with the laser, the obstacles in the area where the robot is located, and the robot can determine its driving route based on the positions of the obstacles in the two-dimensional map, thereby realizing navigation.
  • Although a current robot can determine the surrounding obstacles by laser scanning when it is started, it cannot determine its own position in the coordinate system. Therefore, the position of the robot in the coordinate system has to be indicated manually at startup so that the robot can perform path planning in the two-dimensional map according to that position to realize navigation.
  • Embodiments of the present invention provide a robot positioning method, a robot positioning device, a computer readable storage medium, and a robot.
  • A robot positioning method includes: acquiring an image by an image collector on the robot and extracting a feature point from the image; determining, in a pre-stored feature point library, a target sample feature point that matches the feature point; and determining coordinates of the image collector in a preset coordinate system according to pre-stored coordinates of the target sample feature point in the preset coordinate system and a coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
  • A robot positioning apparatus includes: a feature point extraction module configured to extract a feature point from an image collected by an image collector on the robot; a feature point matching module configured to determine, in a pre-stored feature point library, a target sample feature point that matches the feature point; and a first coordinate determining module configured to determine coordinates of the image collector in the preset coordinate system according to pre-stored coordinates of the target sample feature point in the preset coordinate system and a coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the above-described robot positioning method.
  • a robot comprising a laser sensor and an image collector, further comprising a processor, wherein the processor is configured to perform the robot positioning method described above.
  • With the above solutions, the coordinates of the robot in the preset coordinate system can be determined and calibrated, so that operations can be performed based on these coordinates; in particular, a navigation path can be planned in the navigation map starting from the coordinates of the image collector in the preset coordinate system. It is therefore not necessary to manually indicate the coordinates of the robot, and completely autonomous navigation of the robot can be realized.
  • FIG. 1 is a schematic flow chart of a robot positioning method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of another robot positioning method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart showing determining a positional relationship and a posture relationship of a laser sensor and an image collector, according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of determining the coordinate mapping relationship between a sample feature point and the image collector in the preset coordinate system according to depth information and position information of the sample feature point in a sample image and angle information of the image collector when acquiring the sample image, according to an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of acquiring an image by the image collector and extracting feature points from the image, according to an embodiment of the present invention.
  • FIG. 6 is a schematic flow chart of still another robot positioning method according to an embodiment of the present invention.
  • FIG. 7 is a schematic flow chart showing navigation path planning in a navigation map matching a preset coordinate system according to coordinates of the image collector in a preset coordinate system according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing a hardware configuration of a device in which a robot positioning device is located, according to an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram of a robotic positioning device, in accordance with an embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of another robotic positioning device shown in accordance with an embodiment of the present invention.
  • FIG. 11 is a schematic block diagram of still another robotic positioning device shown in accordance with an embodiment of the present invention.
  • FIG. 12 is a schematic block diagram of a path planning module, shown in accordance with an embodiment of the present invention.
  • first, second, third, etc. may be used to describe various information in this application, such information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
  • first information may also be referred to as the second information without departing from the scope of the present application.
  • second information may also be referred to as the first information.
  • The word "if" as used herein may be interpreted as "when", "at the time of", or "in response to a determination".
  • FIG. 1 is a schematic flow chart of a robot positioning method according to an embodiment of the present invention.
  • the method shown in this embodiment can be applied to a robot, which can include a laser sensor and an image collector.
  • The image collector can be a monocular camera or a binocular camera.
  • the image acquired by the image collector can be a depth image.
  • the image collector can be rotated, for example, by 360° in a preset plane to capture images in different directions.
  • the laser sensor can emit and receive laser light. Also, the laser sensor can be rotated, for example, by rotating 360° in a predetermined plane to emit laser light in a direction in which it is directed.
  • the laser sensor and image collector may be in the same plane or in different planes.
  • the robot positioning method may include the following steps:
  • step S1 an image is acquired by the image collector, and feature points are extracted from the image.
  • The image acquired by the image collector may be a single image or multiple images.
  • The number of images collected can be set as needed.
  • the number of feature points extracted from the image may be the same or different, and the number of specifically extracted feature points may be set as needed. For example, for one image, the number of extracted feature points may be greater than or equal to six.
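  • As a minimal sketch of step S1 (with an assumed ORB detector, which the embodiments above do not prescribe), the code below extracts feature points from one camera image.

```python
import cv2

def extract_feature_points(image_path, max_features=500):
    """Detect feature points (keypoints and descriptors) in one image.

    ORB is used here purely as an example detector; the method above only
    requires that feature points (e.g. six or more per image) can be extracted.
    """
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    return keypoints, descriptors

# Hypothetical usage:
# kps, descs = extract_feature_points("frame_000.png")
# print(len(kps), "feature points extracted")
```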
  • Step S2 determining a target sample feature point that matches the feature point in a pre-stored feature point library.
  • the sample feature points may be pre-stored, for example, a feature point library composed of sample feature points is generated in advance, and the feature point library stores coordinates of each sample feature point in a preset coordinate system.
  • the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system may also be stored in the feature point library (for example, including a positional relationship and an attitude relationship, which may be represented by a matrix).
  • the coordinate mapping relationship can also be stored in other storage spaces than the feature point library.
  • the description information of the sample feature points may also be stored in the feature point library. For example, if the granularity of the sample feature points is a pixel in the image, the description information may include gray values of several (eg, 8) pixels around the pixel as the sample feature point. The description information may also include information such as the category of the sample feature points, the position of the sample feature points in the image, and the like.
  • In step S2, the description information of the extracted feature point may be determined, and the sample feature point whose description information matches that of the extracted feature point may then be looked up in the pre-stored feature point library; that sample feature point is the target sample feature point.
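  • A minimal sketch of the matching in step S2 (the distance metric is an assumption, since the embodiments do not fix one): each library entry stores a descriptor built, for example, from the gray values of the pixels around the sample feature point, and the entry closest to the query descriptor is returned as the target sample feature point.

```python
import numpy as np

def match_feature_point(query_descriptor, feature_point_library):
    """Return the library entry whose descriptor is closest to the query.

    feature_point_library: list of dicts with assumed keys
      'descriptor' - e.g. gray values of the 8 pixels around the sample point
      'coords_3d'  - coordinates of the sample feature point in the preset
                     coordinate system
      'mapping'    - coordinate mapping matrix between the sample feature
                     point and the image collector
    Euclidean distance is an illustrative choice of similarity measure.
    """
    query = np.asarray(query_descriptor, dtype=np.float64)
    best_entry, best_dist = None, np.inf
    for entry in feature_point_library:
        dist = np.linalg.norm(query - np.asarray(entry["descriptor"], dtype=np.float64))
        if dist < best_dist:
            best_entry, best_dist = entry, dist
    return best_entry, best_dist
```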
  • Step S3: Determine coordinates of the image collector in the preset coordinate system according to the pre-stored coordinates of the target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system.
  • The coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system, as well as the coordinates of the target sample feature point in the preset coordinate system, may be stored in the feature point library.
  • The coordinates of the target sample feature point in the preset coordinate system are converted according to the coordinate mapping relationship, thereby deriving the coordinates of the image collector in the preset coordinate system.
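  • A minimal sketch of step S3, under the assumption that the coordinate mapping relationship is stored as a 4x4 homogeneous matrix M with [point; 1] = M @ [collector; 1] (consistent with the later statement that the sample feature point's coordinates are obtained by multiplying the collector's coordinates with the matrix); the collector's coordinates then follow by inverting M.

```python
import numpy as np

def collector_coords_from_target_point(point_coords_3d, mapping_matrix):
    """Recover the image collector's coordinates in the preset coordinate system
    from the target sample feature point's coordinates and the mapping matrix."""
    M = np.asarray(mapping_matrix, dtype=np.float64)
    point_h = np.append(np.asarray(point_coords_3d, dtype=np.float64), 1.0)
    collector_h = np.linalg.inv(M) @ point_h      # invert [point;1] = M [collector;1]
    return collector_h[:3]
```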
  • Since the image collector is disposed on the robot, once the coordinates of the image collector in the preset coordinate system have been determined, the coordinates of the robot in the preset coordinate system can also be determined. It is therefore not necessary to manually indicate the coordinates of the robot, which facilitates completely autonomous navigation of the robot.
  • FIG. 2 is a schematic flow chart of another robot positioning method according to an embodiment of the present invention. As shown in FIG. 2, the robot positioning method may include the following steps:
  • Step S4: Before acquiring a sample image by the image collector, determine a positional relationship and an attitude relationship of the laser sensor with respect to the image collector.
  • the positional relationship may refer to an offset of the image collector relative to the laser sensor along the x-axis in the predetermined coordinate system, an offset along the y-axis, and an offset along the z-axis.
  • the attitude relationship may refer to a rotation angle and an elevation angle of a direction in which the image collector acquires an image with respect to a laser sensor emission direction.
  • Step S5 determining a first coordinate of the laser sensor in a preset coordinate system.
  • the first coordinate of the laser sensor in a preset coordinate system can be determined by SLAM (Simultaneous Localization And Mapping).
  • The coordinates determined by SLAM may be two-dimensional coordinates; if the preset coordinate system is a three-dimensional coordinate system, the two-dimensional coordinates may be converted into three-dimensional coordinates with the z-axis coordinate set to zero.
  • Step S6 Convert the first coordinate according to the position relationship and the attitude relationship to obtain a second coordinate of the image collector in a preset coordinate system.
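  • A sketch of steps S5 and S6 (an illustration only): the 2D SLAM pose of the laser sensor is lifted to 3D with z = 0 and then composed with the laser-to-collector extrinsics; representing the positional and attitude relationship as an assumed 4x4 homogeneous matrix T_laser_camera is one possible convention.

```python
import numpy as np

def lift_slam_pose(x, y, yaw):
    """Step S5 (sketch): build a 4x4 world pose of the laser sensor from a 2D
    SLAM pose, with the z-axis coordinate set to zero."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = [x, y, 0.0]
    return T

def collector_coords_from_laser(T_world_laser, T_laser_camera):
    """Step S6 (sketch): convert the laser pose by the positional relationship
    (x/y/z offsets) and attitude relationship (rotation and elevation angles),
    here assumed to be packed into the 4x4 matrix T_laser_camera, and return
    the image collector's second coordinate in the preset coordinate system."""
    T_world_camera = np.asarray(T_world_laser) @ np.asarray(T_laser_camera)
    return T_world_camera[:3, 3]
```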
  • Step S7 collecting a sample image by the image collector, and extracting a plurality of sample feature points from the sample image.
  • For example, multiple sample images may be acquired by the image collector, and one or more sample feature points may be extracted separately from each sample image; the number of sample feature points extracted from each sample image may be the same or different.
  • the plurality of sample feature points thus acquired are stored in the feature point library.
  • Step S8 Determine the coordinate mapping relationship according to depth information, position information of the sample feature point in the sample image, and angle information when the image collector collects the sample image.
  • the image acquired by the image collector may be a depth image.
  • the feature point has a granularity of pixels, and the depth image contains depth information for each pixel. Based on the depth information, the distance from the pixel to the image collector, that is, the distance from the feature point to the image collector, can be determined.
  • The coordinate mapping relationship can then be determined from this distance, the position information of the sample feature point in the sample image (for example, which row of pixels in the sample image the sample feature point corresponds to), and the angle information of the image collector when it collects the sample image.
  • For example, suppose the sample feature point lies 100 pixels directly above the center point of the image collected by the image collector, and the length of each pixel is preset, for example, to L, so that 100 pixels correspond to a length of 100L. The segment from the sample feature point to the image center, the segment from the image center to the image collector, and the segment from the image collector to the sample feature point form a right triangle, in which the distance D from the image collector to the sample feature point is the hypotenuse and 100L is one right-angle side; the length of the other right-angle side, that is, the distance d from the image center to the image collector, is obtained by the Pythagorean theorem.
  • In this example the coordinate mapping relationship may be: the sample feature point lies in the direction given by the rotation angle θ of the image collector, in the plane whose distance to the image collector is d, at a distance of 100L directly above the center of that plane. This relationship can be represented by a matrix.
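  • The worked example above can be written out as a short sketch (variable names are assumptions, not the disclosed implementation): given the depth D of the sample feature point, its pixel offset from the image center, and the preset pixel length L, the distance d from the image center to the image collector follows from the Pythagorean theorem.

```python
import math

def image_center_distance(depth_D, pixel_offset, pixel_length_L):
    """Distance d from the image center to the image collector.

    depth_D:        distance from the image collector to the sample feature
                    point (the hypotenuse), read from the depth image
    pixel_offset:   number of pixels between the feature point and the image
                    center (100 in the example above)
    pixel_length_L: preset physical length of one pixel (L in the example)
    """
    leg = pixel_offset * pixel_length_L            # 100 * L in the example
    if depth_D < leg:
        raise ValueError("depth must be at least the in-plane offset")
    return math.sqrt(depth_D ** 2 - leg ** 2)      # Pythagorean theorem

# Example with assumed numbers: D = 2.0 m, 100 pixels of 5 mm each -> d ≈ 1.94 m
# print(image_center_distance(2.0, 100, 0.005))
```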
  • Step S9 Determine, according to the second coordinate and the coordinate mapping relationship, a third coordinate of the sample feature point in a preset coordinate system.
  • the second coordinate may be further converted according to the coordinate mapping relationship to determine a third coordinate of the sample feature point.
  • For example, if the coordinate mapping relationship is represented by a matrix, the third coordinate can be obtained by multiplying the second coordinate by that matrix.
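  • A one-function sketch of step S9, under the same assumed 4x4 homogeneous representation of the coordinate mapping relationship used earlier.

```python
import numpy as np

def sample_point_coords(second_coord, mapping_matrix):
    """Step S9 (sketch): third coordinate of the sample feature point in the
    preset coordinate system, obtained by multiplying the image collector's
    second coordinate with the (assumed 4x4 homogeneous) mapping matrix."""
    collector_h = np.append(np.asarray(second_coord, dtype=np.float64), 1.0)
    point_h = np.asarray(mapping_matrix, dtype=np.float64) @ collector_h
    return point_h[:3]
```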
  • Step S10 storing the third coordinate and the coordinate mapping relationship.
  • For example, a feature point library may be generated that includes, for each sample feature point, the coordinates of the sample feature point in the preset coordinate system and the coordinate mapping relationship between the sample feature point and the image collector in the preset coordinate system (which can be represented, for example, by a matrix).
  • Steps S4 to S10 may be pre-executed before robot navigation, so that the coordinate mapping relationship between the sample feature points and the image collector in the preset coordinate system is pre-stored in the feature point library and can be used later to determine the coordinates of the image collector in the preset coordinate system. For example, when performing steps S1 to S3, the target sample feature point matching the feature point extracted from the image is determined in the feature point library, and the pre-stored coordinates of that target sample feature point in the preset coordinate system are converted using the pre-stored coordinate mapping relationship, thereby obtaining the coordinates of the image collector in the preset coordinate system.
  • If determined only from the result of the laser sensor scan, the accuracy of the coordinates of a feature point in the preset coordinate system would be low.
  • The image acquired by the image collector is relatively less susceptible to environmental interference; that is, the coordinate mapping relationship between a feature point in the image and the image collector in the preset coordinate system is relatively accurate. Since the feature point library of this embodiment combines the results of the laser sensor scan with the images acquired by the image collector, the coordinates of feature points can be determined relatively accurately by matching in the feature point library.
  • Determining the positional relationship and the attitude relationship of the laser sensor with respect to the image collector may include: step S401, determining the positional relationship and the attitude relationship of the laser sensor with respect to the image collector according to a nonlinear optimization algorithm.
  • the positional relationship and attitude relationship of the laser sensor relative to the image collector can be determined by manual measurement or by a nonlinear optimization algorithm.
  • For example, the nonlinear optimization algorithm adopted is the least squares method. Since the positions of the laser sensor and the image collector on the robot are relatively fixed, the positional relationship and the attitude relationship of the laser sensor with respect to the image collector are also relatively fixed.
  • For example, a plurality of known points may be set in the space. For a known point, the laser sensor may emit laser light to the point and receive the reflected laser light, thereby determining the positional relationship and attitude relationship of the known point with respect to the laser sensor (represented, for example, by a matrix A, with the spatial coordinates of the laser sensor represented as a matrix P).
  • An image of the same point can be acquired by the image collector, thereby determining the positional relationship and attitude relationship of the known point with respect to the image collector (represented, for example, by a matrix B, with the spatial coordinates of the image collector represented as a matrix Q).
  • Multiple sets of these matrices can be measured separately for the known points, and the least squares method can be applied to the measured sets to obtain a relatively accurate matrix C representing the positional relationship and attitude relationship of the laser sensor with respect to the image collector. Since the nonlinear optimization algorithm can be executed by software, the positional relationship and attitude relationship of the laser sensor with respect to the image collector can be determined more accurately than by manual measurement.
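  • One way to realize this calibration, sketched under assumptions the text leaves open (each known point observed as a 3D coordinate in both the laser sensor frame and the image collector frame), is the SVD-based Kabsch/Umeyama least-squares fit of a rigid transform.

```python
import numpy as np

def estimate_extrinsics(points_laser, points_camera):
    """Least-squares rigid transform (R, t) with points_camera ≈ R @ p + t.

    points_laser, points_camera: (N, 3) arrays of the same known points
    observed in the laser sensor frame and in the image collector frame.
    R encodes the attitude relationship and t the positional relationship.
    """
    P = np.asarray(points_laser, dtype=np.float64)
    Q = np.asarray(points_camera, dtype=np.float64)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # attitude relationship
    t = cQ - R @ cP                                 # positional relationship
    return R, t
```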
  • determining the coordinate mapping relationship according to the depth information of the sample feature points in the sample image, the position information, and the angle information when the image collector acquires the sample image includes:
  • Step S801: Determine the coordinate mapping relationship according to the depth information and position information of the sample feature point in the sample image, the angle information, and the imaging model of the image collector.
  • Different image collectors have different imaging models: for example, if the image collector is a pinhole camera, the imaging model is a pinhole model; if the image collector is a fisheye camera, the imaging model is a fisheye model.
  • Under different imaging models, the correspondence between the coordinate mapping relationship and the depth information, position information, and angle information of the sample feature point in the sample image is different. Therefore, taking the imaging model of the image collector into account when determining the coordinates of feature points helps determine those coordinates more accurately.
  • For example, the pinhole model can be represented by the relationship s·m' = A·[R t]·M', where m' is the uv (pixel) coordinate of the feature point, A is the camera internal parameter (intrinsic) matrix, [R t] is the relationship between the camera and the preset coordinate system (such as the world coordinate system), M' is the coordinate of the feature point in the preset coordinate system (for example, world coordinates), and s is the z coordinate of the object in the camera coordinate system.
  • The camera internal parameters mentioned here are parameters determined solely by the camera itself; once they have been calculated for a given camera, they do not need to be calculated again.
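  • The relationship above, written out as a sketch (variable names assumed): a point in the preset coordinate system is projected into pixel coordinates with intrinsics A and extrinsics [R t], and s is recovered as the depth along the camera z-axis.

```python
import numpy as np

def pinhole_project(M_world, A, R, t):
    """Pinhole model s * m' = A [R|t] M'.

    M_world: 3D point M' in the preset (world) coordinate system
    A:       3x3 camera internal parameter (intrinsic) matrix
    R, t:    rotation (3x3) and translation (3,) from world to camera
    Returns the pixel coordinates m' = (u, v) and the depth s.
    """
    cam = np.asarray(R) @ np.asarray(M_world, dtype=np.float64) + np.asarray(t)
    s = cam[2]                        # z coordinate in the camera coordinate system
    uvw = np.asarray(A) @ cam         # A [R|t] M'
    return uvw[:2] / s, s

# Hypothetical intrinsics: fx = fy = 500, principal point (320, 240)
# A = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
# m, s = pinhole_project([0.1, 0.2, 2.0], A, np.eye(3), np.zeros(3))
```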
  • FIG. 5 is a schematic flow chart of acquiring an image by the image collector and extracting feature points from the image, according to an embodiment of the present invention.
  • As shown in FIG. 5, acquiring an image by the image collector and extracting feature points from the image includes: step S101, when the robot is started, acquiring an image by the image collector and extracting feature points from the image.
  • That is, step S1 may be performed when the robot is started: an image is acquired by the image collector at startup, and feature points are extracted from the image. This ensures that, as soon as the robot is started, its coordinates in the preset coordinate system can be determined, thereby enabling autonomous navigation.
  • FIG. 6 is a schematic flow chart of still another robot positioning method according to an embodiment of the present invention. As shown in FIG. 6, on the basis of the embodiment shown in FIG. 1, the robot positioning method may further include:
  • Step S11 generating a navigation map by scanning the area where the robot is located by the laser sensor;
  • the navigation map generated by the laser sensor scanning the area where the robot is located may be a two-dimensional map.
  • The laser sensor can scan obstacles within a predetermined distance of the robot in the area where the robot is located; from the laser light reflected by the obstacles, the positions of the obstacles in the area can be determined, and a navigation map can then be generated according to those positions.
  • a navigation map is generated, for example, by SLAM.
  • Step S12 matching the navigation map and the preset coordinate system.
  • For example, the navigation map is a two-dimensional map and the preset coordinate system is a three-dimensional coordinate system.
  • In this case, only two dimensions of the three-dimensional coordinate system may be matched to the navigation map.
  • For example, the two-dimensional map is a map parallel to the horizontal plane.
  • The x-axis and the y-axis are the axes parallel to the horizontal plane, so the x-axis and the y-axis of the three-dimensional coordinate system can be matched to the navigation map.
  • In the navigation map, the x-axis coordinate and the y-axis coordinate can thus be calibrated.
  • Step S13 Perform navigation path planning on the navigation map matching the preset coordinate system according to the coordinates of the image collector in the preset coordinate system.
  • With the above steps, the coordinates of the robot in the preset coordinate system can be determined and calibrated in the navigation map.
  • Operations can therefore be performed based on these coordinates, so that the navigation path can be planned in the navigation map starting from the coordinates of the image collector in the preset coordinate system. It is thus not necessary to manually indicate the coordinates of the robot, and completely autonomous navigation of the robot can be realized.
  • the navigation path planning can be performed according to the amcl (adaptive Monte Carlo Localization) positioning algorithm, costmap (cost map) and path planning algorithm.
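  • As a generic illustration of the planning step (the embodiments mention amcl, a costmap, and a path planning algorithm without fixing the latter), the sketch below runs A* over a 2D costmap from the calibrated robot cell to a goal cell; it is a stand-in, not the disclosed planner.

```python
import heapq, itertools

def plan_path(costmap, start, goal, lethal=100):
    """A* search on a 2D costmap grid.

    costmap:     2D array of non-negative cell costs; cells >= lethal are obstacles
    start, goal: (row, col) cells, e.g. the calibrated robot coordinate and the target
    Returns a list of cells from start to goal, or None if no path exists.
    """
    rows, cols = len(costmap), len(costmap[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                   # tie-breaker for the heap
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cell[0] + dr, cell[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and costmap[nr][nc] < lethal:
                ng = g + 1 + costmap[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), next(tie), ng, (nr, nc), cell))
    return None
```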
  • FIG. 7 is a schematic flow chart showing navigation path planning in a navigation map matching a preset coordinate system according to coordinates of the image collector in a preset coordinate system according to an embodiment of the present invention.
  • As shown in FIG. 7, performing navigation path planning in the navigation map matching the preset coordinate system according to the coordinates of the image collector in the preset coordinate system includes: step S1301, determining a projection of the contour of the robot in the preset coordinate system according to the position of the image collector on the robot; and step S1302, performing navigation path planning in the navigation map matching the preset coordinate system according to the projection.
  • In other words, the projection of the contour of the robot in the preset coordinate system can be determined according to the position of the image collector on the robot, and navigation path planning can be performed in the navigation map matching the preset coordinate system according to that projection. This ensures that the robot does not touch obstacles along the path and moves smoothly. From the projection of the robot, the orientation of the robot in the preset coordinate system can also be determined, which further facilitates planning of the navigation path.
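  • A minimal sketch of the footprint idea in steps S1301 and S1302 (the data layout is assumed): the robot contour, known relative to the image collector's position on the robot, is projected into the preset coordinate system around the collector's coordinates and checked against obstacle cells of the navigation map.

```python
import numpy as np

def project_footprint(collector_xy, heading, contour_offsets):
    """Project the robot contour into the preset coordinate system.

    collector_xy:    (x, y) of the image collector in the preset coordinate system
    heading:         robot orientation in radians in the preset coordinate system
    contour_offsets: (N, 2) contour points relative to the image collector's
                     position on the robot (assumed to be known)
    """
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(contour_offsets, dtype=np.float64) @ R.T + np.asarray(collector_xy)

def footprint_is_free(footprint_xy, occupied_cells, resolution=0.05):
    """Check the projected contour against obstacle cells of the navigation map."""
    cells = {(int(x // resolution), int(y // resolution)) for x, y in footprint_xy}
    return cells.isdisjoint(occupied_cells)
```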
  • the present application also provides an embodiment of the robot positioning device.
  • the embodiment of the robot positioning device of the present application can be applied to a device such as a robot.
  • the device embodiment may be implemented by software, or may be implemented by hardware or a combination of hardware and software.
  • In the case of a software implementation, as a logical apparatus, it is formed by the processor of the device in which it is located reading the corresponding computer program instructions from the non-volatile memory into the memory and running them.
  • FIG. 8 is a hardware structure diagram of the device where the robot positioning apparatus is located. In addition to the processor 801, the memory 802, the network interface 803, and the non-volatile memory shown in FIG. 8, the device in which the apparatus is located may also include other hardware according to the actual function of the device, and details are not described herein.
  • FIG. 9 is a schematic block diagram of a robotic positioning device, in accordance with an embodiment of the present invention.
  • the robot includes a laser sensor and an image collector, as shown in FIG. 9, the robot positioning device includes:
  • a feature point extraction module 1 for acquiring an image by the image collector, and extracting feature points from the image
  • a feature point matching module 2 configured to determine, in a pre-stored feature point library, a target sample feature point that matches the feature point;
  • a first coordinate determining module 3 configured to: according to pre-stored coordinates of the target sample feature point in a preset coordinate system, and coordinate mapping between the target sample feature point and the image collector in a preset coordinate system And determining coordinates of the image collector in a preset coordinate system.
  • FIG. 10 is a schematic block diagram of another robotic positioning device shown in accordance with an embodiment of the present invention. As shown in FIG. 10, on the basis of the embodiment shown in FIG. 9, the robot positioning device further includes:
  • a relationship determining module 4 configured to determine a positional relationship and a posture relationship of the laser sensor with respect to the image collector;
  • a second coordinate determining module 5 configured to determine a first coordinate of the laser sensor in a preset coordinate system
  • a coordinate conversion module 6 configured to convert the first coordinate according to the positional relationship and the attitude relationship to obtain a second coordinate of the image collector in a preset coordinate system
  • the feature point extraction module 1 is further configured to collect a sample image by using the image collector, and extract a plurality of sample feature points from the sample image;
  • the mapping determination module 7 is configured to determine the coordinate mapping relationship according to the depth information, the position information of the sample feature point in the sample image, and the angle information when the image collector acquires the sample image;
  • the third coordinate determining module 8 is configured to determine, according to the second coordinate and the coordinate mapping relationship, a third coordinate of the sample feature point in a preset coordinate system;
  • the storage module 9 is configured to store a third coordinate of the sample feature point in a preset coordinate system and a coordinate mapping relationship between the sample feature point and the image collector in a preset coordinate system.
  • the relationship determination module 4 is configured to determine a positional relationship and an attitude relationship of the laser sensor relative to the image collector according to a nonlinear optimization algorithm.
  • The mapping determining module 7 is configured to determine the coordinate mapping relationship according to the depth information and position information of the sample feature point in the sample image, the angle information, and the imaging model of the image collector.
  • The feature point extraction module 1 is configured to acquire an image by the image collector when the robot is started, and to extract feature points from the image.
  • FIG. 11 is a schematic block diagram of still another robotic positioning device shown in accordance with an embodiment of the present invention. As shown in FIG. 11, on the basis of the embodiment shown in FIG. 9, the robot positioning device further includes:
  • a map generating module 10 configured to generate a navigation map by scanning, by the laser sensor, an area where the robot is located;
  • a map matching module 11 configured to match the navigation map and the preset coordinate system
  • the path planning module 12 is configured to perform navigation path planning in a navigation map matching the preset coordinate system according to coordinates of the image collector in a preset coordinate system.
  • FIG. 12 is a schematic block diagram of a path planning module, shown in accordance with an embodiment of the present invention. As shown in FIG. 12, on the basis of the embodiment shown in FIG. 11, the path planning module 12 includes:
  • a projection determining sub-module 1201 configured to determine a projection of the contour of the robot in the preset coordinate system according to a position of the image collector on the robot;
  • the path planning sub-module 1202 is configured to perform navigation path planning in the navigation map matching the preset coordinate system according to the projection.
  • Embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program that, when executed by a processor, performs the robot positioning method of any of the above embodiments.
  • Embodiments of the present invention also provide a robot, including a laser sensor and an image collector, further comprising a processor, wherein the processor is configured to perform the robot positioning method of any of the above embodiments.
  • Since the device embodiments basically correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments.
  • The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the present application. Those of ordinary skill in the art can understand and implement the solution without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A robot positioning method and device are provided. The robot positioning method comprises the steps of: acquiring an image by means of an image collector on a robot, and extracting a feature point from the image (S1); determining, in a pre-stored feature point library, a target sample feature point that matches the feature point (S2); and determining the coordinates of the image collector in a preset coordinate system according to the coordinates of the pre-stored target sample feature point in the preset coordinate system and the coordinate mapping relationship between the target sample feature point and the image collector in the preset coordinate system (S3).
PCT/CN2018/121180 2018-04-13 2018-12-14 Positionnement de robot WO2019196478A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810330896.8 2018-04-13
CN201810330896.8A CN110377015B (zh) 2018-04-13 2018-04-13 机器人定位方法和机器人定位装置

Publications (1)

Publication Number Publication Date
WO2019196478A1 true WO2019196478A1 (fr) 2019-10-17

Family

ID=68164134

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/121180 WO2019196478A1 (fr) 2018-04-13 2018-12-14 Positionnement de robot

Country Status (2)

Country Link
CN (1) CN110377015B (fr)
WO (1) WO2019196478A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084285A (zh) * 2020-09-11 2020-12-15 北京百度网讯科技有限公司 用于地图匹配的方法、装置、电子设备以及可读介质
EP4321121A4 (fr) * 2021-05-10 2024-09-25 Wuhan United Imaging Healthcare Surgical Tech Co Ltd Procédé et système de positionnement et de réglage de position de robot

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161335A (zh) * 2019-12-30 2020-05-15 深圳Tcl数字技术有限公司 虚拟形象的映射方法、映射装置及计算机可读存储介质
CN111157005A (zh) * 2020-01-07 2020-05-15 深圳市锐曼智能装备有限公司 基于反光片定位方法和装置
CN111337877A (zh) * 2020-03-19 2020-06-26 北京北特圣迪科技发展有限公司 一种反光板匹配定位方法
CN111551113A (zh) * 2020-05-19 2020-08-18 南京航空航天大学 一种大批量航空零件质检方法
CN112008718B (zh) * 2020-06-12 2022-04-05 特斯联科技集团有限公司 一种机器人控制方法、系统、存储介质及智能机器人
CN111862211B (zh) * 2020-07-22 2023-10-27 杭州海康威视数字技术股份有限公司 定位方法、装置、系统、存储介质和计算机设备
CN112053586A (zh) * 2020-09-11 2020-12-08 江苏小白兔智造科技有限公司 一种基于红外测距仪的车辆位置确认方法及装置
CN112037525A (zh) * 2020-09-11 2020-12-04 江苏小白兔智造科技有限公司 一种基于摄像装置的无需停车厅的智能停车方法
CN112053585A (zh) * 2020-09-11 2020-12-08 江苏小白兔智造科技有限公司 一种基于激光雷达的无需停车厅的智能停车方法
CN112037530A (zh) * 2020-09-11 2020-12-04 江苏小白兔智造科技有限公司 一种基于摄像装置的车辆位置确认方法及装置
CN112037575A (zh) * 2020-09-11 2020-12-04 江苏小白兔智造科技有限公司 一种基于超声波测距仪的无需停车厅的智能停车方法
CN112065125A (zh) * 2020-09-11 2020-12-11 江苏小白兔智造科技有限公司 一种基于红外测距仪的无需停车厅的智能停车方法
CN112114316A (zh) * 2020-09-11 2020-12-22 江苏小白兔智造科技有限公司 一种基于超声波测距仪的车辆位置确认方法及装置
CN112162559B (zh) * 2020-09-30 2021-10-15 杭州海康机器人技术有限公司 用于多机器人混行的方法、装置及存储介质
CN114434451A (zh) * 2020-10-30 2022-05-06 神顶科技(南京)有限公司 服务机器人及其控制方法、移动机器人及其控制方法
CN114445502A (zh) * 2020-11-06 2022-05-06 财团法人工业技术研究院 多摄影机定位调度系统及方法
CN112697151B (zh) * 2020-12-24 2023-02-21 北京百度网讯科技有限公司 移动机器人的初始点的确定方法、设备和存储介质
CN114098980B (zh) * 2021-11-19 2024-06-11 武汉联影智融医疗科技有限公司 相机位姿调整方法、空间注册方法、系统和存储介质
CN113313770A (zh) * 2021-06-29 2021-08-27 智道网联科技(北京)有限公司 行车记录仪的标定方法及其装置
CN113761255B (zh) * 2021-08-19 2024-02-09 劢微机器人科技(深圳)有限公司 机器人室内定位方法、装置、设备及存储介质
CN114209433B (zh) * 2021-12-31 2023-09-05 杭州三坛医疗科技有限公司 一种手术机器人导航定位装置
CN114653558B (zh) * 2022-05-25 2022-08-02 苏州柳溪机电工程有限公司 用于涂装流水线的吹水系统

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010018640A1 (en) * 2000-02-28 2001-08-30 Honda Giken Kogyo Kabushiki Kaisha Obstacle detecting apparatus and method, and storage medium which stores program for implementing the method
CN1569558A (zh) * 2003-07-22 2005-01-26 中国科学院自动化研究所 基于图像表现特征的移动机器人视觉导航方法
CN101441769A (zh) * 2008-12-11 2009-05-27 上海交通大学 单目摄像机实时视觉定位方法
CN104463108A (zh) * 2014-11-21 2015-03-25 山东大学 一种单目实时目标识别及位姿测量方法
CN105046686A (zh) * 2015-06-19 2015-11-11 奇瑞汽车股份有限公司 定位方法及装置
CN105959529A (zh) * 2016-04-22 2016-09-21 首都师范大学 一种基于全景相机的单像自定位方法及系统
CN106408601A (zh) * 2016-09-26 2017-02-15 成都通甲优博科技有限责任公司 一种基于gps的双目融合定位方法及装置
US20170153646A1 (en) * 2014-06-17 2017-06-01 Yujin Robot Co., Ltd. Apparatus of controlling movement of mobile robot mounted with wide angle camera and method thereof
CN106826815A (zh) * 2016-12-21 2017-06-13 江苏物联网研究发展中心 基于彩色图像与深度图像的目标物体识别与定位的方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036475A (zh) * 2013-07-22 2014-09-10 成都智慧星球科技有限公司 适应于任意投影机群及投影屏幕的高鲁棒性几何校正方法
CN104331896B (zh) * 2014-11-21 2017-09-08 天津工业大学 一种基于深度信息的系统标定方法
CN105364934B (zh) * 2015-11-30 2017-06-16 山东建筑大学 液压机械臂遥操作控制系统和方法
CN106323241A (zh) * 2016-06-12 2017-01-11 广东警官学院 一种通过监控视频或车载摄像头测量人或物体三维的方法
CN107132581B (zh) * 2017-06-29 2019-04-30 上海理工大学 一种基于位姿映射关系数据库的双层磁源定位方法
CN107481247A (zh) * 2017-07-27 2017-12-15 许文远 一种智慧农业用采摘机器人控制系统及方法
CN107328420B (zh) * 2017-08-18 2021-03-02 上海智蕙林医疗科技有限公司 定位方法和装置

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010018640A1 (en) * 2000-02-28 2001-08-30 Honda Giken Kogyo Kabushiki Kaisha Obstacle detecting apparatus and method, and storage medium which stores program for implementing the method
CN1569558A (zh) * 2003-07-22 2005-01-26 中国科学院自动化研究所 基于图像表现特征的移动机器人视觉导航方法
CN101441769A (zh) * 2008-12-11 2009-05-27 上海交通大学 单目摄像机实时视觉定位方法
US20170153646A1 (en) * 2014-06-17 2017-06-01 Yujin Robot Co., Ltd. Apparatus of controlling movement of mobile robot mounted with wide angle camera and method thereof
CN104463108A (zh) * 2014-11-21 2015-03-25 山东大学 一种单目实时目标识别及位姿测量方法
CN105046686A (zh) * 2015-06-19 2015-11-11 奇瑞汽车股份有限公司 定位方法及装置
CN105959529A (zh) * 2016-04-22 2016-09-21 首都师范大学 一种基于全景相机的单像自定位方法及系统
CN106408601A (zh) * 2016-09-26 2017-02-15 成都通甲优博科技有限责任公司 一种基于gps的双目融合定位方法及装置
CN106826815A (zh) * 2016-12-21 2017-06-13 江苏物联网研究发展中心 基于彩色图像与深度图像的目标物体识别与定位的方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084285A (zh) * 2020-09-11 2020-12-15 北京百度网讯科技有限公司 用于地图匹配的方法、装置、电子设备以及可读介质
CN112084285B (zh) * 2020-09-11 2023-08-08 北京百度网讯科技有限公司 用于地图匹配的方法、装置、电子设备以及可读介质
EP4321121A4 (fr) * 2021-05-10 2024-09-25 Wuhan United Imaging Healthcare Surgical Tech Co Ltd Procédé et système de positionnement et de réglage de position de robot

Also Published As

Publication number Publication date
CN110377015B (zh) 2021-04-27
CN110377015A (zh) 2019-10-25

Similar Documents

Publication Publication Date Title
WO2019196478A1 (fr) Positionnement de robot
CN112894832B (zh) 三维建模方法、装置、电子设备和存储介质
US7403268B2 (en) Method and apparatus for determining the geometric correspondence between multiple 3D rangefinder data sets
US8699005B2 (en) Indoor surveying apparatus
US10086955B2 (en) Pattern-based camera pose estimation system
JP6573419B1 (ja) 位置決め方法、ロボット及びコンピューター記憶媒体
JP5620200B2 (ja) 点群位置データ処理装置、点群位置データ処理方法、点群位置データ処理システム、および点群位置データ処理プログラム
CN104574406B (zh) 一种360度全景激光与多个视觉系统间的联合标定方法
WO2018090250A1 (fr) Procédé de génération de nuage de points tridimensionnels, dispositif, système informatique et appareil mobile
US20150116691A1 (en) Indoor surveying apparatus and method
US10451403B2 (en) Structure-based camera pose estimation system
CN106898022A (zh) 一种手持式快速三维扫描系统及方法
JP2016057108A (ja) 演算装置、演算システム、演算方法およびプログラム
RU2572637C2 (ru) Параллельное или выполняемое последовательно в онлайн- и оффлайн- режимах формирование реконструкций для трехмерного обмера помещения
JP2014529727A (ja) 自動シーン較正
JP2012533222A (ja) 画像ベースの表面トラッキング
JP2012063866A (ja) 点群位置データ処理装置、点群位置データ処理方法、点群位置データ処理システム、および点群位置データ処理プログラム
US9858669B2 (en) Optimized camera pose estimation system
US20180204387A1 (en) Image generation device, image generation system, and image generation method
JP4132068B2 (ja) 画像処理装置及び三次元計測装置並びに画像処理装置用プログラム
Borrmann et al. Robotic mapping of cultural heritage sites
EP3479142B1 (fr) Appareil d'imagerie radiologique
Gong et al. Extrinsic calibration of a 3D LIDAR and a camera using a trihedron
WO2024093635A1 (fr) Procédé et appareil d'estimation de pose de caméra, et support de stockage lisible par ordinateur
CN113034347A (zh) 倾斜摄影图像处理方法、装置、处理设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18914228
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 18914228
    Country of ref document: EP
    Kind code of ref document: A1