CN114102577B - Robot and positioning method applied to robot - Google Patents


Publication number
CN114102577B
Authority
CN
China
Prior art keywords
target
robot
information
pose information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010898774.6A
Other languages
Chinese (zh)
Other versions
CN114102577A (en)
Inventor
刘满堂
俞毓锋
Current Assignee
Beijing Jizhijia Technology Co Ltd
Original Assignee
Beijing Jizhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jizhijia Technology Co Ltd filed Critical Beijing Jizhijia Technology Co Ltd
Priority to CN202010898774.6A priority Critical patent/CN114102577B/en
Publication of CN114102577A publication Critical patent/CN114102577A/en
Application granted granted Critical
Publication of CN114102577B publication Critical patent/CN114102577B/en
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • B25J13/089 Determining the position of the robot with reference to its environment
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The disclosure provides a robot and a positioning method applied to the robot. The robot comprises an initial pose detection component, a ranging sensor, a control server, and a memory. The initial pose detection component is configured to perform initial positioning of the robot to obtain initial pose information; the ranging sensor is configured to determine a contour scanning result for a target object in the surrounding environment by scanning that environment; the memory is configured to store the execution instructions of the control server, the initial pose information detected by the initial pose detection component, and the contour scanning result output by the ranging sensor, and to store a prior map in advance; the control server is configured to read the execution instructions from the memory and, in accordance with them, read the initial pose information, the contour scanning result, and the prior map, and determine the accurate pose information of the robot based on these three inputs.

Description

Robot and positioning method applied to robot
Technical Field
The disclosure relates to the technical field of robot positioning, and in particular to a robot and a positioning method applied to the robot.
Background
During operation, a robot can construct a map of its current site according to a map-construction algorithm. Typically, the robot's pose information within the constructed map is determined from an odometer and an inertial device, thereby positioning the robot. However, positioning based on an odometer or an inertial device is subject to error, and when the robot operates for a long period these errors accumulate, lowering the robot's positioning accuracy; the robot therefore needs to be repositioned.
At present, a robot can be relocated using equipment external to the robot or sensors on the robot body. However, relocation based on external equipment (such as the Global Positioning System (GPS) or Ultra-Wide Band (UWB)) or on body sensors (such as a vision sensor) suffers from low positioning accuracy because of the instability of the external environment. How to improve the accuracy of relocation is therefore a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the disclosure at least provides a robot and a positioning method applied to the robot.
In a first aspect, embodiments of the present disclosure provide a robot, comprising: the device comprises an initial pose detection component, a ranging sensor, a control server and a memory;
wherein: the initial pose detection component is configured to perform initial positioning of the robot to obtain initial pose information; the ranging sensor is configured to determine a contour scanning result for a target object in the surrounding environment by scanning that environment; the memory is configured to store the execution instructions of the control server, the initial pose information detected by the initial pose detection component, and the contour scanning result output by the ranging sensor, and to store a prior map in advance.
The control server is configured to read the execution instructions from the memory and, in accordance with them, read the initial pose information, the contour scanning result, and the prior map, and determine the accurate pose information of the robot based on the initial pose information, the contour scanning result, and the pre-stored prior map.
In one possible implementation, when determining the accurate pose information of the robot based on the initial pose information, the contour scan result, and the pre-stored prior map, the control server is configured to:
determine, for each target position point in the contour scanning result, whether target contour information matching that target position point exists in the prior map; if matching target contour information exists, use the initial pose information as the accurate pose information; if it does not exist, synchronously adjust the initial pose information and the corresponding target position point information in the contour scanning result according to a preset pose-adjustment step length, and return to the step of determining whether target contour information matching the target position points exists in the prior map, until target contour information matching the adjusted target position points exists; the adjusted initial pose information corresponding to the adjusted target position points is then used as the accurate pose information.
In a possible implementation, when determining, based on each target position point in the contour scan result, whether target contour information matching the target position point exists in the prior map, the control server is configured to:
determining a map search range based on the initial pose information; and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
In one possible implementation, the initial pose detection component, when performing determining initial pose information obtained by initially positioning the robot, is configured to:
acquiring a target image obtained by the robot through scanning the surrounding environment by a vision sensor; and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
In one possible implementation, the initial pose detection component is configured to, when executing the determination of the initial pose information based on pre-recorded world coordinate information corresponding to a target marker contained in the target image:
determining first pose information of the target marker under a visual sensor coordinate system; determining second pose information of the target marker under the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system; determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system; and determining initial pose information of the robot according to the second relative pose information.
In one possible embodiment, the initial pose detection assembly is further configured to:
if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
In one possible implementation, the initial pose detection component, when performing determining initial pose information obtained by initially positioning the robot, is configured to:
and determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
In one possible implementation, the initial pose detection component, when executing the determination of the initial pose information based on the received wireless broadcast signal sent by the target positioning device, is configured to:
and determining the position of the robot based on the received signal strength of the wireless broadcast signals sent by the plurality of target positioning devices and the preset position information corresponding to the plurality of target positioning devices.
In one possible implementation, the ranging sensor is configured to:
acquiring point cloud data obtained by scanning the surrounding environment with the ranging sensor; determining, based on the point cloud data, whether the number of point cloud points belonging to the target object is greater than a set threshold; and, if it is greater, determining the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
In a second aspect, an embodiment of the present disclosure provides a positioning method, applied to a robot, including:
determining initial pose information obtained by initially positioning the robot;
acquiring a contour scanning result of a target object obtained after the robot scans the surrounding environment through a ranging sensor;
and determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map.
In an alternative embodiment, determining the accurate pose information of the robot based on the initial pose information, the contour scan result, and a pre-stored prior map includes:
determining whether target contour information matched with target position points exists in the prior map based on each target position point in the contour scanning result;
if the matched target contour information exists, the initial pose information is used as the accurate pose information;
if the target contour information does not exist, synchronously adjusting the initial pose information and the corresponding target position point information in the contour scanning result according to a preset pose-adjustment step length, and returning to the step of determining whether target contour information matching the target position points exists in the prior map, until target contour information matching the adjusted target position points exists; the adjusted initial pose information corresponding to the adjusted target position points is then used as the accurate pose information.
In an alternative embodiment, based on each target location point in the contour scan result, determining whether there is target contour information in the prior map that matches the target location point includes:
determining a map search range based on the initial pose information;
and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
In an alternative embodiment, determining initial pose information obtained by initially positioning the robot includes:
acquiring a target image obtained by the robot through scanning the surrounding environment by a vision sensor;
and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
In an alternative embodiment, the determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker included in the target image includes:
determining first pose information of the target marker under a visual sensor coordinate system;
determining second pose information of the target marker under the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system;
Determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system;
and determining initial pose information of the robot according to the second relative pose information.
In one possible embodiment, the method further comprises:
if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
In one possible implementation, determining initial pose information obtained by initially positioning a robot includes:
and determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
In one possible implementation, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning device includes:
and determining the position of the robot based on the received signal strength of the wireless broadcast signals sent by the plurality of target positioning devices and the preset position information corresponding to the plurality of target positioning devices.
In one possible implementation manner, obtaining a profile scanning result of the target object obtained after the robot scans the surrounding environment through the ranging sensor includes:
acquiring point cloud data obtained by scanning the surrounding environment by a ranging sensor;
determining whether the number of point cloud points belonging to the target object is larger than a set threshold value based on the point cloud data;
if it is greater, determining the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
According to the robot and the positioning method applied to the robot, the accurate pose information of the robot is determined from the robot's initial pose information, the contour scanning result of the target object obtained by scanning the surrounding environment with the ranging sensor, and the pre-stored prior map. The sensors used to determine the initial pose, such as the vision sensor and the wireless signal sensor, are easily affected by the external environment, so the initial pose may be inaccurate. However, the relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, because the pose of the ranging sensor is taken as the reference when the contour position information is calculated. Starting from the initial pose, the initial pose can therefore be corrected by matching the contour scanning result against the prior map, and the corrected pose is taken as the accurate pose information.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope, since a person of ordinary skill in the art may derive other relevant drawings from them without inventive effort.
Fig. 1 shows a schematic structural view of a robot provided in an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a positioning method provided by an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a method for determining initial pose information of a robot in a positioning method according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
Research shows that, at present, the pose of a robot can be determined based on equipment external to the robot or based on sensors on the robot body. However, positioning based on external equipment (such as the Global Positioning System (GPS) or Ultra-Wide Band (UWB)) or on body sensors (such as a vision sensor) yields low accuracy because of the instability of the external environment. How to improve positioning accuracy is therefore a problem that urgently needs to be solved.
Based on the above research, the disclosure provides a robot and a positioning method applied to the robot, in which the accurate pose information of the robot is determined from the robot's initial pose information, the contour scanning result of a target object obtained by scanning the surrounding environment with a ranging sensor, and a pre-stored prior map. Because the sensors used to determine the initial pose, such as a vision sensor or a wireless signal sensor, are easily affected by the external environment, the initial positioning of the robot may be inaccurate. The embodiments of the disclosure therefore further correct the initial pose according to the contour scanning result of the ranging sensor and the prior map: the relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, since the pose of the ranging sensor serves as the reference when the contour position information is calculated. For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be used directly as the accurate pose; when it does not match, the initial pose and the contour position information are adjusted synchronously until a match is found, and the adjusted pose is used as the accurate pose information.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
For the sake of understanding the present embodiment, first, a robot disclosed in the embodiment of the present disclosure is described in detail, and then a positioning method disclosed in the embodiment of the present disclosure is described in detail, where an execution body of the positioning method provided in the embodiment of the present disclosure may be a robot or a server controlling positioning of the robot. In some possible implementations, the positioning method may be implemented by way of a processor invoking computer readable instructions stored in a memory.
Example 1
Referring to fig. 1, a schematic structural diagram of a robot 10 according to an embodiment of the disclosure is provided, where the robot 10 includes: an initial pose detection assembly 11, a ranging sensor 12, a control server 13 and a memory 14.
Wherein: the initial pose detection component 11 is configured to perform initial positioning on the robot to obtain initial pose information.
The initial pose detection assembly 11 may include a vision sensor 110 and a wireless signal sensor 111. The initial pose information is the pose of the robot in the world coordinate system and may comprise initial position information and initial orientation information; here, the initial orientation may be the current heading of the robot. The wireless signal sensor 111 may be a Bluetooth sensor, a Global Positioning System (GPS) sensor, an Ultra-Wide Band (UWB) sensor, a Wireless Fidelity (WiFi) sensor, or the like; in general, the wireless signal sensor 111 may be any sensor capable of implementing a positioning function from a received wireless signal, and the options are not enumerated here.
In a specific implementation, the vision sensor 110 in the initial pose detection assembly 11 can scan the surrounding environment to obtain a target image; the initial pose information is then determined based on pre-recorded world coordinate information corresponding to a target marker contained in the target image.
The target image is a frame of image obtained by the robot 10 scanning the surrounding environment through the vision sensor 110 at the current moment, and the image may include at least one object.
Here, the target marker is an object with tag identification; the world coordinate information is pose information of the target marker in a world coordinate system.
Specifically, the initial pose detection component 11 determines first pose information of the target marker under a visual sensor coordinate system based on a target image; determining second pose information of the target marker under the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system; determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system; based on the second relative pose information, initial pose information of the robot 10 is determined.
The first pose information comprises position information and pose (orientation) information of the target marker under a visual sensor coordinate system; the first relative pose information between the vision sensor coordinate system and the robot coordinate system may be used to indicate a conversion relationship between the vision sensor coordinate system and the robot coordinate system, which may include translation and rotation; the second pose information comprises position information and pose (orientation) information of the target marker under a robot coordinate system; the second relative pose information between the robot coordinate system and the world coordinate system may be used to indicate a conversion relationship between the robot coordinate system and the world coordinate system, which may include translation and rotation.
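The chain of transforms described above (marker in camera frame, camera in robot frame, marker in world frame) can be sketched in 2D (x, y, heading) form. This is an illustrative reconstruction, not the patent's implementation; the pose convention and function names are assumptions.

```python
import math

def compose(a, b):
    """Compose two SE(2) poses a then b, each given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of an SE(2) pose."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def initial_pose_from_marker(marker_in_cam, cam_in_robot, marker_in_world):
    """Chain of transforms from the description: first pose (marker in the
    vision-sensor frame) -> second pose (marker in the robot frame) ->
    second relative pose, i.e. the robot's initial pose in the world frame."""
    marker_in_robot = compose(cam_in_robot, marker_in_cam)      # second pose
    robot_in_world = compose(marker_in_world, invert(marker_in_robot))
    return robot_in_world
```

For instance, with the camera at the robot origin, a marker 1 m ahead of the camera, and the marker pre-recorded at world position (5, 0), the robot's initial world pose comes out at (4, 0).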
In one possible implementation, the initial pose information may be determined based on a wireless broadcast signal received by the wireless signal sensor 111 in the initial pose detection assembly 11.
In one possible implementation, if the target marker cannot be identified from the target image, the initial pose information may be determined based on the wireless broadcast signal received by the wireless signal sensor 111 in the initial pose detection component 11.
The wireless broadcast signal may include a wireless broadcast signal strength and a transmission time of the wireless broadcast signal.
In a specific implementation, when determining the initial pose information of the robot 10 from the wireless broadcast signals received by the wireless signal sensor 111 in the initial pose detection assembly 11, the position of the robot 10 may be determined based on the signal strengths of the wireless broadcast signals received from a plurality of target positioning devices and the preset position information corresponding to those target positioning devices; the position of the robot 10 may also be determined based on the transmission times of the received wireless broadcast signals and the times at which they were received.
The target positioning device may be a Beacon positioning device paved in a working site area where the robot 10 is located, or may be a positioning base station.
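One common way to realize the signal-strength positioning described above is a log-distance path-loss model combined with a weighted centroid over the beacons' preset positions. The patent does not fix a particular model, so the formula and the parameters below (`tx_power_dbm`, `path_loss_n`) are illustrative assumptions.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Log-distance path-loss model (assumed): estimated distance in metres
    from the received signal strength, given the transmit power at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

def estimate_position(beacons):
    """Weighted-centroid estimate from [(x, y, rssi_dbm), ...] for beacons
    at preset positions; nearer beacons (stronger signal) weigh more."""
    total_w, wx, wy = 0.0, 0.0, 0.0
    for x, y, rssi in beacons:
        w = 1.0 / max(rssi_to_distance(rssi), 1e-6)  # inverse-distance weight
        total_w += w
        wx += w * x
        wy += w * y
    return wx / total_w, wy / total_w
```

With equal signal strengths from three beacons, the estimate reduces to their plain centroid; in practice a least-squares trilateration could replace the centroid step.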
Accordingly, the ranging sensor 12 is configured to determine a profile scan result of the target object in the surrounding environment by scanning the surrounding environment; the memory 14 is configured to store an execution instruction of the control server, and to store initial pose information detected by the initial pose detection component 11 and a contour scanning result output by the ranging sensor 12, and to store a priori map in advance.
The ranging sensor 12 may be a laser radar (lidar) sensor, an infrared sensor, or the like; in general, it may be any depth sensor with a ranging function, and the options are not enumerated here.
The target object may comprise at least one object scanned by the robot 10 through the ranging sensor 12 during operation; here, the at least one object may include the target marker carrying a tag identification.
Here, the contour scan result may be a contour map of the target object constituted by a plurality of point cloud data.
Here, the prior map is a global map of the workplace where the robot 10 is currently located, which may be built by SLAM (simultaneous localization and mapping) or a similar method; the global map may be an occupancy grid map storing occupancy probabilities.
In an implementation, the ranging sensor 12 is configured to: acquire point cloud data obtained by scanning the surrounding environment; determine, based on the point cloud data, whether the number of point cloud points belonging to the target object is greater than a set threshold; and, if it is greater, determine the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
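The point-count check above can be sketched as follows. The threshold value and the way points are labelled as belonging to the target object are assumptions; in practice the labels would come from clustering or segmentation of the scan.

```python
def contour_scan_result(points, labels, target_label, min_points=30):
    """Return the contour points of the target object, or None if too few
    scan points hit it. `labels[i]` marks which object `points[i]` belongs
    to; `min_points` is an assumed placeholder for the set threshold."""
    target_points = [p for p, l in zip(points, labels) if l == target_label]
    if len(target_points) <= min_points:
        return None  # too sparse to form a reliable contour
    return target_points  # contour scanning result of the target object
```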
Accordingly, the control server 13 is configured to read the execution instructions from the memory 14, and execute, according to the execution instructions: the initial pose information, the contour scanning result, and the prior map are read, and accurate pose information of the robot 10 is determined based on the initial pose information, the contour scanning result, and a pre-stored prior map.
In one possible implementation, when determining the accurate pose information of the robot 10 based on the initial pose information, the contour scan result, and the pre-stored prior map, the control server 13 is configured to:
determining, based on each target position point in the contour scanning result, whether target contour information matching the target position points exists in the prior map; if matching target contour information exists, taking the initial pose information as the accurate pose information; if no matching target contour information exists, synchronously adjusting the initial pose information and the corresponding target position point information in the contour scanning result according to a preset pose adjustment step length, and returning to the step of determining whether target contour information matching the target position points exists in the prior map, until target contour information matching the adjusted target position points exists, at which point the adjusted initial pose information corresponding to the adjusted target position points is taken as the accurate pose information.
In a possible implementation, the control server 13, when executing the determination of whether there is target contour information matching the target position points in the prior map based on the respective target position points in the contour scan result, is configured to: determining a map search range based on the initial pose information; and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
In the embodiments of the present disclosure, the accurate pose information of the robot is determined according to the initial pose information of the robot, the contour scanning result of the target object obtained by the robot scanning the surrounding environment through the ranging sensor, and the pre-stored prior map. Because sensors used for determining the initial pose, such as the vision sensor and the wireless signal sensor, are easily affected by the external environment, positioning based on them alone may be inaccurate. The embodiments of the present disclosure therefore correct the initial pose according to the contour scanning result of the ranging sensor: the relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, because the pose information of the ranging sensor is taken as the reference when calculating the contour position information. For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be taken as the accurate pose; if it does not match, the contour position information and the pose are adjusted together until a match is found, so that the position of the robot is accurately determined.
Based on the same inventive concept, the robot provided in the first embodiment of the present disclosure corresponds to the positioning method applied to the robot in the second embodiment, and since the principle of solving the problem of the robot in the embodiment of the present disclosure is similar to that of the positioning method in the embodiment of the present disclosure, the implementation of the robot can refer to the implementation of the method, and the repetition is omitted.
Description of the processing flow of each module in the robot and the interaction flow between each module may refer to the related description in the following method embodiments, and will not be described in detail here.
Example two
Referring to fig. 2, a flowchart of a positioning method according to an embodiment of the present disclosure is shown, where the method is applied to a robot, and the method includes steps S201 to S203, where:
S201, determining initial pose information obtained by initial positioning of the robot.
The initial pose information is pose information of the robot in the world coordinate system and may comprise initial position information and initial attitude information; here, the initial attitude information may be the current orientation of the robot.
In a specific implementation, the initial pose information of the robot may be determined by a vision sensor or a wireless signal sensor mounted on the robot body. The initial pose information may be determined through the vision sensor as follows: acquiring a target image obtained by the robot scanning the surrounding environment through the vision sensor; and determining the initial pose information based on pre-recorded world coordinate information corresponding to a target marker contained in the target image.
Here, the wireless signal sensor may include a Bluetooth sensor, a Global Positioning System (GPS) sensor, an Ultra Wide Band (UWB) sensor, a Wireless Fidelity (WiFi) sensor, and the like; in fact, the wireless signal sensor may be any sensor capable of realizing a positioning function according to a received wireless signal, which is not enumerated herein.
The target image is a frame of image obtained by the robot scanning the surrounding environment through the vision sensor at the current moment, and the image can comprise at least one object.
Here, the target marker is an object with tag identification; the world coordinate information is pose information of the target marker in a world coordinate system.
Specifically, as shown in fig. 3, the initial pose information of the robot may be determined based on the pre-recorded world coordinate system information corresponding to the target marker included in the target image according to the following steps S301 to S304, which are specifically described as follows:
s301, determining first pose information of the target marker in a visual sensor coordinate system.
In a specific implementation, the first pose information of the target marker carrying the tag identifier in the vision sensor coordinate system can be determined by analyzing the target image; the first pose information may include first position information and first attitude information.
S302, determining second pose information of the target marker under the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system.
Wherein the first relative pose information between the vision sensor coordinate system and the robot coordinate system may be used to indicate a conversion relationship between the vision sensor coordinate system and the robot coordinate system, which may include translation and rotation.
In a specific implementation, according to the conversion relationship between the vision sensor coordinate system and the robot coordinate system, and based on the first pose information of the target marker carrying the tag identifier in the vision sensor coordinate system, the second pose information of the target marker in the robot coordinate system is calculated. The second pose information may include second position information and second attitude information.
S303, determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system.
Here, the conversion relationship between the robot coordinate system and the world coordinate system (i.e., the second relative pose information) may include translation and rotation, which is calculated according to the world coordinate information of the target marker with tag identifier in the world coordinate system and the second pose information of the target marker determined in step S302 in the robot coordinate system.
S304, determining initial pose information of the robot according to the second relative pose information.
In a specific implementation, after the conversion relationship between the robot coordinate system and the world coordinate system is calculated according to step S303, initial pose information of the robot in the world coordinate system may be obtained according to the conversion relationship between the robot coordinate system and the world coordinate system.
Exemplarily, if, by analyzing the target image, the position of the target marker carrying the tag identifier in the vision sensor coordinate system is determined to be P_c(x_c, y_c), the first relative pose transformation between the vision sensor coordinate system and the robot coordinate system is T_cb, the second relative pose transformation between the robot coordinate system and the world coordinate system is T_wb, and the coordinate of the tag in the world coordinate system is P_w(x_w, y_w), then according to the formula P_w = T_wb * T_cb^-1 * P_c, the initial pose information T_wb of the robot in the world coordinate system can be calculated. Here, T_wb comprises the position information of the robot in the world coordinate system and the attitude information of the robot in the world coordinate system, the latter determined through the rotation part of the conversion relationship between the coordinate systems; T_cb^-1 represents the conversion relationship from the vision sensor coordinate system to the robot coordinate system.
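The transform chain above can be sketched with 2-D homogeneous matrices. This is an illustrative reconstruction, not code from the disclosure; it additionally assumes that the tag's orientation (not only its position) is recovered from the image, so that the full pose T_wb can be solved from a single marker, and all numeric values are hypothetical:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

# Hypothetical inputs (values for illustration only):
T_w_tag = se2(4.0, 2.0, 0.0)    # pre-recorded pose of the tag in the world frame
T_c_tag = se2(0.5, 0.1, 0.05)   # tag pose measured in the vision sensor frame
T_cb    = se2(0.2, 0.0, 0.0)    # transform from robot frame to vision sensor frame

# Tag pose in the robot frame: T_cb^-1 maps sensor coordinates into the robot frame
T_b_tag = np.linalg.inv(T_cb) @ T_c_tag
# Robot pose in the world frame, chosen so that T_w_tag = T_wb @ T_b_tag
T_wb = T_w_tag @ np.linalg.inv(T_b_tag)

x, y = T_wb[0, 2], T_wb[1, 2]               # initial position information
theta = np.arctan2(T_wb[1, 0], T_wb[0, 0])  # initial attitude (orientation)

# Consistency check against the formula P_w = T_wb * T_cb^-1 * P_c:
P_c = T_c_tag @ np.array([0.0, 0.0, 1.0])   # tag position in the sensor frame
P_w = T_wb @ np.linalg.inv(T_cb) @ P_c
assert np.allclose(P_w[:2], [4.0, 2.0])     # recovers the tag's world position
```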
In a specific implementation, after the target image obtained by scanning the surrounding environment through the vision sensor is acquired, whether the current target image contains a target marker carrying the tag identifier is determined by an image analysis algorithm. If it does, whether the image is suitable for matching the target marker is further determined according to a preset visual image quality evaluation function: a low evaluation score indicates, for example, that the ambient light is insufficient and the image is unsuitable, in which case the initial pose information cannot be reliably determined through the vision sensor; if the evaluation score is sufficiently high, the initial pose information of the robot is determined through the vision sensor as described above. If it is determined that the current target image does not contain a target marker carrying the tag identifier, the initial pose information of the robot cannot be determined through the vision sensor.
In a specific implementation, when the initial pose information of the robot cannot be determined by the vision sensor mounted on the robot body, the initial pose information of the robot can be determined by the wireless signal sensor mounted on the robot body, as follows: determining the position of the robot based on the received signal strength of wireless broadcast signals sent by a plurality of target positioning devices and preset position information corresponding to the plurality of target positioning devices.
Here, there are various ways of determining the initial pose information of the robot through the wireless signal sensor: the robot may be positioned through the Bluetooth sensor, through WiFi positioning technology, through GPS positioning technology, or through UWB positioning technology, in each case so as to determine the initial pose information of the robot.
The target positioning device may be a Beacon positioning device deployed in the workplace area where the robot is located, or may be a positioning base station.
Here, the received wireless broadcast signal carries strength information of the wireless signal and transmission time information of the wireless signal; the wireless broadcast signal may be a Bluetooth broadcast signal, a WiFi signal, a UWB signal, a GPS signal, or the like.
In one possible implementation, when the wireless signal sensor mounted on the robot body is a Bluetooth sensor, the robot can be positioned in two modes: by a network-side positioning system or by a terminal-side positioning system.
Specifically, positioning of the robot by the network-side positioning system is described as follows: at least one Beacon positioning device deployed in the workplace area where the robot is located continuously sends Bluetooth broadcast signals to the surroundings as a Bluetooth beacon, and the robot calculates a Received Signal Strength Indication (RSSI) value for each Beacon from the received Bluetooth broadcast signals; the RSSI values are transmitted to a back-end data server through a Bluetooth gateway deployed in the workplace area, and the back-end data server analyzes the received RSSI values with a preset positioning algorithm to calculate the specific position of the robot (namely, the initial pose information of the robot).
Specifically, positioning of the robot by the terminal-side positioning system is described as follows: at least one Beacon positioning device deployed in the workplace area where the robot is located continuously sends Bluetooth broadcast signals to the surroundings as a Bluetooth beacon; the robot receives the Bluetooth broadcast signals and determines the distance between the robot and each Beacon positioning device according to the signal strength of the received signals; based on the position information of the Beacon positioning devices in the world coordinate system and the determined distances, the position information of the robot in the world coordinate system (namely, the initial pose information) is calculated by a preset positioning algorithm of the robot.
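The step of turning received signal strength into a distance is not specified in the disclosure; a common choice is the log-distance path-loss model, sketched here with assumed calibration values:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Estimate the robot-to-Beacon distance (metres) from received signal
    strength using the log-distance path-loss model:
        RSSI = tx_power - 10 * n * log10(d)
    tx_power_dbm -- RSSI measured at 1 m from the Beacon (assumed calibration)
    n            -- path-loss exponent (about 2 in free space)
    """
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

assert abs(rssi_to_distance(-59.0) - 1.0) < 1e-9    # at 1 m, RSSI equals tx_power
assert abs(rssi_to_distance(-79.0) - 10.0) < 1e-9   # a 20 dB drop -> 10 m at n = 2
```

In practice the calibration constants depend on the Beacon hardware and environment, which is one reason such distance estimates are treated only as an initial, correctable pose.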
In one possible implementation, when the wireless signal sensor mounted on the robot body is a UWB sensor, the robot may be positioned according to the transmission time and the reception time of the received wireless broadcast signal, as follows: at least one positioning base station deployed in the workplace area where the robot is located continuously transmits broadcast signals to the surroundings; the robot receives a broadcast signal and determines the one-way flight time of the signal according to the transmission time carried in the signal and the time at which it was received; the distance between the robot and the positioning base station is determined according to the propagation speed of the broadcast signal and the one-way flight time; and based on the position information of the positioning base stations in the world coordinate system, the position information of the robot in the world coordinate system (namely, the initial pose information of the robot) is calculated by a preset positioning algorithm of the robot.
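The one-way flight-time ranging and the subsequent position solve can be sketched as follows; the linear least-squares trilateration shown is one possible "preset positioning algorithm", not necessarily the one the disclosure intends, and the base-station layout is hypothetical:

```python
import numpy as np

C = 299_792_458.0  # propagation speed of the radio signal, m/s

def tof_distance(t_tx, t_rx):
    """Distance from the one-way flight time of the broadcast signal."""
    return C * (t_rx - t_tx)

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 base-station positions and ranges.
    Linearised by subtracting the first range equation from the others."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical layout: three base stations, robot truly at (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - a) for a in anchors]
assert np.allclose(trilaterate(anchors, dists), true_pos)
```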
In a specific implementation, since the vision sensor, the wireless signal sensor, and other sensors mounted on the robot body for determining the initial pose are easily affected by the external environment, positioning based on them alone may be inaccurate. Therefore, after the initial pose information of the robot in the world coordinate system is obtained by initially positioning the robot through the vision sensor or the wireless signal sensor, accurate pose information correcting the initial pose can be determined according to the contour scanning result of the target object obtained by scanning the surrounding environment with the ranging sensor mounted on the robot body, together with the prior map (the relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, because the pose information of the ranging sensor is taken as the reference when calculating the contour position information). This is done by performing steps S202 to S203.
S202, acquiring a contour scanning result of a target object, which is obtained after the robot scans the surrounding environment through a ranging sensor.
The ranging sensor may be a laser radar (lidar) sensor, an infrared sensor, or the like; in fact, it may be any depth sensor with a ranging function, which is not enumerated herein.
The target object may include at least one object scanned by the robot through the ranging sensor during operation; here, the at least one object may include a target marker carrying a tag identifier.
Here, the contour scanning result of the target object may be a contour map of the target object constituted by a plurality of point cloud data.
In a specific implementation, the ranging sensor of the robot transmits signals from the initial position point in the initial pose information determined in step S201; when the intensity of the signal reflected by a target object in the surrounding environment is greater than a first preset threshold value, point cloud data corresponding to the target object is generated; and when the number of point cloud points of the target object is greater than a second preset threshold value, the contour scanning result of the target object is determined based on the point cloud data corresponding to those point cloud points.
After the contour scanning result of the target object obtained after the robot scans the surrounding environment through the ranging sensor is obtained based on step S202, the precise pose information of the robot is determined according to step S203, which is described in detail below.
S203, determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map.
Wherein the initial pose information may include initial position information and initial attitude information (i.e., orientation information of the robot); here, when the initial pose information of the robot is determined by the vision sensor, the initial position information and the initial attitude information can be determined at the same time. The initial pose information may also include only initial position information; here, when the initial pose information of the robot cannot be determined by the vision sensor, the initial position information of the robot may be determined by the wireless signal sensor.
The accurate pose information may include accurate position information and accurate attitude (angle, orientation) information, i.e., the orientation of the robot.
Here, the pre-stored prior map is a global map of the workplace where the robot is currently located, and the global map is pre-stored in a database; here, the global map may be established by simultaneous localization and mapping (SLAM) or the like, and may be an occupancy probability grid map.
In a specific implementation, in order to reduce the amount of calculation, the map search range may be determined in the pre-stored prior map by a covariance matrix or a preset method (for example, a preset map search range of a 10×10 grid centered on the initial position information in the initial pose information), based on the initial pose information determined in step S201.
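Clipping the prior map to such a window might look as follows; the grid layout, resolution, and window size are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def search_window(grid, pose_xy, resolution, half_cells=5):
    """Clip the prior occupancy grid to a square search range centred on the
    initial position (half_cells=5 gives the 10x10-cell window from the text).
    Returns the sub-grid and the (column, row) index of its origin cell.

    grid       -- 2-D occupancy-probability array (the prior map)
    pose_xy    -- (x, y) initial position in world coordinates
    resolution -- metres per grid cell
    """
    cx = int(round(pose_xy[0] / resolution))
    cy = int(round(pose_xy[1] / resolution))
    x0, y0 = max(cx - half_cells, 0), max(cy - half_cells, 0)
    x1 = min(cx + half_cells, grid.shape[1])
    y1 = min(cy + half_cells, grid.shape[0])
    return grid[y0:y1, x0:x1], (x0, y0)

prior = np.zeros((100, 100))                    # hypothetical 5 m x 5 m map at 5 cm
win, origin = search_window(prior, (2.5, 2.5), resolution=0.05)
assert win.shape == (10, 10) and origin == (45, 45)
```

Matching is then performed only against cells inside this window, which bounds the cost of each matching-score evaluation.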
After the map search range is determined, contour information search is performed within the determined map search range for each target position point in the contour scanning result; a matching degree evaluation score corresponding to the matching degree of each target position point within the map search range is obtained according to a matching degree evaluation score function, and it is thereby determined whether target contour information matching the target position points exists in the prior map. The matching degree evaluation score function is used for calculating the evaluation score corresponding to the matching degree of each target position point in the contour scanning result within the map search range. The score may be computed by a first-stage (coarse) algorithm, such as a branch-and-bound matching algorithm on the grid map, a brute-force matching algorithm on the grid map, or a fixed-step point cloud search algorithm, optionally refined by a second-stage (fine) algorithm, such as a point-to-point Iterative Closest Point (ICP) matching algorithm, a point-to-Normal Distributions Transform (NDT) model matching algorithm, or a point-to-line ICP algorithm; any one or more first-stage algorithms may be combined with any one or more second-stage algorithms.
In a specific implementation, according to the initial pose information of the robot and the relative position relationship, determined by the ranging sensor, between the initial pose information and each target position point in the contour scanning result, the position of each target position point in the contour scanning result in the world coordinate system is determined. Based on these positions, each target position point is matched against the prior map within the map search range, and a matching degree evaluation score corresponding to the matching result is determined according to the matching degree evaluation score function. When the matching degree evaluation score is greater than a preset threshold value, target contour information matching the target position points exists in the prior map, and the initial pose information of the robot is taken as the accurate pose information. When the matching degree evaluation score is not greater than the preset threshold value, the initial pose information and each target position point in the corresponding contour scanning result are synchronously adjusted according to the preset pose adjustment step length, preserving the determined relative position relationship between the initial pose information and the target position points; each target position point in the adjusted contour scanning result is then matched against the prior map within the map search range based on its position in the world coordinate system, and the matching degree evaluation score corresponding to the adjusted matching result is determined according to the matching degree evaluation score function. This is repeated until the matching degree evaluation score corresponding to the adjusted matching result is greater than the preset threshold value, i.e., until target contour information matching the adjusted target position points exists in the prior map, at which point the corresponding adjusted initial pose information is taken as the accurate pose information.
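A minimal sketch of this match-then-adjust loop is given below. The matching degree evaluation score function used here, the fraction of projected contour points falling on occupied cells, is a simple stand-in for the grid-matching algorithms named in the text, and the exhaustive step search stands in for the branch-and-bound variants; all parameter values are assumptions:

```python
import itertools
import numpy as np

def project(points, pose):
    """Transform contour points from the robot frame into the world frame
    for a candidate robot pose (x, y, theta)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + [x, y]

def match_score(points, occupied):
    """Stand-in matching degree evaluation score: fraction of contour points
    whose nearest grid cell is occupied in the prior map."""
    hits = sum(tuple(np.round(p).astype(int)) in occupied for p in points)
    return hits / len(points)

def refine_pose(init_pose, contour, occupied, pos_step=1.0, ang_step=0.05,
                threshold=0.9, radius=2):
    """Synchronously adjust the pose and the projected contour by preset step
    lengths until the match with the prior map exceeds the threshold; the
    initial pose is returned unchanged if it already matches."""
    if match_score(project(contour, init_pose), occupied) > threshold:
        return init_pose                      # initial pose is the accurate pose
    best, best_score = init_pose, -1.0
    steps = range(-radius, radius + 1)
    for dx, dy, dth in itertools.product(steps, steps, steps):
        cand = (init_pose[0] + dx * pos_step,
                init_pose[1] + dy * pos_step,
                init_pose[2] + dth * ang_step)
        s = match_score(project(contour, cand), occupied)
        if s > best_score:
            best, best_score = cand, s
        if s > threshold:
            return cand                        # adjusted pose is the accurate pose
    return best

# Hypothetical example: a straight wall seen as 10 points, robot truly at (2, 1, 0)
contour = np.array([[float(i), 0.0] for i in range(10)])   # contour in robot frame
occupied = {(2 + i, 1) for i in range(10)}                 # occupied cells of prior map

assert refine_pose((2.0, 1.0, 0.0), contour, occupied) == (2.0, 1.0, 0.0)
adj = refine_pose((1.0, 1.0, 0.0), contour, occupied)      # offset start is corrected
assert match_score(project(contour, adj), occupied) > 0.9
```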
The preset pose adjustment step length may include a preset position adjustment step length and a preset attitude (angle) adjustment step length; here, the preset position adjustment step length may be determined according to the grid resolution of the pre-stored prior map, and the preset angle adjustment step length may be determined according to the angle between two adjacent ranging signals transmitted by the ranging sensor of the robot.
Specifically, contour information search is performed within the determined map search range for each target position point in the contour scanning result, and a matching degree evaluation score corresponding to the matching degree of each target position point within the map search range is obtained according to the matching degree evaluation score function; if the matching degree evaluation score is not greater than the preset threshold value, it is determined that no target contour information matching the target position points exists in the prior map. In that case, the initial pose information and the corresponding target position point information in the contour scanning result can be synchronously adjusted in the two dimensions of position and angle, according to the preset position adjustment step length and the preset attitude (angle) adjustment step length. Contour information search is then performed within the map search range for each adjusted target position point, and the matching degree evaluation score of the adjusted target position points is obtained according to the matching degree evaluation score function; if this score is greater than the preset threshold value, it is determined that target contour information matching the adjusted target position points exists in the prior map, and the adjusted initial pose information is taken as the accurate pose information of the robot (here, the adjusted position information in the adjusted initial pose information is obtained from the initial position information according to the preset position adjustment step length, and the adjusted orientation information is obtained from the initial attitude (orientation) information according to the preset angle adjustment step length).
The process of synchronously adjusting the initial pose information and the corresponding target position point information in the contour scanning result according to the preset pose adjustment step length may be implemented as follows. First, the initial pose information and the corresponding target position point information in the contour scanning result are synchronously adjusted based on one preset adjustment step length (which may be the preset position adjustment step length or the preset angle adjustment step length) to obtain a first adjustment result; the first adjustment result is matched against the prior map, and a matching degree evaluation score is calculated. When the matching degree evaluation score is greater than the preset threshold value, the initial attitude (angle) information in the initial pose information of the robot (namely, the robot orientation information determined in S201) and the initial position information adjusted according to the preset position adjustment step length are taken as the accurate orientation information and accurate position information of the robot. When the matching degree evaluation score is not greater than the preset threshold value, the first adjustment result is further adjusted based on the other preset adjustment step length (the preset angle adjustment step length or the preset position adjustment step length, whichever was not used first) to obtain a second adjustment result; the second adjustment result is matched against the prior map, and its matching degree evaluation score is calculated. When the matching degree evaluation score corresponding to the second adjustment result is greater than the preset threshold value, the initial position information adjusted according to the preset position adjustment step length and the initial attitude (angle, orientation) information adjusted according to the preset angle adjustment step length are taken as the accurate position information and accurate orientation information of the robot.
For example, the position information in the initial pose information and the corresponding target position point information in the contour scanning result may first be synchronously adjusted according to the preset position adjustment step length. Based on the position of each target position point in the first-adjusted contour scanning result in the world coordinate system, each such target position point is matched against the prior map within the map search range, and the matching degree evaluation score corresponding to the first adjustment result is determined according to the matching degree evaluation score function. When this matching degree evaluation score is not greater than the preset threshold value, the pose and the corresponding target position points are further synchronously adjusted according to the preset angle adjustment step length (the relative position relationship between the contour position information corresponding to the contour scanning result and the pose being preserved), and each target position point in the second-adjusted contour scanning result is matched against the prior map within the map search range according to its position in the world coordinate system.
In the embodiments of the present disclosure, the accurate pose information of the robot is determined according to the initial pose information of the robot, the contour scanning result of the target object obtained by the robot scanning the surrounding environment through the ranging sensor, and the pre-stored prior map. Because sensors used for determining the initial pose, such as the vision sensor and the wireless signal sensor, are easily affected by the external environment, positioning based on them alone may be inaccurate. The embodiments of the present disclosure therefore correct the initial pose according to the contour scanning result of the ranging sensor: the relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, because the pose information of the ranging sensor is taken as the reference when calculating the contour position information. For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be taken as the accurate pose; if it does not match, the contour position information and the pose are adjusted together until a match is found, so that the position of the robot is accurately determined.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the robot described above may refer to the corresponding process in the above method embodiment, which is not described herein again. In several embodiments provided in the present disclosure, it should be understood that the disclosed robots and methods may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that any person familiar with the art may, within the technical scope disclosed herein, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features. Such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (16)

1. A robot, comprising: an initial pose detection component, a ranging sensor, a control server, and a memory;
wherein: the initial pose detection component is configured to perform initial positioning on the robot to obtain initial pose information; the ranging sensor is configured to determine a contour scanning result of a target object in the surrounding environment by scanning the surrounding environment; the memory is configured to store an execution instruction of the control server, store initial pose information detected by the initial pose detection component and a contour scanning result output by the ranging sensor, and store a priori map in advance;
The control server is configured to read execution instructions from the memory, and execute, according to the execution instructions: reading the initial pose information, the contour scanning result and the prior map, and determining the accurate pose information of the robot based on the initial pose information, the contour scanning result and the pre-stored prior map;
the control server, when executing the determination of the precise pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map, is configured to:
determining whether target contour information matched with target position points exists in the prior map based on each target position point in the contour scanning result; if the matched target contour information exists, the initial pose information is used as the accurate pose information; if the target contour information does not exist, synchronously adjusting the initial pose information and the corresponding target position point information in the contour scanning result according to a preset pose adjustment step length, and returning to the step of determining whether the target contour information matched with the target position point exists in the prior map or not until the target contour information matched with the adjusted target position point exists, wherein the adjusted initial pose information corresponding to the adjusted target position point is used as the accurate pose information.
2. The robot of claim 1, wherein the control server, when executing the determination of whether there is target contour information in the prior map that matches the target location points based on the respective target location points in the contour scan result, is configured to:
determining a map search range based on the initial pose information; and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
3. The robot of claim 1, wherein the initial pose detection component, when performing determining initial pose information for initial positioning of the robot, is configured to:
acquiring a target image obtained by the robot through scanning the surrounding environment by a vision sensor; and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
4. The robot of claim 3, wherein the initial pose detection component, when performing determining the initial pose information based on pre-recorded world coordinate information corresponding to a target marker contained in the target image, is configured to:
determining first pose information of the target marker under a vision sensor coordinate system; determining second pose information of the target marker under the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system; determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system; and determining initial pose information of the robot according to the second relative pose information.
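The chain of coordinate-frame transforms in claim 4 can be illustrated with 2D homogeneous matrices. This is a hedged sketch, not the patented implementation: all numeric values (camera offset, marker poses) are made-up examples, and the real system would presumably work in 3D.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a 2D pose (translation x, y; rotation theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# First pose information: the marker as observed in the vision-sensor frame.
T_cam_marker = se2(2.0, 0.0, 0.0)
# First relative pose information: camera mounted 0.5 m ahead of the robot origin.
T_robot_cam = se2(0.5, 0.0, 0.0)
# Second pose information: the marker expressed in the robot frame.
T_robot_marker = T_robot_cam @ T_cam_marker
# Pre-recorded world coordinate information of the marker.
T_world_marker = se2(10.0, 3.0, 0.0)
# Second relative pose information: the robot frame expressed in the world
# frame, i.e. the robot's initial pose information.
T_world_robot = T_world_marker @ np.linalg.inv(T_robot_marker)
print(T_world_robot[:2, 2])  # robot position in the world frame
```

With the marker 2.5 m ahead of the robot and the marker at world position (10, 3), the chain recovers the robot at (7.5, 3), which matches the intuition that the robot sits 2.5 m behind the marker.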
5. The robot of claim 3 or 4, wherein the initial pose detection assembly is further configured to:
if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
6. The robot of claim 1, wherein the initial pose detection component, when performing determining initial pose information for initial positioning of the robot, is configured to:
and determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
7. The robot of claim 6, wherein the initial pose detection component, when performing determining the initial pose information based on the received wireless broadcast signal transmitted by the target positioning device, is configured to:
and determining the position of the robot based on the received signal strength of the wireless broadcast signals sent by the plurality of target positioning devices and the preset position information corresponding to the plurality of target positioning devices.
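One common way to realize the position determination of claim 7 is a signal-strength-weighted centroid over the preset beacon positions. The claim does not fix a particular algorithm, so this sketch is an assumption; the beacon positions and weights are illustrative.

```python
def weighted_centroid(beacons):
    """Estimate position from beacons given as ((x, y), rssi_weight) pairs,
    where (x, y) is the preset position of each target positioning device
    and rssi_weight grows with received signal strength."""
    total = sum(w for _, w in beacons)
    x = sum(p[0] * w for p, w in beacons) / total
    y = sum(p[1] * w for p, w in beacons) / total
    return (x, y)

# Three positioning devices; the stronger signal pulls the estimate closer.
beacons = [((0.0, 0.0), 3.0), ((4.0, 0.0), 1.0), ((0.0, 4.0), 1.0)]
print(weighted_centroid(beacons))  # (0.8, 0.8)
```

A production system would more likely convert RSSI to distance with a path-loss model and solve a least-squares trilateration, but the weighted centroid shows the same input/output contract: several known positions plus signal strengths in, one position estimate out.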
8. The robot of claim 1, wherein the ranging sensor is configured to:
acquiring point cloud data obtained by the ranging sensor scanning the surrounding environment; determining, based on the point cloud data, whether the number of point cloud points belonging to the target object is larger than a set threshold; and if the number is larger than the set threshold, determining the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
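The filtering step of claim 8 can be sketched as follows. This is an illustrative toy, not the patented implementation: how points get labelled as belonging to the target object is left out, and the labels, threshold, and return convention are assumptions.

```python
SET_THRESHOLD = 3  # illustrative value for the "set threshold"

def contour_scan(point_cloud, threshold=SET_THRESHOLD):
    """point_cloud: list of (x, y, label) tuples.
    Returns the target-object points as the contour scan result,
    or None when there are too few points for a reliable contour."""
    target_points = [(x, y) for x, y, label in point_cloud if label == "target"]
    if len(target_points) <= threshold:
        return None
    return target_points

cloud = [(1, 0, "target"), (1, 1, "target"), (1, 2, "target"),
         (1, 3, "target"), (9, 9, "floor")]
print(contour_scan(cloud))  # [(1, 0), (1, 1), (1, 2), (1, 3)]
```

The threshold acts as a plausibility gate: a handful of stray returns should not be mistaken for a target contour, while a sufficiently dense cluster is passed on to the matching step.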
9. A positioning method, applied to a robot, comprising:
determining initial pose information obtained by initially positioning the robot;
acquiring a contour scanning result of a target object obtained after the robot scans the surrounding environment through a ranging sensor;
determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map;
determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map, including:
determining whether target contour information matched with target position points exists in the prior map based on each target position point in the contour scanning result;
if the matched target contour information exists, the initial pose information is used as the accurate pose information;
if the target contour information does not exist, synchronously adjusting the initial pose information and the corresponding target position point information in the contour scanning result according to a preset pose adjustment step length, and returning to the step of determining whether the target contour information matched with the target position point exists in the prior map or not until the target contour information matched with the adjusted target position point exists, wherein the adjusted initial pose information corresponding to the adjusted target position point is used as the accurate pose information.
10. The method of claim 9, wherein determining whether there is target contour information in the prior map that matches the target location points based on the respective target location points in the contour scan results comprises:
determining a map search range based on the initial pose information;
and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
11. The method of claim 9, wherein determining initial pose information for initial positioning of the robot comprises:
acquiring a target image obtained by the robot through scanning the surrounding environment by a vision sensor;
and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
12. The method of claim 11, wherein determining the initial pose information based on pre-recorded world coordinate information corresponding to a target marker contained in the target image comprises:
determining first pose information of the target marker under a visual sensor coordinate system;
determining second pose information of the target marker under the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system;
determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system;
and determining initial pose information of the robot according to the second relative pose information.
13. The method according to any one of claims 11 to 12, further comprising:
if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
14. The method of claim 9, wherein determining initial pose information for initial positioning of the robot comprises:
and determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
15. The method of claim 14, wherein determining the initial pose information based on received wireless broadcast signals transmitted by a target positioning device comprises:
and determining the position of the robot based on the received signal strength of the wireless broadcast signals sent by the plurality of target positioning devices and the preset position information corresponding to the plurality of target positioning devices.
16. The method of claim 9, wherein obtaining a profile scan of the target object obtained after the robot scans the surrounding environment with the ranging sensor, comprises:
acquiring point cloud data obtained by scanning the surrounding environment by a ranging sensor;
determining whether the number of point cloud points belonging to the target object is larger than a set threshold value based on the point cloud data;
if the number is larger than the set threshold, determining the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
CN202010898774.6A 2020-08-31 2020-08-31 Robot and positioning method applied to robot Active CN114102577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010898774.6A CN114102577B (en) 2020-08-31 2020-08-31 Robot and positioning method applied to robot

Publications (2)

Publication Number Publication Date
CN114102577A CN114102577A (en) 2022-03-01
CN114102577B true CN114102577B (en) 2023-05-30

Family

ID=80359940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010898774.6A Active CN114102577B (en) 2020-08-31 2020-08-31 Robot and positioning method applied to robot

Country Status (1)

Country Link
CN (1) CN114102577B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114353807B (en) * 2022-03-21 2022-08-12 沈阳吕尚科技有限公司 Robot positioning method and positioning device
CN115047874B (en) * 2022-06-02 2023-09-15 北京三快在线科技有限公司 Robot connection method, locker, robot, system and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104626206A (en) * 2014-12-17 2015-05-20 西南科技大学 Robot operation pose information measuring method under non-structural environment
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
WO2017215777A2 (en) * 2016-06-13 2017-12-21 Plasser & Theurer Export Von Bahnbaumaschinen Gesellschaft M.B.H. Method and system for the maintenance of a travel path for rail vehicles
CN109084732A (en) * 2018-06-29 2018-12-25 北京旷视科技有限公司 Positioning and air navigation aid, device and processing equipment
CN110260867A (en) * 2019-07-29 2019-09-20 浙江大华技术股份有限公司 Method, equipment and the device that pose is determining in a kind of robot navigation, corrects
CN110319834A (en) * 2018-03-30 2019-10-11 深圳市神州云海智能科技有限公司 A kind of method and robot of Indoor Robot positioning
CN110375738A (en) * 2019-06-21 2019-10-25 西安电子科技大学 A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method
WO2019219077A1 (en) * 2018-05-18 2019-11-21 京东方科技集团股份有限公司 Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
CN111473785A (en) * 2020-06-28 2020-07-31 北京云迹科技有限公司 Method and device for adjusting relative pose of robot to map
CN111590595A (en) * 2020-06-30 2020-08-28 深圳市银星智能科技股份有限公司 Positioning method and device, mobile robot and storage medium

Also Published As

Publication number Publication date
CN114102577A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN107923960B (en) System and method for locating tags in space
CN109490825B (en) Positioning navigation method, device, equipment, system and storage medium
US20190202067A1 (en) Method and device for localizing robot and robot
CN110865393A (en) Positioning method and system based on laser radar, storage medium and processor
EP3404439A1 (en) Cluster-based magnetic positioning method, device and system
CN114102577B (en) Robot and positioning method applied to robot
US20200233061A1 (en) Method and system for creating an inverse sensor model and method for detecting obstacles
CN111380510B (en) Repositioning method and device and robot
US9970762B2 (en) Target point detection method
CN112505671B (en) Millimeter wave radar target positioning method and device under GNSS signal missing environment
JP6073944B2 (en) Laser measurement system, reflection target body, and laser measurement method
US11002842B2 (en) Method and apparatus for determining the location of a static object
US11288554B2 (en) Determination method and determination device
Jiménez et al. Precise localisation of archaeological findings with a new ultrasonic 3D positioning sensor
JP2019039867A (en) Position measurement device, position measurement method and program for position measurement
KR102482968B1 (en) Method and Apparatus for Positioning Train Using Deep Kalman Filter
EP3667368B1 (en) Sensor control device
KR101642186B1 (en) Location tracking method, location tracking system and recording medium for performing the method
KR102275309B1 (en) Method and Apparatus for Timing Measurement Based Positioning to Minimize the Number of Scans
JP6823690B2 (en) Position adjustment method, position adjustment device, and position adjustment program
CN113238186A (en) Mobile robot repositioning method, system and chip
CN117148346A (en) Positioning method, positioning device, vehicle-mounted equipment, vehicle and storage medium
KR20240009673A (en) Method for extracting outline of building in vehicle and vehicle thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant