CN114102577A - Robot and positioning method applied to robot - Google Patents


Info

Publication number
CN114102577A
CN114102577A (application CN202010898774.6A; granted as CN114102577B)
Authority
CN
China
Prior art keywords
robot
information
target
pose information
initial pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010898774.6A
Other languages
Chinese (zh)
Other versions
CN114102577B (en)
Inventor
刘满堂
俞毓锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jizhijia Technology Co Ltd
Original Assignee
Beijing Jizhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jizhijia Technology Co Ltd filed Critical Beijing Jizhijia Technology Co Ltd
Priority to CN202010898774.6A priority Critical patent/CN114102577B/en
Publication of CN114102577A publication Critical patent/CN114102577A/en
Application granted granted Critical
Publication of CN114102577B publication Critical patent/CN114102577B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088 Controls for manipulators by means of sensing devices with position, velocity or acceleration sensors
    • B25J13/089 Determining the position of the robot with reference to its environment
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The present disclosure provides a robot and a positioning method applied to the robot. The robot includes an initial pose detection assembly, a ranging sensor, a control server, and a memory. The initial pose detection assembly is configured to perform initial positioning of the robot to obtain initial pose information. The ranging sensor is configured to scan the surrounding environment and determine a contour scanning result for a target object in that environment. The memory is configured to store execution instructions for the control server, the initial pose information detected by the initial pose detection assembly, the contour scanning result output by the ranging sensor, and a pre-stored prior map. The control server is configured to read the execution instructions from the memory and, according to them, read the initial pose information, the contour scanning result, and the prior map, and determine the accurate pose information of the robot based on these three inputs.

Description

Robot and positioning method applied to robot
Technical Field
The disclosure relates to the technical field of robot positioning, in particular to a robot and a positioning method applied to the robot.
Background
During operation, the robot can construct a map of its current site according to a map construction algorithm. Generally, the robot determines its pose information within the constructed map based on an odometer and an inertial device, and is thereby positioned. However, positioning based on an odometer or an inertial device is subject to error, and when the robot runs for a long time these errors accumulate, resulting in low positioning accuracy; the robot therefore needs to be repositioned.
At present, the robot can be repositioned based on a device external to the robot or on a sensor mounted on the robot body. However, when relying on external devices (such as the Global Positioning System (GPS) or Ultra-Wideband (UWB)) or on body sensors (such as a vision sensor), positioning accuracy is low because the external environment is unstable. How to improve repositioning accuracy has therefore become a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the disclosure at least provides a robot and a positioning method applied to the robot.
In a first aspect, an embodiment of the present disclosure provides a robot, including: an initial pose detection assembly, a ranging sensor, a control server, and a memory;
wherein: the initial pose detection assembly is configured to perform initial positioning of the robot to obtain initial pose information; the ranging sensor is configured to scan the surrounding environment and determine a contour scanning result for a target object in the environment; the memory is configured to store execution instructions for the control server, the initial pose information detected by the initial pose detection assembly, and the contour scanning result output by the ranging sensor, and to store a prior map in advance.
The control server is configured to read an execution instruction from the memory, and execute, according to the execution instruction: and reading the initial pose information, the contour scanning result and the prior map, and determining the accurate pose information of the robot based on the initial pose information, the contour scanning result and the pre-stored prior map.
In one possible embodiment, the control server, when determining the accurate pose information of the robot based on the initial pose information, the contour scanning result, and the pre-stored prior map, is configured to:
determine, for each target position point in the contour scanning result, whether target contour information matching that point exists in the prior map; if matching target contour information exists, take the initial pose information as the accurate pose information; if it does not exist, synchronously adjust the initial pose information and each target position point in the corresponding contour scanning result according to a preset pose adjustment step, return to the step of determining whether matching target contour information exists in the prior map, and repeat until target contour information matching the adjusted target position points is found, then take the adjusted initial pose information corresponding to those adjusted target position points as the accurate pose information.
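The match-and-adjust loop described above can be sketched as follows. This is an illustrative simplification rather than the patented implementation: the matching criterion (`matches_prior_map`), the step size, the search window, and the restriction to translational offsets are all assumptions made for the example.

```python
import math
from itertools import product

def transform(points, pose):
    """Apply a 2D pose (x, y, theta) to a list of (x, y) scan points."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points]

def matches_prior_map(points, prior_map, tol=0.05):
    """Hypothetical matcher: every transformed scan point must lie within
    tol of some contour point recorded in the prior map."""
    return all(any(math.dist(p, q) <= tol for q in prior_map) for p in points)

def refine_pose(initial_pose, scan_points, prior_map, step=0.1, search=2):
    """Synchronously adjust the pose (and thus the transformed scan points)
    by multiples of `step` until the scan matches the prior map."""
    x0, y0, th0 = initial_pose
    # Try the initial pose first (offset 0, 0), then offsets of increasing size.
    for dx, dy in sorted(product(range(-search, search + 1), repeat=2),
                         key=lambda d: abs(d[0]) + abs(d[1])):
        pose = (x0 + dx * step, y0 + dy * step, th0)
        if matches_prior_map(transform(scan_points, pose), prior_map):
            return pose  # accurate pose information
    return None  # no match within the search window
```

A real implementation would also search over orientation offsets and typically score matches against an occupancy grid rather than requiring every point to match.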
In one possible embodiment, the control server, when determining, based on each target position point in the contour scanning result, whether target contour information matching that point exists in the prior map, is configured to:
determining a map search range based on the initial pose information; and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
In one possible embodiment, the initial pose detection assembly, when determining the initial pose information obtained by initially positioning the robot, is configured to:
acquiring a target image obtained by scanning the surrounding environment by the robot through a vision sensor; and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
In one possible implementation, the initial pose detection assembly, when determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image, is configured to:
determining first position and attitude information of the target marker in a visual sensor coordinate system; determining second pose information of the target marker in the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system; determining second relative pose information between the robot coordinate system and a world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system; and determining initial pose information of the robot according to the second relative pose information.
In one possible embodiment, the initial pose detection assembly is further configured to:
and if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
In one possible embodiment, the initial pose detection assembly, when determining the initial pose information obtained by initially positioning the robot, is configured to:
and determining the initial pose information based on the received wireless broadcast signals sent by the target positioning equipment.
In one possible implementation, the initial pose detection assembly, when determining the initial pose information based on the received wireless broadcast signal transmitted by the target positioning device, is configured to:
and determining the position of the robot based on the received signal strength of wireless broadcast signals sent by a plurality of target positioning devices and preset position information corresponding to the target positioning devices.
In one possible embodiment, the ranging sensor is configured to:
acquiring point cloud data obtained by scanning the surrounding environment; determining, based on the point cloud data, whether the number of point cloud points belonging to the target object is greater than a set threshold; and if so, determining the contour scanning result of the target object based on the point cloud data corresponding to those point cloud points.
In a second aspect, an embodiment of the present disclosure provides a positioning method applied to a robot, including:
determining initial pose information obtained by initially positioning the robot;
acquiring a contour scanning result of a target object obtained after the robot scans the surrounding environment through a ranging sensor;
and determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map.
In an optional embodiment, determining the accurate pose information of the robot based on the initial pose information, the contour scanning result, and a pre-stored prior map includes:
determining, for each target position point in the contour scanning result, whether target contour information matching that point exists in the prior map;
if matching target contour information exists, taking the initial pose information as the accurate pose information;
if it does not exist, synchronously adjusting the initial pose information and each target position point in the corresponding contour scanning result according to a preset pose adjustment step, returning to the step of determining whether matching target contour information exists in the prior map, until target contour information matching the adjusted target position points is found, and taking the adjusted initial pose information corresponding to those adjusted target position points as the accurate pose information.
In an optional embodiment, determining, based on each target position point in the contour scanning result, whether target contour information matching that point exists in the prior map includes:
determining a map search range based on the initial pose information;
and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
In an optional implementation manner, determining initial pose information obtained by initially positioning the robot includes:
acquiring a target image obtained by scanning the surrounding environment by the robot through a vision sensor;
and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
In an optional embodiment, determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image includes:
determining first position and attitude information of the target marker in a visual sensor coordinate system;
determining second pose information of the target marker in the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system;
determining second relative pose information between the robot coordinate system and a world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system;
and determining initial pose information of the robot according to the second relative pose information.
In one possible embodiment, the method further comprises:
and if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
In one possible embodiment, determining initial pose information obtained by initially positioning the robot includes:
and determining the initial pose information based on the received wireless broadcast signals sent by the target positioning equipment.
In one possible implementation, determining the initial pose information based on the received wireless broadcast signal transmitted by the target positioning device includes:
and determining the position of the robot based on the received signal strength of wireless broadcast signals sent by a plurality of target positioning devices and preset position information corresponding to the target positioning devices.
In a possible embodiment, obtaining a contour scanning result of the target object obtained after the robot scans the surrounding environment through the ranging sensor includes:
acquiring point cloud data obtained by scanning the surrounding environment by a distance measuring sensor;
determining whether the number of point cloud points belonging to a target object is greater than a set threshold value or not based on the point cloud data;
and if so, determining the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
According to the robot and the positioning method provided by the embodiments of the present disclosure, the accurate pose information of the robot is determined from the robot's initial pose information, the contour scanning result of the target object obtained by scanning the surrounding environment with the ranging sensor, and the pre-stored prior map. The sensors used to determine the initial pose, such as the vision sensor and the wireless signal sensor, are easily affected by the external environment, which can make the robot's positioning inaccurate. The embodiments of the present disclosure therefore start from the initial pose information and further compare the ranging sensor's contour scanning result of the target object with the prior map. (The relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, because the pose information of the ranging sensor is taken as the reference when the contour position information is calculated.) For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be used as the accurate pose; if it does not match, the contour position information is adjusted, and when the adjusted contour position information matches the prior map, the adjusted robot pose corresponding to it is used as the accurate pose. The positioning accuracy of the robot is thereby improved.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive additional related drawings from them without inventive effort.
Fig. 1 shows a schematic structural diagram of a robot provided by an embodiment of the present disclosure;
fig. 2 shows a flow chart of a positioning method provided by an embodiment of the present disclosure;
fig. 3 shows a flowchart of a method for determining initial pose information of a robot in a positioning method provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that, at present, the pose of the robot can be determined based on a device external to the robot or on a sensor mounted on the robot body. However, when the robot is positioned based on external devices (such as the Global Positioning System (GPS) or Ultra-Wideband (UWB)) or body sensors (such as a vision sensor), positioning accuracy is low because the external environment is unstable. How to improve positioning accuracy has therefore become a problem that urgently needs to be solved.
Based on the above research, the present disclosure provides a robot and a positioning method applied to the robot, which determine the accurate pose information of the robot from the robot's initial pose information, the contour scanning result of a target object obtained by scanning the surrounding environment with a ranging sensor, and a pre-stored prior map. Sensors used to determine the initial pose, such as a vision sensor and a wireless signal sensor, are easily affected by the external environment, which can make the robot's positioning inaccurate. The embodiments of the present disclosure therefore start from the initial pose and further compare the ranging sensor's contour scanning result of the target object with the prior map. (The relative position relationship between the contour position information corresponding to the contour scanning result and the initial pose is accurate, because the pose information of the ranging sensor is used as the reference when the contour position information is calculated.) For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be used as the accurate pose; if it does not match, the contour position information is adjusted, and when the adjusted contour position information matches the prior map, the adjusted robot pose corresponding to it is used as the accurate pose. The positioning accuracy of the robot is thereby improved.
The above-mentioned drawbacks are results obtained by the inventor after practice and careful study; therefore, the process of discovering the above problems, and the solutions the present disclosure proposes for them, should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiments, a robot disclosed in the embodiments of the present disclosure is first described in detail, followed by a positioning method disclosed in the embodiments of the present disclosure. In some possible implementations, the positioning method may be implemented by a processor calling computer-readable instructions stored in a memory.
Example one
Referring to fig. 1, a schematic structural diagram of a robot 10 provided in an embodiment of the present disclosure is shown, where the robot 10 includes: an initial pose detection assembly 11, a ranging sensor 12, a control server 13, and a memory 14.
Wherein: the initial pose detection assembly 11 is configured to perform initial positioning on the robot to obtain initial pose information.
The initial pose detection assembly 11 may include a vision sensor 110 and a wireless signal sensor 111. The initial pose information is the pose information of the robot in a world coordinate system and may include initial position information and initial orientation information; here, the initial orientation information may be the robot's current heading. The wireless signal sensor 111 may be a Bluetooth sensor, a Global Positioning System (GPS) sensor, an Ultra-Wideband (UWB) sensor, a Wireless Fidelity (Wi-Fi) sensor, or the like; in general, the wireless signal sensor 111 may be any sensor capable of implementing a positioning function from a received wireless signal, which is not further enumerated here.
In specific implementation, the target image can be obtained by scanning the surrounding environment through the vision sensor 110 in the initial pose detection assembly 11; and determining the initial pose information based on pre-recorded world coordinate information corresponding to the target marker contained in the target image.
The target image is a frame of image obtained by scanning the surrounding environment by the robot 10 through the vision sensor 110 at the current time, and the image may include at least one object.
Here, the target marker is an object carrying a tag identifier; the world coordinate information is the pose information of the target marker in the world coordinate system.
Specifically, the initial pose detection assembly 11 determines first pose information of the target marker in a visual sensor coordinate system based on a target image; determining second pose information of the target marker in the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system; determining second relative pose information between the robot coordinate system and a world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system; and determining initial pose information of the robot 10 according to the second relative pose information.
The first pose information includes the position information and orientation information of the target marker in the vision sensor coordinate system. The first relative pose information between the vision sensor coordinate system and the robot coordinate system indicates the transformation between the two coordinate systems and may include a translation and a rotation. The second pose information includes the position information and orientation information of the target marker in the robot coordinate system. The second relative pose information between the robot coordinate system and the world coordinate system indicates the transformation between those two coordinate systems, which may likewise include a translation and a rotation.
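The chain of transforms in this step can be illustrated with 2D homogeneous matrices. This is a hedged sketch: the patent does not specify the dimensionality or representation, and the helper names (`se2`, `invert`) and numeric values below are invented for the example.

```python
import math

def se2(x, y, theta):
    """2D homogeneous transform: rotation by theta plus translation (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(t):
    """Invert a rigid 2D transform: R^T and -R^T * translation."""
    c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

# First pose information: marker observed in the vision sensor frame.
T_cam_marker = se2(2.0, 0.0, 0.0)
# First relative pose: camera mounted on the robot (extrinsic calibration).
T_robot_cam = se2(0.5, 0.0, 0.0)
# Second pose information: marker in the robot frame.
T_robot_marker = matmul(T_robot_cam, T_cam_marker)
# Pre-recorded world coordinate information of the marker.
T_world_marker = se2(10.0, 3.0, 0.0)
# Second relative pose: robot in the world frame, i.e. the initial pose.
T_world_robot = matmul(T_world_marker, invert(T_robot_marker))
```

With these assumed numbers, the marker sits 2.5 m ahead of the robot, so the robot's initial position comes out 2.5 m behind the marker's recorded world position.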
In one possible embodiment, the initial pose information may be determined based on a wireless broadcast signal received by the wireless signal sensor 111 in the initial pose detection assembly 11.
In one possible implementation, if the target marker cannot be identified from the target image, the initial pose information may be determined based on a wireless broadcast signal received by the wireless signal sensor 111 in the initial pose detection assembly 11.
The wireless broadcast signal may include a wireless broadcast signal strength and a transmission time of the wireless broadcast signal.
In a specific implementation, when the initial pose information of the robot 10 is determined from the wireless broadcast signals received by the wireless signal sensor 111 in the initial pose detection assembly 11, the position of the robot 10 may be determined based on the received signal strengths of wireless broadcast signals transmitted by a plurality of target positioning devices and the preset position information corresponding to those target positioning devices; the position of the robot 10 may also be determined based on the transmission times and the reception times of the wireless broadcast signals transmitted by the plurality of target positioning devices.
The target positioning device may be a Beacon positioning device laid in a work place area where the robot 10 is located, or may be a positioning base station.
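A minimal sketch of the signal-strength variant: RSSI readings are converted to range estimates with a log-distance path-loss model, then combined by least-squares trilateration against the beacons' preset positions. The model constants and helper names below are assumptions for illustration, not values from the patent.

```python
import math

def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model (assumed constants):
    rssi = tx_power - 10 * n * log10(d), solved for d in metres."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Least-squares 2D position from >= 3 beacon positions and ranges,
    linearized against the last beacon."""
    (xn, yn), dn = beacons[-1], distances[-1]
    a, b = [], []
    for (xi, yi), di in zip(beacons[:-1], distances[:-1]):
        a.append((2 * (xn - xi), 2 * (yn - yi)))
        b.append(di**2 - dn**2 - xi**2 + xn**2 - yi**2 + yn**2)
    # Solve the 2x2 normal equations of the linearized system.
    s11 = sum(r[0] * r[0] for r in a)
    s12 = sum(r[0] * r[1] for r in a)
    s22 = sum(r[1] * r[1] for r in a)
    t1 = sum(r[0] * v for r, v in zip(a, b))
    t2 = sum(r[1] * v for r, v in zip(a, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

The time-of-flight variant mentioned above would replace `rssi_to_distance` with (reception time minus transmission time) multiplied by the propagation speed, leaving `trilaterate` unchanged.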
Accordingly, the ranging sensor 12 is configured to scan the surrounding environment and determine a contour scanning result for the target object in that environment; the memory 14 is configured to store the execution instructions of the control server, the initial pose information detected by the initial pose detection assembly 11, and the contour scanning result output by the ranging sensor 12, and to store a prior map in advance.
The ranging sensor 12 may be a lidar sensor, an infrared sensor, or the like; in general, it may be any depth sensor with a ranging function, which is not further enumerated here.
Wherein the target object may include at least one object scanned by the robot 10 during operation by the ranging sensor 12; here, at least one object may include a target tag carrying tag identification.
Here, the contour scan result may be a contour map of the target object composed of a plurality of point cloud data.
Here, the prior map is a global map of the workplace where the robot 10 is currently located. The global map may be created using a SLAM (simultaneous localization and mapping) approach or the like, and may take the form of an occupancy-probability grid map.
In a specific implementation, the ranging sensor 12 is configured to: acquire point cloud data obtained by scanning the surrounding environment; determine, based on the point cloud data, whether the number of point cloud points belonging to the target object is greater than a set threshold; and if so, determine the contour scanning result of the target object based on the point cloud data corresponding to those point cloud points.
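The thresholding step above can be sketched as follows. The sketch assumes the scan has already been segmented so that each point carries an object label (the patent does not specify how points are attributed to objects), and the threshold value is an arbitrary placeholder.

```python
def contour_scan_result(points, labels, target_label, min_points=5):
    """Return the contour (the target object's point cloud) only if enough
    points were observed; otherwise return None, since too few points do
    not form a reliable contour."""
    target = [p for p, lab in zip(points, labels) if lab == target_label]
    if len(target) <= min_points:  # number of points must exceed the threshold
        return None
    return target
```

A fuller implementation would cluster raw lidar returns into objects first and might also downsample or order the surviving points along the contour.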
Accordingly, the control server 13 is configured to read the execution instruction from the memory 14, and according to the execution instruction, execute: the initial pose information, the contour scanning result, and the prior map are read, and the accurate pose information of the robot 10 is determined based on the initial pose information, the contour scanning result, and the pre-stored prior map.
In one possible embodiment, the control server 13, when executing the determination of the accurate pose information of the robot 10 based on the initial pose information, the contour scan result, and the pre-stored a priori map, is configured to:
determining, based on each target position point in the contour scan result, whether target contour information matching the target position points exists in the prior map. If matching target contour information exists, the initial pose information is taken as the accurate pose information. If no matching target contour information exists, the initial pose information and the information of each target position point in the corresponding contour scan result are adjusted synchronously by a preset pose adjustment step, and the step of determining whether target contour information matching the target position points exists in the prior map is repeated until target contour information matching the adjusted target position points exists; the adjusted initial pose information corresponding to the adjusted target position points is then taken as the accurate pose information.
In one possible embodiment, the control server 13, when performing the determination of whether there is target contour information matching the target position point in the prior map based on each target position point in the contour scan result, is configured to: determining a map search range based on the initial pose information; and determining whether target contour information matched with the target position point exists or not by searching contour information in the determined map searching range.
In the embodiment of the present disclosure, the accurate pose information of the robot is determined from the initial pose information of the robot, the contour scan result of the target object obtained by scanning the surrounding environment with the ranging sensor, and the pre-stored prior map. Because the sensors used for determining the initial pose, such as a vision sensor or a wireless signal sensor, are easily affected by the external environment and can make the positioning of the robot inaccurate, the embodiment of the present disclosure further determines the accurate pose information, after correcting the initial pose, from the ranging sensor's contour scan result of the target object and the prior map (the relative position relationship between the contour position information corresponding to the contour scan result and the initial pose is accurate, because the pose information of the ranging sensor is used as the reference standard when calculating the contour position information). For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be taken as the accurate pose; if it does not match, the contour position information is adjusted, and when the adjusted contour position information matches the prior map, the adjusted robot pose corresponding to it is taken as the accurate pose. The positioning accuracy of the robot is thereby improved.
Based on the same inventive concept, the robot provided in the first embodiment of the present disclosure corresponds to the positioning method applied to the robot in the second embodiment described below, and since the principle of solving the problem of the robot in the first embodiment of the present disclosure is similar to the positioning method described below in the second embodiment of the present disclosure, the implementation of the robot may refer to the implementation of the method, and repeated details are not described again.
The description of the processing flow of each module in the robot and the interaction flow between the modules may refer to the related description in the following method embodiments, and will not be described in detail here.
Example two
Referring to fig. 2, which is a flowchart of a positioning method applied to a robot provided by an embodiment of the present disclosure, the method includes steps S201 to S203, where:
S201, determining initial pose information obtained by initially positioning the robot.
The initial pose information is the pose information of the robot in the world coordinate system and may include initial position information and initial attitude information; here, the initial attitude information may be the current orientation of the robot.
In a specific implementation, the initial pose information of the robot can be determined by a vision sensor or a wireless signal sensor mounted on the robot body, and the determination of the initial pose information of the robot by the vision sensor mounted on the robot body is described as follows: acquiring a target image obtained by scanning the surrounding environment by the robot through a vision sensor; and determining the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image.
Here, the wireless signal sensor may include a Bluetooth sensor, a Global Positioning System (GPS) sensor, an Ultra Wide Band (UWB) sensor, a Wireless Fidelity (WiFi) sensor, and the like; in general, any sensor capable of positioning based on received wireless signals may be used, and details are not repeated herein.
The target image is a frame of image obtained by scanning the surrounding environment by the robot through the vision sensor at the current moment, and the image can include at least one object.
Here, the target marker is an object carrying a tag identifier; the world coordinate information is the pose information of the target marker in the world coordinate system.
Specifically, as shown in fig. 3, the initial pose information of the robot may be determined based on the pre-recorded world coordinate system information corresponding to the target marker included in the target image according to the following steps S301 to S304, which are described as follows:
S301, determining first pose information of the target marker in the vision sensor coordinate system.
In a specific implementation, the first pose information of the target marker carrying the tag identifier in the vision sensor coordinate system can be determined by analyzing the target image; the first pose information may include first position information and first attitude information.
S302, determining second pose information of the target marker in the robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system.
Wherein the first relative pose information between the vision sensor coordinate system and the robot coordinate system can be used to indicate a transformation relationship between the vision sensor coordinate system and the robot coordinate system, which can include translation and rotation.
In a specific implementation, the second pose information of the target marker carrying the tag identifier in the robot coordinate system is calculated from the conversion relationship between the vision sensor coordinate system and the robot coordinate system and the first pose information of the target marker in the vision sensor coordinate system. The second pose information may include second position information and second attitude information.
S303, determining second relative pose information between the robot coordinate system and the world coordinate system based on the second pose information and the world coordinate information of the target mark object in the world coordinate system.
Here, the conversion relationship between the robot coordinate system and the world coordinate system (i.e., the second relative pose information), which may include translation and rotation, is calculated from the world coordinate information of the target marker carrying the tag identifier in the world coordinate system and the second pose information of the target marker in the robot coordinate system determined in step S302.
S304, determining initial pose information of the robot according to the second relative pose information.
In a specific implementation, after the transformation relationship between the robot coordinate system and the world coordinate system is calculated according to step S303, the initial pose information of the robot in the world coordinate system may be obtained according to the transformation relationship between the robot coordinate system and the world coordinate system.
For example, if by analyzing the target image the pose of the target marker carrying the tag identifier in the vision sensor coordinate system is determined to be P_c(x_c, y_c), the first relative pose transform between the vision sensor coordinate system and the robot coordinate system is T_cb, the second relative pose transform between the robot coordinate system and the world coordinate system is T_wb, and the coordinate of the tag in the world coordinate system is P_w(x_w, y_w), then the initial pose information T_wb of the robot in the world coordinate system can be calculated from the formula P_w = T_wb * T_cb^-1 * P_c. Here, T_wb comprises the position information of the robot in the world coordinate system and the attitude information of the robot in the world coordinate system determined by the rotation in the coordinate conversion relationship; T_cb^-1 represents the transformation between the robot coordinate system and the vision sensor coordinate system.
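As an illustrative sketch of this chain of transforms in the 2-D case, using homogeneous coordinates (all numeric values below are assumed examples, not values from the disclosure):

```python
import numpy as np

def se2(x, y, theta):
    """2-D homogeneous rigid transform: rotation by theta plus translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

# Assumed example values, not from the disclosure.
T_cb = se2(0.2, 0.0, 0.0)        # transform between vision sensor and robot frames
T_wb = se2(3.0, 4.0, np.pi / 2)  # robot pose in the world coordinate system

P_c = np.array([1.0, 0.5, 1.0])  # marker observed in the vision sensor frame
# The relation from the text: P_w = T_wb * T_cb^-1 * P_c
P_w = T_wb @ np.linalg.inv(T_cb) @ P_c
```

In practice T_wb is the unknown: P_w is read from the pre-recorded world coordinate information of the marker, P_c comes from image analysis, and T_wb is solved from the same relation.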
In a specific implementation, after a target image is obtained by scanning the surrounding environment with the vision sensor, an image analysis algorithm can determine whether the current target image contains a target marker carrying a tag identifier. If it does, a preset quality evaluation function for the visual image can determine whether the current target image is suitable for matching against the target marker; if the evaluation score calculated by this function is low, the light in the current surrounding environment is insufficient, and the initial pose information of the robot cannot be determined by the vision sensor. If the current target image is determined not to contain a target marker carrying a tag identifier, the initial pose information of the robot likewise cannot be determined by the vision sensor.
In a specific implementation, when the initial pose information of the robot cannot be determined by the vision sensor mounted on the robot body, the initial pose information of the robot can be determined by the wireless signal sensor mounted on the robot body, which is specifically described as follows: and determining the position of the robot based on the received signal strength of wireless broadcast signals sent by a plurality of target positioning devices and preset position information corresponding to the target positioning devices.
The initial pose information of the robot is determined by the wireless signal sensor in various ways, and the initial pose information of the robot can be determined by positioning the robot by the Bluetooth sensor; the robot can also be positioned through a WiFi positioning technology, so that the initial pose information of the robot is determined; the robot can also be positioned by a GPS positioning technology, so that the initial pose information of the robot is determined; the method can also be used for positioning the robot by the UWB positioning technology so as to determine the initial pose information and the like of the robot.
The target positioning device can be a Beacon positioning device laid in a work place area where the robot is located, and can also be a positioning base station.
Here, the signal strength of the radio broadcast signal includes strength information of the radio signal and transmission time information of the radio signal; the wireless broadcast signal may be a bluetooth broadcast signal, a WiFi signal, a UWB signal, a GPS signal, or the like.
In a possible implementation manner, when the wireless signal sensor mounted on the robot body is a bluetooth sensor, the robot can be located by two locating manners, namely a network side locating system and a terminal side locating system.
Specifically, positioning the robot by the network-side positioning system proceeds as follows: at least one Beacon positioning device laid in the workplace area where the robot is located acts as a Bluetooth beacon and continuously sends Bluetooth broadcast signals to its surroundings, and the robot calculates a Received Signal Strength Indication (RSSI) value for each beacon from the received Bluetooth broadcast signals. The RSSI values are transmitted to a back-end data server through a Bluetooth gateway laid in the workplace area, and the back-end data server analyzes the received RSSI values with a preset positioning algorithm to calculate the specific position of the robot (i.e., the initial pose information of the robot).
Specifically, the specific description of positioning the robot by the terminal-side positioning system is as follows: at least one Beacon positioning device laid in a workplace area where the robot is located is used as a Bluetooth Beacon to continuously send Bluetooth broadcast signals to the periphery, the robot receives the Bluetooth broadcast signals, and the distance between the robot and the Beacon positioning device is determined according to the signal intensity of the received Bluetooth broadcast signals; and calculating to obtain the position information of the robot (namely the initial pose information of the robot) in the world coordinate system through a preset positioning algorithm of the robot based on the position information of the Beacon positioning equipment in the world coordinate system and the determined distance between the robot and the Beacon positioning equipment.
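A minimal sketch of the terminal-side path, under assumptions: a log-distance path-loss model converts RSSI to distance (the calibration constants `tx_power` and `n` are assumed, not given in the disclosure), and a linear least-squares trilateration stands in for the unspecified "preset positioning algorithm".

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model; tx_power (RSSI at 1 m) and the
    path-loss exponent n are assumed calibration constants."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Least-squares position fix from >= 3 known beacon positions and
    estimated distances -- one common terminal-side positioning choice."""
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        # Subtracting the first circle equation linearizes the system.
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos
```

The beacon positions in the world coordinate system are known from deployment, so the returned fix is directly the robot's initial position information.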
In a possible embodiment, when the wireless signal sensor mounted on the robot body is a UWB sensor, the robot can be located by the sending time and the receiving time of the received wireless broadcast signal, which is described as follows: at least one positioning base station laid in the working place area of the robot continuously sends broadcast signals to the surroundings, the robot receives the broadcast signals and determines the one-way flight time of the broadcast signals according to the sending time carried in the received broadcast signals and the receiving time of the received broadcast signals; determining the distance between the robot and the positioning base station according to the flight speed of the broadcast signal and the one-way flight time; based on the position information of the positioning base station in the world coordinate system, the position information of the robot in the world coordinate system (namely the initial pose information of the robot) is calculated through a preset positioning algorithm of the robot.
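The one-way time-of-flight ranging step above amounts to multiplying the flight speed of the broadcast signal by the measured flight time. A sketch, assuming synchronized clocks between the base station and the robot (a simplification; many real UWB systems use two-way ranging to avoid this assumption):

```python
C = 299_792_458.0  # flight speed of the radio broadcast signal, m/s

def uwb_range(t_sent, t_received):
    """One-way time-of-flight distance from a positioning base station,
    assuming the sending and receiving clocks are synchronized."""
    return C * (t_received - t_sent)
```

Ranges to several base stations with known world coordinates can then be fed to the same kind of trilateration used for the Bluetooth case.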
In a specific implementation, the sensors used for determining the initial pose, such as the vision sensor or the wireless signal sensor mounted on the robot body, are easily affected by the external environment, which can make the positioning of the robot inaccurate. Therefore, after the initial positioning of the robot by the vision sensor or the wireless signal sensor yields the initial pose information of the robot in the world coordinate system, the accurate pose information after correcting the initial pose can be further determined from the contour scan result of the target object, obtained after the ranging sensor mounted on the robot body scans the surrounding environment, and the prior map (the relative position relationship between the contour position information corresponding to the contour scan result and the initial pose is accurate, because the pose information of the ranging sensor is used as the reference standard when calculating the contour position information). The specific steps are as shown in steps S202 to S203 below.
S202, obtaining a contour scanning result of the target object obtained after the robot scans the surrounding environment through the ranging sensor.
The distance measuring sensor may be a laser radar sensor, an infrared sensor, or the like; in general, any depth sensor with a distance measuring function may be used, and details are not repeated herein.
The target object may include at least one object scanned by the robot through the ranging sensor during operation; here, the at least one object may include a target marker carrying a tag identifier.
Here, the contour scan result of the target object may be a contour map of the target object composed of a plurality of point cloud data.
In a specific implementation, the ranging sensor of the robot transmits a signal from the initial position point in the initial pose information of the robot determined in step S201; when the intensity of a signal reflected by a target object in the surrounding environment is greater than a first preset threshold, point cloud data corresponding to the target object is generated; and when the number of point cloud points of the target object is greater than a second preset threshold, the contour scan result of the target object is determined based on the point cloud data corresponding to those point cloud points.
After the contour scanning result of the target object obtained after the robot scans the surrounding environment through the ranging sensor is obtained based on step S202, the accurate pose information of the robot is determined according to step S203, which is described in detail as follows.
S203, determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map.
Wherein the initial pose information may include initial position information and initial attitude information (i.e., the orientation information of the robot); when the initial pose information of the robot is determined by the vision sensor, the initial position information and the initial attitude information can be determined at the same time. The initial pose information may also include only initial position information; when the initial pose information of the robot cannot be determined by the vision sensor, the initial position information of the robot may be determined by the wireless signal sensor.
The accurate pose information may include accurate position information and accurate attitude (angle, orientation) information, i.e., the orientation of the robot.
Here, the pre-stored prior map is a global map of the workplace where the robot is currently located, stored in advance in a database; the global map may be created by SLAM or the like, and may be an occupancy probability grid map.
In a specific implementation, in order to reduce the amount of calculation, a map search range may be determined in the pre-stored prior map, based on the initial pose information determined in step S201, by a covariance matrix or a preset method (for example, a preset map search range of a 10 × 10 grid map centered on the initial position information in the initial pose information).
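Clipping the preset search window out of the occupancy grid can be sketched as follows; `half=5` reproduces the 10 × 10 example window from the text, but any preset size could be substituted.

```python
import numpy as np

def search_window(grid, cx, cy, half=5):
    """Return the sub-grid around the initial position cell (cx, cy),
    clipped at the map border. half=5 yields a 10 x 10 window."""
    x0 = max(cx - half, 0)
    y0 = max(cy - half, 0)
    return grid[y0:cy + half, x0:cx + half]
```

Matching is then performed only inside this window rather than over the whole prior map, which is what reduces the amount of calculation.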
After the map search range is determined, a contour information search is performed within it for each target position point in the contour scan result; a matching degree evaluation score corresponding to the matching degree of each target position point within the map search range is obtained from a matching degree evaluation score function, and it is determined whether target contour information matching the target position points exists in the prior map. The matching degree evaluation score function calculates the evaluation score corresponding to the matching degree of each target position point in the contour scan result within the map search range. It may use any one of several first-stage algorithms (a branch-and-bound matching algorithm on the grid map, a brute-force matching algorithm on the grid map, a fixed-step search algorithm on the point cloud, and the like), any one of several second-stage algorithms (a point-to-point Iterative Closest Point (ICP) matching algorithm, a point-to-Normal Distributions Transform (NDT) model matching algorithm, a point-to-line ICP algorithm, and the like), or a combination of any first-stage algorithm with any second-stage algorithm.
In a specific implementation, the position of each target position point in the contour scan result in the world coordinate system is determined from the initial pose information of the robot and the relative position relationship, determined by the ranging sensor, between the initial pose information and each target position point in the contour scan result. Based on these positions, each target position point is matched against the prior map within the map search range, and the matching degree evaluation score corresponding to the matching result is determined by the matching degree evaluation score function. When the matching degree evaluation score is greater than a preset threshold, target contour information matching the target position points exists in the prior map, and the initial pose information of the robot is taken as the accurate pose information. When the matching degree evaluation score is not greater than the preset threshold, the initial pose information and the information of each target position point in the corresponding contour scan result are adjusted synchronously by the preset pose adjustment step, according to the determined relative position relationship between the initial pose information of the robot and each target position point in the contour scan result. Each target position point in the adjusted contour scan result is then matched against the prior map within the map search range based on its position in the world coordinate system, and the matching degree evaluation score corresponding to the adjusted matching result is determined by the matching degree evaluation score function. This is repeated until the matching degree evaluation score corresponding to the adjusted matching result is greater than the preset threshold, indicating that target contour information matching the adjusted target position points exists in the prior map; the adjusted initial pose information of the robot is then taken as the accurate pose information.
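One simple stand-in for the matching degree evaluation score function is the fraction of scan points that land in occupied cells of the prior grid map; the grid resolution here is an assumed value, and real implementations would use one of the first/second-stage algorithms named in the text.

```python
import numpy as np

def match_score(scan_points_world, grid, resolution=0.5):
    """Fraction of scan points (in world coordinates) that fall into
    occupied cells of the occupancy grid -- an illustrative proxy for
    the disclosure's matching degree evaluation score function."""
    hits = 0
    for x, y in scan_points_world:
        i, j = int(y / resolution), int(x / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1] and grid[i, j]:
            hits += 1
    return hits / len(scan_points_world)
```

A score above the preset threshold means matching target contour information exists in the prior map and the current pose hypothesis is accepted.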
The preset pose adjustment step length may include a preset position adjustment step length and a preset posture (angle) adjustment step length; here, the preset position adjustment step length may be determined according to grid accuracy in a pre-stored prior map; the preset angle adjustment step length can be determined according to an included angle between two adjacent ranging signals transmitted by a ranging sensor of the robot.
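The two preset steps can be derived exactly as described: the position step from the grid resolution of the prior map, and the angle step from the angle between two adjacent ranging beams (assuming, for illustration, that the beams are evenly spaced over a full revolution).

```python
import math

def pose_steps(grid_resolution, num_beams):
    """Position step = one grid cell of the prior map; angle step =
    the angle between two adjacent ranging signals, assuming num_beams
    evenly spaced beams per revolution (an illustrative assumption)."""
    position_step = grid_resolution          # metres
    angle_step = 2 * math.pi / num_beams     # radians
    return position_step, angle_step
```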
A contour information search is performed within the determined map search range for each target position point in the contour scan result, and a matching degree evaluation score corresponding to the matching degree of each target position point within the map search range is obtained from the matching degree evaluation score function. If the matching degree evaluation score is not greater than the preset threshold, it is determined that no target contour information matching the target position points exists in the prior map. After this determination, the initial pose information and the information of each target position point in the corresponding contour scan result are adjusted synchronously in the two dimensions of position and angle, according to the preset position adjustment step and the preset attitude (angle) adjustment step. A contour information search is then performed within the map search range for each target position point in the adjusted contour scan result, and a matching degree evaluation score for the adjusted result is obtained from the matching degree evaluation score function. If this score is greater than the preset threshold, target contour information matching the adjusted target position points exists in the prior map, and the initial pose information adjusted by the preset position adjustment step and the preset angle adjustment step is taken as the accurate pose information of the robot. In the adjusted initial pose information, the adjusted position information is obtained by adjusting the initial position information by the preset position adjustment step, and the adjusted robot orientation information is obtained by adjusting the initial attitude (robot orientation) information by the preset angle adjustment step.
Here, in the process of synchronously adjusting the initial pose information and the information of each target position point in the corresponding contour scan result by the preset pose adjustment step, a specific implementation proceeds as follows. The initial position information in the initial pose information and the information of each target position point in the corresponding contour scan result are adjusted synchronously by one preset adjustment step (which may be the preset position adjustment step or the preset angle adjustment step) to obtain a first adjustment result; the first adjustment result is matched against the prior map, and a matching degree evaluation score is calculated. When this score is greater than the preset threshold, the initial attitude (angle) information in the initial pose information of the robot (that is, the robot orientation information in the initial pose information determined in S201) and the initial position information adjusted by the preset position adjustment step are taken as the accurate orientation information and accurate position information of the robot. When this score is not greater than the preset threshold, the first adjustment result is further adjusted by the other preset adjustment step (the preset angle adjustment step or the preset position adjustment step) to obtain a second adjustment result; the second adjustment result is matched against the prior map, and its matching degree evaluation score is calculated. When the matching degree evaluation score corresponding to the second adjustment result is greater than the preset threshold, the initial position information adjusted by the preset position adjustment step in the first adjustment and the initial attitude (angle, orientation) information adjusted by the preset angle adjustment step are taken as the accurate position information and accurate orientation information of the robot.
For example, the position information in the initial pose information and the information of each target position point in the corresponding contour scan result may first be adjusted synchronously by the preset position adjustment step. Each target position point in the contour scan result after this first adjustment is matched against the prior map within the map search range, based on its position in the world coordinate system, and the matching degree evaluation score corresponding to the first adjustment result is determined by the matching degree evaluation score function. When this score is not greater than the preset threshold, the orientation information of the robot in the initial pose information and the information of each target position point in the corresponding contour scan result are adjusted synchronously by the preset angle adjustment step, starting from the first adjustment result and based on the relative position relationship between the contour position information corresponding to the contour scan result and the initial pose. Each target position point in the contour scan result after this second adjustment is then matched against the prior map within the map search range based on its position in the world coordinate system, and the matching degree evaluation score corresponding to the second adjustment result is determined by the matching degree evaluation score function.
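The overall refinement can be sketched as a small grid search over pose perturbations in multiples of the two preset steps; the scoring function, search radius, and threshold are all assumptions for illustration, with the scan transform folded into `score_fn`.

```python
import itertools

def refine_pose(score_fn, x0, y0, theta0, pos_step, ang_step, threshold, max_k=3):
    """Grid-search pose refinement sketch: perturb the initial pose in
    position and angle by multiples of the preset adjustment steps and
    return the first candidate whose match score exceeds the threshold.
    score_fn(x, y, theta) is assumed to transform the contour scan by
    the candidate pose and match it against the prior map."""
    offsets = range(-max_k, max_k + 1)
    for dx, dy, dt in itertools.product(offsets, offsets, offsets):
        pose = (x0 + dx * pos_step, y0 + dy * pos_step, theta0 + dt * ang_step)
        if score_fn(*pose) > threshold:
            return pose  # accurate pose information
    return None          # no match inside the search range
```

The synchronized adjustment of the scan points is implicit in `score_fn`: each candidate pose moves the whole contour rigidly before it is compared to the prior map.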
In the embodiment of the present disclosure, the accurate pose information of the robot is determined from the initial pose information of the robot, the contour scan result of the target object obtained by scanning the surrounding environment with the ranging sensor, and the pre-stored prior map. Because the sensors used for determining the initial pose, such as a vision sensor or a wireless signal sensor, are easily affected by the external environment and can make the positioning of the robot inaccurate, the embodiment of the present disclosure further determines the accurate pose information, after correcting the initial pose, from the ranging sensor's contour scan result of the target object and the prior map (the relative position relationship between the contour position information corresponding to the contour scan result and the initial pose is accurate, because the pose information of the ranging sensor is used as the reference standard when calculating the contour position information). For example, when the contour position information corresponding to the initial pose matches the prior map, the initial pose can be taken as the accurate pose; if it does not match, the contour position information is adjusted, and when the adjusted contour position information matches the prior map, the adjusted robot pose corresponding to it is taken as the accurate pose. The positioning accuracy of the robot is thereby improved.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific execution order of the steps should be determined by their function and possible inherent logic.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the robot described above may refer to the corresponding process in the foregoing method embodiment and is not described again here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed robot and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only one logical division, and other divisions are possible in an actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate its technical solutions rather than to limit them, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall all be covered within its scope of protection. Therefore, the scope of protection of the present disclosure shall be subject to the scope of protection of the claims.

Claims (10)

1. A robot, comprising: an initial pose detection assembly, a ranging sensor, a control server and a memory;
wherein the initial pose detection assembly is configured to perform initial positioning on the robot to obtain initial pose information; the ranging sensor is configured to determine a contour scanning result of a target object in the surrounding environment by scanning the surrounding environment; the memory is configured to store an execution instruction of the control server, the initial pose information detected by the initial pose detection assembly, the contour scanning result output by the ranging sensor, and a prior map;
and the control server is configured to read the execution instruction from the memory and execute, according to the execution instruction: reading the initial pose information, the contour scanning result and the prior map, and determining accurate pose information of the robot based on the initial pose information, the contour scanning result and the pre-stored prior map.
2. The robot of claim 1, wherein the control server, in performing the determining of the accurate pose information of the robot based on the initial pose information, the contour scanning result and the pre-stored prior map, is configured to:
determining, based on each target position point in the contour scanning result, whether target contour information matching the target position point exists in the prior map; if matching target contour information exists, taking the initial pose information as the accurate pose information; if no matching target contour information exists, adjusting the initial pose information and the information of each target position point in the corresponding contour scanning result synchronously according to a preset pose adjustment step, and returning to the step of determining whether target contour information matching the target position point exists in the prior map, until target contour information matching the adjusted target position points exists, and taking the adjusted initial pose information corresponding to the adjusted target position points as the accurate pose information.
3. The robot of claim 2, wherein the control server, in performing the determining of whether target contour information matching the target position point exists in the prior map based on each target position point in the contour scanning result, is configured to:
determining a map search range based on the initial pose information; and determining whether target contour information matching the target position point exists by searching contour information within the determined map search range.
4. The robot of claim 1, wherein the initial pose detection assembly, in performing the initial positioning of the robot to obtain the initial pose information, is configured to:
acquiring a target image obtained by the robot scanning the surrounding environment through a vision sensor; and determining the initial pose information based on pre-recorded world coordinate information corresponding to a target marker contained in the target image.
5. The robot of claim 4, wherein the initial pose detection assembly, in performing the determining of the initial pose information based on the pre-recorded world coordinate information corresponding to the target marker contained in the target image, is configured to:
determining first pose information of the target marker in a vision sensor coordinate system; determining second pose information of the target marker in a robot coordinate system based on the first pose information and first relative pose information between the vision sensor coordinate system and the robot coordinate system; determining second relative pose information between the robot coordinate system and a world coordinate system based on the second pose information and the world coordinate information of the target marker in the world coordinate system; and determining the initial pose information of the robot according to the second relative pose information.
6. The robot according to claim 4 or 5, wherein the initial pose detection assembly is further configured to:
and if the target marker cannot be successfully identified from the target image, determining the initial pose information based on the received wireless broadcast signal sent by the target positioning equipment.
7. The robot of claim 1, wherein the initial pose detection assembly, in performing the initial positioning of the robot to obtain the initial pose information, is configured to:
determining the initial pose information based on a received wireless broadcast signal sent by a target positioning device.
8. The robot of claim 6 or 7, wherein the initial pose detection assembly, in performing the determining of the initial pose information based on the received wireless broadcast signal sent by the target positioning device, is configured to:
determining the position of the robot based on the received signal strengths of wireless broadcast signals sent by a plurality of target positioning devices and preset position information corresponding to the target positioning devices.
9. The robot of claim 1, wherein the ranging sensor is configured to:
acquiring point cloud data obtained by the ranging sensor scanning the surrounding environment; determining, based on the point cloud data, whether the number of point cloud points belonging to a target object is greater than a set threshold; and if so, determining the contour scanning result of the target object based on the point cloud data corresponding to the point cloud points of the target object.
10. A positioning method, applied to a robot, comprising:
determining initial pose information obtained by initially positioning the robot;
acquiring a contour scanning result of a target object obtained after the robot scans the surrounding environment through a ranging sensor; and
determining accurate pose information of the robot based on the initial pose information, the contour scanning result and a pre-stored prior map.
CN202010898774.6A 2020-08-31 2020-08-31 Robot and positioning method applied to robot Active CN114102577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010898774.6A CN114102577B (en) 2020-08-31 2020-08-31 Robot and positioning method applied to robot


Publications (2)

Publication Number Publication Date
CN114102577A true CN114102577A (en) 2022-03-01
CN114102577B CN114102577B (en) 2023-05-30

Family

ID=80359940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010898774.6A Active CN114102577B (en) 2020-08-31 2020-08-31 Robot and positioning method applied to robot

Country Status (1)

Country Link
CN (1) CN114102577B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104626206A (en) * 2014-12-17 2015-05-20 西南科技大学 Robot operation pose information measuring method under non-structural environment
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
WO2017215777A2 (en) * 2016-06-13 2017-12-21 Plasser & Theurer Export Von Bahnbaumaschinen Gesellschaft M.B.H. Method and system for the maintenance of a travel path for rail vehicles
CN109084732A (en) * 2018-06-29 2018-12-25 北京旷视科技有限公司 Positioning and air navigation aid, device and processing equipment
CN110260867A (en) * 2019-07-29 2019-09-20 浙江大华技术股份有限公司 Method, equipment and the device that pose is determining in a kind of robot navigation, corrects
CN110319834A (en) * 2018-03-30 2019-10-11 深圳市神州云海智能科技有限公司 A kind of method and robot of Indoor Robot positioning
CN110375738A (en) * 2019-06-21 2019-10-25 西安电子科技大学 A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method
WO2019219077A1 (en) * 2018-05-18 2019-11-21 京东方科技集团股份有限公司 Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
CN111473785A (en) * 2020-06-28 2020-07-31 北京云迹科技有限公司 Method and device for adjusting relative pose of robot to map
CN111590595A (en) * 2020-06-30 2020-08-28 深圳市银星智能科技股份有限公司 Positioning method and device, mobile robot and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114353807A (en) * 2022-03-21 2022-04-15 沈阳吕尚科技有限公司 Robot positioning method and positioning device
CN115047874A (en) * 2022-06-02 2022-09-13 北京三快在线科技有限公司 Robot connection method, storage cabinet, robot, system and electronic equipment
CN115047874B (en) * 2022-06-02 2023-09-15 北京三快在线科技有限公司 Robot connection method, locker, robot, system and electronic equipment


Similar Documents

Publication Publication Date Title
CN107923960B (en) System and method for locating tags in space
CN107340522B (en) Laser radar positioning method, device and system
CN108875804B (en) Data processing method based on laser point cloud data and related device
CN109490825B (en) Positioning navigation method, device, equipment, system and storage medium
CN107742304B (en) Method and device for determining movement track, mobile robot and storage medium
CN104732514A (en) Apparatus, systems, and methods for processing a height map
JP2017072422A (en) Information processing device, control method, program, and storage medium
CN111380510B (en) Repositioning method and device and robot
CN114102577B (en) Robot and positioning method applied to robot
US8744752B2 (en) Apparatus and method for detecting locations of vehicle and obstacle
CN112171659A (en) Robot and method and device for identifying limited area of robot
US11002842B2 (en) Method and apparatus for determining the location of a static object
CN112505671B (en) Millimeter wave radar target positioning method and device under GNSS signal missing environment
CN113126600A (en) Follow system and article transfer cart based on UWB
CN114610032A (en) Target object following method and device, electronic equipment and readable storage medium
JPWO2017199369A1 (en) Feature recognition apparatus, feature recognition method and program
EP3851788A1 (en) Determination method and determination device
CN110988795A (en) Mark-free navigation AGV global initial positioning method integrating WIFI positioning
CN110879397A (en) Obstacle recognition method, apparatus, storage medium, and device
CN114526724B (en) Positioning method and equipment for inspection robot
CN113203424B (en) Multi-sensor data fusion method and device and related equipment
KR101642186B1 (en) Location tracking method, location tracking system and recording medium for performing the method
CN110412613B (en) Laser-based measurement method, mobile device, computer device, and storage medium
JPWO2019031372A1 (en) Sensor control device
CN113376617B (en) Method, device, storage medium and system for evaluating accuracy of radar calibration result

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant