CN115366097A - Robot following method, device, robot and computer readable storage medium - Google Patents


Info

Publication number
CN115366097A
Authority
CN
China
Prior art keywords
robot
position information
uwb module
target
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210893035.7A
Other languages
Chinese (zh)
Inventor
郁亚南
廖铉泓
于金旭
梁智乐
吴建波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Benmo Technology Co Ltd
Original Assignee
Dongguan Benmo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Benmo Technology Co Ltd filed Critical Dongguan Benmo Technology Co Ltd
Priority to CN202210893035.7A priority Critical patent/CN115366097A/en
Priority to PCT/CN2022/128354 priority patent/WO2024021340A1/en
Publication of CN115366097A publication Critical patent/CN115366097A/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Remote Sensing (AREA)
  • Robotics (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of artificial intelligence, and discloses a robot following method, a robot following device, a robot and a storage medium. The method comprises the following steps: constructing an initial three-dimensional point cloud map of a target environment; when a robot following instruction is received, recording the target environment to obtain a motion video; calculating, from the motion video, the motion trajectory of the robot within the time between any two adjacent frames of images; determining, according to the motion trajectory, spatial position information of a UWB module in the robot; calculating the distance between the UWB module in the robot and a UWB module in a user handle by using a TOF ranging method; calculating, according to the distance, spatial position information of the UWB module in the user handle by triangulation; and generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle. The invention can improve the following accuracy of the robot.

Description

Robot following method, device, robot and computer readable storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a robot following method and device, a robot and a readable storage medium.
Background
Robot following refers to a process in which a robot uses a positioning system to keep itself within a certain range of a user, as with a dog-walking robot, for example.
Most common robot following methods at present use GPS positioning navigation, Bluetooth guided navigation, laser guided navigation or ultrasonic guided navigation. GPS positioning navigation is limited by signal availability, is only suitable for outdoor use, and has low indoor precision. Bluetooth guided navigation operates in the heavily occupied 2.4 GHz band and is easily interfered with by common wireless devices such as mobile phone signals, WiFi and wireless mice, so the ranging error is large and the accuracy is low. Laser guided navigation and ultrasonic guided navigation have difficulty following a single target in crowded places, so their precision is also low.
Disclosure of Invention
The invention provides a robot following method, a robot following device, a robot and a readable storage medium, and aims to improve the accuracy of the robot in the following process.
In order to achieve the above object, the present invention provides a robot following method, including:
acquiring image information of a target environment and initial position information of a UWB module in a robot, calculating angular points in the image information by using an angular point algorithm, and constructing an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technology according to the image information and the angular points;
when a robot following instruction is received, a camera module in the robot is used for recording the target environment to obtain a motion video;
screening target corner points from the corner points according to the image information of each frame of image in the motion video, and calculating the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video;
determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the video recording process;
acquiring, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set, acquiring the signal receiving time and signal response time of the UWB module in the robot and the UWB module in a user handle, and calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time;
according to the spatial position information set and the distance, calculating spatial position information of a UWB module in the user handle by utilizing a triangulation method;
and generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
Optionally, the calculating, according to the target corner points in any two adjacent frames of images in the motion video, the motion trajectory of the robot within the time between those two adjacent frames of images includes:
matching the target corner points in any two adjacent frames of images in the motion video to obtain a matching result;
calculating a rotation vector and a displacement vector between any two adjacent frames of images by using an epipolar geometric formula according to the matching result;
carrying out nonlinear optimization on the rotation vector and the displacement vector by using a least square method to obtain a target rotation vector and a target displacement vector;
and splicing any two adjacent frames of images according to the target rotation vector and the target displacement vector to obtain the motion track of the robot.
Optionally, the calculating the corner in the image information by using a corner algorithm includes:
acquiring pixel information in the image information to obtain a pixel image;
selecting any pixel point in the pixel image as a circle center, and drawing a circle with a preset length as the radius to obtain a target circle;
judging whether a first preset number of continuous comparison pixel points exist on the target circle;
when a first preset number of continuous comparison pixel points do not exist on the target circle, judging that the pixel point is not a corner point;
when a first preset number of continuous comparison pixel points exist on the target circle, calculating the difference between the brightness value of each comparison pixel point and the brightness value of the pixel point;
judging whether a second preset number of the differences are larger than or smaller than a preset threshold value;
when a second preset number of the differences are not larger than or smaller than the preset threshold value, judging that the pixel point is not a corner point;
and when a second preset number of the differences are larger than or smaller than the preset threshold value, judging that the pixel point is a corner point.
Optionally, the constructing an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technique according to the image information and the corner points includes:
filtering the image information to obtain de-noised image information;
carrying out distortion removal processing on the de-noised image information to obtain target image information;
performing corner matching on the target image information according to the corners to obtain a matching result;
and splicing the target image information according to the matching result to obtain the initial three-dimensional point cloud map.
Optionally, the determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the recording process includes:
acquiring the movement speed of the robot and the time interval between two adjacent frames;
calculating the movement distance of the robot in the time interval of two adjacent frames according to the movement track, the movement speed and the time interval;
and sequentially calculating the space position information of the robot at each frame time node according to the time sequence according to the initial position information of the UWB module in the robot and the movement distance of the robot in the time interval of the two adjacent frames of images.
Optionally, the calculating the spatial position information of the UWB module in the user handle by using a triangulation method according to the spatial position information set and the distance includes:
taking the spatial position information of any one UWB module in the robot in any one frame of the spatial position information set as an origin, constructing a spatial coordinate system, and acquiring the origin coordinate of the origin;
acquiring, in the spatial coordinate system, the remaining spatial position information of the other UWB modules in the robot in that frame and the adjacent spatial position information of all UWB modules in the robot in a frame adjacent to that frame;
calculating, according to the spatial correspondence in the spatial coordinate system between the origin and both the remaining spatial position information and the adjacent spatial position information, the comparison coordinates in the spatial coordinate system of the remaining UWB modules in the robot in that frame and of all UWB modules in the robot in the adjacent frame;
constructing a spherical surface from the origin coordinate, from each comparison coordinate and from the distance between the UWB module corresponding to each coordinate and the UWB module in the user handle, and taking the coordinate of the point through which all the spherical surfaces pass as the handle coordinate of the UWB module in the user handle;
and calculating the spatial position information of the UWB module in the user handle according to the correspondence between the handle coordinate and the origin coordinate.
Optionally, said calculating a distance between a UWB module in the robot and a UWB module in the user handle according to the signal receiving time and the signal response time includes:
calculating the distance Z between the UWB module in the robot and the UWB module in the user handle by using the following distance measurement formula:
Z=X*(A-B)
wherein X is the speed of the electromagnetic wave signal emitted by the robot or the user handle; a is the signal reception time of the UWB module in the robot or the UWB module in the user handle; b is the signal response time of the UWB module in the robot or the UWB module in the user handle.
In order to solve the above problem, the present invention also provides a robot following device, the device including:
the three-dimensional map building module is used for acquiring image information of a target environment and initial position information of a UWB module in the robot, calculating angular points in the image information by using an angular point algorithm, and building an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technology according to the image information and the angular points;
the user handle positioning module is used for: when a robot following instruction is received, recording the target environment by using a camera module in the robot to obtain a motion video; screening target corner points from the corner points according to the image information of each frame of image in the motion video; calculating the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video; determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the recording process; acquiring, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set; acquiring the signal receiving time and signal response time of the UWB module in the robot and the UWB module in the user handle; calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time; and calculating the spatial position information of the UWB module in the user handle by triangulation according to the spatial position information set and the distance;
and the following path generating module is used for generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
In order to solve the above problem, the present invention also provides a robot comprising:
a memory storing at least one computer program; and
a processor executing the computer program stored in the memory to implement the robot following method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium having at least one computer program stored therein, the at least one computer program being executed by a processor in a robot to implement the robot following method described above.
In the embodiment of the invention, image information of a target environment and initial position information of a UWB module in a robot are acquired, corner points in the image information are calculated by using a corner point algorithm, and an initial three-dimensional point cloud map of the target environment is constructed by using a visual instant positioning and mapping technique according to the image information and the corner points. This provides the robot with a complete map of the target environment, helping it avoid obstacles while navigating and ensuring its safety while following. Secondly, when a robot following instruction is received, the target environment is recorded by a camera module in the robot to obtain a motion video, the motion trajectory of the robot is calculated from the motion video, and the spatial position information of the UWB module in the robot at each frame time node during recording is determined from the motion trajectory, which improves the positioning accuracy of the robot so that it can avoid obstacles and position itself accurately during a following task. Further, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video is acquired at a preset time interval within a preset time period to form a first spatial position information set, the signal receiving time and signal response time of the UWB module in the robot and the UWB module in the user handle are acquired, and the distance between the two UWB modules is calculated from the signal receiving time and signal response time. This makes it possible to judge whether the robot is within the limit range of the following task, so that the task can be completed smoothly, and calculating the distance between the UWB module in the robot and the UWB module in the user handle also improves the positioning precision of the robot at any moment. Finally, the spatial position information of the UWB module in the user handle is calculated by triangulation according to the spatial position information set and the distance, which improves the precision of the distance between the two UWB modules and makes it easy for the robot to find the position of the user handle and carry out the following task in the target environment. A following path of the robot is then generated according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle, so that the following task is completed while the robot both avoids obstacles and positions itself accurately, improving the following precision. Therefore, the robot following method, device, robot and computer-readable storage medium provided by the invention can improve the following accuracy of the robot.
Drawings
Fig. 1 is a schematic flow chart of a robot following method according to an embodiment of the present invention;
fig. 2 and fig. 3 are detailed implementation flowcharts of one step in a robot following method according to an embodiment of the present invention;
FIG. 4 is a block diagram of a robot follower according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of a robot for implementing a robot following method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment of the invention provides a robot following method. The robot following method is mainly executed by an intelligent robot; in other words, the method may be performed by software or hardware installed in the intelligent robot.
Referring to a schematic flow chart of a robot following method provided in an embodiment of the present invention shown in fig. 1, in an embodiment of the present invention, the robot following method includes the following steps:
s1, obtaining image information of a target environment and initial position information of a UWB module in a robot, calculating angular points in the image information by using an angular point algorithm, and constructing an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technology according to the image information and the angular points.
In this embodiment of the present invention, the target environment may be the working environment of the robot, that is, the environment in which the robot navigates while following a user handle. The image information may be pictures of the target environment taken by the robot, or pictures taken in advance and stored in a preset database. The UWB module may be a module that communicates using an ultra-wideband wireless carrier. The initial position information may be the starting position of the UWB module in the robot within the target environment. A corner point may be defined as a pixel that lies in a different region from enough of the pixels in its surrounding area.
In an optional embodiment of the invention, because the image shot by a monocular camera is two-dimensional and cannot provide the depth information of the target environment, the target environment may be shot by the binocular camera of the robot to obtain the image information of the target environment. The images obtained during shooting are then fused so that the depth value of each object can be calculated directly, yielding the three-dimensional information of the target environment and the initial position information of the UWB module in the robot, and improving the accuracy with which the target environment is captured.
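As an illustration of this step only, the sketch below (not taken from the patent) shows one common way of recovering depth values from a binocular image pair with OpenCV block matching, using depth = focal length x baseline / disparity; the focal length, baseline and file names are assumed placeholder values.

    import cv2
    import numpy as np

    FOCAL_PX = 700.0    # focal length in pixels (assumed calibration value)
    BASELINE_M = 0.06   # distance between the two cameras in metres (assumed)

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # metres per valid pixel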
According to the embodiment of the invention, the angular points in the image information are calculated by using an angular point algorithm, so that the motion track of the robot and the position of each angular point in the target environment are calculated, the smooth construction of the three-dimensional point cloud map is ensured, and the accuracy of the three-dimensional point cloud map is improved.
Further, as an alternative embodiment of the present invention, referring to fig. 2, the calculating the corner in the image information by using a corner algorithm includes:
s11, obtaining pixel information in the image information to obtain a pixel image;
s12, selecting any point pixel point in the pixel image as a circle center, and drawing a circle with a preset length as a radius to obtain a target circle;
s13, judging whether a first preset number of continuous comparison pixel points exist on the target circle;
s14, when a first preset number of continuous comparison pixel points do not exist on the target circle, judging that the pixel point is not a corner point;
s15, when a first preset number of continuous comparison pixel points exist on the target circle, calculating the difference between the brightness value of each comparison pixel point and the brightness value of the pixel point;
s16, judging whether a second preset number of the differences are larger than or smaller than a preset threshold value;
when a second preset number of the differences are not larger than or smaller than the preset threshold value, executing S14 to judge that the pixel point is not a corner point;
and S17, when a second preset number of the differences are larger than or smaller than the preset threshold value, judging that the pixel point is a corner point.
In the embodiment of the invention, the comparison pixel points are the pixel points covered by the edge of the target circle.
In the embodiment of the present invention, the pixel information includes a pixel value and a brightness value of each pixel point in the image. The preset length is generally the length of two pixels spaced from the pixel at the center of the circle.
In an optional embodiment of the present invention, a pixel point in the image information is first selected arbitrarily, and it is judged whether a target circle can be formed with this pixel point as the center and the preset length as the radius. When the target circle exists, the difference between the brightness value of each pixel point on the target circle and the brightness value of the center pixel point is calculated, and it is then judged whether a preset number of these differences are larger than or smaller than a preset threshold value. For example, if there are 16 pixel points on the target circle and the differences between the brightness values of 12 of them and the brightness value of the center pixel point are all larger than or smaller than the preset threshold value, the pixel point is judged to be a corner point.
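The segment test described above can be sketched as follows. This is only an illustrative implementation; the 16-point circle, the contiguous run length of 12 and the brightness threshold are assumed values consistent with the example, not parameters fixed by the patent.

    import numpy as np

    # 16 offsets on a circle of radius 3 around the candidate pixel
    CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

    def is_corner(gray, x, y, threshold=20, run_length=12):
        """Return True if at least run_length consecutive circle pixels are all
        brighter or all darker than the center pixel by more than threshold."""
        h, w = gray.shape
        if not (3 <= x < w - 3 and 3 <= y < h - 3):
            return False  # the target circle would fall outside the image
        center = int(gray[y, x])
        diffs = [int(gray[y + dy, x + dx]) - center for dx, dy in CIRCLE]
        diffs = diffs + diffs  # let a consecutive run wrap around the circle
        run_bright = run_dark = 0
        for d in diffs:
            run_bright = run_bright + 1 if d > threshold else 0
            run_dark = run_dark + 1 if d < -threshold else 0
            if run_bright >= run_length or run_dark >= run_length:
                return True
        return False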
According to the embodiment of the invention, an initial three-dimensional point cloud map of the target environment is constructed by utilizing a visual instant positioning and mapping technology according to the image information and the angular points, and a complete target environment map of the robot is provided, so that the robot is assisted to carry out obstacle avoidance operation in a navigation advancing process, and the safety of the robot when the robot moves along with a handle is ensured.
Further, as an optional embodiment of the present invention, the constructing an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technique according to the image information and the corner points includes:
filtering the image information to obtain de-noised image information;
carrying out distortion removal processing on the de-noised image information to obtain target image information;
according to the angular points, performing angular point matching on the target image information to obtain a matching result;
and splicing the target picture information according to the matching result to obtain an initial three-dimensional point cloud map.
In the embodiment of the present invention, the filtering process may be a filtering technique such as median filtering or gaussian filtering. The distortion removal process may be removing pincushion distortion or barrel distortion in the denoised image information.
In the optional embodiment of the invention, when corner matching is performed on the target image information, mismatches easily occur, which lead to image stitching errors and make the initial three-dimensional point cloud map insufficiently accurate.
Further, in another optional embodiment of the present invention, the rotation matrix and displacement vector between any two adjacent pieces of target image information and the depth of each pixel point in the target image information are calculated by using a geometric calculation formula according to the matching result, a point cloud (PointCloud) is then constructed, and the target image information is stitched into the initial three-dimensional point cloud map through a preset PCL point cloud library and the PointCloud.
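As a rough sketch of this stitching step (the patent mentions the PCL point cloud library; the NumPy version below, with assumed camera intrinsics and pose inputs, only illustrates the same idea), each pixel is back-projected with its depth and transformed by the estimated rotation R and translation t before being accumulated into the map.

    import numpy as np

    def frame_to_points(depth, rgb, K, R, t):
        """depth: HxW array in metres, rgb: HxWx3, K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        ys, xs = np.nonzero(depth > 0)
        z = depth[ys, xs]
        pts_cam = np.stack([(xs - cx) * z / fx, (ys - cy) * z / fy, z], axis=1)  # back-projection
        pts_map = pts_cam @ R.T + t          # transform into the map coordinate frame
        return np.hstack([pts_map, rgb[ys, xs]])  # XYZ plus colour per point

    # The initial three-dimensional point cloud map is then the concatenation of
    # frame_to_points(...) over all matched frames, e.g. np.vstack([...]).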
And S2, when a robot following instruction is received, recording the target environment by using a camera module in the robot to obtain a motion video.
In the embodiment of the invention, the robot following instruction can be a program instruction input by people or a preset instruction triggered under a certain condition. The camera module comprises a binocular camera and a sensor of the robot.
In an optional embodiment of the invention, the robot receives a robot following instruction in the following two cases: first, when the distance between the robot and the user handle exceeds the maximum limit distance, the robot triggers the robot following instruction to start following; second, when the user sends a following command to the robot, the robot receives the robot following instruction and starts following.
Further, as another optional embodiment of the present invention, when the robot receives a robot following instruction, the robot automatically starts the binocular camera to record a video of a target environment, and inputs a result of the video recording to the sensor to form motion video data, so that the visual odometer module in the robot can read the motion video, and calculate a motion trajectory of the robot according to the motion video.
And S3, screening target corner points from the corner points according to the image information of each frame of image in the motion video, and calculating the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video.
In this embodiment of the present invention, the motion video may be formed by multiple frames of images. The two adjacent images may be a first frame image and a second frame image in the motion video, or a second frame image and a third frame image, and so on.
In an optional embodiment of the invention, the image information of each frame of image in the motion video is compared with the image information of the initial three-dimensional point cloud map to determine the corner points contained in each frame of image in the motion video, and the target corner points matching the image information of each frame of image are then screened from the corner point parameters stored during the construction of the initial three-dimensional point cloud map.
According to the embodiment of the invention, the motion trail of the robot in the time period of any two adjacent images is calculated according to the target angular points in any two adjacent images in the motion video, so that the robot is ensured to normally advance in the process of completing the robot following command, and the robot following task is ensured to be successfully completed.
Further, as an optional embodiment of the present invention, referring to fig. 3, the calculating the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video includes:
s31, matching target corner points in any two adjacent frames of images in the motion video to obtain a matching result;
s32, calculating a rotation vector and a displacement vector between the two adjacent frames of images by using an epipolar geometric formula according to the matching result;
s33, carrying out nonlinear optimization on the rotation vector and the displacement vector by using a least square method to obtain a target rotation vector and a target displacement vector;
and S34, splicing any two adjacent frames of images according to the target rotation vector and the target displacement vector to obtain the motion trail of the robot.
In the embodiment of the present invention, the least squares method may be any one of calculation methods such as the Gauss-Newton method and the Levenberg-Marquardt method.
Further, in an optional embodiment of the present invention, the generation principle of the motion trajectory is similar to that of constructing the initial three-dimensional point cloud map by using the visual instant positioning and mapping technique, and details are therefore omitted here.
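A minimal sketch of steps S31-S32 is shown below (an assumption, not code given by the patent): the matched target corner coordinates of two adjacent frames and the camera intrinsic matrix K are used to estimate the essential matrix and recover the relative rotation and displacement with OpenCV.

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K):
        """pts1, pts2: Nx2 float arrays of matched target corners in two adjacent frames."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t  # rotation matrix and translation direction (scale is not observable from one pair)

    # The rotation and displacement can then be refined by least squares (e.g. Gauss-Newton
    # or Levenberg-Marquardt) over the reprojection error, as described in S33.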
And S4, determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the video recording process.
In the embodiment of the present invention, the time node of each frame may be a time stamp carried by each frame of image.
According to the embodiment of the invention, the spatial position information of the UWB module in the robot at each frame time node during recording is determined from the motion trajectory and the initial position information, and the motion trajectory of the robot is subdivided as a whole, so that the positioning accuracy of the robot is improved and the robot can position itself and avoid obstacles accurately during a following task.
Further, as an optional embodiment of the present invention, the determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the recording process includes:
acquiring the movement speed of the robot and the time interval between two adjacent frames;
calculating the movement distance of the robot in the time interval of two adjacent frames according to the movement track, the movement speed and the time interval;
and sequentially calculating the space position information of the robot at each frame time node according to the time sequence according to the initial position information of the UWB module in the robot and the movement distance of the robot in the time interval of the two adjacent frames of images.
In an optional embodiment of the present invention, the movement speed of the robot may be measured by a speed measurement tool, and the time interval between two adjacent frames in the motion video is determined by checking the timestamp information of each frame of image. The movement distance of the robot within the time interval between two adjacent frames is then calculated with a path calculation formula from the motion trajectory, the movement speed and the time interval, so as to determine the displacement of the robot. Finally, the spatial position information of the robot at each frame time node is calculated from the initial position information of the UWB module in the robot, which improves the accuracy of the robot's positioning.
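The per-frame bookkeeping described above can be sketched as follows; the fixed movement speed, the unit direction per interval and the frame interval dt are illustrative assumptions.

    import numpy as np

    def per_frame_positions(initial_pos, directions, speed, dt):
        """initial_pos: 3-vector of the UWB module's starting position; directions:
        one unit 3-vector per inter-frame interval along the motion trajectory;
        speed: movement speed in m/s; dt: time interval between adjacent frames in s."""
        positions = [np.asarray(initial_pos, dtype=float)]
        for d in directions:
            step = speed * dt * np.asarray(d, dtype=float)  # distance covered in one interval
            positions.append(positions[-1] + step)          # position at the next frame time node
        return np.stack(positions)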
S5, acquiring, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set, acquiring the signal receiving time and signal response time of the UWB module in the robot and the UWB module in a user handle, and calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time.
In the embodiment of the present invention, the preset time period may be a time period set manually, and is generally the whole video recording process. The preset time interval may be a time period artificially defined, and for convenience of calculation, the time interval between two adjacent frames in the motion video is generally taken.
In the embodiment of the invention, the robot and the user handle both generally comprise two or more UWB modules so as to ensure that the spatial position information of the user handle in the three-dimensional space can be accurately positioned.
According to the embodiment of the invention, the distance between the UWB module in the robot and the UWB module in the user handle is calculated according to the signal receiving time and the signal response time, and whether the robot is in the limit range in the robot following task is judged, so that the robot can smoothly complete the robot following task.
Further, as an optional embodiment of the present invention, the calculating a distance between a UWB module in the robot and a UWB module in the user handle according to the signal receiving time and the signal response time includes:
calculating the distance Z between the UWB module in the robot and the UWB module in the user handle by using the following distance measurement formula:
Z=X*(A-B)
wherein X is the speed of the electromagnetic wave signal emitted by the robot or the user handle, typically 3×10⁸ m/s; A is the signal reception time of the UWB module in the robot or the UWB module in the user handle; B is the signal response time of the UWB module in the robot or the UWB module in the user handle.
In an optional embodiment of the invention, the distance between the UWB module in the robot and the UWB module in the user handle at each frame time node in the motion video can be calculated by a TOF ranging method; further, in order to reduce timing errors, the double-sided two-way ranging variant of the TOF ranging method can be adopted.
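A direct transcription of the ranging formula above, together with one common form of the double-sided two-way ranging calculation mentioned as a way of reducing timing error, might look as follows; the double-sided formula is a standard UWB formulation assumed here, not one spelled out in the patent.

    SIGNAL_SPEED = 3.0e8  # m/s, propagation speed of the electromagnetic wave signal

    def tof_distance(receive_time, response_time, speed=SIGNAL_SPEED):
        """Z = X * (A - B): A is the signal reception time, B the signal response time."""
        return speed * (receive_time - response_time)

    def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2, speed=SIGNAL_SPEED):
        """Double-sided two-way ranging: two round trips cancel most clock offset error."""
        tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
            t_round1 + t_round2 + t_reply1 + t_reply2)
        return speed * tof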
And S6, calculating the spatial position information of the UWB module in the user handle by using a triangulation method according to the spatial position information set and the distance.
In the embodiment of the invention, the triangulation method can be a mathematical method which detects the target position at different positions by using 2 or more than 2 detectors and then determines the position and the distance of the target by using a triangle geometric principle.
According to the space position information set and the distance, the space position information of the UWB module in the user handle is calculated by utilizing a triangulation method, so that the accuracy of the distance between the UWB module in the robot and the UWB module in the user handle is improved, the robot can conveniently search the position of the user handle, and a following task is carried out in a target environment.
Further, as an optional embodiment of the present invention, the calculating the spatial position information of the UWB module in the user handle by using a triangulation method according to the spatial position information set and the distance includes:
taking the spatial position information of any one UWB module in the robot in any one frame of the spatial position information set as an origin, constructing a spatial coordinate system, and acquiring the origin coordinate of the origin;
acquiring, in the spatial coordinate system, the remaining spatial position information of the other UWB modules in the robot in that frame and the adjacent spatial position information of all UWB modules in the robot in a frame adjacent to that frame;
calculating, according to the spatial correspondence in the spatial coordinate system between the origin and both the remaining spatial position information and the adjacent spatial position information, the comparison coordinates in the spatial coordinate system of the remaining UWB modules in the robot in that frame and of all UWB modules in the robot in the adjacent frame;
constructing a spherical surface from the origin coordinate, from each comparison coordinate and from the distance between the UWB module corresponding to each coordinate and the UWB module in the user handle, and taking the coordinate of the point through which all the spherical surfaces pass as the handle coordinate of the UWB module in the user handle;
and calculating the spatial position information of the UWB module in the user handle according to the correspondence between the handle coordinate and the origin coordinate.
In this optional embodiment of the invention, because positioning by triangulation in three-dimensional space requires four or more spherical surfaces, at least two UWB modules are arranged in the robot, which improves the accuracy of the spatial position information of the UWB module in the user handle and thus the following accuracy of the robot.
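One way to realise the sphere-intersection step is sketched below (an assumption, since the patent does not give the numerical procedure): subtracting pairs of sphere equations removes the quadratic term, leaving a linear system that is solved by least squares for the handle coordinate.

    import numpy as np

    def trilaterate(anchors, distances):
        """anchors: Nx3 coordinates of UWB modules in the robot (origin plus comparison
        coordinates, N >= 4 and not coplanar); distances: measured distance from each
        anchor to the UWB module in the user handle. Returns the handle coordinate."""
        anchors = np.asarray(anchors, dtype=float)
        d = np.asarray(distances, dtype=float)
        p0, d0 = anchors[0], d[0]
        # |x - p_i|^2 = d_i^2 minus |x - p_0|^2 = d_0^2 gives a linear equation in x.
        A = 2.0 * (anchors[1:] - p0)
        b = d0 ** 2 - d[1:] ** 2 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
        handle, *_ = np.linalg.lstsq(A, b, rcond=None)
        return handle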
And S7, generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
In an optional embodiment of the invention, the robot has an obstacle avoidance function, so when it performs a following task, the following path needs to be planned within the initial three-dimensional point cloud map to ensure the safety of the robot and the positioning accuracy during following.
Further, in an optional embodiment of the present invention, the user handle may be movable, and when the spatial position information of the user handle changes, the robot may generate a new following path according to the initial three-dimensional point cloud map and the latest spatial position information of the UWB module in the user handle, so as to ensure that the robot can smoothly complete a robot following task.
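The patent does not fix a particular planner; as one possible sketch, the initial three-dimensional point cloud map could be projected onto a two-dimensional occupancy grid and an obstacle-free following path from the robot's position to the handle position found with a breadth-first search (grid resolution and the obstacle threshold are assumptions).

    from collections import deque

    def plan_follow_path(occupied, start, goal):
        """occupied: 2-D boolean grid (True = obstacle); start, goal: (row, col) cells.
        Returns the list of grid cells from start to goal, or [] if none exists."""
        rows, cols = len(occupied), len(occupied[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path, node = [], goal
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and not occupied[nr][nc] and nxt not in prev:
                    prev[nxt] = cell
                    queue.append(nxt)
        return []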
In the embodiment of the invention, image information of a target environment and initial position information of a UWB module in a robot are acquired, corner points in the image information are calculated by using a corner point algorithm, and an initial three-dimensional point cloud map of the target environment is constructed by using a visual instant positioning and mapping technique according to the image information and the corner points. This provides the robot with a complete map of the target environment, helping it avoid obstacles while navigating and ensuring its safety while following. Secondly, when a robot following instruction is received, the target environment is recorded by a camera module in the robot to obtain a motion video, the motion trajectory of the robot is calculated from the motion video, and the spatial position information of the UWB module in the robot at each frame time node during recording is determined from the motion trajectory, which improves the positioning accuracy of the robot so that it can avoid obstacles and position itself accurately during a following task. Further, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video is acquired at a preset time interval within a preset time period to form a first spatial position information set, the signal receiving time and signal response time of the UWB module in the robot and the UWB module in the user handle are acquired, and the distance between the two UWB modules is calculated from the signal receiving time and signal response time. This makes it possible to judge whether the robot is within the limit range of the following task, so that the task can be completed smoothly, and calculating the distance between the UWB module in the robot and the UWB module in the user handle also improves the positioning precision of the robot at any moment. Finally, the spatial position information of the UWB module in the user handle is calculated by triangulation according to the spatial position information set and the distance, which improves the precision of the distance between the two UWB modules and makes it easy for the robot to find the position of the user handle and carry out the following task in the target environment. A following path of the robot is then generated according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle, so that the following task is completed while the robot both avoids obstacles and positions itself accurately, improving the following precision. Therefore, the robot following method provided by the invention can improve the following accuracy of the robot.
Fig. 4 is a functional block diagram of the robot follower according to the present invention.
The robot following device 100 according to the present invention may be installed in a robot. According to the functions implemented, the robot following device 100 may include a three-dimensional map building module 101, a user handle positioning module 102 and a following path generating module 103. A module according to the present invention, which may also be referred to as a unit, is a series of computer program segments that can be executed by the processor of the robot, can perform a fixed function, and are stored in the memory of the robot.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the three-dimensional map building module 101 is configured to obtain image information of a target environment and initial position information of a UWB module in a robot, calculate corners in the image information by using a corner algorithm, and build an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and map building technique according to the image information and the corners.
The user handle positioning module 102 is configured to: when a robot following instruction is received, record the target environment by using a camera module in the robot to obtain a motion video; screen target corner points from the corner points according to the image information of each frame of image in the motion video; calculate the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video; determine, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the recording process; acquire, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set; acquire the signal receiving time and signal response time of the UWB module in the robot and the UWB module in the user handle; calculate the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time; and calculate the spatial position information of the UWB module in the user handle by triangulation according to the spatial position information set and the distance.
The following path generating module 103 is configured to generate a following path of the robot according to the initial three-dimensional point cloud map and spatial position information of the UWB module in the user handle.
In detail, when the modules in the robot following device 100 according to the embodiment of the present invention are used, the same technical means as the robot following method described in fig. 1 to 3 are adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of a robot for implementing the robot following method according to the present invention.
The robot may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a robot follower program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal memory unit of the robot, for example a mobile hard disk of the robot. The memory 11 may also be an external storage device of the robot in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the robot. Further, the memory 11 may also include both an internal storage unit of the robot and an external storage device. The memory 11 may be used not only to store application software installed in the robot and various types of data, such as codes of robot following programs, etc., but also to temporarily store data that has been output or will be output.
The processor 10 may in some embodiments be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors and combinations of various control chips. The processor 10 is the control unit of the robot: it connects the various components of the entire robot by using various interfaces and lines, and executes the functions of the robot and processes data by running or executing the programs or modules stored in the memory 11 (for example, the robot following program) and calling the data stored in the memory 11.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The communication bus 12 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Fig. 5 shows only a robot with components, and those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the robot, and may include fewer or more components than shown, or some components in combination, or a different arrangement of components.
For example, although not shown, the robot may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The robot may further include various sensors, a bluetooth module, a Wi-Fi module, etc., which are not described herein again.
Optionally, the communication interface 13 may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which is generally used to establish a communication connection between the robot and another robot.
Optionally, the communication interface 13 may further include a user interface, which may be a display (Display) or an input unit such as a keyboard (Keyboard); optionally, it may also be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used to display information processed in the robot and to display a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The robot following program stored in the memory 11 of the robot is a combination of a plurality of computer programs which, when run in the processor 10, enable:
acquiring image information of a target environment and initial position information of a UWB module in a robot, calculating angular points in the image information by using an angular point algorithm, and constructing an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technology according to the image information and the angular points;
when a robot following instruction is received, a camera module in the robot is used for recording the target environment to obtain a motion video;
screening target corner points from the corner points according to the image information of each frame of image in the motion video, and calculating the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video;
determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the video recording process;
acquiring, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set, acquiring the signal receiving time and signal response time of the UWB module in the robot and the UWB module in a user handle, and calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time;
according to the spatial position information set and the distance, calculating spatial position information of a UWB module in the user handle by utilizing a triangulation method;
and generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
Specifically, the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the modules/units integrated in the robot, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. The computer readable medium may be non-volatile or volatile. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer memory, Read-Only Memory (ROM).
Embodiments of the present invention may also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor of a robot, the computer program may implement:
acquiring image information of a target environment and initial position information of a UWB module in a robot, calculating angular points in the image information by using an angular point algorithm, and constructing an initial three-dimensional point cloud map of the target environment by using a visual instant positioning and mapping technology according to the image information and the angular points;
when a robot following instruction is received, a camera module in the robot is used for recording the target environment to obtain a motion video;
screening target corner points from the corner points according to the image information of each frame of image in the motion video, and calculating the motion trajectory of the robot within the time between any two adjacent frames of images according to the target corner points in those two adjacent frames of images in the motion video;
determining, according to the motion trajectory and the initial position information, the spatial position information of the UWB module in the robot at each frame time node during the video recording process;
acquiring, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set, acquiring the signal receiving time and signal response time of the UWB module in the robot and the UWB module in a user handle, and calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time;
according to the spatial position information set and the distance, calculating spatial position information of a UWB module in the user handle by utilizing a triangulation method;
and generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided by the present invention, it should be understood that the disclosed robot, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the modules is only one kind of logical functional division, and other division manners may be adopted in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names only and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of those technical solutions.

Claims (10)

1. A robot following method, characterized in that the method comprises:
acquiring image information of a target environment and initial position information of a UWB module in a robot, calculating corner points in the image information by using a corner point algorithm, and constructing an initial three-dimensional point cloud map of the target environment by using a visual simultaneous localization and mapping (SLAM) technique according to the image information and the corner points;
when a robot following instruction is received, recording the target environment by using a camera module in the robot to obtain a motion video;
screening target corner points from the corner points according to the image information of each frame of image in the motion video, and calculating the motion trajectory of the robot within the time of any two adjacent frames of images according to the target corner points in any two adjacent frames of images in the motion video;
according to the motion trajectory and the initial position information, determining the spatial position information of the UWB module in the robot at each frame time node during video recording;
acquiring, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set, acquiring the signal receiving time and the signal response time of the UWB module in the robot and a UWB module in a user handle, and calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time;
according to the spatial position information set and the distance, calculating the spatial position information of the UWB module in the user handle by using a triangulation method;
and generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
2. The robot following method according to claim 1, wherein the calculating the motion trajectory of the robot within the time of any two adjacent frames of images according to the target corner points in any two adjacent frames of images in the motion video comprises:
matching the target corner points in any two adjacent frames of images in the motion video to obtain a matching result;
calculating a rotation vector and a displacement vector between any two adjacent frames of images by using an epipolar geometry formula according to the matching result;
performing nonlinear optimization on the rotation vector and the displacement vector by using a least square method to obtain a target rotation vector and a target displacement vector;
and stitching any two adjacent frames of images according to the target rotation vector and the target displacement vector to obtain the motion trajectory of the robot.
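As a non-limiting illustration of this claim, the Python sketch below estimates the rotation and displacement between two adjacent frames from already matched target corner points using OpenCV's essential-matrix routines; the camera intrinsic matrix K is an assumed input, and the nonlinear least-squares refinement and the stitching of successive relative motions into the full trajectory are not shown.

    import cv2
    import numpy as np

    def relative_motion(pts_prev, pts_curr, K):
        """Rotation R and (unit-scale) displacement t between two adjacent frames,
        estimated from matched target corner points via the essential matrix
        (epipolar geometry).  pts_prev and pts_curr are Nx2 float arrays of matched
        pixel coordinates; K is the 3x3 camera intrinsic matrix."""
        E, inlier_mask = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                              method=cv2.RANSAC, threshold=1.0)
        # recoverPose keeps the decomposition of E whose triangulated points lie
        # in front of both camera poses.
        _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inlier_mask)
        return R, t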
3. The robot following method according to claim 1, wherein the calculating the corner points in the image information using a corner point algorithm comprises:
acquiring pixel information in the image information to obtain a pixel image;
selecting any pixel point in the pixel image as a circle center, and drawing a circle with a preset length as the radius to obtain a target circle;
judging whether a first preset number of continuous contrast pixel points exist on the target circle;
when the first preset number of continuous contrast pixel points do not exist on the target circle, determining that the pixel point is not a corner point;
when the first preset number of continuous contrast pixel points exist on the target circle, calculating the difference values between the brightness values of the contrast pixel points and the brightness value of the pixel point;
judging whether a second preset number of the difference values are larger than or smaller than a preset threshold value;
when a second preset number of the difference values larger than or smaller than the preset threshold value do not exist, determining that the pixel point is not a corner point;
and when a second preset number of the difference values are larger than or smaller than the preset threshold value, determining that the pixel point is a corner point.
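As a non-limiting illustration, the sketch below gives one possible reading of this two-stage test, in the spirit of the FAST corner detector; the 16-pixel circle of radius 3 and the concrete values of the first preset number, the second preset number and the brightness threshold are assumptions, since the claim only requires them to be preset.

    import numpy as np

    # Offsets of a 16-pixel circle of radius 3 around the candidate pixel.
    CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
              (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

    def _has_contiguous_run(mask, length):
        # Circular run test: double the ring so that runs may wrap around.
        run = 0
        for flag in np.concatenate([mask, mask]):
            run = run + 1 if flag else 0
            if run >= length:
                return True
        return False

    def is_corner(gray, y, x, first_preset=9, second_preset=12, threshold=20):
        """The candidate pixel is treated as a corner point only if (1) a first preset
        number of continuous contrast pixel points (all brighter or all darker than the
        centre) exist on the target circle and (2) at least a second preset number of
        the brightness differences exceed the preset threshold."""
        h, w = gray.shape
        if not (3 <= y < h - 3 and 3 <= x < w - 3):
            return False                               # circle would leave the image
        center = int(gray[y, x])
        ring = np.array([int(gray[y + dy, x + dx]) for dx, dy in CIRCLE])
        diffs = ring - center
        brighter, darker = diffs > threshold, diffs < -threshold
        if not (_has_contiguous_run(brighter, first_preset) or
                _has_contiguous_run(darker, first_preset)):
            return False                               # first judgment: not a corner point
        return int(np.count_nonzero(brighter | darker)) >= second_preset   # second judgment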
4. The robot following method according to claim 1, wherein the constructing an initial three-dimensional point cloud map of the target environment by using a visual simultaneous localization and mapping (SLAM) technique according to the image information and the corner points comprises:
filtering the image information to obtain de-noised image information;
carrying out distortion removal processing on the de-noised image information to obtain target image information;
according to the angular points, performing angular point matching on the target image information to obtain a matching result;
and stitching the target image information according to the matching result to obtain the initial three-dimensional point cloud map.
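As a non-limiting illustration of the filtering, distortion removal and corner matching steps, the sketch below uses OpenCV; the Gaussian filter, the ORB descriptors used to match the detected corner points, and the camera matrix K and distortion coefficients are assumptions, and the final stitching of matched corners into the three-dimensional point cloud is not shown.

    import cv2

    def preprocess_frame(frame_bgr, K, dist_coeffs):
        """Denoise and then undistort one frame (the first two steps of the claim).
        Gaussian blur is an assumed choice of filter; the claim only says 'filtering'."""
        denoised = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
        return cv2.undistort(denoised, K, dist_coeffs)

    def match_corners(img_a, img_b, corners_a, corners_b):
        """Describe the detected corner points with ORB descriptors (an assumed choice;
        the claim does not name a descriptor) and match them between two target images,
        given the corner pixel coordinates as lists of (x, y) pairs."""
        orb = cv2.ORB_create()
        kps_a = [cv2.KeyPoint(float(x), float(y), 7) for x, y in corners_a]
        kps_b = [cv2.KeyPoint(float(x), float(y), 7) for x, y in corners_b]
        kps_a, des_a = orb.compute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), kps_a)
        kps_b, des_b = orb.compute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), kps_b)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        return matcher.match(des_a, des_b)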
5. The robot following method according to claim 1, wherein the determining the spatial position information of the UWB module in the robot at each frame time node during video recording according to the motion trajectory and the initial position information comprises:
acquiring the movement speed of the robot and the time interval between two adjacent frames;
calculating the movement distance of the robot in the time interval of two adjacent frames according to the movement track, the movement speed and the time interval;
and sequentially calculating, in time order, the spatial position information of the robot at each frame time node according to the initial position information of the UWB module in the robot and the movement distance of the robot in the time interval of the two adjacent frames of images.
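As a non-limiting illustration, the dead-reckoning step of this claim can be written in a few lines of Python; representing the motion trajectory as one unit direction vector per frame step is an assumption made for the example.

    import numpy as np

    def per_frame_positions(initial_position, headings, speed, frame_interval):
        """Positions of the UWB module in the robot at each frame time node:
        distance per frame = movement speed * time interval between two adjacent
        frames, accumulated along the motion trajectory from the initial position."""
        positions = [np.asarray(initial_position, dtype=float)]
        step = speed * frame_interval
        for h in headings:
            h = np.asarray(h, dtype=float)
            positions.append(positions[-1] + step * h / np.linalg.norm(h))
        return np.vstack(positions)

    # Example: 0.5 m/s, 30 frames per second, robot heading along +x and then +y.
    traj = per_frame_positions([0.0, 0.0, 0.0],
                               [[1, 0, 0], [1, 0, 0], [0, 1, 0]], 0.5, 1 / 30)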
6. The robot following method according to claim 1, wherein the calculating spatial position information of the UWB module in the user handle by triangulation based on the set of spatial position information and the distance comprises:
taking the spatial position information of any UWB module in the robot in any frame of the spatial position information set as an origin, constructing a spatial coordinate system, and acquiring the origin coordinate of the origin;
acquiring remaining spatial position information of the other UWB modules in the robot in the frame within the spatial coordinate system, and adjacent spatial position information of all UWB modules in the robot in an adjacent frame of the frame;
calculating, according to the spatial correspondence of the remaining spatial position information and the adjacent spatial position information to the origin in the spatial coordinate system, the contrast coordinates in the spatial coordinate system of the other UWB modules in the robot in the frame and of all UWB modules in the robot in the adjacent frame of the frame;
constructing a spherical surface from each of the origin coordinate and the contrast coordinates together with the distance between the UWB module in the robot and the UWB module in the user handle corresponding to that coordinate, and taking the coordinate of the point through which all the spherical surfaces pass as the handle coordinate of the UWB module in the user handle;
and calculating the spatial position information of the UWB module in the user handle according to the corresponding relation between the handle coordinate and the origin coordinate.
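As a non-limiting illustration, the sphere-intersection step of this claim can be computed numerically as below; the claim intersects the spherical surfaces geometrically, and the linearized least-squares form used here is an implementation assumption.

    import numpy as np

    def locate_handle(centers, distances):
        """Coordinate of the UWB module in the user handle, given the origin/contrast
        coordinates of the robot's UWB module ('centers', one per frame or module, in
        the constructed spatial coordinate system) and the measured distances from
        each of them to the handle."""
        centers = np.asarray(centers, dtype=float)
        d = np.asarray(distances, dtype=float)
        c0, d0 = centers[0], d[0]
        # Subtracting the first sphere equation from the others removes the quadratic term:
        #   2 (c_i - c_0) . p = (d_0^2 - d_i^2) + (|c_i|^2 - |c_0|^2)
        A = 2.0 * (centers[1:] - c0)
        b = (d0 ** 2 - d[1:] ** 2) + (np.sum(centers[1:] ** 2, axis=1) - np.sum(c0 ** 2))
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return p

    # Example: four known coordinates and exact distances to the point (1, 2, 0.5).
    true_p = np.array([1.0, 2.0, 0.5])
    cs = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]], dtype=float)
    print(locate_handle(cs, np.linalg.norm(cs - true_p, axis=1)))   # ~[1.  2.  0.5]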
7. The robot following method according to claim 1, wherein the calculating the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time comprises:
calculating the distance Z between the UWB module in the robot and the UWB module in the user handle by using the following ranging formula:
Z = X * (A - B)
wherein X is the propagation speed of the electromagnetic wave signal emitted by the robot or the user handle; A is the signal receiving time of the UWB module in the robot or the UWB module in the user handle; and B is the signal response time of the UWB module in the robot or the UWB module in the user handle.
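As a non-limiting illustration, the ranging formula of this claim transcribes directly into Python as below; the numeric example and the use of the speed of light for X are assumptions for illustration only.

    SIGNAL_SPEED = 299_792_458.0  # m/s, speed of the electromagnetic wave signal

    def uwb_distance(reception_time, response_time, signal_speed=SIGNAL_SPEED):
        """Distance Z between the UWB module in the robot and the UWB module in the
        user handle, computed as Z = X * (A - B)."""
        return signal_speed * (reception_time - response_time)

    # Example: a 10 ns gap between reception time A and response time B gives ~3 m.
    print(uwb_distance(1.000_000_050, 1.000_000_040))   # approximately 2.998 m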
8. A robot follower device, the device comprising:
the three-dimensional map building module is used for acquiring image information of a target environment and initial position information of a UWB module in the robot, calculating corner points in the image information by using a corner point algorithm, and constructing an initial three-dimensional point cloud map of the target environment by using a visual simultaneous localization and mapping (SLAM) technique according to the image information and the corner points;
a user handle positioning module, configured to: record the target environment by using a camera module in the robot to obtain a motion video when a robot following instruction is received; screen target corner points from the corner points according to the image information of each frame of image in the motion video; calculate the motion trajectory of the robot within the time of any two adjacent frames of images according to the target corner points in any two adjacent frames of images in the motion video; determine the spatial position information of the UWB module in the robot at each frame time node during video recording according to the motion trajectory and the initial position information; acquire, within a preset time period and at a preset time interval, the spatial position information of the UWB module in the robot for all adjacent frames of the motion video to form a first spatial position information set; acquire the signal receiving time and the signal response time of the UWB module in the robot and a UWB module in a user handle, and calculate the distance between the UWB module in the robot and the UWB module in the user handle according to the signal receiving time and the signal response time; and calculate the spatial position information of the UWB module in the user handle by using a triangulation method according to the spatial position information set and the distance;
and the following path generating module is used for generating a following path of the robot according to the initial three-dimensional point cloud map and the spatial position information of the UWB module in the user handle.
9. A robot, characterized in that the robot comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores computer program instructions executable by the at least one processor to cause the at least one processor to perform the robot following method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the robot following method according to any one of claims 1 to 7.
CN202210893035.7A 2022-07-27 2022-07-27 Robot following method, device, robot and computer readable storage medium Pending CN115366097A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210893035.7A CN115366097A (en) 2022-07-27 2022-07-27 Robot following method, device, robot and computer readable storage medium
PCT/CN2022/128354 WO2024021340A1 (en) 2022-07-27 2022-10-28 Robot following method and apparatus, and robot and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210893035.7A CN115366097A (en) 2022-07-27 2022-07-27 Robot following method, device, robot and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115366097A true CN115366097A (en) 2022-11-22

Family

ID=84064658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210893035.7A Pending CN115366097A (en) 2022-07-27 2022-07-27 Robot following method, device, robot and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN115366097A (en)
WO (1) WO2024021340A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101907548B1 (en) * 2016-12-23 2018-10-12 한국과학기술연구원 Moving and searching method of mobile robot for following human
CN109682388B (en) * 2018-12-21 2020-12-25 北京智行者科技有限公司 Method for determining following path
CN111198496A (en) * 2020-01-03 2020-05-26 浙江大学 Target following robot and following method
US20220193907A1 (en) * 2020-12-22 2022-06-23 X Development Llc Robot planning
CN113910224B (en) * 2021-09-30 2023-07-21 达闼科技(北京)有限公司 Robot following method and device and electronic equipment
CN114326732A (en) * 2021-12-28 2022-04-12 无锡笠泽智能科技有限公司 Robot autonomous following system and autonomous following control method

Also Published As

Publication number Publication date
WO2024021340A1 (en) 2024-02-01

Similar Documents

Publication Publication Date Title
CN112652016B (en) Point cloud prediction model generation method, pose estimation method and pose estimation device
US10086955B2 (en) Pattern-based camera pose estimation system
WO2022052660A1 (en) Warehousing robot localization and mapping methods, robot, and storage medium
CN108810473B (en) Method and system for realizing GPS mapping camera picture coordinate on mobile platform
US10451403B2 (en) Structure-based camera pose estimation system
CN112258567A (en) Visual positioning method and device for object grabbing point, storage medium and electronic equipment
US9858669B2 (en) Optimized camera pose estimation system
CN110058591A (en) A kind of AGV system based on laser radar Yu depth camera hybrid navigation
CN110597265A (en) Recharging method and device for sweeping robot
CN110942474B (en) Robot target tracking method, device and storage medium
CN115205128A (en) Depth camera temperature drift correction method, system, equipment and medium based on structured light
CN111353453A (en) Obstacle detection method and apparatus for vehicle
JP2017004228A (en) Method, device, and program for trajectory estimation
CN112967340A (en) Simultaneous positioning and map construction method and device, electronic equipment and storage medium
EP4332631A1 (en) Global optimization methods for mobile coordinate scanners
CN112017202B (en) Point cloud labeling method, device and system
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
CN110853098A (en) Robot positioning method, device, equipment and storage medium
CN115366097A (en) Robot following method, device, robot and computer readable storage medium
US9824455B1 (en) Detecting foreground regions in video frames
US9842402B1 (en) Detecting foreground regions in panoramic video frames
CN115240140A (en) Equipment installation progress monitoring method and system based on image recognition
Araujo et al. Life cycle of a slam system: Implementation, evaluation and port to the project tango device
CN113513983A (en) Precision detection method, device, electronic equipment and medium
CN112614181A (en) Robot positioning method and device based on highlight target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination