CN113313764B - Positioning method, positioning device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113313764B
CN113313764B (granted from application CN202110594737.0A)
Authority
CN
China
Prior art keywords
robot
pose
obstacle
positioning
point cloud
Prior art date
Legal status
Active
Application number
CN202110594737.0A
Other languages
Chinese (zh)
Other versions
CN113313764A
Inventor
宋乐
郭鑫
李国林
谭浩轩
王世魏
陈侃
霍峰
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202110594737.0A
Publication of CN113313764A
Application granted
Publication of CN113313764B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a positioning method, a positioning device, electronic equipment and a storage medium, wherein the positioning method comprises the following steps: collecting the current pose of the robot, depth data of at least one preset direction and infrared data of at least one preset direction; extracting at least one edge point from the infrared data, and converting each edge point into a characteristic point cloud according to the depth data; determining obstacle scores of the characteristic point clouds in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds; and optimizing the current pose according to the obstacle score to determine the positioning information of the robot. According to the method, the obstacle score of the edge point of the infrared data in the preset global grid map is determined, and the pose of the robot is optimized by using the obstacle score, so that the accuracy of the positioning information of the robot is improved, the service quality of the robot can be enhanced, and the use experience of a user is improved.

Description

Positioning method, positioning device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of automatic control, in particular to a positioning method, a positioning device, electronic equipment and a storage medium.
Background
With the development of automatic control technology, mobile service robots are increasingly used in industrial production and commercial services. An important working premise of a mobile service robot is the accurate acquisition of position information. A common prior-art method for acquiring position information is for the mobile service robot to rely on laser radar sensing. Although laser radar offers positioning precision, interference resistance and other capabilities, the mobile service robot often operates in complex scenes that limit the application of laser radar. For example, in a scene with dense pedestrian flow, the inter-frame transformation of the features identified by the laser radar is large, so the pose determined by the mobile service robot has a large error; moreover, the field of view of the laser radar is blocked by people, so the data acquired by the laser radar is limited. The mobile service robot therefore needs a high-precision positioning method when running in scenes with dense pedestrian flow.
Disclosure of Invention
The invention provides a positioning method, a positioning device, electronic equipment and a storage medium, which are used for realizing accurate positioning in scenes with dense pedestrian flow, improving the accuracy of acquiring the position information of the robot, improving the movement efficiency of the robot, enhancing the service quality of the robot, and helping to improve the use experience of users.
In a first aspect, an embodiment of the present invention provides a positioning method, including:
collecting the current pose of the robot, depth data of at least one preset direction and infrared data of at least one preset direction;
extracting at least one edge point from the infrared data, and converting each edge point into a characteristic point cloud according to the depth data;
determining obstacle scores of the characteristic point clouds in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds;
and optimizing the current pose according to the obstacle score to determine the positioning information of the robot.
In a second aspect, an embodiment of the present invention further provides a positioning device, including:
the data acquisition module is used for acquiring the current pose of the robot, depth data of at least one preset direction and infrared data of at least one preset direction;
the characteristic point cloud module is used for extracting at least one edge point from the infrared data and converting each edge point into characteristic point cloud according to the depth data;
the obstacle scoring module is used for determining obstacle scores of the characteristic point clouds in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds;
And the positioning determining module is used for optimizing the current pose according to the obstacle score so as to determine the positioning information of the robot.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the positioning method as described in any of the embodiments of the present invention.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a positioning method according to any of the embodiments of the present invention.
According to the embodiment of the invention, the current pose of the robot and the depth data and the infrared data in the preset direction are obtained, at least one edge point is extracted from the infrared data, the feature point cloud is formed after each edge point is processed according to the depth data, the obstacle score of the feature point cloud in the preset global grid map is determined, the current pose is optimized by using the obstacle score to obtain the positioning information of the robot, the accurate obtaining of the position information of the robot is realized, the influence of the complex environment on the pose determination is reduced, the service quality of the robot can be enhanced, and the use experience of a user is improved.
Drawings
FIG. 1 is a flow chart of a positioning method according to a first embodiment of the present invention;
FIG. 2 is an exemplary diagram of a pose provided by a first embodiment of the present invention;
FIG. 3 is an exemplary diagram of a preset global grid map provided in accordance with a first embodiment of the present invention;
FIG. 4 is a flowchart of another positioning method according to the second embodiment of the present invention;
FIG. 5 is an exemplary diagram of a coordinate transformation provided in accordance with a second embodiment of the present invention;
FIG. 6 is a flow chart of another positioning method according to a third embodiment of the present invention;
FIG. 7 is a diagram illustrating a positioning method according to a third embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a positioning device according to a fourth embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings, and furthermore, embodiments of the present invention and features in the embodiments may be combined with each other without conflict.
The service quality of a robot mainly depends on accurate position information. At present, most mobile robots obtain position information through measurement by a laser radar sensor, but this is limited by the scene; for example, when there is heavy pedestrian flow in the environment, the features measured by the laser radar change greatly, so accurate positioning cannot be achieved. Aiming at these problems, the technical scheme of the application builds the map and performs positioning from the features of the indoor ceiling, which are basically unchanged over a short time, thereby improving the positioning accuracy of the robot.
Example 1
Fig. 1 is a flowchart of a positioning method provided by an embodiment of the present application. The embodiment is applicable to robot positioning in scenes with dense pedestrian flow. The method may be performed by a positioning device, and the device may be implemented in hardware and/or software. Referring to fig. 1, the positioning method provided by the embodiment of the present application specifically includes the following steps:
step 110, collecting the current pose of the robot, at least one depth data in a preset direction and at least one infrared data in the preset direction.
The current pose may be information representing the current position and state of the robot, and may include position coordinates in a world coordinate system, the robot direction, and the like. Fig. 2 is an exemplary diagram of a pose provided in the first embodiment of the present application. Referring to fig. 2, in the embodiment of the present application, the current pose of the robot may include 3 unknowns, namely x, y and yaw, where x represents the abscissa of the robot in the world coordinate system, y represents the ordinate in the world coordinate system, and yaw represents the included angle between the direction of the robot and the X axis of the world coordinate system. The preset direction may be a preset data acquisition direction, may be set by a user or a service provider, and may be any direction in space, for example, the vertical direction of the robot, the direction 45 degrees above horizontal, and the like. The depth data may be data reflecting the distance from an object to the robot, which may be acquired by a sensor provided on the robot. The infrared data may be data acquired through an infrared sensor; the infrared data may be generated by the infrared sensor imaging an object in a preset direction of the robot, and may represent the distance between the robot and the object.
In the embodiment of the invention, the current pose can be acquired by using a sensor arranged on the robot, for example, the current pose of the robot can be determined by acquiring the moving distance by using inertial navigation or a displacement sensor. The depth data and the infrared data may be acquired in a preset direction Of the robot, and for example, the depth data Of the obstacle may be acquired in a top direction Of the robot using a Time Of Flight (TOF) camera. It can be appreciated that the robot can collect depth data and infrared data in a plurality of preset directions to further improve the accuracy of robot positioning.
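For illustration, the three pose unknowns described above can be carried in one small structure. A minimal Python sketch (the Pose class and its heading helper are expository assumptions, not part of the patent):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Current pose: abscissa x and ordinate y in the world coordinate
    system, and yaw, the angle between the robot direction and the X axis."""
    x: float
    y: float
    yaw: float  # radians

    def heading(self):
        # Unit vector of the robot direction implied by yaw.
        return (math.cos(self.yaw), math.sin(self.yaw))

pose = Pose(x=1.0, y=2.0, yaw=math.pi / 2)  # facing the world +Y axis
```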
Step 120, extracting at least one edge point from the infrared data, and converting each edge point into a feature point cloud according to the depth data.
The edge points may be position points located at edge positions in an image formed from the infrared data, and the edge points may be detected in the infrared data by means of differential edge detection, Roberts operator edge detection, Sobel edge detection, Laplacian edge detection, Prewitt operator edge detection, and the like.
Specifically, edge points may be extracted from the infrared data according to the edge detection method, and each edge point may be converted from two-dimensional coordinates to three-dimensional coordinates according to the depth values of the depth data; for example, the depth value corresponding to each edge point may be extracted from the depth data and used as the third dimension of the three-dimensional coordinates. The converted edge points may be formed into a feature point cloud.
Step 130, determining obstacle scores of the feature point cloud in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds.
The obstacle score may be the total probability score for encountering obstacle grids when each edge point in the feature point cloud is projected into the preset global grid map. The preset global grid map may be formed from historical point clouds and may reflect the situation of the space where the robot is located; it may include one or more grids, and each grid may store a probability value. The preset global grid map may be gradually perfected during the movement of the robot. The obstacle score may be the sum of the probability values of the edge points located at obstacle positions in the preset global grid map. Fig. 3 is an exemplary diagram of the preset global grid map provided in the first embodiment of the present invention. Referring to fig. 3, the preset global grid map may include three parts: an unknown area, an obstacle area and a non-obstacle area. The obstacle score may be determined by summing the probability values of the grids in the obstacle area to which the points of the feature point cloud are mapped.
In the embodiment of the invention, each edge point in the characteristic point cloud can be mapped into a preset global grid map one by one, the probability value of each edge point in the corresponding grid is determined, and the probability value of the edge point in the obstacle grid can be counted as the obstacle score of the characteristic point cloud in the preset global grid map.
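The lookup-and-sum just described can be sketched in a few lines of Python. This is illustrative only; the dictionary-based map, the cell labels and the resolution are assumed stand-ins for the patent's actual grid structure:

```python
OBSTACLE, FREE, UNKNOWN = "obstacle", "free", "unknown"

def obstacle_score(feature_cloud, grid_map, resolution=0.05):
    """Sum the probability values of obstacle grids hit by the edge
    points of the feature cloud; non-obstacle hits contribute nothing."""
    score = 0.0
    for wx, wy in feature_cloud:                 # edge points, world frame
        cell = (int(wy / resolution), int(wx / resolution))
        kind, prob = grid_map.get(cell, (UNKNOWN, 0.0))
        if kind == OBSTACLE:
            score += prob
    return score

grid = {(0, 0): (OBSTACLE, 0.9), (0, 1): (FREE, 0.1)}
cloud = [(0.01, 0.01), (0.06, 0.01)]  # hits cells (0, 0) and (0, 1)
```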
Step 140, optimizing the current pose according to the obstacle score to determine the positioning information of the robot.
The positioning information may reflect the position information of the robot at the current moment, and may include coordinates of the robot in a world coordinate system and an included angle between the robot and an X-axis of the world coordinate system.
Specifically, the obstacle score of the feature point cloud may be used as a constraint on the current pose; the current pose may be optimized based on the obstacle score, and the optimized current pose may be used as the positioning information of the robot. It can be understood that the manner of optimizing the current pose may be nonlinear least squares optimization, Lagrangian multiplier optimization, and the like.
According to the embodiment of the invention, the current pose of the robot and the depth data and the infrared data in the preset direction are obtained, at least one edge point is extracted from the infrared data, the feature point cloud is formed after each edge point is processed according to the depth data, the obstacle score of the feature point cloud in the preset global grid map is determined, the current pose is optimized by using the obstacle score to obtain the positioning information of the robot, the accurate obtaining of the position information of the robot is realized, the influence of the complex environment on the pose determination is reduced, the service quality of the robot can be enhanced, and the use experience of a user is improved.
Example two
Fig. 4 is a flowchart of another positioning method provided by the second embodiment of the present invention, where the embodiment of the present invention is embodied on the basis of the foregoing embodiment of the present invention, and referring to fig. 4, the method provided by the embodiment of the present invention specifically includes the following steps:
step 210, acquiring a current pose in a world coordinate system of the robot, wherein the current pose at least comprises an abscissa, an ordinate and an included angle between the robot direction and an X axis of the world coordinate system.
The world coordinate system may be an absolute coordinate system of the robot, and an origin of the world coordinate system may be determined at the time of initialization of the robot.
In the embodiment of the invention, the current pose may comprise three elements, namely the abscissa, the ordinate and the included angle between the direction of the robot and the X axis of the world coordinate system, and may be obtained through sensors arranged in the robot. Specifically, sensor data of the robot can be collected, and the abscissa and ordinate of the robot in the world coordinate system at the current moment, together with the included angle between the direction of the robot and the X axis of the world coordinate system, can be determined from the sensor data as the current pose.
Step 220, collecting depth data in a preset direction by using at least one depth data sensor preset on the robot and collecting infrared data in the preset direction by using at least one infrared data sensor, wherein the preset direction at least comprises one of a horizontal direction and a vertical direction.
The depth data sensor may be a device for collecting depth data; it may measure the distance from the robot to a collected object and sense the depth of the object in space, and may include a structured-light depth sensor, a camera-array depth sensor, a time-of-flight depth sensor, and the like. The infrared data sensor may be a device for generating a thermal imaging map of the collected object, and may include an infrared imager, a time-of-flight camera, and the like. The depth data sensor and the infrared data sensor on the robot may be an integrated data collection device, such as a TOF camera, which can be directly controlled to collect both the depth data and the infrared data of the collected object. The preset direction may specifically be at least one of the horizontal direction and the vertical direction of the robot.
In the embodiment of the invention, the depth data sensor and the infrared data sensor are pre-installed on the robot and can be used to acquire data in the horizontal or vertical direction of the robot respectively. It is understood that a plurality of depth data sensors and a plurality of infrared data sensors can be arranged on the robot in advance, and the preset acquisition direction of each depth data sensor and each infrared data sensor can be different. The preset direction may be set by a user or a service provider; for example, it may be a direction convenient for acquiring indoor ceiling features, such as the vertical direction or the direction 45 degrees above horizontal.
Step 230, filtering noise in the infrared data.
In the embodiment of the application, noise exists in the infrared data due to the influence of the environment during acquisition, which reduces the accuracy of edge point extraction. In order to improve the accuracy of robot positioning, the noise in the infrared data can be filtered out; the filtering method may include Gaussian filtering, bilateral filtering, median filtering, mean filtering and other methods. Gaussian filtering is a linear smoothing filter that can eliminate Gaussian noise in image processing and achieve noise reduction of the image data.
Illustratively, taking the example of eliminating noise in the collected infrared data by Gaussian filtering, a template (convolution or mask) can be used for scanning each pixel in the infrared data, and the weighted average gray value of the pixels in the neighborhood is determined by using the template to replace the value of the central pixel point of the template, so that noise filtering of the infrared data is realized.
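The template scan just described can be written out directly. A pure-Python sketch using a 3x3 binomial kernel as the Gaussian template (leaving border pixels unchanged is an assumption of this sketch, not a requirement of the patent):

```python
def gaussian_filter_3x3(img):
    """Scan each interior pixel with a 3x3 Gaussian template and replace
    it by the weighted average of its neighbourhood (borders kept)."""
    kernel = [[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]]              # weights sum to 16
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    acc += kernel[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc / 16.0
    return out
```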
Step 240, extracting at least one edge point from the noise-filtered infrared data.
In the embodiment of the application, after noise is filtered out from the infrared data, edge points can be extracted from an image formed by the infrared data, and it is understood that all the edge points in the infrared data can be extracted, and one edge point can be extracted at intervals, so that the efficiency of robot positioning is further improved. The method of extracting the edge points may be a method of image recognition, for example, a point having a large color difference from other surrounding pixel points in an image formed by the infrared data may be used as the edge point.
In an exemplary embodiment, the infrared data may be processed according to the Canny edge detection algorithm to obtain the edge points. The Canny algorithm sequentially performs the following steps on the infrared data: reducing noise by Gaussian filtering, calculating the amplitude and direction of the gradient using finite differences of first-order partial derivatives, performing non-maximum suppression on the gradient amplitude, and detecting and connecting edges with a dual-threshold algorithm. Other methods, such as Sobel edge detection, Prewitt edge detection, Roberts edge detection and Marr-Hildreth edge detection, may also be used to detect edge points in the infrared data.
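As a stand-in for the full Canny pipeline, the gradient-magnitude step alone already yields candidate edge points. A pure-Python Sobel sketch (the function name and threshold value are illustrative assumptions):

```python
def sobel_edge_points(img, threshold):
    """Return (x, y) pixels whose Sobel gradient magnitude reaches the
    threshold; a minimal substitute for the Canny detector above."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    gx += gx_k[dy + 1][dx + 1] * img[y + dy][x + dx]
                    gy += gy_k[dy + 1][dx + 1] * img[y + dy][x + dx]
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                points.append((x, y))
    return points
```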
Step 250, converting each edge point into three-dimensional coordinates using the camera model of the robot and the depth data to construct a feature point cloud.
The camera model may be a camera model established for the robot and used for three-dimensional conversion; it may be used to correct the coordinates of the edge points with the depth data to obtain a feature point cloud with little distortion. The depth data may include the depth information of each edge point, and the depth information may be used as the third dimension of the three-dimensional coordinates of the edge point. The camera model may specifically include one or more of an Euler camera model, a UVN camera model, a pinhole camera model, a fisheye camera model and a wide-angle camera model.
Specifically, the coordinates of each edge point may be converted using a preset camera model as the reference system, so that the two-dimensional coordinates of the edge point and the coordinates of the world coordinate system are in the same reference system; the depth information corresponding to each edge point may be determined from the depth data, the two-dimensional coordinates and the depth information of each edge point may be combined into three-dimensional coordinates, and the edge points having three-dimensional coordinates may be formed into a feature point cloud. By way of example, the conversion of the two-dimensional edge points into a three-dimensional point cloud may be achieved in the following manner:

Z * [u, v, 1]^T = K * P,  i.e.  P = Z * K^(-1) * [u, v, 1]^T

wherein Z is the depth information of the edge point, (u, v) are the two-dimensional image coordinates of the edge point, K is the camera internal reference matrix, which can be determined by the camera model, and P is the coordinate of the resulting three-dimensional point.
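For a pinhole intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], the inverse mapping has a closed form, so no explicit matrix inversion is needed. A sketch (the intrinsic values in the example are arbitrary placeholders, not the patent's calibration):

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """P = Z * K^-1 * [u, v, 1]^T written out component-wise for a
    pinhole intrinsic matrix with focal lengths fx, fy and principal
    point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point back-projects onto the optical axis.
p = back_project(320.0, 240.0, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```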
Step 260, converting the coordinates of each edge point in the feature point cloud into the world coordinate system, and mapping each edge point to a target grid of the preset global grid map.
Specifically, coordinate system conversion may be performed on the coordinates of each edge point in the feature point cloud so that every coordinate is based on the world coordinate system. Fig. 5 is an exemplary diagram of coordinate conversion provided in the second embodiment of the present invention. Referring to fig. 5, the acquired edge points are located in the robot coordinate system, and the camera model corresponds to a coordinate system of its own; depth data may be added to the edge points in the robot coordinate system according to the coordinate system of the camera model to achieve undistorted or low-distortion coordinate conversion, and each three-dimensional coordinate is then re-converted to the world coordinate system. The conversion process may be represented by the following formula: w_i = Proj(T * p_i), wherein w_i represents the coordinates of an edge point in the world coordinate system, p_i represents the coordinates of the edge point in the robot coordinate system, the Proj function maps three-dimensional coordinates to two-dimensional coordinates, and T is the current pose of the robot, represented by x, y and yaw as the planar rigid transformation

T = [ cos(yaw)  -sin(yaw)  x ; sin(yaw)  cos(yaw)  y ; 0  0  1 ]
in the embodiment of the invention, after the coordinates of each edge point in the world coordinate system are determined, each edge point can be mapped into the target grid of the preset global grid map in turn according to the coordinates, for example, different grids in the preset grid map have different coordinate ranges, each edge point can be mapped into a corresponding grid according to the respective coordinate ranges, and the grid with the edge point mapping can be recorded as the target grid.
Step 270, if the target grid is an obstacle position, acquiring a probability value of the target grid as an obstacle score of the corresponding edge point.
The obstacle position may be an indication that the target grid belongs to the obstacle region; a grid at an obstacle position may be identified by information stored with the grid.
Specifically, a target grid with edge point mapping in a preset global grid map may be checked, and if the target grid is an obstacle position, the probability value stored in the target grid is used as an obstacle score of the corresponding edge point.
Further, if the target grid is not an obstacle position, for example, the edge point maps to a non-obstacle area or an unknown area of the preset global grid map, the probability value corresponding to the edge point may not be counted; the edge point may be deleted, or the probability value stored in the grid corresponding to the edge point may simply not be acquired.
Step 280, counting the sum of obstacle scores of all edge points in the characteristic point cloud as the obstacle score of the characteristic point cloud.
In the embodiment of the invention, the obstacle scores of all edge points in the characteristic point cloud can be counted, and the sum of the obstacle scores can be used as the obstacle score of the characteristic point cloud.
Step 290, constructing a residual function according to the current pose and the obstacle score of the feature point cloud.
In the embodiment of the invention, a residual function for optimizing the current pose can be constructed according to the current pose and the obstacle score of the feature point cloud. The residual function may be a functional relation used to optimize the pose of the robot; it can represent the association between the current pose of the robot and the obstacle score, and may be formulated as a nonlinear least squares problem, for example:

e_1 = sum_k [ 1 - M(T, p_k) ]^2

wherein e_1 is the residual, p_k is a point of the feature point cloud, and M(T, p_k) is the obstacle score calculated by projecting point p_k onto the grid map when the pose of the robot is T.
Step 2100, adjusting the parameter information in the residual function so that the result value of the residual function is minimized, wherein the parameter information in the residual function is the value of at least one of the abscissa and the ordinate of the current pose and the included angle between the robot direction and the X axis of the world coordinate system.
Specifically, the value of the current pose in the residual function may be adjusted so that the result value of the residual function is minimized; the adjustment modes of the current pose may include the gradient descent method, Newton's method, the quasi-Newton method, and the like.
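Any of those methods fits here. As a compact, derivative-free illustration, a coordinate descent over (x, y, yaw) also drives the result value down (the function name, step size and iteration count are assumptions of this sketch, not the patent's solver):

```python
def optimize_pose(pose, residual, step=0.05, iterations=50):
    """Greedily perturb each pose component by +/-step, keeping any
    change that lowers the residual; halve the step once stuck."""
    best = list(pose)
    best_cost = residual(best)
    for _ in range(iterations):
        improved = False
        for i in range(3):               # x, y, yaw
            for delta in (step, -step):
                cand = best[:]
                cand[i] += delta
                cost = residual(cand)
                if cost < best_cost:
                    best, best_cost, improved = cand, cost, True
        if not improved:
            step *= 0.5                  # refine once no move helps
    return tuple(best), best_cost
```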
Step 2110, taking the abscissa and ordinate corresponding to the current pose when the result value is smallest, together with the included angle between the robot direction and the X axis of the world coordinate system, as the output positioning pose information.
The positioning pose information may be pose information for positioning the robot, and the positioning pose information may represent a state in which the robot is most likely to be currently located.
In the embodiment of the invention, when the result value of the residual function is minimal, the current pose can be considered optimally adjusted, and the abscissa and ordinate corresponding to the adjusted pose at that moment, together with the angle between the robot direction and the X axis of the world coordinate system, are output as the positioning pose information.
According to the embodiment of the application, the current pose of the robot in the world coordinate system is obtained; a depth sensor mounted on the robot collects depth data and infrared data in a preset direction; noise contained in the infrared data is filtered out; the infrared data are processed into a feature point cloud using the robot's camera model and the depth data; the coordinates of each edge point in the feature point cloud are converted into the world coordinate system and each edge point is mapped to a target grid of the preset global grid map; if a target grid is an obstacle position, its probability value is taken as the obstacle score of the edge point; the sum of the obstacle scores of all edge points is counted as the obstacle score of the feature point cloud; a residual function is constructed from this score and the current pose, and the current pose in the residual function is adjusted until the result value of the residual function is minimal; the pose at that minimum is used as the positioning pose information of the robot. Because the contour points of planes in the three-dimensional point cloud corresponding to the depth data are selected to form the feature point cloud, accurate positioning information is obtained without changing the sensor configuration, the influence of complex environments on pose determination is reduced, the service quality of the robot is enhanced, and the user experience is improved.
Further, on the basis of the above embodiment of the present invention, the method further includes: and optimizing the positioning information according to the moving speed and the obstacle score of the robot.
On the basis of the embodiment of the invention, the moving speed of the robot can be obtained, and the moving speed and the obstacle score can be used together to optimize the positioning information, further improving the positioning accuracy of the robot. For example, the moving speed of the robot can be collected and used to form a limiting condition; based on this condition, a nonlinear least-squares problem can be formed over the positioning information optimized from the obstacle score and solved by gradient descent, and the finally optimized positioning information is obtained when the result value of the nonlinear least-squares problem is minimal.
Example III
Fig. 6 is a flowchart of another positioning method provided in the third embodiment of the present invention, which builds on the embodiment above. Referring to fig. 6, the method provided in this embodiment specifically includes the following steps:
step 310, collecting the current pose of the robot, at least one depth data in a preset direction and at least one infrared data in the preset direction.
Step 320, at least one edge point is extracted from the infrared data, and each edge point is converted into a feature point cloud according to the depth data.
Step 330, determining obstacle scores of the characteristic point clouds in a preset global grid map, wherein the preset global grid map is formed based on the history point clouds.
Step 340: construct a first residual term from the current pose and the obstacle score of the feature point cloud.
In the embodiment of the invention, a first residual term corresponding to the nonlinear least-squares problem can be constructed from the current pose and the obstacle score of the feature point cloud, for example:

e1 = Σk (1 - M(T, pk))

wherein e1 is the residual, pk is a point of the feature point cloud, and M(T, pk) is the obstacle score obtained by projecting point pk into the grid map when the robot pose is T.
Step 350, determining a speed prediction pose based on the moving speed, and taking the difference between the speed prediction pose and the historical pose at the last moment as a second residual term.
The speed-predicted pose is a pose of the robot determined from its moving speed; for example, the moving position of the robot can be propagated from the moving speed, and the pose determined from that position is taken as the speed-predicted pose.
Specifically, the moving speed of the robot may be collected and used to generate the robot's position; a speed-predicted pose is determined from that position, and the difference between the speed-predicted pose and the historical pose at the previous moment is used as the second residual term. The historical pose is the pose information determined at the robot's previous moment and may include its coordinates and the angle with the abscissa axis.
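A minimal sketch of step 350, assuming a planar constant-velocity motion model with an illustrative time step dt; the function and parameter names are assumptions, not taken from the patent:

```python
import numpy as np

# Propagate the last optimised pose by the measured speed to obtain the
# speed-predicted pose, then form the second residual term as the
# difference to the historical pose at the previous moment.
def predict_pose(last_pose, v, omega, dt):
    """Constant-velocity motion model in the plane (x, y, yaw)."""
    x, y, yaw = last_pose
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + omega * dt])

last = np.array([0.0, 0.0, 0.0])                     # historical pose
pred = predict_pose(last, v=1.0, omega=0.0, dt=0.5)  # drove 0.5 m along +X
e3 = pred - last                                     # second residual term
```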
Step 360: adjust the parameter information of the speed-predicted pose and/or the current pose in the first and second residual terms so that the sum of the two terms is minimized, wherein the parameter information includes the value of at least one of the abscissa, the ordinate, and the angle between the robot direction and the X axis of the world coordinate system.
In the embodiment of the invention, at least one of the abscissa, the ordinate, and the angle between the robot direction and the X axis of the world coordinate system in the speed-predicted pose and/or the current pose can be adjusted so that the sum of the first and second residual terms is minimized; the adjustment may use gradient descent, Newton's method, a quasi-Newton method, and the like.
Step 370: take the abscissa and ordinate of the speed-predicted pose and/or the current pose, together with the angle between the robot direction and the X axis of the world coordinate system, when the sum of the first and second residual terms is minimal, as the optimized positioning pose information.
Specifically, when the sum of the first and second residual terms is minimal, the positioning information of the robot has been optimized: the relevant information of the adjusted speed-predicted pose and/or current pose, namely the abscissa, the ordinate, and the angle between the robot direction and the X axis of the world coordinate system, is taken as the optimized positioning pose information and used in the robot's positioning process.
Step 380: update the robot pose corresponding to the positioning information into the preset global grid map.
Specifically, the final pose of the robot can be determined according to the abscissa and the ordinate in the positioning information and the included angle between the direction of the robot and the X-axis of the world coordinate system, the probability value of each grid in the preset global grid map can be determined according to the final pose, and the probability value is added to the corresponding grid in the corresponding preset global grid map, so that the update of the preset global grid map is realized.
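The map update in step 380 can be sketched with log-odds bookkeeping, a common occupancy-grid choice; the patent only states that a probability value is added to the corresponding grid, so the increment, resolution, and grid layout below are assumptions for illustration:

```python
import numpy as np

def update_grid(log_odds, world_points, resolution=0.1, hit=0.4):
    """Accumulate obstacle evidence for the cells hit by world-frame edge points."""
    for x, y in world_points:
        row = int(y / resolution)
        col = int(x / resolution)
        if 0 <= row < log_odds.shape[0] and 0 <= col < log_odds.shape[1]:
            log_odds[row, col] += hit        # more evidence of an obstacle

def occupancy(log_odds):
    """Convert log-odds back to a probability map."""
    return 1.0 / (1.0 + np.exp(-log_odds))

grid = np.zeros((10, 10))                    # empty map: log-odds 0 means p = 0.5
update_grid(grid, [(0.55, 0.35)])            # one edge point at world (0.55, 0.35)
p = occupancy(grid)[3, 5]                    # its cell is now more likely occupied
```

Log-odds addition keeps repeated observations numerically stable and maps back to a probability only when the grid is queried.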
According to the embodiment of the invention, the current pose of the robot and the depth data and infrared data in the preset direction are collected; at least one edge point is extracted from the infrared data and processed with the depth data into a feature point cloud; the obstacle score of the feature point cloud in the preset global grid map is determined; a first residual term is constructed from the obstacle score and the current pose; a speed-predicted pose is determined from the moving speed and, with the historical pose, used to construct a second residual term; the current pose and the speed-predicted pose are adjusted until the sum of the first and second residual terms is minimal; the abscissa, the ordinate, and the angle between the robot direction and the X axis of the world coordinate system at that minimum are used as the positioning information, and the pose corresponding to the positioning information is updated into the preset global grid map. This realizes accurate acquisition of the robot's positioning information, reduces the influence of complex environments on pose determination, enhances the service quality of the robot, and improves the user experience.
In an exemplary implementation, fig. 7 is an example diagram of a positioning method provided by the third embodiment of the present invention. Referring to fig. 7, a method for robot positioning and mapping based on a top-view-feature Time-of-Flight (TOF) camera may include the following steps:
Step one: acquire the top-view feature point cloud:
1. Perform Gaussian filtering noise reduction on the acquired infrared image data.
2. Extract edge points from the filtered infrared image. This mainly comprises finite-difference computation of gradient magnitude and direction, non-maximum suppression, and double-threshold detection with edge connection, in the manner of a Canny-style edge detector.
3. Using the camera model and the depth information of each edge point in the depth data, convert the edge points extracted in step 2, which are coordinate points in the two-dimensional image plane, into a three-dimensional feature point cloud. The conversion is:

Z · [u, v, 1]^T = K · P, i.e. P = Z · K^(-1) · [u, v, 1]^T

wherein Z is the depth information of the edge point, (u, v) are its two-dimensional image coordinates, K is the camera intrinsic matrix, which can be determined from the depth camera model, and P is the coordinate of the three-dimensional point.
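The three stages of step one can be sketched end to end: Gaussian smoothing, finite-difference gradient magnitude and direction (the inputs to non-maximum suppression and double thresholding), and back-projection P = Z · K^(-1) · [u, v, 1]^T. The kernel size, sigma, and the intrinsic matrix K below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smooth(img, size=5, sigma=1.0):
    """Stage 1: Gaussian filtering of the infrared image."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = (padded[r:r + size, c:c + size] * k).sum()
    return out

def gradients(img):
    """Stage 2: finite-difference gradient magnitude and direction."""
    gx = np.gradient(img, axis=1)
    gy = np.gradient(img, axis=0)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def back_project(u, v, Z, K):
    """Stage 3: pixel (u, v) with depth Z -> camera-frame 3D point."""
    return Z * np.linalg.inv(K) @ np.array([u, v, 1.0])

img = np.zeros((8, 8)); img[:, 4:] = 1.0        # vertical step edge
mag, _ = gradients(smooth(img))                 # strongest response near the step

K = np.array([[525.0, 0.0, 320.0],              # illustrative intrinsics
              [0.0, 525.0, 240.0],
              [0.0,   0.0,   1.0]])
P = back_project(320.0, 240.0, 2.0, K)          # principal point at depth 2 m
```

A production extractor would continue with the non-maximum suppression and double-threshold edge linking listed in step 2 before back-projecting only the surviving edge pixels.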
Step two, point cloud matching:
When the robot performs positioning and mapping, the historical feature point clouds can be converted into the world coordinate system to construct a grid map, and each new feature point cloud is matched against the grid map. The matching process can comprise the following steps:
1. Convert the extracted feature point cloud P into the world coordinate system through the robot pose T, with the conversion formula w_i = Proj(T * p_i), wherein w_i represents the coordinates of an edge point in the world coordinate system, p_i represents the coordinates of the edge point in the robot frame, and the Proj function maps three-dimensional coordinates into two-dimensional coordinates. T is the current pose of the robot with parameters x, y and yaw, and can be expressed as:

T = [cos(yaw), -sin(yaw), x; sin(yaw), cos(yaw), y; 0, 0, 1]
2. Map each point of the feature point cloud into the grid map; take the probability that the mapped grid cell is an obstacle as the matching score of a single point, and record the sum of the matching scores of all points as the score of the feature point cloud:

s = 1 · p_cell

wherein s is the single-point matching score and p_cell is the occupancy probability of the mapped grid cell.
3. Construct a cost function model to optimize the robot pose T, and adjust T so that the cost function is minimized. The equation is:

e1 = Σk (1 - M(T, pk))

wherein e1 is the residual, pk is a point of the feature point cloud, and M(T, pk) is the obstacle score obtained by projecting point pk into the grid map when the robot pose is T.
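Matching steps 1-2 above can be sketched together: the pose T = (x, y, yaw) as a homogeneous 2D transform, each edge point mapped into the world frame and then into one grid cell, whose occupancy probability p_cell is the single-point score. The grid resolution and layout are illustrative assumptions:

```python
import numpy as np

def pose_matrix(x, y, yaw):
    """Robot pose T = (x, y, yaw) as a homogeneous 2D transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

def cloud_score(grid, T, points, resolution=1.0):
    """Sum of p_cell over the grid cells hit by the transformed edge points."""
    total = 0.0
    for px, py in points:
        wx, wy, _ = T @ np.array([px, py, 1.0])   # w_i = Proj(T * p_i)
        row, col = int(wy / resolution), int(wx / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            total += grid[row, col]               # s = p_cell for this point
    return total

grid = np.zeros((4, 4))
grid[3, 1] = 0.9                                  # one likely-obstacle cell
T = pose_matrix(1.0, 2.0, np.pi / 2)              # robot at (1, 2) facing +Y
score_value = cloud_score(grid, T, [(1.0, 0.0)])  # point 1 m ahead lands in that cell
```

A pose hypothesis that places many edge points onto high-probability cells yields a high cloud score, which is exactly what the cost function in step 3 rewards.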
Step three, robot pose optimization
Taking the feature point cloud from step two and the vehicle speed as constraints, construct a nonlinear least-squares problem and jointly optimize the robot pose, specifically comprising the following steps:
1. Construct a cost function model to optimize the robot pose T, and adjust T so that the error term of the cost function is minimized:

e1 = Σk (1 - M(T, pk))

wherein e1 is the residual, pk is a point of the feature point cloud, and M(T, pk) is the obstacle score obtained by projecting point pk into the grid map when the robot pose is T.
2. Construct the error term e3 obtained from the vehicle speed:

e3 = P - P_last

wherein P is the robot pose at the current moment propagated from the vehicle speed, and P_last is the robot pose obtained after the previous optimization.
3. From all the error terms, construct the following optimization problem and solve it with an optimization library (e.g. Google Ceres) to obtain the pose at the current moment:

(x, y, yaw) = argmin Σ |e1| + |e3|
4. Convert the feature point cloud obtained in step two into the world coordinate system through the optimized pose, and update the grid map based on the feature point cloud.
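A toy version of the joint problem in step three, with smooth quadratic stand-ins for e1 and e3 (rather than the absolute values in the formula above) and naive numerical gradient descent in place of Ceres; the weights, targets, and names are assumptions for illustration:

```python
import numpy as np

def cost(pose, map_target, vel_pred, w_map=1.0, w_vel=0.5):
    """Map-matching term plus velocity-consistency term (quadratic stand-ins)."""
    e1 = np.sum((pose[:2] - map_target) ** 2)   # stand-in for the map residual
    e3 = np.sum((pose - vel_pred) ** 2)         # stand-in for e3 = P - P_last
    return w_map * e1 + w_vel * e3

def solve(pose, map_target, vel_pred, lr=0.1, iters=500, h=1e-6):
    """Naive numerical gradient descent over (x, y, yaw)."""
    pose = np.asarray(pose, dtype=float)
    for _ in range(iters):
        g = np.zeros(3)
        for i in range(3):
            d = np.zeros(3); d[i] = h
            g[i] = (cost(pose + d, map_target, vel_pred)
                    - cost(pose - d, map_target, vel_pred)) / (2 * h)
        pose -= lr * g
    return pose

# The map pulls x toward 1.0, the velocity prediction toward 0.8; the
# optimum lands between the two, weighted by w_map and w_vel.
p = solve([0.0, 0.0, 0.0], np.array([1.0, 0.0]), np.array([0.8, 0.0, 0.0]))
```

The velocity term acts as a soft prior that keeps the optimized pose consistent with the motion model, which is why the solution sits between the two targets rather than snapping to the map alone.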
Example IV
Fig. 8 is a schematic structural diagram of a positioning device according to the fourth embodiment of the present invention. The device can execute the positioning method of any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. The apparatus may be implemented by software and/or hardware, and specifically includes: a data acquisition module 401, a feature point cloud module 402, an obstacle scoring module 403, and a positioning determination module 404.
The data acquisition module 401 is configured to acquire depth data and infrared data of a current pose and a preset direction of the robot.
The feature point cloud module 402 is configured to extract at least one edge point from the infrared data, and convert each edge point into a feature point cloud according to the depth data.
The obstacle scoring module 403 is configured to determine an obstacle score of the feature point cloud in a preset global grid map, where the preset global grid map is configured based on a history point cloud.
A positioning determination module 404, configured to optimize the current pose according to the obstacle score to determine positioning information of the robot.
According to the embodiment of the invention, the data acquisition module collects the current pose of the robot and the depth data and infrared data in the preset direction; the feature point cloud module extracts at least one edge point from the infrared data and processes each edge point with the depth data into a feature point cloud; the obstacle scoring module determines the obstacle score of the feature point cloud in the preset global grid map; and the positioning determination module optimizes the current pose using the obstacle score to obtain the positioning information of the robot. This realizes accurate acquisition of the robot's position information, reduces the influence of complex environments on pose determination, enhances the service quality of the robot, and improves the user experience.
Further, on the basis of the above embodiment of the present invention, the location determining module 404 includes:
and the comprehensive optimization unit is used for optimizing the positioning information according to the moving speed and the obstacle score of the robot.
Further, on the basis of the above embodiment of the present invention, the data acquisition module 401 includes:
and the pose acquisition unit is used for acquiring the current pose of the robot in the world coordinate system, wherein the current pose at least comprises an abscissa, an ordinate and an included angle between the robot direction and the X axis of the world coordinate system.
The data acquisition unit is used for acquiring depth data in the preset direction by using at least one depth data sensor preset on the robot and acquiring infrared data in the preset direction by using at least one infrared data sensor, wherein the preset direction at least comprises one of a horizontal direction and a vertical direction.
Further, on the basis of the above embodiment of the present invention, the feature point cloud module 402 includes:
and the noise processing unit is used for filtering noise in the infrared data.
And the edge extraction unit is used for extracting at least one edge point from the infrared data with noise filtered.
And the point cloud generating unit is used for converting each edge point into three-dimensional coordinates by using the camera model of the robot and the depth data to form a characteristic point cloud.
Further, on the basis of the above embodiment of the present invention, the obstacle scoring module 403 includes:
and the position mapping unit is used for converting the coordinates of each edge point in the characteristic point cloud into a world coordinate system and mapping each edge point to a target grid of the preset global grid map.
And the score determining unit is used for acquiring the probability value of the target grid as the obstacle score of the corresponding edge point if the target grid is the obstacle position.
And the score counting unit is used for counting the sum of the obstacle scores of the edge points in the characteristic point cloud as the obstacle score of the characteristic point cloud.
Further, on the basis of the above embodiment of the present invention, the location determining module 404 further includes:
and the first residual error unit is used for constructing a residual error function according to the current pose and the obstacle score of the characteristic point cloud.
And the parameter adjusting unit is used for adjusting the parameter information in the residual function so as to enable the result value of the residual function to be minimum, wherein the parameter information in the residual function is the value of at least one parameter in the abscissa and the ordinate of the current pose and the included angle between the robot direction and the X axis of the world coordinate system.
And the positioning determining unit is used for taking the abscissa and the ordinate corresponding to the current pose when the result value is minimum and the included angle between the robot direction and the X axis of the world coordinate system as the output positioning pose information.
Furthermore, on the basis of the above embodiment of the present invention, the comprehensive optimization unit is specifically configured to: constructing a first residual error item according to the current pose and the obstacle score of the characteristic point cloud; determining a speed prediction pose based on the moving speed, and taking a difference value between the speed prediction pose and the historical pose at the last moment as a second residual error term; adjusting parameter information in speed prediction pose and/or current pose in the first residual error item and the second residual error item to enable the sum value of the first residual error item and the second residual error item to be minimum, wherein the parameter information comprises the value of at least one parameter in an abscissa, an ordinate and an included angle between the robot direction and the X axis of a world coordinate system; and taking the velocity predicted pose and/or the abscissa and the ordinate of the current pose and the included angle between the robot direction and the X axis of the world coordinate system when the sum of the first residual error item and the second residual error item is minimum as optimized positioning pose information.
Further, on the basis of the above embodiment of the present invention, the apparatus further includes:
and the map updating module is used for updating the robot pose corresponding to the positioning information to the preset global grid map.
Example V
Fig. 9 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention, and as shown in fig. 9, the electronic device includes a processor 50, a memory 51, an input device 52, and an output device 53; the number of processors 50 in the electronic device may be one or more, one processor 50 being taken as an example in fig. 9; the processor 50, the memory 51, the input means 52 and the output means 53 of the electronic device may be connected by a bus or by other means, in fig. 9 by way of example.
The memory 51 is used as a computer readable storage medium, and may be used to store a software program, a computer executable program, and modules, such as modules (a data acquisition module 401, a feature point cloud module 402, an obstacle scoring module 403, and a positioning determination module 404) corresponding to the positioning device in the fourth embodiment of the present invention. The processor 50 executes various functional applications of the electronic device and data processing, i.e. implements the positioning method described above, by running software programs, instructions and modules stored in the memory 51.
The memory 51 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 51 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 51 may further include memory located remotely from processor 50, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 52 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output means 53 may comprise a display device such as a display screen.
In an exemplary embodiment, the electronic device may be a robot or a positioning navigation device mounted on a robot. The robot or the positioning navigation equipment can accurately determine the position of the robot through the positioning method provided by the embodiment of the invention.
Example VI
A sixth embodiment of the present invention also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing a positioning method, the method comprising:
collecting the current pose of the robot, depth data of at least one preset direction and infrared data of at least one preset direction;
extracting at least one edge point from the infrared data, and converting each edge point into a characteristic point cloud according to the depth data;
determining obstacle scores of the characteristic point clouds in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds;
and optimizing the current pose according to the obstacle score to determine the positioning information of the robot.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the positioning method provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the embodiment of the positioning device, each unit and module included are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A positioning method, comprising:
collecting the current pose of the robot, depth data of at least one preset direction and infrared data of at least one preset direction;
Extracting at least one edge point from the infrared data, and converting each edge point into a characteristic point cloud according to the depth data;
determining obstacle scores of the characteristic point clouds in a preset global grid map, wherein the preset global grid map is formed based on historical point clouds;
optimizing the current pose according to the obstacle score to determine positioning information of the robot;
the determining the obstacle score of the characteristic point cloud in the preset global grid map comprises the following steps:
converting coordinates of all edge points in the characteristic point cloud into a world coordinate system, and mapping all the edge points to a target grid of the preset global grid map;
if the target grid is an obstacle position, acquiring a probability value of the target grid as an obstacle score of a corresponding edge point;
if the target grid is not an obstacle position, the edge points are in a non-obstacle area or an unknown area in a preset global grid, the probability values corresponding to the edge points are not counted, and the edge points are deleted or the probability values stored in the grids corresponding to the edge points are not acquired;
counting the sum of obstacle scores of all the edge points in the characteristic point cloud as the obstacle score of the characteristic point cloud;
Wherein the optimizing the current pose according to the obstacle score to determine positioning information of the robot comprises:
optimizing the positioning information according to the moving speed and the obstacle score of the robot;
wherein optimizing the positioning information according to the moving speed of the robot includes:
constructing a first residual error item according to the current pose and the obstacle score of the characteristic point cloud;
determining a speed prediction pose based on the moving speed, and taking a difference value between the speed prediction pose and the historical pose at the last moment as a second residual error term;
adjusting parameter information in speed prediction pose and/or current pose in the first residual error item and the second residual error item to enable the sum value of the first residual error item and the second residual error item to be minimum, wherein the parameter information comprises the value of at least one parameter in an abscissa, an ordinate and an included angle between the robot direction and the X axis of a world coordinate system;
and taking the velocity predicted pose and/or the abscissa and the ordinate of the current pose and the included angle between the robot direction and the X axis of the world coordinate system when the sum of the first residual error item and the second residual error item is minimum as optimized positioning pose information.
2. The method according to claim 1, wherein the acquiring the current pose of the robot and the depth data of at least one preset direction and/or the infrared data of at least one preset direction comprises:
acquiring a current pose under the world coordinate system of the robot, wherein the current pose at least comprises an abscissa, an ordinate and an included angle between the robot direction and an X axis of the world coordinate system;
and acquiring depth data of the preset direction by using at least one depth data sensor preset on the robot and acquiring infrared data of the preset direction by using at least one infrared data sensor, wherein the preset direction at least comprises one of a horizontal direction and a vertical direction.
3. The method of claim 1, wherein extracting at least one edge point in the infrared data and converting each edge point to a feature point cloud based on the depth data comprises:
filtering noise in the infrared data;
extracting at least one edge point from the infrared data with noise filtered;
each of the edge points is converted into three-dimensional coordinates using a camera model of the robot and the depth data to construct a feature point cloud.
4. The method of claim 1, wherein the optimizing the current pose according to the obstacle score to determine positioning information of the robot comprises:
constructing a residual function according to the current pose and the obstacle score of the characteristic point cloud;
adjusting parameter information in the residual function to enable a result value of the residual function to be minimum, wherein the parameter information in the residual function is the value of at least one parameter in the abscissa and the ordinate of the current pose and the included angle between the robot direction and the X axis of the world coordinate system;
and taking the abscissa and the ordinate corresponding to the current pose when the result value is minimum and the included angle between the robot direction and the X axis of the world coordinate system as output positioning pose information.
5. The method of any one of claims 1-4, further comprising:
and updating the robot pose corresponding to the positioning information to the preset global grid map.
6. A positioning device, the device comprising:
a data acquisition module, configured to acquire a current pose of the robot, depth data in at least one preset direction, and infrared data in the at least one preset direction;
a feature point cloud module, configured to extract at least one edge point from the infrared data and convert each edge point into a feature point cloud according to the depth data;
an obstacle scoring module, configured to determine an obstacle score of the feature point cloud in a preset global grid map, wherein the preset global grid map is constructed based on historical point clouds;
a positioning determination module, configured to optimize the current pose according to the obstacle score so as to determine positioning information of the robot;
wherein the obstacle scoring module comprises:
a position mapping unit, configured to convert the coordinates of each edge point in the feature point cloud into a world coordinate system and map each edge point to a target grid of the preset global grid map;
a score determination unit, configured to, if the target grid is an obstacle position, obtain the probability value of the target grid as the obstacle score of the corresponding edge point; and if the target grid is not an obstacle position, the edge point lies in a non-obstacle area or an unknown area of the preset global grid map, in which case the probability value corresponding to the edge point is not counted: the edge point is deleted, or the probability value stored in the grid corresponding to the edge point is not acquired;
a score statistics unit, configured to take the sum of the obstacle scores of the edge points in the feature point cloud as the obstacle score of the feature point cloud;
wherein the positioning determination module comprises:
a comprehensive optimization unit, configured to optimize the positioning information according to the moving speed of the robot and the obstacle score;
the comprehensive optimization unit being specifically configured to:
construct a first residual term from the current pose and the obstacle score of the feature point cloud;
determine a velocity-predicted pose based on the moving speed, and take the difference between the velocity-predicted pose and the historical pose at the previous moment as a second residual term;
adjust parameter information of the velocity-predicted pose and/or the current pose in the first residual term and the second residual term so that the sum of the first residual term and the second residual term is minimized, wherein the parameter information comprises at least one of an abscissa, an ordinate, and the included angle between the robot heading and the X axis of the world coordinate system;
take the abscissa and the ordinate of the velocity-predicted pose and/or the current pose, and the included angle between the robot heading and the X axis of the world coordinate system, at which the sum of the first residual term and the second residual term is minimized, as the optimized positioning pose information.
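The two-term objective of the comprehensive optimization unit can be sketched as a weighted sum: a map term derived from the obstacle score and a motion term penalising disagreement with the velocity-predicted pose. The weights, the squared-difference form of the motion term, and all names are illustrative assumptions, not the patented formulation:

```python
def combined_residual(pose, prev_pose, velocity, dt, score_fn, w_map=1.0, w_motion=1.0):
    """Sum of two residual terms over a candidate pose (x, y, theta):
    a first term that shrinks as the obstacle score grows, and a second
    term that penalises deviation from the constant-velocity prediction."""
    # First residual: higher obstacle score -> better map match -> lower residual.
    r_map = -score_fn(pose)
    # Velocity-predicted pose propagated from the previous pose;
    # velocity = (vx, vy, vtheta) in world coordinates.
    pred = (prev_pose[0] + velocity[0] * dt,
            prev_pose[1] + velocity[1] * dt,
            prev_pose[2] + velocity[2] * dt)
    # Second residual: squared difference between prediction and candidate.
    r_motion = sum((p - q) ** 2 for p, q in zip(pred, pose))
    return w_map * r_map + w_motion * r_motion
```

Feeding this combined residual into any minimiser over (x, y, θ) yields a pose that balances map agreement against the motion model, which is the effect the claim describes.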
7. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the positioning method of any one of claims 1-5.
8. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the positioning method of any one of claims 1-5.
CN202110594737.0A 2021-05-28 2021-05-28 Positioning method, positioning device, electronic equipment and storage medium Active CN113313764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110594737.0A CN113313764B (en) 2021-05-28 2021-05-28 Positioning method, positioning device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113313764A CN113313764A (en) 2021-08-27
CN113313764B true CN113313764B (en) 2023-08-29

Family

ID=77376335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110594737.0A Active CN113313764B (en) 2021-05-28 2021-05-28 Positioning method, positioning device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113313764B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658373A (en) * 2017-10-10 2019-04-19 中兴通讯股份有限公司 Inspection method, device, and computer-readable storage medium
CN110275540A (en) * 2019-07-01 2019-09-24 湖南海森格诺信息技术有限公司 Semantic navigation method and system for a sweeping robot
CN110530368A (en) * 2019-08-22 2019-12-03 浙江大华技术股份有限公司 Robot positioning method and device
CN111862219A (en) * 2020-07-29 2020-10-30 上海高仙自动化科技发展有限公司 Computer equipment positioning method and device, computer equipment and storage medium
CN111895989A (en) * 2020-06-24 2020-11-06 浙江大华技术股份有限公司 Robot positioning method and device, and electronic equipment
CN112363158A (en) * 2020-10-23 2021-02-12 浙江华睿科技有限公司 Pose estimation method for a robot, and computer storage medium
CN112634305A (en) * 2021-01-08 2021-04-09 哈尔滨工业大学(深圳) Infrared visual odometry implementation method based on edge feature matching
CN112799095A (en) * 2020-12-31 2021-05-14 深圳市普渡科技有限公司 Static map generation method and device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11360216B2 (en) * 2017-11-29 2022-06-14 VoxelMaps Inc. Method and system for positioning of autonomously operating entities

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Extraction of Semantic Floor Plans from 3D Point Cloud Maps; Vytenis Sakenas et al.; Proceedings of the 2007 IEEE; pp. 1-6 *

Also Published As

Publication number Publication date
CN113313764A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN111612760B (en) Method and device for detecting obstacles
CN110097553B (en) Semantic mapping system based on instant positioning mapping and three-dimensional semantic segmentation
JP6739517B2 (en) Lane recognition modeling method, device, storage medium and device, and lane recognition method, device, storage medium and device
CN111862214B (en) Computer equipment positioning method, device, computer equipment and storage medium
CN113203409B (en) Method for constructing navigation map of mobile robot in complex indoor environment
JP5535025B2 (en) Outdoor feature detection system, program for outdoor feature detection system, and recording medium for program for outdoor feature detection system
CN113313765B (en) Positioning method, positioning device, electronic equipment and storage medium
CN113359782B (en) Unmanned aerial vehicle autonomous addressing landing method integrating LIDAR point cloud and image data
CN111862219B (en) Computer equipment positioning method and device, computer equipment and storage medium
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN112818925A (en) Urban building and crown identification method
CN114179788A (en) Automatic parking method, system, computer readable storage medium and vehicle terminal
WO2022246812A1 (en) Positioning method and apparatus, electronic device, and storage medium
CN115683100A (en) Robot positioning method, device, robot and storage medium
CN113313764B (en) Positioning method, positioning device, electronic equipment and storage medium
CN114092771A (en) Multi-sensing data fusion method, target detection device and computer equipment
WO2023216555A1 (en) Obstacle avoidance method and apparatus based on binocular vision, and robot and medium
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
Chen et al. Amobile system combining laser scanners and cameras for urban spatial objects extraction
CN112967345B (en) External parameter calibration method, device and system of fish-eye camera
CN116503567B (en) Intelligent modeling management system based on AI big data
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information
KR102548786B1 (en) System, method and apparatus for constructing spatial model using lidar sensor(s)
WO2022153910A1 (en) Detection system, detection method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant