CN110579215A - Positioning method based on environmental feature description, mobile robot and storage medium - Google Patents


Info

Publication number: CN110579215A
Application number: CN201911007542.0A
Authority: CN (China)
Prior art keywords: feature description, mobile robot, coordinate, global, local feature
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110579215B
Inventor: 张干
Current Assignee: Shanghai Noah Wood Robot Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shanghai Wood Wood Robot Technology Co Ltd
Application filed by Shanghai Wood Wood Robot Technology Co Ltd
Priority to CN201911007542.0A
Publication of CN110579215A; application granted; publication of CN110579215B


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20: Instruments for performing navigational calculations

Abstract

The invention provides a positioning method based on environmental feature description, a mobile robot, and a storage medium. The method comprises the following steps: after the system is started, controlling a positioning auxiliary sensor to scan the surrounding environment to obtain a local feature description set corresponding to the current moment; comparing the local feature description set with a global feature description set acquired in advance, and calculating candidate coordinates of the mobile robot's current position according to the comparison result; and evaluating the matching degree of each candidate coordinate according to feature information, and determining the candidate coordinate with the highest score as the global coordinate of the mobile robot. The invention realizes global positioning of the mobile robot, saves field-deployment workload, and improves the convenience of using the mobile robot.

Description

Positioning method based on environmental feature description, mobile robot and storage medium
Technical Field
The present invention relates to the field of positioning technologies, and in particular to a positioning method based on environmental feature description, a mobile robot, and a storage medium.
Background
At present, a mobile robot senses the environment and its own state through sensors, and then moves autonomously toward a goal in an environment containing obstacles; this is the navigation technology of what is commonly called an intelligent autonomous mobile robot. Positioning, i.e., determining the position and posture of the mobile robot relative to the global coordinate system of its working environment, is a basic link in mobile robot navigation.
A mobile robot must locate itself on its map after starting, and quickly achieving global positioning after start-up is an important capability. At present, mobile robots are either started at fixed positions, that is, moved to an initial position before navigation begins, or are identified and positioned using reference markers such as RFID (radio frequency identification) tags and two-dimensional codes deployed on site.
Therefore, how to realize global positioning of the mobile robot, saving field-deployment workload while improving the robot's convenience of use, is an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a positioning method based on environmental feature description, a mobile robot, and a storage medium, so as to realize global positioning of the mobile robot, save field-deployment workload, and improve the convenience of using the mobile robot.
The technical scheme provided by the invention is as follows:
The invention provides a positioning method based on environmental feature description, comprising the following steps:
After the system is started, controlling a positioning auxiliary sensor to scan the surrounding environment to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two non-collinear local feature description lines, and each local feature description line comprises the distance between two currently adjacent objects in the mobile robot coordinate system, the type information corresponding to each of the two objects, and the position coordinates of the two objects in the mobile robot coordinate system;
Comparing the local feature description set with a global feature description set acquired in advance, and calculating candidate coordinates of the mobile robot's current position according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two objects, and the spatial coordinates of the two objects in the world coordinate system;
And evaluating the matching degree of each candidate coordinate according to feature information, and determining the candidate coordinate with the highest score as the global coordinate of the mobile robot.
Further, after comparing the local feature description set with a global feature description set acquired in advance and calculating candidate coordinates of the mobile robot's current position according to the comparison result, and before evaluating the matching degree of each candidate coordinate according to feature information and determining the candidate coordinate with the highest score as the global coordinate of the mobile robot, the method comprises the following step:
When the global map comprises a preset forbidden region, performing a traversal check on all candidate coordinates, and deleting any candidate coordinate that matches a position within the preset forbidden region.
Further, before controlling the positioning auxiliary sensor to scan the surrounding environment after start-up to obtain the local feature description set corresponding to the current moment, the method comprises the steps of:
Generating corresponding object reference nodes and their reference coordinate information according to the spatial coordinates and type information of each object's position in the global map; the reference coordinate information comprises spatial coordinates and type information;
And creating the global feature description set according to the reference coordinate information corresponding to each object reference node and the distance between each pair of adjacent object reference nodes.
Further, the step of controlling the positioning auxiliary sensor to scan the surrounding environment after start-up to obtain the local feature description set corresponding to the current moment includes:
Triggering the positioning auxiliary sensor to work after start-up, and acquiring collected data from the positioning auxiliary sensor; the positioning auxiliary sensor comprises a visual sensor and/or a laser sensor;
Performing object identification according to the collected data to obtain a local map, as well as the type information and position coordinates of each object in the local map;
Generating corresponding object positioning nodes and their positioning coordinate information according to the position coordinates and type information of each object's position in the local map; the positioning coordinate information comprises position coordinates and type information;
And creating the local feature description set according to the positioning coordinate information corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes.
Further, comparing the local feature description set with a global feature description set acquired in advance and calculating candidate coordinates of the mobile robot's current position according to the comparison result includes:
Comparing the current local feature description line with all global feature description lines in the global feature description set, and screening out the target global feature description lines whose type information matches all the type information included in the current local feature description line and whose length equals that of the current local feature description line;
Comparing the next local feature description line with all global feature description lines in the global feature description set, and screening out the target global feature description lines corresponding to the next local feature description line, until all local feature description lines have been screened;
And calculating the candidate coordinates of the mobile robot corresponding to the current local feature description line according to the distance and position coordinates corresponding to the current local feature description line and the distance and spatial coordinates corresponding to its target global feature description lines, then switching to the next local feature description line, until all candidate coordinates of the mobile robot's current position have been calculated.
Further, the step of evaluating the matching degree of each candidate coordinate according to the feature information and determining the candidate coordinate with the highest score as the global coordinate of the mobile robot comprises the steps of:
Analyzing the collected data acquired by the positioning auxiliary sensor to obtain the corresponding feature information;
When the feature information is a point cloud feature, comparing the point cloud features of the surrounding environment corresponding to each candidate coordinate with preset point cloud features to calculate the matching degree; and/or,
When the feature information is an image feature, comparing the image features of the surrounding environment corresponding to each candidate coordinate with preset image features to calculate the matching degree;
And determining the candidate coordinate with the highest matching degree value as the global coordinate of the mobile robot.
The present invention also provides a mobile robot, comprising:
A scanning processing module, configured to control the positioning auxiliary sensor to scan the surrounding environment after start-up to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two non-collinear local feature description lines, and each local feature description line comprises the distance between two currently adjacent objects in the mobile robot coordinate system, the type information corresponding to each of the two objects, and the position coordinates of the two objects in the mobile robot coordinate system;
A comparison calculation module, configured to compare the local feature description set with a global feature description set acquired in advance and calculate candidate coordinates of the mobile robot's current position according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two objects, and the spatial coordinates of the two objects in the world coordinate system;
And a calculation positioning module, configured to evaluate the matching degree of each candidate coordinate according to feature information and determine the candidate coordinate with the highest score as the global coordinate of the mobile robot.
Further, the mobile robot further comprises:
A deletion processing module, configured to perform a traversal check on all candidate coordinates when the global map comprises a preset forbidden region, and delete any candidate coordinate that matches a position within the preset forbidden region.
Further, the mobile robot further comprises:
A generating module, configured to generate corresponding object reference nodes and their reference coordinate information according to the spatial coordinates and type information of each object's position in the global map; the reference coordinate information comprises spatial coordinates and type information;
And a creating module, configured to create the global feature description set according to the reference coordinate information corresponding to each object reference node and the distance between each pair of adjacent object reference nodes.
Further, the scanning processing module includes:
A start-up acquisition unit, configured to trigger the positioning auxiliary sensor to work after start-up and acquire collected data from the positioning auxiliary sensor; the positioning auxiliary sensor comprises a visual sensor and/or a laser sensor;
A first processing unit, configured to perform object identification according to the collected data to obtain a local map, as well as the type information and position coordinates of each object in the local map;
A generating unit, configured to generate corresponding object positioning nodes and their positioning coordinate information according to the position coordinates and type information of each object's position in the local map, the positioning coordinate information comprising position coordinates and type information, and to create the local feature description set according to the positioning coordinate information corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes.
Further, the comparison calculation module includes:
A comparison unit, configured to compare the current local feature description line with all global feature description lines in the global feature description set and screen out the target global feature description lines whose type information matches all the type information included in the current local feature description line and whose length equals that of the current local feature description line;
The comparison unit is further configured to compare the next local feature description line with all global feature description lines in the global feature description set and screen out the target global feature description lines corresponding to the next local feature description line, until all local feature description lines have been screened;
And a calculating unit, configured to calculate the candidate coordinates of the mobile robot corresponding to the current local feature description line according to the distance and position coordinates corresponding to the current local feature description line and the distance and spatial coordinates corresponding to its target global feature description lines, and to switch to the next local feature description line, until all candidate coordinates of the mobile robot's current position have been calculated.
Further, the calculation positioning module comprises:
An analysis unit, configured to analyze the collected data acquired by the positioning auxiliary sensor to obtain the corresponding feature information;
A point cloud feature matching unit, configured to compare the point cloud features of the surrounding environment corresponding to each candidate coordinate with preset point cloud features to calculate the matching degree when the feature information is a point cloud feature; and/or,
An image feature matching unit, configured to compare the image features of the surrounding environment corresponding to each candidate coordinate with preset image features to calculate the matching degree when the feature information is an image feature;
And a determining unit, configured to determine the candidate coordinate with the highest matching degree value as the global coordinate of the mobile robot.
The present invention also provides a storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement the operations performed by the positioning method based on environmental feature description described above.
With the positioning method based on environmental feature description, the mobile robot, and the storage medium, global positioning of the mobile robot can be realized, field-deployment workload is saved, and the convenience of using the mobile robot is improved.
Drawings
The above features, technical characteristics, advantages, and implementations of the positioning method based on environmental feature description, the mobile robot, and the storage medium will be further explained in a clearly understandable manner with reference to the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a positioning method based on environmental feature description of the present invention;
FIG. 2 is a flow chart of another embodiment of a positioning method based on environmental feature description of the present invention;
FIG. 3 is a flow chart of another embodiment of a positioning method based on environmental feature description of the present invention;
FIG. 4 is an exemplary diagram of global feature description lines in the world coordinate system of the present invention;
FIG. 5 is a flow chart of another embodiment of a positioning method based on environmental feature description of the present invention;
FIG. 6 is a schematic structural diagram of one embodiment of a mobile robot of the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description is made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure of a product. In addition, to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or labeled once. In this document, "one" means not only "only one" but also the case of "more than one".
One embodiment of the present invention, as shown in FIG. 1, is a positioning method based on environmental feature description, including:
S100, after start-up, controlling a positioning auxiliary sensor to scan the surrounding environment to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two non-collinear local feature description lines, and each local feature description line comprises the distance between two currently adjacent objects in the mobile robot coordinate system, the type information corresponding to each of the two objects, and the position coordinates of the two objects in the mobile robot coordinate system;
Specifically, mobile robot positioning refers to determining the map position of the mobile robot, that is, its coordinate value in the world coordinate system. Positioning is a basic link of mobile robot navigation; however, after abnormal events such as a system shutdown, a power failure, or being moved by a person, the mobile robot cannot determine its map position on restart. In that case the robot must be manually moved to an initial position and the system restarted to obtain the initial position before autonomous navigation can proceed, which defeats the purpose of completing autonomous navigation without human participation.
S200, comparing the local feature description set with a global feature description set acquired in advance, and calculating candidate coordinates of the mobile robot's current position according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two objects, and the spatial coordinates of the two objects in the world coordinate system;
Specifically, the mobile robot performs a traversal comparison of each local feature description line in the local feature description set obtained through real-time detection against all global feature description lines of the global feature description set obtained in advance, and calculates candidate coordinates of its current position from the comparison result; the number of candidate coordinates per comparison is at most two.
S300, evaluating the matching degree of each candidate coordinate according to the feature information, and determining the candidate coordinate with the highest score as the global coordinate of the mobile robot.
Specifically, whenever the mobile robot passes through any place in its active area, since the active area contains doors, windows, lamps, and other objects, and even other clearly distinguishable objects with distinctive colors (such as red, black, or blue) and shapes (circles, squares, and the like), the mobile robot not only identifies the types of surrounding objects but also identifies the feature information corresponding to the surrounding environment (including but not limited to the shapes, outlines, spatial dimensions, colors, and types of the objects within the effective recognition range of the positioning auxiliary sensor). After the mobile robot calculates the candidate coordinates, it evaluates the matching degree of each candidate coordinate according to the feature information, calculates the matching degree value between each candidate coordinate and the feature information, and then determines the candidate coordinate with the highest matching score as its own global coordinate.
In this embodiment, evaluating the matching degree of each candidate coordinate based on the feature information reduces the position error of the mobile robot's current position, improves the positioning accuracy and reliability of the mobile robot, and further improves its navigation performance. Moreover, the mobile robot no longer needs to be started at a fixed position as in the prior art, which improves its convenience of use and eases its adoption. In addition, there is no need to deploy identifiable, highly distinguishable labels such as two-dimensional codes, bar codes, or special patterns around the mobile robot's range of activity, which greatly reduces the field-deployment workload and the cost of accurate positioning, and greatly improves the robot's convenience of use and ease of adoption.
One embodiment of the present invention, as shown in FIG. 2, is a positioning method based on environmental feature description, including:
S010, generating corresponding object reference nodes and their reference coordinate information according to the spatial coordinates and type information of each object's position in the global map; the reference coordinate information includes spatial coordinates and type information;
Specifically, by combining SLAM mapping with object recognition capability, the type information of each object in the actual environment can be identified from the global map, so that N reference coordinate records Info for objects of different types are stored for the SLAM global map. Each object forms an object reference node; each object reference node comprises the spatial coordinate P and type information T of the object in the global map, and the reference coordinate information corresponding to the object reference node is represented as (P, T).
S020, creating a global feature description set according to the reference coordinate information corresponding to each object reference node and the distance between each pair of adjacent object reference nodes;
Specifically, the distance between two adjacent object reference nodes in the world coordinate system can be calculated from the reference coordinate information (P, T) corresponding to the object reference nodes. Several global feature description lines (which may be called edges) within a preset range can therefore be constructed from the reference coordinate information (P, T) corresponding to each object reference node and the distance between each pair of adjacent object reference nodes. Each global feature description line has two end points, and each end point is an object reference node. That is, a global feature description set Global_feature containing the distances, type information, and spatial coordinates of adjacent object reference nodes can be constructed from the reference coordinate information (P, T): {(d1, V11, V12), (d2, V21, V22), (d3, V31, V32), ..., (dn, Vn1, Vn2)}, where d1 denotes the distance in the world coordinate system between object 1 (the left or upper end point of the first global feature description line) and object 2 (the right or lower end point), V11 denotes the type information of object 1 of the first line and its spatial coordinates in the world coordinate system, and V12 denotes the type information of object 2 of the first line and its spatial coordinates in the world coordinate system. By analogy, dn denotes the distance in the world coordinate system between the two objects of the nth global feature description line, Vn1 denotes the type information and spatial coordinates of the first object of the nth line, and Vn2 those of the second object. The global feature description set is thus composed of several global feature description lines.
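As an illustration only (the patent itself presents no code), a minimal sketch of this construction under assumed names might look as follows; the Node fields, the build_feature_set helper, and the planar 2D coordinates are all hypothetical:

```python
import itertools
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """An object node: coordinate (x, y) plus type information T."""
    x: float
    y: float
    obj_type: str  # e.g. "door", "window", "lamp"

def build_feature_set(nodes, max_range=None):
    """Build a feature description set: one line (d, V1, V2) per node pair,
    where each V stores the endpoint's type information and coordinates,
    mirroring Global_feature = {(d1, V11, V12), ..., (dn, Vn1, Vn2)}."""
    lines = []
    for a, b in itertools.combinations(nodes, 2):
        d = math.hypot(a.x - b.x, a.y - b.y)
        if max_range is not None and d > max_range:
            continue  # keep only description lines within the preset range
        lines.append((d, (a.obj_type, (a.x, a.y)), (b.obj_type, (b.x, b.y))))
    return lines

# Usage: object reference nodes (P, T) extracted from the global map.
global_nodes = [Node(0.0, 0.0, "door"), Node(3.0, 0.0, "window"), Node(0.0, 4.0, "lamp")]
global_feature_set = build_feature_set(global_nodes, max_range=10.0)
```

The same helper is reused below for the local feature description set, since the two sets differ only in the coordinate frame and source of their nodes.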
S100, after start-up, controlling the positioning auxiliary sensor to scan the surrounding environment to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two non-collinear local feature description lines, and each local feature description line comprises the distance between two currently adjacent objects in the mobile robot coordinate system, the type information corresponding to each of the two objects, and the position coordinates of the two objects in the mobile robot coordinate system;
S200, comparing the local feature description set with the global feature description set acquired in advance, and calculating candidate coordinates of the mobile robot's current position according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two objects, and the spatial coordinates of the two objects in the world coordinate system;
S205, when the global map comprises a preset forbidden region, performing a traversal check on all candidate coordinates, and deleting any candidate coordinate that matches a position within the preset forbidden region;
S300, evaluating the matching degree of each candidate coordinate according to the feature information, and determining the candidate coordinate with the highest score as the global coordinate of the mobile robot.
Specifically, the global map contains the complete environment state or environment features and may include a preset forbidden region. If it does, then after comparing the local feature description set with the pre-acquired global feature description set to obtain the candidate coordinates, the mobile robot performs a traversal check on all candidate coordinates and deletes in advance any candidate coordinate that matches a position within the preset forbidden region.
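A minimal sketch of this validity check, assuming forbidden regions are given as axis-aligned rectangles (the patent does not fix a representation):

```python
def filter_forbidden(candidates, forbidden_regions):
    """Drop candidate coordinates that fall inside any preset forbidden region.
    forbidden_regions: list of (xmin, ymin, xmax, ymax) rectangles; a real map
    might use polygons or an occupancy grid instead."""
    def in_region(pt, region):
        x, y = pt
        xmin, ymin, xmax, ymax = region
        return xmin <= x <= xmax and ymin <= y <= ymax
    return [c for c in candidates
            if not any(in_region(c, r) for r in forbidden_regions)]
```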
The parts of this embodiment that are the same as those of the above embodiment are not described in detail here. By deleting the candidate coordinates that match positions within the preset forbidden region, this embodiment reduces the amount of computation needed when the mobile robot subsequently evaluates the matching degree of each candidate coordinate according to the feature information, which greatly accelerates the positioning calculation and improves the positioning efficiency of the mobile robot.
One embodiment of the present invention, as shown in FIG. 3, is a positioning method based on environmental feature description, including:
S110, triggering the positioning auxiliary sensor to work after start-up, and acquiring collected data from the positioning auxiliary sensor; the positioning auxiliary sensor comprises a visual sensor and/or a laser sensor;
In particular, the positioning auxiliary sensors include vision sensors, including but not limited to cameras and depth cameras, and laser sensors, including but not limited to single-line and multi-line lidar. After the mobile robot starts, it sends a signal to the positioning auxiliary sensor mounted on it, so that the sensor begins working and scans the surrounding environment to obtain collected data.
S120, performing object identification according to the collected data to obtain a local map, as well as the type information and position coordinates of each object in the local map;
Specifically, the mobile robot can perform object recognition on the collected data to obtain a local map.
When the collected data are laser data acquired from the laser sensor, the laser data are clustered and then subjected to contour recognition, so that the mobile robot can recognize the type information of each object within its scanning range. In addition, since the laser data include the returns reflected by each object in the robot's surroundings, the laser coordinates of a reference point of each object (such as a center point or a contour vertex) in the laser coordinate system can be obtained through an existing laser measurement algorithm, and the position coordinates of each object in the mobile robot coordinate system can then be obtained by conversion. The mobile robot coordinate system is established with a preset point (such as the robot's center point) as the origin; because the mounting position of the laser sensor on the robot is known and the laser coordinates of the object's reference point are known, the position coordinates of that reference point in the mobile robot coordinate system can be obtained by conversion.
When the collected data are image data acquired from the vision sensor, the image data are preprocessed and then subjected to image recognition, so that the mobile robot can recognize the type information of each object within its scanning range. In addition, since the image data include images of the objects in the robot's surroundings, the pixel coordinates of a reference point of each object (such as a center point or a contour vertex) in the pixel coordinate system can be obtained through an existing vision measurement algorithm, and the position coordinates of each object in the mobile robot coordinate system can then be obtained by conversion. Likewise, because the mounting position of the visual sensor on the robot is known and the pixel coordinates of the object's reference point are known, the position coordinates of that reference point in the mobile robot coordinate system can be obtained by conversion.
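For the planar laser case, this conversion is a rigid transform determined by the known mounting pose; a minimal sketch, with the mounting parameters as assumed inputs (the camera case would additionally involve the camera intrinsics and depth, which this sketch does not cover):

```python
import math

def sensor_to_robot(pt_sensor, mount_x, mount_y, mount_yaw):
    """Transform a 2D point from the sensor frame (e.g. the laser frame) into
    the mobile robot coordinate system, given the sensor's mounting pose
    (translation mount_x, mount_y and rotation mount_yaw) on the robot."""
    xs, ys = pt_sensor
    c, s = math.cos(mount_yaw), math.sin(mount_yaw)
    return (mount_x + c * xs - s * ys,
            mount_y + s * xs + c * ys)
```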
S130, generating corresponding object positioning nodes and their positioning coordinate information according to the position coordinates and type information of each object's position in the local map; the positioning coordinate information comprises position coordinates and type information;
Specifically, after start-up the mobile robot detects each object in its field of view using its object recognition capability, obtaining the position coordinate Q and type information T of each object relative to itself. That is, by combining SLAM mapping with object recognition, the type information of each object in the actual environment can be identified from the local map, so that M positioning coordinate records Info for objects of different types are stored for the SLAM local map. Each object forms an object positioning node; each object positioning node comprises the position coordinate Q and type information T of the object in the local map, and the positioning coordinate information corresponding to the object positioning node is represented as (Q, T).
S140, creating a local feature description set according to the positioning coordinate information corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes;
In addition, after the mobile robot performs object identification on the collected data to obtain the type information and position coordinates of each object, it can mark the point corresponding to each object in the local map according to the position coordinates, label each point with its type information to obtain the object positioning nodes, and then, taking all object positioning nodes as end points, connect them pairwise to obtain at least two non-collinear local feature description lines. The distance between the two end points (that is, objects) of the current local feature description line can be calculated from their position coordinates, and repeating this calculation for all local feature description lines yields a local feature description set composed of at least two non-collinear local feature description lines.
Specifically, the distance between two adjacent object positioning nodes in the mobile robot coordinate system can be calculated from the positioning coordinate information (Q, T) corresponding to the object positioning nodes. At least two non-collinear local feature description lines (which may be called edges) within a preset range can therefore be constructed from the positioning coordinate information (Q, T) corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes. Each local feature description line has two end points, and each end point is an object positioning node. That is, a local feature description set Local_feature containing the distances, type information, and position coordinates of adjacent object positioning nodes can be constructed from the positioning coordinate information (Q, T): {(tmp_d1, tmp_V11, tmp_V12), (tmp_d2, tmp_V21, tmp_V22), (tmp_d3, tmp_V31, tmp_V32), ..., (tmp_dm, tmp_Vm1, tmp_Vm2)}, where tmp_d1 denotes the distance in the mobile robot coordinate system between object 1 (the left or upper end point of the first local feature description line) and object 2 (the right or lower end point), tmp_V11 denotes the type information of object 1 of the first line and its position coordinates, and tmp_V12 denotes the type information of object 2 of the first line and its position coordinates. By analogy, tmp_dm denotes the distance in the mobile robot coordinate system between the two objects of the mth local feature description line, tmp_Vm1 denotes the type information and position coordinates of the first object of the mth line, and tmp_Vm2 those of the second object. The local feature description set is thus composed of at least two non-collinear local feature description lines.
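Under the same illustrative assumptions as the earlier sketch, the local set can be built with the identical helper; only the node source and coordinate frame differ:

```python
# Hypothetical continuation of the earlier sketch: object positioning nodes
# (Q, T) detected at start-up, with coordinates in the mobile robot frame.
local_nodes = [Node(1.2, -0.5, "door"), Node(2.8, 1.1, "lamp"), Node(-0.7, 2.0, "window")]
local_feature_set = build_feature_set(local_nodes)  # the (tmp_d, tmp_V, tmp_V) lines
```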
S210, comparing the current local feature description line with all global feature description lines in the global feature description set, and screening out the target global feature description lines whose type information matches all the type information included in the current local feature description line and whose length equals that of the current local feature description line;
Specifically, each local feature description line in the local feature description set connects object positioning nodes, and the positioning coordinate information corresponding to each object positioning node includes position coordinates and type information; each global feature description line in the global feature description set connects object reference nodes, and the reference coordinate information corresponding to each object reference node includes spatial coordinates and type information. Therefore, after the target type information corresponding to the two end points of the current local feature description line is obtained, all global feature description lines with the same type information and the same length are screened out as target global feature description lines. Illustratively, as shown in FIG. 4, the type information of the objects corresponding to points A and B is M1, the type information of the objects corresponding to points C and D is M2, and the distances between the objects' center points satisfy AC = BC = AD = BD. The current local feature description line L1 is created from the points corresponding to object A and object C. The global feature description set includes global feature description lines H1, H2, H3, H4, and H5: H1 is created from the points corresponding to objects B and C, H2 from objects B and E, H3 from objects B and D, H4 from objects A and F, and H5 from objects A and C. Since AC = BC = AD = BD, the target global feature description lines obtained by the above screening are H1, H4, and H5.
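A minimal sketch of this screening step, reusing the illustrative line format above; the distance tolerance tol is an assumption added to absorb sensor noise:

```python
def screen_targets(local_line, global_lines, tol=0.05):
    """Screen out target global feature description lines whose endpoint type
    information matches the local line's and whose length is equal (within tol)."""
    d, (t1, _), (t2, _) = local_line
    targets = []
    for g in global_lines:
        gd, (gt1, _), (gt2, _) = g
        if abs(gd - d) <= tol and sorted((t1, t2)) == sorted((gt1, gt2)):
            targets.append(g)
    return targets
```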
S220, comparing the next local feature description line with all global feature description lines in the global feature description set, and screening out the target global feature description lines corresponding to the next local feature description line, until all local feature description lines have been screened;
This continues in the same way until every local feature description line in the local feature description set has been screened, which is not described in detail here.
S230, calculating the candidate coordinates of the mobile robot corresponding to the current local feature description line according to the distance and position coordinates corresponding to the current local feature description line and the distance and spatial coordinates corresponding to its target global feature description lines, then switching to the next local feature description line, until all candidate coordinates of the mobile robot's current position have been calculated.
Specifically, continuing the above example, the mobile robot traverses each local feature description line in the local feature description set and compares it with all global feature description lines in the global feature description set. Each local feature description line includes the distance between two object positioning nodes, and each global feature description line includes the distance between two object reference nodes. Once a local feature description line has been matched with one or more target global feature description lines, candidate coordinates can be computed: because the local feature description set contains at least two local feature description lines, and two such lines are non-collinear, they determine a plane; and by the principle that, in a plane, the distances from a point to two known points determine at most two possible positions for that point, the candidate coordinates of the mobile robot under the current local feature description line are obtained from the distance and position coordinates corresponding to the current local feature description line together with the distance and spatial coordinates corresponding to each of its target global feature description lines. The number of candidate coordinates per local feature description line is therefore at most two. In the same way, the calculation switches to the next local feature description line until all candidate coordinates of the mobile robot's current position have been calculated and stored in a cache. For example, suppose the spatial coordinates of objects A, B, C, and D in the world coordinate system are PA (XA, YA, ZA), PB (XB, YB, ZB), PC (XC, YC, ZC), and PD (XD, YD, ZD), and their position coordinates in the mobile robot coordinate system are PA' (xA, yA, zA), PB' (xB, yB, zB), PC' (xC, yC, zC), and PD' (xD, yD, zD). Because a conversion relation exists between the mobile robot coordinate system and the world coordinate system (again by the two-points-from-two-distances principle in the plane), the mobile robot can compute all candidate coordinates of its current position by this conversion.
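The two-points-from-two-distances principle is the intersection of two circles. A minimal planar sketch, assuming p1 and p2 are the matched objects' world coordinates and r1 and r2 are the robot-to-object distances measured in the robot frame:

```python
import math

def candidate_positions(p1, p2, r1, r2):
    """Intersect the circles centered at p1 and p2 with radii r1 and r2.
    Returns 0, 1, or 2 candidate robot positions in world coordinates,
    matching the at-most-two candidates per description line."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                       # circles do not intersect
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    if h == 0:
        return [(xm, ym)]               # circles are tangent: one candidate
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(xm + ox, ym - oy), (xm - ox, ym + oy)]
```

Cross-checking the candidates produced by a second, non-collinear local feature description line is what allows the spurious mirror solution to be discarded.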
S310, analyzing the collected data acquired by the positioning auxiliary sensor to obtain the corresponding feature information;
Specifically, the mobile robot scans the surrounding environment through the positioning auxiliary sensor to obtain collected data, which include either or both of laser data and image data. The mobile robot analyzes the laser data to obtain the corresponding point cloud features, or analyzes the image data to obtain the corresponding image features; the feature information comprises point cloud features and/or image features. Point cloud features include but are not limited to the appearance and size features of an object. Image features include but are not limited to the appearance, size, and color features of an object. Appearance features include any one or more of shape, contour, dimension, and size. Color features include any one or more of color, texture, and gray value.
S320, when the feature information is a point cloud feature, comparing the point cloud features of the surrounding environment corresponding to each candidate coordinate with preset point cloud features to calculate the matching degree; and/or,
Specifically, the preset point cloud features are the point cloud features of the surrounding environment on the global map corresponding to each candidate coordinate. When the feature information is a point cloud feature, the mobile robot recognizes as many point cloud features of the external environment as possible (combining as many appearance features as possible), so that it obtains comprehensive and complete point cloud features of the surrounding environment corresponding to each candidate coordinate; it then compares the point cloud features of the surrounding environment on the local map corresponding to each candidate coordinate with the preset point cloud features, making the calculated matching degree of each candidate coordinate more accurate and reliable.
S330, when the feature information is an image feature, comparing the image features of the surrounding environment corresponding to each candidate coordinate with preset image features to calculate the matching degree;
Specifically, the preset image features are the image features of the surrounding environment on the global map corresponding to each candidate coordinate. When the feature information is an image feature, the mobile robot recognizes as many image features of the external environment as possible (combining as many appearance and color features as possible), so that it obtains comprehensive and complete image features of the surrounding environment corresponding to each candidate coordinate; it then compares the image features of the surrounding environment corresponding to each candidate coordinate with the preset image features, making the calculated matching degree of each candidate coordinate more accurate and reliable.
In addition, considering that the laser sensor is not affected by illumination changes while visual positioning performs well in complex environments, a positioning method based on fusing the visual sensor and the laser sensor is adopted: matching with data collected by the laser sensor compensates for visual positioning's sensitivity to illumination changes, and data collected by the visual sensor compensates for laser positioning's sensitivity to complex environments. Positioning the mobile robot with the laser sensor and visual sensor combined thus improves positioning accuracy and enables accurate autonomous navigation.
S340, determining the candidate coordinate with the highest matching degree value as the global coordinate of the mobile robot itself.
Specifically, after the mobile robot calculates the candidate coordinates, it evaluates the matching degree of each candidate coordinate according to the feature information, calculates the matching degree value between each candidate coordinate and the feature information, and then determines the candidate coordinate corresponding to the highest matching score as its own global coordinate.
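The patent does not specify a similarity metric; purely as an illustration, a cosine similarity over fixed-length feature vectors and a simple arg-max over candidates might be sketched as follows (both function names are assumptions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (point cloud or image
    descriptors); an assumed stand-in for the matching degree metric."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_candidate(candidates, score_fn):
    """Return the candidate coordinate with the highest matching degree
    (step S340); score_fn compares the features observed around a candidate
    with the preset features stored for the global map at that location."""
    return max(candidates, key=score_fn, default=None)
```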
In this embodiment, evaluating the matching degree of each candidate coordinate based on the feature information reduces the position error of the mobile robot's current position, and achieving global positioning from information about objects in the environment (including reference coordinate information and distances, and positioning coordinate information and distances) saves field-deployment workload without restricting convenience of use, improves the positioning accuracy and reliability of the mobile robot, and further improves its navigation performance. Moreover, the mobile robot no longer needs to be started at a fixed position as in the prior art, which improves its convenience of use and eases its adoption. In addition, there is no need to deploy identifiable, highly distinguishable labels such as two-dimensional codes, bar codes, or special patterns around the mobile robot's range of activity, which greatly reduces the field-deployment workload and the cost of accurate positioning; positioning can be achieved quickly from object information, which largely reduces the computation required for global positioning, greatly improves global positioning efficiency, and further improves the robot's convenience of use and ease of adoption.
An example of the present invention, as shown in FIG. 5, includes the following steps:
S1, constructing an object reference node for each object according to the object's position in the global map (that is, its spatial coordinates in the world coordinate system) and its type information, and calculating the distance between each pair of adjacent object reference nodes in the world coordinate system;
S2, constructing global feature description lines within a preset range from the spatial coordinates, type information, and distances, and forming a global feature description set from the global feature description lines;
S3, at the current moment, the mobile robot identifies the type information of each object in the current environment, calculates the position coordinates of each object relative to itself, constructs local feature description lines from the position coordinates, type information, and distances, and forms a local feature description set from the local feature description lines;
S4, traversing each local feature description line in the local feature description set and matching it against each global feature description line in the global feature description set; whenever the type information and distance match, calculating the candidate coordinates of the mobile robot in the current environment at the current moment;
S5, performing a validity check on all candidate coordinates, excluding those in a preset forbidden region, comparing the object features of the surrounding environment corresponding to each remaining candidate coordinate with the map to calculate the matching degree, and selecting the candidate coordinate with the highest value as the global coordinate of the mobile robot.
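Tying the illustrative sketches above together, an end-to-end version of steps S1 through S5 might read as follows; every helper name and the endpoint-correspondence handling are assumptions, not the patent's own API:

```python
import math

def global_localization(global_nodes, detected_nodes, forbidden_regions, score_fn):
    """Sketch of steps S1-S5 using the hypothetical helpers defined earlier."""
    global_lines = build_feature_set(global_nodes)      # S1-S2
    local_lines = build_feature_set(detected_nodes)     # S3
    candidates = []
    for loc in local_lines:                             # S4: match and solve
        _, (_, q1), (_, q2) = loc
        r1, r2 = math.hypot(*q1), math.hypot(*q2)       # robot-to-object distances
        for tgt in screen_targets(loc, global_lines):
            _, (_, p1), (_, p2) = tgt
            # Endpoint correspondence is not known a priori; try both orderings.
            candidates += candidate_positions(p1, p2, r1, r2)
            candidates += candidate_positions(p2, p1, r1, r2)
    candidates = filter_forbidden(candidates, forbidden_regions)   # S5: validity
    return best_candidate(candidates, score_fn)         # S5: matching degree
```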
One embodiment of the present invention, as shown in FIG. 6, is a mobile robot including:
A scanning processing module 10, configured to control the positioning auxiliary sensor to scan the surrounding environment after start-up to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two non-collinear local feature description lines, and each local feature description line comprises the distance between two currently adjacent objects in the mobile robot coordinate system, the type information corresponding to each of the two objects, and the position coordinates of the two objects in the mobile robot coordinate system;
A comparison calculation module 20, configured to compare the local feature description set with a global feature description set acquired in advance and calculate candidate coordinates of the mobile robot's current position according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two objects, and the spatial coordinates of the two objects in the world coordinate system;
And a calculation positioning module 30, configured to evaluate the matching degree of each candidate coordinate according to feature information and determine the candidate coordinate with the highest score as the global coordinate of the mobile robot.
further, the method also comprises the following steps:
And the deletion processing module is used for performing traversal check on all candidate coordinates when the global map comprises a preset forbidden region and deleting the candidate coordinates matched with any position on the preset forbidden region.
Further, the mobile robot also comprises:
The generating module is used for generating corresponding object reference nodes and their corresponding reference coordinate information according to the space coordinates and the type information of the position of each object in the global map; the reference coordinate information includes the space coordinates and the type information;
The creating module is used for creating a global feature description set according to the reference coordinate information corresponding to each object reference node and the distance between each pair of adjacent object reference nodes.
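The generating and creating modules can be pictured with the sketch below, which builds one description line per pair of adjacent object reference nodes. The text does not define adjacency, so connecting each node to its k nearest neighbours is an assumption, as are the function names and the dictionary representation (which matches the matching sketch given earlier):

```python
import math


def adjacent_pairs(nodes, k=2):
    # One plausible adjacency rule (assumed): connect each object
    # reference node to its k nearest neighbours.
    # nodes: list of (x, y, type_info) tuples.
    pairs = set()
    for i, (xi, yi, _) in enumerate(nodes):
        by_dist = sorted((math.hypot(xj - xi, yj - yi), j)
                         for j, (xj, yj, _) in enumerate(nodes) if j != i)
        for _, j in by_dist[:k]:
            pairs.add(tuple(sorted((i, j))))
    return pairs


def build_description_set(nodes, k=2):
    # One description line per adjacent pair: the distance, the two type
    # informations, and the two coordinates. The same routine would serve
    # for the local set, with robot-frame position coordinates as input.
    lines = []
    for i, j in adjacent_pairs(nodes, k):
        (x1, y1, t1), (x2, y2, t2) = nodes[i], nodes[j]
        lines.append({"dist": math.hypot(x2 - x1, y2 - y1),
                      "types": frozenset([t1, t2]),
                      "coords": ((x1, y1), (x2, y2))})
    return lines
```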
Further, the scanning processing module 10 includes:
The starting acquisition unit is used for triggering the positioning auxiliary sensor to work after startup and for acquiring data from the positioning auxiliary sensor; the positioning auxiliary sensor comprises a visual sensor and/or a laser sensor;
The first processing unit is used for carrying out object identification according to the acquired data to obtain a local map together with the type information and position coordinates of each object in the local map;
The generating unit is used for generating corresponding object positioning nodes and their corresponding positioning coordinate information according to the position coordinates and the type information of the position of each object in the local map, the positioning coordinate information comprising the position coordinates and the type information; and for creating a local feature description set according to the positioning coordinate information corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes.
Further, the comparison calculation module 20 includes:
The comparison unit is used for comparing the current local feature description line with all global feature description lines in the global feature description set, and screening out a target global feature description line whose type information is identical to all of the type information included in the current local feature description line and whose length is the same as that of the current local feature description line;
The comparison unit is further used for comparing the next local feature description line with all global feature description lines in the global feature description set, and screening out the target global feature description line corresponding to the next local feature description line, until all the local feature description lines have been screened;
The calculating unit is used for calculating the candidate coordinate of the mobile robot under the current local feature description line according to the distance and the position coordinates corresponding to the current local feature description line and the distance and the space coordinates corresponding to the target global feature description line corresponding to the current local feature description line, and then switching to the next local feature description line, until all candidate coordinates of the position of the mobile robot at the current moment have been calculated.
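Geometrically, the calculating unit recovers a two-dimensional rigid transform from one matched pair of endpoints: the local endpoints are expressed in the robot frame and the matched global endpoints in the world frame. The following sketch is one way to compute it (the names are hypothetical, and the endpoint correspondence is assumed to be resolved by the type information; when both objects share the same type, both assignments would have to be tried, yielding two candidates):

```python
import math


def candidate_pose(p1, p2, q1, q2):
    # p1, p2: endpoints of the local description line in the robot frame;
    # q1, q2: endpoints of the matched global line in the world frame.
    # Solving q = R(theta) * p + t gives t as a candidate position of
    # the robot in the world frame and theta as its heading.
    theta = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
             - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(theta), math.sin(theta)
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return tx, ty, theta
```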
Further, the calculation positioning module 30 includes:
The analysis unit is used for analyzing the data acquired by the positioning auxiliary sensor to obtain the corresponding feature information;
The point cloud feature matching unit is used for comparing, when the feature information is point cloud features, the point cloud features of the surrounding environment corresponding to each candidate coordinate with preset point cloud features to calculate the matching degree; and/or,
The image feature matching unit is used for comparing, when the feature information is image features, the image features of the surrounding environment corresponding to each candidate coordinate with preset image features to calculate the matching degree;
The determining unit is used for determining the candidate coordinate with the maximum matching degree value as the global coordinate of the mobile robot.
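A minimal sketch of the matching degree evaluation and the determining unit might read as follows. The inverse-distance score is one assumed metric among many (the text leaves the metric open), and the observed points are taken to be already re-projected into the world frame under the candidate pose being evaluated:

```python
import math


def match_score(observed_points, map_points):
    # Assumed metric: the closer each observed point (already transformed
    # into the world frame under the candidate pose) lies to some map
    # point, the higher the score.
    total = 0.0
    for ox, oy in observed_points:
        nearest = min(math.hypot(ox - mx, oy - my) for mx, my in map_points)
        total += 1.0 / (1.0 + nearest)
    return total


def select_global_coordinate(scored_candidates):
    # Determining unit: the candidate coordinate with the maximum
    # matching degree value becomes the robot's global coordinate.
    # scored_candidates: list of (candidate_pose, score) pairs.
    return max(scored_candidates, key=lambda cs: cs[1])[0]
```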
Specifically, this embodiment is a device embodiment corresponding to the foregoing method embodiment; for its specific effects, refer to the method embodiment, and details are not repeated herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of program modules is illustrated; in practical applications, the above functions may be allocated to different program modules as needed, that is, the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the functions described above. The program modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one processing unit; the integrated unit may be implemented in the form of hardware, or in the form of a software program unit. In addition, the specific names of the program modules are only used to distinguish them from one another and are not intended to limit the protection scope of the present application.
In an embodiment of the present invention, a storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed by the corresponding embodiments of the positioning method based on the environmental feature description. For example, the computer-readable storage medium may be a read-only memory (ROM), a random-access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
They may be implemented in program code executable by a computing device, so that they can be executed by the computing device; alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described or recited in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described apparatus/terminal device embodiments are merely illustrative; for instance, the division of modules or units is merely a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments of the present invention may also be implemented by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable storage medium can be increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (13)

1. A positioning method based on environmental feature description, characterized by comprising the following steps:
After the system is started, the positioning auxiliary sensor is controlled to scan the surrounding environment to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two local feature description lines which are not collinear, and the local feature description lines comprise the distance between two current adjacent objects in a mobile robot coordinate system, type information corresponding to the two current adjacent objects respectively and position coordinates of the two current adjacent objects in the mobile robot coordinate system;
Comparing the local feature description set with a global feature description set acquired in advance, and calculating the candidate coordinates of the current position of the mobile robot according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two adjacent objects, and the space coordinates of the two adjacent objects in the world coordinate system;
and evaluating the matching degree of each candidate coordinate according to the characteristic information, and determining the candidate coordinate with the maximum score as the global coordinate of the mobile robot.
2. The environmental feature description-based positioning method according to claim 1, wherein after the step of comparing the local feature description set with the global feature description set acquired in advance and calculating the candidate coordinates of the position of the mobile robot at the current moment according to the comparison result, and before the step of performing matching degree evaluation on each candidate coordinate according to the feature information and determining the candidate coordinate with the maximum score as the global coordinate of the mobile robot, the method comprises:
when the global map comprises a preset forbidden region, performing a traversal check on all candidate coordinates, and deleting the candidate coordinates matched with any position in the preset forbidden region.
3. The environmental feature description-based positioning method according to claim 1, wherein before controlling the positioning auxiliary sensor to scan the surrounding environment to obtain the local feature description set corresponding to the current moment, the method comprises the steps of:
generating corresponding object reference nodes and their corresponding reference coordinate information according to the space coordinates and the type information of the position of each object in the global map; the reference coordinate information comprises the space coordinates and the type information;
and creating the global feature description set according to the reference coordinate information corresponding to each object reference node and the distance between each pair of adjacent object reference nodes.
4. The environmental feature description-based positioning method according to claim 1, wherein the step of controlling the positioning auxiliary sensor, after activation, to scan the surrounding environment to obtain the local feature description set corresponding to the current moment includes the steps of:
triggering the positioning auxiliary sensor to work after startup, and acquiring data from the positioning auxiliary sensor; the positioning auxiliary sensor comprises a visual sensor and/or a laser sensor;
carrying out object identification according to the acquired data to obtain a local map together with the type information and position coordinates of each object in the local map;
generating corresponding object positioning nodes and corresponding positioning coordinate information thereof according to the position coordinates and the type information of the position of each object in the local map; the positioning coordinate information comprises position coordinates and type information;
and creating the local feature description set according to the positioning coordinate information corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes.
5. The environmental feature description-based positioning method according to claim 1, wherein the step of comparing the local feature description set with the global feature description set acquired in advance and calculating the candidate coordinates of the current position of the mobile robot according to the comparison result comprises:
comparing the current local feature description line with all global feature description lines in the global feature description set, and screening out a target global feature description line whose type information is identical to all of the type information included in the current local feature description line and whose length is the same as that of the current local feature description line;
comparing the next local feature description line with all global feature description lines in the global feature description set, and screening out a target global feature description line corresponding to the next local feature description line until all local feature description lines are screened;
and calculating the candidate coordinates of the mobile robot under the current local feature description line according to the distance and the position coordinates corresponding to the current local feature description line and the distance and the space coordinates corresponding to the target global feature description line corresponding to the current local feature description line, and switching to the next local feature description line, until all candidate coordinates of the position of the mobile robot at the current moment have been calculated.
6. The environmental feature description-based positioning method according to any one of claims 1 to 5, wherein the step of performing matching degree evaluation on each candidate coordinate according to the feature information and determining the candidate coordinate with the maximum score as the global coordinate of the mobile robot includes the steps of:
analyzing the data acquired by the positioning auxiliary sensor to obtain the corresponding feature information;
when the feature information is point cloud features, comparing the point cloud features of the surrounding environment corresponding to each candidate coordinate with preset point cloud features to calculate the matching degree; and/or,
when the feature information is image features, comparing the image features of the surrounding environment corresponding to each candidate coordinate with preset image features to calculate the matching degree;
and determining the candidate coordinate with the maximum matching degree value as the global coordinate of the mobile robot.
7. A mobile robot, comprising:
The scanning processing module is used for controlling the positioning auxiliary sensor, after startup, to scan the surrounding environment to obtain a local feature description set corresponding to the current moment; the local feature description set comprises at least two local feature description lines which are not collinear, and each local feature description line comprises the distance between two current adjacent objects in the mobile robot coordinate system, the type information corresponding to each of the two current adjacent objects, and the position coordinates of the two current adjacent objects in the mobile robot coordinate system;
The comparison calculation module is used for comparing the local feature description set with a global feature description set acquired in advance and then calculating the candidate coordinates of the position of the mobile robot at the current moment according to the comparison result; the global feature description set comprises a plurality of global feature description lines, and each global feature description line comprises the distance between two adjacent objects in the world coordinate system, the type information corresponding to each of the two adjacent objects, and the space coordinates of the two adjacent objects in the world coordinate system;
The calculation positioning module is used for performing matching degree evaluation on each candidate coordinate according to the feature information and determining the candidate coordinate with the maximum score as the global coordinate of the mobile robot.
8. The mobile robot of claim 7, further comprising:
The deletion processing module is used for performing a traversal check on all candidate coordinates when the global map comprises a preset forbidden region, and deleting the candidate coordinates matched with any position in the preset forbidden region.
9. The mobile robot of claim 7, further comprising:
The generating module is used for generating corresponding object reference nodes and their corresponding reference coordinate information according to the space coordinates and the type information of the position of each object in the global map; the reference coordinate information comprises the space coordinates and the type information;
The creating module is used for creating the global feature description set according to the reference coordinate information corresponding to each object reference node and the distance between each pair of adjacent object reference nodes.
10. The mobile robot of claim 7, wherein the scanning processing module comprises:
The starting acquisition unit is used for triggering the positioning auxiliary sensor to work after startup and for acquiring data from the positioning auxiliary sensor; the positioning auxiliary sensor comprises a visual sensor and/or a laser sensor;
The first processing unit is used for carrying out object identification according to the acquired data to obtain a local map together with the type information and position coordinates of each object in the local map;
The generating unit is used for generating corresponding object positioning nodes and their corresponding positioning coordinate information according to the position coordinates and the type information of the position of each object in the local map, the positioning coordinate information comprising the position coordinates and the type information; and for creating the local feature description set according to the positioning coordinate information corresponding to each object positioning node and the distance between each pair of adjacent object positioning nodes.
11. The mobile robot of claim 7, wherein the comparison calculation module comprises:
The comparison unit is used for comparing the current local feature description line with all global feature description lines in the global feature description set, and screening out a target global feature description line whose type information is identical to all of the type information included in the current local feature description line and whose length is the same as that of the current local feature description line;
The comparison unit is further configured to compare the next local feature description line with all global feature description lines in the global feature description set, and screen out the target global feature description line corresponding to the next local feature description line, until all the local feature description lines have been screened;
The calculating unit is used for calculating the candidate coordinate of the mobile robot under the current local feature description line according to the distance and the position coordinates corresponding to the current local feature description line and the distance and the space coordinates corresponding to the target global feature description line corresponding to the current local feature description line, and then switching to the next local feature description line, until all candidate coordinates of the position of the mobile robot at the current moment have been calculated.
12. The mobile robot of any one of claims 7 to 11, wherein the calculation positioning module comprises:
The analysis unit is used for analyzing the data acquired by the positioning auxiliary sensor to obtain the corresponding feature information;
The point cloud feature matching unit is used for comparing, when the feature information is point cloud features, the point cloud features of the surrounding environment corresponding to each candidate coordinate with preset point cloud features to calculate the matching degree; and/or,
The image feature matching unit is used for comparing, when the feature information is image features, the image features of the surrounding environment corresponding to each candidate coordinate with preset image features to calculate the matching degree;
The determining unit is used for determining the candidate coordinate with the maximum matching degree value as the global coordinate of the mobile robot.
13. A storage medium having stored therein at least one instruction, the instruction being loaded and executed by a processor to perform the operations performed by the environmental feature description-based positioning method according to any one of claims 1 to 6.
CN201911007542.0A 2019-10-22 2019-10-22 Positioning method based on environmental feature description, mobile robot and storage medium Active CN110579215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911007542.0A CN110579215B (en) 2019-10-22 2019-10-22 Positioning method based on environmental feature description, mobile robot and storage medium

Publications (2)

Publication Number Publication Date
CN110579215A true CN110579215A (en) 2019-12-17
CN110579215B CN110579215B (en) 2021-05-18

Family

ID=68815251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911007542.0A Active CN110579215B (en) 2019-10-22 2019-10-22 Positioning method based on environmental feature description, mobile robot and storage medium

Country Status (1)

Country Link
CN (1) CN110579215B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298971A (en) * 2014-09-28 2015-01-21 北京理工大学 Method for identifying objects in 3D point cloud data
CN105928505A (en) * 2016-04-19 2016-09-07 深圳市神州云海智能科技有限公司 Determination method and apparatus for position and orientation of mobile robot
CN107300919A (en) * 2017-06-22 2017-10-27 中国科学院深圳先进技术研究院 A kind of robot and its traveling control method
CN109074085A (en) * 2018-07-26 2018-12-21 深圳前海达闼云端智能科技有限公司 A kind of autonomous positioning and map method for building up, device and robot
US20190065863A1 (en) * 2017-08-23 2019-02-28 TuSimple Feature matching and correspondence refinement and 3d submap position refinement system and method for centimeter precision localization using camera-based submap and lidar-based global map

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111220148A (en) * 2020-01-21 2020-06-02 珊口(深圳)智能科技有限公司 Mobile robot positioning method, system and device and mobile robot
CN111546348A (en) * 2020-06-10 2020-08-18 上海有个机器人有限公司 Robot position calibration method and position calibration system
CN111928852A (en) * 2020-07-23 2020-11-13 武汉理工大学 Indoor robot positioning method and system based on LED position coding
CN111928852B (en) * 2020-07-23 2022-08-23 武汉理工大学 Indoor robot positioning method and system based on LED position coding
CN111814752A (en) * 2020-08-14 2020-10-23 上海木木聚枞机器人科技有限公司 Indoor positioning implementation method, server, intelligent mobile device and storage medium
CN111814752B (en) * 2020-08-14 2024-03-12 上海木木聚枞机器人科技有限公司 Indoor positioning realization method, server, intelligent mobile device and storage medium
CN112327312A (en) * 2020-10-28 2021-02-05 上海高仙自动化科技发展有限公司 Vehicle pose determining method and device, vehicle and storage medium
CN113609985A (en) * 2021-08-05 2021-11-05 诺亚机器人科技(上海)有限公司 Object pose detection method, detection device, robot and storage medium
CN113609985B (en) * 2021-08-05 2024-02-23 诺亚机器人科技(上海)有限公司 Object pose detection method, detection device, robot and storable medium

Also Published As

Publication number Publication date
CN110579215B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN110579215B (en) Positioning method based on environmental feature description, mobile robot and storage medium
Kropp et al. Interior construction state recognition with 4D BIM registered image sequences
Xu et al. An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments
CN110114692B (en) Ground environment detection method and device
US20180174038A1 (en) Simultaneous localization and mapping with reinforcement learning
US20240092344A1 (en) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
US10433119B2 (en) Position determination device, position determining method, and storage medium
US10747634B2 (en) System and method for utilizing machine-readable codes for testing a communication network
CN108875804A (en) A kind of data processing method and relevant apparatus based on laser point cloud data
CN111123912B (en) Calibration method and device for travelling crane positioning coordinates
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN103162682A (en) Indoor path navigation method based on mixed reality
CN107527368B (en) Three-dimensional space attitude positioning method and device based on two-dimensional code
CN114842156A (en) Three-dimensional map construction method and device
CN113111144A (en) Room marking method and device and robot movement method
CN110443099B (en) Object identity recognition system and method for automatically recognizing object identity
Heya et al. Image processing based indoor localization system for assisting visually impaired people
Zhuang et al. Using scale coordination and semantic information for robust 3-D object recognition by a service robot
JP7160257B2 (en) Information processing device, information processing method, and program
Sadreddini et al. A distance measurement method using single camera for indoor environments
Naggar et al. A low cost indoor positioning system using computer vision
JP2007200364A (en) Stereo calibration apparatus and stereo image monitoring apparatus using the same
CN111354038A (en) Anchor object detection method and device, electronic equipment and storage medium
CN115657021A (en) Fire detection method for movable robot and movable robot
CN110412613A (en) Measurement method, mobile device, computer equipment and storage medium based on laser

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant after: Shanghai Zhihui Medical Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant before: SHANGHAI MROBOT TECHNOLOGY Co.,Ltd.

Address after: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant after: Shanghai zhihuilin Medical Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Applicant before: Shanghai Zhihui Medical Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 202150 room 205, zone W, second floor, building 3, No. 8, Xiushan Road, Chengqiao Town, Chongming District, Shanghai (Shanghai Chongming Industrial Park)

Patentee after: Shanghai Noah Wood Robot Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Patentee before: Shanghai zhihuilin Medical Technology Co.,Ltd.