CN117288183A - Angle-based robot repositioning method - Google Patents


Info

Publication number
CN117288183A
Authority
CN
China
Prior art keywords
robot
line segment
window
matched
target
Prior art date
Legal status
Pending
Application number
CN202210699893.8A
Other languages
Chinese (zh)
Inventor
李永勇 (Li Yongyong)
Current Assignee
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202210699893.8A
Publication of CN117288183A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/20 Instruments for performing navigational calculations
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an angle-based robot repositioning method comprising the following steps: the robot rotates at different rotation center positions within its current window and builds a target subgraph at each; it then takes a window adjacent to the current window as the next window and moves to it, until all windows have been traversed; the robot rotates a target subgraph and matches the fitted line segments of the rotated subgraph against those of the other target subgraphs by similarity of segment length; when the matching success rate between the fitted line segments of two target subgraphs reaches a first preset success rate, an angle error amount and an error coordinate offset are obtained, and the angle error amount is used to convert the robot's current rotation center position into a temporary repositioning position; once all target subgraphs have been matched, the collected error coordinate offsets are processed into a positioning coordinate compensation amount, which is used to correct the most recently obtained temporary repositioning position into the repositioning position.

Description

Angle-based robot repositioning method
Technical Field
The invention relates to the technical field of robot navigation, in particular to an angle-based robot repositioning method.
Background
SLAM (simultaneous localization and mapping) is a positioning and map reconstruction technology: by acquiring and fusing data from multiple sensors, a mobile robot gains the ability to perceive its own position and the surrounding environment. Machines on the market with a positioning function generally rely on a rotating lidar or on vision-assisted positioning; the corresponding cost is high, more chip resources must be allocated, and large amounts of data must be filtered and processed during positioning tasks. At present, some robots that rely on inertial navigation use a gyroscope for navigation and positioning. A gyroscope accumulates many errors while the robot travels, such as temperature drift, zero drift and random walk, so once the robot has been traveling for some time, the accumulated angle error of the gyroscope grows, and in a low-cost scheme this error cannot be eliminated. At the same time, the gyroscope's angle error easily causes the coordinate position marked in the map to deviate from the actual physical position; in particular, after the robot is randomly placed in an already traversed area, it is difficult for it to accurately locate itself again.
Disclosure of Invention
To address these technical defects, the invention discloses an angle-based robot repositioning method. Without determining the current position information and angle information in an inertial navigation system whose angle is measured by a gyroscope, the robot can recover its lost position anywhere in the global working area using only the fitting results of point cloud data collected by a single-point ranging sensor and the matching results between pairs of subgraphs. The specific technical scheme is as follows:
the method for repositioning the robot based on the angle comprises the following steps that an execution main body of the method for repositioning the robot is a robot fixedly provided with a single-point ranging sensor and a gyroscope; the single-point ranging sensor is used for collecting point cloud data of the environment where the robot is located and marking the point cloud data in a map, and the gyroscope is used for collecting angle information of the robot; the robot repositioning method comprises the following steps: step 1, a robot determines a window in which the robot is currently located, wherein at least one window exists in a global working area; step 2, the robot rotates at different rotation center positions in the window where the robot is currently located in sequence, and a target subgraph is built at each rotation center position; step 3, in the global working area, when the robot selects a window adjacent to the current window as a next window, the robot moves to the next window, updates the next window to the current window, and then repeatedly executes the step 2 until the robot traverses all windows in the global working area and builds a target subgraph; step 4, in the global working area, the robot controls the target subgraph to rotate, and then controls the fitting line segments in the rotated target subgraph to be matched with the fitting line segments in other target subgraphs in similarity of line segment lengths; step 5, when the matching success rate between the fitting line segments in the two target subgraphs reaches a first preset success rate, determining that the two target subgraphs are successfully matched, and obtaining an angle error amount and an error coordinate offset; processing the current rotation center position of the robot by using the angle error amount to obtain a temporary repositioning position; step 6, when the robot finishes matching all the target subgraphs, the robot performs average value processing on 
all the obtained error coordinate offsets to obtain a positioning coordinate compensation quantity; and correcting the newly obtained temporary repositioning position by using the positioning coordinate compensation quantity to obtain the repositioning position, and finishing repositioning of the robot in the global working area.
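The corrections in steps 5 and 6 can be sketched as follows. This is a hypothetical Python interpretation: the function names, and the choice of a rotation about the origin as the coordinate conversion by the angle error amount, are illustrative assumptions rather than the patent's exact transform.

```python
import math

def temporary_reposition(x, y, theta_deg, angle_error_deg):
    """Step 5 (sketch): convert the current rotation center position with
    the angle error amount and correct the initial pose angle."""
    t = math.radians(angle_error_deg)
    xr = x * math.cos(t) - y * math.sin(t)   # rotate coordinates by the error
    yr = x * math.sin(t) + y * math.cos(t)
    return xr, yr, theta_deg + angle_error_deg

def reposition(temp_xy, compensation):
    """Step 6 (sketch): correct the newest temporary repositioning position
    with the positioning coordinate compensation amount."""
    (x, y), (dx, dy) = temp_xy, compensation
    return (x + dx, y + dy)
```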
Further, for two target subgraphs in the global working area, the robot designates one as the reference target subgraph and the other as the target subgraph to be matched. Each time step 4 performs a segment-length similarity match between one fitted line segment in the reference target subgraph and one in the target subgraph to be matched, the target subgraph to be matched is rotated by a preset angle step: after it rotates by the preset angle step in the preset clockwise direction, the robot updates the rotated subgraph to be the target subgraph to be matched and again performs a segment-length similarity match between one fitted line segment in the reference target subgraph and one in the target subgraph to be matched. When the matching success rate between the fitted line segments of the reference target subgraph and of the target subgraph to be matched reaches the first preset success rate, the robot sets the angle through which the most recently updated target subgraph to be matched has rotated, relative to the original target subgraph to be matched, as the angle error amount; it then corrects the robot's initial pose angle at the current rotation center position by the angle error amount to obtain a temporary repositioning angle, and converts the coordinates of the current rotation center position with the angle error amount to obtain the coordinates of the temporary repositioning position. When the robot has finished matching the target subgraphs in all windows of the global working area, it updates the coordinates of the most recently obtained temporary repositioning position to be the coordinates of the repositioning position, and the most recently obtained temporary repositioning angle to be its initial pose angle at the repositioning position. The robot's initial pose angle at the current rotation center position is the angle between the robot's advancing direction at that position and the coordinate axis, measured before the robot rotates.
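The heuristic rotation search described above might be sketched as below. Pairing segments by midpoint proximity is an added assumption (the patent only specifies matching corresponding segments by length), and all names, tolerances and step sizes are illustrative:

```python
import math

def rotate_point(p, theta, center=(0.0, 0.0)):
    """Rotate point p by theta radians about center (counterclockwise)."""
    cx, cy = center
    x, y = p[0] - cx, p[1] - cy
    c, s = math.cos(theta), math.sin(theta)
    return (cx + x * c - y * s, cy + x * s + y * c)

def seg_length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def match_rate(ref_segs, cand_segs, pos_tol, len_tol):
    """A segment pair matches when their midpoints are close (the assumed
    notion of 'corresponding position') and their lengths agree."""
    matched = 0
    for r in ref_segs:
        rm, rl = midpoint(r), seg_length(r)
        for c in cand_segs:
            cm = midpoint(c)
            if (math.hypot(rm[0] - cm[0], rm[1] - cm[1]) <= pos_tol
                    and abs(rl - seg_length(c)) <= len_tol):
                matched += 1
                break
    return matched / len(ref_segs) if ref_segs else 0.0

def find_angle_error(ref_segs, cand_segs, step_deg=1.0,
                     success_rate=0.8, pos_tol=0.2, len_tol=0.1):
    """Rotate the subgraph to be matched clockwise in preset angle steps
    until the matching success rate is reached; the accumulated rotation is
    the angle error amount. Segments are ((x1, y1), (x2, y2)) pairs."""
    for k in range(int(round(360.0 / step_deg))):
        theta = -math.radians(k * step_deg)   # negative = clockwise
        rotated = [(rotate_point(a, theta), rotate_point(b, theta))
                   for a, b in cand_segs]
        if match_rate(ref_segs, rotated, pos_tol, len_tol) >= success_rate:
            return k * step_deg
    return None   # no rotation reached the first preset success rate
```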
Further, in step 1, the robot's position when executing step 1 is taken as the center of the current window; the lateral side of the current window is formed by extending a preset extension distance horizontally to the left of the center and a preset extension distance horizontally to the right of the center, where the preset extension distance equals half the maximum ranging distance of the single-point ranging sensor, so that several line feature subgraphs can be scanned within the current window. A window is a rectangular area framed in the map; it divides the global working area and limits the coverage of the line feature subgraphs the robot scans within the corresponding divided area.
Further, the windows adjacent to the current window comprise the windows adjacent to it above, below, to the left and to the right; each adjacent window has the same shape and the same size as the current window. For the upper adjacent window, the abscissa of each vertex equals the abscissa of the corresponding vertex of the current window, and the ordinate of each vertex exceeds the ordinate of the vertex in the same positional relation of the current window by the current window's longitudinal side length. For the lower adjacent window, the abscissa of each vertex equals the abscissa of the corresponding vertex of the current window, and the ordinate of each vertex of the current window exceeds the ordinate of the vertex in the same positional relation of the lower adjacent window by the current window's longitudinal side length. For the right adjacent window, the abscissa of each vertex exceeds the abscissa of the vertex in the same positional relation of the current window by the current window's lateral side length. For the left adjacent window, the abscissa of each vertex of the current window exceeds the abscissa of the vertex in the same positional relation of the left adjacent window by the current window's lateral side length. Here, the same positional relation means that the two vertices have the same relative position with respect to the centers of their respective windows.
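Under the vertex relations above, each adjacent window is simply the current window shifted by one full side length along one axis. A minimal sketch, tracking window centers only (the function name and representation are illustrative):

```python
def adjacent_windows(cx, cy, lateral, longitudinal):
    """Centers of the four windows adjacent to the current window.

    (cx, cy): center of the current window; lateral: side length along the
    x axis; longitudinal: side length along the y axis. Upper/lower
    neighbors share abscissas and differ by the longitudinal side length in
    ordinate; left/right neighbors share ordinates and differ by the
    lateral side length in abscissa."""
    return {
        "up":    (cx, cy + longitudinal),
        "down":  (cx, cy - longitudinal),
        "left":  (cx - lateral, cy),
        "right": (cx + lateral, cy),
    }
```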
Further, in step 2, the method for building the target subgraph comprises: as the robot rotates one full circle at a rotation center position, the single-point ranging sensor is controlled to collect point cloud data during the rotation; line segments of corresponding trend are then fitted within each angle range, and the portions of those fitted trend line segments lying inside the current window are set as a group of fitted line segments within the current window. That group of fitted line segments forms a line feature subgraph; the line feature subgraph is deemed scanned and is set as the target subgraph. The point cloud data comprise the coordinate information and the angle information of each position point scanned by the single-point ranging sensor. Each time a line feature subgraph is scanned, the robot also records its coordinates and initial pose angle at the rotation center position, where the coordinates of the rotation center position are relative coordinates with the robot's position in step 1 as the origin.
Further, in step 2, the method for building the target subgraph may instead comprise: the robot rotates a preset number of turns at each of the different rotation center positions within the current window in sequence, scanning that preset number of line feature subgraphs at each rotation center position, and then merges the line feature subgraphs scanned at each rotation center position into one corresponding target subgraph; one target subgraph is merged per rotation center position. As the robot rotates one full circle at a rotation center position, the single-point ranging sensor is controlled to collect point cloud data during the rotation; line segments of corresponding trend are then fitted within each angle range, the portions of those fitted trend line segments lying inside the current window are set as a group of fitted line segments within the current window, and that group of fitted line segments, which contains at least one fitted line segment, forms a scanned line feature subgraph.
Further, in step 2, the method for merging the preset number of scanned line feature subgraphs into a target subgraph at each rotation center position comprises: Step 21: the robot rotates the current turn in place at a rotation center position and fits the point cloud data collected during that turn into a group of fitted line segments within the current window, which form a line feature subgraph. Step 22: judge whether the number of in-place turns completed at the rotation center position in step 21 equals the preset number of turns; if so, execute step 23, otherwise execute step 24. Step 23: the robot has scanned the preset number of line feature subgraphs and stops rotating in place at the rotation center position. While traversing the fitted line segments of the preset number of line feature subgraphs, the robot selects one line feature subgraph as the template subgraph, then judges in turn whether each fitted line segment in the template subgraph completely coincides with a fitted line segment in each of the other line feature subgraphs: if so, the two completely coinciding fitted line segments under judgment are set as the same fitted line segment in the target subgraph; otherwise, the two non-coinciding fitted line segments under judgment are set as two separate fitted line segments in the target subgraph. After the robot has compared every fitted line segment in the template subgraph with the fitted line segments of every other line feature subgraph, all fitted line segments of the target subgraph are obtained and the merging of the preset number of line feature subgraphs into the target subgraph is complete. Step 24: the robot updates the next turn to be the current turn and returns to step 22. Here, rotating in place at a rotation center position means rotating 360 degrees about that position, and the preset number of turns is set greater than 1.
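Step 23's merge rule (keep completely coinciding segments once, keep non-coinciding ones as distinct segments) can be sketched as follows, assuming segments are stored as endpoint pairs; the names and tolerance are illustrative:

```python
def merge_subgraphs(template, others, tol=1e-6):
    """Merge line feature subgraphs scanned at one rotation center into a
    target subgraph. Segments are ((x1, y1), (x2, y2)) tuples; segments
    that coincide within tol are treated as the same fitted line segment,
    all others are kept as separate fitted line segments."""
    def same(a, b):
        # Completely overlapping means both endpoints coincide within tol.
        return all(abs(p - q) <= tol
                   for pa, pb in zip(a, b) for p, q in zip(pa, pb))
    merged = list(template)
    for sub in others:
        for seg in sub:
            if not any(same(seg, m) for m in merged):
                merged.append(seg)
    return merged
```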
Further, the current window is rectangular; its lateral side length equals the maximum ranging distance of the single-point ranging sensor, its longitudinal side length is greater than or equal to the diameter of the robot body, and half the maximum ranging distance of the single-point ranging sensor equals the preset extension distance. The robot sets the center of the current window as the first rotation center position and the window's longitudinal central axis as the base line. It sets the two position points located half a preset extension distance from the window center, on the two symmetrical sides perpendicular to the base line, as the second and third rotation center positions respectively, and the two position points located a full preset extension distance from the window center, on the two symmetrical sides perpendicular to the base line, as the fourth and fifth rotation center positions respectively. All five positions are rotation center positions, and the robot traverses each rotation center position in the current window in turn so as to scan several line feature subgraphs within the current window.
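Under the interpretation above (five centers along the horizontal line through the window center, perpendicular to the longitudinal base line), the rotation center positions could be computed as follows; the function name is illustrative:

```python
def rotation_centers(cx, cy, d_ext):
    """The five rotation center positions inside one window.

    (cx, cy): window center; d_ext: the preset extension distance, which
    equals half the sensor's maximum ranging distance. Off-center positions
    lie perpendicular to the longitudinal base line, at d_ext/2 and d_ext
    on either side of the center."""
    return [
        (cx, cy),                    # first rotation center position
        (cx - d_ext / 2.0, cy),      # second: half the extension distance
        (cx + d_ext / 2.0, cy),      # third: half, symmetric side
        (cx - d_ext, cy),            # fourth: full extension distance
        (cx + d_ext, cy),            # fifth: full, symmetric side
    ]
```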
Further, in step 2, whenever the robot detects that the length of a fitted trend line segment is greater than a preset fitting length threshold, the following applies: if the fitted trend line segment lies inside the current window, the robot sets it as a fitted line segment and marks it in the map, recording the coordinates of its start point and end point and its inclination angle, thereby determining and recording the fitted line segment. The inclination angle is represented by the angle between the fitted trend line segment and the coordinate axis. The criteria for judging two line segments to be different include differing start point coordinates, differing end point coordinates, or differing inclination angles. A fitted trend line segment is obtained by the robot fitting the point cloud data collected within the corresponding angle range into a target linear equation by the least squares method; the target linear equation represents the line segment of the corresponding trend.
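A minimal least-squares line fit consistent with this description is sketched below. The patent does not specify how the segment endpoints are chosen from the fitted target linear equation, so taking the extreme x-values of the point cloud is an assumption, and all names are illustrative:

```python
import math

def fit_segment(points, min_length):
    """Fit point cloud samples to a line segment by least squares.

    points: list of (x, y). Returns (start, end, inclination_deg), where
    the inclination is the angle between the fitted line and the x axis,
    or None when the fitted span does not exceed min_length (the preset
    fitting length threshold)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    if sxx == 0:                       # degenerate: vertical line
        ys = [p[1] for p in points]
        start, end = (mx, min(ys)), (mx, max(ys))
        angle = 90.0
    else:
        k = sxy / sxx                  # slope of the target linear equation
        xs = [p[0] for p in points]
        x0, x1 = min(xs), max(xs)
        start = (x0, my + k * (x0 - mx))
        end = (x1, my + k * (x1 - mx))
        angle = math.degrees(math.atan(k))
    length = math.hypot(end[0] - start[0], end[1] - start[1])
    return (start, end, angle) if length > min_length else None
```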
Further, in step 5, the matching success rate between the fitted line segments of two target subgraphs is the ratio of the number of successfully matched fitted line segment pairs to the number of all fitted line segment pairs participating in segment-length similarity matching. A fitted line segment pair participating in segment-length similarity matching consists of one fitted line segment from each of the two target subgraphs. Within the global working area, the group of two target subgraphs containing the two fitted line segments participating in segment-length similarity matching is a target subgraph pair; within one target subgraph pair, one target subgraph is set as the reference target subgraph and the other as the target subgraph to be matched, which is the target subgraph rotated in step 4.
Further, in step 4, the method by which the robot matches the fitted line segments of the rotated target subgraph against the fitted line segments of the other target subgraphs by segment length is: the robot matches each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph by similarity of segment length, as follows. Step 41: compute the absolute value of the difference between the length of one fitted line segment in the target subgraph to be matched and the length of one fitted line segment in the reference target subgraph. Step 42: if the absolute difference from step 41 is less than or equal to a preset length threshold, the fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph are deemed successfully matched in segment-length similarity, and the pair currently undergoing segment-length similarity matching is marked as a successfully matched fitted line segment pair; if the absolute difference from step 41 is greater than the preset length threshold, the match is deemed to have failed, and the pair currently undergoing segment-length similarity matching is marked as a failed fitted line segment pair.
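Steps 41 and 42 amount to a thresholded comparison of segment lengths at corresponding positions; a direct sketch, with illustrative names:

```python
def length_similarity_match(to_match, reference, length_threshold):
    """Steps 41-42 (sketch): compare fitted line segment lengths at
    corresponding positions.

    to_match / reference: lists of segment lengths (floats) at
    corresponding positions. Returns (success_rate, results), where
    results flags each fitted line segment pair as matched (True) or
    failed (False)."""
    results = []
    for a, b in zip(to_match, reference):
        # Step 42: success iff |difference| <= preset length threshold.
        results.append(abs(a - b) <= length_threshold)
    rate = sum(results) / len(results) if results else 0.0
    return rate, results
```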
Further, when the ratio of the number of successfully matched fitted line segment pairs to the number of all fitted line segment pairs participating in segment-length similarity matching is greater than or equal to the first preset success rate, the target subgraph to be matched and the reference target subgraph are deemed successfully matched and marked as a successfully matched target subgraph pair. Within the global working area, when the ratio of the number of successfully matched target subgraph pairs to the number of all target subgraph pairs participating in matching is greater than or equal to a second preset success rate, segment-length similarity matching between fitted line segments of target subgraphs to be matched and of reference target subgraphs is stopped; the target subgraph pairs participating in matching comprise any two different target subgraphs.
Further, before the robot matches each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph by segment length, or when it detects that the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in similarity matching is below the first preset success rate, the robot sets the rotation center position corresponding to the target subgraph to be matched as the offset start position; it then translates the target subgraph to be matched from the offset start position along a given coordinate axis direction in preset translation steps. After each translation, the translated subgraph is updated to be the target subgraph to be matched, and the robot executes steps 41 and 42.
Further, while the coordinate offset through which the target subgraph to be matched has translated from the offset start position in the same coordinate axis direction has not reached the maximum preset offset, after each translation by the preset translation step the robot judges whether the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in segment-length similarity matching is greater than or equal to the first preset success rate. If so, the target subgraph to be matched and the reference target subgraph are deemed successfully matched, the coordinate offset translated from the offset start position in the most recent translation direction is set as the error coordinate offset, and the translation is stopped. Otherwise, the robot changes the given coordinate axis direction to the opposite or a perpendicular direction, updates that as the given coordinate axis direction, and translates the target subgraph to be matched from the offset start position along it by the preset translation step. If the coordinate offset translated in the same coordinate axis direction from the offset start position has reached the maximum preset offset and the robot judges that the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in segment-length similarity matching is still below the first preset success rate, the robot likewise changes the given coordinate axis direction to the opposite or a perpendicular direction, updates it as the given coordinate axis direction, and translates the target subgraph to be matched from the offset start position along it by the preset translation step. Before the coordinate offset translated in the given coordinate axis direction reaches the maximum preset offset, each time the same target subgraph to be matched translates along that direction by the preset translation step, the translated subgraph is updated to be the target subgraph to be matched and steps 41 and 42 are executed; and each time the robot has matched every fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph, it judges whether the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in segment-length similarity matching is greater than or equal to the first preset success rate.
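The translation search can be sketched as follows. Trying the four axis directions in a fixed order is an assumption (the patent only says the direction is changed to the opposite or a perpendicular one), and the names are illustrative:

```python
def translation_search(to_match, reference, match_fn, success_rate,
                       step, max_offset):
    """Translate the subgraph to be matched from the offset start position
    along each axis direction in preset translation steps, up to the
    maximum preset offset, until the matching success rate is reached.

    Segments are ((x1, y1), (x2, y2)) tuples; match_fn(cand, ref) returns
    a success rate in [0, 1]. Returns the error coordinate offset (dx, dy)
    or None when no translation succeeds."""
    def shifted(segs, dx, dy):
        return [((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
                for (x1, y1), (x2, y2) in segs]
    # Given axis direction, its opposite, then the perpendicular pair.
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for ddx, ddy in directions:
        k = 0
        while abs(k * step) <= max_offset:   # stop at the maximum preset offset
            dx, dy = ddx * k * step, ddy * k * step
            if match_fn(shifted(to_match, dx, dy), reference) >= success_rate:
                return (dx, dy)
            k += 1
    return None
```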
Further, after the coordinate offsets through which the target subgraph to be matched has translated from the offset start position along all coordinate axis directions have reached the maximum preset offset, if the robot judges that the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in similarity matching is still below the first preset success rate, the match between the target subgraph to be matched and the reference target subgraph is deemed to have failed, and steps 4 and 5 are then executed with the failed target subgraph pair excluded. A preset maximum positioning error is associated with the maximum preset offset, which comprises a maximum preset offset amount in the horizontal axis direction and a maximum preset offset amount in the vertical axis direction.
Further, when the robot traverses all the target subgraphs in all the windows in the global working area, the robot obtains a plurality of error coordinate offsets, and if the robot judges that the number of the error coordinate offsets obtained at present is larger than the preset processing number, the error coordinate offset with the maximum abscissa value, the error coordinate offset with the minimum abscissa value, the error coordinate offset with the maximum ordinate value and the error coordinate offset with the minimum ordinate value are all removed from all the error coordinate offsets obtained at present; averaging the rest error coordinate offset to obtain an average coordinate offset; setting the average coordinate offset as the positioning coordinate compensation amount; wherein each error coordinate offset includes a horizontal axis error coordinate offset and a vertical axis error coordinate offset; each positioning coordinate compensation amount includes a horizontal axis positioning coordinate compensation amount and a vertical axis positioning coordinate compensation amount.
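The outlier trimming and averaging just described can be sketched as follows; the function and parameter names are illustrative, and the value of the preset processing number is an assumed parameter.

```python
def average_error_offset(error_offsets, preset_count=4):
    """Trim extreme error coordinate offsets, then average the rest to
    obtain the positioning coordinate compensation amount.

    error_offsets: list of (dx, dy) error coordinate offsets collected
    while traversing all target subgraphs in all windows.
    """
    offsets = list(error_offsets)
    if len(offsets) > preset_count:
        # Remove the offsets with the maximum/minimum abscissa value and
        # the maximum/minimum ordinate value, as the method describes.
        for key, pick in ((0, max), (0, min), (1, max), (1, min)):
            extreme = pick(offsets, key=lambda o: o[key])
            offsets.remove(extreme)
    if not offsets:
        return (0.0, 0.0)
    n = len(offsets)
    return (sum(o[0] for o in offsets) / n, sum(o[1] for o in offsets) / n)
```

Trimming the four extremes before averaging makes the compensation amount robust against a few badly mismatched subgraph pairs.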
The method has the advantages that under the condition that the current position coordinate information and the angle information of the robot are uncertain and the coordinate system of the constructed line feature subgraphs is not uniform, before the robot performs similarity matching on fitting line segments in two target subgraphs or in the line feature subgraphs (which can be equivalent to the target subgraphs), the robot also performs rotation transformation on one of the target subgraphs or the line feature subgraphs (which can be equivalent to the target subgraphs), positioning errors caused by an inertial sensor can be overcome in a heuristically rotating mode under the condition that the pose is uncertain, and similarly, the robot also performs merging processing on the line feature subgraphs formed by the fitting line segments through in-situ rotation, so that correspondingly distributed point cloud data in the merged target subgraphs are more uniform and dense, and the positioning accuracy of the robot using the target subgraphs is improved.
In the process of carrying out similarity matching of line segment lengths on fitting line segments in two target subgraphs or line characteristic subgraphs, the line segment lengths are only used for comparison, one fitting line segment pair with similarity meeting a certain ratio is marked firstly, then two target subgraphs with proper fitting line segment pairs meeting a certain ratio are marked, and in the process of subsequent repositioning, error coordinate offset for correcting or repositioning the position of a robot is obtained every time the two target subgraphs are successfully matched, and reasonable numerical upper limits are set for the number of successfully matched target subgraphs in each window, so that the calculation amount for calling the target subgraphs and the matching positioning is reduced.
Drawings
FIG. 1 is a flow chart of an angle-based robotic repositioning method disclosed in one embodiment of the invention.
Detailed Description
The following describes the technical solution in the embodiments of the present invention in detail with reference to the drawings in the embodiments of the present invention. For further illustration of the various embodiments, the invention is provided with accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments and together with the description serve to explain the principles of the embodiments. With reference to these matters, one of ordinary skill in the art will understand other possible embodiments and advantages of the present invention. A process or method may be depicted as a flowchart. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or simultaneously with other steps. Furthermore, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present application and in the foregoing figures, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. The implementation conditions used in the examples may be further adjusted according to the conditions of the specific manufacturer, and the implementation conditions not specified are generally those in routine experiments.
As an embodiment, the invention discloses an angle-based robot repositioning method, wherein an execution subject of the robot repositioning method is a robot fixedly provided with a single-point ranging sensor and a gyroscope, the single-point ranging sensor is used for collecting point cloud data of an environment where the robot is positioned and marking the point cloud data in a map, the single-point ranging sensor is not provided with a rotating mechanism of a laser radar, the single-point ranging sensor can be a TOF sensor, and the TOF sensor is fixedly arranged on one side of the robot; according to the embodiment, the robot body is required to rotate (generally 360-degree rotation) to drive the single-point ranging sensor fixedly arranged on the robot body to scan the surrounding environment, multi-contour discrete points (which can be regarded as position points) of the reaction environment and a point cloud image formed by the multi-contour discrete points are obtained, namely, the surrounding environment characteristics of the robot are collected to obtain point cloud data, the point cloud data are subjected to data processing to obtain relevant straight lines, and the straight lines are used for constructing the environment image (belonging to a map). The gyroscope is used for acquiring the rotation angle of the robot, when the robot rotates, the gyroscope and the single-point ranging sensor synchronously acquire data, and a map constructed by the robot also needs to be converted according to the angle acquired by the gyroscope, for example, the pose information included in the point cloud data is converted into a coordinate system;
Preferably, the single-point ranging sensor can be a TOF sensor, fixedly arranged on either of the two sides of the robot body, i.e. the left side or the right side; it can accurately measure the distance to an obstacle or wall on the side where the sensor is located while the robot walks, and can mark a straight line on the corresponding grid map. It is limited by the maximum ranging distance of the TOF sensor, that is, the effective ranging distance of the TOF sensor allows a certain point cloud error or precision in the process of scanning surrounding obstacles, and the sensor is configured to sample and record pose information at a certain frequency while the robot moves. Note that TOF stands for time of flight. A TOF sensor typically requires a specific artificial signal source for measurement, i.e. it calculates the distance between the emitter and the reflector by measuring the "time of flight" of an ultrasonic, microwave, or light signal between them. There are many kinds of TOF sensors; many measure distance by infrared or laser. The robot provided by this embodiment includes a sweeping robot that moves according to a plan, so that the sweeping robot can sweep in a preset direction.
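As an illustration of the time-of-flight principle (not code from the patent), the one-way distance follows from the round-trip flight time and the signal's propagation speed:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, for light-based (infrared/laser) TOF sensors

def tof_distance(round_trip_time_s, wave_speed=SPEED_OF_LIGHT):
    """One-way distance from a TOF measurement: the signal travels from
    the emitter to the reflector and back, so distance = speed * t / 2."""
    return wave_speed * round_trip_time_s / 2.0
```

For an ultrasonic TOF sensor, `wave_speed` would instead be the speed of sound (roughly 343 m/s in air).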
As an embodiment, as shown in fig. 1, the robot repositioning method includes:
step S1, a robot determines a window in which the robot is currently located, wherein the robot can adopt the window which is already constructed as the window in which the robot is currently located, or can construct a new window at the current position and set the new window as the window in which the robot is currently located; step S2 is then performed. The robot collects point cloud data through the single-point ranging sensor, the robot in the embodiment drives the single-point ranging sensor fixedly arranged on the body to scan the surrounding environment through in-situ rotation to obtain discrete points (which can be regarded as position points) of the outline of the surrounding environment, a point cloud subgraph is formed in a local area, namely, environmental features around the robot are collected to obtain the point cloud data, and as most of the points are discrete points, relevant straight-line segments obtained by carrying out data processing on the point cloud data later are not necessarily continuous straight-line segments. The robot also builds a current window in the map according to the maximum ranging distance of the single-point ranging sensor, and the robot can acquire the current window after starting to move in a global working area, wherein the current window comprises the current position of the robot so as to determine the position relation of the current window in the global working area and further determine the next window in the global working area which is traversed by the robot subsequently; at least one window exists in the global working area, and a plurality of line feature subgraphs are built in each window in advance and can be known from a corresponding map by the robot.
It should be noted that, in the repositioning disclosed in this embodiment, in the global working area, the robot is randomly placed in any traversed area, and then the position where the robot is placed is located back, where the working position before the robot is randomly placed is the repositioning position, that is, the position where the robot needs to be repositioned back; the pre-established local coordinate system is established by taking the direction corresponding to the initial pose angle when the robot starts as the coordinate axis direction, so that the pre-established local coordinate systems at all positions can be unified. Since the robot is randomly placed at a position and the robot does not know pose information of the position at present, whether the robot is abnormally shut down or not is uncertain, the angle information is uncertain, that is, the robot does not know current position coordinates and angle information, and the angle information recorded at the corresponding position of the pre-established local coordinate system cannot be matched with the current position of the robot. In some embodiments, the robot will set the repositioning position and/or the randomly placed position inside the window in which it is currently located.
S2, the robot rotates at different rotation center positions in the window where the robot is currently located in sequence, and a target subgraph is built at each rotation center position; after the robot traverses all the rotation center positions in the window where the robot is currently located and builds a corresponding target subgraph at each rotation center position, step S3 is executed. The target subgraph here may be a set of fitted line segments where the point cloud data fits various trends. After traversing all the rotation center positions in the window where the robot is currently located, the robot obtains a plurality of target subgraphs in the window where the robot is currently located, and then the robot merges corresponding target subgraphs at all the rotation center positions in one window and records the target subgraphs in the memory of the robot, wherein each rotation center position correspondingly merges one target subgraph, and more point cloud data (coordinate information and angle information of position points) or more uniform fitting line segments are added relative to a single line feature subgraph; step S3 is then performed.
In some embodiments, the robot rotates preset turns at different rotation center positions in the window where the robot is currently located in sequence, and scans line feature subgraphs with the number of preset turns at each rotation center position; the robot rotates in place for a preset circle at each rotation center position, the robot rotates for one circle at the rotation center position by taking 360 degrees as a rotation period, each circle scans out a line characteristic subgraph, a plurality of line characteristic subgraphs rotationally scanned out at one rotation center position can cover point clouds in the same area range, and one line characteristic subgraph which is scanned out first can be directly set as one target subgraph, namely one target subgraph constructed at the rotation center position.
In other embodiments, considering the point cloud noise factor, in the process that the robot rotates at a rotation center for a preset number of turns, the current circle of scanned line characteristic subgraphs covers new point clouds than the previous circle of scanned line characteristic subgraphs, and new fitting line segments are added, so that the plurality of line characteristic subgraphs are combined into a target subgraph, and the point clouds or the fitting line segments covered by the target subgraphs are more uniform and denser; merging the scanned line characteristic subgraphs with the number of preset turns into a target subgraph at each rotation center position; then traversing all rotation center positions in the window where the robot is positioned and merging corresponding target subgraphs at each rotation center position, specifically, in step S2, fitting the point cloud data collected by rotation in each circle of rotation of the robot at one rotation center position to obtain a corresponding group of fitting line segments in the window where the robot is positioned, wherein at least one fitting line segment exists in the group of fitting line segments; and then, correspondingly forming a line characteristic subgraph by each group of fitting line segments, repeatedly rotating for a plurality of circles to form a plurality of line characteristic subgraphs, and merging the plurality of line characteristic subgraphs into a target subgraph.
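A minimal sketch of merging several line feature subgraphs from repeated in-place rotations into one target subgraph follows; representing a fitted line segment as a pair of endpoint tuples is an assumption made for illustration.

```python
def merge_line_feature_subgraphs(subgraphs):
    """Merge the line feature subgraphs scanned over several in-place
    rotations at one rotation center into a single target subgraph.

    Each subgraph is a list of fitted line segments, here represented as
    hashable tuples ((x1, y1), (x2, y2)). Segments newly covered by a
    later rotation are added; segments already covered are kept once, so
    the merged target subgraph is denser but not redundant.
    """
    merged = []
    seen = set()
    for sub in subgraphs:
        for seg in sub:
            if seg not in seen:
                seen.add(seg)
                merged.append(seg)
    return merged
```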
Step S3, in the global working area, each time after the robot executes the step S2, the robot selects one window adjacent to the window where the robot is currently located as a next window, wherein the next window and the window where the robot is currently located do not have an overlapping area; the robot moves to the next window first, updates the next window to the window where the current window is located, and in order to avoid repeated traversal, the next window needs to be updated to the window where the current window is located on the premise that the fact that the next window is not traversed is detected; and then repeatedly executing the step S2 until the robot traverses all windows in the global working area and merges all target subgraphs, namely, the robot does not repeatedly scan all target subgraphs in each window in the global working area according to the step S2, and then executing the step S4. Wherein, a window adjacent to the window in which the window is currently located may be a window area having the same size as the window in which the window is currently located, and the adjacent two windows allow a certain interval.
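The traverse-every-window-once loop of steps S2 and S3 can be sketched as a breadth-first walk over adjacent windows. The `neighbors` callback is an assumed abstraction of "windows adjacent to the current window", and the breadth-first order is one possible strategy; the patent only requires that each window be visited once.

```python
from collections import deque

def traverse_windows(start, neighbors):
    """Visit every window in the global working area exactly once,
    always moving to an adjacent, not-yet-traversed window.

    neighbors(window): iterable of windows adjacent to `window` (same
    size, no overlapping area; a small gap between windows is allowed).
    Returns windows in the order they become the current window.
    """
    order = []
    visited = {start}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        order.append(current)  # step S2 (build target subgraphs) runs here
        for nxt in neighbors(current):
            if nxt not in visited:  # avoid repeated traversal
                visited.add(nxt)
                queue.append(nxt)
    return order
```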
S4, in the global working area, the robot controls the target subgraph to rotate, and then controls the fitting line segments in the rotated target subgraph to be matched with the fitting line segments in other target subgraphs in similarity of line segment lengths; step S5 is then performed. Wherein the rotation of the target subgraph is essentially the rotation of the points and line segments that make up the target subgraph. Because the robot needs to reposition back to the original lost position, in the embodiment, before the robot controls the fitting line segment to perform similarity matching of the line segment length, the robot first strives for a higher matching success rate by rotating the target subgraph to be matched by an angle, and the original lost position is recovered from the dimension of the angle in the coordinate system to which the target subgraph belongs.
In some embodiments, step S4 may be to rotate all the target subgraphs in each window at the same time, and then control the similarity matching of the line segment lengths between the fitting line segment in each rotated target subgraph and the fitting line segments in the other rotated target subgraphs, or control the similarity matching of the line segment lengths between the fitting line segment in each rotated target subgraph and the fitting line segment in the other non-rotated target subgraphs.
In other embodiments, the robot may rotate one of the target subgraphs, then perform similarity matching of the line segment lengths on the fitted line segment in the rotated target subgraph and the fitted line segment in the other target subgraphs in the global working area one by one, then select one target subgraph which is not rotated to perform rotation, and then perform similarity matching of the line segment lengths on the fitted line segment in the other target subgraphs in the global working area one by one, so repeating until the robot completes matching of any target subgraph in the global working area, wherein one target subgraph is a subgraph subjected to rotation operation in the two target subgraphs each time that performs similarity matching of the line segment lengths, and belongs to a line feature subgraph to be matched; the other target subgraph is used as a line characteristic subgraph with a reference function and belongs to a template type line characteristic subgraph, and the other target subgraph can be a target subgraph which is rotated in advance or a target subgraph which is not rotated in advance, or can be a target subgraph before rotation is selected, so that the pose angle and the coordinate calculated later are more accurate.
Specifically, in the global working area, specifically, in the previously established each window, the robot controls each fitting line segment in each target subgraph to perform similarity matching of line segment lengths with the fitting line segments at corresponding positions in other target subgraphs (which may be non-rotated or rotated), and performs similarity matching between line segment attribute position relationships, specifically, the coincidence matching degree (or similarity degree) between two fitting line segments in line segment lengths by using a one-to-one correspondence relationship between the fitting line segments in two different target subgraphs, wherein the relative position relationship (which direction of a line is at a point) of the two fitting line segments participating in the similarity matching of line segment lengths relative to the center of the window (the origin of the coordinate system to which the window belongs) is equivalent; or, the relative positional relationship (which direction the line is in) of the two fitting line segments participating in the similarity matching of the line segment lengths with respect to the rotation center position of the target subgraph where each is located is equivalent; alternatively, the two fitted line segments that participate in the similarity matching are parallel fitted line segments that are clustered to the two target subgraphs within the global work area. In order to pursue more comprehensive matching, the robot may also set two fitting line segments participating in similarity matching as any one fitting line segment in each of the target subgraphs and other target subgraphs in the same window, or the robot may set two fitting line segments participating in similarity matching as any one fitting line segment in each of the target subgraphs and any one fitting line segment in any of the target subgraphs in other windows. 
And matching the similarity between the line segment attribute position relations one by one among the fitting line segments in the same window or two target subgraphs in different windows to construct a one-to-one matching relation of the fitting line segments between any two target subgraphs in the global working area, and removing all fitting line segment pairs with lower similarity in the global working area at one time on the basis.
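A hedged sketch of comparing fitting line segments by length only, as described above: the relative-difference tolerance and the positional pairing by index are assumptions, since the patent does not fix a concrete threshold or pairing encoding.

```python
import math

def length_similarity(seg_a, seg_b, tolerance=0.2):
    """Compare two fitted line segments by length only. Lengths count as
    similar when their relative difference is within the tolerance."""
    def length(seg):
        (x1, y1), (x2, y2) = seg
        return math.hypot(x2 - x1, y2 - y1)
    la, lb = length(seg_a), length(seg_b)
    longer = max(la, lb)
    if longer == 0:
        return True
    return abs(la - lb) / longer <= tolerance

def match_subgraphs(to_match, reference, tolerance=0.2):
    """Pairwise length-similarity matching at corresponding positions:
    segment i in the subgraph to be matched is compared with segment i
    in the reference subgraph. Returns the ratio of successful pairs to
    all pairs participating in matching."""
    pairs = list(zip(to_match, reference))
    if not pairs:
        return 0.0
    ok = sum(1 for a, b in pairs if length_similarity(a, b, tolerance))
    return ok / len(pairs)
```

The ratio returned by `match_subgraphs` is what is compared against the first preset success rate in step S5.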
Step S5, in the global working area, when the matching success rate between the fitting line segments in the two target subgraphs reaches a first preset success rate, determining that the two target subgraphs are successfully matched, obtaining an angle error amount and an error coordinate offset, and processing the current rotation center position of the robot by using the angle error amount to obtain a temporary repositioning position; then step S6 is performed. The processing of the current rotation center position of the robot using the angle error amount corresponds to the rotation conversion of the coordinate information of the current rotation center position of the robot using the angle error amount, and may be the rotation conversion around the origin of the coordinate system of the target sub-graph where the current rotation center position of the robot is located. In this embodiment, the robot completes the similarity matching of the line segment length of one fitting line segment in one target sub-graph with the line segment length of one fitting line segment at the corresponding position in the other target sub-graph in the same window or in different windows, until all the fitting line segments for similarity matching of the line segment lengths in the two target sub-graphs are matched; because the matching success rate between the fitting line segments in the two target subgraphs is calculated, the matching success rate between the fitting line segments in the two target subgraphs is updated by repeatedly executing the step S5 until the robot matches all the fitting line segments which are subjected to similarity matching of the line segment lengths in the two target subgraphs, or the robot completes the similarity matching of the line segment lengths between each fitting line segment in one target subgraph and the fitting line segment at the corresponding position in the other target subgraph; then, if the matching success rate between the fitting 
line segments in the two target subgraphs reaches the first preset success rate, the robot can acquire the coordinates of a temporary repositioning position and the pose angle of the robot at the temporary repositioning position, and the similarity matching of the line segment lengths of the fitting line segments between the target subgraphs can then be stopped; after the robot matches all the fitting line segments whose line segment lengths are matched, if the matching success rate between the fitting line segments in the two target subgraphs never reaches the first preset success rate, the robot cannot obtain the coordinates of the temporary repositioning position and cannot obtain the pose angle at the temporary repositioning position, and then step S4 is executed again on the basis of excluding the target subgraph pair or fitting line segment pair which was not successfully matched, so that the target subgraphs which were rotated once when step S4 was last executed are controlled to rotate once more.
Specifically, the matching success rate between the fitting line segments in the two target subgraphs is the matching success rate calculated over multiple pairs of fitting line segments while the robot traverses the two target subgraphs; the first preset success rate is preferably 60%, and for two target subgraphs, if the ratio of the number of successfully matched fitting line segment pairs to the total number of fitting line segment pairs which participate in similarity matching is greater than or equal to 60%, the two target subgraphs are determined to be successfully matched. In the same matching stage, the robot obtains an error coordinate offset whenever the matching success rate between the fitting line segments in the two target subgraphs reaches the first preset success rate. In order to overcome the position error marked in the map, one of the two target subgraphs is matched in similarity against the fitting line segments of the other target subgraph after rotation and translation, so that the coordinate translation amount generated by the one (already rotated) target subgraph is calculated only when the line segment lengths of the two fitting line segments reach a certain degree of coincidence, and this coordinate translation amount, as the error amount, can be a single-step error or an accumulated error; the matching between the two target subgraphs can be understood as the similarity matching of line segment lengths between the fitting line segments in the two target subgraphs, or simplified as similarity matching between the two target subgraphs, and the number of successfully matched fitting line segments can be recorded at this point.
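The "rotation conversion around the origin" that step S5 applies to the robot's current rotation center position can be sketched as a standard 2-D rotation; the angle error amount is assumed here to be given in radians.

```python
import math

def apply_angle_error(position, angle_error_rad):
    """Rotate the current rotation center position about the origin of
    the target subgraph's coordinate system by the angle error amount,
    yielding the temporary repositioning position."""
    x, y = position
    c, s = math.cos(angle_error_rad), math.sin(angle_error_rad)
    return (x * c - y * s, x * s + y * c)
```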
Step S6, when the robot completes matching of all the target subgraphs, namely the robot traverses and completes matching of all the fitting line segments for similarity matching of the line segment lengths in any two target subgraphs in the global working area, the robot carries out average value processing on all the error coordinate offsets obtained in the step S5, and a positioning coordinate compensation quantity is obtained; all the error coordinate offsets obtained in the step S5 are error coordinate offsets correspondingly obtained when the matching of the two target subgraphs is successful, and each error coordinate offset comprises a horizontal axis error offset and a vertical axis error offset; when the error coordinate offset is obtained by translation in the horizontal axis direction, the vertical axis error offset is set to 0 and does not participate in the average value processing, and the horizontal axis error offset participates in the average value processing to obtain a positioning coordinate compensation quantity; when the error coordinate offset is obtained by translation in the vertical axis direction, setting the horizontal axis error offset to 0 and not participating in average value processing, and enabling the vertical axis error offset to participate in average value processing to obtain a positioning coordinate compensation quantity; then the robot uses the positioning coordinate compensation quantity to correct the current rotation center position of the robot obtained in the step S5, specifically to correct the coordinate of the current rotation center position of the robot obtained in the latest to obtain a repositioning position, including the horizontal and vertical axis coordinates; meanwhile, the robot updates the pose angle of the robot at the last obtained temporary repositioning position into the initial pose angle of the robot at the repositioning position, and the direction of the initial pose angle can be the direction of 
the current rotation center position of the robot, which is obtained by the latest step S5, to the repositioning position; at the moment, the robot stops matching the similarity of the line segment lengths of the fitting line segments between the target subgraphs; and finishing the relocation of the robot in the global working area.
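Step S6's axis-wise averaging and coordinate correction can be sketched as follows; tagging each error coordinate offset with the axis along which it was obtained is an assumed representation of "the offset obtained by translation in the horizontal (or vertical) axis direction".

```python
def positioning_compensation(error_offsets):
    """Average error coordinate offsets axis by axis.

    Each entry is (dx, dy, axis): axis is 'x' when the offset was
    obtained by translation along the horizontal axis (its vertical
    component is 0 and excluded from the vertical average), and 'y'
    when obtained along the vertical axis (horizontal component 0).
    """
    xs = [dx for dx, dy, axis in error_offsets if axis == 'x']
    ys = [dy for dx, dy, axis in error_offsets if axis == 'y']
    cx = sum(xs) / len(xs) if xs else 0.0
    cy = sum(ys) / len(ys) if ys else 0.0
    return (cx, cy)

def relocate(position, compensation):
    """Correct the last rotation center position coordinate-wise (by
    adding coaxial coordinates) to obtain the repositioning position."""
    return (position[0] + compensation[0], position[1] + compensation[1])
```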
Preferably, when the matching success rate between the target subgraphs in the global working area reaches a second preset success rate, obtaining a plurality of error coordinate offsets, determining that all windows in the global working area are traversed, and understanding that any two different target subgraphs in different windows or in the same window are matched in an exhaustive manner, namely, each target subgraph is matched with any other target subgraph, and then recording the matching success rate between the same target subgraph and the rest target subgraphs determined in the step S5 as the matching success rate (accumulation result) of the target subgraph; or, in order to reduce the calculation amount, setting all target subgraphs involved in matching as a fixed target subgraph and any other target subgraphs in a fixed window, and only performing similarity matching of line segment length once for the same pair of target subgraphs or the same pair of fitting line segments, excluding the pair of target subgraphs or the pair of fitting line segments which fail in the last matching, and completing the matching of all target subgraphs in the global working area with less calculation cost. 
Then, carrying out average value processing on all the obtained error coordinate offsets, specifically summing all the error coordinate offsets or the error coordinate offsets with representativeness to obtain an average value, and obtaining a positioning coordinate compensation quantity as an optimal error quantity; correcting the coordinates of the temporary repositioning position iteratively converted by the angle error amount by using the positioning coordinate compensation amount to obtain the coordinates of the repositioning position of the robot, wherein the correction can be performed in a form of adding coaxial coordinates to obtain the coordinate information of the repositioning position so as to finish one repositioning in the window where the current positioning coordinate compensation amount is positioned, and the correction belongs to one global positioning for finishing the pose of the robot; the window in which the current window is located belongs to a region which can be updated and transformed to a neighborhood. In this embodiment, the matching success rate between the target subgraphs increases with the matching success rate of every two target subgraphs in the global working area, where the matching success rate of the two target subgraphs is determined by the step S5; when the second preset success rate is preferably 60%, because there are few target subgraphs in the global working area, the robot traverses all the target subgraphs in the global working area to perform matching comparison, and if more than half of the paired matched target subgraphs can be found, global repositioning of the robot pose can be performed through step S6.
In summary, in a scene that the current position coordinate information, the angle information and the coordinate system of the constructed line feature sub-graph are not uniform, before the robot performs similarity matching on fitting line segments in two target sub-graphs or in the line feature sub-graphs (which can be equivalent to the target sub-graphs), the robot also performs rotation transformation on one of the target sub-graphs or the line feature sub-graphs (which can be equivalent to the target sub-graphs), so that the positioning error caused by an inertial sensor can be overcome in a heuristically rotating manner under the condition that the pose is uncertain, and in the same way, the robot also performs merging processing on the line feature sub-graphs formed by the fitting line segments through in-situ rotation, so that correspondingly distributed point cloud data in the merged target sub-graphs are more uniform and dense, and the positioning precision of the robot using the target sub-graphs is improved.
When performing line-segment-length similarity matching between fitted line segments of two target subgraphs or line feature subgraphs, only the segment lengths are compared. First, fitted line segment pairs whose length similarity satisfies a certain ratio are marked; then, pairs of target subgraphs containing a sufficient proportion of such matched line segment pairs are marked. During subsequent repositioning, every successful match of two target subgraphs yields an error coordinate offset for correcting or repositioning the robot position, and a reasonable upper limit is set on the number of successfully matched target subgraphs in each window, which reduces the computation spent on retrieving target subgraphs and on matching-based positioning.
As an embodiment, each time the robot executes step S4, for two target subgraphs taken from the currently located window or another window, or from two different windows, the robot designates one as the reference target subgraph and the other as the target subgraph to be matched; the windows are all rectangular local areas covering the global working area, which is formed by the union of these rectangular areas. Before step S4 performs line-segment-length similarity matching between a fitted line segment in the reference target subgraph and one in the target subgraph to be matched, the latter subgraph is rotated by a preset angle step; the rotation may be about its own center, about the center of its window, or about the center of the global working area. Rotating the target subgraph to be matched in fact means rotating the fitted line segments that compose it, likewise by the preset angle step. Each time the target subgraph to be matched rotates once by the preset angle step in the preset winding direction, the robot updates the rotated result as the new target subgraph to be matched; preferably, the preset winding direction is clockwise or anticlockwise, and the minimum rotation step configured for the target subgraph is 1 degree, i.e. the preset angle step is 1 degree. After each 1-degree rotation, one round of line-segment-length similarity matching is carried out between the fitted line segments of the rotated target subgraph to be matched and those of the reference target subgraph.
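The two primitives of this step, rotating a fitted line segment about a chosen center and comparing two segments by length only, can be sketched like this; the function names and the 0.9 similarity ratio are illustrative assumptions.

```python
import math

def rotate_segment(seg, angle_deg, center=(0.0, 0.0)):
    """Rotate one fitted line segment (a pair of endpoints) about a
    rotation center by angle_deg; segment length is preserved."""
    cx, cy = center
    a = math.radians(angle_deg)
    out = []
    for x, y in seg:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return tuple(out)

def length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def lengths_similar(seg_a, seg_b, ratio=0.9):
    """Length-only comparison between a fitted segment of the reference
    target sub-graph and one of the sub-graph to be matched: the
    shorter/longer length ratio must reach the given threshold."""
    la, lb = length(seg_a), length(seg_b)
    if la == 0 or lb == 0:
        return False
    return min(la, lb) / max(la, lb) >= ratio
```

Iterating `rotate_segment` in 1-degree steps over all segments of the subgraph to be matched, and calling `lengths_similar` against the reference subgraph after each step, reproduces the matching loop described above.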
Then, when the matching success rate between the fitted line segments of the reference target subgraph and those of the target subgraph to be matched reaches a first preset success rate, the total angle through which the most recently updated target subgraph to be matched has rotated relative to its original (unrotated) version is set as the angle error amount; this angle error amount corrects the robot's initial pose angle at the current rotation center position to give a temporary repositioning angle, and is also used to convert the coordinates of the robot's current rotation center position into the coordinates of a temporary repositioning position. Whenever the target subgraphs participating in matching are updated, including replacing a rotated subgraph with another target subgraph (from the same window or a different one), a new angle error amount is obtained once the matching success rate again reaches the first preset success rate, and the robot records it; the rotation is always measured relative to the original target subgraph to be matched (after several rotations, relative to the subgraph as it was before any rotation; the original may also be replaced by another target subgraph from the same window or from a different window). If the same target subgraph to be matched is rotated too many times in the same winding direction, the resulting angle error amount grows accordingly.
Each angle error amount obtained by the robot can correct the initial pose angle at the current rotation center position to give a new temporary repositioning angle, and can convert the coordinates of the current rotation center position to give the coordinates of a new temporary repositioning position; the initial pose angle at the current rotation center position itself stays fixed, and the robot records every temporary repositioning angle and temporary repositioning position obtained. Thus each new angle error amount triggers one correction of the initial pose angle and one conversion of the rotation center coordinates, updating the temporary repositioning angle once and the temporary repositioning position once. In this embodiment, once the robot has matched the target subgraphs of all windows in the global working area, it takes the most recently updated temporary repositioning position and temporary repositioning angle, sets the former as the coordinates of the repositioning position, and sets the latter as the initial pose angle at that repositioning position, thereby obtaining the repositioning result for the pose angle. The angular offset error accumulated by the sensor is thus overcome.
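A minimal sketch of one such update, assuming (since the patent does not specify the pivot of the coordinate conversion) that the rotation center coordinate is rotated about the map origin by the angle error amount; the function name and this pivot choice are assumptions.

```python
import math

def temporary_relocation(initial_pose_deg, center_xy, angle_error_deg,
                         pivot=(0.0, 0.0)):
    """Correct the fixed initial pose angle with the latest angle error
    amount, and rotate the current rotation-center coordinate by the
    same error about a pivot (here the map origin, an assumption)."""
    temp_angle = (initial_pose_deg + angle_error_deg) % 360.0
    a = math.radians(angle_error_deg)
    px, py = pivot
    x, y = center_xy
    dx, dy = x - px, y - py
    temp_xy = (px + dx * math.cos(a) - dy * math.sin(a),
               py + dx * math.sin(a) + dy * math.cos(a))
    return temp_angle, temp_xy
```

Calling this once per new angle error amount, and keeping only the last result after all windows are matched, mirrors the update rule of this embodiment.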
It should be noted that on each rotation of a target subgraph, the fitted line segments composing it rotate about the subgraph center (such as the rotation center position) by the preset angle step in the preset winding direction. A line feature subgraph is rotated in the same manner.
In some embodiments, after the robot has traversed all target subgraphs in all windows of the global working area, it averages all the obtained temporary repositioning angles to get a pose angle compensation amount, adds this compensation amount to the initial pose angle at the current rotation center position (the angle information not yet updated), and writes the sum back as the robot's initial pose angle there. This yields the robot's initial advancing direction at the current rotation center position, which is correspondingly set as the angle at the repositioning position in order to restore the original pose and improve the accuracy of pose calculation.
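The angle compensation in this variant can be sketched as a plain arithmetic mean, as the text describes; the function name is an assumption, and a simple mean (rather than a circular mean) is used because that is what the embodiment states.

```python
def pose_angle_compensation(initial_pose_deg, temp_angles_deg):
    """Average all recorded temporary repositioning angles to get the
    pose angle compensation amount, then add it to the (not yet
    updated) initial pose angle at the current rotation center."""
    if not temp_angles_deg:
        return initial_pose_deg
    compensation = sum(temp_angles_deg) / len(temp_angles_deg)
    return (initial_pose_deg + compensation) % 360.0
```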
As an embodiment, in step S2, each time the robot completes one turn at a rotation center position (rotating in place through 360 degrees), the single-point ranging sensor is controlled to collect point cloud data during the rotation, the point cloud data comprising the coordinate information and angle information of each position point scanned by the sensor. The position points may be points representing obstacle contours in the robot's surroundings, or points of the acquired point cloud. The robot fits the position points within each angle range to a line segment of the corresponding trend, forming several discretely distributed straight line segments inside the window. Each angle range is determined by the fitting model: when least squares fitting is used, the range follows from the parameters of the target straight-line fitting function, giving the segment of the corresponding trend; when the Hough transform is used to represent a straight line, the angle range of the angle between the polar line and the x axis of the coordinate system of the line feature subgraph or window is set in advance, so that the line segment to be fitted in each coordinate quadrant is determined and the segment of the corresponding trend is formed.
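The least squares fitting named here can be sketched for one angle range as an ordinary y = kx + b fit; the function name is an assumption, and near-vertical walls would need the x/y roles swapped, which this sketch omits.

```python
def fit_line_least_squares(points):
    """Fit y = k*x + b to the scanned position points of one angle
    range by least squares, returning the slope k and intercept b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx          # zero only for a vertical line
    k = (n * sxy - sx * sy) / denom
    b = (sy - k * sx) / n
    return k, b
```

For collinear points such as (0, 1), (1, 3), (2, 5) the fit recovers k = 2 and b = 1 exactly.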
The robot then takes the portions of the fitted line segments of corresponding trend that lie inside the currently located window as a group of fitted line segments of that window; the group comprises several fitted line segments whose length and slope meet the requirements. This group of fitted line segments forms a line feature subgraph, i.e. a set of fitted line segments in the map, completing the construction of one line feature subgraph in the currently located window; the line feature subgraph may be directly set as the target subgraph. Line feature subgraphs (or target subgraphs) in the other windows of the global working area are constructed by the same method, so that several groups of fitted line segments are assembled across the global working area. In this embodiment, each time the robot composes a line feature subgraph in a window of the global working area, it also records its coordinate information at the rotation center position and its initial pose angle information (the gyroscope-measured angle before the robot starts its turn at the rotation center position), so that the coordinates and angle of the repositioning position can later be compensated using the error coordinate offset and the angle error amount.
As an embodiment, in step S1, the method for constructing, in the map, the window in which the single-point ranging sensor is currently located according to its maximum ranging distance is as follows: take the robot's position when executing step S1 as the center of the currently located window, extend a first extension distance horizontally to the robot's left (the left of its advancing direction) and a second extension distance horizontally to its right (the right of its advancing direction), thereby forming the lateral side of the currently located window. The longitudinal side length of the window is preferably equal to its lateral side length, or at least greater than or equal to the robot body diameter, so that a local area is delimited in the map for local positioning of the robot. The map is a grid map pre-constructed by the robot, which may be built from the point cloud data collected by the TOF sensor and the angle information collected by the gyroscope. The first and second extension distances are each equal to half the maximum ranging distance of the single-point ranging sensor, so the lateral side length of the currently located window equals the maximum ranging distance; the window is rectangular in shape.
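The window construction of step S1 can be sketched as follows; the function name and the (x_min, y_min, x_max, y_max) rectangle representation are assumptions, and the longitudinal side is taken equal to the lateral one per the preferred option above.

```python
def build_window(robot_xy, max_range, body_diameter):
    """Build the rectangular window centered on the robot: the lateral
    side extends half the maximum ranging distance to each side (first
    and second extension distances), so its full length equals the
    maximum ranging distance; the longitudinal half-side is at least
    half the body diameter."""
    x, y = robot_xy
    half = max_range / 2.0                      # first/second extension distance
    half_v = max(half, body_diameter / 2.0)     # longitudinal half-side
    return (x - half, y - half_v, x + half, y + half_v)
```

With the preferred 4-meter maximum ranging distance, a robot at the origin gets the 4 m by 4 m window (-2, -2, 2, 2).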
Preferably, the maximum ranging distance of the single-point ranging sensor is 4 meters; the lateral side length of the currently located window is then equal to 4 meters, and each longitudinal side of the window is 2 meters away from the robot's current position in the lateral direction.
In the present invention, all windows are rectangular areas framed in the map; they partition the global working area into regions and limit the coverage of the line feature subgraph scanned by the robot within the corresponding region. Whether the currently located window or the next window, each window corresponds to one subarea of the global working area, which lets the robot traverse the global working area region by region in a preset traversal order. A window may be a concrete boundary frame, may be associated with the robot's actual position, or may be a rectangular area centered on the robot's position in each working cycle; reflected in the map, it serves as a regional landmark of the global working area, and point cloud information can also be searched in the area outside it. The window size is related to the maximum ranging distance of the single-point ranging sensor; the window conveniently frames the point cloud data collected by the TOF sensor and reflects the contour information of the environment, and one window can contain several line feature subgraphs.
As one embodiment, the windows constructed in the neighborhood of the currently located window comprise its upper, lower, left, and right adjacent windows; each has the same shape and the same size as the currently located window, so that together they form the four-neighborhood of one window on the map. Specifically, the abscissa of each point of the upper adjacent window equals the abscissa of the corresponding point of the currently located window, and the difference between the ordinate of each vertex of the upper adjacent window and the ordinate of the vertex in the same positional relationship of the currently located window equals the longitudinal side length of the currently located window. The abscissa of each point of the lower adjacent window equals the abscissa of the corresponding point of the currently located window, and the difference between the ordinate of each vertex of the currently located window and the ordinate of the vertex in the same positional relationship of the lower adjacent window equals the longitudinal side length of the currently located window.
The difference between the abscissa of each vertex of the right adjacent window and the abscissa of the vertex in the same positional relationship of the currently located window equals the lateral side length of the currently located window; the difference between the abscissa of each vertex of the currently located window and the abscissa of the vertex in the same positional relationship of the left adjacent window also equals the lateral side length, with the ordinates unchanged. Here, the same positional relationship means that the two vertices have the same relative position, in both direction and distance, with respect to the centers of their respective windows. Each such window is easy for the robot to traverse repeatedly as the next window; every time the robot begins to re-traverse a window it can reuse the associated information of the target subgraphs in that window, including the similarity matching status between the fitted line segments of each target subgraph and those of the other target subgraphs involved in step S4, as well as the matching success rate, angle error amount, and error coordinate offset of the fitted line segments of two target subgraphs in step S5, so that the robot's positioning in the corresponding window can be corrected and obtained directly.
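The vertex-coordinate relations above amount to shifting the window rectangle by one full side length in each of the four directions; a sketch, with the function name and (x_min, y_min, x_max, y_max) representation as assumptions:

```python
def neighbour_windows(window):
    """Four-neighborhood of a window (x_min, y_min, x_max, y_max):
    shift by the full longitudinal side length up/down and by the full
    lateral side length left/right, matching the vertex-coordinate
    differences described in the text."""
    x0, y0, x1, y1 = window
    w, h = x1 - x0, y1 - y0
    return {
        "up":    (x0, y0 + h, x1, y1 + h),
        "down":  (x0, y0 - h, x1, y1 - h),
        "left":  (x0 - w, y0, x1 - w, y1),
        "right": (x0 + w, y0, x1 + w, y1),
    }
```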
Preferably, the currently located window is rectangular; its lateral side length equals the maximum ranging distance of the single-point ranging sensor; its longitudinal side length is greater than or equal to the robot body diameter; and half of the maximum ranging distance equals the first extension distance. The robot sets the center of the currently located window as the first rotation center position, and sets the window's longitudinal central axis as the baseline, preferably with the window's longitudinal direction parallel to the robot's advancing direction. The robot sets the two position points located half a first extension distance from the window center, in the two symmetric directions perpendicular to the baseline, as the second-first rotation center position and the second-second rotation center position, lying to the left and right of the first rotation center position respectively. The robot likewise sets the two position points located a full first extension distance from the window center, in the two symmetric directions perpendicular to the baseline, as the third-first rotation center position and the third-second rotation center position; these lie to the left and right of the first rotation center position and may sit on the longitudinal edges of the currently located window, so only half of each target subgraph scanned at the third-first and third-second rotation center positions lies inside the currently located window, and the other half is negligible. It should be noted that the first, second-first, second-second, third-first, and third-second rotation center positions all belong to the rotation center positions; the robot traverses each rotation center position in the currently located window in turn, so as to construct several line feature subgraphs there, and then merges them into target subgraphs through the robot's rotational motion.
Preferably, the time interval between two consecutive rotations of the robot at rotation center positions is limited to a reasonable range, to avoid repeatedly collected point cloud data being used to construct the same line feature subgraph; alternatively, the distance between the positions of two consecutive rotations (in-place rotations for scanning line feature subgraphs) is limited to the effective ranging distance of the single-point ranging sensor, i.e. it is less than or equal to the maximum ranging distance. Corresponding to the embodiment above, the distances between rotation center positions include: the distance between the first rotation center position and a second rotation center position, the distance between the first rotation center position and a third rotation center position, and the distance between the third-first and third-second rotation center positions, which are respectively one quarter, one half, and the whole of the maximum ranging distance of the single-point ranging sensor.
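Putting the stated spacings together, the five rotation center positions can be laid out along the lateral axis through the window center as follows; the function name and this linear layout are assumptions reconstructed from the quarter/half/full spacings listed above.

```python
def rotation_centres(window_centre, max_range):
    """Lay out the five rotation center positions along the lateral
    axis: second-first/second-second a quarter of the maximum ranging
    distance from the window center, third-first/third-second half of
    it (on the longitudinal edges), per the stated spacings."""
    x, y = window_centre
    q = max_range / 4.0    # first <-> second spacing
    h = max_range / 2.0    # first <-> third spacing (= first extension distance)
    # Order: third-first, second-first, first, second-second, third-second.
    return [(x - h, y), (x - q, y), (x, y), (x + q, y), (x + h, y)]
```

Every pairwise spacing then stays within the maximum ranging distance, satisfying the constraint on consecutive rotation positions.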
As an embodiment, in step S2, the method for constructing the target subgraph is as follows: the robot rotates a preset number of turns at each of several rotation center positions in the currently located window in sequence, scans that number of line feature subgraphs at each rotation center position, and merges the line feature subgraphs scanned at each rotation center position into a corresponding target subgraph, so that one target subgraph is merged per rotation center position. Each time the robot completes one turn at a rotation center position, the single-point ranging sensor collects point cloud data during the rotation; line segments of the corresponding trend are then fitted within each angle range, the portions of those fitted segments inside the currently located window are set as a group of fitted line segments of that window, this group forms a line feature subgraph, and the scan of that line feature subgraph is complete; at least one fitted line segment exists in the group.
Specifically, considering point cloud noise, during the robot's preset number of turns at a rotation center position the line feature subgraph scanned in the current turn covers new point cloud relative to the previous turn and adds new fitted line segments; merging these line feature subgraphs into one target subgraph therefore makes the point cloud, and the fitted line segments it covers, more uniform and dense. At each rotation center position, the preset number of scanned line feature subgraphs is merged into one target subgraph; the robot then traverses all rotation center positions in the currently located window and merges a corresponding target subgraph at each. Concretely, in step S2, each turn of the robot at a rotation center position yields, by fitting the point cloud data collected during that turn, one group of fitted line segments in the currently located window, with at least one fitted line segment in the group; each group then forms one line feature subgraph, repeating the rotation for several turns forms several line feature subgraphs, and these are merged into one target subgraph.
It should be noted that, to construct the target subgraph in the corresponding window, each time the robot moves to a rotation center position (the rotation center position is known; the robot may have walked in from an unknown region) it fits several fitted line segments there, and all of them form a line feature subgraph, which corresponds to a set of fitted line segments in a local region, so the corresponding fitted line segments can be recorded in the map. In step S2, the rotation center positions may be arranged symmetrically within the currently located window. When the robot composes a target subgraph, besides recording the line feature subgraphs it must also record the rotation center position (including the robot's coordinates there and its initial pose angle, i.e. the angle on first arriving at the rotation center position) and the corresponding creation time. The rotation center position itself does not belong to the line feature subgraph scanned there, because it is not directly detected by the robot's TOF sensor and may not be covered by any fitted line segment.
On the basis of the above embodiment, in step S2, the method for merging the preset number of scanned line feature subgraphs into the target subgraph at each rotation center position comprises: Step 21, the robot rotates the current turn in place at a rotation center position, fits the point cloud data collected during this turn into a group of fitted line segments in the currently located window, and this group forms one line feature subgraph; the robot may use the least squares method to fit the point cloud data collected at the rotation center position into straight line segments of various trends, composing the line feature subgraph in the window; then step 22 is executed. Step 22, judge whether the number of in-place turns completed by the robot at the rotation center position of step 21 equals the preset number of turns; if yes, execute step 23, otherwise execute step 24.
Step 23, the robot has scanned the preset number of line feature subgraphs and stops rotating in place at the rotation center position; its front may turn back to the direction it pointed when it first arrived at the rotation center position, which corresponds to the robot's initial pose angle there. Then, while traversing the fitted line segments of the preset number of line feature subgraphs, the robot selects one of them as the template subgraph and judges in turn whether each fitted line segment of the template subgraph completely coincides with the corresponding fitted line segment of each remaining line feature subgraph. If so, the two completely coincident fitted line segments currently being judged are set as the same fitted line segment in the target subgraph; that is, fitted line segments of equal length passing through the same position points across the preset number of line feature subgraphs are merged into a single fitted line segment. Otherwise, the two fitted line segments currently being judged, which do not completely coincide, are set as two different fitted line segments in the target subgraph; the positional relationships of two non-coincident fitted line segments include, but are not limited to, parallel without overlapping, partially overlapping but of different lengths, and intersecting.
Because of point cloud noise, the fitted line segments in two line feature subgraphs scanned successively by the robot at the same rotation center position are not identical. For example, a fitted line segment from the first turn's point cloud may be slightly longer than the segment of the same relative position (with respect to the rotation center position) fitted from the second turn's point cloud (the straight line fitted from the second turn may be discontinuous, giving a shorter segment), or the first turn's fitted segment may be shifted by one grid along a coordinate axis direction relative to the second turn's (an accumulated position deviation error). Among the preset number of line feature subgraphs, any two fitted line segments that do not completely coincide are both added to the same target subgraph, which improves the consistency of the target subgraph's point cloud data and the uniformity of its fitted line segment distribution. After the robot has judged each fitted line segment of the template subgraph against the fitted line segments of the remaining line feature subgraphs, every fitted line segment of the target subgraph is obtained, and the preset number of line feature subgraphs has been merged into the target subgraph.
In some embodiments, the robot may judge, according to step 23 above, each fitted line segment of the template subgraph against the fitted line segment at the corresponding position in each remaining line feature subgraph; in every pair of fitted line segments taking part in the judgment of step 23, the relative positional relationship of the template subgraph's segment with respect to the rotation center position matches that of the remaining subgraph's segment with respect to the same rotation center position. Specifically, among the preset number of line feature subgraphs, as long as a fitted line segment in one subgraph and the segment with the same relative positional relationship in another subgraph do not completely coincide, they are treated as two fitted line segments, so that more uniform point cloud information and fitted line segments at more positions are added to one target subgraph; even if the two segments partially overlap, as long as their lengths differ they are still treated as two different fitted line segments. Here, two fitted line segments having the same relative positional relationship with respect to the same rotation center position means that the nearest distances from the rotation center position to the two segments are equal and that the two segments lie on the same side of the rotation center position; this restriction reduces the number of segment comparisons during the judgment.
Step 24, the robot updates the next turn as the current turn, rotates one more turn at the same rotation center position (fitting as in step 21), and then step 22 is executed again, thereby completing the preset number of in-place turns at the same rotation center position; one in-place rotation means the robot turns 360 degrees about the rotation center position, and the preset number of turns is set greater than 1. When the preset number of turns is preferably 3, the robot scans three line feature subgraphs in succession at the same rotation center position: a first, a second, and a third line feature subgraph, with the first set as the template subgraph. If the robot judges that the first fitted line segment of the template subgraph only partially overlaps the second fitted line segment at the corresponding position in the second line feature subgraph, the first and second fitted line segments are set as fitted line segments at two different positions in the target subgraph. If the robot then judges that the first fitted line segment of the template subgraph completely coincides with the third fitted line segment at the corresponding position in the third line feature subgraph, the first and third fitted line segments are set as the same fitted line segment in the target subgraph, which may be marked as its first fitted line segment. This is repeated until the robot has judged each fitted line segment of the template subgraph against the fitted line segments of the other two line feature subgraphs, every fitted line segment of the target subgraph is obtained, and the three line feature subgraphs have been merged into the target subgraph.
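The merge rule of steps 21 through 24 reduces to: keep completely coincident segments once, and keep both members of any pair that does not fully coincide. A sketch under that reading, with the function name, list-of-segments representation, and coincidence tolerance as assumptions:

```python
def merge_subgraphs(subgraphs, tol=1e-9):
    """Merge several line feature sub-graphs (each a list of segments,
    each segment a pair of endpoints) into one target sub-graph:
    segments that coincide completely (in either endpoint order) are
    kept once; all others are kept individually."""
    def same(a, b):
        flat = lambda s: [c for p in s for c in p]
        return (all(abs(u - v) <= tol for u, v in zip(flat(a), flat(b))) or
                all(abs(u - v) <= tol
                    for u, v in zip(flat(a), flat(tuple(reversed(b))))))
    merged = []
    for sub in subgraphs:
        for seg in sub:
            if not any(same(seg, kept) for kept in merged):
                merged.append(seg)
    return merged
```

In the three-turn example above, a segment repeated identically in the first and third subgraphs appears once in the merged target subgraph, while a partially overlapping variant from the second subgraph is kept as a separate segment.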
As an example of the robot correcting the measurement error of the relevant sensor, in step S2 the fitted line segment of the corresponding trend is also handled as follows: when the line segment of the corresponding trend fitted by the robot lies inside the window where the robot is located, if the robot detects that the length of this fitted line segment is greater than a preset fitting length threshold, the line segment is set as a fitted line segment and marked in the map, and its start point coordinates, end point coordinates and inclination angle (the angle formed with the horizontal axis of the coordinate system) are recorded, so that a fitted line segment is represented by the recorded information. Although a straight line (the corresponding straight-line fitting equation) is fitted, only the line segment of the corresponding length inside the window where the robot is located is cut out and taken as the fitted line segment, and is then added to the corresponding group of fitted line segments of that window.
When the fitted line segment of the corresponding trend extends from inside the current window to outside it, or from outside the current window to inside it (a longer fitted line segment passing through the current window), if the robot detects that the length of the portion intercepted by the current window is greater than the preset fitting length threshold, the intercepted portion is set as a fitted line segment and marked in the map, and the start point coordinates, end point coordinates and inclination angle (the angle formed with the horizontal axis of the coordinate system) of the intercepted portion are recorded, so that a fitted line segment is determined from the recorded information and added to the corresponding group of fitted line segments of the current window. Specifically, the inclination angle is expressed as the included angle formed by the fitted line segment of the corresponding trend and the coordinate axis.
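The window-interception step can be illustrated with standard Liang-Barsky clipping. The function names, the axis-aligned window tuple and the 1 m default are assumptions for the sketch, not the patent's code.

```python
def clip_to_window(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Liang-Barsky clipping of a fitted segment against the axis-aligned
    window; returns the clipped endpoints, or None if the segment misses."""
    dx, dy = x2 - x1, y2 - y1
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x1 - xmin), (dx, xmax - x1),
                 (-dy, y1 - ymin), (dy, ymax - y1)):
        if p == 0:
            if q < 0:          # parallel to this edge and outside it
                return None
        else:
            t = q / p
            if p < 0:          # entering through this edge
                if t > t1:
                    return None
                t0 = max(t0, t)
            else:              # leaving through this edge
                if t < t0:
                    return None
                t1 = min(t1, t)
    return (x1 + t0 * dx, y1 + t0 * dy, x1 + t1 * dx, y1 + t1 * dy)

def keep_fitted_segment(seg, window, min_len=1.0):
    """Keep the in-window portion only when its length exceeds the preset
    fitting length threshold (1 m in the text)."""
    clipped = clip_to_window(*seg, *window)
    if clipped is None:
        return None
    cx1, cy1, cx2, cy2 = clipped
    if ((cx2 - cx1) ** 2 + (cy2 - cy1) ** 2) ** 0.5 > min_len:
        return clipped
    return None
```

A segment fully inside the window clips to itself, so the same length test covers both cases described above.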
The preset fitting length threshold is preferably 1 meter. While the robot walks, a single-point ranging sensor (such as a TOF sensor) on its side can measure the distance to the wall on that side well, so a line segment with a characteristic trend is easily fitted in step S2. During fitting, position points far from the target fitted straight line need to be screened out to reduce the fitting error of the straight line; since the single-point ranging sensor measures distance directly, position points exceeding the maximum ranging distance are easily removed. After a straight-line equation is detected, the length of the fitted line segment is recorded and marked on the corresponding grid map, forming the fitted line segment in the window where the current position is located, so that shorter fitted line segments can be eliminated by their length before matching, improving the subsequent matching precision.
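A minimal two-pass least-squares sketch of this screening follows. The input format `(x, y, measured_range)`, and the `max_range` and `outlier_tol` values, are assumptions for illustration; the `y = a*x + b` form cannot represent vertical walls, for which a distance-angle (rho-theta) representation would be needed.

```python
def fit_line_least_squares(points, max_range=4.0, outlier_tol=0.05):
    """points: (x, y, measured_range) triples. Drop points beyond the sensor's
    maximum ranging distance, fit y = a*x + b by least squares, then refit
    after discarding points farther than outlier_tol from the first line."""
    pts = [(x, y) for x, y, r in points if r <= max_range]

    def fit(ps):
        n = len(ps)
        sx = sum(x for x, _ in ps)
        sy = sum(y for _, y in ps)
        sxx = sum(x * x for x, _ in ps)
        sxy = sum(x * y for x, y in ps)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        return a, (sy - a * sx) / n

    a, b = fit(pts)
    # point-to-line distance |a*x - y + b| / sqrt(a^2 + 1)
    kept = [(x, y) for x, y in pts
            if abs(a * x - y + b) / (a * a + 1) ** 0.5 <= outlier_tol]
    return fit(kept) if len(kept) >= 2 else (a, b)
```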
On the basis of the above embodiment, for step S2, while the TOF sensor of the robot detects the boundary of the same wall (detects the same straight line in the surrounding environment), the TOF sensor may detect that wall continuously as the robot moves along the wall boundary or walks an I-shaped path, but the robot does not simply keep recording the same straight line segment in the map; instead, whenever the boundary (straight line) of the same wall is detected, the included angle between the currently detected straight line and the prerecorded straight line segment of the same direction is compared, and if this angle is found to be greater than 2 degrees, an error in the angle measured by the gyroscope is determined and angle correction needs to be started. During walking, once the robot fits a straight line, it searches a preset database for the fitted line segment of the corresponding trend so as to calculate the included angle between the two. In this embodiment, the criteria by which two line segments are judged to be different include different start point coordinates, different end point coordinates, or different inclination angles; each line segment may also be converted into a representation using the nearest distance from the origin of the coordinate system to the line segment and the angle formed by the line segment with the horizontal axis of the coordinate system. The line segment of the corresponding trend fitted from the wall boundary is determined by a target straight-line equation obtained by fitting, with the least squares method, the point cloud data collected by the robot within the corresponding angle range; the target straight-line equation represents the line segment of the corresponding trend.
Therefore, each time the robot fits a new line segment from the wall boundary (relative to the line of the corresponding trend fitted last time), it calculates the included angle between the currently fitted line segment of the new trend and the prerecorded fitted line segment of the same trend (which can be obtained by retrieving the line data of the corresponding trend from the preset database; the previously recorded fitted line segment may carry an angle error). If the included angle formed by the currently fitted line segment of the new trend and the prerecorded fitted line segment of the same trend is greater than the preset fitting angle threshold, a weighted average is taken of the inclination angle of the currently fitted line segment of the new trend and the inclination angle of the prerecorded fitted line segment of the same trend, yielding a calibrated inclination angle. The included angle between the two is equal to the absolute value of the difference between their inclination angles, and the calibrated inclination angle is equal to the sum of the product of the inclination angle of the currently fitted line segment of the new trend and its weight, plus the product of the inclination angle of the prerecorded fitted line segment of the same trend and its weight; the calibrated inclination angle then replaces the inclination angle of the prerecorded fitted line segment of the same trend. Preferably, to obtain the calibrated inclination angle, the robot applies a weight of 70% to the inclination angle of the currently fitted line segment of the new trend and a weight of 30% to the inclination angle of the prerecorded fitted line segment of the same trend, and the preset fitting angle threshold is 2 degrees. In this embodiment, the inclination angle of the prerecorded fitted line segment of the same trend is the angle carrying the error to be corrected, so the weight given to the inclination angle of the currently fitted line segment of the new trend is relatively large, since the newly fitted line segment is more likely to match the actual environment; as a result, the direction of the subsequently recorded fitted line segment is closer to the direction of the detected contour line of the actual environment.
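The 70/30 weighted calibration reads as the following short function; a minimal sketch assuming angles are given in degrees, with the defaults taken from the preferred values above.

```python
def calibrate_inclination(new_angle, recorded_angle,
                          angle_threshold=2.0, w_new=0.7, w_old=0.3):
    """If the newly fitted inclination differs from the prerecorded one by
    more than the preset fitting angle threshold (2 degrees), return the
    weighted average (70% new, 30% recorded) as the calibrated inclination;
    otherwise keep the recorded angle unchanged."""
    if abs(new_angle - recorded_angle) > angle_threshold:
        return w_new * new_angle + w_old * recorded_angle
    return recorded_angle
```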
Preferably, for the same detection direction, the effective ranging position point farthest from the single-point ranging sensor is marked as the ranging end point of the single-point ranging sensor, which corresponds to the farthest obstacle contour point that can be detected. While the robot rotates one circle at the same position, in particular when it rotates in place at the preset rotation speed, the higher the acquisition frame rate of the single-point ranging sensor is, the smaller the distance between the ranging end points of two adjacent samples is, the denser the acquired point cloud data are, and the more accurate the obtained obstacle information is, so that positioning precision and accuracy can be guaranteed. Conversely, the lower the acquisition frame rate of the single-point ranging sensor is, the larger the distance between the ranging end points of two adjacent samples is, and the sparser the acquired point cloud data are. To meet the requirements of straight-line fitting and map matching, in this embodiment the distance between the ranging end points of two adjacent samples may be set equal to the arc length that a ranging end point sweeps when the robot rotates in place by one degree, which is related to the robot's body diameter, rotation speed, and/or the acquisition frame rate of the single-point ranging sensor.
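The relationship between frame rate, rotation speed and end point spacing is a simple arc-length computation; the function names and example values are illustrative, not from the patent.

```python
import math

def endpoint_spacing(range_m, rotation_speed_dps, frame_rate_hz):
    """Arc length swept between two consecutive samples by a ranging end
    point at distance range_m, for a robot spinning in place at
    rotation_speed_dps (degrees per second) sampled at frame_rate_hz."""
    delta_deg = rotation_speed_dps / frame_rate_hz
    return range_m * math.radians(delta_deg)

def rotation_speed_for_one_degree(frame_rate_hz):
    """Rotation speed (degrees/second) at which consecutive samples lie
    exactly one degree apart, matching the 'arc length per one degree of
    rotation' setting in the text."""
    return 1.0 * frame_rate_hz
```

For example, at a 30 Hz frame rate, spinning at 30 degrees per second yields one sample per degree; a wall 2 m away is then sampled roughly every 3.5 cm along its contour.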
As an embodiment, in step S5 the matching success rate between the fitted line segments of two target subgraphs is the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs that took part in similarity matching of line segment lengths. A fitted line segment pair taking part in similarity matching of line segment lengths consists of fitted line segments drawn from the two target subgraphs: one fitted line segment in one target subgraph, matched for line segment length similarity against any fitted line segment of the other target subgraph, forms one such pair; a successfully matched fitted line segment pair is two fitted line segments that match successfully during the similarity matching of line segment lengths in step S4, and is marked as one successfully matched fitted line segment pair. Preferably, to reduce the matching amount, the relative positional relationship of one segment of the pair with respect to the rotation center position of its target subgraph may be required to be equivalent to that of the other segment with respect to the rotation center position of the other target subgraph (in terms of the shortest distance from the rotation center position to the corresponding fitted line segment and the included angle of the fitted line segment with the horizontal axis of the coordinate system); specifically, the two fitted line segments of a pair taking part in similarity matching of line segment lengths are parallel to each other. In the global working area, the group of target subgraphs containing two fitted line segments that take part in similarity matching of line segment lengths is a target subgraph pair; a target subgraph pair may consist of any two target subgraphs in the global working area. Within one target subgraph pair, one target subgraph is set as the reference target subgraph and the other as the target subgraph to be matched, the latter being the target subgraph rotated in step S4. Since the two target subgraphs taking part in matching may be any two target subgraphs in the global working area, each target subgraph in each window of the global working area is traversed and matched, where matching is essentially short for the similarity matching of line segment lengths performed by the fitted line segments. To reduce the matching amount, the robot may fix one of the two target subgraphs taking part in matching as the target subgraph of the window where the robot is currently located and take the other as any other target subgraph, and match the same pair of target subgraphs or the same pair of fitted line segments only once, thereby completing the matching of all target subgraphs in the global working area.
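The "equivalent relative positional relationship" filter for forming candidate pairs can be sketched as follows; the segment tuples, tolerance values and function names are assumptions, and angle wraparound near 0/180 degrees is ignored for brevity.

```python
import math

def nearest_distance(seg, center):
    """Shortest distance from a rotation center to the segment's supporting
    line; seg = (x1, y1, x2, y2), center = (cx, cy)."""
    x1, y1, x2, y2 = seg
    cx, cy = center
    dx, dy = x2 - x1, y2 - y1
    return abs(dy * (cx - x1) - dx * (cy - y1)) / math.hypot(dx, dy)

def inclination_deg(seg):
    """Angle of the segment with the horizontal axis, folded into [0, 180)."""
    x1, y1, x2, y2 = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def candidate_pairs(sub_a, center_a, sub_b, center_b,
                    dist_tol=0.1, angle_tol=2.0):
    """Form fitted line segment pairs only when the relative positional
    relationship to the respective rotation centers is equivalent:
    near-equal nearest distances and near-parallel inclinations."""
    return [(a, b)
            for a in sub_a for b in sub_b
            if abs(nearest_distance(a, center_a)
                   - nearest_distance(b, center_b)) <= dist_tol
            and abs(inclination_deg(a) - inclination_deg(b)) <= angle_tol]
```

Pre-filtering pairs this way is what keeps the later length-similarity matching from comparing every segment against every other segment.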
On the basis of the above embodiment, for the two target subgraphs taking part in matching (one line feature subgraph pair in the same window), the robot sets one of them as the reference target subgraph and the other as the target subgraph to be matched, the latter being the target subgraph rotated in step S4; preferably, the target subgraph to be matched is a map region of the global working area outside the reference target subgraph.
Corresponding to step S4, when the coincidence degree in the line segment length dimension between a fitted line segment in the target subgraph to be matched and a fitted line segment at the corresponding position in the reference target subgraph is higher than the preset coincidence degree, the similarity matching of that fitted line segment in the target subgraph to be matched with the fitted line segment at the corresponding position in the reference target subgraph is determined to be successful, and the two are marked as a successfully matched fitted line segment pair. It should be noted that the robot needs to calculate, one by one, the coincidence degree in the line segment length dimension between each fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph. To expand the scope of the search area, the fitted line segment at the corresponding position in the reference target subgraph may be any fitted line segment in the reference target subgraph; alternatively, to reduce the calculation amount, it may be required to be parallel to the fitted line segment of the target subgraph to be matched that takes part in the similarity matching of line segment lengths.
After the robot has matched each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph according to step S4, the robot proceeds to step S5; when the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching of line segment lengths is greater than or equal to a first preset success rate, the target subgraph to be matched and the reference target subgraph are determined to match successfully and are marked as one successfully matched target subgraph pair, where the number of all fitted line segment pairs taking part in similarity matching of line segment lengths exceeds a certain number threshold. In some embodiments, the two fitted line segments taking part in similarity matching of line segment lengths are parallel fitted line segments belonging respectively to the target subgraph to be matched and the reference target subgraph. The number of all fitted line segment pairs taking part in similarity matching of line segment lengths is preferably equal to the number of fitted line segments within the target subgraph to be matched, or to the number of fitted line segments within the reference target subgraph.
Corresponding to step S6, after the robot has matched (possibly by traversal) all target subgraphs in the global working area, when the ratio of the number of all successfully matched target subgraph pairs to the number of all target subgraph pairs taking part in matching is greater than or equal to a second preset success rate, or the number of all target subgraph pairs taking part in matching exceeds a certain number threshold, it is determined that the robot has completed the similarity matching of the line segment lengths of all fitted line segments in the global working area, the matching of all target subgraphs is finished, and the successfully matched target subgraph pairs are obtained; this matching is sufficient for using all successfully matched target subgraph pairs to globally position the robot.
As an embodiment, in step S4, the method by which the robot controls the fitted line segments in the rotated target subgraph to perform similarity matching with the fitted line segments in the other target subgraph is as follows: the robot controls each fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph to perform similarity matching of line segment lengths. The method for this similarity matching comprises: step 41, calculating the absolute value of the difference between the length of a fitted line segment in the target subgraph to be matched and the length of a fitted line segment in the reference target subgraph, so as to measure through this difference the gap between the two fitted line segments in the line segment length dimension, and then executing step 42; step 42, when the absolute value of the difference in step 41 is smaller than or equal to a preset length threshold, determining that the fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph match successfully in the similarity matching of line segment lengths, and marking the two fitted line segments currently undergoing similarity matching of line segment lengths as one successfully matched fitted line segment pair; when the absolute value of the difference in step 41 is greater than the preset length threshold, determining that the fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph fail to match in the similarity matching of line segment lengths, and marking the fitted line segments currently matched for line segment length similarity as one failed fitted line segment pair. When the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching of line segment lengths is greater than or equal to the first preset success rate, the target subgraph to be matched and the reference target subgraph are determined to match successfully and marked as one successfully matched target subgraph pair; the first preset success rate is preferably 60%.
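Steps 41-42 and the 60% subgraph criterion amount to the comparison below. Pairing segments by corresponding index is a simplifying assumption for the sketch, as are the default thresholds.

```python
def match_subgraphs(to_match_lengths, reference_lengths,
                    length_threshold=0.2, first_success_rate=0.6):
    """Step 41: absolute length difference per pair; step 42: a pair matches
    when that difference is at most the preset length threshold. The subgraph
    pair matches when the success ratio reaches the first preset success rate
    (60% in the text). Returns (matched?, ratio)."""
    pairs = list(zip(to_match_lengths, reference_lengths))
    if not pairs:
        return False, 0.0
    ok = sum(1 for a, b in pairs if abs(a - b) <= length_threshold)
    ratio = ok / len(pairs)
    return ratio >= first_success_rate, ratio
```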
In some embodiments, the ratio of the absolute value of the difference between the length of a fitted line segment in the target subgraph to be matched and the length of a fitted line segment in the reference target subgraph, to the length of the reference line segment, is marked as the difference rate in the line segment length dimension; when the difference between the value 1 and this difference rate is greater than or equal to a preset coincidence ratio, the similarity matching of line segment lengths between the fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph is determined to be successful, and the two fitted line segments currently undergoing similarity matching of line segment lengths are marked as one successfully matched fitted line segment pair. The reference line segment is whichever of the two fitted line segments currently undergoing similarity matching of line segment lengths has the greater length; preferably, the preset coincidence ratio is set to 80%.
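This coincidence-rate variant of the length test can be written directly; the 80% default mirrors the preferred preset coincidence ratio.

```python
def length_coincidence(len_a, len_b, preset_coincidence=0.8):
    """Difference rate = |len_a - len_b| / reference length, where the
    reference is the longer of the two segments; the pair matches when
    1 - difference rate is at least the preset coincidence ratio."""
    diff_rate = abs(len_a - len_b) / max(len_a, len_b)
    return (1.0 - diff_rate) >= preset_coincidence
```

Unlike the fixed length threshold of step 42, this relative test scales with segment length, so a 10 cm mismatch passes for a 1 m wall but fails for a 30 cm fragment.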
In some embodiments, in the global working area, when the ratio of the number of all successfully matched target subgraph pairs to the number of all target subgraph pairs taking part in matching is greater than or equal to the second preset success rate, the similarity matching of line segment lengths between fitted line segments in the target subgraph to be matched and fitted line segments in the reference target subgraph is stopped, the traversal of the fitted line segments of any target subgraph in the global working area is stopped, and the target subgraph is controlled to stop rotating; here, the target subgraphs taking part in matching include any two different target subgraphs.
As an embodiment, before the robot performs similarity matching of line segment lengths between each fitted line segment in the target subgraph to be matched and the fitted line segment at the corresponding position in the reference target subgraph, the robot sets the rotation center position corresponding to the target subgraph to be matched as the offset starting point position; before the similarity matching of line segment lengths, the fitted line segments in the target subgraph to be matched may be translated, but the corresponding rotation center position is not translated, and the target subgraph to be matched is the target subgraph rotated in step S4. In some embodiments, on detecting that the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching of line segment lengths is smaller than the first preset success rate, the robot also sets the rotation center position corresponding to the target subgraph to be matched as the offset starting point position; at that point the robot has in fact already traversed and matched each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph. The robot then controls the target subgraph to be matched to translate from the offset starting point position along the preset coordinate axis direction by the preset translation step length; before the coordinate offset accumulated along the preset coordinate axis direction reaches the maximum preset offset, each translation by one step length also translates the corresponding fitted line segments in the translated target subgraph to be matched, and the translation direction is not limited to the positive or negative direction of the coordinate axis.
The mapping from the target subgraph to be matched onto the reference target subgraph is completed by translation along the coordinate axis directions, overcoming the influence of noise errors acquired by the sensors. In this embodiment, each time the same target subgraph to be matched translates along the given coordinate axis direction by the preset translation step length, the translated subgraph is updated to be the target subgraph to be matched, so that its fitted line segments can execute step 41 and step 42, controlling the fitted line segments in the target subgraph to be matched to perform similarity matching of line segment lengths with the fitted line segments at the corresponding positions in the reference target subgraph; during this similarity matching, the coincidence rate in the line segment length dimension (which can be regarded as the similarity of line segment lengths) needs to be calculated. Then, each time the robot has matched each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph, the robot judges whether the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching of line segment lengths is greater than or equal to the first preset success rate, i.e., it judges the matching condition of the target subgraph to be matched and the reference target subgraph after every translation.
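The translation loop then looks roughly like this; `match_fn`, the 1-unit step, the 10-unit bound and the axis direction order are assumptions for the sketch, and the function returns the signed error coordinate offset of the first placement that matches.

```python
def translate_segments(segments, dx, dy):
    """Shift every fitted segment (x1, y1, x2, y2) by (dx, dy)."""
    return [(x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            for x1, y1, x2, y2 in segments]

def translation_search(to_match, reference, match_fn, step=1.0,
                       max_offset=10.0,
                       directions=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """From the offset starting point, slide the to-be-matched subgraph one
    preset step at a time along one axis direction; on failure before the
    maximum preset offset is reached, switch to the opposite or perpendicular
    axis direction. Returns the signed error coordinate offset, or None."""
    if match_fn(to_match, reference):
        return (0.0, 0.0)
    for ux, uy in directions:
        n = 1
        while n * step <= max_offset:
            dx, dy = ux * n * step, uy * n * step
            if match_fn(translate_segments(to_match, dx, dy), reference):
                return (dx, dy)
            n += 1
    return None
```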
It should be noted that the maximum preset offset is preferably 10 grids; the maximum preset offset may be the ratio of a preset maximum positioning error to the preset side length of one grid, or the rounded result of that ratio, with the grid count as its unit. The preset maximum positioning error and the preset translation step length neither exceed the side length of a window nor the maximum ranging distance of the single-point ranging sensor; the preset maximum positioning error can be obtained from repeated comparison experiments on the sensing data of the TOF sensor and/or the gyroscope. To expand the scope of the search area within the global working area, the fitted line segment at the corresponding position in the reference target subgraph may be any fitted line segment in the reference target subgraph; alternatively, to reduce the calculation amount, it may be required to be parallel to the fitted line segment of the target subgraph to be matched that takes part in the similarity matching of line segment lengths.
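The grid-count conversion described here is a one-line computation; the example values (0.5 m error, 5 cm grids) are hypothetical, chosen only to reproduce the preferred 10-grid bound.

```python
def max_preset_offset_grids(max_positioning_error_m, grid_side_m):
    """Maximum preset offset in grid cells: the ratio of the preset maximum
    positioning error to the grid side length, rounded to a whole number of
    grids (the text prefers 10 grids)."""
    return round(max_positioning_error_m / grid_side_m)
```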
On the basis of the above embodiment, on the premise that the coordinate offset of a target subgraph to be matched translated from the offset starting point position along the same coordinate axis direction has not reached the maximum preset offset, each time the target subgraph to be matched translates by the preset translation step length and the robot has matched each of its fitted line segments against the fitted line segment at the corresponding position in the reference target subgraph, the robot judges whether the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching of line segment lengths is greater than or equal to the first preset success rate. If so, the target subgraph to be matched (which can be understood either as the untranslated target subgraph to be matched or as the translated one) and the reference target subgraph are determined to match successfully and are marked as one successfully matched target subgraph pair, and the coordinate offset by which the target subgraph to be matched has translated from the offset starting point position along the latest translation direction is set as the error coordinate offset, measured in preset grids and equal to an integer multiple of the translation step lengths performed in that direction. The error coordinate offset carries a sign related to the translation direction, which affects the coordinates of the robot's repositioning position during subsequent correction; at this point, the robot controls the target subgraph to be matched to stop translating.
It should be noted that the error coordinate offset is decomposed into a coordinate offset in the horizontal axis direction and a coordinate offset in the vertical axis direction, and may change as the translation direction changes: when the error coordinate offset is accumulated by translations of the target subgraph to be matched along the horizontal axis direction, the vertical axis error offset is set to 0; when it is accumulated by translations along the vertical axis direction, the horizontal axis error offset is set to 0.
On the premise that the coordinate offset of a target subgraph to be matched translated from the offset starting point position along the same coordinate axis direction has not reached the maximum preset offset, each time the target subgraph to be matched translates by the preset translation step length and the robot has matched each of its fitted line segments (or at least most of them) against the fitted line segments at the corresponding positions in the reference target subgraph, if the robot judges that the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching is smaller than the first preset success rate, the robot adjusts the set coordinate axis direction to the opposite or perpendicular direction, and then updates that opposite or perpendicular coordinate axis direction as the set coordinate axis direction. This opposite or perpendicular coordinate axis direction is then the latest translation direction adjusted by the robot, and may, for example, change from the positive direction of the x axis (horizontal axis) to the negative direction of the x axis, or from the positive direction of the x axis to the positive direction of the y axis (vertical axis), so as to control the target subgraph to be matched to translate toward a direction different from the previous coordinate axis direction and overcome the influence of position offset errors along the corresponding coordinate axis directions. The robot then controls the target subgraph to be matched (whose corresponding rotation center position is the offset starting point position) to translate by the preset translation step length from the offset starting point position along the updated coordinate axis direction, updates the translated subgraph to be the target subgraph to be matched, and repeatedly executes step 41 and step 42 until it has matched each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph; it then calculates the ratio of the number of all successfully matched fitted line segment pairs to the number of all fitted line segment pairs taking part in similarity matching, and decides whether to continue controlling the translation of the target subgraph to be matched along the coordinate axis direction by judging whether that ratio is greater than or equal to the first preset success rate.
It should be noted that, starting from the offset starting point position and before the coordinate offset translated along the set coordinate axis direction reaches the maximum preset offset, each time the target subgraph to be matched is translated once by the preset translation step length along that direction, the translated target subgraph to be matched is updated as the target subgraph to be matched, and step 41 and step 42 are then executed to calculate, in the new translation direction, the ratio of the number of all fitting line segment pairs successfully matched to the number of all fitting line segment pairs participating in similarity matching. Each time the robot has matched every fitting line segment in the target subgraph to be matched with the fitting line segment at the corresponding position in the reference target subgraph, it judges whether that ratio is greater than or equal to the first preset success rate.
If the coordinate offset of the target subgraph to be matched, translated from the offset starting point position along the same coordinate axis direction, has reached the maximum preset offset, then after the robot has matched each fitting line segment in the target subgraph to be matched with the fitting line segment at the corresponding position in the reference target subgraph, it judges whether the ratio of the number of all fitting line segment pairs successfully matched to the number of all fitting line segment pairs participating in similarity matching is greater than or equal to the first preset success rate. If so, the robot determines that the target subgraph to be matched (whether translated or not) is successfully matched with the reference target subgraph, and records the coordinate offset translated in the latest translation direction as an error coordinate offset; that is, the maximum preset offset is set as the error coordinate offset, one error coordinate offset is set for this target subgraph to be matched, and the error coordinate offset is not updated thereafter. The robot then controls the target subgraph to be matched to stop translating, and the line segment length similarity matching between each fitting line segment in the target subgraph to be matched and the fitting line segment at the corresponding position in the reference target subgraph is also finished.
If instead the robot judges that the ratio of the number of all fitting line segment pairs successfully matched to the number of all fitting line segment pairs participating in similarity matching is smaller than the first preset success rate, it adjusts the set coordinate axis direction to the opposite or perpendicular coordinate axis direction, updates that direction as the set coordinate axis direction, controls the original target subgraph to be matched to translate the preset translation step length from the offset starting point position along the updated set coordinate axis direction, updates the translated target subgraph to be matched as the target subgraph to be matched, and then executes step 41 and step 42. Similarly, in the updated set coordinate axis direction, the target subgraph to be matched starts from the same offset starting point position; before the coordinate offset translated along that direction reaches the maximum preset offset, each time the preset translation step length is translated, the translated target subgraph to be matched is updated as the target subgraph to be matched and step 41 and step 42 are executed; and each time the robot has matched every fitting line segment in the target subgraph to be matched with the fitting line segment at the corresponding position in the reference target subgraph, it judges whether the ratio of the number of all fitting line segment pairs successfully matched to the number of all fitting line segment pairs participating in line segment length similarity matching is greater than or equal to the first preset success rate.
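The translate-and-rematch procedure described above can be sketched as follows. This is a minimal illustration, not the patented implementation: a sub-graph is represented here as a list of fitted segments given by endpoint pairs, a segment pair counts as successfully matched when its endpoints nearly coincide (a positional stand-in for the line segment length similarity test, chosen so that translation visibly affects the score), and the numeric values of the preset translation step length, maximum preset offset, and first preset success rate are assumptions.

```python
import math

STEP = 0.05           # preset translation step length (assumed value)
MAX_OFFSET = 0.5      # maximum preset offset along one axis (assumed value)
SUCCESS_RATE = 0.8    # first preset success rate (assumed value)
# Four candidate translation directions: +x, -x, +y, -y.
DIRECTIONS = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]

def translate(subgraph, dx, dy):
    """Translate every fitted segment of the sub-graph by (dx, dy)."""
    return [((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
            for (x1, y1), (x2, y2) in subgraph]

def segs_match(a, b, tol=0.1):
    """Stand-in match test: corresponding endpoints nearly coincide."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    (a1, a2), (b1, b2) = a, b
    return ((dist(a1, b1) <= tol and dist(a2, b2) <= tol) or
            (dist(a1, b2) <= tol and dist(a2, b1) <= tol))

def match_ratio(subgraph, reference):
    """Ratio of successfully matched pairs to all pairs taking part."""
    pairs = list(zip(subgraph, reference))
    if not pairs:
        return 0.0
    return sum(segs_match(a, b) for a, b in pairs) / len(pairs)

def search_translation(subgraph, reference):
    """Translate along each axis direction in turn, one step at a time,
    until the match ratio reaches the first preset success rate; return
    the error coordinate offset, or None if every direction is exhausted."""
    for ux, uy in DIRECTIONS:
        offset = 0.0
        while offset <= MAX_OFFSET:
            moved = translate(subgraph, ux * offset, uy * offset)
            if match_ratio(moved, reference) >= SUCCESS_RATE:
                return (ux * offset, uy * offset)
            offset += STEP
    return None
```

For instance, a sub-graph copied from the reference and shifted by 0.2 along the x axis is recovered while sweeping the negative x direction, yielding an error coordinate offset within the matching tolerance of (-0.2, 0).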
On the basis of the foregoing embodiment, after the robot has controlled the original target subgraph to be matched to translate from the offset starting point position along all coordinate axis directions, and the coordinate offset translated along every coordinate axis direction has reached the maximum preset offset, if for every translation the ratio of the number of all fitting line segment pairs successfully matched to the number of all fitting line segment pairs participating in similarity matching remains smaller than the first preset success rate, the robot determines that the original target subgraph to be matched and the reference target subgraph fail to match.
On this basis, excluding the pair of target subgraphs that failed to match, the robot returns to execute step S4 and step S5 and controls the original target subgraph to be matched to rotate: if the original target subgraph to be matched was obtained by rotating a target subgraph in the previous execution of step S4, it continues to rotate once by the preset angle step length along the preset clockwise direction; the rotated target subgraph to be matched is updated as the target subgraph to be matched, and step S4 then performs line segment length similarity matching between each fitting line segment in the target subgraph to be matched and the fitting line segment at the corresponding position in the reference target subgraph. After the robot has matched each fitting line segment in the target subgraph to be matched with the fitting line segment at the corresponding position in the reference target subgraph, when step S5 determines that the matching success rate between the fitting line segments in the two target subgraphs (the target subgraph to be matched and the reference target subgraph) reaches the first preset success rate, the two target subgraphs are determined to be successfully matched, and the angle error amount and the error coordinate offset are obtained.
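The rotate-then-rematch retry of steps S4 and S5 can likewise be sketched. Again a hedged illustration: `is_match` is a caller-supplied stand-in for the full fitting line segment matching of steps S4 and S5, the 5-degree preset angle step length is an assumed value, rotation is taken about the origin as the rotation center position, and the sweep follows the preset clockwise direction over a full turn so that any angular error is eventually reached.

```python
import math

ANGLE_STEP = math.radians(5.0)   # preset angle step length (assumed value)

def rotate(subgraph, theta, center=(0.0, 0.0)):
    """Rotate every fitted segment by theta (counter-clockwise positive)
    about the given rotation center position."""
    cx, cy = center
    c, s = math.cos(theta), math.sin(theta)
    def rot(p):
        x, y = p[0] - cx, p[1] - cy
        return (cx + c * x - s * y, cy + s * x + c * y)
    return [(rot(p), rot(q)) for p, q in subgraph]

def search_rotation(subgraph, reference, is_match):
    """Rotate the sub-graph by the preset angle step along the preset
    clockwise direction until is_match succeeds; return the accumulated
    rotation (the angle error amount), or None after a full turn."""
    theta = 0.0
    while theta < 2.0 * math.pi:
        if is_match(rotate(subgraph, -theta), reference):
            return theta
        theta += ANGLE_STEP
    return None
```

For instance, a sub-graph built by rotating the reference 10 degrees counter-clockwise is matched after two clockwise steps, giving an angle error amount of 10 degrees.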
As an embodiment, in step S6, when the robot has traversed all the target subgraphs in all the windows of the global working area, it obtains a plurality of error coordinate offsets and determines that it has attempted to match every pair of target subgraphs in all the windows of the global working area. If, as in the foregoing embodiment, some target subgraph pairs failed to match during the repeated execution of step S4 and step S5, those pairs are excluded: the robot no longer matches such a pair, but still traverses the two target subgraphs it contains, because each target subgraph in a failed pair may still be successfully matched with other target subgraphs. If the robot judges that the number of error coordinate offsets currently obtained is greater than the preset processing number, it removes from all the error coordinate offsets currently obtained the error coordinate offset with the largest abscissa value, the one with the smallest abscissa value, the one with the largest ordinate value, and the one with the smallest ordinate value; it then averages the remaining error coordinate offsets to obtain an average coordinate offset, and sets the average coordinate offset as the positioning coordinate compensation amount. Each error coordinate offset includes a horizontal axis error coordinate offset and a vertical axis error coordinate offset; each positioning coordinate compensation amount includes a horizontal axis positioning coordinate compensation amount and a vertical axis positioning coordinate compensation amount.
Specifically, if the robot judges that the number of error coordinate offsets currently obtained is greater than the preset processing number, it removes the four extreme offsets listed above from all the error coordinate offsets currently obtained. Since each error coordinate offset includes a horizontal axis component and a vertical axis component, an error coordinate offset can be expressed as a coordinate value. The robot averages the horizontal axis components of the remaining error coordinate offsets (summing them and dividing by their count) to obtain a horizontal axis average coordinate value, and likewise averages the vertical axis components to obtain a vertical axis average coordinate value; the two average coordinate values together form the average coordinate offset of the remaining error coordinate offsets.
The robot then sets the average coordinate offset as the positioning coordinate compensation amount, where each positioning coordinate compensation amount includes a horizontal axis positioning coordinate compensation amount and a vertical axis positioning coordinate compensation amount and can be expressed as a coordinate value. The robot then adds the positioning coordinate compensation amount to the temporary repositioning position most recently obtained in step S5 to obtain the coordinates of the repositioning position: the horizontal axis positioning coordinate compensation amount is added to the horizontal axis coordinate of the temporary repositioning position, and the vertical axis positioning coordinate compensation amount is added to the vertical axis coordinate of the temporary repositioning position.
In some embodiments, if the robot determines that the number of error coordinate offsets currently obtained is less than or equal to the preset processing number, it directly averages all the error coordinate offsets currently obtained to obtain the average coordinate offset. The averaging principle is the same as in the above embodiment, except that the number of averaged data equals the number of all error coordinate offsets currently obtained, i.e., the number of their horizontal axis components, which equals the number of their vertical axis components, since each error coordinate offset is a coordinate value composed of one horizontal axis component and one vertical axis component.
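The trimming-and-averaging of step S6, and the final correction of the temporary repositioning position, can be sketched as follows. The guard that prevents the removal step from emptying a very short list is an added safety assumption not stated in the text; the default value 3 for the preset processing number matches the value given in this description; all function names are illustrative.

```python
def positioning_compensation(offsets, preset_processing_number=3):
    """offsets: list of (x, y) error coordinate offsets. If there are more
    than the preset processing number, drop the offsets with the largest and
    smallest abscissa and the largest and smallest ordinate, then average
    the rest per axis to get the positioning coordinate compensation amount."""
    data = list(offsets)
    # Guard (added assumption): only trim when at least one offset remains.
    if len(data) > preset_processing_number and len(data) > 4:
        for pick, key in ((max, lambda o: o[0]), (min, lambda o: o[0]),
                          (max, lambda o: o[1]), (min, lambda o: o[1])):
            data.remove(pick(data, key=key))
    n = len(data)
    return (sum(o[0] for o in data) / n, sum(o[1] for o in data) / n)

def reposition(temporary_position, compensation):
    """Add the per-axis compensation amounts to the temporary repositioning
    position to obtain the repositioning position."""
    return (temporary_position[0] + compensation[0],
            temporary_position[1] + compensation[1])
```

With seven error coordinate offsets of which four are the axis extremes, the four extremes are removed and the remaining three are averaged.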
It should be noted that, corresponding to the target subgraph matching process mentioned in the foregoing embodiments, the preset processing number is associated with the number of times the target subgraph to be matched is translated along a set coordinate axis direction from the offset starting point position, or the number of times the translation direction is changed. Since the plane coordinate system of the current window has four coordinate axis directions, namely the positive x-axis (transverse axis) direction, the negative x-axis direction, the positive y-axis (longitudinal axis) direction, and the negative y-axis direction, the preset processing number is set to the value 3 in order to reduce the accumulated offset error amount (position error).
In some embodiments, after the robot has traversed all the target subgraphs in the windows of the global working area, if the ratio of the number of all target subgraph pairs successfully matched to the number of all target subgraph pairs participating in matching is greater than or equal to a second preset success rate, the robot obtains a plurality of error coordinate offsets: each time a target subgraph pair is determined to be successfully matched in step S5, one error coordinate offset is set, so that by traversing all the line feature subgraphs in the global working area and performing similarity matching on all the fitting line segment pairs, the robot accumulates a number of error coordinate offsets equal to the number of all successfully matched target subgraph pairs. Each error coordinate offset includes a horizontal axis component (its coordinate value in the x-axis direction) and a vertical axis component (its coordinate value in the y-axis direction) and is expressed in coordinate form. In some embodiments, disregarding the sign differences caused by the translation direction, the horizontal axis components of all error coordinate offsets are equal in magnitude, as are the vertical axis components; therefore, in the ideal case without position error, the accumulated sum of all error coordinate offsets on each coordinate axis is the value 0.
In summary, for the matching of target subgraphs, matching judgment is performed using the coincidence degree between the line segment lengths of the fitting line segments and the number of fitting line segments meeting the corresponding coincidence rate, which reduces computational complexity and improves computation speed. To handle the position errors of the target subgraphs participating in matching, the target subgraph is controlled to rotate and then translate along each coordinate axis direction, and the matching judgment is repeated until the number of successfully matched fitting line segments reaches the preset matching success rate; this overcomes the interference of coordinate offset errors at various angles and extracts a plurality of error amounts, which are subsequently processed into the compensation amount for the robot's position coordinates. Specifically, each time two target subgraphs are successfully matched, one error coordinate offset for correcting or repositioning the robot's position is obtained, and a reasonable upper limit is set on the number of successfully matched target subgraphs in each window, reducing the computation needed for retrieving target subgraphs and performing matching positioning.
It should be noted that, in the several embodiments provided in the present application, the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (16)

1. The angle-based robot repositioning method is characterized in that an execution main body of the robot repositioning method is a robot fixedly provided with a single-point ranging sensor and a gyroscope; the single-point ranging sensor is used for collecting point cloud data of the environment where the robot is located and marking the point cloud data in a map, and the gyroscope is used for collecting angle information of the robot;
the robot repositioning method comprises the following steps:
step 1, a robot determines a window in which the robot is currently located, wherein at least one window exists in a global working area;
step 2, the robot rotates at different rotation center positions in the window where the robot is currently located in sequence, and a target subgraph is built at each rotation center position;
step 3, in the global working area, when the robot selects a window adjacent to the current window as a next window, the robot moves to the next window, updates the next window to the current window, and then repeatedly executes the step 2 until the robot traverses all windows in the global working area and builds a target subgraph;
step 4, in the global working area, the robot controls the target subgraph to rotate, and then performs line segment length similarity matching between the fitting line segments in the rotated target subgraph and the fitting line segments in the other target subgraphs;
step 5, when the matching success rate between the fitting line segments in the two target subgraphs reaches a first preset success rate, determining that the two target subgraphs are successfully matched, and obtaining an angle error amount and an error coordinate offset; processing the current rotation center position of the robot by using the angle error amount to obtain a temporary repositioning position;
step 6, when the robot finishes matching all the target subgraphs, the robot performs average value processing on all the obtained error coordinate offsets to obtain a positioning coordinate compensation quantity; and correcting the newly obtained temporary repositioning position by using the positioning coordinate compensation quantity to obtain the repositioning position, and finishing repositioning of the robot in the global working area.
2. The robot repositioning method according to claim 1, characterized in that, for two target subgraphs in the global working area, the robot sets one of the target subgraphs as a reference target subgraph and the other target subgraph as a target subgraph to be matched; each time step 4 performs line segment length similarity matching between one fitting line segment in the reference target subgraph and one fitting line segment in the target subgraph to be matched, the target subgraph to be matched is rotated according to a preset angle step length;
When the target subgraph to be matched rotates along the preset clockwise direction by the preset angle step length, the robot updates the rotated target subgraph to be matched into the target subgraph to be matched, and then performs similarity matching of the line segment length on one fitting line segment in the reference target subgraph and one fitting line segment in the target subgraph to be matched;
when the matching success rate between the fitting line segments in the reference target subgraph and the fitting line segments in the target subgraph to be matched reaches the first preset success rate, the robot sets, as the angle error amount, the angle through which the most recently updated target subgraph to be matched has been rotated relative to the original target subgraph to be matched, and then corrects the initial pose angle of the robot at the current rotation center position with the angle error amount to obtain a temporary repositioning angle; the coordinates of the current rotation center position of the robot are converted by the angle error amount to obtain the coordinates of the temporary repositioning position;
when the robot has finished matching the target subgraphs in all windows of the global working area, the robot updates the most recently obtained coordinates of the temporary repositioning position as the coordinates of the repositioning position, and updates the most recently obtained temporary repositioning angle as the initial pose angle of the robot at the repositioning position;
The initial pose angle of the robot at the current rotation center position is the included angle formed between the advancing direction of the robot at the current rotation center position, before the robot rotates, and the coordinate axis.
3. The robot repositioning method according to claim 1, wherein in step 1, the position where the robot is located when executing step 1 is taken as the center of the window where the robot is currently located, a preset extension distance is extended along the horizontal left side of the center of the window where the robot is currently located, and a preset extension distance is extended along the horizontal right side of the center of the window where the robot is currently located, so as to form the lateral side length of the window where the robot is currently located, wherein the preset extension distance is equal to half of the maximum ranging distance of the single-point ranging sensor, so that a plurality of line feature subgraphs can be scanned in the window where the robot is currently located;
the window is a rectangular area framed in the map and is used for dividing the global working area and limiting the coverage range of the line feature subgraph scanned by the robot within the corresponding divided area.
4. The robot repositioning method of claim 3 wherein the windows adjacent to the current window comprise an upper adjacent window to the current window, a lower adjacent window to the current window, a left adjacent window to the current window, and a right adjacent window to the current window; the shape of each window adjacent to the current window is the same as the shape of the current window, and the size of each window adjacent to the current window is the same as the size of the current window;
The abscissa of each vertex of the window adjacent to the upper side of the current window is equal to the abscissa of the vertex at the same positional relation of the current window, and the difference between the ordinate of each vertex of the window adjacent to the upper side of the current window and the ordinate of the vertex at the same positional relation of the current window is the longitudinal side length of the current window;
the abscissa of each vertex of the window adjacent to the lower side of the current window is equal to the abscissa of the vertex at the same positional relation of the current window, and the difference between the ordinate of each vertex of the current window and the ordinate of the vertex at the same positional relation of the window adjacent to the lower side of the current window is the longitudinal side length of the current window;
the ordinate of each vertex of the window adjacent to the right side of the current window is equal to the ordinate of the vertex at the same positional relation of the current window, and the difference between the abscissa of each vertex of the window adjacent to the right side of the current window and the abscissa of the vertex at the same positional relation of the current window is the transverse side length of the current window;
the ordinate of each vertex of the window adjacent to the left side of the current window is equal to the ordinate of the vertex at the same positional relation of the current window, and the difference between the abscissa of each vertex of the current window and the abscissa of the vertex at the same positional relation of the window adjacent to the left side of the current window is the transverse side length of the current window;
wherein the same positional relation indicates that the relative positional relation of the two vertices with respect to the centers of their respective windows is the same.
5. The robot repositioning method according to claim 1, wherein in step 2, the method of constructing the target subgraph comprises: each time the robot rotates one full circle at the rotation center position, controlling the single-point ranging sensor to collect point cloud data during the rotation of the robot, then fitting line segments of corresponding trends within each angle range, setting the portions of the fitted line segments of corresponding trends that lie within the window where the robot is currently located as a group of fitting line segments in that window, the group of fitting line segments forming a line feature subgraph; determining that one line feature subgraph has been scanned, and setting the line feature subgraph as the target subgraph;
the point cloud data comprises coordinate information and angle information of the position points scanned by the single-point ranging sensor; each time one line feature subgraph is scanned, the robot also records the coordinates and the initial pose angle of the robot at the rotation center position, wherein the coordinates of the rotation center position are relative position coordinates formed by taking the position of the robot in step 1 as the origin.
6. The robot repositioning method according to claim 1, wherein in step 2, the method of constructing the target subgraph comprises: the robot rotates a preset number of turns at different rotation center positions in the window where it is currently located in sequence, scans line feature subgraphs of the preset number of turns at each rotation center position, and then merges the line feature subgraphs of the preset number of turns scanned at each rotation center position into a corresponding target subgraph; wherein one target subgraph is correspondingly merged at each rotation center position;
each time the robot rotates one full circle at the rotation center position, the single-point ranging sensor is controlled to collect point cloud data during the rotation of the robot, line segments of corresponding trends are then fitted within each angle range, and the portions of the fitted line segments of corresponding trends that lie within the window where the robot is currently located are set as a group of fitting line segments in that window, the group of fitting line segments forming a line feature subgraph, whereupon it is determined that one line feature subgraph has been scanned; wherein at least one fitting line segment exists in the group of fitting line segments.
7. The robot repositioning method of claim 6, wherein in the step 2, the method of merging the scanned line feature subgraphs of a preset number of turns into the target subgraph at each rotation center position includes:
Step 21, the robot rotates the current circle in situ at a rotation center position, the robot carries out fitting processing on the point cloud data correspondingly collected in the current circle to obtain a group of fitting line segments in a window where the current circle is located, and the group of fitting line segments form a line feature subgraph;
Step 22, judging whether the number of turns the robot has rotated in situ at the rotation center position in step 21 is equal to the preset number of turns; if yes, executing step 23; otherwise, executing step 24;
Step 23, the robot has scanned line feature subgraphs of the preset number of turns and stops rotating in situ at the rotation center position; then, in the process of traversing the fitting line segments in the line feature subgraphs of the preset number of turns, the robot selects one of the line feature subgraphs as a template subgraph; the robot sequentially judges whether each fitting line segment in the template subgraph completely coincides with a fitting line segment in each of the other line feature subgraphs; if so, the two completely coinciding fitting line segments currently judged are set as the same fitting line segment in the target subgraph; otherwise, the two fitting line segments currently judged that do not completely coincide are set as two fitting line segments in the target subgraph; after the robot has judged each fitting line segment in the template subgraph against the fitting line segments in every other line feature subgraph, each fitting line segment in the target subgraph is obtained, and the merging of the line feature subgraphs of the preset number of turns into the target subgraph is completed;
Step 24: the robot updates the next circle to be the current circle, and then step 22 is executed;
wherein the robot rotating in situ at a rotation center position means that the robot rotates 360 degrees about that rotation center position;
wherein the preset number of turns is set to a value greater than 1.
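The merge loop of steps 21 to 24 can be sketched as follows. This is an illustrative Python sketch only, not the patented implementation: segments are represented as (start, end) point pairs, coincidence is tested as exact equality of both endpoints (per the criteria of claim 9), and the function name `merge_subgraphs` is an assumption.

```python
def merge_subgraphs(subgraphs):
    """Merge line feature subgraphs (one per in-situ rotation) into a target subgraph.

    Each subgraph is a list of fitted segments; a segment is a tuple
    (start_point, end_point). Two segments that completely coincide are
    kept once in the target subgraph; a segment that does not completely
    coincide with any template segment is kept as its own entry.
    """
    template = subgraphs[0]          # step 23: select one subgraph as the template
    target = list(template)          # template segments seed the target subgraph
    for other in subgraphs[1:]:      # traverse the remaining line feature subgraphs
        for seg in other:
            # exact coincidence = same start and same end (claim 9 criteria)
            if seg not in target:
                target.append(seg)
    return target
```

For example, two one-turn subgraphs that share one wall segment merge into a target subgraph containing each distinct segment exactly once.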
8. The robot repositioning method of claim 7, wherein the window in which the robot is currently located is rectangular; the transverse side length of that window equals the maximum ranging distance of the single-point ranging sensor; the longitudinal side length of that window is greater than or equal to the diameter of the robot body; and half of the maximum ranging distance of the single-point ranging sensor equals the preset extension distance;
the robot sets the center of the window where it is currently located as the first rotation center position;
the robot sets the longitudinal central axis of the window where it is currently located as the base line;
the robot sets the two position points that lie at half the preset extension distance from the center of the window where it is currently located, in the two symmetric directions perpendicular to the base line, as the second first rotation center position and the second second rotation center position respectively;
the robot sets the two position points that lie at the full preset extension distance from the center of the window where it is currently located, in the two symmetric directions perpendicular to the base line, as the third first rotation center position and the third second rotation center position respectively;
the first rotation center position, the second first and second second rotation center positions, and the third first and third second rotation center positions all belong to the rotation center positions; the robot repeatedly traverses each rotation center position in the window where it is currently located in sequence, so as to scan a plurality of line feature subgraphs in that window.
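The five rotation center positions described above can be computed from the window geometry as in the sketch below. The choice of coordinate convention (the direction perpendicular to the base line taken as the x axis) and the function name `rotation_centers` are assumptions for illustration.

```python
def rotation_centers(window_cx, window_cy, extension_distance):
    """Rotation center positions inside the current window (claim 8 sketch).

    The base line is the window's longitudinal central axis. The first
    center is the window center; the remaining centers lie perpendicular
    to the base line, symmetric about the window center, at half the
    preset extension distance and at the full preset extension distance.
    """
    half = extension_distance / 2.0
    return [
        (window_cx, window_cy),                       # first rotation center
        (window_cx - half, window_cy),                # symmetric pair at half distance
        (window_cx + half, window_cy),
        (window_cx - extension_distance, window_cy),  # symmetric pair at full distance
        (window_cx + extension_distance, window_cy),
    ]
```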
9. The robot repositioning method according to claim 6, wherein in step 2, each time the robot detects that the length of a fitted line segment of the corresponding trend is greater than a preset fitting length threshold, the following applies:
when the fitted line segment of the corresponding trend lies within the window where the robot is currently located, the robot sets it as a fitted line segment and marks it into the map, recording its start point coordinates, end point coordinates and inclination angle, whereby the fitted line segment of the corresponding trend is determined and recorded;
the inclination angle is represented by the angle formed between the fitted line segment of the corresponding trend and a coordinate axis;
the criteria for judging two line segments to be different include: different start point coordinates, different end point coordinates, or different inclination angles;
the fitted line segment of the corresponding trend is obtained by the robot fitting, using the least squares method, the point cloud data acquired within the corresponding angle range into a target linear equation; the target linear equation represents the line segment of the corresponding trend.
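A minimal sketch of the least-squares fit just described: points from one angle range are fitted to a target linear equation y = a·x + b, the extreme points give the segment's start and end, and the inclination angle is recorded against the x axis. Vertical trends (undefined slope) are not handled here, and the function name `fit_segment` is an assumption.

```python
import math

def fit_segment(points):
    """Least-squares fit of the point cloud samples in one angle range (sketch).

    Returns (start_point, end_point, inclination_degrees): the segment
    endpoints on the fitted line at the smallest and largest x values,
    and the inclination angle formed with the x coordinate axis.
    """
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points); sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope of the target linear equation
    b = (sy - a * sx) / n                           # intercept
    xs = [p[0] for p in points]
    start = (min(xs), a * min(xs) + b)              # start point coordinates
    end = (max(xs), a * max(xs) + b)                # end point coordinates
    incl = math.degrees(math.atan(a))               # inclination angle vs. x axis
    return start, end, incl
```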
10. The robot repositioning method according to claim 5 or 6, wherein in step 5, the matching success rate between the fitted line segments of two target subgraphs is the ratio of the number of successfully matched fitted line segment pairs to the number of all fitted line segment pairs participating in the line segment length similarity matching; a fitted line segment pair participating in the line segment length similarity matching consists of one fitted line segment from one target subgraph and one fitted line segment from the other target subgraph;
in the global working area, the group of target subgraphs containing the two fitted line segments participating in the line segment length similarity matching is a target subgraph pair; within one target subgraph pair, one target subgraph is set as the reference target subgraph and the other as the target subgraph to be matched, the latter being the target subgraph rotated in step 4.
11. The robot repositioning method according to claim 10, wherein in step 4, the robot-controlled line segment length similarity matching between the fitted line segments of the rotated target subgraph and those of the other target subgraph is expressed as: the robot matches each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph for line segment length similarity;
the method by which the robot performs this line segment length similarity matching comprises:
Step 41: calculate the absolute value of the difference between the length of one fitted line segment in the target subgraph to be matched and the length of the fitted line segment at the corresponding position in the reference target subgraph;
Step 42: when the absolute value of the difference from step 41 is less than or equal to a preset length threshold, determine that the two fitted line segments are successfully matched in the line segment length similarity matching, and mark them as a successfully matched fitted line segment pair; when the absolute value of the difference from step 41 is greater than the preset length threshold, determine that the matching of the two fitted line segments fails, and mark them as a failed fitted line segment pair.
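Steps 41 and 42 and the success rate of claim 10 can be sketched together: pair segments by corresponding position, count a pair as successful when the absolute length difference is within the threshold, and return the ratio of successful pairs to all participating pairs. Representing segments directly by their lengths and the name `match_lengths` are assumptions.

```python
def match_lengths(to_match, reference, length_threshold):
    """Line segment length similarity matching (steps 41-42 sketch).

    `to_match` and `reference` are lists of segment lengths; segment i of
    the subgraph to be matched is paired with segment i of the reference
    subgraph (the "corresponding position"). Returns the matching success
    rate: successfully matched pairs / all participating pairs.
    """
    pairs = list(zip(to_match, reference))
    successes = sum(
        1 for la, lb in pairs
        if abs(la - lb) <= length_threshold   # step 42: within preset length threshold
    )
    return successes / len(pairs)
```

The returned ratio is what claim 12 compares against the first preset success rate.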
12. The robot repositioning method according to claim 11, wherein when the ratio of the number of successfully matched fitted line segment pairs to the number of all fitted line segment pairs participating in the line segment length similarity matching is greater than or equal to a first preset success rate, it is determined that the target subgraph to be matched and the reference target subgraph are successfully matched, and they are marked as one successfully matched target subgraph pair;
in the global working area, when the ratio of the number of all successfully matched target subgraph pairs to the number of all target subgraph pairs participating in the matching is greater than or equal to a second preset success rate, the line segment length similarity matching between the fitted line segments of the target subgraph to be matched and those of the reference target subgraph is stopped; all target subgraph pairs participating in the matching include any two different target subgraphs.
13. The robot repositioning method according to claim 11, wherein before the robot performs the line segment length similarity matching of each fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph, or upon detecting that the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in the similarity matching is smaller than the first preset success rate, the robot sets the rotation center position corresponding to the target subgraph to be matched as the offset start position; it then controls the target subgraph to be matched to translate from the offset start position along a given coordinate axis direction by a preset translation step; and each time the target subgraph to be matched is translated, the translated subgraph is updated to be the target subgraph to be matched, after which the robot executes steps 41 and 42.
14. The robot repositioning method according to claim 13, wherein, on the premise that the coordinate offset by which the target subgraph to be matched has been translated from the offset start position in the same coordinate axis direction has not reached a maximum preset offset, each time the target subgraph to be matched is translated by the preset translation step the robot judges whether the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in the line segment length similarity matching is greater than or equal to the first preset success rate; if so, the target subgraph to be matched and the reference target subgraph are successfully matched, the coordinate offset translated from the offset start position in the latest translation direction is set as an error coordinate offset, and the target subgraph to be matched is controlled to stop translating; otherwise, the robot changes the given coordinate axis direction to its opposite or perpendicular direction, updates that opposite or perpendicular direction to be the given coordinate axis direction, and controls the target subgraph to be matched to translate by the preset translation step from the offset start position along the new given coordinate axis direction;
if the coordinate offset by which the target subgraph to be matched has been translated from the offset start position in the same coordinate axis direction has reached the maximum preset offset, and the robot judges that the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in the line segment length similarity matching is smaller than the first preset success rate, the robot changes the given coordinate axis direction to its opposite or perpendicular direction, updates that opposite or perpendicular direction to be the given coordinate axis direction, and controls the target subgraph to be matched to translate by the preset translation step from the offset start position along the new given coordinate axis direction;
before the coordinate offset translated in the given coordinate axis direction reaches the maximum preset offset, each time the same target subgraph to be matched is translated by the preset translation step along that direction, the translated subgraph is updated to be the target subgraph to be matched and steps 41 and 42 are executed; and each time the robot has matched every fitted line segment in the target subgraph to be matched against the fitted line segment at the corresponding position in the reference target subgraph, it judges whether the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in the line segment length similarity matching is greater than or equal to the first preset success rate.
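The axis-by-axis translation search of claims 13 and 14 can be sketched as below. This is a loose interpretation, not the claimed implementation: segments are represented as (midpoint_x, midpoint_y, length) triples, and after each translation every to-match segment is paired with the reference segment at the nearest (corresponding) position before applying the step 41-42 length check, since pure translation leaves segment lengths unchanged. The direction ordering and the name `translation_search` are assumptions.

```python
def translation_search(to_match, reference, step, max_offset, tol, rate):
    """Sketch of the translation search for the error coordinate offset.

    Shifts the target subgraph to be matched from the offset start
    position along each coordinate axis direction in preset steps, up to
    the maximum preset offset; returns the first offset whose matching
    success ratio reaches the first preset success rate, or None if the
    matching fails in every direction (claim 15's failure case).
    """
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y in turn
    steps = int(max_offset / step)
    for dx, dy in directions:                        # claim 14: switch axis direction
        for k in range(steps + 1):                   # start from the offset start position
            ox, oy = dx * k * step, dy * k * step
            hits = 0
            for mx, my, ml in to_match:
                tx, ty = mx + ox, my + oy            # translated midpoint
                rx, ry, rl = min(reference,
                                 key=lambda s: (s[0] - tx) ** 2 + (s[1] - ty) ** 2)
                if abs(ml - rl) <= tol:              # steps 41-42: length check
                    hits += 1
            if hits / len(to_match) >= rate:
                return (ox, oy)                      # error coordinate offset
    return None
```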
15. The robot repositioning method according to claim 14, wherein after the coordinate offsets by which the target subgraph to be matched has been translated from the offset start position along all coordinate axis directions have each reached the maximum preset offset, if the robot judges that the ratio of successfully matched fitted line segment pairs to all fitted line segment pairs participating in the similarity matching is smaller than the first preset success rate, it determines that the matching between the target subgraph to be matched and the reference target subgraph has failed, and then executes steps 4 and 5 with the failed target subgraph pair excluded;
the preset maximum positioning error is associated with the maximum preset offset, which comprises a maximum preset offset amount in the horizontal axis direction and a maximum preset offset amount in the vertical axis direction.
16. The robot repositioning method of claim 15, wherein the robot obtains a plurality of error coordinate offsets as it traverses all target subgraphs within the windows of the global working area; if the robot judges that the number of error coordinate offsets currently obtained is greater than a preset processing number, the error coordinate offsets having the largest abscissa value, the smallest abscissa value, the largest ordinate value and the smallest ordinate value are all removed from those currently obtained; the remaining error coordinate offsets are averaged to obtain an average coordinate offset, which is set as the positioning coordinate compensation amount; each error coordinate offset comprises a horizontal axis error coordinate offset and a vertical axis error coordinate offset, and each positioning coordinate compensation amount comprises a horizontal axis positioning coordinate compensation amount and a vertical axis positioning coordinate compensation amount.
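The outlier-trimmed averaging of claim 16 can be sketched as follows; the function name `positioning_compensation` and the tuple representation of offsets are assumptions for illustration.

```python
def positioning_compensation(offsets, preset_count):
    """Claim 16 sketch: derive the positioning coordinate compensation amount.

    `offsets` is a list of (horizontal, vertical) error coordinate
    offsets. When more offsets than the preset processing number were
    collected, the offsets carrying the extreme abscissa and ordinate
    values are removed, and the mean of the remainder is returned as the
    (horizontal, vertical) positioning coordinate compensation amount.
    """
    kept = list(offsets)
    if len(kept) > preset_count:
        extremes = {
            max(kept, key=lambda o: o[0]),  # largest abscissa value
            min(kept, key=lambda o: o[0]),  # smallest abscissa value
            max(kept, key=lambda o: o[1]),  # largest ordinate value
            min(kept, key=lambda o: o[1]),  # smallest ordinate value
        }
        kept = [o for o in kept if o not in extremes]
        if not kept:                        # degenerate case: all offsets were extremes
            kept = list(offsets)
    n = len(kept)
    return (sum(o[0] for o in kept) / n, sum(o[1] for o in kept) / n)
```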
CN202210699893.8A 2022-06-20 2022-06-20 Angle-based robot repositioning method Pending CN117288183A (en)


Publications (1)

Publication Number Publication Date
CN117288183A true CN117288183A (en) 2023-12-26

Family

ID=89243179



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination