CN117576200B - Long-period mobile robot positioning method, system, equipment and medium - Google Patents

Long-period mobile robot positioning method, system, equipment and medium

Info

Publication number: CN117576200B (application CN202410050100.9A)
Authority: CN (China)
Prior art keywords: semantic, information, map, occupation, scene
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis as to its accuracy)
Application number: CN202410050100.9A
Other languages: Chinese (zh)
Other versions: CN117576200A (en)
Inventor
皇攀凌
欧金顺
李文广
任纪颖
高新彪
史建杰
Current Assignee: Shandong University (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Shandong University
Application filed by Shandong University
Priority to CN202410050100.9A (the priority date is an assumption and is not a legal conclusion)
Publication of CN117576200A, followed by grant and publication of CN117576200B
Legal status: Active

Classifications

    • G01C 21/165: Navigation; dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/28: Navigation adapted for a road network, with correlation of data from several navigational instruments
    • G01C 21/30: Map- or contour-matching
    • G01C 21/32: Structuring or formatting of map data
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field, e.g. clustering-based techniques; detection of occlusion
    • G06V 20/56: Scenes; context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06V 20/70: Scenes; labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
    • G06T 2207/30252: Subject of image; vehicle exterior, vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a long-period mobile robot positioning method, system, equipment, and medium in the technical field of image processing, comprising the following steps: acquiring a scene image and a scene point cloud, performing semantic segmentation on the scene image, projecting the scene point cloud onto the segmentation result, and estimating the position and category of each entity in the scene to obtain scene semantic occupation information; estimating the pose and position of the robot in the current scene and constructing a grid map; mapping the scene semantic occupation information into the grid map to construct a semantic occupation map; as the robot moves, comparing the scene image and scene point cloud within its field of view against the semantic occupation map to update the map; and positioning the robot with the updated semantic occupation map, using the scene image and scene point cloud within its field of view at the current moment. This solves the problem of long-period mobile robot positioning in changing environments.

Description

Long-period mobile robot positioning method, system, equipment and medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, a system, an apparatus, and a medium for positioning a long-period mobile robot.
Background
Maintaining stable positioning over long periods in a changing environment is key to keeping a mobile robot in stable long-term operation. At present, mobile robot positioning mainly relies on a pre-constructed map, and environmental change is generally handled by updating the map manually or automatically at fixed times and fixed points.
Automatic map updating is currently the mainstream technique for coping with environmental change, but it generally requires a period of iteration to confirm that the environment has actually changed, making it unsuitable for frequently changing scenes such as logistics warehouses.
For frequently changing scenes, some methods acquire more environmental information by adding sensors, or embed invariant features into the pre-constructed map using dedicated environmental markers, so as to keep the map reliable over the long term; however, this tends to increase deployment cost, and some restricted areas are difficult to instrument.
Disclosure of Invention
In order to solve these problems, the invention provides a long-period mobile robot positioning method, system, equipment, and medium that monitor changing semantic information in the environment in real time, update a semantic occupation map accordingly, and use it to assist mobile robot positioning. By updating the semantic information in the environment dynamically and promptly, the invention solves the problem of long-period mobile robot positioning in changing environments.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, the present invention provides a long-period mobile robot positioning method, including:
Acquiring a scene image and a scene point cloud, performing semantic segmentation on the scene image, projecting the scene point cloud onto a semantic segmentation result, and estimating the position and the category of each entity in the scene to obtain scene semantic occupation information;
Estimating the pose and the position of the robot in the current scene, so as to construct a grid map;
mapping scene semantic occupation information into a grid map, and constructing a semantic occupation map;
determining entity semantic information and corresponding point cloud information according to scene images and scene point clouds in a visual field of the robot in a moving process, determining a change area and a positioning constraint residual error through comparison with a semantic occupation map, calculating joint information entropy according to the change area, and updating the semantic occupation map through combining the joint information entropy with the positioning constraint residual error;
and positioning the robot position according to the scene image and the scene point cloud in the robot view at the current moment by adopting the updated semantic occupation map.
As an alternative embodiment, the construction of the grid map comprises: extracting environmental key points from the scene image; estimating the robot pose by matching the key points with odometer and IMU data through a feature matching algorithm; determining the robot position; and completing the grid map through continuous iterative updating.
As an alternative embodiment, in the process of positioning the robot, the semantic information of all entities in the current field of view is estimated from the scene image, matched against the semantic occupation map to obtain the changed areas, and the changed areas are mapped into the grid map and removed.
As an alternative implementation, the robot pose is obtained by scan-matching the laser point cloud against the grid map from which the changed areas have been removed.
As an alternative implementation, the positioning constraint residual comprises the spatial position difference and the angle or attitude difference between, on the one hand, the point cloud information and entity semantic information and, on the other, the semantic occupation map. The process of updating the semantic occupation map by combining the joint information entropy with the positioning constraint residual is as follows. The joint information entropy is

H(X, Y) = −∑_{x∈X} ∑_{y∈Y} p(x, y)·log p(x, y),

where X is the event that the current entity semantic information has changed when compared with the historical semantic occupation information, Y is the event that the current point cloud information has changed when compared with the historical grid occupation information, and p(x, y) is the joint probability of the two events; treating the two observations as independent gives p(x, y) = p_x · p_y, where p_x and p_y are the probabilities of change when the current entity semantic information and the current point cloud information, respectively, are compared with the semantic occupation map. p_x and p_y are computed from the distribution coefficients α and β and the intermediate parameters A and B, which are in turn built, using the max() and min() functions, from N_s and N_p (the numbers of entity semantic information items and point cloud information items in the historical semantic occupation map), M_s and M_p (the numbers matched when compared with the historical semantic occupation map), and D_s (the quantity of current entity semantic information). The closed-form expressions for p_x, p_y, A, and B are rendered as images in the original publication.
When the joint information entropy is larger than a first set threshold, the semantic occupation map is not updated. When it is smaller than the first set threshold, the positioning constraint residual is compared with a second set threshold: if the residual is smaller than the second threshold, positioning has not jumped and the semantic occupation map is updated; otherwise, it is judged that the robot's positioning has jumped at that moment, and the update is not executed.
As an alternative embodiment, in the process of updating the semantic occupation map, multi-frame scene images captured from different angles at multiple positions along the robot's path are accumulated to judge whether objects in the current scene have changed; the grid map is updated according to the scene point clouds within the robot's field of view as it moves, so that the semantic occupation map is updated together with the updated scene semantic occupation information.
As an alternative embodiment, the method further comprises calibrating external parameters of the camera and the laser radar, in particular:
The two-dimensional code is arranged right above the reflecting column, so that the projections of the centers of the two-dimensional code and the reflecting column on the horizontal plane are overlapped;
estimating the pose of the camera relative to the two-dimensional code, projecting the pose onto the horizontal plane, and recording the projected camera pose (denoted T_c below; the symbol is rendered as an image in the original);
screening out all reflective-column point clouds according to the lidar point intensity, calculating the center of each reflective column and its normal vector from the averaged point cloud, fitting the lidar pose from all column poses, and recording it (denoted T_l);
constructing constraints between the camera and the laser radar via the two-dimensional codes and reflective columns, and solving to obtain the optimal camera-lidar external parameter T*; the constraint requires that applying T* to each lidar-estimated column pose T_l reproduces the corresponding camera-estimated code pose T_c (the constraint equation is rendered as an image in the original publication).
In a second aspect, the present invention provides a long-period mobile robot positioning system comprising:
the semantic occupation determining module is configured to acquire a scene image and scene point clouds, perform semantic segmentation on the scene image, project the scene point clouds on a semantic segmentation result, estimate the positions and the categories of all entities in the scene and obtain scene semantic occupation information;
The grid map construction module is configured to estimate the pose and the position of the robot in the current scene so as to construct a grid map;
the semantic occupation map construction module is configured to map scene semantic occupation information into a grid map and construct a semantic occupation map;
The map updating module is configured to determine entity semantic information and corresponding point cloud information according to scene images and scene point clouds in a visual field of the robot in a moving process, determine a change area and a positioning constraint residual error through comparison with the semantic occupation map, calculate joint information entropy according to the change area, and update the semantic occupation map through the joint information entropy and the positioning constraint residual error;
The positioning module is configured to adopt the updated semantic occupation map to position the robot according to the scene image and the scene point cloud in the view of the robot at the current moment.
In a third aspect, the invention provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor; when executed by the processor, the instructions perform the method of the first aspect.
In a fourth aspect, the present invention provides a computer readable storage medium storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
The invention provides a long-period mobile robot positioning method and system based on semantic occupation estimation, which solve the problem of long-period mobile robot positioning in changing environments. First, scene geometry is inferred by a vision-based 3D semantic occupation estimation method; the geometry, together with its semantic information, is then fused with a 2D grid map to obtain a map with semantic occupation; finally, semantic changes in the environment are monitored in real time to update the semantic occupation map and assist mobile robot positioning. Updating the semantic information in the environment dynamically and promptly improves long-term positioning stability in dynamic environments, without redeploying the existing scene or incurring additional deployment cost.
Because environmental change is often a major cause of mobile robot positioning failure, the invention uses semantic information in the environment to construct a map carrying semantic occupation information. During positioning, the current semantic information is compared with the original semantic occupation map to find the changed areas in the environment, and the interference of those areas with positioning is then eliminated, which effectively improves positioning accuracy and stability. For gradually changing scenes, changed semantic occupation areas are detected in real time while the robot operates and the semantic occupation map is updated in real time, effectively maintaining the long-term reliability of the map.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
Fig. 1 is a flowchart of a positioning method of a long-period mobile robot according to embodiment 1 of the present invention;
fig. 2 is a front view of the two-dimensional code and reflective column arrangement provided in embodiment 1 of the present invention;
Fig. 3 is a top view of the two-dimensional code and reflective column arrangement provided in embodiment 1 of the present invention;
fig. 4 is a flow chart of semantic occupation map construction provided in embodiment 1 of the present invention;
FIG. 5 is a flowchart of semantic occupancy estimation and mobile robot positioning provided in embodiment 1 of the present invention;
FIG. 6 is a flowchart of updating a semantic occupancy map provided in embodiment 1 of the present invention;
fig. 7 is a schematic diagram of a semantic occupation map update determination logic provided in embodiment 1 of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the present invention. As used herein, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well. Furthermore, the terms "comprises" and "comprising" and any variations thereof are intended to cover non-exclusive inclusion: processes, methods, systems, products, or devices comprising a series of steps or units are not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to them.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
Example 1
The embodiment provides a positioning method of a long-period mobile robot, as shown in fig. 1, including:
Acquiring a scene image and a scene point cloud, performing semantic segmentation on the scene image, projecting the scene point cloud onto a semantic segmentation result, and estimating the position and the category of each entity in the scene to obtain scene semantic occupation information;
Estimating the pose and the position of the robot in the current scene, so as to construct a grid map;
mapping scene semantic occupation information into a grid map, and constructing a semantic occupation map;
determining entity semantic information and corresponding point cloud information according to scene images and scene point clouds in a visual field of the robot in a moving process, determining a change area and a positioning constraint residual error through comparison with a semantic occupation map, calculating joint information entropy according to the change area, and updating the semantic occupation map through combining the joint information entropy with the positioning constraint residual error;
and positioning the robot position according to the scene image and the scene point cloud in the robot view at the current moment by adopting the updated semantic occupation map.
In this embodiment, the external parameters between the camera and the laser radar are calibrated first, so that the data of the two sensors are spatially aligned; the calibration specifically comprises the following steps:
(1) Scene arrangement: the two-dimensional code and the reflective column are attached to a vertical wall, with the two-dimensional code directly above the reflective column, so that the projections of their centers onto the horizontal plane coincide, as shown in figs. 2-3.
(2) Camera pose estimation: using a two-dimensional code pose estimation method, the pose of the camera relative to each two-dimensional code is estimated and projected onto the horizontal plane; the projected pose is a 3×3 homogeneous transformation matrix, recorded as T_c (notation assumed here; the symbol is rendered as an image in the original).
it can be appreciated that the two-dimensional code pose estimation method is only required to be a conventional method, and details are not repeated here.
(3) Lidar pose estimation: all reflective columns are screened according to the lidar point cloud intensity; each column is extracted with a clustering method (e.g., Euclidean clustering); the center and normal vector of each column are calculated from the averaged point cloud; and the lidar pose, likewise a 3×3 homogeneous transformation matrix, is fitted from all column poses and recorded as T_l (notation assumed).
(4) External parameter calculation: constraints between the camera and the laser radar are constructed via the two-dimensional codes and reflective columns, and the optimal camera-lidar external parameter T* is obtained by solving them; each constraint requires that T* map the lidar-estimated column pose T_l onto the corresponding camera-estimated code pose T_c (the constraint equation itself is rendered as an image in the original publication).
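A minimal sketch of step (4) under simplifying assumptions: the projected target poses are reduced to paired 2D centers (the code center seen by the camera, the column center seen by the lidar), and the SE(2) extrinsic is recovered with a closed-form least-squares (Kabsch) fit. All names are illustrative; the patent does not specify a particular solver.

```python
import numpy as np

def fit_se2_extrinsic(cam_centers: np.ndarray, lidar_centers: np.ndarray) -> np.ndarray:
    """Least-squares SE(2) transform T with cam ≈ R @ lidar + t.

    cam_centers, lidar_centers: (N, 2) horizontal-plane target centers,
    paired by index (two-dimensional code i sits directly above column i).
    Returns a 3x3 homogeneous matrix, matching the pose format in the text.
    """
    mu_c, mu_l = cam_centers.mean(axis=0), lidar_centers.mean(axis=0)
    H = (lidar_centers - mu_l).T @ (cam_centers - mu_c)  # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])        # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_l
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

# toy usage: three co-observed code/column pairs
lidar_pts = np.array([[1.0, 0.0], [2.0, 1.0], [0.5, 2.0]])
ang, t_true = np.deg2rad(10.0), np.array([0.3, -0.1])
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
cam_pts = lidar_pts @ R_true.T + t_true
T_cl = fit_se2_extrinsic(cam_pts, lidar_pts)  # recovers R_true, t_true
```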
In this embodiment, the semantic occupancy map is a grid map with semantic occupancy information, which is constructed by fusing information acquired by cameras and lidars, and is used for positioning a mobile robot.
As shown in fig. 4, the construction process of the semantic occupancy map includes:
(1) Semantic recognition and geometric occupation estimation: scene image data collected by the camera and scene laser point cloud data collected by the laser radar are acquired; the image data are semantically segmented with a deep learning algorithm; the laser points are projected onto the segmentation result; and the position and category of each geometric body (i.e., each physical entity in the scene) are estimated, yielding the semantic occupation information of the scene. The projection is carried out using the optimal external parameters between the camera and the laser radar obtained above.
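A sketch of the projection step, assuming a pinhole camera with intrinsics K and a 4×4 lidar-to-camera extrinsic T_cl (both names illustrative): each lidar point is transformed into the camera frame, projected into the image, and labelled with the class of the segmentation pixel it lands on.

```python
import numpy as np

def label_points(points_l: np.ndarray, seg_mask: np.ndarray,
                 K: np.ndarray, T_cl: np.ndarray):
    """Attach a semantic class to each lidar point visible in the image.

    points_l: (N, 3) points in the lidar frame.
    seg_mask: (H, W) integer class image from semantic segmentation.
    K: (3, 3) camera intrinsics; T_cl: (4, 4) lidar->camera extrinsic.
    Returns (points, labels) for the points that land inside the image.
    """
    pts_h = np.hstack([points_l, np.ones((len(points_l), 1))])
    pts_c = (T_cl @ pts_h.T).T[:, :3]              # lidar frame -> camera frame
    front = pts_c[:, 2] > 0.1                      # keep points in front of the camera
    pts_c = pts_c[front]
    uvw = (K @ pts_c.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)    # perspective divide -> pixels
    H, W = seg_mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    labels = seg_mask[uv[inside, 1], uv[inside, 0]]
    return points_l[front][inside], labels
```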
(2) Constructing a grid map;
Firstly, acquiring environment data, wherein the environment data comprises camera image data and radar point cloud data; commonly used sensors include lidar, cameras, inertial Measurement Units (IMUs), wheel odometers, and the like;
Then, extracting characteristic points from the sensor data, wherein the characteristic points can be key points in the environment, such as line characteristics, right angle characteristics, arc characteristics and the like, and can also be all point clouds in laser radar point cloud data;
then, matching the extracted characteristic points, the odometer and the IMU data through a characteristic matching algorithm to estimate the pose of the robot, including translation and rotation, and determining the position in a map; the pose estimation can use particle filtering and other algorithms, and is not limited.
Finally, fusing the new feature points into the map to complete the construction of the current local grid map, and updating the estimated pose of the robot;
repeating the above process to complete the construction of the whole grid map.
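The patent leaves the grid representation open; a common choice, sketched here under that assumption, is a log-odds occupancy grid updated once per registered scan (the free-space ray-casting along each beam is elided for brevity).

```python
import numpy as np

L_OCC, L_MIN, L_MAX = 0.85, -4.0, 4.0   # log-odds increment and clamp bounds

def update_grid(log_odds: np.ndarray, hits_xy: np.ndarray,
                resolution: float = 0.05) -> np.ndarray:
    """Fold one scan's laser endpoints into a log-odds occupancy grid.

    log_odds: (H, W) grid; hits_xy: (N, 2) beam endpoints in the map frame
    (metres), i.e. the scan already transformed by the estimated robot pose.
    """
    idx = (hits_xy / resolution).astype(int)
    H, W = log_odds.shape
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < W) & (idx[:, 1] >= 0) & (idx[:, 1] < H)
    np.add.at(log_odds, (idx[ok, 1], idx[ok, 0]), L_OCC)  # endpoint cells accumulate evidence
    np.clip(log_odds, L_MIN, L_MAX, out=log_odds)
    return log_odds
```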
(3) Constructing the semantic occupation map: given the geometric occupation estimate with semantic information and the grid map, the semantic information and geometric occupation information are mapped into the grid map according to the position information in the semantic occupation estimate, and the two are fused to complete the construction of the semantic occupation map.
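One way to realize the fusion, assuming the grid layout from the sketch above: a semantic layer aligned cell-for-cell with the occupancy grid, stamped with each estimated entity's class at the cells its points occupy.

```python
import numpy as np

def fuse_semantics(grid_shape, entities, resolution: float = 0.05) -> np.ndarray:
    """Build a semantic layer aligned with the occupancy grid.

    entities: iterable of (points_xy, class_id) pairs from semantic
    occupation estimation, with points in the map frame (metres).
    Class 0 is reserved for 'no semantic label'.
    """
    sem = np.zeros(grid_shape, dtype=np.int32)
    for pts_xy, cls in entities:
        idx = (np.asarray(pts_xy) / resolution).astype(int)
        ok = ((idx[:, 0] >= 0) & (idx[:, 0] < grid_shape[1])
              & (idx[:, 1] >= 0) & (idx[:, 1] < grid_shape[0]))
        sem[idx[ok, 1], idx[ok, 0]] = cls   # stamp entity class into its cells
    return sem
```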
In this embodiment, mobile robot positioning based on semantic occupation estimation performs semantic occupation estimation on the image information collected by the camera and uses the semantic occupation map to help the laser radar localize accurately.
As shown in fig. 5, this comprises: for the image and pose at the current moment, estimating the semantic information of all entities in the current field of view from the image data; computing all changed areas by matching that semantic information against the semantic occupation map; mapping the changed areas into the grid map according to the camera model and removing them, so as to eliminate interference information that would degrade accurate positioning; and, with the changed areas removed, scan-matching the laser point cloud to obtain the robot's precise pose.
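A sketch of the removal step under the same assumed grid layout: scan points falling in cells flagged as changed are dropped before the scan is handed to the scan matcher, so the matcher only sees geometry that still agrees with the map.

```python
import numpy as np

def mask_changed_regions(scan_xy: np.ndarray, changed: np.ndarray,
                         resolution: float = 0.05) -> np.ndarray:
    """Filter out laser points that fall inside changed map cells.

    scan_xy: (N, 2) scan points in the map frame; changed: boolean (H, W)
    mask of cells whose semantics disagree with the semantic occupation map.
    The filtered scan is then passed to the usual scan matcher.
    """
    idx = (scan_xy / resolution).astype(int)
    H, W = changed.shape
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < W) & (idx[:, 1] >= 0) & (idx[:, 1] < H)
    keep = np.ones(len(scan_xy), dtype=bool)
    keep[ok] = ~changed[idx[ok, 1], idx[ok, 0]]   # drop points in changed cells
    return scan_xy[keep]
```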
In this embodiment, changed areas in the environment are monitored through semantic information, and the semantic occupation map is updated according to the map update decision logic and the changed areas, so as to achieve long-term positioning stability for the mobile robot. The overall flow, shown in fig. 6, comprises: first performing semantic occupation estimation on the image, computing all entities present in the current scene, and comparing them with the occupation information in the semantic occupation map at that moment; then accumulating multi-frame data from multiple positions and different angles as the robot moves, to judge whether objects in the current environment have been added, moved, or removed.
For every changed area, the semantic occupation map of that area is updated through the map update decision logic; as shown in fig. 7, the decision logic is implemented as follows:
(1) Calculate the joint information entropy in the current state from the entity semantic information, the point cloud information, and the historical semantic occupation map:

H(X, Y) = −∑_{x∈X} ∑_{y∈Y} p(x, y)·log p(x, y),

where X is the event that the entity semantic information in the current state has changed when compared with the semantic occupation information in the historical semantic occupation map, Y is the event that the point cloud information in the current state has changed when compared with the grid occupation information in the historical map, and p(x, y) is the joint probability of the two events.
Treating the two observations as independent, the joint probability simplifies to p(x, y) = p_x · p_y, where p_x and p_y are the probabilities of change when the entity semantic information and the point cloud information in the current state, respectively, are compared with the historical semantic occupation map.
p_x and p_y are computed from the distribution coefficients α and β (the larger the coefficient, the greater the sensitivity to environmental change) and from the intermediate parameters A and B, which are built, using the max() and min() functions, from: N_s and N_p, the numbers of observable entity semantic information items and point cloud information items in the historical semantic occupation map under the current pose; M_s and M_p, the numbers of entity semantic information items and point cloud information items in the current state matched against the historical map; and D_s, the amount of entity semantic information detected in the current state. The closed-form expressions for p_x, p_y, A, and B are rendered as images in the original publication.
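A sketch of the entropy computation. The identity H(X, Y) = H(X) + H(Y) for independent binary events follows directly from p(x, y) = p_x · p_y. Because the published expressions for p_x and p_y are not reproduced above, change_prob below is an assumed stand-in (more unmatched observations imply a higher change probability), not the patent's formula.

```python
import numpy as np

def binary_entropy(p: float) -> float:
    """Entropy of a binary event; the log base only rescales the threshold."""
    p = float(np.clip(p, 1e-9, 1.0 - 1e-9))
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def joint_entropy(p_x: float, p_y: float) -> float:
    """H(X, Y) = H(X) + H(Y) when p(x, y) = p_x * p_y (independence)."""
    return binary_entropy(p_x) + binary_entropy(p_y)

def change_prob(alpha: float, n_hist: int, n_matched: int, n_curr: int) -> float:
    """ASSUMED model of p_x / p_y (the patent's expression is an image):
    the smaller the matched fraction of observations, the larger the
    change probability; alpha plays the sensitivity role described above."""
    frac_matched = n_matched / max(n_hist, n_curr, 1)
    return float(np.clip(alpha * (1.0 - frac_matched), 0.0, 1.0))

# example: semantics mostly unmatched, point cloud mostly matched
p_x = change_prob(alpha=1.2, n_hist=40, n_matched=10, n_curr=35)
p_y = change_prob(alpha=1.0, n_hist=500, n_matched=470, n_curr=480)
H = joint_entropy(p_x, p_y)
```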
(2) The smaller the joint information entropy, the higher the probability that the current environment has changed. When the joint information entropy is larger than a first set threshold threshold1, the current environment is considered unchanged and the map update operation is not executed.
(3) When the joint information entropy is smaller than the first set threshold threshold1, the positioning constraint residual constraint_res of the current positioning state is calculated.
The positioning constraint residual comprises the spatial position difference and the angle or attitude difference obtained when the laser radar point cloud information and the entity semantic information are matched against the historical semantic occupation map. The concrete calculation depends on the optimization framework in use; a nonlinear optimization library may be used to compute it, and the calculation method is not specifically limited.
(4) If the positioning constraint residual is smaller than a second set threshold threshold2, the positioning state is considered good: the robot's positioning will not jump when the environment changes, so the semantic occupation map is updated. Otherwise, it is judged that a positioning jump would occur at that moment, and the update of the semantic occupation map is not executed.
The threshold values can be tuned for different positioning algorithms and environments.
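The decision logic of fig. 7 reduces to a small gate; the sketch below mirrors steps (2)-(4), with threshold1 and threshold2 as the tunable values just mentioned.

```python
def should_update_map(entropy: float, constraint_res: float,
                      threshold1: float, threshold2: float) -> bool:
    """Map-update gate: update only when the joint entropy signals a
    confident environment change AND positioning has not jumped."""
    if entropy > threshold1:          # environment considered unchanged
        return False
    if constraint_res < threshold2:   # stable positioning, safe to update
        return True
    return False                      # likely positioning jump; hold off
```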
It should be noted that removing a changed area and updating the map are two different processes: removal is performed whenever a changed area is observed during positioning, whereas updating requires continuous observation over multiple frames and is executed according to the map update decision logic. Map updating is the fundamental guarantee of long-period mobile robot positioning.
Example 2
The embodiment provides a long-period mobile robot positioning system, which comprises:
the semantic occupation determining module is configured to acquire a scene image and scene point clouds, perform semantic segmentation on the scene image, project the scene point clouds on a semantic segmentation result, estimate the positions and the categories of all entities in the scene and obtain scene semantic occupation information;
The grid map construction module is configured to estimate the pose and the position of the robot in the current scene so as to construct a grid map;
the semantic occupation map construction module is configured to map scene semantic occupation information into a grid map and construct a semantic occupation map;
The map updating module is configured to determine entity semantic information and corresponding point cloud information according to scene images and scene point clouds in a visual field of the robot in a moving process, determine a change area and a positioning constraint residual error through comparison with the semantic occupation map, calculate joint information entropy according to the change area, and update the semantic occupation map through the joint information entropy and the positioning constraint residual error;
The positioning module is configured to adopt the updated semantic occupation map to position the robot according to the scene image and the scene point cloud in the view of the robot at the current moment.
It should be noted that the above modules correspond to the steps described in embodiment 1, and the above modules are the same as examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 1. It should be noted that the modules described above may be implemented as part of a system in a computer system, such as a set of computer-executable instructions.
In further embodiments, there is also provided:
An electronic device comprising a memory and a processor and computer instructions stored on the memory and running on the processor, which when executed by the processor, perform the method described in embodiment 1. For brevity, the description is omitted here.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory may include read only memory and random access memory and provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method described in embodiment 1.
The method in embodiment 1 may be directly embodied as a hardware processor executing or executed with a combination of hardware and software modules in the processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (9)

1. A long-period mobile robot positioning method, comprising:
Acquiring a scene image and a scene point cloud, performing semantic segmentation on the scene image, projecting the scene point cloud onto a semantic segmentation result, and estimating the position and the category of each entity in the scene to obtain scene semantic occupation information;
Estimating the pose and the position of the robot in the current scene, so as to construct a grid map;
mapping scene semantic occupation information into a grid map, and constructing a semantic occupation map;
determining entity semantic information and corresponding point cloud information according to scene images and scene point clouds in a visual field of the robot in a moving process, determining a change area and a positioning constraint residual error through comparison with a semantic occupation map, calculating joint information entropy according to the change area, and updating the semantic occupation map through combining the joint information entropy with the positioning constraint residual error;
The updated semantic occupation map is adopted, and the position of the robot is positioned according to the scene image and the scene point cloud in the view of the robot at the current moment;
The process for updating the semantic occupation map by combining the joint information entropy with the positioning constraint residual comprises the following steps: the positioning constraint residual comprises a spatial position difference and an angle or attitude difference of the point cloud information and the entity semantic information when compared with the semantic occupation map; the joint information entropy is

H(X, Y) = −∑_{x∈X} ∑_{y∈Y} p(x, y)·log p(x, y),

wherein X is the event that the current entity semantic information has changed when compared with the historical semantic occupation information, Y is the event that the current point cloud information has changed when compared with the historical grid occupation information, p(x, y) represents the joint probability of the two events, and p(x, y) = p_x · p_y, where p_x and p_y are respectively the probabilities of change when the current entity semantic information and the current point cloud information are compared with the semantic occupation map; p_x and p_y are computed from the distribution coefficients α and β and the intermediate parameters A and B, which are built, using the max() and min() functions, from N_s and N_p (the numbers of entity semantic information items and point cloud information items in the historical semantic occupation map), M_s and M_p (the numbers matched when compared with the historical semantic occupation map), and D_s (the quantity of current entity semantic information); the closed-form expressions for p_x, p_y, A, and B are rendered as images in the original publication;
when the joint information entropy is larger than a first set threshold, the semantic occupation map is not updated; when the joint information entropy is smaller than the first set threshold, the positioning constraint residual is compared with a second set threshold: if the positioning constraint residual is smaller than the second set threshold, positioning has not jumped and the semantic occupation map is updated; otherwise, it is judged that the robot's positioning jumps at that moment, and the update of the semantic occupation map is not executed.
2. The method for positioning a long-period mobile robot according to claim 1, wherein the constructing of the grid map comprises: and extracting environment key points from the scene image, estimating the pose of the robot by matching the environment key points, the odometer and IMU data through a feature matching algorithm, determining the position of the robot, and completing the construction of the grid map through continuous iterative updating.
3. The method for positioning the long-period mobile robot according to claim 1, wherein in the process of positioning the position of the robot, semantic information of all entities in the current field of view is estimated according to the scene image, a change area is obtained by matching the semantic information with a semantic occupation map, and the change area is mapped into a grid map and is removed.
4. A method of positioning a long-period mobile robot as recited in claim 3, wherein the robot pose is positioned in combination with the laser point cloud scan matching of the grid map of the culled change area.
5. The method for positioning a long-period mobile robot according to claim 1, wherein in the process of updating the semantic occupation map, a plurality of multi-frame scene images of different angles at a plurality of positions are accumulated in the moving process of the robot to judge whether an object in a current scene changes or not;
And updating the grid map according to the scene point cloud in the visual field of the robot in the moving process, so that the semantic occupation map is updated by combining the updated scene semantic occupation information.
6. The method for positioning a long-period mobile robot according to claim 1, further comprising calibrating external parameters of a camera and a lidar, specifically:
The two-dimensional code is arranged right above the reflecting column, so that the projections of the centers of the two-dimensional code and the reflecting column on the horizontal plane are overlapped;
estimating the pose of the camera relative to the two-dimensional code, projecting the pose onto the horizontal plane, and recording the projected camera pose T_c;
screening all reflective-column point clouds according to the point cloud light intensity of the laser radar, calculating the center of each reflective column and its normal vector using the averaged point cloud, and fitting the laser radar pose T_l according to the poses of all reflective columns;
constructing constraints between the camera and the laser radar via the two-dimensional code and the reflective column, and solving to obtain the optimal external parameter T* between the camera and the laser radar (the constraint equation is rendered as an image in the original publication).
7. A long-period mobile robotic positioning system, comprising:
the semantic occupation determining module is configured to acquire a scene image and scene point clouds, perform semantic segmentation on the scene image, project the scene point clouds on a semantic segmentation result, estimate the positions and the categories of all entities in the scene and obtain scene semantic occupation information;
The grid map construction module is configured to estimate the pose and the position of the robot in the current scene so as to construct a grid map;
the semantic occupation map construction module is configured to map scene semantic occupation information into a grid map and construct a semantic occupation map;
The map updating module is configured to determine entity semantic information and corresponding point cloud information according to scene images and scene point clouds in a visual field of the robot in a moving process, determine a change area and a positioning constraint residual error through comparison with the semantic occupation map, calculate joint information entropy according to the change area, and update the semantic occupation map through the joint information entropy and the positioning constraint residual error;
The positioning module is configured to adopt the updated semantic occupation map to position the robot according to the scene image and the scene point cloud in the view of the robot at the current moment;
The process for updating the semantic occupation map by combining the joint information entropy with the positioning constraint residual comprises the following steps: the positioning constraint residual comprises a spatial position difference and an angle or attitude difference of the point cloud information and the entity semantic information when compared with the semantic occupation map; the joint information entropy is

H(X, Y) = −∑_{x∈X} ∑_{y∈Y} p(x, y)·log p(x, y),

wherein X is the event that the current entity semantic information has changed when compared with the historical semantic occupation information, Y is the event that the current point cloud information has changed when compared with the historical grid occupation information, p(x, y) represents the joint probability of the two events, and p(x, y) = p_x · p_y, where p_x and p_y are respectively the probabilities of change when the current entity semantic information and the current point cloud information are compared with the semantic occupation map; p_x and p_y are computed from the distribution coefficients α and β and the intermediate parameters A and B, which are built, using the max() and min() functions, from N_s and N_p (the numbers of entity semantic information items and point cloud information items in the historical semantic occupation map), M_s and M_p (the numbers matched when compared with the historical semantic occupation map), and D_s (the quantity of current entity semantic information); the closed-form expressions for p_x, p_y, A, and B are rendered as images in the original publication;
when the joint information entropy is larger than a first set threshold, the semantic occupation map is not updated; when the joint information entropy is smaller than the first set threshold, the positioning constraint residual is compared with a second set threshold: if the positioning constraint residual is smaller than the second set threshold, positioning has not jumped and the semantic occupation map is updated; otherwise, it is judged that the robot's positioning jumps at that moment, and the update of the semantic occupation map is not executed.
8. An electronic device comprising a memory and a processor and computer instructions stored on the memory and running on the processor, which when executed by the processor, perform the method of any one of claims 1-6.
9. A computer readable storage medium storing computer instructions which, when executed by a processor, perform the method of any of claims 1-6.
Application CN202410050100.9A, priority date 2024-01-15, filing date 2024-01-15: Long-period mobile robot positioning method, system, equipment and medium. Active, granted as CN117576200B (en).

Priority Applications (1)

CN202410050100.9A: CN117576200B (en), Long-period mobile robot positioning method, system, equipment and medium

Applications Claiming Priority (1)

CN202410050100.9A: CN117576200B (en), Long-period mobile robot positioning method, system, equipment and medium

Publications (2)

Publication Number / Publication Date
CN117576200A (en), 2024-02-20
CN117576200B (en), 2024-05-03

Family

ID=89884764

Family Applications (1)

Application Number / Title
CN202410050100.9A (Active): CN117576200B (en), Long-period mobile robot positioning method, system, equipment and medium

Country Status (1)

Country / Link
CN: CN117576200B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190981A (en) * 2019-12-25 2020-05-22 中国科学院上海微系统与信息技术研究所 Method and device for constructing three-dimensional semantic map, electronic equipment and storage medium
WO2020119684A1 (en) * 2018-12-14 2020-06-18 中国科学院深圳先进技术研究院 3d navigation semantic map update method, apparatus and device
CN111461245A (en) * 2020-04-09 2020-07-28 武汉大学 Wheeled robot semantic mapping method and system fusing point cloud and image
CN112097742A (en) * 2019-06-17 2020-12-18 北京地平线机器人技术研发有限公司 Pose determination method and device
WO2021053031A1 (en) * 2019-09-20 2021-03-25 Continental Automotive Gmbh Method for detecting a moving state of a vehicle
CN113674416A (en) * 2021-08-26 2021-11-19 中国电子科技集团公司信息科学研究院 Three-dimensional map construction method and device, electronic equipment and storage medium
CN113920044A (en) * 2021-09-30 2022-01-11 杭州电子科技大学 Photovoltaic hot spot component post-detection positioning method based on unmanned aerial vehicle imaging
WO2022143360A1 (en) * 2021-01-04 2022-07-07 炬星科技(深圳)有限公司 Autonomous environment map updating method and device, and computer-readable storage medium
CN116608850A (en) * 2023-06-13 2023-08-18 五八智能科技(杭州)有限公司 Method, system, device and medium for constructing robot navigation map
CN116734834A (en) * 2023-05-31 2023-09-12 人工智能与数字经济广东省实验室(深圳) Positioning and mapping method and device applied to dynamic scene and intelligent equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230113331A1 (en) * 2021-10-12 2023-04-13 Avidbots Corp Localization framework for dynamic environments for autonomous indoor semi-autonomous devices


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An RGB-D-based method for autonomous exploration and map building by a mobile robot in an unknown indoor environment; 于宁波, 王石荣, 徐昌; 机器人 (Robot); 2017-11-15 (06); full text *
Real-time scene classification for indoor robots based on semantic mapping; 张文, 刘勇, 张超凡, 张龙, 夏营威; 传感器与微系统 (Transducer and Microsystem Technologies); 2017-08-20 (08); full text *
Real-time scene classification for indoor robots based on semantic mapping; 张文, 刘勇, 张超凡, 张龙, 夏营威; 传感器与微系统 (Transducer and Microsystem Technologies); 2017 (08); full text *
Semantic map construction in dynamic environments; 齐少华, 徐和根, 万友文, 付豪; 计算机科学 (Computer Science) (09); full text *

Also Published As

Publication number Publication date
CN117576200A (en) 2024-02-20


Legal Events

Code / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant