CN114627253A - Map construction method, device and equipment - Google Patents

Map construction method, device and equipment

Info

Publication number
CN114627253A
CN114627253A
Authority
CN
China
Prior art keywords
pose
key frame
target key
determining
visual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210174502.0A
Other languages
Chinese (zh)
Inventor
王雷
陈熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecoflow Technology Ltd
Original Assignee
Ecoflow Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecoflow Technology Ltd filed Critical Ecoflow Technology Ltd
Priority to CN202210174502.0A
Publication of CN114627253A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The application provides a map construction method, apparatus and device, relating to the technical field of robots. The method includes: acquiring an initial pose and visual images during movement of a self-moving device; obtaining a plurality of target key frames from the visual images; determining a local error equation according to the initial pose and the visual images; performing local nonlinear optimization on the target key frames using the local error equation; adding the optimized pose of each target key frame to a feature map layer to obtain a target feature map layer; determining a global error equation according to the optimized poses; and performing global nonlinear optimization on the poses in the target feature map layer according to the global error equation to obtain a target global map. With this scheme, the self-moving device can construct a map with higher accuracy in scenes with sparse feature points, similar scenes, poor real-time kinematic (RTK) signals, wheel slip, and the like.

Description

Map construction method, device and equipment
Technical Field
The present application relates to the field of robotics, and in particular, to a map construction method, apparatus, and device.
Background
With the development of society and the advancement of science and technology, robots have taken on increasingly diverse functions, and because these robots perform well in specific environments they have become increasingly popular.
In most cases the working environment of a robot is unknown or uncertain, and autonomous movement and positioning of the robot rely on an environment map. Current robot map construction typically uses laser- or vision-based simultaneous localization and mapping (SLAM), which is prone to matching failure in sites with insufficient feature points or highly similar scenes, so the constructed map has low accuracy.
Disclosure of Invention
In view of this, the present application provides a map construction method, apparatus and device, so as to improve the accuracy of a map constructed by a robot in a field with insufficient feature points or similar scenes.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a map construction method, including:
acquiring an initial pose and a visual image during movement of a self-moving device;
extracting a target key frame from the visual image;
determining a local error equation according to the initial pose and the visual image;
performing local nonlinear optimization on the target key frame using the local error equation to obtain the optimized pose of the self-moving device at the target key frame;
adding the optimized pose of each target key frame to a feature map layer to obtain a target feature map layer;
determining a global error equation according to the optimized pose; and
performing global nonlinear optimization on the poses in the target feature map layer according to the global error equation to obtain a global map.
As an optional implementation manner of this embodiment of this application, the determining a local error equation according to the initial pose and the visual image includes:
determining a predicted pose of the self-moving device in the target key frame according to the initial pose, and determining a pose measurement error factor according to the predicted pose;
extracting visual image characteristic points of the target key frame;
determining a reprojection error factor according to the predicted pose and the visual image feature points;
determining a real-time dynamic measurement error factor according to the real-time position information of the self-moving equipment;
determining a marginalization factor according to the visual image feature points and the real-time position information;
and determining the local error equation according to the pose measurement error factor, the reprojection error factor, the real-time dynamic measurement error factor and the marginalization factor.
As an optional implementation manner of this embodiment of the present application, the determining a reprojection error factor according to the predicted pose and the visual image feature point includes:
determining coordinates of visual image feature points of the target key frame in the feature map layer according to the predicted pose;
determining target feature points matched with the visual image feature points in the feature map layer according to the coordinates of the visual image feature points in the feature map layer;
converting the target feature points into the visual image according to the predicted pose to obtain matched feature points;
and determining a reprojection error factor according to the matched feature points and the visual image feature points.
As an optional implementation manner of the embodiment of the present application, determining a global error equation according to the optimized pose includes:
determining a relative pose constraint factor of the pose optimized by the target key frame according to the pose corresponding to each target key frame and the inverse of the pose corresponding to each target key frame;
obtaining a closed loop detection pose constraint factor according to the inverse of the pose corresponding to each target key frame and the closed loop detection pose corresponding to each target key frame;
determining a real-time dynamic measurement position constraint factor according to the inverse of the corresponding pose of each target key frame and the real-time dynamic measurement position of each target key frame;
and determining a global error equation according to the relative pose constraint factor, the closed loop detection pose constraint factor and the real-time dynamic measurement position constraint factor.
As an optional implementation manner of the embodiment of the present application, the global map further includes at least one of a target track layer and a boundary layer, where the target track layer is used to determine a working track range during the movement of the self-moving device, and the boundary layer is used to distinguish a working range from a non-working range and to plan the working track range for the self-moving device within the working range.
As an optional implementation manner of this embodiment, when the global map includes the target track layer, the method further includes:
acquiring an initial track layer;
and adding the optimized pose of each target key frame into the initial track layer to obtain the target track layer.
As an optional implementation manner of this embodiment of the present application, when the visual image includes a color image and a depth image corresponding to the color image, and the global map includes a boundary layer, the method further includes: acquiring boundary information in the color image, and determining a depth value corresponding to the boundary information from a depth image corresponding to the color image; wherein the boundary information is used for determining the working range and the non-working range;
acquiring coordinates of the boundary information in a first coordinate system of the mobile device according to the boundary information and the depth value corresponding to the boundary information;
and determining the coordinates of the boundary information in a second coordinate system according to the acquired pose of the self-moving equipment in the moving process and the coordinates of the boundary information in the first coordinate system, so that the coordinates of the boundary information in the second coordinate system are used as the coordinate information of the boundary layer.
As an optional implementation manner of this embodiment, before the obtaining of the initial pose and the visual image during the moving process of the mobile device, the method further includes:
controlling the self-moving equipment to move from an initial position along a preset direction for a preset distance and then stop;
taking the roll angle, the pitch angle and the position information of the self-moving equipment when the self-moving equipment stops as the initial pose of the self-moving equipment;
transforming the initial pose to a second coordinate system where feature points in an initial visual image acquired after the mobile equipment stops are located, so that the initial pose corresponds to coordinates in the second coordinate system one to one;
and obtaining an initial feature map layer according to the coordinates of the feature points in the initial visual image in the second coordinate system.
In a second aspect, an embodiment of the present application provides a map building apparatus, including:
an acquisition module, configured to acquire an initial pose and a visual image during movement of a self-moving device;
an extraction module, configured to acquire a plurality of target key frames from the visual image;
a local error determination module, configured to determine a local error equation according to the initial pose and the visual image;
a local optimization module, configured to perform local nonlinear optimization on the target key frames using the local error equation to obtain the optimized pose of the self-moving device at each target key frame;
an adding module, configured to add the optimized pose of each target key frame to a feature map layer to obtain a target feature map layer;
a global error determination module, configured to determine a global error equation according to the optimized poses;
a global optimization module, configured to perform global nonlinear optimization on the poses in the target feature map layer according to the global error equation to obtain a target global map.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory for storing a computer program and a processor; the processor is configured to perform the method of the first aspect or any of the embodiments of the first aspect when the computer program is invoked.
In a fourth aspect, the present application provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method of the first aspect or any implementation manner of the first aspect.
According to the map construction scheme provided by the embodiments of the application, an initial pose and visual images are acquired during movement of the self-moving device, a plurality of target key frames are obtained from the visual images, and a local error equation is determined according to the initial pose and the visual images. Local nonlinear optimization is performed on the target key frames using the local error equation to obtain the optimized pose of the self-moving device at each target key frame, and the optimized poses are added to a feature map layer to obtain a target feature map layer. A global error equation is then determined according to the optimized poses, and global nonlinear optimization is performed on the poses in the target feature map layer according to the global error equation to obtain a target global map. By combining a local error equation and a global error equation generated from the visual images and pose data, the scheme optimizes the pose of the self-moving device twice: it reduces the local error of the key-frame poses during movement and the global error of the poses in the whole target feature map layer, thereby improving map construction accuracy. As a result, the self-moving device can still construct an accurate map in scenes where the data collected by some sensors are poor, for example sparse feature points, similar scenes, poor real-time kinematic (RTK) signals, or wheel slip.
Drawings
Fig. 1 is a schematic flowchart of a map construction method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a process for determining a local error equation according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a reprojection error factor determination process provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a global error equation determination process provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a global map constructed by the map construction method provided in the embodiment of the present application;
fig. 6 is a schematic flowchart of a target track layer generation process provided in an embodiment of the present application;
FIG. 7 is a schematic flow chart of a boundary layer generation process provided by an embodiment of the present application;
fig. 8 is a schematic flowchart of a positioning initialization process according to an embodiment of the present application;
FIG. 9 is a block diagram of a mapping apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below with reference to the drawings. The terminology used in the description of the embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
The map construction method provided by the embodiments of the application may be implemented by a map construction device. The map construction device may be a self-moving device such as a robot, or a chip or circuit applied to the robot; alternatively, it may be an electronic device, or a chip or circuit applied to the electronic device. For example, the map may be constructed on a computer using the map construction method. The following description takes the case in which the map construction method is applied to a robot as an example. When the map construction device is an electronic device, it may interact with the robot; for example, the robot may report its various sensor data to the electronic device.
The robot may be a mowing robot, a sweeping robot, a mine clearance robot, a cruise robot, etc., which is not particularly limited in this embodiment.
Fig. 1 is a schematic flowchart of a map building method provided in an embodiment of the present application, and as shown in fig. 1, the method may include the following steps:
and S110, acquiring an initial pose and a visual image in the moving process of the mobile equipment.
A number of different types of sensors and a camera are deployed on the self-moving device, including but not limited to a wheel speed meter, an inertial sensor, and a Real Time Kinematic (RTK) instrument.
Wherein, the wheel speed meter can be a ring wheel speed sensor, a linear wheel speed sensor or a Hall wheel speed sensor, etc.
The inertial sensor may be a Micro-Electro-Mechanical System (MEMS) sensor, an Inertial Measurement Unit (IMU), or the like.
The camera may be a monocular camera, a binocular camera, an RGB-D camera, an event camera, etc.
The initial pose may include data collected by the wheel speed meter of the self-moving device during movement, data collected by the inertial sensor, and position information of the self-moving device acquired by the real-time kinematic instrument. The self-moving device may capture visual images within its field of view through the camera and then extract feature points from the visual images.
In the following description, the wheel speed meter is taken to be a ring wheel speed sensor, the inertial sensor an IMU, and the camera an RGB-D camera as an example.
It is understood that the initial pose may also include data of other sensors, which is not limited by the embodiments of the present application.
And S120, acquiring a plurality of target key frames from the visual image.
Specifically, from the captured frames of visual images other than the first frame, the self-moving device may determine one or more target visual images whose parallax with respect to the previous visual image is greater than a preset angle.
The self-moving device may take the first frame of visual image and the one or more target visual images as target key frames; the number of visual image frames used as target key frames may be set according to the actual situation and is not limited here.
The extracted target key frame can be added into a sliding window from the mobile device so as to facilitate the subsequent local nonlinear optimization of the target key frame.
The self-moving device can also store the extracted target key frame in a visual dictionary database so as to facilitate subsequent closed-loop detection.
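As a rough illustration of the key-frame selection described above (not taken from the patent; the threshold, data layout and function names are assumptions), the following Python sketch keeps the first frame and every frame whose mean feature parallax with respect to the previous key frame exceeds a preset threshold:

```python
import numpy as np

def select_target_keyframes(frames, tracked_points, parallax_threshold_px=20.0):
    """Illustrative key-frame selection: the first frame is always a key frame;
    a later frame becomes a key frame when its mean tracked-feature parallax
    with respect to the previous key frame exceeds a threshold.

    frames:         list of frame identifiers (e.g. indices or timestamps)
    tracked_points: list of dicts mapping feature id -> (u, v) pixel position
    """
    keyframes = [0]  # the first visual image is taken as a target key frame
    last_kf = 0
    for i in range(1, len(frames)):
        prev, curr = tracked_points[last_kf], tracked_points[i]
        common = set(prev) & set(curr)
        if not common:
            continue
        parallax = np.mean([np.linalg.norm(np.subtract(curr[f], prev[f])) for f in common])
        if parallax > parallax_threshold_px:
            keyframes.append(i)
            last_kf = i
    return keyframes
```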
And S130, determining a local error equation according to the initial pose and the visual image.
Fig. 2 is a schematic flow chart of a local error equation determination process provided in an embodiment of the present application, and as shown in fig. 2, the process may include the following steps:
s131, obtaining a predicted pose of the mobile device in the target key frame according to the initial pose, and determining a pose measurement error factor according to the predicted pose.
The predicted pose may include a wheel speed meter prediction and an IMU prediction, and the pose measurement error factors may include a wheel speed meter error factor and an IMU error factor, respectively.
The target key frame can be a first frame of visual image in the visual images acquired by the camera and the visual image with the parallax greater than a preset threshold value with the previous frame of visual image.
Specifically, the self-moving device may pre-integrate wheel speed meter data and IMU data corresponding to each acquired frame of visual image to obtain a wheel speed meter prediction result and an IMU prediction result of the self-moving device at the target key frame.
The self-moving device can pre-integrate the data collected by the wheel speed meter according to the following formula (1):
$$p^{w}_{b_j} = p^{w}_{b_i} + q^{w}_{b_i}\, q_{bo}\, v_{o_i}\, \Delta t \tag{1}$$
where $p^{w}_{b_j}$ denotes the coordinates of the self-moving device in the world coordinate system at the j-th target key frame, $p^{w}_{b_i}$ denotes the coordinates of the self-moving device in the world coordinate system at the i-th target key frame, $q^{w}_{b_i}$ denotes the rotational attitude of the self-moving device in the world coordinate system at the i-th target key frame, $q_{bo}$ denotes the rotational extrinsic parameter from the IMU coordinate system to the wheel-speed-meter coordinate system, $v_{o_i}$ denotes the wheel-speed-meter velocity at the i-th target key frame, and $\Delta t$ denotes the time interval between the i-th and j-th target key frames, with $i \geq 1$ and $j > i$.
The self-moving device may pre-integrate the data collected by the IMU according to equation (2) below:
$$\begin{aligned}
p^{w}_{b_j} &= p^{w}_{b_i} + v^{w}_{b_i}\,\Delta t - \tfrac{1}{2}\, g^{w} \Delta t^{2} + q^{w}_{b_i}\,\alpha^{b_i}_{b_j} \\
v^{w}_{b_j} &= v^{w}_{b_i} - g^{w}\,\Delta t + q^{w}_{b_i}\,\beta^{b_i}_{b_j} \\
q^{w}_{b_j} &= q^{w}_{b_i}\,\gamma^{b_i}_{b_j} \\
b^{a}_{b_j} &= b^{a}_{b_i}, \qquad b^{g}_{b_j} = b^{g}_{b_i}
\end{aligned} \tag{2}$$
where $p^{w}_{b_j}$, $v^{w}_{b_j}$ and $q^{w}_{b_j}$ denote the coordinates, velocity and rotational attitude of the self-moving device in the world coordinate system at the j-th target key frame; $b^{g}_{b_i}$ and $b^{a}_{b_i}$ denote the biases of the IMU gyroscope and accelerometer at the i-th target key frame; $p^{w}_{b_i}$, $v^{w}_{b_i}$ and $q^{w}_{b_i}$ denote the coordinates, velocity and rotational attitude of the self-moving device in the world coordinate system at the i-th target key frame; $\Delta t$ denotes the time interval between the i-th and j-th target key frames; $g^{w}$ denotes gravity; $\alpha^{b_i}_{b_j}$, $\beta^{b_i}_{b_j}$ and $\gamma^{b_i}_{b_j}$ denote the pre-integration results of the IMU translation, velocity and rotation from the i-th to the j-th target key frame; and $b^{g}_{b_j}$ and $b^{a}_{b_j}$ denote the biases of the IMU gyroscope and accelerometer at the j-th target key frame.
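The following Python sketch illustrates, under the reconstructed forms of formulas (1) and (2), how the wheel-speed-meter and IMU pre-integration terms could be used to predict the pose at key frame j from the state at key frame i; rotation matrices are used in place of quaternions and all names are assumptions rather than part of the patent:

```python
import numpy as np

def propagate_wheel_odometry(p_wi, R_wi, R_bo, v_oi, dt):
    """Sketch of formula (1): predict the position at key frame j from the
    position/attitude at key frame i and the wheel-speed-meter velocity.
    R_wi is the attitude of the device body in the world frame, R_bo the
    rotational extrinsic from the IMU (body) frame to the wheel frame."""
    return p_wi + R_wi @ (R_bo @ v_oi) * dt

def propagate_imu(p_wi, v_wi, R_wi, alpha_ij, beta_ij, R_gamma_ij, g_w, dt):
    """Sketch of formula (2): predict position, velocity and attitude at key
    frame j from the state at key frame i and the IMU pre-integration terms
    alpha (translation), beta (velocity) and gamma (rotation)."""
    p_wj = p_wi + v_wi * dt - 0.5 * g_w * dt**2 + R_wi @ alpha_ij
    v_wj = v_wi - g_w * dt + R_wi @ beta_ij
    R_wj = R_wi @ R_gamma_ij
    return p_wj, v_wj, R_wj
```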
After pre-integrating the data collected by the wheel speed meter and the data collected by the IMU, the self-moving device may determine error factors of the wheel speed meter and the IMU according to a pre-integration result of the wheel speed meter (i.e., a prediction result of the wheel speed meter) and a pre-integration result of the IMU (i.e., a prediction result of the IMU), respectively.
Specifically, the self-moving device may determine a wheel speed meter error factor according to equation (3) as follows:
$$r_{o}(v_{o_i}, s) = \Big\| \big(q^{w}_{b_i}\big)^{-1}\big(p^{w}_{b_j} + q^{w}_{b_j}\,p_{bo} - p^{w}_{b_i}\big) - p_{bo} - \hat{p}^{\,o}_{ij} \Big\|^{2}_{\Sigma_{o}} \tag{3}$$
where $r_{o}(v_{o_i}, s)$ denotes the wheel-speed-meter error factor of the i-th target key frame, $r_{o}$ denotes the wheel-speed-meter error, $v_{o_i}$ denotes the wheel-speed-meter velocity at the i-th target key frame, $s$ denotes the state quantities corresponding to marginalization of a visual image feature point, i.e., the optimized variables in the sliding window, $q^{w}_{b_i}$ denotes the rotational attitude of the self-moving device in the world coordinate system at the i-th target key frame, $p^{w}_{b_j}$ and $p^{w}_{b_i}$ denote the coordinates of the self-moving device in the world coordinate system at the j-th and i-th target key frames, $p_{bo}$ denotes the translational extrinsic parameter from the IMU coordinate system to the wheel-speed-meter coordinate system, $q^{w}_{b_j}$ denotes the rotational attitude of the self-moving device in the world coordinate system at the j-th target key frame, $\hat{p}^{\,o}_{ij}$ denotes the wheel-speed-meter pre-integration result between the i-th and j-th target key frames, and $\Sigma_{o}$ denotes the information matrix of the wheel speed meter.
The self-moving device may determine the IMU error factor according to equation (4) as follows:
$$r_{b}(b_i, b_j, s) = \Big\| \big[\, r_{p},\; r_{q},\; r_{v},\; r_{ba},\; r_{bg} \,\big]^{\mathsf T} \Big\|^{2} \tag{4}$$
where $b_j$ denotes the j-th target key frame, $b_i$ denotes the i-th target key frame, $s$ denotes the state quantities corresponding to marginalization of a visual image feature point, i.e., the optimized variables in the sliding window, and $r_{p}$, $r_{q}$, $r_{v}$, $r_{ba}$ and $r_{bg}$ denote the translation error, rotation error, velocity error, accelerometer-bias error and gyroscope-bias error of the IMU, respectively; $\big(T^{w}_{b_i}\big)^{-1}$ denotes the inverse of the self-moving device pose at the i-th target key frame, and $(x, y, z)$ are the camera coordinates of the self-moving device in the target key frame.
S132, extracting the visual image feature points of the target key frame, and determining a reprojection error factor according to the predicted pose and the visual image feature points.
Fig. 3 is a schematic flowchart of a reprojection error factor determination process provided in an embodiment of the present application, and as shown in fig. 3, the process may include the following steps:
s1321, determining coordinates of the visual image feature points of the target key frame in the feature map layer according to the predicted pose.
A visual image feature point is a representative point in the visual image, for example a point on the outline of a tree, a person or an object in a frame of image; the self-moving device may acquire the pixel values corresponding to such points in the visual image.
The self-moving device may determine the predicted pose of the visual image according to the prediction result of the wheel speed meter corresponding to the visual image and the prediction result of the inertial sensor, and determine the coordinate of the visual image feature point a in the feature map layer according to the predicted pose.
S1322, determining target feature points matched with the visual image feature points in the feature map layer according to the coordinates of the visual image feature points in the feature map layer.
Specifically, according to the coordinates of the visual image feature point a in the feature map layer, the self-moving device may find a target feature point a1 in the feature map layer, where the target feature point a1 represents the same actual position as the visual image feature point a and has the same corresponding pixel value.
And S1323, converting the target feature points into the visual image according to the predicted pose to obtain matched feature points.
Specifically, the self-moving device can determine the coordinates of the target feature point a1 in the visual image according to the predicted pose, and project a1 into the visual image to obtain a matching feature point a' of the visual image feature point a.
And S1324, determining a re-projection error factor according to the matched feature points and the visual image feature points.
Specifically, the self-moving device may determine the reprojection error factor according to the matching feature points a' of the visual image feature points a and a in the visual image, and the self-moving device may determine the reprojection error factor according to the following formula (5) and formula (6):
$$P^{c_j} = T_{bc}^{-1}\, \big(T^{w}_{b_j}\big)^{-1}\, T^{w}_{b_i}\, T_{bc}\, \frac{1}{\lambda}\, \varepsilon_{c}^{-1}\big(u^{c_i}\big) \tag{5}$$
$$r_{c} = \varepsilon_{c}\big(P^{c_j}\big) - u^{c_j} \tag{6}$$
where $P^{c_j}$ denotes the coordinates of the feature point in the camera coordinate system of the j-th target key frame, $T_{bc}$ denotes the extrinsic parameters between the camera and IMU coordinate systems, $T^{w}_{b_i}$ denotes the pose of the i-th target key frame, $T^{w}_{b_j}$ denotes the pose of the j-th target key frame, $\lambda$ denotes the inverse depth of the feature point, $\varepsilon_{c}$ denotes the pixel projection function of the visual image, $u^{c_i}$ and $u^{c_j}$ denote the coordinates of the feature point in the pixel coordinate systems of the i-th and j-th target key frames, and $r_{c}$ denotes the reprojection error factor.
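A minimal sketch of the reprojection-error computation of formulas (5) and (6), assuming pinhole intrinsics K and 4x4 homogeneous transforms for the poses and extrinsics (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def reprojection_error(T_w_bi, T_w_bj, T_bc, u_i, u_j, inv_depth, K):
    """Back-project a feature observed at pixel u_i in key frame i using its
    inverse depth, transfer it into the camera frame of key frame j, and
    compare the reprojection with the matched observation u_j."""
    # back-project pixel u_i to a 3D point in camera frame i
    x_ci = np.linalg.inv(K) @ np.array([u_i[0], u_i[1], 1.0]) / inv_depth
    x_ci_h = np.append(x_ci, 1.0)
    # camera_i -> body_i -> world -> body_j -> camera_j
    T_cj_ci = np.linalg.inv(T_bc) @ np.linalg.inv(T_w_bj) @ T_w_bi @ T_bc
    x_cj = (T_cj_ci @ x_ci_h)[:3]
    # project into pixel coordinates of key frame j
    u_proj = (K @ (x_cj / x_cj[2]))[:2]
    return u_proj - np.asarray(u_j)   # r_c, the reprojection error factor
```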
And S133, determining real-time dynamic measurement error factors according to the real-time position information of the mobile equipment.
The self-moving device may determine an RTK error factor (i.e., a real-time kinematic measurement error factor) based on its current position information acquired by the RTK and its position information in the current visual image, where the real-time position information includes the coordinates of the self-moving device in the RTK base-station coordinate system and its yaw angle.
Specifically, the self-moving device may determine the RTK error factor according to equation (7) as follows:
$$\mathrm{error} = (x_1 - x_2)^2 + (y_1 - y_2)^2 + \lvert \theta_1 - \theta_2 \rvert \tag{7}$$
where error denotes the RTK error factor, $(x_1, y_1)$ denotes the coordinates of the self-moving device in the RTK base-station coordinate system as acquired by the RTK, $(x_2, y_2)$ denotes the coordinates of the self-moving device in the current visual image, $\theta_1$ denotes the current heading angle of the RTK rover (i.e., the self-moving device), and $\theta_2$ denotes the current heading angle of the self-moving device in the visual image.
In order to prevent the constructed map from drifting in heading angle, a single-antenna RTK may be used to acquire the position information of the self-moving device. Since a single-antenna RTK cannot feed back the heading angle of the rover, the self-moving device calculates the current heading angle of the RTK rover by inter-frame differencing.
Specifically, when the self-moving device detects that it is moving along a straight line, the current heading angle of the RTK rover can be determined from its position coordinates $(rtk\_x_1, rtk\_y_1)$ and $(rtk\_x_2, rtk\_y_2)$ in two adjacent visual images according to the following formula (8):
$$\theta_1 = \operatorname{atan2}(rtk\_y_2 - rtk\_y_1,\; rtk\_x_2 - rtk\_x_1) \tag{8}$$
When the current angular velocity of the self-moving device is smaller than a set angular velocity, the self-moving device is considered to be moving in a straight line; otherwise it is considered to be moving non-linearly.
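The RTK error factor of formula (7) and the inter-frame heading estimate of formula (8) are simple enough to state directly in code; the following sketch is illustrative only:

```python
import math

def rtk_heading_from_positions(rtk_x1, rtk_y1, rtk_x2, rtk_y2):
    """Formula (8): heading of a single-antenna RTK rover estimated by
    inter-frame differencing of two consecutive RTK positions (only valid
    while the device is moving in a straight line)."""
    return math.atan2(rtk_y2 - rtk_y1, rtk_x2 - rtk_x1)

def rtk_error_factor(x1, y1, theta1, x2, y2, theta2):
    """Formula (7): squared position difference plus absolute heading
    difference between the RTK measurement (x1, y1, theta1) and the pose of
    the device estimated from the current visual image (x2, y2, theta2)."""
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 + abs(theta1 - theta2)
```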
And S134, determining a marginalization factor according to the visual image feature points and the real-time position information.
The self-moving device can determine the marginalization factor according to the visual image feature point and the real-time position information of the self-moving device, and specifically, the self-moving device can determine the marginalization factor according to the following formula (9):
$$e_r = \lVert r_p - J_p\, s \rVert^{2} \tag{9}$$
where $e_r$ denotes the marginalization factor, $J_p$ denotes the information matrix corresponding to marginalization, which includes the real-time position information of the self-moving device, and $s$ denotes the state quantities of a visual image feature point corresponding to marginalization, i.e., the optimized variables in the sliding window.
And S135, determining a local error equation according to the pose measurement error factor, the reprojection error factor, the real-time dynamic measurement error factor and the marginalization factor.
Specifically, the self-moving device may construct a local error equation according to equation (10) as follows:
$$\min_{s}\;\Big\{ \lVert r_p - J_p\, s \rVert^{2} + \sum_{k} \big\lVert r_{b}\big(b_k, b_{k+1}, s\big) \big\rVert^{2} + \sum_{(l,j)} \rho\Big( \big\lVert r_{c}\big(\varepsilon_{c}, \bar{u}^{\,c_j}_{l}, s\big) \big\rVert^{2} \Big) + \sum_{i} \Big[ (x_{cur_i} - x_{RTK_i})^2 + (y_{cur_i} - y_{RTK_i})^2 + \lvert \theta_{cur_i} - \theta_{RTK_i} \rvert \Big] \Big\} \tag{10}$$
where $\lVert r_p - J_p s \rVert^{2}$ denotes the marginalization factor, $r_{b}(\cdot)$ denotes the error factor of the IMU (i.e., the pose measurement error factor), $r_{c}(\cdot)$ denotes the reprojection error factor, $\rho$ denotes the kernel function, $\varepsilon_{c}$ denotes the pixel projection function of the visual image, $\bar{u}^{\,c_j}_{l}$ denotes a pixel point of the visual image, the last summation denotes the RTK error factor (i.e., the real-time kinematic measurement error factor), $(x_{cur_i}, y_{cur_i}, \theta_{cur_i})$ denotes the coordinates of the self-moving device in the i-th target key frame, and $(x_{RTK_i}, y_{RTK_i}, \theta_{RTK_i})$ denotes the coordinates of the self-moving device acquired by the RTK when the camera captured the i-th target key frame.
It is understood that when the closed-loop detection pose constraint factor is present, the self-moving device may also construct a local error equation according to the following equation (11):
$$\min_{s}\;\Big\{ \lVert r_p - J_p\, s \rVert^{2} + \sum_{k} \big\lVert r_{b}\big(b_k, b_{k+1}, s\big) \big\rVert^{2} + \sum_{(l,j)} \rho\big(\lVert r_{c} \rVert^{2}\big) + \sum_{i} \Big[ (x_{cur_i} - x_{RTK_i})^2 + (y_{cur_i} - y_{RTK_i})^2 + \lvert \theta_{cur_i} - \theta_{RTK_i} \rvert \Big] + \big\lVert T_{cur}^{-1} \cdot T_{close} \big\rVert^{2} \Big\} \tag{11}$$
where $\lVert T_{cur}^{-1} \cdot T_{close} \rVert^{2}$ denotes the closed-loop detection pose constraint factor, $T_{cur}$ denotes the current pose of the self-moving device, and $T_{close}$ denotes the detected closed-loop pose.
Further, when the self-moving device is in a scene where the data collected by some sensors are poor, such as sparse feature points, similar scenes, poor RTK signals or wheel slip, the local error equation can still be constructed from the error factors determined by the normal data collected by the other sensors.
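To make the structure of the local error equation (formulas (10) and (11)) concrete, the following sketch accumulates the four groups of factors into a single scalar cost; the Huber kernel is an assumed choice, since the patent only refers to a kernel function ρ, and in practice the minimization would be carried out by a nonlinear least-squares solver rather than by evaluating this sum directly:

```python
import numpy as np

def huber(e2, delta=1.0):
    """Assumed robust kernel rho(.) applied to a squared residual."""
    e = np.sqrt(e2)
    return e2 if e <= delta else 2 * delta * e - delta ** 2

def local_error(marg_residual, J_p, s, imu_wheel_residuals, reproj_residuals, rtk_terms):
    """Conceptual assembly of the local error equation: marginalization factor
    + pose measurement (IMU/wheel) factors + robust-kernel reprojection
    factors + RTK factors."""
    total = np.sum((marg_residual - J_p @ s) ** 2)                # marginalization factor
    total += sum(np.sum(r ** 2) for r in imu_wheel_residuals)     # pose measurement error factors
    total += sum(huber(np.sum(r ** 2)) for r in reproj_residuals) # reprojection error factors
    total += sum(rtk_terms)                                       # real-time kinematic error factors
    return total
```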
And S140, performing local nonlinear optimization on the target key frame by adopting a local error equation to obtain the optimized pose of the target key frame.
Specifically, n target key frames can be maintained in the sliding window, where n is an integer greater than 1. The self-moving device can perform nonlinear optimization on the pose of each target key frame maintained in the sliding window according to the local error equation to obtain the optimized pose of each target key frame.
The pose in the target keyframe may include the following variables:
$$s = \big[\, x_1, \ldots, x_n,\; x_{bc},\; x_{bo},\; \lambda_1, \ldots, \lambda_m \,\big], \quad m \in [0, z]$$
$$x_k = \big[\, p^{w}_{b_k},\; v^{w}_{b_k},\; q^{w}_{b_k},\; b^{a}_{b_k},\; b^{g}_{b_k} \,\big], \quad k \in [1, n]$$
$$x_{bc} = \big[\, p_{bc},\; q_{bc} \,\big]$$
$$x_{bo} = \big[\, p_{bo},\; q_{bo} \,\big]$$
where $s$ denotes the optimized variables in the sliding window, which can also be expressed as the optimized poses of the target key frames in the sliding window; $(x_1, \ldots, x_n)$ and $(\lambda_1, \ldots, \lambda_m)$ denote the optimizable state variables of the key-frame poses and of the feature points in the sliding window; $x_{bc}$ denotes the extrinsic parameters from the IMU coordinate system to the camera coordinate system; $x_{bo}$ denotes the extrinsic parameters from the IMU coordinate system to the wheel-speed-meter coordinate system; $p_{bc}$ and $q_{bc}$ denote the translational and rotational extrinsic parameters from the IMU coordinate system to the camera coordinate system; $p_{bo}$ and $q_{bo}$ denote the translational and rotational extrinsic parameters from the IMU coordinate system to the wheel-speed-meter coordinate system; $x_k$ denotes the optimizable state variables of the k-th target key frame; $p^{w}_{b_k}$, $v^{w}_{b_k}$ and $q^{w}_{b_k}$ denote the position, velocity and attitude of the self-moving device at the k-th target key frame; and $b^{a}_{b_k}$ and $b^{g}_{b_k}$ denote the biases of the IMU accelerometer and gyroscope at the k-th target key frame.
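As an illustration of how the sliding-window state s could be organized in code (names and container choices are assumptions, not part of the patent):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyframeState:
    """x_k: optimizable state of the k-th target key frame in the sliding window."""
    p_w: np.ndarray      # position of the device in the world frame
    v_w: np.ndarray      # velocity of the device in the world frame
    q_w: np.ndarray      # attitude quaternion of the device in the world frame
    b_a: np.ndarray      # IMU accelerometer bias
    b_g: np.ndarray      # IMU gyroscope bias

@dataclass
class SlidingWindowState:
    """s: all variables optimized inside the sliding window."""
    keyframes: list      # x_1 ... x_n
    x_bc: tuple          # (p_bc, q_bc): IMU-to-camera extrinsics
    x_bo: tuple          # (p_bo, q_bo): IMU-to-wheel-speed-meter extrinsics
    inverse_depths: list = field(default_factory=list)  # lambda_1 ... lambda_m
```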
And S150, adding the optimized pose of each target key frame into the feature map layer to obtain a target feature map layer.
The created feature map layer is first initialized, that is, the initial pose is added to the feature map layer. After initialization, the self-moving device may add the optimized pose of each target key frame in the sliding window to the corresponding position in the feature map layer to obtain the target feature map layer, where the target feature map layer includes the optimized pose of each target key frame and the feature points of each optimized target key frame.
And S160, determining a global error equation according to the optimized pose.
Fig. 4 is a schematic flowchart of a global error equation determining process provided in the embodiment of the present application, and as shown in fig. 4, the process may include the following steps:
and S161, determining a relative pose constraint factor of the pose optimized by the target key frame according to the pose corresponding to each target key frame and the inverse of the pose corresponding to each target key frame.
The self-moving device can add n of the target key frames into the pose graph, where, among these n target key frames, the translation difference between every two adjacent target key frames is greater than a set distance or the angle difference is greater than a set angle.
The self-moving equipment can determine the relative pose constraint factor of the optimized pose of the target key frame in the pose graph according to the optimized pose corresponding to each target key frame in the pose graph and the inverse of the optimized pose corresponding to each target key frame in the pose graph.
Specifically, the self-moving device may determine the relative pose constraint factor according to equation (12) as follows:
$$\mathrm{error}_1 = T_1.\mathrm{inverse}() \cdot T_2 + T_2.\mathrm{inverse}() \cdot T_3 + \cdots + T_i.\mathrm{inverse}() \cdot T_{i+1} + \cdots + T_{n-1}.\mathrm{inverse}() \cdot T_n \tag{12}$$
where $\mathrm{error}_1$ denotes the relative pose constraint factor, $T_i$ denotes the optimized pose corresponding to the i-th of the n target key frames, and $T_i.\mathrm{inverse}()$ denotes the inverse of the optimized pose corresponding to the i-th target key frame.
And S162, obtaining a closed loop detection pose constraint factor according to the inverse of the pose corresponding to each target key frame and the closed loop detection pose corresponding to each target key frame.
The self-moving device may detect whether each target key frame in the pose graph needs closed-loop detection. For example, closed-loop detection may be performed on the target key frames in the pose graph at a set interval: when the self-moving device detects that the number of frames between the current target key frame and the last target key frame on which closed-loop detection was performed equals the set value, closed-loop detection is performed on the current target key frame; otherwise it is not.
When the self-moving device determines that the current target key frame in the pose graph requires closed-loop detection, it searches the visual dictionary database for similar target key frames according to the feature points extracted from the current target key frame. If no similar target key frame is found, it continues to check whether the next target key frame requires closed-loop detection; if similar target key frames are found, they are combined into a local feature map.
The self-moving device can then match the feature points in the current target key frame against the feature points in the local map using a PnP (Perspective-n-Point) algorithm to obtain the closed-loop detection pose.
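A minimal sketch of this closed-loop pose estimation step, assuming the visual-dictionary lookup and feature matching have already produced aligned 3D map points and 2D key points; OpenCV's solvePnPRansac is used here as one possible PnP implementation:

```python
import cv2
import numpy as np

def closed_loop_pose(local_map_points_3d, current_keypoints_2d, K):
    """Estimate the closed-loop detection pose of the current target key frame
    by solving PnP with RANSAC between 3D points of the local feature map and
    their matched 2D observations in the current key frame."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(local_map_points_3d, dtype=np.float64),
        np.asarray(current_keypoints_2d, dtype=np.float64),
        K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T   # closed-loop detection pose of the current key frame
```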
The self-moving equipment can obtain a closed loop detection pose constraint factor according to the inverse of the optimized pose corresponding to each target key frame in the pose graph and the closed loop detection pose corresponding to each target key frame.
Specifically, the self-moving device may determine the closed-loop detection pose constraint factor according to the following equation (13):
$$\mathrm{error}_2 = T_i.\mathrm{inverse}() \cdot T_{i\_close} \tag{13}$$
where $\mathrm{error}_2$ denotes the closed-loop detection pose constraint factor and $T_{i\_close}$ denotes the closed-loop detection pose corresponding to the i-th of the n target key frames.
And S163, determining a real-time dynamic measurement position constraint factor according to the inverse of the corresponding pose of each target key frame and the real-time dynamic measurement position corresponding to each target key frame.
The self-moving equipment can determine the real-time dynamic measurement position constraint factor according to the inverse of the optimized pose corresponding to each target key frame in the pose graph and the position information acquired by the RTK corresponding to each target key frame in the pose graph.
Specifically, the self-moving device may determine the real-time dynamic measurement location constraint factor according to equation (14) as follows:
$$\mathrm{error}_3 = T_i.\mathrm{inverse}() \cdot T_{i\_RTK} \tag{14}$$
where $\mathrm{error}_3$ denotes the real-time kinematic (RTK) position constraint factor and $T_{i\_RTK}$ denotes the position information acquired by the RTK corresponding to the i-th of the n target key frames.
And S164, determining a global error equation according to the relative pose constraint factor, the closed loop detection pose constraint factor and the real-time dynamic measurement position constraint factor.
Specifically, the self-moving device may determine a global error equation according to equation (15) as follows:
$$\mathrm{error} = \mathrm{error}_1 + \mathrm{error}_2 + \mathrm{error}_3 \tag{15}$$
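A conceptual sketch of assembling the global error of formulas (12) to (15) from 4x4 pose matrices; a real pose-graph implementation would express each term as an SE(3) residual supplied to a nonlinear optimizer, and the relative-pose measurements between consecutive key frames are an assumed input, so the code below only illustrates the structure error = error1 + error2 + error3:

```python
import numpy as np

def se3_residual(T_est, T_ref):
    """Stand-in for a proper SE(3) residual: deviation of T_est^-1 * T_ref
    from the identity (zero when the two poses agree)."""
    return np.linalg.norm(np.linalg.inv(T_est) @ T_ref - np.eye(4))

def global_error(poses, relative_measurements, closed_loop_poses, rtk_poses):
    """poses:                 list of current 4x4 key-frame pose estimates
       relative_measurements: dict i -> measured relative pose between frames i and i+1
       closed_loop_poses:     dict i -> closed-loop detection pose of frame i
       rtk_poses:             dict i -> pose built from the RTK position of frame i"""
    error1 = sum(se3_residual(np.linalg.inv(poses[i]) @ poses[i + 1], T_rel)
                 for i, T_rel in relative_measurements.items())   # relative pose constraints
    error2 = sum(se3_residual(poses[i], T_close)
                 for i, T_close in closed_loop_poses.items())     # closed-loop constraints
    error3 = sum(se3_residual(poses[i], T_rtk)
                 for i, T_rtk in rtk_poses.items())               # RTK position constraints
    return error1 + error2 + error3
```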
The self-moving device can also add the closed-loop detection pose constraint factor into the next sliding window, so that it is included the next time a local error equation is constructed in the sliding window.
And S170, performing global nonlinear optimization on the pose in the target feature map layer according to a global error equation to obtain a target global map.
Specifically, the self-moving device can perform global nonlinear optimization on the pose in the target feature map layer according to a global error equation to obtain a global map.
The global nonlinear optimization may be that the self-moving device performs nonlinear optimization on the poses of all target key frames in the target feature map layer except the first frame key frame according to a global error equation.
The self-moving device can also perform global nonlinear optimization on the map feature points in the target feature map layer, so that the whole map is more consistent, more reasonable and closer to the real scene.
It will be appreciated by those skilled in the art that the above embodiments are exemplary and not intended to limit the present application. Where possible, the order of execution of one or more of the above steps may be modified, or selectively combined, to provide one or more other embodiments. The skilled person can select any combination of the above steps according to the needs, and all that does not depart from the essence of the scheme of the present application falls into the protection scope of the present application.
Fig. 5 is a schematic diagram of a global map constructed by the map construction method provided in the embodiment of the present application, and as shown in fig. 5, the global map may further include a target track layer and a boundary layer in addition to a target feature map layer.
The target track layer is used for determining a working track range in the moving process of the self-moving equipment, and the boundary layer is used for distinguishing the working range from the non-working range and planning the working track range for the self-moving equipment in the working range.
Fig. 6 is a flowchart of a target track layer generation process provided in the embodiment of the present application, and as shown in fig. 6, when the global map further includes a target track layer, the process may include the following steps:
and S210, acquiring an initial track layer.
Specifically, a blank layer can be newly built, and the initial pose of the mobile device before moving is projected to the blank layer to obtain an initial track layer.
And S220, adding the optimized pose of each target key frame into the initial track layer to obtain a target track layer.
Specifically, after the self-moving device starts moving, the pose optimized by each target key frame can be added to the initial track layer to obtain a target track layer, and the target track layer comprises a track formed by coordinates corresponding to the pose optimized by each target key frame, so that the self-moving device can only work within a track range during operation.
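A minimal sketch of the target-track-layer construction described above (pose layout and names are assumptions):

```python
def build_target_track_layer(initial_pose, optimized_keyframe_poses):
    """Project the initial pose into a blank layer, then append the (x, y)
    coordinate of every optimized target key-frame pose, yielding the working
    track of the self-moving device."""
    track_layer = [tuple(initial_pose[:2])]
    for pose in optimized_keyframe_poses:     # each pose assumed as (x, y, yaw)
        track_layer.append((pose[0], pose[1]))
    return track_layer
```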
Fig. 7 is a schematic flowchart of a boundary layer generation process provided in an embodiment of the present application, and as shown in fig. 7, when the global map further includes a boundary layer, the process may include the following steps:
s310, obtaining boundary information in the color image, and determining a depth value corresponding to the boundary information from a depth image corresponding to the color image.
Specifically, the RGB images (i.e., color images) and depth images captured by the RGB-D camera of the self-moving device may be acquired while the fused positioning is performed.
Boundary information may then be extracted from the RGB image; the boundary information is used to determine the working range and the non-working range, and may include a set of visible points representing the outermost boundary and the shadow boundary as observed from the self-moving device, as well as a set of points on the intersection of an occluding object and the background. The self-moving device may determine the depth value corresponding to each of these points from the depth image corresponding to the RGB image.
And S320, acquiring the coordinates of the boundary information in the first coordinate system of the mobile device according to the boundary information and the depth value corresponding to the boundary information.
The self-mobile device may obtain coordinates of each point in the boundary information in a camera coordinate system (i.e., a first coordinate system) of the self-mobile device according to a depth value corresponding to each point in the boundary information.
And S330, determining the coordinates of the boundary information in a second coordinate system according to the acquired pose of the mobile equipment in the moving process and the coordinates of the boundary information in the first coordinate system, so that the coordinates of the boundary information in the second coordinate system are used as the coordinate information of the boundary layer.
The self-moving equipment can determine the coordinates of each point in the boundary information under an RTK base station coordinate system (namely a second coordinate system) according to the acquired pose of the self-moving equipment and the coordinates of each point in the boundary information under a camera coordinate system, the coordinates of the points are added into the boundary layer, then the points added into the boundary layer are connected to form a working boundary, and when the self-moving equipment works, a route can be planned only in the range contained by the working boundary so as to ensure that the self-moving equipment does not exceed the working boundary.
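The boundary-layer construction can be summarized as lifting boundary pixels with their depth values into the camera frame and transforming them into the RTK base-station frame; the following sketch assumes pinhole intrinsics K and a 4x4 camera-to-base transform derived from the current pose:

```python
import numpy as np

def boundary_points_to_base_frame(boundary_pixels, depth_image, K, T_base_cam):
    """Lift each boundary pixel of the color image to camera coordinates using
    its depth value (first coordinate system), then transform it into the RTK
    base-station frame (second coordinate system) with the camera pose."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    points_base = []
    for (u, v) in boundary_pixels:
        z = float(depth_image[v, u])
        if z <= 0:                 # skip invalid depth readings
            continue
        p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
        points_base.append((T_base_cam @ p_cam)[:3])
    return points_base             # coordinate information of the boundary layer
```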
In another embodiment of the present application, the self-moving device may further perform positioning initialization before acquiring the first data during its movement.
Fig. 8 is a schematic flowchart of a positioning initialization process provided in an embodiment of the present application, and as shown in fig. 8, the process may include the following steps:
and S410, controlling the mobile equipment to move from the initial position along the preset direction for a preset distance and then stopping.
Specifically, the self-moving device may be controlled to move in a straight line for a set distance, for example 0.5 meters. During the movement, RTK position coordinates can be collected at fixed time intervals, a straight line is fitted to these RTK position coordinates by least squares, and the azimuth of the fitted line is taken as the initial yaw angle of the self-moving device.
And S420, taking the roll angle, the pitch angle and the position information of the self-moving equipment when the self-moving equipment stops as the initial pose of the self-moving equipment.
Specifically, the roll angle and the pitch angle in the attitude of the self-moving device acquired by the IMU at the moment when the self-moving device stops moving may be used as the initial roll angle and the pitch angle of the self-moving device.
And taking the coordinate acquired by the RTK at the moment when the self-moving equipment stops moving as the initial coordinate of the self-moving equipment, namely the first position information of the self-moving equipment.
And determining the initial pose of the self-moving equipment according to the initial roll angle, the pitch angle and the initial coordinate of the self-moving equipment.
And S430, transforming the initial pose to a second coordinate system where feature points in the initial visual image collected after the mobile equipment stops are located, so that the initial pose corresponds to coordinates in the second coordinate system one by one.
Specifically, the feature points in the visual image acquired after the mobile device stops may be transformed into coordinates in the RTK base station coordinate system through the transformation of the initial pose.
And S440, obtaining a feature map layer according to the coordinates of the feature points in the initial visual image in the second coordinate system.
Specifically, the corresponding feature points may be projected into the feature map layer according to the transformed coordinates of the RTK base station coordinate system to obtain the feature map layer, and subsequently, the pose in the target key frame is optimized based on the initial pose of the self-moving device.
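A minimal sketch of the positioning initialization, assuming the RTK samples are collected as (x, y) pairs while the device moves the preset distance; the least-squares line fit below ignores the degenerate vertical-line case and the 180-degree direction ambiguity of the fitted azimuth:

```python
import math
import numpy as np

def initialize_pose(rtk_xy_samples, roll_imu, pitch_imu):
    """Fit a straight line to the RTK positions collected while the device
    moves the preset distance, take the line's azimuth as the initial yaw,
    and combine it with the IMU roll and pitch and the last RTK coordinate
    (where the device stops) to form the initial pose."""
    pts = np.asarray(rtk_xy_samples, dtype=float)
    # least-squares line fit y = a*x + b; azimuth of the fitted direction is the yaw
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    yaw0 = math.atan2(a, 1.0)
    x0, y0 = pts[-1]               # coordinates when the device stops
    return {"roll": roll_imu, "pitch": pitch_imu, "yaw": yaw0, "x": x0, "y": y0}
```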
According to the map construction scheme described above, an initial pose and visual images are obtained during movement of the self-moving device, a local error equation is determined according to the initial pose and the visual images, target key frames are extracted from the visual images, and local nonlinear optimization is performed on the target key frames using the local error equation to obtain their optimized poses. The optimized pose of each target key frame is then added to the target feature map layer, a global error equation is determined according to the optimized poses, and global nonlinear optimization is performed on the poses in the target feature map layer according to the global error equation to obtain the global map. By combining the local error equation and the global error equation generated from the visual images and the pose data, the scheme optimizes the pose of the self-moving device twice, reducing both the local error of the key-frame poses during movement and the global error of the poses in the whole target feature map layer, thereby improving map construction accuracy; the self-moving device can therefore still construct an accurate map in scenes where the data collected by some sensors are poor, such as sparse feature points, similar scenes, poor real-time kinematic signals or wheel slip.
Based on the same inventive concept, the embodiment of the application also provides a map construction device. Fig. 9 is a schematic structural diagram of a map building apparatus provided in an embodiment of the present application, and as shown in fig. 9, the map building apparatus provided in the embodiment of the present application may include:
the acquisition module 11, configured to acquire an initial pose and a visual image during movement of a self-moving device;
the extraction module 12, configured to acquire a plurality of target key frames from the visual image;
the local error determination module 13, configured to determine a local error equation according to the initial pose and the visual image;
the local optimization module 14, configured to perform local nonlinear optimization on the target key frames using the local error equation to obtain the optimized pose of the self-moving device at each target key frame;
the adding module 15, configured to add the optimized pose of each target key frame to a feature map layer to obtain a target feature map layer;
the global error determination module 16, configured to determine a global error equation according to the optimized poses;
the global optimization module 17, configured to perform global nonlinear optimization on the poses in the target feature map layer according to the global error equation to obtain a global map.
As an optional implementation manner, the local error determination module 13 is specifically configured to:
determining a predicted pose of the self-moving device in the target key frame according to the initial pose, and determining a pose measurement error factor according to the predicted pose;
extracting visual image characteristic points of the target key frame;
determining a reprojection error factor according to the predicted pose and the visual image feature points;
determining a real-time dynamic measurement error factor according to the real-time position information of the self-moving equipment;
determining a marginalization factor according to the feature points and the real-time position information;
and determining the local error equation according to the pose measurement error factor, the reprojection error factor, the real-time dynamic measurement error factor and the marginalization factor.
As an optional implementation, the local error determination module 13 is further configured to:
determining coordinates of visual image feature points of the target key frame in the feature map layer according to the predicted pose;
determining target feature points matched with the visual image feature points in the feature map layer according to the coordinates of the visual image feature points in the feature map layer;
converting the target feature points into the visual image according to the predicted pose to obtain matched feature points;
and determining a reprojection error factor according to the matched feature points and the visual image feature points.
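As an illustration of the steps above, the sketch below projects a matched feature point from the feature map layer into the current key frame using the predicted pose and compares it with the observed visual image feature point; the pinhole camera model and the argument shapes are assumptions of this sketch.

```python
import numpy as np

def reprojection_residual(map_point_w, observed_px, R_cw, t_cw, K):
    """Residual of the reprojection error factor for one matched feature point.
    map_point_w: matched target feature point in the feature map layer, shape (3,)
    observed_px: observed visual image feature point, shape (2,)
    R_cw, t_cw : predicted pose (world -> camera), shapes (3, 3) and (3,)
    K          : camera intrinsic matrix, shape (3, 3)"""
    p_c = R_cw @ map_point_w + t_cw   # map point expressed in camera coordinates
    u = K @ (p_c / p_c[2])            # pinhole projection to homogeneous pixel coordinates
    return u[:2] - observed_px        # pixel-space reprojection error
```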
As an alternative embodiment, the global error determination module 16 is configured to:
determining a relative pose constraint factor of the pose optimized by the target key frame according to the pose corresponding to each target key frame and the inverse of the pose corresponding to each target key frame;
obtaining a closed loop detection pose constraint factor according to the inverse of the pose corresponding to each target key frame and the closed loop detection pose corresponding to each target key frame;
determining a real-time dynamic measurement position constraint factor according to the inverse of the corresponding pose of each target key frame and the real-time dynamic measurement position of each target key frame;
and determining a global error equation according to the relative pose constraint factor, the closed loop detection pose constraint factor and the real-time dynamic measurement position constraint factor.
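The following sketch illustrates, under stated assumptions, how the three constraint factors above could be accumulated into a global error value. Poses are represented as 4x4 homogeneous matrices, `se3_log` is an assumed helper mapping a relative transform to a 6-vector error, and the real-time dynamic measurement constraint is simplified to a position difference.

```python
import numpy as np

def global_error(poses, rel_measurements, loop_measurements, rtk_positions, se3_log):
    """poses: dict key-frame id -> 4x4 pose; rel/loop_measurements: dict (i, j) -> 4x4
    measured relative transform; rtk_positions: dict id -> 3-vector position."""
    err = 0.0
    # Relative pose constraint factor: inverse(pose_i) * pose_j vs. measured relative pose.
    for (i, j), T_ij in rel_measurements.items():
        r = se3_log(np.linalg.inv(T_ij) @ (np.linalg.inv(poses[i]) @ poses[j]))
        err += float(r @ r)
    # Closed-loop detection pose constraint factor: same form, between revisited key frames.
    for (i, j), T_loop in loop_measurements.items():
        r = se3_log(np.linalg.inv(T_loop) @ (np.linalg.inv(poses[i]) @ poses[j]))
        err += float(r @ r)
    # Real-time dynamic measurement position constraint factor on each key frame.
    for i, p_rtk in rtk_positions.items():
        err += float(np.sum((poses[i][:3, 3] - p_rtk) ** 2))
    return err
```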
As an optional implementation, the global map further includes at least one of a target track layer for determining a working track range during movement of the self-moving device and a boundary layer for distinguishing a working range from a non-working range and for planning the working track range for the self-moving device within the working range.
As an optional embodiment, when the global map includes the target track layer, the adding module 15 is further configured to:
acquiring an initial track layer;
and adding the optimized pose of each target key frame into the initial track layer to obtain the target track layer.
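A minimal sketch of this step is given below: the optimized position of each target key frame is appended to the initial track layer, and a working track range is derived from it. Using an axis-aligned bounding box for the working track range is only one possible definition and is an assumption of this sketch.

```python
import numpy as np

def update_track_layer(track_layer, optimized_poses):
    """Append each target key frame's optimized position (t of an (R, t) pair)
    to the initial track layer and derive an assumed working track range."""
    for _, t in optimized_poses:
        track_layer.append(np.asarray(t, dtype=float))      # optimized pose -> track layer
    pts = np.asarray(track_layer)
    track_range = (pts.min(axis=0), pts.max(axis=0))         # assumed AABB working track range
    return track_layer, track_range
```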
As an optional implementation manner, when the global map includes a boundary layer, the visual image includes a color image and a depth image corresponding to the color image, and the map building apparatus is further configured to:
acquiring boundary information in the color image, and determining a depth value corresponding to the boundary information from a depth image corresponding to the color image, wherein the boundary information is used for determining the working range and the non-working range;
acquiring coordinates of the boundary information in a first coordinate system of the mobile device according to the boundary information and the depth value corresponding to the boundary information;
and determining the coordinates of the boundary information in a second coordinate system according to the acquired pose of the self-moving equipment in the moving process and the coordinates of the boundary information in the first coordinate system, so that the coordinates of the boundary information in the second coordinate system are used as the coordinate information of the boundary layer.
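For illustration only, the sketch below back-projects boundary pixels from the color image into the first (device) coordinate system using their depth values, then transforms them into the second (world) coordinate system with the device pose to obtain the boundary layer coordinates. The pinhole model, integer pixel coordinates, and argument names are assumptions of this sketch.

```python
import numpy as np

def boundary_to_world(boundary_px, depth_map, K, R_wc, t_wc):
    """boundary_px: iterable of (u, v) integer pixel coordinates of the boundary;
    depth_map: depth image aligned with the color image; K: camera intrinsics;
    R_wc, t_wc: device pose mapping the first coordinate system to the second."""
    K_inv = np.linalg.inv(K)
    world_pts = []
    for (u, v) in boundary_px:
        z = depth_map[v, u]                              # depth value of the boundary pixel
        if z <= 0:                                       # skip invalid depth readings
            continue
        p_dev = z * (K_inv @ np.array([u, v, 1.0]))      # first (device) coordinate system
        world_pts.append(R_wc @ p_dev + t_wc)            # second (world) coordinate system
    return np.asarray(world_pts)                         # coordinate information of the boundary layer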
As an optional implementation manner, before the obtaining of the initial pose and the visual image during the moving process of the mobile device, the map building apparatus is further configured to:
controlling the self-moving equipment to move from an initial position along a preset direction for a preset distance and then stop;
taking the roll angle, the pitch angle and the position information of the self-moving equipment when the self-moving equipment stops as the initial pose of the self-moving equipment;
transforming the initial pose to a second coordinate system where feature points in an initial visual image acquired after the mobile equipment stops are located, so that the initial pose corresponds to coordinates in the second coordinate system one to one;
and obtaining a feature map layer according to the coordinates of the feature points in the initial visual image in the second coordinate system.
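As one possible reading of the initialization described above, the sketch below builds the initial pose from the roll angle, pitch angle, and position recorded when the device stops, and expresses the feature points of the initial visual image in the second coordinate system to seed the feature map layer. The yaw-zero assumption, the rotation convention, and the assumption that the feature points are already available as 3-D points in the device frame are all assumptions of this sketch.

```python
import numpy as np

def init_feature_layer(roll, pitch, position, feature_pts_device):
    """roll, pitch in radians; position as a 3-vector; feature_pts_device as an
    iterable of 3-D feature points of the initial visual image in the device frame."""
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about the x axis
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about the y axis
    R = Ry @ Rx                                             # initial orientation (yaw assumed 0)
    t = np.asarray(position, dtype=float)                   # initial position
    # Express the initial visual image's feature points in the second coordinate
    # system and use them to seed the feature map layer.
    feature_layer = [R @ np.asarray(p, dtype=float) + t for p in feature_pts_device]
    return {"initial_pose": (R, t), "feature_points": feature_layer}
```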
The map building apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device. Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 10, the electronic device provided in this embodiment includes a memory 210 and a processor 220; the memory 210 is configured to store a computer program, and the processor 220 is configured to perform the method described in the above method embodiments when invoking the computer program.
The electronic device provided by this embodiment may perform the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method described in the above method embodiments.
An embodiment of the present application further provides a computer program product which, when run on an electronic device, causes the electronic device to implement the method described in the above method embodiments.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted via a computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium may include: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
The naming or numbering of the steps appearing in the present application does not mean that the steps in the method flow have to be executed in the chronological/logical order indicated by the naming or numbering, and the named or numbered process steps may be executed in a modified order depending on the technical purpose to be achieved, as long as the same or similar technical effects are achieved.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the apparatus/device embodiments described above are merely illustrative; the division of the modules or units is only one kind of logical division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the description of the present application, a "/" indicates a relationship in which the objects associated before and after are an "or", for example, a/B may indicate a or B; in the present application, "and/or" is only an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural.
Also, in the description of the present application, "a plurality" means two or more than two unless otherwise specified. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be implemented in other sequences than those illustrated or described herein.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, described with reference to "one embodiment" or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A map construction method, comprising:
acquiring an initial pose and a visual image in the moving process of the mobile equipment;
acquiring a plurality of target key frames from the visual image;
determining a local error equation according to the initial pose and the visual image;
performing local nonlinear optimization on the target key frame by using the local error equation to obtain the pose of the self-moving equipment after the target key frame is optimized;
adding the optimized pose of each target key frame into a feature map layer to obtain a target feature map layer;
determining a global error equation according to the optimized pose;
and performing global nonlinear optimization on the pose in the target feature map layer according to the global error equation to obtain a target global map.
2. The method of claim 1, wherein determining a local error equation from the initial pose and the visual image comprises:
acquiring a predicted pose of the self-moving equipment in the target key frame according to the initial pose, and determining a pose measurement error factor according to the predicted pose;
extracting visual image feature points of the target key frame;
determining a reprojection error factor according to the predicted pose and the visual image feature points;
determining a real-time dynamic measurement error factor according to the real-time position information of the self-moving equipment;
determining a marginalization factor according to the visual image feature points and the real-time position information;
and determining a local error equation according to the pose measurement error factor, the reprojection error factor, the real-time dynamic measurement error factor and the marginalization factor.
3. The method of claim 2, wherein determining a reprojection error factor based on the predicted pose and the visual image feature points comprises:
determining coordinates of the visual image feature points of the target key frame in a feature map layer according to the predicted pose;
determining target feature points matched with the visual image feature points in the feature map layer according to the coordinates of the visual image feature points in the feature map layer;
converting the target feature points into the visual image according to the predicted pose to obtain matched feature points;
and determining a reprojection error factor according to the matched feature points and the visual image feature points.
4. The method of claim 1, wherein determining a global error equation from the optimized pose comprises:
determining a relative pose constraint factor of the pose optimized by the target key frame according to the pose corresponding to each target key frame and the inverse of the pose corresponding to each target key frame;
obtaining a closed loop detection pose constraint factor according to the inverse of the pose corresponding to each target key frame and the closed loop detection pose corresponding to each target key frame;
determining a real-time dynamic measurement position constraint factor according to the inverse of the corresponding pose of each target key frame and the real-time dynamic measurement position of each target key frame;
and determining a global error equation according to the relative pose constraint factor, the closed loop detection pose constraint factor and the real-time dynamic measurement position constraint factor.
5. The method according to any one of claims 1-4, wherein the global map further comprises at least one of a target track layer for determining a working track range during movement of the self-moving equipment and a boundary layer for distinguishing a working range from a non-working range and for planning the working track range for the self-moving equipment within the working range.
6. The method of claim 5, wherein when the global map includes the target trajectory layer, the method further comprises:
acquiring an initial track layer;
and adding the optimized pose of each target key frame into the initial track layer to obtain the target track layer.
7. The method of claim 5, wherein the visual image comprises a color image and a depth image corresponding to the color image, and wherein when the global map comprises a boundary layer, the method further comprises:
acquiring boundary information in the color image, and determining a depth value corresponding to the boundary information from a depth image corresponding to the color image; wherein the boundary information is used for determining the working range and the non-working range;
acquiring coordinates of the boundary information in a first coordinate system of the mobile device according to the boundary information and the depth value corresponding to the boundary information;
and determining the coordinates of the boundary information in a second coordinate system according to the acquired pose of the self-moving equipment in the moving process and the coordinates of the boundary information in the first coordinate system, so that the coordinates of the boundary information in the second coordinate system are used as the coordinate information of the boundary layer.
8. The method according to any one of claims 1-4, wherein before the acquiring of the initial pose and the visual image in the moving process of the mobile equipment, the method further comprises:
controlling the self-moving equipment to move from an initial position along a preset direction for a preset distance and then stop;
taking the roll angle, the pitch angle and the position information of the self-moving equipment when the self-moving equipment stops as the initial pose of the self-moving equipment;
transforming the initial pose to a second coordinate system where feature points in an initial visual image acquired after the mobile equipment stops are located, so that the initial pose corresponds to coordinates in the second coordinate system one to one;
and obtaining a feature map layer according to the coordinates of the feature points in the initial visual image in the second coordinate system.
9. An electronic device, comprising: a memory and a processor, the memory being configured to store a computer program; the processor being configured to perform the method according to any one of claims 1-8 when invoking the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202210174502.0A 2022-02-24 2022-02-24 Map construction method, device and equipment Pending CN114627253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210174502.0A CN114627253A (en) 2022-02-24 2022-02-24 Map construction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210174502.0A CN114627253A (en) 2022-02-24 2022-02-24 Map construction method, device and equipment

Publications (1)

Publication Number Publication Date
CN114627253A true CN114627253A (en) 2022-06-14

Family

ID=81900245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210174502.0A Pending CN114627253A (en) 2022-02-24 2022-02-24 Map construction method, device and equipment

Country Status (1)

Country Link
CN (1) CN114627253A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116518961A (en) * 2023-06-29 2023-08-01 煤炭科学研究总院有限公司 Method and device for determining global pose of large-scale fixed vision sensor
CN116518961B (en) * 2023-06-29 2023-09-01 煤炭科学研究总院有限公司 Method and device for determining global pose of large-scale fixed vision sensor

Similar Documents

Publication Publication Date Title
Dai et al. Rgb-d slam in dynamic environments using point correlations
Zou et al. StructVIO: Visual-inertial odometry with structural regularity of man-made environments
CN112567201B (en) Distance measuring method and device
US11127203B2 (en) Leveraging crowdsourced data for localization and mapping within an environment
Panahandeh et al. Vision-aided inertial navigation based on ground plane feature detection
Liu et al. Stereo visual-inertial odometry with multiple Kalman filters ensemble
Sola et al. Fusing monocular information in multicamera SLAM
WO2018081348A1 (en) Vision-inertial navigation with variable contrast tracking residual
CN112740274A (en) System and method for VSLAM scale estimation on robotic devices using optical flow sensors
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
CN110533719B (en) Augmented reality positioning method and device based on environment visual feature point identification technology
Fang et al. Multi-sensor based real-time 6-DoF pose tracking for wearable augmented reality
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
CN114088087A (en) High-reliability high-precision navigation positioning method and system under unmanned aerial vehicle GPS-DENIED
Tang et al. LE-VINS: A robust solid-state-LiDAR-enhanced visual-inertial navigation system for low-speed robots
CN114627253A (en) Map construction method, device and equipment
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN112731503B (en) Pose estimation method and system based on front end tight coupling
Panahandeh et al. Vision-aided inertial navigation using planar terrain features
Acharya et al. Modelling uncertainty of single image indoor localisation using a 3D model and deep learning
Qayyum et al. Inertial-kinect fusion for outdoor 3d navigation
Alliez et al. Indoor localization and mapping: Towards tracking resilience through a multi-slam approach
CN113375665B (en) Unmanned aerial vehicle pose estimation method based on multi-sensor elastic coupling
Hernández et al. Visual SLAM with oriented landmarks and partial odometry
Teixeira et al. Indoor localization using slam in parallel with a natural marker detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination