CN116202511B - Method and device for determining pose of mobile equipment under long roadway ultra-wideband one-dimensional constraint


Info

Publication number
CN116202511B
Authority
CN
China
Prior art keywords
data
ultra
wideband
laser
pose
Prior art date
Legal status
Active
Application number
CN202310500784.3A
Other languages
Chinese (zh)
Other versions
CN116202511A (en)
Inventor
李和平
程健
李�昊
孙大智
王广福
闫鹏鹏
修海鑫
杨国奇
Current Assignee
Beijing Technology Research Branch Of Tiandi Technology Co ltd
General Coal Research Institute Co Ltd
Original Assignee
Beijing Technology Research Branch Of Tiandi Technology Co ltd
General Coal Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Technology Research Branch Of Tiandi Technology Co ltd, General Coal Research Institute Co Ltd
Priority to CN202310500784.3A
Publication of CN116202511A
Application granted
Publication of CN116202511B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/165 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C21/1652 Inertial navigation combined with non-inertial navigation instruments, with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656 Inertial navigation combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Abstract

The present disclosure provides a method and a device for determining the pose of mobile equipment under the one-dimensional ultra-wideband constraint of a long roadway, comprising: acquiring first data collected by a camera, second data collected by a laser and third data collected by ultra-wideband in a roadway scene; calculating a global pose of the mobile equipment according to the first data and the second data; acquiring target ultra-wideband position data according to the global pose and forming an ultra-wideband distance-position data set; calculating the global positions of the ultra-wideband base stations according to the ultra-wideband distance-position data set; and collecting camera data, laser data and inertial navigation data within a local time window and calculating a global initial pose and a local sparse map of the mobile equipment through a fused odometer, so that the pose of the mobile equipment is optimized by combining the ultra-wideband data, the global initial pose, the global positions of the ultra-wideband base stations, the local sparse map and an objective function. In this way, laser and camera data are combined with ultra-wideband data to accurately calculate the pose of mobile equipment in an underground long roadway of a coal mine.

Description

Method and device for determining pose of mobile equipment under long roadway ultra-wideband one-dimensional constraint
Technical Field
The disclosure relates to the technical field of autonomous positioning of mobile equipment, and in particular to a method and a device for determining the pose of mobile equipment under the one-dimensional ultra-wideband constraint of a long roadway.
Background
Global position constraints are important in autonomous positioning of mobile devices. However, satellite signals cannot be received underground in a coal mine, so the global position of the mobile device cannot be obtained through a global navigation satellite system. Ultra-wideband signals have strong penetrating power and are well suited to indoor and underground positioning. The underground roadway space of a coal mine is narrow and shaped like a long tunnel.
To save cost, ultra-wideband base stations are currently deployed at intervals along the direction of the roadway in coal mines. Under a narrow, long roadway, such sparse ultra-wideband base stations can only form a one-dimensional constraint, which makes it difficult to calculate the position of the mobile equipment accurately.
Disclosure of Invention
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
An embodiment of a first aspect of the present disclosure provides a method for determining a pose of a mobile device under one-dimensional constraint of ultra wideband of a long roadway, including:
acquiring first data acquired by a camera, second data acquired by a laser and third data acquired by ultra-wideband under a roadway scene;
Calculating the global pose of the mobile equipment according to the first data and the second data;
acquiring target ultra-wideband position data according to the global pose, and forming an ultra-wideband distance position data set;
calculating the global position of the ultra-wideband base station according to the ultra-wideband distance position data set;
collecting camera data, laser data and inertial navigation data in a local time window to perform fusion odometer calculation so as to obtain a global initial pose and a local sparse map of the mobile equipment;
and updating the pose of the mobile equipment according to the currently acquired ultra-wideband data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map and the objective function.
An embodiment of a second aspect of the present disclosure provides a mobile equipment pose determining device under long roadway ultra-wideband one-dimensional constraint, including:
the first acquisition module is used for acquiring first data acquired by a camera, second data acquired by a laser and third data acquired by ultra-wideband in a roadway scene;
a first calculation module for calculating a global pose of the mobile equipment according to the first data and the second data;
the second acquisition module is used for acquiring target ultra-wideband position data according to the global pose and forming an ultra-wideband distance position data set;
The second calculation module is used for calculating the global position of the ultra-wideband base station according to the ultra-wideband distance position data set;
the third calculation module is used for collecting camera data, laser data and inertial navigation data in a local time window to perform fusion odometer calculation so as to obtain a global initial pose and a local sparse map of the mobile equipment;
and the updating module is used for updating the pose of the mobile equipment according to the currently acquired ultra-wideband data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map and the objective function.
An embodiment of a third aspect of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method for determining the pose of mobile equipment under the long-roadway ultra-wideband one-dimensional constraint provided by the embodiment of the first aspect of the present disclosure.
An embodiment of a fourth aspect of the present disclosure proposes a non-transitory computer readable storage medium storing a computer program which, when executed by a processor, implements a method for determining a pose of a mobile device under long roadway ultra-wideband one-dimensional constraints as proposed by an embodiment of the first aspect of the present disclosure.
The method and the device for determining the pose of the mobile equipment under the long roadway ultra-wideband one-dimensional constraint have the following beneficial effects:
in the embodiment of the disclosure, the device firstly acquires first data acquired by a camera, second data acquired by a laser and third data acquired by ultra-wideband under a roadway scene, then calculates the global pose of mobile equipment according to the first data and the second data, then acquires target ultra-wideband position data according to the global pose, forms an ultra-wideband distance position data set, calculates the global position of an ultra-wideband base station according to the ultra-wideband distance position data set, acquires camera data, laser data and inertial navigation data in a local time window, and calculates the global initial pose and a local sparse map of the mobile equipment by fusing an odometer, and further optimizes the pose of the mobile equipment by combining the ultra-wideband data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map and a target function. Therefore, laser and camera can be combined with ultra-wideband data, and the pose of underground coal mine mobile equipment can be automatically and accurately calculated through data dense closed-loop acquisition, global pose off-line mixed calibration of an acquisition device, ultra-wideband receiving end position estimation and ultra-wideband base station global position calculation.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of a method for determining a pose of a mobile device under one-dimensional constraint of ultra wideband of a long roadway provided by an embodiment of the present disclosure;
fig. 2 is a block diagram of a mobile equipment pose determining device under a one-dimensional constraint of ultra wideband in a long roadway according to an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
The following describes a mobile equipment pose determining method, a device, a computer device and a storage medium under a long roadway ultra-wideband one-dimensional constraint according to an embodiment of the present disclosure with reference to the accompanying drawings.
It should be noted that the execution body of the method for determining the pose of mobile equipment under the long-roadway ultra-wideband one-dimensional constraint in this embodiment of the present disclosure is a device for determining the pose of mobile equipment under the long-roadway ultra-wideband one-dimensional constraint. The device may be implemented by software and/or hardware and may be configured in any electronic device. The method will be described below with this device as the execution subject, which is not limiting.
Fig. 1 is a flow chart of a method for determining a pose of a mobile device under a long roadway ultra-wideband one-dimensional constraint provided by an embodiment of the present disclosure.
As shown in fig. 1, the method for determining the pose of the mobile equipment under the one-dimensional constraint of the ultra wideband of the long roadway can comprise the following steps:
step 101, acquiring first data acquired by a camera, second data acquired by a laser and third data acquired by ultra-wideband under a roadway scene.
The current roadway scene is an underground coal mine roadway space, which is narrow and shaped like a long tunnel.
The laser may be a lidar sensor, and the camera, i.e., the vision sensor, collects image data as the first data. The second data may be laser point cloud data.
It should be noted that the camera and the laser are both fixedly mounted on the mobile equipment with a known spatial relationship. The mobile equipment may be a shearer loader, a coal plough, a curved scraper conveyor, a self-moving hydraulic support, a bridge conveyor, a telescopic belt conveyor, and the like, without limitation.
The first data collected by the camera, the second data collected by the laser and the third data collected by the ultra-wideband are closed-loop data obtained in a closed-loop mode in a roadway scene.
In order to ensure the effectiveness of the global position calculation for the base stations, under sufficient supplementary lighting, vision, laser and ultra-wideband data can be densely collected by a handheld or vehicle-mounted combined camera, laser and ultra-wideband receiver device, and the collected data form a closed loop (i.e., the start and end positions of the acquisition coincide).
A plurality of ultra-wideband base stations are deployed in the underground coal mine roadway, each with corresponding position coordinates. The mobile equipment carries a receiver capable of receiving ultra-wideband signals and returning signals to the surrounding base stations. From the time differences of ultra-wideband signal transmission and the Doppler effect, the distance and speed of the mobile equipment with respect to each base station can be calculated.
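The following minimal sketch illustrates how such a distance and radial speed could be derived; the two-way-ranging protocol, function names and numbers are illustrative assumptions, not details taken from the patent:

```python
# Hypothetical sketch: the patent states only that distance and speed
# relative to each base station follow from signal time differences and
# the Doppler effect; the protocol details here are illustrative.
C = 299_792_458.0  # speed of light in m/s


def uwb_range(t_round: float, t_reply: float) -> float:
    """Distance from half the net round-trip flight time (two-way ranging)."""
    return C * (t_round - t_reply) / 2.0


def radial_speed(f_received: float, f_carrier: float) -> float:
    """Radial speed of the receiver from the Doppler shift of the carrier."""
    return C * (f_received - f_carrier) / f_carrier


# a net flight time of about 66.7 ns corresponds to roughly 10 m
print(uwb_range(t_round=9.67e-8, t_reply=3.0e-8))
```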
Step 102, calculating the global pose of the mobile equipment according to the first data and the second data.
Optionally, before calculating the global pose of the mobile equipment from the first data and the second data, the following steps may be performed:
extracting and matching characteristic points of the first data of each frame and the second data of each frame to obtain an epipolar geometry map;
calculating rotational components of the camera and the laser in a world coordinate system based on the epipolar geometry map, respectively;
determining corresponding position components based on rotational components of the camera and the laser in a world coordinate system, respectively;
determining a first initial pose corresponding to the camera based on the rotational component and the positional component corresponding to the camera;
and determining a second initial pose corresponding to the laser based on the rotation component and the position component corresponding to the laser.
Specifically, image feature points may first be extracted, for example with a feature point algorithm such as SIFT, SURF or ORB, computing a descriptor for each feature point. Image feature points may then be matched between frames with a feature-based matching algorithm such as FLANN or KNN to obtain feature point correspondences.
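A minimal sketch of this extraction-and-matching step, assuming OpenCV's ORB detector and a FLANN matcher with an LSH index; the file names and the 0.75 ratio-test threshold are illustrative, not values from the patent:

```python
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# FLANN with an LSH index, suitable for ORB's binary descriptors
flann = cv2.FlannBasedMatcher(
    dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
    dict(checks=50),
)
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive correspondences
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
```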
Point cloud feature extraction may use point-cloud-based algorithms such as FPFH or SHOT to extract feature points from the collected laser point cloud data and compute feature descriptors. Point cloud feature matching may likewise use a feature-based matching algorithm such as FLANN or KNN to match point cloud feature points between frames and obtain feature point correspondences.
Alternatively, a RANSAC algorithm may be used to filter the feature point correspondences and remove mismatched points. When the epipolar geometry map is constructed, the epipolar geometric relationship between camera poses and lidar poses can be calculated from the feature point correspondences. When the global map is constructed, the epipolar geometry map can be used to register and fuse the laser point cloud data, thereby building the global map and improving positioning and navigation accuracy and robustness.
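Continuing the matching sketch above, the RANSAC filtering and the recovery of a relative rotation could look as follows; the intrinsic matrix K is an assumed placeholder from a prior camera calibration, and kp1, kp2 and good carry over from the previous snippet:

```python
import cv2
import numpy as np

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # placeholder intrinsics

# RANSAC rejects mismatched pairs while estimating the essential matrix
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
# recoverPose yields the relative rotation between the two frames,
# playing the role of R_ij in formula (1) below
_, R_ij, t_dir, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
# t_dir is only a translation direction; monocular scale is unobservable
```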
The following formula (1) computes the rotation component of the i-th frame image (or laser point cloud) in the world coordinate system, and formula (2) computes the position component of the m-th frame image (or laser point cloud) in the world coordinate system:

$$\{R_i^W\} = \arg\min_{\{R\}} \sum_{(i,j)} \bigl\| R_j^W - R_i^W R_{ij} \bigr\|_F^2 \tag{1}$$

$$t_m^W = \arg\min_{t} \sum_{n} \bigl\| x_{m,n} - \pi\bigl(R_m^W X_{m,n} + t\bigr) \bigr\|^2 \;\;\text{(image data)}, \qquad t_m^W = \arg\min_{t} \sum_{n} \bigl\| P_{m,n} - T\bigl(R_m^W, t;\, p_{m,n}\bigr) \bigr\|^2 \;\;\text{(laser data)} \tag{2}$$

where $R_m^W$ and $t_m^W$ respectively denote the rotation matrix and position vector, in the world coordinate system, of the camera (or laser) corresponding to the m-th frame image (or laser point cloud), and $R_{ij}$ denotes the relative rotation matrix between the i-th frame image and the j-th frame image. For image data, $x_{m,n}$ is the n-th feature point of the m-th frame image, $X_{m,n}$ is the three-dimensional point corresponding to $x_{m,n}$, and $\pi(\cdot)$ denotes the re-projection function. For laser point cloud data, $p_{m,n}$ is the n-th three-dimensional laser point in the local coordinates of the m-th frame, $P_{m,n}$ is the three-dimensional laser point in the world coordinate system corresponding to $p_{m,n}$, and $T(\cdot)$ denotes the spatial coordinate transformation function of the point cloud.
Specifically, for the first initial pose calculation of the camera: a camera pose is typically composed of two components, position and rotation. When the position and rotation components of the camera are known, the first initial pose of the camera can be computed using quaternions or Euler angles. For example, the rotation component represented by Euler angles can be converted into a rotation matrix, and the position and the rotation matrix can then be combined into a transformation matrix, giving the first initial pose of the camera.
Specifically, for the second initial pose calculation of the laser: a laser pose is likewise composed of position and rotation components. When these are known, the second initial pose of the laser can be computed using quaternions or Euler angles; for example, the Euler-angle rotation component can be converted into a rotation matrix and combined with the position into a transformation matrix, giving the second initial pose of the laser. When computing the initial poses of the camera and the laser, their reference frames must be consistent; the camera and laser coordinate systems usually need to be calibrated with tools such as calibration plates to ensure this.
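A small sketch of composing such an initial pose, assuming SciPy's Rotation for the Euler-angle conversion; the angles and positions are illustrative values:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def initial_pose(euler_zyx_deg, position):
    """Build a 4x4 homogeneous transform from Euler angles and a position."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("ZYX", euler_zyx_deg, degrees=True).as_matrix()
    T[:3, 3] = position
    return T


# e.g. a sensor yawed 90 degrees, mounted 1.5 m above the origin
T_cam = initial_pose([90.0, 0.0, 0.0], [0.0, 0.0, 1.5])
```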
Optionally, a set of three-dimensional points commonly visible to the vision and laser sensors, a set of three-dimensional visual reconstruction points, and a set of three-dimensional laser points may be determined from the first data and the second data. The Euclidean transformation error corresponding to each point in the commonly visible set, the visual re-projection error corresponding to each point in the visual reconstruction set, and the iterative closest point matching error corresponding to each point in the laser set are then determined. Based on a preset second objective function, these three classes of errors are minimized so as to correct the first initial pose and the second initial pose.
Optionally, the formula of the second objective function is as follows:

$$\min \;\; \lambda_1 \sum_{k \in V} e_{\pi}(k) \;+\; \lambda_2 \sum_{k \in L} e_{\mathrm{ICP}}(k) \;+\; \lambda_3 \sum_{k \in J} e_{\mathrm{E}}(k)$$

where J denotes the set of three-dimensional points commonly visible to the vision and laser sensors, V denotes the set of three-dimensional visual reconstruction points (excluding the points in J), L denotes the set of three-dimensional laser points (excluding the points in J), $e_{\pi}$ denotes the visual re-projection error, $e_{\mathrm{ICP}}$ denotes the iterative closest point matching error of the three-dimensional laser points, $e_{\mathrm{E}}$ denotes the Euclidean transformation error of the three-dimensional points jointly observed by laser and vision, and $\lambda_1$, $\lambda_2$, $\lambda_3$ denote weight coefficients with $\lambda_1 + \lambda_2 + \lambda_3 = 1$.
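A hedged sketch of evaluating this second objective function; the residual arrays and the weight values are placeholders, chosen only so that the weights sum to one:

```python
import numpy as np


def hybrid_adjustment_cost(e_proj, e_icp, e_joint,
                           weights=(0.4, 0.4, 0.2)):
    """Weighted sum of squared re-projection, ICP and joint-point errors."""
    l1, l2, l3 = weights
    assert abs(l1 + l2 + l3 - 1.0) < 1e-9  # weights sum to one
    return (l1 * np.sum(np.square(e_proj))
            + l2 * np.sum(np.square(e_icp))
            + l3 * np.sum(np.square(e_joint)))
```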
Specifically, pose mixing adjustment can be performed based on the first initial pose of the camera and the second initial pose of the laser, the position of the ultra-wideband receiving end can be determined in combination with the third data, and the global pose of the mobile equipment can then be calculated based on the position of the ultra-wideband receiving end, the first data and the second data.
Step 103, acquiring target ultra-wideband position data according to the global pose, and forming an ultra-wideband distance position data set.
It should be noted that the data acquisition frequencies of the camera, the laser and the ultra-wideband sensor differ. After the global pose of the camera and the laser at each acquisition instant has been obtained through bundle adjustment, interpolation can be performed on the time axis to obtain the position at each ultra-wideband sampling instant, forming the data set:

$$\mathcal{D} = \bigl\{ (p_j,\; d_j^m) \;\bigm|\; j = 1, \dots, N, \;\; m = 1, \dots, M \bigr\}$$

where N is the total number of ultra-wideband samples, M is the total number of ultra-wideband base stations, $p_j$ denotes the ultra-wideband receiver position interpolated at the j-th sampling instant, and $d_j^m$ denotes the distance acquired by ultra-wideband at the j-th instant.
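A minimal sketch of this interpolation step, assuming per-axis linear interpolation of the calibrated receiver trajectory; array names are illustrative:

```python
import numpy as np


def interpolate_receiver_positions(pose_times, positions, uwb_times):
    """Interpolate the receiver position at each UWB sampling instant.
    pose_times: (K,) increasing; positions: (K, 3); uwb_times: (N,) -> (N, 3)."""
    return np.stack(
        [np.interp(uwb_times, pose_times, positions[:, axis])
         for axis in range(3)],
        axis=1,
    )
```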
Step 104, calculating the global position of the ultra-wideband base station according to the ultra-wideband distance position data set.
Further, after the position of the ultra-wideband receiving end has been estimated, the positions of all ultra-wideband base stations can be obtained by optimizing the following objective function:

$$\min_{\{b_m\}} \; \sum_{m=1}^{M} \sum_{j=1}^{N} \rho\!\left( \bigl( \lVert p_j - b_m \rVert_2 - d_j^m \bigr)^2 \right)$$

where $\rho(\cdot)$ denotes the Huber loss function, $b_m$ denotes the optimized global position of the m-th ultra-wideband base station, $p_j$ denotes the ultra-wideband receiver position interpolated at the j-th sampling instant, and $d_j^m$ denotes the distance acquired by ultra-wideband at the j-th instant.
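One way such a robust optimization could be set up is with SciPy's least_squares and its built-in Huber loss; the initial guess and the f_scale value are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares


def estimate_base_station(receiver_positions, ranges):
    """receiver_positions: (N, 3) interpolated p_j; ranges: (N,) measured d_j^m."""
    def residuals(b):
        # range residual ||p_j - b|| - d_j^m for one base station
        return np.linalg.norm(receiver_positions - b, axis=1) - ranges

    b0 = receiver_positions.mean(axis=0)  # crude initial guess
    sol = least_squares(residuals, b0, loss="huber", f_scale=0.3)
    return sol.x  # estimated global position of this base station
```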
And 105, collecting camera data, laser data and inertial navigation data in a local time window to perform fusion odometer calculation so as to obtain a global initial pose and a local sparse map of the mobile equipment.
And step 106, updating the pose of the mobile equipment according to the currently acquired ultra-wideband data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map and the objective function.
The inertial navigation data may be data collected by sensors such as gyroscopes and accelerometers. It may include attitude information, i.e., attitude parameters of the mobile device such as Euler angles or quaternions that describe its orientation and azimuth; motion parameters such as acceleration, angular velocity and geomagnetic field strength, used to calculate the motion state and trajectory of the mobile device; or environmental information such as temperature, humidity and air pressure, used to analyze the working environment of the mobile device and its influencing factors.
Specifically, the camera may capture an image of the environment, the laser may scan objects and structures within the roadway, and inertial navigation may measure acceleration and angular velocity of the mobile equipment.
Specifically, the collected camera data, laser data and inertial navigation data within the local time window may first be preprocessed, including denoising, filtering and registration, to improve data quality and accuracy. The relative pose of the mobile equipment can be estimated from the inertial navigation data together with visual odometry methods such as the feature point method or the direct method; laser SLAM techniques can also be used to process the laser data and estimate the relative pose. For global initial position estimation, GPS or other positioning technologies may be combined with the inertial navigation data to obtain the global initial position of the mobile equipment in the coal mine roadway. For local sparse map construction, SLAM techniques may build a local sparse map from the camera data and the laser data; point-cloud-based map construction methods such as ICP or NDT, or graph-optimization-based methods such as ORB-SLAM or LSD-SLAM, may be used, without limitation. The global initial pose can be estimated by fusing the local sparse map with the global initial position using methods such as graph optimization. The local map can further be updated incrementally from the camera data and the laser data to build a local dense map, improving positioning and navigation accuracy and robustness.
With the camera, laser and inertial navigation fused odometer, the following optimization problem, which initializes the global pose and the local sparse map of the mobile equipment, can be solved rapidly within the acquisition window of two adjacent laser frames while keeping the IMU biases fixed:

$$\min_{\xi,\, d,\, v} \;\; E_L + E_C + E_I$$

$$E_L = \sum_{(p,\,P) \in \mathcal{M}_L} \bigl\| P - T(\xi,\, p) \bigr\|^2_{\Sigma_L}$$

$$E_C = \sum_{(z,\,z') \in \mathcal{M}_C} \bigl\| z' - \pi\bigl( T\bigl(\xi,\; \pi^{-1}(z,\, d_z)\bigr) \bigr) \bigr\|^2_{\Sigma_C}$$

$$E_I = \sum_{(i,\,j) \in \mathcal{M}_I} \bigl\| \xi_j \ominus f\bigl(\xi_i,\, v_i,\, b_a,\, b_g\bigr) \bigr\|^2_{\Sigma_I}$$

where $\Sigma_L$, $\Sigma_C$ and $\Sigma_I$ respectively denote the information matrices corresponding to the laser, image and IMU data; $\mathcal{M}_L$ denotes the set of matched three-dimensional laser point pairs, $P$ denotes a three-dimensional point, and $T(\cdot)$ denotes the three-dimensional coordinate transformation function; $\mathcal{M}_C$ denotes the set of matched two-dimensional image feature point pairs, $z$ denotes a two-dimensional image point, $\pi^{-1}(\cdot)$ denotes the back-projection transformation function of a two-dimensional image point, and $d_z$ denotes the depth of the corresponding image point; $\mathcal{M}_I$ denotes the set of IMU sample pairs within the local window, $\xi$ denotes the six-dimensional pose vector, $f(\cdot)$ denotes the IMU pre-integration function, $b_a$ denotes the accelerometer bias, $b_g$ denotes the gyroscope bias, and $v$ denotes the velocity.
Further, with the ultra-wideband global positions and the odometer estimate calibrated, the global pose of the mobile equipment is obtained in real time through the pose estimation method under the ultra-wideband one-dimensional distance constraint, by optimizing the following objective function (which optimizes pose, depth and IMU biases) over a larger time window:

$$\min_{\xi,\, d,\, v,\, b_a,\, b_g} \;\; E_L + E_C + E_I \;+\; \sum_{j=1}^{C} \rho\!\left( \bigl( \lVert t_j - b_m \rVert_2 - d_j^m \bigr)^2 \right)$$

where C is the total number of ultra-wideband samples within the local window and $t_j$ denotes the global position of the mobile equipment to be estimated. In this way, on the basis of accurately determined ultra-wideband base station positions in the underground long roadway of the coal mine, the pose of the mobile equipment can be calculated accurately and robustly by fusing the one-dimensional constraint formed by the sparse ultra-wideband base stations with the inertial navigation odometer; moreover, since sparse ultra-wideband base stations are adopted, the equipment acquisition cost and the manpower deployment cost are effectively reduced.
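A conceptual sketch of the ultra-wideband term added to the window objective above; the odometry terms E_L, E_C and E_I would come from the fused odometer, and all names here are illustrative:

```python
import numpy as np


def huber(r, delta=0.5):
    """Huber loss applied to a range residual; delta is an assumed scale."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))


def uwb_window_cost(positions, base_stations, samples):
    """positions: pose positions in the window, indexed per sample;
    samples: list of (pose index, base station index, measured range)."""
    cost = 0.0
    for i, m, d in samples:
        r = np.linalg.norm(positions[i] - base_stations[m]) - d
        cost += float(huber(r))
    return cost  # added to the odometry cost E_L + E_C + E_I
```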
In this embodiment of the disclosure, the device first acquires first data collected by a camera, second data collected by a laser and third data collected by ultra-wideband in a roadway scene, then calculates the global pose of the mobile equipment according to the first data and the second data, acquires target ultra-wideband position data according to the global pose to form an ultra-wideband distance-position data set, and calculates the global positions of the ultra-wideband base stations according to that data set; camera data, laser data and inertial navigation data are then collected within a local time window, the global initial pose and a local sparse map of the mobile equipment are calculated through a fused odometer, and the pose of the mobile equipment is further optimized by combining the ultra-wideband data, the global initial pose, the global positions of the ultra-wideband base stations, the local sparse map and an objective function. In this way, laser and camera data can be combined with ultra-wideband data; through dense closed-loop data acquisition, offline hybrid calibration of the global pose of the acquisition device, ultra-wideband receiver position estimation and ultra-wideband base station global position calculation, and on the basis of accurately determined base station positions in the underground long roadway of the coal mine, the pose of the mobile equipment can be calculated accurately and robustly by fusing the one-dimensional constraint formed by the sparse ultra-wideband base stations with the inertial navigation odometer.
In order to achieve the above embodiment, the present disclosure further provides a device for determining a pose of a mobile device under a long roadway ultra-wideband one-dimensional constraint.
Fig. 2 is a block diagram of a mobile equipment pose determining device under long roadway ultra-wideband one-dimensional constraint according to a second embodiment of the present disclosure.
As shown in fig. 2, the mobile equipment pose determining device 200 under the long roadway ultra-wideband one-dimensional constraint may include:
a first obtaining module 210, configured to obtain first data collected by a camera, second data collected by a laser, and third data collected by ultra wideband in a roadway scene;
a first calculation module 220, configured to calculate a global pose of the mobile equipment according to the first data and the second data;
a second obtaining module 230, configured to obtain target ultra-wideband position data according to the global pose, and form an ultra-wideband distance position data set;
a second calculation module 240, configured to calculate a global position of the ultra wideband base station according to the ultra wideband distance position data set;
a third calculation module 250, configured to collect camera data, laser data, inertial navigation data in a local time window, so as to perform fused odometer calculation, so as to obtain a global initial pose and a local sparse map of the mobile equipment;
And the updating module 260 is configured to update the pose of the mobile device according to the currently acquired ultra-wideband data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map and the objective function.
Optionally, the first data collected by the camera, the second data collected by the laser, and the third data collected by the ultra-wideband are closed-loop data obtained in a closed-loop manner in the roadway scene.
Optionally, the first computing module further includes:
the acquisition unit is used for extracting and matching the characteristic points of the first data of each frame and the second data of each frame so as to acquire an epipolar geometry map;
a calculation unit for calculating rotational components of the camera and the laser in a world coordinate system, respectively, based on the epipolar geometry map;
a first determining unit configured to determine corresponding position components based on rotational components of the camera and the laser in a world coordinate system, respectively;
a second determining unit configured to determine a first initial pose corresponding to the camera based on the rotation component and the position component corresponding to the camera;
and the third determining unit is used for determining a second initial pose corresponding to the laser based on the rotation component and the position component corresponding to the laser.
Optionally, the third determining unit is further configured to:
according to the first data and the second data, a common visible three-dimensional point set of the visual laser, a three-dimensional visual reconstruction point set and a three-dimensional laser point set are determined;
determining the Euclidean transformation errors corresponding to each point in the common visible three-dimensional point set of the visual laser, the visual re-projection errors corresponding to each point in the three-dimensional visual reconstruction point set, and the iterative closest point matching errors corresponding to each point in the three-dimensional laser point set;
based on a preset second objective function, minimizing the Euclidean transformation error corresponding to each point in the common visible three-dimensional point set of the visual laser, the visual re-projection error corresponding to each point in the three-dimensional visual reconstruction point set, and the iterative closest point matching error corresponding to each point in the three-dimensional laser point set, so as to correct the first initial pose and the second initial pose.
Optionally, the first computing module is specifically configured to:
based on the first initial pose of the camera and the second initial pose of the laser, carrying out pose mixing adjustment, and determining the position of an ultra-wideband receiving end by combining the third data;
And calculating the global pose of the mobile equipment based on the position of the ultra-wideband receiving end, the first data and the second data.
In this embodiment of the disclosure, the device first acquires first data collected by a camera, second data collected by a laser and third data collected by ultra-wideband in a roadway scene, then calculates the global pose of the mobile equipment according to the first data and the second data, acquires target ultra-wideband position data according to the global pose to form an ultra-wideband distance-position data set, and calculates the global positions of the ultra-wideband base stations according to that data set; camera data, laser data and inertial navigation data are then collected within a local time window, the global initial pose and a local sparse map of the mobile equipment are calculated through a fused odometer, and the pose of the mobile equipment is further optimized by combining the ultra-wideband data, the global initial pose, the global positions of the ultra-wideband base stations, the local sparse map and an objective function. In this way, laser and camera data can be combined with ultra-wideband data, and the pose of underground coal mine mobile equipment can be calculated automatically and accurately through dense closed-loop data acquisition, offline hybrid calibration of the global pose of the acquisition device, ultra-wideband receiver position estimation and ultra-wideband base station global position calculation.
To achieve the above embodiments, the present disclosure further proposes a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method for determining the pose of mobile equipment under the long-roadway ultra-wideband one-dimensional constraint as proposed by the foregoing embodiments of the present disclosure.
In order to implement the above embodiments, the disclosure further provides a non-transitory computer readable storage medium storing a computer program, which when executed by a processor, implements a method for determining a pose of a mobile device under a long roadway ultra-wideband one-dimensional constraint as proposed in the foregoing embodiments of the disclosure.
In order to implement the above-mentioned embodiments, the present disclosure also proposes a computer program product which, when its instructions are executed by a processor, performs the method for determining the pose of mobile equipment under the long-roadway ultra-wideband one-dimensional constraint as proposed in the foregoing embodiments of the present disclosure.
FIG. 3 illustrates a block diagram of an exemplary computer device suitable for use in implementing embodiments of the present disclosure. The computer device 12 shown in fig. 3 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in FIG. 3, computer device 12 is in the form of a general purpose computing device. Components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (hereinafter ISA) bus, the Micro Channel Architecture (hereinafter MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (hereinafter VESA) local bus, and the Peripheral Component Interconnect (hereinafter PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 3, commonly referred to as a "hard disk drive"). Although not shown in fig. 3, a disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a compact disk read only memory (Compact Disc Read Only Memory; hereinafter CD-ROM), digital versatile read only optical disk (Digital Video Disc Read Only Memory; hereinafter DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks such as a local area network (Local Area Network; hereinafter LAN), a wide area network (Wide Area Network; hereinafter WAN) and/or a public network such as the Internet via the network adapter 20. As shown, network adapter 20 communicates with other modules of computer device 12 via bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the methods mentioned in the foregoing embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. As with the other embodiments, if implemented in hardware, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the present disclosure.

Claims (10)

1. A method for determining the pose of mobile equipment under a long-roadway ultra-wideband one-dimensional constraint, characterized by comprising the following steps:
acquiring first data acquired by a camera, second data acquired by a laser and third data acquired by ultra-wideband under a roadway scene;
calculating the global pose of the mobile equipment according to the first data and the second data;
acquiring target ultra-wideband position data according to the global pose, and forming an ultra-wideband distance position data set;
calculating the global position of the ultra-wideband base station according to the ultra-wideband distance position data set;
collecting camera data, laser data and inertial navigation data in a local time window to perform fusion odometer calculation so as to obtain a global initial pose and a local sparse map of the mobile equipment;
And updating the pose of the mobile equipment according to the third data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map and the objective function which are currently acquired.
2. The method of claim 1, wherein the first data collected by the camera, the second data collected by the laser, and the third data collected by the ultra-wideband are closed loop data acquired in a closed loop in the roadway scene.
3. The method of claim 1, further comprising, prior to said calculating a global pose of mobile equipment from said first data and said second data:
extracting and matching characteristic points of the first data of each frame and the second data of each frame to obtain an epipolar geometry map;
calculating rotational components of the camera and the laser in a world coordinate system based on the epipolar geometry map, respectively;
determining corresponding position components based on rotational components of the camera and the laser in a world coordinate system, respectively;
determining a first initial pose corresponding to the camera based on the rotational component and the positional component corresponding to the camera;
And determining a second initial pose corresponding to the laser based on the rotation component and the position component corresponding to the laser.
4. The method of claim 3, further comprising, after said determining the second initial pose corresponding to the laser:
according to the first data and the second data, a common visible three-dimensional point set of the visual laser, a three-dimensional visual reconstruction point set and a three-dimensional laser point set are determined;
determining the Euclidean transformation errors corresponding to each point in the common visible three-dimensional point set of the visual laser, the visual re-projection errors corresponding to each point in the three-dimensional visual reconstruction point set, and the iterative closest point matching errors corresponding to each point in the three-dimensional laser point set;
based on a preset second objective function, minimizing the Euclidean transformation error corresponding to each point in the common visible three-dimensional point set of the visual laser, the visual re-projection error corresponding to each point in the three-dimensional visual reconstruction point set, and the iterative closest point matching error corresponding to each point in the three-dimensional laser point set, so as to correct the first initial pose and the second initial pose.
5. The method of claim 1, wherein the calculating of the global pose of the mobile equipment according to the first data and the second data comprises:
performing hybrid pose adjustment based on a first initial pose of the camera and a second initial pose of the laser, and determining a position of an ultra-wideband receiving end in combination with the third data; and
calculating the global pose of the mobile equipment based on the position of the ultra-wideband receiving end, the first data, and the second data.
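
For illustration only, and not part of the claims: the role of the ultra-wideband data in the pose calculation can be pictured as a range-constrained refinement, in which a single range to a base station at a known global position is fused with an odometry prior. In a long, narrow roadway such a range mainly constrains motion along the roadway axis, hence the one-dimensional constraint of the title; the weights and residual forms below are assumptions.

```python
# Minimal sketch, assuming NumPy/SciPy: refine a position estimate with
# one UWB range plus a prior that keeps it near the odometry estimate.
import numpy as np
from scipy.optimize import least_squares

def update_position(p_init, anchor, measured_range,
                    w_range=5.0, w_prior=1.0):
    """p_init: (3,) initial position from fused odometry;
    anchor: (3,) global base-station position; measured_range: UWB range."""
    def residuals(p):
        r_range = w_range * (np.linalg.norm(p - anchor) - measured_range)
        r_prior = w_prior * (p - p_init)  # stay close to the odometry estimate
        return np.concatenate([[r_range], r_prior])
    return least_squares(residuals, p_init).x
```

For example, update_position(np.array([50.0, 0.0, 1.0]), np.array([100.0, 2.0, 1.0]), 48.5) pulls the along-axis coordinate toward agreement with the measured range while leaving the cross-axis coordinates essentially at the prior.
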
6. A device for determining a pose of mobile equipment under a long-roadway ultra-wideband one-dimensional constraint, characterized by comprising:
a first acquisition module, configured to acquire, in a roadway scene, first data collected by a camera, second data collected by a laser, and third data collected by ultra-wideband;
a first calculation module, configured to calculate a global pose of the mobile equipment according to the first data and the second data;
a second acquisition module, configured to acquire target ultra-wideband position data according to the global pose and form an ultra-wideband distance-position data set;
a second calculation module, configured to calculate a global position of an ultra-wideband base station according to the ultra-wideband distance-position data set;
a third calculation module, configured to collect camera data, laser data, and inertial navigation data within a local time window and perform fused odometry calculation to obtain a global initial pose and a local sparse map of the mobile equipment; and
an updating module, configured to update the pose of the mobile equipment according to the currently collected third data, the global initial pose, the global position of the ultra-wideband base station, the local sparse map, and an objective function.
7. The device of claim 6, wherein the first data collected by the camera, the second data collected by the laser, and the third data collected by ultra-wideband are closed-loop data acquired along a closed loop in the roadway scene.
8. The device of claim 6, wherein the first calculation module further comprises:
an acquisition unit, configured to extract and match feature points of each frame of the first data and each frame of the second data to obtain an epipolar geometry map;
a calculation unit, configured to calculate rotation components of the camera and the laser in a world coordinate system, respectively, based on the epipolar geometry map;
a first determining unit, configured to determine corresponding position components based on the rotation components of the camera and the laser in the world coordinate system, respectively;
a second determining unit, configured to determine a first initial pose corresponding to the camera based on the rotation component and the position component corresponding to the camera; and
a third determining unit, configured to determine a second initial pose corresponding to the laser based on the rotation component and the position component corresponding to the laser.
9. The device of claim 8, wherein the third determining unit is further configured to:
determine, according to the first data and the second data, a visual-laser co-visible three-dimensional point set, a three-dimensional visual reconstruction point set, and a three-dimensional laser point set;
determine a Euclidean transformation error corresponding to each point in the visual-laser co-visible three-dimensional point set, a visual reprojection error corresponding to each point in the three-dimensional visual reconstruction point set, and an iterative closest point matching error corresponding to each point in the three-dimensional laser point set; and
minimize, based on a preset second objective function, the Euclidean transformation error corresponding to each point in the visual-laser co-visible three-dimensional point set, the visual reprojection error corresponding to each point in the three-dimensional visual reconstruction point set, and the iterative closest point matching error corresponding to each point in the three-dimensional laser point set, so as to correct the first initial pose and the second initial pose.
10. The device of claim 6, wherein the first calculation module is specifically configured to:
perform hybrid pose adjustment based on a first initial pose of the camera and a second initial pose of the laser, and determine a position of an ultra-wideband receiving end in combination with the third data; and
calculate the global pose of the mobile equipment based on the position of the ultra-wideband receiving end, the first data, and the second data.
CN202310500784.3A 2023-05-06 2023-05-06 Method and device for determining pose of mobile equipment under long roadway ultra-wideband one-dimensional constraint Active CN116202511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310500784.3A CN116202511B (en) 2023-05-06 2023-05-06 Method and device for determining pose of mobile equipment under long roadway ultra-wideband one-dimensional constraint

Publications (2)

Publication Number Publication Date
CN116202511A (en) 2023-06-02
CN116202511B (en) 2023-07-07

Family

ID=86508089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310500784.3A Active CN116202511B (en) 2023-05-06 2023-05-06 Method and device for determining pose of mobile equipment under long roadway ultra-wideband one-dimensional constraint

Country Status (1)

Country Link
CN (1) CN116202511B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116518961B (en) * 2023-06-29 2023-09-01 煤炭科学研究总院有限公司 Method and device for determining global pose of large-scale fixed vision sensor

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110657803B (en) * 2018-06-28 2021-10-29 深圳市优必选科技有限公司 Robot positioning method, device and storage device
CN110514225B (en) * 2019-08-29 2021-02-02 中国矿业大学 External parameter calibration and accurate positioning method for fusion of multiple sensors under mine
CN114814872A (en) * 2020-08-17 2022-07-29 浙江商汤科技开发有限公司 Pose determination method and device, electronic equipment and storage medium
CN115235464A (en) * 2021-04-23 2022-10-25 中煤科工开采研究院有限公司 Positioning method and device and moving tool thereof
CN113140040A (en) * 2021-04-26 2021-07-20 北京天地玛珂电液控制系统有限公司 Multi-sensor fusion coal mine underground space positioning and mapping method and device
CN114721001A (en) * 2021-11-17 2022-07-08 长春理工大学 Mobile robot positioning method based on multi-sensor fusion
CN115342796A (en) * 2022-07-22 2022-11-15 广东交通职业技术学院 Map construction method, system, device and medium based on visual laser fusion
CN115950418A (en) * 2022-12-09 2023-04-11 青岛慧拓智能机器有限公司 Multi-sensor fusion positioning method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant