CN113838140B - Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance

Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance

Info

Publication number
CN113838140B
CN113838140B (application CN202110936647.5A)
Authority
CN
China
Prior art keywords
pedestrian
dimensional
video image
target
monocular
Prior art date
Legal status
Active
Application number
CN202110936647.5A
Other languages
Chinese (zh)
Other versions
CN113838140A (en)
Inventor
许志华 (Xu Zhihua)
牛一如 (Niu Yiru)
孙文彬 (Sun Wenbin)
Current Assignee
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date
Filing date
Publication date
Application filed by China University of Mining and Technology, Beijing (CUMTB)
Priority to CN202110936647.5A
Publication of CN113838140A
Application granted
Publication of CN113838140B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The patent discloses a monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance. First, a monocular video image containing the dynamic pedestrian to be positioned and a LiDAR point cloud covering the video field of view are acquired. Second, the position, pose and interior orientation elements of the video camera are recovered by extracting and matching feature points between the monocular video image and the LiDAR point cloud. Third, two-dimensional detection of the pedestrian to be positioned is performed in the video image to obtain the pixel coordinates of the target feature points, while the point cloud of the scene is processed to extract the ground plane and obtain its coordinate value in the vertical direction of the LiDAR coordinate system. Finally, the constraint that the target pedestrian is always perpendicular to the ground plane is introduced into the collinearity equations, a joint solution system is constructed from the data prepared above, and the three-dimensional coordinates and height information of the video pedestrian features are recovered.

Description

Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance
Technical Field
The invention belongs to the field of target tracking, and particularly relates to a monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance.
Background
Three-dimensional positioning of dynamic targets has great practical value in fields such as intelligent transportation, disaster and emergency services, digital cities, and public epidemic prevention and control. At present, three-dimensional target positioning relies mainly on the Global Positioning System (GPS), but among tall, dense building clusters or in enclosed spaces (such as indoor environments, tunnels and underground parking lots), multipath effects, signal occlusion and other factors easily cause positioning errors. In recent years, wireless positioning technologies such as Ultra-Wide Band (UWB), wireless LAN (Wi-Fi), Bluetooth and infrared have also attracted widespread attention, but their positioning depends heavily on external infrastructure, and they suffer from high cost and poor universality.
Over the past decades, vision-based and LiDAR-based target positioning methods have been proposed and have received great attention. Stereo-vision positioning techniques are relatively mature, but they focus mainly on three-dimensional position estimation of vehicles; pedestrians vary in height and shape and therefore lack sufficient attribute information. Traditional three-dimensional pedestrian positioning methods mostly depend on prior knowledge of the scene or of the pedestrian, for example positioning a pedestrian from the relationship between height, body parts and stride obtained from medical statistics, or using physical constraints to convert pixel measurements into human height through simple motion-trajectory analysis, such as jumping or running. Methods of this type use other metrics to obtain the three-dimensional position of the pedestrian indirectly, which may cause error propagation. In addition, sensors such as digital cameras, LiDAR, wireless sensors and inertial gyroscopes can be combined into a data acquisition platform that collects image data and geographic position data simultaneously and builds a three-dimensional map for multi-sensor fusion positioning. However, these methods depend heavily on equipment or scene conditions and cannot easily be deployed in all public places. In recent years, many scholars have tried to solve three-dimensional pedestrian positioning with artificial intelligence methods, proposing different neural network structures for three-dimensional imaging, positioning, or estimating three-dimensional human postures. However, these studies rely on large training data sets and often assume that pedestrians have roughly the same height, which introduces inherent positioning errors, so the accuracy is difficult to guarantee.
In summary, vision-based techniques can capture detailed posture and texture attributes, but each image point corresponds to a single perspective projection ray, so depth information is missing, and additional information is needed to convert two-dimensional coordinates into three-dimensional coordinates. In this context, we propose an effective alternative for three-dimensional positioning: a three-dimensional map captured by ground LiDAR is used to estimate the parameters of a monocular camera. Pedestrians are dynamic targets, but they are always perpendicular to the ground, so their three-dimensional position can be determined. The method aims to overcome the limitation of planar positioning and extend the application of traditional photogrammetry into three-dimensional space.
To address the problem of three-dimensional positioning of dynamic targets in monocular video under general conditions, the invention provides a monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance. The implementation idea is as follows: first, the monocular video image is calibrated with the three-dimensional map to recover the camera parameters; pedestrian detection is then performed to obtain a two-dimensional bounding box containing the pedestrian's head and foot positions; next, the vertical coordinate of the ground plane is extracted from the point cloud corresponding to the monocular video; finally, three-dimensional positioning of pedestrians is achieved using the inherent condition that a pedestrian's body is always perpendicular to the ground. The invention does not depend on special calibration objects or training data sets, places no restriction on the scene geometry, and its computation is simple and efficient; it obtains more accurate positioning results than other methods and recovers more accurate pedestrian height values, which is of theoretical and practical significance.
Summary of the invention
(I) Problem to be solved
The invention aims to perform three-dimensional positioning of pedestrians in monocular video. Because the required geometric constraints may not be available in a real scene, it is difficult to determine the distance from a moving target to the camera from monocular video alone. To address this problem, the invention designs a monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance: first, a three-dimensional map is constructed by acquiring point cloud data of the scene covered by the video; next, the camera is calibrated by feature matching to obtain the interior and exterior orientation elements of the monocular video image; then, pedestrian detection is performed on the monocular video image containing the dynamic target to be positioned to obtain the pixel coordinates of the pedestrian bounding box, while the vertical coordinate of the ground plane is obtained from the ground point cloud of the video scene; finally, the inherent condition that pedestrians are always perpendicular to the ground is introduced into the collinearity equations, and a joint adjustment of the target feature points yields their three-dimensional positions.
(II) Technical scheme
In order to achieve the above purpose, the invention discloses a monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance, which specifically comprises the following steps:
step 1: acquire a monocular video image F containing the dynamic target to be positioned and a LiDAR point cloud C of the same scene that does not contain the target;
step 2: recover the interior and exterior orientation elements of the camera from the 2D-3D matching relation between feature points of the monocular video image F and the point cloud C, and take the exterior orientation elements as the global transformation parameters of the target scene;
step 3: let P_i (i ∈ 1, 2, 3, …, n) denote the pedestrians to be positioned in the monocular video image F, where n is the total number of pedestrians to be positioned; obtain with a target detection algorithm the pixel coordinates of the midpoints of the upper and lower boundary lines of each pedestrian detection box, denoted t_i and b_i respectively;
step 4: extract the ground plane from the LiDAR point cloud C scanned in the scene of the video image F, and obtain its vertical coordinate value Z_g;
Step 5: by pedestrians P i (i.epsilon.1, 2,3, …, n) always being perpendicular to the ground, and the coordinate value Z of the ground plane vertical direction extracted in the step 4 g Introducing a collineation equation, and obtaining a certain pedestrian pixel coordinate (u t ,v t ),(u b ,v b ) Constructing a joint solution model;
step 6: perform a Taylor polynomial expansion of the model constructed in step 5 and, after multiple iterations, obtain the solutions meeting a set threshold, yielding the three-dimensional coordinates of the two geometric points t_i, b_i of each pedestrian P_i (i ∈ 1, 2, 3, …, n) and thereby realizing monocular video pedestrian three-dimensional positioning.
(III) Beneficial effects
1. With the method, three-dimensional positioning of dynamic pedestrians in monocular video can be achieved even when the true size of the target is unknown and the scene has no specific geometric features.
2. The method can provide technical support for applications such as cross-camera video tracking of urban dynamic pedestrians, trajectory analysis and behavior anomaly detection.
Description of the drawings
Fig. 1 is a flow chart of a monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance.
Fig. 2 is a schematic view of a LiDAR point cloud and a monocular video image containing a dynamic pedestrian to be located.
Fig. 3 is a schematic diagram of the two-dimensional detection result of a dynamic pedestrian to be positioned in a monocular video image.
Fig. 4 is a schematic diagram of the three-dimensional positioning of a dynamic pedestrian to be positioned in a monocular video image.
Detailed description of the preferred embodiments
Taking Figs. 2, 3 and 4 as examples, the implementation of the present invention is described in detail as follows:
Step 1: the process is described here for an arbitrary pedestrian P_i (i = 1, 2, 3, …, n) to be located. As shown in Fig. 2, considering the dynamics of the pedestrian to be positioned, a LiDAR point cloud C of the scene that does not contain the pedestrian P_i is first acquired with a three-dimensional laser scanner, and a monocular video image F containing the target P_i is then captured with a monitoring camera deployed in the scene.
Step 2: by O respectively F -X F Y F Z F And O C -X C Y C Z C And the coordinate system of the monocular video image F and the laser radar point cloud C is represented, and the camera calibration is realized by adopting a direct linear transformation algorithm. At least 6 pairs of characteristic points are selected from the monocular video image F and the ground point cloud C, and the internal azimuth element matrix A (u) of the camera is restored according to the 2D-3D matching relation of the characteristic points 0 ,v 0 F), wherein (u 0 ,v 0 ) Representing principal point coordinates, F being focal length, and video image F acquisitionAn instantaneous matrix E of external orientation elements (including a translation vector T and a rotation matrix R), of the formula (1) (X w ,Y w ,Z w ) For the object P to be positioned i Is its image Fang Erwei coordinates:
Step 3: as shown in Fig. 3, the two-dimensional detection box of pedestrian P_i in the monocular video image F is acquired with a YOLO object detector. The YOLO algorithm takes the whole image as network input and predicts target regions and their categories; the midpoint t of the upper edge of the pedestrian detection box, i.e. the head position, and the midpoint b of the lower edge, i.e. the foot position, are taken as marker points, with pixel coordinates denoted (u_t, v_t) and (u_b, v_b).
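A sketch of this step using the off-the-shelf ultralytics YOLO API; the model file name and image path are placeholders, and any detector that returns (x1, y1, x2, y2) pixel boxes can be substituted:

```python
from ultralytics import YOLO

def head_foot_points(box):
    """Midpoints of the upper and lower edges of a pedestrian box:
    t approximates the head position, b the foot position."""
    x1, y1, x2, y2 = box
    t = ((x1 + x2) / 2.0, y1)   # (u_t, v_t)
    b = ((x1 + x2) / 2.0, y2)   # (u_b, v_b)
    return t, b

model = YOLO("yolov8n.pt")          # placeholder weights file
results = model("frame.jpg")[0]     # placeholder video frame
for box, cls in zip(results.boxes.xyxy.tolist(), results.boxes.cls.tolist()):
    if int(cls) == 0:               # class 0 = 'person' in COCO
        t, b = head_foot_points(box)
```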
Step 4: in the laser radar, a gyroscope is arranged to ensure that the Z axis in a coordinate system is always vertical to the ground, so that a ground plane is extracted from the laser radar point cloud C, and a coordinate value Z in the vertical direction is obtained g Necessary basic data are provided for three-dimensional positioning of subsequent pedestrians.
Step 5: as shown in fig. 4, for the target pedestrian P to be positioned appearing only in the monocular video image F i After the camera calibration and posture recovery in the step 2 are completed, the coordinate value Z of the vertical direction of the ground plane extracted in the step 4 is obtained g Introducing an improved collineation equation, as in equation (2), to obtain the three-dimensional coordinates (X b ,Y b ,Z b ) Wherein a is i ,b i ,c i (i=1, 2, 3) represents the element values in the rotation matrix R, (X) S ,Y S ,Z S ) Representing the value of the element in the translation vector T,
Step 6: considering the ill-posedness of monocular visual positioning, the constraint that the pedestrian is always perpendicular to the ground is introduced into the solution: the head point t shares the planimetric coordinates (X_b, Y_b) of the foot point b, leaving only its vertical coordinate Z_t unknown. The reprojection error (Δu_t, Δv_t) of the head point t of the target pedestrian P_i to be positioned is computed:

$$ \Delta u_t = u_t - \hat{u}_t(Z_t),\qquad \Delta v_t = v_t - \hat{v}_t(Z_t) \tag{3} $$

where (u_t, v_t) are the pixel coordinates of the point t detected directly from the video image F, and (û_t(Z_t), v̂_t(Z_t)) are its back-projected pixel coordinates obtained by substituting (X_b, Y_b, Z_t) into the collinearity equations:

$$ \hat{u}_t = u_0 - f\,\frac{a_1(X_b - X_S) + b_1(Y_b - Y_S) + c_1(Z_t - Z_S)}{a_3(X_b - X_S) + b_3(Y_b - Y_S) + c_3(Z_t - Z_S)},\qquad \hat{v}_t = v_0 - f\,\frac{a_2(X_b - X_S) + b_2(Y_b - Y_S) + c_2(Z_t - Z_S)}{a_3(X_b - X_S) + b_3(Y_b - Y_S) + c_3(Z_t - Z_S)} \tag{4} $$

Accordingly, the reprojection error in equation (4) can be expressed as an error equation in the vertical coordinate Z_t of the head point t; a first-order Taylor expansion about the current estimate of Z_t gives

$$ V = J\,\Delta Z_t - L \tag{5} $$

$$ J = \begin{bmatrix} \partial\hat{u}_t/\partial Z_t \\ \partial\hat{v}_t/\partial Z_t \end{bmatrix},\qquad L = \begin{bmatrix} \Delta u_t \\ \Delta v_t \end{bmatrix} \tag{6} $$

The correction ΔZ_t of the unknown Z_t is computed by the matrix operation of equation (7),

$$ \Delta Z_t = (J^{\mathsf{T}}J)^{-1}J^{\mathsf{T}}L \tag{7} $$

and after multiple iterations the three-dimensional coordinates of the point t meeting the accuracy requirement are obtained. With the three-dimensional coordinates of the head and foot of pedestrian P_i, the pedestrian height h is computed as in equation (8):

$$ h = Z_t - Z_b \tag{8} $$
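As a sketch of the iterative solution of equations (3)-(8), the loop below runs one-dimensional Gauss-Newton on Z_t, using a numerical derivative in place of the analytic Jacobian; the function name and the initial guess z0 (for example Z_g plus a nominal body height) are assumptions of ours:

```python
import numpy as np

def head_height(u_t, v_t, X_b, Y_b, intr, R, S, z0, tol=1e-4, max_iter=20):
    """Iterate on the head point's vertical coordinate Z_t until the
    reprojection error (3) converges; the planimetric coordinates are
    fixed to those of the foot point because the pedestrian is
    perpendicular to the ground."""
    u0, v0, f = intr
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = R

    def project(Z):
        # Collinearity equations (4) applied to (X_b, Y_b, Z).
        dX, dY, dZ = X_b - S[0], Y_b - S[1], Z - S[2]
        D = a3 * dX + b3 * dY + c3 * dZ
        return np.array([u0 - f * (a1 * dX + b1 * dY + c1 * dZ) / D,
                         v0 - f * (a2 * dX + b2 * dY + c2 * dZ) / D])

    Z_t = z0
    for _ in range(max_iter):
        L_vec = np.array([u_t, v_t]) - project(Z_t)   # residual (Delta u_t, Delta v_t)
        eps = 1e-3
        J = (project(Z_t + eps) - project(Z_t - eps)) / (2 * eps)
        dZ_t = (J @ L_vec) / (J @ J)                  # normal equation (7) in 1D
        Z_t += dZ_t
        if abs(dZ_t) < tol:
            break
    return Z_t

# Pedestrian height, equation (8): h = Z_t - Z_b, with Z_b = Z_g.
```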
Step 7: step 6 yields (X_t, Y_t, Z_t) and (X_b, Y_b, Z_b), i.e. the head and foot coordinates of the target pedestrian P_i to be positioned that satisfy the set threshold condition, on the basis of which the position of the dynamic video target is determined.
The above embodiment describes the implementation steps for an arbitrary single target; the method applies equally when there are multiple targets to be located, realizing three-dimensional positioning of dynamic pedestrians in monocular video.
The foregoing merely describes specific embodiments of the present invention and is not intended to limit its scope of protection; any modifications, equivalent substitutions or improvements made within the spirit and principles of the invention shall fall within the scope of protection of the invention.

Claims (1)

1. A monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance, comprising the following steps:
step 1: acquiring with a three-dimensional laser scanner a LiDAR point cloud C of the scene that does not contain the pedestrian P_i to be positioned, and then capturing a monocular video image F containing the target P_i with a monitoring camera deployed in the scene;
step 2: denoting by O_F-X_FY_FZ_F and O_C-X_CY_CZ_C the coordinate systems of the monocular video image F and the LiDAR point cloud C respectively, and performing camera calibration with a direct linear transformation algorithm: at least 6 pairs of feature points are selected from the monocular video image F and the ground point cloud C, and from their 2D-3D matching relation the camera interior orientation matrix A(u_0, v_0, f) and the exterior orientation matrix E at the instant the video image F was acquired are recovered, as in equation (1),

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,E\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},\qquad A = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix},\qquad E = [\,R \mid T\,] \tag{1} $$

where (u_0, v_0) are the principal point coordinates, f is the focal length, E comprises a translation vector T and a rotation matrix R, (X_w, Y_w, Z_w) are the world coordinates of the target P_i to be positioned, and (u, v) are its two-dimensional image-space coordinates;
step 3: acquiring the two-dimensional detection box of pedestrian P_i in the monocular video image F with a YOLO object detector, which takes the whole image as network input and predicts target regions and their categories; the midpoint t of the upper edge of the pedestrian detection box, i.e. the head position, and the midpoint b of the lower edge, i.e. the foot position, are taken as marker points, with pixel coordinates denoted (u_t, v_t) and (u_b, v_b);
step 4: since the gyroscope in the laser scanner keeps the Z axis of the point cloud coordinate system always perpendicular to the ground, extracting the ground plane from the LiDAR point cloud C and obtaining its vertical coordinate value Z_g;
Step 5: for the target pedestrian P to be located which appears only in the monocular video image F i After the camera calibration and posture recovery in the step 2 are completed, the coordinate value Z of the vertical direction of the ground plane extracted in the step 4 is obtained g The collineation equation is introduced to calculate the three-dimensional coordinates (X b ,Y b ,Z b ):
Wherein a is i ,b i ,c i (i=1, 2, 3) represents the element values in the rotation matrix R, (X) S ,Y S ,Z S ) Representing the element values in the translation vector T;
step 6: considering the ill-posedness of monocular visual positioning, introducing the constraint that the pedestrian is always perpendicular to the ground into the solution, so that the head point t shares the planimetric coordinates (X_b, Y_b) of the foot point and only its vertical coordinate Z_t is unknown, and computing the reprojection error (Δu_t, Δv_t) of the head point t of the target pedestrian P_i to be positioned:

$$ \Delta u_t = u_t - \hat{u}_t(Z_t),\qquad \Delta v_t = v_t - \hat{v}_t(Z_t) \tag{3} $$

where (u_t, v_t) are the pixel coordinates of the point t detected directly from the video image F, and (û_t(Z_t), v̂_t(Z_t)) are its back-projected pixel coordinates obtained by substituting (X_b, Y_b, Z_t) into the collinearity equations:

$$ \hat{u}_t = u_0 - f\,\frac{a_1(X_b - X_S) + b_1(Y_b - Y_S) + c_1(Z_t - Z_S)}{a_3(X_b - X_S) + b_3(Y_b - Y_S) + c_3(Z_t - Z_S)},\qquad \hat{v}_t = v_0 - f\,\frac{a_2(X_b - X_S) + b_2(Y_b - Y_S) + c_2(Z_t - Z_S)}{a_3(X_b - X_S) + b_3(Y_b - Y_S) + c_3(Z_t - Z_S)} \tag{4} $$

accordingly, the reprojection error in equation (4) can be expressed as an error equation in the vertical coordinate Z_t of the head point t, linearized by a first-order Taylor expansion about the current estimate of Z_t:

$$ V = J\,\Delta Z_t - L,\qquad J = \begin{bmatrix} \partial\hat{u}_t/\partial Z_t \\ \partial\hat{v}_t/\partial Z_t \end{bmatrix},\qquad L = \begin{bmatrix} \Delta u_t \\ \Delta v_t \end{bmatrix} \tag{5, 6} $$

computing the correction ΔZ_t of the unknown Z_t by the matrix operation of equation (7),

$$ \Delta Z_t = (J^{\mathsf{T}}J)^{-1}J^{\mathsf{T}}L \tag{7} $$

and after multiple iterations obtaining the three-dimensional coordinates of the point t meeting the accuracy requirement; with the three-dimensional coordinates of the head and foot of pedestrian P_i, computing the pedestrian height h as in equation (8):

$$ h = Z_t - Z_b \tag{8} $$
step 7: step 6 yields (X_t, Y_t, Z_t) and (X_b, Y_b, Z_b), i.e. the head and foot coordinates of the target pedestrian P_i to be positioned that satisfy the set threshold condition, on the basis of which the position of the dynamic video target is determined.
CN202110936647.5A 2021-08-16 2021-08-16 Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance Active CN113838140B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110936647.5A CN113838140B (en) 2021-08-16 2021-08-16 Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance


Publications (2)

Publication Number Publication Date
CN113838140A CN113838140A (en) 2021-12-24
CN113838140B (en) 2023-07-18

Family

ID=78960701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110936647.5A Active CN113838140B (en) 2021-08-16 2021-08-16 Monocular video pedestrian three-dimensional positioning method based on three-dimensional map assistance

Country Status (1)

Country Link
CN (1) CN113838140B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3926959A4 (en) * 2019-03-21 2022-03-23 LG Electronics Inc. Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018081008A (en) * 2016-11-16 2018-05-24 株式会社岩根研究所 Self position posture locating device using reference video map
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 A kind of Three-dimensional target recognition system and method based on laser radar and monocular vision
CN109100741A (en) * 2018-06-11 2018-12-28 长安大学 A kind of object detection method based on 3D laser radar and image data
CN110906880A (en) * 2019-12-12 2020-03-24 中国科学院长春光学精密机械与物理研究所 Object automatic three-dimensional laser scanning system and method
CN111951305A (en) * 2020-08-20 2020-11-17 重庆邮电大学 Target detection and motion state estimation method based on vision and laser radar
CN112396664A (en) * 2020-11-24 2021-02-23 华南理工大学 Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN113255481A (en) * 2021-05-11 2021-08-13 北方工业大学 Crowd state detection method based on unmanned patrol car

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction; Jason Ku et al.; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 11867-11876 *
Monocular Pedestrian 3D Localization for Social Distance Monitoring; Yiru Niu et al.; Sensors; 1-16 *
Pedestrian Detection with Lidar Point Clouds Based on Single Template Matching; Kaiqi Liu et al.; Electronics; 1-20 *
Accuracy evaluation of a new 3D laser scanning point cloud coordinate positioning method for tunnel surveying; You Xiangjun et al.; Bulletin of Surveying and Mapping (No. 4); 80-84 *
High-precision vehicle and pedestrian detection based on multi-line LiDAR point clouds; Yang Xiaokui; China Master's Theses Full-text Database, Engineering Science and Technology II (No. 2); C035-438 *
Research on three-dimensional pedestrian positioning under multiple motion patterns; Zhao Hui et al.; Journal of Beijing Information Science and Technology University (Natural Science Edition); Vol. 31 (No. 5); 82-86, 96 *

Also Published As

Publication number Publication date
CN113838140A (en) 2021-12-24


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant