CN117173214A - High-precision map real-time global positioning tracking method based on road side monocular camera
- Publication number
- CN117173214A CN117173214A CN202311124950.0A CN202311124950A CN117173214A CN 117173214 A CN117173214 A CN 117173214A CN 202311124950 A CN202311124950 A CN 202311124950A CN 117173214 A CN117173214 A CN 117173214A
- Authority
- CN
- China
- Prior art keywords
- camera
- vehicle
- angle
- positioning
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a high-precision map real-time global positioning tracking method based on a road side monocular camera, which comprises the following steps: road side camera self-calibration, in which self-calibration is realized by learning semantic cues from the traffic scene; target detection and tracking, in which the vehicle is detected and tracked on the image plane to obtain its pixel position change over a period of time; estimation of the angle between the target vehicle and the camera, calculated from the pixel position and the calibrated camera parameters; estimation of the distance between the target vehicle and the camera, calculated by combining the calibrated camera parameters; final positioning calculation, in which global positioning of the target is realized by combining the obtained distance and angle with the longitude and latitude of the camera; and real-time display of the tracked global positioning result on a high-precision map. Monocular global positioning from any uncalibrated road side camera is realized automatically without manual operation, and positioning tracking is realized both on the image and on the high-precision map plane.
Description
Technical Field
The invention relates to the field of computer vision, in particular to a camera calibration method based on deep learning.
Background
With the rapid development of autonomous driving technology, increasing emphasis is placed on its safety. As the first step of autonomous driving perception, accurate positioning is a precondition for ensuring safety.
Global positioning and tracking of vehicles using global navigation satellite systems (GNSS), or various commercial products with built-in GNSS technology, has been widely adopted. However, GNSS-based positioning schemes require line of sight (LOS) to at least four navigation satellites, and in many areas, especially densely built-up areas where satellite signals are severely attenuated or even blocked, the availability of that many navigation satellites cannot be guaranteed due to the urban canyon effect. Furthermore, the root mean square error (RMSE) of pure GNSS positioning in cities is typically greater than 5 m; even when enhanced with inertial sensors, differential corrections or multi-constellation receivers, the required navigation performance cannot be guaranteed in terms of availability and accuracy. On the other hand, road side monitoring cameras have been deployed at large scale, so a method is needed that realizes global monocular positioning of road side vehicles using large-scale traffic monitoring cameras in the absence of GNSS. Although there are already many intelligent road side applications built on road side monitoring cameras, covering tasks such as speed estimation, traffic statistics and prediction, and illegal parking identification to improve road safety and efficiency, few systematic methods address the problem of locating road side vehicles.
At present, most positioning methods manually select at least four corresponding points in the map and the image, based on solving a homography matrix from four point pairs, to realize the conversion from the camera coordinate system to the world coordinate system. Although simple to implement, this method is prone to introducing human error. More importantly, in current large-scale deployments of intelligent traffic cameras, it requires the initial point correspondences to be established manually for each camera in order to solve the coordinate system conversion matrix, which severely limits the application scale and deployment efficiency of intelligent traffic cameras.
At present, monocular positioning is mostly tested in experimental scenes where camera calibration is assumed complete, or the camera is calibrated in advance by manual methods, for example Zhang Zhengyou's method, which completes calibration by placing a checkerboard. Both approaches require manual on-site work, and restrictions on traffic flow make them unsuitable for practical traffic scenarios. More importantly, the angles of road side cameras often change due to physical factors, and with the popularity of PTZ cameras the focal length can also change at any time, so manual camera calibration methods are not applicable.
There are also self-calibration methods for road side cameras that rely on detecting vanishing points from vehicles, but they require the road, or the vehicle trajectories, to be approximately straight, and therefore cannot be used in turning scenes.
In summary, there is as yet no complete solution to road side monocular positioning that achieves automatic camera calibration without scene prior information or any manual operation and that is suitable for monocular vehicle positioning in arbitrary unknown scenes.
Disclosure of Invention
In order to solve the above practical problems, the invention provides a high-precision map real-time global positioning tracking method based on a road side monocular camera, with the following technical scheme:
the technical scheme provided by the invention can automatically realize calibration based on the existing road side monitoring camera, does not need manual participation, and has no requirement on scenes. The algorithm can be applied to any intersection, the vehicles in the scene are directly subjected to global positioning, and meanwhile, the global positioning result obtained after positioning is tracked on a high-precision map.
A global positioning tracking method based on a road side monocular camera comprises the following steps:
step A, the camera performs automatic self-calibration to obtain the intrinsic and extrinsic parameters of the camera;
step B, a target detector detects the vehicle in the image to obtain the pixel position of the vehicle in each frame;
step C, a target tracker establishes association between consecutive frames on the basis of detection to obtain the change of the vehicle pixel position over time;
step D, vehicle positioning takes the pixel position change and calculates the angle and distance between the vehicle and the camera, obtaining the global position of the vehicle;
step E, high-precision map positioning tracking displays the calculated global positioning information on the high-precision map in real time.
In step A, in the road side scene, a deep-learning-based model is adopted to learn semantic cues directly from the scene and to infer the vertical field angle, roll angle and pitch angle, without relying on manual input or prior information about the scene.
In step A, the focal length is not estimated directly but is converted into the vertical field angle for indirect inference; the continuous values of the pitch angle, roll angle and vertical field angle are each discretized into 256 buckets for classification prediction, three fully connected layers are then used to predict the three quantities respectively, and finally the expected value of the probability distribution of each fully connected head is used as that head's prediction; the Softargmax-biased-L2 loss is used for the vertical field angle, and the standard Softargmax-L2 loss for the roll and pitch angles.
In steps B and C, only the bottom center point of the detection frame obtained by detection and tracking is used as the input for positioning; accurate positioning depends on the global parameters of the camera, such as the longitude and latitude, camera height and camera heading angle, as well as the camera's intrinsic and extrinsic parameters.
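By way of illustration only, the following minimal Python sketch shows how this positioning input could be extracted and accumulated over time from tracker output; the box format (x1, y1, x2, y2) and all names are assumptions for the example, not part of the claimed method.

```python
from collections import defaultdict

def bottom_center(box_xyxy):
    """Bottom-center pixel of a 2D detection box (x1, y1, x2, y2),
    used as the positioning input for a tracked vehicle."""
    x1, y1, x2, y2 = box_xyxy
    return ((x1 + x2) / 2.0, y2)

# Stand-in tracker output: one list of (track_id, box) pairs per frame.
detections_per_frame = [
    [(1, (100.0, 200.0, 180.0, 320.0))],
    [(1, (110.0, 210.0, 195.0, 340.0))],
]

# Accumulate each vehicle's pixel position change over time (steps B and C).
track_points = defaultdict(list)
for frame in detections_per_frame:
    for track_id, box in frame:
        track_points[track_id].append(bottom_center(box))
```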
In step D, the angle estimation between the vehicle and the camera depends on the camera heading angle and horizontal field angle, and the longitudinal distance between the vehicle and the camera depends on the camera height; on the basis of the obtained camera parameters, the distance and angle between the vehicle and the camera are calculated simultaneously, and the longitude and latitude of the vehicle are then calculated from this distance and angle combined with the camera heading angle and the camera longitude and latitude.
A global high-precision map positioning and tracking system based on a roadside monocular camera, comprising: camera self-calibration, vehicle detection, vehicle tracking, vehicle positioning and high-precision map positioning tracking;
the camera self-calibration predicts the focal length, rolling angle and pitch angle information of the camera through a deep learning network;
the vehicle detection and tracking part detects and tracks the vehicle in real time to obtain the change of the center point coordinate at the bottom of the vehicle detection frame along with time;
the longitude and latitude coordinates of the target vehicle are obtained by combining a geographic position conversion formula after the angle and distance between the vehicle and the camera are obtained through the angle estimation and the distance estimation;
the high-precision vehicle positioning and tracking input pixel change and camera parameter obtained by camera calibration are combined with a positioning algorithm to display the geographic positioning result on a high-precision map in real time, and meanwhile, real-time positioning and tracking are carried out on an image plane and a high-precision map plane.
The camera calibration algorithm only infers the vertical field angle, the pitch angle and the roll angle, does not need any manual input, and has no requirement on scenes.
In the target detection and tracking algorithm, the bottom center pixel position of the 2D target detection frame is used to calculate the distance and angle between the vehicle and the camera.
The angle estimation is realized by combining the horizontal field angle obtained by camera calibration, the camera's own heading angle, and the vehicle pixel position obtained by target tracking; the angle between the vehicle and the camera is calculated by the following formula:

ω_d = ω_h + ω_c

wherein ω_h represents the camera heading angle; ω_c represents the angle between the vehicle and the camera heading angle; D represents the distance between the vehicle and the camera; and ω_d represents the clockwise angle of the line D between the vehicle and the camera relative to due north.
The distance estimation is realized by combining the field angle obtained by camera calibration and the vehicle pixel position obtained by target tracking; the longitudinal distance between the vehicle and the camera is calculated by the following formula:

Y = H / tan(θ + arctan((y − h/2) / f))

wherein (x, y) is the pixel coordinate of the bottom center point of the vehicle detection frame in the image; H is the camera mounting height; Y represents the longitudinal distance between the vehicle and the camera; w and h are the pixel width and pixel height of the image, respectively; θ represents the pitch angle of the camera; and f represents the camera focal length.

The final distance between the vehicle and the camera is determined by the longitudinal distance and the angle between the vehicle and the camera, D = Y / cos(ω_c); the global positioning result is calculated by combining the distance and angle between the camera and the vehicle with the camera longitude and latitude, and the final global positioning longitude and latitude result is calculated by the following formulas:

l = l_c + (D · cos ω_d / R) · (180 / π)

g = g_c + (D · sin ω_d / (R · cos(l_c · π / 180))) · (180 / π)

R = 6371.393 × 1000

wherein R is the radius of the earth, in meters; l and g represent the latitude and longitude of the vehicle, respectively; and l_c and g_c represent the latitude and longitude of the camera, respectively.
the high-precision map tracking and positioning is performed by displaying longitude and latitude obtained through image calculation on a map without human participation.
The scheme of the invention can greatly reduce the deployment workload and manual operation in road side positioning scenarios, reduce the human error introduced by manual operation, and improve the adaptability of the road side positioning algorithm to different scenes and camera models. In addition, the monocular positioning result can serve as a supplementary service where GNSS signals are missing, providing a real-time positioning result to every autonomous vehicle in the traffic scene.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from them.
FIG. 1 is a schematic diagram of a method for estimating an angle of monocular positioning according to an embodiment of the present invention;
FIG. 2 is a schematic view of a vehicle on the right side of the camera view angle in angle estimation according to an embodiment of the invention;
FIG. 3 is a schematic view of a vehicle on the left side of the camera view angle in angle estimation according to an embodiment of the invention;
Fig. 4 is a schematic diagram of a method for estimating a distance of monocular positioning according to an embodiment of the present invention.
Detailed Description
The invention aims to realize road side monocular positioning without manual camera calibration, to be applicable to different road side scenes, to realize global vehicle positioning on the basis of that calibration, and to perform real-time positioning and tracking on a high-precision map.
To achieve this, the camera calibration problem must be solved, and global positioning of the vehicle must then be achieved on that calibration basis without manual means.
Aiming at the above problems, the invention provides a high-precision map real-time global positioning tracking algorithm based on a road side monocular camera, which does not depend on manual input and places no restriction on the deployment scene or on the camera type, mounting position or orientation.
In the invention, for a geometric camera model, the relationship between a 3D point p_w in the scene and its image pixel position p_im can be expressed as:

p_im = [λu, λv, λ]^T = K [R | t] [p_w^T, 1]^T

wherein K is the camera projection matrix (camera intrinsics), and R and t are the rotation and translation of the camera in the world coordinate system (camera extrinsics). The invention adopts the mainstream assumptions of a pinhole camera model: the principal point is at the image center, the camera has no skew, the horizontal and vertical scales are equal (aspect ratio 1, square pixels), and the focal length is equal in both directions, f_x = f_y = f, so the projection matrix can be expressed as K = diag([f, f, 1]), where f is the focal length in pixels.
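As a non-authoritative illustration of this camera model, the sketch below projects a world point under the stated assumptions (principal point at the image center, square pixels, f_x = f_y = f); the returned coordinates are relative to the image center, and all names are illustrative.

```python
import numpy as np

def project(p_w: np.ndarray, f: float, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pinhole projection p_im = K [R|t] [p_w; 1] with K = diag([f, f, 1])."""
    K = np.diag([f, f, 1.0])
    p_cam = R @ p_w + t            # world -> camera coordinates
    uvw = K @ p_cam                # homogeneous image coordinates [λu, λv, λ]
    return uvw[:2] / uvw[2]        # divide out λ to get (u, v)

# Example: camera looking along +Z, a point 5 m below the axis and 20 m ahead.
uv = project(np.array([0.0, 5.0, 20.0]), f=1000.0, R=np.eye(3), t=np.zeros(3))
```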
Since the range of possible focal lengths is unbounded and the focal length changes when the image is resized, the invention instead directly estimates the vertical field of view α in radians and converts it to the focal length by:

f = h / (2 · tan(α / 2))

wherein h is the pixel height of the image.
the rotation matrix may be described in terms of three angles, a roll angle, a pitch angle, and a yaw angle, respectively. Since there is no natural reference system for estimating yaw (left and right) from any image, we do not estimate yaw angle, so our rotation matrix consists of pitch angle θ, roll angle ψ, which is represented by a horizontal line, where the pitch angle θ of the camera is represented by the intersection position of the horizontal midpoint and the image abscissa center point, and roll angle ψ is represented by the estimated line rotation angle relative to the horizontal line.
The pitch angle θ, roll angle ψ and vertical field angle α are each discretized into B = 256 buckets, converting the regression problem into a B-way classification problem. The invention uses ResNet-50 as the backbone network, with a separate fully connected layer to predict each of the pitch angle θ, roll angle ψ and vertical field angle α. The bucket center values can be represented as θ = [θ_1, ..., θ_i, ..., θ_B], ψ = [ψ_1, ..., ψ_i, ..., ψ_B] and α = [α_1, ..., α_i, ..., α_B]. Let p_θ, p_ψ and p_α denote the probability mass output by the fully connected layer of each head. The expected value of the probability mass of each fully connected head is then, e.g. for θ:

E[θ] = Σ_{i=1}^{B} p_θ,i · θ_i

and similarly for ψ and α.
Then, the Softargmax-biased-L2 loss is used for the vertical field angle α and the standard Softargmax-L2 loss for the pitch angle θ and roll angle ψ; the final loss function is the sum of the three terms:

L = L_biased-L2(α) + L_L2(θ) + L_L2(ψ)
The model can be trained on publicly available camera calibration datasets, such as the Pano360 dataset; after training, the camera field angle, pitch angle and roll angle can be obtained.
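As a sketch of the inference step only (the bucket ranges, the softmax stand-in and all names are assumptions; the trained ResNet-50 heads would supply the real probability masses):

```python
import numpy as np

B = 256  # number of discretization buckets per quantity

def expected_value(probs: np.ndarray, bucket_centers: np.ndarray) -> float:
    """Softargmax prediction: expectation of the bucket centers under
    the probability mass output by a fully connected head."""
    return float(np.sum(probs * bucket_centers))

def focal_from_vfov(alpha_rad: float, image_h_px: int) -> float:
    """Convert the predicted vertical field angle α (radians) to a focal
    length in pixels, f = h / (2 tan(α/2)), under the pinhole model."""
    return image_h_px / (2.0 * np.tan(alpha_rad / 2.0))

# Stand-in for a network head output over an assumed bucket range.
alpha_centers = np.linspace(np.radians(15), np.radians(115), B)
p_alpha = np.random.dirichlet(np.ones(B))
alpha = expected_value(p_alpha, alpha_centers)
f_px = focal_from_vfov(alpha, image_h_px=1080)
```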
In parallel with calibration, target detection and tracking is performed on the vehicles in the scene to obtain the pixel position of each vehicle; the detection and tracking algorithm may be replaced by any equivalent. After calibration, the pitch angle θ, roll angle ψ and vertical field angle α of the camera are available. In a traffic scenario, a deviation in the roll angle ψ has less effect on the final positioning than one in the pitch angle θ, so the invention does not consider the influence of the roll angle.
As shown in fig. 1, given the position of the camera, only the distance D and the angle ω_d between the camera and the target vehicle are needed to estimate the geographic position of the target vehicle. In the invention, ω_d represents the clockwise angle between the line of length D and the map's north direction N, and ω_c represents the angle between the vehicle and the camera heading angle ω_h. Given the image width w, the camera horizontal field angle hfov, and the bottom-center coordinates (x, y) of the vehicle detection frame, ω_c can be expressed as:

ω_c = arctan( ((2x − w) / w) · tan(hfov / 2) )
wherein ω_h represents the camera heading angle; ω_c represents the angle between the vehicle and the camera heading angle; D represents the distance between the vehicle and the camera; and ω_d represents the clockwise angle of the line D between the vehicle and the camera relative to due north.
Fig. 2 and 3 show the two cases of fig. 1, in which the direction of travel of the vehicle lies to the right and to the left of the camera's field of view, respectively. Taking ω_c as signed, ω_d can be expressed in both cases as:

ω_d = ω_h + ω_c
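A minimal sketch of this angle estimation, assuming the signed-angle form of ω_c reconstructed above (names illustrative):

```python
import math

def vehicle_camera_angle(x: float, img_w: int, hfov_rad: float) -> float:
    """Signed angle ω_c between the vehicle and the camera heading, from the
    bottom-center pixel x; negative when the vehicle is left of the axis."""
    return math.atan(((2.0 * x - img_w) / img_w) * math.tan(hfov_rad / 2.0))

def bearing_from_north(heading_rad: float, omega_c_rad: float) -> float:
    """Clockwise bearing ω_d = ω_h + ω_c from due north, wrapped to [0, 2π)."""
    return (heading_rad + omega_c_rad) % (2.0 * math.pi)
```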
The distance estimation model is shown in fig. 4. Assume a vehicle detected in the road scene, located in the ground coordinate system (X_w, Y_w, Z_w); let θ_v be the angle (relative to the camera) of the projected ray to the intersection of the rear or front of the vehicle with the road plane, and let H be the mounting height of the camera. The invention assumes a horizontal road, so from the angle relationship the longitudinal distance Y follows as:

Y = H / tan(θ_v), where θ_v = θ + β

wherein β, the angle of the pixel ray below the optical axis, can in turn be expressed as:

β = arctan((y − h/2) / f)

The longitudinal distance Y between the camera and the vehicle can therefore be calculated by:

Y = H / tan(θ + arctan((y − h/2) / f))
wherein (x, y) is the pixel coordinate of the bottom center point of the vehicle detection frame in the image;
H is the camera mounting height;
Y represents the longitudinal distance between the vehicle and the camera;
w, h are the pixel width and pixel height of the image, respectively;
θ represents a pitch angle of the camera;
f represents a camera focal length;
The final distance D is then:

D = Y / cos(ω_c)
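A sketch of the distance estimation under the stated flat-road assumption (θ + β must be positive, i.e. the point lies below the horizon; names illustrative):

```python
import math

def longitudinal_distance(y: float, img_h: int, f_px: float,
                          pitch_rad: float, cam_height_m: float) -> float:
    """Y = H / tan(θ + β), with β = arctan((y - h/2) / f)."""
    beta = math.atan((y - img_h / 2.0) / f_px)
    return cam_height_m / math.tan(pitch_rad + beta)

def slant_distance(longitudinal_y_m: float, omega_c_rad: float) -> float:
    """Ground distance D = Y / cos(ω_c) between camera and vehicle."""
    return longitudinal_y_m / math.cos(omega_c_rad)
```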
Having obtained the distance D and the angle ω_d, and denoting the camera's own latitude and longitude by l_c and g_c respectively, the vehicle latitude l and longitude g can be obtained by:

l = l_c + (D · cos ω_d / R) · (180 / π)

g = g_c + (D · sin ω_d / (R · cos(l_c · π / 180))) · (180 / π)

R = 6371.393 × 1000

wherein R is the earth radius in meters; l and g represent the latitude and longitude of the vehicle, respectively; and l_c and g_c represent the latitude and longitude of the camera, respectively.
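A sketch of this geographic conversion, using the flat-earth small-offset form reconstructed above (adequate at camera-to-vehicle distances; names illustrative):

```python
import math

EARTH_RADIUS_M = 6371.393 * 1000  # earth radius used in the text, in meters

def offset_latlon(cam_lat_deg: float, cam_lon_deg: float,
                  dist_m: float, bearing_rad: float):
    """Move dist_m from the camera along the clockwise-from-north bearing
    ω_d; returns the vehicle (latitude, longitude) in degrees."""
    north_m = dist_m * math.cos(bearing_rad)
    east_m = dist_m * math.sin(bearing_rad)
    lat = cam_lat_deg + math.degrees(north_m / EARTH_RADIUS_M)
    lon = cam_lon_deg + math.degrees(
        east_m / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat_deg))))
    return lat, lon
```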
after the global position of the vehicle is obtained, the positioning result can be displayed on the high-precision map in real time, and finally, the real-time positioning and tracking of the road side monocular high-precision map can be realized. The positioning can be realized under the condition of not depending on manual work by the method, and the calibration is carried out without manual work on site, so that the plug-and-play is realized. And the positioning precision can be improved by optimizing the camera calibration precision in the follow-up process.
The foregoing is merely illustrative of embodiments of the invention. It will be appreciated by those skilled in the art that variations and modifications may be made without departing from the principles of the invention, and all such modifications and variations are intended to fall within the scope of the invention.
Claims (10)
1. A global positioning tracking method based on a road side monocular camera comprises the following steps:
step A, the camera performs automatic self-calibration to obtain the intrinsic and extrinsic parameters of the camera;
step B, a target detector detects the vehicle in the image to obtain the pixel position of the vehicle in each frame;
step C, a target tracker establishes association between consecutive frames on the basis of detection to obtain the change of the vehicle pixel position over time;
step D, vehicle positioning takes the pixel position change and calculates the angle and distance between the vehicle and the camera, obtaining the global position of the vehicle;
step E, high-precision map positioning tracking displays the calculated global positioning information on the high-precision map in real time.
2. The global positioning tracking method based on roadside monocular camera of claim 1, wherein,
in step A, in the road side scene, a deep-learning-based model is adopted to learn semantic cues directly from the scene and to infer the vertical field angle, roll angle and pitch angle, without relying on manual input or prior information about the scene.
3. The global positioning tracking method based on roadside monocular camera of claim 1, wherein,
in step A, the focal length is not estimated directly but is converted into the vertical field angle for indirect inference; the continuous values of the pitch angle, roll angle and vertical field angle are each discretized into 256 buckets for classification prediction, three fully connected layers are then used to predict the three quantities respectively, and finally the expected value of the probability distribution of each fully connected head is used as that head's prediction; the Softargmax-biased-L2 loss is used for the vertical field angle, and the standard Softargmax-L2 loss for the roll and pitch angles.
4. The global positioning tracking method based on roadside monocular camera of claim 1, wherein,
in steps B and C, only the bottom center point of the detection frame obtained by detection and tracking is used as the input for positioning; accurate positioning depends on the global parameters of the camera, such as the longitude and latitude, camera height and camera heading angle, as well as the camera's intrinsic and extrinsic parameters.
5. The global positioning tracking method based on roadside monocular camera of claim 1, wherein,
in step D, the angle estimation between the vehicle and the camera depends on the camera heading angle and horizontal field angle, and the longitudinal distance between the vehicle and the camera depends on the camera height; on the basis of the obtained camera parameters, the distance and angle between the vehicle and the camera are calculated simultaneously, and the longitude and latitude of the vehicle are then calculated from this distance and angle combined with the camera heading angle and the camera longitude and latitude.
6. A global high-precision map positioning and tracking system based on a road side monocular camera is characterized by comprising the following components: camera self-calibration, vehicle detection, vehicle tracking, vehicle positioning and high-precision map positioning tracking;
the camera self-calibration predicts the focal length, roll angle and pitch angle of the camera through a deep learning network;
the vehicle detection and tracking part detects and tracks the vehicle in real time to obtain the change over time of the bottom center point coordinate of the vehicle detection frame;
the vehicle positioning obtains the angle and distance between the vehicle and the camera through angle estimation and distance estimation, and then obtains the longitude and latitude coordinates of the target vehicle by a geographic position conversion formula;
the high-precision map positioning and tracking takes the input pixel changes and the camera parameters obtained by calibration, applies the positioning algorithm to display the geographic positioning result on a high-precision map in real time, and simultaneously performs real-time positioning and tracking on the image plane and the high-precision map plane.
7. The roadside monocular camera-based global high-precision map positioning and tracking system of claim 6, wherein,
the camera calibration algorithm only infers the vertical field angle, the pitch angle and the roll angle, does not need any manual input, and has no requirement on scenes.
8. The roadside monocular camera-based global high-precision map positioning and tracking system of claim 6, wherein,
in the target detection and tracking algorithm, the bottom center pixel position of the 2D target detection frame is used to calculate the distance and angle between the vehicle and the camera.
9. The roadside monocular camera-based global high-precision map positioning and tracking system of claim 6, wherein,
the angle estimation is realized by combining the horizontal field angle obtained by camera calibration, the camera's own heading angle, and the vehicle pixel position obtained by target tracking; the angle between the vehicle and the camera is calculated by the following formula:

ω_d = ω_h + ω_c

wherein ω_h represents the camera heading angle; ω_c represents the angle between the vehicle and the camera heading angle; D represents the distance between the vehicle and the camera; and ω_d represents the clockwise angle of the line D between the vehicle and the camera relative to due north.
10. The roadside monocular camera-based global high-precision map positioning and tracking system of claim 6, wherein,
the distance estimation is realized by combining the field angle obtained by camera calibration and the vehicle pixel position obtained by target tracking; the longitudinal distance between the vehicle and the camera is calculated by the following formula:

Y = H / tan(θ + arctan((y − h/2) / f))

wherein (x, y) is the pixel coordinate of the bottom center point of the vehicle detection frame in the image; H is the camera mounting height; Y represents the longitudinal distance between the vehicle and the camera; w and h are the pixel width and pixel height of the image, respectively; θ represents the pitch angle of the camera; and f represents the camera focal length;

the final distance between the vehicle and the camera is determined by the longitudinal distance and the angle between the vehicle and the camera, D = Y / cos(ω_c); the global positioning result is calculated by combining the distance and angle between the camera and the vehicle with the camera longitude and latitude, and the final global positioning longitude and latitude result is calculated by the following formulas:

l = l_c + (D · cos ω_d / R) · (180 / π)

g = g_c + (D · sin ω_d / (R · cos(l_c · π / 180))) · (180 / π)

R = 6371.393 × 1000

wherein R is the radius of the earth, in meters; l and g represent the latitude and longitude of the vehicle, respectively; and l_c and g_c represent the latitude and longitude of the camera, respectively;
the high-precision map tracking and positioning is performed by displaying longitude and latitude obtained through image calculation on a map without human participation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311124950.0A CN117173214A (en) | 2023-09-03 | 2023-09-03 | High-precision map real-time global positioning tracking method based on road side monocular camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117173214A true CN117173214A (en) | 2023-12-05 |
Family
ID=88946368
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117173214A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118379698A (en) * | 2024-04-10 | 2024-07-23 | 北京大唐高鸿数据网络技术有限公司 | Data set construction method, device, equipment, program product and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||